Games & Games Gallery Presentations

  • Full Access
  • Onsite Student Access
  • Onsite Experience
  • Virtual Full Access
  • Virtual Basic Access

*A mix of live and pre-recorded presentations. Please click here and select the ‘Games’ program to view the timings of the live sessions. All other sessions without a date/time are accessible on-demand in the virtual platform from 6 December 2021 to 11 March 2022, unless otherwise specified.


Getting Started with Virtual Production: How Do You Balance Flexibility with Initial Costs? ~Looking Towards a Successful Future for the Film Industry~

Description: An increasing number of live-action film projects are incorporating "virtual production" into their workflows through tools such as game engines and LED walls. However, virtual production also comes with substantial initial costs. In this session, the speaker will discuss practical examples of virtual production used in Toei tokusatsu productions over the past year, explain how to get started while balancing flexibility against initial costs, and touch on the current quality and future possibilities of virtual production. There is something here for everyone, from those simply interested in virtual production to those who are already using it in their workflows and are looking for new perspectives.

Contributor(s):
Kazuya Hayashi, Unity Technologies Japan


Grow Your AI Characters: Emotional Decision Making with GOAP

Description: This session explains how GOAP (Goal-Oriented Action Planning) can reflect a character's traits and state of mind in order to impress and surprise the player. GOAP is often used in the game industry for the decision making of autonomous enemy characters. This talk extends GOAP use cases to character development, such as avatar AI and companion AI, by explaining the method and presenting the results and insights gained from it. In addition, we will discuss extensions to the decision making, such as a utility-based learning component, a knowledge inheritance system, and smart objects that allow the agent to express its needs, its emotions, and its learning capabilities. This session will also explain the emotional system composed of emotions, moods, and personalities, and how each can be expressed by the character AI. Combined, these extensions allow the character AI to express itself and grow as it is played with.
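For readers unfamiliar with GOAP, its core is a search for the cheapest sequence of actions that transforms the current world state into a goal state. The sketch below is a minimal illustration of that idea; the companion-AI actions and state keys are hypothetical assumptions for illustration, not taken from the talk.

```python
import heapq
from itertools import count

class Action:
    """A GOAP action: a name, preconditions, effects, and a cost."""
    def __init__(self, name, pre, eff, cost=1):
        self.name, self.pre, self.eff, self.cost = name, pre, eff, cost

    def usable(self, state):
        return all(state.get(k) == v for k, v in self.pre.items())

    def apply(self, state):
        new = dict(state)
        new.update(self.eff)
        return new

def plan(state, goal, actions):
    """Uniform-cost search for the cheapest action sequence reaching the goal."""
    tie = count()  # tie-breaker so state dicts are never compared by heapq
    frontier = [(0, next(tie), state, [])]
    seen = set()
    while frontier:
        cost, _, s, path = heapq.heappop(frontier)
        if all(s.get(k) == v for k, v in goal.items()):
            return path
        key = frozenset(s.items())
        if key in seen:
            continue
        seen.add(key)
        for a in actions:
            if a.usable(s):
                heapq.heappush(frontier,
                               (cost + a.cost, next(tie), a.apply(s), path + [a.name]))
    return None  # goal unreachable

# Hypothetical companion-AI actions and world state (illustrative only)
actions = [
    Action("pick_up_axe", {"has_axe": False}, {"has_axe": True}),
    Action("chop_wood",  {"has_axe": True},  {"has_wood": True}),
    Action("build_fire", {"has_wood": True}, {"warm": True}),
]
start = {"has_axe": False, "has_wood": False, "warm": False}
print(plan(start, {"warm": True}, actions))
# → ['pick_up_axe', 'chop_wood', 'build_fire']
```

The extensions described in the talk (utility-based learning, knowledge inheritance, smart objects) would plug into such a planner by modifying the action costs and the world state the agent perceives.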

Contributor(s):
Gautier Boeda, Square Enix Co., Ltd., Japan


Japan Game Awards Exhibition (Amateur and U18 Categories) and Game Experience Corner

Description: The Japan Game Awards are given to the best computer entertainment software based on the spirit of "openness," "fairness," and "justice," with no restriction on the type of platform, such as home video game consoles or PCs. In this exhibition, you will be able to try out the prize-winning titles in the "Amateur Division" and "U18 Division".

<Amateur Division>
Grand Prize
Title: ウニィ研究所 (Unny Laboratory)
Creator: ※スタッフがおいしくいただきました。(石川 亮太 / 小中 純 / 今村 優斗 / 岡野 冬樹 / 中村 真実帆 / 長瀬 真帆人 / 畑 京佑 / 松宮 樹 / 三好 竜世 / 山本 正 / 中谷 葵 / 山口 智史)
Affiliation: HAL Osaka

<U18 Division>
Gold Prize
Title: Balloon Head
Creator: 古市 太郎
Affiliation: Aichi Prefectural Aichi High School of Technology and Engineering

Silver Prize
Title: マグロの惑星 (Planet of Tuna)
Creator: 山口 登
Affiliation: Kadokawa Dwango Gakuen N High School

Bronze Prize
Title: AGARES
Creator: くまちゃんず (熊渕 貴大 / 藤原 優司 / 熊澤 邑河)
Affiliation: Kobe Municipal High School of Science and Technology

■ Exhibitor(s)
Mikio Fujimura, Computer Entertainment Supplier's Association Human Resource Development Group
Naohiro Saito, Computer Entertainment Supplier's Association Human Resource Development Group



Machine Learning-Aided Content Creation in 'Love Live! School Idol Festival ALL STARS': Automatic Rhythm Game Chart Generation with Deep Learning

Description: In this session, we will introduce our attempts to support rhythm game chart creation in the mobile game 'Love Live! School Idol Festival ALL STARS' using machine learning. We built deep learning-based in-house software to achieve a more efficient workflow, which reduced the time required to make in-game charts by up to 50%. Our deep learning model generates in-game charts from mel spectrograms of songs. We will also highlight some of the trial-and-error processes with various machine learning methods, such as GANs (generative adversarial networks).
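The final step of such a pipeline, decoding per-frame model output into a playable chart, can be illustrated with a small sketch. This is not KLab's actual model or post-processing; the threshold and minimum note gap below are arbitrary assumptions for illustration.

```python
import numpy as np

def probs_to_chart(note_probs, frame_rate, threshold=0.5, min_gap=0.12):
    """Convert per-frame note probabilities (as a model might emit from a mel
    spectrogram) into note timestamps in seconds, enforcing a minimum gap
    between consecutive notes so near-duplicate peaks collapse to one note."""
    times = []
    last = -np.inf
    for i, p in enumerate(note_probs):
        t = i / frame_rate
        if p >= threshold and t - last >= min_gap:
            times.append(round(t, 3))
            last = t
    return times

# Toy example: 1 second of frames at 20 fps with two probability peaks
probs = np.zeros(20)
probs[[4, 5, 15]] = [0.9, 0.8, 0.7]
print(probs_to_chart(probs, frame_rate=20))
# → [0.2, 0.75]
```

In a real system the `note_probs` array would come from the trained network; the decoding step stays the same regardless of whether the model was trained with supervised learning or a GAN.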

Contributor(s):
Lingjian Wang, KLab Inc., China
Atsushi Takada, KLab Inc., Japan


Motion Matching and Machine Learning for Video Game Animation

Description: In recent years, academia and industry have both seen a revolution in the way character animation systems are built, with Machine Learning-based techniques coming out of academic research and Motion Matching-based techniques being developed by many companies in the industry. Although different in many ways, both of these approaches have one thing in common: a renewed focus on data and how to use it. In this talk, I will cover how this new focus requires us to fundamentally change the way we think about building animation systems, and how we have tried to bring together these new developments from the worlds of academia and industry in our latest research.
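At the heart of motion matching is a nearest-neighbour search: every frame of animation in a database is described by a feature vector, and at runtime the engine jumps to the frame whose features best match the current pose and desired trajectory. A minimal brute-force sketch follows; the feature layout and database values are illustrative assumptions, not from the talk.

```python
import numpy as np

def motion_match(database, query):
    """Brute-force motion matching: return the index of the database frame
    whose feature vector (e.g. foot positions, hip velocity, future trajectory
    samples) is closest, in Euclidean distance, to the query features."""
    dists = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(dists))

# Toy database of 4 frames with 3-D feature vectors (values are illustrative)
db = np.array([
    [0.0, 0.0, 0.0],   # idle
    [1.0, 0.2, 0.0],   # walk
    [2.0, 0.5, 0.1],   # jog
    [3.0, 1.0, 0.3],   # run
])
query = np.array([1.9, 0.4, 0.1])  # current pose and trajectory suggest a jog
print(motion_match(db, query))
# → 2
```

Production systems replace the linear scan with acceleration structures (or, as in the learned approaches the talk contrasts, with a network that approximates the search), but the data-centric framing is the same.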

Contributor(s):
Daniel Holden, Ubisoft Divertissements, Canada


New Pokémon Snap Lighting Overview

Description: This session will cover the lighting used in 'New Pokémon Snap', including lighting design and function, the quality-improvement process, problems encountered, and their solutions.

Contributor(s):
Wataru Tada, Bandai Namco Studios Inc., Japan
Masayuki Suzuki, Bandai Namco Studios Inc., Japan
Shohei Yamaguchi, Bandai Namco Studios Inc., Japan
Yuko Mizoguchi, Bandai Namco Studios Inc., Japan


Two Lectures on Retrospective Technologies in the Game Industry – TAITO and SQUARE ENIX –

Description: The Japanese game industry has a long history of digital games, but many of its documents and materials have been left in disorder. In the last five years, however, some game companies have put their archives of game technology and game design documents in order and made them public. Even from a modern point of view, these retrospective game-industry technologies are suggestive and important not only to game developers but also to CG and interactive media researchers, because they are the origins of the CG and AI technologies used to make interactive content. In this lecture, TAITO and SQUARE ENIX present retrospective-yet-new technologies from the '80s and '90s, salvaged from their old materials.

Contributor(s):
Youichiro Miyake, SQUARE ENIX, Japan
Yukiharu Sambe, TAITO Corporation, Japan


[Live Drawing] Japanese Chaos and Pop Culture in the Visual Concept, Featuring a Live Sound Remix Performance

Description: A live drawing and sound remix performance by the development team behind Survival Quiz CITY. Incorporating Japanese chaos and pop culture into the visual concept, the team presents a unique space with anime-like hand-drawn familiarity and free-form sound.

Contributor(s):
Makoto Ando, Bandai Namco Studios Inc.
Yoko Tanaka, Bandai Namco Studios Inc.
Yoshihito Yano, Bandai Namco Studios Inc.
Linda AI-CUE, Bandai Namco Studios Inc.
Hiroyuki Kobota, Ubisoft, Japan


[Live Drawing] Real-time Cinematic Production Technique using UE4

Description: SAFEHOUSE combines real-time cinematic production techniques using UE4 with traditional pre-rendering methods and environment-modeling pipelines. Because of the unique nature of our production pipeline, we have built a team of cinematic artists who create cutscenes using UE4. Our aim is to walk people through how we create our cinematics using real-time engines, as well as to introduce how our cinematic artists work. By utilizing the models and scanned assets available on the UE4 Marketplace, we are able to cut down on modeling work and allocate more time to telling a story with those models. In this performance, we hope to show how we set up a scene and bring it to life in the environment-creation process using a real-time engine.

Contributor(s):
Satomi Nakahara, SAFEHOUSE Inc.
Takuya Suzuki, SAFEHOUSE Inc.
Hiroyuki Kobota, Ubisoft, Japan


[Live Drawing] The Creation of Immersive Concept Art

Description: In this live drawing performance, I'm going to take on the challenge of finishing a piece of sci-fi concept art based on a hypothetical situation. Let's say... I'm in the office as a concept artist, about to go home, when the art director grabs me and asks me to do a marketing concept art piece. He wants a submarine in the space station, because it's a sci-fi game. "Easy!" I always say this word to him, because ever since I joined this project, I have been making him happy with my artwork. This time will be no different. "Tomorrow?", I ask casually. He says, "No, in 45 minutes, 1 hour tops! What I'm asking you for is the missing piece for my presentation, which I must give in an hour!". My brain immediately searches for an excuse not to do it, but then my favorite word comes out of his mouth: "EASY, right?" HA! He knows me. He knows how to motivate (control?) me. Yes, challenge accepted. He had better warm up his cheek muscles, because he will be smiling like a smiley-face emoji at my artwork. Time starts now! Do you think I can make it happen? You will see at the event. Enjoy!

Contributor(s):
Genseki Tanaka, Ubisoft Osaka
Hiroyuki Kobota, Ubisoft, Japan


[Live Drawing] The World Aesthetic of FINAL FANTASY

Description: Using the colors white and black as a base, in this presentation I will create a drawing from scratch in the world aesthetic of FINAL FANTASY. I intend to present it as one technique you can choose out of an array of approaches to concept art and hope you all will enjoy it.

Contributor(s):
Toshitaka Matsuda, SQUARE ENIX CO., LTD.
Hiroyuki Kobota, Ubisoft, Japan


[MYDCF2021] Technical Paper Presentation

Description: A collaboration between SIGGRAPH Asia 2021 and MYDCF (Malaysia's Digital Creativity Festival), with MYDCF presenting three Technical Papers:

- "Materials and Techniques for the Construction of Entry-Level Stop Motion 3D Puppet Armatures" by Miss Qistina binti Ruslan
An experiential and practical study on the creation and fabrication of 3D puppet characters for stop motion animation. This study aims to devise a starter armature puppet for practicing and teaching intermediate stop motion animation. The practical studies cover the design and materials used to create the learning puppet, made only from readily available materials. The learning puppet's teaching scale takes into consideration these aspects: difficulty, level of teaching, animation technique, and spatial understanding for animation. The purpose of this blueprint, in addition to providing a clear design guide for building stop motion armatures, is to offer practical studies for both digital and physical animators, and to provide an adept and refined understanding of animation applications.

- "The Impact of Increased Presence on Cybersickness in 360° Virtual Reality" by Miss Madina Berkenova
Virtual Reality (VR) has become a major disruptive technology that influences various industries, most prevalently the entertainment sector. The amount of 360° video content has grown exponentially on YouTube and Facebook, as VR video has become the first immersive VR experience a typical user will have [1]. With this innovative technology, a side effect has also appeared and proven to be an unpleasant physical discomfort called cybersickness [2]. Its symptoms include headache, nausea, eye strain, and dizziness. There are multifactor theoretical causes of cybersickness; among the other important factors is "presence", the sense of "being there" in the Virtual Environment [3][4]. Recent research has recorded positive, negative, and null correlations between presence and cybersickness [5]. This study measures the influence of (a) 360° video (computer screen (2D) versus head-mounted display (3D)) and (b) 360° sound (stereo (2D) versus spatialized (3D)) on users' presence and its effect on cybersickness in a 360° VR video. The Igroup Presence Questionnaire (IPQ) was used to measure presence and the Simulator Sickness Questionnaire (SSQ) to measure cybersickness. The results revealed no statistical difference between the video and sound conditions on cybersickness. However, there was a significant difference between the computer screen and HMD (Head-Mounted Display) experiences on presence. It was also observed that presence and cybersickness had a weak negative correlation.

- "Traces of the Brain's Learning Potential Present Within 'Uneducational' Video Games" by Mr Ayud bin Abdul Rahman
Visual elements in video games are a huge factor in production design decisions. This research explores the science behind the psychology of design styles in video games via quantitative analysis. Until now, there has been no science that measures the connection between design styles and their influence over the human mind. Current guidelines for deciding on a design style are vague, relying on the traditional formula of using shapes as symbols of psychological influence. While this approach has been used for ages, modern artistic works sometimes exploit the concept the other way around. In this research, the data for analysis were obtained from EEG signals. Subjects played two video games of distinct design styles, abstract and realistic, following a strict research protocol for EEG experiments. The EEG data sets collected include the brain's cognitive functions, player profiles, stimulated emotional responses, resting state, game-playing sessions, and post-experiment state. These numerical measurements are classified using machine learning, applying a known computational model constructed from the stimulated emotional responses.

Contributor(s):
Qistina Ruslan, Multimedia University, Malaysia
Madina Berkenova, Taylor's University, Malaysia
Ayud Abdul Rahman, International Islamic University Malaysia, Malaysia