Emerging Technologies Presentations

  • Full Access
  • Onsite Student Access
  • Onsite Experience
  • Virtual Full Access
  • Virtual Basic Access

1) Booth exhibits in Hall E of the Tokyo International Forum (TIFF): 13 demos
(December 15-16, 2021, 10:00-18:00 / December 17, 10:00-16:00 JST):
Visitors can try the exhibits in person at the venue throughout the exhibition period.

2) On-demand streaming of presentation videos on the virtual platform: 19 demos
(December 6, 2021 - March 11, 2022):
Presentation videos by all presenters will be streamed on demand.

3) Live demonstrations and Q&A sessions: 19 demos
(December 15-17, 2021):
Each demo will hold a 60-minute live demonstration and Q&A session on the virtual platform. Please see here for the schedule.
- The six live sessions presented from overseas will also be streamed live to remote booths in Hall E of the TIFF venue.
- Each of the 13 demos exhibited in Hall E of the TIFF venue will close its booth only during its own one-hour session.
Recordings of these sessions will be available on demand on the virtual platform after each live stream ends.


"Amazing Sketchbook the Ride": Driving a Cart in a 3DCG Scene Created from a Hand-Drawn Sketch

Description: We introduce "Amazing Sketchbook the Ride", interactive content that lets users drive an actual cart and move freely through a 3DCG scene as if immersed in it. Users create the 3DCG scene they drive through by themselves, simply by drawing a sketch of it.


Collaborative Avatar Platform for Collecting Human Expertise

Description: We present a system that mixes the control inputs of two people into one robot arm so that they collaborate through it. It offers two control modes (division of roles, and mixing in an adjustable ratio), and we found that it enables movements that are impossible for one person alone and stabilizes the motion.

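The "mixing in adjustable ratio" mode can be pictured as a simple weighted blend of the two operators' commands. The following is a minimal sketch of that idea, not the authors' implementation; the function name, the joint-velocity representation, and the 70/30 example are all illustrative assumptions.

```python
def blend_commands(cmd_a, cmd_b, ratio=0.5):
    """Mix two operators' joint-velocity commands into one command for
    the shared robot arm.

    ratio=1.0 gives operator A full control, 0.0 gives operator B full
    control; intermediate values mix both contributions proportionally.
    """
    return [ratio * a + (1.0 - ratio) * b for a, b in zip(cmd_a, cmd_b)]

# Two operators steering a hypothetical 3-joint arm, mixed 70/30:
mixed = blend_commands([0.2, 0.5, -0.1], [0.4, 0.1, 0.3], ratio=0.7)
```

Division of roles would correspond to the degenerate case where each joint's ratio is pinned to 0 or 1.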

Depth-Aware Dynamic Projection Mapping using High-speed RGB and IR Projectors

Description: We report a new system combining a camera, an RGB projector, and a newly developed high-speed IR projector. Moreover, we realize 0.4-ms markerless 3D tracking by leveraging the small inter-frame motion. Based on these technologies, we demonstrate flexible, depth-aware dynamic projection mapping onto the entire scene with 8-ms latency.


DroneStick: Flying Joystick as a Novel Type of Interface

Description: DroneStick is a novel hands-free method for smooth interaction between a human and a robotic system via one of its agents. The flying joystick (DroneStick), itself part of a multi-robot system, is composed of a flying drone and a coiled wire with a vibration motor.


Frisson Waves: Sharing Frisson to Create Collective Empathetic Experiences for Music Performances

Description: Frisson is a mental experience of body reactions such as shivers, tingling skin, and goosebumps. However, this sensation is not shareable with others and is rarely used in live performances. We propose Frisson Waves, a real-time system to detect, trigger and share frisson in a wave-like pattern during music performances.

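The "wave-like pattern" suggests triggering each audience member's actuator with a delay proportional to their distance from the person whose frisson was detected. This is a minimal sketch of that scheduling idea under assumed 2D seat coordinates and an assumed propagation speed; it is not the authors' system.

```python
import math

def wave_delays(seat_positions, source, speed=2.0):
    """Per-seat actuation delays (seconds) for a frisson wave radiating
    outward from `source` at `speed` metres per second.

    seat_positions: (x, y) coordinates of each audience member.
    source: (x, y) of the person whose frisson was detected.
    """
    sx, sy = source
    return [math.hypot(x - sx, y - sy) / speed for x, y in seat_positions]

# A row of three seats two metres apart: the wave reaches each one
# a second after the previous.
delays = wave_delays([(0, 0), (2, 0), (4, 0)], source=(0, 0))
```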

GroundFlow: Multiple Flows Feedback for Enhancing Immersive Experience on the Floor in the Wet Scenes

Description: We present GroundFlow, a water-recirculation system that provides multiple-flow feedback on the floor in immersive virtual reality. Our demonstration implements a virtual excursion that lets users experience different water flows and their corresponding wet scenes.


HoloBurner: Mixed reality equipment for learning flame color reaction by using aerial imaging display

Description: The HoloBurner system is a mixed reality instrument for chemistry experiments, used to learn about flame color reactions. The system consists of a real Bunsen burner and an aerial imaging display that shows a holographic image of a flame at the tip of the burner.


Integration of stereoscopic laser-based geometry into 3D video using DLP Link synchronisation

Description: In this work, we demonstrate a laser-based display integrating stereoscopic vector graphics into a dynamic scene generated by a conventional 3D video projection system. These laser graphics are projected with a galvo-laser system, which exhibits far greater brightness, contrast, and acuity than pixelated projectors.


LIPSYNC.AI: A.I. Driven Lips and Tongue Animations Using Articulatory Phonetic Descriptors and FACS Blendshapes



Midair Haptic-Optic Display with Multi-Tactile Texture based on Presenting Vibration and Pressure Sensation by Ultrasound

Description: In this demonstration, we develop a midair haptic-optic display with multi-tactile texture using focused ultrasound. In this system, participants can touch aerial 3D images with a realistic texture without wearing any devices. The realistic textures are rendered by simultaneously presenting vibration and static pressure sensation using ultrasound.

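Focused-ultrasound haptics of this kind rests on phased-array focusing: each transducer is driven with a phase offset that cancels its propagation delay to the focal point, so all emissions arrive in phase and concentrate pressure there. Below is a minimal sketch of that phase computation only, not the authors' rendering pipeline; the 40 kHz frequency, the array geometry, and the function names are illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 346.0  # m/s in air at roughly room temperature
FREQ = 40_000.0         # Hz; a common airborne ultrasound transducer frequency

def focus_phases(transducers, focal_point):
    """Phase offsets (radians) so every transducer's emission arrives
    in phase at the focal point.

    transducers: list of (x, y, z) element positions in metres.
    focal_point: (x, y, z) target of the focus in metres.
    """
    phases = []
    for pos in transducers:
        d = math.dist(pos, focal_point)  # propagation distance
        # Advance the phase by the travel time's worth of cycles.
        phases.append((2 * math.pi * FREQ * d / SPEED_OF_SOUND) % (2 * math.pi))
    return phases
```

Vibration sensation would then come from modulating the focal pressure over time, while static pressure sensation requires holding it steady, which is what the demo combines.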

Multimodal Feedback Pen Shaped Interface and MR Application with Spatial Reality Display

Description: A pen-shaped interface capable of providing the following multimodal feedback from virtual objects: (1) linear feedback to express contact pressure; (2) rotational feedback to simulate the friction of rubbing a virtual surface; (3) vibrotactile feedback; and (4) auditory feedback to express contact information. We also present an MR interaction system combining the pen-shaped interface with a Spatial Reality Display.


Parallel Ping-Pong: Demonstrating Parallel Interaction through Multiple Bodies by a Single User

Description: Parallel Ping-Pong demonstrates parallel interaction with multiple bodies (here, two robot arms) controlled by a single user. To reduce the user's workload when controlling multiple bodies, we added (1) automatic view transitions between the bodies' viewpoints, and (2) autonomous body motion that integrates the user's motion.

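The two additions can be sketched in a few lines. This is an illustrative toy, not the authors' controller: the nearest-ball heuristic for view switching and the attention-weighted motion blend are assumptions about how such a system could be wired up.

```python
import math

def select_viewpoint(ball_pos, body_positions):
    """Automatic view transition: give the user the viewpoint of the
    body closest to the ball (a stand-in heuristic for deciding which
    body should act next)."""
    return min(range(len(body_positions)),
               key=lambda i: math.dist(ball_pos, body_positions[i]))

def integrate_motion(user_vel, auto_vel, attention):
    """Blend the user's motion with an autonomous stroke for one body.

    attention=1.0: the user is looking through this body, so their
    motion dominates; attention=0.0: the user attends the other body,
    so autonomous motion takes over.
    """
    return [attention * u + (1.0 - attention) * a
            for u, a in zip(user_vel, auto_vel)]
```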

Real-time Image-based Virtual Try-on with Measurement Garment

Description: We propose a robust, real-time image-based virtual try-on system. We formulate the problem as supervised image-to-image translation using a measurement garment, and we capture the training data with a custom actuated mannequin. The user only needs to wear the measurement garment to try on many different target garments.


Recognition of Gestures over Textiles with Acoustic Signatures

Description: We demonstrate a method capable of turning a textured surface into an opportunistic input interface, using a machine learning model pre-trained on the acoustic signals generated while scratching different types of fabric. A single, brief audio recording then suffices to characterize both the texture and the gesture that produced it.

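The core pipeline, recording a scratch, summarizing its spectrum, and matching it against labelled signatures, can be sketched as below. This is a deliberately simplified stand-in (banded FFT magnitudes plus cosine-similarity matching), not the pre-trained model the demo uses; the function names and labels are assumptions.

```python
import numpy as np

def spectral_signature(audio, n_bands=16):
    """Coarse, amplitude-normalised magnitude-spectrum signature of a
    short scratch recording (1-D array of samples)."""
    spectrum = np.abs(np.fft.rfft(audio))
    sig = np.array([band.mean() for band in np.array_split(spectrum, n_bands)])
    return sig / (np.linalg.norm(sig) + 1e-9)

def classify(audio, templates):
    """Return the label whose stored signature best matches the
    recording, by cosine similarity (signatures are unit-norm)."""
    sig = spectral_signature(audio)
    return max(templates, key=lambda label: float(sig @ templates[label]))
```

A real deployment would replace the template matching with the pre-trained classifier and richer features, but the texture-and-gesture-from-one-recording idea is the same.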

Self-Shape-Sensing Device with Flexible Mechanical Axes for Deformable Input Interface

Description: We present a novel device capable of sensing its own shape, structured around a flexible mechanical axis that allows deformation with a wide degree of freedom and enables tangible control by hand. Users can intuitively interact with volumetric images on a spatial display by changing the device's shape.


Simultaneous Augmentation of Textures and Deformation Based on Dynamic Projection Mapping

Description: We exploit human perceptual characteristics and dynamic projection mapping techniques to realize simultaneous overwriting of both the texture and the apparent deformation of a real object. With the developed 1000-fps low-latency projector-camera system, we demonstrate a plaster figure that appears to turn into a colorful, flabby object under the projection.


The Aromatic Garden, Exploring new ways to interactively interpret narratives combining olfaction and vision including temporal change of scents using olfactory display

Description: Nakamoto Laboratory (Tokyo Institute of Technology) presents a multi-sensory olfactory game, ‘The Aromatic Garden’, offering a unique user experience through the combination of new olfactory technology and art. Players must use their sense of smell to navigate and collect scents, making for an engaging and challenging experience.


VWind: Virtual Wind Sensation to the Ear by Cross-Modal Effects of Audio-Visual, Thermal, and Vibrotactile Stimuli

Description: We propose presenting virtual wind sensations through cross-modal effects. We developed VWind, a wearable headphone-type device that delivers vibrotactile and thermal stimuli alongside visual scenarios and binaural sounds. Our demonstrations make users feel as if wind were blowing past their ears or as if they were exposed to freezing winds.


Weighted Walking: Propeller-based On-leg Force Simulation of Walking in Fluid Materials in VR

Description: Weighted Walking is a wearable device with ducted fans mounted on the user's calves. The fans generate powerful thrust to simulate the forces on the user's lower limbs when moving through different fluid materials. It can also simulate walking in different gravity conditions, such as on another planet.

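The force being simulated is classical quadratic fluid drag, so the thrust the fans must produce follows directly from the drag equation. This is a textbook sketch under assumed parameters (the effective calf frontal area and drag coefficient are illustrative guesses), not the device's actual control law.

```python
def fan_thrust(speed, fluid_density, drag_coeff=1.0, leg_area=0.03):
    """Thrust (N) a fan must output to mimic quadratic drag on the
    lower leg: F = 1/2 * rho * Cd * A * v^2.

    speed: lower-leg speed through the virtual fluid (m/s).
    fluid_density: kg/m^3 (water ~1000, air ~1.2).
    leg_area: assumed effective frontal area of the calf (m^2).
    """
    return 0.5 * fluid_density * drag_coeff * leg_area * speed ** 2

# Walking through virtual water at 1 m/s needs far more opposing
# thrust than the same motion through air:
in_water = fan_thrust(1.0, 1000.0)
in_air = fan_thrust(1.0, 1.2)
```

The quadratic dependence on speed is what makes fast leg swings feel dramatically heavier in "water" than slow ones, and scaling the density term down (or making it negative) gives the altered-gravity variants.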