Emerging Technologies Presentations
- Full Access
- Onsite Student Access
- Onsite Experience
- Virtual Full Access
- Virtual Basic Access
1) Booth exhibits in Hall E of the Tokyo International Forum (TIFF): 13 exhibits
(December 15-16, 2021, 10:00-18:00 / December 17, 10:00-16:00 JST)
2) On-demand streaming of presentation videos on the virtual platform: 19 presentations
3) Live demonstrations and Q&A sessions: 19 sessions
"Amazing Sketchbook the Ride": Driving a Cart in a 3DCG Scene Created from a Hand-Drawn Sketch
Description: We introduce "Amazing Sketchbook the Ride", interactive content that allows users to drive an actual cart and move freely through a 3DCG scene as if immersed in it. Users create the 3DCG driving scene themselves by drawing a sketch of it.
Collaborative Avatar Platform for Collecting Human Expertise
Description: We present a system that merges the control of two people into one robot arm so that they can collaborate. There are two control modes (division of roles and mixing in an adjustable ratio), and we found that the system enables movements that are impossible for one person alone and stabilizes the motion.
Depth-Aware Dynamic Projection Mapping using High-speed RGB and IR Projectors
Description: We report a new system combining a camera, an RGB projector, and a newly developed high-speed IR projector. Moreover, we realize 0.4-ms markerless 3D tracking by leveraging the small inter-frame motion. Based on these technologies, we demonstrate flexible, depth-aware dynamic projection mapping on an entire scene with 8-ms latency.
DroneStick: Flying Joystick as a Novel Type of Interface
Description: DroneStick is a novel hands-free method for smooth interaction between a human and a robotic system via one of its agents. The flying joystick (DroneStick), part of a multi-robot system, consists of a flying drone and a coiled wire fitted with a vibration motor.
Frisson Waves: Sharing Frisson to Create Collective Empathetic Experiences for Music Performances
Description: Frisson is a mental experience accompanied by bodily reactions such as shivers, tingling skin, and goosebumps. However, this sensation is not shareable with others and is rarely used in live performances. We propose Frisson Waves, a real-time system to detect, trigger, and share frisson in a wave-like pattern during music performances.
GroundFlow: Multiple Flows Feedback for Enhancing Immersive Experience on the Floor in the Wet Scenes
Description: We present GroundFlow, a water recirculation system that provides multiple-flow feedback on the floor in immersive virtual reality. Our demonstration implements a virtual excursion that lets users experience different water flows and their corresponding wet scenes.
HoloBurner: Mixed reality equipment for learning flame color reaction by using aerial imaging display
Description: The HoloBurner system is a mixed reality instrument for chemistry experiments that teaches flame color reactions. The system consists of a real Bunsen burner and an aerial imaging display that shows a holographic image of a flame at the tip of the burner.
Integration of stereoscopic laser-based geometry into 3D video using DLP Link synchronisation
Description: In this work, we demonstrate a laser-based display integrating stereoscopic vector graphics into a dynamic scene generated by a conventional 3D video projection system. These laser graphics are projected with a galvo-laser system that exhibits far greater brightness, contrast, and acuity than pixelated projectors.
LIPSYNC.AI: A.I. Driven Lips and Tongue Animations Using Articulatory Phonetic Descriptors and FACS Blendshapes
Midair Haptic-Optic Display with Multi-Tactile Texture based on Presenting Vibration and Pressure Sensation by Ultrasound
Description: In this demonstration, we develop a midair haptic-optic display with multi-tactile texture using focused ultrasound. In this system, participants can touch aerial 3D images with a realistic texture without wearing any devices. The realistic textures are rendered by simultaneously presenting vibration and static pressure sensation using ultrasound.
Multimodal Feedback Pen Shaped Interface and MR Application with Spatial Reality Display
Description: A pen-shaped interface capable of providing the following multimodal feedback from virtual objects: (1) linear feedback to express contact pressure; (2) rotational feedback to simulate the friction of rubbing a virtual surface; (3) vibrotactile feedback; and (4) auditory feedback to convey contact information. We demonstrate an MR interaction system combining the pen-shaped interface with a Spatial Reality Display.
Parallel Ping-Pong: Demonstrating Parallel Interaction through Multiple Bodies by a Single User
Description: Parallel Ping-Pong is a parallel interaction with multiple bodies (here, two robot arms) controlled by a single user. To reduce the user's workload when controlling multiple bodies, we added (1) an automatic view transition between the bodies' viewpoints and (2) an autonomous body motion that integrates the user's motion.
Real-time Image-based Virtual Try-on with Measurement Garment
Description: We propose a robust real-time image-based virtual try-on system. We formulate the problem as a supervised image-to-image translation task using a measurement garment, and we capture the training data with a custom actuated mannequin. The user only needs to wear the measurement garment to try on many different target garments.
Recognition of Gestures over Textiles with Acoustic Signatures
Description: We demonstrate a method for turning a textured surface into an opportunistic input interface, using a machine learning model pre-trained on the acoustic signals generated while scratching different types of fabric. A single, brief audio recording then suffices to characterize both the texture and the gesture that produced it.
Self-Shape-Sensing Device with Flexible Mechanical Axes for Deformable Input Interface
Description: We present a novel device capable of sensing its own shape, structured around flexible mechanical axes that allow deformation with a wide degree of freedom and enable tangible control by hand. Users can intuitively interact with volumetric images on a spatial display by changing the device's shape.
Simultaneous Augmentation of Textures and Deformation Based on Dynamic Projection Mapping
Description: We exploit human perception characteristics and dynamic projection mapping techniques to simultaneously overwrite both the textures and the perceived deformation of a real object. With the developed 1,000-fps low-latency projector-camera system, we demonstrate a plaster figure turning into a colorful, flabby object through projection.
The Aromatic Garden, Exploring new ways to interactively interpret narratives combining olfaction and vision including temporal change of scents using olfactory display
Description: Nakamoto Laboratory (Tokyo Institute of Technology) presents a multi-sensory olfactory game, "The Aromatic Garden", offering a unique user experience through the combination of new olfactory technology and art. Players must use their sense of smell to navigate and collect scents, making for an engaging and challenging experience.
VWind: Virtual Wind Sensation to the Ear by Cross-Modal Effects of Audio-Visual, Thermal, and Vibrotactile Stimuli
Description: We propose presenting a virtual wind sensation through cross-modal effects. We developed VWind, a wearable headphone-type device that delivers vibrotactile and thermal stimuli in addition to visual scenarios and binaural sounds. We prepared demonstrations in which users feel as if wind is blowing past their ears or they are exposed to freezing gusts.
Weighted Walking: Propeller-based On-leg Force Simulation of Walking in Fluid Materials in VR
Description: Weighted Walking is a wearable device with ducted fans mounted on the user's calves. The fans generate powerful thrust to simulate the forces on the user's lower limbs when moving through different fluid materials. It can also simulate walking under different gravity conditions, such as on another planet.