LIPSYNC.AI: A.I. Driven Lips and Tongue Animations Using Articulatory Phonetic Descriptors and FACS Blendshapes

  • Full Access
  • Onsite Student Access
  • Onsite Experience
  • Virtual Full Access
  • Virtual Basic Access

1) Booth exhibits in Hall E of the Tokyo International Forum (TIFF): 13 exhibits
(December 15-16, 2021, 10:00-18:00 / December 17, 10:00-16:00 JST):
Visitors can try the exhibits in person at the venue throughout the exhibition period.

2) On-demand streaming of presentation videos on the virtual platform: 19 presentations
(December 6, 2021 - March 11, 2022):
Presentation videos from all presenters will be streamed on demand.

3) Live demonstrations and Q&A sessions: 19 sessions
(December 15-17, 2021):
A 60-minute live demo and Q&A session per presentation will be held on the virtual platform. Please see here for the schedule.
- The six live sessions from overseas will be streamed live to the remote booth in Hall E of the TIFF venue.
- Each of the 13 booths exhibited in Hall E of the TIFF venue will close only during its own one-hour session.
Recordings of these sessions will be available on demand on the virtual platform after each live session ends.


Description: LIPSYNC.AI: A.I. Driven Lips and Tongue Animations Using Articulatory Phonetic Descriptors and FACS Blendshapes

Presenter(s):
Jara Alvarez Masso, Emotech Ltd, United Kingdom
Alexandru Mihai Rogozea, Emotech Ltd, United Kingdom
Jan Medvesek, Emotech Ltd, United Kingdom
Saeid Mokaram, Emotech Ltd, United Kingdom
Yijun Yu, Emotech Ltd, United Kingdom

Jara has a background in Design and in Cognitive Systems and Interactive Media. She worked as a product designer for forward-thinking companies such as Twenty2b and Seam Technic, where she helped develop original concepts for innovative products in consumer electronics and wearable devices. At Emotech she has worked to improve the relationship between humans and technology, first on a robotic personal assistant and later on supportive technologies that enable digital creations to look and act like human beings. She carried out research on the speech and 3D theory behind LIPSYNC.AI and now manages the product.

Alex has a background in Computer Science and Visual Effects. In recent years he has worked with 2D and 3D software companies such as Foundry, developing tools that help 3D artists express their skills as well as possible. At Emotech he contributed to the various AI technologies that drive the in-house personal assistant, and then helped develop LIPSYNC.AI by implementing neural network architectures and developing procedural algorithms.

Jan has a background in software and hardware development, with an emphasis on research and development. He has developed algorithms for image processing, built software and hardware for 3D glasses, and spent time teaching. He is now a co-founder of Emotech and is passionate about pushing the boundaries of AI in day-to-day life.

Saeid is a machine learning scientist. Since completing his PhD at the University of Sheffield, he has gained commercial and academic experience in automatic speech recognition and natural language understanding. He is excited by the application of research to real-world problems and has contributed to LIPSYNC.AI at Emotech by applying his expertise in speech technology and AI.

Yijun is an instinctive product designer who believes in the value of users. Seeing design as a medium for turning dreams and imagination into real-life experiences, she is driven by a vision of designing for the future. At Emotech she designed the user experience for a robotic personal assistant and later contributed to LIPSYNC.AI through research into, and application of, articulation theory for the tongue and lips.
