LIPSYNC.AI: A.I. Driven Lips and Tongue Animations Using Articulatory Phonetic Descriptors and FACS Blendshapes

  • Full Access
  • Onsite Student Access
  • Onsite Experience
  • Virtual Full Access
  • Virtual Basic Access

All 19 presentations are accessible on demand in the virtual platform from 6 December 2021 to 11 March 2022.
Of these, 13 Emerging Technologies presentations will have physical exhibits onsite in Hall E, Tokyo International Forum, from 15 to 17 December 2021.
Live demonstrations and Q&As for the respective presentations will take place at the date/time specified below.


Description: LIPSYNC.AI: A.I. Driven Lips and Tongue Animations Using Articulatory Phonetic Descriptors and FACS Blendshapes

Presenter(s):
Jara Alvarez Masso, Emotech Ltd, United Kingdom
Alexandru Mihai Rogozea, Emotech Ltd, United Kingdom
Jan Medvesek, Emotech Ltd, United Kingdom
Saeid Mokaram, Emotech Ltd, United Kingdom
Yijun Yu, Emotech Ltd, United Kingdom

Jara has a background in Design and in Cognitive Systems and Interactive Media. She worked as a product designer for forward-thinking companies such as Twenty2b and Seam Technic, where she helped develop original concepts for innovative consumer electronics and wearable devices. At Emotech she has worked on improving the relationship between humans and technology, first on a robotic personal assistant and later on supportive technologies that enable digital creations to look and act like a human being. She carried out research on the speech and 3D theory behind LIPSYNC.AI and now manages the product.

Alex has a background in Computer Science and Visual Effects. In recent years he worked with 2D and 3D software companies, such as Foundry, developing tools that help 3D artists express their skills as effectively as possible. At Emotech he contributed to the various AI technologies that drive the in-house personal assistant, and then helped develop LIPSYNC.AI by implementing neural network architectures and developing procedural algorithms.

Jan has a background in software and hardware development with an emphasis on research and development. He has worked on image-processing algorithms, on software and hardware for 3D glasses, and has spent time teaching. He is now a co-founder of Emotech and is passionate about pushing the boundaries of AI in day-to-day life.

Saeid is a machine learning scientist. Since completing his PhD at the University of Sheffield, he has gained commercial and academic experience in automatic speech recognition and natural language understanding. He is excited by the application of research to real-world problems and has contributed to LIPSYNC.AI at Emotech by applying his expertise in speech technology and AI.

Yijun is an instinctive product designer who believes in the value of users. Seeing design as a medium for turning dreams and imagination into real-life experiences, she is keen to design for the future. At Emotech she designed the user experience for a robotic personal assistant and later contributed to LIPSYNC.AI through research and applications of articulation theory for the tongue and lips.
