LIPSYNC.AI: A.I. Driven Lips and Tongue Animations Using Articulatory Phonetic Descriptors and FACS Blendshapes
All 19 presentations are accessible on demand in the virtual platform from 6 December 2021 to 11 March 2022.
Of these, 13 Emerging Technologies presentations will have physical exhibits onsite in Hall E, Tokyo International Forum, from 15 to 17 December 2021.
Live demonstrations and Q&A sessions for the respective presentations will take place at the dates and times specified below.
Description: LIPSYNC.AI: A.I. Driven Lips and Tongue Animations Using Articulatory Phonetic Descriptors and FACS Blendshapes
Jara Alvarez Masso, Emotech Ltd, United Kingdom
Alexandru Mihai Rogozea, Emotech Ltd, United Kingdom
Jan Medvesek, Emotech Ltd, United Kingdom
Saeid Mokaram, Emotech Ltd, United Kingdom
Yijun Yu, Emotech Ltd, United Kingdom
Alex has a background in Computer Science and Visual Effects. In recent years he has worked with 2D and 3D software companies, such as Foundry, developing tools that help 3D artists express their skills as fully as possible. At Emotech he contributed to the various AI technologies that drive the in-house personal assistant, and later helped develop LIPSYNC.AI by implementing neural-network architectures and developing procedural algorithms.
Jan has a background in software and hardware development, with an emphasis on research and development. He has developed algorithms for image processing, built software and hardware for 3D glasses, and spent time teaching. He is now a co-founder of Emotech and is passionate about pushing the boundaries of AI in day-to-day life.
Saeid is a machine learning scientist. Since completing his PhD at the University of Sheffield, he has gained commercial and academic experience in automatic speech recognition and natural language understanding. He is excited by the application of research to real-world problems and contributed to LIPSYNC.AI at Emotech by applying his knowledge of speech technology and AI.
Yijun is an instinctive product designer who believes in putting users first. Seeing design as a medium for turning dreams and imagination into real-life experiences, she is keen to design for the future. At Emotech she designed the user experience for a robotic personal assistant and later contributed to LIPSYNC.AI through research on and application of articulation theory for the tongue and lips.
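As a rough illustration of the idea named in the title, mapping phoneme-level articulatory descriptors onto FACS-style blendshape weights, here is a minimal sketch. All phoneme labels, descriptor names, gain values, and blendshape names below are hypothetical placeholders for illustration, not Emotech's actual model or data:

```python
# Toy sketch: articulatory phonetic descriptors -> FACS-style blendshape
# weights. Every name and number here is an illustrative assumption.

# Per-phoneme articulatory descriptors (hypothetical values in [0, 1]).
ARTICULATORY = {
    "AA": {"jaw_open": 0.9, "lip_round": 0.1, "tongue_up": 0.1},
    "UW": {"jaw_open": 0.3, "lip_round": 0.9, "tongue_up": 0.6},
    "TH": {"jaw_open": 0.2, "lip_round": 0.0, "tongue_up": 0.9},
}

# Hypothetical linear mapping: descriptor -> (blendshape name, gain).
DESCRIPTOR_TO_BLENDSHAPE = {
    "jaw_open": ("JawOpen", 1.0),
    "lip_round": ("MouthPucker", 0.8),
    "tongue_up": ("TongueUp", 1.0),
}

def blendshape_weights(phoneme):
    """Return blendshape weights for a single phoneme."""
    feats = ARTICULATORY[phoneme]
    return {bs: gain * feats[d]
            for d, (bs, gain) in DESCRIPTOR_TO_BLENDSHAPE.items()}

def interpolate(a, b, t):
    """Linearly blend two weight dicts for a smooth transition, 0 <= t <= 1."""
    keys = set(a) | set(b)
    return {k: (1 - t) * a.get(k, 0.0) + t * b.get(k, 0.0) for k in keys}

# Halfway between an open vowel and a rounded vowel.
mid = interpolate(blendshape_weights("AA"), blendshape_weights("UW"), 0.5)
```

In a real system the hand-written tables above would be replaced by a learned model (e.g. a neural network predicting descriptor curves from audio), but the final stage, turning articulatory quantities into animatable blendshape weights, follows the same shape as this sketch.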