Date: April 26th, 10:00-11:00
Address: LEE SHAU KEE BUILDING, A942-1
Speaker: Junxiao Shen
Introduction: Dr. Junxiao Shen received his B.A., M.Eng., and Ph.D. in Information Engineering and Computer Science from Trinity College at the University of Cambridge, where he was a full scholarship awardee. He has served as a research scientist at Reality Labs, Meta, and was recognized as a Meta PhD Fellow. His research is dedicated to enhancing interactions in Extended Reality (XR) through machine learning and multimodal AI.
Abstract: Keyboard and mouse interactions laid the groundwork for personal computers, and multi-touch interactions did the same for smartphones. The question at hand is: what type of interaction is essential to establish Extended Reality (XR) as the next-generation hardware platform?
To address this question, we must first understand the shifting challenges: the transition from 2D to 3D interaction, the move from a restricted interaction and display space to an unlimited expanse, and the shift from a uni-modal to a multi-modal approach in XR.
In such an expansive interaction and design space, traditional heuristic and statistical methods often fall short of providing optimal interaction. This is where Artificial Intelligence (AI) steps in as a solution to these challenges, offering intelligent and adaptive interaction.
To demonstrate how AI can address these challenges, this talk will concentrate on two fundamental interaction components within XR: text entry and gesture interaction.