Multimodal AI Lab (MMAI) focuses on the fusion of modalities in Deep Learning, with roots in Computer Vision, Natural Language Processing, and Audio.


Here at MMAI, we explore the intricacies of how different fields of Deep Learning come together to form more powerful representations for the advancement of AI.


At MMAI, we focus our research largely (but not exclusively) on core multimodal fields such as Vision & Language, Video Understanding, and Vision & Audio, along with the challenges that arise from combining these fields, including data, bias, and out-of-distribution issues in multimodal tasks. Our work also extends to traditional computer vision tasks.


If you would like to know more about us, please visit our homepage or email the professor.





❚ Contact


Prof: Jae Won Cho (조재원)

Office: Room 607, Gwanggaeto-gwan

Tel: 02-3408-3173

Email: chojw@sejong.ac.kr

Homepage: https://sites.google.com/view/mmai-sejong