Registration Link: https://umass-amherst.zoom.us/meeting/register/tJwpfu2tqTwuEtFGW2CMLI8ntsQ8Nja_CY01
About the Speaker:
James M. Rehg is a Founder Professor in the Departments of Computer Science and Industrial and Enterprise Systems Engineering at UIUC, where he is the Director of the Health Care Engineering Systems Center. He received his Ph.D. from CMU in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995 to 2001, where he managed the computer vision research group. He was a professor in the College of Computing at Georgia Tech from 2001 to 2022. He received an NSF CAREER award in 2001 and a Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received best student paper awards at ICML 2005, BMVC 2010 and 2022, Mobihealth 2014, and Face and Gesture 2015, and a Method of the Year Award from the journal Nature Methods. Dr. Rehg served as Program co-Chair for ACCV 2012 and CVPR 2017 and as General co-Chair for CVPR 2009. He has authored more than 200 peer-reviewed scientific papers and holds 30 issued US patents. His research interests include computer vision, machine learning, and mobile and computational health (https://rehg.org). Dr. Rehg was the lead PI on an NSF Expedition to develop the science and technology of Behavioral Imaging, the measurement and analysis of social and communicative behavior using multi-modal sensing, with applications to developmental conditions such as autism. He is currently the Deputy Director and TR&D1 Lead for the mHealth Center for Discovery, Optimization, and Translation of Temporally-Precise Interventions (mDOT), which is developing novel on-body sensing and predictive analytics for improving health outcomes (https://mdot.md2k.org/).
Talk Abstract:
Beginning in infancy, individuals acquire the social and communication skills that are vital for a healthy and productive life. Children with autism face great challenges in acquiring these skills, resulting in substantial lifetime risks. Because the neural basis of autism spectrum disorder (ASD) remains unclear, the diagnosis, treatment, and study of autism depend fundamentally on the analysis of child behavior. Standard methods for behavioral observation and coding are the backbone of research studies, but they are inherently coarse-grained and not easily scalable. In this talk, I will present our research agenda, which uses AI models and computer vision technology to automate the measurement of social behavior from video. Our goal is to unlock the rich behavioral information present in video and make it available for large-scale, data-driven modeling and assessment. I will present several recent findings that demonstrate the feasibility of this approach, including a method for detecting eye contact that has been shown to achieve human-level accuracy. I will also describe recent work combining vision and language to model social deduction games, progress in developing longitudinal models of language development, and potential applications of this technology to the diagnosis and treatment of autism and other developmental conditions. This is joint work with Drs. Nancy Brady, Cathy Lord, Rebecca Jones, Sophy Kim, Jenna McDaniel, Agata Rozga, Sangmin Lee, and Eunji Chong, and Ph.D. students Fiona Ryan, Harris Nisar, Xu Cao, and Max Xu.