
Academic Lecture 424: Analyzing Humans in Videos, Images, and NeuroImages

Published: 2019/04/09 (posted by 周时强)

Speaker: Ehsan Adeli (Stanford University)

Time: Monday, April 15, 2019; talk 10:30-11:00, discussion 13:30-15:30

Venue: Siyuan Hall, first floor, Lehu Building

Host: Tuo Leng (冷拓)

 

Speaker Biography:

Dr. Ehsan Adeli is an NIH researcher at the Stanford University School of Medicine and the Department of Computer Science. He received his Ph.D. from the Iran University of Science and Technology. He has also conducted postdoctoral research at the University of North Carolina at Chapel Hill and served as a researcher at the Robotics Institute of Carnegie Mellon University. Dr. Adeli's research interests include machine learning, computer vision, medical image analysis, and computational neuroscience.

 

Abstract:

Humans have a remarkable ability to perceive the world around us in detail. We can analyze objects and their properties, identify anomalies in medical images, and single out people in images and describe their actions. Automated, machine-understandable methods for such tasks are integral parts of most human-centric applications. At the same time, humans require interpretability from such machine learning techniques, from understanding visual scenes for self-driving applications to analyzing neuroimages to diagnose diseases and their underlying causes (biomarkers). Despite several successes in these fields, such detailed and interpretable understanding of visual data is beyond current computer vision technology. In this talk, I discuss interpretable machine learning techniques from the pre- and post-deep-learning eras for analyzing humans from videos and brain neuroimages. Specifically, I will discuss "Understanding Actions in Videos", "Visual Reasoning for Future Forecasting in Videos", "Learning Brain Disease Biomarkers from NeuroImages", and "Human Brain Development in Early Years of Life".

 
