
Academic Talk 515: Visually Grounded Paraphrase

Published: 2021/10/22  Posted by: 周时强

Speaker: Associate Professor Chenhui Chu (褚晨翚)

Affiliation: Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University

Time: Monday, November 1, 2021, 15:00–16:00

Venue: Tencent Meeting (ID: 508 326 821)

Host: Wang Hao (王昊)

Abstract:

Visually grounded paraphrases (VGPs) are different phrasal expressions describing the same visual concept in an image. VGPs have the potential to improve language and vision tasks such as visual question answering and image captioning. In this talk, I will cover our recent work on various topics in VGP research, including VGP identification, VGP classification, VGP generation, cross-lingual visual grounding, flexible visual grounding, and VGP for vision-and-language representation learning.
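To make the VGP identification task mentioned above concrete, the following is a minimal, hypothetical Python sketch that frames it as binary classification over phrase pairs drawn from captions of the same image. The word-overlap heuristic, the example phrases, and the threshold are illustrative assumptions only, not the speaker's method; real VGP systems use learned phrase and image-region representations.

```python
# Toy illustration of VGP identification: given two phrases from captions of the
# same image, decide whether they describe the same visual concept (i.e., would
# ground to the same image region). All data and the heuristic are hypothetical.

from dataclasses import dataclass


@dataclass
class PhrasePair:
    image_id: str
    phrase_a: str
    phrase_b: str
    label: bool  # True if both phrases ground to the same image region


def jaccard_similarity(a: str, b: str) -> float:
    """Word-overlap similarity, a stand-in for learned phrase representations."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def identify_vgp(pair: PhrasePair, threshold: float = 0.2) -> bool:
    """Predict whether the two phrases are visually grounded paraphrases."""
    return jaccard_similarity(pair.phrase_a, pair.phrase_b) >= threshold


if __name__ == "__main__":
    pairs = [
        PhrasePair("img_001", "a young boy", "the young child", True),
        PhrasePair("img_001", "a young boy", "the little kid", True),
    ]
    for p in pairs:
        pred = identify_vgp(p)
        print(f"{p.phrase_a!r} vs {p.phrase_b!r}: predicted={pred}, gold={p.label}")
```

Note that the purely lexical heuristic misses the second pair ("a young boy" vs "the little kid"), which shares no words despite describing the same person; this is exactly the kind of case where grounding phrases in the image, as in the VGP work covered in the talk, is expected to help.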

About the Speaker:

Chenhui Chu received his B.S. in software engineering from Chongqing University in 2008, and his M.S. and Ph.D. in Informatics from Kyoto University in 2012 and 2015, respectively. He is currently a program-specific associate professor at Kyoto University. His research interests include natural language processing, particularly machine translation and multimodal machine learning. His research has been recognized with the 2019 MSRA Collaborative Research Grant Award, the 2018 AAMT Nagao Award, and the CICLing 2014 Best Student Paper Award.


