I am an Applied Research Scientist on Adobe's Sensei ML team working on language-vision research. I completed my PhD in the CS department at Rutgers University, in the Intelligent Visual Interfaces lab. My PhD thesis was on Multimodal Story Comprehension, under the supervision of Dr. Mubbasir Kapadia and Dr. Gerard de Melo. I am interested in the joint understanding of images/videos and abstract/narrative text, with applications to multimodal story comprehension. More specifically, this involves developing neural network models to learn the various factors that govern multimodal story comprehension and evaluating them on tasks such as story illustration, visual storytelling, image captioning, and text-to-image retrieval/generation.
- Sep 7, 2021: I am joining Adobe's Sensei ML team as an Applied Research Scientist.
- Aug 12, 2021: I have successfully defended my PhD on Multimodal Story Comprehension: Datasets, Tasks and Neural Methods.
- July 22, 2021: Our paper "AESOP: Abstract Encoding of Stories, Objects and Pictures" has been accepted to the ICCV 2021 main conference. Paper, dataset, and code to follow soon.
- April 25, 2021: We received the Best Paper Award for our paper "Exploiting Image Text Synergy for Contextual Image Captioning" at the LANTERN workshop at EACL 2021.
- April 10, 2021: Code and dataset for our paper "Exploiting Image Text Synergy for Contextual Image Captioning" are available here
- April 01, 2021: Our paper "Exploiting Image Text Synergy for Contextual Image Captioning" has been accepted at the LANTERN workshop at EACL 2021.
- Oct 10, 2020: Our arXiv paper "GitEvolve: Predicting the Evolution of GitHub Repositories" is out here. Code is available here.