Jiuxiang Gu


Senior Research Scientist

Adobe Research

Seattle, WA

My name is Jiuxiang Gu (顾久祥). I am a Senior Research Scientist at Adobe Research in Seattle. I received my Ph.D. from Nanyang Technological University, Singapore (January 2016 – May 2019), under the supervision of Prof. Jianfei Cai, Dr. Gang Wang, and Prof. Tsuhan Chen. I currently serve as an Area Chair for ICLR 2025 and WACV 2024/2025, a Senior Program Committee Member for IJCAI 2021–2024, and a Program Committee Member for AAAI 2021–2023, NAACL 2021, and other venues. My research journey began in hardware design: from 2010 to 2015, I worked as an ASIC design engineer before transitioning to Artificial Intelligence in 2015. My current research interests include:

  • Machine Learning Theory
  • Vision-Language and Multimodal Pretraining
  • Efficient Modeling
  • Multimodal Understanding and Reasoning
  • Self-Supervised Learning

I am actively seeking interns and collaborators for projects in Computer Vision, Natural Language Processing, and Multimodal Learning.

📧 Feel free to reach out: jigu@adobe.com / gu.jiuxiang@gmail.com

Selected Publications

  1. Stack-Captioning: Coarse-to-Fine Learning for Image Captioning
    Jiuxiang Gu, Jianfei Cai, Gang Wang, and 1 more author
    In AAAI, 2018
    Oral
  2. Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models
    Jiuxiang Gu, Jianfei Cai, Shafiq Joty, and 2 more authors
    In CVPR, 2018
    Spotlight
  3. Recent Advances in Convolutional Neural Networks
    Jiuxiang Gu, Zhenhua Wang, Jason Kuen, and 8 more authors
    Pattern Recognition, 2018
  4. Scene Graph Generation with External Knowledge and Image Reconstruction
    Jiuxiang Gu, Handong Zhao, Zhe Lin, and 3 more authors
    In CVPR, 2019
  5. Towards Language-Free Training for Text-to-Image Generation
    Yufan Zhou, Ruiyi Zhang, Changyou Chen, and 6 more authors
    In CVPR, 2022
  6. LRM: Large Reconstruction Model for Single Image to 3D
    Yicong Hong, Kai Zhang, Jiuxiang Gu, and 7 more authors
    In ICLR, 2024
    Oral