Hello! I’m Yishu Ji.

I am a second-year Ph.D. student in Human-Centered Computing at Georgia Tech. Before that, I earned my B.Eng. in Industrial Engineering from Tsinghua University. During my undergrad, I was fortunate to be mentored by some amazing researchers who inspired my current research: Prof. Yukang Yan, on using gaze behavior data to predict the noticeability of motion redirection in VR environments, and Prof. Jeff Rzeszotarski, on applying LLMs to automated exploratory data analysis.

I believe that passive sensing data, especially from wearables, quietly keep a “diary” of the rhythms of our daily lives, revealing patterns that shape an individual’s experience. Sensing itself, however, stops at recording; emerging generative AI has the potential to read that “diary” and offer further support to users.

Guided by this belief, my research sits at the intersection of behavioral modeling and human-centered AI. I apply computational methods to large-scale sensing data to understand and predict human behavior patterns. Building on this foundation, I aim to design and develop AI systems that (a) support the interpretation and development of individualized behavioral models, and (b) leverage these models to anticipate needs, promote wellbeing, and enhance everyday life.

News

  • 07/29/25: ✈️ Attended CogSci25 in San Francisco, CA, where I presented a poster: “Visualizing Motion Traces Enhances Pursuit Detection in Dynamic Scenes”.
  • 05/01/25: ✈️ Attended CHI25 in Yokohama, Japan.
  • 01/16/25: 🎉 Our paper “Modelling Effects of Visual Attention on Noticeability of Body-Avatar Offsets in Virtual Reality” was accepted to CHI25. Congrats to my collaborators!

Fun facts

Outside of my main pursuits, I have spent a lot of time on sports and dance since an early age. I was recognized as a Chinese National Level Three Athlete in Aerobic Gymnastics at the age of 10.