
Email: [email protected]
Google Scholar
CV
I'm Zoey, a fifth-year PhD student in Computer Science + Economics at the University of Pennsylvania, advised by Dan Roth. I do research in AI, specifically large language models. With a background in economics and training in AI, I am broadly interested in building socially intelligent AI systems that align with user needs and deliver real utility.
I am also currently a part-time researcher at Meta Superintelligence Lab (formerly Meta GenAI).
Meta AI, 05/2025 – present
Student Researcher on AI for Social Experience team, Meta Superintelligence Lab
Project: Building socially intelligent AI chatbots through reinforcement learning
Amazon SEAS (Core AI), 05/2024–08/2024
Research Intern on Store Economic and Science team
Project: LLM for user query understanding and product recommendation
Tencent WeChat AI, 05/2023–09/2023
Research Intern on WeChat security team
Project: Causal learning and human-AI collaboration for fraud intervention on social media
Please see my Google Scholar for the full list. (*: equal contribution)
<aside>
PersonaMem-v2: Towards Personalized Intelligence via Learning Implicit User Personas and Agentic Memory [paper]
Bowen Jiang, Yuan Yuan, Maohao Shen, Zhuoqun Hao, Zhangchen Xu, Zichen Chen, Ziyi Liu, Anvesh Rao Vijjini, Jiashu He, Hanchao Yu, Radha Poovendran, Gregory Wornell, Lyle Ungar, Dan Roth, Sihao Chen, Camillo Jose Taylor. arXiv preprint
</aside>
Key Findings:
Propose a new benchmark dataset, coupled with a reinforcement learning training pipeline with agentic memory, to improve LLM personalization in long-horizon user interactions involving implicit and evolving traits and preferences.

<aside>
Know Me, Respond to Me: Benchmarking LLMs for Dynamic User Profiling and Personalized Responses at Scale [paper]
Zhuoqun Hao*, Bowen Jiang*, Young-Min Cho, Bryan Li, Yuan Yuan, Sihao Chen, Lyle Ungar, Camillo J Taylor, Dan Roth. COLM 2025
</aside>
Key Findings:

<aside>
A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners [paper]
Bowen Jiang, Yuan Yuan, Xinyi Bai, Zhuoqun Hao, Alyson Yin, Yaojie Hu, Wenyu Liao, Lyle Ungar, Camillo Jose Taylor. Findings of EMNLP, 2025
</aside>
Key Findings:
Propose a causal perturbation framework for evaluating the faithfulness of reasoning in state-of-the-art LLMs.
