Jingyu Liu

CS PhD Student at UChicago, Part-time Student Researcher at Together AI

University of Chicago

Bio

I am a first-year PhD student at the University of Chicago, fortunately advised by Prof. Ce Zhang. I finished my master's in CS at ETH Zurich. During the master's, I took a gap year at Meta AI, working as an AI resident on LLMs and 3D computer vision. I was very fortunate to work with many talented folks and be supervised by Barlas Oğuz, Mike Lewis, and Gabriel Synnaeve. Before the master's, I spent a year as a machine learning engineer at ByteDance, building search engines. I graduated from NYU with honors in CS and was awarded the Prize for Outstanding Performance in CS.

Through my previous work on Code Llama and Llama 2 Long, I have become very interested in AI systems, especially in developing efficient algorithms and systems for large-scale training and inference. I am also intrigued by how we can improve model alignment and understand the science behind these foundation models.

{first_name}6 AT uchicago DOT edu

Feel free to drop me an email about anything, especially potential collaborations!

Interests
  • Large language models & NLP
  • AI systems
  • Science of foundation models
Education
  • PhD Student in CS, 2024 - Present

    University of Chicago

  • MS in Computer Science, 2024

    ETH Zurich

  • BA in Computer Science with Honors, 2020

    New York University

Updates

[2024.10] Our survey paper was accepted by TMLR 2025!

[2024.9] I’m starting my PhD at UChicago, working with Professor Ce Zhang.

[2024.8] Our paper was accepted to WACV 2025!

Experience

Meta AI
AI Resident
September 2022 – September 2023
Menlo Park, CA

Research on large language models:

  • Code Llama: state-of-the-art open-source code generation LLMs
  • Llama 2 Long: effective context-length extension of Llama 2 up to 32K tokens

Research on 3D computer vision:

  • Semantic 3D indoor scene synthesis, reasoning, and planning
  • Text-guided 3D human generation

ETH Zurich
Research Assistant
March 2022 – November 2022
Zurich, Switzerland
Student research assistant working on offline reinforcement learning algorithms that train with a mixture of trajectories sampled from multiple demonstrators.

ByteDance
Machine Learning Engineer
August 2020 – August 2021
Beijing, China
Worked on the search engine for Douyin's e-commerce platform from its earliest stages, including the search index, data pipelines, the retrieval module, and deep ranking models.

Courant Institute, New York University
Teaching Assistant
September 2018 – May 2019
New York, NY
Tutored students on computer system organization.

Papers

Effective Long-Context Scaling of Foundation Models
We present a series of long-context LLMs that support effective context windows of up to 32,768 tokens. Our model series are built …

Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open …

Academic Service

Reviewer for How Far Are We From AGI @ ICLR 2024

Reviewer for Long-Context Foundation Models (LCFM) @ ICML 2024

Miscellaneous

I was first trained as a game designer at the NYU Game Center during my undergrad and became increasingly interested in CS and AI. Even so, I'm still very interested in game dev, physically-based rendering, and game AI.

In my free time, I enjoy playing chess (my favorite live stream) and electric guitar (my favorite instrumental band), and I recently got hooked on golf (a group of chill golfers).