Changhe Chen

MS in Robotics | University of Michigan

Bio

Changhe Chen is a master's student in Robotics at the University of Michigan, advised by Prof. Nima Fazeli and Prof. Xiaonan (Sean) Huang. Prior to joining the University of Michigan, Changhe received his BASc in Electrical Engineering from the University of Toronto, where he worked with Prof. Chan Carusone on reinforcement learning for analog circuit design. During his undergraduate studies, he also interned as a research assistant at Huawei's Noah's Ark Lab, developing multi-agent reinforcement learning platforms and trajectory prediction models for autonomous driving. In summer 2025, Changhe collaborated with Heng Yang's group at Harvard University on robotics research and real-world robotic systems. His current research focuses on embodied AI, data-efficient robot learning, and learning from humans, with an emphasis on multi-modal vision-language models and instruction-based semantic segmentation that enable robots to better understand and execute complex tasks.

Publications

* indicates equal contribution.

MoTVLA: A Vision-Language-Action Model with Unified Fast-Slow Reasoning

Wenhui Huang*, Changhe Chen*, Han Qi, Lv Chen, Yilun Du, Heng Yang

arXiv preprint arXiv:2509.16053

[Website]

Compose by Focus: Scene Graph-based Atomic Skills

Han Qi, Changhe Chen, Heng Yang

arXiv preprint arXiv:2509.16053

[Paper] [Website]

ViSA-Flow: Accelerating Robot Skill Learning via Large-Scale Video Semantic Action Flow

Changhe Chen*, Quantao Yang*, Xiaohao Xu, Nima Fazeli, Olov Andersson

arXiv preprint arXiv:2505.01288

[Paper] [Website]

Large Language Models as Natural Selector for Embodied Soft Robot Design

Changhe Chen, Xiaohao Xu, Xiangdong Wang, Xiaonan Huang

arXiv preprint arXiv:2503.02249

[Paper] [Code] [Dataset]

CRITERIA: A New Benchmarking Paradigm to Evaluate Trajectory Prediction Approaches

Changhe Chen, Mozhgan Pourkeshavarz, Amir Rasouli

IEEE International Conference on Robotics and Automation (ICRA), 2024

[Paper] [Code] [Dataset]

Using Upsampling Conv-LSTM for Respiratory Sound Classification

Changhe Chen, Rongbo Zhang

Theoretical and Natural Science (TNS), 2023

[Paper] [Code] [Dataset]

Learn TAROT with MENTOR: A Meta-Learned Self-Supervised Approach for Trajectory Prediction

Mozhgan Pourkeshavarz, Changhe Chen, Amir Rasouli

IEEE/CVF International Conference on Computer Vision (ICCV), 2023

[Paper] [Dataset]

Research Experience

Visiting Fellow, Computational Robotics Group (Harvard University), Advised by Prof. Heng Yang

Designed MoTVLA, a mixture-of-transformers framework unifying generalist vision-language reasoning with fast motion decomposition for robotic manipulation.

Developed diffusion-policy methods to enhance language steerability and inference speed in robot skill learning.

Explored scene graph-based visuomotor policies integrating GNNs and diffusion learning for robust, long-horizon skill composition under visual distractors.

AI-Driven Robotic Structure Evaluation (University of Michigan - HDR Lab), Advised by Prof. Xiaonan (Sean) Huang

Developed a data generation pipeline in Evogym for evaluating robotic structures.

Utilized large language models (LLMs) as design selectors to evaluate and optimize robotic structures.

Enhanced robotic simulation workflows by combining AI-driven evaluations with structured data generation for improved design and performance analysis.

Robot Skill Learning via Large-Scale Video Representation (University of Michigan - MMint Lab), Advised by Prof. Nima Fazeli

Designed a robot task-learning system integrating multi-modal VLMs for improved task execution.

Implemented instruction-based semantic segmentation to enhance robot perception.

Transistor Sizing and Optimization for a Low-Dropout Circuit via RL (University of Toronto), Advised by Prof. Chan Carusone

Led a team to develop an RL algorithm optimizing circuit parameters using Cadence and TSMC 65nm PDK.

Used an RGCN to encode circuit topology and DDPG to optimize device parameters, shortening design cycles by 80%.

Work Experience

Huawei Technologies - Research Assistant at Noah's Ark Lab

Multi-Agent RL Simulation Platform “SMARTS”

Autonomous Driving Trajectory Prediction Models

Leadership & Projects

U of T's AutoDrive Team (aUToronto)

Designed radar-based systems for an autonomous vehicle; won 1st place in the SAE AutoDrive Challenge II.

IEEE BioCAS Grand Challenge

Led a team to develop an ML-based respiratory sound classification model, reaching 75% accuracy.

Street Map Navigation System

Led a team to build a C++ navigation app with a responsive UI using the OpenStreetMap API.

Skills

Programming: Python (PyTorch, TensorFlow), C/C++, MATLAB, Java, Verilog, ARM Assembly

Software Tools: ROS1/2, RL (DDPG, RGCN), SMARTS Simulation, Deep Graph Library

Contact

Email: changhec@umich.edu

GitHub: github.com/AisenGinn