Zerong Zheng

郑泽荣 | 3rd-year PhD Student

About Me

Hi, this is Zerong Zheng (郑泽荣). I am currently a 3rd-year PhD student in the Department of Automation, Tsinghua University, advised by Prof. Yebin Liu. My research focuses on 3D vision and graphics, especially 3D reconstruction and performance capture.

Education & Experience

Tsinghua University

B.Eng. & Ph.D. Student
   September 2014 - Present     Beijing, China

I began my PhD studies in August 2018, advised by Prof. Yebin Liu. Before that, I received a B.Eng. degree from the Department of Automation, Tsinghua University, in July 2018.

Facebook Inc.

Research Intern
  June 2019 - September 2019     San Francisco, USA

I was excited to join Facebook Reality Labs in Sausalito as a research intern in the summer of 2019, working with Dr. Tony Tung.

University of Southern California

Undergraduate Visiting Scholar
  June 2017 - August 2017     Los Angeles, USA

I spent an exciting summer as a visiting researcher at the USC Institute for Creative Technologies, working with Prof. Hao Li.

Publications

Deep Implicit Templates for 3D Shape Representation

Z. Zheng, T. Yu, Q. Dai, Y. Liu
arXiv 2020

We propose Deep Implicit Templates, a new 3D shape representation that supports explicit mesh correspondence reasoning in deep implicit representations. Our key idea is to formulate deep implicit functions as conditional deformations of a template implicit function.
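
A minimal PyTorch sketch of this formulation (an illustrative toy of my own, not the paper's actual architecture or training setup): a shared template SDF is queried at points warped by a shape-conditioned deformation network, i.e., SDF(x, c) = T(W(x, c)).

    import torch
    import torch.nn as nn

    class DeepImplicitTemplateToy(nn.Module):
        """Toy sketch: SDF(x, c) = T(W(x, c)), where W warps query points
        conditioned on a per-shape latent code c, and T is a template SDF
        shared across all shapes."""

        def __init__(self, code_dim=128, hidden=256):
            super().__init__()
            # W: shape-conditioned residual warp of query points (3 -> 3)
            self.warp = nn.Sequential(
                nn.Linear(3 + code_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),
            )
            # T: template SDF shared across all shapes (3 -> 1)
            self.template = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, points, code):
            # points: (N, 3) query points; code: (code_dim,) latent of one shape
            code = code.unsqueeze(0).expand(points.shape[0], -1)
            warped = points + self.warp(torch.cat([points, code], dim=-1))
            return self.template(warped)  # signed distances, shape (N, 1)

Because every shape is expressed as a deformation of one shared template, points that warp to the same template location are in correspondence across shapes, which is what enables the explicit correspondence reasoning mentioned above.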

VERTEX: VEhicle Reconstruction and TEXture Estimation Using Deep Implicit Semantic Template Mapping

X. Zhao, Z. Zheng, C. Ji, Z. Liu, Y. Luo, T. Yu, J. Suo, Q. Dai, Y. Liu
arXiv 2020

We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input in real-world street environments.

PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction

Z. Zheng, T. Yu, Y. Liu, Q. Dai
IEEE Transactions on Pattern Analysis and Machine Intelligence - TPAMI (accepted)

We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit functions for robust 3D human reconstruction from a single RGB image or multiple images.
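
As a rough sketch of how such a conditioned implicit function can be queried (illustrative only; `project` is a hypothetical helper mapping 3D points to normalized image coordinates, `body_vol` stands in for a feature volume derived from the fitted parametric body model, and all dimensions are assumptions of mine), one can fuse a pixel-aligned image feature with a feature sampled from the body-model volume:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PaMIRLikeDecoderToy(nn.Module):
        """Toy sketch: occupancy(x) = MLP(image_feat(x), body_feat(x))."""

        def __init__(self, img_dim=64, body_dim=32, hidden=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(img_dim + body_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),  # inside/outside logit
            )

        def forward(self, img_feats, body_vol, points, project):
            # img_feats: (1, img_dim, H, W); body_vol: (1, body_dim, D, H, W)
            # points: (N, 3) query points, assumed normalized to [-1, 1]
            uv = project(points).view(1, -1, 1, 2)                  # (1, N, 1, 2)
            pix = F.grid_sample(img_feats, uv, align_corners=True)  # (1, C, N, 1)
            pix = pix.view(img_feats.shape[1], -1).t()              # (N, img_dim)
            xyz = points.view(1, 1, 1, -1, 3)
            body = F.grid_sample(body_vol, xyz, align_corners=True) # (1, C, 1, 1, N)
            body = body.view(body_vol.shape[1], -1).t()             # (N, body_dim)
            return self.mlp(torch.cat([pix, body], dim=-1))         # (N, 1)

The body-model feature acts as a pose-aware prior: even where the image evidence is ambiguous (e.g., under occlusion or challenging poses), each query point still carries information about where it sits relative to the fitted body.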

RobustFusion: Human Volumetric Capture with Data-driven Visual Cues using a RGBD Camera

Z. Su*, L. Xu*, Z. Zheng, T. Yu, Y. Liu, L. Fang
European Conference on Computer Vision 2020 - ECCV 2020 Spotlight

We introduce a robust, template-less human volumetric capture system that combines various data-driven visual cues and significantly outperforms existing state-of-the-art approaches.

Robust 3D Self-portraits in Seconds

Z. Li, T. Yu, C. Pan, Z. Zheng, Y. Liu
IEEE Conference on Computer Vision and Pattern Recognition 2020 - CVPR 2020 Oral

We propose an efficient method for robust 3D self-portraits using a single RGBD camera. Our method generates detailed 3D self-portraits in seconds and handles challenging clothing topologies.

DeepHuman: 3D Human Reconstruction from a Single Image

Z. Zheng, T. Yu, Y. Wei, Q. Dai, Y. Liu
IEEE International Conference on Computer Vision 2019 - ICCV 2019 Oral

We propose DeepHuman, a deep learning-based framework for 3D human reconstruction from a single RGB image. We also contribute THuman, a real-world 3D human model dataset containing approximately 7,000 models.

SimulCap: Single-View Human Performance Capture with Cloth Simulation

T. Yu, Z. Zheng, Y. Zhong, J. Zhao, Q. Dai, G. Pons-Moll, Y. Liu
IEEE Conference on Computer Vision and Pattern Recognition 2019 - CVPR 2019

This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. By incorporating cloth simulation into the performance capture pipeline, we can generate plausible cloth dynamics and cloth-body interactions.

HybridFusion: Real-time Performance Capture Using a Single Depth Sensor and Sparse IMUs

Z. Zheng, T. Yu, H. Li, K. Guo, Q. Dai, L. Fang, Y. Liu
European Conference on Computer Vision 2018 - ECCV 2018

We propose a lightweight and highly robust real-time human performance capture method based on a single depth camera and sparse inertial measurement units (IMUs).

DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor

T. Yu, Z. Zheng, K. Guo, J. Zhao, Q. Dai, H. Li, G. Pons-Moll, Y. Liu
IEEE Conference on Computer Vision and Pattern Recognition 2018 - CVPR 2018 Oral

We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera.


Honors & Awards

Future Scholar Fellowship, Tsinghua University

Excellent Bachelor Thesis Award, Tsinghua University


Academic Excellence Award, Tsinghua-GuangYao Scholarship, Tsinghua University

Excellence Award & Scholarship for Technological Innovation, Tsinghua University


Academic Excellence Award, Tsinghua-Hengda Scholarship, Tsinghua University

Excellence Award for Technological Innovation, Tsinghua University


Academic Excellence Award & Scholarship, Tsinghua University

Skills

  • C++ (OpenCV, OpenGL, CUDA, Eigen, PCL, Qt, ...)
  • Python (TensorFlow/PyTorch)
  • MATLAB, C#
  • LaTeX
  • Chinese (native)
  • English (TOEFL: 101; GRE: 152+170+4.0)