About Me
Hi, this is Zerong Zheng (郑泽荣). I am currently a third-year PhD student in the Department of Automation, Tsinghua University, advised by Prof. Yebin Liu. My research focuses on 3D vision and graphics, especially 3D reconstruction and human performance capture.
Background
Tsinghua University
I began my PhD studies in August 2018 under the supervision of Prof. Yebin Liu. Before that, I received a B.Eng. degree from the Department of Automation, Tsinghua University, in July 2018.
Facebook Inc.
I was excited to join Facebook Reality Labs in Sausalito as a research intern this summer, working with Dr. Tony Tung.
University of Southern California
I spent an exciting summer as a visiting researcher at the USC Institute for Creative Technologies, working with Prof. Hao Li.
Research

Deep Implicit Templates for 3D Shape Representation
We propose Deep Implicit Templates, a new 3D shape representation that supports explicit mesh correspondence reasoning in deep implicit representations. Our key idea is to formulate deep implicit functions as conditional deformations of a template implicit function.
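The core formulation can be sketched as follows. This is a minimal, hypothetical illustration (not the trained networks from the paper): the template implicit function is a unit-sphere SDF, and the conditional deformation is a toy scaling driven by a scalar latent code standing in for the learned warping network.

```python
import numpy as np

def template_sdf(p):
    """Template implicit function T: signed distance to a unit sphere."""
    return np.linalg.norm(p, axis=-1) - 1.0

def warp(p, z):
    """Hypothetical conditional deformation W(p, z): a simple per-instance
    scaling driven by the latent code z, standing in for the learned
    warping network."""
    return p * (1.0 + z)

def instance_sdf(p, z):
    """Instance SDF expressed via the shared template:
    f(p, z) = T(W(p, z))."""
    return template_sdf(warp(p, z))

# A point on the surface of a sphere of radius 1/(1+z) warps onto the
# template's zero level set, so its instance SDF is ~0.
z = 0.25
p = np.array([1.0 / (1.0 + z), 0.0, 0.0])
print(instance_sdf(p, z))
```

Because every instance is expressed through the same template coordinates, points on different shapes that warp to the same template location are in correspondence, which is what enables the explicit mesh correspondence reasoning.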

VERTEX: VEhicle Reconstruction and TEXture Estimation Using Deep Implicit Semantic Template Mapping
We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input in real-world street environments.

PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit functions for robust 3D human reconstruction from a single RGB image or multiple images.
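The conditioning idea can be sketched in a few lines. This is a hypothetical stand-in, not the paper's networks: the pixel-aligned image feature and the parametric-body feature are replaced by toy functions (projected coordinates and the signed offset from a sphere proxy), and the implicit decoder is a single linear layer with a sigmoid.

```python
import numpy as np

def image_feature(p):
    """Stand-in for a pixel-aligned image feature sampled at the
    projection of point p (in practice, from a CNN feature map)."""
    return p[..., :2]  # hypothetical: just the projected xy coordinates

def body_feature(p, body_center):
    """Stand-in for the parametric-model feature: signed offset from a
    hypothetical body proxy (a sphere of radius 0.5 at body_center)."""
    return np.linalg.norm(p - body_center, axis=-1, keepdims=True) - 0.5

def occupancy(p, body_center, weights):
    """Implicit function conditioned on both cues:
    f(p) = sigmoid(w . [image_feature(p), body_feature(p)])."""
    feat = np.concatenate([image_feature(p),
                           body_feature(p, body_center)], axis=-1)
    return 1.0 / (1.0 + np.exp(-(feat @ weights)))

p = np.array([0.0, 0.0, 0.5])
out = occupancy(p, np.zeros(3), np.ones(3))
print(out)  # an occupancy value in (0, 1)
```

The parametric-body feature anchors the free-form implicit function to a plausible human shape, which is what makes the reconstruction robust to pose and viewpoint variation.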

RobustFusion: Human Volumetric Capture with Data-driven Visual Cues using a RGBD Camera
We introduce a robust, template-less human volumetric capture system that combines various data-driven visual cues and significantly outperforms existing state-of-the-art approaches.

Robust 3D Self-portraits in Seconds
We propose an efficient method for robust 3D self-portraits using a single RGBD camera. Our method can generate detailed 3D self-portraits in seconds and is able to handle challenging clothing topologies.

DeepHuman: 3D Human Reconstruction from a Single Image
We propose DeepHuman, a deep learning based framework for 3D human reconstruction from a single RGB image. We also contribute THuman, a 3D real-world human model dataset containing approximately 7000 models.

SimulCap: Single-View Human Performance Capture with Cloth Simulation
This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. By incorporating cloth simulation into the performance capture pipeline, we can generate plausible cloth dynamics and cloth-body interactions.

HybridFusion: Real-time Performance Capture Using a Single Depth Sensor and Sparse IMUs
We propose a light-weight and highly robust real-time human performance capture method based on a single depth camera and sparse inertial measurement units (IMUs).

DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor
We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera.
Distinctions
2018
Future Scholar Fellowship, Tsinghua University
Excellent Bachelor Thesis Award, Tsinghua University
2017
Academic Excellence Award, Tsinghua-GuangYao Scholarship, Tsinghua University
Excellence Award & Scholarship for Technological Innovation, Tsinghua University
2016
Academic Excellence Award, Tsinghua-Hengda Scholarship, Tsinghua University
Excellence Award for Technological Innovation, Tsinghua University
2015
Academic Excellence Award & Scholarship, Tsinghua University
Others
Skills
- C++ (OpenCV, OpenGL, CUDA, Eigen, PCL, Qt, ...)
- Python (TensorFlow, PyTorch)
- Matlab, C#
- LaTeX
Languages
- Chinese (native)
- English (TOEFL: 101; GRE: 152+170+4.0)