Hi, this is Zerong Zheng (郑泽荣). I am currently a third-year PhD student in the Department of Automation, Tsinghua University, advised by Prof. Yebin Liu. My research focuses on 3D vision and graphics, especially 3D reconstruction and performance capture.
I was excited to join Facebook Reality Labs in Sausalito as a research intern this summer, working with Dr. Tony Tung.
Deep Implicit Templates for 3D Shape Representation
We propose Deep Implicit Templates, a new 3D shape representation that supports explicit mesh correspondence reasoning in deep implicit representations. Our key idea is to formulate deep implicit functions as conditional deformations of a template implicit function.
VERTEX: VEhicle Reconstruction and TEXture Estimation Using Deep Implicit Semantic Template Mapping
We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input in real-world street environments.
PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit functions for robust 3D human reconstruction from a single RGB image or multiple images.
RobustFusion: Human Volumetric Capture with Data-driven Visual Cues using a RGBD Camera
We introduce a robust template-less human volumetric capture system combined with various data-driven visual cues, which outperforms existing state-of-the-art approaches significantly.
Robust 3D Self-portraits in Seconds
We propose an efficient method for robust 3D self-portraits using a single RGBD camera. Our method can generate detailed 3D self-portraits in seconds and is able to handle challenging clothing topologies.
DeepHuman: 3D Human Reconstruction from a Single Image
We propose DeepHuman, a deep learning based framework for 3D human reconstruction from a single RGB image. We also contribute THuman, a 3D real-world human model dataset containing approximately 7000 models.
SimulCap: Single-View Human Performance Capture with Cloth Simulation
This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. By incorporating cloth simulation into the performance capture pipeline, we can generate plausible cloth dynamics and cloth-body interactions.
HybridFusion: Real-time Performance Capture Using a Single Depth Sensor and Sparse IMUs
We propose a light-weight and highly robust real-time human performance capture method based on a single depth camera and sparse inertial measurement units (IMUs).
DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor
We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera.
Future Scholar Fellowship, Tsinghua University
Excellent Bachelor Thesis Award, Tsinghua University
Academic Excellence Award, Tsinghua-GuangYao Scholarship, Tsinghua University
Excellence Award & Scholarship for Technological Innovation, Tsinghua University
Academic Excellence Award, Tsinghua-Hengda Scholarship, Tsinghua University
Excellence Award for Technological Innovation, Tsinghua University
Academic Excellence Award & Scholarship, Tsinghua University
- C++ (OpenCV, OpenGL, CUDA, Eigen, PCL, Qt, ...)
- Python (TensorFlow/PyTorch)
- Matlab, C#
- Chinese (native)
- English (TOEFL: 101; GRE: 152+170+4.0)