Zerong Zheng

郑泽荣 | 5th-year PhD Student

About Me

Hi, this is Zerong Zheng (郑泽荣). I am a fifth-year PhD student (since 2018) in the Department of Automation, Tsinghua University, advised by Prof. Yebin Liu. My research focuses on 3D vision and graphics, especially 3D reconstruction and human performance capture.


I am on the 2023 job market. [Curriculum Vitae]

I expect to graduate around the middle of next year and am currently looking for a position. Feel free to get in touch! [中文简历 (CV in Chinese)]

Background

Tsinghua University

B.Eng & Ph.D. Student
   September 2014 - Present     Beijing, China

I began my PhD in August 2018 under the supervision of Prof. Yebin Liu. Before that, I received my B.Eng degree from the Department of Automation, Tsinghua University, in July 2018.

Facebook Inc.

Research Intern
  June 2019 - September 2019     San Francisco, USA

I joined Facebook Reality Labs in Sausalito as a research intern in the summer of 2019, working with Dr. Tony Tung.

University of Southern California

Undergraduate Visiting Scholar
  June 2017 - August 2017     Los Angeles, USA

I spent an exciting summer as a visiting researcher at the USC Institute for Creative Technologies, working with Prof. Hao Li.

Research

DiffuStereo: High Quality Human Reconstruction via Diffusion-based Stereo Using Sparse Cameras

R. Shao, Z. Zheng, H. Zhang, J. Sun, Y. Liu
European Conference on Computer Vision 2022 - ECCV 2022 Oral

We propose DiffuStereo, a system that uses only sparse cameras for high-quality 3D human reconstruction. At its core is a novel diffusion-based stereo module, which introduces diffusion models into the iterative stereo matching framework.
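As a rough illustration of the general idea (not DiffuStereo's actual network), the sketch below runs standard DDPM-style reverse diffusion on a disparity residual map, with a hypothetical stand-in denoiser conditioned on a per-pixel stereo cost feature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard DDPM noise schedule (values are illustrative).
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def denoiser(d_t, t, cost_feat):
    """Hypothetical stand-in for the learned network: predicts the noise in
    the disparity residual, conditioned on stereo matching features."""
    return 0.1 * d_t + 0.01 * cost_feat

def reverse_step(d_t, t, cost_feat):
    """One ancestral sampling step of the reverse diffusion process."""
    eps = denoiser(d_t, t, cost_feat)
    mean = (d_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(d_t.shape)
    return mean

disparity = rng.standard_normal((4, 4))   # noisy disparity residual map
cost_feat = rng.standard_normal((4, 4))   # per-pixel stereo cost feature
for t in reversed(range(T)):
    disparity = reverse_step(disparity, t, cost_feat)
```

The appeal of conditioning the denoiser on matching features is that the diffusion prior can resolve ambiguities that a purely photometric cost cannot.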

@inproceedings{shao2022diffustereo,
author = {Shao, Ruizhi and Zheng, Zerong and Zhang, Hongwen and Sun, Jingxiang and Liu, Yebin},
title = {DiffuStereo: High Quality Human Reconstruction via Diffusion-based Stereo Using Sparse Cameras},
booktitle = {ECCV},
year = {2022}
}

AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture

Z. Li, Z. Zheng, H. Zhang, C. Ji, Y. Liu
European Conference on Computer Vision 2022 - ECCV 2022

AvatarCap is a novel framework that introduces animatable avatars into the capture pipeline for high-fidelity volumetric capture from monocular RGB inputs. It can reconstruct the dynamic details in both visible and invisible regions.

@InProceedings{li2022avatarcap,
title={AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture},
author={Li, Zhe and Zheng, Zerong and Zhang, Hongwen and Ji, Chaonan and Liu, Yebin},
booktitle={European Conference on Computer Vision (ECCV)},
month={October},
year={2022},
}

Learning Implicit Templates for Point-Based Clothed Human Modeling

S. Lin, Z. Zheng, H. Huang, R. Shao, Y. Liu
European Conference on Computer Vision 2022 - ECCV 2022

We present FITE, a First-Implicit-Then-Explicit framework for modeling human avatars in clothing. Our framework first learns implicit surface templates representing the coarse clothing topology, and then employs the templates to guide the generation of point sets which further capture pose-dependent clothing deformations such as wrinkles.

@inproceedings{lin2022fite,
title={Learning Implicit Templates for Point-Based Clothed Human Modeling},
author={Lin, Siyou and Zhang, Hongwen and Zheng, Zerong and Shao, Ruizhi and Liu, Yebin},
booktitle={ECCV},
year={2022}
}

Structured Local Radiance Fields for Human Avatar Modeling

Z. Zheng, H. Huang, T. Yu, H. Zhang, Y. Guo, Y. Liu
IEEE Conference on Computer Vision and Pattern Recognition 2022 - CVPR 2022

We introduce a novel representation for learning animatable full-body avatars in general clothes without any pre-scanning efforts. The core of our representation is a set of structured local radiance fields, which makes no assumption about the cloth topology but is still able to model the cloth motions in a coarse-to-fine manner.
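The node-based idea can be sketched as follows; everything here (node count, Gaussian blending weights, the stand-in per-node network) is a hypothetical toy, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: node positions (in practice anchored on the body)
# and per-node latent codes.
nodes = rng.standard_normal((32, 3))
codes = rng.standard_normal((32, 16))

def local_field(p_local, code):
    """Stand-in for a small per-node network: local coordinate + node code
    -> (rgb, density)."""
    h = np.tanh(np.concatenate([p_local, code]))
    return h[:3], abs(h[3])

def query(x, k=4, bandwidth=0.5):
    """Blend the k nearest local radiance fields with Gaussian weights."""
    d = np.linalg.norm(nodes - x, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-d[idx] ** 2 / (2 * bandwidth ** 2))
    w = w / w.sum()
    rgb, sigma = np.zeros(3), 0.0
    for wi, i in zip(w, idx):
        c, s = local_field(x - nodes[i], codes[i])
        rgb += wi * c
        sigma += wi * s
    return rgb, sigma

rgb, sigma = query(np.array([0.1, 0.2, 0.3]))
```

Because each field lives in a local coordinate frame attached to a node, moving the nodes with the body pose moves the radiance content with them, without assuming any particular cloth topology.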

@InProceedings{zheng2022structured,
title={Structured Local Radiance Fields for Human Avatar Modeling},
author={Zheng, Zerong and Huang, Han and Yu, Tao and Zhang, Hongwen and Guo, Yandong and Liu, Yebin},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022}
}

High-Fidelity Human Avatars from a Single RGB Camera

H. Zhao, J. Zhang, Y. Lai, Z. Zheng, Y. Xie, Y. Liu, K. Li
IEEE Conference on Computer Vision and Pattern Recognition 2022 - CVPR 2022

We propose a new framework to reconstruct a personalized high-fidelity human avatar from a monocular video. Our method recovers pose-dependent surface deformations as well as high-quality appearance details.

@InProceedings{zhao2022highfidelity,
title={High-Fidelity Human Avatars from a Single RGB Camera},
author={Zhao, Hao and Zhang, Jinsong and Lai, Yu-Kun and Zheng, Zerong and Xie, Yingdi and Liu, Yebin and Li, Kun},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022}
}

HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars

T. Hu, T. Yu, Z. Zheng, H. Zhang, Y. Liu, M. Zwicker
arXiv 2022

We propose a novel neural rendering pipeline, Hybrid Volumetric-Textural Rendering (HVTR), which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality by combining 2D UV-based latent features with 3D volumetric representation.

@article{hu2021hvtr,
title={HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars},
author={Tao Hu and Tao Yu and Zerong Zheng and He Zhang and Yebin Liu and Matthias Zwicker},
journal={arXiv preprint arXiv:2112.10203},
eprint={2112.10203},
archivePrefix={arXiv},
year={2022},
primaryClass={cs.CV}
}

Deep Implicit Templates for 3D Shape Representation

Z. Zheng, T. Yu, Q. Dai, Y. Liu
IEEE Conference on Computer Vision and Pattern Recognition 2021 - CVPR 2021 Oral

We propose Deep Implicit Templates, a new 3D shape representation that supports explicit mesh correspondence reasoning in deep implicit representations. Our key idea is to formulate deep implicit functions as conditional deformations of a template implicit function.
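One way to read this formulation is f(x, z) ≈ T(W(x, z)): a deformation network W warps the query point into a shared template space, where a single template SDF T is evaluated, and correspondences between shapes fall out of the shared template coordinates. A minimal sketch with random, untrained stand-in networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    """Tiny random-weight MLP, used only to illustrate the function shapes."""
    return [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def run(layers, x):
    for k, (W, b) in enumerate(layers):
        x = x @ W + b
        if k < len(layers) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return x

# W(x, z): deformation network, query point + shape code -> template-space point
deform = mlp([3 + 64, 128, 3])
# T(p): shared template implicit function, template-space point -> SDF value
template = mlp([3, 128, 1])

x = rng.standard_normal((8, 3))    # query points
z = rng.standard_normal((8, 64))   # per-shape latent code
p = x + run(deform, np.concatenate([x, z], axis=-1))  # warp into template space
sdf = run(template, p)             # f(x, z) = T(W(x, z))
```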

@InProceedings{zheng2021dit,
author = {Zheng, Zerong and Yu, Tao and Dai, Qionghai and Liu, Yebin},
title = {Deep Implicit Templates for 3D Shape Representation},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {1429-1439}
}

Function4D: Real-time Human Volumetric Capture from Very Sparse Consumer RGBD Sensors

T. Yu, Z. Zheng, K. Guo, P. Liu, Q. Dai, Y. Liu
IEEE Conference on Computer Vision and Pattern Recognition 2021 - CVPR 2021 Oral

We propose a human volumetric capture method that combines temporal volumetric fusion and deep implicit functions. Our method outperforms existing methods in terms of view sparsity, generalization capacity, reconstruction quality, and run-time efficiency.

@InProceedings{yu2021function4d,
author = {Yu, Tao and Zheng, Zerong and Guo, Kaiwen and Liu, Pengpeng and Dai, Qionghai and Liu, Yebin},
title = {Function4D: Real-Time Human Volumetric Capture From Very Sparse Consumer RGBD Sensors},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {5746-5756}
}

POSEFusion: Pose-guided Selective Fusion for Single-view Human Volumetric Capture

Z. Li, T. Yu, Z. Zheng, K. Guo, Y. Liu
IEEE Conference on Computer Vision and Pattern Recognition 2021 - CVPR 2021 Oral

We propose POse-guided SElective Fusion (POSEFusion), a single-view human volumetric capture method that leverages tracking-based methods and tracking-free inference to achieve high-fidelity and dynamic 3D reconstruction.

@InProceedings{li2021posefusion,
author = {Li, Zhe and Yu, Tao and Zheng, Zerong and Guo, Kaiwen and Liu, Yebin},
title = {POSEFusion: Pose-Guided Selective Fusion for Single-View Human Volumetric Capture},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {14162-14172}
}

DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras

Y. Zheng, R. Shao, Y. Zhang, T. Yu, Z. Zheng, Q. Dai, Y. Liu
International Conference on Computer Vision 2021 - ICCV 2021

We propose DeepMultiCap, a novel method for multi-person performance capture using sparse multi-view cameras. Our method can capture time varying surface details without the need of using pre-scanned template models.

@inproceedings{shao2021dmc,
title={DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras},
author={Yang Zheng and Ruizhi Shao and Yuxiang Zhang and Tao Yu and Zerong Zheng and Qionghai Dai and Yebin Liu},
booktitle={IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2021}
}

VERTEX: VEhicle Reconstruction and TEXture Estimation Using Deep Implicit Semantic Template Mapping

X. Zhao, Z. Zheng, C. Ji, Z. Liu, Y. Luo, T. Yu, J. Suo, Q. Dai, Y. Liu
arXiv 2020

We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input in real-world street environments.

@article{zhao2020vertex,
title={VERTEX: VEhicle Reconstruction and TEXture Estimation Using Deep Implicit Semantic Template Mapping},
author={Xiaochen Zhao and Zerong Zheng and Chaonan Ji and Zhenyi Liu and Yirui Luo and Tao Yu and Jinli Suo and Qionghai Dai and Yebin Liu},
journal={arXiv preprint arXiv:2011.14642},
year={2020},
eprint={2011.14642},
archivePrefix={arXiv},
primaryClass={cs.CV}
}

PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction

Z. Zheng, T. Yu, Y. Liu, Q. Dai
IEEE Transactions on Pattern Analysis and Machine Intelligence - TPAMI 2021

We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit functions for robust 3D human reconstruction from a single RGB image or multiple images.
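A minimal toy of this conditioning scheme (the feature dimensions, nearest-neighbor sampling, and one-layer classifier are all illustrative stand-ins, not PaMIR's actual network): the occupancy at a 3D point is predicted from image features sampled at the point's projection, concatenated with features sampled from a feature volume derived from the parametric body model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a 2D image feature map and a voxelized body-model
# feature volume (both would come from learned encoders in practice).
img_feat = rng.standard_normal((64, 64, 8))      # H x W x C image features
vol_feat = rng.standard_normal((16, 16, 16, 4))  # voxel features from body model

def sample_image(x):
    """Project a 3D point (orthographic here for simplicity) and fetch the
    nearest image feature."""
    u = np.clip(((x[:2] + 1.0) / 2.0 * 63).astype(int), 0, 63)
    return img_feat[u[1], u[0]]

def sample_volume(x):
    """Fetch the nearest voxel feature at a 3D point."""
    v = np.clip(((x + 1.0) / 2.0 * 15).astype(int), 0, 15)
    return vol_feat[v[0], v[1], v[2]]

def occupancy(x, W, b):
    """Implicit function conditioned on both feature types."""
    f = np.concatenate([sample_image(x), sample_volume(x)])
    return 1.0 / (1.0 + np.exp(-(f @ W + b)))  # sigmoid -> occupancy in (0, 1)

W, b = rng.standard_normal(12), 0.0
occ = occupancy(np.array([0.1, -0.2, 0.3]), W, b)
```

The body-model branch supplies a strong shape prior, which is what makes the prediction robust to occlusions and unusual poses that defeat purely image-conditioned implicit functions.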

@article{pamir2020,
author={Zerong Zheng and Tao Yu and Yebin Liu and Qionghai Dai},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction},
year={2021},
pages={1-1},
doi={10.1109/TPAMI.2021.3050505}
}

RobustFusion: Human Volumetric Capture with Data-driven Visual Cues using a RGBD Camera

Z. Su*, L. Xu*, Z. Zheng, T. Yu, Y. Liu, L. Fang
European Conference on Computer Vision 2020 - ECCV 2020 Spotlight

We introduce a robust template-less human volumetric capture system combined with various data-driven visual cues, which outperforms existing state-of-the-art approaches significantly.

@InProceedings{robustfusion2020,
author={Su, Zhuo and Xu, Lan and Zheng, Zerong and Yu, Tao and Liu, Yebin and Fang, Lu},
editor={Vedaldi, Andrea and Bischof, Horst and Brox, Thomas and Frahm, Jan-Michael},
title={RobustFusion: Human Volumetric Capture with Data-Driven Visual Cues Using a RGBD Camera},
booktitle={European Conference on Computer Vision (ECCV)},
year={2020},
publisher={Springer International Publishing},
address={Cham},
pages={246--264},
isbn={978-3-030-58548-8}
}

Robust 3D Self-portraits in Seconds

Z. Li, T. Yu, C. Pan, Z. Zheng, Y. Liu
IEEE Conference on Computer Vision and Pattern Recognition 2020 - CVPR 2020 Oral

We propose an efficient method for robust 3D self-portraits using a single RGBD camera. Our method generates detailed 3D self-portraits in seconds and handles challenging clothing topologies.

@InProceedings{Li2020portrait,
author = {Li, Zhe and Yu, Tao and Pan, Chuanyu and Zheng, Zerong and Liu, Yebin},
title = {Robust 3D Self-portraits in Seconds},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month={June},
year={2020},
pages={1344-1353}
}

DeepHuman: 3D Human Reconstruction from a Single Image

Z. Zheng, T. Yu, Y. Wei, Q. Dai, Y. Liu
IEEE International Conference on Computer Vision 2019 - ICCV 2019 Oral

We propose DeepHuman, a deep learning based framework for 3D human reconstruction from a single RGB image. We also contribute THuman, a 3D real-world human model dataset containing approximately 7000 models.

@InProceedings{Zheng2019DeepHuman,
author = {Zheng, Zerong and Yu, Tao and Wei, Yixuan and Dai, Qionghai and Liu, Yebin},
title = {DeepHuman: 3D Human Reconstruction From a Single Image},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
pages={7739-7749},
year = {2019}
}

SimulCap: Single-View Human Performance Capture with Cloth Simulation

T. Yu, Z. Zheng, Y. Zhong, J. Zhao, Q. Dai, G. Pons-Moll, Y. Liu
IEEE Conference on Computer Vision and Pattern Recognition 2019 - CVPR 2019

This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. By incorporating cloth simulation into the performance capture pipeline, we can generate plausible cloth dynamics and cloth-body interactions.

@InProceedings{Yu2019SimulCap,
author = {Yu, Tao and Zheng, Zerong and Zhong, Yuan and Zhao, Jianhui and Dai, Qionghai and Pons-Moll, Gerard and Liu, Yebin},
title = {SimulCap : Single-View Human Performance Capture With Cloth Simulation},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
pages={5504-5514},
year = {2019}
}

HybridFusion: Real-time Performance Capture Using a Single Depth Sensor and Sparse IMUs

Z. Zheng, T. Yu, H. Li, K. Guo, Q. Dai, L. Fang, Y. Liu
European Conference on Computer Vision 2018 - ECCV 2018

We propose a light-weight and highly robust real-time human performance capture method based on a single depth camera and sparse inertial measurement units (IMUs).

@InProceedings{Zheng2018HybridFusion,
author = {Zheng, Zerong and Yu, Tao and Li, Hao and Guo, Kaiwen and Dai, Qionghai and Fang, Lu and Liu, Yebin},
title = {HybridFusion: Real-time Performance Capture Using a Single Depth Sensor and Sparse IMUs},
booktitle = {European Conference on Computer Vision (ECCV)},
month={Sept},
year={2018},
}

DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor

T. Yu, Z. Zheng, K. Guo, J. Zhao, Q. Dai, H. Li, G. Pons-Moll, Y. Liu
IEEE Conference on Computer Vision and Pattern Recognition 2018 - CVPR 2018 Oral

We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera.

@inproceedings{yu2018DoubleFusion,
title = {DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor},
author = {Yu, Tao and Zheng, Zerong and Guo, Kaiwen and Zhao, Jianhui and Dai, Qionghai and Li, Hao and Pons-Moll, Gerard and Liu, Yebin},
booktitle = {{IEEE} Conference on Computer Vision and Pattern Recognition},
note = {{CVPR} Oral},
year = {2018}
}

Distinction

2021

Tsinghua-Hefei First Class Scholarship, Tsinghua University

2018

Future Scholar Fellowship (×3 years), Tsinghua University

Excellent Bachelor Thesis Award, Tsinghua University

2017

Academic Excellence Award, Tsinghua-GuangYao Scholarship, Tsinghua University

Excellence Award & Scholarship for Technological Innovation, Tsinghua University

2016

Academic Excellence Award, Tsinghua-Hengda Scholarship, Tsinghua University

Excellence Award for Technological Innovation, Tsinghua University

2015

Academic Excellence Award & Scholarship, Tsinghua University

Others

Skills
  • C++ (OpenCV, OpenGL, CUDA, Eigen, PCL, Qt, ...)
  • Python (Tensorflow/PyTorch)
  • Matlab, C#
  • LaTeX
Languages
  • Chinese (native)
  • English (TOEFL: 101; GRE: 152+170+4.0)