I am a third-year undergraduate student at Fudan University in Shanghai, China. I am a member of the Fudan NLP Lab, working with Xipeng Qiu.
I am interested in the theory and practice of large-scale computer systems and their intersection with machine learning, particularly deep learning. My current research centers on co-designing efficient algorithms and systems for machine learning. I have spent much of my college time building impactful learning systems with my friends. I will be applying to Computer Science PhD programs this fall for 2018 entry.
Besides coding, I also enjoy hiking and photography :)
- Tianqi Chen, Thierry Moreau, Ziheng Jiang, Haichen Shen, Luis Ceze, Carlos Guestrin, Arvind Krishnamurthy, “TVM: End-to-End Optimization Stack for Deep Learning”, in submission to PLDI 2018
I am one of the authors of these projects. More details can be found here.
- TVM, a low-level DSL and compiler for tensor computation pipelines, designed for deep learning frameworks.
- MinPy, a high-performance, flexible deep learning framework with a NumPy interface.
- MXNet-Autograd, automatic differentiation for imperative programming in MXNet.
- NNVM-Fusion, automatic kernel fusion and runtime compilation for computational graphs.
- Intern, Amazon AI, Palo Alto, California, with Mu Li and Alex Smola. (Feb. 2017 – Dec. 2017)
- Intern, New York University Shanghai, with Zheng Zhang. (Apr. 2016 – Jan. 2017)
- Member, Fudan University NLP Lab, with Xipeng Qiu. (Since Oct. 2015)
- Intern, Lenovo Research Shanghai, System Group. (June 2015 – Sep. 2015)
- Reviewer for International Conference on Artificial Intelligence and Statistics (AISTATS)
- Reviewer for Neurocomputing (NEUCOM) and Neural Computation (NECO)
- Dec 10, 2017: Ziheng completed his internship report: Efficient Deep Learning Inference on Edge Devices