
Zihan (Altair) Liu, Subject No.i

About Me

  • He is currently a Ph.D. student at Shanghai Jiao Tong University, Dept. of Computer Science and Engineering, supervised by Prof. Jingwen Leng. His research focuses on computer architecture, AI systems, compilers, and optimization. His interests include chip design, compiler optimization, computer organization, and system architecture.

Contact

  • E-mail:

    • altair DOT liu AT sjtu DOT edu DOT cn
    • ilovehanhan1120 AT hotmail DOT cn

Education

| Duration | Degree | Dept. | Affiliation |
| --- | --- | --- | --- |
| 2015.09-2019.07 | Bachelor | Dept. of Computer Science and Software Engineering | East China Normal University |
| 2019.09-2022.03 | Master | Dept. of Computer Science and Engineering | Shanghai Jiao Tong University |
| 2022.03-2026 (Exp.) | Ph.D. | Dept. of Computer Science and Engineering | Shanghai Jiao Tong University |

Job

| Duration | Title | Dept. | Affiliation | Job Description |
| --- | --- | --- | --- | --- |
| 2018.08-2019.01 | Intern | IBSO | SAP | Cloud Foundry development |
| 2019.02-2019.06 | Intern | GPU SM Arch | NVIDIA | CModel development |
| 2020.06-2021.06 | Intern | IAGS | Intel | LLVM CodeGen |
| 2021.07-2022.05 | Research Intern | | Shanghai Qi Zhi Institute | Research |
| 2022.06-2022.12 | Intern | GFX HW MI | AMD | GPU IP DV (Design Verification) |

Publications

  • [ISCA’24] Yu Feng, Zihan Liu, Jingwen Leng, Minyi Guo, Yuhao Zhu. 2024. Cicero: Real-Time Neural Rendering By Radiance Warping and Memory Optimizations. In 51st Annual International Symposium on Computer Architecture (ISCA). ACM.
  • [ASPLOS’24] Zihan Liu, Wentao Ni, Jingwen Leng, Yu Feng, Cong Guo, Quan Chen, Chao Li, Minyi Guo, Yuhao Zhu. 2024. JUNO: Optimizing High-Dimensional Approximate Nearest Neighbour Search with Sparsity-Aware Algorithm and Ray-Tracing Core Mapping. In 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). ACM. Paper
  • [ASPLOS’24] Cong Guo, Rui Zhang, Jiale Xu, Jingwen Leng, Zihan Liu, Ziyu Huang, Minyi Guo, Hao Wu, Shouren Zhao, Junping Zhao, Ke Zhang. 2024. GMLake: Efficient and Transparent GPU Memory Defragmentation for Large-scale DNN Training with Virtual Memory Stitching. In 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). ACM. Paper
  • [CF’23] Yangjie Zhou, Yaoxu Song, Jingwen Leng, Zihan Liu, Weihao Cui, Zhendong Zhang, Cong Guo, Quan Chen, Li Li, Minyi Guo. 2023. AdaptGear: Accelerating GNN Training via Adaptive Subgraph-Level Kernels on GPUs. In 20th ACM International Conference on Computing Frontiers (CF). ACM. Paper
  • [MICRO’22] Cong Guo, Chen Zhang, Jingwen Leng, Zihan Liu, Fan Yang, Yunxin Liu, Minyi Guo, Yuhao Zhu. 2022. ANT: Exploiting Adaptive Numerical Data Type for Low-bit Deep Neural Network Quantization. In 55th IEEE/ACM International Symposium on Microarchitecture (MICRO). ACM/IEEE. Paper
  • [ASPLOS’22] Zihan Liu, Jingwen Leng, Zhihui Zhang, Quan Chen, Chao Li and Minyi Guo. 2022. VELTAIR: Towards High-Performance Multi-tenant Deep Learning Service via Adaptive Compilation and Scheduling. In 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). ACM, pp. 388-401. Paper|Slides|Talk
  • [ISPA’20] Zihan Liu, Jingwen Leng, Quan Chen, Chao Li, Wenli Zheng, Li Li and Minyi Guo. 2020. DLFusion: An Auto-Tuning Compiler for Layer Fusion on Deep Neural Network Accelerator. In 18th IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA). IEEE, pp. 118–127. Paper
  • [CCF-THPC’20] Zihan Liu, Jingwen Leng, Guandong Lu, Chenhui Wang, Quan Chen and Minyi Guo. 2020. Survey and design of paleozoic: a high-performance compiler tool chain for deep learning inference accelerator. In CCF Trans. of High Performance Computing. 2, 4 (2020), 332-347. Paper

Project Experience

  • National Key Research Project (2018-2020): Deep learning accelerator compiler tool-chain development
    • I developed a tool chain for the Cambricon MLU-100, from the front end (ONNX) to back-end codegen (DNN operator library), and implemented optimizations including operator fusion and spatial multiplexing. I was responsible for all code development, testing, and documentation.
  • R&D Project from Industry (2021): Heterogeneous accelerator compiler tool-chain design
    • I researched, verified, and designed a compiler tool chain for a heterogeneous accelerator developed by Montage Inc. The accelerator comprises a RISC-V CPU, a SIMD unit programmed with OpenCL, and a matrix accelerator. The design covers task partitioning, dispatching and workload-balancing strategies, heterogeneous code generation, etc. I was responsible for all code development and testing, and most of the documentation.
  • Course Project (B.S.): Compiler front-end of a C-alike language
    • I developed a compiler front end for a C-like language using lex and yacc; the generated intermediate representation is executed on an interpreter.
  • Course Project (B.S. Thesis): Profiling and optimization of Tensor Core on Turing GPUs
    • I profiled the Tensor Cores on Turing GPUs; based on the results and insights obtained, I applied some simple code optimizations to an AI framework.
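The operator-fusion optimization mentioned in the MLU-100 project above can be illustrated with a toy Python sketch (my own minimal example, not the actual tool chain): two elementwise DNN operators are merged into a single loop so that the intermediate tensor is never materialized in memory.

```python
# Toy illustration of operator (layer) fusion on elementwise ops.

def scale(xs, a):
    # op 1: y = a * x
    return [a * x for x in xs]

def add_bias(xs, b):
    # op 2: y = x + b
    return [x + b for x in xs]

def unfused(xs, a, b):
    tmp = scale(xs, a)      # materializes an intermediate buffer
    return add_bias(tmp, b)

def fused(xs, a, b):
    # one pass, no intermediate buffer: y = a * x + b
    return [a * x + b for x in xs]

xs = [1.0, 2.0, 3.0]
assert unfused(xs, 2.0, 1.0) == fused(xs, 2.0, 1.0)  # → [3.0, 5.0, 7.0]
```

The fused version halves the number of loop passes and memory traffic, which is the same reasoning a fusion pass applies to full DNN operator graphs.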

Skills

  • C, C++, CUDA/PTX, OpenCL
  • Verilog/SystemVerilog, UVM, verilator
  • TVM, MLIR, LLVM
  • Python, Java, SQL, MongoDB
  • LaTeX, git, vim, Linux, …

Interests

  • Games: FPS, TPS, ACT, Flight Simulation, ACG
    • Mass Effect series (best: ME2, ME3)
    • Assassin’s Creed series (best: AC2 Trilogy)
    • Devil May Cry series (best: DMC4, DMC5)
    • Souls series (best: Bloodborne)
    • BioShock series (best: BioShock Infinite)
    • Hardcore FPS: Rainbow Six series, Ready Or Not, Insurgency, …
    • Digital Combat Simulator (Military Aircraft: F/A-18C, JF-17, F-14A, F-16C)
    • SenRen Banka, Riddle Joker, -9 nine-
  • Others: Saxophone, Archery

Waifus
