MIAO'S GROUP

Research Aim: Less is More

Our research mainly focuses on Efficient ML/DL, including:

  • Efficient Architecture Design

Neural Architecture Search (NAS), Efficient Attention Design, GNN Structure Learning, Unstructured/Structured Neural Network Pruning, Mixture-of-Experts (MoE)

  • Efficient Training

Meta-Learning, Continual Learning, Federated Learning, Knowledge Distillation, On-device Learning, Dataset Condensation, Stacking, Parameter-Efficient Fine-Tuning (PEFT)

  • Efficient Inference

Model Compression, Quantization, Sparsification, Speculative Decoding, Dynamic Inference, Serving System

Currently, my research interests are shifting in particular toward Efficient Foundation Models.

Supervision

I have been fortunate to (co-)supervise several Ph.D. students:

Hongrong Cheng (University of Adelaide, Network Pruning, 2021-present)

Xin Zheng (Monash University, AutoML for Graph Neural Networks, 2021-2023; now Lecturer at Griffith University)

Xinle Wu (Aalborg University, Neural Architecture Search and Its Application to Time Series, 2021-present)

Kai Zhao (Aalborg University, Explainable Graph Neural Networks for Time Series, 2021-present)

David Gonzalo Chaves Campos (Aalborg University, Model Compression for Time Series, 2021-present)

Jiaoqi Zhao (Harbin Institute of Technology (Shenzhen), Model Compression for Foundation Models, 2024-present)

Qianlong Xiang (Harbin Institute of Technology (Shenzhen), Efficient Diffusion Models, 2023-present)

Haomiao Qiu (Harbin Institute of Technology (Shenzhen), Continual Learning on the Edge, 2024-present)

Xiaodong Qu (Harbin Institute of Technology (Shenzhen), Distributed Machine Learning for LLMs, 2024-present)