MIAO'S GROUP

Research Aim: (Less is More)

Our research mainly focuses on Efficient ML/DL, including:
Neural Architecture Search (NAS), Efficient Attention Design, GNN Structure Learning, Unstructured/Structured Neural Network Pruning, MoE
Meta-Learning, Continual Learning, Federated Learning, Knowledge Distillation, On-device Learning, Dataset Condensation, Stacking, PEFT
Model Compression, Quantization, Sparsification, Speculative Decoding, Dynamic Inference, Serving Systems

Currently, my research interests are especially shifting to Efficient Foundation Models.

Supervision

I have fortunately (co-)supervised several Ph.D. students:
Hongrong Cheng (University of Adelaide, Network Pruning, 2021-now)
Xin Zheng (Monash University, AutoML in Graph Neural Networks, 2021-2023, now Lecturer at Griffith University)
Xinle Wu (Aalborg University, Neural Architecture Search and Its Application to Time Series, 2021-now)
Kai Zhao (Aalborg University, Explainable Graph Neural Networks on Time Series, 2021-now)
David Gonzalo Chaves Campos (Aalborg University, Model Compression on Time Series, 2021-now)
Jiaoqi Zhao (Harbin Institute of Technology (Shenzhen), Model Compression on Foundation Models, 2024-now)
Qianlong Xiang (Harbin Institute of Technology (Shenzhen), Efficient Diffusion Models, 2023-now)
Haomiao Qiu (Harbin Institute of Technology (Shenzhen), Continual Learning on the Edge, 2024-now)
Xiaodong Qu (Harbin Institute of Technology (Shenzhen), Distributed Machine Learning for LLMs, 2024-now)