Main Research Topics
Federated Learning
Federated learning (FL) is a distributed machine learning paradigm in which multiple clients collaboratively train a neural network model without sharing their raw data. Our research focuses on key challenges in FL, including dynamic data streams, time-varying device availability, domain shift, resource-constrained devices, fairness, and noisy data, and we develop novel FL frameworks and algorithms to address them.
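To make the setting concrete, below is a minimal sketch of the server-side step shared by many FL algorithms: each client trains locally and uploads only model parameters, which the server averages, weighted by local dataset size, in the style of FedAvg. The function name and toy values are illustrative, not taken from a specific system.

```python
def fedavg_aggregate(client_states, client_sizes):
    """Size-weighted average of client model parameters (FedAvg-style).

    client_states: list of dicts mapping parameter name -> value
                   (e.g., a PyTorch state_dict) after local training
    client_sizes:  number of local training samples on each client
    """
    total = sum(client_sizes)
    return {
        name: sum(state[name] * (n / total)
                  for state, n in zip(client_states, client_sizes))
        for name in client_states[0]
    }

# Toy usage: two clients, one scalar parameter "w"
print(fedavg_aggregate([{"w": 1.0}, {"w": 3.0}], client_sizes=[10, 30]))
# -> {'w': 2.5}
```

Weighting by sample count lets clients with more data influence the global model more; only parameters, never raw data, leave a client.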
Distributed Inference
Devices at the network edge typically have limited computation, communication, and memory resources, and as neural network models grow in size, responsive inference at the edge becomes increasingly challenging. To address this, we propose distributed inference schemes that reduce latency and resource consumption.
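One common scheme in this space is split (partitioned) inference, where the first layers of a model run on the device and the remainder on a nearby server, so only a compact intermediate activation crosses the network. The sketch below assumes PyTorch and a toy sequential model; the split point and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy sequential model; in practice this would be a large pretrained network.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

SPLIT = 2  # hypothetical split point balancing device compute vs. link latency
device_part = model[:SPLIT]   # runs on the edge device
server_part = model[SPLIT:]   # runs on the server

x = torch.randn(1, 32)
with torch.no_grad():
    activation = device_part(x)       # computed locally on the device
    # In a real deployment the activation would be serialized and sent over
    # the network here; only this intermediate feature leaves the device.
    logits = server_part(activation)  # computed remotely on the server
print(logits.shape)  # torch.Size([1, 10])
```

Choosing the split point trades device computation against the size of the transmitted activation, which is precisely the latency/resource trade-off such schemes optimize.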
Network Optimization and Economics
Computer networking involves many resource allocation and scheduling problems, such as computation task scheduling, bandwidth allocation, and video segment scheduling. To address these challenges, we propose optimization-based and game-theoretic approaches aimed at improving resource efficiency and user quality of experience; one example is sketched below.
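As one concrete illustration of an optimization-based approach, the sketch below applies the classic longest-processing-time (LPT) list-scheduling heuristic to computation task scheduling on identical servers; the function and the toy instance are illustrative, not drawn from a specific paper.

```python
import heapq

def lpt_schedule(task_times, num_servers):
    """Assign each task to the server that currently finishes earliest,
    processing longer tasks first (LPT), to approximately minimize makespan.
    """
    servers = [(0.0, s) for s in range(num_servers)]  # (finish time, id)
    heapq.heapify(servers)
    assignment = []
    for t in sorted(task_times, reverse=True):
        finish, sid = heapq.heappop(servers)
        assignment.append((t, sid))
        heapq.heappush(servers, (finish + t, sid))
    makespan = max(f for f, _ in servers)
    return assignment, makespan

# Toy usage: five tasks on two servers
assignment, makespan = lpt_schedule([4, 3, 2, 2, 1], num_servers=2)
print(assignment, makespan)  # makespan 6.0, matching the 12/2 lower bound here
```

Game-theoretic formulations arise instead when tasks or bandwidth are requested by self-interested users rather than assigned by a single scheduler.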