Efficient federated learning


Keywords
Ensembling, federated learning.

Team

  Ansaloni Giovanni
  Atienza Alonso David
  Baghersalimi Saleh
  Ponzina Flavio
  Shahbazinia Amirhossein


Federated learning is emerging as a novel distributed learning approach in machine learning. Shifting from a centralized scheme to a federated one, where each node learns from the data it collects, brings multiple advantages. First, the knowledge learnt on individual devices is aggregated into a more accurate centralized model, which is then shared back with the end nodes to improve accuracy and hence user experience. Second, this framework can effectively support decentralized learning in environments where end nodes have heterogeneous resource capabilities and network access. Finally, the aggregation stage does not require sharing the collected raw data, enabling a learning approach that does not demand the exchange of personal data, a key property for applications in the health domain.
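
As a reference point, the sketch below illustrates one communication round of a generic federated averaging scheme under simplifying assumptions: each node refines the global model on its own raw data, only model parameters are sent to the server, and the aggregated model is shared back. Function names (local_update, aggregate, federated_round) are illustrative placeholders, not part of our framework.

    # Minimal sketch of one federated averaging round (illustrative only).
    import numpy as np

    def local_update(global_weights, local_data):
        # Placeholder: each node would run local training steps here.
        # The raw data in local_data never leaves the device.
        weights = [w.copy() for w in global_weights]
        return weights

    def aggregate(client_weights, client_sizes):
        # Server-side weighted average of the locally trained models.
        total = sum(client_sizes)
        return [
            sum((n / total) * w[i] for w, n in zip(client_weights, client_sizes))
            for i in range(len(client_weights[0]))
        ]

    def federated_round(global_weights, clients):
        # One round: local training on every node, then aggregation in the cloud;
        # the result is shared back with the end nodes.
        updates, sizes = [], []
        for data in clients:
            updates.append(local_update(global_weights, data))
            sizes.append(len(data))
        return aggregate(updates, sizes)

    # Example usage with dummy data on three nodes.
    global_model = [np.zeros(10), np.zeros(4)]
    clients = [np.random.rand(100, 10), np.random.rand(50, 10), np.random.rand(80, 10)]
    global_model = federated_round(global_model, clients)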


In this context, we aim to explore methodologies that improve energy efficiency in the edge nodes and unlock robust, personalized federated learning frameworks. One possible path is to combine federated learning with the concept of E2CNN (https://www.epfl.ch/labs/esl/research/edge-ai/e2cnn/). Using ensemble-based models in place of single-instance architectures allows each ensemble instance to act either as a local or a shared model, serving a double purpose, as sketched below. On one hand, local instances improve personalization by learning patterns specific to the on-device data. On the other hand, sharing only a reduced number of parameters with the cloud for model aggregation reduces communication latency and energy consumption.
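
The sketch below shows, under simplifying assumptions, how an ensemble-based node could split its instances into local and shared members: only the shared members' parameters are uploaded and aggregated in the cloud, while the local members stay on the device for personalization. Class and function names (EnsembleNode, aggregate_shared) and the instance counts are hypothetical and chosen for illustration only; they do not reflect the actual E2CNN configuration.

    # Sketch of ensemble-based federated learning (illustrative only).
    import numpy as np

    class EnsembleNode:
        def __init__(self, n_shared=2, n_local=2, n_params=1000):
            # Shared instances are synchronized with the cloud; local instances
            # only ever see on-device data, supporting personalization.
            self.shared = [np.zeros(n_params) for _ in range(n_shared)]
            self.local = [np.zeros(n_params) for _ in range(n_local)]

        def upload(self):
            # Only the shared instances' parameters are communicated,
            # reducing the transferred volume and hence latency and energy.
            return self.shared

        def download(self, aggregated):
            self.shared = [w.copy() for w in aggregated]

    def aggregate_shared(nodes):
        # Cloud-side: average each shared instance across all nodes.
        n_shared = len(nodes[0].shared)
        return [np.mean([node.upload()[i] for node in nodes], axis=0)
                for i in range(n_shared)]

    # One round: nodes train locally (omitted), upload their shared members,
    # the cloud aggregates them, and nodes download the result; local members
    # are never transmitted.
    nodes = [EnsembleNode() for _ in range(3)]
    aggregated = aggregate_shared(nodes)
    for node in nodes:
        node.download(aggregated)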