Federated learning has emerged as a promising distributed learning paradigm that
offers additional assurances of data privacy. Federated learning models have been
studied extensively since the term was coined. One of the
important derivatives of federated learning is hierarchical semi-decentralized
federated learning (SDFL), which distributes the aggregation load across
multiple nodes and parallelizes the aggregation workload across the breadth of each
level of the hierarchy. Various methods have been proposed to perform
inter-cluster and intra-cluster aggregation optimally. Most of these solutions,
however, require monitoring each node's performance and resource consumption
at every round, which necessitates frequent exchange of systematic data. To
optimally perform distributed aggregation in SDFL with minimal reliance on
systematic data, we propose Flag-Swap, a Particle Swarm Optimization (PSO)
method that optimizes aggregation placement based solely on processing delay.
Our simulation results show that PSO-based placement finds the optimal
placement relatively quickly, even in scenarios with many clients as candidates
for aggregation. Our real-world Docker-based implementation of Flag-Swap on a
recently emerged FL framework outperforms black-box deterministic placement
strategies, completing about 43% faster than random placement and 32% faster
than uniform placement in terms of total processing time.
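The paper's Flag-Swap algorithm itself is not reproduced here; the following is a minimal, generic sketch of how a PSO can search for an aggregation placement using only a processing-delay objective. The additive delay model, the score-based decoding of particles into node sets, and all hyperparameters (`swarm_size`, inertia `w`, coefficients `c1`, `c2`) are illustrative assumptions, not the authors' implementation.

```python
import random

def pso_placement(delay, n_clients, n_aggregators,
                  swarm_size=20, iters=50, seed=0):
    """Continuous PSO over per-client scores; each particle is decoded
    into a candidate set of aggregation nodes (top-scoring clients)."""
    rng = random.Random(seed)

    def decode(pos):
        ranked = sorted(range(n_clients), key=lambda i: -pos[i])
        return tuple(sorted(ranked[:n_aggregators]))

    # Initialize particle positions and velocities.
    X = [[rng.uniform(-1.0, 1.0) for _ in range(n_clients)]
         for _ in range(swarm_size)]
    V = [[0.0] * n_clients for _ in range(swarm_size)]
    pbest = [x[:] for x in X]
    pbest_f = [delay(decode(x)) for x in X]
    g = min(range(swarm_size), key=lambda k: pbest_f[k])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for k in range(swarm_size):
            for i in range(n_clients):
                r1, r2 = rng.random(), rng.random()
                V[k][i] = (w * V[k][i]
                           + c1 * r1 * (pbest[k][i] - X[k][i])
                           + c2 * r2 * (gbest[i] - X[k][i]))
                X[k][i] += V[k][i]
            f = delay(decode(X[k]))
            if f < pbest_f[k]:
                pbest[k], pbest_f[k] = X[k][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[k][:], f
    return decode(gbest), gbest_f

# Illustrative delay model (an assumption): total processing delay is the
# sum of the per-client delays of the chosen aggregation nodes.
client_delays = [5.0, 1.0, 3.0, 0.5, 4.0]
placement, total = pso_placement(
    lambda nodes: sum(client_delays[i] for i in nodes),
    n_clients=5, n_aggregators=2)
print(placement, total)
```

Because the objective depends only on measured delay, no per-round exchange of node resource statistics is needed; the swarm converges toward the lowest-delay set of aggregation nodes.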
arXiv:2504.16227v1