Community-Structured Decentralized Learning for Resilient EI
步浩壤
2023-12-01
Decentralized Learning Algorithms. Decentralized learning algorithms are necessary for decentralized model training, local aggregation, model sharing, and local or collaborative inference. Reinventing these algorithms may not be necessary: several decentralized learning paradigms [2, 4, 10, 11, 15] have already been studied and improved, largely with respect to communication efficiency [11, 12, 27]. Selecting among these algorithms for a given application is a well-acknowledged problem in EI generally [6, 39], and community-structured decentralized learning specifically must account for nodes' more consistent exposure to fewer data sources than in general decentralized learning. Furthermore, existing algorithms must be adapted to the specific needs of edge applications, especially to facilitate personalization and localization. For example, many consensus strategies in decentralized paradigms [13, 17] aim at a single global convergence across the entire network, which is useful in some applications but not all. Many applications favor differentiated and more localized outcomes, such as predictive text in the NLP domain [14]; decentralized learning algorithms should therefore be adapted to accommodate differentiated behavior among communities and so produce useful differentiation of models, as sketched below.
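To make the distinction between global consensus and community-differentiated convergence concrete, the following is a minimal sketch of a gossip-style averaging round in which each node aggregates parameters only with neighbors from its own community, so each community converges toward its own model rather than the whole network converging to one consensus. This is not an implementation from the cited works; the `Node` class, `gossip_round` function, community labels, and uniform mixing weights are illustrative assumptions.

```python
# Sketch (assumed names): community-restricted gossip averaging.
# Each node averages parameters only with same-community neighbors,
# yielding per-community convergence instead of one global consensus.
import numpy as np


class Node:
    def __init__(self, node_id, community_id, params):
        self.node_id = node_id
        self.community_id = community_id
        self.params = np.asarray(params, dtype=float)


def gossip_round(nodes, adjacency, restrict_to_community=True):
    """One synchronous gossip round with uniform mixing weights.

    adjacency: dict mapping node_id -> list of neighbor node_ids.
    If restrict_to_community is True, each node averages only with neighbors
    in its own community, producing differentiated per-community models.
    """
    by_id = {n.node_id: n for n in nodes}
    new_params = {}
    for node in nodes:
        peers = [by_id[j] for j in adjacency[node.node_id]]
        if restrict_to_community:
            peers = [p for p in peers if p.community_id == node.community_id]
        group = [node] + peers
        # Uniform averaging over the node and its (community-filtered) neighbors.
        new_params[node.node_id] = sum(n.params for n in group) / len(group)
    for node in nodes:
        node.params = new_params[node.node_id]


if __name__ == "__main__":
    # Two communities on a line graph: community-restricted gossip keeps their
    # models distinct, whereas unrestricted gossip would merge all four nodes.
    nodes = [
        Node(0, "A", [1.0]), Node(1, "A", [2.0]),
        Node(2, "B", [10.0]), Node(3, "B", [11.0]),
    ]
    adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    for _ in range(20):
        gossip_round(nodes, adjacency)
    print([round(n.params[0], 2) for n in nodes])  # ~[1.5, 1.5, 10.5, 10.5]
```

Setting restrict_to_community to False in this sketch recovers the conventional single-consensus behavior, which is the adaptation point the text argues community-structured decentralized learning should expose.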