Federated learning is increasingly applied to sensitive data such as electronic banking, healthcare, and vehicle records.
In this setting, the server and the participants exchange only model parameters, so the raw training data never leave the clients.
However, poisoning and label-flipping attacks can occur when participants tamper with the exchanged model parameters. Recent research has shown that the federated learning framework is highly vulnerable to such attacks, with poisoning and label flipping among the most powerful: attackers corrupt the global model's parameters either to gain access to clients' confidential information or to degrade the model's performance.
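To make the threat concrete, the following is a minimal sketch, not the paper's actual setup, of federated averaging in which a single malicious client submits a poisoned parameter update. All function names, client counts, and magnitudes here are illustrative assumptions.

```python
import numpy as np

def local_update(weights, rng):
    # Honest client: a small gradient-like perturbation of the global weights.
    return weights - 0.1 * rng.normal(size=weights.shape)

def poisoned_update(weights):
    # Model-poisoning client: pushes the global model in an arbitrary
    # adversarial direction with a large magnitude.
    return weights + 10.0 * np.ones_like(weights)

def fed_avg(updates):
    # Server aggregates by plain averaging; with no defense, a single
    # attacker shifts the global model noticeably.
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
global_w = np.zeros(4)
updates = [local_update(global_w, rng) for _ in range(9)]  # 9 honest clients
updates.append(poisoned_update(global_w))                  # 1 attacker
new_w = fed_avg(updates)
print(new_w)  # the average is dragged toward the attacker's direction
```

Even this undefended toy example shows why aggregation alone does not protect the global model: one attacker among ten clients moves every averaged coordinate by roughly a tenth of its poisoning magnitude.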
To address these threats, we propose SecurePrivChain, a framework for protecting the global model in federated learning.
A permissioned private Ethereum blockchain provides encryption of the global model parameters, protection of client data, and participant authentication.
The model is evaluated in terms of cost, transaction delay, loss, and Ethereum gas consumption.
The system's encryption algorithms, RSA and ElGamal, were compared numerically against benchmark models using the public MNIST dataset along with banking, automobile, and hospital databases.
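For readers unfamiliar with the two schemes being compared, here is a textbook-scale sketch of RSA and ElGamal encryption. The parameters are tiny illustrative numbers chosen for readability; they are not secure choices and are not the paper's configuration.

```python
def rsa_encrypt(m, e, n):
    # Textbook RSA: c = m^e mod n.
    return pow(m, e, n)

def rsa_decrypt(c, d, n):
    # m = c^d mod n, where d is the private exponent (d*e = 1 mod phi(n)).
    return pow(c, d, n)

def elgamal_encrypt(m, p, g, h, k):
    # Ciphertext is a pair; k is a fresh ephemeral secret per message.
    return pow(g, k, p), (m * pow(h, k, p)) % p

def elgamal_decrypt(c1, c2, p, x):
    # Recover m by dividing out the shared secret s = c1^x mod p
    # (inverse via Fermat's little theorem, since p is prime).
    s = pow(c1, x, p)
    return (c2 * pow(s, p - 2, p)) % p

# RSA with p=61, q=53 -> n=3233, phi=3120, e=17, d=2753.
n, e, d = 3233, 17, 2753
msg = 42
assert rsa_decrypt(rsa_encrypt(msg, e, n), d, n) == msg

# ElGamal over the prime field p=467 with base g=2 and secret x=127.
p, g, x = 467, 2, 127
h = pow(g, x, p)                       # public key
c1, c2 = elgamal_encrypt(msg, p, g, h, k=151)
assert elgamal_decrypt(c1, c2, p, x) == msg
print("both round-trips succeed")
```

One relevant contrast for a benchmark like the paper's: RSA encryption is deterministic in its textbook form, while ElGamal is randomized (a fresh ephemeral `k` per message) and produces a two-element ciphertext, which affects both ciphertext size and timing.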
A formal security analysis demonstrates that the proposed technique preserves privacy.