CENTRALLY PRETRAINED FEDERATED FINE-TUNING: ENABLING A SECURE AND ACCURATE MILITARY SECURITY APPLICATION ON EMBEDDED HARDWARE
- Author
Baxter, Matthew W.; Singh, Gurminder; Orescanin, Marko; Computer Science (CS)
- Abstract
A persistent, precise, and adaptive security application is a requisite component of an effective force protection condition (FPCON), as U.S. military installations have become common targets for violent acts of terrorism and homicide. Current military security applications require a more automated approach, as they rely heavily on limited manpower and resources. This research developed an off-grid, deployed federated fine-tuning network composed of embedded hardware and evaluated embedded hardware system and model performance. Federated fine-tuning takes a centrally pretrained model and performs fine-tuning on a select number of model layers within a federated learning architecture. The federated fine-tuning models exhibited an average reduction in CPU load of 65.95% and an average reduction in current draw of 56.18%. The MobileNetV2 model transmitted 81.59% fewer global model parameters across the network. The centrally pretrained MNIST model began training with an initial accuracy improvement of 53.94% over the randomly initialized model. The centrally pretrained MobileNetV2 model demonstrated an initial average accuracy of 90.75% at training round 0 and experienced a 3.14% overall performance improvement after 75 federated training rounds. The results of this research demonstrated that federated fine-tuning can improve system performance and model accuracy while providing stronger privacy and security against federated learning attacks.
- Note
Lieutenant, United States Navy. Approved for public release; distribution is unlimited.
- Published
- 2021
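The federated fine-tuning approach described in the abstract, freezing a centrally pretrained backbone and federating only a small set of trainable layers, can be illustrated with a minimal sketch. This is a hypothetical NumPy simulation, not the thesis's actual implementation: all names (`backbone`, `global_head`, `local_update`) and the tiny synthetic dataset are assumptions for illustration; it shows why averaging only the fine-tuned layers transmits far fewer parameters per round than full-model FedAvg.

```python
# Hypothetical sketch of federated fine-tuning with simulated FedAvg.
# A centrally pretrained "backbone" is frozen; only the classifier "head"
# is fine-tuned on each client and averaged on the server, so only the
# head's parameters cross the network each round.
import numpy as np

rng = np.random.default_rng(0)

# Centrally pretrained model: frozen backbone + trainable head (toy sizes).
backbone = rng.normal(size=(8, 4))      # frozen after central pretraining
global_head = rng.normal(size=(4, 3))   # fine-tuned federatedly

def local_update(head, features, labels, lr=0.1, steps=5):
    """One client's fine-tuning: softmax cross-entropy steps on the head only."""
    h = head.copy()
    for _ in range(steps):
        logits = features @ h
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        grad = features.T @ (probs - labels) / len(features)
        h -= lr * grad
    return h

# Simulated clients; local data is passed through the frozen backbone once.
clients = []
for _ in range(3):
    x = rng.normal(size=(16, 8)) @ backbone       # backbone features
    y = np.eye(3)[rng.integers(0, 3, size=16)]    # one-hot labels
    clients.append((x, y))

# Federated rounds: FedAvg over head parameters only.
for rnd in range(10):
    updates = [local_update(global_head, x, y) for x, y in clients]
    global_head = np.mean(updates, axis=0)

# Per-round traffic: only the head (4*3 = 12 values) is transmitted,
# versus 8*4 + 4*3 = 44 values for the full model.
transmitted = global_head.size
full_model = backbone.size + global_head.size
print(transmitted, full_model)
```

In this toy setup the clients never share raw data or backbone weights, which mirrors the privacy and bandwidth benefits the abstract reports for the MobileNetV2 experiments.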