CodedPrivateML: A Fast and Privacy-Preserving Framework for Distributed Machine Learning
How can we train a machine learning model while keeping the data private and secure? We present CodedPrivateML, a fast and scalable approach to this critical problem. CodedPrivateML keeps both the data and the model information-theoretically private, while allowing efficient parallelization of training across distributed workers. We characterize CodedPrivateML’s privacy threshold and prove its convergence for logistic (and linear) regression.
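As a rough illustration of the kind of mechanism that can give information-theoretic privacy in this setting, the Python sketch below shows Lagrange-coded secret sharing over a prime field: the quantized dataset is split into K blocks, padded with T uniformly random masks, and encoded with a Lagrange interpolation polynomial so that any T colluding workers observe a statistically random view of the data. This is a minimal sketch under assumed parameters (field size P, partitions K, threshold T, workers N, and all function names are illustrative), not the paper's implementation.

```python
# Minimal sketch of Lagrange-coded secret sharing (illustrative, not the
# authors' code). Any T colluding workers learn nothing about the data;
# any K + T workers' shares suffice to reconstruct it by interpolation.
import numpy as np

P = 2**13 - 1          # assumed prime field size (illustrative)
K, T, N = 2, 1, 5      # data partitions, privacy threshold, workers (N >= K + T)

def lagrange_encode(blocks, n_workers, prime):
    """Encode K data blocks plus T random masks into one share per worker."""
    k_plus_t = len(blocks)
    alphas = np.arange(1, k_plus_t + 1)                        # interpolation points
    betas = np.arange(k_plus_t + 1, k_plus_t + 1 + n_workers)  # evaluation points
    shares = []
    for beta in betas:
        share = np.zeros_like(blocks[0])
        for j, alpha_j in enumerate(alphas):
            # Lagrange basis polynomial ell_j evaluated at beta, mod prime
            num, den = 1, 1
            for m, alpha_m in enumerate(alphas):
                if m != j:
                    num = (num * (beta - alpha_m)) % prime
                    den = (den * (alpha_j - alpha_m)) % prime
            coeff = (num * pow(int(den), -1, prime)) % prime   # modular inverse
            share = (share + coeff * blocks[j]) % prime
        shares.append(share)
    return shares

rng = np.random.default_rng(0)
X = rng.integers(0, P, size=(4, 3), dtype=np.int64)        # quantized dataset
X_blocks = list(np.split(X, K))                            # K submatrices
masks = [rng.integers(0, P, size=X_blocks[0].shape, dtype=np.int64)
         for _ in range(T)]                                # T random masks
worker_shares = lagrange_encode(X_blocks + masks, N, P)    # one share per worker
```

Because each worker's share is a fixed linear combination of the data blocks, workers can compute directly on their shares, which is what enables training to be parallelized without revealing the data.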