Paper ID | SS-NNC.9
Paper Title | DATA-DRIVEN LOW-RANK NEURAL NETWORK COMPRESSION
Authors | Dimitris Papadimitriou, UC Berkeley, United States; Swayambhoo Jain, InterDigital AI Lab, United States
Session | SS-NNC: Special Session: Neural Network Compression and Compact Deep Features
Location | Area B
Session Time | Tuesday, 21 September, 08:00 - 09:30
Presentation Time | Tuesday, 21 September, 08:00 - 09:30
Presentation | Poster
Topic | Special Sessions: Neural Network Compression and Compact Deep Features: From Methods to Standards
Abstract | Despite the many modern applications of Deep Neural Networks (DNNs), the large number of parameters in the hidden layers makes them unattractive for deployment on devices with storage capacity constraints. In this paper we propose a Data-Driven Low-rank (DDLR) method to reduce the number of parameters of pretrained DNNs and expedite inference by imposing low-rank structure on the fully connected layers, while controlling the overall accuracy and without requiring any retraining. We pose the problem as finding the lowest-rank approximation of each fully connected layer with given performance guarantees, and relax it to a tractable convex optimization problem. We show that it is possible to significantly reduce the number of parameters in common DNN architectures with only a small reduction in classification accuracy. We compare DDLR with Net-Trim, another data-driven DNN compression technique based on sparsity, and show that DDLR consistently produces more compressed neural networks while maintaining higher accuracy.
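
To make the idea concrete, below is a minimal Python sketch of data-driven low-rank compression for a single fully connected layer. It is an illustration only, not the paper's method: the paper solves a convex relaxation with formal performance guarantees, whereas this sketch uses an SVD-based projection of the layer's outputs on calibration data as a simple stand-in. The function name `ddlr_layer_sketch`, the tolerance parameter `tol`, and the shapes assumed for `W` and `X` are all hypothetical choices for the example.

```python
import numpy as np

def ddlr_layer_sketch(W, X, tol=0.05):
    """Rank-reduce one fully connected layer using calibration data.

    W   : (out_dim, in_dim) pretrained weight matrix
    X   : (in_dim, n_samples) activations reaching this layer
    tol : allowed relative Frobenius error on the layer outputs over X

    Returns factors A (out_dim, r) and B (r, in_dim) so the layer
    stores (out_dim + in_dim) * r parameters instead of
    out_dim * in_dim, and computes A @ (B @ x) at inference time.
    """
    Y = W @ X                                    # reference outputs on the data
    U, s, _ = np.linalg.svd(Y, full_matrices=False)
    # Projecting Y onto its top-r left singular vectors incurs a relative
    # error of sqrt(sum of discarded singular values squared) / ||Y||_F;
    # pick the smallest r that stays within the tolerance.
    total = np.linalg.norm(s)
    for r in range(1, len(s) + 1):
        err = np.sqrt(np.sum(s[r:] ** 2)) / total
        if err <= tol:
            break
    A = U[:, :r]                                 # (out_dim, r) projection basis
    B = U[:, :r].T @ W                           # (r, in_dim) projected weights
    return A, B, r
```

Note that compression here comes entirely from structure in the data: real calibration activations are strongly correlated, so a small rank r often already meets the tolerance, whereas unstructured random inputs would leave r close to full rank.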