| Paper ID | MLSP-3.5 |
| Paper Title | FEATURE REUSE FOR A RANDOMIZATION BASED NEURAL NETWORK |
| Authors | Xinyue Liang, Mikael Skoglund, Saikat Chatterjee, KTH Royal Institute of Technology, Sweden |
| Session | MLSP-3: Deep Learning Training Methods 3 |
| Location | Gather.Town |
| Session Time | Tuesday, 08 June, 13:00 - 13:45 |
| Presentation Time | Tuesday, 08 June, 13:00 - 13:45 |
| Presentation | Poster |
| Topic | Machine Learning for Signal Processing: [MLR-DEEP] Deep learning techniques |
| Abstract | We propose a feature reuse approach for an existing multi-layer randomization-based feedforward neural network, in which feature representations are linked directly across all relevant hidden layers. For feature reuse at a particular layer, we concatenate the features from the previous layers to construct a large-dimensional feature vector for that layer. This concatenated feature is then used efficiently to learn a limited number of parameters by solving a convex optimization problem. Experiments show that the proposed model improves performance over the original neural network without a significant increase in computational complexity. |
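The abstract's pipeline can be illustrated with a minimal sketch: hidden layers with fixed random weights (no backpropagation), concatenation of the features from all previous layers, and learnable output weights obtained from a convex ridge-regression problem. All dimensions, the ReLU activation, and the ridge penalty below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy data: N samples, input dimension d, target dimension q (hypothetical sizes).
N, d, q = 200, 10, 3
X = rng.standard_normal((N, d))
T = rng.standard_normal((N, q))

n_hidden = 50   # random nodes per hidden layer (assumption)
n_layers = 3    # number of hidden layers (assumption)
lam = 1e-2      # ridge regularization weight (assumption)

# Randomization-based network: hidden weights are drawn once and kept fixed.
features = [X]  # reuse the input and every previous layer's features
H = X
for _ in range(n_layers):
    W = rng.standard_normal((H.shape[1], n_hidden)) / np.sqrt(H.shape[1])
    H = relu(H @ W)
    features.append(H)

# Feature reuse: concatenate features from all previous layers into one
# large-dimensional representation.
Z = np.concatenate(features, axis=1)

# Learn only the output weights O by solving the convex problem
#   min_O ||Z O - T||_F^2 + lam ||O||_F^2  (closed-form ridge solution).
O = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ T)
pred = Z @ O
```

Because only the output mapping `O` is learned, training reduces to one regularized least-squares solve over the concatenated features, which is why the added cost over the original randomized network stays small.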