
Auto-Sizing Neural Networks: With Applications to n-gram Language Models

Journal contribution
Posted on 2016-01-25, authored by David Chiang and Kenton Murray
Neural networks have been shown to improve performance across a range of natural-language tasks. However, designing and training them can be complicated. Frequently, researchers resort to repeated experimentation to pick optimal settings. In this paper, we address the issue of choosing the correct number of units in hidden layers. We introduce a method for automatically adjusting network size by pruning out hidden units through L∞,1 and L2,1 regularization. We apply this method to language modeling and demonstrate its ability to correctly choose the number of hidden units while maintaining perplexity. We also include these models in a machine translation decoder and show that these smaller neural models maintain the significant improvements of their unpruned versions.
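The pruning idea described in the abstract can be illustrated with a small sketch. The snippet below is an illustrative Python/NumPy example, not the authors' implementation: the weight matrix W, the one-column-per-hidden-unit grouping, and the pruning threshold are all assumptions made here for illustration. It computes the L2,1 and L∞,1 group penalties over a hidden layer's weight matrix and counts the units whose weight vectors have been driven to (near) zero, which are the units that could be removed. The intuition is that both penalties act on whole groups (one group per hidden unit), so when a group's norm reaches zero the entire unit can be pruned without changing the network's output.

    import numpy as np

    # Hypothetical weight matrix for one hidden layer:
    # rows = inputs, columns = hidden units (grouping assumed for illustration).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 128))

    # L2,1 penalty: sum over hidden units of the L2 norm of each unit's weight vector.
    l21 = np.sum(np.linalg.norm(W, ord=2, axis=0))

    # L-infinity,1 penalty: sum over hidden units of the largest absolute weight.
    linf1 = np.sum(np.max(np.abs(W), axis=0))

    # After training with such a penalty, units whose weight vectors have (near-)zero
    # norm can be removed; here we simply count survivors against a small threshold.
    threshold = 1e-6  # assumed value, for illustration only
    active = np.linalg.norm(W, ord=2, axis=0) > threshold
    print(f"L2,1 = {l21:.2f}, Linf,1 = {linf1:.2f}, active units = {active.sum()}")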

History

Date Modified

2016-05-06

Language

  • English

Publisher

Association for Computational Linguistics

Source

http://aclweb.org/anthology/D/D15/D15-1107.pdf

Categories

  • Computer Science and Engineering
