Deep learning in neural networks: An overview

We focus on a single finite episode or epoch of information processing and activation spreading, without learning through weight changes.

Jürgen Schmidhuber

2014

Scholarcy highlights

  • Introduction to Deep Learning in Neural Networks: which modifiable components of a learning system are responsible for its success or failure? What changes to them improve performance? This has been called the fundamental credit assignment problem.
  • To measure whether credit assignment in a given NN application is of the deep or shallow type, I introduce the concept of Credit Assignment Paths (CAPs): chains of possibly causal links between the events of Sec. 2, e.g., from input through hidden to output layers in feedforward NNs, or through transformations over time in recurrent NNs (a toy CAP-depth computation is sketched after this list)
  • To deal with long time lags between relevant events, several sequence processing methods were proposed, including Focused BP based on decay factors for activations of units in RNNs (a minimal decay-factor sketch follows this list), Time-Delay Neural Networks and their adaptive extension, Nonlinear AutoRegressive with eXogenous inputs (NARX) RNNs, certain hierarchical RNNs, Reinforcement Learning economies in RNNs with winner-take-all (WTA) units and local learning rules, and other methods
  • We have focused on Deep Learning in supervised or unsupervised NNs
  • DL in feedforward NNs (FNNs) profited from Graphics Processing Unit (GPU) implementations
  • GPU-based Max-Pooling Convolutional Neural Networks won competitions in pattern recognition, image segmentation, and object detection (a minimal max-pooling sketch follows this list)
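
To make the notion of CAP depth concrete, here is a minimal sketch (not from the paper): a tiny feedforward topology is encoded as a directed acyclic graph, and the length of the longest chain of causal links from input to output is computed. The node names (`in`, `h1`, `h2`, `out`) and the topology are illustrative assumptions.

```python
# Toy sketch (illustrative, not from the paper): Credit Assignment Paths
# (CAPs) as chains of causal links in a small feedforward NN, encoded as
# a DAG. Node names and topology are assumptions made for this example.

edges = {
    "in": ["h1"],   # input layer feeds hidden layer 1
    "h1": ["h2"],   # hidden layer 1 feeds hidden layer 2
    "h2": ["out"],  # hidden layer 2 feeds the output layer
    "out": [],      # output events influence nothing further
}

def cap_depths(node):
    """Lengths (number of links) of all CAPs starting at `node`."""
    if not edges[node]:
        return [0]
    return [1 + d for nxt in edges[node] for d in cap_depths(nxt)]

# The depth of the deepest CAP is one way to quantify how deep the
# credit assignment problem of a given architecture is:
print(max(cap_depths("in")))  # -> 3
```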
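The decay-factor idea behind Focused BP can be illustrated with a single recurrent unit whose activation is an exponentially decayed sum of past inputs, so early events leave a shrinking but nonzero trace over long time lags. This is a minimal sketch under assumed parameter values, not the exact published formulation.

```python
# Minimal sketch (assumed form, not the exact published formulation):
# a recurrent unit whose activation decays by a fixed factor, so an
# input from several steps ago still leaves a geometrically shrinking
# trace in the current activation.

import math

def run_decay_unit(inputs, decay=0.9, w=0.5):
    """Activation update a_t = decay * a_{t-1} + tanh(w * x_t)."""
    a = 0.0
    for x in inputs:
        a = decay * a + math.tanh(w * x)
    return a

# An early input event still influences the activation 3 steps later:
print(run_decay_unit([1.0, 0.0, 0.0, 0.0]))  # ~0.337
```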
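For the max-pooling operation at the heart of Max-Pooling CNNs, here is a minimal NumPy-only sketch: each non-overlapping 2x2 window of a feature map is replaced by its maximum. The 4x4 feature map and the 2x2 window size are illustrative assumptions.

```python
# Minimal sketch of max-pooling: downsample a feature map by taking the
# maximum of each non-overlapping 2x2 window. NumPy-only toy version;
# the input array and window size are assumptions for the example.

import numpy as np

def max_pool_2x2(fmap):
    """Downsample a (H, W) feature map via 2x2 window maxima."""
    h, w = fmap.shape
    assert h % 2 == 0 and w % 2 == 0, "toy version: even dimensions only"
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 2, 0, 1],
                 [3, 0, 1, 2],
                 [0, 1, 4, 0],
                 [2, 1, 0, 3]], dtype=float)
print(max_pool_2x2(fmap))  # [[3. 2.] [2. 4.]]
```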
