Sparse attentive backtracking: Temporal credit assignment through reminding

NR Ke, AG Alias Parth Goyal, … - Advances in Neural Information Processing Systems, 2018 - proceedings.neurips.cc
Abstract
Learning long-term dependencies in extended temporal sequences requires credit assignment to events far back in the past. The most common method for training recurrent neural networks, back-propagation through time (BPTT), requires credit information to be propagated backwards through every single step of the forward computation, potentially over thousands or millions of time steps. This becomes computationally expensive or even infeasible when used with long sequences. Importantly, biological brains are unlikely to perform such detailed reverse replay over very long sequences of internal states (consider days, months, or years). However, humans are often reminded of past memories or mental states that are associated with the current mental state. We consider the hypothesis that such memory associations between past and present could be used for credit assignment through arbitrarily long sequences, propagating the credit assigned to the current state to the associated past state. Based on this principle, we study a novel algorithm that back-propagates through only a few of these temporal skip connections, realized by a learned attention mechanism that associates current states with relevant past states. We demonstrate in experiments that our method matches or outperforms regular BPTT and truncated BPTT on tasks involving particularly long-term dependencies, but without requiring the biologically implausible backward replay through the whole history of states. Additionally, we demonstrate that the proposed method transfers to longer sequences significantly better than LSTMs trained with BPTT and LSTMs trained with full self-attention.
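The mechanism the abstract describes, retrieving a few associated past states with a learned attention and letting gradients flow back only through those skip connections, can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration of the general idea, not the authors' implementation: the choice of GRU cell, the top-k size `k_top`, the local truncation length `trunc`, and all other names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseBacktrackRNN(nn.Module):
    """Toy RNN with sparse attention over stored past hidden states.

    A distant time step receives gradient only through the few attention
    skip connections, not through the full sequential chain, which is
    truncated with .detach() every `trunc` steps.
    """

    def __init__(self, input_size, hidden_size, k_top=3, trunc=5):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.query = nn.Linear(hidden_size, hidden_size)
        self.k_top = k_top   # number of past states attended to (assumption)
        self.trunc = trunc   # local truncation length (assumption)

    def forward(self, x):                      # x: (seq_len, batch, input_size)
        seq_len, batch, _ = x.shape
        h = x.new_zeros(batch, self.cell.hidden_size)
        memories, outputs = [], []             # stored states = skip targets
        for t in range(seq_len):
            h = self.cell(x[t], h)
            if memories:
                mem = torch.stack(memories, dim=1)              # (batch, t, hidden)
                scores = torch.einsum('bh,bth->bt', self.query(h), mem)
                k = min(self.k_top, mem.size(1))
                top_val, top_idx = scores.topk(k, dim=1)        # sparse selection
                weights = F.softmax(top_val, dim=1)             # (batch, k)
                chosen = mem.gather(
                    1, top_idx.unsqueeze(-1).expand(-1, -1, mem.size(2)))
                # Summary of the selected past states; gradient flows back
                # through these skip connections into the chosen memories only.
                h = h + torch.einsum('bk,bkh->bh', weights, chosen)
            memories.append(h)                 # keep graph so skips can backprop
            outputs.append(h)
            if (t + 1) % self.trunc == 0:
                h = h.detach()                 # cut the sequential gradient path
        return torch.stack(outputs)

# Example usage (shapes only):
#   rnn = SparseBacktrackRNN(input_size=8, hidden_size=32)
#   out = rnn(torch.randn(100, 4, 8))          # -> (seq_len, batch, hidden)
```

In this sketch, the .detach() call truncates the ordinary sequential gradient path, so a loss at a late time step can reach a distant hidden state only through the sparse attention over stored memories, mirroring the "reminding" mechanism described in the abstract.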