
One-shot Learning with Memory-Augmented Neural Networks

This article comes from Rylan Schaeffer’s GitHub.

I’ve found that the overwhelming majority of online information on artificial intelligence research falls into one of two categories: the first is aimed at explaining advances to lay audiences, and the second is aimed at explaining advances to other researchers. I haven’t found a good resource for people with a technical background who are unfamiliar with the more advanced concepts and are looking for someone to fill them in. This is my attempt to bridge that gap by providing approachable yet (relatively) detailed explanations. In this post, I explain the titular paper, One-shot Learning with Memory-Augmented Neural Networks.

In my last post, I criticized this paper as poorly motivated. After taking time to crystallize my thoughts and to email the authors, I’m less critical of it now, but I still have a few concerns. My goal is to try explaining the paper and my concerns in tandem.

Motivation:

In an earlier post, I wrote about the need for massive amounts of data to train deep neural networks. In contrast, humans require comparatively little data to learn a new behavior or to rapidly shift away from an old behavior. For example, after running into a handful of street signs, the modern teenager quickly learns to be careful texting while walking. As Santoro et al. write, “This kind of flexible adaptation is a celebrated aspect of human learning (Jankowski et al., 2011), manifesting in settings ranging from motor control (Braun et al., 2009) to the acquisition of abstract concepts (Lake et al., 2015). Generating novel behavior based on inference from a few scraps of information – e.g., inferring the full range of applicability for a new word, heard in only one or two contexts – is something that has remained stubbornly beyond the reach of contemporary machine intelligence.”

The term one-shot learning has been introduced to capture this phenomenon of rapid behavior change following a small number of experiences, or even just one experience. In an earlier paper, a neural network was given an external memory and the ability to learn how to use its new memory in solving specific tasks. This paper classifies that previous model, the Neural Turing Machine (NTM), as a subclass of the more general class of Memory-Augmented Neural Networks (MANNs), and suggests an alternative memory system capable of outperforming humans in certain one-shot learning tasks.

Background:

If you haven’t read the NTM paper or my walkthrough of it, set this aside and go read one (or both). To understand the proposed change to an NTM’s memory, it helps to first understand how an NTM works.

Intuition:

The goal is to modify an NTM to excel at one-shot learning. To accomplish this, the authors modify the NTM controller’s memory access capabilities. However, the paper is rather terse in justifying this specific change. An NTM controller is capable of using content-based addressing, location-based addressing, or both, so when Santoro et al. suggest using a pure content-based memory writer, I was confused about why this would offer any improvement. The only explanation the paper offers is that “[The NTM’s memory access method] was advantageous for sequence-based prediction tasks. However, this type of access is not optimal for tasks that emphasize a conjunctive coding of information independent of sequence.”
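To make the content-based half of that distinction concrete, here is a minimal sketch (NumPy, with my own variable names rather than anything from the paper) of how content-based addressing typically turns a query key into read weights: the key is compared against every memory row with cosine similarity, and a softmax over those similarities gives the attention distribution used to read.

```python
import numpy as np

def content_read_weights(key, memory, beta=1.0):
    # Cosine similarity between the controller's query key and each memory row.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    # Softmax (sharpened by beta) turns the similarities into read weights.
    exp = np.exp(beta * (sims - sims.max()))
    return exp / exp.sum()

memory = np.random.randn(128, 40)          # 128 slots, each a 40-dimensional row
key = np.random.randn(40)                  # query key emitted by the controller
read_w = content_read_weights(key, memory)
read_vector = read_w @ memory              # weighted sum of memory rows
```

Reading is then just a weighted sum of the memory rows under those weights; location-based addressing would additionally shift or iterate over slots by position, which is exactly the machinery the authors strip out of the writer.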

To their credit, three of the four authors I emailed wrote back in less than a day with more detailed explanations. The answer is that there’s a tradeoff when training memory-augmented networks: more sophisticated memory access capabilities are more powerful, but the controller requires more training. Santoro et al. propose hamstringing the controller’s ability to write to memory using location-based addressing so that the controller will learn more quickly. They point out that location-based addressing won’t be necessary to excel at one-shot learning, because for a given input there are only two actions the controller might need to take, and both depend on content-based addressing. Either the input is very similar to a previously seen input, in which case we might want to update whatever we wrote to memory, or the input isn’t similar to a previously seen input, in which case we don’t want to overwrite recent information, so we write to the least used memory location instead.

The authors call this hamstrung memory system the Least Recently Used Access (LRUA) module. What confused me at first is that Santoro et al. describe the LRUA as “a pure content-based memory writer,” and then, in the same sentence, state that the LRUA will write “to either the least used memory location or the most recently used memory location,” which sounds an awful lot like location-based addressing. I reconciled the discrepancy when I realized that the most recently used memory location is simply whichever location was most recently read, and reading is driven entirely by content similarity.
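Putting those pieces together, here is a hedged sketch of what an LRUA-style write might look like, following the update rules described in the paper: a learned scalar gate interpolates between the previous read weights (update the slot we just read) and a one-hot vector over the least-used slot (write somewhere fresh), while a decaying usage vector tracks which slots were recently read or written. The names here (`lrua_write`, `usage_w`, `gamma`, `n_reads`) are my own shorthand for illustration, not code from the authors.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lrua_write(memory, key, prev_read_w, usage_w, alpha, gamma=0.95, n_reads=1):
    # "Least used" weights: 1 on the n_reads slots with the smallest usage, 0 elsewhere.
    least_used_w = np.zeros_like(usage_w)
    least_used_w[np.argsort(usage_w)[:n_reads]] = 1.0

    # Learned gate: blend "update the slot(s) we just read" (content match)
    # with "write to the least-used slot(s)" (novel input).
    g = sigmoid(alpha)
    write_w = g * prev_read_w + (1.0 - g) * least_used_w

    # Zero the least-used slot before writing, then add the keyed content.
    memory[np.argmin(usage_w)] = 0.0
    memory = memory + np.outer(write_w, key)

    # Usage decays each step and accumulates wherever we read or wrote.
    usage_w = gamma * usage_w + prev_read_w + write_w
    return memory, write_w, usage_w

# Toy usage: a fresh memory, uniform read weights before the first real read.
memory = np.zeros((128, 40))
usage_w = np.zeros(128)
prev_read_w = np.full(128, 1.0 / 128)
memory, write_w, usage_w = lrua_write(memory, np.random.randn(40),
                                      prev_read_w, usage_w, alpha=0.0)
```

Note how both write paths are driven by content: the “most recently used” slot is just wherever the previous content-based read attended, and the “least used” slot falls out of the usage bookkeeping rather than any explicit location addressing.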


