
Year in Review: Deep Learning Breakthroughs 2016

Today we are featuring the year’s most interesting breakthroughs in deep learning that we have been fawning over at Grakn Labs. (For those of you who are interested in a crash course in deep learning, here’s a great video by Andrew Ng at Stanford.)

2016 has been a breakthrough year for deep learning, especially for Google and DeepMind. As engineers and technology junkies, we truly have great respect for the work they are doing over at the DeepMind offices a mere 2.5km from our office here in London.

1. AlphaGo triumphs in the ultimate Go showdown

AlphaGo besting Lee Sedol in March of this year is the moment that has stuck with us most. According to our resident Go commentator, Michelangelo, the move above, played by “the machine” during Match 2, was pure “machine”. When AlphaGo played it (at 1:18:22 in the video above), it baffled the human experts; no human would have made that move, and its genius was only revealed later, when it opened up five areas of play and cinched the win. Though we are skeptical of a “general AI” that spans all domains, we were nonetheless crazy impressed with AlphaGo.

2. Bots kicking our butts in StarCraft

StarCraft + DeepMind? I think we’re in geek/nerd heaven. DeepMind has set its mind to yet another game. This time it has partnered with Blizzard to let AI researchers deploy bots in the StarCraft II game environment. Previous game-playing milestones such as IBM’s Deep Blue in chess and DeepMind’s AlphaGo in Go have been impressive, but a game like StarCraft presents even greater challenges: imperfect and dynamic information, planning over a much longer time horizon, and the need to adapt as the game unfolds. We’re waiting here with bated breath.

3. DIY deep learning for Tic Tac Toe

As an open-source company, we love making technology accessible to a wider community. We were at a meetup in London where Daniel Slater showed us how reinforcement learning with TensorFlow can be used to teach a machine, aptly named AlphaToe, to play Tic Tac Toe. Here’s the link to the AlphaToe repo on GitHub if you want to check it out yourself; there’s also a small sketch of the underlying idea below.

(Or if you can’t be bothered to create your own AlphaToe, you can play a game or ten right here.)
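If you’re curious what the reinforcement learning part actually looks like, here is a rough, self-contained sketch of tabular Q-learning for Tic Tac Toe. To be clear, this is our own toy illustration rather than AlphaToe’s TensorFlow code; the random opponent, the rewards and the hyperparameters are all invented for the example.

```python
# A tiny tabular Q-learning agent for Tic Tac Toe (illustration only, not AlphaToe's code).
# The agent plays X against a random opponent and learns a value for each (state, move) pair.
import random
from collections import defaultdict

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)                     # (state string, move index) -> estimated value
ALPHA, GAMMA, EPSILON = 0.3, 0.9, 0.1      # learning rate, discount, exploration rate

def choose(state, moves):
    if random.random() < EPSILON:          # occasionally explore a random move
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(state, m)])   # otherwise pick the best-known move

for episode in range(50_000):
    board = [' '] * 9
    while True:
        state = ''.join(board)
        moves = [i for i, v in enumerate(board) if v == ' ']
        move = choose(state, moves)
        board[move] = 'X'

        reward = 0.0
        if winner(board) == 'X':
            reward = 1.0
        elif ' ' not in board:
            reward = 0.5                   # draw
        else:
            # the opponent (O) replies at random
            board[random.choice([i for i, v in enumerate(board) if v == ' '])] = 'O'
            if winner(board) == 'O':
                reward = -1.0
            elif ' ' not in board:
                reward = 0.5               # draw on the opponent's move

        next_state = ''.join(board)
        next_moves = [i for i, v in enumerate(board) if v == ' ']
        done = reward != 0.0 or not next_moves
        future = max((Q[(next_state, m)] for m in next_moves), default=0.0)
        target = reward if done else GAMMA * future
        Q[(state, move)] += ALPHA * (target - Q[(state, move)])
        if done:
            break
```

As far as we understand it, AlphaToe swaps the lookup table for a neural network trained with TensorFlow and uses more sophisticated training, but the trial-and-error loop is the same in spirit.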

4. Google’s Multilingual Neural Machine Translation learns underlying semantics of languages

‘First shot’ translating from Korean to English without direct training data

Games aside, as an international team with at least 14 languages floating around our team of 17 people, it’s safe to say that we’ve all used Google Translate at one point or another. Google’s Multilingual Neural Machine Translation system is now able to translate between language pairs it has never seen during training. Researchers attribute this to the system learning a kind of interlingua, a shared representation that encodes the semantics of a sentence independently of any single language. The system is already being used live in Google Translate. For the more technically orientated of us (or those with more time on their hands), here’s the research paper and blog post from Google. Otherwise, here’s a summary news article from Wired.
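The mechanism behind the zero-shot behaviour, as described in the paper, is surprisingly simple: one model is trained on many language pairs at once, and the desired target language is signalled by an artificial token prepended to the source sentence. The snippet below is only our illustration of that idea; the token format and the example sentences are invented, and the real system is a large sequence-to-sequence network, not a string function.

```python
# Illustration of the target-language token idea behind multilingual NMT.
# A single model serves every pair; the target language is marked by an artificial
# token at the start of the source sentence (the exact token format here is invented).

def make_example(source_sentence, target_language):
    """Prepend a token such as '<2en>' asking the model to produce that language."""
    return f"<2{target_language}> {source_sentence}"

# Pairs like these appear explicitly in the training data:
print(make_example("How are you?", "ko"))      # English -> Korean
print(make_example("お元気ですか", "ko"))        # Japanese -> Korean

# 'First shot' request in the spirit of the article's example: a pairing the model
# was never trained on directly, served by the same shared encoder and decoder.
print(make_example("안녕하세요", "en"))          # Korean -> English
```

Because every pair shares the same encoder and decoder, the model can be asked for a combination it never saw in training, which is where the interlingua interpretation comes from.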

5. Trolling on Twitter with DeepDrumpf

Which one is the real Donald Trump?


The American elections have been a hot topic in the office as we contemplate expanding our presence to the US. Since its debut in March, we have been entertained by the senseless tweets of DeepDrumpf, a Twitter bot created by Bradley Hayes, a postdoc at MIT. DeepDrumpf was trained on a few hours’ worth of transcripts of victory speeches and debates from the president-elect. Its tweets are generated character by character by a recurrent neural network, an approach previously used to mimic Shakespearean text. Although not the most sophisticated use of deep learning that we’ve seen, we must hand it to him for originality and for capturing the zeitgeist.
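For the curious, character-by-character generation follows the classic char-RNN recipe. Below is a minimal Keras sketch of that recipe; it is not Bradley Hayes’s actual code, and speeches.txt, the network size and the training settings are placeholders we made up, assuming TensorFlow/Keras is installed.

```python
# A minimal character-level LSTM in Keras, in the spirit of the char-RNN approach
# behind DeepDrumpf. This is our own sketch, not the bot's code; 'speeches.txt'
# and all model/training settings are placeholders.
import numpy as np
from tensorflow import keras

text = open("speeches.txt", encoding="utf-8").read().lower()
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

SEQ_LEN, STEP = 40, 3
sequences, next_chars = [], []
for i in range(0, len(text) - SEQ_LEN, STEP):
    sequences.append(text[i:i + SEQ_LEN])
    next_chars.append(text[i + SEQ_LEN])

# One-hot encode: each example is SEQ_LEN characters, the target is the next character.
x = np.zeros((len(sequences), SEQ_LEN, len(chars)), dtype=np.float32)
y = np.zeros((len(sequences), len(chars)), dtype=np.float32)
for i, seq in enumerate(sequences):
    for t, c in enumerate(seq):
        x[i, t, char_to_idx[c]] = 1.0
    y[i, char_to_idx[next_chars[i]]] = 1.0

model = keras.Sequential([
    keras.layers.LSTM(128, input_shape=(SEQ_LEN, len(chars))),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(x, y, batch_size=128, epochs=20)

def generate(seed, length=140):
    """Sample new text one character at a time, feeding each prediction back in."""
    out = seed
    for _ in range(length):
        x_pred = np.zeros((1, SEQ_LEN, len(chars)), dtype=np.float32)
        for t, c in enumerate(out[-SEQ_LEN:]):
            x_pred[0, t, char_to_idx[c]] = 1.0
        probs = model.predict(x_pred, verbose=0)[0].astype("float64")
        probs /= probs.sum()              # renormalise against floating-point error
        out += chars[np.random.choice(len(chars), p=probs)]
    return out

print(generate(text[:SEQ_LEN]))
```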

The original post can be found here.