It turns out I’m not the only one who thinks AI alarmism is getting out of hand. The ITIF Luddite Award nominations include “alarmists, even including respected luminaries such as Elon Musk and Stephen Hawking, touting an artificial intelligence apocalypse.” Opinions run strong on both sides of the issue, with Gizmodo writer George Dvorsky arguing that it’s not right to be branded a Luddite for warning against potential perils. Like most controversies, the differences are smaller than the similarities: both groups contend that they are promoting a better future for humanity.
The real question is where your faith in humanity stems from. A recent, prosaic example of banning AI is the EU’s blocking of Facebook’s Moments application, which integrates … technology. Is this a case of Luddite regulators being alarmist about AI? It’s not so clear. The EFF’s open letter advocates that “people should be able to walk down a public street without fear that companies they’ve never heard of are tracking their every movement — and identifying them by name — using facial recognition technology”. Hence, the issue is our distrust of others’ use of AI, not AI itself. Will that change when Strong AI becomes a reality?
All the publicity around AI has motivated more than alarmism. As AI transitions into a marketing term, it’s easy to get lost in our own imagination rather than the science behind the state of the art. Mosaic Ventures provides a nice overview of the different types of “AI” and the challenges these businesses face, while Re/Code gives a layman’s introduction to deep learning.
Digging deeper, it’s worth listening to Greg Corrado’s discussion of Google’s Smart Reply, which includes a brief description of seq2seq learning. Most of the interview is actually about management: how to create healthy, heterogeneous teams of researchers and engineers.
It’s hard to talk about chatbots without mentioning AI. In The Botification of News, Trushar Barot begins to explore how news and content delivery will change if bots become the de facto curators of news. To a certain extent, this has already happened with Facebook’s Timeline, product and movie recommendations, etc. What’s different is that AI personal assistants will act more as agents of the consumer/user, as opposed to the platform. That said, if your personal assistant is Facebook M, I imagine that content recommendations will still be optimizing for Facebook’s revenue first and your interests second.
Exploring an alternate reality, Elise Hu writes that it’s Time to Get Serious About Chat Apps. Her point is that content producers should leverage chatbot technology to directly engage with users over chat/messaging platforms. It will be interesting to see whether publishers have enough R&D budget to develop personalized news curators or if they will be relegated to dumb syndicators.
Aside from Haskell users, category theory has largely been an esoteric branch of mathematics. Applications leveraging category theory are now appearing on the scene, as when I briefly mentioned Combinatory Categorial Grammars. A great introduction to category theory is by Bartosz Milewski. I’ve daydreamed a bit about how to implement categories, and ultimately CCGs, in R, though I’m not sure how difficult it would be in base R. That said, leveraging the type system of my lambda.r package could produce something usable fairly quickly. If anyone is interested in exploring this with me, feel free to get in touch.
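To make the daydream concrete, here is a minimal, language-neutral sketch (written in Python rather than R, purely for illustration) of the starting point any such implementation would need: a category whose morphisms are unary functions, with an identity morphism and associative composition. All names here are my own invention, not part of any existing package.

```python
# Sketch of a "category of functions": morphisms are unary functions,
# composition is function composition, and the identity function is
# the unit. The category laws (identity and associativity) are checked
# with assertions on sample inputs.

def identity(x):
    return x

def compose(g, f):
    """Return the composite morphism g . f (apply f first, then g)."""
    return lambda x: g(f(x))

double = lambda x: x * 2
increment = lambda x: x + 1

h = compose(double, increment)  # double . increment
assert h(3) == 8                # (3 + 1) * 2

# Identity law: id . f == f == f . id
assert compose(identity, double)(5) == double(5) == compose(double, identity)(5)

# Associativity: (double . increment) . increment == double . (increment . increment)
lhs = compose(compose(double, increment), increment)
rhs = compose(double, compose(increment, increment))
assert lhs(4) == rhs(4)
```

The same shape translates directly to base R with closures, and lambda.r’s type declarations would let the “objects” of the category (the types at each end of a morphism) be checked at call time rather than left implicit.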
Perhaps more aligned with Elon Musk’s vision of AI as an extension of humans is the Cyborg Olympics. These games highlight the advances made in robotics to benefit disabled people, particularly those who are paralyzed. Owing to the robotic augmentation, contestants are called “pilots” rather than “athletes,” again to highlight the cooperation of man and machine.
Brian Lee Yung Rowe is Founder and Chief Pez Head of Pez.AI // Zato Novo, a conversational AI platform for guided data analysis and Q&A. Learn more at Pez.AI.