Hello and Welcome to October! This series is my attempt to start cataloging all the interesting articles, industry reports, whitepapers, and news that I read every month, related to technology and data science.
The relentless march of technology into our lives is endlessly fascinating. This video (Evolution of the Desk) by BestReviews made me nostalgic for the things that used to sit on my desk at work but aren’t there anymore, because they now live in my smartphone or my laptop.
Another great video to warm up with is Hans Rosling giving a reporter a piece of his mind (Video Link). The dialogue starts with the refugee crisis in Europe and ends with Dr. Rosling explaining the curve (with the big fat middle) using World Bank and UN statistics. Even if you are already well versed in the art of the data-driven argument, the time spent watching is worth it as a reminder that basic common sense still matters.
I have heard the name Edward Tufte three times this week, so it must be a sign. Policyviz invited him as a guest on their podcast a couple of weeks ago (Episode #21: Edward Tufte). To learn about him, listen up to about the 27-minute mark; for his thoughts on where data visualization is headed, start there and listen to the end. He mentioned Google Flu. While listening, I wasn’t sure whether he meant “Google” as a verb or a noun, so I assumed it was both and googled “Google Flu”.
That led me to a very interesting paper published in Science (The Parable of Google Flu: Traps in Big Data Analysis). The paper deserves a full read, but here is the gist. In 2008, researchers at Google launched Google Flu Trends (GFT) to locate flu outbreaks based on people’s search terms. They published the essential idea in Nature and claimed that their algorithm could track and locate flu outbreaks two weeks faster than the CDC’s estimates. That didn’t quite work out. The Science paper goes into detail on why GFT failed so badly, tracing the problems to two broad areas: big data hubris and algorithm dynamics.
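To get an intuition for the “big data hubris” part, here is a minimal toy sketch (my own illustration, not the paper’s analysis, and all the variable names and data are synthetic): when you screen a huge number of candidate search terms against only a few weeks of flu data, some completely unrelated terms will correlate strongly just by chance, and that correlation evaporates out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n_weeks, n_terms = 50, 5000
flu = rng.normal(size=n_weeks)                # stand-in weekly flu signal (pure noise)
terms = rng.normal(size=(n_terms, n_weeks))   # unrelated "search term" time series

def corr_rows(X, y):
    """Pearson correlation of each row of X with the vector y."""
    Xc = X - X.mean(axis=1, keepdims=True)
    yc = y - y.mean()
    return (Xc @ yc) / (np.linalg.norm(Xc, axis=1) * np.linalg.norm(yc))

# Screen every term on the first half of the data, keep the best one
train, test = slice(0, 25), slice(25, 50)
r_train = corr_rows(terms[:, train], flu[train])
best = int(np.argmax(np.abs(r_train)))

# Check how that "winning" term does on the held-out second half
r_test = float(np.corrcoef(terms[best, test], flu[test])[0, 1])
print(f"best in-sample |r|:            {abs(r_train[best]):.2f}")
print(f"same term, out-of-sample |r|:  {abs(r_test):.2f}")
```

The in-sample winner looks impressively predictive, yet it was chosen from thousands of noise series, so its out-of-sample correlation collapses back toward zero. GFT’s term-selection step faced exactly this many-predictors, few-observations trap, compounded by Google changing its search algorithms underneath the model (the “algorithm dynamics” problem).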
The Association for the Advancement of Artificial Intelligence is celebrating its 25th anniversary. To honor that, AI researchers at Rutgers have compiled a stunning overview of the field of AI, created in collaboration with many of the well-known researchers in the field. It is worth the full 30-minute read (or less, if you read faster than I do).
AI has crept into our lives through a variety of applications, but we still don’t have a machine that can do everything a normal human can. Then again, humans aren’t good at everything either. Humans can learn, but some of us learn to solve partial differential equations more readily than we learn to fix plumbing, and some of us learn to sketch better than we learn to play Jeopardy. Just as there is inherent diversity in the human ability to learn, there will continue to be diversity in what machines can learn. On a random side note, ever wonder what Watson, the supercomputer that defeated the two human Jeopardy champions, is up to now? Yes, reinventing itself and finding exciting new applications.
Quoting Sebastian Thrun of Stanford and Udacity here:
Why don’t we have a single example of a truly multi-purpose robot that would, even marginally, deserve to be called artificially intelligent?
I believe the key missing component is representation. While we have succeeded in building special purpose representations for specialized robot applications, we understand very little about what it takes to build a lifelong learning robot that can accumulate diverse knowledge over long periods of time. And that can use such knowledge effectively when deciding what to do. It is time to bring knowledge representation and reasoning back into robotics. But not of the old kind, where our only language to represent knowledge was binary statements of (nearly) universal truth, deprived of any meaningful grounding in the physical world.
To end this month’s collection, here is an exceptional compilation of expert opinion on pie charts (Should You Ever Use a Pie Chart?). The answer is: yes and no. Yes, when you use them right; no, when you have no clue what you are trying to convey.