I spoke at the IoT Expo on AI and Smart cities in London this week.
Smart cities have been around for more than a decade, and the overall numbers for Smart cities are promising. But even after a decade, Smart city deployments remain narrow niches that do not really scale. So, in my talk I proposed that we change our thinking on Smart cities. Specifically, we should look to video, AI and the Cloud, rather than to sensors alone, to drive Smart cities.
Currently, most Smart city deployments amount to 'city planning gone digital using some sensors (IoT)'. While this approach may sound logical, it has many issues: deploying sensors is expensive, and it leads to siloed applications with only incremental gains. Tech companies have championed this approach, and at best it produces specific applications which we label 'smart' because they have some sensing capability, such as smart parking and smart street lighting. But such applications do not talk to each other, and so they remain niche. City planners love them because they can point to some ROI. Everyone is happy, but the gains are incremental and confined to specific applications.
To provide some context: in 2005 the Clinton Foundation and Cisco started some initial Smart city initiatives, and in 2008 IBM came up with the marketing slogan 'Smarter Planet'. Following this, we saw other companies launch Smart city platforms, for example UrbanOS (Living PlanIT) and TCS. None of these initiatives has taken off beyond one-off deployments. Of course, security remains a concern, but I believe it is a problem that can be addressed by technology, for example through secure microcontrollers such as Azure Sphere.
So, is AI just another buzzword for smart cities?
I believe that AI is different because it allows us to identify multiple elements in a city simultaneously (people, cars, accidents, water, emergencies etc.). Both sensors and video feed AI, but currently AI is driven mostly by video.
Specifically, AI and video, in the context of the Cloud, are creating a vibrant ecosystem that could drive AI in Smart cities.
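To make the contrast with siloed sensors concrete, here is a minimal, purely illustrative sketch of the pattern: one AI-analysed camera feed publishes labelled detections, and several city applications subscribe to the same stream. The labels, locations and handler functions are hypothetical, not taken from any real deployment.

```python
from collections import defaultdict

class DetectionBus:
    """Routes labelled detections from a shared camera feed to subscribers."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, label, handler):
        """Register a city application's handler for one detection label."""
        self.subscribers[label].append(handler)

    def publish(self, detection):
        """Fan a detection out to every application subscribed to its label."""
        for handler in self.subscribers[detection["label"]]:
            handler(detection)

bus = DetectionBus()
alerts = []

# Two different city applications share the same video-analysis pipeline.
bus.subscribe("car", lambda d: alerts.append(f"parking: car at {d['location']}"))
bus.subscribe("accident", lambda d: alerts.append(f"emergency: dispatch to {d['location']}"))

# Detections from a single feed serve both applications at once.
bus.publish({"label": "car", "location": "bay 12"})
bus.publish({"label": "accident", "location": "junction 4"})
```

The point of the sketch is architectural: with a dedicated sensor per use case, adding an application means deploying new hardware; with a shared AI-on-video stream, it means adding a subscriber.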
A vibrant ecosystem has developed around the Cloud. It includes public Cloud companies (AWS, Azure and Google); hybrid Cloud companies; data centre storage companies (e.g. Dell EMC); hyperconverged storage companies that combine storage, compute and networking into highly virtualized systems (e.g. Nutanix); and enterprise backup companies (e.g. Dell EMC Avamar). Ironically, when it comes to IoT, the Cloud and the Edge go together: AI models can be trained in the Cloud and deployed on the Edge. This means Edge computing drives the adoption of specialized AI chips and accelerators. Nvidia is the market leader in this space, and traditional chip manufacturers (Intel, Qualcomm, Samsung etc.) are investing heavily in specialized processors that accelerate machine learning and deep learning at the Edge. The overall artificial intelligence chipset market will be worth 59.26 billion USD by 2025. Most AI Edge use cases are based on computer vision, image processing and natural language processing. We see a lot of activity in AI chipsets across the three major processor types: CPUs (Central Processing Units), GPUs (Graphics Processing Units) and FPGAs (Field Programmable Gate Arrays). We also see new startups such as Graphcore (with its IPU) and SISP Technologies (lossless image compression), and TPUs from Google are also a key part of the ecosystem.
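The train-in-the-Cloud, deploy-on-the-Edge pattern usually involves compressing the model before shipping it to a constrained device. Below is a minimal sketch of one common step, symmetric int8 weight quantization; the weight values and function names are illustrative assumptions, not from any specific framework.

```python
def quantize(weights, bits=8):
    """Symmetric linear quantization of float weights to signed integers."""
    qmax = 2 ** (bits - 1) - 1              # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights on the Edge device."""
    return [v * scale for v in q]

# Pretend these weights came from a model trained in the Cloud.
weights = [0.82, -1.27, 0.05, 0.63]
q, scale = quantize(weights)                # ints: 4x smaller than float32
restored = dequantize(q, scale)
# Each restored weight is within half a quantization step of the original.
```

Frameworks such as TensorFlow Lite and ONNX Runtime implement far more sophisticated versions of this idea, but the trade-off is the same: smaller, faster models at the Edge in exchange for a bounded loss of precision, which is exactly what the specialized Edge accelerators above are built to exploit.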
AI, Cloud and video-based applications are already deployed in Smart cities.
The top deployment areas are
All of these are AI applications driven by video and the Cloud.
To conclude, Smart city applications today are promising but fragmented. The AI, video and Cloud ecosystem offers an opportunity to launch disruptive Smart city applications by leveraging the momentum in these existing domains.
Image source: Wikipedia, Amsterdam city smart lighting