Robotics today is not the same as the assembly-line robots of the industrial age, because AI is reshaping many areas of the field.
Specifically, AI is changing Robotics in two key areas:
- Robots are becoming autonomous, and
- Robots are becoming context aware, especially in their interaction with people.
At the AI labs in London, we have been exploring a few of these areas using the Dobot Magician robotic arm.
Our work was originally inspired by this post from Google, which used the Dobot Magician (build your own machine learning powered robot arm using TensorFlow …). In essence, the demo allows you to use voice commands to make the robotic arm pick up specific objects (e.g. a red domino). This demo uses multiple AI technologies.
In this post, based on our work with the Dobot robotic arm, we take a wider view and list a number of AI technologies that apply to Robotics.
This post was created by Ajit Jaokar and Dr Saed Hussain as part of the work in the AI labs in London. The technologies we explore below are: computer vision, natural language processing (NLP), edge computing, complex event processing, transfer learning, hardware acceleration for AI, reinforcement learning, GANs, mixed reality, and emotion research (affective computing).
Robotics gives us an opportunity to explore multiple AI technologies together.
AI technologies used in Robotics
Computer vision
Computer vision is a key technology in Robotics, typically built on Convolutional Neural Networks and libraries such as OpenCV. There is already considerable work combining computer vision and Robotics.
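To make the idea concrete, here is a minimal sketch (not the Google demo's actual pipeline) of the kind of colour-based object localisation a pick-and-place arm needs. It uses plain NumPy rather than OpenCV or a CNN; the red-dominance threshold is an assumption chosen for illustration.

```python
import numpy as np

def find_red_object(image, threshold=120):
    """Return the (row, col) centroid of 'red' pixels in an RGB image,
    or None if no red pixels are found."""
    r, g, b = image[..., 0].astype(int), image[..., 1].astype(int), image[..., 2].astype(int)
    # A pixel counts as red when the red channel dominates the others.
    mask = (r > threshold) & (r > g + 40) & (r > b + 40)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

# Toy 10x10 black image with a red 2x2 block at rows 3-4, cols 6-7.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[3:5, 6:8, 0] = 200
print(find_red_object(img))  # (3.5, 6.5)
```

In a real system the centroid (in pixel coordinates) would then be mapped into the arm's coordinate frame before issuing a move command.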
Natural language processing (NLP)
NLP can be used to give voice commands to a robot. NLP and Robotics together are an important research area – see, for example, Natural Language Understanding for Human-Robot Interaction.
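A very small sketch of the language-understanding step: after speech-to-text, the utterance must be mapped to an intent the arm can execute. The command grammar, colour list, and object list below are illustrative assumptions, not the vocabulary of any particular demo.

```python
import re

# Hypothetical command grammar for a pick-and-place arm (assumption).
COLOURS = {"red", "blue", "green", "white"}
OBJECTS = {"domino", "block", "cube"}

def parse_command(text):
    """Map an utterance like 'pick up the red domino' to an intent dict."""
    words = re.findall(r"[a-z]+", text.lower())
    if "pick" not in words:
        return None  # not a command this robot understands
    colour = next((w for w in words if w in COLOURS), None)
    obj = next((w for w in words if w in OBJECTS), None)
    if obj is None:
        return None  # no recognisable object to pick up
    return {"intent": "pick", "colour": colour, "object": obj}

print(parse_command("Please pick up the red domino."))
# {'intent': 'pick', 'colour': 'red', 'object': 'domino'}
```

Production systems replace this keyword matching with trained intent classifiers and slot-filling models, but the input/output contract is the same.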
Edge computing
The Cloud is the dominant architecture today. In contrast, Edge computing moves analytics away from centralized nodes and towards the source of the data. AI at the Cloud and at the Edge are complementary: in Edge computing, data is processed near its point of origin, rather than being transmitted to the Cloud. Often, this means the device may not be continuously connected. We already see AI and Edge computing used in complex robotics applications such as autonomous cars and drones.
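One way to picture the Cloud/Edge split: the device analyses the raw sensor stream locally and sends only the interesting results upstream. The z-score anomaly rule and window size below are assumptions for this sketch.

```python
from statistics import mean, stdev

def edge_filter(readings, window=5, z=2.0):
    """Process sensor readings on-device and emit only anomalies,
    so the cloud link carries a fraction of the raw data."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        m, s = mean(recent), stdev(recent)
        if s > 0 and abs(readings[i] - m) > z * s:
            alerts.append((i, readings[i]))  # only this would go to the cloud
    return alerts

data = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 9.0, 1.0]
print(edge_filter(data))  # [(6, 9.0)] -- only the spike is transmitted
```

Eight raw readings become one alert; the same pattern (local inference, summarised uplink) is what makes offline or bandwidth-constrained robots practical.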
Complex event processing
Complex event processing (CEP) combines data from multiple streaming sources to infer a more complex event, with the goal of responding to that event as quickly as possible. An event is defined as a change of state; one or more events combine to define a complex event. For example, the deployment of an airbag in a car is a complex event based on data from multiple sensors in real time. This idea is also used in Robotics, e.g. Event-Processing in Autonomous Robot Programming.
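The airbag example above can be sketched as a toy CEP rule: two simple events (a deceleration spike and a pressure spike) must co-occur within a short window to trigger the complex event. The event names and the 50 ms window are illustrative assumptions, not real automotive parameters.

```python
from collections import deque

class AirbagCEP:
    """Toy complex-event processor: trigger only when a deceleration
    spike AND a pressure spike arrive within a 50 ms window."""
    WINDOW_MS = 50

    def __init__(self):
        self.events = deque()  # (timestamp_ms, event_name)

    def on_event(self, ts_ms, name):
        self.events.append((ts_ms, name))
        # Drop simple events that fell out of the correlation window.
        while self.events and ts_ms - self.events[0][0] > self.WINDOW_MS:
            self.events.popleft()
        names = {n for _, n in self.events}
        return {"decel_spike", "pressure_spike"} <= names  # complex event?

cep = AirbagCEP()
print(cep.on_event(0, "decel_spike"))       # False: one event alone
print(cep.on_event(200, "pressure_spike"))  # False: outside the window
print(cep.on_event(210, "decel_spike"))     # True: both within 50 ms
```

Real CEP engines add declarative pattern languages and out-of-order handling, but the core idea is this temporal correlation of simple events.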
Transfer learning and AI
Transfer learning is a technique that reuses knowledge gained from solving one problem and applies it to a related problem – for example, a model trained to identify an apple may be adapted to identify an orange. In image recognition, transfer learning re-uses a pre-trained model on another (related) problem: only the final layers of the new model are trained, which is relatively cheap and less time-consuming.
Transfer learning also applies to mobile devices for inference at the Edge, i.e. the model is trained in the Cloud and deployed at the Edge. This idea is best seen in TensorFlow Lite / TensorFlow Mobile (note: the following is a large PDF file – Mobile Object Detection using TensorFlow Lite and Transfer Learning). The same principle applies to Robotics: the model can be trained in the Cloud and deployed on the device. Transfer learning is useful in many cases where you may not have access to the Cloud (e.g. where the Robot is offline). In addition, transfer learning can be used by Robots to train other Robots, e.g. Transfer Learning for Robotics: Can a Robot Learn from Another Robo…. A more detailed post about transfer learning is HERE.
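The "only the final layers are trained" idea can be shown without a deep-learning framework. Below, a frozen random projection stands in for a pre-trained backbone (a real system would use e.g. a CNN's convolutional layers), and only a logistic head is trained on toy data. All data and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: a FROZEN feature extractor.
# In practice this would be convolutional layers trained on a large
# dataset; here it is a fixed random projection (assumption).
W_frozen = rng.normal(size=(4, 8))

def features(x):
    return np.tanh(x @ W_frozen)  # frozen layers: never updated

# Toy binary task: two well-separated clusters in 4-D.
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# Only the final layer's weights are trained (the cheap part).
w, b = np.zeros(8), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(features(X) @ w + b)))  # logistic head
    grad = p - y
    w -= 0.1 * features(X).T @ grad / len(y)      # update head only
    b -= 0.1 * grad.mean()

acc = ((1 / (1 + np.exp(-(features(X) @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because the backbone never changes, training touches only an 8-weight head instead of the whole network, which is exactly why transfer learning is cheap enough for on-device adaptation.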
Hardware acceleration for AI
Hardware acceleration for AI at the microprocessor level is an emerging area, but it will see a growing uptake in Robotics in the near future.
Reinforcement learning
Reinforcement learning provides Robotics with a framework to design and simulate sophisticated and hard-to-engineer behaviours. The relationship between the two disciplines has enough promise to be likened to that between physics and mathematics. A Survey of Reinforcement Learning in Robotics offers more insights.
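A minimal sketch of the reinforcement learning framework: tabular Q-learning on a one-dimensional corridor, where a "robot" learns by trial and error to move towards a goal cell. The corridor, rewards, and hyperparameters are toy assumptions, far from robotics-grade simulation.

```python
import random

# The robot starts at cell 0 and receives reward +1 for reaching
# cell 4, which ends the episode. Actions: move left or right.
N, GOAL = 5, 4
ACTIONS = [-1, +1]
Q = [[0.0, 0.0] for _ in range(N)]  # Q[state][action]
random.seed(0)

for _ in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < 0.3:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: Q(s,a) += alpha * (r + gamma*max Q(s') - Q(s,a))
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N)]
print(policy)  # learned policy: move right (action 1) in non-goal cells
```

The same loop structure (observe state, pick action, receive reward, update a value estimate) underlies the far richer policies used for robot locomotion and manipulation.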
GANs
Generative Adversarial Networks (GANs) can be used to generate additional (especially image) data. This helps in areas where data is hard to come by, or where similar data is needed for training but is not available. GANs could thus be used in training Robots.
Mixed reality
Mixed reality is also an emerging domain. In Robotics it is mainly used in Programming by Demonstration (PbD): PbD creates a prototyping mechanism for algorithms using a combination of physical and virtual objects.
Emotion research – affective computing
Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects (emotions). It is an interdisciplinary field spanning computer science, psychology, and cognitive science. Current research originated with Rosalind Picard's 1995 paper on affective computing. One motivation for the research is the ability to simulate empathy in AI. For an example of this technology in use, see the Emotion Research Lab.
As we have seen above, Robotics provides a complex and emerging platform for learning many aspects of AI. We are working on some of these areas at the AI lab in London.
Image source: Dobot Magician – we are using it in the AI labs