This article is a follow-up to my previous one on the rise of Artificial Intelligence in the enterprise. In this article, I will talk about how enterprises in Transportation, Retail, and Healthcare are transforming themselves using AI.
Although I talk about specific enterprises here, the use cases are fairly generic and horizontal. They range from transforming back-office applications to bringing compassion back into healthcare, detecting fraud, and the future of autonomous cars. The fraud detection use case, for example, appeals to a large number of verticals, including eCommerce, financial, and retail environments where monitoring financial transactions and/or user behavior is essential. The back-office function is an integral part of any enterprise organization, and most of the 800 lb gorillas will see its applicability.
Lyft, an urban transportation company, has been fast catching up to Uber in branding as well as market share. Gil Arditi, Head of Product, Machine Learning @ Lyft, gave fascinating insights into how Lyft is using AI/ML.
The machine learning team at Lyft is tasked with solving a diverse set of problems for the core as well as the autonomous business, including spot pricing, ride scheduling, fleet operations, and obstacle avoidance. To solve these problems, the ML team at Lyft uses a variety of models, including decision trees, neural networks, ARIMA, multi-armed bandits, support vector regression (SVR), and quadratic programming. The choice of model depends on the problem at hand and how effectively the model solves it at scale.
Gil discussed fraud detection, one of the team's fundamental problems, and the journey his team went through to solve it. Let's start with the problem definition.
The fraud detection problem definition is simple: How does Lyft identify that a user requesting the ride is a fraudster?
Why is fraud detection important?
The intent of fraud may vary from unauthorized payment (use of a stolen credit card or the like) to biasing supply/demand by generating transient ride requests. If a fraudster goes undetected, it can lead to lower revenue or longer wait times for customers.
To solve this problem, Lyft started to feed user actions and contextual user information into a Gradient Boosted Decision Tree (GBDT) model. The inputs ranged from location, source IP address, payment type, and ride length to the user's historical ride consumption patterns. Due to the large number of inputs, the model was well suited to run in batch mode.
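To make the batch setup concrete, here is a minimal sketch of how such a feature vector might be assembled before being handed to a GBDT. The field names and thresholds are illustrative assumptions, not Lyft's actual schema.

```python
# Hypothetical feature assembly for a batch fraud model.
# All field names below are made up for illustration.

def build_features(ride_request, user_history):
    """Flatten a ride request plus the user's ride history into model inputs."""
    return {
        "payment_type": ride_request["payment_type"],
        "ride_length_km": ride_request["ride_length_km"],
        # Does the source IP's country match the pickup country?
        "ip_country_matches_pickup": int(
            ride_request["ip_country"] == ride_request["pickup_country"]
        ),
        # Historical consumption pattern, summarized as simple aggregates.
        "rides_on_record": len(user_history),
        "avg_ride_length_km": (
            sum(r["ride_length_km"] for r in user_history) / len(user_history)
            if user_history else 0.0
        ),
    }

features = build_features(
    {"payment_type": "card", "ride_length_km": 4.2,
     "ip_country": "US", "pickup_country": "US"},
    [{"ride_length_km": 3.0}, {"ride_length_km": 5.0}],
)
# Rows like this would then be trained on in batch with a gradient boosted
# decision tree library (e.g. scikit-learn's GradientBoostingClassifier).
```

The point of the sketch is the batch shape of the pipeline: history must be pre-aggregated into fixed features, which is exactly what becomes a bottleneck in the real-time scenario described below.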
Challenge and Evolution
The team faced three challenges with the overall solution.
1. Decision trees required a lot of inputs and tuning. The feature engineering needed to express interactions between input features in the tree also resulted in a loss of signal.
2. A typical fraudster's behavioral pattern is to "sting and run away." Recent activity, therefore, is more relevant to fraud than historical activity. This requires catching input signals in real time, as there is not enough time to recompile/retrain the decision tree model with new inputs.
3. The amount of time spent on modeling and creative thinking was minimal compared to data engineering. The software engineering and operations teams focused on corner scenarios, scaling, maintenance, and cleanup, while the data science team tinkered with small models in a lab environment. Getting consistent results in production and test, plus shortening the time it takes to put a new model into production, was a critical factor.
These problems demanded a different model and a new process.
The decision trees were replaced with deep learning Recurrent Neural Network (RNN) models for the fraud detection problem.
Making decisions in real time (with features like the user log) meant that traditional feature engineering couldn't scale. Detecting user behavior from a string of inputs in real time is no different from a Natural Language Processing (NLP) problem. The interpretability of any deep learning model is lower than that of a tree model, but overall it brought better outcomes (higher precision and recall).
As far as process goes, Lyft now has a fully functional AIOps model that helps to align all needed stakeholders (Data Science, Operations, Software engineering), ultimately helping to launch new/refined AI models faster.
Yazdi Balgi, SVP, Global Business Services and Emerging Technologies @ Walmart, highlighted a set of problems his team is solving for Walmart's back-office (also called shared services) applications. Traditionally, Walmart has used automation capabilities like Robotic Process Automation (RPA), which can help with everyday office tasks such as digitizing documents. RPA systems leverage rules-based automation with visual and scripted flowcharts to automate back-end service processes.
Walmart's back office processes 200 million+ accounts receivable and 2.3 million+ employee payrolls.
Naturally, even a small process improvement turns into huge savings at this volume. The team noticed that the bottlenecks are typically in inputs and exception handling. AI takes RPA to the next level by automating decision making for exception-handling cases. For Walmart, ~85% accuracy on an AI algorithm serves as the trigger point for taking automated decisions. Additionally, the tasks now appear simple because there is data to prove the process efficacy. Yazdi shared an example of how the sales tax refund and audit process was improved using AI and Big Data. Earlier, only a sample set of tax items was audited. The sampling resulted in a lot of audits from the government.
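The ~85% trigger point described above amounts to a simple routing rule, which can be sketched as follows. The function name and the exact comparison are my assumptions; only the threshold figure comes from the talk.

```python
# Hypothetical sketch of confidence-threshold routing: above the trigger
# point, the decision is automated; below it, the case goes to a human.

AUTO_DECISION_THRESHOLD = 0.85  # ~85% accuracy trigger mentioned in the talk

def route_decision(model_confidence):
    """Decide who handles a case given the AI model's confidence (0.0-1.0)."""
    if model_confidence >= AUTO_DECISION_THRESHOLD:
        return "automated"
    return "human_review"
```

The design point is that AI does not replace the RPA pipeline wholesale: it only absorbs the high-confidence exceptions, leaving ambiguous cases to people.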
AI helped to scan the universe of audits versus just the sampled ones.
The full universe scan empowers relevant staff members with machine-generated quantitative and qualitative decisions to aid their human judgment or, in many cases with reasonable accuracy, to default to machine decisions. On a question about whether Walmart is using Blockchain, Yazdi's response was very revealing.
Machine learning and Artificial Intelligence are highly underestimated. Blockchain is overestimated.
Do you agree with Yazdi’s assertion?
Roy Smythe, Global Chief Medical Officer, Strategy and Partnerships @ Philips, gave an emotionally charged talk about the state of healthcare as it stands today and how AI can come to the rescue to make healthcare more humane. Traditional healing moved from compassion to faith-based to herbal therapies and culminated in what he called a medical industrial complex. Funding and technology advancement created a vicious loop that has led to physician burnout. The resulting outcome is that primary care is dehumanized: 29 of every 36 minutes go into documentation (health records and insurance-related overheads).
A patient typically gets only 7 minutes of real humane care per visit, which barely provides the essential human touch integral to the discipline of primary care.
How does AI help with this problem? Roy highlighted three areas where data and analysis can improve physicians' efficiency and ultimately free them to focus on the human aspects of primary care.
- Image Recognition/ Computer Vision to help scan X-ray or MRI images and measure lesion sizes, eliminating manual and error-prone work
- Data mining to compare patient records against a global database of patients with similar ailments, ages, and perhaps even genome patterns
- A self-health AI app that provides health tips (preventive or proactive) and could possibly reduce doctor visits by 50%, freeing physicians' time for the health issues that matter
I'd love to hear more about how AI/ML is transforming your business. Some of these examples and insights will be included in an upcoming book titled "AI in Business: 2019" from John Desmond, Editorial Director @ AI Trends. You can check out his first book, AI in Business: 2018, on Amazon.
(This blog is cross-posted on my LinkedIn)