
With AI It’s Adoption That Matters

Summary:  Measuring the degree and value of AI adoption among companies, or even countries, is hard.  Here’s a beginning proposal for how to get started.


We talk a great deal about whether there are enough data scientists to go around, whether our advances in AI techniques are better than others, whether there’s sufficient access to data, and whether our chips and clouds are up to the task.  But where the rubber really meets the road in achieving the promised benefits of AI is adoption: who has adopted the most, and who has adopted it to the greatest benefit.

This is true whether you’re trying to assess one company versus another or even one country versus another.  It’s adoption that translates the efficiencies of AI into better, faster, cheaper products and services, and adoption that translates into increased national wealth and, ultimately, into your personal well-being.

We’ve written in the past about how difficult it is to measure adoption.  There’s no end of organizations conducting surveys.  If a large company has implemented a chatbot in one operation, do we give it credit for adoption (as many surveys do)?  Do the folks who respond to these surveys even know what’s going on in other parts of their companies if those organizations are large and dispersed?

It seems at this point we can agree that essentially all companies, governments, and other public entities say they are or soon will be implementing some sort of AI.  The question that remains is how we evaluate the competitive impact of their actions.


Is China Ahead or the US?

The Wall Street Journal wrote a short article on February 18 under the banner “China Lags Behind in Corporate AI Adoption”.  They picked up this conclusion from a survey recently released by the research company Cognilytica titled “Global AI Adoption Trends & Forecasts”.  A free summary of the report is available, but the details of the report and its data are paywalled.  The report concludes that while China has a lead in some structural areas like government adoption (think countrywide facial recognition) and the availability of huge amounts of data (thanks to weak privacy regulations), adoption by major companies lags significantly behind the US.  That would be a truly valuable strategic insight if true.

In trying to corroborate that finding, I found my way to an equally authoritative study, “Government Artificial Intelligence Readiness Index 2019”, compiled by Oxford Insights and the International Development Research Centre.  Unfortunately, that study comes to exactly the opposite conclusion: that despite China’s relatively low AI readiness ranking worldwide (20th), adoption among companies is in fact strong and rapid.

We could parse the differences in these studies all day.  The Cognilytica study was an online survey with a 15% response rate (suggesting respondent self-selection bias) and with only 33 responses coming from all of Asia, of which China could only have been a fraction (a small sample size).  The Oxford Insights study was a metadata study drawn from extensive secondary-source research.  We’re not going to be able to resolve which of these is more correct here; the point is to illustrate how difficult adoption is to measure.


How to Categorize AI Opportunities

A starting point in evaluating the extent and quality of adoption would be to establish some uniform categories.  We could then look at a company’s projects, drop them into buckets, and compare the number and scope of those projects to, say, those of its direct competitors (a minimal tallying sketch follows the list).  Such an ontology might be as follows:

  1. Conversational systems (chatbots, translators, transcribers, and the like).
  2. Process automation based on true AI/ML, from simple RPA to fully autonomous processes.
  3. Computer vision recognition applications, ranging from simple tasks like identifying the enter button in a foreign-language app up to more advanced computer vision systems for quality control or robotic control.
  4. Hyper personalization, the classical ML application of classifying why they come, why they stay, why they leave, and what they will buy next, including of course recommenders.
  5. Time series forecasts.
  6. Anomaly detection of rare events for fraud or cybersecurity.
  7. Goal driven systems based on reinforcement learning.
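
To make the bucketing concrete, here is a minimal sketch in Python of how one company’s projects might be tallied against these seven buckets and laid side by side with a competitor’s.  The project names, category assignments, and the adoption_profile helper are hypothetical illustrations, not data from either study.

from collections import Counter

# The seven buckets from the ontology above.
CATEGORIES = [
    "Conversational systems",
    "Process automation",
    "Computer vision",
    "Hyper-personalization",
    "Time series forecasting",
    "Anomaly detection",
    "Goal-driven (reinforcement learning)",
]

def adoption_profile(projects):
    """Tally (project_name, category) pairs into the seven buckets,
    reporting zeros so two portfolios can be compared side by side."""
    counts = Counter(category for _, category in projects)
    return {cat: counts.get(cat, 0) for cat in CATEGORIES}

# Hypothetical portfolios -- purely illustrative, not survey data.
ours = [
    ("customer support chatbot", "Conversational systems"),
    ("invoice processing automation", "Process automation"),
    ("demand forecast", "Time series forecasting"),
]
competitor = [
    ("sales chatbot", "Conversational systems"),
    ("recommendation engine", "Hyper-personalization"),
    ("fraud screening", "Anomaly detection"),
]

print("Us:        ", adoption_profile(ours))
print("Competitor:", adoption_profile(competitor))

Raw counts like these would still need to be weighted by scope and value (ROI, risk, interdependence), but even a crude tally lets two companies, or two countries’ corporate sectors, be measured against the same yardstick.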


Where Should We Expect Companies to Start?

It’s not that any one of these seven categories is necessarily better than another, and individual projects should still be evaluated on a number of criteria such as ROI, risk of failure, and interdependence with other projects.  But there are some trends emerging in what constitutes ‘low hanging fruit’.  They may not be the same for all companies, but there is a certain commonality.

The Cognilytica study, generalizing over all responses, finds these trends:

  • Slightly more than half of respondents are or will be working shortly on some sort of conversational system (e.g. chatbots) and some sort of AI-powered process automation.
  • Where process automation blends over into fully autonomous systems there is decidedly less enthusiasm. Fully autonomous systems may be a step too far when you’re just starting out.
  • Similarly, we were not surprised to find that goal-driven systems based on reinforcement learning are the least likely to be implemented. RL is just not ready for prime time in most applications.
  • Classical predictive analytics for personalization was the next most likely, with time series forecasting and anomaly detection not far behind.

You can also begin to detect a pattern here in the motives of different sponsors: those most interested in cost reduction (operations), those most interested in increasing share and volume (marketing and sales), those wanting more accurate forecasting (finance, supply chain), and those defending against attack (financial fraud and cybersecurity).


Biggest Barriers

As a bonus, the Cognilytica study flags the major reason for not moving forward on an AI project: a logical combination of insufficient ROI justification and an assessment that the non-AI solutions already in place are good enough (cited about 40% of the time as a reason not to proceed).

This is only a tiny step toward a consistent method to measure the degree and value of AI adoption.  It’s a topic that should be top of mind for both business leaders and academics.

Other articles by Bill Vorhies


About the author:  Bill is Contributing Editor for Data Science Central.  He is also President & Chief Data Scientist at Data-Magnum and has practiced as a data scientist since 2001.  His articles have been read more than 2.1 million times.

[email protected] or [email protected]
