
A Practical Commercial Diligence Approach to Assess the Risk of Artificial Intelligence (AI) Disruption

The impact of constantly expanding applications of AI technology (Note 1) has many investors nervous about traditional human-labor-based businesses, from professional and technical services to administrative support. The sustainability of these businesses is a deep and challenging topic to assess, and it is made harder by the convoluted analytics that many consultants have proposed for assessing AI impact – data readiness, the existence and fixedness of rules in a process or system, and so on. These frameworks make for great books, such as the award-winning The Second Machine Age by Erik Brynjolfsson and Andrew McAfee, but they are nearly impossible to apply at the practical level required when conducting diligence on a real target acquisition or other investment. While the continual advancement of ground-breaking, transformer-based LLMs (ChatGPT, Bard by Google, etc.*) has everyone from customer service reps to principal investors concerned about disruption to their jobs, it still takes tremendous work to generate accurate, consistent outputs from AI. Assessing the risk of AI disruption is therefore both important and feasible.

So, how should you assess the likelihood of technology disruption in a practical way? When GRAPH is asked to assess the risk of AI disruption, we work through the following seven questions (which have not changed substantially in the last five years or so):

Assess the nature of the business's reason for being:

  1. What are the specific activities that create value from the customers’ perspective? List and define them at the most granular level possible.

  2. What is the nature of the problem being solved? Is it a deep or a broad problem?

  3. What do customers define as a “good-enough” solution?

  • How important is quality to customers? What do customers see as the difference between 80% solutions and 95% or 100% solutions in the market?

  • What percentage of activities, or what degree of completion on a particular activity, would customers generally consider to be “good enough”?

  • Is there any evidence of customers switching to a “good enough” solution in similar domains?

  • How regulated is the space?

  • Are customers facing growing competitive pressure or other economic forces and, if so, is there a threatening link between that pressure and the current “good enough” solution?

  4. Do applications exist today that accomplish any of those activities? Which ones?

  5. Do platforms exist that (if applied) could accomplish any of those activities? Which ones?

  6. What are the dependencies for the application and/or platform technology? Typical constraints include the need for cloud (vs. edge) processing, specifically formatted data, vast troves of training data, and custom data science and engineering work per installation.

  7. Who is backing the applications and platforms? Is R&D investment in the thousands, millions, or billions of dollars for each activity?

As you work through these seven questions, the degree of immediate threat should become much clearer. Bear in mind that in low-regulation industries, technologies have moved from platform to application to adoption in as little as 2 to 5 years; in high-regulation industries, technologies often take 10+ years to make the same journey.


Note 1: AI is a term so broadly defined by many a CIM (confidential information memorandum) that it threatens to lose its meaning. GRAPH defines AI as developer services or finished applications with the ability to (a) reason on unstructured data; (b) model non-linear inferences; (c) self-correct and improve on a regular basis; and, typically, (d) bridge traditional modes of communication.

*This article was updated in 2023 to reflect platforms like ChatGPT and Bard by Google.

Copyright © GRAPH Strategy LLC