
A Practical Guide to Grasp the Concept of AI Transformation

Written by Georges Caron | Oct 3, 2024 6:04:24 AM

Digital transformation has been replaced by AI transformation!

 

In November 2022, OpenAI publicly launched ChatGPT, making AI technology accessible to everyone, or at least to those with digital connectivity. It shook the world, sparking a race within the AI industry to develop the most advanced Large Language Model (LLM). In parallel, an important ethical debate started among AI experts on the opportunities and dangers of this not-so-new technology. Geoffrey Hinton went as far as quitting Google to warn about the potential dangers of these models. A few technological giants such as Meta and IBM, led amongst others by Yann LeCun, advocated making LLMs open source as a mitigation of the potential risks, whilst OpenAI warned against precisely that.

To understand this revolution, it is key to know that, even if most people discovered Generative AI with ChatGPT, the technology behind it was not new: the attention mechanism dates back to around 2015 and the transformer architecture to 2017. OpenAI's first smart move was to build a fantastic user experience, letting visitors chat with the AI system through a friendly interface. The leap in performance that lets the algorithm produce human-like answers came from the sheer amount of data and compute (more than 3 × 10²³ floating-point operations) used to train it. Figure 2 of [arXiv:2206.07682], which shows the non-linear improvement in accuracy on eight different tasks once models are trained with more than 10²² FLOPs - a phenomenon called emergence of abilities - illustrates nicely what happened.

 

What you need to know about AI transformation

Machine learning and foundation models

Historically, AI researchers took two possible approaches: (1) the rule-based expert approach - trying to encode all the rules in the program; (2) machine learning - training a model on a large number of examples. The second approach led to increasingly performant models able to solve increasingly complex tasks. However, the technology was limited by compute power and data availability. Both of these limitations were addressed externally: the gaming industry and its GPUs provided increased computing power, while social networks like Facebook contributed a surge in labeled data that could be used for training models.
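To make the contrast concrete, here is a toy sketch in Python (assuming scikit-learn; the examples and keywords are invented): the rule-based approach hard-codes its rules, while the machine-learning approach infers them from a handful of labelled examples.

```python
# Toy illustration of the two approaches (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# (1) Rule-based: a human encodes the rules explicitly.
def is_spam_rule_based(message: str) -> bool:
    return "free money" in message.lower() or "winner" in message.lower()

# (2) Machine learning: the model derives its own rules from labelled examples.
messages = ["Free money, click now", "Meeting at 10am", "You are a winner", "Lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

print(is_spam_rule_based("Claim your free money"))                           # True
print(classifier.predict(vectorizer.transform(["Claim your free money"])))   # e.g. [1]
```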

For specific tasks within a company, data was not always available. Transfer learning became an important concept in democratizing AI. It consists of fine-tuning or re-training a model that has already been trained on data for a similar task. For instance, to develop a model that categorizes mice and hedgehogs, AI developers could re-train or fine-tune a model that had been trained to categorize cats and dogs.
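A minimal sketch of that idea, assuming PyTorch and torchvision (the mouse-vs-hedgehog data loader is left as a placeholder): reuse a network pre-trained on ImageNet and retrain only a new two-class head.

```python
# Transfer-learning sketch with PyTorch/torchvision (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on ImageNet (which includes cats and dogs).
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pre-trained feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh 2-class head (e.g. mouse vs. hedgehog).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Training loop sketch: `train_loader` is a placeholder DataLoader over your labelled images.
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```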

Foundation models, which include LLMs and multi-modal models, take transfer learning a step further. These models are typically trained on one task - guessing a masked word in a sentence - and are then able to perform other tasks - translating a text, summarizing it - without retraining.
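As a small illustration of this versatility (assuming the Hugging Face transformers library and the public t5-small checkpoint), one pre-trained text-to-text model can handle several tasks, selected only by the instruction prefix, with no retraining:

```python
# One pre-trained model, several tasks, no retraining (illustrative sketch).
from transformers import pipeline

model = pipeline("text2text-generation", model="t5-small")

text = ("Foundation models are trained once on broad data and can then be "
        "steered towards many different tasks simply by changing the instruction.")

# Same model, different tasks, chosen via the prompt prefix.
print(model("summarize: " + text, max_length=30)[0]["generated_text"])
print(model("translate English to French: " + text)[0]["generated_text"])
```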

This capability to adapt to new tasks has deep implications: companies can leverage AI to support or even automate processes, increasing the efficiency, productivity and even the quality of their products and/or services. Another common application of AI is enhancing knowledge sharing within a company and with clients through advanced chatbots powered by Retrieval Augmented Generation (RAG).
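The mechanism behind such a Retrieval Augmented Generation chatbot can be sketched in a few lines (assuming the sentence-transformers library; `call_llm` is a placeholder for whichever LLM you choose): retrieve the documents most relevant to the question, then let the LLM answer using only that context.

```python
# Minimal RAG sketch (illustrative only).
from sentence_transformers import SentenceTransformer, util

documents = [
    "Our support desk is open Monday to Friday, 9:00-17:00.",
    "Invoices are payable within 30 days of the invoice date.",
    "Product returns are accepted within 14 days of delivery.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(documents, convert_to_tensor=True)

def answer(question: str) -> str:
    # 1. Retrieve: find the documents most similar to the question.
    question_embedding = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(question_embedding, doc_embeddings, top_k=2)[0]
    context = "\n".join(documents[hit["corpus_id"]] for hit in hits)

    # 2. Augment and generate: ground the LLM's answer in the retrieved context.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)  # placeholder for an API or open-source model call
```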

The choice of LLMs for your AI system is crucial

For example, revisiting the ethical debate from the introduction between open-source models (e.g., Llama 3, Mistral 7B) and closed/proprietary models (e.g., GPT-4, Gemini), several factors must be balanced: costs, ease of implementation, security, data confidentiality, transparency (in terms of training methods and data), and the potential for model tuning. Another key consideration is the size of the LLM: whether to use a very large, versatile model or a smaller one specialized for a particular task. Ultimately, the choice depends on the specific business case and your unique circumstances.

AI agents to completely rethink processes

Agents are autonomous entities that can interact with their environment. They can perceive the environment - a thermostat can read the temperature of a house - and act on it - a thermostat can cut the heating above a certain temperature. An AI agent is an agent equipped with an LLM and tools that allow it to perform more complex tasks than a traditional AI system could. It can, amongst other capabilities, interact with a database or search the web to answer a user's question.
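A minimal sketch of that loop (no particular framework is assumed; `call_llm`, `search_web` and `query_database` are placeholders): the LLM chooses a tool, the program executes it and feeds the result back, until the LLM produces a final answer.

```python
# Minimal tool-using agent loop (illustrative only; all external calls are placeholders).
import json

def search_web(query: str) -> str: ...      # placeholder tool
def query_database(sql: str) -> str: ...    # placeholder tool
def call_llm(prompt: str) -> str: ...       # placeholder for any LLM endpoint

TOOLS = {"search_web": search_web, "query_database": query_database}

def run_agent(user_question: str, max_steps: int = 5) -> str:
    history = [f"User question: {user_question}"]
    for _ in range(max_steps):
        # Ask the LLM for its next step as JSON, e.g.
        # {"tool": "search_web", "input": "..."} or {"answer": "..."}.
        prompt = "\n".join(history) + "\nReply with JSON: a tool call or a final answer."
        decision = json.loads(call_llm(prompt))
        if "answer" in decision:                                 # the agent is done
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["input"])      # act via the chosen tool
        history.append(f"Tool {decision['tool']} returned: {result}")
    return "No final answer within the step budget."
```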

AI agents can also collaborate with one another or with a human, forming human-agentic systems that allow companies to completely rethink the way they provide their services or products. Imagine, for instance, a system capable of understanding from a client request which documents and information it needs, gathering and preparing them so that one of the company's employees can respond to the request, greatly accelerating the process.

AI transformation is more than single-process improvement with AI

Foundation models present a wide range of opportunities for companies to explore. Companies might be tempted to list all their use cases - processes that could be improved or new business opportunities - and start by implementing the low-hanging fruit. This is certainly a solid approach, but at B12 Consulting, we believe that true AI transformation goes beyond simply compiling a list of use cases.

We believe that the AI revolution is a unique opportunity to define the essence of the service or product a company offers and to rethink how humans and AI agents can best collaborate to deliver it. We are convinced that a successful transformation will be based on a combination of such an AI vision and a down-to-earth list of use cases.