Scaling AI like a tech native: The CEO’s role

Based on a publication by McKinsey & Company, "Scaling AI like a tech native: The CEO's role."

Embedding AI across an enterprise to tap its full business value requires shifting from bespoke builds to an industrialized AI factory. MLOps can help, but the CEO must facilitate it.

What if a company built each component of its product from scratch with every order, without any standardized or consistent parts, processes, or quality-assurance protocols? Chances are that any CEO would view such an approach as a major red flag, one that prevents economies of scale and introduces unacceptable levels of risk, and would seek to address it immediately.


Yet every day this is how many organizations approach the development and management of artificial intelligence (AI) and analytics in general, putting themselves at a tremendous competitive disadvantage. Significant risk and inefficiencies are introduced as teams scattered across an enterprise regularly start efforts from the ground up, working manually without enterprise mechanisms for effectively and consistently deploying and monitoring the performance of live AI models.

Ultimately, for AI to make a sizable contribution to a company’s bottom line, organizations must scale the technology across the organization, infusing it in core business processes, workflows, and customer journeys to optimize decision making and operations daily. Achieving such scale requires a highly efficient AI production line, where every AI team quickly churns out dozens of race-ready, risk-compliant, reliable models. Our research indicates that companies moving toward such an approach are much more likely to realize scale and value—with some adding as much as 20 percent to their earnings before interest and taxes (EBIT) through their use of AI as they tap into the $9 trillion to $15 trillion in economic value potential the technology offers.

CEOs often recognize their role in providing strategic pushes around the cultural changes, mindset shifts, and domain-based approach necessary to scale AI, but we find that few recognize their role in setting a strategic vision for the organization to build, deploy, and manage AI applications with such speed and efficiency. The first step toward taking this active role is understanding the value at stake and what's possible with the right technologies and practices.

The highly bespoke and risk-laden approach to AI applications that is common today is partly a legacy of decade-old data science practices, which were necessary at a time when there were few (if any) readily available AI platforms, automated tools, or building blocks that could be assembled into models and analytics applications, and no easy way for practitioners to share work. In recent years, massive improvements in AI tooling and technologies have dramatically transformed AI workflows, expediting the AI application life cycle and enabling consistent and reliable scaling of AI across business domains.

A best-in-class framework for these ways of working, often called MLOps (short for "machine learning operations"), now enables organizations to take advantage of these advances and create a standard, company-wide AI "factory" capable of achieving scale.

In this article, we’ll help CEOs understand how these tools and practices come together and identify the right levers they can pull to support and facilitate their AI leaders’ efforts to put these practices and technologies firmly in place.

The bar for AI keeps rising

Gone are the days when organizations could afford to take a strictly experimental approach to AI and analytics broadly, pursuing scattered pilots and a handful of disparate AI systems built in silos. In the early days of AI, the business benefits of the technology were not apparent, so organizations hired data scientists to explore the art of the possible with little focus on creating stable models that could run reliably 24 hours a day. Without a focus on achieving AI at scale, the data scientists created “shadow” IT environments on their laptops, using their preferred tools to fashion custom models from scratch and preparing data differently for each model. They left on the sidelines many scale-supporting engineering tasks, such as building crucial infrastructure on which all models could be reliably developed and easily run.

Today, market forces and consumer demands leave no room for such inefficiencies. Organizations recognizing the value of AI have rapidly shifted gears from exploring what the technology can do to exploiting it at scale to achieve maximum value. Tech giants leveraging the technology continue to disrupt and gain market share in traditional industries. Moreover, consumer expectations for personalized, seamless experiences continue to ramp up as they are delighted by more and more AI-driven interactions.

Thankfully, as AI has matured, so too have the roles, processes, and technologies designed to drive its success at scale. Specialized roles such as data engineer and machine learning engineer have emerged to offer skills vital for achieving scale. A rapidly expanding stack of technologies and services has enabled teams to move from a manual, development-focused approach to one that is more automated, modular, and fit to address the entire AI life cycle, from managing incoming data to monitoring and fixing live applications. Start-up technology companies and open-source solutions now offer everything from products that translate natural language into code to automated model-monitoring capabilities. Cloud providers now incorporate MLOps tooling as native services within their platforms. And tech natives such as Netflix and Airbnb that have invested heavily in optimizing AI workflows have shared their work through developer communities, enabling enterprises to stitch together proven workflows.
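
To make the monitoring side of this stack concrete, here is a minimal sketch, in Python, of the kind of automated input-drift check these model-monitoring capabilities perform continuously. It is an assumption-laden illustration rather than any specific product's API: the function name, alerting threshold, and simulated data are all hypothetical, and it uses a standard two-sample Kolmogorov-Smirnov test to flag when live inputs stop resembling the training data.

```python
# Illustrative sketch only: the kind of automated input-drift check that
# model-monitoring services run against live models. The threshold, names,
# and simulated data here are hypothetical assumptions, not any vendor's API.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # hypothetical alerting threshold


def check_feature_drift(training_sample: np.ndarray,
                        live_sample: np.ndarray) -> bool:
    """Return True if live inputs no longer match the training distribution."""
    _statistic, p_value = ks_2samp(training_sample, live_sample)
    return p_value < DRIFT_P_VALUE


# Simulate one feature whose mean has shifted since the model was trained.
rng = np.random.default_rng(seed=42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)

if check_feature_drift(training_feature, live_feature):
    print("Input drift detected: alert the owning team or trigger retraining.")
```

In a typical setup, a managed monitoring service runs checks like this on every feature of every live model on a schedule, so drifting models are caught and retrained rather than silently degrading.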

Alongside this steady stream of innovation, MLOps has arisen as a blueprint for combining these platforms, tools, services, and roles with the right team operating model and standards for delivering AI reliably and at scale. MLOps draws on existing software-engineering best practices, known as DevOps, which many technology companies credit with enabling faster delivery of more robust, risk-compliant software that provides new value to their customers. MLOps is poised to do the same in the AI space by extending DevOps to address AI's unique characteristics, such as the probabilistic nature of AI outputs and the technology's dependence on the underlying data. MLOps standardizes, optimizes, and automates processes, eliminates rework, and ensures that each AI team member focuses on what they do best.
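
As a rough illustration of that standardization (a minimal sketch under assumed conventions, not McKinsey's framework or any particular platform's API), the pipeline below applies the same prepare, train, evaluate, and register sequence, with an automated quality gate, that an MLOps factory would impose on every model. The dataset, gate threshold, and registry path are illustrative assumptions.

```python
# A minimal, hypothetical sketch of a standardized MLOps pipeline stage
# sequence: prepare data, train, evaluate against a quality gate, and only
# then "register" the model artifact. Threshold and paths are assumptions.
from pathlib import Path

import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

ACCURACY_GATE = 0.90  # hypothetical release threshold applied to every model

# 1. Standardized data preparation: the same split protocol for every run.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# 2. Training via reusable, versioned pipeline code, not a one-off notebook.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))
model.fit(X_train, y_train)

# 3. A consistent evaluation protocol feeding an automated quality gate.
accuracy = accuracy_score(y_test, model.predict(X_test))
if accuracy < ACCURACY_GATE:
    raise RuntimeError(
        f"Accuracy {accuracy:.3f} is below the gate; deployment is blocked.")

# 4. Register the approved artifact where deployment and monitoring find it.
Path("model_registry").mkdir(exist_ok=True)
joblib.dump(model, "model_registry/example_model_v1.joblib")
print(f"Model registered with accuracy {accuracy:.3f}")
```

The specific model matters less than the repeatability: because every team ships through the same gates, no model reaches production as a bespoke, unmonitored build.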

This article was originally published by McKinsey & Company.
