Are companies better suited buying vendor solutions for AI, or making their own?

This is a question we have been thinking about for a while – at conferences, in some of our expert talks, and as we navigate the new world of generative AI: does it make more sense for enterprises to order up vendor systems, or to build new AI capabilities in-house?

Part of what we’re finding is that companies are balancing cost and effort issues with concerns about privacy and compliance.

We’ve also come up with some interesting solutions that I’ll talk about a bit later.

I had the honor of sitting down with a panel at Davos in the afternoon session, where we talked about that essential question: how do you figure out build versus buy?

“We’ve been using AI for many, many years,” said Archana Vohra Jain, CTO at Zurich Insurance. “We’re bringing in value for our business.”

She said right now, the company is solidly in the ‘buy’ column, partnering with startups on sustainable solutions.

Generative AI, she said, changes things completely; she pointed to a fast-evolving market where applications are gaining millions of users within weeks or months.

Panel (photos: John Werner, Stuart Anning)

“We want to make sure that we are able to make a difference, and go to market quickly,” she said, explaining her buy position. “There is a lot to be done apart from just building the model.”

Jain talked about looking at use cases to maximize value, and facilitating change management for the business.

We also talked with Siva Ganesan, Global Head of AI Cloud Offerings at Tata Consultancy Services.

“We see data as fuel for AI,” he said.

Ganesan described a strategy in which the company takes advantage of classical AI solutions and layers newer technology on top, based on what's unfolding now.

Over time, he said, there may be a trend toward smaller, more manageable models.

The other member of our panel, Daniela Rus, took that ball and talked about what these systems may look like. Rus directs MIT's CSAIL, and she is also doing important work on liquid neural networks. (Full disclosure – I am advising on this project as well.) As she pointed out, by creating artificial neurons that can handle time-series information, we are making possible smaller, more agile models that don't take as much data or compute to run.
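To make that idea concrete, here is a minimal sketch of the kind of neuron Rus is describing, based on the published liquid time-constant (LTC) formulation: the neuron's effective time constant depends on its input, so its dynamics adapt to the time series it sees. All parameter values and function names here are illustrative assumptions, not anything discussed on the panel.

```python
import numpy as np

def ltc_step(x, I, tau=1.0, A=1.0, w=0.5, b=0.0, dt=0.1):
    """One explicit-Euler step of a liquid time-constant style neuron.

    The ODE (after Hasani et al.'s LTC formulation) is:
        dx/dt = -[1/tau + f(x, I)] * x + f(x, I) * A
    where f is an input-dependent gate, so the leak rate itself
    changes with the input signal I.
    """
    f = 1.0 / (1.0 + np.exp(-(w * I + b)))  # input-dependent gate (sigmoid)
    dx = -(1.0 / tau + f) * x + f * A       # adaptive leak plus driven term
    return x + dt * dx

# Drive a single neuron with a short time series.
x = 0.0
for I in [0.0, 1.0, 1.0, 0.0]:
    x = ltc_step(x, I)
```

Because the state is bounded and the update is cheap, networks built from units like this can stay small enough to run on edge devices, which is the point Rus makes below.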

“All of these solutions are rooted in fundamental AI technology,” she said, “but the actual architectures and the needs of the systems vary around the problem and around the architecture.”

Right now, she said, generative AI requires lots of compute power, but she sees the day when sensitive data and safety-critical systems can be kept safely behind a firewall.

“We are beginning to see solutions that are not so huge,” she said, noting that some of these liquid neural network models can run on edge devices within a company’s core network. “We will empower enterprises to build their own solutions, with protected data.”

Jain said her industry is moving from a model of ‘repair and replace’, to one of ‘prevent and predict’.

“AI is making a big difference,” she said, citing research into weather phenomena, which drive a lot of claims.

Citing a strategy of creating "geographically-centric" models to handle things like evaluating multilanguage policies and products, she noted that better models will likely add speed. She also noted that in adoption, testing and training are important, along with making sure that a tool set and solution is the right fit.

“Is it bringing the right results?” she asked rhetorically.

Ganesan talked about the combination of AI and the cloud.

“You really need to have the necessary frameworks to consume AI responsibly, with the right guardrails and the right rules and regulations,” he said. “Compliance, security, legalities … and the cloud provides a segue … Five years ago, 10 years ago, conversations would be: ‘how do you define schemas, how do you model logical data structures and the like?’ – today, generative AI has opened the door to say: ‘it doesn’t matter, structured or unstructured, let me complement what you already have in terms of structured ways of interrogating data – let me build the probabilistic scenario for you’ … and I think that’s a new paradigm.”

Panelists agreed that the AI winter is coming to an end, and enterprises are getting lessons in value.

“Everything is up for grabs,” Ganesan said.

“The best way of predicting the future is inventing it,” Rus said, noting three criteria – new ideas, making existing solutions better, and working on applications. “AI can come in and help individuals, help companies, organizations, groups, AI can do so much for so many people.”

“It’s a very exciting time,” Jain added.

That’s a little bit of what we’re seeing as we move into the AI space more fully. In short, a lot of companies are buying right now because it’s easier – they don’t have the resources and wherewithal to jump in at the deep end and craft their own systems. That’s a generalization, but you see it across a lot of industries. What some of these panelists are pointing out is that you might have opportunities later in the game to move from a buy to a build model – and if you build it, as the old saying goes, they will come.