IBM Launches New WatsonX Foundation Models for Enterprise

With the launch of watsonx.ai and watsonx.data, the company is taking a platform approach to an AI workbench, allowing customers to deploy IBM, open-source or their own AI models.

A sign with the Watson and IBM logos.
Image: MichaelVi/Adobe Stock

IBM this week launched an AI platform that gives generative AI customers an option to stay within its ecosystem. Called watsonx, the platform, now generally available after a two-month beta, is designed for enterprises to build, tune, deploy and manage foundation models for talent acquisition, customer care, IT operations and application modernization.

It also gives the company a competitive position when compared to Amazon SageMaker Studio, Google Vertex AI, Microsoft Azure AI and Anthropic’s Claude large language model.

In May 2023, IBM first previewed and opened a waitlist for watsonx. Because it works with foundation models, a form of generative AI trained on terabytes of unstructured data, watsonx doesn’t need to repeatedly train on new data sets for each new function a model is assigned — a model can be transferred to any number of functions and tasks with minor tuning. The evolving versions of ChatGPT show how foundation models can be used to build conversational large language models.

SEE: Check out this cheat sheet on GPT-4 (TechRepublic)

To date, watsonx has been shaped by more than 150 users across industries participating in the beta and tech preview programs, with more than 30 of them sharing early testimonials, according to IBM.


Watsonx comprises a trio of foundational AI products

IBM said watsonx comprises a trio of generative AI model configurations:

  • The watsonx.ai studio for building and tuning foundation models, generative AI and machine learning.
  • The watsonx.data fit-for-purpose data store built on an open lakehouse architecture.
  • The coming watsonx.governance toolkit to enable AI workflows that are built with responsibility, transparency and explainability.

The company’s July 11 launch focused on watsonx.ai and watsonx.data; IBM will launch watsonx.governance later this year, said Tarun Chopra, IBM’s vice president of Product Management, Data and AI.

“On July 11, we [launched] the first two as SaaS services on IBM Cloud, with watsonx.data also available on AWS and on premises. These components work by themselves, but we are the only ones out there bringing them together as a platform,” he said.

Creating a data pipeline for generative AI

Chopra explained that watsonx.data is designed to help clients deal with the volume, complexity, cost and governance challenges around data used in AI workloads, letting users access cloud and on-premises environments through a single point of entry.

He said that, while watsonx.data is a lakehouse repository, rather like Databricks or Snowflake, that can stand on its own as an open-source repository, it is also a source of data for fine-tuning AI models, rather like a plugin.

SEE: Public or proprietary AI for business? Former Siri engineer Aaron Kalb weighs in (TechRepublic)

“You can, of course, connect that AI model to an S3 bucket or other cloud object storage where your data is located, or you can populate that data into a watsonx.data repository,” said Chopra. He added that in the latter case, if a user has data associated with an AI model, they can automatically load that data into the repository, which provides more functions and features than typical cloud object storage.

The company said watsonx.data uses fit-for-purpose query engines such as Ahana Presto and Apache Spark for wide workload coverage, spanning data exploration, data transformation, analytics, and AI model training and tuning.

“If you are bringing Excel files, JPEGs, other tables, web pages and so forth into the training set, you can house that in a watsonx.data instance and build in all of the lineage accordingly, because some of that you will have to provide to your consumers who are asking where the data is coming from,” said Chopra.
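Chopra’s point about lineage can be sketched in plain Python: a `record_lineage` helper that fingerprints each ingested file (an Excel sheet, a JPEG, a web-page snapshot) so downstream consumers can later ask where a piece of training data came from. The helper and its metadata fields are illustrative assumptions, not part of the watsonx.data API, which ships its own lineage tooling.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_lineage(path: Path, source: str, registry: list) -> dict:
    """Fingerprint an ingested file so its provenance can be audited later.

    Hypothetical helper -- watsonx.data exposes its own lineage features;
    this only illustrates the kind of metadata worth capturing per file.
    """
    data = path.read_bytes()
    entry = {
        "file": path.name,
        "source": source,                            # where the file came from
        "sha256": hashlib.sha256(data).hexdigest(),  # content fingerprint
        "bytes": len(data),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    registry.append(entry)
    return entry

# Example: ingest a small web-page snapshot into the training set.
registry = []
page = Path("snapshot.html")
page.write_text("<html><body>product specs</body></html>")
entry = record_lineage(page, source="https://example.com/specs", registry=registry)
print(json.dumps(entry, indent=2))
```

A content hash plus a source URL is often enough to answer the “where did this come from?” question Chopra raises, even before a full governance toolkit is in place.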

Watsonx offers a triptych of model sources

Chopra explained that watsonx is unique in the AI space because it has the flexibility of hybrid, multicloud deployment and the ability to take advantage of open-source technology (the platform runs on Red Hat OpenShift), including Hugging Face’s libraries and the thousands of open-source models already available through watsonx.

“Because there is no single big hammer to solve all problems, we are providing a lot of flexibility in watsonx.ai, a workbench where you can have three sources of deployment, three libraries that can come into play: IBM-supplied models, open-source models and customers’ own models,” said Chopra.

The company said the models support natural language processing tasks including question answering, content generation and summarization, text classification and extraction.

More IBM watsonx releases this year and next

IBM will offer graphics processing unit (GPU) options on IBM Cloud. These GPU options are designed to support large enterprise workloads, according to the company, which said it will deliver full-stack, high-performance, flexible, AI-optimized infrastructure for AI models later this year on IBM Cloud.

Also, the company said watsonx.data will use foundation models to give users the ability to use natural language to visualize and work with data.

The company said that over the next year, it will expand enterprise foundation model use cases beyond natural language processing and create 100 billion+ parameter models for targeted use cases. The governance capabilities will be aimed at helping organizations implement lifecycle governance, reducing risk and improving compliance, per IBM.
