GPU Alternative d-Matrix Raises $110 Million for AI Inference

Microsoft’s venture fund is among d-Matrix’s backers, investing in in-memory compute for AI and LLM inference.

A chip with an AI label embedded on a circuit grid.
Image: Shuo/Adobe Stock

Microsoft and other investors have poured $110 million into d-Matrix, an artificial intelligence chip company, Reuters reported on Tuesday. d-Matrix is notable because it focuses on chips for inference. Put simply, AI inference is the process of running an already-trained generative AI or large language model to produce predictions or responses from new inputs; it occurs after training.
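
To make the distinction concrete, here is a minimal, hypothetical sketch in plain Python (a toy model, not anything from d-Matrix’s stack): training fits a model’s parameters once, while inference repeatedly applies those fixed parameters to new inputs.

```python
# Rough illustration of training vs. inference with a hypothetical toy model
# (plain NumPy, not d-Matrix's software stack).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # training inputs
true_weights = np.array([2.0, -1.0, 0.5])
y = X @ true_weights + 0.1 * rng.normal(size=100)  # training targets

# Training: learn the weights from data (done once, compute-heavy).
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Inference: apply the already-trained weights to new inputs
# (done over and over in production, which is what inference chips accelerate).
new_input = np.array([[0.3, 1.2, -0.7]])
print(new_input @ weights)
```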

Focusing on inference gives d-Matrix a valuable niche and lets it avoid head-on competition with NVIDIA, the wide-ranging technology company that makes GPUs and system-on-chip units, among other hardware and software.

What is d-Matrix?

d-Matrix is a Silicon Valley-based company that produces compute platforms (chips) for generative AI and large language models. Its flagship product is Corsair, an in-memory compute engine for AI inference. The design’s ability to hold an AI model entirely in memory is novel and builds on d-Matrix’s previous Nighthawk, Jayhawk I and Jayhawk II chiplets.

What is d-Matrix building?

With the new round of funding, d-Matrix will work on commercializing Corsair. It wants to fix the problem of AI and LLM companies not having enough compute power to run the workloads they need, a constraint that is largely about memory rather than raw processing. To address this memory bottleneck, d-Matrix built chiplet-based digital in-memory compute (DIMC) platforms that can, the company says, reduce the total cost of ownership of the inference process.
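
As a rough, hypothetical illustration of that bottleneck (the numbers below are assumptions for illustration, not d-Matrix or NVIDIA specifications), generating each token of an LLM response means streaming essentially all of the model’s weights from memory, so memory bandwidth rather than arithmetic throughput often caps inference speed. Keeping weights in or near the compute fabric, as in-memory compute aims to do, targets exactly that limit.

```python
# Back-of-envelope sketch of the memory bottleneck in LLM inference.
# All figures below are assumptions for illustration, not vendor specs.
params = 70e9           # model parameters (assumed 70B-parameter LLM)
bytes_per_param = 2     # 16-bit weights
bandwidth = 2e12        # accelerator memory bandwidth, bytes/second (assumed ~2 TB/s)

# Generating each output token requires streaming roughly all of the weights
# from memory, so bandwidth, not raw arithmetic, often caps tokens per second.
bytes_per_token = params * bytes_per_param
tokens_per_second = bandwidth / bytes_per_token
print(f"~{tokens_per_second:.0f} tokens/s per request at this bandwidth")
```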

Corsair is expected to launch in 2024.

Why d-Matrix stands out in the AI chip landscape

d-Matrix stands out because chip-making is fiercely competitive and many smaller companies are struggling to find funding. NVIDIA has pressured many smaller companies and investors out of the AI chip market; in particular, NVIDIA’s dominance in both hardware and software makes it hard for other companies to squeeze in, Reuters said.

NVIDIA declined to comment on the investments in d-Matrix.

The $110 million comes from a Series B funding round involving investment firms Temasek and Playground Global, as well as M12, Microsoft’s venture capital fund. Prior to this, d-Matrix had raised $44 million in a funding round with Playground Global.

“The current trajectory of AI compute is unsustainable as the TCO to run AI inference is escalating rapidly,” said Sid Sheth, cofounder and CEO at d-Matrix, in a press release. “The team at d-Matrix is changing the cost economics of deploying AI inference with a compute solution purpose-built for LLMs, and this round of funding validates our position in the industry.”

“D-Matrix is the company that will make generative AI commercially viable,” Sasha Ostojic, partner at Playground Global, stated in the same press release.

“We’re entering the production phase when LLM inference TCO becomes a critical factor in how much, where, and when enterprises use advanced AI in their services and applications,” said Michael Stewart from M12, Microsoft’s Venture Fund, in the press release.

How chiplets fit into the global chip shortage

The generative AI industry, which has grown by leaps and bounds since ChatGPT’s public launch in November 2022, faces two major problems today. First, running generative AI is extremely costly: training an LLM could cost as much as $4 million as of March 2023.

Second, graphics processing units, which are required for AI training and which NVIDIA produces, can still be hard to find. They’re in such short supply that countries around the world are starting initiatives to boost their chip industries. For example, in early September, China put $40 billion toward its chip industry, although there’s no indication that the funding specifically targets generative AI or LLM products.

SEE: Here’s everything you need to know about the chip shortage, including why it started. (TechRepublic)

The DIMC engines and chiplets d-Matrix makes are alternatives to GPU-based hardware, so the company could be poised to address a major industry problem.
