Apple is Working on Running AI on iPhones and iPads

Apple researchers also helped develop a technique for creating realistic-looking avatars from videos.

Apple has released two research papers expanding the possibilities of generative AI. One paper solves a problem that was preventing large language models from running on devices with limited dynamic random access memory (DRAM). The paper doesn’t specify iPhones and iPads, but it’s likely Apple will try to implement this technique on its own devices.

A second paper describes “Human Gaussian Splats,” a technique for generating 3D avatars from single-camera videos, which could be used to create avatars for virtual meetings or to let consumers try on clothes before purchasing them from online retailers.

Blending an LLM inference cost model with flash memory

As more and more companies work on adding LLM-powered capabilities to apps, they need those apps to run natively on devices. One obstacle has been that LLMs’ “intensive computational and memory requirements present challenges, especially for devices with limited DRAM capacity,” Apple researchers wrote in the paper “LLM in a flash: Efficient Large Language Model Inference with Limited Memory.”

The researchers found they could run LLMs requiring up to twice the available DRAM by storing the model on flash memory and applying two techniques they call “windowing” and “row-column bundling.” Windowing reduces DRAM demand by reusing the neurons activated for recent tokens rather than loading fresh ones at every step. Row-column bundling increases the size of the chunks of data read from flash memory, making each read more efficient.
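As a rough illustrative sketch only (not the paper’s actual implementation; the function names and data layout here are hypothetical), the two ideas can be mocked up like this: bundling stores each feed-forward neuron’s up-projection row and down-projection column contiguously so it can be fetched in one larger read, and windowing fetches from flash only the neurons not already resident from recent token steps.

```python
import numpy as np

def bundle_rows_columns(W_up, W_down):
    """Row-column bundling sketch: store the i-th row of the up-projection
    and the i-th column of the down-projection side by side, so loading one
    neuron's weights from flash is a single larger contiguous read instead
    of two small scattered ones."""
    # W_up: (num_neurons, d_model); W_down: (d_model, num_neurons)
    return np.concatenate([W_up, W_down.T], axis=1)

def neurons_to_load(active_history, new_active, window=3):
    """Windowing sketch: treat neurons used in the last `window` token steps
    as already resident in DRAM, and fetch from flash only the neurons that
    are newly activated for the current token."""
    resident = set().union(*active_history[-window:]) if active_history else set()
    return sorted(set(new_active) - resident)

# Hypothetical tiny example: 4 neurons, model dimension 3.
W_up = np.arange(12).reshape(4, 3)
W_down = np.arange(12).reshape(3, 4)
bundles = bundle_rows_columns(W_up, W_down)   # shape (4, 6): one row per neuron

# Neurons {1, 2} and {2, 3} were active for the last two tokens,
# so of the newly active {2, 3, 5} only neuron 5 needs a flash read.
to_fetch = neurons_to_load([{1, 2}, {2, 3}], [2, 3, 5])
```

The design point both tricks share is matching access patterns to flash hardware, which favors fewer, larger, sequential reads over many small random ones.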

Both of these techniques are critical to “constructing an inference cost model that harmonizes with the flash memory behavior, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks,” the researchers wrote.

Business use cases for more efficient LLM operation on DRAM

This development may trickle down into business use cases as well as consumer ones, since it will allow LLMs to run on smaller, memory-constrained edge or field service devices. The researchers state their work “sets a precedent” for further research, which might include optimizing generative AI for relatively small devices and vice versa. In particular, it might make it easier for Apple to launch generative AI on iPhones and iPads.

Human Gaussian Splats create realistic 3D avatars

The second paper, which was written by researchers from Apple, the Max Planck Institute for Intelligent Systems and ETH Zurich, describes a method of producing 3D avatars. The researchers started with a short, single-camera video and from it generated a 3D avatar using a neural rendering framework called Human Gaussian Splats. Previous video-to-3D conversion, like that used in some films, required multiple cameras along with substantial compute power and human effort.

Using 50-100 frames of a video, Human Gaussian Splats can generate brand-new poses and movements for the avatar. The neural rendering framework is generative in that it “fills in” parts of the human body that may not have been fully captured in the video.

Possible business use cases for video-to-avatar capabilities

The researchers propose a wide variety of uses for their avatars, including “AR/VR, visual effects, visual try-on (and) movie production.” While virtual avatars for business meetings like those proposed by Meta haven’t been popular, retailers continue to experiment with letting customers enter a virtual changing room to see the clothes on their own bodies. Creating 3D avatars more efficiently and with less processing power may ease that process.

Apple’s progress in the generative AI space

Both papers show that, while Apple may not have generative AI products today that are as high-profile as Microsoft’s Copilot or OpenAI’s ChatGPT, Apple still has a hand in the generative AI space. These findings could eventually be incorporated into Siri, Apple’s voice-based assistant that resides on laptops, tablets and phones.

Note: Apple has not replied to TechRepublic’s request for comment.
