Ultra-efficient machine learning transistor cuts AI energy use by 99%

Machine learning uses so much computing power and energy that it’s typically done in the cloud. But a new microtransistor, 100 times more efficient than the current tech, promises to bring new levels of intelligence to mobile and wearable devices.

Researchers at Northwestern University have presented their new nano-electronic device in a paper published in the journal Nature Electronics. It’s designed to perform the task of classification – that is, analyzing large amounts of data and attempting to label the significant bits – which is the backbone of many machine learning systems.
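Purely for illustration – and not the team’s hardware mechanism – the toy sketch below shows what classification means in software: a new sensor reading gets the label of the closest previously labeled example. The feature values and labels here are hypothetical.

```python
# Toy illustration of classification: label a new sensor reading by finding the
# closest previously labeled example (1-nearest neighbor). Purely illustrative;
# this is not the paper's method, and the feature values are made up.
from math import dist

# Hypothetical labeled examples: (feature vector, label)
training_data = [
    ((0.9, 0.1), "normal"),
    ((0.2, 0.8), "abnormal"),
]

def classify(reading):
    """Return the label of the closest training example."""
    closest = min(training_data, key=lambda example: dist(example[0], reading))
    return closest[1]

print(classify((0.85, 0.2)))  # -> "normal"
```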

“Today, most sensors collect data and then send it to the cloud, where the analysis occurs on energy-hungry servers before the results are finally sent back to the user,” said Northwestern’s Mark C. Hersam, the study’s senior author. “This approach is incredibly expensive, consumes significant energy and adds a time delay. Our device is so energy efficient that it can be deployed directly in wearable electronics for real-time detection and data processing, enabling more rapid intervention for health emergencies.”

Where existing transistors tend to be made from silicon, these new ones are built from two-dimensional sheets of molybdenum disulfide combined with one-dimensional carbon nanotubes. This construction allows them to be quickly tuned and reconfigured on the fly, so they can be used for multiple steps in the data-processing chain, where traditional transistors can only perform one step each.

“The integration of two disparate materials into one device allows us to strongly modulate the current flow with applied voltages, enabling dynamic reconfigurability,” explains Hersam. “Having a high degree of tunability in a single device allows us to perform sophisticated classification algorithms with a small footprint and low energy consumption.”

In testing, these tiny “mixed-kernel heterojunction transistors” were trained to analyze publicly available ECG datasets and label six different types of heartbeats: normal, atrial premature beat, premature ventricular contraction, paced beat, left bundle branch block beat and right bundle branch block beat.
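The “mixed-kernel” in the name refers to combining different kernel functions within the classifier. As a rough software analogy only – assuming a weighted blend of a Gaussian and a sigmoid kernel, with made-up weights, parameters and synthetic data rather than anything from the paper – a conventional mixed-kernel classifier might be sketched like this:

```python
# Rough software analogy of a "mixed-kernel" classifier: blend a Gaussian (RBF)
# kernel with a sigmoid kernel and feed the combination to a support vector
# machine. Illustrative only; the weights, parameters and data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, sigmoid_kernel

def mixed_kernel(X, Y, alpha=0.5, gamma=0.1):
    """Weighted sum of a Gaussian (RBF) kernel and a sigmoid kernel."""
    return alpha * rbf_kernel(X, Y, gamma=gamma) + (1 - alpha) * sigmoid_kernel(X, Y, gamma=gamma)

# Hypothetical ECG feature vectors (e.g. beat intervals, amplitudes) with labels
# 0..5 standing in for the six heartbeat classes described above.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(120, 8))
y_train = rng.integers(0, 6, size=120)

clf = SVC(kernel=mixed_kernel).fit(X_train, y_train)
print(clf.predict(X_train[:5]))
```

The researchers’ pitch, per the quote above, is that the device’s tunable response takes the place of this kind of digital kernel computation.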

Across 10,000 ECG samples, the researchers correctly classified abnormal heartbeats with 95% accuracy using just two of these microtransistors, where the current machine learning approach would require more than 100 traditional transistors – and they did it with around 1% of the energy.

What does it mean? Well, once this tech gets to production – and there’s no word on when that might be – small, lightweight, battery-powered mobile devices will gain the intelligence to run machine learning over their own sensor data. They’ll deliver results more quickly than if they had to send chunks of data to the cloud for analysis, and the personal data they collect on you will stay local, private and secure.

It’s unclear whether this gear will only be useful for portable devices, whether it can handle video data, or whether this work could filter through into larger machine learning and AI equipment. A hundredfold drop in electricity consumption would be a massive step forward in large model training, for example.

Energy use, and the associated emissions, are skyrocketing as companies worldwide rush to train insanely huge language models and multimodal AIs. Even back in 2021, 10-15% of Google’s entire energy budget was spent on AI, and you can bet that percentage has grown significantly since. A company manufacturing chips that can equal the performance of Nvidia’s top AI cards while using 1% of the energy might just do alright for itself.

That seems unlikely; the team sticks to talking about mobile devices in its press release. Still, it’s another step forward in computer intelligence, one that could unlock another wave of smarter devices. The cascading pace of change continues to accelerate.

The research is available in the journal Nature Electronics.

Source: Northwestern University
