MIT boffins make AI chips a million times faster than brains • The Register

In brief In the early days of AI research, it was hoped that once electronics equaled the ability of human synapses, many problems would be solved. We've now gone way beyond that.

A team at MIT reports that it has built AI chips that mimic synapses but run a million times faster, and are massively more energy efficient, than current designs. The inorganic material is also easy to fit into existing chip-building kit.

“Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft,” said lead author and MIT postdoc Murat Onen.

“The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn’t damage anything, thanks to the small size and low mass of protons. It is almost like teleporting.”

Now that’s some intelligent design.

Why results from machine learning models are difficult to reproduce

Princeton computer scientists Sayash Kapoor and Arvind Narayanan blame data leakage and inadequate testing methods for making machine-learning research difficult for other scientists to reproduce, and say these problems are part of the reason results seem better than they are.

Data leakage occurs when data used to train an algorithm leaks into its test set: when performance is assessed, the model seems better than it actually is because, in effect, it has already seen the answers to the questions. Machine-learning methods can also seem more effective than they are simply because they aren't tested in more robust settings.
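The effect is easy to demonstrate. Below is a minimal toy sketch (not from the paper; the dataset, the deliberately dumb memorizing "model", and all names are illustrative inventions): when test examples also appear in the training set, even a model with no real predictive skill scores perfectly.

```python
# Toy data: the label is 1 when the feature exceeds 50.
data = [(x, int(x > 50)) for x in range(100)]
train, test = data[:80], data[80:]  # disjoint split

def memorizing_model(train_set):
    """A 'model' that only memorizes exact training examples,
    falling back to the training-set majority class otherwise."""
    lookup = dict(train_set)
    majority = round(sum(y for _, y in train_set) / len(train_set))
    return lambda x: lookup.get(x, majority)

def accuracy(model, test_set):
    return sum(model(x) == y for x, y in test_set) / len(test_set)

# Leaky evaluation: the test examples were also used for training,
# so the memorizer just looks the answers up.
leaky_acc = accuracy(memorizing_model(train + test), test)

# Proper evaluation: the model has never seen the test examples,
# and its total lack of generalization is exposed.
proper_acc = accuracy(memorizing_model(train), test)

print(leaky_acc, proper_acc)  # 1.0 vs. a much lower score
```

The gap between the two numbers is exactly the kind of inflation Kapoor and Narayanan describe: the leaky score says nothing about how the model behaves on genuinely unseen data.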

An AI algorithm trained to detect pneumonia in chest X-rays from older patients might be less accurate when it's run on images from younger patients, for example, Nature reported. Kapoor and Narayanan believe practitioners need to clearly describe how the training and testing datasets do not overlap.
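One lightweight way to back up such a claim of non-overlap is to check it mechanically. This is a sketch of my own, not a procedure from the paper: split on a stable identifier (say, a patient ID, so two X-rays of the same person can't land on both sides) and fail fast if the splits share any.

```python
def check_no_overlap(train_ids, test_ids):
    """Raise if any identifier appears in both splits."""
    shared = set(train_ids) & set(test_ids)
    if shared:
        raise ValueError(
            f"{len(shared)} ids leak between splits, e.g. {sorted(shared)[:5]}"
        )

# Splitting by patient rather than by image keeps records from the
# same person on one side of the split only.
check_no_overlap(["p1", "p2", "p3"], ["p4", "p5"])  # passes silently
```

A check like this, run as part of the evaluation pipeline, is cheap evidence for the kind of train/test separation the authors want documented.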

Models aren't sufficient by themselves, however; the code needs to be readily available too, they argued in a paper [PDF] released on arXiv.

AI contract between Palantir and US Army Research Lab extended

The US Army Research Lab has extended its contract with Palantir, worth $99.9 million over two years, to continue developing AI technologies for its combatant commands.

Both parties began working together in 2018. Palantir’s software is used to build and manage data pipelines for platforms used by the Armed Services, combatant commands, and special operators. These resources, in turn, power machine learning systems deployed by various military units for combat.

“We’re looking forward to fielding our newest ML, Edge, and Space technologies alongside our US military partners,” Shannon Clark, senior veep of Innovation, said in a statement.

“These technologies will enable operators in the field to leverage AI insights to make decisions across many fused domains. From outer space to the sea floor, and everything in-between.” ®
