Accelerator chips tailored for deep learning place large numbers of computing elements near, or even inside, memory to parallelize the massive computational load.
By Ron Wilson
What’s at stake?
In-memory computing, an old and controversial way of organizing computer hardware to minimize energy consumption and maximize performance, has never quite broken into the mainstream outside of a few very specific applications. But the needs of edge-computing AI may provide an opening for a unique embodiment of this architectural idea.