Processing-in-Memory (Image: Samsung)
By Peter Clarke
What’s at stake:
Processing-in-memory (PIM) has not yet lived up to its potential, due to the market strength of incumbent computing architectures and the cost efficiencies of keeping logic and memory manufacturing separate. A number of startups aiming their AI processors at the "edge" are now using variations on the PIM approach within embedded memory on their system-on-chip processors, but most use CMOS logic processes and so cannot target the densest form of memory, DRAM. If Samsung and SK Hynix can standardize DRAM components that include PIM, the technique could transform AI processing.
South Korea's semiconductor leaders Samsung Electronics and SK Hynix are working together to standardize processing-in-memory (PIM) in the form of an LPDDR6-PIM DRAM, according to Business Korea. Such a chip could handle some of the highly parallel mathematical operations that underpin machine learning, neural networks and artificial intelligence (AI).
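To see why in-memory compute suits these workloads, consider the matrix-vector multiply at the core of a neural-network layer: each output element is an independent multiply-accumulate over a row of weights, so the work can be split across memory banks that each hold a slice of the data. The Python sketch below is purely conceptual; the bank count and function names are illustrative assumptions, not details of Samsung's or SK Hynix's LPDDR6-PIM design.

```python
import numpy as np

# Hypothetical number of DRAM banks, each imagined to have its own
# multiply-accumulate unit next to its portion of the weight matrix.
BANKS = 16


def pim_style_matvec(weights: np.ndarray, activations: np.ndarray) -> np.ndarray:
    """Compute weights @ activations by splitting the rows across 'banks',
    mimicking how per-bank compute units could each process their local
    slice of the data instead of shipping it all to the host processor."""
    rows_per_bank = np.array_split(weights, BANKS, axis=0)
    # Each "bank" produces a partial result from the rows it holds.
    partial_results = [bank_rows @ activations for bank_rows in rows_per_bank]
    return np.concatenate(partial_results)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((1024, 512)).astype(np.float32)
    x = rng.standard_normal(512).astype(np.float32)
    # The banked computation matches an ordinary matrix-vector multiply.
    assert np.allclose(pim_style_matvec(w, x), w @ x, atol=1e-3)
    print("banked result matches reference matvec")
```

The point of the sketch is the data-movement argument: only the small activation vector and the partial results cross the memory interface, while the large weight matrix stays where it is stored, which is the traffic reduction PIM proponents cite for AI inference.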