News

Multicore Chips Hitting 'Memory Wall,' Report Says

The silicon industry's plans to use multiple processor cores on the same chip to improve computing performance may have hit a bump on the supercomputing highway. The technique, called multicore processing, may face limitations due to memory-handling issues, according to a Sandia National Laboratories study described in a November IEEE article.

Expectations for the multicore technique are high, since chip clock speeds are no longer getting any faster. Microsoft, for one, placed a spotlight on multicore development techniques at its Professional Developers Conference in Los Angeles in October.

According to the IEEE account, tests conducted on 8-, 16- and 32-core microprocessors at Sandia National Laboratories in New Mexico produced some "distressing results."

Sandia scientists hit a performance wall at about eight cores, after which "there's no improvement," said James Peery, director of computation, computers, information and mathematics at Sandia. In fact, he said, performance actually degraded so that at 16 cores "it looks like two."

The problem is especially vexing for informatics applications used in national security work, and the scientists have discussed the study results with chipmakers, according to the IEEE article.

The crux of the problem is a "memory wall" that creates a "disparity between how fast a CPU can operate on data and how fast it can get data," the article explained. The number of cores may keep increasing, but the number of connections between the chip and memory does not keep pace, leaving the additional cores starved for data.
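
To make that disparity concrete, the short C/OpenMP sketch below runs a STREAM-style "triad" loop, a standard memory-bandwidth-bound kernel, at increasing thread counts. It is an illustrative example only, not code from the Sandia study or the IEEE article; on a typical multicore machine the measured bandwidth climbs for the first few threads and then flattens, no matter how many additional cores join in.

    /* triad.c -- illustrative memory-wall demo (hypothetical example).
       Build: cc -O2 -fopenmp triad.c -o triad */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N (1 << 25)   /* 32M doubles per array: far larger than any cache */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        /* Double the thread count each pass and report effective bandwidth. */
        for (int threads = 1; threads <= omp_get_max_threads(); threads *= 2) {
            omp_set_num_threads(threads);
            double start = omp_get_wtime();

            /* Triad: three memory streams per iteration, almost no arithmetic,
               so the memory bus -- not the cores -- sets the speed limit. */
            #pragma omp parallel for
            for (long i = 0; i < N; i++)
                a[i] = b[i] + 3.0 * c[i];

            double secs = omp_get_wtime() - start;
            double gbytes = 3.0 * N * sizeof(double) / 1e9;
            printf("%2d threads: %6.2f GB/s\n", threads, gbytes / secs);
        }

        free(a); free(b); free(c);
        return 0;
    }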

Since chipmakers see multicore architectures as essential to future high-performance computing, Peery suggested that they look to "tighter, and maybe smarter, integration of memory and processors," including stacking memory chips on top of processors.

Intel, whose Tera-scale multicore test chip was pictured in the IEEE article, agreed that the problem is in the processor, not in the multicore architecture. An Intel spokesperson said on Monday that the company has been on top of the situation.

"Intel's work on stacking memory could be key to resolving long-term multicore memory bottlenecks but this is not discussed in the article," the Intel spokesman said in an e-mailed response. "We've been talking in public about the need to integrate memory closer to the processors for more than two years now, showing directionally what will need to happen [and] are confident that the industry will work around the memory bandwidth issue."

There are, the Intel response indicated, "very fast memory bandwidth subsystems" available now, "but reasonable cost as well as performance needs to be achieved."

About the Author

Jim Barthold is a freelance writer based in Delanco, N.J., covering a variety of technology subjects.
