News

Multicore Chips Hitting 'Memory Wall,' Report Says

The silicon industry's plans to improve computer processing performance by putting multiple processor cores on the same chip may have hit a bump on the supercomputing highway. The technique, called multicore processing, may face limitations due to memory-handling issues, according to a Sandia National Laboratories study described in a November IEEE article.

Expectations are high for exploiting the multicore technique, since chip-processing speeds aren't getting any faster. Microsoft, for one, placed a spotlight on multicore development techniques at its Professional Developers Conference, held in Los Angeles in October.

According to the IEEE account, tests conducted on 8-, 16- and 32-core microprocessors at the Sandia National Laboratories in New Mexico produced some "distressing results."

Sandia scientists hit a performance wall at about eight cores, after which "there's no improvement," said James Peery, director of computation, computers, information and mathematics at Sandia. In fact, he said, performance actually degraded so that at 16 cores "it looks like two."

The problem is especially vexing for informatics applications used in national security functions, and the scientists have discussed the study results with chipmakers, according to the IEEE article.

The crux of the problem is a "memory wall" that creates a "disparity between how fast a CPU can operate on data and how fast it can get data," the article explained. The number of cores may keep increasing, but the number of connections between the processor and memory does not keep pace, leaving the added cores starved of data.
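To make that bandwidth ceiling concrete, here is a minimal sketch (not from the article or the Sandia study): a memory-bound loop timed with an increasing number of OpenMP threads. On typical hardware the measured bandwidth plateaus once the shared path to memory saturates, so doubling the thread count stops doubling throughput. The array size, thread counts and compile command (gcc -O2 -fopenmp) are illustrative assumptions.

    /*
     * Illustrative sketch only: a memory-bound loop timed with 1, 2, 4, 8 and
     * 16 OpenMP threads. Each iteration does almost no arithmetic per byte
     * moved, so once the shared memory bus is saturated, adding threads
     * stops improving the measured bandwidth -- the "memory wall" effect.
     */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 25)   /* 32M doubles (~256 MB) per array, far larger than cache */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        if (!a || !b)
            return 1;

        for (long i = 0; i < N; i++) {
            a[i] = 1.0;
            b[i] = 2.0;
        }

        for (int threads = 1; threads <= 16; threads *= 2) {
            omp_set_num_threads(threads);
            double start = omp_get_wtime();

            /* Memory-bound scale-and-copy: throughput is limited by how fast
               data streams from and to memory, not by how many cores compute. */
            #pragma omp parallel for
            for (long i = 0; i < N; i++)
                a[i] = 2.0 * b[i];

            double elapsed = omp_get_wtime() - start;
            double gbytes = 2.0 * N * sizeof(double) / 1e9;  /* read b, write a */
            printf("%2d threads: %.3f s, ~%.1f GB/s\n", threads, elapsed, gbytes / elapsed);
        }

        free(a);
        free(b);
        return 0;
    }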

Since chipmakers see multicore architectures as essential to future high-performance computing, Peery suggested that they look to "tighter, and maybe smarter, integration of memory and processors," including stacking memory chips on top of processors.

Intel, whose Tera-scale multicore test chip was pictured in the IEEE article, agreed that the problem is in the processor, not in the multicore architecture. An Intel spokesperson said on Monday that the company has been on top of the situation.

"Intel's work on stacking memory could be key to resolving long-term multicore memory bottlenecks but this is not discussed in the article," the Intel spokesman said in an e-mailed response. "We've been talking in public about the need to integrate memory closer to the processors for more than two years now, showing directionally what will need to happen [and] are confident that the industry will work around the memory bandwidth issue."

There are, the Intel response indicated, "very fast memory bandwidth subsystems" available now, "but reasonable cost as well as performance needs to be achieved."

About the Author

Jim Barthold is a freelance writer based in Delanco, N.J., covering a variety of technology subjects.
