AMD Seeks Solution in Korea for HBM Memory Bottleneck

Published on March 13, 2026 | Translated from Spanish

The race for AI hardware supremacy is fought not only in chip design but also in securing critical components. AMD faces a strategic challenge in the shortage of HBM memory, which is essential for its GPUs and accelerators. To address it, CEO Lisa Su will travel to South Korea to meet with the leadership of Samsung Electronics. The move underscores a reality of today's industry: the supply chain can be as decisive as architectural innovation.

Lisa Su, CEO of AMD, in a strategy meeting with Samsung Electronics executives in South Korea.

HBM: The Backbone of High-Performance Computing 🔬

High Bandwidth Memory (HBM) is not conventional DRAM. In its 3D architecture, multiple memory dies are stacked vertically and interconnected by TSVs (through-silicon vias), yielding massive bandwidth in a small footprint. That integration is crucial for feeding the enormous core arrays of GPUs and AI accelerators without creating data bottlenecks. 3D visualizations of these stacks reveal the complex interconnection between the silicon interposer, the memory dies, and the processor, and show why manufacturing HBM is a delicate process with limited capacity.
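The bandwidth advantage of that wide, stacked interface can be illustrated with a back-of-the-envelope calculation. The sketch below is not from the article; the figures are commonly cited ballpark numbers (an HBM3 stack with a 1024-bit interface at roughly 6.4 Gb/s per pin, versus a single GDDR6 chip with a 32-bit interface at 16 Gb/s per pin) and are used here only for illustration:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak theoretical bandwidth in GB/s: total bits per second across
    the interface, divided by 8 to convert bits to bytes."""
    return bus_width_bits * pin_rate_gbps / 8


# Illustrative figures (assumptions, not AMD product specs):
hbm3_stack = peak_bandwidth_gb_s(bus_width_bits=1024, pin_rate_gbps=6.4)
gddr6_chip = peak_bandwidth_gb_s(bus_width_bits=32, pin_rate_gbps=16.0)

print(f"One HBM3 stack:  ~{hbm3_stack:.1f} GB/s")   # ~819.2 GB/s
print(f"One GDDR6 chip:  ~{gddr6_chip:.1f} GB/s")   # ~64.0 GB/s
```

The point of the comparison: even though each HBM pin runs slower than a GDDR6 pin, the enormously wide interface made possible by TSV stacking multiplies the aggregate bandwidth, which is exactly what large accelerator core arrays need.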

The Geopolitics of Microarchitecture 🗺️

Lisa Su's visit to Samsung goes beyond a commercial negotiation. It is an acknowledgment that the semiconductor ecosystem is deeply interdependent: memory manufacturers like Samsung and SK Hynix hold an unprecedented position of power. The 3D modeling tools used to simulate supply flows and production lines must now also model these geopolitical and capacity risks. The battle for AI is won by securing every link in the chain.

How will the search in Korea for advanced 3D packaging and HBM supply transform the architecture of AI accelerators, and AMD's strategy against NVIDIA?

(P.S.: 180 nm nodes are relics by now; the features keep shrinking, and they were never visible to the naked eye anyway.)