Samsung is preparing to begin mass production of the world's first HBM4 memory chips. The next-generation memory is expected to be used in Nvidia's Vera Rubin AI accelerators.
Samsung is reportedly preparing to begin mass production of its sixth-generation high-bandwidth memory (HBM4) chips this month. This step, seen as a critical threshold in the semiconductor industry, effectively moves one of the company's most important AI-focused memory technologies to the production line. The HBM4 chips are said to be destined for Nvidia's next-generation AI accelerator platform, Vera Rubin.
The Most Critical Component for AI
According to reports from South Korea, Samsung could begin shipping HBM4 chips this month, a timeline consistent with its previously announced mass-production plans. The shipped HBM4 memory is reported to be destined for Nvidia's Rubin-architecture GPUs, which are being developed specifically for servers and large-scale data centers running generative AI workloads.
Samsung is also said to have invested heavily in expanding its HBM4 production capacity ahead of the launch, accelerating its production schedule to meet Nvidia's early-access demand. SK Hynix, by contrast, had postponed its HBM4 mass-production plan from February to March or April 2026.
Samsung is reported to be the first company to begin mass production of HBM4. If confirmed, this would mean the company has overtaken SK Hynix, currently the world's largest HBM supplier; Samsung had lagged behind SK Hynix in the previous HBM3 and HBM3E generations.
Among current-generation AI accelerators, the AMD MI350 and Nvidia B200 use fifth-generation HBM3E memory. Micron and SK Hynix have been the prominent suppliers in this segment, while Samsung held a more limited share.
According to the reports, Samsung's HBM4 memory has successfully completed Nvidia's final verification and testing, after which Nvidia placed an official order with Samsung. Samsung's HBM4 is expected to offer per-pin data rates of up to 11.7 gigabits per second, roughly 46 percent above the JEDEC baseline of 8 Gbps and about 22 percent faster than HBM3E. Memory bandwidth per stack will reach up to 3 terabytes per second, about 2.4 times that of its predecessor.
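As a sanity check, the headline figures are internally consistent: an 11.7 Gbps per-pin rate over HBM4's 2048-bit per-stack interface (a JEDEC-spec width not stated in the article) works out to roughly 3 TB/s per stack, about 2.4 times HBM3E's ~1.2 TB/s. A minimal back-of-the-envelope sketch, assuming those interface widths and a 9.6 Gbps HBM3E pin rate:

```python
# Back-of-the-envelope check of the reported HBM4 figures.
# Assumptions (not stated in the article): HBM4 uses a 2048-bit interface
# per stack (per the JEDEC spec); HBM3E uses a 1024-bit interface at
# 9.6 Gbps per pin.

def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Per-stack bandwidth in GB/s from the per-pin rate and interface width."""
    return pin_rate_gbps * bus_width_bits / 8  # divide by 8: bits -> bytes

hbm4 = stack_bandwidth_gbs(11.7, 2048)   # ~2995 GB/s, i.e. ~3 TB/s
hbm3e = stack_bandwidth_gbs(9.6, 1024)   # ~1229 GB/s, i.e. ~1.2 TB/s

print(f"HBM4 per stack:  {hbm4:.0f} GB/s")
print(f"HBM3E per stack: {hbm3e:.0f} GB/s")
print(f"Generational gain: {hbm4 / hbm3e:.2f}x")      # ~2.44x, matching the report
print(f"Per-pin gain vs HBM3E: {11.7 / 9.6 - 1:.0%}") # ~22%, matching the report
```

The same arithmetic shows 11.7 Gbps exceeds the 8 Gbps JEDEC baseline by about 46 percent, not 37 percent as some reports state.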