Samsung Begins 8GB HBM2 Mass Production, Allows for 32GB GPUs with 1.2TB/s Memory Bandwidth
By Stuart Thomas on January 12th, 2018 at 02:09pm - original article from game-debate

Samsung has begun the industry's first mass production of 8GB HBM2 memory chips, offering an unprecedented 2.4Gb/s data transfer speed per pin. Codenamed “Aquabolt”, the second-generation HBM2 is targeted at the graphics card and supercomputing markets, opening the door to potential gaming GPUs with 32GB of HBM2 memory.

Up until now the majority of GPUs have been restricted to 2-Hi or 4-Hi stacks, yet 8-Hi HBM2 memory stacks could potentially allow for 32GB HBM2 graphics cards. Each of the HBM2 stacks provides 307GB/s of memory bandwidth, running almost 10 times faster than an 8Gb/s GDDR5 memory chip (32GB/s). Stack four on a GPU and we’re looking at total memory bandwidth in excess of 1.2TB/s. By comparison, Nvidia’s Titan V, the current market leader, delivers total memory bandwidth of 652.8GB/s from its 12GB of HBM2.
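The bandwidth figures above follow directly from the per-pin rate. A quick sketch of the arithmetic, assuming the standard 1024-bit interface per HBM2 stack and a 32-bit GDDR5 chip interface:

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.
# Assumptions: each HBM2 stack exposes a 1024-bit interface (standard for
# HBM2) at the quoted 2.4Gb/s per pin; the GDDR5 comparison uses a 32-bit
# chip running at 8Gb/s per pin.

def bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Bandwidth in GB/s = per-pin rate (Gb/s) * pins / 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

hbm2_stack = bandwidth_gbs(2.4, 1024)  # one Aquabolt stack
gddr5_chip = bandwidth_gbs(8.0, 32)    # one 8Gb/s GDDR5 chip
four_stacks = 4 * hbm2_stack           # a hypothetical 4-stack GPU

print(hbm2_stack)   # 307.2 GB/s per stack
print(gddr5_chip)   # 32.0 GB/s per chip
print(four_stacks)  # 1228.8 GB/s, i.e. just over 1.2TB/s
```

Four 8-Hi stacks at 8GB each is also where the headline 32GB capacity figure comes from.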

The technology behind the faster HBM2 memory is related to TSV (Through Silicon Via) design and thermal control. Samsung worked to optimise the TSV design on each 8Gb die, minimising “collateral clock skew” and in the process providing a significant boost to chip performance. In addition, thermal control was enhanced through additional thermal bumps between the HBM2 dies, as well as an additional protective layer on the substrate at the bottom of the stack.

[Image: Samsung's HBM2 thermal bump design]

“With our production of the first 2.4Gbps 8GB HBM2, we are further strengthening our technology leadership and market competitiveness,” said Jaesoo Han, executive vice president, Memory Sales & Marketing team at Samsung Electronics. “We will continue to reinforce our command of the DRAM market by ensuring a stable supply of HBM2 worldwide, in accordance with the timing of anticipated next-generation system launches by our customers.”

Mass production of 8GB HBM2 memory chips is now under way, which should help Samsung gain a larger foothold in the HBM memory field, as well as hopefully lowering the cost of using HBM2 memory on future GPUs.