Not just compute chips: AI has made these chips popular too

On May 29, Nvidia released the DGX GH200 artificial intelligence (AI) supercomputer, taking the AI field to a new peak. The popularity of generative AI such as ChatGPT has created enormous demand for computing power, leading to shortages of Nvidia GPUs and rising prices. The delivery cycle of the key A800 model has stretched from one month to three, and its price has climbed to 100,000 yuan. With GPUs scarce, many application-side companies have begun rationing computing power internally, concentrating resources on their most valuable projects.

The state of the GPU market makes clear that AI has become a trend no major company can afford to miss. And it is not just GPUs: this wave of AI will drive iterative upgrades across the entire industry chain, and key links such as memory and power devices will see both replacement opportunities and sharply increased demand.

GPU: Riding the AI boom, but competition is fierce

The GPU (Graphics Processing Unit) was created to compensate for the CPU's weakness in graphics processing. Comparing the two architectures: a CPU has a small number of ALUs (arithmetic logic units), each individually powerful, backed by a large cache, which makes it good at complex logic operations. The GPU's architecture is the opposite: many small ALUs paired with a smaller cache, well suited to computation-heavy, highly uniform, parallel tasks.

Comparison of CPU and GPU architectures (Source: NVIDIA CUDA technical documentation)
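As a rough analogy for this architectural split, the sketch below contrasts a scalar Python loop (standing in for serial, control-heavy execution) with a vectorized NumPy expression (standing in for one uniform operation spread across many simple execution units, the data-parallel pattern GPUs are built for). Both run on the CPU; this only illustrates the access pattern, not actual GPU execution:

```python
import time
import numpy as np

# Rough analogy only: the scalar loop mimics serial, one-at-a-time
# execution; the vectorized expression mimics applying one uniform
# operation across a large, regular data set in bulk.
x = np.random.rand(1_000_000).astype(np.float32)

t0 = time.perf_counter()
serial = [v * 2.0 + 1.0 for v in x]   # one element at a time
t1 = time.perf_counter()
parallel = x * 2.0 + 1.0              # whole array in one uniform operation
t2 = time.perf_counter()

print(f"element-by-element loop: {t1 - t0:.3f} s")
print(f"vectorized operation:    {t2 - t1:.5f} s")
```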

Generative AI requires enormous amounts of training to build models, and GPUs are clearly better suited to such workloads. Nvidia invented the GPU and has long been the leader in the field. As early as 2016, Nvidia built the DGX-1 supercomputer, loaded with eight P100 chips, and delivered it to OpenAI, the company behind ChatGPT. Seven years later, ChatGPT has captured worldwide attention, generative AI development is booming, and demand for Nvidia's GPU products has grown rapidly.

Recently, players in the server industry revealed that chip modules such as the A100/A800/H100 are in short supply and hard to obtain. This huge demand has reshaped the chip production chain. According to industry sources, Nvidia had originally planned to take up a large share of TSMC's wafer and CoWoS packaging capacity in the fourth quarter of this year, but has now spread that booking evenly across the second, third, and fourth quarters, freeing up roughly 1,000-2,000 additional units of capacity for Nvidia each month.

A large order book does not mean Nvidia enjoys complete market dominance, however. As CEO Jensen Huang has said, competition in AI chips is fierce, and many customers are developing their own AI chips even while using Nvidia's GPUs. Microsoft, for example, is developing its own AI chip code-named “Athena”, and Google has never stopped evolving its in-house TPU. In China, Cambricon, Jingjiawei, Hygon, Zhaoxin, and others have joined the race, pursuing greater self-sufficiency in the AI industry chain.

Yet whether it is Nvidia's GPUs or the ASICs, TPUs, and other chips developed by other companies, their scale of computing power places new demands on the rest of the system. In particular, the “memory wall” problem caused by high data throughput, and the power-system optimization challenges brought by high energy consumption, are pain points in urgent need of solutions, and those solutions will propel a new group of chips to stardom.

HBM memory: helping AI servers break through the “memory wall”

When generative AI trains models and generates content, it produces enormous data throughput, and a “memory wall” appears when that data is forced through narrow bandwidth. Older GDDR memory solutions cannot cope, and this is exactly where HBM (High Bandwidth Memory) plays to its strengths.

HBM was first defined by Samsung Electronics, SK Hynix, and AMD. Compared with GDDR and other memory types, HBM has two major advantages. First, HBM is packaged together with the main compute chip, such as a GPU or CPU; the shorter the distance, the faster the response. Second, HBM adopts a three-dimensional structure, stacking more memory capacity vertically. The most visible result of these two advantages is bandwidth far beyond GDDR: an HBM stack of four DRAM dies has a 1024-bit memory interface, while each GDDR5 channel is only 32 bits wide. HBM is clearly a sharp weapon for AI servers breaking through the “memory wall” bottleneck.

HBM memory structure schematic (Source: AMD)
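To put those interface widths in perspective, the sketch below converts them into peak bandwidth. The widths come from the article; the per-pin data rates (2.4 Gbps for an HBM2E-class stack, 8 Gbps for GDDR5) are assumed illustrative values:

```python
# Peak bandwidth (GB/s) = interface width (bits) x per-pin rate (Gbps) / 8.
# Interface widths are from the article; the data rates are assumptions.
def bandwidth_gb_s(interface_bits: int, gbps_per_pin: float) -> float:
    return interface_bits * gbps_per_pin / 8

print(bandwidth_gb_s(1024, 2.4))  # one 4-high HBM stack: ~307 GB/s
print(bandwidth_gb_s(32, 8.0))    # one GDDR5 channel:     32 GB/s
```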

However, HBM is not without shortcomings compared with GDDR. In capacity, HBM is constrained by integrated packaging: even with eight DRAM dies per stack and four HBM stacks packaged alongside a single compute chip, total capacity is only 32 GB. GDDR, by contrast, can be expanded with additional chips on the board, easily reaching levels well above 32 GB; even 128 GB is no problem.

Comparison of GDDR and HBM memory metrics (Source: AMD)
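The 32 GB ceiling mentioned above follows from simple multiplication. In the sketch below, the stack height and stack count come from the article, while the per-die density (8 Gb, i.e. 1 GB) is an assumed typical value:

```python
# Capacity arithmetic behind the 32 GB figure.
dies_per_stack = 8       # 8-high DRAM stack (from the article)
gb_per_die = 1           # assumption: 8 Gb = 1 GB per DRAM die
stacks_per_package = 4   # four HBM stacks beside one compute die (from the article)

print(dies_per_stack * gb_per_die * stacks_per_package)  # 32 GB total
```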

Of course, memory designs can combine HBM's bandwidth advantage with GDDR's capacity advantage; integrating the two in one system is also a feasible solution. Either way, HBM's key role in overcoming the “memory wall” will drive rapid growth in demand. TrendForce estimates that the HBM market's compound annual growth rate from 2022 to 2025 will reach 40-45% or more, with the market expected to hit US$2.5 billion by 2025.

Against DRAM's annual market of more than US$80 billion, HBM is still relatively small, but rapidly rising demand has kept its prices climbing through a memory-price downcycle that has lasted more than a year. The most advanced HBM3-grade products have even seen prices rise fivefold.

Because of its high technical threshold and high cost, HBM production is still dominated by the major memory manufacturers. The strongest are the two Korean makers, SK Hynix and Samsung, with roughly 50% and 40% market share respectively; third-ranked Micron holds about 10%. In China, companies such as National Chip Technology and Tongfu Microelectronics are working toward breakthroughs in HBM packaging.

DrMOS: an indispensable high-efficiency power supply solution in the AI era

High-performance, fast-responding AI servers guarantee computing power, but their enormous energy consumption cannot be ignored. A Founder Securities research report notes that an AI server draws 6-8 times the power of an ordinary server, with power-supply demand rising in step. A Huaan Securities research report points out that a general-purpose server needs only two 800 W power supplies, while an AI server jumps straight to four 1,800 W high-power supplies. Amid the global push for energy conservation and emissions reduction, AI servers are bound to adopt more efficient power solutions to rein in the energy cost of all that power, and this creates a major opportunity for the DrMOS industry.
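The sketch below works through the power-supply arithmetic implied by those Huaan Securities figures; the 1% efficiency example at the end is an illustrative assumption, not a number from either report:

```python
# PSU budget arithmetic from the figures cited above.
general_server_w = 2 * 800     # two 800 W supplies in a general-purpose server
ai_server_w = 4 * 1800         # four 1,800 W supplies in an AI server

print(general_server_w)                  # 1600 W
print(ai_server_w)                       # 7200 W
print(ai_server_w / general_server_w)    # 4.5x the power-supply capacity

# At this scale, even small conversion-efficiency gains matter (assumed 1%):
print(ai_server_w * 0.01)                # 72 W saved per server per 1% gain
```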

DrMOS (Integrated Driver and MOSFET), as the name implies, packages the high-side MOSFET, low-side MOSFET, and gate driver of a multiphase power-supply circuit into a single component. To keep pace with ever more powerful GPU, ASIC, and FPGA compute chips, and with PCB space growing ever more precious, replacing discrete solutions with DrMOS is imperative; only a highly integrated solution can best reconcile the tension between high performance and high energy consumption. Beyond that, an integrated solution also simplifies circuit design, saving development and build time for AI servers.

Application diagram of a DrMOS component (Source: Vishay product datasheet)
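To see why many DrMOS stages are ganged together in a multiphase regulator, the sketch below divides an accelerator's core current across phases. The core voltage, core power, and phase count are all assumed illustrative values, not figures from the article:

```python
# Multiphase motivation: a large, low-voltage core current is split
# across phases so each DrMOS stage handles a manageable share.
core_voltage_v = 0.9   # assumed GPU core rail voltage
core_power_w = 540     # assumed accelerator core power
phases = 16            # assumed number of DrMOS power stages

total_current_a = core_power_w / core_voltage_v   # 600 A total
per_phase_a = total_current_a / phases            # 37.5 A per stage

print(f"total core current: {total_current_a:.0f} A")
print(f"current per DrMOS phase: {per_phase_a:.1f} A")
```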

The DrMOS product form was first defined by Intel, and the “high-end” AI server is far from its first application: it has long been widely used on desktop motherboards. Major suppliers include MPS, Vishay, ON Semiconductor, Renesas, and ADI; domestic suppliers include Jiehuat and Jingfeng Mingyuan. As AI applications develop rapidly in China, DrMOS shows strong potential for domestic substitution.

According to research firm Omdia, the global DrMOS market was approximately US$956 million in 2021 and is expected to grow to US$1.122 billion by 2026, a compound annual growth rate of 3.24% over 2021-2026. TrendForce, meanwhile, predicts that global AI server shipments will grow 38.4% this year to nearly 1.2 million units, and that the segment will sustain a 22% compound annual growth rate through 2026. Taken together, DrMOS usage should climb sharply, its market may expand faster than forecast, and the relevant manufacturers stand to benefit.
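As a quick sanity check, the cited growth rate is consistent with applying the standard compound-annual-growth-rate formula to those two market-size endpoints:

```python
# CAGR = (end / start) ** (1 / years) - 1, applied to the cited Omdia endpoints.
start_musd, end_musd, years = 956, 1122, 5   # 2021 -> 2026
cagr = (end_musd / start_musd) ** (1 / years) - 1
print(f"{cagr:.2%}")   # ~3.25%, consistent with the cited 3.24%
```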

AI drives an all-round “renewal” of the industry chain

For AI supercomputing to deliver unmatched performance, the supporting chips must be upgraded in step. Beyond the HBM memory and DrMOS discussed above, a wide range of analog and power chips will also see huge upgrade demand. And not only ICs: the packaging substrates and PCBs that carry AI main chips will likewise see strong high-end demand.

Overall, the AI-driven “renewal” of the industry chain is comprehensive. It is foreseeable that AI, together with automotive electronics, will fill the demand gap left by the consumer-electronics downturn, and that this shift will both consolidate the advantages of already competitive large companies and help a batch of small and medium-sized enterprises achieve breakthroughs.
