
"I've been on the phone for the past few days." Talking about the recently popular ChatGPT, a senior executive of Nvidia China said to the "Kechuangban Daily" reporter in a slightly complaining but hard-hitting tone. The logic is that AI chips, the core infrastructure to supply computing power for ChatGPT, have become the key to the investment of various manufacturers. The demand is huge.”
Nvidia's biggest winner?
AI applications such as ChatGPT must be trained on large models. Their apparent intelligence comes from "digesting" huge amounts of data, and behind that lies powerful computing support. Take the GPT-3 model as an example: its ability to store knowledge comes from its 175 billion parameters, and the computing power required to train it reached roughly 3,650 PFLOPS-days.
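As a rough cross-check of that figure, a widely used rule of thumb puts training compute at about 6 × parameters × tokens. A minimal sketch in Python, assuming a training set of roughly 300 billion tokens (a number from public GPT-3 reporting, not from this article), lands close to the reported value:

```python
# Back-of-envelope check of GPT-3's training compute,
# assuming the common estimate: FLOPs ~= 6 * parameters * tokens.
params = 175e9   # 175 billion parameters (from the article)
tokens = 300e9   # ~300 billion training tokens (assumed, from public GPT-3 reports)

total_flops = 6 * params * tokens   # ~3.15e23 floating-point operations
pflops_day = 1e15 * 86400           # one PFLOPS sustained for a full day

print(f"{total_flops / pflops_day:.0f} PFLOPS-days")  # ~3646, close to the article's 3,650
```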
According to a CITIC Construction Investment research report, the computing power used for AI training once grew in line with Moore's Law, doubling roughly every 20 months. The rise of deep learning accelerated that expansion, shortening the doubling period to about 6 months, and today's large models require 10 to 100 times the training compute of their predecessors.
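The gap between those two doubling periods compounds quickly. A short sketch, using the standard exponential-growth formula 2^(months / doubling period), shows how far they diverge over five years:

```python
# Compute growth over a span of months for a given doubling period:
# growth = 2 ** (months / doubling_period_months)
def growth(months: float, doubling_period_months: float) -> float:
    return 2 ** (months / doubling_period_months)

# Over five years (60 months):
print(f"Moore's-Law pace (20-month doubling): {growth(60, 20):.0f}x")  # ~8x
print(f"Deep-learning pace (6-month doubling): {growth(60, 6):.0f}x")  # ~1024x
```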
It is no wonder, then, that before training GPT-3, OpenAI had Microsoft spend 1 billion US dollars to build it a dedicated supercomputer that ranked among the top five in the world at the time. On February 7, ChatGPT announced it would suspend service because it was running at full capacity, mainly due to constraints on network and other resources; the service had also gone down several times before, reflecting how hungry AI applications are for computing power.
ChatGPT mainly involves AI natural language processing, and the underlying computing chips are primarily high-performance GPUs. An executive at a Chinese artificial intelligence company told a "Kechuangban Daily" reporter that Nvidia dominates the market for this type of chip: its chips are heavily optimized for large-model training and are "very easy to use," making the company the biggest winner of the ChatGPT boom.
From a chip-technology perspective, Niu Xinyu, founder and CEO of Kunyun Technology, told the "Kechuangban Daily" reporter that Nvidia's CUDA architecture, originally built for gaming GPUs, is better suited to large-scale parallel computing than a CPU, and that Nvidia has accumulated a strong developer ecosystem around CUDA. In the early days of this wave of the artificial intelligence industry, there were no dedicated AI-acceleration chips or hardware products on the market; developers found that Nvidia's products could be applied to AI training and inference, and adopted them widely.
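The GPU's edge here comes down to massive data parallelism. A minimal sketch, assuming a CUDA-capable GPU and the CuPy library (which runs NumPy-style operations as CUDA kernels), times the same matrix multiply on CPU and GPU to illustrate the point:

```python
# Illustrative CPU-vs-GPU comparison; assumes CuPy is installed
# and a CUDA-capable GPU is available.
import time
import numpy as np
import cupy as cp

n = 4096
a_cpu = np.random.random((n, n)).astype(np.float32)
b_cpu = np.random.random((n, n)).astype(np.float32)

# Matrix multiply on the CPU
t0 = time.perf_counter()
np.matmul(a_cpu, b_cpu)
cpu_s = time.perf_counter() - t0

# Same multiply on the GPU: thousands of CUDA cores process tiles in parallel
a_gpu, b_gpu = cp.asarray(a_cpu), cp.asarray(b_cpu)
cp.cuda.Stream.null.synchronize()   # ensure host-to-device transfers finish first
t0 = time.perf_counter()
cp.matmul(a_gpu, b_gpu)
cp.cuda.Stream.null.synchronize()   # wait for the asynchronous kernel to complete
gpu_s = time.perf_counter() - t0

print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s")
```

The synchronize calls matter because CUDA kernels launch asynchronously; without them, the GPU timing would measure only the kernel launch, not the computation itself.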
In addition, Nvidia's products have advantages in versatility and computing-power density. Because algorithm models have grown so large, the requirements for system-level multi-chip interconnect and high-bandwidth networked storage are increasing exponentially. Nvidia planned for this early, forming a complete and mature solution through acquisitions and integrated R&D.
Amid the semiconductor industry's down cycle, Nvidia has moved against the tide of steadily falling chip stocks: year to date, its shares have risen more than 55%, reflecting investors' optimism about the company under the ChatGPT boom.
On the question of chip procurement, a "Kechuangban Daily" reporter called Yuncong Technology posing as an investor. The person reached would say only that the company's systems are deeply adapted to domestic chip makers and that it holds some inventory from earlier purchases, so its chip supply will not be affected by international factors.