According to a report from Nikkei, Japan's SoftBank Group plans to invest 150 billion yen in generative AI research by the end of 2025. The investment aims to expand the company's computing capacity to several times its current level and to develop generative AI models on par with the world's most advanced technologies.
The report indicates that SoftBank already invested 20 billion yen in 2023 to build computing infrastructure, with a further 150 billion yen planned between 2024 and 2025. This would reportedly be the largest computing-infrastructure investment ever made by a Japanese company. The required GPUs will be purchased from Nvidia and used for SoftBank's own generative AI development; they may also be leased to external companies.
SoftBank is developing a large language model (LLM) as the foundation for its generative AI. The model, estimated at 390 billion parameters, is expected to be completed in 2024. By 2025, SoftBank aims to build a high-performance model designed specifically for the Japanese language with 1 trillion parameters. A 1-trillion-parameter model would rank among the world's largest, comparable in scale to OpenAI's GPT-4, which is widely reported to have on the order of 1 trillion parameters and has performed well on the US bar exam.
American technology giants such as OpenAI currently lead in both large-model performance and investment. Japanese companies such as NTT and NEC are also engaged in AI research, but their models contain only tens to hundreds of billions of parameters. SoftBank aims to develop Japanese-language generative AI that rivals world-class standards.