PLAYER SELECTION

Co-Founder CTO Kota Kakiuchi
COMPANY
ELYZA, Inc.
ADDRESS
3-15-9 Hongo, Bunkyo-ku, Tokyo, Japan
URL
https://elyza.ai/

ELYZA, Inc. has been engaged in the research, development, and practical application of large language models (LLMs) since 2019. Building on open models developed by Meta and others, ELYZA conducts continued pretraining and instruction fine-tuning of LLMs for Japanese, achieving accuracy comparable to globally leading commercial models such as GPT-3.5 Turbo and Gemini 1.0 Pro. For businesses, ELYZA provides products and solutions that leverage LLMs.

ELYZA is deeply committed to the practical application and integration of LLMs, having repeatedly improved operational efficiency by 30 to 50% through the use of foundation models. In January 2022, the company established a dedicated data creation department, strategically positioned to develop ELYZA's unique datasets in close coordination with its AI researchers and developers.

ELYZA aims to construct foundation models that will integrate into Japanese society as seamlessly and ubiquitously as the internet and smartphones. To make generative AI part of everyday infrastructure, ELYZA continuously strives to enhance performance and adapt more comprehensively to the particularities of Japanese use cases.

In this project, ELYZA will develop models with enhanced Japanese linguistic capabilities. By deploying these models, ELYZA intends not only to raise the productivity of Japanese businesses but also to enable the creation of domain-specific models that leverage its accumulated expertise, fostering advancements and addressing challenges across industries.

Chief Executive Officer Noriyuki Kojima
COMPANY
KK Kotoba Technologies Japan
ADDRESS
6th Floor, Otemachi Building, 1-6-1 Otemachi, Chiyoda-ku, Tokyo, Japan
URL
https://twitter.com/kotoba_tech

Kotoba Technologies Japan develops speech foundation model technologies. The company's researchers have conducted cutting-edge AI research at academic institutions in the U.S. and have pioneered the development of large language models (LLMs) on domestic supercomputers in Japan. The team has been recognized at numerous top AI conferences, including several best paper awards, reflecting the company's strong research capabilities.

Just as the text domain has seen a revolution from GPT-1 to GPT-4 through large-scale models, Kotoba Technologies Japan anticipates similar revolutionary advancements in the speech domain. They are committed to developing new generative AI through their work on 'speech foundation models.'

General-purpose AI foundation models for speech are still in the early stages of development globally, particularly for non-English languages such as Japanese. The company is developing speech foundation models that support both Japanese and English, and plans to widely share the insights and knowledge gained from this endeavor.

Kotoba Technologies Japan aims to pioneer generative AI technology development through ambitious, distinctive approaches. Its speech foundation models could serve as a benchmark example and spearhead the global expansion of AI utilization, starting from Japan.

Artificial Intelligence Laboratory Koichi Shirahata
COMPANY
Fujitsu Limited
ADDRESS
4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki-shi, Kanagawa, Japan
URL
https://www.fujitsu.com/global/

Fujitsu Limited is actively engaged in the development of AI technologies that enhance business efficiency. The company focuses on foundational models, including "Fugaku-LLM," and carries out research and development in generative AI amalgamation technology, which combines several generative AI models, and generative AI trust technology, ensuring AI's safe application in business. A distinctive feature of Fujitsu is its ability to customize and provide specialized models through its AI brand "Fujitsu Kozuchi," tailored to meet specific customer needs.

Historically, the challenge of hallucinations in generative AI has prevented its full utilization in critical fields such as law and medicine. With this new initiative, Fujitsu is committed to developing a foundation model specifically for generating and inferring knowledge graphs, aiming to enable generative AI that can produce outputs based on logical reasoning and is suitable for use in sectors demanding high accuracy.

Fujitsu’s ambition extends beyond legal and medical applications, aiming to integrate generative AI into various other areas, including software development and marketing, to facilitate transformative changes in business processes. Additionally, by promoting the use of AI in fields such as new drug development and clean energy material development, Fujitsu intends to contribute to the creation of a sustainable, long-lived society. AI's increasing integration and close coordination with daily life and business suggest that it will soon become a pervasive "buddy" in many aspects of human activity.

Representative Director and CEO Yousuke Okada
COMPANY
ABEJA, Inc.
ADDRESS
2nd Floor, Bizflex Azabu-Juban, 1-1-14 Mita 1-chome, Minato-ku, Tokyo, Japan
URL
https://www.abejainc.com/en

ABEJA, with its corporate philosophy of "Implement a fruitful world," offers a "digital platform business" that transforms the core business processes of client companies on the ABEJA Platform and supports them in achieving continuous revenue growth.

The company uses a "Human in the Loop" approach for the ABEJA Platform, in which humans and AI cooperate to enable actual operation from the initial stages, when the amount of data is too small for AI to learn effectively and demonstrate high accuracy. By creating an environment in which humans and AI can cooperate, ABEJA has succeeded in providing services in mission-critical areas where failure is not allowed.

Following this selection, ABEJA will conduct research and development on a Japanese LLM and peripheral technologies (RAG, agents) to dramatically improve accuracy and computational cost performance, both essential for the social implementation of LLMs. Improving accuracy through RAG and optimizing agents will enhance computational cost performance, bringing economic rationality and expanding the scope of application. ABEJA will thereby help more companies and organizations adopt generative AI, driving the social implementation of LLMs.

In addition, ABEJA will make the LLM, source code, and development know-how obtained through this research and development widely available to the public. Doing so will contribute not only to the adoption of LLMs but also to a significant acceleration of AI innovation across society and to the development of the next generation of researchers and engineers.

Research Scientist Takuya Akiba
COMPANY
Sakana AI K.K.
ADDRESS
Toranomon Hills Business Tower 15F 1-17-1 Toranomon, Minato-ku Tokyo, Japan
URL
https://sakana.ai/

Sakana AI takes a completely different approach to AI development. Inspired by natural phenomena, such as a school of fish forming a coherent entity from simple rules, the company aims to develop foundation models that apply principles of nature such as evolution and collective intelligence.

Following its selection as an operator, Sakana AI plans to develop foundation models for autonomous agent systems. The company also aims to build a compact foundation model that operates at low cost yet offers reasoning ability comparable to large-scale LLMs, by exploring new methods to enhance reasoning capabilities and verifying algorithms for efficient computation. Realizing autonomous agent systems promises to open new research fields, such as multi-agent research, and applications across diverse industries.

Drawing talent from around the world, the company aims to build a world-class AI lab in Tokyo. As a globally oriented startup from Japan, a country rich in potential in the generative AI field, it seeks to deepen Japan's AI ecosystem by fostering collaboration between Japan's abundant, diverse creative content and its robust research and development community.

Director General Sadao Kurohashi
COMPANY
National Institute of Informatics /ROIS
ADDRESS
2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan
URL
https://www.nii.ac.jp/en/

The National Institute of Informatics (NII) conducts research and development to ensure the transparency and reliability of large language models. In May 2023, researchers in natural language processing, computer systems, and other fields gathered to launch the LLM research group (LLM-jp, "LLM勉強会" in Japanese), which now has more than 1,000 members (as of April 2024). Through this activity, LLM-jp is building an LLM that is open and proficient in Japanese, and working to elucidate the principles of LLMs. LLM-jp will make everything available to the public, including development results, discussion processes, and even failures encountered during construction. NII released an initial version of its large language model with 13 billion parameters in the fall of 2023. On April 1, 2024, NII established the Research and Development Center for Large Language Models (LLMC) to develop these activities further.

Following its selection as an operator, NII, led by the LLMC, will build an LLM that is open, licensed even for commercial use, and proficient in Japanese. NII will construct the LLM with high Japanese-language performance in both comprehension and generation, at a scale of 175 billion parameters, which exceeds the level at which emergent abilities are said to appear. NII will release the LLM to the public after confirming its safety. By adding their own technological developments on top of it, companies will be able to provide original and diverse services.

By building a knowledge infrastructure that systematizes data interpretation and knowledge association on the basis of LLMs, we can expect in the future to create new knowledge and to solve complex social issues that no single academic discipline can solve.

Co-Founder CTO Kosuke Arima
COMPANY
Stockmark Inc.
ADDRESS
1-12-3 LIFORK MINAMI AOYAMA S209, Minato-ku, Tokyo, Japan
URL
https://stockmark.co.jp/

Stockmark Inc. provides a SaaS service named A-series, developed to solve the information-gathering problems of businesses and organizations. Its core value is rooted in a system that delivers personalized information compiled from over 35,000 media sources worldwide. The company develops its own LLM specialized in business domains and supports value creation in business development by combining business information with the LLM.

Its strength lies in its natural language processing technology, which minimizes hallucinations. This allows the company to restructure and convert complex, disorganized text data into information useful for business applications. The technology enables users to collect and analyze comprehensive information from both within and outside the organization, and to automatically receive information personalized to their interests and needs.

People have been disappointed with generative AI because it cannot ensure trust and accuracy in critical decision-making processes. Following this selection, Stockmark is committed to developing a foundation model that significantly reduces hallucinations, intended mainly for business use cases.

If Stockmark can minimize hallucinations and complete this project, its efforts will greatly accelerate the adoption of generative AI in Japan. At the same time, Stockmark can prove that large language models (LLMs) are also beneficial in business domains requiring accuracy.

CTO Shunsuke Aoki
COMPANY
Turing Inc.
ADDRESS
4th Floor, East Tower, Gate City Osaki, 1-11-2 Osaki, Shinagawa-ku, Tokyo, Japan
URL
https://www.turing-motors.com/en

Turing Inc. develops and manufactures fully autonomous electric vehicles (EVs) and develops autonomous driving AI for this purpose. The company conducts consistent research and development of multimodal AI, which becomes crucial for autonomous driving, along with hardware integrated with AI, such as vehicles and edge semiconductors for in-vehicle use. With the mission of "We Overtake Tesla," Turing aims to mass-produce fully autonomous vehicles by 2030 and to become a complete car manufacturer.

Under the concept of making autonomous driving AI drive like a human, the company has conducted foundational research on multimodal AI from the perspective of linking "eyes" and "brains." Following its selection as an operator, it aims to develop a large-scale multimodal foundation model trained on the driving domain for fully autonomous driving.

Once realized, general-purpose multimodal AI that understands the Japanese language and culture will enable a wide range of applications. In the future, Turing aims to realize autonomous driving that understands Japan's road environment. Fully autonomous driving is an environmentally friendly product that dramatically improves everyone's life; its realization represents an important advancement for civilization and humanity.

Professor at the University of Tokyo, Graduate School of Engineering, Department of Technology Management for Innovation Yutaka Matsuo
COMPANY
The University of Tokyo
ADDRESS
Engineering Bldg. 2, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
URL
https://weblab.t.u-tokyo.ac.jp/en/

With the vision of "creating intelligence," the Matsuo-Iwasawa Laboratory at the University of Tokyo conducts basic research on deep learning, provides lectures, conducts joint research with companies, and supports student entrepreneurs. The laboratory also focuses on contributing its research results to society. It offers free online lectures not only to students at the University of Tokyo but also to the general public: the number of participants exceeded 10,000 in FY2023, and approximately 2,000 people attended a series of seven lectures in the "Large Language Models (LLM) Course" held in August 2023. A 10-billion-parameter LLM named "Weblab-10B" was also released that month.

In response to the selection as an operator, a development team was formed consisting of graduates of the "Large Language Models Course" and volunteer developers from various research institutions and companies. During phase 1, members are divided into eight teams that investigate practical, efficient methods for finding optimal model structures and hyperparameters for LLMs, incorporating the latest research and technical knowledge. The best-performing team in phase 1 will develop an LLM with 50 billion parameters during phase 2.

The models, source code, development process, and know-how developed throughout this course will be widely disclosed on the Matsuo-Iwasawa Laboratory website. With this highly transparent approach, we aim to promote the improvement of technological literacy in society as a whole and its applications in industry and academia.

Chief Executive Officer Daisuke Okanohara
COMPANY
Preferred Elements Co., Ltd.
ADDRESS
Otemachi Building, 1-6-1 Otemachi, Chiyoda-ku, Tokyo, Japan
URL
https://www.pelements.jp/

Preferred Elements (PFE) researches, develops, and sells foundation models. The company acquires patents, trademarks, utility model registrations, and design rights related to its foundation models; develops software and provides services based on these rights; and offers consulting services for business and operational improvement. It develops foundation models with high performance in Japanese, including a multimodal foundation model that handles various types of data such as text, images, audio, and sensor values, and it plans to commercialize its foundation model in 2024.

Following its selection as an operator, Preferred Elements will develop a new 100-billion-parameter multimodal (text, image, and audio) foundation model and test the pre-training of a one-trillion-parameter large language model. PFE will also work to secure cutting-edge generative AI technologies in Japan and to provide, enhance, and diversify generative AI solutions adapted to Japanese language, culture, ethics, and business practices.

After verifying the project's safety outcomes, PFE plans to disclose some of its models and know-how, supporting the innovation and international competitiveness of Japan's developers and industrial players. PFE also plans to raise the training efficiency of its foundation models to the world's highest level and to develop models that help strengthen Japan's industrial competitiveness.

Last updated: 2024-02-01