
2024/12/27
On Friday, November 1, 2024, AWS hosted a seminar in Tokyo specifically designed for developers to gain cutting-edge insights from overseas and apply them to their development efforts.
The event featured sessions led by specialists from Meta and Anthropic, who explained their approaches and technical considerations for leveraging their latest AI foundation models. AWS also introduced its developer support programs and related initiatives.
Approximately 100 members of the GENIAC community, including developers selected for Cycle 2, participated in the event. Through Q&A sessions following each talk and a networking event afterward, participants shared knowledge and strengthened their connections.
■Technical Explanation of Llama Models by a Meta Developer

After opening remarks by AWS and a representative from METI, the first speaker was Hamid Shojanazeri, a developer of Meta's AI language model, Llama. Llama has been downloaded over 400 million times on Hugging Face, with downloads growing tenfold in 2023 alone. The model has also been adapted by various research communities, resulting in more than 65,000 derivative models.
Shojanazeri explained that Llama has evolved into its third generation (Llama 3.2), with enhanced coding capabilities and multimodality. He also introduced "Llama Guard," a companion model that improves AI safety and manages ethical risks.
He further highlighted that Llama 3.2 is lightweight, able to run on devices, and supports a wide range of tasks, including image understanding, OCR, PDF reading, and summarization. It is available through platforms such as AWS, Dell, and Hugging Face, and performs strongly against comparable models.

Shojanazeri emphasized the importance of understanding and clearly defining objectives when starting development with Llama and PyTorch. He outlined a staged approach: first evaluate whether prompt engineering alone can solve the problem, then optimize the workflow with lightweight fine-tuning, and apply additional training or data augmentation only when necessary.
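To make the "lightweight fine-tuning" step concrete, the following is a minimal sketch (not the configuration Shojanazeri presented) that attaches a LoRA adapter to a small Llama checkpoint with the Hugging Face PEFT library; the model ID, target modules, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: lightweight fine-tuning of a Llama model with LoRA via PEFT.
# Assumptions: access to the gated meta-llama checkpoint on Hugging Face and
# hyperparameters chosen purely for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Llama-3.2-1B-Instruct"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA trains a small set of low-rank matrices instead of all model weights.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; an assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# From here the adapter can be trained on task-specific data with a standard
# training loop, evaluated against a prompt-engineering baseline, and kept only
# if the metrics justify the extra cost.
```

If prompt engineering alone already meets the target metrics, the adapter step can be skipped entirely, which is the decision point described above.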
He advised that when tackling new tasks, developers should proceed cautiously, building a thorough understanding of the problem while adapting models and data formats through iterative evaluation and selection based on reliable metrics.
Shojanazeri also introduced the "Llama Stack," an open-source toolkit designed to improve development efficiency with Llama models. The stack integrates tools and libraries, offering flexibility and functionality to complement Llama's capabilities. Concluding his talk, he described it as a future standard for developers and encouraged its adoption.
■The Advantages and Future Prospects of Claude by an Anthropic Developer

In his session, Jason Kim from Anthropic focused on the technical features and advantages of Anthropic's latest AI model, "Claude 3.5."
Kim explained what makes Claude stand out, detailing how it efficiently supports sequences of steps such as command-line operations and file editing and viewing, starting from a pull request on GitHub. He emphasized that Claude is not limited to answering prompts but excels at solving complex problems through enhanced tool use. Kim also highlighted Claude's ability to balance technical outcomes with pragmatic, real-world results.
Kim elaborated on one of Claude's key advancements: its improved contextual understanding. He described how Claude can interpret a repository's structure from a pull request description, determine which code is relevant, reproduce errors using tools, and create test cases to ultimately resolve the issue. These capabilities showcase its high level of autonomy and creative problem-solving.
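As a rough illustration of this kind of tool use (a sketch under assumptions, not Anthropic's internal workflow), the example below registers a single hypothetical run_tests tool with the Anthropic Python SDK and checks whether the model requests it; the tool name, schema, model ID, and prompt are all assumptions.

```python
# Minimal sketch of Claude tool use via the Anthropic Python SDK.
# The run_tests tool, its schema, the model ID, and the prompt are hypothetical.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model ID; check availability
    max_tokens=1024,
    tools=[{
        "name": "run_tests",
        "description": "Run the repository's test suite and return its output.",
        "input_schema": {
            "type": "object",
            "properties": {"test_path": {"type": "string"}},
            "required": ["test_path"],
        },
    }],
    messages=[{
        "role": "user",
        "content": "CI fails on tests/test_parser.py. Reproduce the error and propose a fix.",
    }],
)

# When the model decides a tool is needed it returns a tool_use block; the
# caller runs the tool and sends the result back in a follow-up message.
for block in response.content:
    if block.type == "tool_use":
        print("Requested tool:", block.name, "with input:", block.input)
```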
Kim also discussed ongoing research and future improvements for Claude, aiming to address complex tasks and further enhance features such as self-correction, enabling the AI to recognize its mistakes and explore alternative solutions. He expressed his intent to expand the model's applicability across a broader range of use cases and users.

In closing, Kim touched on the importance of safety and the challenges that remain for Claude. He noted that features designed to mimic human-like cognitive functions reduce the risk of inappropriate responses, and he highlighted Anthropic's significant investment in AI safety.
"Safety is one of the most critical aspects, and Anthropic is dedicating significant resources to this field," said Kim, expressing his vision of evolving Claude into a more user-friendly and reliable tool for a broader range of users.
He concluded by calling on participants to continue sharing research findings with many others to contribute to a better future for AI.
■AWS’s Initiatives to Support Developers

As the final session, AWS presented an overview of its services and programs supporting development and introduced its developer support teams.
Yoshitaka Harihara, Senior Startup ML Solutions Architect at AWS, gave an overview of Amazon Bedrock, a fully managed service launched in September 2023 that offers a unified API for foundation models from multiple AI companies as well as Amazon's own models. Harihara also introduced initiatives such as the LLM (large language model) development support program, which assists several GENIAC developers, and the AWS Generative AI Accelerator, which supports three domestic startups.
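As a hedged sketch of what a unified Bedrock call can look like (the region, model ID, and prompt are assumptions, and access to the model must be enabled in the account), the example below uses the Converse API through boto3, which exposes the same request shape across the providers' models available on Bedrock.

```python
# Minimal sketch: calling a foundation model through Amazon Bedrock's Converse API.
# Region, model ID, and prompt are illustrative assumptions; access to the model
# must be granted in the AWS account beforehand.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Give a one-sentence overview of retrieval-augmented generation."}],
    }],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

# The same request shape works for the other models available on Bedrock.
print(response["output"]["message"]["content"][0]["text"])
```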

Keita Watanabe, Senior Worldwide Specialist at AWS, discussed how AWS's global "Frameworks" team supports developers. He explained that the team, composed of members around the world, provides technical assistance in areas such as architecture design, proofs of concept (PoCs), deployment, learning on new platforms, and ecosystem support beyond AWS, and he shared specific examples and their outcomes.
Watanabe also highlighted AWS's orchestration support and services such as the AWS Health Dashboard and Amazon SageMaker HyperPod, which help address challenges commonly encountered during development.

Following the seminar, a networking session was held, providing participants and experts with a valuable opportunity for meaningful interaction in a relaxed atmosphere.
GENIAC plans to continue organizing seminars and workshops featuring AI development experts, along with events connecting developers, user companies, and other stakeholders. Stay tuned for the wide-ranging activities of GENIAC-selected developers and the progress of GENIAC’s initiatives, which continue to drive AI development in Japan.
Last updated: 2024-02-01