Google has introduced three new additions to its Gemma 2 family, building on the open models first released in June 2024.

The initial Gemma 2 models were launched in 27-billion-parameter (27B) and 9-billion-parameter (9B) sizes. Google claimed that the 27B model outperformed larger competitors on the LMSYS Chatbot Arena leaderboard.

The tech giant’s new additions aim to enhance safety and accessibility, aligning with its stated focus on responsible artificial intelligence (AI) development.

Behind the three new Gemma 2 additions

The first new addition, Gemma 2 2B, is an upgraded version of the existing 2 billion parameter model. It offers improved performance and safety in a model small enough to run on a wide range of devices.

Google’s second addition, ShieldGemma, is a portfolio of safety content classifiers built on the Gemma 2 framework. It is designed to filter harmful content from AI model inputs and outputs.

Its third addition, Gemma Scope, is a tool for model interpretability, offering detailed insights into AI models’ internal workings and promoting transparency in AI decision-making.

According to Google, Gemma 2 2B surpasses GPT-3.5 models in conversational AI capabilities. Gemma 2 2B is adaptable for deployment across diverse hardware, from edge devices to cloud environments using Google’s Vertex AI and Kubernetes Engine.

It is optimised with NVIDIA’s TensorRT-LLM library and supports development frameworks such as Keras, JAX, and Hugging Face Transformers.
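For developers, running the model through Hugging Face Transformers is one of the simpler routes. The snippet below is a minimal sketch, assuming the instruction-tuned variant is published under the identifier "google/gemma-2-2b-it" and that access has been granted on Hugging Face; the official model card lists the exact names and settings.

```python
# Minimal sketch: generating text with Gemma 2 2B via the Hugging Face Transformers pipeline.
# The model identifier below is an assumption; check the model card for the exact name.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",  # assumed instruction-tuned 2B checkpoint
    device_map="auto",             # place weights on a GPU if available, otherwise CPU
)

output = generator("Write one sentence explaining what an open-weight model is.",
                   max_new_tokens=64)
print(output[0]["generated_text"])
```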

ShieldGemma focuses on safety by addressing issues such as hate speech, harassment, sexually explicit material, and dangerous content. It integrates with Google’s Responsible AI Toolkit.
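In practice, a classifier of this kind sits on both sides of a model: it screens what users send in and what the model sends back. The sketch below illustrates that gating pattern only; the classify_harm function and its threshold are hypothetical placeholders, not ShieldGemma’s actual API.

```python
# Hypothetical sketch of gating a chat pipeline with a safety classifier such as ShieldGemma.
# classify_harm() is a placeholder, not ShieldGemma's real interface; see its model card for usage.
HARM_CATEGORIES = ["hate_speech", "harassment", "sexually_explicit", "dangerous_content"]

def classify_harm(text: str) -> dict:
    """Placeholder: return a probability per harm category for the given text."""
    return {category: 0.0 for category in HARM_CATEGORIES}

def guarded_reply(user_prompt: str, generate) -> str:
    # Screen the user input before it reaches the model.
    if any(score > 0.5 for score in classify_harm(user_prompt).values()):
        return "Sorry, I can't help with that request."
    reply = generate(user_prompt)
    # Screen the model output before it reaches the user.
    if any(score > 0.5 for score in classify_harm(reply).values()):
        return "Sorry, I can't share that response."
    return reply
```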

Gemma Scope uses sparse autoencoders to enable researchers to understand how Gemma models process and interpret data, contributing to more transparent AI systems.
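To illustrate the underlying technique rather than Gemma Scope itself, the sketch below shows a minimal sparse autoencoder in PyTorch: model activations are encoded into a wider, mostly-zero feature vector and decoded back, with an L1 penalty encouraging sparsity so individual features become easier to interpret. The layer sizes and training details here are illustrative assumptions.

```python
# Minimal sketch of a sparse autoencoder, the technique Gemma Scope is built on.
# Sizes, names, and the loss weighting are illustrative assumptions, not Gemma Scope's code.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # project activations into a wider feature space
        self.decoder = nn.Linear(d_features, d_model)  # reconstruct the original activations

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # ReLU keeps most features at exactly zero
        reconstruction = self.decoder(features)
        return reconstruction, features

sae = SparseAutoencoder(d_model=2304, d_features=16384)  # assumed sizes for illustration
acts = torch.randn(8, 2304)                              # stand-in for a batch of model activations

recon, feats = sae(acts)
loss = nn.functional.mse_loss(recon, acts) + 1e-3 * feats.abs().mean()  # reconstruction + L1 sparsity
loss.backward()
```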

Efforts to encourage broader use

The new models and tools, including Gemma 2’s model weights and Gemma Scope’s resources, are available on platforms such as Kaggle and Hugging Face, as well as through the Google DeepMind blog, promoting broader use and development.

The Gemma series, launched by Google earlier this year, is derived from the same foundational research as the Gemini models.

Subsequently, the Gemma family has expanded to include CodeGemma, RecurrentGemma, and PaliGemma, each tailored for specific AI tasks and supported by partnerships with Hugging Face, NVIDIA, and Ollama.

By mid-2024, the introduction of Gemma 2 marked an upgrade in performance and safety, aligning with Google’s strategy to advance AI technology responsibly.

Last week, Google’s parent company Alphabet reported a net income of $23.62bn for the second quarter of 2024 (Q2 2024), a 28.6% increase from the previous year. The growth was attributed to strong performance in search, YouTube, and cloud divisions, as well as rising interest in AI services.