Image created by AI
Google has taken a significant step forward in artificial intelligence (AI) by releasing a new family of AI models known as Gemma. The move reflects a growing trend in the tech industry, initiated by Meta Platforms among others, toward democratizing AI and fostering innovation among developers and businesses. Developers can now freely build AI applications on Google's new models, potentially expanding the reach and utility of AI technology.
Gemma encompasses models of varying complexity, ranging from two billion to seven billion parameters – the adjustable values an AI model learns during training and draws on when performing tasks. For comparison, Meta Platforms' Llama 2 models contain between seven and seventy billion parameters, suggesting that Gemma models are intended to be accessible and versatile rather than to push the limits of AI complexity. Google's largest Gemini models dwarf these figures, although their exact size remains undisclosed, in keeping with Google's selective approach to openness.
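To put those parameter counts in perspective, a rough back-of-the-envelope calculation (not from the source; standard sizing arithmetic, counting weights only and ignoring activations or serving overhead) shows why smaller models are more accessible to individual developers:

```python
def checkpoint_size_gb(params_billions: float, bytes_per_param: int) -> float:
    """Approximate size of a model's weights in decimal gigabytes.

    1e9 params * N bytes/param / 1e9 bytes/GB simplifies to params_billions * N.
    """
    return params_billions * bytes_per_param

# Gemma's two sizes at 16-bit precision (2 bytes per parameter):
print(checkpoint_size_gb(2, 2))   # 4.0 GB  -> fits on many consumer GPUs
print(checkpoint_size_gb(7, 2))   # 14.0 GB -> needs a high-end consumer GPU
# Llama 2's largest model, for comparison:
print(checkpoint_size_gb(70, 2))  # 140.0 GB -> multi-GPU territory
```

These figures are estimates for the weights alone; real deployments need additional memory, but the gap they illustrate is the point: a two-billion-parameter model is within reach of ordinary hardware in a way a seventy-billion-parameter model is not.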
Google's approach with Gemma differs notably from Meta's and that of some others in the tech sector. While Gemma models are freely usable by the public, Google has not fully embraced the open-source ethos, which often implies relinquishing control over terms of use and ownership. The decision may reflect concerns within the AI community about potential misuse of open-source AI technology. Nevertheless, by releasing these models, Google is inviting widespread participation in AI development, potentially increasing both the quality and quantity of AI applications on the market.
Key to this initiative is Google's aim to drive traffic and business to its cloud division, which recently achieved profitability. The models are optimized for Google Cloud, and new cloud customers are incentivized with $300 in credits – an attractive offer for startups and independent developers looking to minimize costs as they build on Google's infrastructure. Moreover, a partnership with chipmaker Nvidia ensures that Gemma models run efficiently on Nvidia chips, signaling Google's commitment to a seamless, developer-friendly AI experience.
Beyond its technical collaboration with Google on Gemma, Nvidia also plans to enhance compatibility with Windows PCs by adding Gemma support to its in-development chatbot software. This cross-platform functionality underscores the potential for widespread integration and adoption of the Gemma suite.
In the final analysis, Google's release of the Gemma open models can be seen as a calculated move that balances the benefits of accessible AI technology against the need to retain a degree of control over its AI infrastructure. By encouraging adoption of its AI models and incentivizing the use of Google Cloud, Google may be poised to play a more significant role in the future landscape of AI-driven innovation.