
Google Cloud Next 2025
Top Announcements and Impressions
Google Cloud Next 2025 arrived with a bang! From April 9th to 11th, the Mandalay Bay Convention Center in Las Vegas hosted a veritable avalanche of news in the world of artificial intelligence and cloud computing. This year, the conference clearly showed that Google is betting all its chips on generative AI and agentic capabilities. Let's take a look at the main announcements that prompted that collective 'wow' among participants.
Ironwood: the first Google TPU for the inference era
Ironwood is Google's most powerful, capable, and energy-efficient Tensor Processing Unit (TPU) to date, designed to power inference and reasoning AI models at scale. Its performance gains come paired with a focus on energy efficiency, allowing AI workloads to run more economically. Each Ironwood chip carries 192 GB of memory, 6x that of Trillium, enabling larger models and datasets to be processed while reducing the need for frequent data transfers.
Look, I'm no chip expert, but Ironwood seems like that buff friend who not only lifts more weight but also doesn't need to eat the whole restaurant afterward. It's impressive to see how Google is investing heavily in specific hardware for AI. With 6 times more memory than the previous model, I imagine the engineers celebrating not having to chop up their models into tiny pieces just to fit them on the chip. In a world where AI companies' electricity bills are already scary, this energy efficiency will be a huge differentiator!
Gemini 2.5 Pro is Google's most expensive AI model yet
Google launched Gemini 2.5 Pro, an AI model with industry-leading performance on benchmarks measuring coding, reasoning, and math. For prompts up to 200,000 tokens, it costs $1.25 per million input tokens and $10 per million output tokens; above 200,000 tokens, the rates rise to $2.50 and $15, respectively. That makes Gemini 2.5 Pro the most expensive model Google currently offers developers, well above Gemini 2.0 Flash. Even so, according to Google CEO Sundar Pichai, Gemini 2.5 Pro is the company's most requested AI model among developers, driving an 80% increase in usage on Google's AI Studio platform and the Gemini API this month alone.
Whoa! It looks like Google discovered what Apple has known for years: if you charge more, people think it's better! Jokes aside, these prices make the wallet cry a little, but apparently, the market is willing to pay for superior performance. The 80% increase in usage, even at these prices, shows that companies are in an AI arms race and don't want to be left behind. Now, I imagine smaller startups will need to do some math before adopting the model. Like that expensive neighborhood restaurant – you know the food is good, but you have to choose carefully when you go.
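To make that tiered pricing concrete, here is a minimal sketch of a per-request cost estimate using only the rates quoted above; the function name and the example token counts are my own, and real bills depend on exactly how the API counts tokens.

```python
# Sketch: estimating a single Gemini 2.5 Pro request from the tiered pricing
# announced at Cloud Next (<=200k-token prompts: $1.25/M input, $10/M output;
# >200k-token prompts: $2.50/M input, $15/M output).

def gemini_25_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for one request, based on prompt size tier."""
    if input_tokens <= 200_000:
        input_rate, output_rate = 1.25, 10.0   # $ per million tokens
    else:
        input_rate, output_rate = 2.50, 15.0
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A 50k-token prompt with an 8k-token answer:
print(gemini_25_pro_cost(50_000, 8_000))   # 0.1425
```

At roughly 14 cents per sizable request, the "do some math before adopting" advice for startups is not hypothetical.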
Agent Development Kit
The Agent Development Kit (ADK) is a flexible, modular framework for developing and deploying AI agents. It works with popular LLMs and open-source generative AI tools, though it was designed with integration into the Google ecosystem and Gemini models front of mind. The ADK makes it easy to get started with simple agents powered by Gemini models and Google AI tools, while still providing the control and structure needed for more complex agent architectures and orchestration.
Finally, we have a tool that promises to work that 'build your own AI assistant' magic without needing a Ph.D. in computer science! The ADK looks like 'AI Lego' to me – modular blocks you can snap together to create anything from a simple little robot to an army of intelligent agents. The coolest part is that it's not limited to just the Google ecosystem, which shows a certain maturity from the company in understanding that the real world is heterogeneous. I'm curious to see what creative developers will build with this toolkit in their hands.
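The 'AI Lego' idea, an agent assembled from swappable tool blocks, can be sketched in a few lines of plain Python. To be clear, the class and routing logic below are invented for illustration; the real ADK wires an LLM like Gemini into the loop instead of the keyword matcher used here.

```python
# Illustrative toy only: a tool-routing agent in the spirit of the ADK's
# modular design. Not the real ADK API.

from typing import Callable

class ToyAgent:
    """Routes a request to the first registered tool whose name appears in it."""
    def __init__(self, name: str, tools: dict[str, Callable[[str], str]]):
        self.name = name
        self.tools = tools   # tool name -> callable

    def run(self, request: str) -> str:
        for tool_name, tool in self.tools.items():
            if tool_name in request.lower():
                return tool(request)
        return f"{self.name}: no tool matched"

def weather(_: str) -> str:
    return "sunny, 25 C"

agent = ToyAgent("demo", {"weather": weather})
print(agent.run("What is the weather today?"))   # sunny, 25 C
```

Swap the keyword matcher for a model deciding which tool to call and you have the basic agent loop frameworks like the ADK formalize.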
Announcing the Agent2Agent (A2A) Protocol
Google announces the launch of the Agent2Agent (A2A) protocol in collaboration with over 50 technology partners. A2A is an open protocol aimed at enabling AI agents to communicate, exchange information, and coordinate actions across various enterprise platforms. The goal is to promote interoperability between agents, increase autonomy, and drive efficiency and innovation, regardless of their underlying vendors or technologies.
You know when you have those friends who don't get along and you need to be the middleman in conversations? Well, A2A promises to end that in the world of AI agents! Having 50 partners already onboard with this protocol is a strong sign that the industry is tired of fragmentation. I imagine a near future where Google's assistant chats easily with Microsoft's, which talks to Amazon's... It sounds a lot like a WhatsApp group for AIs! I just hope they don't start gossiping about us humans behind our backs.
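For a feel of what cross-vendor agent chatter could involve, here is a hypothetical message envelope serialized as JSON. Every field name below is invented for the sketch; the actual A2A protocol defines its own task and message formats, so treat this purely as an illustration of the interoperability idea.

```python
# Hypothetical sketch of an agent-to-agent task request. Field names are
# invented; the real A2A spec defines the actual wire format.

import json

def make_task_request(sender: str, receiver: str, task: str) -> str:
    """Serialize a minimal cross-vendor task request as JSON."""
    envelope = {
        "protocol": "a2a-sketch/0.1",
        "from": sender,
        "to": receiver,
        "task": {"description": task, "status": "submitted"},
    }
    return json.dumps(envelope)

msg = make_task_request("agent://crm-bot", "agent://scheduler-bot",
                        "Find a 30-minute slot with the sales team")
print(json.loads(msg)["task"]["status"])   # submitted
```

The point of an open protocol is exactly this: both ends agree on the envelope, so neither cares which vendor built the other agent.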
Introducing Firebase Studio and agentic developer tools for building with Gemini
Millions of developers use Firebase to engage their users, powering over 70 billion app instances every day. At Google Cloud Next, Google introduced a suite of new features that transforms Firebase into an end-to-end platform for accelerating the full application lifecycle. The new Firebase Studio, available to everyone in preview, is a cloud-based agentic development environment powered by Gemini that includes everything developers need to quickly build and ship production-quality AI applications in one place.
Firebase was already a trusted friend to developers, and now it's gained AI superpowers! What excites me about Firebase Studio is the promise of democratizing AI app development, just like the great Bolt does on StackBlitz – I imagine this will allow many cool ideas to get off the ground without needing a Silicon Valley budget. If it really delivers on its promise, we can expect an explosion of intelligent applications in the coming months. Who knows, maybe even I'll get inspired to create that app I've always had in mind?
Gemini Code Assist, Google's AI coding assistant, gets 'agentic' skills
Gemini Code Assist, Google's AI coding assistant, is gaining new 'agentic' features in preview. Google said Code Assist can now deploy new AI 'agents' that can perform multiple steps to carry out complex coding tasks. These agents can build applications from product specifications in Google Docs, for example, or perform code transformations from one language to another. The updates to Code Assist are likely in response to competitive pressure from rivals like GitHub Copilot, Cursor, and Cognition Labs.
And the race to see who will retire developers first continues! Just kidding! Gemini Code Assist with agentic features seems like the dream of every lazy programmer (i.e., a good programmer). Creating an entire app from a spec in a Google Doc? That's almost magic! Google is certainly feeling the heat from GitHub Copilot and Cursor, and this update shows that competition is good for the market. As someone who has spent hours converting code from one language to another, I can say that this feature alone is worth a hug for the Google development team.
New generative AI tools for video, image, speech, and music are coming to Vertex AI
Google announced four major updates for generative media within Vertex AI, Google Cloud's unified, fully managed AI development platform: Lyria, Google's text-to-music model; Veo 2, which gains new editing capabilities and camera controls; Chirp 3, with Instant Custom Voice; and Imagen 3, which can reconstruct missing or damaged parts of an image and offers even higher-quality object-removal editing. These updates make Vertex AI the only platform offering generative media models across video, image, speech, and music.
Google is turning Vertex AI into a multi-talented friend who can play various instruments, sing well, take incredible photos, and even make professional videos! It's exciting to see all these creative models brought together on a single platform. What excites me most is Imagen 3 – who hasn't wanted to 'resurrect' that special photo that had a finger in front or that photobomber in the background? I'm already imagining the creatives at advertising agencies rubbing their hands together with these tools. The only danger is getting so used to perfect content that we forget what the real world, with its charming imperfections, looks like.
Google's new Gemini AI model focuses on efficiency
Google is launching a new AI model designed to deliver strong performance with a focus on efficiency. The model, Gemini 2.5 Flash, will launch soon on Vertex AI, Google's AI development platform. The company says it offers 'dynamic and controllable' compute, allowing developers to adjust processing time based on query complexity.
Gemini 2.5 Flash seems to be Google's answer to that eternal developer complaint: 'Why does this AI take so long to answer simple things?'. Imagine having a sports car that knows when to save fuel and when to floor it. The ability to adjust processing time based on query complexity is really smart – I don't need the full engine power to ask what time it is, right? This should bring a good balance between performance and cost, something essential for companies wanting to leverage AI without having to sell a kidney to pay the bill at the end of the month.
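The 'dynamic and controllable compute' idea can be sketched as a simple budget picker: spend a large reasoning budget only when the query looks hard. The heuristic, marker words, and budget numbers below are all invented for illustration; Google hasn't published how Flash decides.

```python
# Sketch of complexity-based compute budgeting. Thresholds and heuristics
# are invented; the real model's routing is internal to Google.

def thinking_budget(query: str, max_budget: int = 24_576) -> int:
    """Pick a token budget for extended 'thinking' from rough query complexity."""
    complex_markers = ("prove", "derive", "step by step", "debug", "optimize")
    if any(marker in query.lower() for marker in complex_markers):
        return max_budget          # floor it for genuinely hard questions
    if len(query.split()) > 40:    # long prompts get a middle tier
        return max_budget // 4
    return 0                       # simple lookups skip extended reasoning

print(thinking_budget("What time is it in Tokyo?"))             # 0
print(thinking_budget("Derive the gradient of the loss ..."))   # 24576
```

Exposing a knob like this to developers is what makes the cost/latency trade-off controllable rather than one-size-fits-all.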
Google announces 'Workspace Flows' automation with Gems, audio in Docs, and more Gemini
In addition to new Gemini features in Google Docs, Sheets, Meet, Chat, and Vids, Cloud Next 2025 saw the announcement of Google Workspace Flows, a tool for automating multi-step processes using AI that can search, analyze, and generate content. Workspace Flows can refer to Google Drive files for context and use custom-trained Gems to take the right next steps. Google Docs is adding audio features that allow you to create podcast-style summaries and full audio versions of your documents.
Finally, Google Workspace is entering the era of true automation. Workspace Flows seems like the executive assistant I always dreamed of having – someone who not only understands what I'm asking but also knows where to find the right information without me having to explain everything. And this audio feature in Docs? Genius! Now I can turn my boring reports into podcasts that nobody will listen to (just kidding!). Seriously, this is perfect for accessibility and for those times when you want to review a document while driving or exercising. Google clearly understands that not everyone consumes content the same way.
Deep Research with Gemini 2.5 Pro (experimental) now available
Gemini Advanced subscribers can now conduct Deep Research with Google's smartest model, 2.5 Pro (experimental); free users don't have access today. In evaluations covering instruction following, comprehensiveness, completeness, and writing quality, raters preferred the reports generated by Gemini Deep Research powered by 2.5 Pro over those of other leading deep research providers by more than 2-to-1.
Gemini's Deep Research feature seems like a nerd who not only reads all the books but also takes impeccable notes and gives you a perfect summary. It's no surprise to see Google making this feature subscriber-only, following the old logic of 'you want the best? You'll have to pay'. The evaluation results are impressive – beating the competition by a 2-to-1 ratio is no small feat. As someone who has spent sleepless nights doing research for papers and articles, I can say that a tool like this is worth every penny... if it delivers on its promise, of course. Now all Google needs to do is create a version that also writes bibliographies and system documentation in the correct format!
Geospatial Reasoning: Unlocking insights with generative AI and multiple foundation models
Google is introducing new geospatial foundation models and uniting them under Geospatial Reasoning, a research effort that uses generative AI to accelerate geospatial problem-solving. This can unlock powerful insights for crisis response, public health, climate resilience, commercial applications, and much more.
Look, this might not seem like the sexiest news from the event, but it's possibly one of the most important! Imagine an AI that can analyze complex geographical patterns and help predict floods, optimize evacuation routes during disasters, or track the spread of diseases. The potential to save lives and resources is immense. Furthermore, the commercial applications are endless – from precision agriculture to optimized logistics. If you work with geospatial data, I think you just gained a super-assistant that will make your work much more impactful.
Google is riding the agentic wave
Phew! Google Cloud Next 2025 was like a flood of AI news, wasn't it? It's clear the company is betting all its chips on the concept of AI agents.
The impression one gets is that Google is building a complete ecosystem, from specialized hardware (Ironwood TPU) to tools that democratize agent creation (ADK and Firebase Studio), without forgetting the infrastructure for these agents to talk to each other (A2A). It's an ambitious vision that goes far beyond simply having the smartest AI – it's about creating a world where humans and AI collaborate in a more natural and productive way.
It will be interesting to see how these technologies will be adopted and what use cases will emerge in the coming months. One thing is certain: the pace of innovation in the AI field shows no signs of slowing down. If you work in tech, you'd better get used to constantly learning new things. And if you don't... well, maybe it's time to consider it!