Google Labs adds an agent step to Opal to turn static flows into agentic AI workflows
Google said on February 24, 2026 that it is launching a new agent step in Opal, available for all users, to turn static workflows into agentic AI workflows. Instead of manually wiring a single model call, users can now select an agent in the generate step and let Opal determine which tools and models it needs to reach the stated objective.
Google describes this as a move from static model calls to agentic intelligence. In the company’s explanation, the new step understands a user’s goal, chooses the best path, and can invoke different tools and models as needed. The examples in the post include Web Search for research tasks and Veo for video tasks; the broader point is that more complex automation should need less manual configuration.
- The new agent step is available to all Opal users from launch.
- Google positions it as a way to build interactive experiences rather than one-way workflows.
- Opal now also adds Memory so an agent can retain names, preferences, and running lists across sessions.
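To ground the idea of goal-driven tool selection, here is a minimal conceptual sketch in Python. It is not Opal's implementation or API; the function name pick_tool and its keyword rules are purely illustrative assumptions about the kind of runtime routing the agent step performs.

```python
# Hypothetical sketch of goal-driven tool selection; not Opal's actual code.

def pick_tool(goal: str) -> str:
    """Rough stand-in for the agent step's routing decision.

    A real agent step would reason over the goal with a model; this keyword
    check only illustrates that the tool choice happens at runtime instead
    of being wired by hand on the workflow canvas.
    """
    text = goal.lower()
    if "video" in text:
        return "veo"            # video tasks, per Google's example
    if "research" in text or "find" in text:
        return "web_search"     # research tasks, per Google's example
    return "image_model"        # hypothetical default for image work

print(pick_tool("Research mid-century furniture trends"))   # web_search
print(pick_tool("Make a short video tour of the room"))     # veo
```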
Google’s before-and-after example is an interior design workflow. Before the update, an Opal could take a room image plus a style input and return a redesigned image in a mostly one-directional flow. With the new agent step, Google says the upgraded “Room Styler Opal” behaves more like a collaborative design partner, reaching back to the user when it needs input and selecting the best models and tools on its own.
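Google's "collaborative design partner" framing amounts to a flow that can pause, ask the user for missing input, and remember the answer. The sketch below is a hypothetical illustration of that loop under those assumptions, with a plain dict standing in for the Memory feature; run_room_styler and everything inside it are invented names, not part of Opal.

```python
# Hypothetical human-in-the-loop sketch; Opal exposes none of these functions.

def run_room_styler(room_image: str, memory: dict) -> str:
    """Illustrative agent loop: ask for missing input, remember preferences."""
    # Reuse a stored preference across runs, mimicking the new Memory feature.
    style = memory.get("preferred_style")
    if style is None:
        # A one-way flow would guess or fail here; an agentic flow reaches
        # back to the user for the missing detail.
        style = input("Which style should I use (e.g. Scandinavian)? ").strip()
        memory["preferred_style"] = style
    # A real agent step would now pick an image model and generate the redesign.
    return f"Redesigned {room_image} in {style} style"

memory: dict = {}
print(run_room_styler("living_room.jpg", memory))
print(run_room_styler("bedroom.jpg", memory))  # second run reuses the stored style
```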
That matters because many no-code AI workflow builders still depend on brittle step chaining. By pushing more orchestration into the agent layer, Google is trying to make multi-step workflows easier to author and more adaptive at runtime. The addition of Memory also points toward longer-lived, personalized agents rather than disposable one-off flows.
The competitive question is reliability. If the agent step consistently chooses the right tools and knows when to ask for human input, Opal becomes more than a prompt wrapper. If not, the underlying complexity simply moves from the user’s canvas into the system. Either way, the release is a clear sign that agentic product claims are shifting from demos into workflow builders.