Google introduces the Developer Knowledge API and MCP Server for Gemini Code Assist workflows


Mar 13, 2026 · By Insights AI · 2 min read

Google introduced the Developer Knowledge API and an open-source MCP Server on February 4, 2026 as a way to make Gemini Code Assist more aware of a team’s actual technical context. The goal is not just better model output in the abstract. It is easier access to the documents developers already depend on, including internal docs, architecture decision records, code snippets, and public technical URLs.

According to the announcement, developers can configure knowledge sources from a list of URLs and then use the API or the MCP Server to search, retrieve, and apply relevant information inside an IDE or an AI-agent workflow. That reduces the amount of custom retrieval plumbing teams usually need when they want coding assistants to work against real project knowledge instead of generic model priors.
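The announcement describes the flow but not the wire format, so the following is only a minimal sketch of that configure-then-query pattern. The field names (`knowledgeSources`, `query`, `maxResults`) and the example URLs are assumptions for illustration, not the API's documented schema.

```python
import json

def build_knowledge_config(source_urls):
    """Assemble a knowledge-source configuration from a list of URLs.

    Hypothetical shape: the real API's schema is not published in the
    announcement; this only mirrors the 'list of URLs' idea it describes.
    """
    return {"knowledgeSources": [{"url": u} for u in source_urls]}

def build_search_request(query, max_results=5):
    """Assemble a search request body against the configured sources."""
    return {"query": query, "maxResults": max_results}

# Configure internal docs and ADRs as knowledge sources, then search them.
config = build_knowledge_config([
    "https://internal.example.com/architecture-decision-records",
    "https://internal.example.com/style-guide",
])
request = build_search_request("How do we handle retries in the payments service?")
print(json.dumps(config, indent=2))
print(json.dumps(request, indent=2))
```

The point of the sketch is the shape of the integration: source configuration happens once per team, while search requests are issued per question by the IDE or agent.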

What Google is adding

  • A Developer Knowledge API that programmatically searches and surfaces relevant documentation.
  • An open-source MCP Server that exposes the same knowledge layer to IDEs and AI agents.
  • A simpler path to tailoring Gemini Code Assist with internal resources and public URLs.
  • Less manual integration work for teams building custom developer workflows.
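Because the server speaks MCP, an agent reaches it through standard JSON-RPC 2.0 messages rather than a bespoke SDK. The sketch below builds an MCP `tools/call` request; the method framing follows the MCP specification, but the tool name `search_knowledge` and its arguments are assumptions, since the announcement does not list the server's actual tools.

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 'tools/call' request as MCP clients send it.

    'tools/call' with {"name": ..., "arguments": ...} params is the standard
    MCP tool-invocation method; the specific tool invoked here is hypothetical.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# An agent asking the knowledge server a question about internal conventions.
msg = mcp_tool_call(1, "search_knowledge", {"query": "retry policy for payments"})
print(json.dumps(msg, indent=2))
```

Any MCP-capable editor or agent framework can emit this same message, which is the interoperability argument the next paragraphs make.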

The significance is larger than a single API launch. As coding assistants become more capable, the limiting factor often shifts from model quality to context quality. Developers need answers grounded in the conventions, architecture, and documentation of their own codebases. Google’s new knowledge layer is aimed squarely at that operational reality.

The MCP angle also matters. Enterprises rarely standardize on one editor, one agent framework, or one internal toolchain. By pairing a proprietary API with an open-source MCP Server, Google is signaling that retrieval and interoperability are becoming core infrastructure for AI development tools. That makes the announcement relevant not just for Gemini users, but for any team thinking seriously about how agents should consume trusted engineering knowledge.

Source: Google Developers Blog




© 2026 Insights. All rights reserved.