Google introduces the Developer Knowledge API and MCP Server for Gemini Code Assist workflows
Original: Introducing the Developer Knowledge API and MCP Server
Google introduced the Developer Knowledge API and an open-source MCP Server on February 4, 2026, as a way to make Gemini Code Assist more aware of a team's actual technical context. The goal is not just better model output in the abstract. It is easier access to the documents developers already depend on, including internal docs, architecture decision records, code snippets, and public technical URLs.
According to the announcement, developers can configure knowledge sources from a list of URLs and then use the API or the MCP Server to search, retrieve, and apply relevant information inside an IDE or an AI-agent workflow. That reduces the amount of custom retrieval plumbing teams usually need when they want coding assistants to work against real project knowledge instead of generic model priors.
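The announcement does not document the API's endpoints or request schema, so the following is only a rough sketch of what a programmatic search against configured sources might look like. The endpoint URL, field names, and response shape below are assumptions for illustration, not Google's published interface.

```python
# Hypothetical sketch of querying the Developer Knowledge API.
# The endpoint URL, request fields, and response shape are placeholders
# invented for illustration; consult the official documentation for
# the real interface.
import requests

ENDPOINT = "https://developerknowledge.example.googleapis.com/v1/search"  # assumed
API_KEY = "YOUR_API_KEY"

def search_knowledge(query: str, source_urls: list[str]) -> list[dict]:
    """Ask the knowledge layer for passages relevant to `query`,
    scoped to a configured list of documentation URLs."""
    resp = requests.post(
        ENDPOINT,
        params={"key": API_KEY},
        json={"query": query, "sources": source_urls},  # assumed schema
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])  # assumed response field

# Example: ground an assistant's answer in team ADRs and a style guide.
results = search_knowledge(
    "What is our convention for naming feature flags?",
    [
        "https://internal.example.com/adrs/",
        "https://internal.example.com/style-guide",
    ],
)
for r in results:
    print(r)
```

Whatever the final surface looks like, the workflow it implies is the useful part: configure trusted sources once, then let tooling query them on demand instead of pasting documentation into prompts.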
What Google is adding
- A Developer Knowledge API that programmatically searches and surfaces relevant documentation.
- An open-source MCP Server that exposes the same knowledge layer to IDEs and AI agents (see the sketch after this list).
- A simpler path to tailoring Gemini Code Assist with internal resources and public URLs.
- Less manual integration work for teams building custom developer workflows.
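To make the MCP Server bullet concrete, here is a minimal generic MCP server built with the official MCP Python SDK's FastMCP helper. It is not Google's open-source server; the server name, the `search_docs` tool, and its stubbed body are illustrative stand-ins for how a knowledge layer gets exposed to MCP-capable IDEs and agents.

```python
# Generic MCP server sketch using the official MCP Python SDK
# (pip install "mcp[cli]"). This is NOT Google's open-source server;
# the tool name and behavior are invented for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("team-knowledge")

@mcp.tool()
def search_docs(query: str) -> str:
    """Search the team's configured documentation sources and return
    the most relevant passages as plain text."""
    # A real implementation would call a retrieval backend here, such
    # as the Developer Knowledge API (see the earlier sketch), and
    # format its results for the agent.
    return f"No retrieval backend wired up yet for: {query!r}"

if __name__ == "__main__":
    # stdio transport is what most IDE and agent MCP clients expect.
    mcp.run()
```

Because MCP is a shared protocol, a server like this works with any compliant client, which is precisely the interoperability argument the announcement leans on.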
The significance is larger than a single API launch. As coding assistants become more capable, the limiting factor often shifts from model quality to context quality. Developers need answers grounded in the conventions, architecture, and documentation of their own codebases. Google’s new knowledge layer is aimed squarely at that operational reality.
The MCP angle also matters. Enterprises rarely standardize on one editor, one agent framework, or one internal toolchain. By pairing a proprietary API with an open-source MCP Server, Google is signaling that retrieval and interoperability are becoming core infrastructure for AI development tools. That makes the announcement relevant not just for Gemini users, but for any team thinking seriously about how agents should consume trusted engineering knowledge.
Source: Google Developers Blog