GitHub to use Copilot consumer interaction data for AI training from April 24 unless users opt out
Original: Updates to our Privacy Statement and Terms of Service: How we use your data
On March 25, 2026, GitHub published updates to its Privacy Statement and Terms of Service that materially change the default data posture for Copilot consumer plans. Beginning on April 24, 2026, GitHub says it may collect and use interaction data from Copilot Free, Pro, and Pro+ users to develop, train, and improve AI models unless those users opt out in settings. GitHub explicitly said that Copilot Business and Copilot Enterprise accounts are not affected by this update.
The scope of data named in the announcement is broad. GitHub listed inputs, outputs, code snippets, and associated context as data that may be used for AI training and product improvement. It also updated the affiliate-sharing language in its privacy policy so that GitHub affiliates, including Microsoft, may use shared data for additional purposes related to artificial intelligence and machine learning, subject to applicable law and GitHub's privacy commitments. GitHub said opt-out preferences and enterprise protections travel with the data when it is shared with affiliates.
An important nuance is how the company describes private repositories. GitHub said it will not use private repository content at rest to train AI models. However, it also clarified that if a user provides private repository content as input to an AI feature, that interaction data may be used to improve AI features unless the user opts out. In practical terms, the stored repository itself is not the training corpus, but prompts, suggestions, and related code context generated during Copilot use can be.
The update also reorganizes the legal framing around AI features. GitHub added or refreshed definitions such as AI Feature, Input, Output, and Affiliate, created a dedicated Terms of Service section for AI features and training, and said that for users in the EEA and UK, AI development is treated as a legitimate interest. At the same time, GitHub emphasized that users continue to own their inputs and outputs and that third-party AI model providers will not receive this data for their own independent training.
For developers, this is a meaningful policy change rather than a minor wording cleanup. Anyone using Copilot on a personal plan now needs to review whether their prompts, code snippets, and surrounding context should remain in the default collection path. More broadly, the change shows that competition in AI coding tools is no longer only about model quality and workflow features. Default training rules, opt-out controls, and trust posture are becoming product-level differentiators too.