AI Reddit
A new paper discussed in r/MachineLearning argues that unofficial model-access providers can quietly substitute models and distort both research and production results.
A prominent r/MachineLearning thread highlighted the preprint arXiv 2603.01919, which audits shadow APIs that claim to provide GPT-5 and Gemini-2.5 access. The audit reports large performance drift relative to the official endpoints, unstable safety behavior, and frequent failures of model identity verification.