r/MachineLearning Post Maps 350+ Competition Trends from 2025

Original: [R] Analysis of 350+ ML competitions in 2025

Feb 20, 2026 · By Insights AI · 1 min read · Source: Reddit

A practical snapshot from competition data

A widely upvoted thread on r/MachineLearning shared a year-end review of ML competition outcomes. The author, who runs mlcontests.com, tracked around 400 competitions in 2025 across Kaggle, AIcrowd, Zindi, Codabench, Tianchi, and other platforms, and collected first-place solution details for 73 of them.

That matters because it reflects choices made under real leaderboard pressure rather than isolated benchmark claims. For engineering teams, these summaries surface what is actually being used to win under hard constraints.

Signals highlighted in the Reddit post

  • Tabular competitions: GBDTs (especially XGBoost/LightGBM/CatBoost) remain dominant, but AutoGluon and TabPFN appeared in some winning solutions.
  • Compute budgets: at the high end, some teams used very large GPU allocations; at the same time, notable placements still came from low-cost or free-compute setups.
  • Language/reasoning tasks: Qwen2.5/Qwen3 were reported as frequent winners; BERT-style usage was described as much lower than in prior years.
  • Efficiency stack: vLLM and Unsloth appeared as common choices in text pipelines, with both LoRA and full fine-tuning approaches represented.
  • Vision/audio: transformer-based vision solutions gained ground; speech competitions often used Whisper fine-tuning.

Why this is useful beyond competitions

Competition settings are not identical to production systems, but they are useful leading indicators of where tooling and model workflows are heading. One key takeaway from the post is divergence: both high-budget scaling and cost-conscious optimization produced wins in 2025, which means there is no single "correct" stack for every team.

The post’s value is its operational angle. It helps practitioners compare where effort moved in 2025: model families, inference/training tooling, and the balance between brute-force compute and efficiency engineering.

Source links: Reddit post, Full report link shared by OP

