HN reacted because fake stars are no longer just platform spam; they distort how credible AI and LLM repos appear. The thread converged on a practical answer: read the commits, issues, code, and evidence of real usage instead of treating stars as proof.
#trust
HN did not upvote this as just another AI backlash post. The thread dug into whether LLMs are changing the operating cost of search, work, education, and software trust, with commenters split between constrained use, stronger policy, and outright refusal.
HN did not stay on the word "steal" for long. The real argument was whether an AI agent can spend a user's paid LLM credits and act under their GitHub identity to do upstream maintenance without a hard opt-in, because once that happens the problem stops being clever automation and becomes one of consent.
In a February 4, 2026 post, Anthropic said Claude conversations will remain ad-free and not include unsolicited product placements. The company argues that conversational AI requires clearer trust incentives than ad-supported feed or search models.