A front-page Hacker News discussion resurfaced an EE Times interview outlining how AMD wants ROCm, Triton, OneROCm, and an open-source release model to chip away at CUDA dependence. The real test is not a headline compatibility claim, but whether stacks like vLLM and SGLang work in a boring, dependable way.
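Concretely, "dependable" looks less like a slide deck and more like a smoke test: a stock vLLM install on a ROCm box loading a model and generating with no vendor-specific branches in user code. A minimal sketch of that kind of check, assuming vLLM is installed with ROCm support and using an arbitrary small model:

```python
from vllm import LLM, SamplingParams

# Minimal smoke test: on a healthy ROCm build of vLLM this loads a small
# model and generates text with the same code you would run on CUDA.
# The model name is just an example, not anything from the interview.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(max_tokens=16, temperature=0.0)

outputs = llm.generate(["ROCm smoke test:"], params)
for out in outputs:
    print(out.outputs[0].text)
```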
#rocm
AI Hacker News Apr 13, 2026 2 min read
LLM Reddit Mar 23, 2026 2 min read
A benchmark thread on r/LocalLLaMA compared ROCm 7 nightlies against Vulkan for llama.cpp on an AMD MI50, arguing that Vulkan wins on short dense workloads while ROCm pulls ahead on long-context and some MoE scenarios.
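A rough sketch of how that kind of comparison might be reproduced with `llama-bench`, assuming two separate llama.cpp builds (one compiled for ROCm/HIP, one for Vulkan); the binary paths and model file below are placeholders:

```python
import subprocess

# Placeholder paths: two llama.cpp builds, one compiled for ROCm/HIP and
# one for Vulkan; swap in your own locations and GGUF model.
BUILDS = {
    "rocm":   "./build-rocm/bin/llama-bench",
    "vulkan": "./build-vulkan/bin/llama-bench",
}
MODEL = "model.gguf"

# Short vs long prompt processing, plus a generation-only run, to see
# where each backend pulls ahead.
CASES = [
    ("short prompt", ["-p", "512",  "-n", "0"]),
    ("long prompt",  ["-p", "8192", "-n", "0"]),
    ("generation",   ["-p", "0",    "-n", "128"]),
]

for backend, binary in BUILDS.items():
    for label, extra in CASES:
        cmd = [binary, "-m", MODEL, "-ngl", "99"] + extra
        print(f"== {backend} / {label}")
        subprocess.run(cmd, check=True)
```

The numbers worth comparing are the prompt-processing and token-generation tokens-per-second rows that `llama-bench` prints for each case.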
LLM Reddit Mar 8, 2026 2 min read
A LocalLLaMA thread reported a large prompt-processing speedup for Qwen3.5-27B on an RX 9070 XT after lowering llama.cpp's `--ubatch-size` to 64. The interesting part is not a universal magic number, but the reminder that prompt ingestion and token generation can respond very differently to `n_ubatch` tuning.
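A minimal sweep to see that split for yourself, assuming a local llama.cpp build and an arbitrary GGUF model (both paths are placeholders); `llama-bench` reports prompt-processing and token-generation throughput separately, which is exactly the distinction the thread is about:

```python
import subprocess

LLAMA_BENCH = "./build/bin/llama-bench"  # placeholder path to a llama.cpp build
MODEL = "model.gguf"                     # placeholder GGUF model

# Sweep the micro-batch size; prompt ingestion (-p) and token generation
# (-n) often respond very differently to this knob.
for ubatch in (64, 128, 256, 512):
    cmd = [
        LLAMA_BENCH,
        "-m", MODEL,
        "-ngl", "99",        # offload all layers to the GPU
        "-ub", str(ubatch),  # --ubatch-size value under test
        "-p", "4096",        # prompt-processing workload
        "-n", "128",         # generation workload
    ]
    print(f"== n_ubatch={ubatch}")
    subprocess.run(cmd, check=True)
```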