LocalLLaMA did not just cheer the headline number. The moment 80 tok/s on a 218k context window appeared, the thread shifted to prompt length, quantization tradeoffs, and whether the vLLM setup actually holds up in practice.