A detailed r/LocalLLaMA experiment claims that duplicating layer blocks located at roughly 50-56% of network depth consistently degrades or outright collapses model quality across multiple architectures. The post stands out because it compares dense, hybrid, MoE, and layer-transplant setups, all run through a fully local MLX workflow; a rough sketch of the layer-duplication operation follows.
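To make the operation concrete, here is a minimal sketch of what "duplicating a mid-depth block of layers" could look like in an MLX workflow. It assumes the mlx_lm library and a Llama-style model whose transformer blocks live in `model.model.layers` (true for Llama-family models in mlx_lm; other architectures lay this out differently). The model name and the exact 50-56% band are illustrative, not the post author's specific setup.

```python
# Hypothetical sketch: duplicate the ~50-56%-depth slice of transformer
# blocks in an mlx_lm model. Assumes a Llama-style layout where the block
# list is at model.model.layers; not the exact script from the Reddit post.
from mlx_lm import load, generate

# Illustrative model choice; any mlx_lm-compatible checkpoint works the same way.
model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")

layers = model.model.layers              # list of transformer blocks
n = len(layers)
start, end = int(0.50 * n), int(0.56 * n)  # the mid-depth band under test

# Insert a second pass over the mid-depth slice (shared weights, as in a
# frankenmerge): [..., L_start..L_end, L_start..L_end, ...]
model.model.layers = layers[:end] + layers[start:end] + layers[end:]

# Quick sanity check: generate with the modified stack and eyeball the output.
print(generate(model, tokenizer, prompt="The capital of France is", max_tokens=32))
```

Because the duplicated slice reuses the same weight tensors rather than copying them, memory overhead is small; the experiment's claim is that even this cheap intervention reliably damages output quality when the slice sits in that particular depth band.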