Perplexity Adds GPT-5.4 and GPT-5.4 Thinking for Pro and Max Subscribers
What Perplexity Announced
On March 5, 2026 (UTC), Perplexity posted on X that GPT-5.4 and GPT-5.4 Thinking are now available to Pro and Max subscribers. The wording is direct: this is a live paid-tier availability update, not a preview waitlist message.
At the time it was crawled via public mirror endpoints, the post had more than 1,500 likes and over 84,000 views, suggesting strong early attention from users who want frontier-model access inside a search-first product.
Why It Matters
Perplexity competes by combining web retrieval, source grounding, and LLM response generation in one workflow. Adding GPT-5.4 class models strengthens its premium position for users who compare model quality across multiple AI assistants.
For teams, the practical question is not just model branding. It is whether answer quality, reasoning depth, and time-to-output improve in daily research tasks. This update creates a clear test point for organizations already using Pro or Max seats.
What Is Still Unknown
The X post does not provide detailed benchmark methodology or per-task performance breakdowns. Teams considering broad adoption should run task-specific evaluations on accuracy, consistency, latency, and cost before standardizing workflows.
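Such an evaluation can start small. The sketch below, a minimal and entirely illustrative harness (none of these names come from Perplexity or OpenAI), assumes you have already collected model responses along with per-call latency and cost, and aggregates the metrics named above: accuracy, latency, and cost.

```python
# Minimal evaluation-harness sketch. Assumes you supply your own
# (prompt, expected, actual, latency, cost) records from real model calls;
# all names here are hypothetical, not a Perplexity or OpenAI API.
from dataclasses import dataclass
from statistics import mean


@dataclass
class EvalRecord:
    prompt: str
    expected: str      # reference answer for the task
    actual: str        # model's answer
    latency_s: float   # wall-clock seconds for the response
    cost_usd: float    # per-call cost, if metered


def summarize(records: list[EvalRecord]) -> dict:
    """Aggregate exact-match accuracy, mean latency, and total cost."""
    correct = sum(
        r.expected.strip().lower() == r.actual.strip().lower()
        for r in records
    )
    return {
        "accuracy": correct / len(records),
        "mean_latency_s": mean(r.latency_s for r in records),
        "total_cost_usd": sum(r.cost_usd for r in records),
    }


if __name__ == "__main__":
    records = [
        EvalRecord("2+2?", "4", "4", 0.8, 0.002),
        EvalRecord("Capital of France?", "Paris", "paris", 1.1, 0.002),
        EvalRecord("Largest planet?", "Jupiter", "Saturn", 0.9, 0.002),
    ]
    print(summarize(records))  # accuracy 2/3 on this toy set
```

Exact-match scoring is deliberately crude; for open-ended research tasks you would swap in a task-appropriate grader, but the same accuracy/latency/cost aggregation applies.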
Related Articles
GitHub said on March 5, 2026 that GPT-5.4 is now generally available and rolling out in GitHub Copilot. The company claims early testing showed higher success rates plus stronger logical reasoning and task execution on complex, tool-dependent developer workflows.
Cursor announced GPT-5.4 availability on March 5, 2026, saying the model feels more natural and assertive and currently leads its internal benchmarks. The update underscores rapid model-refresh cycles in AI coding tools.
OpenAI announced GPT-5.4 on March 5, 2026, adding a new general-purpose model and GPT-5.4 Pro with stronger computer use, tool search efficiency, and benchmark improvements over GPT-5.2.