Perplexity launches the Secure Intelligence Institute for frontier AI security research

Original: Today, we're launching the Secure Intelligence Institute. SII partners with top cryptography, security, and ML teams to advance security research and industry collaboration. It is led by Dr. Ninghui Li at Purdue. https://www.perplexity.ai/secure-intelligence-institute View original →

Read in other languages: 한국어日本語
AI Apr 1, 2026 By Insights AI 2 min read 1 views Source

What Perplexity announced

On March 31, 2026, Perplexity said it is launching the Secure Intelligence Institute, or SII, to study the security, trustworthiness, and practical defense of leading-edge AI systems. The company presented the institute as both a research program and an industry collaboration vehicle rather than a one-off paper initiative.

The timing is notable because frontier AI security is moving from general discussion into specific system design questions around agents, browsers, tools, and autonomous workflows. Perplexity is framing SII as a place where those questions can be studied in a way that feeds back into production systems, not just into policy commentary.

What the institute page says

Perplexity says SII will advance AI security through a mix of internal research and collaboration with the academic community. The institute page says the work is informed by Perplexity’s experience operating general-purpose AI systems used by millions of users and thousands of enterprises across controlled and open-world environments. That detail matters because it grounds the institute in operational exposure, not only in theoretical analysis.

The page also identifies Dr. Ninghui Li, the Samuel D. Conte Professor of Computer Science at Purdue University, as SII’s inaugural director. Perplexity further says the research network includes academic groups across cryptography, usable privacy and security, robust machine learning, and trustworthy AI, and names researchers such as Dan Boneh and Neil Gong on the network page.

Early research signals

The institute page already points to concrete outputs. One is BrowseSafe, which Perplexity describes as an open-source benchmark and content-detection model for the emerging AI-native web, including more than 14,700 real-world attack scenarios across 14 harm categories. Another is the NIST Agent Security RFI paper.

The related arXiv paper says it is a lightly adapted version of Perplexity’s response to NIST/CAISI Request for Information 2025-0035. The abstract argues that AI agents change assumptions around code-data separation, authority boundaries, and execution predictability, and maps attack surfaces across tools, connectors, hosting boundaries, and multi-agent coordination. It specifically highlights indirect prompt injection, confused-deputy behavior, and cascading failures in long-running workflows.

Why this matters

The larger signal is that AI security work is becoming more institution-like inside product companies. Instead of scattering security papers across separate teams, Perplexity is packaging benchmark development, policy engagement, production engineering, and academic collaboration under one named institute.

An inference from the launch is that Perplexity wants to make security a first-class competitive and governance narrative as agentic products expand. The hard part will be proving that the institute yields reusable defenses and meaningful standards, not just branding. Still, the launch is notable because it connects live-product operating experience with published benchmarks, public-policy input, and a research network that spans both academia and industry.

Sources: Perplexity X post · Secure Intelligence Institute · NIST agent security paper

Share: Long

Related Articles

AI Mar 13, 2026 2 min read

Perplexity has introduced Personal Computer, an always-on local agent system that runs through a continuously operating Mac mini and exposes files, apps, and sessions to Perplexity Computer and the Comet Assistant. The company is pitching it as a persistent AI operating system with human approval, logging, and a kill switch for sensitive actions.

Comments (0)

No comments yet. Be the first to comment!

Leave a Comment

© 2026 Insights. All rights reserved.