How to Use Perplexity AI for Search

Ryan Flanagan
Sep 01, 2025

TL;DR: Perplexity combines an AI model with live web search, pulls from multiple sources, and shows citations so you can verify facts. This piece explains how the system works, the modes that matter, how to start, and a worked example on “AI training” research. It closes with the simple next step to build this into your team’s workflow. 

What makes Perplexity different?

Most AI tools generate answers from training data and do not expose sources, which makes verification slow. Perplexity’s Pro Search runs a targeted crawl, synthesises across many references, and attaches direct links for audit. You can also choose advanced models before a search. 

Perplexity supports model selection, including its own Sonar plus third-party models such as GPT-5, Claude Sonnet 4, and Gemini 2.5 Pro. Pick depth over speed when accuracy matters. 
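For teams that later want to script these searches, Perplexity also exposes an OpenAI-style chat completions API. The sketch below only builds the request payload for the Sonar model; the endpoint URL, model name, and field layout are assumptions based on the public API shape, so verify them against your account's documentation before relying on this.

```python
# Minimal sketch of a programmatic Perplexity query.
# Assumption: an OpenAI-style POST to the chat completions endpoint.
import json

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint

def build_query(question: str, model: str = "sonar") -> dict:
    """Build the JSON payload for a single grounded search question."""
    return {
        "model": model,  # "sonar" for fast grounded answers; swap in a deeper model when needed
        "messages": [
            {"role": "system", "content": "Be precise and cite sources."},
            {"role": "user", "content": question},
        ],
    }

payload = build_query(
    "What training methods are Australian companies using to build AI literacy in 2025?"
)
body = json.dumps(payload)  # ready to send with any HTTP client plus a Bearer token
```

Sending the request is left out deliberately: authentication and response handling depend on your plan, and the point here is only that the same precise-question discipline carries over to scripted use.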

For evidence-heavy work, use Focus options such as Academic to prioritise journals and peer-reviewed sources. This reduces noise from general web content. 

Why this matters for work

Research and reporting stall when staff open ten tabs to build one answer. Pro Search increases source depth and keeps a full trail of links, so your team can defend a claim without extra legwork. 

Threads keep context across follow-ups, and Spaces let teams organise work, set custom instructions, and collaborate in one place. This cuts switching cost across tools.

Enterprise Pro adds Internal Knowledge Search so you can combine the web with your organisation’s files in one query, useful for due diligence and board papers. 

How to get started, step-by-step

  • Open Perplexity. For quick checks, standard search is fine. For serious work, click Pro Search. 
  • Choose model. Select Sonar for fast grounded answers, or a third-party model when you want deeper reasoning. 
  • Set focus. Use Academic for peer-reviewed material, or keep Web for a general scan. 

  • Ask a precise question. Example: “What training methods are Australian companies using to build AI literacy in 2025?” Then run Pro Search. 

  • Audit the answer. Open the cited links. Check whether the key claims appear in those sources. If a claim is critical, confirm it in two independent references. 
  • Organise. Save the thread to a Space, add notes, and share with colleagues for review. 
  • Enterprise. If you have sensitive material, run Internal Knowledge Search and select Web + Org Files to blend sources. 
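The audit step above can be sketched in code: given an answer's key claims and the URLs cited for each, flag any claim that is not backed by at least two independent source domains. This is an illustrative helper written for this article, not a Perplexity feature; the claim strings and URLs are hypothetical.

```python
# Illustrative audit helper (not part of Perplexity): flag claims cited by
# fewer than `minimum` distinct source domains, so they get manual checking.
def unsupported_claims(claims_to_sources: dict[str, list[str]], minimum: int = 2) -> list[str]:
    """Return claims whose citations span fewer than `minimum` distinct domains."""
    flagged = []
    for claim, urls in claims_to_sources.items():
        # Crude domain extraction: the part after "scheme://".
        domains = {url.split("/")[2] for url in urls if "://" in url}
        if len(domains) < minimum:
            flagged.append(claim)
    return flagged

audit = unsupported_claims({
    "AI literacy programmes grew in 2025": [
        "https://example-gov.au/report",
        "https://example-journal.org/study",
    ],
    "Average time to competency is six weeks": [
        "https://example-blog.com/post",
    ],
})
# Only the single-source claim ends up flagged for manual verification.
```

Treat the flagged list as a to-do for the "confirm it in two independent references" rule, not as a verdict on the claim itself.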

Example: deep research on “AI training”

Task: prepare a board briefing on staff AI training options for the next quarter. The brief needs current methods, cost ranges, time to competency, and risks.

Action in Perplexity:

  • Run Pro Search with the query above.
  • Switch to Academic focus for peer-reviewed programmes and learning outcomes.
  • Add a second query for “enterprise AI training cost benchmarks ANZ 2025.”
  • Save both threads into a Space called “AI Training Q2 Briefing.”

What you get: a synthesised answer with linked sources across policy pages, HR case studies, and education journals. You keep the citations, so each claim in your slide or memo points back to a source your board can check. If you need internal policy context, run the same query with Internal Knowledge Search to include your own training catalogues or LMS exports. 

Results you should expect

  1. Faster briefing prep, because the links are attached to the summary.
  2. More defensible decisions, because assertions are traceable to sources.
  3. Less time spent chasing facts, more time spent applying them. 
Known limits and how to handle them

Perplexity improves transparency, not infallibility. Always scan sources for quality, and be aware that some linked pages on the web may themselves be AI-generated. Double-check key claims via independent, reputable sites. 

Finding and verifying information is a core skill. Perplexity makes it practical at pace. If you want your team to use tools like this safely and effectively, enrol in the AI Fundamentals Masterclass for a structured starting point and hands-on workflow practice.

FAQ

Q: Where do I see sources, and how many will Pro Search surface?
A: Sources appear next to the answer. Pro Search performs a deeper crawl and can return many more references than standard search. 

Q: Which model should I pick?
A: Start with Sonar for grounded search. Switch to an advanced model such as GPT-5, Claude Sonnet 4, or Gemini 2.5 Pro when the task needs more reasoning. Test for your use case. 

Q: How do I restrict to peer-reviewed material?
A: Use the Academic focus to prioritise journals, databases, and scholarly publications. 

Q: Can I organise multi-week research with teammates?
A: Yes. Use Spaces to group threads, set instructions, and collaborate. Keep key decisions and citations in one place. 

Q: Can it search our internal files with the web?
A: Yes, with Internal Knowledge Search in Pro and Enterprise Pro. Choose Web, Org Files, or Web + Org Files. Check your data policy first. 

Q: Is there a mobile or browser option?
A: Yes. Perplexity is available on web, mobile apps, and extensions, with threads for context across sessions. 

Q: How do I avoid low-quality or AI-generated sources?
A: Prefer official docs, reputable news, and journals. If a linked page looks synthetic, verify the claim via a second, established source before you use it. 

Q: When should I not use Perplexity?
A: For confidential material you cannot upload, or topics where authoritative databases are paywalled and unavailable. In those cases, collect citations manually from primary sources.