This guide walks you through building a repeatable, AI-powered competitor analysis workflow from scratch. By the end, you will have a live system that automatically gathers competitor data, summarises insights, and delivers a structured report into Notion — taking roughly 4–6 hours to set up the first time.
What You'll Build
- A Perplexity API pipeline that pulls fresh competitor intelligence on a weekly schedule
- A Make (formerly Integromat) automation that routes raw data through an AI summarisation step
- A structured Notion database where every competitor gets its own insight card, updated automatically
- A Slack digest that delivers a plain-English summary to your marketing channel every Monday morning
Prerequisites
- A Perplexity API key (Pro plan, US$20/month as of 2026)
- A Make account (free tier works for testing; Core plan recommended for production)
- A Notion workspace with admin access
- A Slack workspace with a channel for marketing updates
- Basic comfort reading JSON — no coding required
Step 1: Define Your Competitor Set and Research Scope
Before touching any tool, you need clarity on what you are tracking. Vague inputs produce vague outputs — even with AI.
How many competitors should you track?
Aim for 3–8 direct competitors. More than eight makes weekly reports too noisy to act on. Include at least one aspirational brand — a company slightly ahead of you — and one emerging challenger.
For each competitor, define three research dimensions:
- Positioning & messaging — How do they describe their product? What pain points do they lead with?
- Content & SEO activity — What topics are they publishing? Which pages rank?
- Offer & pricing changes — Have their plans, bundles, or promotions changed?
Write these dimensions down. You will turn them into Perplexity prompt templates in Step 3.
Pro tip: If you are also evaluating your own brand health alongside competitor data, run the free brand health score assessment from Lenka Studio first. It gives you a baseline score across positioning, visibility, and trust — so you know exactly which competitor gaps matter most to close.
Step 2: Set Up Your Notion Competitor Database
Notion acts as your single source of truth. Every automated update lands here.
What fields does the database need?
Create a new Notion database called Competitor Intelligence. Add these properties:
- Name (Title) — Competitor brand name
- Website (URL)
- Last Updated (Date)
- Positioning Summary (Text)
- Content Activity (Text)
- Offer Changes (Text)
- Overall Threat Level (Select: Low / Medium / High)
- Week Tag (Text) — e.g. "2026-W18"
Create one card per competitor manually now. The automation will update existing cards, not create duplicates.
Common pitfall: If you let the automation create new cards every week, your database becomes unreadable within a month. Always use the Update Page action in Make, not Create Page.
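The Week Tag follows ISO 8601 week numbering. If you ever script against this database (for example via the Notion API), a small helper keeps the tag format consistent — a minimal Python sketch; the function name is my own, not part of the workflow:

```python
from datetime import date

def week_tag(d: date) -> str:
    """Format a date as an ISO week tag like '2026-W18'."""
    year, week, _ = d.isocalendar()
    return f"{year}-W{week:02d}"

# Monday of ISO week 18, 2026:
print(week_tag(date(2026, 4, 27)))  # → 2026-W18
```

Zero-padding the week number (`W01`, not `W1`) keeps the tags sortable as plain text in Notion.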
Step 3: Write Your Perplexity Prompt Templates
Perplexity's API uses its sonar-pro model (as of 2026) with real-time web search built in. This is what separates it from a static ChatGPT prompt — it pulls live data.
What does a good competitor prompt look like?
Use this structure for each research dimension. Replace the bracketed variables in Make later.
You are a competitive intelligence analyst.
Research {{competitor_name}} ({{competitor_website}}) as of today.
Focus only on: {{research_dimension}}.
Return a structured summary in 3 bullet points.
Each bullet must be under 40 words.
Do not speculate. Only report what you can verify from their website, press releases, or credible news sources published in the last 30 days.
End with a one-sentence threat assessment for a {{our_industry}} business.
Save three versions of this prompt — one per research dimension. Store them as text fields inside Make for easy editing.
Pro tip: The instruction "published in the last 30 days" is critical. Without it, Perplexity may surface older news as if it were current. Always constrain recency.
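When refining prompts, it helps to render the template locally before pasting it into a Make module. A minimal Python sketch — the placeholder names mirror the template above, and the competitor values are hypothetical:

```python
# The prompt template from Step 3, with Make's {{...}} placeholders
# rewritten as Python format fields.
PROMPT_TEMPLATE = """You are a competitive intelligence analyst.
Research {competitor_name} ({competitor_website}) as of today.
Focus only on: {research_dimension}.
Return a structured summary in 3 bullet points.
Each bullet must be under 40 words.
Do not speculate. Only report what you can verify from their website, press releases, or credible news sources published in the last 30 days.
End with a one-sentence threat assessment for a {our_industry} business."""

def render_prompt(competitor_name: str, competitor_website: str,
                  research_dimension: str, our_industry: str) -> str:
    """Fill the template with one competitor's details."""
    return PROMPT_TEMPLATE.format(
        competitor_name=competitor_name,
        competitor_website=competitor_website,
        research_dimension=research_dimension,
        our_industry=our_industry,
    )

prompt = render_prompt("Acme Corp", "https://acme.example",
                       "Positioning & messaging", "B2B SaaS")
print(prompt.splitlines()[1])  # → Research Acme Corp (https://acme.example) as of today.
```

Rendering locally lets you paste a fully expanded prompt straight into Perplexity's web UI to preview what the API will see.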
Step 4: Build the Make Automation Scenario
This is where everything connects. The scenario runs on a weekly schedule and loops through each competitor.
How do you structure the Make scenario?
Follow this module sequence inside Make:
- Schedule trigger — Set to run every Monday at 07:00 in your local timezone (e.g. Sydney, Singapore, Toronto, or New York).
- Notion: Search Pages — Query your Competitor Intelligence database. Filter: all pages where Threat Level is not empty (this excludes archived entries).
- Iterator — Loop through each Notion page returned.
- HTTP: Make a Request (×3) — One request per research dimension. Call the Perplexity API with your prompt template. Use the Name and Website fields from the Notion page as dynamic variables.
Here is the HTTP module configuration for each Perplexity call:
{
  "URL": "https://api.perplexity.ai/chat/completions",
  "Method": "POST",
  "Headers": {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
  },
  "Body": {
    "model": "sonar-pro",
    "messages": [
      {
        "role": "user",
        "content": "YOUR_PROMPT_TEMPLATE_HERE"
      }
    ],
    "max_tokens": 300,
    "temperature": 0.2
  }
}
Set temperature to 0.2. Lower temperature means more factual, less creative responses — exactly what you want for competitive intelligence.
- Notion: Update Page — Write the three API responses into the corresponding text fields. Set Last Updated to today's date. Update Week Tag to the current ISO week.
What if an API call fails mid-loop?
Add an Error Handler module after each HTTP call. Route errors to a Make data store that logs the competitor name and error code. This prevents one failed call from breaking the entire loop. Review the error log after the first three runs — Perplexity's rate limits (60 requests per minute on the Pro plan) rarely cause issues for fewer than eight competitors, but it is good practice.
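Before wiring up the HTTP module, you can sanity-check the request body locally. This Python sketch assembles the same payload the module will send — the API key and prompt are placeholders; the endpoint, model, and parameters match the configuration above:

```python
import json

def build_perplexity_request(prompt: str, api_key: str) -> dict:
    """Assemble the chat-completions request that Make's HTTP module sends."""
    return {
        "url": "https://api.perplexity.ai/chat/completions",
        "method": "POST",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": {
            "model": "sonar-pro",
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 300,
            "temperature": 0.2,
        },
    }

req = build_perplexity_request("Research Acme Corp as of today.", "YOUR_API_KEY")
print(json.dumps(req["body"], indent=2))  # valid JSON, ready to paste into Make
```

Serialising the body through `json.dumps` catches quoting mistakes (stray backslashes, unescaped newlines) before they ever reach the API.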
Step 5: Add the AI Threat Level Scoring Step
After the three research calls complete for one competitor, add a fourth HTTP call that asks the model to synthesise a threat level score.
{
  "model": "sonar-pro",
  "messages": [
    {
      "role": "user",
      "content": "Based on this competitive intelligence summary:\n\nPositioning: {{positioning_summary}}\nContent Activity: {{content_summary}}\nOffer Changes: {{offer_summary}}\n\nRate the competitive threat to a {{our_industry}} business as Low, Medium, or High.\nReturn only one word: Low, Medium, or High."
    }
  ],
  "max_tokens": 5,
  "temperature": 0
}
Parse the response text and map it to your Notion Threat Level select field. Setting max_tokens: 5 forces a single-word answer and saves API credits.
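Even with max_tokens: 5, the model occasionally wraps the word in whitespace, punctuation, or markdown. A defensive parser keeps the Notion select field clean — a Python sketch; defaulting unrecognised replies to Medium (for manual review) is my suggested convention, not part of the workflow above:

```python
def parse_threat_level(raw: str) -> str:
    """Normalise the model's reply to exactly 'Low', 'Medium', or 'High'."""
    # Strip whitespace, then common punctuation/markdown from both ends.
    cleaned = raw.strip().strip(".*# ").capitalize()
    # Fall back to Medium so the select field is never left blank.
    return cleaned if cleaned in {"Low", "Medium", "High"} else "Medium"

print(parse_threat_level("  high.\n"))  # → High
```

In Make, the same logic can be approximated with a Switch or text-function step before the Update Page module.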
Step 6: Build the Slack Digest Module
After the Iterator finishes all competitors, add an Aggregator module to collect all summaries, then send a single Slack message.
How should the Slack message be formatted?
Use Slack's Block Kit format. A clean structure looks like this:
{
  "blocks": [
    {
      "type": "header",
      "text": { "type": "plain_text", "text": "🔍 Weekly Competitor Digest — {{week_tag}}" }
    },
    {
      "type": "section",
      "text": { "type": "mrkdwn", "text": "{{aggregated_summaries}}" }
    },
    {
      "type": "actions",
      "elements": [
        {
          "type": "button",
          "text": { "type": "plain_text", "text": "View Full Report in Notion" },
          "url": "YOUR_NOTION_DATABASE_URL"
        }
      ]
    }
  ]
}
Keep the Slack message to headline-level insights only. The full detail lives in Notion. This respects your team's attention and gets the digest actually read.
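If you want to test the Block Kit payload before connecting Slack (or later move the digest out of Make), the same structure can be assembled in Python. A sketch with hypothetical inputs; the block shapes mirror the JSON above:

```python
import json

def build_digest_blocks(week_tag: str, summaries: str, notion_url: str) -> dict:
    """Assemble the Block Kit payload for the weekly digest message."""
    return {
        "blocks": [
            {
                "type": "header",
                "text": {"type": "plain_text",
                         "text": f"🔍 Weekly Competitor Digest — {week_tag}"},
            },
            {
                "type": "section",
                "text": {"type": "mrkdwn", "text": summaries},
            },
            {
                "type": "actions",
                "elements": [
                    {
                        "type": "button",
                        "text": {"type": "plain_text",
                                 "text": "View Full Report in Notion"},
                        "url": notion_url,
                    }
                ],
            },
        ]
    }

payload = build_digest_blocks("2026-W18", "*Acme Corp* repriced its Pro plan.",
                              "https://notion.so/your-database")
print(json.dumps(payload, indent=2))
```

Pasting the printed JSON into Slack's Block Kit Builder previews exactly how the Monday message will render.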
Step 7: Test, Validate, and Refine
Run the scenario manually before enabling the weekly schedule.
What does a successful test look like?
- All Notion cards show updated Last Updated dates
- Each text field contains 3 bullet points — not raw JSON or error messages
- Threat Level field shows Low, Medium, or High (not blank)
- Slack message appears in the target channel with a working Notion link
Run two test cycles on the same day. Compare the outputs. If a competitor's positioning summary changes significantly between identical runs, your prompt is too open-ended. Tighten the scope language.
Common pitfall: Perplexity sometimes returns markdown formatting (asterisks, hashes) inside the response text. Add a text formatter module in Make to strip markdown before writing to Notion, otherwise fields display raw symbols.
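The stripping step can be built from Make's text functions, but if you prefer to see the logic spelled out, a regex pair covers the asterisks and hashes mentioned above — a Python sketch; extend the patterns if other symbols show up in your outputs:

```python
import re

def strip_markdown(text: str) -> str:
    """Remove common markdown symbols before writing to Notion."""
    text = re.sub(r"^#{1,6}\s*", "", text, flags=re.MULTILINE)  # heading hashes
    text = re.sub(r"\*{1,2}([^*]+)\*{1,2}", r"\1", text)        # bold/italic asterisks
    return text

print(strip_markdown("## Summary\n**Acme** raised *prices*."))
```

Run a few real Perplexity responses through this before trusting it — bullet markers and backtick code spans are other symbols worth adding if they appear.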
Teams at agencies like Lenka Studio typically spend the first two weeks refining prompts after seeing real outputs. Expect iteration — the first run is a starting point, not a finished product.
Frequently Asked Questions
Can I use this workflow with GPT-4o instead of Perplexity?
GPT-4o does not browse the web in real time by default, which limits its usefulness for current competitor data. Perplexity's sonar-pro model is the better choice here because it combines live search with language model summarisation in a single API call.
How much does this workflow cost to run per month?
For eight competitors with four API calls each per week (three research dimensions plus the threat score), you use roughly 128 Perplexity API calls per month. At Perplexity Pro rates in 2026, this costs approximately US$5–10/month in API credits on top of your plan. Make's Core plan comfortably handles the operation count.
What if Perplexity returns inaccurate information about a competitor?
Always treat AI-generated competitive intelligence as a first draft, not ground truth. The workflow surfaces signals quickly — your team should verify significant findings directly on the competitor's website before acting. Add a "Verified" checkbox to your Notion database so the team can flag confirmed insights.
Does this work for service businesses as well as SaaS companies?
Yes. The prompt templates work for any business type. For service businesses in Australia, Singapore, Canada, or the US, add a location variable to the prompt (e.g. "Focus on their Australian market activity") to filter out irrelevant global noise.
How is this different from using a tool like Crayon or Klue?
Dedicated tools like Crayon offer richer features — browser extensions, CRM integrations, and a purpose-built UI. This workflow costs a fraction of that (roughly US$10–15/month versus US$100+/month) and is fully customisable. It is the right starting point for SMBs who want structured competitor data without enterprise pricing.
Next Steps
Once your workflow has run for four weeks, you will have enough data to spot patterns — which competitors are accelerating content output, which are quietly repricing, and where your positioning has a genuine gap to exploit.
From there, consider:
- Adding a fifth research dimension (e.g. review sentiment from G2 or Trustpilot) to your prompt set
- Connecting your Notion database to a BI tool like Rows.com for trend visualisation
- Sharing the weekly digest with your sales team alongside your brand health score to align competitive response with your own positioning strategy
If you would rather have a team build and manage this workflow for you — or need it integrated into a wider marketing automation stack — Lenka Studio works with SMBs across Australia, Singapore, Canada, and the US to design and deploy AI-powered marketing systems. Reach out and we will scope it with you.