Why Most Design Critiques Fail (And What to Do Instead)
Most design critique sessions in product teams and agencies follow the same broken pattern: a designer shares their work, a few people say "I like this" or "can we make it pop more?", and the meeting ends with no clear direction. The designer goes back to their desk more confused than before.
A well-structured critique process fixes this. It aligns stakeholders, surfaces real usability problems, and gives designers the specific, actionable input they need to make work better — faster. Whether you're a solo designer at a startup in Sydney, a product team in Singapore, or a distributed agency across North America, the process below works at any scale.
This guide walks you through building a repeatable design critique workflow from scratch, including the meeting structure, the language framework, the facilitation tactics, and the tools to document it all.
What You'll Need
- A design tool with shareable links (Figma is the standard in 2026)
- A collaborative doc or whiteboard tool (Notion, FigJam, or Confluence)
- A facilitator (usually the design lead or a senior designer)
- Two to six participants — more than six degrades the quality of feedback
- A defined brief or design goal for the work being reviewed
Step 1: Define the Purpose Before the Meeting
The single biggest reason critiques fail is that nobody agrees on what's being evaluated. Before you schedule anything, the designer presenting the work must answer three questions in writing:
- What is this design trying to accomplish? (the goal, not the aesthetic)
- Who is the intended user? (specific persona or segment)
- What specific questions do you want answered? (e.g. "Does the checkout flow feel trustworthy?" or "Is the hierarchy clear on mobile?")
These three answers become the critique brief. Share it with all participants at least 24 hours before the session. This sounds like admin overhead, but it saves 20 minutes of scene-setting at the start of every meeting — and it means feedback lands on the right target.
What a good critique brief looks like
Here's a real example format you can copy:
Design: New onboarding flow — Step 1 to Step 3
Goal: Reduce drop-off between account creation and first value action
User: SMB owner, non-technical, 35–55, first-time SaaS user
Questions:
1. Is it clear what the user needs to do on each screen?
2. Does the progress indicator reduce or create anxiety?
3. Is the copy on the CTA buttons specific enough?

Notice what's not in the brief: colour preferences, personal opinions about fonts, or requests for "more modern" design. Keep it outcome-focused.
Step 2: Set the Room Rules at the Start of Every Session
Even experienced teams slip into unhelpful feedback patterns. Spend the first two minutes of every critique session restating four ground rules out loud:
- Describe before you judge. Say what you see before you say whether it works.
- Connect feedback to the goal. Every observation should tie back to the brief questions.
- Use "I notice" not "I think." This separates observation from opinion and reduces defensiveness.
- One voice at a time. The facilitator controls who speaks. No crosstalk.
These rules feel obvious. State them anyway. Teams that skip this step spend the first ten minutes of critique in unstructured opinion pile-ons that kill psychological safety for the designer.
Step 3: Run the Structured Critique in Four Phases
The session itself follows a fixed format. For a typical review of three to five screens, plan for 45 to 60 minutes.
Phase 1 — Silent Review (5–8 minutes)
Share the Figma link with all participants. Everyone reviews the design silently and adds sticky notes (in FigJam or directly in Figma comments) using this format:
- 🟢 What's working — specific elements that serve the brief
- 🔴 What's unclear or problematic — observations tied to a stated question
- 🟡 Questions for the designer — things you need to understand before judging
Silent review prevents groupthink. The loudest person in the room stops setting the agenda the moment feedback is written before it's spoken.
Phase 2 — Designer Walkthrough (5 minutes)
The designer walks through the work briefly — not to sell it, but to surface decisions that might not be visible. They explain one or two key choices and flag anything they're already uncertain about. This is not a presentation. It's context-setting. Keep it tight.
Phase 3 — Structured Discussion (20–25 minutes)
The facilitator reads sticky notes aloud, groups similar observations, and opens them for discussion. Work through the brief questions in order. For each piece of feedback, push participants to be specific:
- "Which element specifically are you referring to?"
- "Can you connect that to one of our brief questions?"
- "What would a better version look like, and why?"
The facilitator documents key decisions and unresolved questions in a shared Notion doc in real time. This becomes the revision brief.
Phase 4 — Action Summary (5–10 minutes)
Before the session ends, the facilitator reads back every agreed action item with:
- What needs to change
- Why (the brief question it addresses)
- Priority level: must-fix, should-fix, or explore-later
The designer does not leave the room with a vague to-do list. They leave with a prioritised revision brief.
Step 4: Create a Feedback Documentation Template
Verbal feedback that isn't documented disappears. Build a simple Notion or Confluence template that captures every critique session. Here's the minimum structure:
## Critique Log — [Design Name] — [Date]
**Brief questions:** (copy from the original brief)
### What's working
- [observation] → [relevant brief question]
### Priority revisions
- Must-fix: [specific change] → [reason]
- Should-fix: [specific change] → [reason]
- Explore later: [idea] → [context]
### Open questions / decisions needed
- [question] → [owner] → [due date]
### Next review date:

Store all critique logs in a shared folder accessible to the whole team. Over time, these logs become a searchable record of design decisions — invaluable when stakeholders revisit old choices or onboard new team members.
Step 5: Establish a Cadence That Fits Your Team
A critique process only works if it's regular. One-off sessions don't build the feedback culture that makes teams better. Here's a cadence that works for most product and agency teams:
- Weekly informal critique (30 min): Work-in-progress reviews, early concepts, rough flows. Low-stakes. High frequency.
- Bi-weekly formal critique (60 min): Near-complete designs before development handoff. Uses the full four-phase structure.
- Post-launch retrospective review: Compare shipped design against live analytics or usability data. Closes the loop between critique and real-world outcomes.
Teams at Lenka Studio run a variation of this cadence across client projects — it's one of the reasons feedback loops stay tight even when designers, developers, and clients are spread across Bali, Australia, and Singapore.
Step 6: Train Your Participants to Give Better Feedback
Even with a great structure, critique quality depends on the quality of the people in the room. Most non-designers give poor feedback not because they're bad at their jobs, but because nobody's taught them how to contribute to a design review.
Run a short 20-minute onboarding session for any new participant — a product manager, a developer, a client — before their first critique. Cover:
- The difference between taste-based and goal-based feedback
- How to use the "I notice / I wonder / what if" framework
- Why asking questions is more useful than making declarations
The "I notice / I wonder / what if" framework is worth internalising across your whole team. It works like this:
- I notice the CTA button blends into the background on mobile.
- I wonder if users are missing it on smaller screens.
- What if we tested a higher-contrast version to see if tap rate improves?
This structure moves feedback from opinion to hypothesis — which is exactly the mindset that leads to better design decisions.
Common Pitfalls to Avoid
- Skipping the brief. No brief means no criteria for evaluating feedback. Everything becomes subjective.
- Inviting too many people. More than six participants creates noise. Quality drops sharply above this number.
- Letting senior voices dominate. The highest-paid person's opinion (HiPPO) is one of the most destructive forces in a critique room. The facilitator's job is to balance voices.
- Conflating critique with approval. A critique is not a sign-off meeting. Keep those separate.
- Skipping documentation. A great discussion that produces no written output is wasted. Always close with a documented revision brief.
Tools That Support This Process in 2026
- Figma — comments, branching for revisions, and dev mode for handoff
- FigJam — sticky note phases and real-time whiteboarding during structured discussion
- Notion — critique log templates and linked design briefs
- Loom — async critique walkthroughs for distributed or timezone-split teams (a designer records a five-minute walkthrough; reviewers drop comments before the live session)
- Fathom or Otter.ai — AI meeting transcription so the facilitator can focus on facilitation rather than note-taking
Next Steps
Start small. Run one structured critique this week using the four-phase format and the brief template above. You'll notice the quality of feedback — and the speed of revisions — improve immediately. As the process embeds into your team's rhythm, the long-term payoff is significant: fewer revision cycles, stronger design rationale, and better alignment between design decisions and business outcomes.
If your team is working on a product or brand and wants an outside perspective on your design process, take the free brand health score assessment to get a clearer picture of where your design and brand strategy stands before your next review cycle.
Want help building a critique culture across a remote or distributed team? The team at Lenka Studio works with SMBs and product teams across Australia, Canada, Singapore, and the US to establish design workflows that actually scale. Get in touch and let's talk about what your team needs.