Sprint Questions — Research Tech

Purpose: These questions guide all Sprint 1-2 work. Every design decision should help answer one of these.

Context: Built from Workshop 1 input (HMW voting, sprint question session), refined to focus on what interface testing CAN validate.


Sprint Questions

1. Can users perceive meaningful differentiation from ChatGPT/Gemini within the first experience?

Builds on: Cluster 1 (4 votes) — Competitive Differentiation + "Can we show unique value for clients"

Why testable: We can show users the processing view, evidence trail, and cheat sheet format, then ask: "How is this different from ChatGPT deep research?" If they can't articulate it, the interface isn't communicating value.

What we're testing:

  • Processing view (agent graph, sources counter)
  • Evidence drawer interaction
  • Cheat sheet format with citations

2. Can users trust the research output enough to take action (share with colleagues, use in a meeting)?

Builds on: "HMW build trust with the user" (2 votes) + "Can we...make the solution more trustworthy than other options on the market?"

Why testable: We observe whether users click evidence links, how they react to confidence indicators, and whether they'd forward the output to a colleague. Trust is behavioural, so we can measure it (see the instrumentation sketch after the list below).

What we're testing:

  • Citation click-through rates
  • Confidence indicator comprehension
  • "Would you share this?" response
  • Evidence drawer usability
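
To make the click-through and share-intent metrics above concrete, here is a minimal instrumentation sketch in TypeScript, assuming the prototype is a web app whose click handlers we can hook into. Every name in it (EvidenceEvent, logEvent, citationClickThroughRate) is hypothetical and not part of any existing codebase.

```ts
// Hypothetical per-session event log for Question 2 metrics.
// Assumes the prototype's click handlers can call logEvent().

type EvidenceEvent =
  | { kind: "citation_click"; citationId: string; at: number }
  | { kind: "evidence_drawer_open"; claimId: string; at: number }
  | { kind: "share_prompt_response"; wouldShare: boolean; at: number };

const sessionLog: EvidenceEvent[] = [];

function logEvent(event: EvidenceEvent): void {
  sessionLog.push(event);
}

// Citation click-through rate for one session:
// clicks on evidence links divided by citations shown.
function citationClickThroughRate(citationsShown: number): number {
  const clicks = sessionLog.filter((e) => e.kind === "citation_click").length;
  return citationsShown > 0 ? clicks / citationsShown : 0;
}
```

Even a log this small lets us compare sessions on "did they verify anything at all" versus "did they only read the summary".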

3. Can users navigate from high-level insights to detailed evidence without getting lost or overwhelmed?

Builds on: Cluster 3 (3 votes) — Information Architecture + "Can we show simplicity in complexity?" (highlighted)

Why testable: We can observe whether users successfully drill down, where they get stuck, and whether they can return to the cheat sheet. This is pure usability testing (the flow under test is sketched after the list below).

What we're testing:

  • Cheat sheet → evidence drawer → back flow
  • Progressive disclosure clarity
  • Information hierarchy comprehension
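
To picture the navigation we'll be observing, here is a minimal sketch of the cheat sheet → evidence drawer → back loop as two view states. The names (ViewState, openEvidence, goBack) are hypothetical and stand in for whatever the prototype actually uses.

```ts
// Hypothetical model of the drill-down loop tested in Question 3.
// Drilling down records where the user came from, so "back" is one
// predictable step rather than a dead end.

type ViewState =
  | { view: "cheat_sheet" }
  | { view: "evidence_drawer"; claimId: string; returnTo: "cheat_sheet" };

function openEvidence(claimId: string): ViewState {
  return { view: "evidence_drawer", claimId, returnTo: "cheat_sheet" };
}

function goBack(state: ViewState): ViewState {
  return state.view === "evidence_drawer" ? { view: state.returnTo } : state;
}
```

If testers get lost, the useful question is whether the prototype deviates from this two-state loop or whether the loop is there but not visible enough.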

User Research Questions (for Sprint 2 Testing)

These address knowledge blind spots from user research that CAN be validated through click-testing:

From "False Positive Tolerance" blind spot:

  • How do users react when multiple items are flagged for review? (Do 3 conflicts feel thorough or alarming? Do 10 feel overwhelming?)
  • Do users click through to verify flagged items, or do they trust the system's recommendation?

From "Pre-Meeting Briefing Format" recommendation:

  • Is the cheat sheet format immediately useful, or do users want to jump to the full report?
  • Can users identify the "killer facts" within 60 seconds?
  • What do users do first: read the cheat sheet, or check the sources?

From the "Omission Problem" core insight:

  • Do users understand the "what we couldn't find" disclosure? Does it build or erode trust?
  • When shown the processing view (70+ agents, parallel research), does it feel like "serious work" or intimidating complexity?

From Workshop Step 5 ("Alternative Entry Point"):

  • Can users who enter at a completed report (not the creation flow) understand what they're looking at?
  • What context do they need to trust a report they didn't create?

How These Connect to Workshop Output

Workshop Input → Refined Sprint Question

  • Competitive Differentiation cluster (4 votes) → Question 1: Differentiation perception
  • "HMW build trust" (2 votes) → Question 2: Trust & action
  • Information Architecture cluster (3 votes) → Question 3: Navigation & comprehension
  • "Can we find what ICPs are willing to pay for?" (prioritized) → Deferred: prototype testing can't validate pricing

Created: 6 January 2026
Source: Workshop 1 outcomes + upfront-user-research.md blind spots