ResearchTech — UI/UX Master Brief
Audience: UI/UX designer joining the project
Goal of this document: provide a complete, practical picture of the product vision, key user needs, decisions made so far, and the screens/flows we expect to design.
0) TL;DR
ResearchTech is a startup/project analysis platform that produces decision-ready diligence outputs (cheat sheet + full report) backed by source-linked evidence (URLs + extracted snippets) and conflict detection.
The platform runs a complex research pipeline under the hood (many LLM calls, parallel research plans, multi-phase processing), but the UI must present this complexity using progressive disclosure — simple by default, deeper visibility via a Make/n8n-style workflow graph (not a trace/log viewer).
Users can:
- create a report from web inputs (primary) or email intake (secondary),
- monitor progress via an "interactive analysis" view during processing,
- review a cheat sheet (default) with drill-down into evidence and a full report,
- export two PDFs (short + long),
- ask follow-ups in a grounded chat (answers tied to report sources) in quick mode, or run Investigate (deep mode) that creates a child report linked to the original and mergeable only on request.
1) What is ResearchTech?
ResearchTech is a research platform that finds and verifies information about startups and companies so investors can make faster, better decisions.
You give it a startup or company to analyze. It runs a multi-step research pipeline (70+ specialized agents, parallel web research, document processing). You get back a cheat sheet (top risks, opportunities, questions to ask) with every claim linked to its source. Click any claim → see the exact quote and URL.
It's an analyst exoskeleton: a power suit that makes investors faster and more thorough.
1.2 The problem we're solving
Investment professionals spend 30-50% of their time hunting for information — web searches, reading docs, cross-checking facts. That's time not spent on actual thinking and decisions.
The obvious fix is AI. But generic AI (ChatGPT, Perplexity) has a trust problem. Not only hallucinations — but omissions.
From our customer interviews (26 conversations with VCs, PE, consultants): the scarier failure mode is AI missing critical signals. One VC paid for "deep research" that failed to detect a partner linked to the mafia. Another missed an IP clause buried on page 99. You can catch a hallucination by checking. You can't catch an omission you don't know exists.
1.3 How we solve it
Multi-model verification: We run multiple AI models in parallel. When one finds something another missed, we flag it. This catches omissions, not just errors.
Source-linked everything: Every claim links to URLs + extracted snippets. One-click verification.
Conflict detection: When sources disagree, we show both sides and flag it for human review — we don't quietly pick one.
Review queue: Low-confidence and conflicting items get surfaced. You know exactly what needs your attention.
The goal is comprehensive coverage (did we find everything?) not just accuracy (is what we found correct?).
1.4 How it differs from generic AI
| Generic AI (ChatGPT) | ResearchTech |
|---|---|
| Single model = omission risk | Multi-model verification catches what one misses |
| Single prompt/conversation | 70+ specialized agents, 13-step pipeline |
| No citations, just "trust me" | Source URLs + extracted snippets for every claim |
| Confident even when wrong | Confidence scores + conflict flags |
| User re-checks everything | System flags what needs human review |
| General-purpose | Built for investment research workflows |
1.5 What this means for UX design
- Evidence/citations are first-class UI elements
- Conflict badges and Review Queue are core features
- The "cheat sheet" format reflects how analysts actually consume research (top risks, opportunities, questions to ask)
- Progressive disclosure: simple by default, depth on demand — users shouldn't need to understand the 13-step pipeline
- The workflow graph signals "serious work is happening" and builds credibility
2) Product context
Users want to walk into a meeting or IC (Investment Committee) with:
- the most important risks/opportunities/questions, clearly prioritized,
- quick verification (click to evidence),
- the ability to ask follow-ups and, when needed, trigger deeper research.
3) Target users and ICP (Ideal Customer Profile)
3.1 Primary ICPs (designed for both)
- Early-stage VC (Venture Capital) at Seed/Series A stages: needs speed + "killer facts" + questions to ask.
- Growth / PE (Private Equity): needs deeper diligence; same core product, more modules by default.
3.2 Roles
- Owner / Member: can generate reports, run Investigate (deep research), manage templates/modules.
- Viewer: view-only (including external share links).
4) Product shape decisions (locked)
4.1 Platform approach
- Point solution + light workspace
- Not a full CRM/pipeline replacement.
- Workspace manages templates, modules, report history, sharing, settings.
4.2 Inputs & outputs
- Primary input: web app (paste URL/name + upload docs + links)
- Secondary input: email intake (email-only; forwarded links/docs appear in the web run)
- Output: always via web report (interactive)
- Exports: two PDFs:
- Short PDF (cheat sheet)
- Long PDF (full report)
4.3 Diligence scope
- Full diligence suite is the direction.
- Users toggle big modules; internal pipeline steps are not user-editable.
4.4 Trust / audit UX
- In-product audit: sources + extracted snippets visible for claims.
- Conflicts: flag + recommend + require review.
- Export: only the final report PDFs (with citations). No separate "audit appendix export".
4.5 Sharing
- Share via magic link (view-only externally)
- Optional password
- Default expiry: 30 days (configurable)
4.6 Retention
- Configurable retention policy.
- Default: indefinite retention
- Stored: reports + raw uploaded files
- Setting option: "Delete uploads, keep report"
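To make the sharing and retention defaults above concrete, here is a minimal sketch of how they might be modeled. All names, and any defaults beyond those stated above, are illustrative assumptions, not a settled schema.

```typescript
// Illustrative sketch only: field names are assumptions, not a settled schema.

interface ShareSettings {
  defaultExpiryDays: number;        // default: 30, configurable
  passwordOptional: boolean;        // external magic links may add a password
}

interface RetentionSettings {
  policy: "indefinite" | "fixed";   // default: indefinite retention
  fixedDays?: number;               // only used when policy === "fixed"
  deleteUploadsKeepReport: boolean; // the "Delete uploads, keep report" toggle
}

const workspaceDefaults: { share: ShareSettings; retention: RetentionSettings } = {
  share: { defaultExpiryDays: 30, passwordOptional: true },
  retention: { policy: "indefinite", deleteUploadsKeepReport: false },
};
```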
4.7 Language
- English-only in v1 (other languages later).
4.8 Platform scope
Desktop-first for v1 with responsive design. Primary focus on the desktop experience for investigations; mobile provides full functionality but is optimized for reviewing outputs (cheat sheets, briefings).
5) Design inspirations
5.1 Make / n8n inspiration (for complexity + workflows)
Make and n8n are visual workflow automation platforms where users connect nodes on a canvas to build automated processes — we borrow this visual metaphor.
- Use a workflow canvas metaphor (nodes + connections + branching).
- We want "Level 1 + Level 2" visibility:
- Level 1: simple progress ("what's happening")
- Level 2: graph view (workflow DAG — directed graph showing task dependencies) with node statuses + inspector
- We do not want a trace/log view for v1.
5.2 NotebookLM inspiration (for report + sources + chat)
NotebookLM is Google's AI notebook that grounds answers in uploaded sources with inline citations — we borrow this evidence-linking pattern.
- Source-grounded experience:
- citations in-line,
- quick evidence previews,
- chat that is grounded to sources and encourages verification.
6) Key user insights from customer interviews
- Users underestimate complexity. UX should signal credibility via a clear "serious work is happening" representation: progress counters, phases, and an optional workflow graph.
- Interactive reports beat long walls of text. Default output must be a cheat sheet with drill-down.
- Users want ChatGPT-like follow-ups after the report. Chat must be a first-class part of the report experience.
- Users want modules/workflows but don't want to build from scratch. Provide presets + light editing; optional chat-based workflow creation later.
- Future: integrations + private data. Modules may integrate with external apps/MCP (Model Context Protocol — a standard for AI integrations); the UX should accommodate "source types" beyond public web.
7) Conceptual model (terms & objects)
7.1 Core objects
- Workspace: team environment (users, roles, settings).
- Module: big functional block (e.g., Person Background Research, Market/Competition, Legal, Product/Tech, etc.).
- Template (Workflow): a saved arrangement of modules + settings (VC vs PE presets; editable).
- Report (Execution/Run): a single run of a template on a target project. Immutable output.
- Child Report (Investigation): a deep-research run spawned from a chat request; linked to parent report.
- Source: URL or uploaded document referenced by evidence.
- Evidence Snippet: extracted quote/section from a source tied to a claim.
- Finding: a report item (risk/opportunity/question) with citations and confidence signals.
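As a working reference for design, the object model above could be sketched in TypeScript roughly as follows. Field names are illustrative assumptions, not the actual data model.

```typescript
// Rough sketch of the core objects; field names are illustrative assumptions.

type Role = "owner" | "member" | "viewer";

interface Workspace { id: string; name: string; members: { userId: string; role: Role }[]; }

interface Module { id: string; name: string; description: string; }  // big functional block

interface Template { id: string; name: string; moduleIds: string[]; preset?: "VC" | "PE"; }

interface Report {                     // a single, immutable run of a template
  id: string;
  templateId: string;
  target: string;                      // startup/company being analyzed
  status: "running" | "success" | "failed";
  parentReportId?: string;             // set only for Child Reports (investigations)
}

interface Source { id: string; kind: "url" | "upload" | "integration"; ref: string; }

interface EvidenceSnippet { sourceId: string; quote: string; capturedAt: string; }

interface Finding {                    // report item: risk / opportunity / question
  kind: "risk" | "opportunity" | "question";
  statement: string;
  confidence: "high" | "medium" | "low";
  evidence: EvidenceSnippet[];
  conflict?: boolean;                  // sources disagree: surfaces in the Review Queue
}
```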
7.2 Key UX rule: Templates vs Reports
Think like Make/n8n:
- Templates are edited.
- Reports are executed and read.
8) Under-the-hood pipeline (internal engine) — visible only during processing
Users edit big modules, not parts of the internal engine. During processing, we display a read-only "engine view" to convey progress and credibility.
Architecture: Orchestrator + Specialized Agents
The pipeline uses a modular agent architecture rather than fixed sequential steps:
┌─────────────────────────────────────────────────────────────┐
│ 🎯 Research Orchestrator │
│ (dispatches tasks, manages dependencies) │
└─────────────────────┬───────────────────────────────────────┘
│
┌─────────────┼─────────────┐
▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│ Agent 1 │ │ Agent 2 │ │ Agent N │ ← Parallel execution
│ (domain) │ │ (domain) │ │ (domain) │
└─────┬─────┘ └─────┬─────┘ └─────┬─────┘
│ │ │
└──────────────┼──────────────┘
▼
┌─────────────────────────────────────────────────────────────┐
│ 📊 Synthesis Layer │
│ (merge findings, detect conflicts, generate report) │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Level 1: Progress Bar │
│ ═══════════════════●═══════════════════○─────────────── │
│ Orchestration ✓ Agents (3/5) Synthesis │
│ │
│ [▼ Show Details] │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Level 2: Agent Graph (expanded) │
│ │
│ ┌──────────┐ │
│ │ Person │ ████████░░ 80% │
│ └──────────┘ │
│ ↑ │
│ ┌───────────┴───────────┐ │
│ │ Orchestrator ✓ │ │
│ └───────────┬───────────┘ │
│ ↓ ↓ ↓ │
│ ┌────┐ ┌────┐ ┌────┐ │
│ │Comp│ │Time│ │ IP │ ... more agents │
│ │ ✓ │ │ ◉ │ │ ○ │ │
│ └────┘ └────┘ └────┘ │
│ │
│ [Inspector Panel] Agent: Timeline Anchors │
│ Status: Running | Progress: 45% | Sources found: 12 │
└─────────────────────────────────────────────────────────────┘
Specialized Research Agents (examples)
| Agent | Focus Area |
|---|---|
| 👤 Person Background | Founders, key people, track records |
| 🏢 Competitors | Market landscape, competitive positioning |
| 📅 Timeline Anchors | Founding dates, milestones, recent activity |
| 📜 IP Research | Patents, trademarks, domains |
| ✅ Evidence of Usage | Status pages, reviews, community activity |
| … | … |
Agents are modular — new research domains can be added without restructuring the pipeline.
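To illustrate the orchestration pattern described above (not the actual engine code), parallel dispatch feeding a synthesis step might look like this. Agent names and signatures are assumptions.

```typescript
// Illustrative orchestration sketch; agent names and signatures are assumptions.

interface AgentResult { agent: string; findings: string[]; }

type ResearchAgent = (target: string) => Promise<AgentResult>;

// Domain agents run independently; new domains can be added to this registry
// without restructuring the pipeline.
const agentRegistry: Record<string, ResearchAgent> = {
  personBackground: async (t) => ({ agent: "personBackground", findings: [`founders of ${t}`] }),
  competitors: async (t) => ({ agent: "competitors", findings: [`market landscape for ${t}`] }),
  timelineAnchors: async (t) => ({ agent: "timelineAnchors", findings: [`milestones of ${t}`] }),
};

async function runPipeline(target: string): Promise<string[]> {
  // Orchestrator: dispatch all agents in parallel.
  const results = await Promise.all(
    Object.values(agentRegistry).map((agent) => agent(target)),
  );
  // Synthesis layer: merge findings (conflict detection would happen here).
  return results.flatMap((r) => r.findings);
}
```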
UX Display (Two Levels)
Level 1 (Default): Three-phase progress bar
- Orchestration → Agents Running (X of Y complete) → Synthesis
Level 2 (Expanded): Agent graph with status indicators
- Each agent node shows: ⏳ Pending | 🔄 Running | ✅ Complete | ❌ Error
- Click any agent → inspector panel with details
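As a hedged sketch, the Level 1 three-phase label could be derived directly from per-agent statuses. The status values mirror the list above; the derivation logic is an assumption.

```typescript
// Sketch: deriving the Level 1 progress label from agent statuses.

type AgentStatus = "pending" | "running" | "complete" | "error";

function levelOneLabel(agents: AgentStatus[], synthesisDone: boolean): string {
  const complete = agents.filter((s) => s === "complete").length;
  if (complete === 0 && agents.every((s) => s === "pending")) return "Orchestration";
  if (complete < agents.length) return `Agents Running (${complete} of ${agents.length} complete)`;
  return synthesisDone ? "Done" : "Synthesis";
}

// Example: levelOneLabel(["complete", "running", "pending"], false)
// → "Agents Running (1 of 3 complete)"
```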
9) Report experience (what users see)
9.1 Default: Cheat Sheet (above the fold)
The cheat sheet is the "decision dashboard," not a narrative report:
- Top Risks (prioritized)
- Top Opportunities (prioritized)
- Key Questions to Ask (meeting-ready)
Each bullet has:
- short statement,
- confidence indicator (lightweight),
- citation chips (click → evidence drawer),
- conflict marker if applicable.
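For the component library, each cheat-sheet bullet could map to a props shape like the following (a sketch; names are assumptions).

```typescript
// Sketch of a cheat-sheet item's props; names are illustrative assumptions.

interface CitationChip { sourceUrl: string; snippetId: string; }  // click → evidence drawer

interface CheatSheetItemProps {
  kind: "risk" | "opportunity" | "question";
  statement: string;                       // short, meeting-ready phrasing
  confidence: "high" | "medium" | "low";   // lightweight indicator
  citations: CitationChip[];
  hasConflict: boolean;                    // renders the conflict marker when true
}
```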
9.2 Drill-down: Full Report
Long-form narrative or structured sections:
- still source-linked,
- still evidence-drawer enabled,
- includes module sections and appendices.
9.3 Two PDF exports
- Short PDF = cheat sheet
- Long PDF = full report
Both include citations.
10) Evidence & trust UX (full audit inside web, no audit export)
10.1 Evidence drawer (core component)
From any claim/bullet:
- open drawer with:
- source URL / document reference,
- extracted snippet/quote,
- timestamp / capture date,
- (optional) source category (web / uploaded / integration).
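The drawer's payload could be a single self-contained object, e.g. (a sketch; field names are assumptions):

```typescript
// Sketch of the evidence drawer payload; field names are assumptions.

interface EvidenceDrawerData {
  source: { kind: "web" | "uploaded" | "integration"; ref: string }; // URL or document reference
  snippet: string;      // extracted quote/section supporting the claim
  capturedAt: string;   // timestamp / capture date (ISO 8601)
}
```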
10.2 Conflict handling
When sources disagree:
- show conflict badge,
- show recommended interpretation,
- put item into a Review Queue.
10.3 Review Queue
A focused list of:
- conflicts,
- low-confidence claims,
- "needs human check" items.
Goal: give users a short list of what to verify before acting.
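A minimal sketch of what a Review Queue entry might carry, assuming the three item types above; names are assumptions.

```typescript
// Sketch of a Review Queue item; the reason union mirrors the list above.

type ReviewReason = "conflict" | "low_confidence" | "needs_human_check";

interface ReviewQueueItem {
  findingId: string;
  reason: ReviewReason;
  recommendation?: string;   // shown for conflicts ("recommended interpretation")
  resolved: boolean;         // cleared once a human has verified the item
}

// The queue itself is just the unresolved items (ordering policy is an assumption).
const pending = (items: ReviewQueueItem[]) => items.filter((i) => !i.resolved);
```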
11) Chat (post-report) — two-speed model
11.1 Quick mode (no new report)
If answerable from existing report + sources:
- answer in chat with citations
- actions:
- Add to Cheat Sheet
- Save Note
No new report is created. Does not increase report count.
11.2 Investigate mode (deep research; one-off child report)
If the request requires new data or deeper analysis:
- system proposes a plan:
- intent summary,
- modules selected automatically,
- what data will be gathered,
- what will be produced.
- user confirms "Run investigation"
- system runs a one-off workflow and produces Child Report:
- independent exportable short/long PDFs,
- linked to parent report with clear lineage,
- includes "What's new vs parent".
11.3 Merge (only on request)
Child report can be merged into the parent only if user requests:
- select which findings/sections to merge,
- record "Merged from Child Report X" as a note/audit marker.
11.4 Permission rule
Only users in the same workspace can ask follow-ups / run Investigate. External share links are view-only.
11.5 Mode distinction (UX)
Mode Indicator: Clear visual distinction between Quick Chat and Investigate modes. User should always know which mode they're in.
- Quick Chat: Lightweight appearance, conversational UI, instant responses
- Investigate: "Serious work" appearance, shows workflow graph, progress indicators, estimated completion time
- Transition: Explicit user action to switch modes (not automatic). Consider: "This question needs deeper research. Switch to Investigate mode?"
11.6 Long-Running Process UX
Two Chat Modes
| | Quick Chat | Investigate |
|---|---|---|
| When | Answer from existing report data | New research needed |
| Duration | Seconds | 15-120 minutes |
| Output | Chat response | Child Report |
| User action | Stays in app | Leaves; gets notification when done |
Visual Distinction
Quick Chat: Light/minimal styling, standard chat bubbles, typing indicator
Investigate: Distinct container, "Deep Research" badge, workflow graph preview, estimated time display
Mode Switching
When system detects a question needs deep research:
- Show prompt: "This needs deeper research. Run an Investigation?"
- User confirms → sees plan preview (modules, ~15-120 minute estimate)
- User clicks "Start Investigation"
- Confirmation: "Started. We'll notify you when ready."
- User can close the website and return later (notified by email as well)
Progress Indicators
| Duration | Pattern |
|---|---|
| < 30 sec | Typing dots |
| > 30 sec | Step labels ("Analyzing sources…") |
| > 5 min | Email notification on completion |
For Investigate mode (if user returns to check):
- Workflow graph with completed/current/pending phases
- Step counter: "Step 84 of ~100"
- Time remaining estimate
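The duration thresholds in the table above could drive a simple pattern selector. The thresholds come from the table; everything else is an assumption.

```typescript
// Sketch: choosing a progress pattern from elapsed time, using the thresholds above.

type ProgressPattern = "typing_dots" | "step_labels" | "email_on_completion";

function progressPattern(elapsedSeconds: number): ProgressPattern {
  if (elapsedSeconds < 30) return "typing_dots";       // < 30 sec
  if (elapsedSeconds < 5 * 60) return "step_labels";   // > 30 sec: "Analyzing sources…"
  return "email_on_completion";                        // > 5 min: notify by email when done
}
```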
Return Experience
When user clicks email or returns:
- Completion summary modal
- "What's new" highlights
- Clear "View Report" CTA
12) Workflow customization and modules
12.1 What users can edit
- Toggle big modules on/off
- Adjust module depth/settings (optional)
- Choose templates (VC vs PE vs Custom)
- Save templates in workspace
12.2 What users cannot edit (v1)
- Internal 13-step engine
- Low-level pipeline steps and implementation details
12.3 Chat-based workflow creation (planned)
Later capability:
- user describes goal/constraints in chat,
- system assembles a workflow from existing modules,
- user reviews and edits on canvas,
- save as template.
13) Information architecture (recommended)
13.1 Workspace nav
- Reports (executions/history)
- Templates (workflows/presets)
- Modules (library)
- Settings
("Sources" is a per-report tab in v1; global sources can come later.)
13.2 Reports list
Table list with:
- name (target/project),
- template used (VC/PE/Custom),
- status (running/success/failed),
- created date,
- owner,
- share status,
- export options.
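Each row in the table could be backed by a shape like this (a sketch; field names are assumptions).

```typescript
// Sketch of a Reports list row; field names are illustrative assumptions.

interface ReportRow {
  name: string;                               // target/project
  template: "VC" | "PE" | "Custom";
  status: "running" | "success" | "failed";
  createdAt: string;
  owner: string;
  shared: boolean;                            // has an active magic link
  exports: ("short_pdf" | "long_pdf")[];      // available export options
}
```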
13.3 Template builder
Canvas-based workflow editor:
- module nodes and connections,
- groups (Modules vs Engine),
- right-side inspector for module settings,
- warnings for dependencies.
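A canvas document for the builder might reduce to nodes plus edges, in the spirit of Make/n8n (a sketch; names are assumptions).

```typescript
// Sketch of the template-builder canvas model; names are assumptions.

interface CanvasNode {
  id: string;
  moduleId: string;                 // which big module this node represents
  group: "modules" | "engine";      // engine nodes are read-only in v1
  position: { x: number; y: number };
}

interface CanvasEdge { from: string; to: string; }  // dependency between nodes

interface TemplateCanvas {
  nodes: CanvasNode[];
  edges: CanvasEdge[];
  warnings: string[];               // e.g. unmet dependencies surfaced to the user
}
```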
14) Core screens to design first (v1 priority)
- Reports List (workspace home)
- New Report (create flow: target → inputs → template → module toggles)
- Template Builder (canvas)
- Run Processing View (Level 1 progress + Level 2 graph)
- Report Viewer (cheat sheet + evidence + full report + PDFs)
- Chat (quick vs investigate; plan-first for deep)
- Share modal (magic link + password + expiry default 30 days)
- Settings (retention + delete uploads keep report + share defaults + roles)
15) UX principles and non-goals (to keep design coherent)
15.1 Principles
- Progressive disclosure: simple first; depth on demand.
- Actionable first: cheat sheet above the fold.
- Trust by design: evidence is one click away; conflicts are explicit.
- Workflow clarity: Make/n8n-style canvas for "what's running."
- Iteration-friendly: child reports for deep investigations; merge only on request.
15.2 Non-goals (v1)
- No raw trace/prompt logs UI.
- No separate audit appendix export.
- No multi-language UI.
- No "credits/tokens" UI; track usage via report counts.
16) Open items (expected inputs from product team)
To finalize module library IA + templates, we need:
- list of "big modules" (names + short description),
- which modules are default in VC vs PE,
- any "hard constraints" (e.g., legal/compliance boundaries) for OSINT-like (Open Source Intelligence) modules.
17) Suggested deliverables for the designer
- IA sitemap + screen inventory
- Wireframes for the 8 v1 screens
- Component library:
- Evidence Drawer
- Node Inspector
- Conflict badge + Review Queue
- Share modal
- Chat message cards with "Add to Cheat Sheet" action
- Interaction prototype:
- New Report → Processing Graph → Report Viewer → Chat Investigate → Child Report
Appendix A — Example user journeys (fast)
Journey 1 — VC partner pre-meeting
- paste URL or upload pitch deck
- select VC template
- review cheat sheet
- click 2–3 evidence links
- export short PDF
- share view-only link to associate (30 days expiry)
Journey 2 — PE deep diligence
- choose PE template (more modules)
- monitor processing via graph
- full report drill-down
- ask chat to Investigate one risk → child report
- export long PDF for internal review
- merge key findings into parent on request
Journey 3 — Analyst iterative research
- quick chat Q&A (no new report)
- Investigate deep questions as needed (child reports)
- maintain lineage across runs
18) Glossary
Cheat Sheet — Executive summary; the quick-reference output showing top risks, opportunities, and questions.
Child Report — A deep-research report spawned from a chat investigation, linked to its parent report.
Confidence — High/medium/low rating indicating how trustworthy a claim is based on source quality and agreement.
Conflict — When multiple sources disagree on a fact; flagged for human review.
DAG — Directed Acyclic Graph; a visual representation of task dependencies and flow.
Diligence (Due Diligence) — Systematic investigation and verification performed before investment or business decisions.
Evidence Snippet — An extracted quote or section from a source that supports a claim.
Exoskeleton (Analyst Exoskeleton) — ResearchTech's core philosophy: the system is a "power suit" that augments human analysts by handling tedious data hunting.
Grounded — Answers or claims that are tied directly to source documents (not invented by AI).
IC — Investment Committee; the decision-making body at a fund.
ICP — Ideal Customer Profile; the target buyer persona.
Lineage — The parent-child relationship between reports showing how investigations evolved.
Module — A major functional block in the research workflow (e.g., Market Analysis, Legal Review).
OSINT — Open Source Intelligence; publicly available information gathering.
PE — Private Equity; investment firms that acquire or invest in mature companies.
Pitch Deck — A presentation used by startups to pitch to investors.
Seed/Series A — Early-stage investment rounds in a startup's lifecycle.
Template — A saved configuration of modules that can be reused for similar research runs.
VC — Venture Capital; investment firms focused on early-stage startups.
Related Documents
This master brief is part of a larger documentation set. Please also review:
- Visual References Page — Contains visual references for possible landing page structure and design.
- Technology Solution Page — Contains information about the technology solution being considered.