Create a 90-day plan to ship an MVP: milestones, API integrations, extraction, indexing, citation system, and UI. Include a risk register, a budget, and acceptance criteria for launch.
Design a multi-agent approach: a planner creates sub-queries, a retriever collects sources, a reader extracts claims, and a writer synthesizes the report. Include handoff artifacts and consensus checks.
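A minimal sketch of what the handoff artifacts could look like, assuming a Python pipeline; the dataclass names (SubQuery, SourceRecord, Claim, DraftSection) and the two-source consensus rule are illustrative choices, not a fixed schema:

```python
from dataclasses import dataclass, field

# Hypothetical handoff artifacts passed between agents; field names are
# illustrative, not a required schema.

@dataclass
class SubQuery:
    question: str          # produced by the planner
    rationale: str

@dataclass
class SourceRecord:
    url: str               # collected by the retriever
    title: str
    fetched_at: str        # ISO timestamp

@dataclass
class Claim:
    text: str              # extracted by the reader
    source_url: str
    quote: str             # verbatim supporting span

@dataclass
class DraftSection:
    heading: str           # written by the writer
    claims: list = field(default_factory=list)

def consensus_check(claims: list) -> list:
    """Keep only claims supported by at least two independent sources."""
    by_text = {}
    for c in claims:
        by_text.setdefault(c.text, set()).add(c.source_url)
    return [c for c in claims if len(by_text[c.text]) >= 2]
```

Passing explicit artifacts like these (rather than free-form text) makes each handoff inspectable and lets the consensus check run before the writer ever sees a claim.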
Design how to expose search/fetch/extract as MCP tools: schemas, permissions, and audit logs. Include example tool definitions and a safety policy for tool use.
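One way the tool surface could look, sketched in Python: the name/description/inputSchema shape follows the common MCP tool-listing convention, but fetch_url, ALLOWED_TOOLS, and the JSONL audit log are assumptions to adapt to whichever SDK you use.

```python
import json
import time

# Illustrative MCP-style tool definition: a name, a description, and a JSON
# Schema for inputs. Check your MCP SDK for the exact structure it expects.
FETCH_TOOL = {
    "name": "fetch_url",
    "description": "Fetch a web page and return its text content.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "format": "uri"},
            "max_bytes": {"type": "integer", "default": 500_000},
        },
        "required": ["url"],
    },
}

ALLOWED_TOOLS = {"search_web", "fetch_url", "extract_text"}  # permission list

def audit_log(tool: str, args: dict, allowed: bool) -> None:
    """Append one JSON line per tool call for later review."""
    entry = {"ts": time.time(), "tool": tool, "args": args, "allowed": allowed}
    with open("tool_audit.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")

def call_tool(tool: str, args: dict):
    """Gate every call through the permission list and log it either way."""
    allowed = tool in ALLOWED_TOOLS
    audit_log(tool, args, allowed)
    if not allowed:
        raise PermissionError(f"Tool {tool!r} is not permitted by policy")
    # ... dispatch to the real tool implementation here
```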
Design domain packs: specialized source allowlists, stricter citation rules, and domain-specific extraction (e.g., trials, filings). Include how to swap packs safely.
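A hedged sketch of what a pack could be, assuming packs are plain config objects validated before activation; DomainPack, the clinical example, and swap_pack are hypothetical names.

```python
from dataclasses import dataclass

# Hypothetical domain-pack config; the fields mirror the three knobs above.
@dataclass(frozen=True)
class DomainPack:
    name: str
    source_allowlist: tuple          # domains the retriever may use
    min_citations_per_claim: int     # stricter than the general default
    extractor: str                   # key of a registered domain extractor

CLINICAL = DomainPack(
    name="clinical",
    source_allowlist=("clinicaltrials.gov", "pubmed.ncbi.nlm.nih.gov"),
    min_citations_per_claim=2,
    extractor="trial_record",
)

def swap_pack(new: DomainPack, known_extractors: set) -> DomainPack:
    """Validate a pack before activating it, so a bad swap fails loudly."""
    if not new.source_allowlist:
        raise ValueError("New pack has an empty allowlist")
    if new.extractor not in known_extractors:
        raise ValueError(f"Unknown extractor {new.extractor!r}")
    return new  # callers replace the active pack only after validation passes
```

Keeping packs immutable (frozen) and swapping the whole object at once avoids half-applied configurations mid-run.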
Create three output modes: executive brief (short), analyst memo (deep), and student explainer (teaching). Include structure, tone, and citation density rules for each.
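For illustration, the three modes might be captured as configuration the writer consults; every number and section title below is a placeholder to tune.

```python
# Illustrative output-mode settings; values are placeholders, not recommendations.
OUTPUT_MODES = {
    "executive_brief": {
        "max_words": 300,
        "tone": "direct, decision-oriented",
        "citations_per_paragraph": 1,   # cite only load-bearing claims
        "sections": ["Bottom line", "Key findings", "Risks"],
    },
    "analyst_memo": {
        "max_words": 2000,
        "tone": "neutral, evidence-first",
        "citations_per_paragraph": 3,
        "sections": ["Summary", "Method", "Findings", "Evidence table", "Open questions"],
    },
    "student_explainer": {
        "max_words": 1200,
        "tone": "plain language, define terms on first use",
        "citations_per_paragraph": 2,
        "sections": ["What it is", "Why it matters", "How it works", "Further reading"],
    },
}
```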
Design hallucination controls: enforce evidence-only summarization, require citations for key claims, and label uncertainty. Include prompts and automated checks (linting).
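The automated check could start as a lint pass over the draft; the regex-and-hedge-word heuristic below is a deliberately crude sketch, not a full claim parser.

```python
import re

CITATION = re.compile(r"\[\d+\]")                      # e.g. "[3]" style markers
HEDGES = ("may", "might", "appears", "estimated", "reportedly")

def lint_report(sentences: list[str]) -> list[str]:
    """Flag sentences that assert a claim without a citation or uncertainty label."""
    problems = []
    for i, s in enumerate(sentences):
        has_citation = bool(CITATION.search(s))
        is_hedged = any(h in s.lower() for h in HEDGES)
        if not has_citation and not is_hedged:
            problems.append(
                f"sentence {i}: no citation and no uncertainty label: {s[:60]!r}"
            )
    return problems
```

Running this after synthesis and blocking output when problems are non-empty turns the "citations for key claims" rule into something enforceable rather than aspirational.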
Design evaluation metrics: search precision/recall, source diversity, citation correctness, and user satisfaction. Include how to create labeled datasets and measure improvements.
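A minimal sketch of two of these metrics, assuming you have labeled sets of relevant URLs; precision_recall and source_diversity are illustrative helpers.

```python
from urllib.parse import urlparse

def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
    """Search precision/recall against a labeled set of relevant URLs."""
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

def source_diversity(urls: list[str]) -> float:
    """Fraction of results coming from distinct domains (1.0 = all distinct)."""
    domains = {urlparse(u).netloc for u in urls}
    return len(domains) / len(urls) if urls else 0.0
```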
Create a testing strategy: unit tests for extractors, integration tests for API calls (mocked), and golden report tests to detect synthesis drift. Include CI gating rules.
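A pytest-style sketch of a mocked integration test and a golden report test; fetch_page is a hypothetical wrapper and the golden-file paths are project-specific assumptions.

```python
import json
from pathlib import Path
from unittest.mock import MagicMock, patch

import requests

def fetch_page(url: str) -> str:
    """Hypothetical thin wrapper that the integration test exercises."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.text

@patch("requests.get")  # never hit the network in CI
def test_fetch_page_returns_body(mock_get):
    mock_get.return_value = MagicMock(status_code=200, text="<p>hello</p>")
    assert fetch_page("https://example.com") == "<p>hello</p>"

def test_golden_report_has_not_drifted():
    """Fail the build if the synthesized report differs from the golden copy."""
    golden = json.loads(Path("tests/golden/report.json").read_text())  # checked in
    fresh = json.loads(Path("out/report.json").read_text())            # just generated
    assert fresh["claims"] == golden["claims"]
```

A CI gate could then require both layers to pass before merge, with golden files updated only through reviewed commits so synthesis drift is a deliberate decision, not an accident.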
Plan export formats: a Markdown report, PDF generation, and a CSV evidence table. Include how to preserve citations and embed “as of” timestamps consistently.
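A sketch of the CSV evidence table, assuming claim records are dicts; the column names and the single report-level "as of" timestamp are illustrative choices.

```python
import csv
from datetime import datetime, timezone

def export_evidence_csv(claims: list[dict], path: str) -> None:
    """Write one row per claim, stamping every row with the same 'as of' time."""
    as_of = datetime.now(timezone.utc).isoformat(timespec="seconds")
    fields = ["claim", "source_url", "quote", "accessed_at", "report_as_of"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for c in claims:
            writer.writerow({
                "claim": c["claim"],
                "source_url": c["source_url"],
                "quote": c.get("quote", ""),
                "accessed_at": c.get("accessed_at", ""),  # when the source was fetched
                "report_as_of": as_of,                    # identical on every row
            })
```

Keeping citations as explicit columns (URL plus verbatim quote) means the Markdown and PDF renderers can be regenerated from the same table without losing provenance.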
Design the user experience: question intake, clarifying questions, showing sources, evidence table, and exporting. Include interaction patterns for analysts versus casual users.
Design content versioning: snapshots, change logs, and references to “as of” dates for citations. Include retention and storage cost considerations.
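One way to keep snapshot storage cheap is to write a new version only when the content hash changes; Snapshot and needs_new_snapshot below are hypothetical names for that idea.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical snapshot record; citations point at captured_at as their "as of" date.
@dataclass(frozen=True)
class Snapshot:
    url: str
    content_hash: str     # sha256 of the normalized page text
    captured_at: str      # ISO date the citation references
    storage_key: str      # where the full text lives (object store, archive, etc.)

def needs_new_snapshot(previous: Snapshot | None, text: str) -> bool:
    """Store a new version only if the hashed content actually changed."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return previous is None or previous.content_hash != digest
```

Retention then becomes a policy over snapshot ages and hashes (e.g. keep every version cited in a report, expire the rest), which keeps storage cost proportional to real change rather than crawl frequency.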
Design incremental crawling: store ETags/Last-Modified, compute diffs, and update indexes incrementally. Include pitfalls and how to handle missing headers.
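A conditional-fetch sketch using the requests library: the cache dict and fetch_if_changed are illustrative names, the 304 branch shows the incremental path, and the comment covers servers that send neither validator header.

```python
import requests

def fetch_if_changed(url: str, cache: dict) -> tuple[bool, str | None]:
    """Fetch only if the server says the content changed since the last crawl.

    `cache` maps URL -> {"etag": ..., "last_modified": ...} saved previously.
    Returns (changed, body); body is None when the page is unchanged.
    """
    headers = {}
    prev = cache.get(url, {})
    if prev.get("etag"):
        headers["If-None-Match"] = prev["etag"]
    if prev.get("last_modified"):
        headers["If-Modified-Since"] = prev["last_modified"]

    resp = requests.get(url, headers=headers, timeout=15)
    if resp.status_code == 304:
        return False, None                      # unchanged: keep the existing index entry

    # Pitfall: some servers send neither ETag nor Last-Modified; hash the body
    # upstream in that case so no-op changes still skip reindexing.
    cache[url] = {
        "etag": resp.headers.get("ETag"),
        "last_modified": resp.headers.get("Last-Modified"),
    }
    return True, resp.text
```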