A repeatable way to compile fragmented product evidence into a living wiki of insights, opportunities, and recommendations — anchored to your strategy, with every claim traced back to its source.
Harvest works on what your team has already written down — sales notes, research transcripts, decision memos, support tickets. First-party evidence your team authored, not customer data harvested from anywhere else. Self-hosted Winnow runs on your own infrastructure with no telemetry; the hosted version runs in an isolated workspace we don't read. Nothing trains a model.
Every Harvest project begins with five baseline files: the product proposition, vision and strategy, lifecycle model, target markets, and constraints and assumptions. These are authored, not synthesised. They define product reality at this moment in time — what you sell, to whom, for what outcome, in which lifecycle stage, and the things you're treating as fixed.
The maintenance LLM treats these foundations as constraints, not suggestions. Every insight, opportunity, and recommendation downstream is evaluated against them. A first draft is enough to start; the LLM will flag tensions as new evidence lands, and the foundations refine through the reconcile flow as decisions and delivery play out.
Raw evidence — sales calls, support tickets, user research, competitor scans, system exports — lands in a raw/ folder. Files are immutable: once a piece of evidence is in, it stays in, untouched, as a source of truth. Markdown, plain text, PDF or DOCX all work; the LLM reads extracted text, but the canonical binary stays put.
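The immutability rule above can be sketched in a few lines. This is a minimal illustration, not Winnow's actual implementation: the `raw/` folder name comes from the text, while the content-hash naming scheme and the `ingest_file` helper are assumptions introduced here to show how "once in, never overwritten" could be enforced.

```python
import hashlib
import shutil
from pathlib import Path

RAW_DIR = Path("raw")  # evidence folder from the text; layout beyond this is assumed

def ingest_file(source: Path) -> Path:
    """Copy a file into raw/ under a content-hash name so stored
    evidence is never overwritten or edited in place."""
    digest = hashlib.sha256(source.read_bytes()).hexdigest()[:12]
    dest = RAW_DIR / f"{digest}-{source.name}"
    if not dest.exists():  # identical content maps to the same path; nothing is replaced
        RAW_DIR.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, dest)
    return dest
```

Because the stored name is derived from the content, re-ingesting the same file is a no-op, and any edited version would land under a different name rather than clobber the original.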
On ingest, the LLM extracts signals, identifies which wiki pages are affected, and updates insights, problems, opportunities, and assumptions. Where new evidence reinforces or contradicts a prior belief, that relationship is flagged explicitly — there are no silent overwrites. Four equivalent input paths (drag-and-drop, the in-UI Create-evidence form, an authenticated webhook, a cloud-synced folder) feed the same pipeline; the synthesis step is identical regardless of how a file got there.
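The "four paths, one pipeline" shape can be made concrete with a stub. Everything here is illustrative: `synthesise` stands in for the real LLM synthesis step, and the function names are hypothetical; the point is only that each entry path tags its origin and then hands off to identical code.

```python
def synthesise(source: str, filename: str, text: str) -> dict:
    """The single synthesis step every input path funnels into.
    A real implementation would call the maintenance LLM here;
    this stub just records what would be analysed."""
    return {"source": source, "file": filename, "chars": len(text)}

# Four equivalent entry points: each only tags its origin, then
# hands off to the same pipeline.
def from_drag_and_drop(name, text): return synthesise("drag-and-drop", name, text)
def from_create_form(name, text):   return synthesise("create-evidence-form", name, text)
def from_webhook(name, text):       return synthesise("webhook", name, text)
def from_synced_folder(name, text): return synthesise("synced-folder", name, text)
```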
Discovery looks across the wiki for misalignment between what customers need, what strategy says, and what the current solution delivers. Each opportunity is recorded with the evidence it draws on and a confidence level — never invented out of thin air, and never produced from a single piece of evidence in isolation.
The opportunity record is the bridge between knowledge and action. It carries cited evidence, a strategic-alignment note, and an explicit status. Without an opportunity, no recommendation is allowed to exist.
Recommendations are generated only from opportunities — never directly from raw evidence. Each is evaluated against the strategy, lifecycle stage, and constraints defined in the foundations. Expected impact, risks, unknowns, and confidence are stated up-front, and the source opportunity is linked. Recommendations are proposals, not facts.
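The chain of custody described above — evidence cited by an opportunity, an opportunity required by every recommendation — can be enforced at the data-model level. The field names and confidence labels below are assumptions for illustration; the two invariants (no single-source opportunities, no orphan recommendations) come from the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Evidence:
    path: str  # an immutable file in raw/

@dataclass
class Opportunity:
    statement: str
    evidence: list        # cited sources
    confidence: str       # e.g. "low" | "medium" | "high"; labels are assumed
    alignment_note: str = ""
    status: str = "proposed"

    def __post_init__(self):
        # Never produced from a single piece of evidence in isolation.
        if len(self.evidence) < 2:
            raise ValueError("an opportunity needs corroborating evidence")

@dataclass
class Recommendation:
    proposal: str
    opportunity: Opportunity  # required link: no recommendation without one
    expected_impact: str
    risks: list
    confidence: str
```

Making `opportunity` a required constructor argument means a recommendation literally cannot be instantiated straight from raw evidence.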
Once a recommendation exists, the same evidence-cited context that justified it can be handed straight to an AI builder. A paste-ready brief bundles the recommendation, its parent opportunity, the cited evidence, and the five foundation pages — with foundations labelled as non-negotiable constraints so Claude Code, Cursor, Lovable, or any other AI builder treats them as guardrails rather than suggestions.
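Assembling the brief is mostly concatenation with labelling. A minimal sketch, assuming hypothetical section headings and a `build_brief` helper that does not exist in Winnow; what matters is that the foundations section is explicitly marked as constraints before the bundle is pasted into a builder.

```python
def build_brief(recommendation: str, opportunity: str,
                evidence: list[str], foundations: dict[str, str]) -> str:
    """Bundle a recommendation, its parent opportunity, cited
    evidence, and the foundation pages into one paste-ready text,
    with foundations labelled as non-negotiable constraints."""
    parts = ["# Build brief",
             "## Recommendation", recommendation,
             "## Parent opportunity", opportunity,
             "## Cited evidence"]
    parts += [f"- {path}" for path in evidence]
    parts.append("## Foundations (NON-NEGOTIABLE CONSTRAINTS)")
    for name, body in foundations.items():
        parts += [f"### {name}", body]
    return "\n\n".join(parts)
```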
When a decision is made or a delivery outcome arrives, reconciliation closes the loop. The LLM reads the decision or delivery artifact, compares it against the current foundations, and proposes baseline updates as a sidecar — never silently. The user reviews each proposed edit and either applies or discards.
This is the only path through which foundations are allowed to change. Sales notes, customer feedback, and competitor intel are not sufficient triggers — those flow through ingest and discover. A speculative belief shift never reaches the baseline; observable reality does. Every applied update carries a dated changelog entry pointing at the artifact that triggered it.
Most product processes jump from "I heard a customer say something" to "we should build X." Harvest refuses to. A piece of evidence becomes an insight, an insight becomes a problem statement, a problem becomes an opportunity — and only an opportunity earns a recommendation.
The discipline matters because the alternative is shipping features whose success criteria you can't articulate. With an opportunity in the loop, every recommendation has a reason it should exist that traces back to evidence. The opportunity is also where humans push back: validating, deferring, or discarding before any solution work begins.
Foundations propagate. They shape every interpretation the LLM makes downstream. If they could shift on the strength of any new sales note or research finding, one weird Sunday-afternoon LLM call could drift the team's strategy without anyone noticing.
So the foundations only change when a documented decision has been made or observable delivery has changed reality on the ground — and even then, only via a human-approved sidecar proposal. Historical statements are annotated as superseded, not deleted. The baseline gets smarter over time; it doesn't drift.
Winnow is open-source and free to run on your own infrastructure. The hosted version at usewin.now is opening in waves — join the waitlist to get early access.