Harvest is the methodology. Winnow is the open-source tool that runs it. This page walks the full loop: how evidence gets into the system, how the LLM maintains the wiki, how opportunities and recommendations get produced, and how all of that flows into the agentic coding tools where prototypes actually get built.
Product teams already write everything down: sales call notes, support tickets, customer-research transcripts, competitor scans, decision memos, retro outcomes. Harvest asks for nothing new — just a single canonical place for it all to land. Winnow's raw/ folder is that place. Sub-folders (sales/, customer_success/, support/, research/, market/, competitors/, systems/, decisions/, delivery/) keep evidence routed by source type so the synthesis prompts can be tuned per category.
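A note lands by simply being written into the right sub-folder. A minimal sketch, assuming a local checkout; the filename and note text are invented for illustration:

```python
from pathlib import Path

# Hypothetical example: route a sales-call note into the canonical raw/ tree.
# The sub-folder names are the ones listed above; the filename is made up.
note = Path("raw/sales/2025-06-03-acme-renewal-call.md")
note.parent.mkdir(parents=True, exist_ok=True)
note.write_text("Acme asked about usage-based pricing ahead of renewal.\n")
```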
Four input paths converge on the same destination:
- Save a file into raw/sales/ from your editor of choice.
- POST /api/raw/{subdir} for n8n, Zapier, Make, or custom scrapers; optional auto_ingest triggers the synthesis pipeline once the burst settles (a request sketch follows after this list).
- Sync raw/ into a Dropbox, iCloud, Google Drive, or OneDrive folder and drop files from any device.

Files are immutable once they're in. The original binary is preserved untouched; the LLM reads an extracted-text view at synthesis time.
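For the HTTP path, a minimal sketch in Python. The POST /api/raw/{subdir} route and the optional auto_ingest trigger are documented above; the host, port, multipart field name, and the exact spelling of the auto_ingest parameter are assumptions, so check your deployment's API reference:

```python
import requests

# Push a note into raw/sales/ over HTTP. Endpoint shape per the docs above;
# the host/port and the "file" field name are assumptions for a local install.
with open("acme-renewal-call.md", "rb") as f:
    resp = requests.post(
        "http://localhost:8000/api/raw/sales",
        params={"auto_ingest": "true"},  # optional: run synthesis once the burst settles
        files={"file": f},
    )
resp.raise_for_status()
print(resp.status_code)
```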
The synthesis pipeline is the same shape on every run: discover unprocessed sources, classify them by content hash (new / revision / unchanged), pick the wiki pages most relevant to each source via a cheap selector model, then call a smarter synthesis model with the foundations, the selected pages, and the raw evidence. The model returns a structured edit plan. The runtime stages every change to a temp directory, validates it, and either applies atomically or rolls back — there's no half-applied wiki.
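In pseudocode, the loop looks roughly like the sketch below. This is a structural illustration only, not Winnow's actual internals; every name in it (classify, select, edit_plan, stage) is invented for the example:

```python
import hashlib

def classify(path: str, content: bytes, seen: dict[str, str]) -> str:
    """Bucket a source by content hash: new / revision / unchanged."""
    digest = hashlib.sha256(content).hexdigest()
    if path not in seen:
        return "new"
    return "unchanged" if seen[path] == digest else "revision"

def run(sources, wiki, selector_model, synthesis_model, seen):
    for src in sources:                               # 1. discover unprocessed sources
        kind = classify(str(src), src.read_bytes(), seen)
        if kind == "unchanged":
            continue                                  # 2. already reflected in the wiki
        pages = selector_model.select(wiki, src)      # 3. cheap model picks relevant pages
        plan = synthesis_model.edit_plan(             # 4. smarter model returns a structured edit plan
            foundations=wiki.foundations,
            pages=pages,
            evidence=src.read_text(),
        )
        staged = wiki.stage(plan)                     # 5. stage every change to a temp directory
        if staged.validate():
            staged.apply()                            # all edits land together...
        else:
            staged.rollback()                         # ...or none do: no half-applied wiki
```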
Six commands cover the full lifecycle:
- winnow ingest: general-purpose evidence (sales / support / research / market / competitors / systems).
- winnow decide: documented decisions.
- winnow deliver: delivery outcomes.
- winnow discover: scans the wiki for tension and proposes opportunities.
- winnow recommend: generates recommendations from validated opportunities.
- winnow reconcile: two-step propose/apply for baseline foundation updates.

Every command supports --dry-run, so you can preview the plan before anything writes. Reconcile is two-step by design: the propose step writes a sidecar; the apply step is always an explicit human action.
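Scripting the preview step is straightforward. A minimal sketch, assuming winnow is on your PATH; only the command names and --dry-run are taken from the list above, and invoking the CLI via subprocess is just one way to automate it:

```python
import subprocess

# Preview what winnow discover would change, without writing anything.
preview = subprocess.run(
    ["winnow", "discover", "--dry-run"],
    capture_output=True, text=True, check=True,
)
print(preview.stdout)  # inspect the plan, then rerun without --dry-run to apply
```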
A recommendation that lives in the wiki carries everything an AI builder needs to start prototyping: a clear opportunity, cited evidence, expected impact, risks, and alignment with strategy. Harvest delivers that bundle, through three integration surfaces, into the tools where prototypes actually get made:
winnow gateway boots a chat-completions service on port 11434 (Ollama's default port, so clients auto-discover it). Open WebUI, Continue.dev, LibreChat, LobeChat, Cursor's OpenAI-compatible mode: any client that speaks the OpenAI API can chat with the wiki as if it were a model (a client sketch follows below).

The wiki is read-only over MCP and the gateway. Adding evidence still flows through the four input paths above, so the audit trail stays clean and the LLM remains the wiki's only editor.
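Because the gateway speaks the OpenAI API, any OpenAI SDK works against it. A minimal sketch with the official Python client; the api_key placeholder and the model id are assumptions (list /v1/models to see what the gateway actually exposes):

```python
from openai import OpenAI  # pip install openai

# OpenAI-compatible servers typically ignore the key; the gateway port is documented above.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
print(client.models.list())  # discover the model id(s) the gateway exposes

reply = client.chat.completions.create(
    model="winnow",  # placeholder: substitute an id from the listing above
    messages=[{"role": "user", "content": "What are our top validated opportunities right now?"}],
)
print(reply.choices[0].message.content)
```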
Self-host. pip install winnow, or run the Docker image (~520 MB, single /data volume, backend + frontend in one container). Bring your own LLM key (Anthropic or OpenRouter). Runs on your laptop, your server, or behind a reverse proxy with trusted-header auth (Caddy, Nginx, Tailscale Funnel, Cloudflare Access).
Hosted at usewin.now. Same image, hosted for you. Each workspace gets a subdomain that reads as a sentence: cocacola.usewin.now, "Coca Cola use Winnow." Bring your own LLM key (no token markup, no rate-limit accounting). Daily volume snapshots, 30-day export window on cancellation.
Winnow is open-source and free to run on your own infrastructure. The hosted version at usewin.now is opening in waves — join the waitlist to get early access.