

Launch Playbook (Revised for Agent-Human Collaboration)


This playbook assumes sequential execution: Mailmolt (months 0-6) → Findable (months 6-12) → Agent Workspace (months 12-18). Agent Workspace engineering begins at month 10 with the open-source MCP server. The human dashboard launches with the managed service — it’s the primary differentiator.

Month 8-9: Publish “Why Agents and Humans Need a Shared Workspace”

  • Long-form blog post (2,000-3,000 words)
  • The desk metaphor as the hook: “Imagine hiring 37 employees and giving them no shared drive”
  • Data-backed (3M agents, 88% security incidents, 53% unmonitored, 18% IAM confidence)
  • Framework-level evidence (LangChain Deep Agents, Google ADK Artifacts, the workarounds)
  • NEW angle: Not just “agents need storage” — “agents and humans need to work together on real files”
  • Post on personal blog, cross-post to dev.to, share on X/HN/Reddit
  • Goal: Establish authority on the agent-human collaboration problem

Month 9-10: “The State of Agent-Human Collaboration” Research Post

  • Survey or analysis of how teams currently oversee agent output
  • Include the workarounds: screenshots of terminal logs, custom Slack bots, manual copy-paste
  • Reference Fast.io (agent-agent only), Box MCP (human tools for agents), Google WS (agents as extensions)
  • Key finding to highlight: No tool lets humans and agents collaborate on shared files
  • Goal: Become the reference post people link to when discussing this topic
Brand & Dashboard Design

  • Finalize name (see 10-naming-positioning.md)
  • Secure domains (.dev, .io, .com)
  • Register trademarks (basic filing)
  • Create brand identity (logo, colors, typography)
  • Design the dashboard UI — mockups and screenshots before code. The dashboard IS the marketing.
  • Goal: Clean brand + dashboard designs ready for public use

Open-Source MCP Server (Month 10-11)

This is the most important marketing asset. Ship it before the managed product launches.

  • Core MCP server: vault_write, vault_read, vault_list, vault_delete, vault_share, vault_search, vault_versions
  • NEW collaboration tools: vault_comment, vault_feedback, vault_watch, vault_activity
  • Works with local filesystem (for self-hosting) and R2 (for managed)
  • Published to npm: @agentvault/mcp-server
  • Published to MCP registries: PulseMCP, Smithery, Glama
  • GitHub repo with: clear README, quickstart guide, framework examples (LangChain, CrewAI, Google ADK), MIT license
  • Goal: 500+ GitHub stars before managed service launch
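The vault tool set above can be pictured with a minimal sketch of the local-filesystem backend: every write creates a new immutable version with author attribution, which is what the dashboard later surfaces. The `LocalVault` class, on-disk layout, and metadata shape are illustrative assumptions, not the shipped implementation:

```typescript
// Illustrative sketch (not the shipped code) of versioned, attributed
// writes behind vault_write / vault_read / vault_versions.
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

interface VaultVersion {
  version: number;
  author: string;    // agent or human identity, for attribution
  writtenAt: string; // ISO timestamp
}

class LocalVault {
  constructor(private root: string) {
    fs.mkdirSync(root, { recursive: true });
  }

  // Each write appends a new immutable version plus a metadata record.
  write(file: string, content: string, author: string): VaultVersion {
    const dir = path.join(this.root, file);
    fs.mkdirSync(dir, { recursive: true });
    const version = this.versions(file).length + 1;
    fs.writeFileSync(path.join(dir, `v${version}`), content);
    const meta: VaultVersion = {
      version,
      author,
      writtenAt: new Date().toISOString(),
    };
    fs.writeFileSync(path.join(dir, `v${version}.json`), JSON.stringify(meta));
    return meta;
  }

  // Read the latest version unless an older one is pinned explicitly.
  read(file: string, version?: number): string {
    const v = version ?? this.versions(file).length;
    return fs.readFileSync(path.join(this.root, file, `v${v}`), "utf8");
  }

  versions(file: string): VaultVersion[] {
    const dir = path.join(this.root, file);
    if (!fs.existsSync(dir)) return [];
    return fs
      .readdirSync(dir)
      .filter((f) => f.endsWith(".json"))
      .map((f) => JSON.parse(fs.readFileSync(path.join(dir, f), "utf8")))
      .sort((a, b) => a.version - b.version);
  }
}

// An agent writes twice; a human can later inspect content and attribution.
const vault = new LocalVault(fs.mkdtempSync(path.join(os.tmpdir(), "agentvault-")));
vault.write("report.md", "# Q3 Draft", "agent:researcher");
vault.write("report.md", "# Q3 Final", "agent:researcher");
console.log(vault.read("report.md"));            // # Q3 Final
console.log(vault.versions("report.md").length); // 2
```

The same interface can sit in front of R2 for the managed service; only the storage calls change.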

GitHub README Template (Revised for Collaboration):

# AgentVault MCP Server
Shared workspace for AI agents and humans. Real files. Full collaboration.
## Quick Start (5 minutes)

```bash
npm install @agentvault/mcp-server
```

Then add to your MCP config:

```json
{
  "mcpServers": {
    "agentvault": {
      "command": "npx",
      "args": ["@agentvault/mcp-server", "--storage", "local"]
    }
  }
}
```
## Why?
Your agents write reports, code, and datasets. But you can't SEE their work.
You can't comment on it. When the session ends, it disappears.
AgentVault gives agents and humans a shared workspace:
- Agents write files that persist across sessions
- Humans view files in a web dashboard (like Google Drive)
- Humans comment → agents read feedback on next run
- Everything versioned with agent/human attribution
- Permissions: agents can't access what they shouldn't
## Managed Service
Dashboard + managed hosting: [url] (free trial)
See what your agents are working on →

Managed Service MVP + Human Dashboard (Month 12-13)


The managed service launches WITH the dashboard. This is non-negotiable — the dashboard is the product.

  • Landing page with trial signup
  • 7-day free trial (500 MB, 1 agent, 1 human seat, no credit card)
  • Starter tier ($5/month) with credit card
  • Human dashboard: file browser, activity feed, comments panel, agent status
  • API docs with interactive playground
  • Framework quickstart guides

Dashboard must be demo-ready at launch. Screenshots and video of the dashboard are the primary marketing asset.

Community Building

  • Launch Discord before the product
  • Channels: #general, #help, #feedback, #showcase, #contributors, #dashboard-feedback (NEW)
  • Bot that posts new GitHub releases/issues
  • Goal: 200 members before managed service launch
  • Email all waitlist/newsletter subscribers
  • DM 50 developer influencers with early access + dashboard screenshots
  • Prepare HN post (draft, review, finalize — collaboration angle, NOT just storage)
  • Prepare Product Hunt listing (screenshots of dashboard, description, maker comment)
  • Schedule social media posts
  • Have 3-5 beta user testimonials ready
  • Record 60-second demo video: agent writes file → human sees it in dashboard → human comments → agent reads feedback
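The loop in that demo video can be sketched in a few lines. `FeedbackQueue`, its method names, and the author-string convention are hypothetical stand-ins for the vault_comment / vault_feedback tools, not the shipped API:

```typescript
// Illustrative sketch of the comment loop: a human attaches feedback to a
// file, and the agent drains unread feedback at the start of its next run.

interface VaultComment {
  file: string;
  author: string; // e.g. "human:pm" or "agent:writer"
  body: string;
  read: boolean;
}

class FeedbackQueue {
  private comments: VaultComment[] = [];

  // Human side: leave a comment on a file from the dashboard.
  comment(file: string, author: string, body: string): void {
    this.comments.push({ file, author, body, read: false });
  }

  // Agent side: fetch unread feedback, marking it read so the same note
  // is not replayed on every subsequent run.
  pullFeedback(file: string): string[] {
    const unread = this.comments.filter((c) => c.file === file && !c.read);
    unread.forEach((c) => (c.read = true));
    return unread.map((c) => `${c.author}: ${c.body}`);
  }
}

// agent writes -> human comments -> agent reads feedback on next run
const queue = new FeedbackQueue();
queue.comment("report.md", "human:pm", "Add a revenue breakdown table.");
console.log(queue.pullFeedback("report.md")); // one unread comment returned
console.log(queue.pullFeedback("report.md")); // empty: already marked read
```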

Launch Day

Morning (9am PT):

  1. Product Hunt listing goes live (lead with dashboard screenshots)
  2. “Show HN” post goes live
  3. X/Twitter thread: “We just launched [product]. Here’s what your AI agents have been working on.” (include dashboard screenshot)
  4. Blog post: “Introducing [Product]: Where Agents and Humans Work Together”
  5. Reddit posts: r/ClaudeAI, r/LocalLLaMA, r/programming

HN Post Format (Revised for Collaboration):

Show HN: [Product] – Shared workspace for AI agents and humans
Your agents write reports, analyze data, generate code. But you can't
see their work. You can't comment on it. When the session ends, it
disappears.
We built a shared workspace where agents and humans collaborate:
- Agents write files via MCP (any MCP-compatible agent)
- Humans view files in a web dashboard (like Google Drive)
- Humans comment, agents read feedback on next run
- Everything versioned with agent/human attribution
- Permissions: agents can't access what they shouldn't
- Activity feed: see what every agent is doing, real-time
Open source (MIT): npm install @agentvault/mcp-server
Dashboard: [url] (free trial)
GitHub: [url]
Works with Claude Code, OpenClaw, LangChain, CrewAI, anything MCP.
We're the team behind Mailmolt (email for agents) and Findable (agent skill registry).
Ask us anything about building collaboration infrastructure for AI agents.

Throughout the Day:

  • Monitor HN comments, respond to every question
  • Emphasize the dashboard in responses — link to screenshots, demo video
  • Monitor Product Hunt comments
  • Fix any critical bugs immediately
  • Share updates on X/Twitter
The Week After:

  • Continue engaging on HN/PH/Reddit
  • Publish “What we learned from our launch” blog post
  • Send follow-up email to new signups who haven’t activated
  • Reach out to developers who starred the GitHub repo but didn’t try the managed service
  • Collect feedback systematically (Discord, support emails, GitHub issues)
  • NEW: Reach out to PMs/managers who signed up as human viewers — their feedback is critical
  • Framework integration blog posts:
    • “How to Give Your LangChain Agent a Shared Workspace”
    • “Persistent File Collaboration for CrewAI Pipelines”
    • “Using AgentVault with Google ADK”
  • NEW: “What Your AI Agents Are Working On” blog post — aimed at non-developers
    • Show the dashboard experience for a PM managing agent outputs
    • Screenshots of activity feed, comments, file browser
    • This targets Audience 2 (PMs/domain experts)
  • YouTube tutorial: “5-Minute Setup: See What Your Agents Are Working On”
  • YouTube demo: “The Agent-Human Collaboration Loop” — full workflow demo
  • Case study from first production user
  • Announce partnerships (if applicable)
Post-Launch Roadmap:

  • Cross-agent file sharing (shared workspaces)
  • Enhanced dashboard: file preview, inline comments, @mentions
  • Notification integrations: Slack webhook, email digest (via Mailmolt)
  • Version history dashboard with diff view
  • Launch Team tier ($15/month, 5 agents + 5 humans)
  • Blog: “Multi-Agent + Human Collaboration: How It Works”
Enterprise Readiness:

  • SOC 2 Type I (if not already)
  • EU AI Act compliance positioning: “Human oversight built in”
  • Enterprise tier with audit logs, SSO, custom retention
  • Partnership announcement with identity provider (Keycard, Strata)
  • Enterprise landing page (lead with compliance + human oversight, not developer features)
  • First enterprise case study
  • Enterprise sales deck emphasizing: human dashboard, audit trails, compliance, delegation chains
Suite Integration:

  • Unified identity with Mailmolt
  • Findable integration (skill outputs auto-stored)
  • Suite bundle pricing
  • Blog: “One Identity, Every Service: The Agent-Human Workspace Platform”
  • Announce suite vision publicly
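The Slack webhook integration on this roadmap can be as small as mapping a vault activity event to Slack's incoming-webhook payload, which is a JSON body with a `text` field. The event shape and helper function below are illustrative assumptions, not the shipped API:

```typescript
// Illustrative sketch: turn a vault activity event into a Slack
// incoming-webhook payload. The event shape is hypothetical.

interface VaultEvent {
  kind: "write" | "comment";
  file: string;
  actor: string; // "agent:<name>" or "human:<name>"
}

function toSlackPayload(event: VaultEvent): { text: string } {
  const verb = event.kind === "write" ? "updated" : "commented on";
  return { text: `${event.actor} ${verb} \`${event.file}\`` };
}

// In production the payload would be POSTed to the configured webhook URL:
//   fetch(webhookUrl, { method: "POST", body: JSON.stringify(payload) })
const payload = toSlackPayload({ kind: "write", file: "report.md", actor: "agent:writer" });
console.log(payload.text); // agent:writer updated `report.md`
```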

Key Milestones & Success Criteria (Revised)

| Milestone | Target Date | Success Criteria |
| --- | --- | --- |
| Open-source MCP server | Month 11 | 500+ GitHub stars |
| Managed service + dashboard launch | Month 13 | 100 trial activations in first week |
| First human comments on agent files | Month 13 | 50+ human comments in first month |
| 1,000 registered agents | Month 15 | Organic growth, not paid |
| 500 active human dashboard users | Month 15 | Humans returning weekly to view agent work |
| $5K MRR | Month 16 | 200+ paying organizations |
| SOC 2 Type I | Month 14 | Certificate received |
| Enterprise pilot | Month 16 | 1 signed enterprise customer |
| Suite integration live | Month 18 | Cross-product identity working |
| $20K MRR | Month 20 | 500+ paying organizations |

North Star: Time to First Human-Agent Collaboration


Target: < 10 minutes from signup to a human viewing an agent-created file in the dashboard and leaving a comment.

The previous north star was “Time to First File Write.” Writing files is necessary but not sufficient; the collaboration moment is the magic.
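Measured concretely, the north star reduces to a timestamp difference over product events: minutes from signup to the first human comment on an agent-created file. The event names and shape here are illustrative assumptions, not a real analytics schema:

```typescript
// Illustrative sketch of computing "Time to First Human-Agent Collaboration".

interface ProductEvent {
  type: "signup" | "agent_write" | "human_comment";
  at: number; // ms since epoch
}

function timeToFirstCollaborationMinutes(events: ProductEvent[]): number | null {
  const signup = events.find((e) => e.type === "signup");
  const firstComment = events.find((e) => e.type === "human_comment");
  if (!signup || !firstComment) return null; // collaboration hasn't happened yet
  return (firstComment.at - signup.at) / 60_000;
}

const events: ProductEvent[] = [
  { type: "signup", at: 0 },
  { type: "agent_write", at: 3 * 60_000 },
  { type: "human_comment", at: 8 * 60_000 },
];
console.log(timeToFirstCollaborationMinutes(events)); // 8, under the 10-minute target
```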

Developer Funnel:

GitHub stars → npm installs → Trial signups → First file write →
First agent-human interaction → Day-7 retention → Conversion to paid

Human Funnel (NEW):

Invite received → Dashboard login → First file viewed →
First comment left → Day-7 return → Active monthly user
| Metric | Target |
| --- | --- |
| GitHub stars (Month 13) | 1,000 |
| npm installs/month (Month 14) | 5,000 |
| Trial activations/month (Month 14) | 500 |
| Human dashboard logins/month (Month 14) | 200 |
| Human comments/month (Month 14) | 500 |
| Trial → Paid conversion | 8-12% |
| Day-7 trial retention | 40% |
| Day-7 human dashboard retention | 30% |
| Monthly churn (paid) | <5% |
| Net Revenue Retention | 120%+ |
Metrics we will not optimize for:

  • Website traffic (meaningless without activation)
  • Total signups (meaningless without retention)
  • GitHub stars alone (vanity metric without product usage)
  • Number of blog posts published (quality > quantity)
  • Agent-only metrics without human counterpart (agents writing files is necessary but the value is the collaboration)