The Problem
Business development work is repetitive by design: find prospects, enrich their data, score them, follow up on schedule, draft outreach. Most of that work doesn't require human judgment — it just requires consistency. The problem is that doing it consistently, across multiple BD streams, at any real volume, is too slow to do manually and too fragile to cobble together with spreadsheets.
The Architecture
The suite runs as seven discrete modules, each responsible for one stage of the pipeline:
- Prospect Research — pulls new targets from external data sources based on per-stream search configs
- Contact Discovery — finds decision-maker contacts for qualified leads
- Lead Enrichment — populates structured fields from multiple external adapters
- Lead Scoring — applies weighted YAML-defined criteria to produce a numeric score
- Outreach Drafter — uses Claude to write personalized first-touch emails (drafts only — never auto-sent)
- Follow-up Scheduler — creates CRM activities based on pipeline stage and days-since rules
- Pipeline Reporter — generates weekly Markdown summaries of pipeline health
All modules write to Odoo CRM as the central data hub, and all of them support --dry-run for previewing changes without writing anything.
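A minimal sketch of the shared module shape, assuming a thin CRM client wrapper behind every write (the names CrmClient, run, and the lead fields are illustrative, not the project's actual API):

```python
import argparse

class CrmClient:
    """Minimal stand-in for the Odoo CRM connection (illustrative)."""
    def __init__(self, dry_run):
        self.dry_run = dry_run
        self.writes = []                 # writes actually performed

    def create_lead(self, lead):
        if self.dry_run:
            # dry-run: report the intended write instead of performing it
            print(f"[dry-run] would create lead: {lead['name']}")
            return False
        self.writes.append(lead)         # real mode: persist (stubbed here)
        return True

def run(argv=None):
    """Entry point shared by every module in this sketch."""
    parser = argparse.ArgumentParser(description="example pipeline module")
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args(argv)
    crm = CrmClient(dry_run=args.dry_run)
    crm.create_lead({"name": "Acme Corp"})   # stand-in for real module work
    return crm
```

Routing every write through one client is what makes a uniform --dry-run cheap: the flag is checked in exactly one place.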
The Key Design Decisions
YAML-driven business logic. Scoring weights, enrichment sources, follow-up thresholds, and outreach tone live in config files, not code. Adapting the suite to a new BD context means editing YAML, not touching Python.
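A scoring config under this design might look like the following. The field names, weights, and threshold here are illustrative, not the project's actual schema:

```yaml
# hypothetical scoring config for one BD stream
scoring:
  threshold: 60            # minimum score to qualify a lead
  criteria:
    - field: employee_count
      weight: 25
      min: 50              # points awarded only if value >= min
    - field: industry
      weight: 40
      one_of: [manufacturing, logistics]
    - field: has_website
      weight: 35
```

Tuning a stream then means editing weights and thresholds in a file like this, with no Python changes or redeploys.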
Adapter pattern for external data. Every external source — Google Maps, trade databases, company websites, news — is a self-contained adapter that never raises. On failure it returns success=False. The orchestrating module moves on. This keeps the pipeline resilient to spotty third-party APIs.
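The never-raise contract can be sketched as follows, assuming a small result object per adapter call (AdapterResult, NewsAdapter, and enrich are illustrative names, not the project's actual ones):

```python
from dataclasses import dataclass, field

@dataclass
class AdapterResult:
    success: bool
    data: dict = field(default_factory=dict)
    error: str = ""

class NewsAdapter:
    """Illustrative adapter for a flaky external source."""
    def fetch(self, company: str) -> AdapterResult:
        try:
            # stand-in for a real HTTP call; here it always fails
            raise TimeoutError("upstream API timed out")
        except Exception as exc:
            # failures become data, not exceptions
            return AdapterResult(success=False, error=str(exc))

def enrich(company: str, adapters: list) -> dict:
    """Merge fields from whichever adapters happen to succeed."""
    merged = {}
    for adapter in adapters:
        result = adapter.fetch(company)
        if not result.success:   # spotty source: skip it, never crash
            continue
        merged.update(result.data)
    return merged
```

Because every adapter returns the same result shape, the orchestrator needs no per-source error handling; one dead API degrades the data, not the run.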
Idempotency throughout. Every module is safe to re-run. Deduplication on lead creation, duplicate-checking before activity creation. Cron jobs can be re-triggered without creating junk data.
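A sketch of the dedup-before-create pattern, using an in-memory store and email as the stable key (both are assumptions for illustration; the real modules check against Odoo records):

```python
class LeadStore:
    """Illustrative stand-in for the CRM's lead table."""
    def __init__(self):
        self.leads = []

    def find(self, email: str):
        # look up an existing record by a stable key
        return next((l for l in self.leads if l["email"] == email), None)

    def create_if_missing(self, lead: dict) -> bool:
        """Return True only if a new record was actually created."""
        if self.find(lead["email"]) is not None:
            return False          # duplicate: a re-run creates no junk data
        self.leads.append(lead)
        return True
```

The same check-before-write shape applies to activity creation, which is what makes re-triggered cron jobs harmless.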
Human in the loop on outreach. The outreach module uses Claude Sonnet to draft emails and writes them to a CRM field for human review. Nothing is ever sent automatically. The automation handles the research and drafting; a person handles the relationship.
Haiku for scale, Sonnet for quality. Batch enrichment summarization uses Claude Haiku (cheap, fast). Outreach drafts use Claude Sonnet (higher quality). The distinction is explicit via named constants in the shared LLM client.
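The model split might look like this in the shared client (constant names, placeholder model IDs, and the injected send callable are all illustrative assumptions):

```python
# named constants make the cost/quality trade-off explicit at call sites
ENRICHMENT_MODEL = "claude-haiku"   # cheap + fast: batch summarization
OUTREACH_MODEL = "claude-sonnet"    # higher quality: first-touch drafts

class LlmClient:
    """Illustrative shared client; transport is injected for testability."""
    def __init__(self, send):
        self.send = send  # e.g. a wrapper around the Anthropic SDK

    def summarize(self, text: str) -> str:
        return self.send(model=ENRICHMENT_MODEL, prompt=f"Summarize: {text}")

    def draft_outreach(self, lead: dict) -> str:
        return self.send(model=OUTREACH_MODEL,
                         prompt=f"Draft a first-touch email to {lead['name']}")
```

Keeping the constants in one module means a model upgrade is a one-line change, and grepping for OUTREACH_MODEL shows exactly where the expensive model is spent.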
What It Replaced
Manual prospect research in spreadsheets, ad-hoc follow-up reminders, and inconsistent lead data. The pipeline now runs on a cron schedule — prospect research and reporting weekly, follow-up scheduling and scoring daily.
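The schedule described above could be expressed in crontab form roughly like this (paths, times, and module names are illustrative, not the actual deployment):

```shell
# weekly: prospect research and pipeline reporting (Monday mornings)
0 6 * * 1  cd /opt/bd-suite && python -m prospect_research
0 7 * * 1  cd /opt/bd-suite && python -m pipeline_reporter
# daily: scoring and follow-up scheduling
0 5 * * *  cd /opt/bd-suite && python -m lead_scoring
0 5 * * *  cd /opt/bd-suite && python -m followup_scheduler
```

Idempotency (above) is what makes this schedule safe: a doubled or re-run job cannot create duplicate leads or activities.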
What I Learned
The hardest part wasn't the code — it was defining the scoring criteria precisely enough to encode them in YAML. Forcing that precision was the real value. Vague intuitions about what makes a good lead don't survive contact with a config file that demands explicit weights and thresholds.