WhiteSpaceIQ
A production-grade multi-agent system that generates 40-page strategic marketing audits in 45 minutes instead of 30 days. Built for Sarra Richmond ("The Ghost"), the system preserves her brand voice through a novel "Foundation Lock" pattern while orchestrating three specialized AI agents.
The Challenge
Sarra Richmond creates comprehensive marketing audits that typically require a month of research, analysis, and strategic development. The challenge: automate this without losing her unique "Ghost Marketing" voice or compromising on strategic depth.
The Solution
A three-agent orchestration system where Research, Strategy, and Validation agents work in conversation (not sequence) around a locked brand foundation. The system can reject and revise its own work to maintain quality and voice consistency.
System Architecture
High-Level Architecture
Core Components
WhiteSpaceIQ/
├── backend/
│   ├── src/
│   │   ├── agents/              # Three specialized agents
│   │   │   ├── research_coordinator.py
│   │   │   ├── strategic_analyzer.py
│   │   │   └── insight_validator.py
│   │   ├── builders/            # Phase builders (1-5)
│   │   ├── services/            # Orchestration services
│   │   ├── templates/           # Jinja2 templates
│   │   └── main.py              # FastAPI application
│   └── data/
│       ├── foundations/         # Locked brand foundations
│       └── outputs/             # Generated audits
└── frontend/
    ├── components/              # React components
    ├── pages/                   # Next.js pages
    └── services/                # API integration
Data Flow
Workflow Phases
The system operates through five distinct phases, each building on the previous while maintaining the locked foundation throughout the process.
Phase 1: Strategic Foundation
# Foundation elements that get locked
foundation = {
    "brand_essence": {
        "mission": "User-provided mission",
        "vision": "User-provided vision",
        "values": ["integrity", "innovation", "impact"]
    },
    "voice_parameters": {
        "tone": "rebellious",    # Locked parameter
        "stance": "direct",      # Locked parameter
        "ethic": "human_first"   # Locked parameter
    }
}
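One lightweight way to make such a structure genuinely read-only is to wrap it in `MappingProxyType`. This is a sketch of the idea, not the project's actual locking mechanism:

```python
from types import MappingProxyType

def deep_lock(d: dict) -> MappingProxyType:
    """Recursively wrap nested dicts in read-only mapping views."""
    return MappingProxyType(
        {k: deep_lock(v) if isinstance(v, dict) else v for k, v in d.items()}
    )

foundation = deep_lock({
    "voice_parameters": {"tone": "rebellious", "stance": "direct"}
})

# Reads work as usual; any attempt to assign raises TypeError.
tone = foundation["voice_parameters"]["tone"]
```

Note that lists inside the structure remain mutable; a production lock would also freeze those (e.g., convert to tuples).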
Phase 2: Research & Discovery
The Research Coordinator agent analyzes the market landscape, identifies opportunities, and writes its findings to shared context for the other agents to access.
Phase 3: Strategic Analysis
The Strategic Analyzer reads the research context and crafts narrative sections, applying the locked foundation voice throughout.
Phase 4: Validation & Refinement
The Insight Validator checks all content against the foundation lock, measuring voice consistency and factual accuracy. It can reject content and request rewrites.
Phase 5: Document Generation
Final assembly and rendering of the 40-page audit with all validated content.
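The five phases can be pictured as a sequential driver that threads the locked foundation through every step. The phase names and signatures below are illustrative assumptions, not the project's actual builder API:

```python
from dataclasses import dataclass, field

@dataclass
class AuditContext:
    foundation: dict                      # locked before any generation runs
    research: dict = field(default_factory=dict)
    sections: list = field(default_factory=list)
    approved: bool = False

def run_phases(ctx: AuditContext, phases) -> AuditContext:
    """Run each phase in order; every phase sees the same locked foundation."""
    for phase in phases:
        ctx = phase(ctx)
    return ctx

# Hypothetical stand-ins for phases 2-5:
def research(ctx):
    ctx.research = {"market": "findings"}
    return ctx

def strategize(ctx):
    ctx.sections = ["positioning draft"]
    return ctx

def validate(ctx):
    ctx.approved = True
    return ctx

def render(ctx):
    return ctx

result = run_phases(AuditContext(foundation={"tone": "rebellious"}),
                    [research, strategize, validate, render])
```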
The Three Agents
Unlike traditional prompt chaining, these agents operate with genuine autonomy and can disagree with each other's outputs.
Agent Communication
# Agents share context and can disagree
class AgentOrchestrator:
    async def orchestrate(self, context: SharedContext):
        research = await self.research_agent.gather(context)

        # Strategy agent reads research
        strategy = await self.strategy_agent.compose(
            research=research,
            foundation=context.foundation_lock
        )

        # Validation can reject and loop
        while not self.validator.approve(strategy):
            feedback = self.validator.get_feedback()
            strategy = await self.strategy_agent.revise(
                feedback=feedback,
                foundation=context.foundation_lock
            )
        return strategy
Agent Orchestration
The orchestration layer manages agent interactions, shared context, and validation loops using LangGraph for state management.
LangGraph Implementation
from langgraph.graph import StateGraph, END

# Define the agent graph
workflow = StateGraph(AgentState)

# Add nodes for each agent
workflow.add_node("research", research_agent)
workflow.add_node("strategy", strategy_agent)
workflow.add_node("validation", validation_agent)

# Define edges with conditional logic
workflow.set_entry_point("research")
workflow.add_edge("research", "strategy")
workflow.add_conditional_edges(
    "strategy",
    lambda state: "validation" if state["ready"] else "strategy"
)
workflow.add_conditional_edges(
    "validation",
    lambda state: END if state["approved"] else "strategy"
)

app = workflow.compile()
Shared Context Management
All agents access a shared context that maintains state across the orchestration:
| Context Element | Purpose | Access |
|---|---|---|
| Foundation Lock | Immutable brand voice parameters | Read-only for all agents |
| Research Findings | Market intelligence and insights | Write: Research, Read: All |
| Strategic Sections | Draft content for audit | Write: Strategy, Read: Validation |
| Validation Feedback | Rejection reasons and revision requests | Write: Validation, Read: Strategy |
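The access rules in the table could be enforced in code roughly as follows. This is a sketch; the key and agent names are assumptions mirroring the table, not the project's actual implementation:

```python
class SharedContext:
    """Shared state with per-agent write permissions."""

    # Which agent may write each key; foundation_lock has no writers.
    _writers = {
        "research_findings": {"research"},
        "strategic_sections": {"strategy"},
        "validation_feedback": {"validation"},
    }

    def __init__(self, foundation_lock: dict):
        self._data = {"foundation_lock": foundation_lock}

    def read(self, key: str):
        return self._data.get(key)

    def write(self, agent: str, key: str, value) -> None:
        if agent not in self._writers.get(key, set()):
            raise PermissionError(f"{agent!r} may not write {key!r}")
        self._data[key] = value

ctx = SharedContext(foundation_lock={"tone": "rebellious"})
ctx.write("research", "research_findings", {"competitors": 3})
```

Because `foundation_lock` has an empty writer set, every agent gets read-only access to it by construction.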
Foundation Lock Pattern
Implementation
class FoundationLock:
    """Immutable brand voice parameters"""

    def __init__(self, brand_profile: BrandProfile):
        self.tone = brand_profile.tone      # e.g., "rebellious"
        self.stance = brand_profile.stance  # e.g., "direct"
        self.ethic = brand_profile.ethic    # e.g., "human_first"
        self._locked = True

    def validate_content(self, content: str) -> ValidationResult:
        """Check if content matches locked foundation"""
        tone_score = self.measure_tone_alignment(content)
        stance_score = self.measure_stance_alignment(content)
        ethic_score = self.measure_ethic_alignment(content)
        overall_score = (tone_score + stance_score + ethic_score) / 3
        return ValidationResult(
            passed=overall_score >= 0.9,  # 90% threshold
            score=overall_score,
            feedback=self.generate_feedback(content)
        )
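The `measure_*` helpers are not shown above. A naive keyword-overlap scorer gives the flavor of what such a metric could look like; this is purely illustrative, and the real system presumably scores alignment with an LLM or embeddings rather than keyword lists:

```python
# Hypothetical marker phrases per tone; not from the project.
TONE_MARKERS = {
    "rebellious": {"break", "refuse", "rules", "status quo", "dare"},
}

def measure_tone_alignment(content: str, tone: str = "rebellious") -> float:
    """Return the fraction of marker phrases present in the content.

    A crude stand-in for a real alignment metric, scored in [0, 1].
    """
    markers = TONE_MARKERS[tone]
    text = content.lower()
    hits = sum(1 for marker in markers if marker in text)
    return hits / len(markers)

score = measure_tone_alignment("Dare to break the rules of the status quo.")
```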
Voice Parameters
Technology Stack
AI Models
| Model | Purpose | Configuration |
|---|---|---|
| GPT-4 Turbo | Primary reasoning engine | Temperature: 0.7, Max tokens: 4000 |
| Claude 3 Opus | Validation and refinement | Temperature: 0.3, Max tokens: 2000 |
| OpenAI Embeddings | Vector search for RAG | Model: text-embedding-3-large |
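The per-model settings in the table could be centralized in a small config map. The model identifiers follow the table; the surrounding structure is an assumption:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelConfig:
    model: str
    temperature: float
    max_tokens: int

# One entry per role, mirroring the table above.
MODEL_ROLES = {
    "primary": ModelConfig("gpt-4-turbo-preview", temperature=0.7, max_tokens=4000),
    "validation": ModelConfig("claude-3-opus", temperature=0.3, max_tokens=2000),
}

def config_for(role: str) -> ModelConfig:
    """Look up the frozen config for an agent role."""
    return MODEL_ROLES[role]
```

Freezing the dataclass keeps model settings as immutable as the brand foundation: a typo'd runtime mutation fails loudly instead of silently drifting.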
Setup & Deployment
Local Development Setup
Environment Variables
# API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Database
DATABASE_URL=postgresql://user:pass@localhost/whitespaceiq
PGVECTOR_EXTENSION=true

# Application
ENVIRONMENT=development
DEBUG=true
SECRET_KEY=your-secret-key

# Model Configuration
PRIMARY_MODEL=gpt-4-turbo-preview
VALIDATION_MODEL=claude-3-opus
EMBEDDING_MODEL=text-embedding-3-large
Production Deployment
The application is deployed on Render.com with automatic builds from the main branch:
- Frontend: Static Site with build command
- Database: PostgreSQL with pgvector extension
- Environment: Production with auto-scaling
API Endpoints
Main Endpoints
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/audit/generate | POST | Generate complete audit from user input |
| /api/v1/foundation/create | POST | Create and lock brand foundation |
| /api/v1/status/{session_id} | GET | Check generation status |
| /api/v1/download/{session_id} | GET | Download completed audit PDF |
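A typical client flow against these endpoints is submit, poll status, then download. The sketch below injects the HTTP callable so no live server (or particular HTTP library) is assumed; the `"complete"` status value and `session_id` field are assumptions about the response shape:

```python
import time

def generate_audit(fetch, payload: dict, poll_interval: float = 0.0) -> bytes:
    """Drive the generate -> status -> download flow.

    `fetch(method, path, body=None)` is any HTTP callable returning a dict
    (or bytes for the download); swap in httpx or requests in real use.
    """
    session = fetch("POST", "/api/v1/audit/generate", payload)["session_id"]
    while fetch("GET", f"/api/v1/status/{session}")["status"] != "complete":
        time.sleep(poll_interval)
    return fetch("GET", f"/api/v1/download/{session}")

# Hypothetical in-memory stub standing in for the real API:
def stub_fetch(method, path, body=None, _state={"polls": 0}):
    if path.endswith("/generate"):
        return {"session_id": "abc123"}
    if "/status/" in path:
        _state["polls"] += 1  # report "running" once, then "complete"
        return {"status": "complete" if _state["polls"] > 1 else "running"}
    return b"%PDF-1.7 ..."

pdf = generate_audit(stub_fetch, {"business_context": {}})
```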
Example Request
{
"business_context": {
"company_name": "Example Corp",
"industry": "Technology",
"target_audience": "B2B SaaS buyers"
},
"brand_voice": {
"tone": "rebellious",
"stance": "direct",
"ethic": "human_first"
},
"audit_focus": [
"competitive_analysis",
"market_positioning",
"growth_opportunities"
]
}
Testing
Test Scripts
Validation Metrics
Troubleshooting
Common Issues
Backend holds stale state across test runs
Solution: Restart the backend between test runs:
lsof -ti:8000 | xargs kill -9 && uvicorn src.main:app --reload

Document rendering breaks when LLM output varies
Solution: Use flexible template rendering without rigid schemas. See:
templates/TEMPLATE_ARCHITECTURE.md

LLM requests time out during long generations
Solution: Increase the timeout in the LLMClient configuration:
timeout=httpx.Timeout(120.0)
Lessons Learned
We spent a day building complex connection pooling with context managers, health monitoring, and dependency injection. The actual solution? Restart the server between tests. Simple solutions often beat clever ones.
Key Insights
- Foundation First: Lock brand voice before any generation to prevent drift
- Agents Need Autonomy: Real reasoning requires ability to disagree and revise
- Templates Must Be Flexible: LLM output varies; rigid schemas break
- Measure Before Optimizing: Don't solve imaginary problems
- Production != Perfect: Ship working code, iterate based on real usage
What Worked
Future Improvements
- Implement streaming responses for better UX
- Add more granular progress tracking
- Expand validation metrics beyond voice consistency
- Build agent memory for improved performance over time