# Quick Start
Get your first AI-generated podcast episode in about 5 minutes.
## Prerequisites
**What you need**

**Required**
- **Anthropic API key** — used by Claude for script generation. Get one at console.anthropic.com.
- **Google Cloud project** with the Text-to-Speech and Cloud Storage APIs enabled. Use the Terraform setup to provision everything automatically.
**Optional**
AWS S3 (hosting), MongoDB (deduplication), and Grafana Loki (logging) are all optional. The core pipeline works without them — output is saved locally.
## 5-minute quickstart
### Step 1 — Pull the image
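This section does not name the published image, so the command below uses a placeholder (`ghcr.io/your-org/podcast-pipeline`); substitute the actual image name for your project:

```shell
# Pull the latest pipeline image (placeholder name — replace with yours)
docker pull ghcr.io/your-org/podcast-pipeline:latest
```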
### Step 2 — Create an environment file
```shell
cat > .env << 'EOF'
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
GCS_BUCKET_NAME=your-gcs-bucket-name
GOOGLE_APPLICATION_CREDENTIALS=/credentials.json
EOF
```
### Step 3 — Fix output directory permissions
The container runs as UID 1000, so the host directory you mount for output must be writable by that user.
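A minimal sketch of the permissions fix, assuming output is written to a local `./output` folder that gets mounted into the container:

```shell
# Create the output directory and hand ownership to UID 1000,
# the user the container runs as
mkdir -p output
sudo chown 1000:1000 output
```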
### Step 4 — Run
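A hedged example invocation — the image name is a placeholder, and the `credentials.json` and `./output` paths are assumptions; adjust both to your setup. It loads the `.env` file from Step 2 and mounts the service-account key at the path `GOOGLE_APPLICATION_CREDENTIALS` points to:

```shell
docker run --rm \
  --env-file .env \
  -v "$(pwd)/credentials.json:/credentials.json:ro" \
  -v "$(pwd)/output:/output" \
  ghcr.io/your-org/podcast-pipeline:latest
```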
## What you'll get
After a few minutes the pipeline completes and you'll have two files in `output/`:

- `episode_script.txt` — the generated two-host dialogue
- `episode.wav` — the synthesized audio
That's it!
You now have a complete podcast episode. The script is the raw dialogue,
and the .wav file is ready to upload to any podcast host.
## What just happened?
```mermaid
sequenceDiagram
    participant You
    participant Pipeline
    participant Wired
    participant Claude
    participant GoogleTTS
    You->>Pipeline: docker run ...
    Pipeline->>Wired: fetch RSS feed
    Wired-->>Pipeline: latest articles
    Pipeline->>Claude: write podcast script
    Claude-->>Pipeline: two-host dialogue
    Pipeline->>GoogleTTS: synthesize audio
    GoogleTTS-->>Pipeline: episode.wav
    Pipeline-->>You: output/episode_script.txt + episode.wav
```
## Next steps
- **Installation options**: all ways to install — Docker, pip, or from source — with platform notes.
- **CLI reference**: every flag, voice option, source selector, and output setting documented.
- **Configuration**: all environment variables with defaults and examples.