Egregora Documentation

Emergent Group Reflection Engine Generating Organized Relevant Articles

Welcome to the Egregora documentation! Transform your WhatsApp group chats into intelligent, privacy-first blogs where collective conversations emerge as beautifully written articles.

Features

  • 🧠 Emergent Intelligence: Collective conversations synthesize into coherent articles
  • 👥 Group Reflection: Your community's unique voice and insights are preserved
  • 🛡️ Privacy-First: Automatic anonymization means real names never reach the AI (see the sketch after this list)
  • ⚙️ Fully Automated: Stateless pipeline powered by Ibis, DuckDB, and Gemini
  • 📊 Smart Context: RAG retrieval ensures consistent, context-aware writing
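
The anonymization can be thought of as a deterministic pseudonym mapping. Here is a minimal sketch of the idea; the word lists, salt handling, and function name are illustrative assumptions, not Egregora's implementation:

```python
import hashlib

# Illustrative word lists; the real pseudonym scheme may differ.
ADJECTIVES = ["Quiet", "Amber", "Swift", "Violet"]
ANIMALS = ["Otter", "Falcon", "Lynx", "Heron"]

def pseudonym(real_name: str, salt: str) -> str:
    """Deterministically map a real name to a pseudonym.

    Only the pseudonym is ever included in LLM prompts.
    """
    digest = hashlib.sha256((salt + real_name).encode()).digest()
    return f"{ADJECTIVES[digest[0] % len(ADJECTIVES)]} {ANIMALS[digest[1] % len(ANIMALS)]}"

# Same input and salt always yield the same pseudonym, so authorship
# stays consistent across posts without exposing the real name.
print(pseudonym("Alice Santos", salt="per-group-secret"))
```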

🌟 Try the Live Demo

See Egregora in action! Check out our live demo blog generated from real WhatsApp conversations.


Architecture Overview

```mermaid
graph LR
    A[Ingestion] --> B[Privacy]
    B --> C[Augmentation]
    C --> D[Knowledge]
    D --> E[Generation]
    E --> F[Publication]
    D -.-> E
```

Egregora uses a staged pipeline architecture that processes conversations through distinct phases:

  1. Ingestion: Parse WhatsApp exports into structured data
  2. Privacy: Anonymize names and detect PII
  3. Augmentation: Enrich context with LLM-powered descriptions
  4. Knowledge: Build RAG index and annotation metadata
  5. Generation: LLM generates blog posts with tool calling
  6. Publication: Create MkDocs site with templates
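
To make the ingestion step concrete, here is a minimal parser for one common WhatsApp export line format. Export formats vary by platform and locale, so the regex below is an illustrative assumption rather than Egregora's actual parser:

```python
import re
from typing import NamedTuple

class Message(NamedTuple):
    timestamp: str
    author: str
    text: str

# Matches lines like: "31/12/23, 21:15 - Alice: Happy new year!"
LINE = re.compile(r"^(\d{1,2}/\d{1,2}/\d{2,4}, \d{1,2}:\d{2}) - ([^:]+): (.*)$")

def parse_export(lines: list[str]) -> list[Message]:
    messages: list[Message] = []
    for line in lines:
        match = LINE.match(line)
        if match:
            messages.append(Message(*match.groups()))
        elif messages:
            # Lines that don't match are continuations of the previous message.
            prev = messages[-1]
            messages[-1] = prev._replace(text=prev.text + "\n" + line)
    return messages

print(parse_export(["31/12/23, 21:15 - Alice: Happy new year!"]))
```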

Stack

  • Ibis: DataFrame abstraction for data transformations
  • DuckDB: Fast analytical database with vector search
  • Gemini: Google's LLM for content generation
  • MkDocs: Static site generation
  • uv: Modern Python package management
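
A quick sketch of how the Ibis/DuckDB pairing works; the table and its columns are hypothetical, but the API calls are standard Ibis:

```python
import ibis

# Hypothetical anonymized messages; in the real pipeline these come from ingestion.
messages = ibis.memtable({
    "author": ["Amber Otter", "Swift Lynx", "Amber Otter"],
    "text": ["hi all", "long thoughtful message", "see you tomorrow"],
})

# Ibis expressions compile to SQL; Ibis's default backend, DuckDB, executes them.
per_author = (
    messages.group_by("author")
    .aggregate(n=messages.count(), chars=messages.text.length().sum())
    .order_by(ibis.desc("n"))
)
print(per_author.execute())
```

For the vector-search side, recent DuckDB versions expose similarity functions such as list_cosine_similarity, which can back the RAG lookups.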

Philosophy

Egregora follows the principle of "trusting the LLM". Instead of micromanaging the model with complex heuristics, we:

  • Give the AI complete conversation context
  • Let it make editorial decisions (how many posts, what to write)
  • Use tool calling for structured output (see the sketch at the end of this page)
  • Keep the pipeline simple and composable

This results in simpler code and often better outcomes. The LLM knows what makes a good article; our job is to give it the right context.
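
For instance, the tool-calling step might look like the following sketch, which uses the google-generativeai SDK's automatic function calling; publish_post is a hypothetical tool, not Egregora's actual interface:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

def publish_post(title: str, body: str) -> str:
    """Hypothetical tool the model calls once per post it decides to write."""
    print(f"# {title}\n{body[:100]}...")
    return "published"

# Passing a plain Python function as a tool lets the SDK derive the schema.
model = genai.GenerativeModel("gemini-1.5-flash", tools=[publish_post])
chat = model.start_chat(enable_automatic_function_calling=True)
chat.send_message(
    "Here is today's anonymized transcript:\n...\n"
    "Write as many posts as the material supports, publishing each one."
)
```

Because the model decides how many times to call the tool, editorial choices like post count stay with the LLM rather than being hard-coded in the pipeline.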