Architecture
The chatbot follows the RAG (Retrieval-Augmented Generation) pattern:

- User asks a question about the Urantia Book
- Search the Urantia Papers API for relevant passages
- Retrieve context around the top results
- Generate an answer using an LLM with the retrieved passages as context
- Return the answer with source citations
Prerequisites
- Node.js 18+ or Python 3.10+
- An OpenAI API key (for the LLM)
- No Urantia API key needed (it’s free and open)
Step 1: Search for Relevant Passages
The search endpoint supports three match types via the `type` parameter:

- `and` — all words must appear (best for specific queries)
- `or` — any word can appear (best for broad exploration)
- `phrase` — exact phrase match (best for quoting)
Step 2: Get Surrounding Context
The context endpoint is critical for RAG quality. A single paragraph often lacks the full meaning — surrounding paragraphs provide narrative flow. The `window` parameter (1-10) controls how many paragraphs before and after the target are included.
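A sketch of the context fetch, assuming a `/context/{id}` path and a `paragraphs` response field (both are guesses at the API shape; the base URL is a placeholder):

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://api.example.com/urantia"  # placeholder; use the real API host


def clamp_window(window: int) -> int:
    """Keep the window within the API's documented 1-10 range."""
    return max(1, min(window, 10))


def get_context(passage_id: str, window: int = 3) -> list[dict]:
    """Fetch the target paragraph plus `window` paragraphs on each side."""
    url = (f"{BASE_URL}/context/{urllib.parse.quote(passage_id)}"
           f"?window={clamp_window(window)}")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["paragraphs"]  # assumed response field
```

A window of 3 is a reasonable default; bump it for topics that span longer narrative arcs.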
Step 3: Build the Prompt
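The prompt should instruct the model to answer only from the retrieved passages and to cite them. A minimal sketch — the `reference` and `text` field names are assumptions about the passage objects returned by the API:

```python
def build_prompt(question: str, passages: list[dict]) -> str:
    """Assemble retrieved passages into a grounded, citation-friendly prompt."""
    blocks = []
    for p in passages:
        ref = p.get("reference", "unknown")  # e.g. "110:3.4" (assumed field)
        blocks.append(f"[{ref}] {p.get('text', '')}")
    context = "\n\n".join(blocks)
    return (
        "Answer the question using ONLY the passages below from the "
        "Urantia Book. Cite passages by their bracketed reference. "
        "If the passages do not contain the answer, say so.\n\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )
```

Bracketed references make it easy to extract citations from the model's answer afterwards.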
Step 4: Generate the Answer
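A sketch of the generation call using OpenAI's Chat Completions REST endpoint directly (stdlib only, so no extra dependency; the model name is one choice among many):

```python
import json
import os
import urllib.request


def build_chat_payload(prompt: str, model: str = "gpt-4o-mini",
                       temperature: float = 0.3) -> dict:
    """Build the request body; low temperature favors factual accuracy."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system",
             "content": "You answer questions about the Urantia Book."},
            {"role": "user", "content": prompt},
        ],
    }


def generate_answer(prompt: str) -> str:
    """Call the Chat Completions API and return the answer text."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The official `openai` client package works equally well; the raw HTTP call is shown here to keep the sketch dependency-free.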
Step 5: Add Streaming (Optional)
For a better user experience, stream the response instead of returning it all at once.

Tips for Better Results
- Use `type: "and"` for specific questions and `type: "or"` for exploratory ones
- Increase the context window for complex topics — `window=5` gives broader narrative context
- Search with key terms from the Urantia Book’s vocabulary (e.g., “Thought Adjuster” instead of “inner spirit”)
- Filter by paper when you know the relevant section — use the `paperId` parameter
- Lower the temperature (0.2-0.4) for factual accuracy; raise it (0.6-0.8) for more creative explanations
Full Example (Python)
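The steps above combine into one script. As before, the API host, endpoint paths, and response field names are placeholders to adapt to the real API:

```python
"""End-to-end RAG sketch: search -> context -> prompt -> answer -> citations."""
import json
import os
import urllib.parse
import urllib.request

BASE_URL = "https://api.example.com/urantia"  # placeholder host


def get_json(url: str) -> dict:
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def format_sources(passages: list[dict]) -> str:
    """Deduplicate and list the references behind the answer."""
    refs = sorted({p["reference"] for p in passages})
    return "Sources: " + ", ".join(refs)


def ask(question: str, window: int = 3) -> str:
    # 1. Search for relevant passages (keep the top 3 hits)
    params = urllib.parse.urlencode({"q": question, "type": "and"})
    hits = get_json(f"{BASE_URL}/search?{params}")["results"][:3]

    # 2. Expand each hit with surrounding paragraphs
    passages = []
    for hit in hits:
        pid = urllib.parse.quote(str(hit["id"]))  # assumed field name
        passages += get_json(f"{BASE_URL}/context/{pid}?window={window}")["paragraphs"]

    # 3. Build a grounded prompt with bracketed citations
    context = "\n\n".join(f"[{p['reference']}] {p['text']}" for p in passages)
    prompt = (
        "Answer using ONLY these Urantia Book passages, citing the bracketed "
        f"references.\n\n{context}\n\nQuestion: {question}\nAnswer:"
    )

    # 4. Generate the answer (low temperature for factual accuracy)
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": "gpt-4o-mini",
            "temperature": 0.3,
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)["choices"][0]["message"]["content"]

    # 5. Return the answer with source citations
    return f"{answer}\n\n{format_sources(passages)}"


# Usage (requires network access and OPENAI_API_KEY):
#   print(ask("What does the Urantia Book say about the Thought Adjuster?"))
```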
Next Steps
- Add conversation history for multi-turn chat
- Implement a web UI with React or Next.js
- Add audio playback for cited passages using the `/audio` endpoint
- Deploy as a Telegram or Discord bot
AI Agent Integration Guide
See the full recommended workflow for AI agent integration.