# LLM Adapter
Configure LLM integration: Vercel AI, custom adapters, or emotion-only mode.
molroo supports three modes for LLM integration: the built-in Vercel AI adapter, a custom adapter, or no LLM at all. You choose the mode that fits your architecture.
## Three modes
| Mode | When to use |
|---|---|
| Vercel AI adapter | Fastest path. Supports OpenAI, Anthropic, Google, and any Vercel AI-compatible provider. |
| Custom adapter | You have a non-standard LLM API, need custom preprocessing, or want full control. |
| Emotion-only | You already generate dialogue elsewhere (game engine, scripted content) and only need emotion computation. |
## Vercel AI adapter (recommended)

The `@molroo-ai/adapter-llm` package provides `VercelAIAdapter`, which wraps any Vercel AI SDK model.
```bash
npm install @molroo-ai/adapter-llm
```

### OpenAI

```ts
import { VercelAIAdapter } from '@molroo-ai/adapter-llm';
import { openai } from '@ai-sdk/openai';

const llm = new VercelAIAdapter({ model: openai('gpt-4o-mini') });
```

### Anthropic

```ts
import { VercelAIAdapter } from '@molroo-ai/adapter-llm';
import { anthropic } from '@ai-sdk/anthropic';

const llm = new VercelAIAdapter({ model: anthropic('claude-sonnet-4-20250514') });
```

### Google

```ts
import { VercelAIAdapter } from '@molroo-ai/adapter-llm';
import { google } from '@ai-sdk/google';

const llm = new VercelAIAdapter({ model: google('gemini-2.0-flash') });
```

Then pass the adapter when creating or connecting:
```ts
// With MolrooWorld (multi-entity)
const world = await MolrooWorld.create(
  { baseUrl: 'https://api.molroo.io', apiKey: 'your-api-key', llm },
  worldConfig,
);

// With MolrooPersona (single character)
const persona = await MolrooPersona.create(
  { baseUrl: 'https://api.molroo.io', apiKey: 'your-api-key', llm },
  personaConfig,
);
```

You must pass an `LLMAdapter` instance to the SDK, not a plain config object. The SDK requires an adapter with `generate()` and `generateText()` methods.
## LLMAdapter interface

If you need to build a custom adapter, implement the `LLMAdapter` interface:
```ts
interface LLMAdapter {
  generate(prompt: LLMPrompt, schema: object, message: string): Promise<any>;
  generateText(prompt: LLMPrompt, message: string): Promise<string>;
}
```

| Method | Purpose |
|---|---|
| `generate()` | Returns structured output matching the provided JSON schema. Used for chat (response + appraisal). |
| `generateText()` | Returns plain text. Used for auxiliary generation tasks. |
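For unit tests, any object that satisfies this shape works. Below is a minimal stub adapter that returns canned values instead of calling a real LLM; it is a sketch, not part of the SDK, and the interfaces are redeclared locally so the snippet is self-contained (in a real project, import them from `@molroo-ai/sdk`). It also assumes the chat schema has `response` and `appraisal` fields, as described in the chat flow below.

```ts
// Redeclared locally for a self-contained sketch; normally imported
// from '@molroo-ai/sdk'.
interface LLMPrompt {
  system: string;
}

interface LLMAdapter {
  generate(prompt: LLMPrompt, schema: object, message: string): Promise<any>;
  generateText(prompt: LLMPrompt, message: string): Promise<string>;
}

// Hypothetical stub adapter for tests — no network calls, canned output.
class StubAdapter implements LLMAdapter {
  async generate(_prompt: LLMPrompt, _schema: object, message: string) {
    // A real adapter must return JSON matching `schema`; this stub assumes
    // the chat schema contains `response` and `appraisal` fields.
    return {
      response: `echo: ${message}`,
      appraisal: { goal_relevance: 0.5, goal_congruence: 0.5 },
    };
  }

  async generateText(_prompt: LLMPrompt, message: string) {
    return `echo: ${message}`;
  }
}
```

A stub like this lets you exercise the rest of your integration deterministically and without LLM costs.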
## LLMPrompt
The LLMPrompt object is built by the SDK and contains the full system prompt with world context, persona state, and instructions. Your adapter receives this and passes it to the LLM.
```ts
interface LLMPrompt {
  system: string; // System prompt built by SDK
  // ...additional context fields
}
```

The SDK builds the system prompt from world context. The adapter only passes it to the LLM -- it does not create or modify the prompt.
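Because the adapter is the only code in your application that sees the finished prompt, a thin wrapper around an existing adapter is a convenient place to inspect it while debugging. The wrapper class below is illustrative, not an SDK export, and the interfaces are redeclared locally to keep the sketch self-contained:

```ts
// Redeclared locally for a self-contained sketch; normally imported
// from '@molroo-ai/sdk'.
interface LLMPrompt {
  system: string;
}

interface LLMAdapter {
  generate(prompt: LLMPrompt, schema: object, message: string): Promise<any>;
  generateText(prompt: LLMPrompt, message: string): Promise<string>;
}

// Hypothetical debugging wrapper: logs the SDK-built system prompt,
// then delegates to the inner adapter unchanged.
class LoggingAdapter implements LLMAdapter {
  constructor(private inner: LLMAdapter) {}

  async generate(prompt: LLMPrompt, schema: object, message: string) {
    console.log('[llm] system prompt:', prompt.system.slice(0, 200));
    return this.inner.generate(prompt, schema, message);
  }

  async generateText(prompt: LLMPrompt, message: string) {
    console.log('[llm] system prompt:', prompt.system.slice(0, 200));
    return this.inner.generateText(prompt, message);
  }
}
```

Wrap any adapter, e.g. `new LoggingAdapter(new VercelAIAdapter({ model: openai('gpt-4o-mini') }))`, and pass the wrapper as `llm`.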
## Custom adapter example
Here is a skeleton for a custom adapter that calls a hypothetical LLM API:
```ts
import type { LLMAdapter, LLMPrompt } from '@molroo-ai/sdk';

class MyAdapter implements LLMAdapter {
  private apiKey: string;

  constructor(apiKey: string) {
    this.apiKey = apiKey;
  }

  async generate(prompt: LLMPrompt, schema: object, message: string) {
    const response = await fetch('https://my-llm-api.com/v1/generate', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        system: prompt.system,
        user: message,
        response_format: { type: 'json_schema', schema },
      }),
    });
    const data = await response.json();
    // Must return structured output matching the schema
    return JSON.parse(data.content);
  }

  async generateText(prompt: LLMPrompt, message: string) {
    const response = await fetch('https://my-llm-api.com/v1/generate', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        system: prompt.system,
        user: message,
      }),
    });
    const data = await response.json();
    return data.content;
  }
}

// Works with both MolrooWorld and MolrooPersona
const llm = new MyAdapter('my-api-key');

const world = await MolrooWorld.create(
  { baseUrl: 'https://api.molroo.io', apiKey: 'mk_live_xxxxx', llm },
  worldConfig,
);

const persona = await MolrooPersona.create(
  { baseUrl: 'https://api.molroo.io', apiKey: 'mk_live_xxxxx', llm },
  personaConfig,
);
```

## Emotion-only mode (no LLM)
If you already have a dialogue system and only need molroo for emotion computation, skip the LLM entirely by providing manual appraisal values:
```ts
// No llm parameter — works with both World and Persona
const world = await MolrooWorld.create(
  { baseUrl: 'https://api.molroo.io', apiKey: 'your-api-key' },
  worldConfig,
);

// World: provide appraisal in chat options
const result = await world.chat('Sera', 'Hello', {
  appraisal: {
    goal_relevance: 0.5,
    goal_congruence: 0.8,
    expectedness: 0.6,
    controllability: 0.5,
    agency: 0,
    norm_compatibility: 0.7,
  },
});

console.log(result.response.emotion.discrete); // { primary: 'contentment', ... }

// Persona: use perceive() with appraisal
const persona = await MolrooPersona.create(
  { baseUrl: 'https://api.molroo.io', apiKey: 'your-api-key' },
  personaConfig,
);

const response = await persona.perceive('Hello', {
  appraisal: {
    goal_relevance: 0.5,
    goal_congruence: 0.8,
    expectedness: 0.6,
    controllability: 0.5,
    agency: 0,
    norm_compatibility: 0.7,
  },
});

console.log(response.emotion.discrete); // { primary: 'contentment', ... }
```

This mode is particularly useful for:
- Game engines that handle dialogue through branching scripts
- Chatbot platforms that already generate text and want to add emotional depth
- Research applications that need deterministic emotion computation
- Testing the emotion pipeline without LLM costs
## Chat flow
Understanding the full chat flow helps when debugging or building custom adapters. Both MolrooWorld and MolrooPersona follow the same pattern:
```
1. world.chat('Sera', 'Hello!') / persona.chat('Hello!')
   │
2. SDK calls API for context
   → World: /context endpoint (spatial awareness, events, knowledge)
   → Persona: /state endpoint (emotion, mood, narrative)
   │
3. SDK builds system prompt from context + persona config
   │
4. SDK calls llm.generate(prompt, zodSchema, message)
   → adapter sends prompt + message to LLM
   → LLM returns structured JSON (response text + appraisal)
   │
5. SDK calls API with appraisal
   → engine computes emotion
   → returns full AgentResponse
   │
6. SDK returns ChatResult / PersonaChatResult to your code
```

Each chat call involves exactly two external calls: one to your LLM provider and one to the molroo API. The SDK orchestrates both. This is the same for both World and Persona.
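The steps above can be sketched as a small orchestration function with the external services injected as parameters. All names here (`fetchContext`, `generate`, `submitAppraisal`, `chatFlow`) are illustrative stand-ins, not actual SDK internals:

```ts
type Appraisal = Record<string, number>;

// Hypothetical dependency bundle standing in for the two external services.
interface FlowDeps {
  // molroo API: /context (World) or /state (Persona)
  fetchContext: () => Promise<{ system: string }>;
  // LLM adapter: structured generation (response text + appraisal)
  generate: (system: string, message: string) =>
    Promise<{ response: string; appraisal: Appraisal }>;
  // molroo API: engine computes emotion from the appraisal
  submitAppraisal: (a: Appraisal) => Promise<{ primary: string }>;
}

// Sketch of the per-message orchestration described in the flow above.
async function chatFlow(deps: FlowDeps, message: string) {
  const ctx = await deps.fetchContext();                     // steps 2–3
  const out = await deps.generate(ctx.system, message);      // step 4
  const emotion = await deps.submitAppraisal(out.appraisal); // step 5
  return { text: out.response, emotion };                    // step 6
}
```

Seeing the flow this way makes it clear where to add logging, retries, or timeouts: each `await` is a boundary you control through your adapter or API configuration.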