# LLMAdapter

LLM adapter interface — generate() for structured output, generateText() for plain text.
The LLMAdapter interface abstracts LLM integration so you can use any language model with the molroo SDK. The SDK builds the system prompt and passes it to your adapter; the adapter is responsible only for sending the prompt to the LLM and returning the result.
```typescript
import type { LLMAdapter } from '@molroo-ai/sdk';
```

## Interface
```typescript
interface LLMAdapter {
  /**
   * Generate structured output from a prompt.
   * Used by chat() for response + appraisal extraction.
   */
  generate(prompt: LLMPrompt, schema: object, message: string): Promise<any>;

  /**
   * Generate plain text from a prompt.
   * Used for reflections and other free-form text generation.
   */
  generateText(prompt: LLMPrompt, message: string): Promise<string>;
}
```

## LLMPrompt
The prompt object passed to both generate() and generateText():
```typescript
interface LLMPrompt {
  /** System prompt built by the SDK (persona identity, personality, world context) */
  system: string;
  /** Additional context block (who is present, recent events, memories) */
  context?: string;
  /** Additional instructions */
  instruction?: string;
  /** Conversation history */
  history?: ChatMessage[];
}

interface ChatMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
}
```

## Methods
### generate(prompt, schema, message)
Generate structured output matching the provided JSON schema. The SDK calls this during chat() to get both a response text and an appraisal vector from the LLM.
| Parameter | Type | Description |
|---|---|---|
| `prompt` | `LLMPrompt` | System prompt, context, and history built by the SDK |
| `schema` | `object` | JSON Schema or Zod schema describing the expected output shape |
| `message` | `string` | The user's message text |
Returns: Parsed structured output matching the schema. For chat, this includes:

```typescript
{
  response: string; // The character's response text
  appraisal: {
    goal_relevance: number;     // [-1, 1]
    goal_congruence: number;    // [-1, 1]
    expectedness: number;       // [0, 1]
    controllability: number;    // [0, 1]
    agency: number;             // [-1, 1]
    norm_compatibility: number; // [-1, 1]
  };
}
```
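For illustration, a Zod schema matching this shape might look like the sketch below. The actual LLMResponseSchema is supplied by the SDK; this version is hypothetical.

```typescript
import { z } from 'zod';

// Hypothetical sketch of a schema matching the chat output shape above.
// The SDK passes its own LLMResponseSchema to generate(); this is for
// illustration only.
const ChatOutputSchema = z.object({
  response: z.string(),
  appraisal: z.object({
    goal_relevance: z.number().min(-1).max(1),
    goal_congruence: z.number().min(-1).max(1),
    expectedness: z.number().min(0).max(1),
    controllability: z.number().min(0).max(1),
    agency: z.number().min(-1).max(1),
    norm_compatibility: z.number().min(-1).max(1),
  }),
});
```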
### generateText(prompt, message)
Generate plain text output. The SDK calls this for reflections (when a MemoryAdapter with saveReflection is configured).
| Parameter | Type | Description |
|---|---|---|
| `prompt` | `LLMPrompt` | System prompt and context |
| `message` | `string` | The prompt text for generation |
Returns: `string`, the generated text.
## How the SDK Uses the Adapter
The LLM adapter works with both MolrooWorld and MolrooPersona; the flow is essentially the same:
MolrooWorld — `world.chat('Sera', 'Hello')`:

- Fetches world context via `getContext('Sera')` (spatial awareness, nearby entities, events)
- Builds a system prompt from persona config + world context
- Calls `adapter.generate(prompt, LLMResponseSchema, message)`
- Sends the LLM response + appraisal to the API for emotion computation
- Returns `ChatResult`
MolrooPersona — `persona.chat('Hello')`:

- Fetches persona state via `getState()` (emotion, mood, somatic)
- Builds a system prompt from persona config + state
- Calls `adapter.generate(prompt, LLMResponseSchema, message)`
- Sends the LLM response + appraisal to the API for emotion computation
- Returns `PersonaChatResult`
The adapter never needs to construct system prompts. The SDK handles all prompt engineering.
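To inspect what the SDK actually sends at each step, one option is to wrap any adapter in a thin logging decorator. This is a sketch; LoggingAdapter is not part of the SDK.

```typescript
import type { LLMAdapter, LLMPrompt } from '@molroo-ai/sdk';

// Sketch of a decorator that logs the prompts the SDK builds before
// delegating to a real adapter. LoggingAdapter is illustrative only.
class LoggingAdapter implements LLMAdapter {
  constructor(private inner: LLMAdapter) {}

  async generate(prompt: LLMPrompt, schema: object, message: string): Promise<any> {
    console.log('[system]', prompt.system);
    console.log('[context]', prompt.context ?? '(none)');
    console.log('[history]', prompt.history?.length ?? 0, 'messages');
    return this.inner.generate(prompt, schema, message);
  }

  async generateText(prompt: LLMPrompt, message: string): Promise<string> {
    console.log('[system]', prompt.system);
    return this.inner.generateText(prompt, message);
  }
}
```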
## Built-in Adapter: VercelAIAdapter
The @molroo-ai/adapter-llm package provides a ready-made adapter using the Vercel AI SDK:
```bash
npm install @molroo-ai/adapter-llm @ai-sdk/openai
```

```typescript
import { VercelAIAdapter } from '@molroo-ai/adapter-llm';
import { openai } from '@ai-sdk/openai';

const llm = new VercelAIAdapter({
  model: openai('gpt-4o-mini'),
});

// Works with both MolrooWorld and MolrooPersona
const world = await MolrooWorld.create(
  { baseUrl: 'https://api.molroo.io', apiKey: 'key', llm },
  { /* setup */ },
);

const persona = await MolrooPersona.create(
  { baseUrl: 'https://api.molroo.io', apiKey: 'key', llm },
  { /* config */ },
);
```

The VercelAIAdapter supports any Vercel AI SDK provider:
```typescript
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

// Anthropic
const llm = new VercelAIAdapter({ model: anthropic('claude-sonnet-4-20250514') });

// Google
const llm = new VercelAIAdapter({ model: google('gemini-2.0-flash') });
```

## Custom Adapter Implementation
You can implement the LLMAdapter interface for any LLM provider or custom pipeline:
```typescript
import type { LLMAdapter, LLMPrompt } from '@molroo-ai/sdk';

type Message = { role: 'user' | 'assistant' | 'system'; content: string };

class MyCustomAdapter implements LLMAdapter {
  async generate(prompt: LLMPrompt, schema: object, message: string): Promise<any> {
    // Build the messages array from the prompt
    const messages: Message[] = [
      { role: 'system', content: prompt.system },
    ];
    if (prompt.context) {
      messages.push({ role: 'system', content: prompt.context });
    }
    if (prompt.history) {
      messages.push(...prompt.history);
    }
    messages.push({ role: 'user', content: message });

    // Call your LLM with structured output
    const response = await myLLMCall(messages, {
      responseFormat: schema, // JSON schema for structured output
    });
    return JSON.parse(response);
  }

  async generateText(prompt: LLMPrompt, message: string): Promise<string> {
    const messages: Message[] = [
      { role: 'system', content: prompt.system },
    ];
    if (prompt.context) {
      messages.push({ role: 'system', content: prompt.context });
    }
    messages.push({ role: 'user', content: message });
    return myLLMCall(messages);
  }
}
```
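Once implemented, a custom adapter plugs in exactly like the built-in one, for example (reusing the MyCustomAdapter class above):

```typescript
// Pass the custom adapter as the llm option, same as VercelAIAdapter
const llm = new MyCustomAdapter();

const persona = await MolrooPersona.create(
  { baseUrl: 'https://api.molroo.io', apiKey: 'key', llm },
  { /* config */ },
);
```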
## Emotion-Only Mode (No LLM)
If you do not provide an LLMAdapter, you can still use the SDK in emotion-only mode by providing manual appraisal values:
```typescript
// World: provide appraisal in chat options
const world = await MolrooWorld.create(
  { baseUrl: 'https://api.molroo.io', apiKey: 'key' }, // no llm
  { /* setup */ },
);

const result = await world.chat('Sera', 'Hello', {
  appraisal: { goal_relevance: 0.5, goal_congruence: 0.8, /* ... */ },
});

// Persona: use perceive() with appraisal
const persona = await MolrooPersona.create(
  { baseUrl: 'https://api.molroo.io', apiKey: 'key' }, // no llm
  { /* config */ },
);

const response = await persona.perceive('Hello', {
  appraisal: { goal_relevance: 0.5, goal_congruence: 0.8, /* ... */ },
});
```

Calling `world.chat()` or `persona.chat()` without an LLM adapter and without manual appraisal throws a `MolrooApiError` with code `LLM_NOT_CONFIGURED`.
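A sketch of catching that error and falling back to manual appraisal. It assumes MolrooApiError is importable from `@molroo-ai/sdk`; check the SDK's error exports.

```typescript
import { MolrooApiError } from '@molroo-ai/sdk'; // assumed export location

try {
  await world.chat('Sera', 'Hello'); // no llm configured, no appraisal given
} catch (err) {
  if (err instanceof MolrooApiError && err.code === 'LLM_NOT_CONFIGURED') {
    // Fall back to supplying appraisal values manually
    await world.chat('Sera', 'Hello', {
      appraisal: { goal_relevance: 0.5, goal_congruence: 0.8, /* ... */ },
    });
  }
}
```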