# LLM Adapter

LLM integration — Vercel AI SDK providers, the `LLMAdapter` interface, or emotion-only mode.

The SDK accepts LLM configuration via the `LLMInput` type, which is a union of:

- Vercel AI SDK `LanguageModel` (recommended) — pass a provider instance directly
- `LLMAdapter` interface — implement the two-method interface for custom LLM pipelines

```ts
import type { LLMInput, LLMAdapter } from '@molroo-io/sdk';
```

## Vercel AI SDK providers (recommended)
The simplest approach is to pass a Vercel AI SDK provider instance directly. The SDK wraps it internally.
```ts
import { Molroo } from '@molroo-io/sdk';
import { createOpenAI } from '@ai-sdk/openai';

const molroo = new Molroo({ apiKey: 'mk_live_...' });
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY! });

const sera = await molroo.createPersona(personaConfig, {
  llm: openai('gpt-4o-mini'),
});
```

### Supported providers
Any provider compatible with the Vercel AI SDK `LanguageModel` interface works:

| Provider | Package | Example |
|---|---|---|
| OpenAI | `@ai-sdk/openai` | `createOpenAI({ apiKey })('gpt-4o')` |
| Anthropic | `@ai-sdk/anthropic` | `createAnthropic({ apiKey })('claude-sonnet-4-5-20250929')` |
| Google Vertex AI | `@ai-sdk/google-vertex` | `createVertex({ project })('gemini-2.0-flash')` |
| Any OpenAI-compatible | `@ai-sdk/openai` | `createOpenAI({ apiKey, baseURL })('model-name')` |
```ts
// OpenAI
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const sera = await molroo.createPersona(config, { llm: openai('gpt-4o-mini') });
```

```ts
// Anthropic
import { createAnthropic } from '@ai-sdk/anthropic';

const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });
const sera = await molroo.createPersona(config, { llm: anthropic('claude-sonnet-4-5-20250929') });
```

```ts
// Google Vertex AI
import { createVertex } from '@ai-sdk/google-vertex';

const vertex = createVertex({ project: 'my-gcp-project' });
const sera = await molroo.createPersona(config, { llm: vertex('gemini-2.0-flash') });
```

```ts
// OpenAI-compatible API (OpenRouter, Together, etc.)
const openrouter = createOpenAI({
  apiKey: process.env.OPENROUTER_API_KEY!,
  baseURL: 'https://openrouter.ai/api/v1',
});
const sera = await molroo.createPersona(config, { llm: openrouter('anthropic/claude-sonnet-4-5') });
```

The `ai` package is a required peer dependency. Install it alongside your chosen provider: `npm install ai @ai-sdk/openai`.
## LLMAdapter interface

For custom LLM pipelines, implement the `LLMAdapter` interface with two methods:
```ts
interface LLMAdapter {
  /**
   * Generate plain text from messages.
   * Used in split mode for the quality response.
   */
  generateText(options: GenerateTextOptions): Promise<{ text: string }>;

  /**
   * Generate structured output matching the provided Zod schema.
   * Used in combined mode for response + appraisal extraction.
   */
  generateObject<T>(options: GenerateObjectOptions<T>): Promise<{ object: T }>;
}
```

### Options types
```ts
interface Message {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

interface GenerateTextOptions {
  system?: string;
  messages: Message[];
  temperature?: number;
  maxTokens?: number;
}

interface GenerateObjectOptions<T> {
  system?: string;
  messages: Message[];
  schema: ZodSchema<T>;
  temperature?: number;
}
```

### Methods
#### `generateObject<T>(options)`

Generates structured output matching the provided Zod schema. The SDK calls this during `chat()` in combined mode to get both the response text and an appraisal vector from the LLM in a single call.
| Field | Type | Description |
|---|---|---|
| `system` | `string?` | System prompt built by the API (persona identity, emotional state, mood, somatic markers) |
| `messages` | `Message[]` | Conversation history + user message |
| `schema` | `ZodSchema<T>` | Zod schema describing the expected output shape |
| `temperature` | `number?` | Sampling temperature |

Returns `{ object: T }` -- the parsed structured output matching the schema. For chat, this is:
```ts
{
  response: string; // The character's response text
  appraisal: {
    goal_relevance: number;
    goal_congruence: number;
    expectedness: number;
    controllability: number;
    agency: number;
    norm_compatibility: number;
    internal_standards: number;
    adjustment_potential: number;
    urgency: number;
  };
}
```

#### `generateText(options)`
Generates plain text output. The SDK calls this in split mode to produce the quality response, after the appraisal has been computed by `engineLlm`.
| Field | Type | Description |
|---|---|---|
| `system` | `string?` | System prompt (includes the updated emotional state from the appraisal) |
| `messages` | `Message[]` | Conversation history + user message |
| `temperature` | `number?` | Sampling temperature |
| `maxTokens` | `number?` | Maximum tokens to generate |

Returns `{ text: string }` -- the generated text.
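As a sketch of this contract, a minimal in-memory adapter can satisfy both methods with canned values, which is handy in tests. Everything below beyond the two method signatures is illustrative: the `MockAdapter` name is hypothetical, and `schema` is typed as `unknown` so the snippet has no external dependencies.

```typescript
// Minimal in-memory adapter sketch. The option shapes mirror the interfaces
// above; the canned values are placeholders, not real model output.
type Message = { role: 'user' | 'assistant' | 'system'; content: string };

class MockAdapter {
  // A real adapter would call an LLM here; this stub echoes the last message.
  async generateText(options: { system?: string; messages: Message[] }): Promise<{ text: string }> {
    const last = options.messages[options.messages.length - 1];
    return { text: `echo: ${last.content}` };
  }

  // A real adapter would generate output conforming to the Zod schema;
  // this stub returns a fixed object shaped like the chat result above.
  async generateObject<T>(options: { system?: string; messages: Message[]; schema: unknown }): Promise<{ object: T }> {
    return { object: { response: 'ok', appraisal: { goal_relevance: 0.5 } } as unknown as T };
  }
}
```

Because the SDK only depends on these two methods, a stub like this can be passed wherever an `LLMAdapter` is accepted to exercise the rest of the pipeline without network calls.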
## How the SDK uses the adapter

`persona.chat('Hello')` orchestrates the following flow:
### Combined mode (default)

1. SDK calls the API: `POST /personas/:id/prompt-context`
   - API builds the system prompt from live emotional state
   - Returns `{ systemPrompt, llmSchema }`
2. SDK calls `llm.generateObject({ system, messages, schema })`
   - LLM returns `{ response, appraisal }`
3. SDK calls the API: `POST /personas/:id/perceive`
   - Engine computes emotion from the appraisal
   - Returns an `AgentResponse` with the new emotional state
4. SDK returns a `PersonaChatResult` (including `updatedHistory`)

### Split mode

1. SDK calls the API: `POST /personas/:id/prompt-context`
2. SDK calls `engineLlm.generateObject({ system, messages, schema })`
   - Cheap model returns the appraisal vector only
3. SDK calls the API: `POST /personas/:id/perceive`
   - Engine computes emotion, returns the updated state
4. SDK calls the API: `POST /personas/:id/prompt-context`
   - Gets a new system prompt reflecting the updated emotion
5. SDK calls `llm.generateText({ system, messages })`
   - Quality model returns the response text with emotional awareness
6. SDK returns a `PersonaChatResult` (including `updatedHistory`)

Each chat call involves 2 API calls (prompt-context + perceive) and 1 LLM call in combined mode, or 3 API calls (one additional prompt-context) and 2 LLM calls in split mode. The adapter never constructs system prompts -- the API handles all prompt engineering.
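The combined-mode sequence can be sketched with hypothetical stubs standing in for the API endpoints and the LLM. None of the function names below (`fetchPromptContext`, `callPerceive`, `chatCombined`) are part of the SDK; they only make the ordering of the three calls concrete.

```typescript
// Illustrative combined-mode orchestration. All stubs are hypothetical;
// the real SDK performs the equivalent calls internally.
const calls: string[] = []; // records the order of operations

async function fetchPromptContext(): Promise<{ systemPrompt: string }> {
  calls.push('prompt-context'); // stands in for POST /personas/:id/prompt-context
  return { systemPrompt: 'You are the persona. Current emotion: neutral.' };
}

const llm = {
  async generateObject(opts: { system: string; messages: { role: string; content: string }[] }) {
    calls.push('llm.generateObject'); // one LLM call returns response + appraisal
    return { object: { response: 'Hi there!', appraisal: { goal_relevance: 0.5 } } };
  },
};

async function callPerceive(appraisal: Record<string, number>): Promise<void> {
  calls.push('perceive'); // stands in for POST /personas/:id/perceive
}

async function chatCombined(userMessage: string): Promise<string> {
  const ctx = await fetchPromptContext();                      // step 1: build system prompt
  const { object } = await llm.generateObject({
    system: ctx.systemPrompt,
    messages: [{ role: 'user', content: userMessage }],
  });                                                          // step 2: response + appraisal
  await callPerceive(object.appraisal);                        // step 3: engine updates emotion
  return object.response;                                      // step 4: return the result
}
```

Split mode follows the same shape but inserts a second prompt-context fetch and a `generateText` call after the perceive step.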
## Custom adapter implementation

You can implement the `LLMAdapter` interface for any LLM provider or custom pipeline:
```ts
import type { LLMAdapter, GenerateTextOptions, GenerateObjectOptions } from '@molroo-io/sdk';
import { zodToJsonSchema } from 'zod-to-json-schema';

class MyCustomAdapter implements LLMAdapter {
  private apiKey: string;

  constructor(apiKey: string) {
    this.apiKey = apiKey;
  }

  async generateObject<T>(options: GenerateObjectOptions<T>): Promise<{ object: T }> {
    const jsonSchema = zodToJsonSchema(options.schema);
    const messages = options.system
      ? [{ role: 'system', content: options.system }, ...options.messages]
      : options.messages;

    const response = await fetch('https://my-llm-api.com/v1/generate', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        messages,
        response_format: { type: 'json_schema', schema: jsonSchema },
      }),
    });

    const data = await response.json();
    return { object: JSON.parse(data.content) as T };
  }

  async generateText(options: GenerateTextOptions): Promise<{ text: string }> {
    const messages = options.system
      ? [{ role: 'system', content: options.system }, ...options.messages]
      : options.messages;

    const response = await fetch('https://my-llm-api.com/v1/generate', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ messages }),
    });

    const data = await response.json();
    return { text: data.content };
  }
}
```

## Emotion-only mode (no LLM)
If you do not provide an LLM, you can still use the SDK in emotion-only mode by providing manual appraisal values:

```ts
const molroo = new Molroo({ apiKey: 'mk_live_...' });
const persona = await molroo.createPersona(personaConfig); // no llm

const response = await persona.perceive('Hello', {
  appraisal: {
    goal_relevance: 0.5,
    goal_congruence: 0.8,
    expectedness: 0.6,
    controllability: 0.5,
    agency: 0,
    norm_compatibility: 0.7,
    internal_standards: 0,
    adjustment_potential: 0.5,
    urgency: 0.5,
  },
});
```

Calling `persona.chat()` without an LLM throws a `MolrooApiError` with code `LLM_NOT_CONFIGURED`.
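When supplying manual appraisals, a small helper can merge partial overrides into a fixed baseline. This is a sketch, not part of the SDK: the baseline reuses the example values above, and clamping every dimension to [0, 1] is an assumption about the engine's expected range, not documented behavior.

```typescript
// Hypothetical helper: merges partial overrides into a baseline appraisal vector.
// The baseline mirrors the example above; the [0, 1] clamp is an assumption.
type Appraisal = {
  goal_relevance: number;
  goal_congruence: number;
  expectedness: number;
  controllability: number;
  agency: number;
  norm_compatibility: number;
  internal_standards: number;
  adjustment_potential: number;
  urgency: number;
};

const baseline: Appraisal = {
  goal_relevance: 0.5,
  goal_congruence: 0.8,
  expectedness: 0.6,
  controllability: 0.5,
  agency: 0,
  norm_compatibility: 0.7,
  internal_standards: 0,
  adjustment_potential: 0.5,
  urgency: 0.5,
};

const clamp = (x: number) => Math.min(1, Math.max(0, x));

function makeAppraisal(overrides: Partial<Appraisal> = {}): Appraisal {
  const merged = { ...baseline, ...overrides };
  // Clamp every dimension into [0, 1] before handing it to perceive()
  return Object.fromEntries(
    Object.entries(merged).map(([k, v]) => [k, clamp(v)]),
  ) as Appraisal;
}
```

A call like `persona.perceive('Hello', { appraisal: makeAppraisal({ urgency: 0.9 }) })` then only needs to name the dimensions that differ from the baseline.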