SDK Reference

Persona

Client for creating and interacting with emotion-aware AI personas.

MolrooPersona

MolrooPersona is the SDK class for interacting with the molroo emotion engine. Each instance represents a single AI character with persistent emotional state.

Quick start

The recommended way to create a persona is via the unified Molroo client:

import { Molroo } from '@molroo-io/sdk';
import { createOpenAI } from '@ai-sdk/openai';

const molroo = new Molroo({ apiKey: 'mk_live_xxx' });

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY! });

const persona = await molroo.createPersona(
  {
    identity: { name: 'Sera', role: 'barista', speakingStyle: 'warm and casual' },
    personality: { O: 0.7, C: 0.6, E: 0.8, A: 0.9, N: 0.3, H: 0.8 },
  },
  { llm: openai('gpt-4o-mini') },
);

// Chat with external history management
let history: Message[] = [];
const result = await persona.chat('Good morning! What do you recommend?', { history });
console.log(result.text);                      // LLM-generated response
console.log(result.response.emotion);          // { V: 0.65, A: 0.42, D: 0.38 }
history = result.updatedHistory;               // Save for next call

Properties

| Property | Type | Description |
| --- | --- | --- |
| `id` | `string` | Unique persona instance ID |
| `personaId` | `string` | Alias for `id` |

Static Factory Methods

MolrooPersona.create(config, input)

Create a new persona on the API and return a connected instance. Accepts either a natural language description (requires llm) or explicit PersonaConfigData.

// From description (requires llm)
const persona = await MolrooPersona.create(
  { apiKey: 'mk_live_xxx', llm: openai('gpt-4o-mini') },
  'A kind and curious barista who remembers customer names',
);

// From explicit config
const persona = await MolrooPersona.create(
  { apiKey: 'mk_live_xxx', llm: openai('gpt-4o-mini') },
  {
    identity: { name: 'Sera', role: 'barista' },
    personality: { O: 0.7, C: 0.6, E: 0.8, A: 0.9, N: 0.3, H: 0.8 },
  },
);

| Parameter | Type | Description |
| --- | --- | --- |
| `config.baseUrl` | `string?` | API base URL. Default: `https://api.molroo.io` |
| `config.apiKey` | `string` | API key |
| `config.llm` | `LLMInput` | LLM adapter or Vercel AI SDK provider for `chat()`. Required when using a description string. |
| `config.engineLlm` | `LLMInput` | Optional separate LLM for appraisal (split mode) |
| `config.memory` | `MemoryAdapter` | Optional memory adapter for advanced features |
| `config.recall` | `RecallLimits` | Optional recall limits when using a `MemoryAdapter` |
| `config.events` | `EventAdapter` | Optional event emission adapter |
| `input` | `string \| PersonaConfigData` | Natural language description or explicit persona config |

Returns: Promise<MolrooPersona>

MolrooPersona.connect(config, personaId)

Connect to an existing persona by ID. Verifies the persona exists.

const persona = await MolrooPersona.connect(
  { apiKey: 'mk_live_xxx', llm: openai('gpt-4o-mini') },
  'persona_abc123',
);

Returns: Promise<MolrooPersona>

MolrooPersona.listPersonas(config)

List all personas for the authenticated tenant.

const { personas } = await MolrooPersona.listPersonas({
  apiKey: 'mk_live_xxx',
});

Returns: Promise<{ personas: PersonaSummary[]; nextCursor?: string }>
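A common follow-up is to pick a persona from the listing and reconnect to it. A minimal sketch, assuming `PersonaSummary` exposes `id` and `name` fields (check the actual type before relying on this):

```typescript
// Sketch: reconnect to a persona found in the listing.
// Assumes PersonaSummary has `id` and `name` fields (hypothetical).
const { personas } = await MolrooPersona.listPersonas({ apiKey: 'mk_live_xxx' });

const sera = personas.find((p) => p.name === 'Sera');
if (sera) {
  const persona = await MolrooPersona.connect(
    { apiKey: 'mk_live_xxx', llm: openai('gpt-4o-mini') },
    sera.id,
  );
}
```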

Chat Methods

persona.chat(message, options?)

High-level chat with LLM integration. The SDK:

  1. Calls getPromptContext() to get the server-assembled system prompt
  2. Optionally recalls memories from MemoryAdapter (if configured)
  3. Sends to the LLM for text generation + appraisal
  4. Sends the appraisal to the API via perceive() for emotion computation
  5. Returns PersonaChatResult with updatedHistory

Requires an LLM. Without one, use perceive() directly.

let history: Message[] = [];

const result = await persona.chat('How are you feeling today?', {
  from: 'Alex',
  history,
});

console.log(result.text);                        // LLM-generated text
console.log(result.response.emotion);            // { V: 0.6, A: 0.4, D: 0.3 }
console.log(result.state?.mood);                 // slow-moving baseline mood
console.log(result.updatedHistory);              // updated conversation history

// Save history for next call
history = result.updatedHistory;

| Parameter | Type | Description |
| --- | --- | --- |
| `message` | `string` | User message |
| `options.from` | `string \| InterlocutorContext` | Source entity -- a name string or structured context with description/extensions injected into the system prompt. Default: `'user'` |
| `options.history` | `Message[]?` | Conversation history for LLM context. Manage externally using `updatedHistory`. |
| `options.consumerSuffix` | `string?` | Extra text appended to the system prompt. Use for app-specific context only -- see consumerSuffix guidelines. |
| `options.onToolCall` | `function?` | Callback invoked when the LLM requests a tool call during generation |

Returns: Promise<PersonaChatResult>

interface PersonaChatResult {
  /** LLM-generated response text. */
  text: string;
  /** Emotion engine response with VAD, discrete emotion, and side effects. */
  response: AgentResponse;
  /** Persona state at the time of interaction (if available). */
  state?: PersonaState;
  /** Updated conversation history including this turn. Manage externally. */
  updatedHistory: Message[];
}

See PersonaChatResult and ChatOptions for full type details.
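Because history is managed by your application, long sessions can grow without bound. A minimal trimming helper, assuming a simple `{ role, content }` message shape (the SDK's real `Message` type may carry more fields):

```typescript
// Hypothetical minimal Message shape -- the SDK's actual type may differ.
interface Message {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

// Keep only the most recent `max` messages so LLM context stays bounded.
function trimHistory(history: Message[], max = 40): Message[] {
  return history.length > max ? history.slice(-max) : history;
}
```

Apply it to `result.updatedHistory` before saving. Trimming only shrinks the LLM context; the persona's emotional state lives server-side and is unaffected.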

consumerSuffix guidelines

The server-assembled system prompt (via getPromptContext()) already includes identity, personality, emotion state, mood, and StyleProfile constraints. Do not duplicate these in consumerSuffix.

Do include (app-specific context the SDK doesn't know about):

  • Relationship details between user and persona
  • Example messages for tone matching
  • Time gap context ("returning after 3 hours")
  • Game/app state (inventory, quest progress, scene description)
  • Custom behavioral rules specific to your app

Do not include (already in the server prompt):

  • Persona name, role, or speaking style (from identity)
  • Personality trait descriptions (from personality)
  • Current emotion or mood state (from live engine state)
  • StyleProfile constraints (from extractStyleProfile())

Duplicating server prompt content wastes tokens and can confuse the LLM with contradictory instructions.
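Putting the guidelines together, a sketch of a well-scoped suffix (the relationship and scene details are illustrative):

```typescript
const result = await persona.chat("I'm back!", {
  from: 'Alex',
  history,
  // App-specific context only -- identity, personality, emotion, and
  // style are already in the server-assembled prompt.
  consumerSuffix: [
    'Alex is a regular customer you have served many times.',
    'Alex is returning after a 3-hour gap.',
    'Scene: the cafe is crowded; the espresso machine is broken.',
  ].join('\n'),
});
```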

Combined vs. Split mode

In combined mode (default), a single LLM call returns both response text and appraisal. The emotion is computed after the response.

In split mode (when engineLlm is provided), the appraisal is generated first by engineLlm, the engine updates the emotion, then llm generates the response with the updated emotional state. This means the response reflects the emotion after processing the current message.
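Split mode is enabled simply by passing `engineLlm` at creation time. A sketch -- the model split shown here (a small model for appraisal, a larger one for responses) is one plausible choice, not a requirement:

```typescript
// Split mode: engineLlm computes the appraisal first, the engine updates
// the emotion, then llm generates the reply from the updated state.
const persona = await MolrooPersona.create(
  {
    apiKey: 'mk_live_xxx',
    llm: openai('gpt-4o'),            // response generation
    engineLlm: openai('gpt-4o-mini'), // appraisal only
  },
  {
    identity: { name: 'Sera', role: 'barista' },
    personality: { O: 0.7, C: 0.6, E: 0.8, A: 0.9, N: 0.3, H: 0.8 },
  },
);
```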

Interlocutor context

Pass structured information about the conversation partner via from:

const result = await persona.chat('Hello traveler!', {
  from: {
    name: 'Kim',
    description: 'A weary traveler who just arrived at the tavern.',
    extensions: {
      inventory: 'sword, healing potion',
      quest: 'Find the lost artifact of Eldoria',
    },
  },
  history,
});
history = result.updatedHistory;

When from is an InterlocutorContext object, the SDK automatically injects the description and extensions into the system prompt. The name field is used as the source entity for memory recall and emotion processing.

persona.perceive(message, options?)

Low-level emotion processing without LLM. Send a message (or appraisal) directly to the emotion engine.

const response = await persona.perceive('You did a great job!', {
  from: 'Alex',
  appraisal: {
    goal_relevance: 0.8,
    goal_congruence: 0.9,
    expectedness: 0.3,
    controllability: 0.2,
    agency: -0.7,
    norm_compatibility: 0.9,
    internal_standards: 0.8,
    adjustment_potential: 0.7,
    urgency: 0.1,
  },
});

console.log(response.emotion); // { V: 0.7, A: 0.5, D: 0.3 }

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `message` | `string` | -- | Stimulus message |
| `options.from` | `string \| InterlocutorContext` | `'user'` | Source entity -- a name string or structured `InterlocutorContext` |
| `options.appraisal` | `AppraisalVector` | -- | Pre-computed appraisal vector for emotion computation |
| `options.type` | `string?` | `'chat_message'` | Event type identifier (e.g., `'news_event'`, `'environment_change'`) |
| `options.stimulus` | `object?` | -- | Low-level state overrides. See stimulus. |
| `options.payload` | `Record<string, unknown>?` | -- | Extra event data merged into the event payload alongside the message |
| `options.priorEpisodes` | `Episode[]` | -- | Context memories for appraisal-aware processing |
| `options.relationshipContext` | `{ trust, familiarity }?` | -- | Relationship info between source entity and persona for appraisal bias |
| `options.skipMemory` | `boolean?` | `false` | Skip saving the generated memory `Episode` to the memory adapter |

Returns: Promise<AgentResponse>

See PerceiveOptions and AgentResponse for full type details.

persona.event(type, description, options)

Convenience wrapper around perceive() for dispatching typed events. Unlike perceive(), the event type is the first argument and appraisal is required.

await persona.event('breaking_news', 'A major earthquake struck the region.', {
  appraisal: {
    goal_relevance: 0.7,
    goal_congruence: -0.6,
    expectedness: 0.1,
    controllability: 0.0,
    agency: -1,
    norm_compatibility: 0,
    internal_standards: 0,
    adjustment_potential: 0.2,
    urgency: 0.9,
  },
});

| Parameter | Type | Description |
| --- | --- | --- |
| `type` | `string` | Event type name (e.g., `'breaking_news'`, `'weather_change'`, `'gift_received'`) |
| `description` | `string` | Event description (passed as the stimulus message to `perceive()`) |
| `options` | `PerceiveOptions & { appraisal: AppraisalVector }` | Same as `PerceiveOptions`, but `appraisal` is required |

Returns: Promise<AgentResponse>

Example (environment event):

await persona.event('weather_change', 'A sudden thunderstorm began.', {
  appraisal: {
    goal_relevance: 0.3,
    goal_congruence: -0.2,
    expectedness: 0.2,
    controllability: 0.0,
    agency: -1,
    norm_compatibility: 0,
    internal_standards: 0,
    adjustment_potential: 0.6,
    urgency: 0.4,
  },
  stimulus: {
    bodyBudgetDelta: -0.05,
  },
});

Time Methods

persona.tick(seconds)

Advance persona time. Triggers mood decay, body budget recovery, and internal processing.

await persona.tick(3600); // Advance 1 hour

Returns: Promise<{ pendingEvents?: unknown[] }>
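For example, to model an offline gap between sessions (the drift described is the documented decay behavior; exact values depend on the persona):

```typescript
// Sketch: simulate an overnight gap before the next session.
await persona.tick(8 * 3600); // 8 hours: mood decays, body budget recovers

const state = await persona.getState();
console.log(state.mood); // drifted back toward the persona's baseline
```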

persona.setEmotion(vad)

Directly override the persona's emotion state in VAD space.

await persona.setEmotion({ V: -0.5, A: 0.8 }); // Set to anxious state

| Parameter | Type | Description |
| --- | --- | --- |
| `vad.V` | `number?` | Valence (-1 to +1) |
| `vad.A` | `number?` | Arousal (0 to 1) |
| `vad.D` | `number?` | Dominance (-1 to +1) |

State Methods

persona.getState()

Get the current emotional and psychological state.

const state = await persona.getState();
console.log(state.emotion);    // { V: 0.3, A: 0.2, D: 0.1 }
console.log(state.mood);       // slow-changing mood baseline
console.log(state.somatic);    // somatic marker descriptions

Returns: Promise<PersonaState>

persona.getPromptContext(suffix?, source?)

Get the server-assembled system prompt built from the persona's live emotional state. Used internally by chat(), but available for custom LLM integrations.

const ctx = await persona.getPromptContext('Be extra helpful today.', 'Alex');
console.log(ctx.systemPrompt);    // Full system prompt with emotion context
console.log(ctx.personaPrompt);   // Raw persona prompt data
console.log(ctx.tools);           // Available tools (if configured)

| Parameter | Type | Description |
| --- | --- | --- |
| `suffix` | `string?` | Extra text appended to the system prompt (consumer suffix) |
| `source` | `string?` | Source entity name (used for relationship-aware prompt context) |

Returns: Promise<{ systemPrompt: string; personaPrompt: Record<string, unknown>; tools?: Array<Record<string, unknown>> }>
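For a custom integration outside `chat()`, a sketch using the Vercel AI SDK's `generateText` (note this path skips step 4 of `chat()`, so the persona's emotion is not updated unless you also call `perceive()`):

```typescript
import { generateText } from 'ai';

// Fetch the server-assembled prompt, then drive your own LLM call.
const ctx = await persona.getPromptContext(undefined, 'Alex');

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  system: ctx.systemPrompt,
  messages: [{ role: 'user', content: 'Good morning!' }],
});
// Unlike chat(), this does not update the persona's emotion --
// call persona.perceive() separately if the state should evolve.
```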

System prompt structure

The server assembles the system prompt from the persona's live state. Understanding this structure helps you avoid duplicating content in consumerSuffix.

┌─────────────────────────────────────────────────┐
│ Identity                                        │
│  Name, role, core values, speaking style,       │
│  description, extensions                        │
├─────────────────────────────────────────────────┤
│ Behavioral Instructions                         │
│  "Stay in character", "Embody your              │
│  psychological state", language matching        │
├─────────────────────────────────────────────────┤
│ ## Current Psychological State                  │
│  • Emotion (VAD → natural language)    always   │
│  • Mood (slow-changing baseline)       if set   │
│  • Somatic markers (body sensations)   if set   │
│  • Goals & motivations                 if set   │
│  • Narrative arc & self-perception     if set   │
│  • Memory context (past interactions)  if source│
├─────────────────────────────────────────────────┤
│ ## Expression Style            if StyleProfile  │
│  • Message length constraints                   │
│  • Formality level                              │
│  • Emoji/laugh/punctuation guidance             │
│  • Signature expressions & sentence endings     │
│  • Forbidden patterns                           │
│  • Emotion-driven modulation reasons            │
├─────────────────────────────────────────────────┤
│ Consumer Suffix                     if provided │
│  (your app-specific context goes here)          │
└─────────────────────────────────────────────────┘

Sections marked "if set" are included only when the corresponding state is non-neutral. This keeps the prompt focused and reduces token usage.

persona.getSnapshot()

Get a full snapshot of the persona's internal state for backup.

const snapshot = await persona.getSnapshot();

Returns: Promise<PersonaSnapshot>

persona.putSnapshot(snapshot)

Restore the persona's state from a previously saved snapshot.

await persona.putSnapshot(savedSnapshot);
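A typical round-trip, with a placeholder storage layer (`saveToStore` and `loadFromStore` are your own functions, not part of the SDK):

```typescript
// Back up the persona's full internal state...
const snapshot = await persona.getSnapshot();
await saveToStore(`molroo:snapshot:${persona.id}`, snapshot); // hypothetical helper

// ...and restore it later, e.g. after an unwanted state change.
const saved = await loadFromStore(`molroo:snapshot:${persona.id}`); // hypothetical helper
await persona.putSnapshot(saved);
```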

Style Methods

persona.extractStyleProfile(corpus, options?)

Extract a speaking style profile from sample text and save it to the persona. Extraction runs server-side. Once set, style constraints are automatically included in the system prompt and modulated by the persona's current emotion.

const profile = await persona.extractStyleProfile([
  'omg that latte art is gorgeous!!',
  'haha yeah i always double-shot my espressos~',
  // ... 20-30 messages recommended
]);

| Parameter | Type | Description |
| --- | --- | --- |
| `corpus` | `string[]` | Array of sample messages from the target speaking style |
| `options.timestamps` | `number[]?` | Message timestamps for temporal analysis |
| `options.otherMessages` | `string[]?` | Other people's messages for contrast analysis |

Returns: Promise<StyleProfile> — the extracted profile (also auto-saved to the persona)

persona.setStyleProfile(profile)

Manually set a previously extracted StyleProfile. Useful when reusing a profile across multiple personas.

await persona.setStyleProfile(profile);

| Parameter | Type | Description |
| --- | --- | --- |
| `profile` | `StyleProfile` | A `StyleProfile` object |

See the Speaking Style guide for full details.

Config Methods

persona.patch(updates)

Update the persona's configuration (identity, personality).

await persona.patch({
  config: {
    identity: { name: 'Sera', role: 'head barista' },
    personality: { O: 0.7, C: 0.7, E: 0.8, A: 0.9, N: 0.3, H: 0.8 },
  },
});

Lifecycle Methods

persona.destroy()

Soft-delete this persona. Can be restored with restore().

await persona.destroy();

persona.restore()

Restore a previously soft-deleted persona.

await persona.restore();

PersonaConfigData

Configuration data for creating or updating a persona. See Configuration Types for full details including Identity, PersonalityTraits, and Goal types.

interface PersonaConfigData {
  personality?: PersonalityTraits;   // 6-factor: { O, C, E, A, N, H }
  identity?: Identity;               // name, role, speakingStyle, coreValues
  goals?: Goal[];                    // persona goals and motivations
  [key: string]: unknown;            // extensible
}

Personality factors:

| Key | Factor | Range |
| --- | --- | --- |
| `O` | Openness to Experience | 0 - 1 |
| `C` | Conscientiousness | 0 - 1 |
| `E` | Extraversion | 0 - 1 |
| `A` | Agreeableness | 0 - 1 |
| `N` | Neuroticism | 0 - 1 |
| `H` | Honesty-Humility | 0 - 1 |
