MotteMB - Memory Bank

Intelligent agent memory

MotteMB provides sophisticated memory management using vector embeddings for context-aware agent memory and retrieval.

Overview

MotteMB implements a memory management system that allows agents to store, retrieve, and reason about past interactions. Using vector embeddings and semantic search, agents can surface relevant historical context to improve decision-making and maintain coherent long-term behavior.

Key Features

Semantic Search

Find relevant memories using natural language queries with vector similarity search.

Vector Embeddings

Store memories as high-dimensional vectors for efficient similarity matching and retrieval.

Automatic Categorization

Memories are automatically categorized by type and context for better organization.

Bulk Operations

Import/export memories in various formats and perform batch operations efficiently.

Getting Started

1. Create Your First Memory

Start by adding some context or knowledge that your agents can reference:

1. Navigate to MotteMB
2. Click "Create Memory"
3. Enter content:

"Our company policy states that refunds are available 
within 30 days of purchase for digital products, and 
60 days for physical products. Customers need to provide 
proof of purchase and reason for return."

2. Search and Retrieve

Use natural language to find relevant memories:

Example Searches:

  • "What is our refund policy?"
  • "How long do customers have to return items?"
  • "Digital product return requirements"

3. Import Existing Data

Bulk import your existing knowledge base or documentation:

Supported Formats:

  • JSON: Structured data with metadata
  • CSV: Tabular data with content and categories
  • TXT: Plain text, one memory per line
  • JSONL: JSON Lines format for large datasets
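For large datasets, JSONL is the most practical format: one JSON object per line, streamable without loading the whole file. The sketch below prepares a JSONL import file; the "content" and "metadata" field names are assumed to mirror the Store Memory payload shape shown later, not a documented import schema.

```python
import json

# Hypothetical records mirroring the Store Memory payload shape
# ("content" plus optional "metadata"); field names are assumptions.
records = [
    {"content": "Refunds: 30 days for digital products.",
     "metadata": {"category": "policy", "source": "handbook"}},
    {"content": "Refunds: 60 days for physical products.",
     "metadata": {"category": "policy", "source": "handbook"}},
]

# JSONL: one JSON object per line, suited to large datasets.
with open("memories.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Each line parses back to an independent object.
with open("memories.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
```

Because each line stands alone, a malformed record breaks only that line rather than the whole import.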

Advanced Features

Vector Similarity Search

MotteMB uses OpenAI's text-embedding-3-large model to create vector representations of memories. This enables semantic search that understands context and meaning, not just keyword matching.

How it works:

Query: "customer return policy"
      ↓
Vector Embedding (3072 dimensions)
      ↓
Similarity Search (cosine similarity)
      ↓
Ranked Results (similarity > 0.7)
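The ranking step above can be sketched in plain Python. The tiny 3-dimensional vectors stand in for real embeddings, but the mechanics are the same: score every memory against the query with cosine similarity, sort descending, and keep only results above the 0.7 threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" standing in for the real vectors.
query_vec = [1.0, 0.2, 0.0]
memories = {
    "refund policy": [0.9, 0.3, 0.1],
    "office hours": [0.0, 0.1, 1.0],
}

# Rank all memories by similarity to the query, highest first.
results = sorted(
    ((name, cosine_similarity(query_vec, vec)) for name, vec in memories.items()),
    key=lambda item: item[1],
    reverse=True,
)

# Keep only results above the similarity threshold.
matches = [(name, score) for name, score in results if score > 0.7]
```

Here "refund policy" points in nearly the same direction as the query and passes the threshold, while "office hours" is nearly orthogonal and is filtered out.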

Memory Optimization

Keep your memory bank efficient with built-in optimization tools:

Duplicate Detection

Automatically identify and remove duplicate or near-duplicate memories.
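One common way to detect near-duplicates is to compare memory embeddings pairwise and drop any memory whose vector is almost identical to one already kept. The sketch below illustrates that idea with toy vectors; the 0.95 cutoff is an assumed value, not a documented MotteMB setting.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embeddings; the second entry is a near-duplicate of the first.
memories = [
    ("refund window is 30 days", [0.9, 0.1, 0.0]),
    ("refunds allowed within 30 days", [0.88, 0.12, 0.01]),
    ("office opens at 9am", [0.0, 0.2, 0.95]),
]

DUPLICATE_THRESHOLD = 0.95  # assumed cutoff for "near-duplicate"

# Greedy pass: keep a memory only if it is not too similar
# to anything already kept.
kept = []
for text, vec in memories:
    if all(cosine_similarity(vec, kvec) < DUPLICATE_THRESHOLD
           for _, kvec in kept):
        kept.append((text, vec))
```

The two refund memories collapse to one; the unrelated memory survives.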

Storage Optimization

Compress and optimize vector storage for better performance.

Index Rebuilding

Rebuild search indexes for optimal query performance.

API Reference

Store Memory

POST /api/memory/store
{
  "content": "Memory content here",
  "metadata": {
    "category": "policy",
    "source": "handbook",
    "priority": "high"
  }
}

Stores a new memory with optional metadata for categorization.
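A minimal client call might look like the following Python sketch, using only the standard library. The base URL is a placeholder; substitute your MotteMB host.

```python
import json
import urllib.request

BASE_URL = "https://example.com"  # placeholder; substitute your MotteMB host

# Payload matching the Store Memory request body.
payload = {
    "content": "Refunds are available within 30 days of purchase for digital products.",
    "metadata": {"category": "policy", "source": "handbook", "priority": "high"},
}

request = urllib.request.Request(
    BASE_URL + "/api/memory/store",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# To actually send the request:
# response = urllib.request.urlopen(request)
```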

Search Memories

POST /api/memory/search
{
  "query": "refund policy for digital products",
  "limit": 5,
  "threshold": 0.7
}

Searches for relevant memories using semantic similarity.
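The limit and threshold parameters shape the ranked hits: threshold discards weak matches, limit caps how many survive. The sketch below applies both to a hypothetical response; the "results" and "similarity" field names are assumptions, not documented MotteMB fields.

```python
# Hypothetical response payload; "results"/"similarity" field names
# are assumptions, not documented MotteMB fields.
response = {
    "results": [
        {"content": "Refunds: 30 days for digital products.", "similarity": 0.91},
        {"content": "Refunds: 60 days for physical products.", "similarity": 0.84},
        {"content": "Shipping takes 3-5 business days.", "similarity": 0.42},
    ]
}

THRESHOLD = 0.7  # matches the "threshold" request parameter
LIMIT = 5        # matches the "limit" request parameter

# Client-side view of what threshold and limit do to the ranked hits.
hits = [r for r in response["results"] if r["similarity"] >= THRESHOLD][:LIMIT]
```

The shipping memory falls below the 0.7 threshold and is dropped; both refund memories survive.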

Import Memories

POST /api/memory/import
Content-Type: multipart/form-data

file: [uploaded file]

Bulk import memories from JSON, JSONL, CSV, or plain-text files.

Best Practices

Descriptive Content

Write clear, descriptive memories that include context and relevant keywords.

Use Metadata

Add metadata like categories, sources, and priorities to improve organization.

Regular Optimization

Run optimization regularly to maintain search performance and remove duplicates.

Backup Regularly

Export your memories regularly as backups and for version control.