Give your product’s AI agent the ability to interact with your tenants’ connected apps. Membrane provides the tools — your agent uses them to read data, create records, send messages, and more across any connected app.

Getting Started

  1. Complete the Quickstart: Product Integrations guide.
  2. Ask your coding agent to build the agent tooling:
“Give our AI assistant tools from our tenants’ connected apps using Membrane. Use static tools loaded at session start.”
If you want to learn how it works under the hood — read on.

Approaches

Static Tools

Load a fixed set of tools from the tenant’s connections at the start of the session. Best when you know what the agent needs upfront.

When to use:
  • The toolset is small and predictable
  • You want to leverage LLM context caching
  • Tools don’t change during the session
How it works:
  1. List the tenant’s connections.
  2. Get available actions for the relevant connections.
  3. Pass the actions as tools to your LLM.
// Get tools from a tenant's HubSpot connection
const actions = await membrane.connection(connectionId).actions.find()

// Convert to LLM tool format
const tools = actions.items.map((action) => ({
  name: action.key,
  description: action.name,
  parameters: action.inputSchema,
}))
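For the agent to actually call these tools, each one also needs an execute handler that runs the action through Membrane. Here is a minimal, framework-agnostic sketch: the `MembraneAction` shape is an assumption that mirrors the fields used above, and the injected `run` callback is where you would call `membrane.action(action.key).run(args)` in a real integration.

```typescript
// Shape of a Membrane action as used in the snippet above (an assumption;
// check the SDK's types for the authoritative definition)
type MembraneAction = {
  key: string
  name: string
  inputSchema: Record<string, unknown>
}

// Build a tool map keyed by action key. The runner is injected so this works
// with any LLM framework; in practice it would wrap
// membrane.action(action.key).run(args).
function buildToolMap(
  actions: MembraneAction[],
  run: (key: string, args: Record<string, unknown>) => Promise<unknown>,
) {
  return Object.fromEntries(
    actions.map((action) => [
      action.key,
      {
        description: action.name,
        parameters: action.inputSchema,
        execute: (args: Record<string, unknown>) => run(action.key, args),
      },
    ]),
  )
}
```

With the Vercel AI SDK, each entry in this map would be wrapped in tool() and passed as tools, as the end-to-end example below shows.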

Dynamic Tools

Let the agent discover and load tools on demand during a conversation. Best for large tool catalogs or when tools depend on context.

When to use:
  • Hundreds or thousands of possible tools
  • Tools depend on user intent or conversation context
  • Users may connect new apps mid-session
How it works: Give your agent two meta-tools:
  1. Search actions — find relevant tools by intent
  2. Run action — execute a discovered tool
// Agent searches for relevant tools
const results = await membrane.actions.search({
  query: 'create a contact in CRM',
  connectionId,
})

// Agent runs the selected action
const result = await membrane.action(results[0].id).run({
  email: 'jane@example.com',
  firstName: 'Jane',
})
The agent dynamically discovers what it can do instead of loading everything upfront.
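The two meta-tools can be packaged as a pair your agent framework exposes. Below is a sketch with the Membrane calls injected so the logic stays framework-agnostic: searchActions would wrap membrane.actions.search and runAction would wrap membrane.action(id).run from the snippet above. The exact fields on a search result are an assumption.

```typescript
// Assumed shape of one search result (verify against the SDK's types)
type ActionSearchResult = { id: string; name: string }

// The client is injected: wire `search` to membrane.actions.search({ query })
// and `run` to membrane.action(actionId).run(input) in a real integration.
function makeMetaTools(client: {
  search: (query: string) => Promise<ActionSearchResult[]>
  run: (actionId: string, input: Record<string, unknown>) => Promise<unknown>
}) {
  return {
    searchActions: {
      description: 'Find actions in connected apps that match an intent',
      execute: (args: { query: string }) => client.search(args.query),
    },
    runAction: {
      description: 'Execute a previously discovered action by id',
      execute: (args: { actionId: string; input: Record<string, unknown> }) =>
        client.run(args.actionId, args.input),
    },
  }
}
```

The agent first calls searchActions with its intent, then passes the chosen result's id to runAction, so only the tools it actually needs ever enter the context window.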

End-to-End Example

Here’s a complete example showing a per-tenant AI agent using the Vercel AI SDK. Each customer gets their own Membrane token, which scopes all operations to their connections.
import { generateText, tool } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import jwt from 'jsonwebtoken'
import { MembraneClient } from '@membranehq/sdk'
import { z } from 'zod'

// 1. Generate a tenant-scoped token (see Authentication docs)
function createMembraneClient(tenantId: string, tenantName: string) {
  const token = jwt.sign(
    { workspaceKey: process.env.MEMBRANE_WORKSPACE_KEY!, tenantKey: tenantId, name: tenantName },
    process.env.MEMBRANE_WORKSPACE_SECRET!,
    { expiresIn: 7200, algorithm: 'HS512' },
  )
  // All operations through this client are scoped to the tenant
  return new MembraneClient({ token })
}

// 2. Build tools from tenant's connections
async function handleChat(tenantId: string, tenantName: string, userMessage: string) {
  const membrane = createMembraneClient(tenantId, tenantName)

  const { text } = await generateText({
    model: anthropic('claude-sonnet-4-20250514'),
    tools: {
      listDeals: tool({
        description: 'List recent deals from the CRM',
        parameters: z.object({}),
        execute: async () => {
          // Runs against this tenant's connected CRM
          const result = await membrane.action('list-deals').run()
          return result
        },
      }),
      createContact: tool({
        description: 'Create a new contact in the CRM',
        parameters: z.object({
          email: z.string(),
          name: z.string(),
        }),
        execute: async ({ email, name }) => {
          const result = await membrane.action('create-contact').run({ email, name })
          return result
        },
      }),
    },
    prompt: userMessage,
  })
  return text
}
This pattern works with any LLM framework (OpenAI SDK, LangChain, etc.) — see the agent-skills repository for framework-specific adapters.