Our product is distributed as a set of Docker images that can be deployed in your infrastructure.

Docker Registry Access

Contact our support team to receive access credentials to our Docker registry (harbor.getmembrane.com). You’ll receive a username in the format robot$<your-company-name> and a password.
docker login harbor.getmembrane.com

Image Versioning

Images are tagged with :latest and date-based immutable tags (e.g., 2026-01-14). For production deployments, we recommend using the immutable tags: harbor.getmembrane.com/core/api:2026-01-14.
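For example, a production pull pinned to an immutable tag might look like this (the tag and the robot account name are illustrative; use the credentials and tag you were given):

```shell
# Log in once with the robot account credentials provided by support.
# Single quotes prevent the shell from expanding the $ in the username.
docker login harbor.getmembrane.com -u 'robot$your-company-name'

# Pull a date-pinned (immutable) tag rather than :latest.
docker pull harbor.getmembrane.com/core/api:2026-01-14
```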

Infrastructure Requirements

Membrane requires:
  • Cloud storage (AWS S3, Azure Blob Storage, or Google Cloud Storage)
  • MongoDB server
  • Redis server
  • Auth0 for authentication (free tier sufficient)
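For a quick local evaluation, the MongoDB and Redis dependencies can be sketched with Docker Compose. This is illustrative only (the image versions are not a Membrane requirement); production deployments should use managed or replicated services:

```yaml
# Local-evaluation sketch of the MongoDB and Redis dependencies.
services:
  mongo:
    image: mongo:7
    ports: ["27017:27017"]
  redis:
    image: redis:7
    ports: ["6379:6379"]
```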

Auth0 Configuration

When configuring your Auth0 application:
  • Application Type: Single-page Application
  • Allowed Callback URLs: Base URL of your console service
  • Allowed Web Origins: Base URL of your console service

Core Services

Membrane consists of four essential services:

API Service

The primary engine API that stores and executes integrations.

Docker Image: harbor.getmembrane.com/core/api

The API service operates in four distinct modes, each activated by specific environment variables:

1. API Mode

Main backend service handling incoming traffic and processing HTTP requests.

Mode-specific Environment Variables:
| Variable | Example | Description |
| --- | --- | --- |
| IS_API | 1 | Enables API mode |
| HEADERS_TIMEOUT_MS | 61000 | Maximum time to receive request headers |
| KEEPALIVE_TIMEOUT_MS | 61000 | Maximum time to keep idle connections alive |
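A minimal sketch of starting the API service in API mode (the tag, container name, and port mapping are illustrative; the common environment variables described later in this page are also required and omitted here for brevity):

```shell
# Sketch: run the engine image in API mode.
docker run -d --name membrane-api \
  -e IS_API=1 \
  -e HEADERS_TIMEOUT_MS=61000 \
  -e KEEPALIVE_TIMEOUT_MS=61000 \
  -p 5000:5000 \
  harbor.getmembrane.com/core/api:2026-01-14
```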

2. Instant Tasks Worker Mode

Designed for executing semi-instant asynchronous tasks. This mode should be scaled out to prevent task queuing. Each worker processes one background job at a time.

Mode-specific Environment Variables:
| Variable | Example | Description |
| --- | --- | --- |
| IS_INSTANT_TASKS_WORKER | 1 | Enables instant tasks worker mode |

3. Queued Tasks Worker Mode

Handles long-running tasks such as flow runs, event pulls, and external events. Each worker processes one background job at a time. When limits are enabled, tasks are queued and executed so as to ensure fair resource distribution among tenants.

Mode-specific Environment Variables:
| Variable | Example | Description |
| --- | --- | --- |
| IS_QUEUED_TASKS_WORKER | 1 | Enables queued tasks worker mode |
| MAX_QUEUED_TASKS_MEMORY_MB | 1024 | Memory limit for task execution (default: 1024) |
| MAX_QUEUED_TASKS_PROCESS_TIME_SECONDS | 3000 | Time limit for task execution (default: 3000) |

4. Orchestrator Mode

Manages schedule triggers, handles data source syncs, and performs cleanup tasks.

Mode-specific Environment Variables:
| Variable | Example | Description |
| --- | --- | --- |
| IS_ORCHESTRATOR | 1 | Enables orchestrator mode |
All services scale horizontally without additional configuration (aside from load balancing for API services).
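As a sketch, the four modes can be declared as four services sharing a single image and differing only in their mode flag. The tag is illustrative, and the common environment variables required by every mode are omitted for brevity:

```yaml
# Sketch: one engine image, four deployment modes.
services:
  api:
    image: harbor.getmembrane.com/core/api:2026-01-14
    environment: { IS_API: "1" }
  instant-tasks-worker:
    image: harbor.getmembrane.com/core/api:2026-01-14
    environment: { IS_INSTANT_TASKS_WORKER: "1" }
  queued-tasks-worker:
    image: harbor.getmembrane.com/core/api:2026-01-14
    environment: { IS_QUEUED_TASKS_WORKER: "1" }
  orchestrator:
    image: harbor.getmembrane.com/core/api:2026-01-14
    environment: { IS_ORCHESTRATOR: "1" }
```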
Common Environment Variables

The following variables are required for all operation modes:
| Variable | Description | Example |
| --- | --- | --- |
| NODE_ENV | Environment mode | production |
| BASE_URI | Service deployment URL | https://api.yourdomain.com |
| BASE_WEBHOOKS_URI | Optional: Override webhook endpoint base URL. Use this when webhooks must be accessible at a different URL than the main API (e.g., when the API is private but webhooks need a public proxy). Works the same way as BASE_URI, but only for webhook endpoints. | https://webhooks-proxy.yourdomain.com |
| BASE_OAUTH_CALLBACK_URI | Optional: Override OAuth callback endpoint base URL. Use this when OAuth callbacks must be accessible at a different URL than the main API (e.g., when the API is private but OAuth callbacks need a public proxy). Works the same way as BASE_URI, but only for OAuth callback endpoints. | https://oauth-proxy.yourdomain.com |
| BASE_URI_INTERNAL | Internal service deployment URI (used for calls from workers to the service) | https://api.yourdomain.com |
| CUSTOM_CODE_RUNNER_URI | Custom Code Runner service URL | https://custom-code-runner.yourdomain.com |
| AUTH0_DOMAIN | Auth0 domain for authentication | login.integration.yourdomain.com |
| AUTH0_CLIENT_ID | Auth0 client ID | clientId |
| AUTH0_CLIENT_SECRET | Auth0 client secret | clientSecret |
| TMP_STORAGE_BUCKET | Temporary storage bucket (an auto-expiration policy is recommended) | integration-app-tmp |
| CONNECTORS_STORAGE_BUCKET | Connectors storage bucket | integration-app-connectors |
| STATIC_STORAGE_BUCKET | Static files storage bucket | integration-app-static |
| BASE_STATIC_URI | Static content base URL (files uploaded to the static bucket should be served from this URL) | https://static.yourdomain.com |
| REDIS_URI or REDIS_CLUSTER_URI_X | URL of the Redis server. For a Redis cluster, provide multiple: REDIS_CLUSTER_URI_1, REDIS_CLUSTER_URI_2, … | redis://user:password@redis-service.com:6379/ |
| WORKER_REDIS_URI or WORKER_REDIS_CLUSTER_URI_X | Optional: URL of a dedicated Redis server for background worker tasks. This separates worker task execution from main Redis operations for better performance and isolation; if not provided, the main Redis server is used for both purposes. For Redis clusters, provide multiple: WORKER_REDIS_CLUSTER_URI_1, WORKER_REDIS_CLUSTER_URI_2, etc. | redis://user:password@redis-service.com:6379/ |
| SECRET | JWT token signing secret | s3cr3tString |
| ENCRYPTION_SECRET | Credentials encryption secret | v3rys3cr3tstring |
| MONGO_URI | MongoDB connection string | mongodb+srv://login:pass@mymongo.com/integration-api |
| PORT | Container listening port (default: 5000) | 5000 |
| AWS_REGION | AWS S3 region (when using S3 storage) | eu-central-1 |
| AWS_ACCESS_KEY_ID | AWS S3 access key (when using S3 storage) | |
| AWS_SECRET_ACCESS_KEY | AWS S3 secret key (when using S3 storage) | |
| STORAGE_PROVIDER | Storage provider type: s3 (AWS S3), abs (Azure Blob Storage), or gcs (Google Cloud Storage). Default: s3 | abs |
| AZURE_STORAGE_CONNECTION_STRING | Azure Blob Storage connection string (when using Azure storage) | |
| AZURE_STORAGE_ACCOUNT_NAME | Azure storage account name (alternative to connection string) | |
| AZURE_STORAGE_ACCOUNT_KEY | Azure storage account key (alternative to connection string) | |
| GOOGLE_CLOUD_PROJECT_ID | Google Cloud project ID (when using GCS storage) | |
| GOOGLE_CLOUD_KEYFILE | Path to Google Cloud service account key file (optional; uses Application Default Credentials if not provided) | |
| ENABLE_LIMITS | Optional: Enable workspace resource limits | true |
| CONNECTION_CREDENTIALS_STORAGE_TYPE | Optional: Storage type for connection credentials: database or external_api. Default: database | |
| CONNECTION_CREDENTIALS_EXTERNAL_API_ENDPOINT_URL | Required when external_api is chosen for connection credentials storage | https://api.yourdomain.com |
| MAX_NODE_RUN_OUTPUT_SIZE_MB | Optional: Maximum size in megabytes for node run outputs. Default: 20 | 50 |
| MONGO_MAX_IDLE_TIME_MS | Maximum time a connection can remain idle in the pool before being closed. Default: 0 (connections never time out) | 300000 |
| MONGO_MIN_POOL_SIZE | Minimum number of connections to maintain in the MongoDB connection pool. Default: 0 (no minimum) | 3 |
| MONGO_MAX_POOL_SIZE | Maximum number of connections allowed in the MongoDB connection pool. Default: 100 | 100 |
| MONGO_SERVER_SELECTION_TIMEOUT_MS | Maximum time in milliseconds to wait for MongoDB server selection. Useful in containerized environments with slower DNS resolution. Default: 30000 (MongoDB default) | 60000 |
| REDIS_CONNECT_TIMEOUT_MS | Maximum time in milliseconds to wait for a Redis connection. Default: 10000 (standalone), 60000 (cluster) | 60000 |
| REDIS_DISABLE_TLS_VERIFICATION | Set to true to disable TLS certificate verification for Redis connections. Required for AWS ElastiCache with in-transit encryption. | true |
| SKIP_HEALTH_CHECKS | Skip specific health checks during startup. Use all to skip all checks, or a comma-separated list of: mongo, redis, storage, custom_code. Useful for Kubernetes environments with slower initial DNS resolution. | all |
| HEALTH_CHECK_RETRIES | Number of retry attempts for health checks after initial failure. Health checks use exponential backoff with jitter between retries. Default: 3 | 5 |
| HEALTH_CHECK_RETRY_DELAY_MS | Initial delay between health check retries in milliseconds. Subsequent retries use exponential backoff (the delay doubles each retry). Default: 1000 | 2000 |
| HEALTH_CHECK_MAX_RETRY_DELAY_MS | Maximum delay between health check retries in milliseconds. Prevents the exponential backoff from growing too large. Default: 10000 | 30000 |
| SELF_HOSTING_TOKEN | Required for self-hosted deployments. Token for validating your self-hosted Membrane installation. Generate this token from your organization settings in the console. The token is validated against Membrane's cloud API on startup. | sht_abc123… |
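Pulling the required variables together, a minimal environment file for an S3-backed deployment might look like the following. Every value is a placeholder taken from the examples above; substitute your own:

```shell
# Illustrative .env for an API container using S3 storage.
NODE_ENV=production
BASE_URI=https://api.yourdomain.com
CUSTOM_CODE_RUNNER_URI=https://custom-code-runner.yourdomain.com
AUTH0_DOMAIN=login.integration.yourdomain.com
AUTH0_CLIENT_ID=clientId
AUTH0_CLIENT_SECRET=clientSecret
TMP_STORAGE_BUCKET=integration-app-tmp
CONNECTORS_STORAGE_BUCKET=integration-app-connectors
STATIC_STORAGE_BUCKET=integration-app-static
BASE_STATIC_URI=https://static.yourdomain.com
REDIS_URI=redis://user:password@redis-service.com:6379/
SECRET=s3cr3tString
ENCRYPTION_SECRET=v3rys3cr3tstring
MONGO_URI=mongodb+srv://login:pass@mymongo.com/integration-api
STORAGE_PROVIDER=s3
AWS_REGION=eu-central-1
```

Pass the file to a container with `docker run --env-file .env …`.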

UI Service

Provides pre-built integration user interfaces.

Docker Image: harbor.getmembrane.com/core/ui

Environment Variables:
| Variable | Description | Example |
| --- | --- | --- |
| NEXT_PUBLIC_ENGINE_URI | API service URL | https://api.yourdomain.com |
| BASE_PATH | Optional: URL path prefix for subpath deployments | /ui-app |
| PORT | Container listening port | 5000 |

Console Service

Administration interface for managing integrations.

Docker Image: harbor.getmembrane.com/core/console

Environment Variables:
| Variable | Description | Example |
| --- | --- | --- |
| NEXT_PUBLIC_BASE_URI | Console access URL | https://console.integration.yourdomain.com |
| NEXT_PUBLIC_AUTH0_DOMAIN | Auth0 domain | login.integration.yourdomain.com |
| NEXT_PUBLIC_ENGINE_API_URI | API service URL | https://api.integrations.yourdomain.com |
| NEXT_PUBLIC_ENGINE_UI_URI | UI service URL | https://ui.integrations.yourdomain.com |
| NEXT_PUBLIC_AUTH0_CLIENT_ID | Auth0 client ID | clientId |
| BASE_PATH | Optional: URL path prefix for subpath deployments | /console-app |
| PORT | Container listening port | 5000 |
| NEXT_PUBLIC_ENABLE_LIMITS | Optional: Enable limits management UI | true |

Subpath Deployments

The BASE_PATH environment variable enables deployment on a URL subpath (e.g., integrations.yourdomain.com/ui-app or integrations.yourdomain.com/console-app) rather than requiring a dedicated domain or subdomain. When set, it automatically adjusts all asset paths to work correctly under the specified path prefix.

Example: Setting BASE_PATH=/console-app allows the Console service to be accessed at https://integrations.yourdomain.com/console-app, with all resources loading from the correct paths.
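A sketch of such a subpath deployment (the tag and URLs are illustrative, and the remaining Console variables are omitted for brevity):

```shell
# Sketch: serve the Console under /console-app instead of a dedicated subdomain.
docker run -d \
  -e BASE_PATH=/console-app \
  -e NEXT_PUBLIC_BASE_URI=https://integrations.yourdomain.com/console-app \
  harbor.getmembrane.com/core/console:2026-01-14
# Other required Console variables (Auth0, engine URIs, etc.) omitted here.
```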

Custom Code Runner

Provides an isolated environment for executing custom code in connectors or middleware. This service should only be accessible internally from other services.

Docker Image: harbor.getmembrane.com/core/custom-code-runner

Note: On AMD64 (x86_64) architecture (not ARM), set the CUSTOM_CODE_MEMORY_LIMIT environment variable for the API service to at least 21474836480 (20 GB) to ensure sufficient virtual memory for WebAssembly. A physical memory allocation of 2 GB is typically sufficient.

Scaling Recommendations

Backend services emit custom metrics that help determine scaling conditions. The following table outlines these metrics:
| Metric | Emitted by | Endpoint | Description |
| --- | --- | --- | --- |
| instant_tasks_worker_utilization_rate | instant-tasks-worker | /prometheus/instant-tasks | Value (0.0–1.0) representing the fraction of time the instant tasks worker is actively processing tasks |
| custom_code_runner_total_job_spaces | custom-code-runner | /api/v2/prometheus | Total number of job spaces supported by a custom-code-runner pod |
| custom_code_runner_remaining_job_spaces | custom-code-runner | /api/v2/prometheus | Available number of job spaces in a custom-code-runner pod |
| queued_tasks_worker_utilization_rate | queued-tasks-worker | /prometheus/queued-tasks | Value (0.0–1.0) representing the fraction of time the queued tasks worker is actively processing tasks |
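Assuming the default listening port of 5000 and internal DNS names of your choosing (the hostnames below are placeholders), the endpoints above can be checked manually:

```shell
# Sketch: fetch the metrics listed in the table from inside the network.
curl -s http://instant-tasks-worker:5000/prometheus/instant-tasks \
  | grep instant_tasks_worker_utilization_rate
curl -s http://custom-code-runner:5000/api/v2/prometheus \
  | grep custom_code_runner_remaining_job_spaces
```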
All services scale horizontally. The following table outlines the scaling recommendations for each service for production workloads:
| Container | Scaling Approach | Recommended Values |
| --- | --- | --- |
| Console | Fixed number of instances to ensure availability and zero-downtime updates | Instances: 2 |
| UI | Fixed number of instances to ensure availability and zero-downtime updates | Instances: 2 |
| Orchestrator | Fixed number of instances to ensure availability and zero-downtime updates | Instances: 2 |
| API | Dynamic scaling based on resource usage. Monitor memory and CPU usage to determine scaling needs. | Threshold: 50%; Min instances: 2 |
| Instant Tasks Worker | Dynamic scaling based on worker utilization rate. Pseudocode: avg(instant_tasks_worker_utilization_rate) | Threshold: 0.75; Min instances: 2 |
| Queued Tasks Worker | Dynamic scaling based on worker utilization rate. Pseudocode: avg(queued_tasks_worker_utilization_rate) | Threshold: 0.85; Min instances: 2 |
| Custom Code Runner | Dynamic scaling based on job space availability. Monitor total and remaining job spaces and autoscale to maintain a target occupancy rate. Pseudocode: (custom_code_runner_total_job_spaces - custom_code_runner_remaining_job_spaces) / custom_code_runner_total_job_spaces | Threshold: 0.45; Min instances: 2 |
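On Kubernetes, one way to act on these recommendations is a HorizontalPodAutoscaler driven by the worker metric. This is only a sketch and assumes the metric is already exposed to the autoscaler through a Prometheus metrics adapter; the deployment name and maxReplicas are illustrative:

```yaml
# Sketch: HPA for the queued tasks worker, assuming a metrics adapter
# publishes queued_tasks_worker_utilization_rate as a per-pod metric.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queued-tasks-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queued-tasks-worker
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Pods
      pods:
        metric:
          name: queued_tasks_worker_utilization_rate
        target:
          type: AverageValue
          averageValue: "850m"  # scale out above 0.85 average utilization
```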

Connector Management

Automated Deployment

Use Membrane CLI to migrate connectors from cloud to self-hosted environments.

Manual Deployment

Upload connector .zip archives through the Console UI via Integrations > Apps > Upload Connector.

Troubleshooting

For enhanced debugging output, add DEBUG_ALL=1 to any container’s environment variables.
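For example (tag illustrative; other required variables omitted):

```shell
# Sketch: start the API container with verbose debug logging enabled.
docker run -d -e DEBUG_ALL=1 harbor.getmembrane.com/core/api:2026-01-14
```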

Helm


Guide for installing Membrane using Helm chart.

Cloud-specific Guides

AWS Self-Hosting

Guide for deploying Membrane on AWS.

Azure Self-Hosting

Guide for deploying Membrane on Azure.

GCP Self-Hosting

Guide for deploying Membrane on Google Cloud Platform.

Resource Limiting

To enable resource limits by workspace and tenant:
  1. Add ENABLE_LIMITS=1 to the API container
  2. Add NEXT_PUBLIC_ENABLE_LIMITS=1 to the Console container
This enables workspace managers to set tenant-level limits and platform administrators to configure workspace-level resource restrictions.
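The two steps above can be expressed as a Docker Compose fragment (illustrative; all other required variables are omitted):

```yaml
# Sketch: enable resource limits on the API and Console containers.
services:
  api:
    environment:
      ENABLE_LIMITS: "1"
  console:
    environment:
      NEXT_PUBLIC_ENABLE_LIMITS: "1"
```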

FAQ and Advanced Configuration

Self-Hosting Limitations

  • Managed OAuth Credentials: Auth-proxy (managed credentials) is not supported. Membrane provides managed OAuth credentials for some apps in the cloud version, but this feature is not available in self-hosted deployments.
  • Pre-built Connectors: The full library of pre-built connectors is not automatically available. You need to manually migrate connectors from cloud to self-hosted environments using the Membrane CLI.

Resource Requirements

  • Minimum Requirements: 500 millicores (0.5 CPU) and 2 GB of memory per container are sufficient for most deployments.

Data Persistence and Backups

  • MongoDB: Requires regular backups
  • S3 Storage: All buckets should be backed up regularly
  • Redis: Used only as a cache; can be safely rebooted or erased

Health Monitoring

  • HTTP Health Checks: The root endpoint of each service (e.g., https://api.yourdomain.com/) serves as a health check endpoint
  • Worker Health Checks: Workers and custom code runners also expose an HTTP server at their root endpoint for health monitoring
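Assuming the documented default port of 5000, a Kubernetes probe fragment against the root endpoint might look like this (a container-spec fragment; timings are illustrative):

```yaml
# Sketch: liveness/readiness probes against a service's root endpoint.
livenessProbe:
  httpGet:
    path: /
    port: 5000
  initialDelaySeconds: 30
readinessProbe:
  httpGet:
    path: /
    port: 5000
```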

Security

  • Containers are monitored daily, with the following remediation SLAs:
    • Critical issues: 1 business day
    • High severity issues: 3 business days
    • Other issues: 2 weeks

Logging and Error Handling

  • Log Format: Services log to stdout/stderr in plain text