S3

The S3 extension persists Hocuspocus documents to S3-compatible storage. It supports Amazon S3 as well as S3-compatible services such as MinIO, DigitalOcean Spaces, and Wasabi. Documents are stored as binary objects in S3, so your collaborative documents survive server restarts and can be shared across server instances.

Installation

Install the S3 extension like this:

npm install @hocuspocus/extension-s3

Configuration

bucket (required)

The S3 bucket name where documents will be stored. The bucket must exist and be accessible with your credentials.

region

AWS region for the S3 service.

Default: us-east-1

prefix

Key prefix for document storage. All documents will be stored under this prefix.

Default: hocuspocus-documents/

credentials

AWS credentials object containing accessKeyId and secretAccessKey. If not provided, the extension will use the default AWS credential chain (environment variables, IAM roles, etc.).

endpoint

Custom S3 endpoint URL for S3-compatible services like MinIO or DigitalOcean Spaces.

forcePathStyle

Use path-style URLs instead of virtual hosted-style URLs. Required for MinIO and some S3-compatible services.

Default: false

s3Client

Custom S3Client instance. If provided, other connection options will be ignored.

Usage

Basic AWS S3 Setup

import { Server } from "@hocuspocus/server";
import { S3 } from "@hocuspocus/extension-s3";

const server = new Server({
  extensions: [
    new S3({
      bucket: 'my-documents-bucket',
      region: 'us-east-1',
      credentials: {
        accessKeyId: 'your-access-key',
        secretAccessKey: 'your-secret-key'
      }
    }),
  ],
});

server.listen();

Using Environment Variables

import { Server } from "@hocuspocus/server";
import { S3 } from "@hocuspocus/extension-s3";

// Set environment variables:
// AWS_ACCESS_KEY_ID=your-access-key
// AWS_SECRET_ACCESS_KEY=your-secret-key
// AWS_REGION=us-west-2

const server = new Server({
  extensions: [
    new S3({
      bucket: 'my-documents-bucket',
      // Credentials will be loaded from environment variables
    }),
  ],
});

server.listen();

Using IAM Roles (EC2/Lambda)

import { Server } from "@hocuspocus/server";
import { S3 } from "@hocuspocus/extension-s3";

const server = new Server({
  extensions: [
    new S3({
      bucket: 'my-documents-bucket',
      region: 'us-east-1',
      // No credentials needed when using IAM roles
    }),
  ],
});

server.listen();

MinIO Configuration

For local development with MinIO, set a custom endpoint and use path-style URLs. The extension enables path-style URLs automatically when an endpoint is provided, but you can also set forcePathStyle explicitly:

import { Server } from "@hocuspocus/server";
import { S3 } from "@hocuspocus/extension-s3";

const server = new Server({
  extensions: [
    new S3({
      bucket: 'hocuspocus-documents',
      endpoint: 'http://localhost:9000',
      forcePathStyle: true, // Required for MinIO
      credentials: {
        accessKeyId: 'minioadmin',
        secretAccessKey: 'minioadmin'
      }
    }),
  ],
});

server.listen();

DigitalOcean Spaces

import { Server } from "@hocuspocus/server";
import { S3 } from "@hocuspocus/extension-s3";

const server = new Server({
  extensions: [
    new S3({
      bucket: 'my-spaces-bucket',
      region: 'nyc3',
      endpoint: 'https://nyc3.digitaloceanspaces.com',
      credentials: {
        accessKeyId: 'your-spaces-key',
        secretAccessKey: 'your-spaces-secret'
      }
    }),
  ],
});

server.listen();

Custom S3 Client

import { Server } from "@hocuspocus/server";
import { S3 } from "@hocuspocus/extension-s3";
import { S3Client } from "@aws-sdk/client-s3";

const customS3Client = new S3Client({
  region: 'eu-west-1',
  credentials: {
    accessKeyId: 'your-access-key',
    secretAccessKey: 'your-secret-key'
  },
  // Add any custom S3Client configuration here
});

const server = new Server({
  extensions: [
    new S3({
      bucket: 'my-documents-bucket',
      s3Client: customS3Client
    }),
  ],
});

server.listen();

Document Storage

Documents are stored as binary files in S3 with the following naming convention:

  • Key: {prefix}{documentName}.bin
  • Content-Type: application/octet-stream

For example, a document named "my-document" with the default prefix would be stored at: hocuspocus-documents/my-document.bin
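
For reference, the object key for a given document can be derived like this (a minimal sketch; buildDocumentKey is a hypothetical helper shown for illustration, not part of the extension's API):

// Sketch: derive the S3 object key used for a document.
const buildDocumentKey = (documentName, prefix = 'hocuspocus-documents/') =>
  `${prefix}${documentName}.bin`;

buildDocumentKey('my-document');
// => 'hocuspocus-documents/my-document.bin'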

CLI Usage

You can also use the S3 extension with the Hocuspocus CLI:

# Basic usage
hocuspocus --s3 --s3-bucket my-documents

# With custom region and prefix
hocuspocus --s3 --s3-bucket my-docs --s3-region eu-west-1 --s3-prefix "collab/"

# MinIO setup (forcePathStyle automatically enabled)
hocuspocus --s3 --s3-bucket hocuspocus-documents --s3-endpoint http://localhost:9000

# Using environment variables
export S3_BUCKET=my-documents
export AWS_ACCESS_KEY_ID=your-key
export AWS_SECRET_ACCESS_KEY=your-secret
hocuspocus --s3

CLI Environment Variables

  • AWS_ACCESS_KEY_ID - AWS access key ID
  • AWS_SECRET_ACCESS_KEY - AWS secret access key
  • AWS_REGION - AWS region
  • S3_BUCKET - S3 bucket name

IAM Permissions

Your AWS credentials or IAM role needs the following permissions for the specified bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:HeadObject"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:HeadBucket"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name"
    }
  ]
}

Scaling with Redis

The S3 extension works seamlessly with the Redis extension for horizontal scaling. Redis handles real-time synchronization between server instances, while S3 provides persistent storage.

Basic S3 + Redis Setup

import { Server } from "@hocuspocus/server";
import { Logger } from "@hocuspocus/extension-logger";
import { Redis } from "@hocuspocus/extension-redis";
import { S3 } from "@hocuspocus/extension-s3";

// Server 1
const server1 = new Server({
  name: "server-1",
  port: 8001,
  extensions: [
    new Logger(),
    new Redis({
      host: "127.0.0.1",
      port: 6379,
    }),
    new S3({
      bucket: 'my-documents-bucket',
      region: 'us-east-1',
      credentials: {
        accessKeyId: process.env.AWS_ACCESS_KEY_ID,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
      }
    }),
  ],
});

server1.listen();

// Server 2
const server2 = new Server({
  name: "server-2",
  port: 8002,
  extensions: [
    new Logger(),
    new Redis({
      host: "127.0.0.1",
      port: 6379,
    }),
    new S3({
      bucket: 'my-documents-bucket',
      region: 'us-east-1',
      credentials: {
        accessKeyId: process.env.AWS_ACCESS_KEY_ID,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
      }
    }),
  ],
});

server2.listen();

MinIO + Redis for Local Development

import { Server } from "@hocuspocus/server";
import { Logger } from "@hocuspocus/extension-logger";
import { Redis } from "@hocuspocus/extension-redis";
import { S3 } from "@hocuspocus/extension-s3";

// Development server setup with MinIO and Redis
const createServer = (name, port) => new Server({
  name,
  port,
  extensions: [
    new Logger(),
    new Redis({
      host: "127.0.0.1",
      port: 6379,
    }),
    new S3({
      bucket: 'hocuspocus-documents',
      endpoint: 'http://localhost:9000',
      forcePathStyle: true,
      credentials: {
        accessKeyId: 'minioadmin',
        secretAccessKey: 'minioadmin'
      }
    }),
  ],
});

// Start multiple instances
createServer("dev-server-1", 8001).listen();
createServer("dev-server-2", 8002).listen();

Environment-based Configuration

import { Server } from "@hocuspocus/server";
import { Logger } from "@hocuspocus/extension-logger";
import { Redis } from "@hocuspocus/extension-redis";
import { S3 } from "@hocuspocus/extension-s3";

const server = new Server({
  name: process.env.SERVER_NAME || `server-${Math.random()}`,
  port: Number(process.env.PORT) || 8000,
  extensions: [
    new Logger(),
    new Redis({
      host: process.env.REDIS_HOST || "127.0.0.1",
      port: Number(process.env.REDIS_PORT) || 6379,
    }),
    new S3({
      bucket: process.env.S3_BUCKET,
      region: process.env.S3_REGION || 'us-east-1',
      endpoint: process.env.S3_ENDPOINT, // For MinIO
      forcePathStyle: Boolean(process.env.S3_ENDPOINT), // path-style URLs when a custom endpoint is set
      prefix: process.env.S3_PREFIX || 'hocuspocus-documents/',
      credentials: process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY ? {
        accessKeyId: process.env.AWS_ACCESS_KEY_ID,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
      } : undefined
    }),
  ],
});

server.listen();

Development Environment Setup

Quick Start

For local development with MinIO (S3-compatible storage), use the built-in development scripts:

# Set up complete development environment
npm run dev:setup

# Test S3 configuration
npm run dev:test-s3

# Start S3 playground examples
npm run playground:s3

Available Development Scripts

# Environment setup
npm run dev:setup         # Complete setup (.env + Docker services)
npm run dev:env           # Create .env file from template
npm run dev:services      # Start Docker services (Redis + MinIO)
npm run dev:services:down # Stop Docker services
npm run dev:services:reset # Reset services and data

# S3 configuration testing
npm run dev:test-s3        # Test complete S3 configuration
npm run dev:test-s3:minio  # Test MinIO connection only
npm run dev:test-s3:nodejs # Test Node.js S3 client only

# Playground examples
npm run playground:s3      # S3 extension examples (ports 8000-8003)
npm run playground:s3-redis # S3 + Redis scaling examples

Local MinIO Setup

The development environment includes a local MinIO instance as S3-compatible storage, started via npm run dev:services alongside Redis.

Default credentials for local development:

  • Access Key: minioadmin
  • Secret Key: minioadmin
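
Since the bucket must exist before documents can be stored, you may need to create it once against the local MinIO endpoint. A minimal sketch using the AWS SDK v3 with the local defaults shown above (run it in an ESM context that supports top-level await):

import { S3Client, CreateBucketCommand } from "@aws-sdk/client-s3";

// Sketch: create the local development bucket in MinIO once.
const client = new S3Client({
  region: 'us-east-1',
  endpoint: 'http://localhost:9000',
  forcePathStyle: true,
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin'
  }
});

await client.send(new CreateBucketCommand({ Bucket: 'hocuspocus-documents' }));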

Environment Variables

Create a .env file (npm run dev:setup does this automatically):

# MinIO S3 Configuration (for local development)
S3_ENDPOINT=http://localhost:9000
S3_BUCKET=hocuspocus-documents
S3_REGION=us-east-1
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin

# Production AWS S3 Configuration
# S3_BUCKET=your-production-bucket
# S3_REGION=us-west-2
# AWS_ACCESS_KEY_ID=your-aws-access-key
# AWS_SECRET_ACCESS_KEY=your-aws-secret-key
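
To make these values available to the environment-based configuration shown earlier, load the .env file before constructing the server. A minimal sketch assuming the dotenv package is installed:

import "dotenv/config"; // populate process.env from .env before anything else runs

import { Server } from "@hocuspocus/server";
import { S3 } from "@hocuspocus/extension-s3";

const server = new Server({
  extensions: [
    new S3({
      bucket: process.env.S3_BUCKET,
      region: process.env.S3_REGION,
      endpoint: process.env.S3_ENDPOINT,
      forcePathStyle: Boolean(process.env.S3_ENDPOINT)
    }),
  ],
});

server.listen();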

Best Practices

Security

  • Use IAM roles instead of access keys when running on AWS infrastructure
  • Store credentials in environment variables, not in code
  • Follow the principle of least privilege for IAM permissions
  • Enable S3 bucket encryption for additional security

Performance

  • Use regions closest to your users for better latency
  • Consider S3 Transfer Acceleration for global applications
  • Monitor S3 request metrics and costs

Reliability

  • Enable S3 versioning to protect against accidental deletions
  • Set up appropriate S3 lifecycle policies for cost optimization
  • Consider cross-region replication for critical applications

Scaling Architecture

  • Redis: Handles real-time synchronization between server instances
  • S3: Provides persistent storage and acts as the source of truth
  • Load Balancer: Distributes connections across server instances
  • Multiple Regions: Deploy in multiple AWS regions for global performance

Troubleshooting

Common Issues

"Bucket not found" or access denied errors:

  • Verify the bucket name and region
  • Check IAM permissions
  • Ensure credentials are correctly configured
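
A quick way to narrow this down is to check bucket access with the same credentials outside of Hocuspocus. A minimal sketch using the AWS SDK v3; adjust the bucket, region, and (for MinIO) endpoint to match your setup:

import { S3Client, HeadBucketCommand } from "@aws-sdk/client-s3";

// Sketch: verify that the configured credentials can reach the bucket.
const client = new S3Client({ region: 'us-east-1' });

try {
  await client.send(new HeadBucketCommand({ Bucket: 'my-documents-bucket' }));
  console.log('Bucket is reachable and accessible');
} catch (error) {
  console.error('Bucket check failed:', error.name, error.message);
}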

Connection timeouts with MinIO:

  • Make sure forcePathStyle: true is set
  • Verify the endpoint URL is accessible
  • Check firewall and network settings

Large document performance:

  • Monitor S3 transfer times
  • Consider using S3 multipart uploads for very large documents
  • Optimize document structure to reduce size
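
One way to keep an eye on document size is to measure the encoded Yjs update, which is roughly what gets written to S3. A minimal standalone sketch using the yjs API:

import * as Y from "yjs";

// Sketch: measure the encoded size of a Yjs document before it is persisted.
const doc = new Y.Doc();
doc.getText('default').insert(0, 'Hello, collaborative world!');

const update = Y.encodeStateAsUpdate(doc);
console.log(`Encoded document size: ${update.byteLength} bytes`);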