Object stores
LanceDB supports AWS S3 (and compatible stores), Azure Blob Storage, and Google Cloud Storage. The URI scheme in your connect call selects the backend.
Configuration options
When running inside the target cloud with correct IAM bindings, LanceDB often needs no extra configuration. When running elsewhere, provide credentials via environment variables or storage_options.
Keys are case-insensitive. Use lowercase in storage_options and uppercase in environment variables.
General object store options
| Key | Description |
|---|---|
| allow_http | Allow non-TLS connections. Default: false. |
| allow_invalid_certificates | Skip certificate validation. Default: false. |
| connect_timeout | Timeout for the connect phase. Default: 5s. |
| timeout | Timeout for the full request. Default: 30s. |
| user_agent | User agent string to send with requests. |
| proxy_url | Proxy URL to route requests through. |
| proxy_ca_certificate | PEM-formatted CA certificate for proxy connections. |
| proxy_excludes | Comma-separated hosts that bypass the proxy (domains or CIDR). |
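For instance, a minimal sketch of passing the general options through storage_options at connect time; the bucket path and timeout values below are placeholders:

```python
import lancedb

# Per-connection overrides of the general object store options above.
# The bucket, prefix, and timeout values are placeholders.
db = lancedb.connect(
    "s3://my-bucket/lancedb",
    storage_options={
        "connect_timeout": "10s",  # connect-phase timeout
        "timeout": "60s",          # full-request timeout
    },
)
```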
AWS S3

Set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and optionally AWS_SESSION_TOKEN as environment variables, or pass them in storage_options. Region is optional for AWS but required for most S3-compatible stores.
Minimum permissions usually include s3:PutObject, s3:GetObject, s3:DeleteObject, s3:ListBucket, and s3:GetBucketLocation scoped to the relevant bucket/prefix.
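A minimal sketch of passing explicit AWS credentials through storage_options; all values below are placeholders, and with IAM bindings or environment variables this block is unnecessary:

```python
import lancedb

# Explicit credentials for a connection; values are placeholders.
db = lancedb.connect(
    "s3://my-bucket/lancedb",
    storage_options={
        "aws_access_key_id": "<access-key-id>",
        "aws_secret_access_key": "<secret-access-key>",
        "aws_session_token": "<session-token>",  # only for temporary credentials
        "region": "us-east-1",
    },
)
```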
S3-compatible stores
If the endpoint is http:// (common in local development), also set ALLOW_HTTP=true or pass allow_http=True in storage_options.
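A minimal sketch against a hypothetical local MinIO endpoint, combining the endpoint, region, and allow_http settings; all values are placeholders:

```python
import lancedb

# S3-compatible store over plain HTTP; region is required here even
# though the store may ignore it. Endpoint and credentials are placeholders.
db = lancedb.connect(
    "s3://my-bucket/lancedb",
    storage_options={
        "aws_endpoint": "http://localhost:9000",
        "region": "us-east-1",
        "aws_access_key_id": "<access-key-id>",
        "aws_secret_access_key": "<secret-access-key>",
        "allow_http": "true",
    },
)
```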
S3 Express
Consult AWS networking requirements for S3 Express before enabling.
DynamoDB commit store for concurrent writes
S3 lacks atomic writes. To enable safe concurrent writers, use DynamoDB as a commit store by switching to the s3+ddb scheme and specifying the table name.
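A minimal sketch of connecting with the commit store enabled, assuming the table name is supplied as a ddbTableName query parameter; the bucket, prefix, and table name are placeholders:

```python
import lancedb

# s3+ddb scheme: data lives in the bucket, commits are coordinated
# through the named DynamoDB table. Names are placeholders.
db = lancedb.connect("s3+ddb://my-bucket/lancedb?ddbTableName=lancedb-commits")
```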
Create the DynamoDB table with hash key base_uri (string) and range key version (number). Small provisioned throughput (for example ReadCapacityUnits=1, WriteCapacityUnits=1) is sufficient for coordination.
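A one-time setup sketch using boto3 with the schema described above; the table name is a placeholder:

```python
import boto3

# Create the commit table: hash key base_uri (string), range key version (number).
dynamodb = boto3.client("dynamodb")
dynamodb.create_table(
    TableName="lancedb-commits",
    KeySchema=[
        {"AttributeName": "base_uri", "KeyType": "HASH"},
        {"AttributeName": "version", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "base_uri", "AttributeType": "S"},
        {"AttributeName": "version", "AttributeType": "N"},
    ],
    # Minimal throughput is enough for commit coordination.
    ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
)
```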
Google Cloud Storage

Set GOOGLE_SERVICE_ACCOUNT to the path of a service account JSON key, or include the path in storage_options. GCS defaults to HTTP/1; set HTTP1_ONLY=false if you need HTTP/2.
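A minimal sketch, assuming a service_account key in storage_options that takes the path to the JSON file; the bucket and path are placeholders:

```python
import lancedb

# GCS connection using a service account key file; values are placeholders.
db = lancedb.connect(
    "gs://my-bucket/lancedb",
    storage_options={
        "service_account": "/path/to/service-account.json",
    },
)
```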
Azure Blob Storage

Set AZURE_STORAGE_ACCOUNT_NAME and AZURE_STORAGE_ACCOUNT_KEY as environment variables, or pass them via storage_options.
Other supported keys include service principal credentials (azure_client_id, azure_client_secret, azure_tenant_id), SAS tokens, managed identities, and custom endpoints.
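A minimal sketch using an account name and key through storage_options; the container, prefix, and credential values are placeholders:

```python
import lancedb

# Azure Blob Storage connection with shared-key auth; values are placeholders.
db = lancedb.connect(
    "az://my-container/lancedb",
    storage_options={
        "azure_storage_account_name": "<account-name>",
        "azure_storage_account_key": "<account-key>",
    },
)
```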
Tigris Object Storage

Tigris exposes an S3-compatible API. Setting AWS_ENDPOINT=https://t3.storage.dev and AWS_DEFAULT_REGION=auto as environment variables achieves the same configuration as passing the equivalent keys in storage_options, as sketched below.
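A minimal sketch of the storage_options form; the bucket, prefix, and credentials are placeholders:

```python
import lancedb

# Tigris via its S3-compatible endpoint; credentials and bucket are placeholders.
db = lancedb.connect(
    "s3://my-tigris-bucket/lancedb",
    storage_options={
        "aws_endpoint": "https://t3.storage.dev",
        "region": "auto",
        "aws_access_key_id": "<access-key-id>",
        "aws_secret_access_key": "<secret-access-key>",
    },
)
```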