S3 Storage Integration
You can configure Importly to upload imported media files directly to your own S3 bucket, giving you complete control over your media storage. This feature supports Amazon S3, Cloudflare R2, Backblaze B2, and any other S3-compatible storage provider.
Why Use S3 Storage?
- Full Control: Keep your media in your own infrastructure
- Cost Optimization: Use your own storage pricing and lifecycle policies
- Integration: Easy integration with CDN, processing pipelines, or archival systems
- Compliance: Meet specific data residency and security requirements
- No Vendor Lock-in: Your media stays in your bucket
Setup Guide
Step 1: Prepare Your S3 Bucket
First, create an S3 bucket in your AWS account:
- Open the S3 console
- Click "Create bucket"
- Choose a bucket name and region
- Configure bucket settings according to your needs (versioning, encryption, etc.)
Important: Make sure your bucket has the appropriate permissions for your use case. If you want the files to be publicly accessible, configure bucket policies or CloudFront accordingly.
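If you do want objects to be publicly readable, a standard bucket policy for that looks like the following (replace your-bucket-name; only apply this if the files are genuinely meant to be world-readable):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```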
Step 2: Create IAM User with Limited Permissions
Security Best Practice: Never use your root AWS credentials. Always create a dedicated IAM user with minimal permissions.
Create a Custom Policy:
- Open the IAM console
- Navigate to "Policies" → "Create policy"
- Select JSON and paste the following policy (replace your-bucket-name with your actual bucket name):
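A minimal policy of this shape grants only the upload permission the integration needs (the Sid is arbitrary; replace your-bucket-name with your actual bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ImportlyUploadOnly",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```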
- Name the policy (e.g., ImportlyS3UploadPolicy)
- Click "Create policy"
Create IAM User:
- In IAM console, navigate to "Users" → "Add users"
- Enter username (e.g., importly-uploader)
- Select "Access key - Programmatic access"
- Attach the policy you just created
- Complete the wizard and save the credentials:
- Access Key ID (e.g., AKIAIOSFODNN7EXAMPLE)
- Secret Access Key (e.g., wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY)
Step 3: Configure S3 Storage in Importly
Once you have your S3 bucket and IAM credentials ready, configure Importly to use your S3 storage:
Access the Configuration:
- Go to your Dashboard
- Navigate to S3 Storage settings
Enter Your Credentials:
- Access Key ID: Your IAM user's access key
- Secret Access Key: Your IAM user's secret key
- Bucket Name: Your S3 bucket name
- Region: Your bucket's region (e.g., us-east-1)
- Endpoint (optional): For S3-compatible storage
- Default Path Prefix (optional): Default folder for uploads
- Storage Class (optional): S3 storage class
Test and Save:
- Click "Test Connection" to verify your configuration
- Click "Save Configuration" to store your settings
Using S3 Storage
Once configured, you can use S3 storage with your import requests:
Basic Example
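A request of roughly this shape enables S3 upload for an import. The endpoint path and header names here are illustrative placeholders; check the API documentation for the exact values:

```bash
curl -X POST https://api.importly.example/v1/import \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "store": true
  }'
```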
Advanced Options
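The same request with the optional storage parameters from the table below set explicitly (again, the endpoint is a placeholder):

```bash
curl -X POST https://api.importly.example/v1/import \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "store": true,
    "storagePath": "videos/2024",
    "storageFilename": "my-video",
    "storageClass": "STANDARD_IA",
    "storageReturnLocation": true
  }'
```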
Parameters
| Parameter | Type | Description |
|---|---|---|
| store | boolean | Enable S3 storage upload. Required to use S3 storage. |
| storagePath | string | Folder path prefix for organizing files (e.g., videos/2024) |
| storageFilename | string | Custom filename without extension (e.g., my-video) |
| storageBucket | string | Override the default bucket configured in settings |
| storageClass | string | S3 storage class (e.g., STANDARD, STANDARD_IA, GLACIER) |
| storageReturnLocation | boolean | Return the S3 URL in the response |
Path and Filename Examples
Understanding how storagePath and storageFilename work together:
Example 1: Both parameters specified
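A request body along these lines (the video URL is illustrative):

```json
{
  "url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
  "store": true,
  "storagePath": "videos/2024",
  "storageFilename": "my-video"
}
```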
Result: videos/2024/my-video.mp4
Example 2: Only folder path specified
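With only storagePath set:

```json
{
  "url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
  "store": true,
  "storagePath": "videos/2024"
}
```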
Result: videos/2024/Rick Astley - Never Gonna Give You Up.mp4 (uses original video title)
Example 3: Only filename specified
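With only storageFilename set:

```json
{
  "url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
  "store": true,
  "storageFilename": "my-video"
}
```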
Result: my-video.mp4 (stored in root or default path prefix from settings)
Example 4: Neither specified
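With neither parameter set:

```json
{
  "url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
  "store": true
}
```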
Result: Rick Astley - Never Gonna Give You Up.mp4 (uses original video title in root or default path prefix)
Response
When S3 storage is enabled, the response includes S3 information:
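The response will look roughly like the following; the exact field names are illustrative here, so consult the API reference for the authoritative schema. The S3 URL is included when storageReturnLocation is true:

```json
{
  "success": true,
  "mediaUrl": "https://cdn.importly.example/media/abc123.mp4",
  "storage": {
    "uploaded": true,
    "bucket": "your-bucket-name",
    "key": "videos/2024/my-video.mp4",
    "location": "https://your-bucket-name.s3.us-east-1.amazonaws.com/videos/2024/my-video.mp4"
  }
}
```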
S3-Compatible Storage
Cloudflare R2
Cloudflare R2 is S3-compatible with zero egress fees.
Configuration:
- Endpoint: https://[account-id].r2.cloudflarestorage.com
- Region: auto
- Access Key ID: From R2 dashboard
- Secret Access Key: From R2 dashboard
Backblaze B2
Backblaze B2 offers affordable S3-compatible storage.
Configuration:
- Endpoint: https://s3.[region].backblazeb2.com
- Region: Your bucket's region
- Access Key ID: Your Backblaze keyID
- Secret Access Key: Your Backblaze applicationKey
DigitalOcean Spaces
DigitalOcean Spaces is another S3-compatible option.
Configuration:
- Endpoint: https://[region].digitaloceanspaces.com
- Region: Your space's region
- Access Key ID: Your Spaces access key
- Secret Access Key: Your Spaces secret key
Storage Classes
Choose the appropriate S3 storage class based on your access patterns:
| Storage Class | Description |
|---|---|
| STANDARD | High availability and performance. Best for frequently accessed data. |
| INTELLIGENT_TIERING | Automatic cost optimization. Best for unknown or changing access patterns. |
| STANDARD_IA | Lower cost, less frequent access. Best for data accessed less than once a month. |
| ONEZONE_IA | Single AZ, lower cost. Best for reproducible data with lower availability needs. |
| GLACIER | Archive storage. Best for long-term archive with rare access. |
| DEEP_ARCHIVE | Lowest cost archive. Best for compliance and long-term retention. |
Best Practices
Security
- Use IAM Users: Never use root credentials
- Minimal Permissions: Grant only the PutObject permission
- Rotate Keys: Periodically rotate access keys
- Enable Encryption: Use S3 server-side encryption
- Audit Access: Monitor CloudTrail logs
Cost Optimization
- Lifecycle Policies: Automatically transition older files to cheaper storage classes
- Intelligent Tiering: Use for unpredictable access patterns
- Compression: Consider compressing files before storage
- Delete Old Files: Remove files you no longer need
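As an example, a lifecycle configuration (applied with the AWS CLI or console on your own bucket, not through Importly) that moves files under a videos/ prefix to STANDARD_IA after 30 days and deletes them after a year might look like:

```json
{
  "Rules": [
    {
      "ID": "tier-then-expire",
      "Status": "Enabled",
      "Filter": { "Prefix": "videos/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```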
Performance
- Regional Proximity: Choose a bucket region close to your users
- CDN Integration: Use CloudFront or similar CDN for distribution
- Multipart Upload: Automatically handled for large files
- Path Prefixes: Use logical folder structures for organization
CDN Integration
Amazon CloudFront
- Create a CloudFront distribution pointing to your S3 bucket
- Configure origin access identity (OAI) for secure access
- Use the CloudFront URL for serving media to users
- Configure caching policies for optimal performance
Cloudflare
- Add your domain to Cloudflare
- Create a CNAME record pointing to your R2 bucket
- Configure cache rules and security settings
- Enable automatic image optimization if needed
Troubleshooting
Connection Test Fails
- Verify credentials: Double-check Access Key ID and Secret Access Key
- Check permissions: Ensure IAM policy allows PutObject
- Verify bucket name: Bucket name must be exact (case-sensitive)
- Check region: Ensure region matches your bucket's region
- Endpoint URL: For S3-compatible storage, verify the endpoint URL
Files Not Uploading
- Bucket permissions: Verify bucket policy allows writes
- IAM permissions: Check IAM user has correct permissions
- Bucket exists: Ensure the bucket exists in the specified region
- Storage class: Some regions don't support all storage classes
Files Not Accessible
- Bucket policy: Configure bucket policy for public read if needed
- CloudFront: Use CloudFront for secure, fast distribution
- Presigned URLs: Generate presigned URLs for temporary access
- CORS: Configure CORS if accessing from web browsers
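A minimal CORS configuration allowing browser GETs, in the JSON form accepted by the S3 console's CORS editor, might look like this (replace the origin with your own domain):

```json
[
  {
    "AllowedOrigins": ["https://your-site.example"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```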
FAQ
Q: Is my S3 configuration secure?
Yes, your credentials are encrypted before being stored in our database using AES-256-GCM encryption.
Q: Can I use multiple buckets?
Yes, you can override the default bucket per-request using the storageBucket parameter.
Q: What happens if S3 upload fails?
If S3 upload fails, the file will still be saved to Importly's storage as a fallback, and you'll receive the standard mediaUrl.
Q: Can I disable S3 storage temporarily?
Yes, simply don't include the store: true parameter in your requests, or delete your S3 configuration from the dashboard.
Q: Does this affect pricing?
No, S3 storage is included in your plan. You only pay for your own S3 storage costs with your provider.
Q: Can I migrate existing imports to S3?
Currently, only new imports can be stored in S3. Contact support if you need to migrate existing media.
Need Help?
- Check our FAQ
- View API documentation
- Contact support