S3 Storage Integration

You can configure Importly to upload imported media files directly to your own S3 bucket, giving you complete control over your media storage. This feature supports Amazon S3, Cloudflare R2, Backblaze B2, and any other S3-compatible storage provider.

Why Use S3 Storage?

  • Full Control: Keep your media in your own infrastructure
  • Cost Optimization: Use your own storage pricing and lifecycle policies
  • Integration: Easy integration with CDN, processing pipelines, or archival systems
  • Compliance: Meet specific data residency and security requirements
  • No Vendor Lock-in: Your media stays in your bucket

Setup Guide

Step 1: Prepare Your S3 Bucket

First, create an S3 bucket in your AWS account:

  • Open the S3 console
  • Click "Create bucket"
  • Choose a bucket name and region
  • Configure bucket settings according to your needs (versioning, encryption, etc.)

Important: Make sure your bucket has the appropriate permissions for your use case. If you want the files to be publicly accessible, configure bucket policies or CloudFront accordingly.

Step 2: Create IAM User with Limited Permissions

Security Best Practice: Never use your root AWS credentials. Always create a dedicated IAM user with minimal permissions.

Create a Custom Policy:

  • Open the IAM console
  • Navigate to "Policies" → "Create policy"
  • Select JSON and paste the following policy (replace your-bucket-name with your actual bucket name):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```
  • Name the policy (e.g., ImportlyS3UploadPolicy)
  • Click "Create policy"
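A malformed or mistargeted policy document is a common cause of failed connection tests later on. As a quick sanity check before pasting, you can verify the JSON parses and grants s3:PutObject on your bucket. This helper is illustrative only, not part of Importly or the AWS tooling:

```python
import json

def check_upload_policy(policy_text: str, bucket: str) -> bool:
    """Sanity-check that a policy document grants s3:PutObject on the bucket."""
    policy = json.loads(policy_text)  # raises ValueError on malformed JSON
    for statement in policy.get("Statement", []):
        if (statement.get("Effect") == "Allow"
                and "s3:PutObject" in statement.get("Action", [])
                and statement.get("Resource") == f"arn:aws:s3:::{bucket}/*"):
            return True
    return False

policy = """{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}"""
```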

Create IAM User:

  • In IAM console, navigate to "Users" → "Add users"
  • Enter username (e.g., importly-uploader)
  • Select "Access key - Programmatic access"
  • Attach the policy you just created
  • Complete the wizard and save the credentials:
    • Access Key ID (e.g., AKIAIOSFODNN7EXAMPLE)
    • Secret Access Key (e.g., wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY)

Step 3: Configure S3 Storage in Importly

Once you have your S3 bucket and IAM credentials ready, configure Importly to use your S3 storage:

Access the Configuration:

  • Go to your Dashboard
  • Navigate to S3 Storage settings

Enter Your Credentials:

  • Access Key ID: Your IAM user's access key
  • Secret Access Key: Your IAM user's secret key
  • Bucket Name: Your S3 bucket name
  • Region: Your bucket's region (e.g., us-east-1)
  • Endpoint (optional): For S3-compatible storage
  • Default Path Prefix (optional): Default folder for uploads
  • Storage Class (optional): S3 storage class

Test and Save:

  • Click "Test Connection" to verify your configuration
  • Click "Save Configuration" to store your settings

Using S3 Storage

Once configured, you can use S3 storage with your import requests:

Basic Example

```bash
curl -X POST https://api.importly.app/import \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://youtube.com/watch?v=dQw4w9WgXcQ",
    "store": true
  }'
```
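The same request can be issued from any HTTP client. A minimal Python sketch using only the standard library, with the endpoint and fields taken from the curl example above (sending is left to the caller):

```python
import json
import urllib.request

def build_import_request(api_key: str, media_url: str,
                         store: bool = True) -> urllib.request.Request:
    """Build (but do not send) the import request shown above."""
    payload = {"url": media_url, "store": store}
    return urllib.request.Request(
        "https://api.importly.app/import",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_import_request("YOUR_API_KEY", "https://youtube.com/watch?v=dQw4w9WgXcQ")
# Send with: urllib.request.urlopen(req)
```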

Advanced Options

```bash
curl -X POST https://api.importly.app/import \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://youtube.com/watch?v=dQw4w9WgXcQ",
    "store": true,
    "storagePath": "videos/2024",
    "storageFilename": "my-video",
    "storageBucket": "my-other-bucket",
    "storageClass": "INTELLIGENT_TIERING",
    "storageReturnLocation": true
  }'
```

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| store* | boolean | Enable S3 storage upload. Required to use S3 storage. |
| storagePath | string | Folder path prefix for organizing files (e.g., videos/2024 or imports/production). Default: the default path prefix from your S3 settings, or the bucket root if not set. |
| storageFilename | string | Custom filename without extension (e.g., my-video). The appropriate extension is appended automatically based on the file type. Default: the original video title. |
| storageBucket | string | Override the default bucket configured in settings. |
| storageClass | string | S3 storage class (STANDARD, INTELLIGENT_TIERING, etc.). |
| storageReturnLocation | boolean | Return the S3 URL in the response. |

Path and Filename Examples

Understanding how storagePath and storageFilename work together:

Example 1: Both parameters specified

```json
{
  "storagePath": "videos/2024",
  "storageFilename": "my-video"
}
```

Result: videos/2024/my-video.mp4

Example 2: Only folder path specified

```json
{
  "storagePath": "videos/2024"
}
```

Result: videos/2024/Rick Astley - Never Gonna Give You Up.mp4 (uses original video title)

Example 3: Only filename specified

```json
{
  "storageFilename": "my-video"
}
```

Result: my-video.mp4 (stored in root or default path prefix from settings)

Example 4: Neither specified

```json
{
  "store": true
}
```

Result: Rick Astley - Never Gonna Give You Up.mp4 (uses original video title in root or default path prefix)
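The four cases above can be captured in a small helper. This is an illustrative reimplementation of the documented defaults, not Importly's actual code:

```python
from typing import Optional

def object_key(title: str,
               storage_path: Optional[str] = None,
               storage_filename: Optional[str] = None,
               ext: str = "mp4") -> str:
    """Derive the final object key from storagePath/storageFilename,
    falling back to the original video title, as documented above."""
    name = storage_filename or title
    key = f"{name}.{ext}"
    return f"{storage_path.rstrip('/')}/{key}" if storage_path else key

title = "Rick Astley - Never Gonna Give You Up"
```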

Response

When S3 storage is enabled, the response includes S3 information:

```json
{
  "success": true,
  "data": {
    "jobId": "550e8400-e29b-41d4-a716-446655440000",
    "status": "completed",
    "mediaUrl": "https://your-bucket.s3.us-east-1.amazonaws.com/videos/2024/my-video.mp4",
    "s3Storage": {
      "url": "https://your-bucket.s3.us-east-1.amazonaws.com/videos/2024/my-video.mp4",
      "bucket": "your-bucket",
      "key": "videos/2024/my-video.mp4"
    },
    "title": "Video Title",
    "duration": 240,
    "fileSizeBytes": 15728640,
    "costInDollars": 0.05
  }
}
```
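When reading the response, a client can prefer s3Storage.url when present and fall back to mediaUrl (which the FAQ notes is also the fallback when the S3 upload fails). A sketch using the field names shown above:

```python
import json

def media_location(body: str) -> str:
    """Return the S3 URL if the upload succeeded, else Importly's mediaUrl."""
    data = json.loads(body)["data"]
    s3 = data.get("s3Storage")
    return s3["url"] if s3 else data["mediaUrl"]

response_body = json.dumps({
    "success": True,
    "data": {
        "jobId": "550e8400-e29b-41d4-a716-446655440000",
        "status": "completed",
        "mediaUrl": "https://your-bucket.s3.us-east-1.amazonaws.com/videos/2024/my-video.mp4",
        "s3Storage": {
            "url": "https://your-bucket.s3.us-east-1.amazonaws.com/videos/2024/my-video.mp4",
            "bucket": "your-bucket",
            "key": "videos/2024/my-video.mp4",
        },
    },
})
```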

S3-Compatible Storage

Cloudflare R2

Cloudflare R2 is S3-compatible with zero egress fees.

Configuration:

  • Endpoint: https://[account-id].r2.cloudflarestorage.com
  • Region: auto
  • Access Key ID: From R2 dashboard
  • Secret Access Key: From R2 dashboard

Backblaze B2

Backblaze B2 offers affordable S3-compatible storage.

Configuration:

  • Endpoint: https://s3.[region].backblazeb2.com
  • Region: Your bucket's region
  • Access Key ID: Your Backblaze keyID
  • Secret Access Key: Your Backblaze applicationKey

DigitalOcean Spaces

DigitalOcean Spaces is another S3-compatible option.

Configuration:

  • Endpoint: https://[region].digitaloceanspaces.com
  • Region: Your space's region
  • Access Key ID: Your Spaces access key
  • Secret Access Key: Your Spaces secret key
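The three providers differ only in how the endpoint URL is assembled from your account ID or region. A small sketch of the templates listed above (the account ID and regions in the tests are placeholders):

```python
def r2_endpoint(account_id: str) -> str:
    """Cloudflare R2: per-account endpoint."""
    return f"https://{account_id}.r2.cloudflarestorage.com"

def b2_endpoint(region: str) -> str:
    """Backblaze B2: per-region endpoint."""
    return f"https://s3.{region}.backblazeb2.com"

def spaces_endpoint(region: str) -> str:
    """DigitalOcean Spaces: per-region endpoint."""
    return f"https://{region}.digitaloceanspaces.com"
```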

Storage Classes

Choose the appropriate S3 storage class based on your access patterns:

| Storage Class | Description |
| --- | --- |
| STANDARD | High availability and performance. Best for frequently accessed data. |
| INTELLIGENT_TIERING | Automatic cost optimization. Best for unknown or changing access patterns. |
| STANDARD_IA | Lower cost, less frequent access. Best for data accessed less than once a month. |
| ONEZONE_IA | Single AZ, lower cost. Best for reproducible data with lower availability needs. |
| GLACIER | Archive storage. Best for long-term archives with rare access. |
| DEEP_ARCHIVE | Lowest-cost archive. Best for compliance and long-term retention. |
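Because an unsupported storage class only fails at upload time, it can be worth validating the value client-side before sending it. The set below is the list from the table above (S3 defines a few other classes as well); this helper is illustrative only:

```python
STORAGE_CLASSES = {
    "STANDARD",
    "INTELLIGENT_TIERING",
    "STANDARD_IA",
    "ONEZONE_IA",
    "GLACIER",
    "DEEP_ARCHIVE",
}

def validate_storage_class(value: str) -> str:
    """Return value unchanged if it is a known class, else raise ValueError."""
    if value not in STORAGE_CLASSES:
        raise ValueError(f"Unknown storage class: {value!r}")
    return value
```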

Best Practices

Security

  1. Use IAM Users: Never use root credentials
  2. Minimal Permissions: Grant only the s3:PutObject and s3:PutObjectAcl actions needed for uploads
  3. Rotate Keys: Periodically rotate access keys
  4. Enable Encryption: Use S3 server-side encryption
  5. Audit Access: Monitor CloudTrail logs

Cost Optimization

  1. Lifecycle Policies: Automatically transition older files to cheaper storage classes
  2. Intelligent Tiering: Use for unpredictable access patterns
  3. Compression: Consider compressing files before storage
  4. Delete Old Files: Remove files you no longer need

Performance

  1. Regional Proximity: Choose a bucket region close to your users
  2. CDN Integration: Use CloudFront or similar CDN for distribution
  3. Multipart Upload: Automatically handled for large files
  4. Path Prefixes: Use logical folder structures for organization
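As an example of the "logical folder structures" point, a date-based prefix like the videos/2024 paths used earlier can be generated rather than hard-coded (illustrative only):

```python
from datetime import date

def dated_prefix(base: str, d: date) -> str:
    """Build a year/month storagePath prefix, e.g. videos/2024/03."""
    return f"{base}/{d.year}/{d.month:02d}"
```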

CDN Integration

Amazon CloudFront

  1. Create a CloudFront distribution pointing to your S3 bucket
  2. Configure origin access identity (OAI) for secure access
  3. Use the CloudFront URL for serving media to users
  4. Configure caching policies for optimal performance

Cloudflare

  1. Add your domain to Cloudflare
  2. Connect a custom domain to your R2 bucket (Cloudflare provisions the DNS record)
  3. Configure cache rules and security settings
  4. Enable automatic image optimization if needed

Troubleshooting

Connection Test Fails

  • Verify credentials: Double-check Access Key ID and Secret Access Key
  • Check permissions: Ensure IAM policy allows PutObject
  • Verify bucket name: Bucket name must be exact (case-sensitive)
  • Check region: Ensure region matches your bucket's region
  • Endpoint URL: For S3-compatible storage, verify the endpoint URL

Files Not Uploading

  • Bucket permissions: Verify bucket policy allows writes
  • IAM permissions: Check IAM user has correct permissions
  • Bucket exists: Ensure the bucket exists in the specified region
  • Storage class: Some regions don't support all storage classes

Files Not Accessible

  • Bucket policy: Configure bucket policy for public read if needed
  • CloudFront: Use CloudFront for secure, fast distribution
  • Presigned URLs: Generate presigned URLs for temporary access
  • CORS: Configure CORS if accessing from web browsers

FAQ

Q: Is my S3 configuration secure?

Yes. Your credentials are encrypted with AES-256-GCM before being stored in our database.

Q: Can I use multiple buckets?

Yes, you can override the default bucket per-request using the storageBucket parameter.

Q: What happens if S3 upload fails?

If S3 upload fails, the file will still be saved to Importly's storage as a fallback, and you'll receive the standard mediaUrl.

Q: Can I disable S3 storage temporarily?

Yes, simply don't include the store: true parameter in your requests, or delete your S3 configuration from the dashboard.

Q: Does this affect pricing?

No, S3 storage is included in your plan. You only pay for your own S3 storage costs with your provider.

Q: Can I migrate existing imports to S3?

Currently, only new imports can be stored in S3. Contact support if you need to migrate existing media.

Need Help?