# AWS S3
The S3 storage handles uploads to the AWS S3 service (or any S3-compatible service such as DigitalOcean Spaces or MinIO). It requires the `aws-sdk-s3` gem:
```rb
# Gemfile
gem "aws-sdk-s3", "~> 1.14"
```
## Initialization
The storage is initialized by providing your bucket name, region and credentials:
require "shrine/storage/s3"
s3 = Shrine::Storage::S3.new(
bucket: "my-app", # required
region: "eu-west-1", # required
access_key_id: "abc",
secret_access_key: "xyz",
)
The storage requires the following AWS S3 permissions:

* `s3:ListBucket` for the bucket resource
* `s3:GetObject`, `s3:PutObject`, `s3:PutObjectAcl`, `s3:DeleteObject`, `s3:ListMultipartUploadParts`, and `s3:AbortMultipartUpload` for the object resources
The `:access_key_id` and `:secret_access_key` options are just one form of authentication; see the AWS SDK docs for other options.
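For instance, you can omit the credentials entirely and let the SDK resolve them through its default credential provider chain (environment variables, shared credentials file, IAM instance profile, etc.). A minimal sketch, assuming credentials are supplied by the environment:

```rb
# Credentials are resolved by the SDK, e.g. from ENV["AWS_ACCESS_KEY_ID"]
# and ENV["AWS_SECRET_ACCESS_KEY"], or from an attached IAM role:
s3 = Shrine::Storage::S3.new(bucket: "my-app", region: "eu-west-1")
```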
The storage exposes the underlying Aws objects:
```rb
s3.client #=> #<Aws::S3::Client>
s3.client.access_key_id #=> "abc"
s3.client.secret_access_key #=> "xyz"
s3.client.region #=> "eu-west-1"

s3.bucket #=> #<Aws::S3::Bucket>
s3.bucket.name #=> "my-app"

s3.object("key") #=> #<Aws::S3::Object>
```
## Public uploads
By default, uploaded S3 objects will have private visibility, meaning they can only be accessed via signed expiring URLs generated using your private S3 credentials.
```rb
s3 = Shrine::Storage::S3.new(**s3_options)

s3.upload(io, "key") # uploads with default "private" ACL
s3.url("key")        # https://my-bucket.s3.amazonaws.com/key?X-Amz-Expires=900&X-Amz-Signature=b22d37c37d...
```
If you would like to generate public URLs, you can tell the S3 storage to make uploads public:
```rb
s3 = Shrine::Storage::S3.new(public: true, **s3_options)

s3.upload(io, "key") # uploads with "public-read" ACL
s3.url("key")        # https://my-bucket.s3.amazonaws.com/key
```
If you want to make only some uploads public, you can conditionally apply the `:acl` upload option and `:public` URL option:
```rb
Shrine.plugin :upload_options, store: -> (io, **) { { acl: "public-read" } }
Shrine.plugin :url_options, store: -> (io, **) { { public: true } }
```
## Prefix
The `:prefix` option can be specified for uploading all files inside a specific S3 prefix (folder), which is useful when using S3 for both cache and store:
```rb
Shrine::Storage::S3.new(prefix: "cache", **s3_options)
```
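For example, the temporary and permanent storage can share a bucket while writing under different prefixes (a sketch; `s3_options` stands in for your shared configuration):

```rb
Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),
  store: Shrine::Storage::S3.new(prefix: "store", **s3_options),
}
```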
## Upload options
Sometimes you'll want to add additional upload options to all S3 uploads. You can do that by passing the `:upload_options` option:
```rb
Shrine::Storage::S3.new(upload_options: { acl: "private" }, **s3_options)
```
These options will be passed to aws-sdk-s3's methods for uploading, copying and presigning.
You can also generate upload options per upload with the `upload_options` plugin:
```rb
Shrine.plugin :upload_options, store: -> (io, derivative: nil, **) do
  if derivative == :thumb
    { acl: "public-read" }
  else
    { acl: "private" }
  end
end
```
or when using the uploader directly:
```rb
uploader.upload(file, upload_options: { acl: "private" })
```
Unlike the `:upload_options` storage option, upload options given on the uploader level won't be forwarded for generating presigns, since presigns are generated using the storage directly.
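If you need custom parameters on presigns as well, pass them where the presign is generated, e.g. through the `presign_endpoint` plugin (a sketch; the `acl` value is illustrative):

```rb
# These options are forwarded to S3#presign for every generated presign:
Shrine.plugin :presign_endpoint, presign_options: { acl: "public-read" }
```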
## Copy options
If you wish to override options that are passed when copying objects from temporary to permanent storage, you can pass `:copy_options`:
```rb
# Removes the default :tagging_directive, which isn't supported by Cloudflare R2
Shrine::Storage::S3.new(copy_options: {}, **s3_options)
```
## URL Host
If you want your S3 object URLs to be generated with a different URL host (e.g. a CDN), you can specify the `:host` option to `#url`:
s3.url("image.jpg", host: "http://abc123.cloudfront.net")
#=> "http://abc123.cloudfront.net/image.jpg"
The host URL can include a path prefix, but it needs to end with a slash:
s3.url("image.jpg", host: "https://your-s3-host.com/prefix/") # needs to end with a slash
#=> "http://your-s3-host.com/prefix/image.jpg"
To have the `:host` option passed automatically for every URL, use the `url_options` plugin:
```rb
plugin :url_options, store: { host: "http://abc123.cloudfront.net" }
```
## Signer
If you would like to serve private content via CloudFront, you need to sign the object URLs with a special signer, such as `Aws::CloudFront::UrlSigner` provided by the `aws-sdk-cloudfront` gem. The S3 storage initializer accepts a `:signer` block, which you can use to call your signer:
require "aws-sdk-cloudfront"
signer = Aws::CloudFront::UrlSigner.new(
key_pair_id: "cf-keypair-id",
private_key_path: "./cf_private_key.pem"
)
Shrine::Storage::S3.new(signer: signer.method(:signed_url))
# or
Shrine::Storage::S3.new(signer: -> (url, **options) { signer.signed_url(url, **options) })
## URL options
Other than the `:host` and `:public` URL options, all additional `S3#url` options are forwarded to `Aws::S3::Object#presigned_url`.
```rb
s3.url(
  "key",
  expires_in: 15,
  response_content_disposition: ContentDisposition.attachment("my-filename"),
  response_content_type: "foo/bar",
  # ...
)
```
## Presigns
The `S3#presign` method can be used for generating parameters for direct upload to S3:
s3.presign("key") #=>
# {
# url: "https://my-bucket.s3.amazonaws.com/...",
# fields: { ... }, # blank for PUT presigns
# headers: { ... }, # blank for POST presigns
# method: "post",
# }
By default, parameters for a POST upload are generated, but you can also generate PUT upload parameters:
s3.presign("key", method: :put)
Any additional options are forwarded to `Aws::S3::Object#presigned_post` (for POST uploads) and `Aws::S3::Object#presigned_url` (for PUT uploads).
s3.presign("key", method: :put, content_disposition: "attachment; filename=my-file.txt") #=>
# {
# url: "https://my-bucket.s3.amazonaws.com/...",
# fields: {},
# headers: { "Content-Disposition" => "attachment; filename=my-file.txt" },
# method :put,
# }
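The client performing the direct upload then sends the returned parameters itself. A minimal sketch of consuming a PUT presign with the `http` gem (the gem choice and file path are illustrative, not part of Shrine's API):

```rb
require "http"

presign = s3.presign("key", method: :put)

# Send the presigned headers with the request, and the file contents as the body:
HTTP.headers(presign[:headers])
    .put(presign[:url], body: File.open("file.txt", "rb"))
```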
## Large files
The aws-sdk-s3 gem has the ability to automatically use multipart upload/copy for larger files, splitting the file into multiple chunks and uploading/copying them in parallel.
By default, multipart upload will be used for files larger than 15MB, and multipart copy for files larger than 100MB, but you can change the thresholds via `:multipart_threshold`:
```rb
Shrine::Storage::S3.new(
  multipart_threshold: { upload: 30*1024*1024, copy: 200*1024*1024 },
  **s3_options,
)
```
## Encryption
The easiest way to use server-side encryption for uploaded S3 objects is to configure default encryption for your S3 bucket. Alternatively, you can pass server-side encryption parameters to the API calls.
The `S3#upload` method accepts `:sse_*` options:
s3.upload(io, "key", sse_customer_algorithm: "AES256",
sse_customer_key: "secret_key",
sse_customer_key_md5: "secret_key_md5",
ssekms_key_id: "key_id")
The `S3#presign` method accepts `:server_side_encryption_*` options for POST presigns, and the same `:sse_*` options as above for PUT presigns.
s3.presign("key", server_side_encryption_customer_algorithm: "AES256",
server_side_encryption_customer_key: "secret_key",
server_side_encryption_aws_kms_key_id: "key_id")
When downloading encrypted S3 objects, the same server-side encryption parameters need to be passed in.
s3.open("key", sse_customer_algorithm: "AES256",
sse_customer_key: "secret_key",
sse_customer_key_md5: "secret_key_md5")
Client-side encryption is supported as well:
```rb
encryption_client = Aws::S3::EncryptionV2::Client.new(...)
s3 = Shrine::Storage::S3.new(client: encryption_client, **other_options)

s3.upload(io, "key") # encrypts on upload
s3.open("key")       # decrypts on download
```
## Accelerate endpoint
To use Amazon S3's Transfer Acceleration feature, set `:use_accelerate_endpoint` to `true` when initializing the storage:
```rb
Shrine::Storage::S3.new(use_accelerate_endpoint: true, **other_options)
```
## Deleting prefixed
If you want to delete all objects in some prefix, you can use `S3#delete_prefixed`:
```rb
s3.delete_prefixed("some_prefix/") # deletes all objects in "some_prefix/"
```
## Clearing cache
If you're using S3 as a cache, you will probably want to periodically delete old files which aren't used anymore. S3 has a built-in way to do this via object lifecycle rules; see the AWS documentation on object expiration for instructions.
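For illustration, a lifecycle rule can also be created through the SDK. A sketch, assuming cached files live under the "cache/" prefix and should expire after 7 days:

```rb
# Expires objects under the "cache/" prefix 7 days after upload:
s3.client.put_bucket_lifecycle_configuration(
  bucket: "my-app",
  lifecycle_configuration: {
    rules: [{
      id: "expire-cache",
      status: "Enabled",
      filter: { prefix: "cache/" },
      expiration: { days: 7 },
    }],
  },
)
```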
Alternatively, you can periodically call the `#clear!` method:
```rb
# deletes all objects that were uploaded more than 7 days ago
s3.clear! { |object| object.last_modified < Time.now - 7*24*60*60 }
```