class Shrine::Storage::S3

lib/shrine/storage/s3.rb
Superclass: Object

The S3 storage handles uploads to the Amazon S3 service, using the aws-sdk gem:

gem "aws-sdk", "~> 2.1"

It is initialized with the following 4 required options:

storage = Shrine::Storage::S3.new(
  access_key_id: "abc",
  secret_access_key: "xyz",
  region: "eu-west-1",
  bucket: "my-app",
)

The storage exposes the underlying Aws objects:

storage.client #=> #<Aws::S3::Client>
storage.client.access_key_id #=> "abc"
storage.client.secret_access_key #=> "xyz"
storage.client.region #=> "eu-west-1"

storage.bucket #=> #<Aws::S3::Bucket>
storage.bucket.name #=> "my-app"

storage.object("key") #=> #<Aws::S3::Object>


Prefix

The :prefix option can be specified for uploading all files inside a specific S3 prefix (folder), which is useful when using S3 for both cache and store:

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),
  store: Shrine::Storage::S3.new(prefix: "store", **s3_options),
}

Upload options

Sometimes you'll want to add additional upload options to all S3 uploads. You can do that by passing the :upload_options option:

Shrine::Storage::S3.new(upload_options: {acl: "private"}, **s3_options)

These options will be passed to aws-sdk's methods for uploading, copying and presigning.

You can also generate upload options per upload with the upload_options plugin:

class MyUploader < Shrine
  plugin :upload_options, store: ->(io, context) do
    if context[:version] == :thumb
      {acl: "public-read"}
    else
      {acl: "private"}
    end
  end
end

or when using the uploader directly:

uploader.upload(file, upload_options: {acl: "private"})

Note that, unlike the :upload_options storage option, upload options given on the uploader level won't be forwarded for generating presigns, since presigns are generated using the storage directly.

URL options

This storage supports various URL options that can be passed when generating the URL of an uploaded file:

uploaded_file.url(public: true)   # public URL without signed parameters
uploaded_file.url(download: true) # forced download URL

All other options are forwarded to the aws-sdk gem:

uploaded_file.url(expires_in: 15)
uploaded_file.url(virtual_host: true)


If you're using a CDN with S3, such as Amazon CloudFront, you can specify the :host option to #url:

uploaded_file.url(host: "http://abc123.cloudfront.net")
#=> "http://abc123.cloudfront.net/image.jpg"
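The host replacement keeps the path and any signed query parameters intact. A minimal sketch with Ruby's stdlib URI (both the signed URL and the CloudFront host below are hypothetical examples):

```ruby
require "uri"

# A signed S3 URL and a hypothetical CDN host:
signed_url = "https://my-app.s3.eu-west-1.amazonaws.com/image.jpg?X-Amz-Signature=abc123"
cdn_host   = "http://abc123.cloudfront.net"

# Swap the host while preserving the path and query string.
uri     = URI.parse(signed_url)
cdn_url = URI.join(cdn_host, uri.request_uri).to_s
#=> "http://abc123.cloudfront.net/image.jpg?X-Amz-Signature=abc123"
```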

Accelerate endpoint

To use Amazon S3's Transfer Acceleration feature, you can change the :endpoint of the underlying client to the accelerate endpoint, and this will be applied both to regular and presigned uploads, as well as to download URLs:

Shrine::Storage::S3.new(endpoint: "https://s3-accelerate.amazonaws.com", **s3_options)


Presigns

This storage can generate presigns for direct uploads to Amazon S3, and it accepts additional options which are passed to aws-sdk. There are three places in which you can specify presign options:

  • in the :upload_options option on this storage

  • in the direct_upload plugin through :presign_options

  • in Storage::S3#presign by forwarding options
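These levels combine by hash merging, with per-call options overriding the storage-level ones. A sketch of the precedence (the option values are illustrative, not defaults):

```ruby
# Storage-level :upload_options, applied to every presign:
storage_options = {acl: "private"}

# Options forwarded for a particular presign call (e.g. through the
# direct_upload plugin's :presign_options); these win on key conflicts:
call_options = {acl: "public-read", content_length_range: 0..10*1024*1024}

presign_options = storage_options.merge(call_options)
#=> {acl: "public-read", content_length_range: 0..10485760}
```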

Large files

The aws-sdk gem has the ability to automatically use multipart upload/copy for larger files, splitting the file into multiple chunks and uploading/copying them in parallel.

By default, files larger than 15MB are uploaded using multipart upload, and files larger than 100MB are copied using multipart copy, but you can change these thresholds via :multipart_threshold.

thresholds = {upload: 30*1024*1024, copy: 200*1024*1024}
Shrine::Storage::S3.new(multipart_threshold: thresholds, **s3_options)
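The threshold is a plain byte-size comparison against the file being uploaded or copied; a sketch of the decision, under the assumption that the check is a simple size comparison against the configured threshold:

```ruby
# Default thresholds (15MB for uploads, 100MB for copies):
multipart_threshold = {upload: 15*1024*1024, copy: 100*1024*1024}

file_size = 20*1024*1024  # a 20MB file

# A 20MB upload exceeds the 15MB threshold, so multipart upload is used.
uses_multipart_upload = file_size >= multipart_threshold[:upload]
#=> true
```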

If you want to change how many threads aws-sdk will use for multipart upload/copy, you can use the upload_options plugin to specify :thread_count.

plugin :upload_options, store: ->(io, context) do
  {thread_count: 5}
end

Clearing cache

If you're using S3 as a cache, you will probably want to periodically delete old files which aren't used anymore. S3 has a built-in way to do this via bucket lifecycle rules, which can expire objects automatically after a configured period.
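For example, a lifecycle rule could be configured (in the AWS console or via the API) to expire cached uploads. A hypothetical rule for the "cache" prefix, expressed as the kind of rule hash the S3 API accepts:

```ruby
# Hypothetical lifecycle rule: expire objects under the "cache" prefix
# (matching the storage's :prefix) 7 days after they were created.
lifecycle_rule = {
  prefix:     "cache",
  status:     "Enabled",
  expiration: {days: 7},
}
```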


Attributes

bucket [R]
client [R]
host [R]
prefix [R]
upload_options [R]

Public Class methods

new (bucket:, prefix: nil, host: nil, upload_options: {}, multipart_threshold: {}, **s3_options)

Initializes a storage for uploading to S3.


:access_key_id, :secret_access_key, :region

Credentials required by the aws-sdk gem.


:prefix

"Folder" name inside the bucket to store files into.


:upload_options

Additional options that will be used for uploading files; they will be passed to Aws::S3::Object#put, Aws::S3::Object#copy_from and Aws::S3::Bucket#presigned_post.


:multipart_threshold

If the input file is larger than the specified size, a parallelized multipart upload or copy will be used. Defaults to {upload: 15*1024*1024, copy: 100*1024*1024} (15MB for upload requests, 100MB for copy requests).

All other options are forwarded to Aws::S3::Client#initialize.

[show source]
# File lib/shrine/storage/s3.rb, line 181
def initialize(bucket:, prefix: nil, host: nil, upload_options: {}, multipart_threshold: {}, **s3_options)
  Shrine.deprecation("The :host option to Shrine::Storage::S3#initialize is deprecated and will be removed in Shrine 3. Pass :host to S3#url instead, you can also use default_url_options plugin.") if host
  resource = Aws::S3::Resource.new(**s3_options)

  if multipart_threshold.is_a?(Integer)
    Shrine.deprecation("Accepting the :multipart_threshold S3 option as an integer is deprecated, use a hash with :upload and :copy keys instead, e.g. {upload: 15*1024*1024, copy: 150*1024*1024}")
    multipart_threshold = {upload: multipart_threshold}
  end
  multipart_threshold[:upload] ||= 15*1024*1024
  multipart_threshold[:copy]   ||= 100*1024*1024

  @bucket = resource.bucket(bucket)
  @client = resource.client
  @prefix = prefix
  @host = host
  @upload_options = upload_options
  @multipart_threshold = multipart_threshold
end

Public Instance methods

clear! ()

Deletes all files from the storage.

[show source]
# File lib/shrine/storage/s3.rb, line 307
def clear!
  objects = bucket.object_versions(prefix: prefix)
  objects.respond_to?(:batch_delete!) ? objects.batch_delete! : objects.delete
end
delete (id)

Deletes the file from S3.

[show source]
# File lib/shrine/storage/s3.rb, line 252
def delete(id)
  object(id).delete
end
download (id)

Downloads the file from S3, and returns a Tempfile.

[show source]
# File lib/shrine/storage/s3.rb, line 233
def download(id)
  tempfile = Tempfile.new(["shrine-s3", File.extname(id)], binmode: true)
  (object = object(id)).get(response_target: tempfile)
  tempfile.singleton_class.instance_eval { attr_accessor :content_type }
  tempfile.content_type = object.content_type
  tempfile.tap(&:open)
end
exists? (id)

Returns true if the file exists on S3.

[show source]
# File lib/shrine/storage/s3.rb, line 247
def exists?(id)
  object(id).exists?
end
method_missing (name, *args)

Catches the deprecated #stream method.

[show source]
# File lib/shrine/storage/s3.rb, line 330
def method_missing(name, *args)
  if name == :stream
    Shrine.deprecation("Shrine::Storage::S3#stream is deprecated over calling #each_chunk on S3#open.")
    object = object(*args)
    object.get { |chunk| yield chunk, object.content_length }
  else
    super
  end
end
multi_delete (ids)

This is called when multiple files are being deleted at once. Issues a single MULTI DELETE command for every batch of 1000 objects (the S3 delete limit).

[show source]
# File lib/shrine/storage/s3.rb, line 258
def multi_delete(ids)
  ids.each_slice(1000) do |ids_batch|
    delete_params = {objects: ids_batch.map { |id| {key: object(id).key} }}
    bucket.delete_objects(delete: delete_params)
  end
end
object (id)

Returns an Aws::S3::Object for the given id.

[show source]
# File lib/shrine/storage/s3.rb, line 325
def object(id)
  bucket.object([*prefix, id].join("/"))
end
open (id)

Returns a Down::ChunkedIO object representing the S3 object.

[show source]
# File lib/shrine/storage/s3.rb, line 242
def open(id)
  Down.open(url(id), ssl_ca_cert: Aws.config[:ssl_ca_bundle])
end
presign (id, **options)

Returns a signature for direct uploads. Internally it calls Aws::S3::Bucket#presigned_post, and forwards any additional options to it.

[show source]
# File lib/shrine/storage/s3.rb, line 317
def presign(id, **options)
  options = upload_options.merge(options)
  options[:content_disposition] = encode_content_disposition(options[:content_disposition]) if options[:content_disposition]

  object(id).presigned_post(options)
end

s3 ()

Returns an Aws::S3::Resource object.

[show source]
# File lib/shrine/storage/s3.rb, line 201
def s3
  Shrine.deprecation("Shrine::Storage::S3#s3 that returns an Aws::S3::Resource is deprecated, use Shrine::Storage::S3#client which returns an Aws::S3::Client object.")
  Aws::S3::Resource.new(client: @client)
end
upload (io, id, shrine_metadata: {}, **upload_options)

If the file is an UploadedFile from S3, issues a COPY command, otherwise uploads the file. For files larger than :multipart_threshold a multipart upload/copy will be used for better performance and more resilient uploads.

It assigns the correct “Content-Type” taken from the MIME type, because by default S3 sets everything to “application/octet-stream”.

[show source]
# File lib/shrine/storage/s3.rb, line 213
def upload(io, id, shrine_metadata: {}, **upload_options)
  content_type, filename = shrine_metadata.values_at("mime_type", "filename")

  options = {}
  options[:content_type] = content_type if content_type
  options[:content_disposition] = "inline; filename=\"#{filename}\"" if filename

  options.update(@upload_options)
  options.update(upload_options)

  options[:content_disposition] = encode_content_disposition(options[:content_disposition]) if options[:content_disposition]

  if copyable?(io)
    copy(io, id, **options)
  else
    put(io, id, **options)
  end
end
url (id, download: nil, public: nil, host: self.host, **options)

Returns the presigned URL to the file.


:public

Controls whether the URL is signed (false) or unsigned (true). Note that for unsigned URLs the S3 bucket needs to be configured to allow public access. Defaults to false.


:host

This option replaces the host part of the returned URL, and is typically useful for setting CDN hosts (e.g. a CloudFront distribution URL).


:download

If set to true, creates a "forced download" link, which means that the browser will never display the file and always ask the user to download it.

All other options are forwarded to Aws::S3::Object#presigned_url or Aws::S3::Object#public_url.

[show source]
# File lib/shrine/storage/s3.rb, line 287
def url(id, download: nil, public: nil, host: self.host, **options)
  options[:response_content_disposition] ||= "attachment" if download
  options[:response_content_disposition] = encode_content_disposition(options[:response_content_disposition]) if options[:response_content_disposition]

  if public
    url = object(id).public_url(**options)
  else
    url = object(id).presigned_url(:get, **options)
  end

  if host
    uri = URI.parse(url)
    uri.path = uri.path.match(/^\/#{bucket.name}/).post_match unless uri.host.include?(bucket.name)
    url = URI.join(host, uri.request_uri).to_s
  end

  url
end