class Shrine::Storage::S3

Defined in: lib/shrine/storage/s3.rb
Superclass: Object


Constants

MIN_PART_SIZE = 5*1024*1024
MULTIPART_THRESHOLD = { upload: 15*1024*1024, copy: 100*1024*1024 }


Attributes

bucket [R]
client [R]
prefix [R]
public [R]
signer [R]
upload_options [R]

Public Class methods

new(bucket:, client: nil, prefix: nil, upload_options: {}, multipart_threshold: {}, max_multipart_parts: nil, signer: nil, public: nil, **s3_options)

Initializes a storage for uploading to S3. All options are forwarded to {Aws::S3::Client#initialize}, except the following:


:bucket
(Required.) Name of the S3 bucket.


:client
By default an Aws::S3::Client instance is created internally from the additional options, but you can use this option to provide your own client. This can be an Aws::S3::Client or an Aws::S3::Encryption::Client object.


:prefix
“Directory” inside the bucket to store files into.


:upload_options
Additional options that will be used for uploading files; they are passed to {Aws::S3::Object#put}, {Aws::S3::Object#copy_from} and {Aws::S3::Bucket#presigned_post}.


:multipart_threshold
If the input file is larger than the specified size, a parallelized multipart upload/copy will be used. Defaults to {upload: 15*1024*1024, copy: 100*1024*1024} (15MB for upload requests, 100MB for copy requests).


:max_multipart_parts
Limits the number of parts if a parallelized multipart upload/copy is used. Defaults to 10_000.

:signer
A custom signer object, called by #url to sign the generated URL (see the :signer handling in #url).

:public
Uploads and presigns will set the "public-read" ACL, and #url will return public (unsigned) object URLs by default.
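As an illustration of the defaults above, the storage merges any :multipart_threshold you pass into the class-wide MULTIPART_THRESHOLD, so overriding one key keeps the other key's default. A pure-Ruby sketch of that merge:

```ruby
# Class defaults, as defined on this storage.
MULTIPART_THRESHOLD = { upload: 15*1024*1024, copy: 100*1024*1024 }

# Overriding only :upload keeps the 100MB :copy default, mirroring the
# MULTIPART_THRESHOLD.merge(multipart_threshold) call in #initialize.
threshold = MULTIPART_THRESHOLD.merge(upload: 50*1024*1024)

threshold[:upload]  # => 52428800 (50MB)
threshold[:copy]    # => 104857600 (100MB)
```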

In addition to specifying the :bucket, you’ll also need to provide AWS credentials. The most common way is to provide them directly via :access_key_id, :secret_access_key, and :region options. But you can also use any other way of authentication specified in the AWS SDK documentation.
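For context, a typical setup registers two instances of this storage with Shrine. This is a configuration sketch, not runnable as-is: the bucket name, region, and credentials below are placeholders you would replace with your own.

```ruby
require "shrine"
require "shrine/storage/s3"

s3_options = {
  bucket:            "my-bucket",      # hypothetical bucket name
  region:            "eu-west-1",      # hypothetical region
  access_key_id:     "<YOUR_KEY>",     # placeholder credentials
  secret_access_key: "<YOUR_SECRET>",
}

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),
  store: Shrine::Storage::S3.new(prefix: "store", **s3_options),
}
```

Credentials can also come from any other mechanism the AWS SDK supports (environment variables, shared credentials file, instance profiles), in which case they are simply omitted here.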

[show source]
   # File lib/shrine/storage/s3.rb
def initialize(bucket:, client: nil, prefix: nil, upload_options: {}, multipart_threshold: {}, max_multipart_parts: nil, signer: nil, public: nil, **s3_options)
  raise ArgumentError, "the :bucket option is nil" unless bucket

  @client = client || Aws::S3::Client.new(**s3_options)
  @bucket = Aws::S3::Bucket.new(name: bucket, client: @client)
  @prefix = prefix
  @upload_options = upload_options
  @multipart_threshold = MULTIPART_THRESHOLD.merge(multipart_threshold)
  @max_multipart_parts = max_multipart_parts || MAX_MULTIPART_PARTS
  @signer = signer
  @public = public
end

Public Instance methods


If block is given, deletes all objects from the storage for which the block evaluates to true. Otherwise deletes all objects from the storage.

s3.clear!
# or
s3.clear! { |object| object.last_modified < Time.now - 7*24*60*60 }
[show source]
    # File lib/shrine/storage/s3.rb
def clear!(&block)
  objects_to_delete = bucket.objects(prefix: prefix)
  objects_to_delete = objects_to_delete.lazy.select(&block) if block

  delete_objects(objects_to_delete)
end

Deletes the file from the storage.

[show source]
    # File lib/shrine/storage/s3.rb
def delete(id)
  object(id).delete
end

Deletes objects at keys starting with the specified prefix.
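The deletion key prefix is built by joining the storage-level prefix with the argument and enforcing a trailing slash. A pure-Ruby sketch of that joining logic, with hypothetical prefixes:

```ruby
storage_prefix = "cache"   # hypothetical storage-level prefix (may be nil)
delete_prefix  = "2023"    # argument passed to delete_prefixed

# Enforce a trailing "/" so "2023" cannot match keys like "2023-backup/...".
normalized = delete_prefix.chomp("/") + "/"

key_prefix = [*storage_prefix, normalized].join("/")
key_prefix                    # => "cache/2023/"

# With no storage prefix, the splat drops the nil:
[*nil, normalized].join("/")  # => "2023/"
```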


[show source]
    # File lib/shrine/storage/s3.rb
def delete_prefixed(delete_prefix)
  # We need to make sure to combine with storage prefix, and
  # that it ends in '/' cause S3 can be squirrely about matching interior.
  delete_prefix = delete_prefix.chomp("/") + "/"
  bucket.objects(prefix: [*prefix, delete_prefix].join("/")).batch_delete!
end

Returns true if the file exists on S3.

[show source]
    # File lib/shrine/storage/s3.rb
def exists?(id)
  object(id).exists?
end

Returns an Aws::S3::Object for the given id.

[show source]
    # File lib/shrine/storage/s3.rb
def object(id)
  bucket.object(object_key(id))
end
open(id, rewindable: true, encoding: nil, **options)

Returns a Down::ChunkedIO object that downloads S3 object content on demand. By default, read content is cached to disk so the IO can be rewound, but if you don’t need that you can pass rewindable: false. A character encoding can be passed in :encoding; the default is Encoding::BINARY (via Down::ChunkedIO).

Any additional options are forwarded to {Aws::S3::Object#get}.

[show source]
    # File lib/shrine/storage/s3.rb
def open(id, rewindable: true, encoding: nil, **options)
  chunks, length = get(id, **options)

  Down::ChunkedIO.new(chunks: chunks, rewindable: rewindable, size: length, encoding: encoding)
rescue Aws::S3::Errors::NoSuchKey
  raise Shrine::FileNotFound, "file #{id.inspect} not found on storage"
end
presign(id, method: :post, **presign_options)

Returns URL, params, headers, and verb for direct uploads.

s3.presign("key") #=>
# {
#   url: "https://my-bucket.s3.amazonaws.com/...",
#   fields: { ... },  # blank for PUT presigns
#   headers: { ... }, # blank for POST presigns
#   method: "post",
# }

By default it calls {Aws::S3::Object#presigned_post} which generates data for a POST request, but you can also specify method: :put for PUT uploads which calls {Aws::S3::Object#presigned_url}.

s3.presign("key", method: :post) # for POST upload (default)
s3.presign("key", method: :put)  # for PUT upload

Any additional options are forwarded to the underlying AWS SDK method.

[show source]
    # File lib/shrine/storage/s3.rb
def presign(id, method: :post, **presign_options)
  options = {}
  options[:acl] = "public-read" if public

  options.merge!(@upload_options)
  options.merge!(presign_options)

  send(:"presign_#{method}", id, options)
end
upload(io, id, shrine_metadata: {}, **upload_options)

If the file is an UploadedFile from S3, issues a COPY command, otherwise uploads the file. For files larger than :multipart_threshold a multipart upload/copy will be used for better performance and more resilient uploads.

It assigns the correct “Content-Type” taken from the MIME type, because by default S3 sets everything to “application/octet-stream”.
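The Content-Type handling can be sketched in plain Ruby: request options only pick up keys for metadata that is actually present, so a missing MIME type simply leaves the key out (the :content_disposition and ACL handling in the source follow the same pattern; the metadata values here are hypothetical):

```ruby
# Metadata as Shrine's metadata extraction would provide it.
shrine_metadata = { "mime_type" => "image/jpeg", "filename" => "nature.jpg" }

content_type, filename = shrine_metadata.values_at("mime_type", "filename")

options = {}
options[:content_type] = content_type if content_type  # skipped when mime_type is absent

options  # => { content_type: "image/jpeg" }
```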

[show source]
    # File lib/shrine/storage/s3.rb
def upload(io, id, shrine_metadata: {}, **upload_options)
  content_type, filename = shrine_metadata.values_at("mime_type", "filename")

  options = {}
  options[:content_type] = content_type if content_type
  options[:content_disposition] = ContentDisposition.inline(filename) if filename
  options[:acl] = "public-read" if public

  options.merge!(@upload_options)
  options.merge!(upload_options)

  if copyable?(io)
    copy(io, id, **options)
  else
    put(io, id, **options)
  end
end
url(id, public: self.public, host: nil, **options)

Returns the presigned URL to the file.


:host
This option replaces the host part of the returned URL, and is typically useful for setting CDN hosts (e.g. a CloudFront distribution domain).


:public
Returns the unsigned URL to the S3 object. This requires the S3 object to be public.

All other options are forwarded to {Aws::S3::Object#presigned_url} or {Aws::S3::Object#public_url}.
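The :host rewriting can be sketched in plain Ruby (the bucket and CDN hostnames are hypothetical; the real method additionally strips the bucket name from the path for path-style URLs):

```ruby
require "uri"

url  = "https://my-bucket.s3.amazonaws.com/prefix/key?X-Amz-Signature=abc"
host = "https://cdn.example.com"

uri = URI.parse(url)
# Join the path and query onto the replacement host, as #url does.
rewritten = URI.join(host, uri.request_uri[1..-1]).to_s
rewritten  # => "https://cdn.example.com/prefix/key?X-Amz-Signature=abc"
```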

[show source]
    # File lib/shrine/storage/s3.rb
def url(id, public: self.public, host: nil, **options)
  if public || signer
    url = object(id).public_url(**options)
  else
    url = object(id).presigned_url(:get, **options)
  end

  if host
    uri = URI.parse(url)
    uri.path = uri.path.match(/^\/#{bucket.name}/).post_match unless uri.host.include?(bucket.name)
    url = URI.join(host, uri.request_uri[1..-1]).to_s
  end

  if signer
    url = signer.call(url, **options)
  end

  url
end