class Shrine::Storage::S3

lib/shrine/storage/s3.rb
Superclass: Object

Constants

MAX_MULTIPART_PARTS = 10_000  
MIN_PART_SIZE = 5*1024*1024  
MULTIPART_THRESHOLD = { upload: 15*1024*1024, copy: 100*1024*1024 }  
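
These constants mirror S3's multipart limits: at most 10,000 parts per upload, and a 5MB minimum for every part except the last. As a pure-Ruby sketch of the arithmetic (illustrative only — the AWS SDK computes part sizes internally), the smallest legal part size grows once a file exceeds MAX_MULTIPART_PARTS * MIN_PART_SIZE:

```ruby
MAX_MULTIPART_PARTS = 10_000
MIN_PART_SIZE       = 5 * 1024 * 1024 # 5MB

# Smallest part size that keeps the part count within S3's 10,000-part
# limit while respecting the 5MB minimum. Illustrative only -- the AWS
# SDK chooses actual part sizes internally.
def minimum_part_size(file_size)
  [(file_size.to_f / MAX_MULTIPART_PARTS).ceil, MIN_PART_SIZE].max
end

minimum_part_size(100 * 1024 * 1024) #=> 5242880 (5MB suffices for a 100MB file)
minimum_part_size(1024**4)           #=> 109951163 (a 1TB file needs ~105MB parts)
```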

Attributes

bucket [R]
client [R]
prefix [R]
public [R]
signer [R]
upload_options [R]

Public Class methods

new (bucket:, client: nil, prefix: nil, upload_options: {}, multipart_threshold: {}, signer: nil, public: nil, **s3_options)

Initializes a storage for uploading to S3. All options are forwarded to Aws::S3::Client#initialize, except the following:

:bucket

(Required). Name of the S3 bucket.

:client

By default an Aws::S3::Client instance is created internally from additional options, but you can use this option to provide your own client. This can be an Aws::S3::Client or an Aws::S3::Encryption::Client object.

:prefix

“Directory” inside the bucket to store files into.

:upload_options

Additional options that will be used for uploading files; they will be passed to Aws::S3::Object#put, Aws::S3::Object#copy_from, and Aws::S3::Bucket#presigned_post.

:multipart_threshold

If the input file is larger than the specified size, a parallelized multipart upload/copy will be used. Defaults to {upload: 15*1024*1024, copy: 100*1024*1024} (15MB for upload requests, 100MB for copy requests).

:signer

A custom URL signer, called with the object URL and URL options, and expected to return the signed URL. When set, #url generates a public URL and passes it through the signer.

:public

When set to true, uploaded objects will receive the “public-read” ACL, and #url will return public (unsigned) URLs by default.

In addition to specifying the :bucket, you'll also need to provide AWS credentials. The most common way is to provide them directly via :access_key_id, :secret_access_key, and :region options. But you can also use any other way of authentication specified in the AWS SDK documentation.

    # File lib/shrine/storage/s3.rb
    def initialize(bucket:, client: nil, prefix: nil, upload_options: {}, multipart_threshold: {}, signer: nil, public: nil, **s3_options)
      raise ArgumentError, "the :bucket option is nil" unless bucket

      @client = client || Aws::S3::Client.new(**s3_options)
      @bucket = Aws::S3::Bucket.new(name: bucket, client: @client)
      @prefix = prefix
      @upload_options = upload_options
      @multipart_threshold = MULTIPART_THRESHOLD.merge(multipart_threshold)
      @signer = signer
      @public = public
    end

Public Instance methods

clear! (&block)

If block is given, deletes all objects from the storage for which the block evaluates to true. Otherwise deletes all objects from the storage.

s3.clear!
# or
s3.clear! { |object| object.last_modified < Time.now - 7*24*60*60 }
    # File lib/shrine/storage/s3.rb
    def clear!(&block)
      objects_to_delete = bucket.objects(prefix: prefix)
      objects_to_delete = objects_to_delete.lazy.select(&block) if block

      delete_objects(objects_to_delete)
    end

delete (id)

Deletes the file from the storage.

    # File lib/shrine/storage/s3.rb
    def delete(id)
      object(id).delete
    end

delete_prefixed (delete_prefix)

Deletes objects at keys starting with the specified prefix.

s3.delete_prefixed("somekey/derivatives/")

    # File lib/shrine/storage/s3.rb
    def delete_prefixed(delete_prefix)
      # We need to make sure to combine with storage prefix, and
      # that it ends in '/' cause S3 can be squirrely about matching interior.
      delete_prefix = delete_prefix.chomp("/") + "/"
      bucket.objects(prefix: [*prefix, delete_prefix].join("/")).batch_delete!
    end

exists? (id)

Returns true if the file exists on S3.

    # File lib/shrine/storage/s3.rb
    def exists?(id)
      object(id).exists?
    end

object (id)

Returns an Aws::S3::Object for the given id.

    # File lib/shrine/storage/s3.rb
    def object(id)
      bucket.object([*prefix, id].join("/"))
    end

open (id, rewindable: true, **options)

Returns a Down::ChunkedIO object that downloads S3 object content on-demand. By default, read content will be cached onto disk so that it can be rewound, but if you don't need that you can pass rewindable: false.

Any additional options are forwarded to Aws::S3::Object#get.

    # File lib/shrine/storage/s3.rb
    def open(id, rewindable: true, **options)
      chunks, length = get_object(object(id), options)

      Down::ChunkedIO.new(chunks: chunks, rewindable: rewindable, size: length)
    rescue Aws::S3::Errors::NoSuchKey
      raise Shrine::FileNotFound, "file #{id.inspect} not found on storage"
    end

presign (id, method: :post, **presign_options)

Returns URL, params, headers, and verb for direct uploads.

s3.presign("key") #=>
# {
#   url: "https://my-bucket.s3.amazonaws.com/...",
#   fields: { ... },  # blank for PUT presigns
#   headers: { ... }, # blank for POST presigns
#   method: "post",
# }

By default it calls Aws::S3::Object#presigned_post which generates data for a POST request, but you can also specify method: :put for PUT uploads which calls Aws::S3::Object#presigned_url.

s3.presign("key", method: :post) # for POST upload (default)
s3.presign("key", method: :put)  # for PUT upload

Any additional options are forwarded to the underlying AWS SDK method.

    # File lib/shrine/storage/s3.rb
    def presign(id, method: :post, **presign_options)
      options = {}
      options[:acl] = "public-read" if public

      options.merge!(@upload_options)
      options.merge!(presign_options)

      send(:"presign_#{method}", id, options)
    end

upload (io, id, shrine_metadata: {}, **upload_options)

If the file is an UploadedFile from S3, issues a COPY command, otherwise uploads the file. For files larger than :multipart_threshold a multipart upload/copy will be used for better performance and more resilient uploads.

It assigns the correct “Content-Type” taken from the MIME type, because by default S3 sets everything to “application/octet-stream”.

    # File lib/shrine/storage/s3.rb
    def upload(io, id, shrine_metadata: {}, **upload_options)
      content_type, filename = shrine_metadata.values_at("mime_type", "filename")

      options = {}
      options[:content_type] = content_type if content_type
      options[:content_disposition] = ContentDisposition.inline(filename) if filename
      options[:acl] = "public-read" if public

      options.merge!(@upload_options)
      options.merge!(upload_options)

      if copyable?(io)
        copy(io, id, **options)
      else
        put(io, id, **options)
      end
    end

url (id, public: self.public, host: nil, **options)

Returns the presigned URL to the file.

:host

This option replaces the host part of the returned URL, and is typically useful for setting CDN hosts (e.g. http://abc123.cloudfront.net)

:public

Returns the unsigned URL to the S3 object. This requires the S3 object to be public.

All other options are forwarded to Aws::S3::Object#presigned_url or Aws::S3::Object#public_url.

    # File lib/shrine/storage/s3.rb
    def url(id, public: self.public, host: nil, **options)
      if public || signer
        url = object(id).public_url(**options)
      else
        url = object(id).presigned_url(:get, **options)
      end

      if host
        uri = URI.parse(url)
        uri.path = uri.path.match(/^\/#{bucket.name}/).post_match unless uri.host.include?(bucket.name)
        url = URI.join(host, uri.request_uri[1..-1]).to_s
      end

      if signer
        url = signer.call(url, **options)
      end

      url
    end