class Shrine::Storage::S3

  1. lib/shrine/storage/s3.rb
Superclass: Object

Classes and Modules

  1. Shrine::Storage::S3::Tempfile

Attributes

bucket [R]
client [R]
host [R]
prefix [R]
public [R]
signer [R]
upload_options [R]

Public Class methods

new (bucket:, client: nil, prefix: nil, host: nil, upload_options: {}, multipart_threshold: {}, signer: nil, public: nil, **s3_options)

Initializes a storage for uploading to S3. All options are forwarded to Aws::S3::Client#initialize, except the following:

:bucket

(Required). Name of the S3 bucket.

:client

By default an Aws::S3::Client instance is created internally from additional options, but you can use this option to provide your own client. This can be an Aws::S3::Client or an Aws::S3::Encryption::Client object.

:prefix

“Directory” inside the bucket to store files into.

:upload_options

Additional options that will be used when uploading files; they are passed to Aws::S3::Object#put, Aws::S3::Object#copy_from, and Aws::S3::Bucket#presigned_post.

:multipart_threshold

If the input file is larger than the specified size, a parallelized multipart upload or copy will be used. Defaults to {upload: 15*1024*1024, copy: 100*1024*1024} (15MB for upload requests, 100MB for copy requests).

In addition to specifying the :bucket, you'll also need to provide AWS credentials. The most common way is to provide them directly via the :access_key_id, :secret_access_key, and :region options, but you can also use any other means of authentication described in the AWS SDK documentation.
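A typical setup might look like the following (a sketch only; the bucket name, region, credentials, and option values are placeholders, not values from this document):

```ruby
require "shrine/storage/s3"

s3 = Shrine::Storage::S3.new(
  bucket:            "my-bucket",                 # required
  region:            "eu-west-1",                 # forwarded to Aws::S3::Client#initialize
  access_key_id:     "<access key>",
  secret_access_key: "<secret key>",
  prefix:            "cache",                     # store files under the "cache" directory
  upload_options:    { acl: "public-read" },      # applied to every upload
  multipart_threshold: { upload: 50*1024*1024 },  # raise the multipart cutoff for uploads
)
```

Options not recognized by the storage (here :region, :access_key_id, :secret_access_key) are forwarded to Aws::S3::Client#initialize.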

[show source]
   # File lib/shrine/storage/s3.rb
def initialize(bucket:, client: nil, prefix: nil, host: nil, upload_options: {}, multipart_threshold: {}, signer: nil, public: nil, **s3_options)
  raise ArgumentError, "the :bucket option is nil" unless bucket

  Shrine.deprecation("The :host option to Shrine::Storage::S3#initialize is deprecated and will be removed in Shrine 3. Pass :host to S3#url instead, you can also use default_url_options plugin.") if host

  if multipart_threshold.is_a?(Integer)
    Shrine.deprecation("Accepting the :multipart_threshold S3 option as an integer is deprecated, use a hash with :upload and :copy keys instead, e.g. {upload: 15*1024*1024, copy: 150*1024*1024}")
    multipart_threshold = { upload: multipart_threshold }
  end
  multipart_threshold = { upload: 15*1024*1024, copy: 100*1024*1024 }.merge(multipart_threshold)

  @client = client || Aws::S3::Client.new(**s3_options)
  @bucket = Aws::S3::Bucket.new(name: bucket, client: @client)
  @prefix = prefix
  @host = host
  @upload_options = upload_options
  @multipart_threshold = multipart_threshold
  @signer = signer
  @public = public
end

Public Instance methods

clear! (&block)

If a block is given, deletes all objects from the storage for which the block evaluates to true; otherwise deletes all objects from the storage.

s3.clear!
# or
s3.clear! { |object| object.last_modified < Time.now - 7*24*60*60 }
[show source]
    # File lib/shrine/storage/s3.rb
def clear!(&block)
  objects_to_delete = Enumerator.new do |yielder|
    bucket.objects(prefix: prefix).each do |object|
      yielder << object if block.nil? || block.call(object)
    end
  end

  delete_objects(objects_to_delete)
end
delete (id)

Deletes the file from the storage.

[show source]
    # File lib/shrine/storage/s3.rb
def delete(id)
  object(id).delete
end
exists? (id)

Returns true if the file exists on S3.

[show source]
    # File lib/shrine/storage/s3.rb
def exists?(id)
  object(id).exists?
end
method_missing (name, *args, &block)

Catches the deprecated #download and #stream methods.

[show source]
    # File lib/shrine/storage/s3.rb
def method_missing(name, *args, &block)
  case name
  when :stream   then deprecated_stream(*args, &block)
  when :download then deprecated_download(*args, &block)
  else
    super
  end
end
object (id)

Returns an Aws::S3::Object for the given id.
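The object key is built by joining the storage prefix (if any) with the file id. This joining is plain Ruby and can be sketched independently of the AWS SDK (the helper name and values below are illustrative):

```ruby
# Mirrors the key construction in #object: [*prefix, id].join("/").
# With a prefix, the id is nested under it; with prefix = nil, the
# splat expands to nothing and the id is used as-is.
def s3_key(prefix, id)
  [*prefix, id].join("/")
end

s3_key("cache", "foo/image.jpg") #=> "cache/foo/image.jpg"
s3_key(nil, "foo/image.jpg")     #=> "foo/image.jpg"
```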

[show source]
    # File lib/shrine/storage/s3.rb
def object(id)
  bucket.object([*prefix, id].join("/"))
end
open (id, rewindable: true, **options)

Returns a Down::ChunkedIO object that downloads S3 object content on-demand. By default, the content that has been read is cached on disk so the IO can be rewound; if you don't need rewinding, you can pass rewindable: false.

Any additional options are forwarded to Aws::S3::Object#get.
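The on-demand behavior can be modelled in plain Ruby with an Enumerator: chunks are produced lazily, and with rewindable: true the consumed chunks are cached so reading can restart without re-downloading. This is a simplified model of what Down::ChunkedIO does, not its actual implementation:

```ruby
# Lazily produced chunks, standing in for S3 object parts:
chunks = Enumerator.new do |yielder|
  yielder << "part1-"
  yielder << "part2"
end

cache = []  # with rewindable: true, consumed chunks are cached (on disk in Down::ChunkedIO)
data  = +""
chunks.each do |chunk|
  cache << chunk
  data  << chunk
end

data       #=> "part1-part2"
cache.join #=> "part1-part2" -- a rewind replays from the cache instead of re-fetching
```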

[show source]
    # File lib/shrine/storage/s3.rb
def open(id, rewindable: true, **options)
  object = object(id)

  load_data(object, **options)

  Down::ChunkedIO.new(
    chunks:     object.enum_for(:get, **options),
    rewindable: rewindable,
    size:       object.content_length,
    data:       { object: object },
  )
end
presign (id, method: :post, **presign_options)

Returns URL, params, headers, and verb for direct uploads.

s3.presign("key") #=>
# {
#   url: "https://my-bucket.s3.amazonaws.com/...",
#   fields: { ... },  # blank for PUT presigns
#   headers: { ... }, # blank for POST presigns
#   method: "post",
# }

By default it calls Aws::S3::Object#presigned_post which generates data for a POST request, but you can also specify method: :put for PUT uploads which calls Aws::S3::Object#presigned_url.

s3.presign("key", method: :post) # for POST upload (default)
s3.presign("key", method: :put)  # for PUT upload

Any additional options are forwarded to the underlying AWS SDK method.

[show source]
    # File lib/shrine/storage/s3.rb
def presign(id, method: :post, **presign_options)
  options = {}
  options[:acl] = "public-read" if public

  options.merge!(@upload_options)
  options.merge!(presign_options)

  options[:content_disposition] = encode_content_disposition(options[:content_disposition]) if options[:content_disposition]

  if method == :post
    presigned_post = object(id).presigned_post(options)

    Struct.new(:method, :url, :fields).new(method, presigned_post.url, presigned_post.fields)
  else
    url = object(id).presigned_url(method, options)

    # When any of these options are specified, the corresponding request
    # headers must be included in the upload request.
    headers = {}
    headers["Content-Length"]      = options[:content_length]      if options[:content_length]
    headers["Content-Type"]        = options[:content_type]        if options[:content_type]
    headers["Content-Disposition"] = options[:content_disposition] if options[:content_disposition]
    headers["Content-Encoding"]    = options[:content_encoding]    if options[:content_encoding]
    headers["Content-Language"]    = options[:content_language]    if options[:content_language]
    headers["Content-MD5"]         = options[:content_md5]         if options[:content_md5]

    { method: method, url: url, headers: headers }
  end
end
s3 ()

Returns an Aws::S3::Resource object.

[show source]
   # File lib/shrine/storage/s3.rb
def s3
  Shrine.deprecation("Shrine::Storage::S3#s3 that returns an Aws::S3::Resource is deprecated, use Shrine::Storage::S3#client which returns an Aws::S3::Client object.")
  Aws::S3::Resource.new(client: @client)
end
upload (io, id, shrine_metadata: {}, **upload_options)

If the file is an UploadedFile from S3, issues a COPY command; otherwise uploads the file. For files larger than :multipart_threshold, a multipart upload/copy will be used for better performance and more resilient transfers.

It assigns the correct “Content-Type” taken from the MIME type, because by default S3 sets everything to “application/octet-stream”.
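Per-call options override the storage-level :upload_options, which in turn override the computed defaults (content type, content disposition, ACL). This precedence is plain hash merging and can be sketched on its own (the option values below are illustrative, not taken from this document):

```ruby
# Sketch of option precedence in #upload:
computed        = { content_type: "image/jpeg", acl: "public-read" }  # derived from metadata / :public
storage_options = { cache_control: "max-age=31536000" }               # the storage's :upload_options
call_options    = { acl: "private" }                                  # passed to this #upload call

options = computed.merge(storage_options).merge(call_options)
options[:acl]           #=> "private"          (per-call option wins)
options[:cache_control] #=> "max-age=31536000" (storage-level option preserved)
options[:content_type]  #=> "image/jpeg"       (computed default preserved)
```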

[show source]
    # File lib/shrine/storage/s3.rb
def upload(io, id, shrine_metadata: {}, **upload_options)
  content_type, filename = shrine_metadata.values_at("mime_type", "filename")

  options = {}
  options[:content_type] = content_type if content_type
  options[:content_disposition] = ContentDisposition.inline(filename) if filename
  options[:acl] = "public-read" if public

  options.merge!(@upload_options)
  options.merge!(upload_options)

  options[:content_disposition] = encode_content_disposition(options[:content_disposition]) if options[:content_disposition]

  if copyable?(io)
    copy(io, id, **options)
  else
    bytes_uploaded = put(io, id, **options)
    shrine_metadata["size"] ||= bytes_uploaded
  end
end
url (id, download: nil, public: self.public, host: self.host, **options)

Returns the presigned URL to the file.

:host

This option replaces the host part of the returned URL, and is typically useful for setting CDN hosts (e.g. http://abc123.cloudfront.net).

:download

If set to true, creates a “forced download” link, which means that the browser will never display the file and always ask the user to download it.

All other options are forwarded to Aws::S3::Object#presigned_url or Aws::S3::Object#public_url.
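The :host replacement itself is plain URI manipulation: the path and query of the generated S3 URL are re-joined onto the new host. A standalone sketch of that step for a virtual-hosted-style URL (the URLs are placeholders):

```ruby
require "uri"

# Mirrors the host-swapping step in #url. For a virtual-hosted-style URL
# the bucket name lives in the hostname, so the path is kept as-is:
url  = "https://my-bucket.s3.amazonaws.com/prefix/image.jpg?X-Amz-Expires=900"
host = "https://abc123.cloudfront.net"

uri     = URI.parse(url)
cdn_url = URI.join(host, uri.request_uri[1..-1]).to_s
cdn_url #=> "https://abc123.cloudfront.net/prefix/image.jpg?X-Amz-Expires=900"
```

Note that for path-style URLs (where the bucket name is the first path segment), #url additionally strips the bucket name from the path before joining.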

[show source]
    # File lib/shrine/storage/s3.rb
def url(id, download: nil, public: self.public, host: self.host, **options)
  options[:response_content_disposition] ||= "attachment" if download
  options[:response_content_disposition] = encode_content_disposition(options[:response_content_disposition]) if options[:response_content_disposition]

  if public || signer
    url = object(id).public_url(**options)
  else
    url = object(id).presigned_url(:get, **options)
  end

  if host
    uri = URI.parse(url)
    uri.path = uri.path.match(/^\/#{bucket.name}/).post_match unless uri.host.include?(bucket.name)
    url = URI.join(host, uri.request_uri[1..-1]).to_s
  end

  if signer
    url = signer.call(url, **options)
  end

  url
end