Constants
MAX_MULTIPART_PARTS = 10_000
MIN_PART_SIZE = 5*1024*1024
MULTIPART_THRESHOLD = { upload: 15*1024*1024, copy: 100*1024*1024 }
Public Class methods
Initializes a storage for uploading to S3. All options are forwarded to Aws::S3::Client#initialize, except the following:
:bucket
(Required.) Name of the S3 bucket.

:client
By default an Aws::S3::Client instance is created internally from the remaining options, but you can use this option to provide your own client object.

:prefix
“Directory” inside the bucket to store files into.

:upload_options
Additional options that will be used for uploading files; they are merged into the options passed to the underlying AWS SDK upload and presign calls.

:multipart_threshold
If the input file is larger than the specified size, a parallelized multipart upload/copy will be used. Defaults to MULTIPART_THRESHOLD ({ upload: 15*1024*1024, copy: 100*1024*1024 }).

:max_multipart_parts
Limits the number of parts if parallelized multipart upload/copy is used. Defaults to 10_000.
In addition to specifying the :bucket, you'll also need to provide AWS credentials. The most common way is to provide them directly via the :access_key_id, :secret_access_key, and :region options, but you can also use any other way of authentication specified in the AWS SDK documentation.
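For example, a minimal setup might look like the following (the bucket name, region, and prefix are placeholders; credentials are read from environment variables here):

require "shrine/storage/s3"

s3 = Shrine::Storage::S3.new(
  bucket:            "my-bucket",                   # placeholder bucket name
  access_key_id:     ENV["AWS_ACCESS_KEY_ID"],      # forwarded to Aws::S3::Client
  secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],  # forwarded to Aws::S3::Client
  region:            "us-east-1",
  prefix:            "cache"                        # optional "directory" inside the bucket
)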
# File lib/shrine/storage/s3.rb, line 65
def initialize(bucket:, client: nil, prefix: nil, upload_options: {},
               multipart_threshold: {}, max_multipart_parts: nil,
               signer: nil, public: nil, **s3_options)
  raise ArgumentError, "the :bucket option is nil" unless bucket

  @client = client || Aws::S3::Client.new(**s3_options)
  @bucket = Aws::S3::Bucket.new(name: bucket, client: @client)
  @prefix = prefix
  @upload_options = upload_options
  @multipart_threshold = MULTIPART_THRESHOLD.merge(multipart_threshold)
  @max_multipart_parts = max_multipart_parts || MAX_MULTIPART_PARTS
  @signer = signer
  @public = public
end
Public Instance methods
If a block is given, deletes all objects from the storage for which the block evaluates to true. Otherwise deletes all objects from the storage.
s3.clear!
# or
s3.clear! { |object| object.last_modified < Time.now - 7*24*60*60 }
# File lib/shrine/storage/s3.rb, line 213
def clear!(&block)
  objects_to_delete = bucket.objects(prefix: prefix)
  objects_to_delete = objects_to_delete.lazy.select(&block) if block

  delete_objects(objects_to_delete)
end
Deletes the file from the storage.
# File lib/shrine/storage/s3.rb, line 193
def delete(id)
  object(id).delete
end
Deletes objects at keys starting with the specified prefix.
s3.delete_prefixed("somekey/derivatives/")
# File lib/shrine/storage/s3.rb, line 200
def delete_prefixed(delete_prefix)
  # We need to make sure to combine with storage prefix, and
  # that it ends in '/' cause S3 can be squirrely about matching interior.
  delete_prefix = delete_prefix.chomp("/") + "/"
  bucket.objects(prefix: [*prefix, delete_prefix].join("/")).batch_delete!
end
Returns true if the file exists on S3.
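For example (the key is illustrative):

s3.exists?("image.jpg") #=> true or false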
# File lib/shrine/storage/s3.rb, line 121
def exists?(id)
  object(id).exists?
end
Returns an Aws::S3::Object for the given id.
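This is useful when you need to talk to the AWS SDK directly, for example to read the object's S3 metadata (the key is illustrative):

object = s3.object("image.jpg") # Aws::S3::Object
object.content_length           # size in bytes
object.etag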
# File lib/shrine/storage/s3.rb, line 221
def object(id)
  bucket.object(object_key(id))
end
Returns a Down::ChunkedIO object that downloads S3 object content on-demand. By default, read content will be cached onto disk so that it can be rewound, but if you don't need that you can pass rewindable: false. If you need a specific character encoding, you can pass it via :encoding; the default is Encoding::BINARY via Down::ChunkedIO.
Any additional options are forwarded to Aws::S3::Object#get.
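A sketch of on-demand reading (the key is just an example):

io = s3.open("image.jpg") # Down::ChunkedIO
io.read(100)              # downloads only as much content as needed
io.rewind                 # works because content is cached to disk by default
io.close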
# File lib/shrine/storage/s3.rb, line 112
def open(id, rewindable: true, encoding: nil, **options)
  chunks, length = get(id, **options)

  Down::ChunkedIO.new(chunks: chunks, rewindable: rewindable, size: length, encoding: encoding)
rescue Aws::S3::Errors::NoSuchKey
  raise Shrine::FileNotFound, "file #{id.inspect} not found on storage"
end
Returns URL, params, headers, and verb for direct uploads.
s3.presign("key") #=> # { # url: "https://my-bucket.s3.amazonaws.com/...", # fields: { ... }, # blank for PUT presigns # headers: { ... }, # blank for POST presigns # method: "post", # }
By default it calls Aws::S3::Object#presigned_post, which generates data for a POST request, but you can also specify method: :put for PUT uploads, which calls Aws::S3::Object#presigned_url.
s3.presign("key", method: :post) # for POST upload (default) s3.presign("key", method: :put) # for PUT upload
Any additional options are forwarded to the underlying AWS SDK method.
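For example, with the default POST presign you can pass presigned_post options such as a content length restriction (the 10MB limit here is arbitrary):

s3.presign("key", content_length_range: 0..10*1024*1024)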
# File lib/shrine/storage/s3.rb, line 182
def presign(id, method: :post, **presign_options)
  options = {}
  options[:acl] = "public-read" if public

  options.merge!(@upload_options)
  options.merge!(presign_options)

  send(:"presign_#{method}", id, options)
end
If the file is an UploadedFile from S3, issues a COPY command, otherwise uploads the file. For files larger than :multipart_threshold a multipart upload/copy will be used for better performance and more resilient uploads.

It assigns the correct “Content-Type” taken from the MIME type metadata, because by default S3 sets everything to “application/octet-stream”.
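A usage sketch, assuming a local file and example metadata (the key, filename, and the extra cache_control option are placeholders):

file = File.open("photo.jpg", "rb")

s3.upload(
  file,
  "destination/key.jpg",
  shrine_metadata: { "mime_type" => "image/jpeg", "filename" => "photo.jpg" },
  cache_control: "max-age=31536000" # extra option forwarded to the AWS SDK
)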
# File lib/shrine/storage/s3.rb, line 85
def upload(io, id, shrine_metadata: {}, **upload_options)
  content_type, filename = shrine_metadata.values_at("mime_type", "filename")

  options = {}
  options[:content_type] = content_type if content_type
  options[:content_disposition] = ContentDisposition.inline(filename) if filename
  options[:acl] = "public-read" if public

  options.merge!(@upload_options)
  options.merge!(upload_options)

  if copyable?(io)
    copy(io, id, **options)
  else
    put(io, id, **options)
  end
end
Returns the presigned URL to the file.
:host
Replaces the host part of the returned URL, typically useful for serving files through a CDN.

:public
Returns the unsigned public URL to the S3 object (this requires the object to be publicly accessible).

All other options are forwarded to Aws::S3::Object#presigned_url or Aws::S3::Object#public_url.
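For example (the CDN host is a placeholder):

s3.url("image.jpg")                                   # presigned, expiring URL
s3.url("image.jpg", public: true)                     # unsigned URL (object must be public)
s3.url("image.jpg", host: "https://cdn.example.com/") # rewrites the host, e.g. for a CDN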
# File lib/shrine/storage/s3.rb, line 141
def url(id, public: self.public, host: nil, **options)
  if public || signer
    url = object(id).public_url(**options)
  else
    url = object(id).presigned_url(:get, **options)
  end

  if host
    uri = URI.parse(url)
    uri.path = uri.path.match(/^\/#{bucket.name}/).post_match unless uri.host.include?(bucket.name)
    url = URI.join(host, uri.request_uri[1..-1]).to_s
  end

  if signer
    url = signer.call(url, **options)
  end

  url
end