Constants
MAX_MULTIPART_PARTS = 10_000
MIN_PART_SIZE       = 5*1024*1024
MULTIPART_THRESHOLD = { upload: 15*1024*1024, copy: 100*1024*1024 }
Public Class methods
Initializes a storage for uploading to S3. All options are forwarded to Aws::S3::Client#initialize, except the following:
:bucket
(Required). Name of the S3 bucket.

:client
By default an Aws::S3::Client is created from the remaining options, but you can use this option to pass in your own client object instead.

:prefix
“Directory” inside the bucket to store files into.

:upload_options
Additional options that will be used for uploading files; they will be passed on to the underlying AWS SDK upload calls.

:multipart_threshold
If the input file is larger than the specified size, a parallelized multipart upload/copy will be used. Defaults to { upload: 15*1024*1024, copy: 100*1024*1024 } (15MB for uploads, 100MB for copies).
In addition to specifying the :bucket, you’ll also need to provide AWS credentials. The most common way is to provide them directly via the :access_key_id, :secret_access_key, and :region options. But you can also use any other way of authentication specified in the AWS SDK documentation.
# File lib/shrine/storage/s3.rb
def initialize(bucket:, client: nil, prefix: nil, upload_options: {}, multipart_threshold: {}, signer: nil, public: nil, **s3_options)
  raise ArgumentError, "the :bucket option is nil" unless bucket

  @client = client || Aws::S3::Client.new(**s3_options)
  @bucket = Aws::S3::Bucket.new(name: bucket, client: @client)
  @prefix = prefix
  @upload_options = upload_options
  @multipart_threshold = MULTIPART_THRESHOLD.merge(multipart_threshold)
  @signer = signer
  @public = public
end
Public Instance methods
If a block is given, deletes all objects from the storage for which the block evaluates to true. Otherwise deletes all objects from the storage.
s3.clear!
# or
s3.clear! { |object| object.last_modified < Time.now - 7*24*60*60 }
# File lib/shrine/storage/s3.rb
def clear!(&block)
  objects_to_delete = bucket.objects(prefix: prefix)
  objects_to_delete = objects_to_delete.lazy.select(&block) if block

  delete_objects(objects_to_delete)
end
Deletes the file from the storage.
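For example (the object key is illustrative):

s3.delete("image.jpg")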
# File lib/shrine/storage/s3.rb
def delete(id)
  object(id).delete
end
Deletes objects at keys starting with the specified prefix.
s3.delete_prefixed("somekey/derivatives/")
# File lib/shrine/storage/s3.rb
def delete_prefixed(delete_prefix)
  # We need to make sure to combine with storage prefix, and
  # that it ends in '/' cause S3 can be squirrely about matching interior.
  delete_prefix = delete_prefix.chomp("/") + "/"
  bucket.objects(prefix: [*prefix, delete_prefix].join("/")).batch_delete!
end
Returns true if the file exists on S3.
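For example (illustrative key):

s3.exists?("image.jpg") #=> true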
# File lib/shrine/storage/s3.rb
def exists?(id)
  object(id).exists?
end
Returns an Aws::S3::Object for the given id.
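This is useful for dropping down to the AWS SDK directly, for example (illustrative key; #content_length and #last_modified are standard Aws::S3::Object readers):

object = s3.object("image.jpg")
object.content_length  # fetched from S3 via the AWS SDK
object.last_modified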
# File lib/shrine/storage/s3.rb
def object(id)
  bucket.object(object_key(id))
end
Returns a Down::ChunkedIO object that downloads S3 object content on-demand. By default, read content will be cached onto disk so that it can be rewound, but if you don’t need that you can pass rewindable: false.

Any additional options are forwarded to Aws::S3::Object#get.
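A usage sketch (illustrative key):

io = s3.open("image.jpg")
io.read(1*1024*1024)  # downloads only as much content as needed
io.rewind             # possible because read content is cached to disk
io.close

s3.open("image.jpg", rewindable: false)  # skip disk caching when rewinding isn't needed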
# File lib/shrine/storage/s3.rb
def open(id, rewindable: true, **options)
  chunks, length = get(id, **options)

  Down::ChunkedIO.new(chunks: chunks, rewindable: rewindable, size: length)
rescue Aws::S3::Errors::NoSuchKey
  raise Shrine::FileNotFound, "file #{id.inspect} not found on storage"
end
Returns URL, params, headers, and verb for direct uploads.
s3.presign("key") #=> # { # url: "https://my-bucket.s3.amazonaws.com/...", # fields: { ... }, # blank for PUT presigns # headers: { ... }, # blank for POST presigns # method: "post", # }
By default it calls Aws::S3::Object#presigned_post, which generates data for a POST request, but you can also specify method: :put for PUT uploads, which calls Aws::S3::Object#presigned_url.
s3.presign("key", method: :post) # for POST upload (default) s3.presign("key", method: :put) # for PUT upload
Any additional options are forwarded to the underlying AWS SDK method.
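For example, with POST presigns you could limit the allowed upload size via the :content_length_range option accepted by Aws::S3::Object#presigned_post (the key and limit here are illustrative):

s3.presign("key", content_length_range: 0..10*1024*1024)  # allow uploads up to 10 MB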
# File lib/shrine/storage/s3.rb
def presign(id, method: :post, **presign_options)
  options = {}
  options[:acl] = "public-read" if public

  options.merge!(@upload_options)
  options.merge!(presign_options)

  send(:"presign_#{method}", id, options)
end
If the file is an UploadedFile from S3, issues a COPY command, otherwise uploads the file. For files larger than :multipart_threshold a multipart upload/copy will be used for better performance and more resilient uploads.

It assigns the correct “Content-Type” taken from the MIME type, because by default S3 sets everything to “application/octet-stream”.
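A usage sketch (the file, key, and metadata below are illustrative):

file = File.open("nature.jpg", "rb")

s3.upload(file, "photos/nature.jpg", shrine_metadata: {
  "mime_type" => "image/jpeg",
  "filename"  => "nature.jpg",
})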
# File lib/shrine/storage/s3.rb
def upload(io, id, shrine_metadata: {}, **upload_options)
  content_type, filename = shrine_metadata.values_at("mime_type", "filename")

  options = {}
  options[:content_type] = content_type if content_type
  options[:content_disposition] = ContentDisposition.inline(filename) if filename
  options[:acl] = "public-read" if public

  options.merge!(@upload_options)
  options.merge!(upload_options)

  if copyable?(io)
    copy(io, id, **options)
  else
    put(io, id, **options)
  end
end
Returns the presigned URL to the file.
:host
This option replaces the host part of the returned URL, and is typically useful for setting CDN hosts (e.g. a CloudFront distribution domain).

:public
Returns the unsigned URL to the S3 object (the object needs to be publicly accessible for the URL to work).

All other options are forwarded to Aws::S3::Object#presigned_url or Aws::S3::Object#public_url.
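For example (the key and CDN host are placeholders):

s3.url("image.jpg")                                          # signed GET URL (default)
s3.url("image.jpg", public: true)                            # unsigned URL (object must be publicly readable)
s3.url("image.jpg", host: "https://abc123.cloudfront.net/")  # rewrite the URL host, e.g. for a CDN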
# File lib/shrine/storage/s3.rb
def url(id, public: self.public, host: nil, **options)
  if public || signer
    url = object(id).public_url(**options)
  else
    url = object(id).presigned_url(:get, **options)
  end

  if host
    uri = URI.parse(url)
    uri.path = uri.path.match(/^\/#{bucket.name}/).post_match unless uri.host.include?(bucket.name)
    url = URI.join(host, uri.request_uri[1..-1]).to_s
  end

  if signer
    url = signer.call(url, **options)
  end

  url
end