class Shrine::Storage::S3

lib/shrine/storage/s3.rb
Superclass: Object

The S3 storage handles uploads to Amazon S3 service, using the aws-sdk-s3 gem:

gem "aws-sdk-s3", "~> 1.2"

It is initialized with the following 4 required options:

s3 = Shrine::Storage::S3.new(
  access_key_id: "abc",
  secret_access_key: "xyz",
  region: "eu-west-1",
  bucket: "my-app",
)

The storage exposes the underlying Aws objects:

s3.client #=> #<Aws::S3::Client>
s3.client.config.credentials.access_key_id #=> "abc"
s3.client.config.credentials.secret_access_key #=> "xyz"
s3.client.config.region #=> "eu-west-1"

s3.bucket #=> #<Aws::S3::Bucket>
s3.bucket.name #=> "my-app"

s3.object("key") #=> #<Aws::S3::Object>

Prefix

The :prefix option can be specified for uploading all files inside a specific S3 prefix (folder), which is useful when using S3 for both cache and store:

Shrine::Storage::S3.new(prefix: "cache", **s3_options)
Shrine::Storage::S3.new(prefix: "store", **s3_options)

Upload options

Sometimes you'll want to add additional upload options to all S3 uploads. You can do that by passing the :upload_options option:

Shrine::Storage::S3.new(upload_options: {acl: "private"}, **s3_options)

These options will be passed to aws-sdk-s3's methods for uploading, copying and presigning.

You can also generate upload options per upload with the upload_options plugin:

class MyUploader < Shrine
  plugin :upload_options, store: ->(io, context) do
    if context[:version] == :thumb
      {acl: "public-read"}
    else
      {acl: "private"}
    end
  end
end

or when using the uploader directly:

uploader.upload(file, upload_options: {acl: "private"})

Note that, unlike the :upload_options storage option, upload options given on the uploader level won't be forwarded for generating presigns, since presigns are generated using the storage directly.
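
For example, a minimal sketch of the difference (assuming s3_options holds your credentials):

# applied to uploads, copies, and presigns:
Shrine::Storage::S3.new(upload_options: {acl: "private"}, **s3_options)

# applied only to this one upload, not to presigns:
uploader.upload(file, upload_options: {acl: "private"})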

URL options

This storage supports various URL options, which are forwarded from the uploaded file's #url method:

s3.url("image.jpg", public: true)   # public URL without signed parameters
s3.url("image.jpg", download: true) # forced download URL

All other options are forwarded to the aws-sdk-s3 gem:

s3.url("image.jpg", expires_in: 15)
s3.url("image.jpg", virtual_host: true)

CDN

If you're using a CDN with S3 like Amazon CloudFront, you can specify the :host option to #url:

s3.url("image.jpg", host: "http://abc123.cloudfront.net")
#=> "http://abc123.cloudfront.net/image.jpg"

You can have the :host option passed automatically for every URL by using the default_url_options plugin:

plugin :default_url_options, store: { host: "http://abc123.cloudfront.net" }

Accelerate endpoint

To use Amazon S3's Transfer Acceleration feature, you can change the :endpoint of the underlying client to the accelerate endpoint, and this will be applied both to regular and presigned uploads, as well as download URLs.

Shrine::Storage::S3.new(endpoint: "https://s3-accelerate.amazonaws.com")

Presigns

This storage can generate presigns for direct uploads to Amazon S3, and it accepts additional options which are passed to aws-sdk-s3. There are three places in which you can specify presign options (sketched after this list):

  • in :upload_options option on this storage

  • in presign_endpoint plugin through :presign_options

  • in Storage::S3#presign by forwarding options
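
A minimal sketch of all three (assuming s3_options holds your credentials and "key" is the object key):

# 1) on the storage itself, applied to uploads, copies, and presigns
Shrine::Storage::S3.new(upload_options: {acl: "private"}, **s3_options)

# 2) through the presign_endpoint plugin
plugin :presign_endpoint, presign_options: {acl: "private"}

# 3) forwarded directly to Storage::S3#presign
s3.presign("key", acl: "private")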

Large files

The aws-sdk-s3 gem has the ability to automatically use multipart upload/copy for larger files, splitting the file into multiple chunks and uploading/copying them in parallel.

By default, any file that is uploaded will use a multipart upload if it's larger than 15MB, and any file that is copied will use a multipart copy if it's larger than 100MB, but you can change these thresholds via the :multipart_threshold option:

thresholds = {upload: 30*1024*1024, copy: 200*1024*1024}
Shrine::Storage::S3.new(multipart_threshold: thresholds, **s3_options)

If you want to change how many threads aws-sdk-s3 will use for multipart upload/copy, you can use the upload_options plugin to specify :thread_count:

plugin :upload_options, store: ->(io, context) do
  {thread_count: 5}
end

Clearing cache

If you're using S3 as a cache, you will probably want to periodically delete old files which aren't used anymore. S3 has a built-in way to do this with object lifecycle rules; see the AWS documentation on lifecycle configuration for instructions.
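
As an illustration, here is a minimal sketch of such a lifecycle rule set up with aws-sdk-s3 (the bucket name, rule id, and 7-day expiry are assumptions):

require "aws-sdk-s3"

client = Aws::S3::Client.new(region: "eu-west-1")

# expire objects under the "cache/" prefix 7 days after creation
client.put_bucket_lifecycle_configuration(
  bucket: "my-app",
  lifecycle_configuration: {
    rules: [{
      id: "clear-cache", # hypothetical rule name
      status: "Enabled",
      filter: {prefix: "cache/"},
      expiration: {days: 7},
    }],
  },
)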

Alternatively you can periodically call the #clear! method:

# deletes all objects that were uploaded more than 7 days ago
s3.clear! { |object| object.last_modified < Time.now - 7*24*60*60 }

Attributes

bucket [R]
client [R]
host [R]
prefix [R]
upload_options [R]

Public Class methods

new (bucket:, prefix: nil, host: nil, upload_options: {}, multipart_threshold: {}, **s3_options)

Initializes a storage for uploading to S3.

:access_key_id
:secret_access_key
:region
:bucket

AWS credentials, region, and the bucket to upload to, all required by the storage and the aws-sdk-s3 gem.

:prefix

“Folder” name inside the bucket to store files into.

:upload_options

Additional options that will be used when uploading files; they are passed to Aws::S3::Object#put, Aws::S3::Object#copy_from, and Aws::S3::Object#presigned_post.

:multipart_threshold

If the input file is larger than the specified size, a parallelized multipart upload/copy will be used. Defaults to {upload: 15*1024*1024, copy: 100*1024*1024} (15MB for upload requests, 100MB for copy requests).

All other options are forwarded to Aws::S3::Client#initialize.

[show source]
# File lib/shrine/storage/s3.rb, line 199
def initialize(bucket:, prefix: nil, host: nil, upload_options: {}, multipart_threshold: {}, **s3_options)
  Shrine.deprecation("The :host option to Shrine::Storage::S3#initialize is deprecated and will be removed in Shrine 3. Pass :host to S3#url instead, you can also use default_url_options plugin.") if host
  resource = Aws::S3::Resource.new(**s3_options)

  if multipart_threshold.is_a?(Integer)
    Shrine.deprecation("Accepting the :multipart_threshold S3 option as an integer is deprecated, use a hash with :upload and :copy keys instead, e.g. {upload: 15*1024*1024, copy: 150*1024*1024}")
    multipart_threshold = { upload: multipart_threshold }
  end
  multipart_threshold = { upload: 15*1024*1024, copy: 100*1024*1024 }.merge(multipart_threshold)

  @bucket = resource.bucket(bucket) or fail(ArgumentError, "the :bucket option was nil")
  @client = resource.client
  @prefix = prefix
  @host = host
  @upload_options = upload_options
  @multipart_threshold = multipart_threshold
end

Public Instance methods

clear! (&block)

If block is given, deletes all objects from the storage for which the block evaluates to true. Otherwise deletes all objects from the storage.

s3.clear!
# or
s3.clear! { |object| object.last_modified < Time.now - 7*24*60*60 }

[show source]
# File lib/shrine/storage/s3.rb, line 347
def clear!(&block)
  objects_to_delete = Enumerator.new do |yielder|
    bucket.objects(prefix: prefix).each do |object|
      condition = block.call(object) if block
      yielder << object unless condition == false
    end
  end

  delete_objects(objects_to_delete)
end
delete (id)

Deletes the file from the storage.

[show source]
# File lib/shrine/storage/s3.rb, line 331
def delete(id)
  object(id).delete
end
download (id, **options)

Downloads the file from S3 and returns a Tempfile. Any additional options are forwarded to Aws::S3::Object#get.
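
For example (a sketch, assuming "image.jpg" exists on the storage):

tempfile = s3.download("image.jpg")
tempfile.path          #=> "/tmp/shrine-s3..."
tempfile.content_type  #=> "image/jpeg"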

[show source]
# File lib/shrine/storage/s3.rb, line 253
def download(id, **options)
  tempfile = Tempfile.new(["shrine-s3", File.extname(id)], binmode: true)
  (object = object(id)).get(response_target: tempfile, **options)
  tempfile.singleton_class.instance_eval { attr_accessor :content_type }
  tempfile.content_type = object.content_type
  tempfile.tap(&:open)
end
exists? (id)

Returns true if the file exists on S3.

[show source]
# File lib/shrine/storage/s3.rb, line 273
def exists?(id)
  object(id).exists?
end
method_missing (name, *args)

Catches the deprecated #stream method.

[show source]
# File lib/shrine/storage/s3.rb, line 364
def method_missing(name, *args)
  if name == :stream
    Shrine.deprecation("Shrine::Storage::S3#stream is deprecated over calling #each_chunk on S3#open.")
    object = object(*args)
    object.get { |chunk| yield chunk, object.content_length }
  else
    super
  end
end
multi_delete (ids)

Deletes multiple files at once from the storage.

[show source]
# File lib/shrine/storage/s3.rb, line 336
def multi_delete(ids)
  objects_to_delete = ids.map { |id| object(id) }
  delete_objects(objects_to_delete)
end
object (id)

Returns an Aws::S3::Object for the given id.

[show source]
# File lib/shrine/storage/s3.rb, line 359
def object(id)
  bucket.object([*prefix, id].join("/"))
end
open (id, **options)

Returns a Down::ChunkedIO object representing the S3 object. Any additional options are forwarded to Aws::S3::Object#get.
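
For example (a sketch, assuming "image.jpg" exists on the storage):

io = s3.open("image.jpg")
io.size       # size of the S3 object in bytes
io.read(1024) # retrieves only as much data as needed, not the whole file
io.close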

[show source]
# File lib/shrine/storage/s3.rb, line 265
def open(id, **options)
  object = object(id)
  io = Down::ChunkedIO.new(chunks: object.enum_for(:get, **options), data: { object: object })
  io.size = object.content_length
  io
end
presign (id, **options)

Returns a signature for direct uploads. Internally it calls Aws::S3::Object#presigned_post, and forwards any additional options to it.
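
For example (a sketch; actual values depend on your bucket and credentials):

presign = s3.presign("key")
presign.url    #=> "https://my-app.s3.amazonaws.com"
presign.fields #=> {"key" => "key", "policy" => "...", ...}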

[show source]
# File lib/shrine/storage/s3.rb, line 323
def presign(id, **options)
  options = @upload_options.merge(options)
  options[:content_disposition] = encode_content_disposition(options[:content_disposition]) if options[:content_disposition]

  object(id).presigned_post(options)
end
s3 ()

Returns an Aws::S3::Resource object.

[show source]
# File lib/shrine/storage/s3.rb, line 218
def s3
  Shrine.deprecation("Shrine::Storage::S3#s3 that returns an Aws::S3::Resource is deprecated, use Shrine::Storage::S3#client which returns an Aws::S3::Client object.")
  Aws::S3::Resource.new(client: @client)
end
upload (io, id, shrine_metadata: {}, **upload_options)

If the file is an UploadedFile from S3, this issues an S3 COPY command; otherwise it uploads the file. For files larger than :multipart_threshold a multipart upload/copy will be used for better performance and more resilient uploads.

It assigns the correct “Content-Type” taken from the MIME type, because by default S3 sets everything to “application/octet-stream”.
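
For example (a sketch, assuming file is an IO-like object):

s3.upload(file, "key", shrine_metadata: {
  "mime_type" => "image/jpeg", # becomes the Content-Type header
  "filename"  => "nature.jpg", # becomes the Content-Disposition filename
})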

[show source]
# File lib/shrine/storage/s3.rb, line 230
def upload(io, id, shrine_metadata: {}, **upload_options)
  content_type, filename = shrine_metadata.values_at("mime_type", "filename")

  options = {}
  options[:content_type] = content_type if content_type
  options[:content_disposition] = "inline; filename=\"#{filename}\"" if filename

  options.update(@upload_options)
  options.update(upload_options)

  options[:content_disposition] = encode_content_disposition(options[:content_disposition]) if options[:content_disposition]

  if copyable?(io)
    copy(io, id, **options)
  else
    put(io, id, **options)
  end
end
url (id, download: nil, public: nil, host: self.host, **options)

Returns the presigned URL to the file.

:public

Controls whether the URL is signed (false) or unsigned (true). Note that for unsigned URLs the S3 bucket needs to be configured to allow public access. Defaults to false.

:host

This option replaces the host part of the returned URL, and is typically useful for setting CDN hosts (e.g. http://abc123.cloudfront.net).

:download

If set to true, creates a “forced download” link, which means that the browser will never display the file and always ask the user to download it.

All other options are forwarded to Aws::S3::Object#presigned_url or Aws::S3::Object#public_url.
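
For example (a sketch combining the options above; "image.jpg" is an assumed object key):

s3.url("image.jpg", expires_in: 90)                       # signed URL that expires in 90 seconds
s3.url("image.jpg", public: true)                         # unsigned URL
s3.url("image.jpg", download: true)                       # forces the browser to download
s3.url("image.jpg", host: "http://abc123.cloudfront.net") # CDN URL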

[show source]
# File lib/shrine/storage/s3.rb, line 299
def url(id, download: nil, public: nil, host: self.host, **options)
  options[:response_content_disposition] ||= "attachment" if download
  options[:response_content_disposition] = encode_content_disposition(options[:response_content_disposition]) if options[:response_content_disposition]

  if public
    url = object(id).public_url(**options)
  else
    url = object(id).presigned_url(:get, **options)
  end

  if host
    uri = URI.parse(url)
    uri.path = uri.path.match(/^\/#{bucket.name}/).post_match unless uri.host.include?(bucket.name)
    url = URI.join(host, uri.request_uri).to_s
  end

  url
end