Last Update: 2019-02-19 10:20:44 +0100

Shrine for CarrierWave Users

This guide is aimed at helping CarrierWave users transition to Shrine, and it consists of three parts:

  1. Explanation of the key differences in design between CarrierWave and Shrine

  2. Instructions on how to migrate an existing app from CarrierWave to Shrine

  3. Extensive reference of CarrierWave's interface with Shrine equivalents


While in CarrierWave you configure storage in global configuration, in Shrine a storage is a class which you can pass options to during initialization:

CarrierWave.configure do |config|
  config.fog_provider = "fog/aws"
  config.fog_credentials = {
    provider:              "AWS",
    aws_access_key_id:     "abc",
    aws_secret_access_key: "xyz",
    region:                "eu-west-1",
  }
  config.fog_directory = "my-bucket"
end

Shrine.storages[:store] = Shrine::Storage::S3.new(
  bucket:            "my-bucket",
  access_key_id:     "abc",
  secret_access_key: "xyz",
  region:            "eu-west-1",
)

In CarrierWave temporary storage cannot be configured; it saves and retrieves files from the filesystem, and you can only set the directory. With Shrine both temporary (:cache) and permanent (:store) storage are first-class citizens and fully configurable, so you can also have files cached on S3 (preferably via direct uploads):

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),
  store: Shrine::Storage::S3.new(prefix: "store", **s3_options),
}


Shrine shares CarrierWave's concept of uploaders, classes which encapsulate file attachment logic for different file types:

class ImageUploader < Shrine
  # attachment logic
end

However, uploaders in CarrierWave are very broad; in addition to uploading and deleting files, they also represent the uploaded file. Shrine has a separate Shrine::UploadedFile class which represents the uploaded file.

uploader = ImageUploader.new(:store)
uploaded_file = uploader.upload(image)
uploaded_file          #=> #<Shrine::UploadedFile>
uploaded_file.url      #=> "https://..."
uploaded_file.download #=> #<Tempfile>


In contrast to CarrierWave's class-level DSL, in Shrine processing is defined and performed at the instance level. The result of processing can be a single file or a hash of versions:

class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  process resize_to_limit: [800, 800]

  version :medium do
    process resize_to_limit: [500, 500]
  end

  version :small, from_version: :medium do
    process resize_to_limit: [300, 300]
  end
end

require "image_processing/mini_magick"

class ImageUploader < Shrine
  plugin :processing
  plugin :versions

  process(:store) do |io, context|
    versions = {}

    io.download do |original|
      pipeline = ImageProcessing::MiniMagick.source(original)

      versions[:original] = pipeline.resize_to_limit!(800, 800)
      versions[:medium]   = pipeline.resize_to_limit!(500, 500)
      versions[:small]    = pipeline.resize_to_limit!(300, 300)
    end

    versions # return the hash of processed files
  end
end

This allows you to fully optimize processing, because you can easily specify which files are processed from which, and even add parallelization.
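For instance, a version can be generated from another processed file rather than from the original, mirroring CarrierWave's `from_version:`. A sketch, assuming the same processing and versions plugin setup as above:

```ruby
process(:store) do |io, context|
  versions = {}

  io.download do |original|
    # generate :medium from the original file
    versions[:medium] = ImageProcessing::MiniMagick
      .source(original)
      .resize_to_limit!(500, 500)

    # generate :small from the already-resized :medium file,
    # like CarrierWave's `version :small, from_version: :medium`
    versions[:small] = ImageProcessing::MiniMagick
      .source(versions[:medium])
      .resize_to_limit!(300, 300)
  end

  versions
end
```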

CarrierWave performs processing before validations, which is a huge security issue, as it allows users to feed arbitrary files to your processing tool even if you have validations. Shrine performs processing after validations.

Reprocessing versions

Shrine doesn't have a built-in way of regenerating versions, because that has to be written and optimized differently depending on whether you're adding or removing a version, which ORM you're using, how many records there are in the database, etc. The Reprocessing versions guide provides some useful tips on this task.
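As an illustration, adding a new :thumb version to existing records might look something like this sketch, where `generate_thumb` is a hypothetical processing helper you would write yourself:

```ruby
Photo.find_each do |photo|
  attacher, attachment = photo.image_attacher, photo.image
  next unless attacher.stored? && !attachment[:thumb]

  # generate_thumb is a hypothetical helper returning a processed file
  file = generate_thumb(attachment[:original].download)
  attachment[:thumb] = attacher.store!(file, version: :thumb)
  attacher.swap(attachment) # atomically save the updated attachment data
end
```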


Like processing, validations in Shrine are also defined and performed at the instance level:

class ImageUploader < CarrierWave::Uploader::Base
  def extension_whitelist
    %w[jpg jpeg gif png]
  end

  def content_type_whitelist
    %w[image/jpeg image/gif image/png]
  end

  def size_range
    0..10.megabytes
  end
end

class ImageUploader < Shrine
  plugin :validation_helpers

  Attacher.validate do
    validate_extension_inclusion %w[jpg jpeg gif png]
    validate_mime_type_inclusion %w[image/jpeg image/gif image/png]
    validate_max_size 10*1024*1024 unless record.admin?
  end
end


Like CarrierWave, Shrine also provides integrations with ORMs. It ships with plugins for both Sequel and ActiveRecord, but can also be used with just PORO models.

Shrine.plugin :sequel       # if you're using Sequel
Shrine.plugin :activerecord # if you're using ActiveRecord

Instead of giving you class methods for “mounting” uploaders, in Shrine you generate attachment modules which you simply include in your models, giving them a similar set of methods to those CarrierWave provides:

class Photo < ActiveRecord::Base
  extend CarrierWave::ActiveRecord # done automatically by CarrierWave
  mount_uploader :image, ImageUploader
end

class Photo < ActiveRecord::Base
  include ImageUploader::Attachment.new(:image)
end

Attachment column

Your models are required to have an <attachment>_data column, which Shrine uses to save the storage, location, and metadata of the uploaded file.

photo.image_data #=>
# {
#   "storage" => "store",
#   "id" => "photo/1/image/0d9o8dk42.png",
#   "metadata" => {
#     "filename"  => "nature.png",
#     "size"      => 49349138,
#     "mime_type" => "image/png"
#   }
# }

photo.image.original_filename #=> "nature.png"
photo.image.size              #=> 49349138
photo.image.mime_type         #=> "image/png"

This is much more powerful than storing only the filename like CarrierWave does, as it allows you to also store any additional metadata that you might want to extract.
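For example, with the add_metadata plugin you can extract custom metadata on upload; the fastimage gem here is just an illustrative dependency:

```ruby
require "fastimage"

class ImageUploader < Shrine
  plugin :add_metadata

  # extract image dimensions and store them alongside the default metadata
  add_metadata :resolution do |io, context|
    FastImage.size(io) # e.g. [800, 600]
  end
end
```

The extracted value is then available as `photo.image.metadata["resolution"]`.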

Unlike CarrierWave, Shrine will store this information for each processed version, making them first-class citizens:

photo.image[:original]       #=> #<Shrine::UploadedFile>
photo.image[:original].width #=> 800

photo.image[:thumb]          #=> #<Shrine::UploadedFile>
photo.image[:thumb].width    #=> 300

Also, since CarrierWave stores only the filename, it has to recalculate the full location each time it wants to generate the URL. That makes it really difficult to move files to a new location, because changing how the location is generated will now cause incorrect URLs to be generated for all existing files. Shrine calculates the whole location only once and saves it to the column.

Multiple uploads

Shrine doesn't have support for multiple uploads like CarrierWave does, instead it expects that you will attach each file to a separate database record. This is a good thing, because the implementation is specific to the ORM you're using, and it's analogous to how you would implement any nested one-to-many associations. Take a look at the demo app which shows how easy it is to implement multiple uploads.
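A minimal sketch of this approach with ActiveRecord, using hypothetical Album and Photo models:

```ruby
class Album < ActiveRecord::Base
  has_many :photos
  # allows submitting nested photo attributes from a single form
  accepts_nested_attributes_for :photos, allow_destroy: true
end

class Photo < ActiveRecord::Base
  belongs_to :album
  include ImageUploader::Attachment.new(:image)
end
```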

Migrating from CarrierWave

You have an existing app using CarrierWave and you want to transfer it to Shrine. Let's assume we have a Photo model with the “image” attachment. First we need to create the image_data column for Shrine:

add_column :photos, :image_data, :text # or :json or :jsonb if supported

Afterwards we need to make new uploads write to the image_data column. This can be done by including the following module in all models that have CarrierWave attachments:

module CarrierwaveShrineSynchronization
  def self.included(model)
    model.before_save do
      self.class.uploaders.each_key do |name|
        write_shrine_data(name) if changes.key?(name)
      end
    end
  end

  def write_shrine_data(name)
    uploader = send(name)

    if read_attribute(name).present?
      data = uploader_to_shrine_data(uploader)

      if uploader.versions.any?
        data = { original: data }
        uploader.versions.each do |name, version|
          data[name] = uploader_to_shrine_data(version)
        end
      end

      # Remove the `.to_json` if you're using a JSON column, otherwise the JSON
      # object will be saved as an escaped string.
      write_attribute(:"#{name}_data", data.to_json)
    else
      write_attribute(:"#{name}_data", nil)
    end
  end

  private

  # If you'll be using `:prefix` on your Shrine storage, make sure to
  # subtract it from the path assigned as `:id`.
  def uploader_to_shrine_data(uploader)
    filename = read_attribute(uploader.mounted_as)
    path     = uploader.store_path(filename)

    { storage: :store, id: path, metadata: { filename: filename } }
  end
end

class Photo < ActiveRecord::Base
  mount_uploader :image, ImageUploader
  include CarrierwaveShrineSynchronization # needs to be after `mount_uploader`
end

After you deploy this code, the image_data column should now be successfully synchronized with new attachments. The next step is to run a script which writes all existing CarrierWave attachments to image_data:

Photo.find_each do |photo|
  Photo.uploaders.each_key { |name| photo.write_shrine_data(name) }
  photo.save!
end

Now you should be able to rewrite your application so that it uses Shrine instead of CarrierWave, using equivalent Shrine storages. For help with translating the code from CarrierWave to Shrine, you can consult the reference below.

You'll notice that Shrine metadata will be absent from the migrated files' data. You can run a script that will fill in any missing metadata defined in your Shrine uploader:

Shrine.plugin :refresh_metadata

Photo.find_each do |photo|
  attachment = ImageUploader.uploaded_file(photo.image, &:refresh_metadata!)
  photo.update(image_data: attachment.to_json)
end

CarrierWave to Shrine direct mapping



When using models, by default all uploaders use :cache for the temporary storage and :store for the permanent storage. If you want to change that, you can use the default_storage plugin:

Shrine.storages[:foo] = Shrine::Storage::Foo.new(*args)

class ImageUploader < Shrine
  plugin :default_storage, store: :foo
end

.process, .version

As explained in the “Processing” section, processing is defined inside a process(:store) block provided by the processing plugin.
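A minimal skeleton, mirroring the processing setup shown earlier:

```ruby
class ImageUploader < Shrine
  plugin :processing
  plugin :versions # only needed when returning a hash of files

  process(:store) do |io, context|
    # process `io` here and return a file or a hash of versions
  end
end
```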

.before, .after

In Shrine you can get callbacks by loading the hooks plugin. Unlike CarrierWave, and much like Sequel, Shrine implements callbacks by overriding instance methods:

class ImageUploader < Shrine
  plugin :hooks

  def after_upload(io, context)
    super
    # do something
  end
end

#store!, #cache!

In Shrine you store and cache files by instantiating the uploader with the corresponding storage and calling #upload:
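A rough equivalent of CarrierWave's #cache! and #store!, assuming the storages registered earlier:

```ruby
ImageUploader.new(:cache).upload(file) # like #cache!
ImageUploader.new(:store).upload(file) # like #store!
```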

Note that in Shrine you cannot pass in a path to the file, you always have to pass an IO-like object, which is required to respond to: #read(*args), #size, #eof?, #rewind and #close.

#retrieve_from_store! and #retrieve_from_cache!

In Shrine you simply call #download on the uploaded file:

uploaded_file = user.avatar
uploaded_file.download #=> #<Tempfile>


#url

In Shrine you call #url on uploaded files:

user.avatar #=> #<Shrine::UploadedFile>
user.avatar.url #=> "/uploads/398454ujedfggf.jpg"


This method corresponds to #original_filename on the uploaded file:

user.avatar #=> #<Shrine::UploadedFile>
user.avatar.original_filename #=> "avatar.jpg"

#store_dir, #cache_dir

Shrine here provides a #generate_location method, which is triggered for all storages:

class ImageUploader < Shrine
  def generate_location(io, context)
    # construct and return the location string
  end
end

The context variable holds the additional data, like the attachment name and the record instance. You might also want to use the pretty_location plugin for automatically generating an organized folder structure.
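With pretty_location loaded, locations are derived from the record class, record id, and attachment name:

```ruby
class ImageUploader < Shrine
  plugin :pretty_location
end

# generates locations like "user/123/avatar/0d9o8dk42.jpg"
```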


#default_url

For default URLs you can use the default_url plugin:

class ImageUploader < Shrine
  plugin :default_url

  Attacher.default_url do |options|
    # return a fallback URL string
  end
end

Inside the block you have access to the name of the attachment and the record instance, and the options hash may include the :version.

#extension_white_list, #extension_black_list

In Shrine extension whitelisting/blacklisting is a part of validations, and is provided by the validation_helpers plugin:

class ImageUploader < Shrine
  plugin :validation_helpers

  Attacher.validate do
    validate_extension_inclusion %w[jpg jpeg png] # whitelist
    validate_extension_exclusion %w[php]          # blacklist
  end
end

#blacklist_mime_type_pattern, #whitelist_mime_type_pattern, #content_type_whitelist, #content_type_blacklist

In Shrine MIME type whitelisting/blacklisting is part of validations, and is provided by the validation_helpers plugin, though it doesn't support regexes:

class ImageUploader < Shrine
  plugin :validation_helpers

  Attacher.validate do
    validate_mime_type_inclusion %w[image/jpeg image/png] # whitelist
    validate_mime_type_exclusion %w[text/x-php]           # blacklist
  end
end


In Shrine file size validations are typically done using the validation_helpers plugin:

class ImageUploader < Shrine
  plugin :validation_helpers

  Attacher.validate do
    validate_min_size 0
    validate_max_size 5*1024*1024 # 5 MB
  end
end


Shrine doesn't have a built-in way of regenerating versions, because that's very individual and depends on which versions you want regenerated, which ORM you're using, how many records there are in your database, etc. The Reprocessing versions guide provides some useful tips on this task.


The only thing that Shrine requires from your models is a <attachment>_data column (e.g. if your attachment is “avatar”, you need the avatar_data column).


In Shrine you include the attachment modules directly:

Shrine.plugin :sequel

class User < Sequel::Model
  include ImageUploader::Attachment.new(:avatar)
end


The attachment module adds an attachment setter:

user.avatar = File.open("avatar.jpg")

Note that unlike CarrierWave, you cannot pass in file paths; the input needs to be an IO-like object.


CarrierWave returns the uploader, but Shrine returns a Shrine::UploadedFile, a representation of the file uploaded to the storage:

user.avatar #=> #<Shrine::UploadedFile>
user.avatar.methods #=> [:url, :download, :read, :exists?, :delete, ...]

If attachment is missing, nil is returned.


This method is simply a shorthand for “if attachment is present, call #url on it, otherwise return nil”:

user.avatar_url #=> nil
user.avatar = File.open("avatar.jpg")
user.avatar_url #=> "/uploads/ksdf934rt.jpg"

The versions plugin extends this method to also accept a version name as the argument (user.avatar_url(:thumb)).


Shrine has the cached_attachment_data plugin, which gives the model a reader method that you can use for retaining the cached file:

Shrine.plugin :cached_attachment_data

form_for @user do |f|
  f.hidden_field :avatar, value: @user.cached_avatar_data
  f.file_field :avatar
end


In Shrine this method is provided by the remote_url plugin.
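A sketch of how it can be set up (the :max_size option is required by the plugin; the 20 MB limit here is illustrative):

```ruby
Shrine.plugin :remote_url, max_size: 20*1024*1024

# the model now has an `<attachment>_remote_url=` setter:
user.avatar_remote_url = "http://example.com/image.jpg"
```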


In Shrine this method is provided by the remove_attachment plugin.
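Once loaded, the model gets a `remove_<attachment>=` setter that you can hook up to a form checkbox:

```ruby
Shrine.plugin :remove_attachment

user.remove_avatar = "1" # a truthy value removes the attachment on save
```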


This section walks through various configuration options in CarrierWave and shows Shrine's equivalents.

root, base_path, permissions, directory_permissions

In Shrine these are configured on the FileSystem storage directly.
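For example (the values here are illustrative):

```ruby
Shrine::Storage::FileSystem.new("public",
  prefix:                "uploads", # base path relative to the root
  permissions:           0o644,     # file permissions
  directory_permissions: 0o755)     # directory permissions
```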

storage, storage_engines

As mentioned before, in Shrine you register storages through Shrine.storages, and the attachment storages will automatically be :cache and :store, but you can change this with the default_storage plugin.

delete_tmp_file_after_storage, remove_previously_stored_file_after_update

By default Shrine deletes cached and replaced files, but you can choose to keep those files by loading the keep_files plugin:

Shrine.plugin :keep_files, cached: true, replaced: true

move_to_cache, move_to_store

Shrine brings this functionality through the moving plugin.

Shrine.plugin :moving, storages: [:cache]

validate_integrity, ignore_integrity_errors

Shrine does this with validations, which are best done with the validation_helpers plugin:

class ImageUploader < Shrine
  plugin :validation_helpers

  Attacher.validate do
    # Evaluated inside an instance of Shrine::Attacher.
    if record.guest?
      validate_max_size 2*1024*1024, message: "is too large (max is 2 MB)"
      validate_mime_type_inclusion %w[image/jpeg image/png image/gif]
    end
  end
end

validate_download, ignore_download_errors

Shrine's remote_url plugin always rescues download errors and transforms them to validation errors.

validate_processing, ignore_processing_errors

In Shrine processing is performed after validations, and typically asynchronously in a background job, so it is expected that you validate files before processing.
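Background promotion is set up with the backgrounding plugin; here we assume Sidekiq as the job backend:

```ruby
Shrine.plugin :backgrounding
Shrine::Attacher.promote { |data| PromoteJob.perform_async(data) }

class PromoteJob
  include Sidekiq::Worker

  def perform(data)
    # uploads the cached file to permanent storage and updates the record
    Shrine::Attacher.promote(data)
  end
end
```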


You can just add conditionals in processing code.


No equivalent, it depends on your application whether you need the form to be multipart or not.


You can use Shrine::Storage::S3 (built-in), Shrine::Storage::GoogleCloudStorage, or generic Shrine::Storage::Fog storage. The reference will assume you're using S3 storage.

:fog_credentials, :fog_directory

The S3 Shrine storage accepts :access_key_id, :secret_access_key, :region, and :bucket options in the initializer:

Shrine::Storage::S3.new(
  access_key_id:     "...",
  secret_access_key: "...",
  region:            "...",
  bucket:            "...",
)


:fog_attributes

The object data can be configured via the :upload_options hash:

Shrine::Storage::S3.new(upload_options: { content_disposition: "attachment" }, **options)


:fog_public

The object permissions can be configured with the :acl upload option:

Shrine::Storage::S3.new(upload_options: { acl: "private" }, **options)


:fog_authenticated_url_expiration

The #url method accepts the :expires_in option; you can set the default expiration with the default_url_options plugin:

plugin :default_url_options, store: {expires_in: 600}

:fog_use_ssl_for_aws, :fog_aws_accelerate

Shrine allows you to override the S3 endpoint:

Shrine::Storage::S3.new(endpoint: "https://...", **options)