Upgrading from CarrierWave
This guide is aimed at helping CarrierWave users transition to Shrine, and it consists of three parts:
- Explanation of the key differences in design between CarrierWave and Shrine
- Instructions on how to migrate an existing app that uses CarrierWave to Shrine
- Extensive reference of CarrierWave's interface with Shrine equivalents
Overview
Uploader
Shrine shares CarrierWave's concept of uploaders, classes which encapsulate file attachment logic for different file types:
```rb
class ImageUploader < Shrine
  # attachment logic
end
```
However, while CarrierWave uploaders are responsible for most of the attachment logic (uploading to temporary/permanent storage, retrieving the uploaded file, file validation, processing versions), Shrine distributes these responsibilities across several core classes:
| Class | Description |
| :---- | :---------- |
| `Shrine` | handles uploads, metadata extraction, location generation |
| `Shrine::UploadedFile` | exposes metadata, implements downloading, URL generation, deletion |
| `Shrine::Attacher` | handles caching & storing, dirty tracking, persistence, versions |
Shrine uploaders themselves are functional: they receive a file as input and return the uploaded file as output, without any state changes.
```rb
uploader      = ImageUploader.new(:store)
uploaded_file = uploader.upload(file)

uploaded_file          #=> #<Shrine::UploadedFile>
uploaded_file.url      #=> "https://my-bucket.s3.amazonaws.com/store/kfds0lg9rer.jpg"
uploaded_file.download #=> #<File:/tmp/path/to/file>
```
Storage
In CarrierWave, you configure storage in global configuration:
```rb
CarrierWave.configure do |config|
  config.fog_provider = "fog/aws"
  config.fog_credentials = {
    provider:              "AWS",
    aws_access_key_id:     "abc",
    aws_secret_access_key: "xyz",
    region:                "eu-west-1",
  }
  config.fog_directory = "my-bucket"
end
```
In Shrine, the configuration options are passed directly to the storage class:
```rb
Shrine.storages[:store] = Shrine::Storage::S3.new(
  bucket:            "my-bucket",
  access_key_id:     "abc",
  secret_access_key: "xyz",
  region:            "eu-west-1",
)
```
Temporary storage
Where CarrierWave's temporary storage is hardcoded to disk, Shrine can use any storage for temporary storage. So, if you have multiple servers or want to do direct uploads, you can use AWS S3 as temporary storage:
```rb
Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),
  store: Shrine::Storage::S3.new(**s3_options),
}
```
Persistence
While CarrierWave persists only the filename of the original uploaded file, Shrine persists storage and metadata information as well:
```json
{
  "id": "path/to/image.jpg",
  "storage": "store",
  "metadata": {
    "filename": "nature.jpg",
    "size": 4739472,
    "mime_type": "image/jpeg"
  }
}
```
This way we have all information about uploaded files, without having to retrieve the file from the storage.
```rb
photo.image.id                #=> "path/to/image.jpg"
photo.image.storage_key       #=> :store
photo.image.metadata          #=> { "filename" => "...", "size" => ..., "mime_type" => "..." }

photo.image.original_filename #=> "nature.jpg"
photo.image.size              #=> 4739472
photo.image.mime_type         #=> "image/jpeg"
```
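Because all of this lives in a single database column, any metadata question can be answered with a plain JSON parse, with no round-trip to the storage. A minimal stdlib sketch (the column value below is a made-up example):

```ruby
require "json"

# Hypothetical value of the image_data column (illustrative only):
image_data = '{"id":"path/to/image.jpg","storage":"store","metadata":{"filename":"nature.jpg","size":4739472,"mime_type":"image/jpeg"}}'

data = JSON.parse(image_data)
data["metadata"]["mime_type"] # => "image/jpeg"
data["metadata"]["size"]      # => 4739472
```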
Location
CarrierWave persists only the filename of the uploaded file, and recalculates the full location dynamically based on location configuration. This can be dangerous, because if some component of the location happens to change, all existing links might become invalid.
To avoid this, Shrine persists the full location on attachment, and uses it when generating file URL. So, even if you change how file locations are generated, existing files that are on old locations will still remain accessible.
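To illustrate, here is a minimal sketch (bucket URL and paths are made up): the URL is assembled from the persisted `id` verbatim, so a later change to the location-generation logic has no effect on files already uploaded.

```ruby
# Location recorded in the database at upload time:
persisted_id = "store/old/layout/nature.jpg"

# URL generation prepends the storage's base URL to the persisted id;
# it does not recompute the location from the current configuration.
base_url = "https://my-bucket.s3.amazonaws.com"
url = "#{base_url}/#{persisted_id}"
url # => "https://my-bucket.s3.amazonaws.com/store/old/layout/nature.jpg"
```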
Processing
CarrierWave uses a class-level DSL for generating versions, which internally uses uploader subclassing and does in-place processing.
```rb
class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  version :large do
    process resize_to_limit: [800, 800]
  end

  version :medium do
    process resize_to_limit: [500, 500]
  end

  version :small do
    process resize_to_limit: [300, 300]
  end
end
```
In contrast, in Shrine you perform processing on the instance level as a functional transformation, which is a lot simpler and more flexible:
```rb
class ImageUploader < Shrine
  plugin :derivatives

  Attacher.derivatives do |original|
    magick = ImageProcessing::MiniMagick.source(original)

    {
      large:  magick.resize_to_limit!(800, 800),
      medium: magick.resize_to_limit!(500, 500),
      small:  magick.resize_to_limit!(300, 300),
    }
  end
end
```
Retrieving versions
When retrieving versions, CarrierWave returns a list of declared versions which may or may not have been generated. In contrast, Shrine persists data of uploaded processed files into the database (including any extracted metadata), which then becomes the source of truth on which versions have been generated.
```rb
photo.image             #=> #<Shrine::UploadedFile id="original.jpg" ...>
photo.image_derivatives #=> {}

photo.image_derivatives! # triggers processing

photo.image_derivatives #=>
# {
#   large:  #<Shrine::UploadedFile id="large.jpg"  metadata={"size"=>873232, ...} ...>,
#   medium: #<Shrine::UploadedFile id="medium.jpg" metadata={"size"=>94823, ...} ...>,
#   small:  #<Shrine::UploadedFile id="small.jpg"  metadata={"size"=>37322, ...} ...>,
# }
```
Reprocessing versions
Shrine doesn't have a built-in way of regenerating versions, because that has to be written and optimized differently depending on which versions have changed, which persistence library you're using, how many records there are in the table, etc.
However, there is an extensive guide for Managing Derivatives, which provides instructions on how to make these changes safely and with zero downtime.
Validation
File validation in Shrine is also instance-level, which allows using conditionals:
```rb
class ImageUploader < CarrierWave::Uploader::Base
  def extension_whitelist
    %w[jpg jpeg png webp]
  end

  def content_type_whitelist
    /image\//
  end

  def size_range
    0..(10*1024*1024)
  end
end
```
```rb
class ImageUploader < Shrine
  plugin :validation_helpers

  Attacher.validate do
    validate_max_size 10*1024*1024
    validate_extension %w[jpg jpeg png webp]

    if validate_mime_type %w[image/jpeg image/png image/webp]
      validate_max_dimensions [5000, 5000]
    end
  end
end
```
Custom metadata
With Shrine you can also extract and validate any custom metadata:
```rb
class VideoUploader < Shrine
  plugin :add_metadata
  plugin :validation

  add_metadata :duration do |io|
    FFMPEG::Movie.new(io.path).duration
  end

  Attacher.validate do
    if file.duration > 5*60*60
      errors << "must not be longer than 5 hours"
    end
  end
end
```
Multiple uploads
Shrine doesn't have support for multiple uploads out-of-the-box like CarrierWave does. Instead, you can implement them using a separate table with a one-to-many relationship to which the files will be attached. The Multiple Files guide explains this setup in more detail.
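As a sketch of that setup, assuming a hypothetical `Album` model whose photos carry the attachments (all names here are illustrative, not part of Shrine's API):

```rb
class Album < ActiveRecord::Base
  has_many :photos
  # lets the album form create/delete photo records alongside the album
  accepts_nested_attributes_for :photos, allow_destroy: true
end

class Photo < ActiveRecord::Base
  belongs_to :album
  include ImageUploader::Attachment(:image) # one attached file per photo
end
```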
Migrating from CarrierWave
You have an existing app using CarrierWave and you want to transfer it to Shrine. Let's assume we have a `Photo` model with an `image` attachment.
1. Add Shrine column
First we need to create the `image_data` column for Shrine:

```rb
add_column :photos, :image_data, :text # or :json or :jsonb if supported
```
2. Dual write
Next, we need to make new CarrierWave attachments write to the `image_data` column. This can be done by including the module below into all models that have CarrierWave attachments:
```rb
# config/initializers/shrine.rb (Rails)
Shrine.storages = {
  cache: ...,
  store: ...,
}

Shrine.plugin :model
Shrine.plugin :derivatives
```

```rb
module CarrierwaveShrineSynchronization
  def self.included(model)
    model.before_save do
      self.class.uploaders.each_key do |name|
        write_shrine_data(name) if changes.key?(name)
      end
    end
  end

  def write_shrine_data(name)
    uploader = send(name)
    attacher = Shrine::Attacher.from_model(self, name)

    if read_attribute(name).present?
      attacher.set shrine_file(uploader)

      uploader.versions.each do |version_name, version|
        attacher.merge_derivatives(version_name => shrine_file(version))
      end
    else
      attacher.set nil
    end
  end

  private

  def shrine_file(uploader)
    name     = uploader.mounted_as
    filename = read_attribute(name)
    location = uploader.store_path(filename)
    location = location.sub(%r{^#{storage.prefix}/}, "") if storage.prefix

    Shrine.uploaded_file(
      storage:  :store,
      id:       location,
      metadata: { "filename" => filename },
    )
  end

  def storage
    Shrine.storages[:store]
  end
end
```
```rb
class Photo < ActiveRecord::Base
  mount_uploader :image, ImageUploader
  include CarrierwaveShrineSynchronization # needs to be after `mount_uploader`
end
```
After you deploy this code, the `image_data` column should now be successfully synchronized with new attachments.
3. Data migration
The next step is to run a script which writes all existing CarrierWave attachments to `image_data`:

```rb
Photo.find_each do |photo|
  photo.write_shrine_data(:image)
  photo.save!
end
```
4. Rewrite code
Now you should be able to rewrite your application so that it uses Shrine instead of CarrierWave (you can consult the reference in the next section). You can remove the `CarrierwaveShrineSynchronization` module as well.
5. Backfill metadata
You'll notice that Shrine metadata will be absent from the migrated files' data. You can run a script that will fill in any missing metadata defined in your Shrine uploader:
```rb
Shrine.plugin :refresh_metadata

Photo.find_each do |photo|
  attacher = photo.image_attacher
  attacher.refresh_metadata!
  attacher.atomic_persist
end
```
6. Remove CarrierWave column
If everything is looking good, we can remove the CarrierWave column:
```rb
remove_column :photos, :image
```
CarrierWave to Shrine direct mapping
CarrierWave::Uploader::Base
.storage
When using models, by default all storages use `:cache` for cache and `:store` for store. If you want to change that, you can use the `default_storage` plugin:
```rb
Shrine.storages[:foo] = Shrine::Storage::Foo.new(*args)
```

```rb
class ImageUploader < Shrine
  plugin :default_storage, store: :foo
end
```
.process, .version
Processing is defined by using the `derivatives` plugin:
```rb
class ImageUploader < Shrine
  plugin :derivatives

  Attacher.derivatives do |original|
    magick = ImageProcessing::MiniMagick.source(original)

    {
      large:  magick.resize_to_limit!(800, 800),
      medium: magick.resize_to_limit!(500, 500),
      small:  magick.resize_to_limit!(300, 300),
    }
  end
end
```
.before, .after
There is no Shrine equivalent for CarrierWave's callbacks.
#store!, #cache!
In Shrine you store and cache files by passing the corresponding storage to `Shrine.upload`:
```rb
ImageUploader.upload(file, :cache)
ImageUploader.upload(file, :store)
```
Note that in Shrine you cannot pass in a path to the file; you always have to pass an IO-like object, which is required to respond to `#read(*args)`, `#size`, `#eof?`, `#rewind` and `#close`.
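For instance, Ruby's standard `StringIO` already satisfies this interface, which makes it handy in tests; a quick stdlib check:

```ruby
require "stringio"

io = StringIO.new("file contents")

# StringIO responds to everything Shrine requires of an IO-like object:
required = [:read, :size, :eof?, :rewind, :close]
required.all? { |m| io.respond_to?(m) } # => true

io.read(4) # => "file"
```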
#retrieve_from_store!, #retrieve_from_cache!
In Shrine you simply call `#download` on the uploaded file:
```rb
uploaded_file = ImageUploader.upload(file, :store)
uploaded_file.download #=> #<Tempfile:/path/to/file>
```
#url
In Shrine you call `#url` on uploaded files:
```rb
photo.image     #=> #<Shrine::UploadedFile>
photo.image.url #=> "/uploads/398454ujedfggf.jpg"
photo.image_url #=> "/uploads/398454ujedfggf.jpg" (shorthand)
```
#identifier
This method corresponds to `#original_filename` on the uploaded file:
```rb
photo.image                   #=> #<Shrine::UploadedFile>
photo.image.original_filename #=> "avatar.jpg"
```
#store_dir, #cache_dir
Shrine here provides a single `#generate_location` method that's triggered for all storages:
```rb
class ImageUploader < Shrine
  def generate_location(io, record: nil, **)
    [ storage_key,
      record && record.class.name.underscore,
      record && record.id,
      super,
      io.original_filename ].compact.join("/")
  end
end
```
```
cache/user/123/2feff8c724e7ce17/nature.jpg
store/user/456/7f99669fde1e01fc/kitten.jpg
...
```
You might also want to use the `pretty_location` plugin for automatically generating an organized folder structure.
#default_url
For default URLs you can use the `default_url` plugin:
```rb
class ImageUploader < Shrine
  plugin :default_url

  Attacher.default_url do |derivative: nil, **|
    "/fallbacks/#{derivative || "original"}.jpg"
  end
end
```
#extension_white_list, #extension_black_list
In Shrine, extension whitelisting/blacklisting is part of validations, and is provided by the `validation_helpers` plugin:
```rb
class ImageUploader < Shrine
  plugin :validation_helpers

  Attacher.validate do
    validate_extension_inclusion %w[jpg jpeg png] # whitelist
    validate_extension_exclusion %w[php]          # blacklist
  end
end
```
#content_type_whitelist, #content_type_blacklist
In Shrine, MIME type whitelisting/blacklisting is part of validations, and is provided by the `validation_helpers` plugin, though it doesn't support regexes:
```rb
class ImageUploader < Shrine
  plugin :validation_helpers

  Attacher.validate do
    validate_mime_type_inclusion %w[image/jpeg image/png] # whitelist
    validate_mime_type_exclusion %w[text/x-php]           # blacklist
  end
end
```
Make sure to also load the `determine_mime_type` plugin to detect the MIME type from file content:

```rb
# Gemfile
gem "mimemagic"
```

```rb
Shrine.plugin :determine_mime_type, analyzer: :mimemagic
```
#size_range
In Shrine, file size validations are typically done using the `validation_helpers` plugin:
```rb
class ImageUploader < Shrine
  plugin :validation_helpers

  Attacher.validate do
    validate_size 0..5*1024*1024 # 5 MB
  end
end
```
#recreate_versions!
Shrine doesn't have a built-in way of regenerating versions, because that's very individual and depends on which versions you want regenerated, which ORM you're using, how many records there are in your database, etc. The Managing Derivatives guide provides some useful tips on this task.
Models
The only thing that Shrine requires from your models is an `<attachment>_data` column (e.g. if your attachment is "image", you need the `image_data` column).
.mount_uploader
In Shrine you include attachment modules directly:
```rb
Shrine.plugin :sequel

class Photo < Sequel::Model
  include ImageUploader::Attachment(:avatar)
end
```
#<attachment>=
The attachment module adds an attachment setter:
```rb
photo.image = File.open("avatar.jpg", "rb")
```
Note that unlike CarrierWave, you cannot pass in file paths; the input needs to be an IO-like object.
#<attachment>
CarrierWave returns the uploader, but Shrine returns a `Shrine::UploadedFile`, a representation of the file uploaded to the storage:
```rb
photo.image         #=> #<Shrine::UploadedFile>
photo.image.methods #=> [:url, :download, :read, :exists?, :delete, ...]
```
If no file is attached, `nil` is returned.
#<attachment>_url
This method is simply a shorthand for "if attachment is present, call `#url` on it, otherwise return nil":
```rb
photo.image_url #=> nil
photo.image = File.open("avatar.jpg", "rb")
photo.image_url #=> "/uploads/ksdf934rt.jpg"
```
The `derivatives` plugin extends this method to also accept a version name as the argument (`photo.image_url(:thumb)`).
#<attachment>_cache
Shrine has the `cached_attachment_data` plugin, which gives the model a reader method that you can use for retaining the cached file:
Shrine.plugin :cached_attachment_data
```rb
form_for @photo do |f|
  f.hidden_field :image, value: @photo.cached_image_data
  f.file_field :image
end
```
#remote_<attachment>_url
In Shrine this method is provided by the `remote_url` plugin.
#remove_<attachment>
In Shrine this method is provided by the `remove_attachment` plugin.
Configuration
This section walks through various CarrierWave configuration options and shows their Shrine equivalents.
root, base_path, permissions, directory_permissions
In Shrine these are configured on the `FileSystem` storage directly.
storage, storage_engines
As mentioned before, in Shrine you register storages through `Shrine.storages`, and the attachment storages will automatically be `:cache` and `:store`, but you can change this with the `default_storage` plugin.
delete_tmp_file_after_storage, remove_previously_stored_file_after_update
By default Shrine deletes cached and replaced files, but you can choose to keep those files by loading the `keep_files` plugin:
```rb
Shrine.plugin :keep_files
```
move_to_cache, move_to_store
You can tell the `FileSystem` storage that it should move files by specifying the `:move` upload option:
```rb
Shrine.plugin :upload_options, cache: { move: true }, store: { move: true }
```
validate_integrity, ignore_integrity_errors
Shrine does this with validations, which are best done with the `validation_helpers` plugin:
```rb
class ImageUploader < Shrine
  plugin :validation_helpers

  Attacher.validate do # evaluated inside an instance of Shrine::Attacher
    if record.guest?
      validate_max_size 2*1024*1024, message: "must not be larger than 2 MB"
      validate_mime_type %w[image/jpeg image/png image/webp]
    end
  end
end
```
validate_download, ignore_download_errors
Shrine's `remote_url` plugin always rescues download errors and transforms them into validation errors.
validate_processing, ignore_processing_errors
In Shrine processing is performed after validations, and typically asynchronously in a background job, so it is expected that you validate files before processing.
enable_processing
You can just add conditionals in processing code.
ensure_multipart_form
No equivalent, it depends on your application whether you need the form to be multipart or not.
CarrierWave::Storage::Fog
You can use `Shrine::Storage::S3` (built-in), `Shrine::Storage::GoogleCloudStorage`, or the generic `Shrine::Storage::Fog` storage. The reference will assume you're using S3 storage.
:fog_credentials, :fog_directory
The Shrine S3 storage accepts `:access_key_id`, `:secret_access_key`, `:region`, and `:bucket` options in the initializer:
```rb
Shrine::Storage::S3.new(
  access_key_id:     "...",
  secret_access_key: "...",
  region:            "...",
  bucket:            "...",
)
```
:fog_attributes
The object data can be configured via the `:upload_options` hash:
```rb
Shrine::Storage::S3.new(upload_options: { content_disposition: "attachment" }, **options)
```
:fog_public
The object permissions can be configured with the `:acl` upload option:
```rb
Shrine::Storage::S3.new(upload_options: { acl: "private" }, **options)
```
:fog_authenticated_url_expiration
The `#url` method accepts the `:expires_in` option, and you can set the default expiration with the `url_options` plugin:
```rb
plugin :url_options, store: { expires_in: 600 }
```
:fog_use_ssl_for_aws, :fog_aws_accelerate
In Shrine's S3 storage these are controlled via initializer options; for example, you can enable the transfer acceleration endpoint:
```rb
Shrine::Storage::S3.new(use_accelerate_endpoint: true, **options)
```