Shrine 2.13.0
New features
- The S3 object URLs can now be signed with a custom signer. This enables
serving private objects via AWS CloudFront by signing the URLs with the
signer from the
aws-sdk-cloudfront gem.
require "aws-sdk-cloudfront"
signer = Aws::CloudFront::UrlSigner.new(
key_pair_id: "cf-keypair-id",
private_key_path: "./cf_private_key.pem",
)
    Shrine::Storage::S3.new(signer: signer.method(:signed_url))
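If you need to customize signing parameters such as the expiration time, you
can wrap the signer in a lambda instead. The following is a minimal sketch,
not from the original notes: it assumes the signer is called with the URL, and
uses the :expires option of aws-sdk-cloudfront's UrlSigner#signed_url.

    signer = Aws::CloudFront::UrlSigner.new(
      key_pair_id:      "cf-keypair-id",
      private_key_path: "./cf_private_key.pem",
    )

    Shrine::Storage::S3.new(
      signer: -> (url, **options) {
        # :expires is aws-sdk-cloudfront's option for canned-policy expiration
        signer.signed_url(url, expires: Time.now + 3600, **options)
      },
      **s3_options, # hypothetical hash with your bucket and credentials
    )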
- To make your S3 storage public, previously you needed to do two things: set
the acl: "public-read" upload option, and pass public: true when generating
a URL.
    Shrine.storages = {
      cache: Shrine::Storage::S3.new(upload_options: { acl: "public-read" }, **options),
      store: Shrine::Storage::S3.new(upload_options: { acl: "public-read" }, **options),
    }
    Shrine.plugin :default_url_options,
      cache: { public: true },
      store: { public: true }

Now you can achieve the same thing just by setting public: true when
initializing the S3 storage.
    Shrine.storages = {
      cache: Shrine::Storage::S3.new(public: true, **options),
      store: Shrine::Storage::S3.new(public: true, **options),
    }
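With this setting, generated URLs are plain public object URLs without
signature query parameters. A minimal sketch (the bucket name and object key
are made up):

    uploaded_file = uploader.upload(image)
    uploaded_file.url #=> "https://my-bucket.s3.amazonaws.com/44ccafc1.jpg" (no signature query parameters)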
- The :force option has been added to the infer_extension plugin, which makes
the extension always be determined from the MIME type, regardless of whether
the original filename already has an extension.
    Shrine.plugin :infer_extension, force: true

This is useful when you want to normalize the file extensions of uploaded
files.
uploader.upload(File.open("image.jpg")) #
uploader.upload(File.open("image.jpeg")) # all these will be uploaded with a .jpeg extension
uploader.upload(File.open("image.JPG")) #Shrine#uploadnow accepts a:metadataoption for manually overrding the extracted metadata.
    uploaded_file = uploader.upload(file, metadata: { "filename" => "my-file.txt" })
    uploaded_file.original_filename #=> "my-file.txt"

Furthermore, Shrine::Attacher#assign now forwards any additional options to
Shrine#upload, so you can also override metadata when attaching files.
    photo.image_attacher.assign(file, metadata: { "mime_type" => "text/plain" })

Other improvements
- It's now possible to use an S3 endpoint which requires the bucket name to be
in the URI path (e.g. Minio) together with CDNs where the bucket name
shouldn't be in the URI path (e.g. CloudFront). Since version 2.11.0, when
initializing Shrine::Storage::S3 with an :endpoint that requires
:force_path_style, generating a URL with a :host returned a URL with the
bucket name in the path. This introduced a regression for anyone relying on
the previous behaviour, so that change has been reverted, and this is the
current behaviour:
s3 = Shrine::Storage::S3.new(endpoint: "https://minio.example.com")
s3.url("foo") #=> "https://my-s3-endpoint.com/foo"
s3.url("foo", host: "https://123.cloudfront.net") #=> "https://123.cloudfront.net/foo"
s3 = Shrine::Storage::S3.new(endpoint: "https://my-s3-endpoint.com", force_path_style: true, **options)
s3.url("foo") #=> "https://my-s3-endpoint.com/my-bucket/foo"
s3.url("foo", host: "https://123.cloudfront.net") #=> "https://123.cloudfront.net/foo"- The
:hostoption toShrine::Storage::S3#urlnow handles URLs with path prefixes, provided that the URL ends with a slash.
    # old behaviour
    s3.url("foo", host: "https://example.com/prefix/") #=> "https://example.com/foo"

    # new behaviour
    s3.url("foo", host: "https://example.com/prefix/") #=> "https://example.com/prefix/foo"

- Fixed an error that would happen in upload_endpoint when uploading a file
whose filename contained certain combinations of UTF-8 characters.
- The Content-Type header in upload_endpoint and presign_endpoint responses
now specifies charset=utf-8.

- Shrine::Storage::S3 now uses Aws::S3::Object#upload_stream, if available,
when uploading large IO streams which are not file objects. It performs a
parallelized multipart upload, which can make such uploads finish up to 2x
faster.

- Shrine::Storage::S3 now uses Aws::S3::Object#upload_stream, if available,
when uploading files of unknown size (see the sketch below).
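For reference, here is roughly what an Aws::S3::Object#upload_stream call
looks like in aws-sdk-s3. This is a standalone sketch, not Shrine's internal
code; the bucket and object names are made up.

    require "aws-sdk-s3"

    object = Aws::S3::Resource.new.bucket("my-bucket").object("my-key")

    # #upload_stream yields a writable stream, splits the written content
    # into parts, and uploads the parts in parallel threads via the multipart
    # upload API, so the total size doesn't need to be known upfront
    object.upload_stream(thread_count: 10) do |write_stream|
      IO.copy_stream(io, write_stream) # `io` is your source IO object
    end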
- The file command could sometimes exit successfully, but print a "cannot
open: No such file or directory" error on stdout. This is now detected and a
Shrine::Error is raised.
- The upload_endpoint now returns an "Upload Not Valid" error message when the
file parameter is present but not in the correct format (previously "Upload
Not Found" was returned, which was a bit misleading).
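For illustration, a minimal sketch of that behaviour using Rack::MockRequest;
the exact response body shown is an assumption, not from the original notes:

    require "rack"

    Shrine.plugin :upload_endpoint

    app = Shrine.upload_endpoint(:cache)

    # the `file` parameter is sent as a plain string instead of an uploaded file
    response = Rack::MockRequest.new(app).post("/", params: { "file" => "not a file" })
    response.status #=> 400
    response.body   #=> "Upload Not Valid"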
Backwards compatibility
- When Shrine::Storage::S3 is initialized with an :endpoint that requires
:force_path_style, a file URL generated with a :host will not include the
bucket name in the URL path anymore. Users relying on the previous behaviour
should update their code to include the bucket name in the :host URL path:

    s3.url("foo", host: "https://example.com/my-bucket/")

- Using aws-sdk-s3 older than 1.14 with Shrine::Storage::S3 when uploading
files of unknown size is now deprecated and won't be supported in Shrine 3.
Also, in this case Shrine::Storage::S3 will now first copy the whole file
onto disk before uploading it (previously only a chunk of the input file was
copied to disk at a time). If you're uploading files of unknown size (ones
where #size is not defined or returns nil), you should upgrade to aws-sdk-s3
1.14 or higher.
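If you're unsure which version your application resolves to, you can pin the
dependency explicitly. A minimal Gemfile sketch:

    # Gemfile
    gem "aws-sdk-s3", "~> 1.14"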