In #21498, we split `BaseStore#download` into a "safe" version, which returns nil on errors, and an "unsafe" version, which raises an exception (the old behaviour of `#download`).
This change updates call sites that relied on the old exception-raising `#download` to use the new `#download!`, preserving behaviour (and silencing deprecation warnings).
It also silences the deprecation warning in tests.
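For illustration, the split can be sketched like this (a minimal sketch of the safe/unsafe relationship; the real `BaseStore` internals differ):

```ruby
# Illustrative sketch of a safe/unsafe download pair; not the actual
# Discourse BaseStore implementation.
class BaseStore
  # "Unsafe" variant: propagates any error raised while downloading.
  def download!(upload)
    perform_download(upload)
  end

  # "Safe" variant: swallows errors and returns nil instead.
  def download(upload)
    download!(upload)
  rescue StandardError
    nil
  end

  private

  # Placeholder for the store-specific download logic.
  def perform_download(upload)
    raise NotImplementedError
  end
end
```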
Previously, 'crop' would resize the image to have the requested width, then crop the height to the requested value. This works when cropping images vertically, but not when cropping them horizontally.
For example, trying to crop a 500x500 image to 200x500 was actually resulting in a 200x200 image. Having an OptimizedImage with width/height columns mismatching the actual OptimizedImage width/height causes some unusual issues.
This commit ensures that a call to `OptimizedImage.crop(from, to, width, height)` will always return an image of the requested width/height. The `w x h^` syntax defines minimum width/height, while maintaining aspect ratio.
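For reference, the `^` geometry modifier in ImageMagick treats the given width/height as minimum dimensions. A sketch of the kind of command this produces (the gravity and offset choices are assumptions for illustration, not the exact OptimizedImage code):

```ruby
# Sketch: resize so both dimensions are at least the target (the ^ modifier),
# then crop to exactly the requested size.
def crop_command(from, to, width, height)
  [
    "convert", from,
    "-resize", "#{width}x#{height}^",  # ^ = treat WxH as minimum dimensions
    "-gravity", "center",
    "-crop", "#{width}x#{height}+0+0", # cut down to the exact requested size
    to
  ]
end

# Cropping a 500x500 image to 200x500 now really produces 200x500:
# system(*crop_command("in.png", "out.png", 200, 500))
```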
Previously, certain images could cause `convert` / `identify` to run for unreasonable
amounts of time.
This adds a maximum amount of time these commands can run before forcing
them to stop.
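A minimal sketch of one way to enforce such a limit (the constant and helper below are hypothetical, not the actual implementation):

```ruby
require "timeout"

# Hypothetical limit; the value actually used by the commit is not shown here.
MAX_IMAGE_COMMAND_SECONDS = 30

# Spawn an external command (e.g. convert/identify) and kill it if it runs
# past the deadline instead of letting it hang indefinitely.
def run_with_deadline(*command, seconds: MAX_IMAGE_COMMAND_SECONDS)
  pid = Process.spawn(*command)
  Timeout.timeout(seconds) { Process.wait(pid) }
  $?.success?
rescue Timeout::Error
  Process.kill("KILL", pid)
  Process.wait(pid) # reap the killed child
  false
end

# run_with_deadline("convert", "huge.png", "-resize", "100x100", "out.png")
```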
* FEATURE - Add SiteSettings to control JPEG image quality
`recompress_original_jpg_quality` - the maximum quality of a newly
uploaded file.
`image_preview_jpg_quality` - the maximum quality of OptimizedImages
The dependency on gifsicle and the allow_animated_avatars and allow_animated_thumbnails
site settings were all removed. Animated GIF images are still allowed, but
the optimized images generated from them (used for avatars and thumbnails)
are no longer animated.
The added 'animated' column is populated by extracting information using FastImage.
This field is used to selectively reoptimize old animations, a process that
happens in the background.
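For illustration, FastImage exposes an animated check that can drive such a backfill (the helper below is a sketch, not the actual migration code):

```ruby
require "fastimage"

# Sketch: detect whether an existing upload is an animated image so the
# `animated` column can be backfilled in the background job.
def animated_upload?(path)
  FastImage.animated?(path) == true
end

# animated_upload?("avatar.gif")  # => true for an animated GIF
```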
`OptimizedImage#filesize` calls `Discourse.store.download` with an OptimizedImage as an argument. It would in turn attempt to call `#original_filename` and `#secure?` on that object. Both would fail as these methods do not exist on OptimizedImage, only on Upload. We didn't know about these issues because:
1. `#calculate_filesize` is not called often, because the filesize is saved on OptimizedImage creation, so it's used mostly for manual filesize recalculation
2. we were using `rescue nil` which swallows all errors
Previously we would raise a warning in the logs if downloading
a file (from S3) took longer than 60 seconds.
At scale this happens reasonably frequently.
1. Raised the duration to 3 minutes
2. Pulled the resizing mutex out of the downloading mutex,
so we have fewer and clearer error logs
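A self-contained sketch of the restructuring in point 2 (plain Mutex objects stand in for the distributed mutex; download and resize are placeholders):

```ruby
DOWNLOAD_LOCK = Mutex.new
RESIZE_LOCK = Mutex.new

# Placeholder stand-ins for the real download / resize steps.
def download(upload)
  "/tmp/#{upload}"
end

def resize(path)
  path
end

def fetch_and_resize(upload)
  # Before: resizing ran while the download lock was still held, so slow S3
  # downloads and slow resizes produced tangled warnings under one mutex.
  # After: each step takes its own lock, so a slow download no longer blocks
  # resizing and each step produces its own, clearer error logs.
  path = DOWNLOAD_LOCK.synchronize { download(upload) }
  RESIZE_LOCK.synchronize { resize(path) }
end
```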
Previously we had many places in the app that called `hostname` to get
the hostname of the server. This commit replaces that pattern in two ways:
1. We cache the result in `Discourse.os_hostname` so it is only ever computed once
2. We prefer to use Socket.gethostname, which avoids shelling out to a command
This improves performance, as we are no longer spawning `hostname` processes throughout
the app's lifetime.
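A minimal sketch of the cached lookup (the fallback to the `hostname` binary is an assumption for illustration):

```ruby
require "socket"

module Discourse
  # Cache the hostname so it is only computed once, and prefer
  # Socket.gethostname over spawning the `hostname` binary.
  def self.os_hostname
    @os_hostname ||=
      begin
        Socket.gethostname
      rescue StandardError
        `hostname`.strip # fallback sketch; the real fallback chain may differ
      end
  end
end
```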
### General Changes and Duplication
* We now consider a post `with_secure_media?` if it is in a read-restricted category.
* When uploading we now set an upload's secure status straight away.
* When uploading, if `SiteSetting.secure_media` is enabled, we do not check to see if the upload already exists using the `sha1` digest of the upload. The `sha1` column of the upload is filled with a `SecureRandom.hex(20)` value, which is the same length as `Upload::SHA1_LENGTH`. The `original_sha1` column is filled with the _real_ sha1 digest of the file (see the sketch after this list).
* Whether an upload `should_be_secure?` is now determined by whether the `access_control_post` is `with_secure_media?` (if there is no access control post then we leave the secure status as is).
* When serializing the upload, we now cook the URL if the upload is secure. This is so it shows up correctly in the composer preview, because we set secure status on upload.
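To illustrate the `sha1` handling above, a sketch using the column names from the commit (not the exact upload-creation code):

```ruby
require "digest"
require "securerandom"

# Sketch: when secure media is enabled, the public sha1 column gets a random
# hex string of the same length as a real sha1, while original_sha1 keeps the
# real digest of the file.
def sha1_columns_for(file_path, secure_media_enabled)
  real_sha1 = Digest::SHA1.file(file_path).hexdigest

  if secure_media_enabled
    {
      sha1: SecureRandom.hex(20), # 40 hex chars, same length as a sha1 digest
      original_sha1: real_sha1
    }
  else
    { sha1: real_sha1, original_sha1: nil }
  end
end
```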
### Viewing Secure Media
* The secure-media-upload URL takes into account the post that the upload is attached to, via `Guardian.can_see?`, for access permissions
* If there is no `access_control_post` then we just deliver the media (sketched below). This should be a rare occurrence and shouldn't cause issues, as the `access_control_post` is set when `link_post_uploads` is called via `CookedPostProcessor`
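A minimal sketch of that access check; `Guardian#can_see?` is the API named above, while the wrapper method is purely illustrative:

```ruby
# Sketch: permission check for serving secure media. If there is no access
# control post we deliver the media; otherwise the guardian must be able to
# see the post the upload is attached to.
def can_access_secure_media?(guardian, upload)
  post = upload.access_control_post
  return true if post.nil?

  guardian.can_see?(post)
end
```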
### Removed
We no longer do any of these because we do not reuse uploads by sha1 if secure media is enabled.
* We no longer have a way to prevent cross-posting of a secure upload from a private context to a public context.
* We no longer have to set `secure: false` for uploads when uploading for a theme component.
Zeitwerk simplifies working with dependencies in dev and makes it easier to reload class chains.
We no longer need to use Rails' `require_dependency` anywhere and can instead use standard
Ruby patterns to require files.
This is a far-reaching change and we expect some follow-ups here.
This reduces the chance of errors where consumers of strings mutate inputs,
and reduces the app's memory usage.
The test suite passes now, but there may be some leftovers, so we will run
a few sites on a branch prior to merging.
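This description matches freezing string literals; assuming that is what the commit enables, a minimal example of the behaviour change:

```ruby
# frozen_string_literal: true

# With frozen string literals, literals are immutable (and deduplicated), so a
# consumer that tries to mutate an input fails loudly instead of silently
# corrupting a shared string.
GREETING = "hello"

begin
  GREETING << " world"
rescue FrozenError => e
  puts "mutation rejected: #{e.message}"
end
```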
Treating TIFF and BMP as images caused us to add them to IMG tags, which is very inconsistent across browsers.
You can still upload these files; they simply will not be displayed in IMG tags.
Previously we only allowed one image optimization per machine. This meant there
was crosstalk between avatar resizing and Sidekiq, which could lead to large
amounts of starvation when the optimized image version changed, which in turn could
block the Sidekiq queue.
This increases the amount of allowed load on machines, but that is preferable to
having crosstalk between avatar resizing and Sidekiq.
Previously we had no idea which algorithm generated a thumbnail; this starts tracking the version.
We also bumped up the version to force all optimized images to be regenerated. This is important because we recently introduced pngquant, which results in much smaller images.
This feature ensures optimized images are run through pngquant, which results in substantial size savings for resized images. Effectively the only impact is that the color palette on small resized images is reduced to 256 colors.
To ensure safety we only apply this optimisation to images smaller than 500k.
This commit also makes a bunch of image specs less fragile.
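A sketch of how such a guard might look, using the 500k threshold stated above (the pngquant flags are standard, but the exact invocation in the codebase may differ):

```ruby
MAX_PNGQUANT_BYTES = 500_000 # only quantize reasonably small resized images

# Sketch: run pngquant over a resized PNG, reducing it to a 256-colour
# palette, but skip files over the size threshold to stay on the safe side.
def pngquant!(path)
  return false if File.size(path) > MAX_PNGQUANT_BYTES

  # --force allows overwriting the output; --output writes back over `path`.
  system("pngquant", "--force", "--output", path, "256", "--", path)
end
```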
This generates a 10x10 PNG thumbnail for each lightboxed image.
If Image Lazy Loading is enabled (IntersectionObserver API) then
we'll load the low res version when offscreen. As the image scrolls
in we'll swap it for the high res version.
We use a WeakMap to track the old image attributes. This uses much less
memory than storing them as `data-*` attributes and swapping them
back and forth all the time.
We regressed, and optimized images no longer worked with SVGs.
The following adds the correct logic to simply copy the file for SVGs
and bypasses resizing for SVG avatars.
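A sketch of the bypass, assuming a simple extension check (the helper names are illustrative):

```ruby
require "fileutils"

def svg?(path)
  File.extname(path).downcase == ".svg"
end

# Sketch: SVGs are vector images, so "resizing" is just copying the original
# file; raster formats continue through the normal resize path (not shown).
def copy_if_svg(from, to)
  return false unless svg?(from)

  FileUtils.cp(from, to)
  true
end
```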
If we detect Redis is in readonly mode, we cannot correctly acquire a mutex,
so we raise an exception to notify the caller.
When getting optimized images, avoid the distributed mutex unless,
for some reason, it is the first call and we need to generate a thumbnail.
In Redis readonly mode, no thumbnails will be generated.
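A self-contained sketch of that flow; the class and method names below are placeholders, not Discourse's actual API:

```ruby
class ThumbnailCache
  class ReadOnlyError < StandardError; end

  def initialize(readonly: false)
    @thumbs = {}
    @lock = Mutex.new
    @readonly = readonly
  end

  # Fast path: most calls find an existing thumbnail and never touch the mutex.
  # Slow path: only the first call for a key takes the lock to generate it, and
  # in readonly mode we refuse to take the lock at all and raise instead.
  def fetch(key)
    return @thumbs[key] if @thumbs.key?(key)

    raise ReadOnlyError, "readonly mode, refusing to take the mutex" if @readonly

    @lock.synchronize { @thumbs[key] ||= yield }
  end
end

# cache = ThumbnailCache.new
# cache.fetch("upload-42-100x100") { generate_thumbnail } # locks only on first call
```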
Previously we used width and height for thumbnails; the new code ensures:
1. We auto-correct width and height
2. We added extra columns for thumbnail_width and thumbnail_height; these are determined
by the actual upload and no longer passed in as a side effect
3. OptimizedImage now stores filesize, which can be used for analysis and decisions
Also
- fixes Android image manifest as a side effect
- fixes an issue where a generated thumbnail that is smaller than the upload is no longer used
In the past, the filename of the original was used as the source
of the file extension when optimizing on upload.
We now use the actual calculated extension based on the upload data.
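A sketch of deriving the extension from the image data itself rather than the original filename (FastImage is used here for illustration; the actual helper may differ):

```ruby
require "fastimage"

# Sketch: a file named "photo.png" that is really a JPEG gets "jpeg" here,
# because the type comes from the bytes, not the name.
def calculated_extension(path)
  type = FastImage.type(path) # => :png, :jpeg, :gif, ... or nil
  type ? type.to_s : File.extname(path).delete_prefix(".")
end
```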