Constants should only be assigned once. Logical OR assignment of a
constant (`||=`) is a relic from before we used zeitwerk for
autoloading, when bugs meant a file could be loaded twice, resulting in
constant redefinition warnings.
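For illustration (hypothetical constant name), the pattern being phased out versus the preferred form:

```ruby
# Hypothetical constant. Before zeitwerk, a file might be required twice, so
# constants were guarded to avoid "already initialized constant" warnings:
#
#   MAX_EXTRACTED_LINKS ||= 50
#
# With zeitwerk each file is loaded exactly once, so a plain, single assignment
# is all that is needed:
MAX_EXTRACTED_LINKS = 50
```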
The `-ping` option significantly speeds up the ImageMagick `identify` command per our testing and the [documentation](https://imagemagick.org/script/command-line-options.php#ping):
> -ping
> Efficiently determine these image characteristics: image number, the file name, the width and height of the image, whether the image is colormapped or not, the number of colors in the image, the number of bytes in the image, the format of the image (JPEG, PNM, etc.). Use +ping to ensure accurate image properties.
We already pass the `-ping` option in other places where the `identify` command is used, so it makes sense to use the option everywhere.
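For example, a minimal illustrative invocation (file name is hypothetical):

```ruby
# -ping reads only enough of the file to report basic properties such as
# dimensions, instead of decoding the entire image.
dimensions = `identify -ping -format "%wx%h" photo.jpg`.strip
# => e.g. "4032x3024"
```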
Internal topic: t/121431.
This commit changes `max_image_megapixels` to be used
as-is, without multiplying by 2 to give extra leeway.
In practice this was just causing confusion
for admins, especially with the already permissive
40MP default.
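A minimal sketch of the new check (assuming Discourse's `SiteSetting` is available; the surrounding method name is illustrative):

```ruby
def exceeds_pixel_limit?(width, height)
  # Previously the limit was effectively doubled:
  #   SiteSetting.max_image_megapixels * 1_000_000 * 2
  width * height > SiteSetting.max_image_megapixels * 1_000_000
end
```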
* FIX: Video thumbnails can have duplicates
It's possible that a duplicate video, or even a very similar video, could
generate the same video thumbnail. Because video thumbnails are mapped
to their corresponding video using the video's sha1 in the thumbnail
filename, we need to allow for duplicate thumbnails; otherwise, even when a
thumbnail has been generated for a topic, it will not be mapped
correctly.
This will also allow you to re-upload a video on the same topic to
regenerate the thumbnail.
* fix typo
This was inadvertently removed in 4c46c7e. In very specific scenarios,
this could be exploited to execute arbitrary JavaScript.
Only affects instances where SVGs are allowed as uploads and a CDN is not
configured.
* Firefox now finally returns PerformanceMeasure from performance.measure
* Some TODOs were really more NOTE or FIXME material or no longer relevant
* retain_hours is not needed in ExternalUploadsManager; nothing in the UI seems to send it as a param for uploads
* https://github.com/discourse/discourse/pull/18413 was merged so we can remove JS test workaround for settings
This commit renames all secure_media related settings to secure_uploads_* along with the associated functionality.
This is being done because "media" does not really cover it; we aren't just doing this for images and videos but for all uploads on the site.
Additionally, in the future we want to secure more types of uploads and enable a kind of "mixed mode" where some uploads are secure and some are not, so keeping "media" in the name is just confusing.
This also keeps compatibility with the `secure-media-uploads` path, and changes new
secure URLs to be `secure-uploads`.
Deprecated settings:
* secure_media -> secure_uploads
* secure_media_allow_embed_images_in_emails -> secure_uploads_allow_embed_images_in_emails
* secure_media_max_email_embed_image_size_kb -> secure_uploads_max_email_embed_image_size_kb
We previously had a system which would generate a 10x10px preview of images and add their URLs in a data-small-upload attribute. The client would then use that as the background-image of the `<img>` element. This works reasonably well on fast connections, but on slower connections it can take a few seconds for the placeholders to appear. The act of loading the placeholders can also break or delay the loading of the 'real' images.
This commit replaces the placeholder logic with a new approach. Instead of a 10x10px preview, we use imagemagick to calculate the average color of an image and store it in the database. The hex color value is then added as a `data-dominant-color` attribute on the `<img>` element, and the client can use this as a `background-color` on the element while the real image is loading (a sketch of the color extraction appears after the list below). That means no extra HTTP request is required, so the placeholder color can appear instantly.
Dominant color will be calculated:
1. When a new upload is created
2. During a post rebake, if the dominant color is missing from an upload, it will be calculated and stored
3. Every 15 minutes, 25 old upload records are fetched and their dominant color calculated and stored. (part of the existing PeriodicalUpdates job)
Existing posts will continue to use the old 10x10px placeholder system until they are next rebaked.
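A minimal sketch of the color extraction (not necessarily the exact ImageMagick invocation used): resize the image down to a single averaged pixel and read its hex value.

```ruby
# Resizing to 1x1 averages every pixel; the txt: coder prints that single
# remaining pixel, e.g. "0,0: (94,87,74)  #5E574A  srgb(94,87,74)".
def dominant_color(path)
  output = IO.popen(["convert", path, "-resize", "1x1", "txt:-"], &:read)
  output[/#(\h{6})/, 1] # => e.g. "5E574A", emitted as data-dominant-color
end
```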
This is a fix to address blurry onebox favicon images if the site you
are linking to happens to have a favicon.ico file that contains multiple
images.
This fix detects whether we are trying to create an upload for a favicon.ico
file. We then convert it to a png rather than a jpeg as we were doing before.
We want a png because it preserves transparency; if we
convert it to a jpeg we lose that, and it looks bad on dark-themed sites.
This fix also addresses the fact that .ico files can include multiple
images. The blurry images we were producing were caused by the
ImageMagick `-flatten` option, which squishes all of the images together
when the .ico file contains more than one. So for .ico files we are no
longer flattening them; instead we grab the last image in the
.ico bundle and convert that single image to a png.
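A sketch of the idea (paths are hypothetical); ImageMagick's bracketed frame index, where a negative index counts from the end, selects a single image from the .ico bundle:

```ruby
# "[-1]" grabs the last image embedded in the .ico instead of flattening every
# size into one blurry composite; converting to png keeps transparency.
system("convert", "favicon.ico[-1]", "favicon.png")
```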
This PR introduces a new `enable_experimental_backup_uploads` site setting (default false and hidden), which when enabled alongside `enable_direct_s3_uploads` will allow for direct S3 multipart uploads of backup .tar.gz files.
To make multipart external uploads work with both the S3BackupStore and the S3Store, I've had to move several methods out of S3Store and into S3Helper, including:
* presigned_url
* create_multipart
* abort_multipart
* complete_multipart
* presign_multipart_part
* list_multipart_parts
Then, S3Store and S3BackupStore either delegate directly to S3Helper or have their own special methods that call S3Helper for these operations. FileStore.temporary_upload_path no longer depends on upload_path, so it can be used interchangeably between the stores. A similar change was made on the frontend: the multipart-related JS code was moved out of ComposerUppyUpload and into a mixin of its own, so it can also be used by UppyUploadMixin.
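Roughly, the delegation looks like this (a sketch; the method names come from the list above, everything else is assumed):

```ruby
require "active_support/core_ext/module/delegation"

class S3Store
  attr_reader :s3_helper

  def initialize(s3_helper)
    @s3_helper = s3_helper
  end

  # The multipart primitives now live on S3Helper; the store simply forwards.
  delegate :abort_multipart, :presign_multipart_part, :list_multipart_parts,
           to: :s3_helper
end
```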
Some changes to ExternalUploadManager had to be made here as well. The backup direct uploads do not need an Upload record made for them in the database, so they can be moved to their final S3 resting place when completing the multipart upload.
This changeset is not perfect; it introduces some special cases in UploadController to handle backups that were previously in BackupController, because UploadController is where the multipart routes are located. A subsequent pull request will pull these routes into a module or some other sharing pattern, along with hooks, so the backup controller and the upload controller (and any future controllers that may need them) can include these routes in a nicer way.
The file size error messages for max_image_size_kb and
max_attachment_size_kb are shown to the user in the KB
format, regardless of how large the limit is. Since we
are going to support uploading much larger files soon,
this KB-based format becomes unfriendly to the end
user.
For example, if the max attachment size is set to 512000
KB, this is what the user sees:
> Sorry, the file you are trying to upload is too big (maximum
size is 512000KB)
This makes the user do math. In almost all file explorers that
a regular user would be familiar with, the file size is shown
in a format based on the largest applicable unit (e.g. KB, MB, GB).
This commit changes the behaviour to output a humanized file size
instead of the raw KB. For the above example, it would now say:
> Sorry, the file you are trying to upload is too big (maximum
size is 512 MB)
This humanization also handles decimals, e.g. 1536KB = 1.5 MB
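A minimal sketch of the humanization (helper name and rounding rules are assumed, not Discourse's exact implementation):

```ruby
def humanize_kilobytes(kilobytes)
  units = %w[KB MB GB TB]
  size = kilobytes.to_f
  unit = units.shift
  while size >= 1024 && units.any?
    size /= 1024
    unit = units.shift
  end
  value = (size % 1).zero? ? size.to_i : size.round(1)
  "#{value} #{unit}"
end

humanize_kilobytes(1536) # => "1.5 MB"
```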
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags, which are returned when each part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
  After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary S3 upload into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file, captured when `create-multipart.json` or `generate-presigned-put.json` is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. If the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than they first specified, or, in the case of multipart uploads, uploads larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
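A sketch of that validation (names are assumed, and the S3 object is presumed to expose an `Aws::S3::Object`-like interface; the real checks live in `ExternalUploadManager`):

```ruby
# Compare the size the client declared when the stub was created with the size
# S3 actually stored; a mismatch means the client uploaded something different.
def validate_external_size!(external_stub, s3_object)
  return if external_stub.filesize == s3_object.content_length

  s3_object.delete
  rate_limit_uploads!(external_stub.created_by, minutes: 5) # hypothetical helper
  raise "reported filesize does not match the uploaded object"
end
```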
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are sent to a temporary location in S3 with the direct-to-S3 code; they are eventually deleted a) when the direct upload is completed and b) after a certain period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy uploads the file directly to S3 using the presigned URL. Once the upload is complete, we move to the next stage.
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100MB, defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download it or generate its SHA1. Instead we create the `Upload` record via `UploadCreator`, simply copy the file to its final destination on S3, and then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby (a sketch of this comparison follows the list). The browser SHA1 checksum is stored as metadata on the object in S3 and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally, we return the serialized upload record back to the client.
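A minimal sketch of the checksum comparison in step 2 (the metadata key and method name are assumptions):

```ruby
require "digest"

# The browser-side SHA1 is written to the S3 object's metadata by the
# UppyChecksum plugin; some browsers never produce one, so its absence is not
# treated as a failure.
def checksum_matches?(s3_metadata, downloaded_file_path)
  client_sha1 = s3_metadata["sha1-checksum"]
  return true if client_sha1.nil?

  client_sha1 == Digest::SHA1.file(downloaded_file_path).hexdigest
end
```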
Several errors that could happen along the way are handled by `UploadsController` as well.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
This change largely targets dev users, but it could potentially change
behaviour in production.
Jamie Wilson & I debugged a problem where "should not be larger than the
maximum thumbnail size" would fail due to timeouts.
On our systems, with ImageMagick 7.1.0-2 and inkscape installed, IM would
attempt to rasterise the svg and then check the resulting filesize, causing the
test to time out.
We haven't yet found a way to make this behave better, but as a
workaround, forcing IM to use the internal renderer (`MSVG:`) seems to
make it perform the same on development workstations as it does in our docker
container.
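For illustration, forcing the internal renderer is a matter of prefixing the input with the `MSVG:` coder (file names are hypothetical):

```ruby
# The MSVG: prefix makes ImageMagick use its built-in SVG renderer rather than
# delegating to inkscape/rsvg, which is what made the behaviour consistent here.
system("convert", "MSVG:icon.svg", "icon.png")
```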
The other processing operations, such as fixing orientation or cropping,
can in rare cases increase the size of the uploaded image. Running the
downsize step after all these operations should create the best image
possible.
* FIX: Allow SVG uploads if dimensions are a fraction of a unit
`UploadCreator` counts the number of pixels in a file to determine if it is valid. `pixels` is calculated by multiplying the width and height of the image, as determined by FastImage.
SVG files can have their width/height expressed in a variety of different units of measurement. For example, ‘px’, ‘in’, ‘cm’, ‘mm’, ‘pt’, ‘pc’, etc. are all valid within SVG files. If an image has a width of `0.5in`, FastImage may interpret this as a width of `0`, meaning it will report the `size` as `0`.
However, we don’t need to concern ourselves with the number of ‘pixels’ in an SVG file, as that is irrelevant for this file format, so we can skip the check for `pixels == 0` when processing this file type.
* DEV: Speed up getting SVG dimensions
The `-ping` flag prevents the entire image from being rasterized before a result is returned. See:
https://imagemagick.org/script/command-line-options.php#ping
Since we use the event to perform additional validations on the file, we should check if it added any errors to the upload before saving it. This change makes the UploadCreator more consistent since we no longer have to rely on exceptions.
The main image_optim gem now includes the timeout feature
that we had in our fork, so it is now safe to switch off our fork and
back to the upstream image_optim gem.
This is the link to the commit in the image_optim repo that adds the
timeout option:
ec3767dde0
One difference with the new timeout implementation is that image_optim
now handles the timeout exceptions instead of bubbling them up:
1ed0328587/lib/image_optim.rb (L128-L129)
```ruby
rescue Errors::TimeoutExceeded
handler.result
```
So a timeout will just return `nil`, which is the same response as when an
image couldn't be optimized. I don't think we were really watching for
or doing anything about these timeout warnings in our logs, so I think
this is an okay change to have, and we will have fewer warnings in our
logs now too.
Previously, certain images could cause convert / identify to run for unreasonable
amounts of time.
This adds a maximum amount of time these commands can run before they are
forcibly stopped.
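A minimal sketch of the idea (wrapper name assumed): run the external command under coreutils' `timeout` so it cannot run indefinitely.

```ruby
# Kills the ImageMagick process if it exceeds the allowed number of seconds.
def run_with_time_limit(seconds, *command)
  system("timeout", seconds.to_s, *command) ||
    raise("command failed or exceeded #{seconds}s: #{command.join(' ')}")
end

run_with_time_limit(15, "convert", "in.png", "-resize", "1024x1024", "out.png")
```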
SVG files can have dimensions expressed in inches, centimeters, etc., which may lead to the dimensions being misinterpreted (e.g. “8in” ends up as 8 pixels).
If the file type is `svg`, ask ImageMagick to work out what size the SVG file should be rendered on screen.
NOTE: The `pencil.svg` file was obtained from https://freesvg.org/1534028868, which has placed the file into the public domain.
`convert_to_jpeg!` is only called if `convert_png_to_jpeg?` and/or `should_alter_quality?` is true.
`convert_png_to_jpeg?` can be disabled by setting `SiteSetting.png_to_jpg_quality` to 100.
However, `should_alter_quality?` could be true if `SiteSetting.recompress_original_jpg_quality` was lower than the quality of the uploaded file, regardless of file type.
This commit changes `should_alter_quality?` so that uploaded png files use the `SiteSetting.png_to_jpg_quality` value, rather than the `SiteSetting.recompress_original_jpg_quality` value.
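A sketch of the corrected check (setting names from above; the method shape is assumed rather than the exact Discourse code):

```ruby
def should_alter_quality?(extension, current_quality)
  target_quality =
    if extension == "png"
      SiteSetting.png_to_jpg_quality
    else
      SiteSetting.recompress_original_jpg_quality
    end

  # Only recompress when a target is configured and it is lower than the
  # quality of the uploaded file.
  target_quality.to_i > 0 && target_quality < current_quality
end
```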
This PR adds security_last_changed_at and security_last_changed_reason to uploads. This has been done to make it easier to track down why an upload's secure column has changed and when. This necessitated a refactor of the UploadSecurity class to provide reasons why the upload security would have changed.
As well as this, a source is now provided from the location that called for the upload's security status to be updated, since there are several (e.g. post creator, topic security updater, rake tasks, manual change).
It was a problem because during this operation only the first frame
is kept. This commit removes the alternative solution to check if a GIF
image is animated.
Animated emojis were converted to static images. This commit moves the
responsibility onto site admins to optimize their animated emojis before
uploading them (gifsicle is no longer used).
* FEATURE - Add SiteSettings to control JPEG image quality
`recompress_original_jpg_quality` - the maximum quality of a newly
uploaded file.
`image_preview_jpg_quality` - the maximum quality of OptimizedImages
The dependency on gifsicle, along with the allow_animated_avatars and
allow_animated_thumbnails site settings, has been removed. Animated GIF images
are still allowed, but the generated optimized images (which were used for
avatars and thumbnails) are no longer animated.
The added 'animated' column is populated by extracting information using FastImage.
This field is used to selectively reoptimize old animations; this process
happens in the background.
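A minimal sketch of how the column can be populated (the method and variable names are assumed):

```ruby
require "fastimage"

# FastImage can report whether an image has multiple frames without fully
# decoding it; the result is stored on the upload's new boolean column.
def populate_animated_flag(upload, local_path)
  upload.update!(animated: FastImage.animated?(local_path))
end
```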
* strip out the href and xlink:href attributes from `use` elements that
are _not_ anchors in SVGs, since these can be used for XSS
* adding `content-disposition: attachment` ensures that
uploaded SVGs cannot be opened and executed using the XSS exploit.
SVGs embedded using an `img` tag do not suffer from the same exploit
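A sketch of the attribute stripping using Nokogiri (an assumption; the actual sanitization code may differ):

```ruby
require "nokogiri"

# Remove href/xlink:href from <use> elements unless they point at an
# in-document anchor ("#id"); external references are the XSS vector.
def strip_unsafe_use_hrefs(svg_source)
  doc = Nokogiri::XML(svg_source)
  doc.xpath("//*[local-name()='use']").each do |use|
    use.attribute_nodes.each do |attr|
      next unless attr.name == "href" # matches both href and xlink:href
      attr.remove unless attr.value.to_s.start_with?("#")
    end
  end
  doc.to_xml
end
```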
The previous fix (f43c0a5d85) wasn't working for images that were already uploaded.
The "metadata" (eg. 'for_*' and 'secure' attributes) were not added to existing uploads.
Also used 'Upload.get_from_url' is the admin/site_setting controller to properly retrieve
an upload from its URL.
Fixed the Upload::URL_REGEX to use the \h (hexadecimal) for the SHA
Follow-up-to: f43c0a5d85
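For reference, `\h` is Ruby's regex shorthand for a hexadecimal digit, so the sha1 portion of an upload URL can be matched like this (the path shown is illustrative):

```ruby
SHA1_PATTERN = /\h{40}/

"uploads/default/original/1X/f62055a3fee0f5b6b4c8996b57c04a3d507600cd.png"[SHA1_PATTERN]
# => "f62055a3fee0f5b6b4c8996b57c04a3d507600cd"
```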