Previously we were opening the file handles, then putting them in a queue for upload. If that queue grows too large, we can hit a maximum open files limit.
This commit opens the file handle 'just in time', so the maximum number of open handles is equal to the upload concurrency (20).
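As a rough sketch of the pattern (not the exact code in this commit; `paths` and `upload_to_s3` are stand-ins), the queue now carries plain paths and each worker opens a handle only for the duration of its own upload:
```ruby
queue = Queue.new
paths.each { |path| queue << path } # enqueue paths, not open handles

20.times.map { # matches the upload concurrency
  Thread.new do
    while (path = queue.pop(true) rescue nil)
      # handle is opened just in time and closed as soon as the upload finishes
      File.open(path, "rb") { |file| upload_to_s3(file) }
    end
  end
}.each(&:join)
```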
This commit adds a hidden `s3_inventory_bucket_region` site setting to
specify the region of the `s3_inventory_bucket` when the `S3Inventory`
class initializes an instance of the `S3Helper`. By default, the
`S3Helper` class uses the value of the `s3_region` site setting, but the
region of the `s3_inventory_bucket` is not always the same as the
configured `s3_region`.
This commit introduces a hidden `s3_inventory_bucket` site setting which
replaces the `enable_s3_inventory` and `s3_configure_inventory_policy`
site settings.
The `enable_s3_inventory` and `s3_configure_inventory_policy` site
settings are being removed because this feature has technically been
broken since it was introduced. When the `enable_s3_inventory` feature
is turned on, the app configures a daily inventory policy for the
`s3_upload_bucket` bucket and stores the inventories under a prefix in
the bucket. The problem is that once the inventories are created,
nothing cleans them up, so whoever enabled this feature would have been
paying the cost of storing a whole bunch of inventory files which are
never used. Given that we have not received any complaints about
inventory files inflating S3 storage costs, we think it is very likely
that this feature is no longer being used, and we are looking to drop
support for it in the not too distant future.
For now, we will still support a hidden `s3_inventory_bucket` site
setting which site administrators can configure via the
`DISCOURSE_S3_INVENTORY_BUCKET` env var.
Followup to fe05fdae24
For consistency with the other S3 settings, give the global setting
the same name as the site setting and use SiteSetting.Upload
too, so it reads from the correct place.
This commit adds an `enable_s3_transfer_acceleration` site setting,
which is hidden to begin with. We are adding this because in certain
regions, using https://aws.amazon.com/s3/transfer-acceleration/ can
drastically speed up uploads, sometimes by as much as 70% depending
on the target bucket region. This is important for
us because we have direct S3 multipart uploads enabled everywhere
on our hosting.
To start, we only want this on the uploads bucket, not the backup one.
Also, this will accelerate both uploads **and** downloads, depending
on whether a presigned URL is used for downloading. This is the case
when secure uploads is enabled, not anywhere else at this time. To
enable the S3 acceleration on downloads more generally would be a
more in-depth change, since we currently store S3 Upload record URLs
like this:
```
url: "//test.s3.dualstack.us-east-2.amazonaws.com/original/2X/6/123456.png"
```
For acceleration, `s3.dualstack` would need to be changed to `s3-accelerate.dualstack`
here.
Note that for this to have any effect, Transfer Acceleration must be enabled
on the S3 bucket used for uploads per https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration-examples.html.
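For illustration only (not the setting plumbing in this commit), the aws-sdk-s3 client exposes acceleration via `use_accelerate_endpoint`, which routes presigned URLs and uploads through the `s3-accelerate` host:
```ruby
require "aws-sdk-s3"

client = Aws::S3::Client.new(
  region: "us-east-2",            # example region
  use_accelerate_endpoint: true   # requires acceleration to be enabled on the bucket
)

bucket = Aws::S3::Resource.new(client: client).bucket("test") # example bucket
puts bucket.object("original/2X/6/123456.png").presigned_url(:put, expires_in: 300)
```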
This commit adds some system specs to test uploads with
direct to S3 single and multipart uploads via uppy. This
is done with minio as a local S3 replacement. We are doing
this to catch regressions when uppy dependencies need to
be upgraded or we change uppy upload code, since before
this there was no way to know outside manual testing whether
these changes would cause regressions.
Minio's server lifecycle and the installed binaries are managed
by the https://github.com/discourse/minio_runner gem, though the
binaries are already installed on the discourse_test image we run
GitHub CI from.
These tests will only run in CI; to run them locally you must
explicitly set the CI=1 or RUN_S3_SYSTEM_SPECS=1 env vars.
For a history of experimentation here see https://github.com/discourse/discourse/pull/22381
Related PRs:
* https://github.com/discourse/minio_runner/pull/1
* https://github.com/discourse/minio_runner/pull/2
* https://github.com/discourse/minio_runner/pull/3
This adds an option to allow non-image s3 files to be downloaded through CDN URL.
Addresses the issues in:
* meta.discourse.org/t/s3-cdn-url-not-being-used-on-non-image-uploads/175332
* meta.discourse.org/t/s3-uploads-using-cdn-for-pdfs/213218
AWS recommends running buckets without ACLs, and to use resource policies to manage access control instead.
This is not a bad idea, because S3 ACLs are whack, and while resource policies are also whack, they're a more constrained form of whack.
Further, some compliance regimes get antsy if you don't go with the vendor's recommended settings, and arguing that you need to enable ACLs on a bucket just to store images in there is more hassle than it's worth.
The new site setting (s3_use_acls) cannot be disabled when secure
uploads is enabled -- the latter relies on private ACLs for security
at this point in time. We may want to reexamine this in future.
In #21498, we split `BaseStore#download` into a "safe" version which returns nil on errors, and an "unsafe" version which raises an exception, which was the old behaviour of `#download`.
This change updates call sites that used the old `#download`, which raised exceptions, to use the new `#download!` to preserve behaviour (and silence deprecation warnings.)
It also silences the deprecation warning in tests.
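As a sketch of the call-site change (`upload` stands for any Upload record, `process` is a placeholder for downstream handling):
```ruby
# Old behaviour (raise on failure) is preserved by switching to the bang method:
file = Discourse.store.download!(upload)

# The non-bang method is now the "safe" variant and returns nil on errors:
if (file = Discourse.store.download(upload))
  process(file)
end
```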
When setting the ACL for optimized images after setting the
ACL for the linked upload (e.g. via the SyncACLForUploads job),
we were using the optimized image path as the S3 key. This worked
for single sites; however, it failed silently for multisite
sites, since the path was incorrect because Discourse.store.upload_path
was not included.
For example, something like this:
somecluster1/optimized/2X/1/3478534853498753984_2_1380x300.png
Instead of:
somecluster1/uploads/somesite1/2X/1/3478534853498753984_2_1380x300.png
The silent failure is still intentional, since we don't want to
break other things because of ACL updates, but now we will update
the ACL correctly for optimized images on multisite sites.
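A minimal sketch of the corrected key construction (helper names are assumptions based on the store API mentioned above):
```ruby
# The key must include the multisite-aware upload path, not just the
# optimized image path.
path = Discourse.store.get_path_for_optimized_image(optimized_image)
s3_key = File.join(Discourse.store.upload_path, path)
```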
The #pluck_first freedom patch, first introduced by @danielwaterworth, has served us well, and is used widely throughout both core and plugins. It seems to have been a common enough use case that Rails 6 introduced its own method #pick with the exact same implementation. This allows us to retire the freedom patch and switch over to the built-in ActiveRecord method.
There is no replacement for #pluck_first!, but a quick search shows we are using this in a very limited capacity, and in some cases incorrectly (by assuming a nil return rather than an exception), which can quite easily be replaced with #pick plus some extra handling.
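For example (illustrative query; any model works the same way):
```ruby
# Before: freedom-patched #pluck_first
# title = Topic.where(id: topic_id).pluck_first(:title)

# After: Rails 6's built-in #pick, identical behaviour
title = Topic.where(id: topic_id).pick(:title)

# #pluck_first! has no direct replacement; #pick plus explicit handling covers it:
raise Discourse::NotFound if title.nil?
```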
These errors tend to indicate that the upload is missing on the remote store. This is bad, but we don't want it to block the dominant-color calculation process. This commit catches errors when there is an HTTP error, and fixes the `base_store.rb` implementation when `FileHelper.download` returns nil.
Previously we hardcoded the DOWNLOAD_URL_EXPIRES_AFTER_SECONDS const
inside S3Helper to be 5 minutes (300 seconds). For various reasons,
some hosted sites may need this to be longer for other integrations.
The maximum expiry time for presigned URLs is 1 week (which is
604800 seconds), so that has been added as a validation on the
setting as well. The setting is hidden because 99% of the time
it should not be changed.
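For reference, a sketch of how the expiry feeds into the SDK's presigned URL generation (the setting name shown here is illustrative):
```ruby
require "aws-sdk-s3"

object = Aws::S3::Resource.new(region: SiteSetting.s3_region)
           .bucket(SiteSetting.s3_upload_bucket)
           .object("original/2X/6/123456.png")

# hidden setting; validated against the 604_800 second (1 week) maximum
expires = SiteSetting.s3_presigned_get_url_expires_after_seconds
url = object.presigned_url(:get, expires_in: expires)
```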
Sometimes we need to update a _lot_ of ACLs on S3 (such as when secure media
is enabled), and since it takes ~1s per upload to update the ACL, this is
best spread out over many jobs instead of having to do the whole thing serially.
In future, it will be better to have a job that can be run based on
a column on uploads (e.g. acl_stale) so we can track progress, similar
to how we can set the baked_version to nil to rebake posts.
* File.exists? is deprecated and removed in Ruby 3.2 in favor of
File.exist?
* Dir.exists? is deprecated and removed in Ruby 3.2 in favor of
Dir.exist?
This PR introduces a new `enable_experimental_backup_uploads` site setting (default false and hidden), which when enabled alongside `enable_direct_s3_uploads` will allow for direct S3 multipart uploads of backup .tar.gz files.
To make multipart external uploads work with both the S3BackupStore and the S3Store, I've had to move several methods out of S3Store and into S3Helper, including:
* presigned_url
* create_multipart
* abort_multipart
* complete_multipart
* presign_multipart_part
* list_multipart_parts
Then, S3Store and S3BackupStore either delegate directly to S3Helper or have their own special methods to call S3Helper for these methods. FileStore.temporary_upload_path has also removed its dependence on upload_path, and can now be used interchangeably between the stores. A similar change was made in the frontend as well, moving the multipart related JS code out of ComposerUppyUpload and into a mixin of its own, so it can also be used by UppyUploadMixin.
Some changes to ExternalUploadManager had to be made here as well. The backup direct uploads do not need an Upload record made for them in the database, so they can be moved to their final S3 resting place when completing the multipart upload.
This changeset is not perfect; it introduces some special cases in UploadController to handle backups that were previously in BackupController, because UploadController is where the multipart routes are located. A subsequent pull request will pull these routes into a module or some other sharing pattern, along with hooks, so the backup controller and the upload controller (and any future controllers that may need them) can include these routes in a nicer way.
We are only using list_multipart_parts right now in the
uploads controller for multipart uploads to check if the
upload exists; thus we don't need up to 1000 parts.
Also adding a note for future explorers that list_multipart_parts
only gets 1000 parts max, and adding params for max parts
and starting parts.
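A sketch of the existence check with the new params (record and setting names are illustrative):
```ruby
client = Aws::S3::Client.new(region: SiteSetting.s3_region)

client.list_parts(
  bucket: SiteSetting.s3_upload_bucket,
  key: external_stub.key,
  upload_id: external_stub.external_upload_identifier,
  max_parts: 1,          # we only need to know the multipart upload exists
  part_number_marker: 0
)
# raises Aws::S3::Errors::NoSuchUpload if the multipart upload is gone
```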
Previously, when retrieving the upload URLs in a post, S3 CDN URLs with a different path prefix (technically external URLs) were treated as uploaded URLs. This caused issues when checking for missing uploads.
When copying an existing upload stub temporary object
on S3 to its final destination we were not copying across
its additional headers such as content-disposition and
cache-control, which led to issues like attachments not
downloading with their original filename when clicking
the download links in posts.
This is because the metadata_directive = REPLACE option
was not being passed to object.copy_from(), so only the
source object's headers were being used. Added an option
for apply_metadata_to_destination to apply this option
conditionally, because we may not always want to replace
this metadata, but we definitely do when copying a temporary
upload.
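A sketch of the copy with the directive applied (object names and header values are illustrative):
```ruby
source = "#{bucket_name}/#{temporary_key}" # source bucket/key of the temporary upload

# Without metadata_directive: "REPLACE", S3 keeps the source object's
# metadata and silently ignores the new headers below.
destination_object.copy_from(
  source,
  metadata_directive: "REPLACE",
  content_disposition: "attachment; filename=\"report.pdf\"",
  cache_control: "max-age=31556952, public, immutable"
)
```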
Characters from other locales (e.g. é, ä) as well as special
characters in file names can cause issues on S3; notably, the S3
copy object operation does not support these special characters.
Instead of storing the original file name in the key, which is
unnecessary, we now generate a random file name with the original
extension for the temporary file and use that for all external
upload stub operations.
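A minimal sketch of the renaming, using plain Ruby stdlib:
```ruby
require "securerandom"

original_filename = "résumé (final).pdf"
extension = File.extname(original_filename)               # => ".pdf"
temporary_filename = "#{SecureRandom.hex(16)}#{extension}"
# => e.g. "0a1b2c3d4e5f60718293a4b5c6d7e8f9.pdf", safe for S3 copy operations
```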
The generate_presigned_put endpoint for direct external uploads
(such as the one for the uppy-image-uploader) records allowed
S3 metadata values on the uploaded object. We use this to store
the sha1-checksum generated by the UppyChecksum plugin, for later
comparison in ExternalUploadManager.
However, we were not doing this for the create_multipart endpoint,
so the checksum was never captured and compared correctly.
Also includes a fix to make sure UppyChecksum is the last preprocessor to run.
It is important that the UppyChecksum preprocessor is the last one to
be added; the preprocessors are run in order and since other preprocessors
may modify the file (e.g. the UppyMediaOptimization one), we need to
checksum once we are sure the file data has "settled".
Previously we had temp/ in the middle of the S3 key path like so
* /uploads/default/temp/randomstring/test.png (normal site)
* /sitename/uploads/default/temp/randomstring/test.png (s3 folder path site)
* /standard10/uploads/sitename/temp/randomstring/test.png (multisite site)
However this necessitates making a lifecycle rule to clean up incomplete
S3 multipart uploads for every site, something which we cannot do. It makes
much more sense to have a structure with /temp at the start of the key,
which is what this commit does:
* /temp/uploads/default/randomstring/test.png (normal site)
* /temp/sitename/uploads/default/randomstring/test.png (s3 folder path site)
* /temp/standard10/uploads/sitename/randomstring/test.png (multisite site)
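With every key under a single temp/ prefix, one bucket-level lifecycle rule can abort incomplete multipart uploads for all sites. A hedged sketch of such a rule via the SDK (rule id and day count are arbitrary examples):
```ruby
client = Aws::S3::Client.new(region: SiteSetting.s3_region)

client.put_bucket_lifecycle_configuration(
  bucket: SiteSetting.s3_upload_bucket,
  lifecycle_configuration: {
    rules: [
      {
        id: "abort-incomplete-temp-multipart-uploads",
        status: "Enabled",
        filter: { prefix: "temp/" },
        abort_incomplete_multipart_upload: { days_after_initiation: 2 }
      }
    ]
  }
)
```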
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary S3 upload into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. Then, if the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than what they first specified, or in the case of multipart uploads uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file direct to S3 using the presigned URL. Once the upload is complete we go to the next stage.
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100mb defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download and generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3 then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby. The browser SHA1 checksum is stored on the object in S3 with metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally we return the serialized upload record back to the client
There are several errors that could happen that are handled by `UploadsController` as well.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
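A sketch of the checksum comparison from step 2 above (`s3_client`, `bucket_name`, `external_stub`, and `downloaded_tempfile` are placeholders):
```ruby
require "digest"

head = s3_client.head_object(bucket: bucket_name, key: external_stub.key)
client_sha1 = head.metadata["sha1-checksum"] # written by UppyChecksum; may be absent
actual_sha1 = Digest::SHA1.file(downloaded_tempfile.path).hexdigest

if client_sha1 && client_sha1 != actual_sha1
  raise "SHA1 checksum mismatch" # the real code raises a dedicated error class
end
```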
Uploading lots of small files can be made significantly faster by parallelizing the `s3.put_object` calls. In testing, an UPLOAD_CONCURRENCY of 10 made a large restore 10x faster. An UPLOAD_CONCURRENCY of 20 made the same restore 18x faster.
This commit is careful to parallelize as little as possible, to reduce the chance of concurrency issues. In the worker threads, no database transactions are performed. All modification of shared objects is controlled with a mutex.
Unfortunately we do not have any existing tests for the `ToS3Migration` class. This change has been tested with a large site backup (120k uploads totalling 45GB)
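The shape of the parallelization, as a hedged sketch rather than the actual ToS3Migration code (`files_to_upload` and the bucket name are placeholders):
```ruby
require "aws-sdk-s3"

UPLOAD_CONCURRENCY = 20

s3 = Aws::S3::Client.new(region: "us-east-1") # example region
queue = Queue.new
uploaded_keys = []
mutex = Mutex.new

files_to_upload.each { |f| queue << f } # each entry: { path:, key: }

UPLOAD_CONCURRENCY.times.map {
  Thread.new do
    while (job = queue.pop(true) rescue nil)
      File.open(job[:path], "rb") do |body|
        s3.put_object(bucket: "uploads-bucket", key: job[:key], body: body)
      end
      # no database work in the worker; shared state is mutex-protected
      mutex.synchronize { uploaded_keys << job[:key] }
    end
  end
}.each(&:join)
```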
Discourse shouldn't dynamically calculate the path of uploads and optimized images after a file has been stored on disk or S3. Otherwise it might calculate the wrong path if the SHA1 or extension stored in the database doesn't match the actual file path.
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
`GlobalSetting.relative_url_root` comes from the destination site. We
can't be sure whether it was the same on the original site. It's safer
to use a wildcard here, so we can backup/restore sites with different
relative_url_root values.
- Only initialize the S3Helper when needed
- Skip initializing the S3Helper for S3Store#cdn_url
- Allow cook_url to be passed a `local` hint to skip unnecessary checks
DEV: Replace instances of Discourse.base_uri with Discourse.base_path
This is clearer because the base_uri is actually just a path prefix. This continues the work started in 555f467.
The download link on the lightbox for images was not downloading the image if the upload was marked secure, because the code in the upload controller route was not respecting the dl=1 param for force download.
This PR fixes this so the download link works for secure images as well as regular lightboxed images.
* strip out the href and xlink:href attributes from use elements that
are _not_ anchors in SVGs, which can be used for XSS
* adding the content-disposition: attachment header ensures that
uploaded SVGs cannot be opened and executed using the XSS exploit.
SVGs embedded using an img tag do not suffer from the same exploit
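As an illustration of the first point (not the exact sanitisation code), stripping non-anchor href/xlink:href values from use elements with Nokogiri might look like:
```ruby
require "nokogiri"

svg = Nokogiri::XML(File.read("upload.svg")) # example path

svg.xpath("//*[local-name()='use']").each do |use|
  use.attribute_nodes.each do |attr|
    next unless attr.name == "href" # matches both href and xlink:href
    # keep in-document anchors ("#id"); remove anything else (the XSS vector)
    attr.remove unless attr.value.to_s.start_with?("#")
  end
end
```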
`OptimizedImage#filesize` calls `Discourse.store.download` with an OptimizedImage as an argument. It would in turn attempt to call `#original_filename` and `#secure?` on that object. Both would fail as these methods do not exist on OptimizedImage, only on Upload. We didn't know about these issues because:
1. `#calculate_filesize` is not called often, because the filesize is saved on OptimizedImage creation, so it's used mostly for manual filesize recalculation
2. we were using `rescue nil` which swallows all errors
* Change S3Helper::DOWNLOAD_URL_EXPIRES_AFTER_SECONDS to 5 minutes, which controls presigned URL expiry and secure-media route cache time.
* This is done because the composer preview refreshing while typing causes a lot of requests to our server due to the short URL expiry. If this ends up not being enough we can always increase the time or explore other avenues (e.g. GitHub has a 7 day validity for secure URLs)
Fixed bugs, added specs, extracted the upload downsizing code to a class, added support for non-S3 setups, changed it so that images aren't downloaded twice.
This code has been tested on production and successfully resized ~180k uploads.
Includes:
* DEV: Extract upload downsizing logic
* DEV: Add support for non-S3 uploads
* DEV: Process only images uploaded by users
* FIX: Incorrect usage of `count` and `exist?` typo
* DEV: Spec S3 image downsizing
* DEV: Avoid downloading images twice
* DEV: Update filesizes earlier in the process
* DEV: Return false on invalid upload
* FIX: Download images that are currently above the limit (if the image size limit is decreased, there was previously no way to resize the images that now fall outside the allowed size range)
* Update script/downsize_uploads.rb (Co-authored-by: Régis Hanol <regis@hanol.fr>)