AWS recommends running buckets without ACLs and using resource policies to manage access control instead.
This is not a bad idea, because S3 ACLs are whack, and while resource policies are also whack, they're a more constrained form of whack.
Further, some compliance regimes get antsy if you don't go with the vendor's recommended settings, and arguing that you need to enable ACLs on a bucket just to store images in there is more hassle than it's worth.
The new site setting (s3_use_acls) cannot be disabled when secure uploads is enabled -- the latter relies on private ACLs for security at this point in time. We may want to reexamine this in the future.
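A hypothetical sketch of how that constraint could be enforced with a site setting validator; the class name, locale key, and the exact settings it reads are illustrative, not necessarily what the commit uses:

```ruby
# Illustrative validator: refuse to turn s3_use_acls off while secure
# uploads is enabled. Boolean setting values arrive as "t"/"f" strings.
class S3UseAclsValidator
  def initialize(opts = {})
    @opts = opts
  end

  def valid_value?(value)
    return true if value == "t"     # enabling ACLs is always allowed
    !SiteSetting.secure_uploads     # disabling only allowed when secure uploads is off
  end

  def error_message
    I18n.t("site_settings.errors.s3_use_acls_requires_secure_uploads_off")
  end
end
```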
In some situations (e.g. disaster recovery), it may make sense to spin up a temporary readonly version of a cluster. In that case, the S3 `expire_missing_assets` job would delete assets which are still in use by the canonical read-write version of the cluster.
To avoid that, this commit will skip deletion if the site is currently in readonly mode.
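A minimal sketch of that guard, assuming the task body can ask Discourse for its readonly state:

```ruby
# Sketch of the guard at the top of the task body; Discourse.readonly_mode?
# is how the app exposes its readonly state.
task "s3:expire_missing_assets" => :environment do
  if Discourse.readonly_mode?
    puts "Skipping expiry of missing assets because the site is in readonly mode"
    next
  end
  # ... existing listing and deletion logic ...
end
```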
- Ensure it works with prefixed S3 buckets
- Perform a sanity check that all current assets are present on S3 before starting deletion
- Remove the lifecycle rule configuration and delete expired assets immediately. This task should be run post-deploy anyway, so adding a 10-day window is not required
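A rough sketch of the sanity check plus immediate deletion described above; bucket/prefix handling and the local asset listing are simplified, and names like `GlobalSetting.s3_bucket` and the `public/assets` glob are illustrative:

```ruby
require "aws-sdk-s3"

client = Aws::S3::Client.new
bucket_name, _, folder = GlobalSetting.s3_bucket.partition("/")
prefix = [folder, "assets"].reject(&:empty?).join("/") + "/"

# Everything currently under the assets prefix on S3.
remote_keys = []
client.list_objects_v2(bucket: bucket_name, prefix: prefix).each do |page|
  remote_keys.concat(page.contents.map(&:key))
end

# Sanity check: every local asset must already exist remotely before we
# delete anything.
local_keys =
  Dir.glob("public/assets/**/*")
    .select { |f| File.file?(f) }
    .map { |f| prefix + f.delete_prefix("public/assets/") }
missing = local_keys - remote_keys
raise "Missing assets on S3: #{missing.join(", ")}" if missing.any?

# Delete unreferenced assets immediately instead of relying on a 10-day
# lifecycle rule to expire them later.
(remote_keys - local_keys).each do |key|
  client.delete_object(bucket: bucket_name, key: key)
end
```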
This task is supposed to skip uploading if the asset is already present in S3. However, when a bucket 'folder path' was configured, this logic was broken, so the assets were re-uploaded every time.
This commit fixes that logic to include the bucket 'folder path' in the check.
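In sketch form (all names here are illustrative placeholders): when the configured bucket value includes a folder path, e.g. "my-bucket/some/path", that path has to be part of the key being looked up, otherwise nothing matches and every asset is re-uploaded.

```ruby
# Sketch of the corrected presence check.
_, _, folder = s3_bucket_value.partition("/")   # "" when no folder path configured
expected_key = [folder, "assets", asset_filename].reject(&:empty?).join("/")
upload_asset(path) unless existing_keys.include?(expected_key)
```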
Ember CLI gives sourcemaps their own digest. Our `s3.rake` logic assumes that the digest portion of sourcemap filenames remains the same.
The Ember CLI sourcemaps are included in the manifest file, so we can ensure they are uploaded by letting them past the MiniMime check.
Followup to abefb1beff
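A hedged sketch of what letting sourcemaps past the content-type lookup could look like; the helper name is illustrative:

```ruby
require "mini_mime"

# Sketch only: .map files are listed in the Ember CLI manifest but MiniMime
# may not report a content type for them, so give them one explicitly instead
# of letting a nil lookup cause the upload to be skipped.
def content_type_for(path)
  info = MiniMime.lookup_by_filename(path)
  return info.content_type if info
  return "application/json" if path.end_with?(".map")
  nil
end
```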
* File.exists? is deprecated and removed in Ruby 3.2 in favor of File.exist?
* Dir.exists? is deprecated and removed in Ruby 3.2 in favor of Dir.exist?
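The mechanical change, for reference:

```ruby
# Before (removed in Ruby 3.2):
File.exists?("app/assets/javascripts/discourse")
Dir.exists?("public/assets")

# After:
File.exist?("app/assets/javascripts/discourse")
Dir.exist?("public/assets")
```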
This commit introduces a new s3:ensure_cors_rules rake task that is run as a prerequisite to s3:upload_assets. This rake task calls out to the S3CorsRulesets class to ensure that the 3 relevant sets of CORS rules are applied, depending on site settings:
* assets
* direct S3 backups
* direct S3 uploads
This works for both Global S3 settings and Database S3 settings (the latter set directly via SiteSetting).
As it is, only one rule can be applied, which is generally the assets rule as it is called first. This commit changes the ensure_cors! method to be able to apply new rules as well as the existing ones.
This commit also slightly changes the existing rules to cover direct S3 uploads via uppy, especially multipart, which requires some more headers.
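A hedged sketch of applying a ruleset on top of whatever rules are already on the bucket, using the AWS SDK; the actual headers, methods, and origins live in S3CorsRulesets, so the rule contents below are placeholders:

```ruby
require "aws-sdk-s3"

# Sketch: add new CORS rules without clobbering rules already on the bucket.
def ensure_cors!(client, bucket, new_rules)
  existing =
    begin
      client.get_bucket_cors(bucket: bucket).cors_rules.map(&:to_h)
    rescue Aws::S3::Errors::NoSuchCORSConfiguration
      []
    end

  missing = new_rules.reject { |rule| existing.include?(rule) }
  return if missing.empty?

  client.put_bucket_cors(
    bucket: bucket,
    cors_configuration: { cors_rules: existing + missing },
  )
end

# Placeholder ruleset; real values come from S3CorsRulesets.
assets_rules = [
  { allowed_headers: ["*"], allowed_methods: ["GET", "HEAD"], allowed_origins: ["*"] },
]
ensure_cors!(Aws::S3::Client.new, "my-bucket", assets_rules)
```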
DO (DigitalOcean) does not implement tagging support for S3 objects. Removing our default empty tag fixes compatibility.
The expire_missing_assets rake task still can't be used with that service, but this patch allows normal operation.
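In sketch form, the fix is to only pass a tagging option when there is actually something to tag; the helper name is illustrative:

```ruby
require "aws-sdk-s3"
require "uri"

# Sketch: DigitalOcean rejects S3 object tagging, so only send the tagging
# option when tags were actually supplied instead of defaulting to an empty
# tag set.
def put_object_with_optional_tags(client, bucket, key, body, tags = {})
  options = { bucket: bucket, key: key, body: body }
  options[:tagging] = URI.encode_www_form(tags) unless tags.empty?
  client.put_object(options)
end
```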
If the “secure media” site setting is enabled then ALL files uploaded to Discourse (images, video, audio, pdf, txt, zip, etc.) will follow the secure media rules. The “prevent anons from downloading files” setting will no longer have any bearing on upload security. Basically, the feature will more appropriately be called “secure uploads” instead of “secure media”.
This is being done because there are communities out there that would like all attachments and media to be secure based on category rules but still allow anonymous users to download attachments in public places, which is not possible in the current arrangement.
Zeitwerk simplifies working with dependencies in dev and makes it easier to reload class chains.
We no longer need to use Rails' `require_dependency` anywhere and instead can just use standard Ruby patterns to require files.
This is a far reaching change and we expect some followups here.
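For illustration of the require_dependency point above (the call site and arguments are illustrative):

```ruby
# Before Zeitwerk, reloadable dependencies had to be declared explicitly:
require_dependency "post_creator"
creator = PostCreator.new(user, opts)

# With Zeitwerk, referencing the constant is enough -- the autoloader maps
# PostCreator onto its file -- so the require_dependency line is simply
# deleted. Plain `require` still works for code outside the autoload paths.
creator = PostCreator.new(user, opts)
```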
This reduces the chance of errors where consumers of strings mutate inputs, and reduces the memory usage of the app.
The test suite passes now, but there may be some stuff left, so we will run a few sites on a branch prior to merging.
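The change itself is the magic comment at the top of each Ruby file:

```ruby
# frozen_string_literal: true

# With the pragma above, every string literal in this file is frozen, so code
# that used to mutate an input string in place now raises instead of silently
# changing the caller's data:
greeting = "hello"
greeting << " world" # raises FrozenError (can't modify frozen String)
```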
We need to handle arbitrary exceptions in this task, especially since the task is not easily resumable.
For now, simply output problem uploads as they are hit.
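A sketch of the shape of that handling, assuming the task iterates uploads one at a time; `migrate_to_s3` is an illustrative name for the per-upload work:

```ruby
# One bad record should not abort a long run that cannot easily be resumed,
# so report the problem upload and keep going.
Upload.find_each do |upload|
  begin
    migrate_to_s3(upload)
  rescue StandardError => e
    puts "Failed to migrate upload #{upload.id} (#{upload.url}): #{e.message}"
  end
end
```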
Some previous migrations to S3 may have set bad ACLs on objects. This introduces a new rake task (`rake s3:correct_acl`) that will reset the ACL on every S3 object.
The vast majority of users will never have to run it, but if you have ACL issues this is the atomic solution.
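Conceptually the task walks every object and re-applies the intended ACL. A stripped-down sketch; the bucket lookup is illustrative and the real task reads the site's S3 settings and may choose the ACL per object, whereas this applies one canned ACL for brevity:

```ruby
require "aws-sdk-s3"

client = Aws::S3::Client.new
bucket = ENV["S3_BUCKET"] # illustrative

# Page through every object in the bucket and reset its ACL.
client.list_objects_v2(bucket: bucket).each do |page|
  page.contents.each do |object|
    client.put_object_acl(bucket: bucket, key: object.key, acl: "private")
  end
end
```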
This refactors the handling of S3 so it can be specified via GlobalSetting. This means that in a multisite environment you can configure S3 uploads without the individual sites knowing the S3 credentials.
It is a critical setting for situations where assets are mirrored to S3.
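A sketch of what reading that configuration looks like; the setting names follow the usual GlobalSetting S3 conventions (discourse.conf or DISCOURSE_* environment variables) but treat them as illustrative:

```ruby
# Sketch: the store reads credentials from GlobalSetting rather than from
# per-site SiteSettings, so individual sites never see the credentials.
def global_s3_options
  {
    region: GlobalSetting.s3_region,
    access_key_id: GlobalSetting.s3_access_key_id,
    secret_access_key: GlobalSetting.s3_secret_access_key,
  }
end

bucket = GlobalSetting.s3_bucket
```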