Same issue as 28b00dc6fc: Mocha::ExpectationError inherits
from Exception instead of StandardError, so RspecErrorTracker
does not show the actual failed expectation in request specs;
the status of the response is just 500 with no further detail.
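A minimal sketch of the kind of middleware-level rescue this implies (the class shape and `last_exception` accessor are illustrative, not the exact diff):

```
# Mocha::ExpectationError descends from Exception, so a plain
# `rescue => e` (StandardError only) never catches it; name it explicitly.
class RspecErrorTracker
  class << self
    attr_accessor :last_exception
  end

  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  rescue StandardError, Mocha::ExpectationError => e
    # Record the real failure so the spec shows the failed expectation
    # instead of an opaque 500 response.
    self.class.last_exception = e
    raise
  end
end
```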
This commit adds the RailsMultisite middleware in test mode when Rails.configuration.multisite is true. This allows for much more realistic integration testing. The `multisite_spec.rb` file is rewritten to avoid needing to simulate a middleware stack.
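A rough sketch of the conditional wiring (the exact insertion point in the middleware stack may differ):

```
# config/application.rb (sketch): mirror production in test mode by
# routing requests through the multisite middleware when configured.
if Rails.env.test? && Rails.configuration.multisite
  config.middleware.unshift RailsMultisite::Middleware
end
```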
This PR introduces a new `enable_experimental_backup_uploads` site setting (default false and hidden), which when enabled alongside `enable_direct_s3_uploads` will allow for direct S3 multipart uploads of backup .tar.gz files.
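Illustratively, the feature is gated on both settings (the helper name is hypothetical):

```
# Direct S3 multipart backup uploads require both settings enabled.
def backup_multipart_uploads_enabled?
  SiteSetting.enable_experimental_backup_uploads &&
    SiteSetting.enable_direct_s3_uploads
end
```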
To make multipart external uploads work with both the S3BackupStore and the S3Store, I've had to move several methods out of S3Store and into S3Helper, including:
* presigned_url
* create_multipart
* abort_multipart
* complete_multipart
* presign_multipart_part
* list_multipart_parts
S3Store and S3BackupStore now either delegate directly to S3Helper or call it through their own specialized methods. FileStore.temporary_upload_path no longer depends on upload_path, so it can be used interchangeably between the stores. A similar change was made on the frontend: the multipart-related JS code moved out of ComposerUppyUpload into a mixin of its own, so it can also be used by UppyUploadMixin.
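The delegation pattern looks roughly like this (a sketch using ActiveSupport's `delegate`; the real store methods may wrap S3Helper with extra arguments):

```
require "active_support/core_ext/module/delegation"

class S3Store
  # Multipart primitives now live on S3Helper; the store just forwards.
  delegate :abort_multipart, :presign_multipart_part, :list_multipart_parts,
           to: :s3_helper

  # Or wrap the helper when store-specific key handling is needed
  # (illustrative signature).
  def create_multipart(key, content_type)
    s3_helper.create_multipart(key, content_type)
  end
end
```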
Some changes to ExternalUploadManager had to be made here as well. The backup direct uploads do not need an Upload record made for them in the database, so they can be moved to their final S3 resting place when completing the multipart upload.
This changeset is not perfect; it introduces some special cases in UploadController to handle backups that were previously handled in BackupController, because UploadController is where the multipart routes are located. A subsequent pull request will pull these routes into a module or some other sharing pattern, along with hooks, so the backup controller and the upload controller (and any future controllers that need them) can include these routes in a nicer way.
We are only using list_multipart_parts right now in the
uploads controller for multipart uploads to check if the
upload exists; thus we don't need up to 1000 parts.
Also adding a note for future explorers that list_multipart_parts
returns at most 1,000 parts, and adding params for the maximum
number of parts and the starting part number.
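A sketch of the parameterized helper on top of aws-sdk-s3 (`list_parts`, `max_parts`, and `part_number_marker` are the real SDK options; the method and argument names here are illustrative):

```
# The S3 ListParts API returns at most 1,000 parts per call; callers
# that only check for existence can request far fewer.
def list_multipart_parts(upload_id:, key:, max_parts: 1000, start_from_part_number: nil)
  options = {
    bucket: s3_bucket_name,
    key: key,
    upload_id: upload_id,
    max_parts: max_parts,
  }
  options[:part_number_marker] = start_from_part_number if start_from_part_number
  s3_client.list_parts(options)
end
```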
Support for invites alongside DiscourseConnect was added in 355d51af. This commit fixes the guardian method so that the bulk invite button functionality also works.
* FIX: Preserve field types when updating revision
When a post was edited twice in quick succession by the same user, the
old post revision was updated with the newest changes. To check whether
the change was reverted (i.e. rename topic A to B and then back to A),
the initial value and the last value are compared. If they differ, the
intermediate value is dropped and only the initial and last values are
preserved. Otherwise, the modification is dismissed because the field
returned to its initial value.
This used to work well for most fields, but failed for "tags" because
the field is an array and the values were cast to strings to perform
the comparison.
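For example (illustrative values), stringifying loses the type, so the comparison fails even when the tags were genuinely reverted:

```
# Casting the array field to a String for the comparison breaks the
# revert check: the types no longer match, even though the values do.
initial = ["fruit", "vegetables"]
stored  = initial.to_s  # => "[\"fruit\", \"vegetables\"]"
stored == initial       # => false — preserve the Array type instead
```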
* FIX: Reset last_editor_id if revision is reverted
If a post was revised and then the same revision was reverted,
last_editor_id was still set to the ID of the user who last edited the
post. This was a problem because the same person could then edit the
same post again and, because it was the same user and the same post,
the system attempted to update the last revision, which no longer
existed.
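A rough sketch of the reset (association and attribute names are illustrative; the exact call site in the revert path may differ):

```
# Hypothetical: after the revert removes the revision, point
# last_editor_id at the remaining last editor, or back at the author.
last_revision = post.revisions.order(:number).last
post.update!(last_editor_id: last_revision&.user_id || post.user_id)
```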
This commit introduces a new s3:ensure_cors_rules rake task
that is run as a prerequisite to s3:upload_assets. This rake
task calls out to the S3CorsRulesets class to ensure that
the 3 relevant sets of CORS rules are applied, depending on
site settings:
* assets
* direct S3 backups
* direct S3 uploads
This works for both Global S3 settings and Database S3 settings
(the latter set directly via SiteSetting).
As it is, only one rule can be applied, which is generally
the assets rule as it is called first. This commit changes
the ensure_cors! method to be able to apply new rules as
well as the existing ones.
This commit also slightly changes the existing rules to cover
direct S3 uploads via uppy, especially multipart, which requires
some more headers.
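Roughly, the wiring looks like this sketch (the rake prerequisite syntax is standard; `S3CorsRulesets.sync` and its arguments are stand-ins for whatever the class actually exposes):

```
# lib/tasks/s3.rake (sketch)
task "s3:ensure_cors_rules" => :environment do
  # Applies the assets rule, plus the direct backup/upload rules when
  # the corresponding site settings are enabled.
  S3CorsRulesets.sync(use_db_s3_config: SiteSetting.enable_s3_uploads)
end

# Run the CORS check before assets are uploaded.
task "s3:upload_assets" => ["s3:ensure_cors_rules"]
```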
FinalDestination's follow_canonical mode used for embedded topics should work when canonical URLs are relative, as specified in [RFC 6596](https://datatracker.ietf.org/doc/html/rfc6596)
Previously, suppressed category topics were included in digest emails if the user had visited that topic before and a `TopicUser` record had been created with any notification level except 'muted'.
We are no longer able to display the image returned by Instagram directly within a Discourse site (either in the composer, or within a cooked post within a topic), so:
- Display an image placeholder in the composer preview
- Use an iframe in the cooked post to display the Instagram 'embed' content
* DEV: Output webmock errors in request specs
In request specs, if you had not properly mocked an external
HTTP call, you would end up with a 500 error with no further
information instead of your expected response code, with an
rspec output like this:
```
Failures:
1) UploadsController#generate_presigned_put when the store is external generates a presigned URL and creates an external upload stub
Failure/Error: expect(response.status).to eq(200)
expected: 200
got: 500
(compared using ==)
# ./spec/requests/uploads_controller_spec.rb:727:in `block (4 levels) in <top (required)>'
# ./spec/rails_helper.rb:280:in `block (2 levels) in <top (required)>'
```
This is not helpful at all when you want to find what you actually
failed to mock, which is shown straight away in non-request specs.
This commit introduces a rescue_from block in the application
controller to log this error, so we have a much nicer output that
helps the developer find the issue:
```
Failures:
1) UploadsController#generate_presigned_put when the store is external generates a presigned URL and creates an external upload stub
Failure/Error: expect(response.status).to eq(200)
expected: 200
got: 500
(compared using ==)
# ./spec/requests/uploads_controller_spec.rb:727:in `block (4 levels) in <top (required)>'
# ./spec/rails_helper.rb:280:in `block (2 levels) in <top (required)>'
# ------------------
# --- Caused by: ---
# WebMock::NetConnectNotAllowedError:
# Real HTTP connections are disabled. Unregistered request: GET https://s3-upload-bucket.s3.us-west-1.amazonaws.com/?cors with headers {'Accept'=>'*/*', 'Accept-Encoding'=>'', 'Authorization'=>'AWS4-HMAC-SHA256 Credential=some key/20211101/us-west-1/s3/aws4_request, SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date, Signature=test', 'Host'=>'s3-upload-bucket.s3.us-west-1.amazonaws.com', 'User-Agent'=>'aws-sdk-ruby3/3.121.2 ruby/2.7.1 x86_64-linux aws-sdk-s3/1.96.1', 'X-Amz-Content-Sha256'=>'test', 'X-Amz-Date'=>'20211101T035113Z'}
#
# You can stub this request with the following snippet:
#
# stub_request(:get, "https://s3-upload-bucket.s3.us-west-1.amazonaws.com/?cors").
# with(
# headers: {
# 'Accept'=>'*/*',
# 'Accept-Encoding'=>'',
# 'Authorization'=>'AWS4-HMAC-SHA256 Credential=some key/20211101/us-west-1/s3/aws4_request, SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date, Signature=test',
# 'Host'=>'s3-upload-bucket.s3.us-west-1.amazonaws.com',
# 'User-Agent'=>'aws-sdk-ruby3/3.121.2 ruby/2.7.1 x86_64-linux aws-sdk-s3/1.96.1',
# 'X-Amz-Content-Sha256'=>'test',
# 'X-Amz-Date'=>'20211101T035113Z'
# }).
# to_return(status: 200, body: "", headers: {})
#
# registered request stubs:
#
# stub_request(:head, "https://s3-upload-bucket.s3.us-west-1.amazonaws.com/")
#
# ============================================================
```
* DEV: Require webmock in application controller if Rails.env.test?
* DEV: Rescue from StandardError and NetConnectNotAllowedError
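Put together, the change amounts to something like this sketch in ApplicationController (the rescue details are illustrative, not the exact diff):

```
class ApplicationController < ActionController::Base
  if Rails.env.test?
    require "webmock"

    # Log the unstubbed request so the spec output shows exactly what
    # to mock, then re-raise so the example still fails.
    rescue_from WebMock::NetConnectNotAllowedError, StandardError do |e|
      Rails.logger.error("\n#{e.class}: #{e.message}")
      raise e
    end
  end
end
```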
The `白名单` term becomes `名单 白名单` after it is processed by
cppjieba in :query mode. However, `白名单` is not tokenized that way by
cppjieba when it appears within a longer string of text. This may lead
to failed matches, since the search data generated while indexing may
not contain all of the terms generated by :query mode. We've decided to
maintain parity for now such that both indexing and querying use the
same :mix mode. This may lead to less accurate search, but our plan is
to properly support CJK search in the future.
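For example, with the cppjieba_rb bindings (the `segment` call is the API Discourse's search code uses; the mode mismatch is as described above):

```
require "cppjieba_rb"

# Indexing and querying must agree on segmentation; both now use :mix.
CppjiebaRb.segment("白名单", mode: :mix)   # tokens used for indexing and querying
CppjiebaRb.segment("白名单", mode: :query) # can emit extra terms (e.g. 名单) that were never indexed
```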
In preparation for adding automatic CORS rule creation
for direct S3 uploads, I am adding tests here and moving the
CORS rule definitions into a dedicated class so they are all
in one place.
There is a problem with ensure_cors! as well -- if there is
already a CORS rule defined (presumably the asset one) then
we do nothing and do not apply the new rule. This means that
the S3BackupStore.ensure_cors method does nothing right now
if the assets rule is already defined, and it will mean the
same for any direct S3 upload rules I add for uppy. We need
to be able to add more rules, not just one.
This is not a problem on our hosting because we define the
rules at an infra level.
* FIX: allowed_theme_ids should not be persisted in GlobalSettings
It was observed that the memoized value of `GlobalSetting.allowed_theme_ids` would be persisted across requests, which could lead to unpredictable/undesired behaviours in a multisite environment.
This change moves that logic out of GlobalSettings so that the returned theme IDs are correct for the current site.
Uses get_set_cache, which ultimately uses DistributedCache, which will take care of multisite issues for us.
* DEV: Sanitize HTML admin inputs
This PR adds on-save HTML sanitization for:
* Client site settings
* Translation overrides
* Badge descriptions
* User field descriptions
I used Rails's SafeListSanitizer, which [accepts the following HTML tags and attributes](018cf54073/lib/rails/html/sanitizer.rb (L108))
* Make sure that the sanitization logic doesn't corrupt settings with special characters
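For reference, the sanitizer keeps safe markup and drops unsafe attributes:

```
require "rails-html-sanitizer"

sanitizer = Rails::Html::SafeListSanitizer.new
sanitizer.sanitize(%(<b>bold</b> <a href="/x" onclick="evil()">link</a>))
# => <b>bold</b> <a href="/x">link</a> — onclick is stripped, href kept
```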
This PR doesn't change any behavior; it just removes code that wasn't in use. This is a pretty dangerous place to change, since it gets called during user registration. At the same time, the refactoring is very straightforward: it's clear that this code wasn't doing any work (it still needs to be double-checked during review, though). Also, the test coverage of UserNameSuggester is good.
By default, Rails only includes the Vary:Accept header in responses when the Accept: header is included in the request. This means that proxies/browsers may cache a response to a request with a missing Accept header, and then later serve that cached version for a request which **does** supply the Accept header. This can lead to some very unexpected behavior in browsers.
This commit adds the Vary:Accept header for all requests, even if the Accept header is not present in the request. If a format parameter (e.g. a `.json` suffix) is included in the path, the Vary header is still omitted. (The format parameter takes precedence over any Accept: header, so the response no longer varies based on the Accept header.)
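A minimal sketch of the behavior (the filter name is hypothetical):

```
class ApplicationController < ActionController::Base
  after_action :add_accept_to_vary_header

  private

  # Always vary on Accept so a cache never serves an HTML response for
  # a JSON request (or vice versa). Skip when an explicit format such
  # as `.json` is in the path, since it overrides the Accept header.
  def add_accept_to_vary_header
    response.headers["Vary"] = "Accept" if params[:format].blank?
  end
end
```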
Not resetting the registry could lead to assets still being registered, for example.
This flaky spec was reproducible with this call: `bundle exec rspec --seed 9472 spec/components/discourse_plugin_registry_spec.rb spec/components/svg_sprite/svg_sprite_spec.rb`
Which would trigger the following error:
```
Failures:
1) DiscoursePluginRegistry#register_asset registers vendored_core_pretty_text properly
Failure/Error: expect(registry.javascripts.count).to eq(0)
expected: 0
got: 1
(compared using ==)
# ./spec/components/discourse_plugin_registry_spec.rb:248:in `block (3 levels) in <top (required)>'
# ./spec/rails_helper.rb:280:in `block (2 levels) in <top (required)>'
# /Users/joffreyjaffeux/.gem/ruby/2.7.3/gems/webmock-3.14.0/lib/webmock/rspec.rb:37:in `block (2 levels) in <top (required)>'
```
When inviting a group to a topic, there may be members of
the group already in the topic as topic allowed users. These
can be safely removed from the topic, because they are implicitly
allowed in the topic based on their group membership.
Also, this prevents issues with group SMTP emails, which rely
on the topic's topic_allowed_users for the to and cc addresses;
having group members listed individually as topic_allowed_users
complicates things and causes odd behaviour.
We also ensure that the OP of the topic is not removed from
topic_allowed_users when a group they belong to is added, as
removing them would make it harder to add them back later.
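In ActiveRecord terms, the cleanup looks something like this sketch (the exact scoping is illustrative):

```
# Drop allowed users already covered by the invited group, but never
# the topic's OP, so they can still be re-added individually later.
topic.topic_allowed_users
  .where(user_id: GroupUser.where(group_id: group.id).select(:user_id))
  .where.not(user_id: topic.user_id)
  .delete_all
```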
User API keys (not the same thing as admin API keys) are currently
leaked to redis when rate limits are applied to them, since redis is
the backend for rate limits in Discourse and the API keys are included
in the redis keys used to track usage of user API keys over the last
24 hours.
This commit stops the leak by using a SHA-256 representation of the user
API key instead of the key itself to form the redis key.
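Illustratively (the key format is hypothetical):

```
require "digest"

# Embed a SHA-256 digest of the user API key in the rate-limit key so
# the raw key never reaches redis.
redis_key = "user_api_key_#{Digest::SHA256.hexdigest(user_api_key)}"
```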
We don't need to manually delete the existing redis keys that contain
unhashed user API keys because they're not long-lived and will be
automatically deleted within 48 hours after this commit is deployed to
your Discourse instance.
This is a follow-up to https://github.com/discourse/discourse/pull/14541. This adds a hidden setting for restoring the old behavior for those users who rely on it. We'll likely deprecate this setting at some point in the future.
This commit removes the recipient's username from the
respond to / participants list that is shown at the bottom
of user notification emails. For example if the recipient's
username was jsmith, and there were participants ljones and
bmiller, we currently show this:
> "reply to this email to respond to jsmith, ljones, bmiller"
or
> "Participants: jsmith, ljones, bmiller"
However, this is a bit redundant: you are not replying to
yourself if you are the recipient. So we omit the recipient
user's username from this list, which is only used in the text
of the email and not elsewhere.
The method was only used for mega topics, but it was redundant since
the first post can be determined by using the condition where
`Post#post_number` equals one.
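That is, the first post can be fetched directly:

```
# Equivalent lookup without the removed helper:
topic.posts.find_by(post_number: 1)
```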
* FEATURE: Cache CORS preflight requests for 2h
Browsers cache preflight responses for 5 seconds by default. If
MessageBus is used on a different domain, Discourse will issue a new
long-polling request, by default, every 30s or so. This means we would
be issuing a new preflight request **every time**. This can be
incredibly wasteful, so let's cache the authorization in the client for
2h, which is the maximum Chromium allows as of today (see the header
sketch below).
* fix tests
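The relevant header, illustratively:

```
# 7200 seconds = 2h, the maximum Access-Control-Max-Age Chromium honors.
response.headers["Access-Control-Max-Age"] = "7200"
```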