Allow admins to configure exceptions to our Rails rate limiter.
Configuration happens in environment variables and works with both
individual IPs and CIDR blocks.
Example:
```
env:
  DISCOURSE_MAX_REQS_PER_IP_EXCEPTIONS: >-
    14.15.16.32/27
    216.148.1.2
```
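A minimal sketch of how such an exception list could be matched against a request IP using Ruby's `IPAddr`; the parsing shown here is illustrative, not the exact implementation:
```ruby
require "ipaddr"

# Parse the whitespace-separated exception list from the environment.
# Plain IPs and CIDR blocks both parse to IPAddr ranges.
exceptions =
  ENV.fetch("DISCOURSE_MAX_REQS_PER_IP_EXCEPTIONS", "")
     .split
     .map { |entry| IPAddr.new(entry) }

def rate_limit_exempt?(exceptions, request_ip)
  ip = IPAddr.new(request_ip)
  # IPAddr#include? treats a bare address as a single-address range,
  # so the same check covers both plain IPs and CIDR blocks.
  exceptions.any? { |exception| exception.include?(ip) }
end

# With the example configuration above:
rate_limit_exempt?(exceptions, "14.15.16.40") # => true (inside the /27)
rate_limit_exempt?(exceptions, "10.0.0.1")    # => false
```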
In 2018, a check was added so that the TL1 welcome message is sent unless the user already has the BasicBadge granted.
I think we should also check whether the BasicBadge is even enabled. Otherwise, each time a group is assigned to a user and their trust level is recalculated, they will receive a welcome message.
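A hedged sketch of the extra guard, assuming Discourse's `Badge::BasicUser` constant and the `enabled` column on badges; the surrounding method is illustrative, not the actual patch:
```ruby
# Illustrative only: the guard the commit message argues for.
def send_tl1_welcome_message?(user)
  basic_badge = Badge.find_by(id: Badge::BasicUser)

  # If the BasicBadge is disabled, the "already granted" check below can
  # never pass, so every trust level recalculation (e.g. after a group
  # assignment) would re-send the welcome message. Bail out instead.
  return false if basic_badge.nil? || !basic_badge.enabled?

  !UserBadge.exists?(badge_id: Badge::BasicUser, user_id: user.id)
end
```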
This disallows putting URLs in topic titles for TL0 users, which means that:
- If a TL0 user puts a link into the title, a topic featured link won't be generated (as if it were disabled in the site settings).
- Server methods for creating and updating topics will refuse featured links when they are called by TL0 users.
- TL0 users won't be able to put any link into the topic title. For example, the title "Hey, take a look at https://my-site.com" will be rejected.
This also slightly improves server behavior when creating or updating featured links on topics in categories where featured links are disabled. Before, the server silently ignored the featured link field passed to it; now it returns a 422 response.
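As a rough illustration of the new behavior (generic Rails, not the exact Discourse code; names like `topic_featured_link_allowed?` and the error key are assumptions):
```ruby
# Illustrative controller-level guard.
def validate_featured_link!(topic, featured_link)
  return if featured_link.blank?

  unless topic.category&.topic_featured_link_allowed?
    # Previously the field was silently discarded; now the request fails
    # loudly so the client knows the link was not saved.
    render json: { errors: [I18n.t("featured_link.invalid_category")] },
           status: 422
  end
end
```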
* FIX: Update draft count when sequence is increased
Sometimes users ended up having a draft count higher than the actual
number of drafts.
* FIX: Do not update draft count twice
The call to `DraftSequence.next!` above already does it.
Inlining secure images with the same name was not possible because they
were indexed by filename. If an email contained two files with the same
name, only the first image was used for both of them. The other file
was still attached to the email.
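One way to make that lookup robust, sketched here as an assumption about the approach rather than the actual patch, is to key attachments by content digest instead of by filename:
```ruby
require "digest"

# Index incoming email attachments (Mail::Part objects) by the SHA1 of
# their decoded content, so two parts that share a filename still map
# to distinct entries.
def attachments_by_digest(mail)
  mail.attachments.index_by do |part|
    Digest::SHA1.hexdigest(part.body.decoded)
  end
end
```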
When the Reply-To header is present for incoming emails, we
want to use it instead of the From address. This is usually the
case when an email is forwarded into Discourse via a mailing list.
For now, we only use the Reply-To header if the email has
been forwarded via Google Groups, which is why we also check the
X-Original-From header. In the future, we may want to use the Reply-To
header in more cases.
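A minimal sketch of that precedence using the Mail gem; the exact Google Groups heuristic is an assumption:
```ruby
require "mail"

# Prefer Reply-To over From, but only when the message appears to have
# been forwarded through Google Groups (signalled by X-Original-From).
def sender_address(mail)
  forwarded_via_google_groups = mail.header["X-Original-From"]

  if forwarded_via_google_groups && mail.reply_to&.any?
    mail.reply_to.first
  else
    mail.from.first
  end
end

mail = Mail.read_from_string(raw_email) # raw_email: the incoming message
sender_address(mail)
```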
* FEATURE: Onebox can match engines based on the content_type
`FinalDestination` now returns the `content_type` of a resolved URL.
`Oneboxer` passes this value to `Onebox` itself. Onebox engines can now specify a `matches_content_type` regex of content_types that the engine can handle, regardless of the URL.
`ImageOnebox` will match URLs with a content type of `image/png`, `image/jpg`, `image/gif`, `image/bmp`, `image/tif`, and so on.
This will allow images that exist at a URL without a file type extension to be correctly rendered, assuming a valid `content_type` is returned.
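A hedged sketch of what such an engine might declare; the exact Onebox DSL is assumed here to mirror the existing `matches_regexp` hook:
```ruby
# Illustrative engine declaration, not the shipped implementation.
class Onebox::Engine::ImageOnebox
  include Onebox::Engine

  # Handle any URL whose resolved content type is an image, regardless
  # of whether the URL itself ends in a file extension.
  matches_content_type(/\Aimage\/(png|jpe?g|gif|bmp|tiff?)/)
end
```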
When a post is created, the draft sequence is increased and then older
drafts are automatically cleaned up by a raw SQL query. This skipped the
Draft model callbacks and did not update the user's draft count.
I fixed another problem related to a raw SQL query in the
`Draft.cleanup!` method.
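The general technique, as a minimal sketch: deleting through the model instead of raw SQL so callbacks fire and the draft count stays in sync (names here are illustrative):
```ruby
# Before (illustrative): raw SQL bypasses ActiveRecord callbacks, so
# counters maintained in callbacks never fire.
#   DB.exec("DELETE FROM drafts WHERE user_id = ? AND sequence < ?", ...)

# After (illustrative): destroying through the model runs callbacks,
# keeping the user's draft count correct.
Draft.where(user_id: user.id)
     .where("sequence < ?", current_sequence)
     .destroy_all
```
`destroy_all` is slower than a single DELETE because it instantiates each record, so an alternative is to keep the raw query and update the count explicitly afterwards.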
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table are introduced in this pull request. They keep track of uploads sent to a temporary location in S3 by the direct-to-S3 code and are eventually deleted, either a) when the direct upload is completed or b) after a certain period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the client side (e.g. the SHA1 checksum described below). It also creates an `ExternalUploadStub` that stores the details of the temp object key and the file being uploaded.
Once the client has this URL, uppy uploads the file directly to S3 using the presigned URL. When the upload is complete, we move to the next stage.
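Generating the presigned PUT URL server-side looks roughly like this with `aws-sdk-s3`; the bucket, key, expiry, and metadata values are placeholders:
```ruby
require "aws-sdk-s3"

# Presign a PUT to a temporary key; the browser then uploads straight
# to S3 without the file ever passing through the Rails app.
presigner = Aws::S3::Presigner.new(client: Aws::S3::Client.new)

url = presigner.presigned_url(
  :put_object,
  bucket: "my-discourse-uploads",              # placeholder bucket
  key: "temp/site/abc123/video.mp4",           # temp key recorded on the stub
  expires_in: 600,                             # placeholder expiry (seconds)
  metadata: { "sha1-checksum" => client_sha1 } # client-supplied checksum
)
```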
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100MB, defined by `ExternalUploadManager::DOWNLOAD_LIMIT`), we do not download it or generate a SHA1 for it. Instead, we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3, then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. Once the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by Ruby (a sketch of this comparison follows below). The browser SHA1 checksum is stored on the S3 object as metadata and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate it due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception: to avoid having to re-upload the file, if `UploadCreator` makes no changes to it (such as resizing), we follow the same copy + delete temp path that we use for files that are too large.
3. Finally, we return the serialized upload record back to the client.
There are several errors that could happen that are handled by `UploadsController` as well.
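The checksum comparison from step 2, as a minimal sketch; the metadata key and method names are assumptions:
```ruby
require "digest"

# Compare the browser-computed SHA1 (stored as S3 object metadata by the
# UppyChecksum plugin) against a SHA1 computed over the downloaded temp
# file. A mismatch means the upload was corrupted in transit.
def checksum_mismatch?(s3_metadata, downloaded_file)
  client_sha1 = s3_metadata["sha1-checksum"]
  return false if client_sha1.nil? # some browsers never provide one

  client_sha1 != Digest::SHA1.file(downloaded_file.path).hexdigest
end
```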
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
Prior to this fix, whisperers in personal messages were revealed in
the topic's participants list even though non-staff users are unable to
see the whispers.
Previously, using an invalid value was allowed. This commit tries to
automatically fix the color by adding the missing # symbol, or shows an
error to the user if that is not possible and the value is not a valid
CSS color either.
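A rough sketch of that normalization; the accepted formats and the fallback behavior are assumptions:
```ruby
# Normalize a user-supplied hex color, prepending the missing "#" when
# the rest of the value is a valid 3- or 6-digit hex code.
def normalize_hex_color(value)
  value = value.to_s.strip
  return value if value.match?(/\A#(\h{3}|\h{6})\z/)      # already valid
  return "##{value}" if value.match?(/\A(\h{3}|\h{6})\z/) # missing "#"

  nil # caller shows an error unless the value is a named CSS color
end

normalize_hex_color("c0ffee") # => "#c0ffee"
normalize_hex_color("#fff")   # => "#fff"
normalize_hex_color("zzz")    # => nil
```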
Configuring staged users to watch categories and tags effectively signs
them up to receive many emails. These emails may be unwanted and get marked
as spam, hurting the site's email deliverability.
Users can opt-in to email notifications by logging on to their
account and configuring their own preferences.
If staff need to configure these preferences on behalf of
staged users, the "allow changing staged user tracking" site setting
can be enabled. It is disabled by default.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Flips the `content_security_policy_frame_ancestors` default to enabled, and
removes `HTTP_REFERER` checks on embed requests, as the new referer
privacy options made the check fragile.
Fixes two issues:
- ignores invalid XML in a custom icon sprite SVG file (and outputs an error if the sprite was uploaded via the admin UI)
- clears SVG sprite cache when deleting an `icons-sprite` upload in a theme
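Detecting the invalid XML could look like this with Nokogiri's strict parsing; the error reporting shown is illustrative:
```ruby
require "nokogiri"

# Parse the uploaded SVG sprite strictly; invalid XML raises instead of
# being silently half-parsed, so we can skip the sprite and report it.
def valid_svg_sprite?(svg)
  Nokogiri::XML(svg) { |config| config.strict }
  true
rescue Nokogiri::XML::SyntaxError => e
  Rails.logger.warn("Invalid icons-sprite SVG: #{e.message}")
  false
end
```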
The date shown in the topic timeline was one day later if the post at that
position was made near midnight. This happened because the number of days
was rounded down.
Mixing multisite and standard specs can lead to issues (e.g. when using `fab!`).
Disabled the upcoming rubocop rule (https://github.com/discourse/rubocop-discourse/pull/11) for two files that have thoroughly tangled both types of specs.
Take 2 of https://github.com/discourse/discourse/pull/13466.
Fixes a few issues with the original PR:
- color definition stylesheet target now includes the theme id, to avoid themes set to use the default color scheme loading the same stylesheet
- changes the internal cache key for color definition stylesheet to reset the pre-existing cache
We renamed the site setting for this long ago, but there
were a few places left in the code base where "ninja edit"
needed to be turned into "grace period". Doing this here
to avoid combative language.
Updated the context name, and fixed a typo that was the source of flakiness.
The error was:
```
1) TopicView with a few sample posts #first_post_bookmark_reminder_at gets the first post bookmark reminder at for the user
Failure/Error: expect(second[:reminder_at]).to eq_time(bookmark1.reminder_at)
2021-07-01 06:49:40 UTC is not within 1 millisecond of 2021-07-01 06:49:39 UTC
# ./spec/components/topic_view_spec.rb:426:in `block (4 levels) in <main>'
# ./spec/rails_helper.rb:279:in `block (2 levels) in <top (required)>'
# ./bundle/ruby/2.7.0/gems/webmock-3.13.0/lib/webmock/rspec.rb:37:in `block (2 levels) in <top (required)>'
```
We recently changed the behaviour of the topic-level bookmark button (https://github.com/discourse/discourse/pull/13407). That PR made the button open the edit bookmark modal when there is only one bookmark on the topic, instead of just removing that bookmark as before.
This PR fixes several problems that weren't taken into account in the previous PR:
1. Everything should work fine even on very big topics, when a bookmarked post is unloaded from the post stream. I've added code that loads the post we need and makes everything work as expected.
2. When at least one bookmark on the topic has a reminder, we should always show the clock icon on the topic-level bookmark button.
3. We should show correct tooltips for the topic-level bookmark button.