The check used to be necessary because we validated the referrer too, and
this bypass was a workaround for a bug present in some browsers that
do not send the correct referrer.
When creating a group membership request, there is no character
limit on the 'reason' field. This could potentially be used by
an attacker to create an enormous amount of data in the database.
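A minimal sketch of the kind of guard this calls for, assuming the request is backed by a `GroupRequest`-style model; the model name and limit are illustrative:

```ruby
# sketch: cap the reason length at the model level (model name and limit are illustrative)
class GroupRequest < ActiveRecord::Base
  validates :reason, length: { maximum: 5_000 }, allow_nil: true
end
```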
Co-authored-by: Ted Johansson <ted@discourse.org>
Currently we don’t have an association between reviewables and posts.
This sometimes leads to inconsistencies in the DB, as a post can have
been deleted while an associated reviewable is still present.
This patch addresses this issue simply by adding a new association to
the `Post` model and by using the `dependent: :destroy` option.
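Roughly, the new association looks like the sketch below; the polymorphic `as: :target` detail is an assumption based on how reviewables reference their targets:

```ruby
class Post < ActiveRecord::Base
  # destroying a post now also destroys any reviewables pointing at it
  has_many :reviewables, as: :target, dependent: :destroy
end
```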
This is just cleaning up a TODO I had about adding more specs
to this controller -- there are more thorough tests on the
actual HashtagService class and the type-specific hashtag
classes.
In dev/prod, these EPIPE errors are absorbed by unicorn. Most commonly, they occur when a client interrupts a message-bus long-polling request.
Also reverts the EPIPE workaround introduced in 011c9b9973
There was an issue where, if hashtag-cooked HTML was sent
to the ExcerptParser without the keep_svg option, we would
end up with empty </use> and </svg> tags on the parts of the
excerpt where the hashtag was; in this case it happened when a post
push notification was sent.
Fixed this, and also added a way to display only a plaintext
version of the hashtag for cases like this via PrettyText#excerpt.
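A rough sketch of the intended call site; the option name here is illustrative rather than the exact API:

```ruby
# sketch: build a plaintext-safe excerpt for push notifications
# (option name is illustrative)
excerpt = PrettyText.excerpt(post.cooked, 400, plain_hashtags: true)
```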
We've had the UploadReference table for some time now in core,
but it was added after ChatUpload, and chat was just never
moved over to this new system.
This commit changes all chat code dealing with uploads to create/
update/delete/query UploadReference records instead of ChatUpload
records for consistency. At a later date we will drop the ChatUpload
table, but for now keeping it for data backup.
The migration and post migration are the same; we need both in case
any chat uploads are added or removed during deploy.
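A condensed sketch of what the data migration amounts to; the column names are assumed from the UploadReference schema:

```ruby
class CopyChatUploadsToUploadReferences < ActiveRecord::Migration[7.0]
  def up
    execute <<~SQL
      INSERT INTO upload_references (upload_id, target_type, target_id, created_at, updated_at)
      SELECT upload_id, 'ChatMessage', chat_message_id, created_at, created_at
      FROM chat_uploads
      ON CONFLICT DO NOTHING
    SQL
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```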
When checking whether an existing upload should be secure
based on upload references, do not count deleted posts, since
there is still a reference attached to them. Counting them can lead to
issues where e.g. an upload is used for a post and then later
for a custom emoji.
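In query terms, the check should only consider references whose target post still exists, roughly like this sketch:

```ruby
# sketch: ignore references that point at trashed posts
post_reference_exists =
  UploadReference
    .joins("INNER JOIN posts ON posts.id = upload_references.target_id")
    .where(upload_id: upload.id, target_type: "Post")
    .where(posts: { deleted_at: nil })
    .exists?
```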
This commit allows us to set the channel slug when creating new chat
channels. As well as this, it introduces a new `SlugsController` which can
generate a slug using `Slug.for` and a name string for input. We call this
after the user finishes typing the channel name (debounced), fill in
the autogenerated slug in the background, and update the slug input
placeholder.
This autogenerated slug is used by default, but if the user types anything
else into the input, their value is used instead.
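A minimal sketch of the new controller action, assuming a `generate` route and the standard `success_json` helper:

```ruby
class SlugsController < ApplicationController
  requires_login

  def generate
    params.require(:name)
    # Slug.for turns a name like "Cool Cats Club" into "cool-cats-club"
    render json: success_json.merge(slug: Slug.for(params[:name], ""))
  end
end
```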
When an EmbeddableHost is configured for a specific category and that category is deleted, the EmbeddableHost should be deleted as well.
In addition, a migration was added to fix existing data.
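The association side of the change is roughly the following sketch:

```ruby
class Category < ActiveRecord::Base
  # deleting a category now removes any embeddable hosts scoped to it
  has_many :embeddable_hosts, dependent: :destroy
end
```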
Currently, `Tag#topic_count` is a count of all regular topics regardless of whether the topic is in a read restricted category or not. As a result, any user can technically poll a sensitive tag to determine if a new topic is created in a category which the user has no access to. We classify this as a minor leak of sensitive information.
The following changes are introduced in this commit:
1. Introduce `Tag#public_topic_count` which only counts topics which have been tagged with a given tag in public categories.
2. Rename `Tag#topic_count` to `Tag#staff_topic_count` which counts the same way as `Tag#topic_count`. In other words, it counts all topics tagged with a given tag regardless of the category the topic is in. The rename is also done so that we indicate that this column contains sensitive information.
3. Change all previous spots which relied on `Tag#topic_count` to rely on `Tag.topic_column_count(guardian)` which will return the right "topic count" column to use based on the current scope (see the sketch after this list).
4. Introduce `SiteSetting.include_secure_categories_in_tag_counts` site setting to allow site administrators to always display the tag topics count using `Tag#staff_topic_count` instead.
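Point 3 boils down to a column-picking helper roughly like this sketch; the exact conditions are assumptions:

```ruby
# sketch of the column-picking helper described in point 3
def self.topic_column_count(guardian)
  if SiteSetting.include_secure_categories_in_tag_counts || guardian&.is_staff?
    "tags.staff_topic_count"
  else
    "tags.public_topic_count"
  end
end
```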
This fixes a longstanding issue for sites with the
secure_uploads setting enabled. The following scenario could occur,
since we did not check all of the places an upload could be
linked to whenever we used UploadSecurity to check whether an
upload should be secure:
* Upload is created and used for a site setting, set to secure: false
since site setting uploads should not be secure. Let's say it's the favicon
* Favicon for the site is used inside a post in a private category,
e.g. via a Onebox
* We changed the secure status for the upload to true, since it's been
used in a private category and we don't check whether its originator
was a public place
* The site favicon breaks :'(
This was a source of constant consternation. Now, when an upload is _not_
being created and we are checking if an existing upload should be
secure, we check what the first record in the UploadReference
table is for that upload. If it's something public like a site setting,
then we will never change the upload to `secure`.
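Sketched out, the new guard looks something like this; treating "not a Post" as "something public" is an approximation:

```ruby
# sketch: if the earliest reference is something public (e.g. a site setting),
# never flip the existing upload to secure
first_reference = UploadReference.where(upload_id: upload.id).order(:created_at, :id).first
return false if first_reference.present? && first_reference.target_type != "Post"
```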
Prior to this change, we were parsing `Post#cooked` every time we
serialize a post to extract the usernames of mentioned users in the
post. However, the only reason we have to do this is to support
displaying a user's status beside each mention in a post on the client side when
the `enable_user_status` site setting is enabled. When
`enable_user_status` is disabled, we should avoid having to parse
`Post#cooked` since there is no point in doing so.
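A sketch of how the serializer can gate the expensive parsing; the attribute name and extraction logic here are illustrative:

```ruby
class PostSerializer < ApplicationSerializer
  attributes :mentioned_users

  # skip parsing cooked HTML entirely unless user status is enabled
  def include_mentioned_users?
    SiteSetting.enable_user_status
  end

  def mentioned_users
    usernames = object.cooked.scan(/class="mention[^"]*"[^>]*>@([\w.\-]+)/).flatten.uniq
    User.where(username_lower: usernames.map(&:downcase))
  end
end
```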
Since the new hashtag format has been added, we want site
admins to be able to rebake old posts that still use the old hashtag
format. This can now be done with `rake hashtags:mark_old_format_for_rebake`,
which marks posts whose cooked HTML contains hashtags
in this format for rebake:
```
<a class=\"hashtag\" href=\"/c/ux/14\">#<span>ux</span></a>
```
c.f. https://meta.discourse.org/t/what-rebake-is-required-for-the-new-autocomplete-styling/249642/12
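Under the hood, marking for rebake amounts to something like this sketch; the matching and invalidation are simplified:

```ruby
# sketch: invalidate the cached cook so the regular rebake jobs pick these posts up
Post
  .where("cooked LIKE ?", '%<a class="hashtag"%')
  .update_all("baked_version = 0")
```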
This commit fixes the following issue:
* User creates a post
* Akismet, or some other mechanism like requiring posts to be approved, puts
the post in the review queue, deleting it
* Admin approves the post
* Email is never sent to mailing list mode subscribers
We intentionally do not enqueue this for every single post when
recovering a topic (i.e. recovering the first post) since the topics
could have a lot of posts with emails already sent, and we don't want
to clog sidekiq with thousands of notify jobs.
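The fix boils down to enqueuing the existing mailing list job when a single reviewed post is approved, roughly:

```ruby
# sketch: approved/recovered single post -> notify mailing list subscribers
Jobs.enqueue(:notify_mailing_list_subscribers, post_id: post.id)
```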
Our JS files reference sourcemaps relative to their current path. On sites with non-S3 CDN setups, we use a special path for brotli assets (39a524aa). This caused the sourcemap requests to 404.
This commit fixes the issue by allowing the `.map` files to be accessed under `/brotli_asset/*`.
TL4 users can already list and unlist topics, but they can't see
the unlisted topics. This change brings things up to par by allowing
TL4 users to also see unlisted topics.
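At the guardian level this is roughly the following sketch; the method name and placement are assumptions:

```ruby
def can_see_unlisted_topics?
  # staff could already see unlisted topics; TL4 now can too
  is_staff? || @user.has_trust_level?(TrustLevel[4])
end
```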
When the `tags_listed_by_group` site setting is enabled, we were seeing
the N+1 queries problem when multiple `TagGroup` records were listed.
This commit fixes that by ensuring that we do not filter through the
`tags` association (which triggers fresh queries) after the association has been eager loaded.
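In other words, once the association is eager loaded we work with the in-memory records instead of chaining scopes onto it; a rough sketch:

```ruby
# sketch: chaining a scope on tag_group.tags would issue a fresh query per group,
# so filter the already-loaded records in Ruby instead
tag_groups = TagGroup.includes(:tags)
tag_groups.map do |tag_group|
  [tag_group.name, tag_group.tags.reject(&:target_tag_id).map(&:name)]
end
```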
So it can easily be overridden in a plugin, for example.
### Added more tests to provide better coverage
We previously only had `u.silenced_till IS NULL` but I made it consistent with pretty much every other place where we check for "active" users.
These two new lines do change the query a tiny bit though.
**Before**
- You could not get the badge if you were currently silenced (no matter what period is being checked)
- You could get the badge if you were suspended 😬
**After**
- You can't get the badge if you were silenced during the past year
- You can't get the badge if you were suspended during the past year
### Improved the performance of the query by using `NOT EXISTS` instead of `LEFT JOIN / COUNT() = 0`
There is no difference in behaviour between
```sql
LEFT JOIN user_badges AS ub ON ub.user_id = u.id AND ...
[...]
HAVING COUNT(ub.*) = 0
```
and
```sql
NOT EXISTS (SELECT 1 FROM user_badges AS ub WHERE ub.user_id = u.id AND ...)
```
The only difference is performance: the `NOT EXISTS` is 10-30% faster on very large databases (i.e. posts and users in the millions). I checked on 3 of the largest datasets I could find.
Added in c2013865d7,
this migration was supposed to turn off the hashtag
setting only for existing sites (since that was the old default),
but it's doing it for new ones too because we run all migrations
on new sites.
Instead, we should skip it when the first migration was
only just created, meaning it's a new site.
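One way to express that guard, assuming the `schema_migration_details` table Discourse keeps alongside `schema_migrations`:

```ruby
# sketch: treat the site as new if its very first migration ran within the last hour
def new_site?
  oldest = DB.query_single("SELECT MIN(created_at) FROM schema_migration_details").first
  oldest.present? && oldest > 1.hour.ago
end
```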
When the thread is aborted, an exception can be raised before a job's `start` is set, which in turn raises an exception in the `ensure` block. This commit checks that `start` exists, and also sets `abort_on_exception = true` so that this kind of issue would have caused test failures.
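A simplified sketch of the guard; the surrounding scheduler internals are reduced to a single method and illustrative names:

```ruby
Thread.abort_on_exception = true # surface thread aborts as test failures

@duration = 0.0

def timed(job)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  job.call
ensure
  # `start` is nil if the thread was killed before the assignment ran
  @duration += Process.clock_gettime(Process::CLOCK_MONOTONIC) - start if start
end
```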
The `enable_new_notifications_menu` site setting allows sites that have
`navigation_menu` set to `legacy` to use the redesigned notifications
menu before switching to the new sidebar navigation menu.
This patch introduces a cookies rotator as indicated in the Rails
upgrade guide. This allows migrating from the old SHA1 digest to the
new SHA256 digest.
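A condensed sketch of the initializer, following the pattern in the Rails upgrade guide (only the encrypted-cookie rotation is shown):

```ruby
# config/initializers/cookie_rotator.rb (sketch)
Rails.application.config.after_initialize do
  Rails.application.config.action_dispatch.cookies_rotations.tap do |cookies|
    salt = Rails.application.config.action_dispatch.authenticated_encrypted_cookie_salt
    secret_key_base = Rails.application.secret_key_base

    # old SHA1-based key generator so cookies written before the upgrade keep validating
    key_generator = ActiveSupport::KeyGenerator.new(
      secret_key_base, iterations: 1000, hash_digest_class: OpenSSL::Digest::SHA1
    )
    key_len = ActiveSupport::MessageEncryptor.key_len
    old_encrypted_secret = key_generator.generate_key(salt, key_len)

    cookies.rotate :encrypted, old_encrypted_secret
  end
end
```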
The latter can be called directly from the Topic page object,
so we can remove some duplication between the two. There are
levels of page objects (e.g. entire page, component, complete flow)
and it's perfectly valid to call one from another.
We've been doing some work to support new keyword argument semantics in Ruby 3. As part of that we made some changes to `DiscourseEvent::TestHelper`. The backwards compatibility fix doesn't work if the method is called with an empty hash as the final argument. This fix adds a valid option to the final hash in the particular test.
In "GlobalSetting.redirect_avatar_requests" mode, when the application gets
an avatar request it returns a "redirect" to the S3 CDN.
This shields the application from caching avatars and downloading from S3.
However, clients will make two requests per avatar (one to get the
redirect, a second to get the avatar).
A one hour cache on a redirect means there may be an increase in CDN
traffic, given more clients will ask for the redirect every hour.
This may also lead to an increase in origin requests to the application.
To mitigate this, let's cache the CDN URL for 1 day.
The downside is that any changes to the S3 CDN need extra care to allow for
the extra 1-day delay (leave data around for 1 extra day).
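Inside the avatar action, this is roughly a one-line change; the URL variable is illustrative:

```ruby
# sketch: lengthen the client-side cache on the redirect response
expires_in 1.day, public: true
redirect_to s3_cdn_avatar_url, allow_other_host: true
```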
This will give us some aggregate stats on the defer queue performance.
It is limited to 100 entries (for safety), which are stored in an LRU cache.
Scheduler::Defer.stats can then be used to get an array that denotes:
- number of runs and completions (queued, finished)
- error count (errors)
- total duration (duration)
We can look later at exposing these metrics to gain visibility into why
the defer queue is clogged.
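Reading the stats might look like the sketch below; the exact entry shape is assumed from the keys listed above:

```ruby
Scheduler::Defer.stats.each do |entry|
  puts "#{entry[:queued]} queued, #{entry[:finished]} finished, " \
       "#{entry[:errors]} errors, #{entry[:duration].round(3)}s total"
end
```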
0403cda1d1 introduced a regression where
topics in non-read-restricted categories have their TopicTrackingState
MessageBus messages published with the `group_ids: [nil]` option. This
essentially means that no one would be able to view the message.