This commit fixes the following issue:
* User creates a post
* Akismet, or some other mechanism like requiring posts to be approved,
puts the post in the review queue, deleting it
* Admin approves the post
* Email is never sent to mailing list mode subscribers
We intentionally do not enqueue this for every single post when
recovering a topic (i.e. recovering the first post), since the topic
could have a lot of posts with emails already sent, and we don't want
to clog Sidekiq with thousands of notify jobs.
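A minimal sketch of the guard, assuming a hypothetical helper (the real code paths in Discourse differ):
```ruby
# Sketch only: notify the mailing list for a single recovered post, but
# never when an entire topic is recovered via its first post.
def notify_mailing_list_on_recovery(post)
  return if post.is_first_post? # topic recovery: skip to avoid mass re-enqueueing
  Jobs.enqueue(:notify_mailing_list_subscribers, post_id: post.id)
end
```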
We refer to the channel name rather than title elsewhere
(including the new channel modal), so we should be consistent.
Title is an internal abstraction, since DM channels cannot have
names (currently).
Also change the name field on channel edit to an `<input type="text">`
rather than a `<textarea>`, since we don't want a huge input here.
Our JS files reference sourcemaps relative to their current path. On sites with non-S3 CDN setups, we use a special path for brotli assets (39a524aa). This caused the sourcemap requests to 404.
This commit fixes the issue by allowing the `.map` files to be accessed under `/brotli_asset/*`.
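A sketch of the idea in Rails routing terms (the constraint shape is an assumption, not the exact Discourse routes file):
```ruby
# Sketch: widen the brotli asset route so `.map` files are served too,
# instead of 404ing when a JS file references its sourcemap.
get "brotli_asset/*path" => "static#brotli_asset",
    format: false,
    constraints: { path: /.*\.(js|map)\z/ }
```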
This was previously disabled because of incompatibility with the ember-cli proxy. This commit fixes that incompatibility, and restores the development behaviour to match production.
There were three issues at play:
1. Our bootstrap-js addon handles the forwarding of most requests in the ember-cli proxy. This is not built to handle streaming responses. Solution: skip our custom request processing for `/message-bus/*` and use ember-cli's default `http-proxy`.
2. The request/response size-limiting middleware (`rawMiddleware`) would apply even to unhandled paths, causing request and response bodies to be buffered. Solution: skip it for any paths which are not handled by our custom addon.
3. Express.js servers will buffer/compress responses. Solution: add `Cache-Control: no-transform` to message-bus responses. For now I've done this in development only, but it may be useful to add it to message-bus's default headers in the future.
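For the third fix, message_bus exposes a hook for extra response headers; a development-only sketch (treat the exact wiring as an assumption):
```ruby
# Sketch: ask intermediaries not to buffer or compress the long-polling
# message-bus responses (development only, per this commit).
if Rails.env.development?
  MessageBus.extra_response_headers_lookup do |_env|
    { "Cache-Control" => "no-transform" }
  end
end
```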
TL4 users can already list and unlist topics, but they can't see
the unlisted topics. This change brings things to par by also
allowing TL4 users to see unlisted topics.
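Conceptually the guardian check goes from staff-only to staff-or-TL4, something like this sketch (the actual method in Discourse may differ):
```ruby
# Sketch: allow TL4 users, in addition to staff, to see unlisted topics.
def can_see_unlisted_topics?
  is_staff? || @user.has_trust_level?(TrustLevel[4])
end
```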
When the `tags_listed_by_group` site setting is enabled, we were seeing
the N+1 queries problem when multiple `TagGroup` records are listed.
This commit fixes that by ensuring that we do not filter the `tags`
association with a fresh query after it has been eager loaded.
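Roughly, the difference looks like this (illustrative; the filter condition is made up for the example):
```ruby
# N+1: calling .where on an eager-loaded association bypasses the
# preloaded records and issues a fresh query for every TagGroup.
TagGroup.includes(:tags).each do |group|
  group.tags.where.not(name: nil).to_a # one query per group
end

# Fixed: filter the preloaded records in Ruby instead.
TagGroup.includes(:tags).each do |group|
  group.tags.reject { |t| t.name.nil? } # no extra queries
end
```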
Adds the pre-commit hook for syntax_tree to lefthook. Didn't
add the normal linter because it seems to have a maximum file
limit, which we exceed with the combination of *.rb and *.rake.
If the format doesn't match on commit, we get this:
```
EXECUTE > syntax_tree
[warn] lib/post_jobs_enqueuer.rb
[warn] lib/tasks/s3.rake
The listed files did not match the expected format.
```
So it can easily be overridden in a plugin, for example.
### Added more tests to provide better coverage
We previously only had `u.silenced_till IS NULL`, but I made it consistent with pretty much every other place where we check for "active" users.
These two new lines do change the query a tiny bit though.
**Before**
- You could not get the badge if you were currently silenced (no matter what period is being checked)
- You could get the badge if you were suspended 😬
**After**
- You can't get the badge if you were silenced during the past year
- You can't get the badge if you were suspended during the past year
### Improved the performance of the query by using `NOT EXISTS` instead of `LEFT JOIN / COUNT() = 0`
There is no difference in behaviour between
```sql
LEFT JOIN user_badges AS ub ON ub.user_id = u.id AND ...
[...]
HAVING COUNT(ub.*) = 0
```
and
```sql
NOT EXISTS (SELECT 1 FROM user_badges AS ub WHERE ub.user_id = u.id AND ...)
```
The only difference is performance: `NOT EXISTS` is 10-30% faster on very large databases (i.e. posts and users in the millions). I checked on 3 of the largest datasets I could find.
Added in c2013865d7,
this migration was supposed to only turn off the hashtag
setting for existing sites (since that was the old default)
but it's doing it for new ones too, because we run all migrations
on new sites.
Instead, we should only run this if the first migration was
only just created, meaning it's a new site.
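A sketch of the guard (`schema_migration_details` is Discourse's migration bookkeeping table; the one-hour window and the simplified UPDATE are assumptions):
```ruby
# Sketch: treat the site as existing only if the very first migration
# ran a while ago; skip the backfill on brand-new sites.
first_migration_at = DB.query_single(
  "SELECT created_at FROM schema_migration_details ORDER BY id LIMIT 1",
).first

if first_migration_at && first_migration_at < 1.hour.ago
  # Existing site: preserve the old default by turning the setting off.
  # (Simplified; the real migration may insert the row instead.)
  execute <<~SQL
    UPDATE site_settings
    SET value = 'f'
    WHERE name = 'enable_experimental_hashtag_autocomplete'
  SQL
end
```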
If the `enable_experimental_hashtag_autocomplete` setting is
enabled, then we should autolink hashtag references to the
archived channels (e.g. #blah::channel) for a nicer UX, and
just show the channel name if not (since doing #channelName
can lead to weird inconsistent results).
When the thread is aborted, an exception is raised before a job's `start` time is set, which then raises another exception in the `ensure` block. This commit checks that `start` exists, and also sets `abort_on_exception = true` so that an issue like this would cause test failures.
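The shape of the pattern and the fix, as a sketch (job and logging helpers are hypothetical):
```ruby
# Sketch: `start` may never be assigned if the thread is killed first,
# so the ensure block has to tolerate a nil start.
Thread.abort_on_exception = true # surface such failures as test failures

Thread.new do
  start = nil
  begin
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    run_job # hypothetical job body
  ensure
    record_duration(start) if start # skip when aborted before assignment
  end
end
```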
The `enable_new_notifications_menu` site setting allows sites that have
`navigation_menu` set to `legacy` to use the redesigned notifications
menu before switching to the new sidebar navigation menu.
This should resolve these warnings under Ruby 3.1
```
warning: Passing safe_level with the 2nd argument of ERB.new is deprecated
```
Unfortunately Sprockets 3.x has not seen a rubygems release since 2018, so we need to fetch these improvements via git.
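For reference, the warning comes from the old three-argument form of `ERB.new`; the upstream fix moves to the keyword form (Ruby 2.6+):
```ruby
require "erb"

template = "<%= 1 + 1 %>"

ERB.new(template, nil, "-")       # deprecated: positional safe_level/trim_mode
ERB.new(template, trim_mode: "-") # the non-deprecated keyword form
```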
In Discourse, there are many migration files where we `CREATE INDEX CONCURRENTLY`, which requires us to set `disable_ddl_transaction!`. Setting `disable_ddl_transaction!` in a migration file runs the SQL statements outside of a transaction. The implication of this is that there is no `ROLLBACK` should any of the SQL statements fail.
We have seen lock timeouts occurring when running `CREATE INDEX CONCURRENTLY`. When that happens, the index would still have been created but marked as invalid by Postgres.
Per the Postgres documentation:
> If a problem arises while scanning the table, such as a deadlock or a uniqueness violation in a unique index, the CREATE INDEX command will fail but leave behind an “invalid” index. This index will be ignored for querying purposes because it might be incomplete; however it will still consume update overhead.
> The recommended recovery method in such cases is to drop the index and try again to perform CREATE INDEX CONCURRENTLY. (Another possibility is to rebuild the index with REINDEX INDEX CONCURRENTLY.)
When such scenarios happen, we are supposed to either drop and create the index again or run a REINDEX operation. However, I noticed today that we have not been doing so in Discourse. Instead, we've been incorrectly working around the problem by checking for the index's existence before creating it, in order to make the migration idempotent. What this potentially means is that we might have invalid indexes lying around in the database which PG will ignore for querying purposes.
This commit adds a migration which queries for all the
invalid indexes in the `public` namespace and reindexes them.
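The migration body amounts to something like this sketch (the actual query and quoting may differ):
```ruby
# Sketch: find indexes Postgres marked invalid in the public schema
# and rebuild each one.
invalid_index_names = DB.query_single(<<~SQL)
  SELECT c.relname
  FROM pg_index i
  JOIN pg_class c ON c.oid = i.indexrelid
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE NOT i.indisvalid AND n.nspname = 'public'
SQL

invalid_index_names.each do |name|
  execute "REINDEX INDEX public.\"#{name}\""
end
```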
Since the poll post handler runs very early in the post creation
process, it's possible to run the handler on an obviously invalid post.
This change ensures the post's `raw` value is present before
proceeding.
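In essence, a presence guard at the top of the handler (sketch; the method name is hypothetical):
```ruby
# Sketch: bail out before parsing when the post has no raw content yet.
def extract_polls(post)
  return [] if post&.raw.blank?
  # ... parse [poll] markup from post.raw ...
end
```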
Data Explorer queries have a `user_id` assigned to each query created. DE Reports can be bookmarked for later reference.
When creating the bookmark notification, there was the possibility of a notification error being thrown (that made the notification menu inaccessible) due to a DE Query not having an owner (associated `user_id`). This can happen in a couple of ways:
- having a query created by a user that was then later deleted, leaving the query without ownership
- having a TA create a query for a customer using a temporary account, which would then later be deleted, leaving the query without ownership
Since there is a case where `bookmark.user` is not valid, the PR makes the `bookmark.user.username` optional for a bookmark notification. As [tested](https://github.com/discourse/discourse/pull/19851/files#diff-5b5154de37f96988d551feff6f1dfe5ba804fbcbc1c33b5478dde02a447a634f), in the case a username is not present, we will still render the `content` of the notification minus the username. This creates a safe fallback when looking up invalid users.
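The fallback amounts to safe navigation on the owner, roughly (field names are illustrative):
```ruby
# Sketch: don't blow up when the query's owner was deleted; render the
# notification without the username instead.
payload = {
  display_username: bookmark.user&.username, # nil when the owner is gone
  bookmark_name: bookmark.name,
}.compact
```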