This commit is a redo of 2f1ddadff7dd47f824070c8a3f633f00a27aacde
which we reverted because it blew up an internal CI check. I looked
into it, and it happened because the old migration to add the bookmark
columns still existed, and those columns were dropped in a post migrate,
so the two migrations to add the columns were conflicting before
the post migrate was run.
------
This commit only includes the creation of the new columns and index,
and does not add any triggers, backfilling, or new data.
A backfill will be done in the final PR when we switch this over.
Intermediate PRs will look something like this:
* Add an experimental site setting for using polymorphic bookmarks,
  and make sure the places where bookmarks are created or updated
  fill in the new columns. This setting will be used in subsequent
  PRs as well.
* Listing and searching bookmarks based on polymorphic associations.
* Creating post and topic bookmarks using polymorphic associations,
  and changing the special for_topic logic to just rely on the Topic
  bookmarkable_type.
* Querying bookmark reminders based on polymorphic associations.
* Making sure various other areas, like importers, the bookmark
  guardian, and others, all rely on the associations.
* Preparing plugins that rely on the Bookmark model to use polymorphic
  associations.
The final core PR will remove all the setting gates and switch over
to using the polymorphic associations, backfill the bookmarks
table columns, and ignore the old post_id and for_topic columns.
Then it will just be a matter of dropping the old columns down the
line.
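For reference, a minimal sketch of what this columns-and-index migration might look like (the index name and uniqueness are assumptions, not necessarily what the commit shipped):
```
# Sketch only; index details are assumptions.
class AddPolymorphicColumnsToBookmarks < ActiveRecord::Migration[6.1]
  def up
    # Guard against the conflict described above, where the old add-column
    # migration could run again before the post migrate drops the columns.
    unless column_exists?(:bookmarks, :bookmarkable_id)
      add_column :bookmarks, :bookmarkable_id, :integer
    end
    unless column_exists?(:bookmarks, :bookmarkable_type)
      add_column :bookmarks, :bookmarkable_type, :string
    end
    add_index :bookmarks,
              [:user_id, :bookmarkable_type, :bookmarkable_id],
              name: "idx_bookmarks_user_polymorphic_unique",
              unique: true,
              if_not_exists: true
  end

  def down
    remove_index :bookmarks, name: "idx_bookmarks_user_polymorphic_unique", if_exists: true
    remove_column :bookmarks, :bookmarkable_id, if_exists: true
    remove_column :bookmarks, :bookmarkable_type, if_exists: true
  end
end
```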
This URL was originally updated in 89cb537fae. However, some sites are not using the proxy, and have configured their forum to hotlink images directly to avatars.discourse.org.
We intend to shut down this domain in favor of `avatars.discourse-cdn.com`, so this migration will re-write any matching site setting values and queue affected posts for rebaking.
* FEATURE: upload an avatar option for uploading avatars with selectable avatars
Allow staff or users at or above a trust level to upload avatars even when the site
has selectable avatars enabled.
Everyone can still pick from the list of avatars. The option to upload is shown
below the selectable avatar list.
Refactored the boolean site setting into an enum with the following values:
disabled: No selectable avatars enabled (default)
everyone: Show selectable avatars, and allow everyone to upload custom avatars
tl1: Show selectable avatars, but require tl1+ and staff to upload custom avatars
tl2: Show selectable avatars, but require tl2+ and staff to upload custom avatars
tl3: Show selectable avatars, but require tl3+ and staff to upload custom avatars
tl4: Show selectable avatars, but require tl4 and staff to upload custom avatars
staff: Show selectable avatars, but only allow staff to upload custom avatars
no_one: Show selectable avatars. No users can upload custom avatars
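A rough sketch of how the permission check could map onto this enum; the setting and method names here are illustrative assumptions, not the actual Discourse API:
```
# Illustrative sketch, not the actual Discourse implementation.
def can_upload_custom_avatar?(user)
  case SiteSetting.selectable_avatars_mode
  when "disabled", "everyone" then true # uploads are not gated by this setting
  when "tl1" then user.staff? || user.trust_level >= 1
  when "tl2" then user.staff? || user.trust_level >= 2
  when "tl3" then user.staff? || user.trust_level >= 3
  when "tl4" then user.staff? || user.trust_level >= 4
  when "staff" then user.staff?
  when "no_one" then false
  end
end
```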
Co-authored-by: Régis Hanol <regis@hanol.fr>
This commit makes sure that the email log's bounce_error_code
conforms to the SMTP error code RFC on save, so that
it is always in the format X.X.X or XXX without any
additional string details. Also included is a migration
to fix this issue for past records.
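A sketch of the kind of normalization described; the exact regex is an assumption (RFC 3463 enhanced codes look like 5.1.1, RFC 5321 basic codes like 550):
```
# Sketch of the normalization; the regex is an assumption.
def normalize_bounce_error_code(raw)
  return nil if raw.blank?
  # Prefer an enhanced status code ("5.1.1"), fall back to a basic one ("550").
  raw[/\d\.\d{1,3}\.\d{1,3}/] || raw[/\d{3}/]
end

normalize_bounce_error_code("5.1.1 User unknown")    # => "5.1.1"
normalize_bounce_error_code("550 Mailbox not found") # => "550"
```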
We added this constraint in 5bd55acf83
but it is causing problems in hosted sites and is catching the
issue too far down the line. This commit removes the constraint
for now, and also fixes an issue found with PostDestroyer
which wasn't using the UserStatCountUpdater when updating post_count
and thus was causing negative numbers to occur.
Follow up to 88a8584348. Sets
the baked version of all posts with custom emoji and a secure
media URL in the cooked content to 0. Then our periodic rebake
posts job will rebake them to apply the fix in the linked
commit. This only matters on sites with secure media enabled.
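A sketch of the rebake-queueing idea; the matching conditions here are assumptions based on the description:
```
# Sketch; the actual WHERE conditions in the migration may differ.
class ZeroBakedVersionForSecureMediaEmojiPosts < ActiveRecord::Migration[6.0]
  def up
    execute <<~SQL
      UPDATE posts
      SET baked_version = 0
      WHERE cooked LIKE '%emoji%'
        AND cooked LIKE '%secure-media-uploads%'
    SQL
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```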
It is too close to the 2.8 release for incomplete
feature shenanigans. Ignores and drops the columns and drops
the trigger/function introduced in
e21c640a3c.
Will pick this feature back up post-release.
The old OnceOff job could perform pretty slowly on sites with millions of emails.
The new implementation operates in batches in a migration, minimizing locking.
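A sketch of the batched pattern; the table condition and batch size here are illustrative, not the actual migration:
```
# Sketch of the batched approach; condition and batch size are illustrative.
class CleanUpEmailLogsInBatches < ActiveRecord::Migration[6.0]
  disable_ddl_transaction!

  def up
    loop do
      deleted = execute(<<~SQL).cmd_tuples
        DELETE FROM email_logs
        WHERE id IN (
          SELECT id FROM email_logs
          WHERE user_id IS NULL -- illustrative condition
          LIMIT 50000
        )
      SQL
      break if deleted == 0
    end
  end
end
```
Each iteration touches at most one batch, so locks stay short and the loop simply exits once no rows remain.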
Currently when a user creates posts that are moderated (for whatever
reason), a popup is displayed saying the post needs approval and the
total number of the user’s pending posts. But then this piece of
information is kind of lost and there is nowhere for the user to see
what their pending posts are or how many there are.
This patch solves this issue by adding a new “Pending” section to the
user’s activity page when there are some pending posts to display. When
there are none, then the “Pending” section isn’t displayed at all.
* PERF: Improve database query perf when loading topics for a category.
Instead of left joining the `topics` table against `categories` by filtering with `categories.id`,
we can improve the query plan by filtering against `topics.category_id`
first before joining, which helps to reduce the number of rows in the
topics table that have to be joined against the other tables and also
makes better use of our existing index.
The following is a before and after of the query plan for a category
with many subcategories.
Before:
```
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1.28..747.09 rows=30 width=12) (actual time=85.502..2453.727 rows=30 loops=1)
   ->  Nested Loop Left Join  (cost=1.28..566518.36 rows=22788 width=12) (actual time=85.501..2453.722 rows=30 loops=1)
         Join Filter: (category_users.category_id = topics.category_id)
         Filter: ((topics.category_id = 11) OR (COALESCE(category_users.notification_level, 1) <> 0) OR (tu.notification_level > 1))
         ->  Nested Loop Left Join  (cost=1.00..566001.58 rows=22866 width=20) (actual time=85.494..2453.702 rows=30 loops=1)
               Filter: ((COALESCE(tu.notification_level, 1) > 0) AND ((topics.category_id <> 11) OR (topics.pinned_at IS NULL) OR ((topics.pinned_at <= tu.cleared_pinned_at) AND (tu.cleared_pinned_at IS NOT NULL))))
               Rows Removed by Filter: 1
               ->  Nested Loop  (cost=0.57..528561.75 rows=68606 width=24) (actual time=85.472..2453.562 rows=31 loops=1)
                     Join Filter: ((topics.category_id = categories.id) AND ((categories.topic_id <> topics.id) OR (categories.id = 11)))
                     Rows Removed by Join Filter: 13938306
                     ->  Index Scan using index_topics_on_bumped_at on topics  (cost=0.42..100480.05 rows=715549 width=24) (actual time=0.010..633.015 rows=464623 loops=1)
                           Filter: ((deleted_at IS NULL) AND ((archetype)::text <> 'private_message'::text))
                           Rows Removed by Filter: 105321
                     ->  Materialize  (cost=0.14..36.04 rows=30 width=8) (actual time=0.000..0.002 rows=30 loops=464623)
                           ->  Index Scan using categories_pkey on categories  (cost=0.14..35.89 rows=30 width=8) (actual time=0.006..0.040 rows=30 loops=1)
                                 Index Cond: (id = ANY ('{11,53,57,55,54,56,112,94,107,115,116,117,97,95,102,103,101,105,99,114,106,113,104,98,100,96,108,109,110,111}'::integer[]))
               ->  Index Scan using index_topic_users_on_topic_id_and_user_id on topic_users tu  (cost=0.43..0.53 rows=1 width=16) (actual time=0.004..0.004 rows=0 loops=31)
                     Index Cond: ((topic_id = topics.id) AND (user_id = 1103877))
         ->  Materialize  (cost=0.28..2.30 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=30)
               ->  Index Scan using index_category_users_on_user_id_and_last_seen_at on category_users  (cost=0.28..2.29 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)
                     Index Cond: (user_id = 1103877)
 Planning Time: 1.359 ms
 Execution Time: 2453.765 ms
(23 rows)
```
After:
```
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=1.28..438.55 rows=30 width=12) (actual time=38.297..657.215 rows=30 loops=1)
-> Nested Loop Left Join (cost=1.28..195944.68 rows=13443 width=12) (actual time=38.296..657.211 rows=30 loops=1)
Filter: ((categories.topic_id <> topics.id) OR (topics.category_id = 11))
Rows Removed by Filter: 29
-> Nested Loop Left Join (cost=1.13..193462.59 rows=13443 width=16) (actual time=38.289..657.092 rows=59 loops=1)
Join Filter: (category_users.category_id = topics.category_id)
Filter: ((topics.category_id = 11) OR (COALESCE(category_users.notification_level, 1) <> 0) OR (tu.notification_level > 1))
-> Nested Loop Left Join (cost=0.85..193156.79 rows=13489 width=20) (actual time=38.282..657.059 rows=59 loops=1)
Filter: ((COALESCE(tu.notification_level, 1) > 0) AND ((topics.category_id <> 11) OR (topics.pinned_at IS NULL) OR ((topics.pinned_at <= tu.cleared_pinned_at) AND (tu.cleared_pinned_at IS NOT NULL))))
Rows Removed by Filter: 1
-> Index Scan using index_topics_on_bumped_at on topics (cost=0.42..134521.06 rows=40470 width=24) (actual time=38.267..656.850 rows=60 loops=1)
Filter: ((deleted_at IS NULL) AND ((archetype)::text <> 'private_message'::text) AND (category_id = ANY ('{11,53,57,55,54,56,112,94,107,115,116,117,97,95,102,103,101,105,99,114,106,113,104,98,100,96,108,109,110,111}'::integer[])))
Rows Removed by Filter: 569895
-> Index Scan using index_topic_users_on_topic_id_and_user_id on topic_users tu (cost=0.43..1.43 rows=1 width=16) (actual time=0.003..0.003 rows=0 loops=60)
Index Cond: ((topic_id = topics.id) AND (user_id = 1103877))
-> Materialize (cost=0.28..2.30 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=59)
-> Index Scan using index_category_users_on_user_id_and_last_seen_at on category_users (cost=0.28..2.29 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)
Index Cond: (user_id = 1103877)
-> Index Scan using categories_pkey on categories (cost=0.14..0.17 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=59)
Index Cond: (id = topics.category_id)
Planning Time: 1.633 ms
Execution Time: 657.255 ms
(22 rows)
```
* PERF: Optimize index on topics bumped_at.
Replace `index_topics_on_bumped_at` index with a partial index on `Topic#bumped_at` filtered by archetype since there is already another index that covers private topics.
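A sketch of what the replacement might look like; the index name and exact predicate are assumptions based on the description:
```
# Sketch; index name and predicate are assumptions.
class ReplaceIndexTopicsOnBumpedAt < ActiveRecord::Migration[6.1]
  disable_ddl_transaction!

  def up
    remove_index :topics, name: :index_topics_on_bumped_at, if_exists: true
    add_index :topics, :bumped_at,
              name: :index_topics_on_bumped_at_public,
              algorithm: :concurrently,
              where: "deleted_at IS NULL AND archetype <> 'private_message'"
  end
end
```
Building the index concurrently avoids locking the high-traffic topics table, which is why the migration has to run outside a DDL transaction.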
We don't need no stinkin' denormalization! This commit ignores
the topic_id column on bookmarks, to be deleted at a later date.
We don't really need this column, and it's better to rely on
post.topic_id as the canonical topic_id for bookmarks; then we
don't need to remember to update both columns if the bookmarked
post moves to another topic.
The duration column has been ignored for topic_timers since commit
4af77f1e38; we use duration_minutes instead.
Also removing the duration key from Topic.set_or_create_timer. The only
plugin to use this was discourse-solved, which hasn't used it since
c722b94a97.
In the previous commit 5222247 we added a topic_id column to EmailLog.
This simply backfills it in batches. The next PR will get rid of the
topic method defined on EmailLog in favour of belongs_to.
Having a large number of post-deploy migrations running out-of-numerical-sequence with pre-deploy migrations can be problematic. For example, if we have the sequence
- db/migrate/2017... - add column
- db/post_migrate/2018... - drop the column
- db/migrate/2021... - add the same column again
It will work fine in numerical order. But if you run the pre-deploy migrations **followed by** the post-deploy migrations, you will not get the same result.
Our post-deploy system is designed to allow for seamless upgrades of Discourse. However, it is reasonable for us to only support this totally seamless experience for a limited period of time. This commit moves all post_deploy migrations which are more than 1 year old (i.e. more than 2 major Discourse versions ago) into the regular pre-deploy migrations directory. This limits the impact of any edge cases caused by out-of-numerical-sequence migrations.
The first thing we needed here was an enum rather than a boolean to determine how a directory_column was created. Now we have `automatic`, `user_field` and `plugin` directory columns.
This plugin API assumes that the plugin has added a migration to add a column to the `directory_items` table.
This was created to be initially used by discourse-solved. PR with API usage - https://github.com/discourse/discourse-solved/pull/137/
Users who use encoded slugs on their sites sometimes run into a 500 error when pasting a link to another topic in a post. The problem happens when generating a backward "reflection" link that would appear in the linked topic. The link URL is restricted at the database level to 500 characters in length. At first glance, that should be enough, since we already have a restriction on topic title length.
But it isn't enough when a site uses encoded slugs, like here (take a look at the URL). The link to a topic, in this case, can be much longer than 500 characters.
By the way, the error happens only when generating a "reflection" link; it doesn't happen with a direct link, because we truncate that link. Truncation works there because the original long link is still present in the post body and can be used for navigation. But we can't do the same for backward "reflection" links (without rewriting their implementation); the whole link must be saved to the database.
The simplest and cleanest solution is just to remove the restriction at the database level. Abuse is impossible here since we are already protected by the restriction on topic title length. There are no performance benefits to length-constrained columns in Postgres; in fact, length-constrained columns need a few extra CPU cycles to check the length when storing data.
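A sketch of lifting the limit; the table and column names are assumptions based on the description:
```
# Sketch; names are assumptions.
class RemoveLengthLimitFromTopicLinks < ActiveRecord::Migration[6.1]
  def up
    change_column :topic_links, :url, :string, limit: nil
  end

  def down
    change_column :topic_links, :url, :string, limit: 500
  end
end
```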
The excerpt field in the database is constrained to 1000 chars in length. To support this constraint we added a restriction that the topic_excerpt_maxlength setting must be between 0 and 999.
Unfortunately, sometimes it doesn’t work because:
- topic_excerpt_maxlength restricts the length of the excerpt that is visible to the user. But we HTML-escape text before saving. If an excerpt contains &, it'll be saved as &amp;. That's one character for the user but 5 characters in the database. So if topic_excerpt_maxlength is set to 999, it's not so hard to end up with an excerpt of, for example, 1003 characters in length and run into this issue.
- It's possible to define a custom excerpt for a topic. Such excerpts bypass the length check. So if a user defines a custom excerpt that is too long, they will run into this issue.
Removing the constraint at the database level solves the problem. But we still need the constraint for topic_excerpt_maxlength on the settings page, because excerpts that are too long would make the UI wonky.
Because bookmarks have both topic and post IDs, when a post was moved into another topic the bookmark was still attached to the post but did not show in the UI. This PR makes it so that all topic IDs for bookmarks attached to a post are updated when the post is moved.
Also included is a migration to fix affected records (e.g. on Meta there are 20 affected records).
See: https://meta.discourse.org/t/improved-bookmarks-with-reminders/144542/203
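A sketch of the data fix for affected records (the actual migration may differ in details):
```
# Sketch of the backfill described above.
class SyncBookmarkTopicIds < ActiveRecord::Migration[6.0]
  def up
    execute <<~SQL
      UPDATE bookmarks
      SET topic_id = posts.topic_id
      FROM posts
      WHERE bookmarks.post_id = posts.id
        AND bookmarks.topic_id <> posts.topic_id
    SQL
  end
end
```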
I was adding specs to ensure that post actions and uploads are removed for permanently deleted posts.
I noticed that post revisions were not permanently destroyed. I added a migration to fix old data.
* FIX: optimise MoveNewSinceToTable
Avoids shuffling all ids around to the app (only use min / max)
Ensure the query for boundaries is ordered by user_id
The bug was mentioned on meta: https://meta.discourse.org/t/pressing-dismiss-new-doesnt-clear-new-topics/179858
Problem is that sometimes the user has TopicUser records with `last_read_post_number` set as NULL. In that case, the topic is still "new" to them and should be dismissed when they click dismiss button.
In addition, I added that condition to the post_migration and bumped the number to fix existing records. The migration is written to be idempotent, so it will do no harm to already deployed instances.
Original PR was reverted because of broken migration https://github.com/discourse/discourse/pull/12058
I fixed it by adding this line
```
AND topics.id IN(SELECT id FROM topics ORDER BY created_at DESC LIMIT :max_new_topics)
```
This time it is left joining a limited number of topics. I tested it on a few databases and it worked quite smoothly.
Because of a WHERE clause of duration_minutes != duration, which is
never true in SQL when duration_minutes is NULL, the previous migration
to fill the new duration_minutes column failed to update those rows.
This corrects the failed migration by just running the update where
duration_minutes IS NULL and duration IS NOT NULL.
Previous commit is 4af77f1
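A sketch of the corrected backfill, using exactly the conditions described above:
```
# Sketch; in SQL, duration_minutes != duration is never true when
# duration_minutes IS NULL, which is why the first attempt missed rows.
class FixTopicTimerDurationMinutes < ActiveRecord::Migration[6.1]
  def up
    execute <<~SQL
      UPDATE topic_timers
      SET duration_minutes = duration
      WHERE duration_minutes IS NULL
        AND duration IS NOT NULL
    SQL
  end
end
```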
Run the 'MigrateSearchDataAfterDefaultLocaleRename' post migration in batches of 500k records.
This will hopefully prevent any potential deadlocks on large tables.
The problem with this index is that on sites with a high non-pm to pm
posts ratio, the index is essentially duplicating the existing index on
`PostSearchData#search_data`. If the site is huge, the index ends up
taking up more disk space.
Very large batches can take an enormous amount of time due to churn.
Limiting to 200k changes at a time gives us a far larger chance of finishing
the job without timing out or deadlocking.
A giant transaction in a post migration can be very risky.
This splits the large amount of work this migration needs to do into 2 parts:
1. A re-runnable cleanup job prior to transaction
2. A minimally sized transaction to add the database constraint
This avoids large amounts of churn on the table
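A sketch of the two-phase shape; the table, condition, and constraint here are illustrative, not the actual migration:
```
# Sketch of the two-phase pattern; all names here are illustrative.
class AddConstraintAfterCleanup < ActiveRecord::Migration[6.0]
  disable_ddl_transaction!

  def up
    # Phase 1: re-runnable batched cleanup, outside any long transaction.
    loop do
      updated = execute(<<~SQL).cmd_tuples
        UPDATE widgets
        SET quantity = 0
        WHERE id IN (SELECT id FROM widgets WHERE quantity < 0 LIMIT 10000)
      SQL
      break if updated == 0
    end

    # Phase 2: a minimally sized transaction that only adds the constraint.
    transaction do
      execute <<~SQL
        ALTER TABLE widgets
        ADD CONSTRAINT quantity_nonnegative CHECK (quantity >= 0)
      SQL
    end
  end
end
```
Because the cleanup already ran, the constraint's validation scan passes quickly and the transaction stays small.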
Because the previous migration was already deployed and some databases were already migrated, I needed to add some conditions to the migration.
Previous migration - https://github.com/discourse/discourse/blob/master/db/post_migrate/20200629232159_rename_path_whitelist_to_allowed_paths.rb
What will happen when the previous migration was not run:
1. the allowed_paths column will be created
2. allowed_paths will be populated with data from path_whitelist
3. the path_whitelist column will be dropped
What will happen when the previous migration was already run:
1. the allowed_paths column will not be created because it already exists - `unless column_exists?(:embeddable_hosts, :allowed_paths)`
2. data will not be copied because path_whitelist is missing - `if column_exists?(:embeddable_hosts, :path_whitelist) && column_exists?(:embeddable_hosts, :allowed_paths)`
3. the path_whitelist column deletion will be skipped - `if column_exists?(:embeddable_hosts, :path_whitelist)`
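Putting the quoted guards together, the migration reads roughly like this (the column type is an assumption):
```
# Sketch assembling the guards quoted above; column type is an assumption.
class RenamePathWhitelistToAllowedPathsAgain < ActiveRecord::Migration[6.0]
  def up
    unless column_exists?(:embeddable_hosts, :allowed_paths)
      add_column :embeddable_hosts, :allowed_paths, :string
    end

    if column_exists?(:embeddable_hosts, :path_whitelist) &&
       column_exists?(:embeddable_hosts, :allowed_paths)
      execute "UPDATE embeddable_hosts SET allowed_paths = path_whitelist"
    end

    if column_exists?(:embeddable_hosts, :path_whitelist)
      remove_column :embeddable_hosts, :path_whitelist
    end
  end
end
```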
This adds an option to "delete on owner reply" to bookmarks. If you select this option in the modal, then reply to the topic the bookmark is in, the bookmark will be deleted on reply.
This PR also changes the checkboxes for these additional bookmark options to an Integer column in the DB with a combobox to select the option you want.
The use cases are:
* Sometimes I will bookmark a topic to read it later. In this case we definitely don't need to keep the bookmark after I have replied to it.
* Sometimes I will read a topic on mobile and prefer to reply on a PC later. Or I may have to do some research before replying. So I will bookmark it to reply later.
This is a no-op for modern installations. It only affects older sites.
Follow-up to f7d676dce1
This needs to be done in a transaction, because we need to drop and recreate the badge_posts view.
This reverts commit 20780a1eee.
* SECURITY: re-adds accidentally reverted commit:
03d26cd6: ensure embed_url contains valid http(s) uri
* when the merge commit e62a85cf was reverted, git chose the 2660c2e2 parent to land on
instead of the 03d26cd6 parent (which contains security fixes)
* PERF: Dematerialize topic_reply_count
It's only ever used for trust level promotions that run daily, or compared to 0. We don't need to track it on every post creation.
* UX: Add symbol in TL3 report if topic reply count is capped
* DEV: Drop user_stats.topic_reply_count column
We need to recreate the badge_posts view each time we change a
column on the posts table.
The p.* is auto-expanded when the view is created; CASCADE should not
have been used.
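The shape of such a migration, as a sketch; the real badge_posts definition is longer than shown here:
```
# Sketch of the pattern; the real badge_posts view definition is longer.
class ChangePostsColumnInTransaction < ActiveRecord::Migration[6.0]
  def up
    execute "DROP VIEW badge_posts"
    # ...change the posts column here...
    # Recreate the view so the expanded p.* column list matches the new schema.
    execute <<~SQL
      CREATE VIEW badge_posts AS
      SELECT p.*
      FROM posts p
      JOIN topics t ON t.id = p.topic_id
      WHERE t.visible AND p.post_type IN (1, 2, 3)
    SQL
  end
end
```
Dropping and recreating in the same migration is why the whole thing must run inside one transaction.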
avg_time on posts and topics has not been used in a year.
This uses a re-runnable, ddl-transaction-disabled migration to
drop the columns, because it touches very high traffic tables and may
deadlock.
This reverts commit 6f9177e2ed.
We decided on a completely different approach to the problem.
Instead we will let blocked emails be treated as canonical.
Introduce the concept of "high priority notifications" which include PM and bookmark reminder notifications. Now bookmark reminder notifications act in the same way as PM notifications (float to top of recent list, show in the green bubble) and most instances of unread_private_messages in the UI have been replaced with unread_high_priority_notifications.
The user email digest is changed to just have a section about unread high priority notifications, the unread PM section has been removed.
A high_priority boolean column has been added to the Notification table and relevant indices added to account for it.
unread_private_messages has been kept on the User model purely for backwards compat, but now just returns the unread_high_priority_notifications count, so this may cause some inconsistencies in the UI.
This syncs the value of the `reply_id` column into the `reply_post_id` column until all servers have been deployed and the post migrations ran.
Follow-up to ab07b945c2
Temporarily recreate already dropped functions in the discourse_functions schema in order to allow restoring of backups which still reference dropped functions.
DEV: deprecate `invite.via_email` in favor of `invite.emailed_status`
This commit adds a new column `emailed_status` in `invites` table for
tracking email sending status.
0 - not required
1 - pending
2 - bulk pending
3 - sending
4 - sent
For normal email invites, the invite record is created with
emailed_status set to 'pending'.
When bulk invites are sent, the invite record is created with
emailed_status set to 'bulk pending'.
For invites that generate a link, the invite record is created with
emailed_status set to 'not required'.
When the invite email is queued, emailed_status is updated to 'sending'.
Once the email is sent via the `InviteEmail` job, the invite's
emailed_status is updated to 'sent'.
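For reference, the list above as a simple lookup; how the model actually exposes these values is an assumption:
```
# Sketch; the actual model may use a dedicated Enum helper instead.
class Invite < ActiveRecord::Base
  EMAILED_STATUS_TYPES = {
    not_required: 0,
    pending: 1,
    bulk_pending: 2,
    sending: 3,
    sent: 4
  }.freeze
end
```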
This reduces chances of errors where consumers of strings mutate inputs
and reduces memory usage of the app.
The test suite passes now, but there may be some stuff left, so we will
run a few sites on a branch prior to merging.
This is a feature that used to be present in discourse-assign but is
much easier to implement in core. It also allows a topic to be assigned
without claiming it for review and vice versa, and allows it to work with
category group reviewers.
Includes support for flags, reviewable users and queued posts, with REST API
backwards compatibility.
Co-Authored-By: romanrizzi <romanalejandro@gmail.com>
Co-Authored-By: jjaffeux <j.jaffeux@gmail.com>
Migrates email user options to a new data structure, where `email_always`, `email_direct` and `email_private_messages` are replaced by
* `email_messages_level`, with options: `always`, `only_when_away` and `never` (defaults to `always`)
* `email_level`, with options: `always`, `only_when_away` and `never` (defaults to `only_when_away`)
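A heavily hedged sketch of how such a backfill might look; the numeric enum values and the exact boolean-to-level mapping here are assumptions, not the actual migration:
```
# Sketch only: numeric values and mapping are assumptions.
execute <<~SQL
  UPDATE user_options
  SET email_level = CASE
        WHEN email_always THEN 0     -- always
        WHEN NOT email_direct THEN 2 -- never
        ELSE 1                       -- only_when_away (default)
      END
SQL
```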
* Doing it in a post migration was a bad idea
because the migration will fail if the site
is down while trying to download uploads
that point back to the instance. This mainly
affects self-hosters using `discourse_docker`,
where `./launcher rebuild` takes the
existing container down.
This moves us away from the delayed drops pattern which
was problematic on two counts. First, it uses a hardcoded "delay for"
duration which may be too short for certain deployment strategies.
Second, delayed drop doesn't ensure that it only runs after
the latest application code has been deployed. If the migration runs
and the application code fails to deploy, running the migration after
"delay for" has been met will cause the application to blow up.
The new strategy allows post deployment migrations to be skipped if the
env `SKIP_POST_DEPLOYMENT_MIGRATIONS` is provided.
```
SKIP_POST_DEPLOYMENT_MIGRATIONS=1 rake db:migrate
-> deploy app servers
SKIP_POST_DEPLOYMENT_MIGRATIONS=0 rake db:migrate
```
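A minimal sketch of how the env switch might be wired up; the actual implementation in Discourse may differ:
```
# config/application.rb (sketch; actual wiring may differ)
unless ENV["SKIP_POST_DEPLOYMENT_MIGRATIONS"] == "1"
  # Only include post-deploy migrations when they are not being skipped.
  config.paths["db/migrate"] << Rails.root.join("db", "post_migrate").to_s
end
```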
To aid with the generation of a post deployment migration, a generator
has been added. Simply run `rails generate post_migration`.