This table holds associations between uploads and other models. It can be used to prevent removing uploads that are still in use. A rough sketch of the table follows the commit list below.
* DEV: Create upload_references
* DEV: Use UploadReference instead of PostUpload
* DEV: Use UploadReference for SiteSetting
* DEV: Use UploadReference for Badge
* DEV: Use UploadReference for Category
* DEV: Use UploadReference for CustomEmoji
* DEV: Use UploadReference for Group
* DEV: Use UploadReference for ThemeField
* DEV: Use UploadReference for ThemeSetting
* DEV: Use UploadReference for User
* DEV: Use UploadReference for UserAvatar
* DEV: Use UploadReference for UserExport
* DEV: Use UploadReference for UserProfile
* DEV: Add method to extract uploads from raw text
* DEV: Use UploadReference for Draft
* DEV: Use UploadReference for ReviewableQueuedPost
* DEV: Use UploadReference for UserProfile's bio_raw
* DEV: Do not copy user uploads to upload references
* DEV: Copy post uploads again after deploy
* DEV: Use created_at and updated_at from uploads table
* FIX: Check if upload site setting is empty
* DEV: Copy user uploads to upload references
* DEV: Make upload extraction less strict
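As a rough sketch (column and index names are assumptions based on the commit titles, not the exact schema), the table and model could look like:

```ruby
# Hypothetical sketch of the upload_references table and its model.
class CreateUploadReferences < ActiveRecord::Migration[6.1]
  def change
    create_table :upload_references do |t|
      t.references :upload, null: false
      # Polymorphic target: Post, SiteSetting, Badge, Category, etc.
      t.bigint :target_id, null: false
      t.string :target_type, null: false
      t.timestamps
    end

    # One reference row per (upload, target) pair.
    add_index :upload_references,
              %i[upload_id target_type target_id],
              unique: true
  end
end

class UploadReference < ActiveRecord::Base
  belongs_to :upload
  belongs_to :target, polymorphic: true
end
```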
This commit migrates all bookmarks to be polymorphic (using the
bookmarkable_id and bookmarkable_type columns). It also deletes
all the old code guarded behind the use_polymorphic_bookmarks setting,
enables that setting for all sites, and makes it default to true for
the sake of plugins.
No data is deleted in the migrations; the old post_id and for_topic
columns for bookmarks will be dropped later on.
There is no need to wait until after the deploy for this cleanup. In fact, running it later will mean there could be a window of a few minutes during which the site is broken.
The only requirement is that it runs after the broken `20220401130745_create_category_required_tag_groups` migration.
Follow-up to 39a6de3d73
The category table's required_tag_group_id contained references to deleted tag groups, which we copied to the new table. The new serializer tries to get the associated tag group name but fails because the tag group is nil.
This PR adds an inner join in the original migration to make sure tag groups still exist and adds a new post-migration to fix already migrated sites.
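A hedged sketch of the corrected copy, assuming the old columns are `categories.required_tag_group_id` and `categories.min_tags_from_required_group`; the INNER JOIN skips rows whose tag group has been deleted:

```ruby
# Sketch only: copy required tag groups, skipping deleted tag groups.
execute <<~SQL
  INSERT INTO category_required_tag_groups
    (category_id, tag_group_id, min_count, created_at, updated_at)
  SELECT c.id, tg.id, c.min_tags_from_required_group, NOW(), NOW()
  FROM categories c
  INNER JOIN tag_groups tg ON tg.id = c.required_tag_group_id
SQL
```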
Previously we only supported a single 'required tag group' for a category. This commit allows admins to specify multiple required tag groups, each with their own minimum tag count.
A new category_required_tag_groups database table replaces the existing columns on the categories table. Data is automatically migrated.
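A rough sketch of the new table's shape, inferred from the description rather than the exact migration:

```ruby
class CreateCategoryRequiredTagGroups < ActiveRecord::Migration[6.1]
  def change
    create_table :category_required_tag_groups do |t|
      t.references :category, null: false
      t.references :tag_group, null: false
      # Each required tag group carries its own minimum tag count.
      t.integer :min_count, null: false, default: 1
      t.timestamps
    end
  end
end
```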
As we are gradually moving to having a polymorphic
bookmarkable relationship on the Bookmark table,
we need to make the post_id column nullable to be
able to develop and test the new columns, and
for cutover/migration purposes later as well.
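The schema change itself is a one-liner; a sketch:

```ruby
class MakeBookmarksPostIdNullable < ActiveRecord::Migration[6.1]
  def change
    # Allow bookmarks that point at a bookmarkable instead of a post.
    change_column_null :bookmarks, :post_id, true
  end
end
```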
This commit promotes all post_deploy migrations which existed in Discourse v2.7.13 (timestamp <= 20210328233843).
This reduces the likelihood of issues relating to migration run order.
Also fixes a couple of typos in `script/promote_migrations`
This commit is a redo of 2f1ddadff7dd47f824070c8a3f633f00a27aacde
which we reverted because it blew up an internal CI check. I looked
into it, and it happened because the old migration to add the bookmark
columns still existed, and those columns were dropped in a post migrate,
so the two migrations to add the columns were conflicting before
the post migrate was run.
------
This commit only includes the creation of the new columns and index,
and does not add any triggers, backfilling, or new data.
A backfill will be done in the final PR when we switch this over.
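A minimal sketch of what that amounts to (the index name is an assumption):

```ruby
class AddPolymorphicColumnsToBookmarks < ActiveRecord::Migration[6.1]
  def change
    add_column :bookmarks, :bookmarkable_id, :integer
    add_column :bookmarks, :bookmarkable_type, :string

    # One bookmark per (user, bookmarkable) pair.
    add_index :bookmarks,
              %i[user_id bookmarkable_type bookmarkable_id],
              name: "idx_bookmarks_user_polymorphic_unique",
              unique: true
  end
end
```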
Intermediate PRs will look something like this:
* Add an experimental site setting for using polymorphic bookmarks, and make sure in the places where bookmarks are created or updated we fill in the columns. This setting will be used in subsequent PRs as well.
* Listing and searching bookmarks based on polymorphic associations
* Creating post and topic bookmarks using polymorphic associations, and changing special for_topic logic to just rely on the Topic bookmarkable_type
* Querying bookmark reminders based on polymorphic associations
* Make sure various other areas like importers, bookmark guardian, and others all rely on the associations
* Prepare plugins that rely on the Bookmark model to use polymorphic associations
The final core PR will remove all the setting gates and switch over
to using the polymorphic associations, backfill the bookmarks
table columns, and ignore the old post_id and for_topic columns.
Then it will just be a matter of dropping the old columns down the
line.
This URL was originally updated in 89cb537fae. However, some sites are not using the proxy, and have configured their forum to hotlink images directly to avatars.discourse.org.
We intend to shut down this domain in favor of `avatars.discourse-cdn.com`, so this migration will re-write any matching site setting values and queue affected posts for rebaking.
* FEATURE: add an option to upload custom avatars alongside selectable avatars
Allow staff or users at or above a trust level to upload avatars even when the site
has selectable avatars enabled.
Everyone can still pick from the list of avatars. The option to upload is shown
below the selectable avatar list.
Refactored the boolean site setting into an enum with the following values (see the sketch after this list):
disabled: No selectable avatars enabled (default)
everyone: Show selectable avatars, and allow everyone to upload custom avatars
tl1: Show selectable avatars, but require tl1+ and staff to upload custom avatars
tl2: Show selectable avatars, but require tl2+ and staff to upload custom avatars
tl3: Show selectable avatars, but require tl3+ and staff to upload custom avatars
tl4: Show selectable avatars, but require tl4 and staff to upload custom avatars
staff: Show selectable avatars, but only allow staff to upload custom avatars
no_one: Show selectable avatars. No users can upload custom avatars
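Below is a hedged sketch of how the enum could gate uploads; the method name is hypothetical, and `selectable_avatars_mode` follows the values listed above:

```ruby
# Sketch: may this user upload a custom avatar under the enum setting?
def can_upload_custom_avatar?(user)
  case SiteSetting.selectable_avatars_mode
  when "disabled", "everyone" then true
  when "no_one"               then false
  when "staff"                then user.staff?
  when "tl1", "tl2", "tl3", "tl4"
    min_level = SiteSetting.selectable_avatars_mode.delete_prefix("tl").to_i
    user.staff? || user.has_trust_level?(TrustLevel[min_level])
  end
end
```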
Co-authored-by: Régis Hanol <regis@hanol.fr>
This commit makes sure that the email log's bounce_error_code
conforms to the SMTP error code RFC on save, so that
it is always in the format X.X.X or XXX without any
additional string details. Also included is a migration
to fix this issue for past records.
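A hedged sketch of the normalisation, with the regexes assumed from the "X.X.X or XXX" description:

```ruby
# Sketch: reduce a raw bounce code to "X.X.X" or "XXX", dropping details.
def normalize_bounce_error_code(raw)
  raw.to_s[/\A\d\.\d{1,3}\.\d{1,3}/] || raw.to_s[/\A\d{3}/]
end

normalize_bounce_error_code("5.1.1 <user> not found")  # => "5.1.1"
normalize_bounce_error_code("550 mailbox unavailable") # => "550"
```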
We added this constraint in 5bd55acf83
but it is causing problems in hosted sites and is catching the
issue too far down the line. This commit removes the constraint
for now, and also fixes an issue found with PostDestroyer
which wasn't using the UserStatCountUpdater when updating post_count
and thus was causing negative numbers to occur.
Follow-up to 88a8584348. Sets
the baked version of all posts with custom emoji and a secure
media URL in the cooked content to 0. Then our periodic rebake
posts job will rebake them to apply the fix in the linked
commit. This only matters on sites with secure media enabled.
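As a hedged sketch (the LIKE patterns are assumptions about how affected posts are matched), the migration's update could look like:

```ruby
# Sketch: mark affected posts for rebake by resetting their baked_version.
execute <<~SQL
  UPDATE posts
  SET baked_version = 0
  WHERE cooked LIKE '%emoji%'
    AND cooked LIKE '%secure-media-uploads%'
SQL
```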
It is too close to the release of 2.8 for incomplete
feature shenanigans. Ignores and drops the columns and drops
the trigger/function introduced in
e21c640a3c.
Will pick this feature back up post-release.
The old OnceOff job could perform pretty slowly on sites with millions of emails.
The new implementation operates in batches in a migration, minimizing locking.
Currently, when a user creates posts that are moderated (for whatever
reason), a popup is displayed saying the post needs approval, along
with the total number of the user’s pending posts. But then this piece
of information is kind of lost, and there is nowhere for the user to
see what their pending posts are or how many there are.
This patch solves this issue by adding a new “Pending” section to the
user’s activity page when there are some pending posts to display. When
there are none, then the “Pending” section isn’t displayed at all.
* PERF: Improve database query perf when loading topics for a category.
Instead of left joining the `topics` table against `categories` by filtering with `categories.id`,
we can improve the query plan by filtering against `topics.category_id`
first before joining. This helps to reduce the number of rows in the
topics table that have to be joined against the other tables and also
makes better use of our existing index.
The following is a before and after of the query plan for a category
with many subcategories.
Before:
```
                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1.28..747.09 rows=30 width=12) (actual time=85.502..2453.727 rows=30 loops=1)
   ->  Nested Loop Left Join  (cost=1.28..566518.36 rows=22788 width=12) (actual time=85.501..2453.722 rows=30 loops=1)
         Join Filter: (category_users.category_id = topics.category_id)
         Filter: ((topics.category_id = 11) OR (COALESCE(category_users.notification_level, 1) <> 0) OR (tu.notification_level > 1))
         ->  Nested Loop Left Join  (cost=1.00..566001.58 rows=22866 width=20) (actual time=85.494..2453.702 rows=30 loops=1)
               Filter: ((COALESCE(tu.notification_level, 1) > 0) AND ((topics.category_id <> 11) OR (topics.pinned_at IS NULL) OR ((topics.pinned_at <= tu.cleared_pinned_at) AND (tu.cleared_pinned_at IS NOT NULL))))
               Rows Removed by Filter: 1
               ->  Nested Loop  (cost=0.57..528561.75 rows=68606 width=24) (actual time=85.472..2453.562 rows=31 loops=1)
                     Join Filter: ((topics.category_id = categories.id) AND ((categories.topic_id <> topics.id) OR (categories.id = 11)))
                     Rows Removed by Join Filter: 13938306
                     ->  Index Scan using index_topics_on_bumped_at on topics  (cost=0.42..100480.05 rows=715549 width=24) (actual time=0.010..633.015 rows=464623 loops=1)
                           Filter: ((deleted_at IS NULL) AND ((archetype)::text <> 'private_message'::text))
                           Rows Removed by Filter: 105321
                     ->  Materialize  (cost=0.14..36.04 rows=30 width=8) (actual time=0.000..0.002 rows=30 loops=464623)
                           ->  Index Scan using categories_pkey on categories  (cost=0.14..35.89 rows=30 width=8) (actual time=0.006..0.040 rows=30 loops=1)
                                 Index Cond: (id = ANY ('{11,53,57,55,54,56,112,94,107,115,116,117,97,95,102,103,101,105,99,114,106,113,104,98,100,96,108,109,110,111}'::integer[]))
               ->  Index Scan using index_topic_users_on_topic_id_and_user_id on topic_users tu  (cost=0.43..0.53 rows=1 width=16) (actual time=0.004..0.004 rows=0 loops=31)
                     Index Cond: ((topic_id = topics.id) AND (user_id = 1103877))
         ->  Materialize  (cost=0.28..2.30 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=30)
               ->  Index Scan using index_category_users_on_user_id_and_last_seen_at on category_users  (cost=0.28..2.29 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)
                     Index Cond: (user_id = 1103877)
 Planning Time: 1.359 ms
 Execution Time: 2453.765 ms
(23 rows)
```
After:
```
                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1.28..438.55 rows=30 width=12) (actual time=38.297..657.215 rows=30 loops=1)
   ->  Nested Loop Left Join  (cost=1.28..195944.68 rows=13443 width=12) (actual time=38.296..657.211 rows=30 loops=1)
         Filter: ((categories.topic_id <> topics.id) OR (topics.category_id = 11))
         Rows Removed by Filter: 29
         ->  Nested Loop Left Join  (cost=1.13..193462.59 rows=13443 width=16) (actual time=38.289..657.092 rows=59 loops=1)
               Join Filter: (category_users.category_id = topics.category_id)
               Filter: ((topics.category_id = 11) OR (COALESCE(category_users.notification_level, 1) <> 0) OR (tu.notification_level > 1))
               ->  Nested Loop Left Join  (cost=0.85..193156.79 rows=13489 width=20) (actual time=38.282..657.059 rows=59 loops=1)
                     Filter: ((COALESCE(tu.notification_level, 1) > 0) AND ((topics.category_id <> 11) OR (topics.pinned_at IS NULL) OR ((topics.pinned_at <= tu.cleared_pinned_at) AND (tu.cleared_pinned_at IS NOT NULL))))
                     Rows Removed by Filter: 1
                     ->  Index Scan using index_topics_on_bumped_at on topics  (cost=0.42..134521.06 rows=40470 width=24) (actual time=38.267..656.850 rows=60 loops=1)
                           Filter: ((deleted_at IS NULL) AND ((archetype)::text <> 'private_message'::text) AND (category_id = ANY ('{11,53,57,55,54,56,112,94,107,115,116,117,97,95,102,103,101,105,99,114,106,113,104,98,100,96,108,109,110,111}'::integer[])))
                           Rows Removed by Filter: 569895
                     ->  Index Scan using index_topic_users_on_topic_id_and_user_id on topic_users tu  (cost=0.43..1.43 rows=1 width=16) (actual time=0.003..0.003 rows=0 loops=60)
                           Index Cond: ((topic_id = topics.id) AND (user_id = 1103877))
               ->  Materialize  (cost=0.28..2.30 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=59)
                     ->  Index Scan using index_category_users_on_user_id_and_last_seen_at on category_users  (cost=0.28..2.29 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)
                           Index Cond: (user_id = 1103877)
         ->  Index Scan using categories_pkey on categories  (cost=0.14..0.17 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=59)
               Index Cond: (id = topics.category_id)
 Planning Time: 1.633 ms
 Execution Time: 657.255 ms
(22 rows)
```
* PERF: Optimize index on topics bumped_at.
Replace `index_topics_on_bumped_at` index with a partial index on `Topic#bumped_at` filtered by archetype since there is already another index that covers private topics.
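A sketch of the swap (the exact predicate is assumed from the description):

```ruby
class ReplaceIndexTopicsOnBumpedAt < ActiveRecord::Migration[6.1]
  def up
    remove_index :topics, name: "index_topics_on_bumped_at"
    # Private message topics are already covered by another index.
    add_index :topics, :bumped_at, where: "archetype <> 'private_message'"
  end

  def down
    remove_index :topics, :bumped_at
    add_index :topics, :bumped_at, name: "index_topics_on_bumped_at"
  end
end
```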
We don't need no stinkin' denormalization! This commit ignores
the topic_id column on bookmarks, to be deleted at a later date.
We don't really need this column; it's better to rely on
post.topic_id as the canonical topic_id for bookmarks, so we
don't need to remember to update both columns if the bookmarked
post moves to another topic.
The duration column has been ignored for topic_timers since commit
4af77f1e38; we use duration_minutes instead.
Also removing the duration key from Topic.set_or_create_timer. The only
plugin to use this was discourse-solved, which doesn't use it any longer
since c722b94a97
In the previous commit 5222247
we added a topic_id column to EmailLog. This simply backfills it in
batches. The next PR will get rid of the topic method defined on EmailLog in favour
of belongs_to.
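A hedged sketch of the batched backfill (batch size and exact SQL are assumptions):

```ruby
class BackfillTopicIdOnEmailLogs < ActiveRecord::Migration[6.1]
  disable_ddl_transaction!

  def up
    loop do
      # Update at most 10,000 rows at a time to keep locks short.
      result = execute(<<~SQL)
        UPDATE email_logs e
        SET topic_id = p.topic_id
        FROM posts p
        WHERE e.post_id = p.id
          AND e.id IN (
            SELECT id FROM email_logs
            WHERE topic_id IS NULL AND post_id IS NOT NULL
            ORDER BY id
            LIMIT 10000
          )
      SQL
      break if result.cmd_tuples == 0
    end
  end

  def down
    # The backfilled column can simply be left in place.
  end
end
```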
Having a large number of post-deploy migrations running out-of-numerical-sequence with pre-deploy migrations can be problematic. For example, if we have the sequence
- db/migrate/2017... - add column
- db/post_migrate/2018... - drop the column
- db/migrate/2021... - add the same column again
It will work fine in numerical order. But if you run the pre-deploy migrations **followed by** the post-deploy migrations, you will not get the same result.
Our post-deploy system is designed to allow for seamless upgrades of Discourse. However, it is reasonable for us to only support this totally seamless experience for a limited period of time. This commit moves all post_deploy migrations which are more than 1 year old (i.e. more than 2 major Discourse versions ago) into the regular pre-deploy migrations directory. This limits the impact of any edge cases caused by out-of-numerical-sequence migrations.
The first thing we needed here was an enum rather than a boolean to determine how a directory_column was created. Now we have `automatic`, `user_field` and `plugin` directory columns.
This plugin API assumes that the plugin has added a migration to add a column to the `directory_items` table.
This was created to be initially used by discourse-solved. PR with API usage - https://github.com/discourse/discourse-solved/pull/137/
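A hedged usage sketch: the `add_directory_column` name and `query:` keyword are assumptions based on the linked discourse-solved PR, and the counting SQL is hypothetical:

```ruby
# Hypothetical plugin.rb usage: a prior migration is assumed to have
# added a `solutions` integer column to directory_items.
add_directory_column("solutions", query: <<~SQL)
  UPDATE directory_items di
  SET solutions = (
    SELECT COUNT(*)
    FROM topic_custom_fields tcf
    JOIN topics t ON t.id = tcf.topic_id
    WHERE tcf.name = 'accepted_answer_post_id' -- hypothetical counting logic
      AND t.user_id = di.user_id
  )
SQL
```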
Users who use encoded slugs on their sites sometimes run into a 500 error when pasting a link to another topic in a post. The problem happens when generating a backward "reflection" link that would appear in the linked topic. The link URL is restricted at the database level to 500 characters in length. At first glance, this should work, since we have a restriction on topic title length.
But it doesn't work when a site uses encoded slugs, like here (take a look at the URL). The link to a topic, in this case, can be much longer than 500 characters.
By the way, the error happens only when generating a "reflection" link and doesn't happen with a direct link, because we truncate direct links. That works because, in this case, the original long link is still present in the post body and can be used for navigation. But we can't do the same for backward "reflection" links (without rewriting their implementation); the whole link must be saved to the database.
The simplest and cleanest solution is just to remove the restriction at the database level. Abuse is impossible here since we are already protected by the restriction on topic title length. There are no performance benefits to length-constrained columns in Postgres; in fact, length-constrained columns need a few extra CPU cycles to check the length when storing data.
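A hedged sketch of the change, assuming the column is `topic_links.url`:

```ruby
class RemoveLimitFromTopicLinkUrl < ActiveRecord::Migration[6.1]
  def up
    # Drop the 500-character limit by switching to an unconstrained type.
    change_column :topic_links, :url, :text
  end

  def down
    change_column :topic_links, :url, :string, limit: 500
  end
end
```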
The excerpt field in the database is constrained to 1000 chars in length. To support this constraint we added a restriction that the topic_excerpt_maxlength setting must be between 0 and 999.
Unfortunately, sometimes it doesn’t work because:
- topic_excerpt_maxlength restricts the length of the excerpt visible to the user, but we HTML-escape text before saving. If an excerpt contains `&`, it’ll be saved as `&amp;`: one character for the user, but 5 characters in the database. So if topic_excerpt_maxlength is set to 999, it’s not so hard to end up with an excerpt of, for example, 1003 characters in length and run into this issue.
- It’s possible to define a custom excerpt for a topic. Such excerpts bypass the length check, so a user who defines a custom excerpt that is too long will run into this issue.
Removing the constraint at the database level solves the problem. But we still need the topic_excerpt_maxlength constraint on the settings page, because overly long excerpts would make the UI wonky.
Because bookmarks have both a topic and a post ID, when a post was moved into another topic the bookmark was still attached to the post but did not show in the UI. This PR makes it so that the topic IDs of all bookmarks attached to a post are updated when the post is moved.
Also included is a migration to fix affected records (e.g. on Meta there are 20 affected records).
See: https://meta.discourse.org/t/improved-bookmarks-with-reminders/144542/203
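For the data fix, a hedged sketch of the migration body:

```ruby
# Sketch: re-point bookmarks at the topic their post actually lives in.
execute <<~SQL
  UPDATE bookmarks b
  SET topic_id = p.topic_id
  FROM posts p
  WHERE b.post_id = p.id
    AND b.topic_id <> p.topic_id
SQL
```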
While adding specs to ensure that post actions and uploads are removed for permanently deleted posts, I noticed that post revisions were not permanently destroyed. I added a migration to fix old data.
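A hedged sketch of that cleanup, assuming the orphans are revisions whose post row no longer exists:

```ruby
# Sketch: delete revisions left behind by permanently deleted posts.
execute <<~SQL
  DELETE FROM post_revisions pr
  WHERE NOT EXISTS (SELECT 1 FROM posts p WHERE p.id = pr.post_id)
SQL
```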
* FIX: optimise MoveNewSinceToTable
Avoids shuffling all ids around to the app (only uses min/max).
Ensures the query for boundaries is ordered by user_id.
The bug was mentioned on meta: https://meta.discourse.org/t/pressing-dismiss-new-doesnt-clear-new-topics/179858
The problem is that sometimes the user has TopicUser records with `last_read_post_number` set to NULL. In that case, the topic is still "new" to them and should be dismissed when they click the dismiss button.
In addition, I added that condition to the post_migration and bumped its number to fix existing records. The migration is written to be idempotent, so it does no harm to already-deployed instances.
The original PR was reverted because of a broken migration: https://github.com/discourse/discourse/pull/12058
I fixed it by adding this line:
```
AND topics.id IN(SELECT id FROM topics ORDER BY created_at DESC LIMIT :max_new_topics)
```
This time it left joins a limited number of topics. I tested it on a few databases and it worked quite smoothly.
Because of a WHERE clause of duration_minutes != duration, where
duration_minutes was NULL, the previous migration to fill the new
duration_minutes column failed: in SQL, a comparison with NULL never
evaluates to true, so those rows were skipped. This corrects the failed
migration by just running the update where duration_minutes IS NULL and
duration IS NOT NULL.
Previous commit is 4af77f1
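A sketch of the corrected update (statement reconstructed from the description above):

```ruby
# Sketch: backfill only the rows the earlier migration skipped.
execute <<~SQL
  UPDATE topic_timers
  SET duration_minutes = duration
  WHERE duration_minutes IS NULL
    AND duration IS NOT NULL
SQL
```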