We reverted these commits because they weren't stable enough. We'll keep working on them on `tests-passed`.
* Revert "UX: Use full width when displaying a single recommendations list. (#22896)"
This reverts commit dd8d89d9c8.
* Revert "UX: Topic recommendations tweaks. (#22880)"
This reverts commit e7fb4be23e.
PresenceChannel configuration is cached using redis. That cache is used, and sometimes repopulated, during normal GET requests. When the primary redis server was readonly, that `redis.set` call would raise an error and cause the entire request to fail. Instead, we should ignore the failure and continue without populating the cache.
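A minimal sketch of the approach (method and key names here are illustrative, not the exact PresenceChannel code):

```
def save_config_cache(channel_name, config)
  Discourse.redis.set("presence-config:#{channel_name}", config.to_json, ex: 60)
rescue Redis::CommandError => e
  # The primary raises "READONLY ..." while in readonly mode. Populating
  # the cache is best-effort, so swallow the error and carry on.
  raise unless e.message.start_with?("READONLY")
end
```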
This PR updates how we display related and suggested topics on mobile and desktop. It adds a new `PluginOutlet` specifically designed for adding new topic lists; lists added through it work automatically as long as they follow the same conventions as the ones inside `<MoreTopics />`.
While we display lists side by side on desktop, we only display one on mobile. You can switch to another one by clicking on the nav pills, and we'll automatically save your preference for next time.
In e1d27400f5 we started running the splash-screen JS through terser, which removed the trailing newline from the `sourceMappingURL` line.
Adding a reliable end-to-end test for this isn't possible because our testing environment doesn't use terser.
This commit introduces five rake tasks to help us with version bump procedures:
- `version_bump:beta` and `version_bump:minor_stable` are for our minor releases
- `version_bump:major_stable_prepare` and `version_bump:major_stable_merge` are for our major release process
- `version_bump:stage_security_fixes` is to collate multiple security fixes from private branches into a single branch for release
The scripts will stage the necessary commits in a branch and prompt you to create a PR for review. No changes will be made to release branches or tags without the PR being approved and the scripts' prompts being explicitly confirmed.
To avoid polluting the operator's primary working tree, the scripts create a temporary git worktree in a temporary directory and perform all checkouts/commits there.
Toggling these settings will prevent 'published' topics (e.g. from shared drafts) from creating notifications for people watching the relevant category/tag.
FEATURE: Only approved flags for post counters
* Why was this change necessary?
The counters for flagged posts in the user's profile and in the user index of the admin view include flags that were rejected, ignored, or pending review. This introduces unnecessary noise. Also, the flagged posts counter in the user's profile includes custom flags, which adds further noise to this signal.
* How does it address the problem?
* Modifying User#flags_received_count to return posts with only approved
standard flags (see the sketch after this list)
* Refactoring User#number_of_flagged_posts to alias to
User#flags_received_count
* Updating the flagged post staff counter hyperlink to navigate to a
filtered view of that user's approved flagged posts to maintain
consistency with the counter
* Adding system tests for the profile page to cover the flagged posts
staff counter
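A hedged sketch of the shape of the change (not the exact diff; it assumes `PostActionType.notify_flag_type_ids` covers the standard flag types):

```
class User < ActiveRecord::Base
  def flags_received_count
    # Count only posts whose standard flags were agreed upon (approved);
    # rejected/ignored/pending flags and custom flag types are excluded.
    posts
      .joins(:post_actions)
      .where(post_actions: { post_action_type_id: PostActionType.notify_flag_type_ids })
      .where.not(post_actions: { agreed_at: nil })
      .distinct
      .count
  end

  # The old counter becomes an alias so both code paths agree.
  alias_method :number_of_flagged_posts, :flags_received_count
end
```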
In the query generated by `TopicTrackingState.report`, two subqueries are executed. The first fetches all the topics that are new for a given user, while the second fetches all the topics with unread posts for that user. The second subquery contains the filter `topics.updated_at >= user_stats.first_unread_at`, which is used as a performance optimisation to reduce the number of rows that PG has to scan in the `topics` table.
However, we started to notice in production that the PG planner doesn't always apply this filter first to reduce the number of rows it has to scan. Running the following query on one of our production instances,
```
EXPLAIN ANALYZE
SELECT
DISTINCT topics.id as topic_id,
u.id as user_id,
topics.created_at,
topics.updated_at,
topics.highest_staff_post_number AS highest_post_number,
last_read_post_number,
c.id as category_id,
c.topic_id AS category_topic_id,
tu.notification_level,
us.first_unread_at,
GREATEST(
CASE
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -1 THEN u.created_at
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -2 THEN COALESCE(
u.previous_visit_at,u.created_at
)
ELSE ('2023-07-31 03:29:45.737630'::timestamp - INTERVAL '1 MINUTE' * COALESCE(uo.new_topic_duration_minutes, 2880))
END, u.created_at, '2023-07-25 15:06:44'
) AS treat_as_new_topic_start_date
FROM topics
JOIN users u on u.id = 13455
JOIN user_stats AS us ON us.user_id = u.id
JOIN user_options AS uo ON uo.user_id = u.id
JOIN categories c ON c.id = topics.category_id
LEFT JOIN topic_users tu ON tu.topic_id = topics.id AND tu.user_id = u.id
WHERE u.id = 13455 AND
topics.updated_at >= us.first_unread_at AND
topics.archetype <> 'private_message' AND
(("topics"."deleted_at" IS NULL AND (tu.last_read_post_number < topics.highest_staff_post_number) AND (COALESCE(tu.notification_level, 1) >= 2)) OR (1=0)) AND
NOT (
COALESCE((select array_agg(tag_id) from topic_tags where topic_tags.topic_id = topics.id), ARRAY[]::int[]) && ARRAY[451,452,453]
) AND
topics.deleted_at IS NULL AND
NOT (
last_read_post_number IS NULL AND
(
topics.category_id IN (SELECT "categories"."id" FROM "categories" LEFT JOIN categories categories2 ON categories2.id = categories.parent_category_id LEFT JOIN category_users ON category_users.category_id = categories.id AND category_users.user_id = 13455 LEFT JOIN category_users category_users2 ON category_users2.category_id = categories2.id AND category_users2.user_id = 13455 WHERE ((category_users.id IS NULL AND COALESCE(category_users2.notification_level, 1) = 0) OR COALESCE(category_users.notification_level, 1) = 0))
AND tu.notification_level <= 1
)
)
```
we get the following
```
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=201606.06..201608.15 rows=76 width=60) (actual time=91.279..91.294 rows=14 loops=1)
-> Sort (cost=201606.06..201606.25 rows=76 width=60) (actual time=91.278..91.284 rows=14 loops=1)
Sort Key: topics.id, topics.created_at, topics.updated_at, topics.highest_staff_post_number, tu.last_read_post_number, c.id, c.topic_id, tu.notification_level, us.first_unread_at, (GREATEST(CASE WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-1'::integer) THEN u.created_at WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-2'::integer) THEN COALESCE(u.previous_visit_at, u.created_at) ELSE ('2023-07-31 03:29:45.73763'::timestamp without time zone - ('00:01:00'::interval * (COALESCE(uo.new_topic_duration_minutes, 2880))::double precision)) END, u.created_at, '2023-07-25 15:06:44'::timestamp without time zone))
Sort Method: quicksort Memory: 26kB
-> Hash Join (cost=97519.51..201603.69 rows=76 width=60) (actual time=87.662..91.268 rows=14 loops=1)
Hash Cond: (topics.id = tu.topic_id)
Join Filter: ((tu.last_read_post_number < topics.highest_staff_post_number) AND ((tu.last_read_post_number IS NOT NULL) OR (NOT (hashed SubPlan 2)) OR (tu.notification_level > 1)))
Rows Removed by Join Filter: 10
-> Nested Loop (cost=1.54..104075.36 rows=3511 width=68) (actual time=0.055..3.609 rows=548 loops=1)
-> Nested Loop (cost=1.13..25.20 rows=1 width=32) (actual time=0.027..0.033 rows=1 loops=1)
-> Nested Loop (cost=0.71..16.76 rows=1 width=28) (actual time=0.020..0.023 rows=1 loops=1)
-> Index Scan using users_pkey on users u (cost=0.42..8.44 rows=1 width=20) (actual time=0.010..0.012 rows=1 loops=1)
Index Cond: (id = 13455)
-> Index Scan using user_stats_pkey on user_stats us (cost=0.29..8.31 rows=1 width=12) (actual time=0.008..0.010 rows=1 loops=1)
Index Cond: (user_id = 13455)
-> Index Scan using index_user_options_on_user_id_and_default_calendar on user_options uo (cost=0.42..8.44 rows=1 width=8) (actual time=0.007..0.008 rows=1 loops=1)
Index Cond: (user_id = 13455)
-> Nested Loop (cost=0.41..104015.12 rows=3504 width=36) (actual time=0.026..3.503 rows=548 loops=1)
-> Seq Scan on categories c (cost=0.00..13.73 rows=73 width=8) (actual time=0.003..0.039 rows=73 loops=1)
-> Index Only Scan using index_topics_on_updated_at_public on topics (cost=0.41..1424.20 rows=48 width=28) (actual time=0.012..0.046 rows=8 loops=73)
Index Cond: ((updated_at >= us.first_unread_at) AND (category_id = c.id))
Filter: (NOT (COALESCE((SubPlan 1), '{}'::integer[]) && '{451,452,453}'::integer[]))
Heap Fetches: 553
SubPlan 1
-> Aggregate (cost=4.31..4.32 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=548)
-> Index Only Scan using index_topic_tags_on_topic_id_and_tag_id on topic_tags (cost=0.29..4.31 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=548)
Index Cond: (topic_id = topics.id)
Heap Fetches: 178
-> Hash (cost=97222.14..97222.14 rows=19914 width=16) (actual time=87.545..87.546 rows=42884 loops=1)
Buckets: 65536 (originally 32768) Batches: 1 (originally 1) Memory Usage: 2387kB
-> Bitmap Heap Scan on topic_users tu (cost=1217.47..97222.14 rows=19914 width=16) (actual time=14.419..78.286 rows=42884 loops=1)
Recheck Cond: (user_id = 13455)
Filter: (COALESCE(notification_level, 1) >= 2)
Rows Removed by Filter: 15839
Heap Blocks: exact=45285
-> Bitmap Index Scan on index_topic_users_on_user_id_and_topic_id (cost=0.00..1212.49 rows=59741 width=0) (actual time=6.448..6.448 rows=58723 loops=1)
Index Cond: (user_id = 13455)
SubPlan 2
-> Nested Loop Left Join (cost=0.74..46.90 rows=1 width=4) (never executed)
Join Filter: (category_users2.category_id = categories2.id)
Filter: (((category_users.id IS NULL) AND (COALESCE(category_users2.notification_level, 1) = 0)) OR (COALESCE(category_users.notification_level, 1) = 0))
-> Nested Loop Left Join (cost=0.45..32.31 rows=73 width=16) (never executed)
Join Filter: (category_users.category_id = categories.id)
-> Nested Loop Left Join (cost=0.15..18.45 rows=73 width=8) (never executed)
-> Seq Scan on categories (cost=0.00..13.73 rows=73 width=8) (never executed)
-> Memoize (cost=0.15..0.28 rows=1 width=4) (never executed)
Cache Key: categories.parent_category_id
Cache Mode: logical
-> Index Only Scan using categories_pkey on categories categories2 (cost=0.14..0.27 rows=1 width=4) (never executed)
Index Cond: (id = categories.parent_category_id)
Heap Fetches: 0
-> Materialize (cost=0.29..11.69 rows=2 width=12) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users (cost=0.29..11.68 rows=2 width=12) (never executed)
Index Cond: (user_id = 13455)
-> Materialize (cost=0.29..11.69 rows=2 width=8) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users category_users2 (cost=0.29..11.68 rows=2 width=8) (never executed)
Index Cond: (user_id = 13455)
Planning Time: 1.740 ms
Execution Time: 91.414 ms
(59 rows)
```
From the execution plan, we can see that most of the time is spent scanning and hashing the ~42,884 `topic_users` rows for the user and joining them against `topics`. However, we know that we only have to scan through a subset of the `topics` table because the user's `UserStat#first_unread_at` is '2023-07-20 11:33:05'. If we filter the `topics` table with `topics.updated_at >= '2023-07-20 11:33:05'`, only about 1500 rows are returned.
From our testing in production, the PG planner comes up with a better query plan when we avoid the unnecessary join on `user_stats`, which existed only to fetch the user's `UserStat#first_unread_at`. Instead, we can pass the value of `UserStat#first_unread_at` directly as a query parameter.
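In Ruby terms, the shape of the change is roughly the following (a sketch, not the exact `TopicTrackingState` code):

```
# Look the timestamp up once, then bind it into the query as a plain
# parameter so the planner sees a constant instead of a join on user_stats.
first_unread_at = UserStat.where(user_id: user.id).pick(:first_unread_at)

DB.query(<<~SQL, first_unread_at: first_unread_at)
  SELECT topics.id
  FROM topics
  WHERE topics.updated_at >= :first_unread_at
SQL
```

Applied to the full report, the modified query looks like this: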
```
EXPLAIN ANALYZE
SELECT
DISTINCT topics.id as topic_id,
u.id as user_id,
topics.created_at,
topics.updated_at,
topics.highest_staff_post_number AS highest_post_number,
last_read_post_number,
c.id as category_id,
c.topic_id AS category_topic_id,
tu.notification_level,
GREATEST(
CASE
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -1 THEN u.created_at
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -2 THEN COALESCE(
u.previous_visit_at,u.created_at
)
ELSE ('2023-07-31 03:29:45.737630'::timestamp - INTERVAL '1 MINUTE' * COALESCE(uo.new_topic_duration_minutes, 2880))
END, u.created_at, '2023-07-25 15:06:44'
) AS treat_as_new_topic_start_date
FROM topics
JOIN users u on u.id = 13455
JOIN user_options AS uo ON uo.user_id = u.id
JOIN categories c ON c.id = topics.category_id
LEFT JOIN topic_users tu ON tu.topic_id = topics.id AND tu.user_id = u.id
WHERE u.id = 13455 AND
topics.updated_at >= '2023-07-20 11:33:05' AND
topics.archetype <> 'private_message' AND
(("topics"."deleted_at" IS NULL AND (tu.last_read_post_number < topics.highest_staff_post_number) AND (COALESCE(tu.notification_level, 1) >= 2)) OR (1=0)) AND
NOT (
COALESCE((select array_agg(tag_id) from topic_tags where topic_tags.topic_id = topics.id), ARRAY[]::int[]) && ARRAY[451,452,453]
) AND
topics.deleted_at IS NULL AND
NOT (
last_read_post_number IS NULL AND
(
topics.category_id IN (SELECT "categories"."id" FROM "categories" LEFT JOIN categories categories2 ON categories2.id = categories.parent_category_id LEFT JOIN category_users ON category_users.category_id = categories.id AND category_users.user_id = 13455 LEFT JOIN category_users category_users2 ON category_users2.category_id = categories2.id AND category_users2.user_id = 13455 WHERE ((category_users.id IS NULL AND COALESCE(category_users2.notification_level, 1) = 0) OR COALESCE(category_users.notification_level, 1) = 0))
AND tu.notification_level <= 1
)
);
```
Note how the filter is now `topics.updated_at >= '2023-07-20 11:33:05'`
instead of `topics.updated_at >= us.first_unread_at`. The modified query
above generates the following execution plan.
```
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=5189.86..5189.88 rows=1 width=52) (actual time=4.991..5.002 rows=14 loops=1)
-> Sort (cost=5189.86..5189.86 rows=1 width=52) (actual time=4.990..4.994 rows=14 loops=1)
Sort Key: topics.id, topics.created_at, topics.updated_at, topics.highest_staff_post_number, tu.last_read_post_number, c.id, c.topic_id, tu.notification_level, (GREATEST(CASE WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-1'::integer) THEN u.created_at WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-2'::integer) THEN COALESCE(u.previous_visit_at, u.created_at) ELSE ('2023-07-31 03:29:45.73763'::timestamp without time zone - ('00:01:00'::interval * (COALESCE(uo.new_topic_duration_minutes, 2880))::double precision)) END, u.created_at, '2023-07-25 15:06:44'::timestamp without time zone))
Sort Method: quicksort Memory: 26kB
-> Nested Loop (cost=52.11..5189.85 rows=1 width=52) (actual time=0.093..4.974 rows=14 loops=1)
-> Nested Loop (cost=51.70..5181.39 rows=1 width=60) (actual time=0.084..4.931 rows=14 loops=1)
-> Nested Loop (cost=51.28..5172.94 rows=1 width=44) (actual time=0.076..4.887 rows=14 loops=1)
-> Nested Loop (cost=0.41..1698.46 rows=59 width=36) (actual time=0.029..3.537 rows=548 loops=1)
-> Seq Scan on categories c (cost=0.00..13.73 rows=73 width=8) (actual time=0.005..0.039 rows=73 loops=1)
-> Index Only Scan using index_topics_on_updated_at_public on topics (cost=0.41..23.07 rows=1 width=28) (actual time=0.012..0.047 rows=8 loops=73)
Index Cond: ((updated_at >= '2023-07-20 11:33:05'::timestamp without time zone) AND (category_id = c.id))
Filter: (NOT (COALESCE((SubPlan 1), '{}'::integer[]) && '{451,452,453}'::integer[]))
Heap Fetches: 552
SubPlan 1
-> Aggregate (cost=4.31..4.32 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=548)
-> Index Only Scan using index_topic_tags_on_topic_id_and_tag_id on topic_tags (cost=0.29..4.31 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=548)
Index Cond: (topic_id = topics.id)
Heap Fetches: 178
-> Index Scan using index_topic_users_on_user_id_and_topic_id on topic_users tu (cost=50.86..58.88 rows=1 width=16) (actual time=0.002..0.002 rows=0 loops=548)
Index Cond: ((user_id = 13455) AND (topic_id = topics.id))
Filter: ((COALESCE(notification_level, 1) >= 2) AND (last_read_post_number < topics.highest_staff_post_number) AND ((last_read_post_number IS NOT NULL) OR (NOT (hashed SubPlan 2)) OR (notification_level > 1)))
Rows Removed by Filter: 0
SubPlan 2
-> Nested Loop Left Join (cost=0.74..50.43 rows=1 width=4) (never executed)
Join Filter: (category_users2.category_id = categories2.id)
Filter: (((category_users.id IS NULL) AND (COALESCE(category_users2.notification_level, 1) = 0)) OR (COALESCE(category_users.notification_level, 1) = 0))
-> Nested Loop Left Join (cost=0.45..35.84 rows=73 width=16) (never executed)
Join Filter: (category_users.category_id = categories.id)
-> Nested Loop Left Join (cost=0.15..21.97 rows=73 width=8) (never executed)
-> Seq Scan on categories (cost=0.00..13.73 rows=73 width=8) (never executed)
-> Memoize (cost=0.15..0.61 rows=1 width=4) (never executed)
Cache Key: categories.parent_category_id
Cache Mode: logical
-> Index Only Scan using categories_pkey on categories categories2 (cost=0.14..0.60 rows=1 width=4) (never executed)
Index Cond: (id = categories.parent_category_id)
Heap Fetches: 0
-> Materialize (cost=0.29..11.69 rows=2 width=12) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users (cost=0.29..11.68 rows=2 width=12) (never executed)
Index Cond: (user_id = 13455)
-> Materialize (cost=0.29..11.69 rows=2 width=8) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users category_users2 (cost=0.29..11.68 rows=2 width=8) (never executed)
Index Cond: (user_id = 13455)
-> Index Scan using users_pkey on users u (cost=0.42..8.44 rows=1 width=20) (actual time=0.003..0.003 rows=1 loops=14)
Index Cond: (id = 13455)
-> Index Scan using index_user_options_on_user_id_and_default_calendar on user_options uo (cost=0.42..8.44 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=14)
Index Cond: (user_id = 13455)
Planning Time: 1.281 ms
Execution Time: 5.092 ms
(48 rows)
```
With the new query, PG first does an index scan using the `index_topics_on_updated_at_public` index to filter away most of the topics, making the subsequent joins much cheaper. Total query time has been reduced from ~90ms to ~5ms.
This optimisation mostly benefits users with few, recent unread topics, since an old `UserStat#first_unread_at` value will still mean scanning through a large portion of the `topics` table.
This is a similar fix to 32d4810e2b.
Why this change?
Prior to this change, there was a bug in `TopicsController#bulk` where it did not dismiss new unread posts in sub-subcategories when the `category_id` and `include_subcategories=true` params were present. This was because the controller did not account for sub-subcategories when fetching the category ids of the new topics that should be dismissed.
This commit fixes the problem by relying on the `Category.subcategory_ids` class method, which accounts for sub-subcategories.
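A hedged sketch of the controller-side fix (the param handling is illustrative); `Category.subcategory_ids` resolves the whole subtree, so sub-subcategories are included:

```
category_ids =
  if params[:include_subcategories]
    Category.subcategory_ids(params[:category_id].to_i)
  else
    [params[:category_id].to_i]
  end
```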
We are currently accumulating orphaned upload references whenever drafts are deleted.
This change deals with future cases by adding a dependent strategy of delete_all on the Draft#upload_references association, as sketched below. (We don't really need the destroy strategy here, since UploadReference is a simple data bag and there are no validations or callbacks on the model.)
It deals with existing cases through a migration that deletes all existing, orphaned draft upload references.
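Roughly, the association change and the cleanup migration look like this (a sketch, not the exact diff; the migration class name and version are illustrative):

```
class Draft < ActiveRecord::Base
  # delete_all removes the rows with a single SQL DELETE; destroy would
  # instantiate every UploadReference and run callbacks it doesn't have.
  has_many :upload_references, as: :target, dependent: :delete_all
end

class DeleteOrphanedDraftUploadReferences < ActiveRecord::Migration[7.0]
  def up
    # Remove references whose draft is already gone.
    execute <<~SQL
      DELETE FROM upload_references
      WHERE target_type = 'Draft'
        AND NOT EXISTS (
          SELECT 1 FROM drafts WHERE drafts.id = upload_references.target_id
        )
    SQL
  end
end
```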
We already handled 429 rate limit errors correctly. This commit adds backoff logic to other types of error to avoid requests being retried every second.
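A generic sketch of the backoff shape (illustrative only; `perform_request` and the constants are hypothetical):

```
attempts = 0
begin
  attempts += 1
  perform_request
rescue StandardError
  raise if attempts >= 5
  # Exponential backoff with jitter, capped at 60s, instead of retrying
  # on a fixed one-second cadence.
  sleep([2**attempts, 60].min + rand)
  retry
end
```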
We were attempting to fetch from the last read position, but this is actually complicated to get right when a user has a lot of unread items: we might still have more to load afterwards while the last unread id stays the same, which could leave the user stuck in a loop.
A previous change updated `ReviewableQueuedPost`'s `created_by` to be consistent with other reviewable types: it assigns the creator of the post being queued to `target_created_by` and sets `created_by` to the creator of the reviewable itself.
This fix updates some of the `created_by` references missed during the initial fix.
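The convention, sketched with hypothetical variable names:

```
reviewable.created_by = acting_user        # creator of the reviewable itself
reviewable.target_created_by = post_author # author of the queued post
```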
The store expects values for property names ending with `_id` to be a resource id
and `_ids` to be an array of resource ids.
This change ensures the store gracefully handles situations where an embedded field with a non-compliant data structure sneaks its way into production.
Internal oneboxes of posts that contained oneboxed GitHub links to commits or PRs were being stripped of their content when the commit message was long enough for the HTML to include the `show-more` and `excerpt hidden` classes, resulting in empty internal oneboxes.
see: https://meta.discourse.org/t/269436
This fixes a regression introduced in:
0b3cf83e3c