Affects the following settings:
* whispers_allowed_groups
* anonymous_posting_allowed_groups
* personal_message_enabled_groups
* shared_drafts_allowed_groups
* here_mention_allowed_groups
* uploaded_avatars_allowed_groups
* ignore_allowed_groups
This turns off `client: true` for these group-based settings,
because there is no guarantee that the current user gets all
their group memberships serialized to the client. Better to check
server-side first.
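As an illustration, a server-side check for one of these settings might look like the sketch below (the method name and the setting parsing are simplified assumptions, not the exact Discourse implementation):
```
# Illustrative only: check a group-based setting on the server instead of
# relying on group memberships serialized to the client.
def can_whisper?(user)
  return false if user.blank?

  # group_list settings are stored as a pipe-delimited list of group ids.
  allowed_group_ids = SiteSetting.whispers_allowed_groups.split("|").map(&:to_i)
  GroupUser.exists?(user_id: user.id, group_id: allowed_group_ids)
end
```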
## Why this change?
On May 17, 2023, alongside the release of Discourse 3.1, we announced
on meta that the legacy hamburger dropdown navigation menu was
deprecated and would be dropped in Discourse 3.2. This is the link to the announcement
on meta: https://meta.discourse.org/t/removing-the-legacy-hamburger-navigation-menu-option/265274
## What does this change do?
This change removes the `legacy` option from the `navigation_menu` site
setting and migrates existing sites on the `legacy` option to the
`header dropdown` option.
All references to the `legacy` option in code and tests have been
removed as well.
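A minimal sketch of the kind of data migration involved (the class name is hypothetical and the actual migration shipped may differ):
```
# Hypothetical sketch: move sites off the removed `legacy` option.
class MigrateLegacyNavigationMenu < ActiveRecord::Migration[7.0]
  def up
    execute <<~SQL
      UPDATE site_settings
      SET value = 'header dropdown'
      WHERE name = 'navigation_menu' AND value = 'legacy'
    SQL
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```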
In the query generated by `TopicTrackingState.report`, two subqueries
are executed. The first subquery fetches all the topics that are new
for a given user, while the second fetches all the topics with unread
posts for a given user. The second subquery contains the filter
`topics.updated_at >= user_stats.first_unread_at`, which is there as a
performance optimisation to reduce the number of rows that PG has to
scan in the `topics` table.
However, we started to notice in production that the PG planner doesn't
always apply the filter first to reduce the number of rows that it has
to scan. Running the following query on one of our production
instances,
```
EXPLAIN ANALYZE
SELECT
DISTINCT topics.id as topic_id,
u.id as user_id,
topics.created_at,
topics.updated_at,
topics.highest_staff_post_number AS highest_post_number,
last_read_post_number,
c.id as category_id,
c.topic_id AS category_topic_id,
tu.notification_level,
us.first_unread_at,
GREATEST(
CASE
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -1 THEN u.created_at
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -2 THEN COALESCE(
u.previous_visit_at,u.created_at
)
ELSE ('2023-07-31 03:29:45.737630'::timestamp - INTERVAL '1 MINUTE' * COALESCE(uo.new_topic_duration_minutes, 2880))
END, u.created_at, '2023-07-25 15:06:44'
) AS treat_as_new_topic_start_date
FROM topics
JOIN users u on u.id = 13455
JOIN user_stats AS us ON us.user_id = u.id
JOIN user_options AS uo ON uo.user_id = u.id
JOIN categories c ON c.id = topics.category_id
LEFT JOIN topic_users tu ON tu.topic_id = topics.id AND tu.user_id = u.id
WHERE u.id = 13455 AND
topics.updated_at >= us.first_unread_at AND
topics.archetype <> 'private_message' AND
(("topics"."deleted_at" IS NULL AND (tu.last_read_post_number < topics.highest_staff_post_number) AND (COALESCE(tu.notification_level, 1) >= 2)) OR (1=0)) AND
NOT (
COALESCE((select array_agg(tag_id) from topic_tags where topic_tags.topic_id = topics.id), ARRAY[]::int[]) && ARRAY[451,452,453]
) AND
topics.deleted_at IS NULL AND
NOT (
last_read_post_number IS NULL AND
(
topics.category_id IN (SELECT "categories"."id" FROM "categories" LEFT JOIN categories categories2 ON categories2.id = categories.parent_category_id LEFT JOIN category_users ON category_users.category_id = categories.id AND category_users.user_id = 13455 LEFT JOIN category_users category_users2 ON category_users2.category_id = categories2.id AND category_users2.user_id = 13455 WHERE ((category_users.id IS NULL AND COALESCE(category_users2.notification_level, 1) = 0) OR COALESCE(category_users.notification_level, 1) = 0))
AND tu.notification_level <= 1
)
)
```
we get the following
```
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=201606.06..201608.15 rows=76 width=60) (actual time=91.279..91.294 rows=14 loops=1)
-> Sort (cost=201606.06..201606.25 rows=76 width=60) (actual time=91.278..91.284 rows=14 loops=1)
Sort Key: topics.id, topics.created_at, topics.updated_at, topics.highest_staff_post_number, tu.last_read_post_number, c.id, c.topic_id, tu.notification_level, us.first_unread_at, (GREATEST(CASE WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-1'::integer) THEN u.created_at WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-2'::integer) THEN COALESCE(u.previous_visit_at, u.created_at) ELSE ('2023-07-31 03:29:45.73763'::timestamp without time zone - ('00:01:00'::interval * (COALESCE(uo.new_topic_duration_minutes, 2880))::double precision)) END, u.created_at, '2023-07-25 15:06:44'::timestamp without time zone))
Sort Method: quicksort Memory: 26kB
-> Hash Join (cost=97519.51..201603.69 rows=76 width=60) (actual time=87.662..91.268 rows=14 loops=1)
Hash Cond: (topics.id = tu.topic_id)
Join Filter: ((tu.last_read_post_number < topics.highest_staff_post_number) AND ((tu.last_read_post_number IS NOT NULL) OR (NOT (hashed SubPlan 2)) OR (tu.notification_level > 1)))
Rows Removed by Join Filter: 10
-> Nested Loop (cost=1.54..104075.36 rows=3511 width=68) (actual time=0.055..3.609 rows=548 loops=1)
-> Nested Loop (cost=1.13..25.20 rows=1 width=32) (actual time=0.027..0.033 rows=1 loops=1)
-> Nested Loop (cost=0.71..16.76 rows=1 width=28) (actual time=0.020..0.023 rows=1 loops=1)
-> Index Scan using users_pkey on users u (cost=0.42..8.44 rows=1 width=20) (actual time=0.010..0.012 rows=1 loops=1)
Index Cond: (id = 13455)
-> Index Scan using user_stats_pkey on user_stats us (cost=0.29..8.31 rows=1 width=12) (actual time=0.008..0.010 rows=1 loops=1)
Index Cond: (user_id = 13455)
-> Index Scan using index_user_options_on_user_id_and_default_calendar on user_options uo (cost=0.42..8.44 rows=1 width=8) (actual time=0.007..0.008 rows=1 loops=1)
Index Cond: (user_id = 13455)
-> Nested Loop (cost=0.41..104015.12 rows=3504 width=36) (actual time=0.026..3.503 rows=548 loops=1)
-> Seq Scan on categories c (cost=0.00..13.73 rows=73 width=8) (actual time=0.003..0.039 rows=73 loops=1)
-> Index Only Scan using index_topics_on_updated_at_public on topics (cost=0.41..1424.20 rows=48 width=28) (actual time=0.012..0.046 rows=8 loops=73)
Index Cond: ((updated_at >= us.first_unread_at) AND (category_id = c.id))
Filter: (NOT (COALESCE((SubPlan 1), '{}'::integer[]) && '{451,452,453}'::integer[]))
Heap Fetches: 553
SubPlan 1
-> Aggregate (cost=4.31..4.32 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=548)
-> Index Only Scan using index_topic_tags_on_topic_id_and_tag_id on topic_tags (cost=0.29..4.31 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=548)
Index Cond: (topic_id = topics.id)
Heap Fetches: 178
-> Hash (cost=97222.14..97222.14 rows=19914 width=16) (actual time=87.545..87.546 rows=42884 loops=1)
Buckets: 65536 (originally 32768) Batches: 1 (originally 1) Memory Usage: 2387kB
-> Bitmap Heap Scan on topic_users tu (cost=1217.47..97222.14 rows=19914 width=16) (actual time=14.419..78.286 rows=42884 loops=1)
Recheck Cond: (user_id = 13455)
Filter: (COALESCE(notification_level, 1) >= 2)
Rows Removed by Filter: 15839
Heap Blocks: exact=45285
-> Bitmap Index Scan on index_topic_users_on_user_id_and_topic_id (cost=0.00..1212.49 rows=59741 width=0) (actual time=6.448..6.448 rows=58723 loops=1)
Index Cond: (user_id = 13455)
SubPlan 2
-> Nested Loop Left Join (cost=0.74..46.90 rows=1 width=4) (never executed)
Join Filter: (category_users2.category_id = categories2.id)
Filter: (((category_users.id IS NULL) AND (COALESCE(category_users2.notification_level, 1) = 0)) OR (COALESCE(category_users.notification_level, 1) = 0))
-> Nested Loop Left Join (cost=0.45..32.31 rows=73 width=16) (never executed)
Join Filter: (category_users.category_id = categories.id)
-> Nested Loop Left Join (cost=0.15..18.45 rows=73 width=8) (never executed)
-> Seq Scan on categories (cost=0.00..13.73 rows=73 width=8) (never executed)
-> Memoize (cost=0.15..0.28 rows=1 width=4) (never executed)
Cache Key: categories.parent_category_id
Cache Mode: logical
-> Index Only Scan using categories_pkey on categories categories2 (cost=0.14..0.27 rows=1 width=4) (never executed)
Index Cond: (id = categories.parent_category_id)
Heap Fetches: 0
-> Materialize (cost=0.29..11.69 rows=2 width=12) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users (cost=0.29..11.68 rows=2 width=12) (never executed)
Index Cond: (user_id = 13455)
-> Materialize (cost=0.29..11.69 rows=2 width=8) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users category_users2 (cost=0.29..11.68 rows=2 width=8) (never executed)
Index Cond: (user_id = 13455)
Planning Time: 1.740 ms
Execution Time: 91.414 ms
(59 rows)
```
From the execution plan, we can see that most of the time is spent
hashing and joining roughly 42,000 rows from the `topic_users` table against the `topics` table.
However, we know that we only have to scan a
subset of the `topics` table because the user's `first_unread_at` is '2023-07-20 11:33:05'.
If we filter the `topics` table with `topics.updated_at >= '2023-07-20 11:33:05'`, only about
1500 rows are returned.
From our testing in production, the PG planner is able to pick a
better query plan when we avoid the unnecessary join on `user_stats` that exists only to
fetch the user's `UserStat#first_unread_at`. Instead, we can just
pass the value of `UserStat#first_unread_at` directly as a query
parameter.
```
EXPLAIN ANALYZE
SELECT
DISTINCT topics.id as topic_id,
u.id as user_id,
topics.created_at,
topics.updated_at,
topics.highest_staff_post_number AS highest_post_number,
last_read_post_number,
c.id as category_id,
c.topic_id AS category_topic_id,
tu.notification_level,
GREATEST(
CASE
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -1 THEN u.created_at
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -2 THEN COALESCE(
u.previous_visit_at,u.created_at
)
ELSE ('2023-07-31 03:29:45.737630'::timestamp - INTERVAL '1 MINUTE' * COALESCE(uo.new_topic_duration_minutes, 2880))
END, u.created_at, '2023-07-25 15:06:44'
) AS treat_as_new_topic_start_date
FROM topics
JOIN users u on u.id = 13455
JOIN user_options AS uo ON uo.user_id = u.id
JOIN categories c ON c.id = topics.category_id
LEFT JOIN topic_users tu ON tu.topic_id = topics.id AND tu.user_id = u.id
WHERE u.id = 13455 AND
topics.updated_at >= '2023-07-20 11:33:05' AND
topics.archetype <> 'private_message' AND
(("topics"."deleted_at" IS NULL AND (tu.last_read_post_number < topics.highest_staff_post_number) AND (COALESCE(tu.notification_level, 1) >= 2)) OR (1=0)) AND
NOT (
COALESCE((select array_agg(tag_id) from topic_tags where topic_tags.topic_id = topics.id), ARRAY[]::int[]) && ARRAY[451,452,453]
) AND
topics.deleted_at IS NULL AND
NOT (
last_read_post_number IS NULL AND
(
topics.category_id IN (SELECT "categories"."id" FROM "categories" LEFT JOIN categories categories2 ON categories2.id = categories.parent_category_id LEFT JOIN category_users ON category_users.category_id = categories.id AND category_users.user_id = 13455 LEFT JOIN category_users category_users2 ON category_users2.category_id = categories2.id AND category_users2.user_id = 13455 WHERE ((category_users.id IS NULL AND COALESCE(category_users2.notification_level, 1) = 0) OR COALESCE(category_users.notification_level, 1) = 0))
AND tu.notification_level <= 1
)
);
```
Note how the filter is now `topics.updated_at >= '2023-07-20 11:33:05'`
instead of `topics.updated_at >= us.first_unread_at`. The modified query
above generates the following execution plan.
```
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=5189.86..5189.88 rows=1 width=52) (actual time=4.991..5.002 rows=14 loops=1)
-> Sort (cost=5189.86..5189.86 rows=1 width=52) (actual time=4.990..4.994 rows=14 loops=1)
Sort Key: topics.id, topics.created_at, topics.updated_at, topics.highest_staff_post_number, tu.last_read_post_number, c.id, c.topic_id, tu.notification_level, (GREATEST(CASE WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-1'::integer) THEN u.created_at WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-2'::integer) THEN COALESCE(u.previous_visit_at, u.created_at) ELSE ('2023-07-31 03:29:45.73763'::timestamp without time zone - ('00:01:00'::interval * (COALESCE(uo.new_topic_duration_minutes, 2880))::double precision)) END, u.created_at, '2023-07-25 15:06:44'::timestamp without time zone))
Sort Method: quicksort Memory: 26kB
-> Nested Loop (cost=52.11..5189.85 rows=1 width=52) (actual time=0.093..4.974 rows=14 loops=1)
-> Nested Loop (cost=51.70..5181.39 rows=1 width=60) (actual time=0.084..4.931 rows=14 loops=1)
-> Nested Loop (cost=51.28..5172.94 rows=1 width=44) (actual time=0.076..4.887 rows=14 loops=1)
-> Nested Loop (cost=0.41..1698.46 rows=59 width=36) (actual time=0.029..3.537 rows=548 loops=1)
-> Seq Scan on categories c (cost=0.00..13.73 rows=73 width=8) (actual time=0.005..0.039 rows=73 loops=1)
-> Index Only Scan using index_topics_on_updated_at_public on topics (cost=0.41..23.07 rows=1 width=28) (actual time=0.012..0.047 rows=8 loops=73)
Index Cond: ((updated_at >= '2023-07-20 11:33:05'::timestamp without time zone) AND (category_id = c.id))
Filter: (NOT (COALESCE((SubPlan 1), '{}'::integer[]) && '{451,452,453}'::integer[]))
Heap Fetches: 552
SubPlan 1
-> Aggregate (cost=4.31..4.32 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=548)
-> Index Only Scan using index_topic_tags_on_topic_id_and_tag_id on topic_tags (cost=0.29..4.31 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=548)
Index Cond: (topic_id = topics.id)
Heap Fetches: 178
-> Index Scan using index_topic_users_on_user_id_and_topic_id on topic_users tu (cost=50.86..58.88 rows=1 width=16) (actual time=0.002..0.002 rows=0 loops=548)
Index Cond: ((user_id = 13455) AND (topic_id = topics.id))
Filter: ((COALESCE(notification_level, 1) >= 2) AND (last_read_post_number < topics.highest_staff_post_number) AND ((last_read_post_number IS NOT NULL) OR (NOT (hashed SubPlan 2)) OR (notification_level > 1)))
Rows Removed by Filter: 0
SubPlan 2
-> Nested Loop Left Join (cost=0.74..50.43 rows=1 width=4) (never executed)
Join Filter: (category_users2.category_id = categories2.id)
Filter: (((category_users.id IS NULL) AND (COALESCE(category_users2.notification_level, 1) = 0)) OR (COALESCE(category_users.notification_level, 1) = 0))
-> Nested Loop Left Join (cost=0.45..35.84 rows=73 width=16) (never executed)
Join Filter: (category_users.category_id = categories.id)
-> Nested Loop Left Join (cost=0.15..21.97 rows=73 width=8) (never executed)
-> Seq Scan on categories (cost=0.00..13.73 rows=73 width=8) (never executed)
-> Memoize (cost=0.15..0.61 rows=1 width=4) (never executed)
Cache Key: categories.parent_category_id
Cache Mode: logical
-> Index Only Scan using categories_pkey on categories categories2 (cost=0.14..0.60 rows=1 width=4) (never executed)
Index Cond: (id = categories.parent_category_id)
Heap Fetches: 0
-> Materialize (cost=0.29..11.69 rows=2 width=12) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users (cost=0.29..11.68 rows=2 width=12) (never executed)
Index Cond: (user_id = 13455)
-> Materialize (cost=0.29..11.69 rows=2 width=8) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users category_users2 (cost=0.29..11.68 rows=2 width=8) (never executed)
Index Cond: (user_id = 13455)
-> Index Scan using users_pkey on users u (cost=0.42..8.44 rows=1 width=20) (actual time=0.003..0.003 rows=1 loops=14)
Index Cond: (id = 13455)
-> Index Scan using index_user_options_on_user_id_and_default_calendar on user_options uo (cost=0.42..8.44 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=14)
Index Cond: (user_id = 13455)
Planning Time: 1.281 ms
Execution Time: 5.092 ms
(48 rows)
```
With the new query, PG first does an index scan using the `index_topics_on_updated_at_public` index to filter away most of the topics, making the subsequent joins much cheaper. Total query time has been reduced from ~90ms to ~5ms.
This optimisation mostly benefits users whose unread topics are few and recent, since an old `UserStat#first_unread_at` value will still mean scanning through a large portion of the `topics` table.
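In Ruby terms, the change amounts to something like the following sketch (simplified; not the exact `TopicTrackingState.report` implementation — `DB.query` is Discourse's mini_sql helper):
```
# Fetch the user's first_unread_at once, then pass it as a bind parameter
# instead of joining user_stats inside the report query.
first_unread_at = UserStat.where(user_id: user.id).pick(:first_unread_at)

sql = <<~SQL
  SELECT topics.id AS topic_id
  FROM topics
  WHERE topics.updated_at >= :first_unread_at
    AND topics.deleted_at IS NULL
SQL

DB.query(sql, first_unread_at: first_unread_at)
```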
Why this change?
Prior to this change, dismissing unread posts did not publish the
change across clients for the same user. As a result, users could end up
seeing an unread count even though no topics were loaded when
visiting the `/unread` route.
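A hedged sketch of the kind of publish involved (the channel name and payload shape below are illustrative, not the exact ones used):
```
# Publish the dismissal to every connected client of the same user so their
# unread counts stay in sync.
MessageBus.publish(
  "/unread/#{user.id}",
  { message_type: "dismiss_new_posts", payload: { topic_ids: dismissed_topic_ids } },
  user_ids: [user.id],
)
```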
When revising a post, if the topic the post belonged to did not have a category attached, it would error with
> NoMethodError (undefined method `read_restricted' for nil:NilClass)
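One possible guard for the nil-category case (illustrative only; not necessarily the exact fix that shipped):
```
# Guard against topics without a category when checking read restrictions
# during a post revision.
category = post.topic&.category
read_restricted = category.present? && category.read_restricted
```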
What is the problem?
We have a hidden site setting `show_category_definitions_in_topic_lists`
which is set to false by default. What this means is that category
definition topics are not shown in the topic list by default. Only the
category definition topic for the category being viewed will be shown.
However, we have a bug where we show that a category has new
topics when a new child category (along with its category definition
topic) is created, even though the topic list does not list the child
category's category definition topic.
What is the fix here?
This commit fixes the problem by shipping down an additional
`is_category_topic` attribute in `TopicTrackingStateItemSerializer` when
the `show_category_definitions_in_topic_lists` site setting is set
to false. With the new attribute, we can exclude child
categories' category definition topics when computing the new and unread
counts for a category.
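A rough sketch of the serializer change (the attribute derivation and serializer structure are simplified assumptions):
```
# Illustrative: expose is_category_topic only when category definition topics
# are hidden from topic lists.
class TopicTrackingStateItemSerializer < ApplicationSerializer
  attributes :is_category_topic

  def is_category_topic
    # A category definition topic is the topic backing its category.
    object.topic_id == object.category_topic_id
  end

  def include_is_category_topic?
    !SiteSetting.show_category_definitions_in_topic_lists
  end
end
```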
## Why do we need this change?
When loading the ember app, [MessageBus does not start polling immediately](f31f0b70f8/app/assets/javascripts/discourse/app/initializers/message-bus.js (L71-L81)) and instead waits for `document.readyState` to be `complete`. This means that if new messages are created before polling starts, the client will not receive them.
With the sidebar being the default navigation menu, the counts derived from `topic-tracking-state.js` on the client side are prominently displayed on every page. Therefore, we want to ensure that we are not dropping any messages on the channels that `topic-tracking-state.js` subscribes to.
## What does this change do?
This change includes the `MessageBus.last_id`s for the MessageBus channels which `topic-tracking-state.js` subscribes to as part of the preloaded data when loading a page. The last ids are then used when subscribing to the MessageBus channels, so that messages published before MessageBus starts polling are not missed.
## Review Notes
1. See https://github.com/discourse/message_bus#client-support for documentation about subscribing from a given message id.
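A simplified sketch of the server-side part (the channel names are illustrative; `MessageBus.last_id` is the API for reading a channel's current last id):
```
# Collect the current last ids for the channels topic-tracking-state subscribes
# to, so they can be shipped down in the preloaded page data.
channels = ["/latest", "/new", "/unread/#{user.id}"]
message_bus_last_ids = channels.index_with { |channel| MessageBus.last_id(channel) }
# On the client, these ids are passed as the lastId argument to
# MessageBus.subscribe(channel, callback, lastId).
```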
0403cda1d1 introduced a regression where
topics in non-read-restricted categories had their TopicTrackingState
MessageBus messages published with the `group_ids: [nil]` option. This
essentially means that no one would be able to view the messages.
When a topic belongs to a category that is read restricted but permission
has not been granted to any groups, publishing certain topic tracking state
updates for the topic will raise a `MessageBus::InvalidMessageTarget` error,
because we're passing `nil` to `group_ids`, which is not supported by
MessageBus.
This commit ensures that for such categories, we publish the
updates to the admin groups.
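A hedged sketch of the guard (the channel and method names are illustrative; `Group::AUTO_GROUPS[:admins]` is Discourse's admins auto-group id):
```
# Fall back to the admins group when a read-restricted category grants access
# to no groups, so we never publish with group_ids: [nil].
group_ids = category&.secure_group_ids
if category&.read_restricted && group_ids.blank?
  group_ids = [Group::AUTO_GROUPS[:admins]]
end

MessageBus.publish("/unread", message, group_ids: group_ids)
```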
This new site setting replaces the
`enable_experimental_sidebar_hamburger` and `enable_sidebar` site
settings as the sidebar feature exits the experimental phase.
Note that we're replacing them without deprecation since the previous
site settings were considered experimental.
Internal Ref: /t/86563
Previously, when categories were not muted by default, we were sending messages about unmuted topics (topics for which the user explicitly set the notification level to watching).
The same mechanism can be used to fix a bug: when the user was explicitly watching a topic but its category was muted, the user was not informed about new replies.
This commit removes the ability to enable/disable the Sidebar on a per-user
basis and introduces a site-wide setting. For testing purposes, the sidebar can be enabled/disabled via the `enable_sidebar=1` or `enable_sidebar=0` query param.
The `unread_not_too_old` attribute is a little odd because there should never be a case where
the user's `first_unread_at` column is less than the `Topic#updated_at`
column of an unread topic. The `unread_not_too_old` attribute is causing
a bug where topic states synced into `TopicTrackingState` do not appear
as unread, because the attribute does not exist on a normal `Topic`
object and hence is never set.
Before, whispers were only available to staff members.
The configuration has been changed to allow specifying privileged groups with access to whispers. A post-migration was added to move from the old setting to the new one.
I considered adding a boolean `whisperer` column on the user model, similar to `admin`/`moderator`, for performance reasons. In the end, I decided to keep looking up groups, as the queries are only done for the current user and I didn't notice any N+1 queries.
This commit fixes two issues at play. The first was introduced
in f6c852b (or maybe not introduced
but rather revealed). When a user posted a new message in a topic,
they received the unread topic tracking state MessageBus message,
and the Unread (X) indicator was incremented by one. With the
aforementioned perf commit we "guess" the correct last read post
for the user, since we no longer calculate individual users' read
status there. This meant that every time a user posted in a topic
they tracked, the unread indicator was incremented. To get around
this, we can just exclude the user who created the post from the
target users of the unread state message.
The second issue was related to the private message topic tracking
state, and was somewhat similar. Whenever a user created a new private
message, the New (X) indicator was incremented, and could not be
cleared until the page was refreshed. To solve this, we just don't
update the topic state for the user when the new_topic tracking state
message comes through if the user who created the topic is the
same as the current user.
cf. https://meta.discourse.org/t/bottom-of-topic-shows-there-is-1-unread-remaining-when-there-are-actually-0-unread-topics-remaining/220817
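A simplified sketch of the first fix (helper and variable names are assumptions for illustration):
```
# Exclude the author of the new post from the recipients of the unread state
# message, so their own Unread (X) count is not bumped.
tracking_user_ids =
  TopicUser
    .where(topic_id: post.topic_id)
    .where("notification_level >= ?", TopicUser.notification_levels[:tracking])
    .pluck(:user_id)

tracking_user_ids -= [post.user_id]

if tracking_user_ids.present?
  MessageBus.publish("/unread", unread_payload, user_ids: tracking_user_ids)
end
```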
Previously we were publishing one MessageBus message per user who was 'tracking' a topic. On large sites, this can easily be 1000+ messages. The important information in the message is common between all users, so we can manage with a single message on a shared channel, which will be much more efficient.
For user-specific values (notification_level and last_read_post_number), the JS app can infer values which are 'good enough'. Correct values will be loaded as soon as a topic-list containing the topic is visited.
There are certain design decisions that were made in this commit.
Private messages implement their own version of topic tracking state because there are significant differences between regular and private_message topics. Regular topics have to track categories and tags while private messages do not. It is much easier to design the new topic tracking state if we maintain two different classes, instead of trying to mash these two worlds together.
One MessageBus channel per user and one MessageBus channel per group. This allows each user and each group to have their own channel backlog instead of having one global channel which requires the client to filter away unrelated messages.
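A sketch of that channel layout (channel names are illustrative, not necessarily the exact ones used):
```
# One channel per user and one per group, so each client only subscribes to
# the backlogs that are relevant to it.
MessageBus.publish("/private-message-topic-tracking-state/user/#{user.id}", payload)

group_ids.each do |group_id|
  MessageBus.publish("/private-message-topic-tracking-state/group/#{group_id}", payload)
end
```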
This allows us to do `DISTINCT` on the `topic_id` to remove
duplicates (e.g. in extensions to the report SQL), and
also introduces an `additional_join_sql` string to allow
extensions to JOIN additional tables.
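Conceptually (the actual `report` signature is not shown here, and the join below is an invented example), an extension can splice in an extra join and rely on `DISTINCT` to drop any duplicate rows it introduces:
```
# Illustrative only: an extension-provided join fragment spliced into the
# report SQL, with DISTINCT keeping topic_id unique.
additional_join_sql = "LEFT JOIN topic_custom_fields tcf ON tcf.topic_id = topics.id"

sql = <<~SQL
  SELECT DISTINCT topics.id AS topic_id
  FROM topics
  #{additional_join_sql}
  WHERE topics.deleted_at IS NULL
SQL
```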
I merged this PR in yesterday, finally thinking this was done https://github.com/discourse/discourse/pull/12958 but then a wild performance regression occurred. These are the problem methods:
1aa20bd681/app/serializers/topic_tracking_state_serializer.rb (L13-L21)
Turns out date comparison is super expensive on the backend _as well as_ the frontend.
The fix was to just move the `treat_as_new_topic_start_date` into the SQL query rather than using the slower `UserOption#treat_as_new_topic_start_date` method in ruby. After this change, 1% of the total time is spent with the `created_in_new_period` comparison instead of ~20%.
----
History:
Original PR which had to be reverted **https://github.com/discourse/discourse/pull/12555**. See the description there for what this PR is achieving, plus below.
The issue with the original PR is addressed in 92ef54f402
If you went to the `x unread` link for a tag, Chrome would freeze up and possibly crash, or eventually unfreeze after nearly 10 minutes. Other routes for unread/new were similarly slow. From profiling, the issue was the `sync` function of `topic-tracking-state.js`, which calls down to `isNew`, which in turn calls `moment`, a change I had made in the PR above. The time it takes locally with ~1400 topics in the tracking state is 2.3 seconds.
To solve this issue, I have moved these calculations for "created in new period" and "unread not too old" into the tracking state serializer.
When I was looking at the profiler I also noticed this issue which was just compounding the problem. Every time we modify topic tracking state we recalculate the sidebar tracking/everything/tag counts. However this calls `forEachTracked` and `countTags` which can be quite expensive as they go through the whole tracking state (and were also calling the removed moment functions).
I added some logs and this was being called 30 times when navigating to a new /unread route because `sync` is being called from `build-topic-route` (one for each topic loaded due to pagination). So I just added a debounce here and it makes things even faster.
Finally, I changed topic tracking state to use a Map so our counts of the state keys are faster (Maps have `.size`, whereas with objects you have to do `Object.keys(obj)`, which is O(n)).
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
The aim of this PR is to improve the topic tracking state JavaScript code and test coverage so further modifications can be made in plugins and in core. This is focused on making topic tracking state changes easier to respond to with callbacks, and changing it so all state modifications go through a single method instead of modifying `this.state` all over the place. I have also tried to improve documentation, make the code clearer and easier to follow, and make it clear what are public and private methods.
The changes I have made here should not break backwards compatibility, though there is no way to tell for sure if other plugin/theme authors are using tracking state methods that are essentially private methods. Any name changes made in the tracking-state.js code have been reflected in core.
----
We now have a `_trackedTopicLimit` in the tracking state. Previously, if a topic was neither new nor unread it was removed from the tracking state; now it is only removed if we are tracking more than `_trackedTopicLimit` topics (which is set to 4000). This is so plugins/themes adding topics with `TopicTrackingState.register_refine_method` can add topics to track that aren't necessarily new or unread, e.g. for totals counts.
Anywhere we were doing `tracker.states["t" + data.topic_id] = newObject` has now been changed to flow through the central `modifyState` and `modifyStateProp` methods. This is so state objects are not modified until they need to be (e.g. sometimes properties are set based on certain conditions) and also so we can run callback functions when the state is modified.
I added `onStateChange` and `onMessageIncrement` methods to register callbacks that are called when the state is changed and when the message count is incremented, respectively. This was done so we no longer need to do things like `@observes("trackingState.states")` in other Ember classes.
I split up giant functions like `sync` and `establishChannels` into smaller functions for readability and testability, and renamed many small functions to `_functionName` to designate them as private functions which should not be called by consumers of `topicTrackingState`. Public functions are now all documented (well... at least the ones that are not immediately obvious).
----
On the backend side, I have changed the MessageBus publish events for TopicTrackingState to send back tags and tag IDs for more channels, and done some extra code cleanup and refactoring. Plugins may override `TopicTrackingState.report` so I have made its footprint as small as possible and externalised the main parts of it into other methods.
Original PR was reverted because of broken migration https://github.com/discourse/discourse/pull/12058
I fixed it by adding this line
```
AND topics.id IN(SELECT id FROM topics ORDER BY created_at DESC LIMIT :max_new_topics)
```
This time it left joins a limited number of topics. I tested it on a few databases and it worked quite smoothly.
Follow up https://github.com/discourse/discourse/pull/11968
Dismiss all new topics using the same DismissTopicService. In addition, MessageBus receives the exact topic ids which should be marked as `seen`.
* FEATURE: Ability to dismiss new topics in a specific tag
Follow up of https://github.com/discourse/discourse/pull/11927
Using the same mechanism to dismiss new topics in a tag.
* FIX: respect when category and tag is selected
This is an attempt to simplify the logic around dismissing new topics so that one solution works in all places - dismiss all new, dismiss new in a specific category, or even in a specific tag.
* DEV: Remove with_deleted workarounds for old Rails version
These workarounds using private APIs are no longer required in the latest version of Rails. The referenced issue (https://github.com/rails/rails/issues/4306) was closed in 2013. The acts_as_paranoid workaround which this was based on was removed for rails > 5.
Switching to using a scope also allows us to use it within a `belongs_to` relation (e.g. in the Poll model). This avoids issues which can be caused by unscoping all `where` clauses.
Predicates are not necessarily strings, so calling `.join(" AND ")` can sometimes cause weird errors. If we use `WhereClause#ast` and then `.to_sql`, we achieve the same thing with fully public APIs, and it works successfully for all predicates.
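A minimal sketch of the scope-based approach, assuming a `deleted_at` soft-delete column (illustrative only; Discourse's actual Trashable implementation differs):
```
# Replace the private-API workaround with a plain scope that undoes the
# default deleted_at filter; the scope can also be used inside belongs_to.
class Topic < ActiveRecord::Base
  default_scope { where(deleted_at: nil) }
  scope :with_deleted, -> { unscope(where: :deleted_at) }
end

class Poll < ActiveRecord::Base
  belongs_to :topic, -> { with_deleted }
end
```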
PostDestroyer should accept an option to permanently destroy a post from the database. In addition, when the first post is destroyed, the whole topic is destroyed.
Currently, that feature is limited to private messages and the creator of the post. It will be used by discourse-encrypt to explode encrypted private messages.
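A hedged usage sketch (the option name below is an assumption for illustration, not necessarily the exact API):
```
# Permanently remove a post; destroying the first post removes the whole topic.
PostDestroyer.new(current_user, post, force_destroy: true).destroy
```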