This adds a new secure_uploads_pm_only site setting. When secure_uploads
is enabled together with this setting, only uploads created in PMs will be
marked secure; uploads in secure categories will not be marked secure, and
the login_required site setting has no bearing on upload security either.
This is meant to be a stopgap solution that confines secure uploads
to a single place (private messages) for sensitive admin data exports.
Ideally we would want a more comprehensive way of specifying that certain
upload types get secured (a hybrid/mixed-mode secure uploads),
but for now this will do the trick.
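As a rough sketch of the decision this setting implies (a sketch only, assuming a hypothetical `upload_context` helper rather than the actual UploadSecurity code):
```ruby
# Sketch only -- illustrates the setting's effect, not the real UploadSecurity class.
def should_mark_upload_secure?(upload_context)
  return false unless SiteSetting.secure_uploads

  if SiteSetting.secure_uploads_pm_only
    # Secure categories and login_required are intentionally ignored in this mode.
    upload_context.private_message?
  else
    upload_context.private_message? ||
      upload_context.secure_category? ||
      SiteSetting.login_required
  end
end
```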
They're both constant per-instance values, so there is no need to store them
in the session. This also makes the code a bit more readable by moving
the `session_challenge_key` method up to the `DiscourseWebauthn` module.
Follow-up to #23199 in which we moved the "delete user" options under the relevant action menu for flagged posts. This change does the same, but for queued posts.
As per discussion, we want to move the options to delete the user under the "Yes" menu of the review queue, since these options are often the most recommended, but also frequently missed because they are tucked away under their own menu.
In addition to the move, I removed the title case from the options so that they match the other options in the "Yes" menu.
Reverts e2705df and re-lands #23187 and #23219.
The issue was the incorrect execution order of Rails' `assets:precompile` task in our own precompilation stack.
Co-authored-by: David Taylor <david@taylorhq.com>
Manipulating theme module paths means that the paths you author are not the ones used at runtime. This can lead to some very unexpected behavior and potential module name clashes. It also meant that the refactor in 16c6ab8661 was unable to correctly match up theme connector js/templates.
While this could technically be a breaking change, I think it is reasonably safe because:
1. Themes are already forced to use relative paths when referencing their own modules (since they're namespaced based on the site-specific id). The only time this might be problematic is when theme tests reference modules in the theme's main `javascripts` directory.
2. For things like components/services/controllers/etc. our custom Ember resolver works backwards from the end of the path, so adding `discourse/` in the middle will not affect resolution.
This is a bug that happens only when the current date is less than 90 days from a date on which the time zone transitions into or out of Daylight Saving Time.
In these conditions, bulk invites show the time of day of their expiration as being 1 hour later than the current time, whereas it should match the time of day the invite was generated.
This is because the server has not been using the user's timezone when calculating the expiration time of day. This PR fixes the issue by considering the user's timezone when doing the date math.
https://meta.discourse.org/t/bulk-invite-logic-to-generate-expire-date-bug/274689
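A minimal sketch of the idea behind the fix, assuming the inviting user's timezone is available on their `UserOption` (this is not the exact patch):
```ruby
# Do the expiry math in the inviting user's time zone so a DST transition within
# the expiry window doesn't shift the expiration's time of day by an hour.
def invite_expiry(invited_by, days = SiteSetting.invite_expiry_days)
  timezone = invited_by&.user_option&.timezone.presence || "UTC"
  Time.use_zone(timezone) { Time.zone.now + days.days }
end
```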
`ReviewableQueuedPost` got refactored a while back to use the more
appropriate `target_created_by` for the user of the post being queued
instead of `created_by`. The change was not extended to the `DELETE
/review/:id` endpoint, leading to error responses for a user attempting
to delete their own queued post.
This fix extends the `Reviewable` lookup implementation in
`ReviewablesController#destroy` and Guardian implementation to account
for this change.
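Conceptually, the Guardian side of the fix amounts to something like the following sketch (the method name and exact checks are illustrative):
```ruby
# A user may destroy a queued-post reviewable when they authored the queued post
# (staff can always do so). Note the check is against target_created_by, not created_by.
def can_delete_reviewable_queued_post?(reviewable)
  return false if anonymous?
  is_staff? || reviewable.target_created_by_id == user.id
end
```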
Previously we were discovering plugin outlets by checking first for dedicated template files, and then looking for classes to match them. This doesn't work for components which are entirely defined in JS (e.g. those authored with gjs, or those which are re-exports of a colocated component).
This commit refactors our detection logic to look for both class and template modules in a single pass. It also refactors things so that the modules themselves are required lazily when needed, rather than all being loaded during app boot.
When hiding a post (essentially updating hidden, hidden_at, and hidden_reason_id), our callbacks run the whole battery of post validations. This can cause the hiding to fail in a number of edge cases. The issue is similar to the one fixed in #11680, but applies to all post validations, none of which should apply when hiding a post.
After some code reading and discussion, none of the validations in PostValidator seem to be relevant when hiding posts, so instead of just skipping the uniqueness check, we skip all post validator checks.
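Conceptually, the change amounts to an early return covering every check in the validator, along these lines (a sketch only; the real guard may be wired differently):
```ruby
# Skip the whole battery of PostValidator checks while a post is being hidden,
# i.e. when the save is only flipping hidden/hidden_at/hidden_reason_id.
class PostValidator < ActiveModel::Validator
  def validate(post)
    return if post_being_hidden?(post)
    # ... the usual body length, uniqueness, raw quality, etc. checks run here ...
  end

  private

  def post_being_hidden?(post)
    post.hidden && post.will_save_change_to_hidden?
  end
end
```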
* scrub non-`a` HTML tags from tag descriptions on create; strip all tags from the tag description when displayed in the tag hover
* test for tag description links
* UX: basic render-tag test
* UX: fix linting
* UX: fix linting
* fix broken tests
* Update spec/models/tag_spec.rb
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
* UX: use has_sanitizable_fields instead of has_scrubbable_fields to make tag.description safe
---------
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
Currently, when we decide we're going to drop a column in the future, we just mark it with a TODO comment and add it to `ignored_columns`. This makes it instantly unavailable, and we mostly forget about the TODO in the end. 😬
This change adds a HasDeprecatedColumns concern which offers a little more flexibility. We can still simulate the old behaviour by setting `drop_from` to the current version, but we can also set it to a future version, causing the column to raise a deprecation warning until then if used.
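A minimal sketch of how a call site might read (the macro name, column, and version are illustrative, not necessarily the final API):
```ruby
class TopicUser < ActiveRecord::Base
  include HasDeprecatedColumns

  # Behaves like ignored_columns today when drop_from is the current version, but a
  # future version keeps the column readable and raises a deprecation warning if used.
  deprecate_column :highest_seen_post_number, drop_from: "3.2"
end
```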
This commit removes any logic in the app and in specs around
enable_experimental_hashtag_autocomplete and deletes some
old category hashtag code that is no longer necessary.
It also adds a `slug_ref` category instance method, which
generates a reference like `parent:child` for a category,
with an optional depth; this is what hashtags use. It also
refactors PostRevisor, which was using CategoryHashtagDataSource
directly (a no-no).
Deletes the old hashtag markdown rule as well.
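To illustrate the `slug_ref` behaviour described above, a minimal sketch (the default depth and separator are assumptions, not the exact implementation):
```ruby
# Returns the category slug, prefixed with up to `depth` ancestor slugs joined by ":",
# e.g. "parent:child" for a subcategory.
def slug_ref(depth: 1)
  return slug if parent_category_id.blank? || depth.zero?
  "#{parent_category.slug_ref(depth: depth - 1)}:#{slug}"
end
```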
This brings the theme development experience (via the discourse_theme CLI) closer to the experience of making JavaScript changes in Discourse core/plugins via Ember CLI. Whenever a change is made to a non-CSS theme field, all clients will be instructed to immediately refresh via message-bus.
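Mechanically, this boils down to a message-bus publish whenever such a field is saved, roughly like the sketch below (the channel, payload, and `theme_field` are illustrative):
```ruby
# Tell every connected client to refresh as soon as a non-CSS theme field changes.
MessageBus.publish("/file-change", ["refresh"]) unless theme_field.css?
```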
In #20135 we prevented invalid inputs from being accepted in category setting form fields on the front-end. We didn't do anything on the back-end at that time, because we were still discussing which path we wanted to take. Eventually we decided we want to move this to a new CategorySetting model.
This PR moves num_auto_bump_daily from custom fields to the new CategorySetting model.
In addition, it sets the default value to 0, which behaves the same as when the value is NULL.
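A hedged sketch of the new model's shape (the validation details are assumptions based on the front-end constraint from #20135):
```ruby
class CategorySetting < ActiveRecord::Base
  belongs_to :category

  # num_auto_bump_daily defaults to 0, which behaves the same as the old NULL value.
  validates :num_auto_bump_daily,
            numericality: { only_integer: true, greater_than_or_equal_to: 0 }
end
```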
* FIX: Add 'Ignored' flags to Moderator Activity report
The Moderator Activity query didn’t include the number of deferred flags in the Flags Reviewed totals. As this number is designed to reflect how many flags a moderator has seen, reviewed, and made a judgement on, the Ignored ones should also be included.
* Apply suggestions from code review
---------
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
FEATURE: Only approved flags for post counters
* Why was this change necessary?
The counters for flagged posts in the user's profile and in the user index of
the admin view include flags that were rejected, ignored, or pending
review. This introduces unnecessary noise. Also, the flagged posts
counter in the user's profile includes custom flags, which add further
noise to this signal.
* How does it address the problem?
* Modifying User#flags_received_count to return posts with only approved
standard flags
* Refactoring User#number_of_flagged_posts to alias to
User#flags_received_count
* Updating the flagged post staff counter hyperlink to navigate to a
filtered view of that user's approved flagged posts to maintain
consistency with the counter
* Adding system tests for the profile page to cover the flagged posts
staff counter
In the query generated by `TopicTrackingState.report`, there are two
subqueries being executed. The first subquery fetches all the topics
that are new for a given user, while the second subquery fetches all the topics with
unread posts for a given user. For the second subquery, there is a
filter `topics.updated_at >= user_stats.first_unread_at` which is used
as a performance optimisation to reduce the number of rows that PG has
to scan through in the `topics` table.
However, we started to notice in production that the PG planner doesn't
always apply this filter first to reduce the number of rows it has
to scan. Running the following query on one of our production
instances,
```
EXPLAIN ANALYZE
SELECT
DISTINCT topics.id as topic_id,
u.id as user_id,
topics.created_at,
topics.updated_at,
topics.highest_staff_post_number AS highest_post_number,
last_read_post_number,
c.id as category_id,
c.topic_id AS category_topic_id,
tu.notification_level,
us.first_unread_at,
GREATEST(
CASE
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -1 THEN u.created_at
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -2 THEN COALESCE(
u.previous_visit_at,u.created_at
)
ELSE ('2023-07-31 03:29:45.737630'::timestamp - INTERVAL '1 MINUTE' * COALESCE(uo.new_topic_duration_minutes, 2880))
END, u.created_at, '2023-07-25 15:06:44'
) AS treat_as_new_topic_start_date
FROM topics
JOIN users u on u.id = 13455
JOIN user_stats AS us ON us.user_id = u.id
JOIN user_options AS uo ON uo.user_id = u.id
JOIN categories c ON c.id = topics.category_id
LEFT JOIN topic_users tu ON tu.topic_id = topics.id AND tu.user_id = u.id
WHERE u.id = 13455 AND
topics.updated_at >= us.first_unread_at AND
topics.archetype <> 'private_message' AND
(("topics"."deleted_at" IS NULL AND (tu.last_read_post_number < topics.highest_staff_post_number) AND (COALESCE(tu.notification_level, 1) >= 2)) OR (1=0)) AND
NOT (
COALESCE((select array_agg(tag_id) from topic_tags where topic_tags.topic_id = topics.id), ARRAY[]::int[]) && ARRAY[451,452,453]
) AND
topics.deleted_at IS NULL AND
NOT (
last_read_post_number IS NULL AND
(
topics.category_id IN (SELECT "categories"."id" FROM "categories" LEFT JOIN categories categories2 ON categories2.id = categories.parent_category_id LEFT JOIN category_users ON category_users.category_id = categories.id AND category_users.user_id = 13455 LEFT JOIN category_users category_users2 ON category_users2.category_id = categories2.id AND category_users2.user_id = 13455 WHERE ((category_users.id IS NULL AND COALESCE(category_users2.notification_level, 1) = 0) OR COALESCE(category_users.notification_level, 1) = 0))
AND tu.notification_level <= 1
)
)
```
we get the following
```
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=201606.06..201608.15 rows=76 width=60) (actual time=91.279..91.294 rows=14 loops=1)
-> Sort (cost=201606.06..201606.25 rows=76 width=60) (actual time=91.278..91.284 rows=14 loops=1)
Sort Key: topics.id, topics.created_at, topics.updated_at, topics.highest_staff_post_number, tu.last_read_post_number, c.id, c.topic_id, tu.notification_level, us.first_unread_at, (GREATEST(CASE WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-1'::integer) THEN u.created_at WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-2'::integer) THEN COALESCE(u.previous_visit_at, u.created_at) ELSE ('2023-07-31 03:29:45.73763'::timestamp without time zone - ('00:01:00'::interval * (COALESCE(uo.new_topic_duration_minutes, 2880))::double precision)) END, u.created_at, '2023-07-25 15:06:44'::timestamp without time zone))
Sort Method: quicksort Memory: 26kB
-> Hash Join (cost=97519.51..201603.69 rows=76 width=60) (actual time=87.662..91.268 rows=14 loops=1)
Hash Cond: (topics.id = tu.topic_id)
Join Filter: ((tu.last_read_post_number < topics.highest_staff_post_number) AND ((tu.last_read_post_number IS NOT NULL) OR (NOT (hashed SubPlan 2)) OR (tu.notification_level > 1)))
Rows Removed by Join Filter: 10
-> Nested Loop (cost=1.54..104075.36 rows=3511 width=68) (actual time=0.055..3.609 rows=548 loops=1)
-> Nested Loop (cost=1.13..25.20 rows=1 width=32) (actual time=0.027..0.033 rows=1 loops=1)
-> Nested Loop (cost=0.71..16.76 rows=1 width=28) (actual time=0.020..0.023 rows=1 loops=1)
-> Index Scan using users_pkey on users u (cost=0.42..8.44 rows=1 width=20) (actual time=0.010..0.012 rows=1 loops=1)
Index Cond: (id = 13455)
-> Index Scan using user_stats_pkey on user_stats us (cost=0.29..8.31 rows=1 width=12) (actual time=0.008..0.010 rows=1 loops=1)
Index Cond: (user_id = 13455)
-> Index Scan using index_user_options_on_user_id_and_default_calendar on user_options uo (cost=0.42..8.44 rows=1 width=8) (actual time=0.007..0.008 rows=1 loops=1)
Index Cond: (user_id = 13455)
-> Nested Loop (cost=0.41..104015.12 rows=3504 width=36) (actual time=0.026..3.503 rows=548 loops=1)
-> Seq Scan on categories c (cost=0.00..13.73 rows=73 width=8) (actual time=0.003..0.039 rows=73 loops=1)
-> Index Only Scan using index_topics_on_updated_at_public on topics (cost=0.41..1424.20 rows=48 width=28) (actual time=0.012..0.046 rows=8 loops=73)
Index Cond: ((updated_at >= us.first_unread_at) AND (category_id = c.id))
Filter: (NOT (COALESCE((SubPlan 1), '{}'::integer[]) && '{451,452,453}'::integer[]))
Heap Fetches: 553
SubPlan 1
-> Aggregate (cost=4.31..4.32 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=548)
-> Index Only Scan using index_topic_tags_on_topic_id_and_tag_id on topic_tags (cost=0.29..4.31 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=548)
Index Cond: (topic_id = topics.id)
Heap Fetches: 178
-> Hash (cost=97222.14..97222.14 rows=19914 width=16) (actual time=87.545..87.546 rows=42884 loops=1)
Buckets: 65536 (originally 32768) Batches: 1 (originally 1) Memory Usage: 2387kB
-> Bitmap Heap Scan on topic_users tu (cost=1217.47..97222.14 rows=19914 width=16) (actual time=14.419..78.286 rows=42884 loops=1)
Recheck Cond: (user_id = 13455)
Filter: (COALESCE(notification_level, 1) >= 2)
Rows Removed by Filter: 15839
Heap Blocks: exact=45285
-> Bitmap Index Scan on index_topic_users_on_user_id_and_topic_id (cost=0.00..1212.49 rows=59741 width=0) (actual time=6.448..6.448 rows=58723 loops=1)
Index Cond: (user_id = 13455)
SubPlan 2
-> Nested Loop Left Join (cost=0.74..46.90 rows=1 width=4) (never executed)
Join Filter: (category_users2.category_id = categories2.id)
Filter: (((category_users.id IS NULL) AND (COALESCE(category_users2.notification_level, 1) = 0)) OR (COALESCE(category_users.notification_level, 1) = 0))
-> Nested Loop Left Join (cost=0.45..32.31 rows=73 width=16) (never executed)
Join Filter: (category_users.category_id = categories.id)
-> Nested Loop Left Join (cost=0.15..18.45 rows=73 width=8) (never executed)
-> Seq Scan on categories (cost=0.00..13.73 rows=73 width=8) (never executed)
-> Memoize (cost=0.15..0.28 rows=1 width=4) (never executed)
Cache Key: categories.parent_category_id
Cache Mode: logical
-> Index Only Scan using categories_pkey on categories categories2 (cost=0.14..0.27 rows=1 width=4) (never executed)
Index Cond: (id = categories.parent_category_id)
Heap Fetches: 0
-> Materialize (cost=0.29..11.69 rows=2 width=12) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users (cost=0.29..11.68 rows=2 width=12) (never executed)
Index Cond: (user_id = 13455)
-> Materialize (cost=0.29..11.69 rows=2 width=8) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users category_users2 (cost=0.29..11.68 rows=2 width=8) (never executed)
Index Cond: (user_id = 13455)
Planning Time: 1.740 ms
Execution Time: 91.414 ms
(59 rows)
```
From the execution plan, we can see that most of the time is spent
joining about 42888 rows in the `topics` table to the `topic_users` table.
However, we know that we only have to scan through a
subset of the `topics` table because the user's `UserStat#first_unread_at` is '2023-07-20 11:33:05'.
If we filter the `topics` table with `topics.updated_at >= '2023-07-20 11:33:05'`, this would only
return about 1500 rows.
From our testing in production, the PG planner is able to come up with a
better query plan when we avoid the unnecessary join on `user_stats` just to
get the user's `UserStat#first_unread_at`. Instead, we can
pass the value of `UserStat#first_unread_at` directly as a query
parameter.
```
EXPLAIN ANALYZE
SELECT
DISTINCT topics.id as topic_id,
u.id as user_id,
topics.created_at,
topics.updated_at,
topics.highest_staff_post_number AS highest_post_number,
last_read_post_number,
c.id as category_id,
c.topic_id AS category_topic_id,
tu.notification_level,
GREATEST(
CASE
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -1 THEN u.created_at
WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -2 THEN COALESCE(
u.previous_visit_at,u.created_at
)
ELSE ('2023-07-31 03:29:45.737630'::timestamp - INTERVAL '1 MINUTE' * COALESCE(uo.new_topic_duration_minutes, 2880))
END, u.created_at, '2023-07-25 15:06:44'
) AS treat_as_new_topic_start_date
FROM topics
JOIN users u on u.id = 13455
JOIN user_options AS uo ON uo.user_id = u.id
JOIN categories c ON c.id = topics.category_id
LEFT JOIN topic_users tu ON tu.topic_id = topics.id AND tu.user_id = u.id
WHERE u.id = 13455 AND
topics.updated_at >= '2023-07-20 11:33:05' AND
topics.archetype <> 'private_message' AND
(("topics"."deleted_at" IS NULL AND (tu.last_read_post_number < topics.highest_staff_post_number) AND (COALESCE(tu.notification_level, 1) >= 2)) OR (1=0)) AND
NOT (
COALESCE((select array_agg(tag_id) from topic_tags where topic_tags.topic_id = topics.id), ARRAY[]::int[]) && ARRAY[451,452,453]
) AND
topics.deleted_at IS NULL AND
NOT (
last_read_post_number IS NULL AND
(
topics.category_id IN (SELECT "categories"."id" FROM "categories" LEFT JOIN categories categories2 ON categories2.id = categories.parent_category_id LEFT JOIN category_users ON category_users.category_id = categories.id AND category_users.user_id = 13455 LEFT JOIN category_users category_users2 ON category_users2.category_id = categories2.id AND category_users2.user_id = 13455 WHERE ((category_users.id IS NULL AND COALESCE(category_users2.notification_level, 1) = 0) OR COALESCE(category_users.notification_level, 1) = 0))
AND tu.notification_level <= 1
)
);
```
Note how the filter is now `topics.updated_at >= '2023-07-20 11:33:05'`
instead of `topics.updated_at >= us.first_unread_at`. The modified query
above generates the following execution plan.
```
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=5189.86..5189.88 rows=1 width=52) (actual time=4.991..5.002 rows=14 loops=1)
-> Sort (cost=5189.86..5189.86 rows=1 width=52) (actual time=4.990..4.994 rows=14 loops=1)
Sort Key: topics.id, topics.created_at, topics.updated_at, topics.highest_staff_post_number, tu.last_read_post_number, c.id, c.topic_id, tu.notification_level, (GREATEST(CASE WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-1'::integer) THEN u.created_at WHEN (COALESCE(uo.new_topic_duration_minutes, 2880) = '-2'::integer) THEN COALESCE(u.previous_visit_at, u.created_at) ELSE ('2023-07-31 03:29:45.73763'::timestamp without time zone - ('00:01:00'::interval * (COALESCE(uo.new_topic_duration_minutes, 2880))::double precision)) END, u.created_at, '2023-07-25 15:06:44'::timestamp without time zone))
Sort Method: quicksort Memory: 26kB
-> Nested Loop (cost=52.11..5189.85 rows=1 width=52) (actual time=0.093..4.974 rows=14 loops=1)
-> Nested Loop (cost=51.70..5181.39 rows=1 width=60) (actual time=0.084..4.931 rows=14 loops=1)
-> Nested Loop (cost=51.28..5172.94 rows=1 width=44) (actual time=0.076..4.887 rows=14 loops=1)
-> Nested Loop (cost=0.41..1698.46 rows=59 width=36) (actual time=0.029..3.537 rows=548 loops=1)
-> Seq Scan on categories c (cost=0.00..13.73 rows=73 width=8) (actual time=0.005..0.039 rows=73 loops=1)
-> Index Only Scan using index_topics_on_updated_at_public on topics (cost=0.41..23.07 rows=1 width=28) (actual time=0.012..0.047 rows=8 loops=73)
Index Cond: ((updated_at >= '2023-07-20 11:33:05'::timestamp without time zone) AND (category_id = c.id))
Filter: (NOT (COALESCE((SubPlan 1), '{}'::integer[]) && '{451,452,453}'::integer[]))
Heap Fetches: 552
SubPlan 1
-> Aggregate (cost=4.31..4.32 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=548)
-> Index Only Scan using index_topic_tags_on_topic_id_and_tag_id on topic_tags (cost=0.29..4.31 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=548)
Index Cond: (topic_id = topics.id)
Heap Fetches: 178
-> Index Scan using index_topic_users_on_user_id_and_topic_id on topic_users tu (cost=50.86..58.88 rows=1 width=16) (actual time=0.002..0.002 rows=0 loops=548)
Index Cond: ((user_id = 13455) AND (topic_id = topics.id))
Filter: ((COALESCE(notification_level, 1) >= 2) AND (last_read_post_number < topics.highest_staff_post_number) AND ((last_read_post_number IS NOT NULL) OR (NOT (hashed SubPlan 2)) OR (notification_level > 1)))
Rows Removed by Filter: 0
SubPlan 2
-> Nested Loop Left Join (cost=0.74..50.43 rows=1 width=4) (never executed)
Join Filter: (category_users2.category_id = categories2.id)
Filter: (((category_users.id IS NULL) AND (COALESCE(category_users2.notification_level, 1) = 0)) OR (COALESCE(category_users.notification_level, 1) = 0))
-> Nested Loop Left Join (cost=0.45..35.84 rows=73 width=16) (never executed)
Join Filter: (category_users.category_id = categories.id)
-> Nested Loop Left Join (cost=0.15..21.97 rows=73 width=8) (never executed)
-> Seq Scan on categories (cost=0.00..13.73 rows=73 width=8) (never executed)
-> Memoize (cost=0.15..0.61 rows=1 width=4) (never executed)
Cache Key: categories.parent_category_id
Cache Mode: logical
-> Index Only Scan using categories_pkey on categories categories2 (cost=0.14..0.60 rows=1 width=4) (never executed)
Index Cond: (id = categories.parent_category_id)
Heap Fetches: 0
-> Materialize (cost=0.29..11.69 rows=2 width=12) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users (cost=0.29..11.68 rows=2 width=12) (never executed)
Index Cond: (user_id = 13455)
-> Materialize (cost=0.29..11.69 rows=2 width=8) (never executed)
-> Index Scan using idx_category_users_user_id_category_id on category_users category_users2 (cost=0.29..11.68 rows=2 width=8) (never executed)
Index Cond: (user_id = 13455)
-> Index Scan using users_pkey on users u (cost=0.42..8.44 rows=1 width=20) (actual time=0.003..0.003 rows=1 loops=14)
Index Cond: (id = 13455)
-> Index Scan using index_user_options_on_user_id_and_default_calendar on user_options uo (cost=0.42..8.44 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=14)
Index Cond: (user_id = 13455)
Planning Time: 1.281 ms
Execution Time: 5.092 ms
(48 rows)
```
With the new query, PG first does an index scan using the `index_topics_on_updated_at_public` index to filter away most of the topics making the subsequent joins much cheaper. Total query time has been reduced from ~90ms to ~5ms.
This optimisation will mostly benefit users with very few/recent unread topics, since an old `UserStat#first_unread_at` value will still mean scanning through a large portion of the `topics` table.
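On the Ruby side, the change boils down to something like the following sketch (query heavily simplified; `user` and the surrounding context are assumed):
```ruby
# Look up first_unread_at once, then bind it as a constant in the report query so the
# planner can use index_topics_on_updated_at_public without joining user_stats.
first_unread_at = UserStat.where(user_id: user.id).pick(:first_unread_at)

sql = <<~SQL
  SELECT topics.id
  FROM topics
  WHERE topics.updated_at >= :first_unread_at
SQL

DB.query(sql, first_unread_at: first_unread_at)
```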
We currently are accumulating orphaned upload references whenever drafts are deleted.
This change deals with future cases by adding a `dependent: :delete_all` strategy to the `Draft#upload_references` association. (We don't really need the destroy strategy here, since UploadReference is a simple data bag with no validations or callbacks on the model.)
It deals with existing cases through a migration that deletes all existing, orphaned draft upload references.
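A sketch of what the association-level part of this looks like (assuming the polymorphic `target` association on UploadReference):
```ruby
class Draft < ActiveRecord::Base
  # delete_all issues a single SQL DELETE; destroy callbacks aren't needed because
  # UploadReference is a plain data bag with no validations or callbacks.
  has_many :upload_references, as: :target, dependent: :delete_all
end
```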
A previous change updated `ReviewableQueuedPost`'s `created_by`
to be consistent with other reviewable types. It assigns
the creator of the post being queued to `target_created_by` and sets
`created_by` to the creator of the reviewable itself.
This fix updates some of the `created_by` references missed during the
initial fix.
What is the problem here?
In multiple controllers, we accept a `limit` param but do not
impose any upper bound on the values being accepted. Without an upper
bound, we may be allowing arbitrary users to generate DB queries
which end up exhausting the resources on the server.
What is the fix here?
A new `fetch_limit_from_params` helper method is introduced in
`ApplicationController` that can be used by controller actions to safely
get the limit from the params, since both a default limit and a maximum limit
have to be set. When an invalid `limit` param is encountered, the server
responds with a 400 response code.
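A rough sketch of what such a helper could look like (the exact signature and error handling in `ApplicationController` may differ):
```ruby
# Returns params[:limit] validated against an explicit maximum, falling back to a
# default. An out-of-range or non-numeric value raises and is rendered as a 400.
def fetch_limit_from_params(default:, max:)
  limit = params[:limit].presence&.to_i || default
  raise Discourse::InvalidParameters.new(:limit) if limit <= 0 || limit > max
  limit
end

# Example usage in a controller action:
# limit = fetch_limit_from_params(default: 30, max: 100)
```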
This is happening because, despite the user already existing in the forum, the `SingleSignOnRecord` doesn't exist and "require_activation" is set on the provider. This causes us to skip looking up the existing user by email, so DiscourseConnect attempts to create a new User and fails with "Validation failed: Primary email has already been taken".
Why this change?
In `PostDestroyer#make_previous_post_the_last_one` and
`Topic.reset_highest`, we have a query that looks something like this:
```
SELECT user_id FROM posts
WHERE topic_id = :topic_id AND
deleted_at IS NULL AND
post_type <> 4
#{post_type}
ORDER BY created_at desc
LIMIT 1
```
However, we currently don't have an index that caters directly to this
query. As a result, we have seen this query performing poorly on large
sites if the PG planner ends up using an index that is suboptimal for
the query.
This commit adds an index to the `posts` table on `topic_id` and then
`created_at`. For the query above, PG will be able to do a backwards
index scan efficiently.
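A sketch of the corresponding migration (index name and options such as `algorithm: :concurrently` are assumptions):
```ruby
class AddIndexOnPostsTopicIdAndCreatedAt < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    # Lets PG walk a topic's posts in created_at order (forwards or backwards).
    add_index :posts,
              %i[topic_id created_at],
              name: "index_posts_on_topic_id_and_created_at",
              algorithm: :concurrently
  end
end
```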
Context of this change:
There are two site settings which an admin can configure to set the
default categories and tags that are shown to a new user. `default_navigation_menu_categories`
is used to determine the default categories while
`default_navigation_menu_tags` is used to determine the default tags.
Prior to this change, when seeding the defaults we would filter out the
categories/tags that the user does not have permission to see. However,
this means that when the user does eventually gain permission down the
line, the default categories and tags do not appear.
What does this change do?
With this commit, all the categories and tags
configured in the `default_navigation_menu_categories` and
`default_navigation_menu_tags` site settings are seeded regardless of
the user's visibility of those categories or tags. During
serialization, we then filter out the categories and tags which the
user does not have visibility of.
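Conceptually, the serialization-time filtering could look like this sketch (the method name is illustrative; `Category.secured` stands in for the visibility check):
```ruby
# Seed everything configured in the site settings, but only serialize what the
# user can currently see; newly gained permissions surface automatically.
def visible_default_sidebar_categories(user, seeded_category_ids)
  Category.secured(Guardian.new(user)).where(id: seeded_category_ids)
end
```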