This commit adds the ability for site administrators to mark users'
passwords as expired. Note that this commit does not add any client side
interface to mark a user's password as expired.
The following changes are introduced in this commit:
1. Adds a `user_passwords` table and `UserPassword` model. While the
`user_passwords` table is currently used to only store expired
passwords, it will be used in the future to store a user's current
password as well.
2. Adds a `UserPasswordExpirer.expire_user_password` method which can
be used from the Rails console to mark a user's password as expired.
3. Updates `SessionsController#create` to check that the user's current
password has not been marked as expired after confirming the
password. If the password is determined to be expired based on the
existence of a `UserPassword` record with the `password_expired_at`
column set, we will not log the user in and will display a password
expired notice. A forgot password email is automatically sent out to
the user as well.
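A minimal console sketch of the flow described above (only the `UserPasswordExpirer.expire_user_password` name comes from this commit; the lookup and argument shape are assumptions):
```ruby
# From a Rails console (sketch): mark a user's password as expired.
user = User.find_by_username("some_username") # illustrative lookup
UserPasswordExpirer.expire_user_password(user) # argument shape assumed

# On the next login attempt, SessionsController#create confirms the password,
# finds the UserPassword record with password_expired_at set, blocks the login,
# shows the password expired notice, and sends a forgot-password email.
```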
This commit introduces the following changes, which allow a site
administrator to mark `Upload` records with the `s3_file_missing`
verification status. Such `Upload` records are then ignored when
`Discourse.store.list_missing_uploads` is run on a site where S3 uploads
are enabled and `SiteSetting.enable_s3_inventory` is set to `true`.
1. Introduce `s3_file_missing` to `Upload.verification_statuses`
2. Introduce `Upload.mark_invalid_s3_uploads_as_missing` which updates
`Upload#verification_status` of all `Upload` records from `invalid_etag` to `s3_file_missing`.
3. Introduce the `rake uploads:mark_invalid_s3_uploads_as_missing` Rake task
   which allows a site administrator to change `Upload` records with the
   `invalid_etag` verification status to the `s3_file_missing`
   verification status.
4. Update `S3Inventory` to ignore `Upload` records with the
`s3_file_missing` verification status.
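For illustration, the bulk update described in point 2 presumably boils down to something like this (a sketch, not the actual implementation):
```ruby
# Sketch: flip uploads from invalid_etag to s3_file_missing so that
# Discourse.store.list_missing_uploads ignores them going forward.
Upload.where(verification_status: Upload.verification_statuses[:invalid_etag]).update_all(
  verification_status: Upload.verification_statuses[:s3_file_missing],
)
```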
This gives us daily fidelity of topic view stats.
A new table stores a row per topic viewed per day, tracking
anonymous and logged-in views.
We also have a new endpoint `/t/ID/views-stats.json` to get the statistics for the topic.
After flags were moved to the database, each save changes the available PostActionTypes. Therefore, flag specs should clear that state before and after each example, not just before.
In addition, we need to clear `nil` counts for dynamically created flags from the serializer.
* FEATURE: add agree and edit
Adds "agree and edit", an alias for "agree and keep", but with a client action to
edit the post in the composer before the flag is agreed with.
---------
Co-authored-by: Juan David Martinez <juan@discourse.org>
We're planning to implement a feature that allows adding required fields for existing users. This PR does some preparatory refactoring to make that possible. There should be no changes to existing behaviour, apart from a small update to the admin UI.
This commit updates `Post#each_upload_url` to reject URLs that follow the
`/uploads/short-url` uploads URL format but do not have a host matching
`Discourse.current_hostname`. This situation most commonly happens when
users copy upload URLs between different Discourse sites.
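A sketch of the rejection rule being described (the helper name is illustrative, not the actual implementation):
```ruby
require "uri"

# Sketch: a short-url upload link whose host points at a different Discourse
# site is now skipped by Post#each_upload_url.
def foreign_short_url?(url)
  uri = URI.parse(url)
  uri.host.present? && uri.host != Discourse.current_hostname &&
    uri.path.to_s.start_with?("/uploads/short-url")
end
```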
This PR introduces a basic AdminNotice model to store these notices. Admin notices are categorized by their source/type (currently only notices from problem checks). They also have a priority.
Whenever one creates, updates, or deletes a post, we should keep the `topic.word_count` counter in sync.
Context - https://meta.discourse.org/t/-/308062
The users directory is updated on a daily cadence. However, when a site is new and doesn't have many users, it can be confusing that a user who has just joined doesn't show up in the users directory until a day after they join. To eliminate this confusion, this commit triggers a refresh of the users directory as soon as a user joins, if the site is in bootstrap mode. The reason for the conditional trigger is that refreshing the users directory is an expensive operation, and doing it often on a large site with many users could lead to performance problems.
Internal topic: t/126076.
If there's ever a circular reference in categories, don't go into an infinite loop when generating the category slug.
Instead, keep track of parent ids, and bail out as soon as we encounter one more than once.
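A sketch of the guard described above, as it might look inside the slug-building code (names are illustrative):
```ruby
require "set"

# Sketch: walk up the parent chain, bailing out if we see a parent id twice,
# which indicates a circular reference.
seen_parent_ids = Set.new
category = self

while (parent_id = category&.parent_category_id)
  break if seen_parent_ids.include?(parent_id)
  seen_parent_ids << parent_id
  category = Category.find_by(id: parent_id)
end
```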
In #22851 we added a dependent strategy for deleting upload references when a draft is destroyed. This, however, didn't catch all cases, because we still have some code that issues DELETE drafts queries directly to the database. Specifically in the weekly cleanup job handled by Draft#cleanup!.
This PR fixes that by turning the raw query into an ActiveRecord #destroy_all, which will invoke the dependent strategy that ultimately deletes the upload references. It also includes a post migration to clear orphaned upload references that are already in the database.
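Roughly, the change looks like this (the `where` condition below is illustrative, not the actual cleanup criteria):
```ruby
# Before (sketch): a raw DELETE bypasses ActiveRecord, so the dependent
# strategy on upload references never runs.
#   DB.exec("DELETE FROM drafts WHERE ...")

# After (sketch): destroy_all instantiates each Draft and runs its dependent
# strategy, which removes the associated upload references.
Draft.where("updated_at < ?", 6.months.ago).destroy_all
```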
This commit introduces the `run_theme_migration` spec helper to allow
theme developers to write RSpec tests for theme migrations. For example,
this allows the following RSpec test to be written in themes:
```
RSpec.describe "0003-migrate-small-links-setting migration" do
let!(:theme) { upload_theme_component }
it "should set target property to `_blank` if previous target component is not valid or empty" do
theme.theme_settings.create!(
name: "small_links",
theme: theme,
data_type: ThemeSetting.types[:string],
value: "some text, #|some text 2, #, invalid target",
)
run_theme_migration(theme, "0003-migrate-small-links-setting")
expect(theme.settings[:small_links].value).to eq(
[
{ "text" => "some text", "url" => "#", "target" => "_blank" },
{ "text" => "some text 2", "url" => "#", "target" => "_blank" },
],
)
end
end
```
This change is being introduced because we realised that writing just
JavaScript tests for the migrations is insufficient, since JavaScript
tests do not ensure that the migrated theme settings can actually be
successfully saved into the database. Hence, we are introducing this
helper as a way for theme developers to write "end-to-end" migration
tests.
At the moment, there is no way to create a group of related watched words together. If a user needs a set of words to be created together, they have to create them individually, one at a time.
This change attempts to allow related watched words to be created as a group. The idea here is to have a list of words be tied together via a common `WatchedWordGroup` record. Given a list of words, a `WatchedWordGroup` record is created and assigned to each `WatchedWord` record. The existing WatchedWord creation behaviour remains largely unchanged.
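A sketch of the grouping (only `WatchedWordGroup` and `WatchedWord` come from this change; the column and enum names below are assumptions):
```ruby
# Sketch: create one WatchedWordGroup and attach each related word to it.
WatchedWordGroup.transaction do
  group = WatchedWordGroup.create!(action: WatchedWord.actions[:block]) # columns assumed
  %w[badword variant-1 variant-2].each do |word|
    WatchedWord.create!(
      word: word,
      action: WatchedWord.actions[:block],
      watched_word_group_id: group.id, # column name assumed
    )
  end
end
```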
Co-authored-by: Selase Krakani <skrakani@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
This commit introduces a few changes as a result of
customer issues with finding why a topic was relisted.
In one case, if a user edited the OP of a topic that was
unlisted and hidden because of too many flags, the topic
would get relisted by directly changing topic.visible,
instead of going via TopicStatusUpdater.
To improve tracking we:
* Introduce a visibility_reason_id to topic which functions
in a similar way to hidden_reason_id on post, this column is
set from the various places we change topic visibility
* Fix Post#unhide! which was directly modifying topic.visible,
instead we use TopicStatusUpdater which sets visibility_reason_id
and also makes a small action post
* Show the reason topic visibility changed when hovering the
unlisted icon in topic status on topic titles
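As a sketch, the relist path now looks like this (the surrounding variables are illustrative):
```ruby
# Before (sketch): relisting by mutating the column directly left no trail.
#   topic.update_column(:visible, true)

# After (sketch): go through TopicStatusUpdater, which sets
# topic.visibility_reason_id and creates a small action post.
TopicStatusUpdater.new(topic, Discourse.system_user).update!("visible", true)
```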
In a large forum with millions of users and millions of user_fields,
updating the list of dropdown user field options can now result in a
502 due to the large number of fields.
This commit moves the indexing into a job.
This ensures we only ever store correct post and topic timing when the client
notifies.
Prior to this change, we would blindly trust the client.
Additionally this has error correction code that will correct the last seen
post number when you visit a topic with incorrect timings.
This commit addresses an issue for sites where secure_uploads
is turned on after the site has been operating without it for
some time.
When uploads are linked when they are used inside a post,
we were setting the access_control_post_id unconditionally
if it was NULL to that post ID and secure_uploads was true.
However, this causes issues if an upload has been used in a
few different places, especially if it was previously
used in a PM and marked secure: we end up with a case of
the upload using a public post for its access control, which
causes URLs in the post to not use the /secure-uploads/ path,
breaking things like image uploads.
We should only set the access_control_post_id if the post is the first time the
upload is referenced so it cannot hijack uploads from other places.
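A sketch of the guard (the helper query and names are illustrative, not the actual implementation):
```ruby
# Sketch: only claim the upload for access control if this post is the first
# place the upload is referenced, so a public post can no longer take over
# access control for an upload already used elsewhere (e.g. in a secure PM).
other_references =
  UploadReference
    .where(upload_id: upload.id)
    .where.not(target_type: "Post", target_id: post.id)

if SiteSetting.secure_uploads && upload.access_control_post_id.nil? && !other_references.exists?
  upload.update!(access_control_post_id: post.id)
end
```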
When lazy load categories is enabled, categories should be loaded with
user activity items and drafts because the categories may not be
preloaded on the client side.
This method name is a bit confusing; with_secure_uploads implies
it may return a block or something with the uploads of the post,
and has_secure_uploads implies that it's checking whether the post
is linked to any secure uploads.
should_secure_uploads? communicates the true intent of this method --
which is to say whether uploads attached to this post should be
secure or not.
* DEV: Add `topic_embed_import_create_args` plugin modifier
This modifier allows a plugin to change the arguments used when creating
a new topic for an imported article.
For example: let's say you want to prepend "Imported: " to the title of
every imported topic. You could use this modifier like so:
```ruby
# In your plugin's code
plugin.register_modifier(:topic_embed_import_create_args) do |args|
args[:title] = "Imported: #{args[:title]}"
args
end
```
In this example, the modifier is prepending "Imported: " to the `title` in the `create_args` hash. This modified title would then be used when the new topic is created.
This PR improves the performance of the `most_replied_to_users` method on the `UserSummary` model.
### Old Query
```ruby
post_query
.joins(
"JOIN posts replies ON posts.topic_id = replies.topic_id AND posts.reply_to_post_number = replies.post_number",
)
# We are removing replies by @user, but we can simplify this by using the user_id on the posts.
.where("replies.user_id <> ?", @user.id)
.group("replies.user_id")
.order("COUNT(*) DESC")
.limit(MAX_SUMMARY_RESULTS)
.pluck("replies.user_id, COUNT(*)")
.each { |r| replied_users[r[0]] = r[1] }
```
### Old Query with corrections
```ruby
post_query
.joins(
"JOIN posts replies ON posts.topic_id = replies.topic_id AND replies.reply_to_post_number = posts.post_number",
)
# Remove replies by @user but instead look on loaded posts (we do this so we don't count self replies)
.where("replies.user_id <> posts.user_id")
.group("replies.user_id")
.order("COUNT(*) DESC")
.limit(MAX_SUMMARY_RESULTS)
.pluck("replies.user_id, COUNT(*)")
.each { |r| replied_users[r[0]] = r[1] }
```
### New Query
```ruby
post_query
.joins(
"JOIN posts replies ON posts.topic_id = replies.topic_id AND posts.reply_to_post_number = replies.post_number",
)
# Only include regular posts in our joins, this makes sure we don't have the bloat of loading private messages
.joins(
"JOIN topics ON replies.topic_id = topics.id AND topics.archetype <> 'private_message'",
)
# Only include visible post types, so exclude posts like whispers, etc
.joins(
"AND replies.post_type IN (#{Topic.visible_post_types(@user, include_moderator_actions: false).join(",")})",
)
.where("replies.user_id <> posts.user_id")
.group("replies.user_id")
.order("COUNT(*) DESC")
.limit(MAX_SUMMARY_RESULTS)
.pluck("replies.user_id, COUNT(*)")
.each { |r| replied_users[r[0]] = r[1] }
```
### Conclusion
`most_replied_to_users` was untested, so I introduced a test for the logic, and have confirmed that it passes on both the new query **AND** the old query.
Thank you @danielwaterworth for the debugging assistance.
When a user is manually deactivated, they should not be deleted by our background job that purges inactive users.
In addition, site settings keywords should accept an array of keywords.
Previously the problem check registry simply looked at the subclasses of ProblemCheck. This was causing some confusion in environments where eager loading is not enabled, as the registry would appear empty as a result of the classes never being referenced (and thus never loaded.)
This PR changes the approach to a more explicit one. I followed other implementations (bookmarkable and hashtag autocomplete.) As a bonus, this now has a neat plugin entry point as well.
This enables the following in Discourse AI
```
plugin.register_modifier(:chat_allowed_bot_user_ids) do |user_ids, guardian|
if guardian.user
mentionables = AiPersona.mentionables(user: guardian.user)
allowed_bot_ids = mentionables.map { |mentionable| mentionable[:user_id] }
user_ids.concat(allowed_bot_ids)
end
user_ids
end
```
Some bots with an id < 0 need to be discoverable in search, otherwise people cannot talk to them.
---------
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
When "lazy load categories" is enabled and parent_category_id was set,
the query fetching categories contained a contradiction filtering both
by parent_category_id and parent_category_id = NULL.
In #26122 we promoted all problem checks defined as class methods on AdminDashboardData to their own first-class ProblemCheck instances.
This PR continues that by promoting problem checks that are implemented as blocks as well. This includes updating a couple plugins that have problem checks.
We never use that information, and this also fixes an issue with the BCC plugin, which ends up triggering a rate limit because we were publishing a "NEW_PRIVATE_MESSAGE" to the user sending the BCC for every recipient 💥
Internal - t/118283
This commit makes it so the site settings filter controls and
the list of settings input editors themselves can be used elsewhere
in the admin UI outside of /admin/site_settings.
This allows us to provide more targeted groups of settings in different
UI areas where it makes sense to provide them, such as on plugin pages.
You could open a single page for a plugin where you can see information
about that plugin, change settings, and configure it with custom UIs
in the one place.
In future we will do this in "config areas" for other parts of the
admin UI.
In AdminDashboardData we have a bunch of problem checks implemented as methods on that class. This PR absolves it of the responsibility by promoting each of those checks to a first class ProblemCheck. This way each of them can have their own priority and arbitrary functionality can be isolated in its own class.
Think "extract class" refactoring over and over. Since they were all moved we can also get rid of the @@problem_syms class variable which was basically the old version of the registry now replaced by ProblemCheck.realtime.
In addition AdminDashboardData::Problem value object has been entirely replaced with the new ProblemCheck::Problem (with compatible API).
Lastly, I added some RSpec matchers to simplify testing of problem checks and provide helpful error messages when assertions fail.
There are a couple of reasons for this.
The first one is practical, and related to eager loading. Since /lib is not eager loaded, when the application boots, ProblemCheck["identifier"] will be nil because the child classes aren't loaded.
The second one is more conceptual. There turns out to be a lot of inter-dependencies between the parts of the problem check system that live in /app and the parts that live in /lib, which probably suggests it should all go in /app.
Why this change?
There are two problematic queries in question here when loading
notifications in various tabs in the user menu:
```
SELECT "notifications".*
FROM "notifications"
LEFT JOIN topics ON notifications.topic_id = topics.id
WHERE "notifications"."user_id" = 1338 AND (topics.id IS NULL OR topics.deleted_at IS NULL)
ORDER BY notifications.high_priority AND NOT notifications.read DESC,
NOT notifications.read AND notifications.notification_type NOT IN (5,19,25) DESC,
notifications.created_at DESC
LIMIT 30;
```
and
```
EXPLAIN ANALYZE SELECT "notifications".*
FROM "notifications"
LEFT JOIN topics ON notifications.topic_id = topics.id
WHERE "notifications"."user_id" = 1338
AND (topics.id IS NULL OR topics.deleted_at IS NULL)
AND "notifications"."notification_type" IN (5, 19, 25)
ORDER BY notifications.high_priority AND NOT notifications.read DESC, NOT notifications.read DESC, notifications.created_at DESC LIMIT 30;
```
For a particular user, the queries take about 40ms and 26ms
respectively on one of our production instances, where the user has 10K notifications and the site has 600K notifications in total.
What does this change do?
1. Adds the `index_notifications_user_menu_ordering` index to the `notifications` table, which is
   indexed on `(user_id, (high_priority AND NOT read) DESC, (NOT read) DESC, created_at DESC)`.
2. Adds a second index, `index_notifications_user_menu_ordering_deprioritized_likes`, to the `notifications`
   table, which is indexed on `(user_id, (high_priority AND NOT read) DESC, (NOT read AND notification_type NOT IN (5,19,25)) DESC, created_at DESC)`. Note that we have to hardcode the like-typed notification types here as they are used in an ordering clause.
With the two indexes above, both queries complete in roughly 0.2ms. While I acknowledge that there will be some overhead on insert, update, or delete operations, I believe this trade-off is worth it, since viewing notifications in the user menu is at the core of using a Discourse forum and we should optimise this experience as much as possible.
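For illustration, a migration sketch for the first index based on the expressions above (the actual migration, class name, and Rails version annotation may differ):
```ruby
# Sketch: expression index matching the user menu ORDER BY, so Postgres can
# walk the index instead of sorting the user's notifications on every load.
class AddUserMenuOrderingIndexToNotifications < ActiveRecord::Migration[7.0]
  def up
    execute <<~SQL
      CREATE INDEX index_notifications_user_menu_ordering
      ON notifications (
        user_id,
        (high_priority AND NOT read) DESC,
        (NOT read) DESC,
        created_at DESC
      )
    SQL
  end

  def down
    execute "DROP INDEX index_notifications_user_menu_ordering"
  end
end
```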
Currently, the trust level method calculates the trust level based on the maximum value from:
- locked trust level
- group automatic trust level
- previously granted trust level by admin
https://github.com/discourse/discourse/blob/main/lib/trust_level.rb#L33
Let's say the user belongs to groups with automatic trust level 1 and in the meantime meets all criteria to get trust level 2.
Each time a user is removed from a group with automatic trust_level 1, they will be downgraded to trust_level 1 and then promoted back to trust_level 2
120a2f70a9/lib/promotion.rb (L142)
This will cause duplicated promotion messages.
Therefore, we have to check if the user meets the criteria before downgrading.
Why this change?
Prior to this change, the `CategoryList#find_relevant_topics` method was
loading and allocating all `CategoryFeaturedTopic` records in the
database to eventually only just use its `category_id` and `topic_id`
column. On a site with many `CategoryFeaturedTopic` records, the loading
of the ActiveRecord objects is a source of bottleneck.
The other problem with the `CategoryList#find_relevant_topics` method is
that it is unconditionally loading all records from the database even if
the user does not have access to the category. This again is wasteful.
What does this change do?
This commit makes it such that `CategoryList#find_relevant_topics` is
called only after `CategoryList#find_categories` in the `CategoryList#initialize`
method so that we can filter featured topics against categories that the
user has access to.
The second change is that, instead of loading `CategoryFeaturedTopic` records, we make an
inner join against the `topics` table and skip any allocation of
`CategoryFeaturedTopic` ActiveRecord objects.
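A sketch of the join-and-pluck approach (illustrative; `visible_category_ids` is assumed and the actual query in the commit may differ):
```ruby
# Sketch: fetch only the columns we need, scoped to categories the user can
# see, without instantiating CategoryFeaturedTopic ActiveRecord objects.
featured =
  CategoryFeaturedTopic
    .joins("INNER JOIN topics ON topics.id = category_featured_topics.topic_id")
    .where(category_id: visible_category_ids)
    .pluck(:category_id, :topic_id)
```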
It's February 29th, you know what that means...date-based flaky specs! If today is
February 29th 2024:
```
freeze_time(1.year.ago) -> Tue, 28 Feb 2023 01:38:42.732875000 UTC +00:00
```
Then
```
freeze_time(1.year.from_now) -> Wed, 28 Feb 2024 01:38:42.732875000 UTC +00:00
```
So then our "now" for the insert query ends up being "yesterday"
```
WHERE topic_hot_scores.topic_id IS NULL
AND topics.deleted_at IS NULL
AND topics.archetype <> :private_message
AND topics.created_at <= :now
```
As part of problem checks refactoring, we're moving some data to be DB backed. In this PR it's the tracking of problem check execution. When was it last run, when was the last problem, when should it run next, how many consecutive checks had problems, etc.
This allows us to implement the perform_every feature in scheduled problem checks for checks that don't need to be run every 10 minutes.
Why this change?
This change adds validation for the default value of `type: objects` theme
settings when a theme settings field is uploaded. This helps the theme
author ensure that the objects which they specify in the default
value adhere to the schema which they have declared.
When an error is encountered in one of the objects, the error
message will look something like:
`"The property at JSON Pointer '/0/title' must be at least 5 characters
long."`
We use a JSON Pointer to reference the property in the object, which is
something most json-schema validators use as well.
What does this change do?
1. This commit once again changes the shape of the hash returned by
`ThemeSettingsObjectValidator.validate`. Instead of using the
property name as the key as we did previously, we have decided to avoid
multiple levels of nesting and instead use a JSON Pointer as the key,
which helps to simplify the implementation.
2. Introduces `ThemeSettingsObjectValidator.validate_objects` which
returns an array of validation error messages for all the objects
passed to the method.
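A hypothetical usage sketch (the method name comes from the change above; the argument names and exact return shape are assumptions):
```ruby
# Sketch: validate a collection of objects against a declared schema and get
# back a flat array of error messages keyed by JSON Pointer.
errors = ThemeSettingsObjectValidator.validate_objects(schema: schema, objects: objects)
# => ["The property at JSON Pointer '/0/title' must be at least 5 characters long."]
```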
Previously, problem checks were all added as either class methods or blocks in AdminDashboardData. Another set of class methods was used to add and run problem checks.
As of this PR, problem checks are promoted to first-class citizens. Each problem check receives its own class. This class of course contains the implementation for running the check, but also configuration items like retry strategies (for scheduled checks.)
In addition, the parent class ProblemCheck also serves as a registry for checks. For example we can get a list of all existing check classes through ProblemCheck.checks, or just the ones running on a schedule through ProblemCheck.scheduled.
After this refactor, the task of adding a new check is significantly simplified. You add a class that inherits ProblemCheck, you implement it, add a test, and you're good to go.
Why this change?
Firstly, note that this is not a security commit because this feature is
still in development and should not be used anywhere.
The reason we want to set a limit here is to greatly reduce the
possibility of a DoS attack in the future via `ThemeSetting` where
someone would set an arbitrarily large JSON string in
`ThemeSetting#json_value` and causing the server to run out of resources
trying to serialize/deserialize the value.
What does this change do?
Adds an ActiveRecord validation to ensure that the bytesize of the json
string being stored is smaller than or equal to 0.5mb. We believe 0.5mb
is a decent limit for now but we can review the limit in the future if
we believe it is too small.
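A sketch of the kind of validation described (the 0.5 MB limit comes from the description above; the validation method and error message are illustrative):
```ruby
# Sketch: reject json_value payloads whose serialized size exceeds 0.5 MB.
class ThemeSetting < ActiveRecord::Base
  MAX_JSON_VALUE_SIZE = 0.5.megabytes

  validate :json_value_size

  private

  def json_value_size
    return if json_value.blank?
    if json_value.to_json.bytesize > MAX_JSON_VALUE_SIZE
      errors.add(:json_value, "size exceeds the 0.5 MB limit")
    end
  end
end
```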
Why this change?
The logic for validating a theme setting's value and default value was
not consistent as each part of the code would implement its own logic.
This is not ideal as the default value may be validated differently than
when we are setting a new value. Therefore, this commit seeks to
refactor all the validation logic for a theme setting's value into a
single service class.
What does this change do?
Introduce the `ThemeSettingsValidator` service class which holds all the
necessary helper methods required to validate a theme setting's value.
When "lazy load categories" is enabled, only the categories present in
the sidebar are preloaded. This is insufficient because the parent
categories are necessary too for the sidebar to be rendered properly.
The strict-dynamic CSP directive is supported in all our target browsers, and makes for a much simpler configuration. Instead of allowlisting paths, we use a per-request nonce to authorize `<script>` tags, and then those scripts are allowed to load additional scripts (or add additional inline scripts) without restriction.
This becomes especially useful when admins want to add external scripts like Google Tag Manager, or advertising scripts, which then go on to load a ton of other scripts.
All script tags introduced via themes will automatically have the nonce attribute applied, so it should be zero-effort for theme developers. Plugins *may* need some changes if they are inserting their own script tags.
This commit introduces a strict-dynamic-based CSP behind an experimental `content_security_policy_strict_dynamic` site setting.
Reactions needs this to be able to filter out likes received
actions, where there is also an associated reaction, since
now most reactions also count as a like.
Fixes an issue where private topics that are quoted have an incorrectly formatted url when using a subfolder install.
This update returns a relative url that includes the base_path rather than a combination of base_url + base_path.
This commit changes `max_image_megapixels` to be used
as-is, without multiplying by 2 to give extra leeway.
We found in reality this was just causing confusion
for admins, especially with the already permissive
40MP default.
When we insert into the hot set, we add things with a score of 0.
This means that if hot has more than batch-size items in it with a score, then the 0s don't get an initial score.
This corrects the situation by always ensuring we re-score:
1. batch-size high-scoring topics
2. (new) batch-size recently bumped topics
* Update spec/models/topic_hot_scores_spec.rb
Co-authored-by: Isaac Janzen <50783505+janzenisaac@users.noreply.github.com>
---------
Co-authored-by: Isaac Janzen <50783505+janzenisaac@users.noreply.github.com>
Why this change?
This commit introduces an experimental `type: objects` theme setting
which will allow theme developers to store a collection of objects as
JSON in the database. Currently, the feature is still in development and
this commit is simply setting up the ground work for us to introduce the
feature in smaller pieces.
What does this change do?
1. Adds a `json_value` column as `jsonb` data type to the `theme_settings` table.
2. Adds an `experimental_objects_type_for_theme_settings` site setting to
determine whether `ThemeSetting` records with the `objects` data
type can be created.
3. Updates `ThemeSettingsManager` to support read/write access from the
`ThemeSettings#json_value` column.
Why this change?
This is caused by a regression in
59839e428f, where we stopped saving the
`Theme` object because it was unnecessary. However, it resulted in the
`after_save` callback not being called and hence
`Theme#update_javascript_cache!` not being called. As a result, some
sites were reporting that after running a theme migration, the defaults
for the theme settings were used instead of the settings overrides
stored in the database.
What does this change do?
Add a call to `Theme#update_javascript_cache!` after running theme
migrations.
1. Don't show the visited line for the hot filter, it is in random order
2. Don't count likes on non-regular posts (e.g. whispers / small actions)
3. Don't count participants in non-regular posts
1. Serial likers will just like a bunch of posts on the same topic, which will
heavily inflate the hot score. To avoid artificial "heat" generated by one user, only count
the first like on the topic within the recent_cutoff range per topic.
2. When looking at recent topics, prefer "unique likers"; defer to total likes on
older topics because we do not have an easy count of unique likers there.
3. Stop taking 1 off like_count, it is not needed - platforms like Reddit
allow you to like your own post, so they need to remove it.
Why this change?
Returning an array makes it hard to immediately retrieve a setting by
name and makes the retrieval an O(N) operation. By returning a hash instead,
we make it easier to look up a setting by name, and retrieval is
O(1) as well.
Internal links always notify and add internal connections in topics.
This adds a special feature that lets you append `?silent=true` to a link
to have it excluded from:
1. Notifications - users will not be notified for these links
2. Post links below posts in the UI
This is specifically useful for large reports where adding all these connections
just results in noise.
For performance reasons we don't automatically add fabricated users to trust level auto-groups. However, when explicitly passing a trust level to the fabricator, in 99% of cases it means that trust level is relevant for the test, and we need the groups.
This change makes it so that when a trust level is explicitly passed to the fabricator, the auto-groups are refreshed. There's no longer a need to also pass refresh_auto_groups: true, which means clearer tests, fewer mistakes, and less confusion.
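In specs, the change reads like this (sketch):
```ruby
# Before: auto-groups had to be refreshed explicitly.
#   Fabricate(:user, trust_level: TrustLevel[2], refresh_auto_groups: true)

# After: passing a trust level is enough; the fabricator refreshes the
# user's automatic groups on its own.
user = Fabricate(:user, trust_level: TrustLevel[2])
```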
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_level_to_tag_topics site setting to tag_topic_allowed_groups.
We have all these calls to Group.refresh_automatic_groups! littered throughout the tests, including tests that are seemingly unrelated to groups. This is because automatic group memberships aren't fabricated when making a vanilla user. There are two places where you'd want to use this:
1. You have fabricated a user that needs a certain trust level (which is now based on group membership.)
2. You need the system user to have a certain trust level.
In the first case, we can pass refresh_auto_groups: true to the fabricator instead. This is a more lightweight operation that only considers a single user, instead of all users in all groups.
The second case is no longer a thing after #25400.
We want to exclude the system user from group user counts, since intuitively admins wouldn't include them.
Originally this was accomplished by booting said system user from the groups, but this is causing problems, because the system user needs TL group membership to perform certain tasks.
After this PR, system user is still in the TL groups, but excluded when refreshing the user count.
Some preparatory refactoring as we're working on TL groups for the system user. On User we have a scope #human_users to exclude the system user, DiscoBot, etc. This PR adds the same scope (delegated to User) on Group.
This is a temporary fix to address an issue where the
system user is losing its automatic groups when the server
is running. If any auto groups are provided, and the user is
a system user, then we return true. The system user is admin,
moderator, and TL4, so they usually have all auto groups.
We can remove this when we get to the bottom of why the auto
groups are being deleted.
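A sketch of the temporary guard described above (method and argument names are illustrative; only the behaviour comes from this change):
```ruby
# Sketch: short-circuit the auto-group check for the system user, who is
# admin, moderator, and TL4 and would normally have all auto groups anyway.
def member_of_any_auto_group?(user, auto_group_ids)
  return true if auto_group_ids.present? && user.id == Discourse::SYSTEM_USER_ID
  GroupUser.where(user_id: user.id, group_id: auto_group_ids).exists?
end
```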
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_to_post_links site setting to post_links_allowed_groups.
This isn't used by any of our plugins or themes, so very little fallout.
The old strategy used to load 25 categories at a time, including the
subcategories. The new strategy loads 20 parent categories and at most
5 subcategories for each parent category, for a maximum of 120
categories in total.