Followup to e37fb3042d,
in some cases we cannot get git information for the
plugin folder (e.g. permission issues), so we should
only try to get information about it if
commit_hash is present.
Reverts
- DEV: maxmind license checking failing tests #24534
- UX: Show if MaxMind key is missing on IP lookup #18993
These changes are leading to surprising results; our logs are now filling up with warnings in dev environments.
The change needs to be redone.
This improves the implementation of #18993
1. The error message displayed to the user is clearer
2. `open_db` will also be called even if the license key is blank, as it was previously
3. This in turn means there is no need to keep stubbing `maxmind_license_key`
This breaks the `plugin:install_all_gems` Rake task when used before
Redis is running. Need to go back to the drawing board.
This reverts commit 189aa5fa4e.
This change converts the `email_in_min_trust` site setting to
`email_in_allowed_groups`.
See: https://meta.discourse.org/t/283408
- Hides the old setting
- Adds the new site setting
- Adds a deprecation warning
- Updates to use the new setting
- Adds a migration to fill in the new setting if the old setting was
changed
- Adds an entry to the site_setting.keywords section
- Updates tests to account for the new change
After a couple of months we will remove the
`email_in_min_trust` setting entirely.
Internal ref: /t/115696
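For reference, the fill-in migration for this kind of trust-level-to-group conversion can look roughly like the sketch below. The auto group IDs (trust level N assumed to map to group `10 + N`) and the `20` data type for group_list settings are assumptions, not copied from the actual migration.
```ruby
# frozen_string_literal: true

class FillEmailInAllowedGroupsBasedOnDeprecatedSetting < ActiveRecord::Migration[7.0]
  def up
    old_trust_level =
      DB.query_single(
        "SELECT value FROM site_settings WHERE name = 'email_in_min_trust'",
      ).first

    if old_trust_level.present?
      # Trust level auto groups are assumed to be 10..14 (TL0..TL4), and 20
      # is assumed to be the group_list data type.
      DB.exec(<<~SQL, value: "1#{old_trust_level}")
        INSERT INTO site_settings(name, value, data_type, created_at, updated_at)
        VALUES('email_in_allowed_groups', :value, 20, NOW(), NOW())
      SQL
    end
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```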
Why this change?
This regressed in dec68d780c where the
commit assumed that plugin gems are always installed when the
`plugin:install_all_gems` Rake task is run, since it would run our Rails
initializers which activate plugins and install the gems. However, this
assumption only holds true when the `LOAD_PLUGINS` environment variable is present and set to
`1`.
What does this change do?
This commit changes the `plugin:install_all_gems` to load the Rails
environment with `LOAD_PLUGINS` set to `1` such that the plugin gems
will be installed as part of our initialization process for the app.
The commit also removes the `plugin:install_gems` Rake task which is
currently a noop and does not seem to be used anywhere.
Why this change?
Similar to d0117ff6e3, `plugins:update_all` spends most of its time waiting
on the network. On my local machine, this takes up to 2 mins when I have
all the official plugins installed. On a 32-core machine, the total
time is cut down to 4 seconds.
What does this change do?
1. Moves the logic in the `plugin:update` Rake task into a method.
2. Updates the `plugin:update` and `plugin:update_all` tasks to rely on the
new method.
3. Wraps the method call to update a plugin in `plugin:update_all` in a
`Concurrent::Promise`
This change also adds the `--quiet` option to the `git pull` command
since the `git pull` output is just noise 99% of the time.
Followup to e37fb3042d
Some plugins like discourse-ai and discourse-saml do not
nicely change from kebab-case to Title Case (e.g. Ai, Saml),
and anyway this method of getting the plugin name is not
translated either.
Better to use the plugin setting category if it exists,
since that is written by a human and is translated.
* DEV: Convert approve_new_topics_unless_trust_level to groups
This change converts the `approve_new_topics_unless_trust_level` site
setting to `approve_new_topics_unless_allowed_groups`.
See: https://meta.discourse.org/t/283408
- Hides the old setting
- Adds the new site setting
- Adds a deprecation warning
- Updates to use the new setting
- Adds a migration to fill in the new setting if the old setting was
changed
- Adds an entry to the site_setting.keywords section
- Updates tests to account for the new change
After a couple of months we will remove the
`approve_new_topics_unless_trust_level` setting entirely.
Internal ref: /t/115696
* add missing translation
* Add keyword entry
* Add migration
Why this change?
`plugin:install_all_official` is quite slow at the moment taking roughly
1 minute and 51 seconds on my machine. Since most of the time is spent
waiting on the network, we can actually speed up the Rake task
significantly by executing the cloning concurrently. With an 8-core
machine, cloning all plugins will only take 15 seconds.
What does this change do?
This change wraps the `git clone` operation in the
`plugin:install_all_official` Rake task in a `Concurrent::Promise` which
basically runs the `git clone` operation in a Thread. The `--quiet`
option has also been added to `git clone` since running stuff
concurrently messes up the output. That could be fixed but it has been
determined to be not worth it since the output from `git clone` is
meaningless to us.
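The shape of the change is roughly the following sketch; the plugin names, URL and destination paths are illustrative, not the actual Rake task code.
```ruby
require "concurrent"

def clone_plugin(url, destination)
  # --quiet keeps concurrent clones from interleaving noisy output
  system("git", "clone", "--quiet", url, destination, exception: true)
end

promises =
  %w[discourse-solved discourse-math].map do |name|
    Concurrent::Promise.execute do
      clone_plugin("https://github.com/discourse/#{name}.git", "plugins/#{name}")
    end
  end

# value! blocks until each clone finishes and re-raises any failure
promises.each(&:value!)
```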
Why this change?
There are instances where we would like to customize what the
`docker:test:setup` Rake task does.
What does this change do?
Adds a bunch of env variables that could be set to customize what the
`docker:test:setup` Rake task does.
We were throwing ArgumentError in UrlHelper.normalized_encode,
but it was incorrect -- we were passing ArgumentError.new
2 arguments, which is not supported. Fix this and include a hint
of which URL is causing the issue for debugging.
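The fix boils down to the pattern below; this is a simplified sketch rather than the exact UrlHelper code: since `ArgumentError.new` only accepts a single message argument, the URL hint has to live inside that message.
```ruby
require "addressable/uri"

def normalized_encode(url)
  Addressable::URI.normalized_encode(url)
rescue Addressable::URI::InvalidURIError
  # A single message argument, with the offending URL embedded for debugging,
  # instead of the unsupported two-argument ArgumentError.new call.
  raise ArgumentError, "Invalid URL: #{url}"
end
```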
Why this change?
We support a `USE_TURBO` environment variable which tells the
`docker:test` rake task to run rspec tests in parallel. However, this
currently does not apply to system tests.
What does this change do?
This commit runs system specs for both core and plugins using
`./bin/turbo_rspec` when the `USE_TURBO` environment variable is present. Note
that when running system specs, we will only spawn X number of test
processes where X is half the number of available CPU cores. This is
done because we have to leave CPU resources for the chrome processes
that will be created.
This change converts the `approve_unless_trust_level` site setting to
`approve_unless_allowed_groups`.
See: https://meta.discourse.org/t/283408
- Adds the new site setting
- Adds a deprecation warning
- Updates core to use the new settings.
- Adds a migration to fill in the new setting if the old setting was
changed
- Adds an entry to the site_setting.keywords section
- Updates many tests to account for the new change
After a couple of months we will remove the `approve_unless_trust_level`
setting entirely.
Internal ref: /t/115696
* Remove checkmark for official plugins
* Add author for plugin, which is By Discourse for all discourse
and discourse-org github plugins
* Link to meta topic instead of github repo
* Add experimental flag for plugin metadata and show this as a
badge on the plugin list if present
---------
Co-authored-by: chapoi <101828855+chapoi@users.noreply.github.com>
This commit adds a new `search_default_sort_order` site setting,
set to "relevance" by default, that controls the default sort order
for the full page /search route.
If the user changes the order in the dropdown on that page, we remember
their preference automatically, and it takes precedence over the site
setting as a default from then on. This way people who prefer e.g.
Latest Post as their default can make it so.
Why this change?
As the number of themes which the Discourse team supports officially
grows, we want to ensure that changes made to Discourse core do not
break these themes. As such, we are adding a step to our Github actions
test job to run the QUnit tests for all official themes.
What does this change do?
This change adds a new job to our tests Github actions workflow to run the QUnit
tests for all official themes. This is achieved with the following
changes:
1. Update `testem.js` to rely on the `THEME_TEST_PAGES` env variable to set the
`test_page` option when running theme QUnit tests with testem. The
`test_page` option [allows an array to be specified](https://github.com/testem/testem#multiple-test-pages) such that tests for
multiple pages can be run at the same time. We are relying on an ENV variable
because the `testem` CLI does not support passing a list of pages
to the `--test_page` option.
2. Support a `/testem-theme-qunit/:testem_id/theme-qunit` Rails route in the development environment. This
is done because testem prefixes the configured `test_page` URL path with a unique ID (see the route sketch after this list).
This is problematic for us because we proxy all testem requests to the
Rails server and testem's proxy configuration option does not allow us
to easily rewrite the URL to remove the prefix. Therefore, we configure a proxy in testem to prefix `theme-qunit` requests with
`/testem-theme-qunit` which can then be easily identified by the Rails server and routed accordingly.
3. Update `qunit:test` to support a `THEME_IDS` environment variable
which will allow it to run QUnit tests for multiple themes at the
same time.
4. Support `bin/rake themes:qunit[ids,"<theme_id>|<theme_id>"]` to run
the QUnit tests for multiple themes at the same time.
5. Adds a `themes:qunit_all_official` Rake task which runs the QUnit
tests for all the official themes.
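The route from point 2 can be pictured with a sketch like this; the controller/action pairing is an assumption rather than the exact `config/routes.rb` entry.
```ruby
# Development only: testem prefixes its configured test_page URL with a
# unique id, so accept that prefix explicitly and route the request to the
# usual theme-qunit handler.
if Rails.env.development?
  get "/testem-theme-qunit/:testem_id/theme-qunit" => "qunit#theme"
end
```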
Why this change?
By default the `db:create` Rake task in activerecord creates the
databases for both the development and test environment. This, while
seemingly odd, is by design in Rails. In order to avoid creating the
test database, Rails supports the `SKIP_TEST_DATABASE` environment
variable which we should respect when creating the multisite test
database.
When we started using NumberField for integer site settings
in e113eff663, we did not end up
passing down a min/max value for the integer to the field, which
meant that for some fields where negative numbers were allowed
we were not accepting that as valid input.
This commit passes down the min/max options from the server for
integer settings then in turn passes them down to NumberField.
c.f. https://meta.discourse.org/t/delete-user-self-max-post-count-not-accepting-1-to-disable/285162
Why this change?
As the number of themes which the Discourse team supports officially
grows, we want to ensure that changes made to Discourse core do not
break these themes. As such, we are adding a step to our Github actions
test job to run the system tests for all official themes.
What does this change do?
This change adds a step to our Github actions test job to run the system
tests for all official themes. This is achieved by the introduction of
the `themes:install_all_official` Rake task which installs all the
themes that are officially supported by the Discourse team.
- Remove vendored copy
- Update Rails implementation to look for language definitions in node_modules
- Use webpack-based dynamic import for hljs core
- Use browser-native dynamic import for site-specific language bundle (and fallback to webpack-based dynamic import in tests)
- Simplify markdown implementation to allow all languages into the `lang-{blah}` className
- Now that all languages are passed through, resolve aliases at runtime to avoid the need for the pre-built `highlightjs-aliases` index
Previously, the app HTML served by the Ember-CLI proxy was generated based on a 'bootstrap json' payload generated by Rails. This inevitably leads to differences between the Rails HTML and the Ember-CLI HTML.
This commit overhauls our proxying strategy. Now, we totally ignore the ember-cli `index.html` file. Instead, we take the full HTML from Rails and surgically replace script URLs based on a `data-discourse-entrypoint` attribute. This should be faster (only one request to Rails), more robust, and less confusing for developers.
Followup to fe05fdae24
For consistency with other S3 settings, make the global setting
the same name as the site setting and use SiteSetting.Upload
too so it reads from the correct place.
This adds the ability to collect stats without exposing them
among other stats via API.
The most important thing I wanted to achieve is to provide
an API where stats are not exposed by default, and a developer
has to explicitly specify that they should be
exposed (`expose_via_api: true`). Implementing an opposite
solution would be simpler, but that's less safe in terms of
potential security issues.
When working on this, I had to refactor the current solution.
I would go even further with the refactoring, but the next steps
seem to be going too far in changing the solution we have,
and that would also take more time. Two things that can be
improved in the future:
1. Data structures for holding stats can be further improved
2. Core stats are hard-coded in the About template (it's hard
to fix it without correcting data structures first, see point 1):
63a0700d45/app/views/about/index.html.erb (L61-L101)
The most significant refactorings are:
1. Introducing the `Stat` model
2. Aligning the way the core and the plugin stats are registered
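For illustration, registering a plugin stat under this API might look like the sketch below; the `register_stat` name and the shape of the returned hash are assumptions based on the description above.
```ruby
# plugin.rb -- hypothetical example
register_stat("chat_messages", expose_via_api: true) do
  { last_day: 12, "7_days" => 104, "30_days" => 420, count: 1_503 }
end

# Without expose_via_api: true, the stat stays internal and is not
# serialized into the public API payload.
register_stat("internal_only_counter") { { count: 7 } }
```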
There was a registry for preloaded site categories and a new one has
been introduced recently for categories serialized through a
CategoryList.
Having two registries created a lot of friction for developers and this
commit merges them into a single one, providing a unified API.
There is an edge case where the following occurs:
1. The user sets a bookmark reminder on a post/topic
2. The post/topic is changed to a PM before or after the reminder
fires, and the notification remains unread by the user
3. The user opens their bookmark reminder notification list
and they can still see the notification even though they cannot
access the topic anymore
There is a very low chance for information leaking here, since
the only thing that could be exposed is the topic title if it
changes to something sensitive.
This commit filters the bookmark unread notifications by using
the bookmarkable can_see? methods and also prevents sending
reminder notifications for bookmarks the user can no longer see.
Followup to 5fc1586abf
There are certain cases where the tos_url and privacy_policy_url
can end up with a "nil" value in the Discourse.urls_cache.
The cause of this is unclear, but it seems to behave differently
between doing this caching in the rails console and the running
server.
To avoid this we can just not store anything that looks like nil
in the cache; we can delete the cache keys entirely if we don't
need them anymore.
When quoting a chat message in a post, if that message contains a mention,
that mention should be ignored. But we've been detecting them and sending
notifications to users. This PR fixes the problem. Since this fix is for
the chat plugin, I had to introduce a new API for plugins:
```ruby
# We strip posts before detecting mentions, oneboxes, attachments etc.
# We strip those elements that shouldn't be detected. For example,
# a mention inside a quote should be ignored, so we strip it off.
# Using this API plugins can register their own post strippers.
def register_post_stripper(&block)
end
```
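A hypothetical plugin usage; the block arguments are assumptions, not a documented signature.
```ruby
# plugin.rb -- strip quoted chat transcripts before mention/onebox
# detection runs on the cooked fragment.
register_post_stripper do |doc, post|
  doc.css(".chat-transcript").remove
end
```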
Previously, we were parsing webpack JS chunk filenames from the HTML files which ember-cli generates. This worked ok for simple entrypoints, but falls apart once we start using async imports(), which are not included in the HTML.
This commit uses the stats plugin to generate an assets.json file, and updates Rails to parse it instead of the HTML. Caching on the Rails side is also improved to avoid reading from the filesystem multiple times per request in development.
Co-authored-by: Godfrey Chan <godfreykfc@gmail.com>
This commit adds an `enable_s3_transfer_acceleration` site setting,
which is hidden to begin with. We are adding this because in certain
regions, using https://aws.amazon.com/s3/transfer-acceleration/ can
drastically speed up uploads, sometimes as much as 70% in certain
regions depending on the target bucket region. This is important for
us because we have direct S3 multipart uploads enabled everywhere
on our hosting.
To start, we only want this on the uploads bucket, not the backup one.
Also, this will accelerate both uploads **and** downloads, depending
on whether a presigned URL is used for downloading. This is the case
when secure uploads is enabled, not anywhere else at this time. To
enable the S3 acceleration on downloads more generally would be a
more in-depth change, since we currently store S3 Upload record URLs
like this:
```
url: "//test.s3.dualstack.us-east-2.amazonaws.com/original/2X/6/123456.png"
```
For acceleration, `s3.dualstack` would need to be changed to `s3-accelerate.dualstack`
here.
Note that for this to have any effect, Transfer Acceleration must be enabled
on the S3 bucket used for uploads per https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration-examples.html.
With Embroider, we can rely on async `import()` to do the splitting
for us.
This commit extracts from `pretty-text` all the parts that are
meant to be loaded async into a new `discourse-markdown-it` package
that is also a V2 addon (meaning that all files are presumed unused
until they are imported, aka "static").
Mostly I tried to keep the very discourse specific stuff (accessing
site settings and loading plugin features) inside discourse proper,
while the new package aims to have some resemblance of a general
purpose library, a MarkdownIt++ if you will. It is far from perfect
because of how all the "options" stuff work but I think it's a good
start for more refactorings (clearing up the interfaces) to happen
later.
With this, pretty-text and app/lib/text are mostly a kitchen sink
of loosely related text processing utilities.
After the refactor, a lot more of the code related to setting up the
engine is now loaded lazily, which should be a pretty nice win. I
also noticed that we are currently pulling in the `xss` library at
initial load to power the "sanitize" stuff, but I suspect with a
similar refactoring effort those usages can be removed too. (See
also #23790).
This PR does not attempt to fix the sanitize issue, but I think it
sets things up on the right trajectory for that to happen later.
Co-authored-by: David Taylor <david@taylorhq.com>
When Discourse first introduced brotli support, reverse-proxy/CDN support for passing through the accept-encoding header to our NGINX server was very poor. Therefore, a separate `/brotli_assets/...` path was introduced to serve the brotli assets. This worked well, but introduces additional complexity and inconsistencies.
Nowadays, Brotli encoding is well supported, so we don't need the separate paths any more. Requests can be routed to the asset `.js` URLs, and NGINX will serve the brotli/gzip version of the asset automatically.
The motivation of this PR is to remove our dependence on Ember's 'named outlets', which are removed in Ember 4+.
At a high-level, the changes can be summarized as:
- The top-level `discovery` route is totally emptied of all logic. The HTML structure of the template is moved into the `<Discovery::Layout />` component for use by child routes.
- `AbstractTopicRoute` and `AbstractCategoryRoute` routes now both lean on the `DiscoverySortableController` and associated template. This controller is where most of the logic from the old top-level `discovery` controller has ended up.
- All navigation controllers/templates have been replaced with components. `navigation/categories`, `navigation/category` and `navigation/default` were very similar, and so they've all been combined into `<Navigation::Default>`. `navigation/filter` gets its own component.
- The `discovery/topics` controller/template have been moved into a new `<Discovery::Topics>` component.
Various other parts of the app have been tweaked to support these changes, but I've tried to keep that to a minimum.
Anything from `<TopicList>` down is untouched, which should hopefully mean that a large proportion of topic-list-customizing themes are unaffected.
For more information, see https://meta.discourse.org/t/282816
For deprecated site settings, we log a warning when
the old setting is used. However when we convert all the client
settings to JSON, we are creating a lot of log noise like this:
> Deprecation notice: `SiteSetting.anonymous_posting_min_trust_level` has been deprecated.
We don't need to do this because we are just dumping the JSON.
We updated scheduled admin checks to run concurrently in their own jobs. The main reason for this was so that we can implement re-check functionality for especially flaky checks (e.g. group e-mail credentials check.)
This works in the following way:
1. The check declares its retry policy using class methods.
2. A block can be yielded to if there are problems, but before they are committed to Redis.
3. The job uses this block to either a) schedule a retry if there are any remaining or b) do nothing and let the check commit.
This commit introduces a new feature that allows theme developers to manage the transformation of theme settings over time. Similar to Rails migrations, the theme settings migration system enables developers to write and execute migrations for theme settings, ensuring a smooth transition when changes are required in the format or structure of setting values.
Example use cases for the theme settings migration system:
1. Renaming a theme setting.
2. Changing the data type of a theme setting (e.g., transforming a string setting containing comma-separated values into a proper list setting).
3. Altering the format of data stored in a theme setting.
All of these use cases and more are now possible while preserving theme setting values for sites that have already modified their theme settings.
Usage:
1. Create a top-level directory called `migrations` in your theme/component, and then within the `migrations` directory create another directory called `settings`.
2. Inside the `migrations/settings` directory, create a JavaScript file using the format `XXXX-some-name.js`, where `XXXX` is a unique 4-digit number, and `some-name` is a descriptor of your choice that describes the migration.
3. Within the JavaScript file, define and export (as the default) a function called `migrate`. This function will receive a `Map` object and must also return a `Map` object (it's acceptable to return the same `Map` object that the function received).
4. The `Map` object received by the `migrate` function will include settings that have been overridden or changed by site administrators. Settings that have never been changed from the default will not be included.
5. The keys and values contained in the `Map` object that the `migrate` function returns will replace all the currently changed settings of the theme.
6. Migrations are executed in numerical order based on the XXXX segment in the migration filenames. For instance, `0001-some-migration.js` will be executed before `0002-another-migration.js`.
Here's a complete example migration script that renames a setting from `setting_with_old_name` to `setting_with_new_name`:
```js
// File name: 0001-rename-setting.js
export default function migrate(settings) {
  if (settings.has("setting_with_old_name")) {
    settings.set("setting_with_new_name", settings.get("setting_with_old_name"));
  }
  return settings;
}
```
Internal topic: t/109980
Followup to b53449eac9, we cannot
generate the links to plugin admin pages in this way because it
depends on which plugins are installed; we would need to somehow
do it at runtime. Leaving it out for now; for people who need to
find these admin routes, the Ember Inspector extension for Chrome
can be used in the meantime.
NOTE: Most of this is experimental and will be removed at a later
time, which is why things like translations have not been added.
The new /admin-revamp UI uses a sidebar for admin nav. This initial
step adds a script to generate a map of all the current admin nav
into a format the sidebar can read. Then, people can experiment
with different changes to this structure.
The structure can then be edited from `/admin-revamp/config/sidebar-experiment`,
and it is saved to local storage so people can visually experiment with different ways
of showing the admin sidebar links.
Plugins can use a new modifier to change which site settings are hidden using the :hidden_site_settings modifier. For example:
```
register_modifier(:hidden_site_settings) do |hidden|
  (hidden + [:invite_only, :login_required]).uniq
end
```
This commit fixes an issue where clicking the default
"Take Action" option on a flag for a post doesn't always
end up with the post hidden.
This is because the "take_action" score bonus doesn’t take into account
the final score required to hide the post.
Especially with the `hide_post_sensitivity` site setting set to `low`
sensitivity, there is a likelihood the score needed to hide the post
won’t be reached.
Now, the default "Take Action" button has been changed to "Hide Post"
to reflect what is actually happening and the description has been
improved, and if "Take Action" is clicked we _always_ hide the post
regardless of score and sensitivity settings. This way the action reflects
the expectations of the user.
* FEATURE: Add keywords support for site_settings search
This change allows for a new `keywords` field that can be added to site
settings in order to help with searching. Keywords are not visible in
the UI, but site settings matching one of the contained keywords will
appear when searching for that keyword.
Keywords can be added for site settings inside of the
`config/locales/server.en.yml` file under the new `keywords` key.
```
site_settings:
  example_1: "fancy description"
  example_2: "another description"
  keywords:
    example_1: "capybara"
```
* Add keywords entry for a recently changed site setting and add system specs
* Use page.visit now that we have our own visit
Previously we were memoizing based on `defined?`, but the `clear_cache!` method was doing `@blah = nil`. That meant that after the cache was cleared, future calls to the memoized method would return `nil` instead of triggering a recalculation.
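An illustration of the bug with made-up names:
```ruby
class Example
  def blah
    # defined?-based memoization caches whatever @blah holds, nil included
    @blah = expensive_calculation unless defined?(@blah)
    @blah
  end

  def clear_cache!
    @blah = nil # @blah is still defined, so #blah now returns nil forever
  end

  private

  def expensive_calculation
    42
  end
end
```
Clearing with `remove_instance_variable(:@blah)` (or memoizing with `||=` when `nil` is never a valid cached value) avoids handing back the stale `nil`.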
The `message: :signup_not_allowed` option to the IP address validator does nothing, because the AllowedIpAddressValidator chooses one of either:
- `ip_address.blocked` or
- `ip_address.max_new_accounts_per_registration_ip`
internally. This means that the translation for this was also never used.
This PR removes the ineffectual option and the unused translation. It also moves the translated error messages for blocked and max_new_accounts_per_registration_ip into the correct location so we can pass a symbol to ActiveModel::Errors#add.
There is no actual change in behaviour.
The EmailValidator.email_regex method was moved to EmailAddressValidator.email_regex and marked for removal in 2.9.0. The method was proxied for backwards compatibility in plugins. This PR removes the method.
The #pluck_first method got a replacement in ActiveRecord core named #pick. After a bunch of replacements in core and plugins, we are now ready to retire this freedom patch.
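For illustration (model, column and id chosen arbitrarily):
```ruby
# Before, via the freedom patch:
Topic.where(id: 1).pluck_first(:title)

# After, using the ActiveRecord built-in (Rails 6+):
Topic.where(id: 1).pick(:title)
```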
Plugins can use a new modifier to change which site settings are
hidden using the :hidden_site_settings modifier. For example:
```
register_modifier(:hidden_site_settings) do |hidden|
  (hidden + [:invite_only, :login_required]).uniq
end
```
- Remove the wildcard crawler. This was already excluding almost all file types, but the exclude list was missing '.gjs' which meant those files were unnecessarily being hoisted into the `public/` directory during precompile
- Automatically include all ember-cli-generated assets without needing them to be listed. The main motivation for this change is to allow us to start using async imports via Embroider/Webpack. The filenames for those new async bundles will not be known in advance.
- Skips sprockets fingerprinting on Embroider/Webpack chunk JS files. Their filenames already include a fingerprint, and having sprockets change the filenames will cause problems for the async import feature (where filenames are included deep inside js bundles)
This commit also updates our ember-cli build so that it skips building plugin tests in the production environment. This should provide a slight build speed improvement.
In the past we would build the stack of Omniauth providers at boot, which meant that plugins had to register any authenticators in the root of their plugin.rb (i.e. not in an `after_initialize` block). This could be frustrating because many features are not available that early in boot (e.g. Zeitwerk autoloading).
Now that we build the omniauth strategy stack 'just in time', it is safe for plugins to register their auth methods in an `after_initialize` block. This commit relaxes the old restrictions so that plugin authors have the option to move things around.
Previously, we would build the stack of omniauth authenticators once on boot. That meant that all strategies had to be included, even if they were disabled. We then used the `before_request_phase` to ensure disabled strategies could not be used. This works well, but it means that omniauth is often doing unnecessary work running logic in disabled strategies.
This commit refactors things so that we build the stack of strategies on each request. That means we only need to include the enabled strategies in the stack - disabled strategies are totally ignored. Building the stack on-demand like this does add some overhead to auth requests, but on the majority of sites that will be significantly outweighed by the fact we're now skipping logic for disabled authenticators.
As well as the slight performance improvement, this new approach means that:
- Broken (i.e. exception-raising) strategies cannot cause issues on a site if they're disabled
- `other_phase` of disabled strategies will never appear in the backtrace of other authentication errors
No plugins or themes rely on anonymous_posting_min_trust_level so we
can just switch straight over to anonymous_posting_allowed_groups
This also adds an AUTO_GROUPS const which can be imported in JS
tests which is analogous to the one defined in group.rb. This can be used
to set the current user's groups where JS tests call for checking these groups
against site settings.
Finally, an AtLeastOneGroupValidator validator is added for group_list site
settings which ensures that at least one group is always selected, since if
you want to allow all users to use a feature in this way you can just use
the everyone group.
Using Wizard.exclude_steps applies to all sites in a multisite cluster.
In order to exclude steps for individual sites at run-time, a new
instance method `remove_step` is being added.
* DEV: refactor rake asset precompile tasks
add a separate ember build task that does not depend on rails env
allowing us to compile assets without db+redis connections
rename EMBER_CLI_COMPILE_DONE to SKIP_EMBER_CLI_COMPILE
better semantics in build steps
This PR addresses the push to unify the icon representing AI throughout Discourse, by using the discourse-sparkles icon.
The icon is being moved to core so that changes can be made to dependencies included in core that were using the "magic" icon instead.
In 2 places "magic" -> "discourse-sparkles":
1. topic summaries
2. (unreleased) chat summaries example
The UrlHelper#escape_uri helper has been deprecated and replaced by UrlHelper#normalized_encode, and was marked for removal in 3.0. This PR removes the method.
* FIX: Secure upload post processing race condition
This commit fixes a couple of issues.
A little background -- when uploads are created in the composer
for posts, regardless of whether the upload will eventually be
marked secure or not, if secure_uploads is enabled we always mark
the upload secure at first. This is so the upload is by default
protected, regardless of post type (regular or PM) or category.
This was causing issues in some rare occasions though because
of the order of operations of our post creation and processing
pipeline. When creating a post, we enqueue a sidekiq job to
post-process the post which does various things including
converting images to lightboxes. We were also enqueuing a job
to update the secure status for all uploads in that post.
Sometimes the secure status job would run before the post process
job, marking uploads as _not secure_ in the background and changing
their ACL before the post processor ran, which meant the users
would see a broken image in their posts. This commit fixes that issue
by always running the upload security changes inline _within_ the
cooked_post_processor job.
The other issue was that the lightbox wrapper link for images in
the post would end up with a URL like this:
```
href="/secure-uploads/original/2X/4/4e1f00a40b6c952198bbdacae383ba77932fc542.jpeg"
```
Since we weren't actually using the `upload.url` to pass to
`UrlHelper.cook_url` here, we weren't converting this href to the CDN
URL if the post was not in a secure context (the UrlHelper does not
know how to convert a secure-uploads URL to a CDN one). Now we
always end up with the correct lightbox href. This was less of an issue
than the other one, since the secure-uploads URL works even when the
upload has become non-secure, but it was a good inconsistency to fix
anyway.
As of #23867 this is now a real package, so updating the imports to
use the real package name, rather than relying on the alias. The
change in the package name is because `I18n` is not a valid
name as NPM packages must be all lowercase.
This commit also introduces an eslint rule to prevent importing from
the old I18n path.
For themes/plugins, the old 'i18n' name remains functional.
There are cases where a user can copy image markdown from a public
post (such as via the discourse-templates plugin) into a PM which
is then sent via an email. Since a PM is a secure context (via the
.with_secure_uploads? check on Post), the image will get a secure
URL in the PM post even though the backing upload is not secure.
This fixes the bug in that case where the image would be stripped
from the email (since it had a /secure-uploads/ URL) but not re-attached
further down the line using the secure_uploads_allow_embed_images_in_emails
setting because the upload itself was not secure.
The flow in Email::Sender for doing this is still not ideal, but
there are chicken and egg problems around when to strip the images,
how to fit in with other attachments and email size limits, and
when to apply the images inline via Email::Styles. It's convoluted,
but at least this fixes the Template use case for now.
Why this change?
This ensures that malicious requests cannot end up causing the logs to
quickly fill up. The default chosen is sufficient for most legitimate
requests to the Discourse application.
When truncation happens, parsing of logs in supported formats like
lograge may break down.
Why this change?
The `PostsController#create` action allows arbitrary topic custom fields
to be set by any user that can create a topic. Without any restrictions,
this opens us up to potential security issues where plugins may be using
topic custom fields in security sensitive areas.
What does this change do?
1. This change introduces the `register_editable_topic_custom_field` plugin
API which allows plugins to register topic custom fields that are
editable either by staff users only or all users. The registered
editable topic custom fields are stored in `DiscoursePluginRegistry` and
is called by a new method `Topic#editable_custom_fields` which is then
used in the `PostsController#create` controller action. When an unpermitted custom field is present in the `meta_data` params,
a 400 response code is returned.
2. Removes all reference to `meta_data` on a topic as it is confusing
since we actually mean topic custom fields instead.
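A hypothetical plugin.rb usage of the new API; the keyword for restricting a field to staff is an assumption based on the description above.
```ruby
# Editable by any user who can create the topic:
register_editable_topic_custom_field(:project_status)

# Editable by staff only (keyword name assumed):
register_editable_topic_custom_field(:internal_tracking_id, staff_only: true)
```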
Preloading just metadata is not always respected by browsers, and
sometimes the whole video will be downloaded. This switches to using a
placeholder image for the video and only loads the video when the play
button is clicked.
Currently, `window.I18n` is defined in an old school hand written
script, inlined into locale/*.js by the Rails asset pipeline, and
then the global variable is shimmed into a pseudo AMD module later
in `module-shims.js`.
This approach has some problems – for one thing, when we add a new
V2 addon (e.g. in #23859), Embroider/Webpack is stricter about its
dependencies and won't let you `import from "I18n";` when `"I18n"`
isn't listed as one of its `dependencies` or `peerDependencies`.
This moves `I18n` into a real package – `discourse-i18n`. (I was
originally planning to keep the `I18n` name since it's a private
package anyway, but NPM packages are supposed to have lower case
names and that may cause problems with other tools.)
This package defines and exports a regular class, but also defines
the default global instance for backwards compatibility. We should
use the exported class in tests to make one-off instances without
mutating the global instance and having to clean it up after the
test run. However, I did not attempt that refactor in this PR.
Since `discourse-i18n` is now included by the app, the locale
scripts needs to be loaded after the app chunks. Since no "real"
work happens until later on when we kick things off in the boot
script, the order in which the script tags appear shouldn't be a
problem. Alternatively, we can rework the locale bundles to be more
lazy like everything else, and require/import them into the app.
I avoided renaming the imports in this commit since that would be
quite noisy and drowns out the actual changes here. Instead, I used
a Webpack alias to redirect the current `"I18n"` import to the new
package for the time being. In a separate commit later on, I'll
rename all the imports in oneshot and remove the alias. As always,
plugins and the legacy bundles (admin/wizard) still relies on the
runtime AMD shims regardless.
For the most part, I avoided refactoring the actual I18n code too
much other than making it a class, and some light stuff like `var`
into `let`.
However, now that it is in a reasonable format to work with (no
longer inside the global script context!) it may also be a good
opportunity to refactor and make clear what is intended to be
public API vs internal implementation details.
Speaking of, I took the liberty to make `PLACEHOLDER`, `SEPARATOR`
and `I18nMissingInterpolationArgument` actual constants since it
seemed pretty clear to me those were just previously stashed on to
the `I18n` global to avoid polluting the global namespace, rather
than something we expect the consumers to set/replace.
This is part 2 (of 3) for passkeys support.
This adds a hidden site setting plus routes and controller actions.
1. registering passkeys
Passkeys are registered in a two-step process. First, `create_passkey`
returns details for the browser to create a passkey. This includes
- a challenge
- the relying party ID and Origin
- the user's secure identifier
- the supported algorithms
- the user's existing passkeys (if any)
Then the browser creates a key with this information, and submits it to
the server via `register_passkey`.
2. authenticating passkeys
A similar process happens here as well. First, a challenge is created
and sent to the browser. Then the browser makes a public key credential
and submits it to the server via `passkey_auth_perform`.
3. renaming/deleting passkeys
These routes allow changing the name of a key and deleting it.
4. checking if session is trusted for sensitive actions
Since a passkey is a password replacement, we want to make sure to confirm the user's identity before allowing adding/deleting passkeys. The u/trusted-session GET route returns success if the user has confirmed their session (and failure if they haven't). In the frontend (in the next PR), we're using these routes to show the password confirmation screen.
The `/u/confirm-session` route allows the user to confirm their session with a password. The latter route's functionality already existed in core, under the 2FA flow, but it has been abstracted into its own here so it can be used independently.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
For the admin plugin list we want to be able to link to
a meta topic for plugins, but we have no standard way to
do this at the moment. This adds support for meta_topic_id
alongside other plugin metadata like authors, URL etc,
that gets built into a Meta topic URL in the serializer.
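In practice this means a plugin can carry the topic ID in its metadata comment block, roughly like the sketch below (names and values are illustrative):
```ruby
# name: my-plugin
# about: Adds something useful
# url: https://github.com/discourse/my-plugin
# authors: Discourse
# meta_topic_id: 12345
```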
To match discourse_theme CLI behavior, we should skip hidden files/directories (e.g. `.git`), and two regular directories: `node_modules/` and `src/`.
Without these excludes, it's very easy for a theme to hit the file count limit. e.g. when trying this with discourse-kanban-board, I got:
> The number of files (20366) in the theme has exceeded the maximum allowed number of files (1024)
We have a custom implementation of #symbolize_keys in our Onebox helpers. This is likely a legacy from when Onebox was a standalone gem. This change replaces all usages with either #deep_symbolize_keys from ActiveSupport, or appropriate option to the JSON parser gem used.
We have a custom implementation of #blank? in our Onebox helpers. This is likely a legacy from when Onebox was a standalone gem. This change replaces all usages with respective incarnations of #blank?, #present?, and #presence from ActiveSupport. It changes a bunch of "unless blank" to "if present" as well.
This is part 1 of 3, split up of PR #23529. This PR refactors the
webauthn code to support passkey authentication/registration.
Passkeys aren't used yet, that is coming in PRs 2 and 3.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
This commit adds support for an optional `prompt` parameter in the
payload of the /session/sso_provider endpoint. If an SSO Consumer
adds a `prompt=none` parameter to the encoded/signed `sso` payload,
then Discourse will avoid trying to login a not-logged-in user:
* If the user is already logged in, Discourse will immediately
redirect back to the Consumer with the user's credentials in a
signed payload, as usual.
* If the user is not logged in, Discourse will immediately redirect
back to the Consumer with a signed payload bearing the parameter
`failed=true`.
This allows the SSO Consumer to simply test whether or not a user is
logged in, without forcing the user to try to log in. This is useful
when the SSO Consumer allows both anonymous and authenticated access.
(E.g., users that are already logged-in to Discourse can be seamlessly
logged-in to the Consumer site, and anonymous users can remain
anonymous until they explicitly ask to log in.)
This feature is similar to the `prompt=none` functionality in an
OpenID Connect Authentication Request; see
https://openid.net/specs/openid-connect-core-1_0.html#AuthRequest
If a user somehow is looking at an old version of the page and attempts
to like a post they already like, display a more reasonable error message.
Previously we would display:
> You are not permitted to view the requested resource.
New error message is:
> Oops! You already performed this action. Can you try refreshing the page?
Triggering this error condition is very tricky; you need to stop the
message bus. A possible reason for it could be bad network connectivity.
We will soon be dropping support for `/theme-qunit` in production, so this will start failing if we don't remove it. Plus, we now have system specs which verify the end-to-end functionality of the Theme QUnit system.
This was the last thing which was using the legacy `run-qunit` script, so that can also be dropped.
* FIX: min_personal_message_post_length not applying to first post
Due to the way PostCreator is wired, we were not applying min_personal_message_post_length
to the first post.
This meant that admins could not configure it so PMs have different
limits.
The code was already pretending that this works, but had no reliable way
of figuring out if we were dealing with a private message.
This commit adds limits to themes and theme components on the:
- file size of about.json and .discourse-compatibility
- file size of theme assets
- number of files in a theme
This extends search so it can have consumers that:
1. Can split off "term" from various advanced filters and orders
2. Can build a relation of either order or filter
It also moves a lot of stuff around in the search class for clarity.
Two new APIs are exposed:
`.apply_filter` to apply all the special filters to a posts/topics relation
`.apply_order` to force a particular order (eg: order:latest)
This can then be used by semantic search in Discourse AI
Why this change?
Currently, we do not have an easy way to test themes and theme components
using Rails system tests. While we support QUnit acceptance tests for
themes and theme components, QUnit acceptance tests stub out the server
and setting up the fixtures for server responses is difficult and can lead to a
frustrating experience. System tests on the other hand allow authors to
set up the test fixtures using our fabricator system which is much
easier to use.
What does this change do?
In order for us to allow authors to run system tests with their themes
installed, we are adding an `upload_theme` helper that is made available
when writing system tests. The `upload_theme` helper requires a single
`directory` parameter where `directory` is the directory of the theme
locally and returns a `Theme` record.
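A hypothetical system spec using the helper; the theme directory and the selector are illustrative.
```ruby
# frozen_string_literal: true

RSpec.describe "My theme header", type: :system do
  let!(:theme) { upload_theme("#{Rails.root}/tmp/themes/my-theme") }

  it "renders the custom header on the home page" do
    visit "/"
    expect(page).to have_css(".my-theme-header")
  end
end
```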
This comment isn't necessarily on a line by itself, so we need to remove the `^` from the regex. This will fix `EMBER_ENV=development bin/rake assets:precompile`
Until now, we have allowed testing themes in production environments via `/theme-qunit`. This was made possible by hacking the ember-cli build so that it would create the `tests.js` bundle in production. However, this is fundamentally problematic because a number of test-specific things are still optimized out of the Ember build in production mode. It also makes asset compilation significantly slower, and makes it more difficult for us to update our build pipeline (e.g. to introduce Embroider).
This commit removes the ability to run qunit tests in production builds of the JS app when the Embroider flag is enabled. If a production instance of Discourse exists exclusively for the development of themes (e.g. discourse.theme-creator.io) then they can add `EMBER_ENV: development` to their `app.yml` file. This will build the entire app in development mode, and has a significant performance impact. This must not be used for real production sites.
This commit also refactors many of the request specs into system specs. This means that the tests are guaranteed to have Ember assets built, and is also a better end-to-end test than simply checking for the presence of certain `<script>` tags in the HTML.
This method is used by assets:precompile to decide whether to apply `terser` to a file. Embroider chunks do not necessarily start with `chunk.`, and so they were incorrectly being re-terser'd by our assets:precompile task. This is inefficient, and also led to broken sourcemaps on some assets.
This is a follow up to 9caba30d5c
In that commit, we were migrating the database but we didn't actually
ensure that the database was created and that plugins were updated
before the databases were migrated.
## What is the context here?
The `docker.rake` Rakefile contains Rake tasks that are meant to be run
in the `discourse/discourse_test:release` Docker image. For example, we
have the `docker:test` Rake task that makes it easier to run the test
suite for a particular Discourse commit.
Why are we introducing a `docker:test:setup` Rake task?
While we have the `docker:test` Rake task, it is very limited in the
test commands that can be executed. It is very useful for automated
testing but not very useful for running tests in the development
environment. Therefore, we are introducing a `docker:test:setup` rake
task that can be used to set up the test environment for running tests.
The envisioned example usage is something like this:
```
docker run -d --name=discourse_test --entrypoint=/sbin/boot discourse/discourse_test:release
docker exec -u discourse:discourse discourse_test ruby script/docker_test.rb --no-tests
docker exec -u discourse:discourse discourse_test bundle exec rake docker:test:setup
docker exec -u discourse:discourse discourse_test bundle exec rspec <path to file>
```
Currently, if the review queue has both a flagged post and a flagged chat message, one of the two will have some of the labels of their actions replaced by those of the other. In other words, the labels are getting mixed up. For example, a flagged chat message might show up with an action labelled "Delete post".
This is happening because when using bundles, we are sending along the actions in a separate part of the response, so they can be shared by many reviewables. The bundles then index into this bag of actions by their ID, which is something generic describing the server action, e.g. "agree_and_delete".
The problem here is the same action can have different labels depending on the type of reviewable. Now that the bag of actions contains multiple actions with the same ID, which one is chosen is arbitrary. I.e. it doesn't distinguish based on the type of the reviewable.
This change adds an additional field to the actions, server_action, which now contains what used to be the ID. Meanwhile, the ID has been turned into a concatenation of the reviewable type and the server action, e.g. post-agree_and_delete.
This still provides the upside of denormalizing the actions while allowing for different reviewable types to have different labels and descriptions.
At first I thought I would prepend the reviewable type to the ID, but this doesn't work well because the ID is used on the server-side to determine which actions are possible, and these need to be shared between different reviewables. Hence the introduction of server_action, which now serves that purpose.
I also thought about changing the way that the bundle indexes into the bag of actions, but this is happening through some EmberJS mechanism, so we don't own that code.
This was causing the following notice to be printed out when running system specs:
```
I did no detect a custom `config/dev.yml` file, creating one for you where you can amend defaults.
```
(since 61571bee43)
This adds a new secure_uploads_pm_only site setting. When secure_uploads
is true with this setting, only uploads created in PMs will be marked
secure; no uploads in secure categories will be marked as secure, and
the login_required site setting has no bearing on upload security
either.
This is meant to be a stopgap solution to prevent secure uploads
in a single place (private messages) for sensitive admin data exports.
Ideally we would want a more comprehensive way of saying that certain
upload types get secured which is a hybrid/mixed mode secure uploads,
but for now this will do the trick.
Admins are always able to send PMs, so it doesn't make
sense that they shouldn't be able to convert topics just
because they aren't in personal_message_enabled_groups.
Previously we would respect it if the filter was `nil`, but if `default` was explicitly passed then it would ignore the category order settings. This explicit passing of `filter=default` happens for some types of navigations in the JS app.
This extends the fix from 92bc61b4be
The theme tests we use for the smoke-test typically take 3-4 seconds to complete. This commit reduces the timeout from 10 minutes to 20 seconds, so that failures are detected more quickly.
Our Ember build compiles assets into multiple chunks. In the past, we used the output from ember-auto-import-chunks-json-generator to give Rails a map of those chunks. However, that addon is specific to ember-auto-import, and is not compatible with Embroider.
Instead, we can switch to parsing the html files which are output by ember-cli. These are guaranteed to have the correct JS files in the correct place. A <discourse-chunked-script> will allow us to easily identify which chunks belong to which entrypoint.
In future, as we update more entrypoints to be compiled by Embroider/Webpack, we can easily introduce new wrappers.
Previously applied in 2c58d45 and reverted in 24d46fd. This version has been updated for subfolder support.
New headers were added to upload PUT requests as part of a MinIO update (cf42466). This change updates the asset bucket CORS ruleset to allow the new headers in the preflight request.
See https://dev.discourse.org/t/111136
Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
They're both constant per-instance values, so there is no need to store them
in the session. This also makes the code a bit more readable by moving
the `session_challenge_key` method up to the `DiscourseWebauthn` module.
Our Ember build compiles assets into multiple chunks. In the past, we used the output from `ember-auto-import-chunks-json-generator` to give Rails a map of those chunks. However, that addon is specific to ember-auto-import, and is not compatible with Embroider.
Instead, we can switch to parsing the html files which are output by ember-cli. These are guaranteed to have the correct JS files in the correct place. A `<discourse-chunked-script>` will allow us to easily identify which chunks belong to which entrypoint.
In future, as we update more entrypoints to be compiled by Embroider/Webpack, we can easily introduce new wrappers.
Followup to eea74e0e32. Site settings
which are a list without a list_type should also have the _map
extension added which returns an array based on split("|").
For example:
```
SiteSetting.post_menu_map
=> ["read", "like"]
```
* DEV: Add rake command to help detect dead settings
Some Site Settings may still exist but are no longer being used in the
core discourse code or in related plugins. This rake task will help
identify any unused (aka: dead) settings by using the `rg` command to
search for them.
You can execute the rake task by using this command:
`LOAD_PLUGINS=1 bin/rails "site_settings:find_dead"`
* Add env variable, apply feedback
When navigating around, we make ajax requests with a parameter like `?filter=latest`. This results in the TopicQuery being set up with `filter: "latest"` as a string. The logic introduced in fd9a5bc0 checks for equality with `:latest` and `:unseen` symbols, which didn't work correctly in this situation
This commit makes the logic detect both strings and symbols, and adds a spec for the behaviour.
This adds support for oneboxing WEBP and AVIF images in posts, and
fixes downloading remote images in those formats too.
Reported in https://meta.discourse.org/t/-/276433?u=falco
Reverts e2705df and re-lands #23187 and #23219.
The issue was incorrect order of execution of Rails' `assets:precompile` task in our own precompilation stack.
Co-authored-by: David Taylor <david@taylorhq.com>
This commit adds some system specs to test uploads with
direct to S3 single and multipart uploads via uppy. This
is done with minio as a local S3 replacement. We are doing
this to catch regressions when uppy dependencies need to
be upgraded or we change uppy upload code, since before
this there was no way to know outside manual testing whether
these changes would cause regressions.
Minio's server lifecycle and the installed binaries are managed
by the https://github.com/discourse/minio_runner gem, though the
binaries are already installed on the discourse_test image we run
GitHub CI from.
These tests will only run in CI unless you specifically use the
CI=1 or RUN_S3_SYSTEM_SPECS=1 env vars.
For a history of experimentation here see https://github.com/discourse/discourse/pull/22381
Related PRs:
* https://github.com/discourse/minio_runner/pull/1
* https://github.com/discourse/minio_runner/pull/2
* https://github.com/discourse/minio_runner/pull/3
Manipulating theme module paths means that the paths you author are not the ones used at runtime. This can lead to some very unexpected behavior and potential module name clashes. It also meant that the refactor in 16c6ab8661 was unable to correctly match up theme connector js/templates.
While this could technically be a breaking change, I think it is reasonably safe because:
1. Themes are already forced to use relative paths when referencing their own modules (since they're namespaced based on the site-specific id). The only time this might be problematic is when theme tests reference modules in the theme's main `javascripts` directory
2. For things like components/services/controllers/etc. our custom Ember resolver works backwards from the end of the path, so adding `discourse/` in the middle will not affect resolution.
`ReviewableQueuedPost` got refactored a while back to use the more
appropriate `target_created_by` for the user of the post being queued
instead of `created_by`. The change was not extended to the `DELETE
/review/:id` endpoint, leading to error responses for a user attempting
to delete their own queued post.
This fix extends the `Reviewable` lookup implementation in
`ReviewablesController#destroy` and Guardian implementation to account
for this change.
This PR adds a new toggle to switch the (new) /new list between showing topics with new replies (a.k.a unread topics), new topics, or everything mixed together.
The category feature that automatically closes topics does so silently.
This amends `rake topics:apply_autoclose`, which does retroactive
closing, so that it will also close silently.
When we receive the stream parameter, we'll queue a job that periodically publishes partial updates, and after the summarization finishes, a final one with the completed version, plus metadata.
`summary-box` listens to these updates via MessageBus, and updates state accordingly.
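A minimal sketch of the publishing side (the channel name, class shape, and payload keys here are illustrative assumptions, not the actual implementation):
```
# Hypothetical sketch: push partial summary text periodically, then a final
# payload with metadata, over a per-topic MessageBus channel.
class StreamTopicSummary
  def self.publish_partial(topic, text, user_ids:)
    MessageBus.publish("/summaries/#{topic.id}", { done: false, raw: text }, user_ids: user_ids)
  end

  def self.publish_final(topic, text, user_ids:)
    MessageBus.publish(
      "/summaries/#{topic.id}",
      { done: true, raw: text, updated_at: Time.zone.now },
      user_ids: user_ids,
    )
  end
end
```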
Prior to this fix we would output an image with no width/height, which would then bypass a large part of `CookedProcessorMixin` and have no aspect ratio. As a result, an image with no size would cause layout shift.
This commit also removes a workaround for oneboxes in chat messages that was added for this case.
This adds a new `loaderShim()` function to ensure certain modules
are present in the `loader.js` registry and therefore runtime
`require()`-able.
Currently, the classic build pipeline puts a lot of things in the
runtime `loader.js` registry automatically. For example, all of
the ember-auto-import packages are in there.
Going forward, and especially as we switch to the Embroider build
pipeline, this will not be guaranteed. We need to keep an eye on
what modules (packages) our "external" bundles (admin, wizard,
markdown-it, plugins, etc) are expecting to be present and put
them into the registry proactively.
This commit removes any logic in the app and in specs around
enable_experimental_hashtag_autocomplete and deletes some
old category hashtag code that is no longer necessary.
It also adds a `slug_ref` category instance method, which
will generate a reference like `parent:child` for a category,
with an optional depth, which hashtags use. Also refactors
PostRevisor, which was using CategoryHashtagDataSource directly,
which is a no-no.
Deletes the old hashtag markdown rule as well.
What is the context of this change?
Before 7c6a8f1c74, we were using
`preload(:tags)` on the topics relation but that was accidentally
removed in the refactor. This was discovered and fixed in
5bec894a8c but instead of using
`preload(:tags)` we ended up using `includes(:tags)`. The problem here
is that `includes(:tags)` can either result in `preload(:tags)` or
`eager_load(:tags)`, and for some reason ActiveRecord decides to
`eager_load(:tags)`, resulting in joins to the `topic_tags` and `tags`
tables which are not necessary and lead to less efficient queries.
When `includes(:tags)` is used, listing the latest topics ended up
generating the following sample queries to fetch the list of topics to display.
```
SELECT DISTINCT "topics"."pinned_at" AS alias_0, "topics"."id" FROM "topics" LEFT OUTER JOIN "categories" ON "categories"."id" = "topics"."category_id" LEFT OUTER JOIN "topic_tags" ON "topic_tags"."topic_id" = "topics"."id" LEFT OUTER JOIN "tags" ON "tags"."id" = "topic_tags"."tag_id" LEFT OUTER JOIN topic_users AS tu ON (topics.id = tu.topic_id AND tu.user_id = 29) LEFT JOIN category_users ON category_users.category_id = topics.category_id AND category_users.user_id = 29 WHERE "topics"."deleted_at" IS NULL AND (topics.archetype <> 'private_message') AND (COALESCE(categories.topic_id, 0) <> topics.id) AND (COALESCE(tu.notification_level,1) > 0) AND (topics.category_id = -1
OR
(COALESCE(category_users.notification_level, 1) <> 0 AND (topics.category_id IS NULL OR topics.category_id NOT IN(-1)))
OR tu.notification_level > 1) AND (pinned_globally AND pinned_at IS NOT NULL AND (topics.pinned_at > tu.cleared_pinned_at OR tu.cleared_pinned_at IS NULL)) ORDER BY "topics"."pinned_at" DESC LIMIT 30
SELECT "topics"."id" AS t0_r0, "topics"."title" AS t0_r1, "topics"."last_posted_at" AS t0_r2, "topics"."created_at" AS t0_r3, "topics"."updated_at" AS t0_r4, "topics"."views" AS t0_r5, "topics"."posts_count" AS t0_r6, "topics"."user_id" AS t0_r7, "topics"."last_post_user_id" AS t0_r8, "topics"."reply_count" AS t0_r9, "topics"."featured_user1_id" AS t0_r10, "topics"."featured_user2_id" AS t0_r11, "topics"."featured_user3_id" AS t0_r12, "topics"."deleted_at" AS t0_r13, "topics"."highest_post_number" AS t0_r14, "topics"."like_count" AS t0_r15, "topics"."incoming_link_count" AS t0_r16, "topics"."category_id" AS t0_r17, "topics"."visible" AS t0_r18, "topics"."moderator_posts_count" AS t0_r19, "topics"."closed" AS t0_r20, "topics"."archived" AS t0_r21, "topics"."bumped_at" AS t0_r22, "topics"."has_summary" AS t0_r23, "topics"."archetype" AS t0_r24, "topics"."featured_user4_id" AS t0_r25, "topics"."notify_moderators_count" AS t0_r26, "topics"."spam_count" AS t0_r27, "topics"."pinned_at" AS t0_r28, "topics"."score" AS t0_r29, "topics"."percent_rank" AS t0_r30, "topics"."subtype" AS t0_r31, "topics"."slug" AS t0_r32, "topics"."deleted_by_id" AS t0_r33, "topics"."participant_count" AS t0_r34, "topics"."word_count" AS t0_r35, "topics"."excerpt" AS t0_r36, "topics"."pinned_globally" AS t0_r37, "topics"."pinned_until" AS t0_r38, "topics"."fancy_title" AS t0_r39, "topics"."highest_staff_post_number" AS t0_r40, "topics"."featured_link" AS t0_r41, "topics"."reviewable_score" AS t0_r42, "topics"."image_upload_id" AS t0_r43, "topics"."slow_mode_seconds" AS t0_r44, "topics"."bannered_until" AS t0_r45, "topics"."external_id" AS t0_r46, "categories"."id" AS t1_r0, "categories"."name" AS t1_r1, "categories"."color" AS t1_r2, "categories"."topic_id" AS t1_r3, "categories"."topic_count" AS t1_r4, "categories"."created_at" AS t1_r5, "categories"."updated_at" AS t1_r6, "categories"."user_id" AS t1_r7, "categories"."topics_year" AS t1_r8, "categories"."topics_month" AS t1_r9, "categories"."topics_week" AS t1_r10, "categories"."slug" AS t1_r11, "categories"."description" AS t1_r12, "categories"."text_color" AS t1_r13, "categories"."read_restricted" AS t1_r14, "categories"."auto_close_hours" AS t1_r15, "categories"."post_count" AS t1_r16, "categories"."latest_post_id" AS t1_r17, "categories"."latest_topic_id" AS t1_r18, "categories"."position" AS t1_r19, "categories"."parent_category_id" AS t1_r20, "categories"."posts_year" AS t1_r21, "categories"."posts_month" AS t1_r22, "categories"."posts_week" AS t1_r23, "categories"."email_in" AS t1_r24, "categories"."email_in_allow_strangers" AS t1_r25, "categories"."topics_day" AS t1_r26, "categories"."posts_day" AS t1_r27, "categories"."allow_badges" AS t1_r28, "categories"."name_lower" AS t1_r29, "categories"."auto_close_based_on_last_post" AS t1_r30, "categories"."topic_template" AS t1_r31, "categories"."contains_messages" AS t1_r32, "categories"."sort_order" AS t1_r33, "categories"."sort_ascending" AS t1_r34, "categories"."uploaded_logo_id" AS t1_r35, "categories"."uploaded_background_id" AS t1_r36, "categories"."topic_featured_link_allowed" AS t1_r37, "categories"."all_topics_wiki" AS t1_r38, "categories"."show_subcategory_list" AS t1_r39, "categories"."num_featured_topics" AS t1_r40, "categories"."default_view" AS t1_r41, "categories"."subcategory_list_style" AS t1_r42, "categories"."default_top_period" AS t1_r43, "categories"."mailinglist_mirror" AS t1_r44, "categories"."minimum_required_tags" AS t1_r45, "categories"."navigate_to_first_post_after_read" AS t1_r46, 
"categories"."search_priority" AS t1_r47, "categories"."allow_global_tags" AS t1_r48, "categories"."reviewable_by_group_id" AS t1_r49, "categories"."read_only_banner" AS t1_r50, "categories"."default_list_filter" AS t1_r51, "categories"."allow_unlimited_owner_edits_on_first_post" AS t1_r52, "categories"."default_slow_mode_seconds" AS t1_r53, "categories"."uploaded_logo_dark_id" AS t1_r54, "tags"."id" AS t2_r0, "tags"."name" AS t2_r1, "tags"."created_at" AS t2_r2, "tags"."updated_at" AS t2_r3, "tags"."pm_topic_count" AS t2_r4, "tags"."target_tag_id" AS t2_r5, "tags"."description" AS t2_r6, "tags"."public_topic_count" AS t2_r7, "tags"."staff_topic_count" AS t2_r8 FROM "topics" LEFT OUTER JOIN "categories" ON "categories"."id" = "topics"."category_id" LEFT OUTER JOIN "topic_tags" ON "topic_tags"."topic_id" = "topics"."id" LEFT OUTER JOIN "tags" ON "tags"."id" = "topic_tags"."tag_id" LEFT OUTER JOIN topic_users AS tu ON (topics.id = tu.topic_id AND tu.user_id = 29) LEFT JOIN category_users ON category_users.category_id = topics.category_id AND category_users.user_id = 29 WHERE "topics"."deleted_at" IS NULL AND (topics.archetype <> 'private_message') AND (COALESCE(categories.topic_id, 0) <> topics.id) AND (COALESCE(tu.notification_level,1) > 0) AND (topics.category_id = -1
OR
(COALESCE(category_users.notification_level, 1) <> 0 AND (topics.category_id IS NULL OR topics.category_id NOT IN(-1)))
OR tu.notification_level > 1) AND (pinned_globally AND pinned_at IS NOT NULL AND (topics.pinned_at > tu.cleared_pinned_at OR tu.cleared_pinned_at IS NULL)) AND "topics"."id" = 7 ORDER BY "topics"."pinned_at" DESC
SELECT DISTINCT topics.bumped_at AS alias_0, "topics"."id" FROM "topics" LEFT OUTER JOIN "categories" ON "categories"."id" = "topics"."category_id" LEFT OUTER JOIN "topic_tags" ON "topic_tags"."topic_id" = "topics"."id" LEFT OUTER JOIN "tags" ON "tags"."id" = "topic_tags"."tag_id" LEFT OUTER JOIN topic_users AS tu ON (topics.id = tu.topic_id AND tu.user_id = 29) LEFT JOIN category_users ON category_users.category_id = topics.category_id AND category_users.user_id = 29 WHERE "topics"."deleted_at" IS NULL AND (topics.archetype <> 'private_message') AND (COALESCE(categories.topic_id, 0) <> topics.id) AND (COALESCE(tu.notification_level,1) > 0) AND (topics.category_id = -1
OR
(COALESCE(category_users.notification_level, 1) <> 0 AND (topics.category_id IS NULL OR topics.category_id NOT IN(-1)))
OR tu.notification_level > 1) AND (NOT ( pinned_globally AND pinned_at IS NOT NULL AND (topics.pinned_at > tu.cleared_pinned_at OR tu.cleared_pinned_at IS NULL) )) ORDER BY topics.bumped_at DESC LIMIT 30
SELECT "topics"."id" AS t0_r0, "topics"."title" AS t0_r1, "topics"."last_posted_at" AS t0_r2, "topics"."created_at" AS t0_r3, "topics"."updated_at" AS t0_r4, "topics"."views" AS t0_r5, "topics"."posts_count" AS t0_r6, "topics"."user_id" AS t0_r7, "topics"."last_post_user_id" AS t0_r8, "topics"."reply_count" AS t0_r9, "topics"."featured_user1_id" AS t0_r10, "topics"."featured_user2_id" AS t0_r11, "topics"."featured_user3_id" AS t0_r12, "topics"."deleted_at" AS t0_r13, "topics"."highest_post_number" AS t0_r14, "topics"."like_count" AS t0_r15, "topics"."incoming_link_count" AS t0_r16, "topics"."category_id" AS t0_r17, "topics"."visible" AS t0_r18, "topics"."moderator_posts_count" AS t0_r19, "topics"."closed" AS t0_r20, "topics"."archived" AS t0_r21, "topics"."bumped_at" AS t0_r22, "topics"."has_summary" AS t0_r23, "topics"."archetype" AS t0_r24, "topics"."featured_user4_id" AS t0_r25, "topics"."notify_moderators_count" AS t0_r26, "topics"."spam_count" AS t0_r27, "topics"."pinned_at" AS t0_r28, "topics"."score" AS t0_r29, "topics"."percent_rank" AS t0_r30, "topics"."subtype" AS t0_r31, "topics"."slug" AS t0_r32, "topics"."deleted_by_id" AS t0_r33, "topics"."participant_count" AS t0_r34, "topics"."word_count" AS t0_r35, "topics"."excerpt" AS t0_r36, "topics"."pinned_globally" AS t0_r37, "topics"."pinned_until" AS t0_r38, "topics"."fancy_title" AS t0_r39, "topics"."highest_staff_post_number" AS t0_r40, "topics"."featured_link" AS t0_r41, "topics"."reviewable_score" AS t0_r42, "topics"."image_upload_id" AS t0_r43, "topics"."slow_mode_seconds" AS t0_r44, "topics"."bannered_until" AS t0_r45, "topics"."external_id" AS t0_r46, "categories"."id" AS t1_r0, "categories"."name" AS t1_r1, "categories"."color" AS t1_r2, "categories"."topic_id" AS t1_r3, "categories"."topic_count" AS t1_r4, "categories"."created_at" AS t1_r5, "categories"."updated_at" AS t1_r6, "categories"."user_id" AS t1_r7, "categories"."topics_year" AS t1_r8, "categories"."topics_month" AS t1_r9, "categories"."topics_week" AS t1_r10, "categories"."slug" AS t1_r11, "categories"."description" AS t1_r12, "categories"."text_color" AS t1_r13, "categories"."read_restricted" AS t1_r14, "categories"."auto_close_hours" AS t1_r15, "categories"."post_count" AS t1_r16, "categories"."latest_post_id" AS t1_r17, "categories"."latest_topic_id" AS t1_r18, "categories"."position" AS t1_r19, "categories"."parent_category_id" AS t1_r20, "categories"."posts_year" AS t1_r21, "categories"."posts_month" AS t1_r22, "categories"."posts_week" AS t1_r23, "categories"."email_in" AS t1_r24, "categories"."email_in_allow_strangers" AS t1_r25, "categories"."topics_day" AS t1_r26, "categories"."posts_day" AS t1_r27, "categories"."allow_badges" AS t1_r28, "categories"."name_lower" AS t1_r29, "categories"."auto_close_based_on_last_post" AS t1_r30, "categories"."topic_template" AS t1_r31, "categories"."contains_messages" AS t1_r32, "categories"."sort_order" AS t1_r33, "categories"."sort_ascending" AS t1_r34, "categories"."uploaded_logo_id" AS t1_r35, "categories"."uploaded_background_id" AS t1_r36, "categories"."topic_featured_link_allowed" AS t1_r37, "categories"."all_topics_wiki" AS t1_r38, "categories"."show_subcategory_list" AS t1_r39, "categories"."num_featured_topics" AS t1_r40, "categories"."default_view" AS t1_r41, "categories"."subcategory_list_style" AS t1_r42, "categories"."default_top_period" AS t1_r43, "categories"."mailinglist_mirror" AS t1_r44, "categories"."minimum_required_tags" AS t1_r45, "categories"."navigate_to_first_post_after_read" AS t1_r46, 
"categories"."search_priority" AS t1_r47, "categories"."allow_global_tags" AS t1_r48, "categories"."reviewable_by_group_id" AS t1_r49, "categories"."read_only_banner" AS t1_r50, "categories"."default_list_filter" AS t1_r51, "categories"."allow_unlimited_owner_edits_on_first_post" AS t1_r52, "categories"."default_slow_mode_seconds" AS t1_r53, "categories"."uploaded_logo_dark_id" AS t1_r54, "tags"."id" AS t2_r0, "tags"."name" AS t2_r1, "tags"."created_at" AS t2_r2, "tags"."updated_at" AS t2_r3, "tags"."pm_topic_count" AS t2_r4, "tags"."target_tag_id" AS t2_r5, "tags"."description" AS t2_r6, "tags"."public_topic_count" AS t2_r7, "tags"."staff_topic_count" AS t2_r8 FROM "topics" LEFT OUTER JOIN "categories" ON "categories"."id" = "topics"."category_id" LEFT OUTER JOIN "topic_tags" ON "topic_tags"."topic_id" = "topics"."id" LEFT OUTER JOIN "tags" ON "tags"."id" = "topic_tags"."tag_id" LEFT OUTER JOIN topic_users AS tu ON (topics.id = tu.topic_id AND tu.user_id = 29) LEFT JOIN category_users ON category_users.category_id = topics.category_id AND category_users.user_id = 29 WHERE "topics"."deleted_at" IS NULL AND (topics.archetype <> 'private_message') AND (COALESCE(categories.topic_id, 0) <> topics.id) AND (COALESCE(tu.notification_level,1) > 0) AND (topics.category_id = -1
OR
(COALESCE(category_users.notification_level, 1) <> 0 AND (topics.category_id IS NULL OR topics.category_id NOT IN(-1)))
OR tu.notification_level > 1) AND (NOT ( pinned_globally AND pinned_at IS NOT NULL AND (topics.pinned_at > tu.cleared_pinned_at OR tu.cleared_pinned_at IS NULL) )) AND "topics"."id" IN (477, 481, 480, 479, 478, 467, 466, 230, 209, 183, 173, 179, 168, 139, 102, 144, 150, 118, 126, 88, 63, 46, 117, 171, 45, 77, 154, 158, 43, 79) ORDER BY topics.bumped_at DESC
```
Note how there are two extra queries which have to select `DISTINCT
topics.pinned_at` and `DISTINCT topics.bumped_at`, because the
unnecessary left joins to the `topic_tags` and `tags` tables result in
duplicated rows in the topics table. As a result, PG is not able to
use our indexes to execute the query effectively.
Compare this to the queries executed when `preload(:tags)` is
used:
```
SELECT "topics"."id" AS t0_r0, "topics"."title" AS t0_r1, "topics"."last_posted_at" AS t0_r2, "topics"."created_at" AS t0_r3, "topics"."updated_at" AS t0_r4, "topics"."views" AS t0_r5, "topics"."posts_count" AS t0_r6, "topics"."user_id" AS t0_r7, "topics"."last_post_user_id" AS t0_r8, "topics"."reply_count" AS t0_r9, "topics"."featured_user1_id" AS t0_r10, "topics"."featured_user2_id" AS t0_r11, "topics"."featured_user3_id" AS t0_r12, "topics"."deleted_at" AS t0_r13, "topics"."highest_post_number" AS t0_r14, "topics"."like_count" AS t0_r15, "topics"."incoming_link_count" AS t0_r16, "topics"."category_id" AS t0_r17, "topics"."visible" AS t0_r18, "topics"."moderator_posts_count" AS t0_r19, "topics"."closed" AS t0_r20, "topics"."archived" AS t0_r21, "topics"."bumped_at" AS t0_r22, "topics"."has_summary" AS t0_r23, "topics"."archetype" AS t0_r24, "topics"."featured_user4_id" AS t0_r25, "topics"."notify_moderators_count" AS t0_r26, "topics"."spam_count" AS t0_r27, "topics"."pinned_at" AS t0_r28, "topics"."score" AS t0_r29, "topics"."percent_rank" AS t0_r30, "topics"."subtype" AS t0_r31, "topics"."slug" AS t0_r32, "topics"."deleted_by_id" AS t0_r33, "topics"."participant_count" AS t0_r34, "topics"."word_count" AS t0_r35, "topics"."excerpt" AS t0_r36, "topics"."pinned_globally" AS t0_r37, "topics"."pinned_until" AS t0_r38, "topics"."fancy_title" AS t0_r39, "topics"."highest_staff_post_number" AS t0_r40, "topics"."featured_link" AS t0_r41, "topics"."reviewable_score" AS t0_r42, "topics"."image_upload_id" AS t0_r43, "topics"."slow_mode_seconds" AS t0_r44, "topics"."bannered_until" AS t0_r45, "topics"."external_id" AS t0_r46, "categories"."id" AS t1_r0, "categories"."name" AS t1_r1, "categories"."color" AS t1_r2, "categories"."topic_id" AS t1_r3, "categories"."topic_count" AS t1_r4, "categories"."created_at" AS t1_r5, "categories"."updated_at" AS t1_r6, "categories"."user_id" AS t1_r7, "categories"."topics_year" AS t1_r8, "categories"."topics_month" AS t1_r9, "categories"."topics_week" AS t1_r10, "categories"."slug" AS t1_r11, "categories"."description" AS t1_r12, "categories"."text_color" AS t1_r13, "categories"."read_restricted" AS t1_r14, "categories"."auto_close_hours" AS t1_r15, "categories"."post_count" AS t1_r16, "categories"."latest_post_id" AS t1_r17, "categories"."latest_topic_id" AS t1_r18, "categories"."position" AS t1_r19, "categories"."parent_category_id" AS t1_r20, "categories"."posts_year" AS t1_r21, "categories"."posts_month" AS t1_r22, "categories"."posts_week" AS t1_r23, "categories"."email_in" AS t1_r24, "categories"."email_in_allow_strangers" AS t1_r25, "categories"."topics_day" AS t1_r26, "categories"."posts_day" AS t1_r27, "categories"."allow_badges" AS t1_r28, "categories"."name_lower" AS t1_r29, "categories"."auto_close_based_on_last_post" AS t1_r30, "categories"."topic_template" AS t1_r31, "categories"."contains_messages" AS t1_r32, "categories"."sort_order" AS t1_r33, "categories"."sort_ascending" AS t1_r34, "categories"."uploaded_logo_id" AS t1_r35, "categories"."uploaded_background_id" AS t1_r36, "categories"."topic_featured_link_allowed" AS t1_r37, "categories"."all_topics_wiki" AS t1_r38, "categories"."show_subcategory_list" AS t1_r39, "categories"."num_featured_topics" AS t1_r40, "categories"."default_view" AS t1_r41, "categories"."subcategory_list_style" AS t1_r42, "categories"."default_top_period" AS t1_r43, "categories"."mailinglist_mirror" AS t1_r44, "categories"."minimum_required_tags" AS t1_r45, "categories"."navigate_to_first_post_after_read" AS t1_r46, 
"categories"."search_priority" AS t1_r47, "categories"."allow_global_tags" AS t1_r48, "categories"."reviewable_by_group_id" AS t1_r49, "categories"."read_only_banner" AS t1_r50, "categories"."default_list_filter" AS t1_r51, "categories"."allow_unlimited_owner_edits_on_first_post" AS t1_r52, "categories"."default_slow_mode_seconds" AS t1_r53, "categories"."uploaded_logo_dark_id" AS t1_r54 FROM "topics" LEFT OUTER JOIN "categories" ON "categories"."id" = "topics"."category_id" LEFT OUTER JOIN topic_users AS tu ON (topics.id = tu.topic_id AND tu.user_id = 29) LEFT JOIN category_users ON category_users.category_id = topics.category_id AND category_users.user_id = 29 WHERE "topics"."deleted_at" IS NULL AND (topics.archetype <> 'private_message') AND (COALESCE(categories.topic_id, 0) <> topics.id) AND (COALESCE(tu.notification_level,1) > 0) AND (topics.category_id = -1
OR
(COALESCE(category_users.notification_level, 1) <> 0 AND (topics.category_id IS NULL OR topics.category_id NOT IN(-1)))
OR tu.notification_level > 1) AND (pinned_globally AND pinned_at IS NOT NULL AND (topics.pinned_at > tu.cleared_pinned_at OR tu.cleared_pinned_at IS NULL)) ORDER BY "topics"."pinned_at" DESC LIMIT 30
SELECT "topic_tags".* FROM "topic_tags" WHERE "topic_tags"."topic_id" = 7
SELECT "topics"."id" AS t0_r0, "topics"."title" AS t0_r1, "topics"."last_posted_at" AS t0_r2, "topics"."created_at" AS t0_r3, "topics"."updated_at" AS t0_r4, "topics"."views" AS t0_r5, "topics"."posts_count" AS t0_r6, "topics"."user_id" AS t0_r7, "topics"."last_post_user_id" AS t0_r8, "topics"."reply_count" AS t0_r9, "topics"."featured_user1_id" AS t0_r10, "topics"."featured_user2_id" AS t0_r11, "topics"."featured_user3_id" AS t0_r12, "topics"."deleted_at" AS t0_r13, "topics"."highest_post_number" AS t0_r14, "topics"."like_count" AS t0_r15, "topics"."incoming_link_count" AS t0_r16, "topics"."category_id" AS t0_r17, "topics"."visible" AS t0_r18, "topics"."moderator_posts_count" AS t0_r19, "topics"."closed" AS t0_r20, "topics"."archived" AS t0_r21, "topics"."bumped_at" AS t0_r22, "topics"."has_summary" AS t0_r23, "topics"."archetype" AS t0_r24, "topics"."featured_user4_id" AS t0_r25, "topics"."notify_moderators_count" AS t0_r26, "topics"."spam_count" AS t0_r27, "topics"."pinned_at" AS t0_r28, "topics"."score" AS t0_r29, "topics"."percent_rank" AS t0_r30, "topics"."subtype" AS t0_r31, "topics"."slug" AS t0_r32, "topics"."deleted_by_id" AS t0_r33, "topics"."participant_count" AS t0_r34, "topics"."word_count" AS t0_r35, "topics"."excerpt" AS t0_r36, "topics"."pinned_globally" AS t0_r37, "topics"."pinned_until" AS t0_r38, "topics"."fancy_title" AS t0_r39, "topics"."highest_staff_post_number" AS t0_r40, "topics"."featured_link" AS t0_r41, "topics"."reviewable_score" AS t0_r42, "topics"."image_upload_id" AS t0_r43, "topics"."slow_mode_seconds" AS t0_r44, "topics"."bannered_until" AS t0_r45, "topics"."external_id" AS t0_r46, "categories"."id" AS t1_r0, "categories"."name" AS t1_r1, "categories"."color" AS t1_r2, "categories"."topic_id" AS t1_r3, "categories"."topic_count" AS t1_r4, "categories"."created_at" AS t1_r5, "categories"."updated_at" AS t1_r6, "categories"."user_id" AS t1_r7, "categories"."topics_year" AS t1_r8, "categories"."topics_month" AS t1_r9, "categories"."topics_week" AS t1_r10, "categories"."slug" AS t1_r11, "categories"."description" AS t1_r12, "categories"."text_color" AS t1_r13, "categories"."read_restricted" AS t1_r14, "categories"."auto_close_hours" AS t1_r15, "categories"."post_count" AS t1_r16, "categories"."latest_post_id" AS t1_r17, "categories"."latest_topic_id" AS t1_r18, "categories"."position" AS t1_r19, "categories"."parent_category_id" AS t1_r20, "categories"."posts_year" AS t1_r21, "categories"."posts_month" AS t1_r22, "categories"."posts_week" AS t1_r23, "categories"."email_in" AS t1_r24, "categories"."email_in_allow_strangers" AS t1_r25, "categories"."topics_day" AS t1_r26, "categories"."posts_day" AS t1_r27, "categories"."allow_badges" AS t1_r28, "categories"."name_lower" AS t1_r29, "categories"."auto_close_based_on_last_post" AS t1_r30, "categories"."topic_template" AS t1_r31, "categories"."contains_messages" AS t1_r32, "categories"."sort_order" AS t1_r33, "categories"."sort_ascending" AS t1_r34, "categories"."uploaded_logo_id" AS t1_r35, "categories"."uploaded_background_id" AS t1_r36, "categories"."topic_featured_link_allowed" AS t1_r37, "categories"."all_topics_wiki" AS t1_r38, "categories"."show_subcategory_list" AS t1_r39, "categories"."num_featured_topics" AS t1_r40, "categories"."default_view" AS t1_r41, "categories"."subcategory_list_style" AS t1_r42, "categories"."default_top_period" AS t1_r43, "categories"."mailinglist_mirror" AS t1_r44, "categories"."minimum_required_tags" AS t1_r45, "categories"."navigate_to_first_post_after_read" AS t1_r46, 
"categories"."search_priority" AS t1_r47, "categories"."allow_global_tags" AS t1_r48, "categories"."reviewable_by_group_id" AS t1_r49, "categories"."read_only_banner" AS t1_r50, "categories"."default_list_filter" AS t1_r51, "categories"."allow_unlimited_owner_edits_on_first_post" AS t1_r52, "categories"."default_slow_mode_seconds" AS t1_r53, "categories"."uploaded_logo_dark_id" AS t1_r54 FROM "topics" LEFT OUTER JOIN "categories" ON "categories"."id" = "topics"."category_id" LEFT OUTER JOIN topic_users AS tu ON (topics.id = tu.topic_id AND tu.user_id = 29) LEFT JOIN category_users ON category_users.category_id = topics.category_id AND category_users.user_id = 29 WHERE "topics"."deleted_at" IS NULL AND (topics.archetype <> 'private_message') AND (COALESCE(categories.topic_id, 0) <> topics.id) AND (COALESCE(tu.notification_level,1) > 0) AND (topics.category_id = -1
OR
(COALESCE(category_users.notification_level, 1) <> 0 AND (topics.category_id IS NULL OR topics.category_id NOT IN(-1)))
OR tu.notification_level > 1) AND (NOT ( pinned_globally AND pinned_at IS NOT NULL AND (topics.pinned_at > tu.cleared_pinned_at OR tu.cleared_pinned_at IS NULL) )) ORDER BY topics.bumped_at DESC LIMIT 30
SELECT "topic_tags".* FROM "topic_tags" WHERE "topic_tags"."topic_id" IN (477, 481, 480, 479, 478, 467, 466, 230, 209, 183, 173, 179, 168, 139, 102, 144, 150, 118, 126, 88, 63, 46, 117, 171, 45, 77, 154, 158, 43, 79)
SELECT "tags"."id", "tags"."name", "tags"."created_at", "tags"."updated_at", "tags"."pm_topic_count", "tags"."target_tag_id", "tags"."description", "tags"."public_topic_count", "tags"."staff_topic_count" FROM "tags" WHERE "tags"."id" IN (10, 20, 26, 7, 27, 28, 30, 19, 9, 4, 15, 29, 14, 18, 11, 25, 1, 21, 8, 22, 5, 32)
```
We end up with queries that are much more efficient as those queries can
effectively use the indexes.
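For illustration, the difference between the two ActiveRecord calls (simplified relation, not the full TopicQuery scope):
```
# `includes` lets ActiveRecord choose between `preload` and `eager_load`;
# here it chose `eager_load`, adding LEFT OUTER JOINs to topic_tags/tags
# and forcing the DISTINCT seen in the queries above.
Topic.includes(:tags).order(bumped_at: :desc).limit(30)

# `preload` always loads the association in separate queries, so the main
# topics query keeps its original shape and can use the indexes.
Topic.preload(:tags).order(bumped_at: :desc).limit(30)
```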
This commit introduces the :push_notification event and deprecates :post_notification_alert.
The old :post_notification_alert event was not triggered when pushing chat notifications and did not respect when the user was in "do not disturb" mode.
The new event fixes these issues.
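A sketch of how a plugin might listen to the new event (the payload shape shown is an assumption, not a documented contract):
```
# plugin.rb-style sketch: react to the new event instead of the deprecated one.
DiscourseEvent.on(:push_notification) do |user, payload|
  # `payload` is assumed to contain the notification data being pushed.
  Rails.logger.debug("push notification queued for #{user.username}")
end
```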
This plugin is no longer supported, and so we no longer need to run its tests in CI
(removing the comment and the 'Canned Replies' value from the array caused syntax_tree to change to the `%w` syntax)
Why this change?
This is a follow up to e8f7b62752.
Tracking of GC stats didn't really belong in the `MethodProfiler` class
so we want to extract that concern into its own class.
As part of this PR, the `track_gc_stat_per_request` site setting has
also been renamed to `instrument_gc_stat_per_request`.
What does this change do?
This change adds a hidden `track_gc_stat_per_request` site setting which
when enabled will track the time spent in GC, major GC count and minor
GC count during a request.
Why is this change needed?
We have plans to tune our GC in production but without any
instrumentation, we will not be able to know if our tuning is effective
or not. This commit takes the first step at instrumenting some basic GC
stats in core during a request which can then be consumed by the discourse-prometheus plugin.
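The general idea can be sketched with plain `GC.stat` deltas around a request (illustrative only, not the extracted instrumentation class mentioned above):
```
# Requires Ruby 3.1+ for GC.stat[:time] (total GC time in milliseconds).
def with_gc_instrumentation
  before = GC.stat
  result = yield
  after = GC.stat

  Rails.logger.debug(
    "GC during request: time=#{after[:time] - before[:time]}ms " \
      "major=#{after[:major_gc_count] - before[:major_gc_count]} " \
      "minor=#{after[:minor_gc_count] - before[:minor_gc_count]}",
  )
  result
end
```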
Linking to the #feedback category can break if the category gets renamed or a different site locale is used. By using the correct hashtag (at the time of seeding) this issue can be avoided.
PresenceChannel configuration is cached using redis. That cache is used, and sometimes repopulated, during normal GET requests. When the primary redis server was readonly, that `redis.set` call would raise an error and cause the entire request to fail. Instead, we should ignore the failure and continue without populating the cache.
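A minimal sketch of the approach described above, assuming redis-rb raises a `Redis::CommandError` whose message starts with `READONLY` when the primary is read-only (illustrative, not the exact PresenceChannel code):
```
def write_config_cache(redis, key, value, ttl)
  redis.setex(key, ttl, value)
rescue Redis::CommandError => e
  # Primary is in read-only mode (e.g. during a failover); serve the request
  # without repopulating the cache instead of failing the whole GET.
  raise unless e.message.start_with?("READONLY")
  nil
end
```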
This commit introduces five rake tasks to help us with version bump procedures:
- `version_bump:beta` and `version_bump:minor_stable` are for our minor releases
- `version_bump:major_stable_prepare` and `version_bump:major_stable_merge` are for our major release process
- `version_bump:stage_security_fixes` is to collate multiple security fixes from private branches into a single branch for release
The scripts will stage the necessary commits in a branch and prompt you to create a PR for review. No changes to release branches or tags will be made without the PR being approved, and explicit confirmation of prompts in the scripts.
To avoid polluting the operator's primary working tree, the scripts create a temporary git worktree in a temporary directory and perform all checkouts/commits there.
A previous change updated `ReviewableQueuedPost`'s `created_by`
to be consistent with other reviewable types. It assigns
the creator of the post being queued to `target_created_by` and sets
the `created_by` to the creator of the reviewable itself.
This fix updates some of the `created_by` references missed during the
initial fix.
Internal oneboxes to posts containing oneboxed GitHub links to
commits or PRs (with commit messages long enough to have the `show-more`
and `excerpt hidden` classes in their HTML) were being stripped of
their content, resulting in empty internal oneboxes.
see: https://meta.discourse.org/t/269436
This fixes a regression introduced in:
0b3cf83e3c
What is the problem here?
In multiple controllers, we accept a `limit` param but do not
impose any upper bound on the values being accepted. Without an upper
bound, we may be allowing arbitrary users to generate DB queries
which may end up exhausting the resources on the server.
What is the fix here?
A new `fetch_limit_from_params` helper method is introduced in
`ApplicationController` that can be used by controller actions to safely
get the limit from the params, as a default limit and a maximum limit have
to be set. When an invalid limit param is encountered, the server will
respond with a 400 response code.
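Roughly, such a helper could look like the following (a sketch of the described behaviour; the real helper's exact signature and edge cases may differ):
```
# ApplicationController-style sketch: clamp the requested limit between a
# required default and maximum, responding with 400 on invalid input.
def fetch_limit_from_params(default:, max:)
  limit = params[:limit].presence || default

  begin
    limit = Integer(limit)
  rescue ArgumentError, TypeError
    raise Discourse::InvalidParameters.new(:limit)
  end

  # Discourse::InvalidParameters is rendered as a 400 response.
  raise Discourse::InvalidParameters.new(:limit) if limit <= 0 || limit > max
  limit
end
```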
For the Discourse 3.2 beta series, we intend to use a `-dev` suffix while beta versions are being developed in `main`/`tests-passed`. When a beta version is ready, it will be 'released' without the `-dev` suffix.
This commit adds support for the `-dev` suffix, and also refactors `Discourse::VERSION` so that the canonical representation is a simple human-readable string. Constants for each segment are derived from that, so the interface remains unchanged.
The Embed Motoko service's primary URL is transitioning from embed.smartcontracts.org to embed.motoko.org; this PR updates the Onebox logic to work for either domain.
This adds support for the `<=` and `<` version operators in `.discourse-compatibility` files. This allows for more flexibility (e.g. targeting the entire 3.1.x stable release via `< 3.2.0.beta1`), and should also make compatibility files more readable.
If an operator is not specified we default to `<=`, which matches the old behavior.
We recently added a "don't feed the trolls" feature which warns you about interacting with posts that have been flagged and are pending review. The problem is the warning persists even if an admin reviews the post and rejects the flag.
After this change we only consider active flags when deciding whether to show the warning or not.
We're seeing unhandled errors in production when web push notifications are failing with an SSL error. This is happening for a few users, but generating a large amount of log noise due to the sheer number of notifications.
This adds handling of SSL errors in two places:
1. In FinalDestination::HTTP, this is handled the same as a timeout error, and gives a chance to recover.
2. In PushNotificationPusher. This will cause the notification to retry a number of times, and if it keeps failing, disable push notifications for the user. (Existing behaviour.)
I wanted to wrap the SSL error in e.g. WebPush::RequestError, but the gem doesn't have request error handling, and I didn't want our freedom patch to diverge from the gem either. Instead we just propagate the raw SSL error.
The parameter ascending was deprecated (replaced by asc) and marked for deletion in 2.9. This PR removes it. Since the resulting code was a simple one-liner, the method body was inlined instead.
Allow anonymous users (logged-in, but set to anonymous posting) to like posts
---------
Co-authored-by: Emmett Ling <eling@zendesk.com>
Co-authored-by: Nat <natalie.tay@discourse.org>
We deprecated the keywords method, route, and format (replaced with methods, actions, and formats respectively) as parameters to Plugin::Instance#add_api_parameter_route, marked for removal in 2.7. This PR deletes them.
These methods were deprecated and marked for removal in 2.6. This change deletes them.
These deprecations use raise_error: true, so the fallbacks are at this point unreachable and can't be used anyway.
- Convert `admin-incoming-email` modal to component-based API
- Testing that the modal was working in local development was extremely challenging due to the need for `rejected` and `bounced` emails, something that is not easy to stub in a local dev environment. To make this process smoother for future developers I have added a new rake task:
```
desc "Creates sample email logs"
task "email_logs:populate" => ["db:load_config"] do |_, args|
  DiscourseDev::EmailLog.populate!
end
```
That will generate fully functional email logs in development to be toyed with.
<img width="787" alt="Screenshot 2023-07-20 at 3 27 04 PM" src="https://github.com/discourse/discourse/assets/50783505/47b3fe34-cd7e-49a5-8fe6-768c0fbd1aa2">
* DEV: Skip srcset for onebox thumbnails
In an effort to preserve bandwidth, especially for mobile devices, this
change prevents upscaled srcset attributes from being added to
onebox thumbnail images.
Short of checking the HTML for onebox classes, our database structure for
uploads does not distinguish between regular images and onebox thumbnail
images; however, all uploaded images in Discourse do have a thumbnail. By
default this thumbnail is what is used for the non-upscaled onebox image,
so we should only use that thumbnail. Because the rendered onebox image
size is likely smaller than the upload thumbnail size, there really
shouldn't be a need to upscale.
Followup to b583872eed
and 54001060ea
Another place where we need to filter hashtag types to
only enabled ones is PrettyText, though the latter PR
above should also already make it so the correct priority
types are passed.
This is causing errors in the email processing workflow
for some customers (presumably ones with tagging disabled).
Performing a `Delete User`/`Delete and Block User` reviewable action for a
queued post reviewable from the `review.show` route results in an error
popup even if the action completes successfully.
This happens because, unlike other reviewable types, a user delete action
on a queued post reviewable results in the deletion of the reviewable
itself. A subsequent attempt to reload the reviewable record results in a
404. The deletion happens as part of the call to `UserDestroyer`, which
includes a step for destroying reviewables created by the user being
destroyed. At the root of this is the creator of the queued post
being set as the creator of the reviewable instead of the system
user.
This change assigns the creator of the reviewable to the system user and
uses the more appropriate `target_created_by` column for the creator of the
post being queued.
Wikimedia provides a thumbnail url for its images, so we should use that
for oneboxes instead of the full-size image. Because the size of the
onebox image we display is quite small anyway, the thumbnail Wikimedia
provides should suffice and will save bandwidth.
See: https://meta.discourse.org/t/264039
The primary motivation is to simplify `eagerLoadRawTemplateModules`, which currently introspects the module dependencies (the `imports` at runtime). This is no longer supported in Embroider as the AMD shims do not have any dependencies (since it's managed internally with webpack).
Previously we had three query parameters to control which tests would be run. The default was to run all core/plugin tests together, which would almost always lead to errors and does not match the way we run tests in CI.
This commit removes the three old parameters (skip_core, skip_plugins and single_plugin), and introduces a new 'target' parameter. This can have a value of 'core', 'plugins', 'all', or a specific plugin name. The default is 'core'. Attempting to use the old parameters will raise an error.
Why this change?
Prior to this change, dismissing unread posts did not publish the
change across clients for the same user. As a result, users could end up
seeing an unread count while no topics were loaded when
visiting the `/unread` route.
Instead of having to remember every time, just always wait until the
current transaction (if it exists) has committed before clearing any
DistributedCache.
The only exception to this is caches that aren't caching things from
postgres.
This means we have to do the test setup after setting the test
transaction, because doing the test setup involves clearing caches.
Reapplying this - it now doesn't use after_commit if skip_db is set
* FEATURE: Inline topic summary. Cached version accessible to everyone.
Anons and non-members of the `custom_summarization_allowed_groups_map` groups can see cached summaries for any accessible topic. After the first 12 hours and if the posts to summarize have changed, allowed users clicking on the button will automatically re-generate it.
* Ensure chat summaries work and prevent model hallucinations when there are no messages.
These avatar-related helper functions are used in pretty-text, which currently means we load the entire `discourse/lib/utilities` module into the mini-racer when running pretty-text on the server side. This stops us adding any logic or imports to discourse/lib/utilities which may depend on other `discourse/` namespace features.
This commit moves the avatar-related utils into a dedicated module in the `discourse-common` namespace, adds backwards-compatibility shims, and updates the pretty-text config accordingly.
This adds an option to allow non-image s3 files to be downloaded through CDN URL.
Addresses the issues in:
* meta.discourse.org/t/s3-cdn-url-not-being-used-on-non-image-uploads/175332
* meta.discourse.org/t/s3-uploads-using-cdn-for-pdfs/213218
Instead of having to remember every time, just always wait until the
current transaction (if it exists) has committed before clearing any
DistributedCache.
The only exception to this is caches that aren't caching things from
postgres.
This means we have to do the test setup after setting the test
transaction, because doing the test setup involves clearing caches.
Follow-up to b27e12445d
This commit adds 2 new site settings `default_sidebar_link_to_filtered_list` and `default_sidebar_show_count_of_new_items` to control the default values for the navigation menu preferences that were added in the linked commit (`sidebar_link_to_filtered_list` and `sidebar_show_count_of_new_items` respectively).
Why this change?
The process's pid is useful when we're trying to link output from
different processes together. In this case, we want to be able to link
the Rails server logs to the right rspec process.
Before:
[2] Viewing sidebar mobile collapses the sidebar when clicking outside of it
After:
[2] (#176342) Viewing sidebar mobile collapses the sidebar when clicking outside of it
Recently, site setting watched_precedence_over_muted was introduced - https://github.com/discourse/discourse/pull/22252
In this PR, we are allowing users to override it. The option is only displayed when the user has watched categories and muted tags, or vice versa.
This will be used when we move the channel creation for DMs
to happen when we first send a message in a DM channel to avoid
a double-request. For now we can just have a new API endpoint
for creating this that the existing frontend code can use,
that uses the new service pattern.
This also uses the new policy pattern for services where the policy
can be defined in a class so a more dynamic reason for the policy
failing can be sent to the controller.
Co-authored-by: Loïc Guitaut <loic@discourse.org>
Twitter is now redirecting anonymous users (with a browser-like user agent, which FinalDestination uses) to the login page. Skipping redirect-following for twitter.com will allow us to continue oneboxing tweets via the OpenGraph data and the API (when credentials are present).
https://meta.discourse.org/t/269371/17
Updates the interface for implementing summarization strategies and adds a cache layer to summarize topics once.
The cache stores the final summary and each chunk used to build it, which will be useful when we have to extend or rebuild it.
New setting which allows admins to define the behavior when a topic is in a watched category but is also muted, or vice versa.
If the watched_precedence_over_muted setting is true, the topic is still visible in the list of topics and a notification is created.
If the watched_precedence_over_muted setting is false, the topic is not visible in the list of topics and the notification is skipped as well.
While we are unable to support OAuth2 with POP3 (due to upstream dependency ruby/net-pop#16), we are adding support for mail poller plugins. This makes it possible to write a plugin which uses other means (the Microsoft Graph SDK, for example) to poll emails from a mailbox.
The idea is that a plugin defines a class which inherits from Email::Poller and implements a poll_mailbox static method returning an array of strings. The plugin then calls register_mail_poller(<class_name>) to have it registered. All the configuration (OAuth2 tokens, email, etc.) can be managed by site settings defined in the plugin.
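Based on that description, a plugin-side sketch might look like this (the class name and the body of the method are hypothetical):
```
# plugin.rb-style sketch of a custom mail poller.
class GraphMailPoller < Email::Poller
  # Returns an array of raw RFC822 email strings fetched from an external API.
  def self.poll_mailbox
    # e.g. call the Microsoft Graph API here using tokens stored in site settings
    []
  end
end

register_mail_poller(GraphMailPoller)
```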
This change adds support for retroactively updating display names in the new quote format when a user's name is changed. It happens through a background job triggered by a callback when a user is saved with a new name.
In exceptional cases people may need to resize the notifications table.
This only happens on forums with a total of more than 2.5 billion notifications.
This rake task can be used to convert all the notification columns to
bigint to make more room.
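The underlying conversion is just a column type change; a rough sketch of the kind of statement involved (the task name here is hypothetical, and the real task also handles the many columns that reference notification ids):
```
# Illustrative only; not the actual rake task.
task "notifications:sketch_id_to_bigint" => :environment do
  DB.exec("ALTER TABLE notifications ALTER COLUMN id TYPE bigint")
end
```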
https://meta.discourse.org/t/markdown-preview-and-result-differ/263878
The result of this markdown had different results in the composer preview and the post. This is solved by updating Loofah to the latest version and using html5 fragments like our user had reported. While the change was only needed in cooked_post_processor.rb for this fix, other areas also had to be updated due to various side effects.
Upstream added a capital 'T' to the 'Translation missing' message in https://github.com/ruby-i18n/i18n/commit/c5c6e753f3. This caused our translate accelerator patch to diverge, and the change in case affected a number of our specs. This commit updates the translate accelerator to match the upstream casing, and introduces a spec to detect future divergence.
This method is a huge footgun in production, since it calls
the Redis KEYS command. From the Redis documentation at
https://redis.io/commands/keys/:
> Warning: consider KEYS as a command that should only be used in
production environments with extreme care. It may ruin performance when
it is executed against large databases. This command is intended for
debugging and special operations, such as changing your keyspace layout.
Don't use KEYS in your regular application code.
Since we were only using `delete_prefixed` in specs (now that we
removed the usage in production in 24ec06ff85)
we can remove this and instead rely on `use_redis_snapshotting` on the
particular tests that need this kind of clearing functionality.
Added a new modifier hook to allow plugins to modify the @mentions
extracted from a cooked text.
Use case: some plugins may change how mentions are cooked to prevent
them from being confused with user or group mentions and from displaying the user
card.
This modifier hook allows the plugin to filter the mentions detected or add new ways
to add mentions into cooked text.
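A plugin-side sketch of using such a modifier (the modifier name and block signature here are assumptions for illustration, not the actual hook registered by this commit):
```
# plugin.rb-style sketch: filter out mentions the plugin cooks differently.
register_modifier(:cooked_mentions) do |mentions, cooked|
  mentions.reject { |mention| mention.start_with?("bot-") }
end
```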
Communities can use either the sidebar or the header dropdown, therefore "navigation menu" is a better name. Settings are renamed in 2 places:
- Old user sidebar preferences;
- Site setting about default tags and categories.
When we introduced the new quote format with full-name display name:
```
[quote="Ted Johansson, post:1, topic:2, username:ted"]
```
we overlooked the code responsible for rewriting quotes when a user's name is changed.
The functional part of this change adds support for the new quote format in the code that updates quotes when a user's username changes. See the test case in `spec/services/username_changer_spec.rb` for the details.
In addition, this change adds a regression test for PrettyText to cover the new quote format, and extracts the code responsible for rewriting raw and cooked quotes into its own `QuoteRewriter` class. The functionality of the latter is tested through the tests in `spec/services/username_changer_spec.rb`.
* FEATURE: Content custom summarization strategies.
This PR establishes a pattern for plugins to register alternative ways of summarizing content by extending a class that defines an interface.
Core controls which strategy we'll use and who has access to it through the `summarization_strategy` and `custom_summarization_allowed_groups` settings. It also defines the UI for summarizing topics.
Other plugins can access this summarization mechanism and implement their features, removing cross-plugin customizations, as it currently happens between chat and the discourse-ai plugin.
* Group membership validation and rate limiting
* Work with objects instead of classes
* Port summarization feature from discourse-ai to chat
* Rename available summaries to 'Top Replies' and 'Summary'
Previously workbox JS was vendored into our git repository, and would be loaded from the `public/javascripts` directory with a 1 day cache lifetime. The main aim of this commit is to add 'cachebuster' to the workbox URL so that the cache lifetime can be increased.
- Remove vendored copies of workbox.
- Use ember-cli/broccoli to collect workbox files from node_modules into assets/workbox-{digest}
- Add assets to sprockets manifest so that they're collected from the ember-cli output directory (and uploaded to s3 when configured)
Some of the sprockets-related changes in this commit are not ideal, but we hope to remove sprockets in the not-too-distant future.
Usually, when a user is promoted to TL2 two messages are sent. The
first one is a system message 'tl2_promotion_message' which triggers a
'system_message_sent' Discourse event.
When the event is fired and if Discourse Narrative Bot is enabled, then
a second message is sent to the recipient of the first message. The
recipient was determined by looking at the list of users that can
access that topic and picking the last one. This method does not work if
the 'site_contact_group_name' site setting is set, because it adds the group
to the list of recipients.
A solution to this problem would have been to select the last user in
the list of 'topic_allowed_users', but an even better solution is to
pass the name of the recipients when the 'system_message_sent'
Discourse event is fired.
Adds a new `[grid]` tag that can arrange images (or other media) into a grid in posts.
The grid defaults to a 3-column with a few exceptions:
- if there are only 2 or 4 items, it defaults to a 2-column grid (because it generally looks better)
- on mobile, it defaults to a 2-column grid
- if there is only one item, the grid has no effect
AWS recommends running buckets without ACLs, and to use resource policies to manage access control instead.
This is not a bad idea, because S3 ACLs are whack, and while resource policies are also whack, they're a more constrained form of whack.
Further, some compliance regimes get antsy if you don't go with the vendor's recommended settings, and arguing that you need to enable ACLs on a bucket just to store images in there is more hassle than it's worth.
The new site setting (s3_use_acls) cannot be disabled when secure
uploads is enabled -- the latter relies on private ACLs for security
at this point in time. We may want to reexamine this in future.
Not all revisions involve changes to the actual post/topic content. We
may want to know if a revision includes the topic title or post raw.
Specifically introducing these for use in the Akismet plugin to
conditionally queue checks.
What does this change do?
Suggested topics by default are ordered in the following way:
1. Unread topics in current category of topic that is being viewed
2. Unread topics in other categories
3. New topics in current category of topic that is being viewed
4. New topics in other categories
5. Random topics
With the experimental new new view, we want to remove the concept of
read and new so that the new order is as follows:
1. Topics created by the current user with posts that the user has not
read ordered by topic's bumped date
2. Topics in current category of topic with posts that the user has not
read ordered by topic's bumped date
3. Topics in other categories with posts that the user has not read
ordered by topic's bumped date
4. Random topics ordered by topic's bumped date
When a topic already has multiple synonym tags of a target tag, if we try to update the `tag_id` column to the target tag id, it will raise a unique violation error since there are multiple synonyms present on the topic. So before doing that action, we must delete the problematic tags so the topic has only one synonym tag to update.
This is not an issue when the topic has a target tag already along with synonyms.
`DiscourseIpInfo` expects zeitwerk auto-loading to be available, so we need to ensure the rake task loads the full rails environment. Normally we run this task as part of assets:precompile, so the app is already initialized. This commit only affects the case where the maxmind task is run directly.
Tests were sometimes failing with an error similar to the following:
```
1) UsernameChanger#override when unicode_usernames is off overrides the username if a new name has different case
Failure/Error:
protect { v8.eval(<<~JS) }
__paths = #{paths_json};
__utils.avatarImg({size: #{size.inspect}, avatarTemplate: #{avatar_template.inspect}}, __getURL);
JS
MiniRacer::RuntimeError:
ReferenceError: __optInput is not defined
# JavaScript at exports.helperContext (<anonymous>:21:17)
# JavaScript at getRawAvatarSize (<anonymous>:108:49)
# JavaScript at avatarUrl (<anonymous>:102:21)
# JavaScript at Object.avatarImg (<anonymous>:129:15)
# JavaScript at <anonymous>:2:9
# ./lib/pretty_text.rb:259:in `block in avatar_img'
# ./lib/pretty_text.rb:661:in `block in protect'
# ./lib/pretty_text.rb:661:in `synchronize'
# ./lib/pretty_text.rb:661:in `protect'
# ./lib/pretty_text.rb:259:in `avatar_img'
# ./app/jobs/regular/update_username.rb:14:in `execute'
```
This should not be needed, as it should already have been initialised, but it should stop the flakiness for now while being a safe change.
Prior to this commit, we didn't have RTL versions of our admin and plugins CSS bundles and we always served LTR versions of those bundles even when users used an RTL locale, causing admin and plugins UI elements to never look as good as when an LTR locale was used. Example of UI issues prior to this commit were: missing margins, borders on the wrong side and buttons too close to each other etc.
This commit creates an RTL version for the admin CSS bundle as well as RTL bundles for all the installed plugins and serves those RTL bundles to users/sites who use RTL locales.
* FEATURE: reduce avatar sizes to 6 from 20
This PR introduces 3 changes:
1. SiteSetting.avatar_sizes now does what it says on the tin.
Previously it would introduce a large number of extra sizes to allow for
various DPIs. Instead we now trust the admin with the size list.
2. When `avatar_sizes` changes, we ensure consistency and remove resized
avatars that are no longer allowed per site setting. This happens on the
12 hourly job and is limited out of the box to 20k cleanups per cycle, given
this may reach out to AWS 20k times to remove things.
3. Our default avatar sizes are now "24|48|72|96|144|288". These sizes were
very specifically picked to limit the amount of blurriness introduced by WebKit.
Our avatars are already blurry due to a 1px border, so this corrects old blur.
This change heavily reduces the storage required by forums, which simplifies
site moves and more.
Co-authored-by: David Taylor <david@taylorhq.com>
This commit adds modifiers that allow plugins to change how categories and groups are prefetched into the application and listed in the respective controllers.
Possible use cases:
- prevent some categories/groups from being prefetched when the application loads for performance reasons.
- prevent some categories/groups from being listed in their respective index pages.
Rescuing them still makes timing-out tests fail but doesn't break `after` spec cleanup (which could trigger more errors). A custom error class is used to avoid any other possible timeout-catching code.
Also:
* remove an unnecessary `.select { |x| x.size > 0 }`
* fix a typo in a test title
Privacy Policy and Terms of Service topics are no longer created by
default for communities that have not set a company name. For this
reason, some URLs were pointing to a 404 page.
Why is this change required?
By default, `RSpec` comes with a `--profile=[COUNT]` option as well but
enabling that option means that the entire test suite needs to be
executed. This does not work so well for `turbo_rspec` which splits our
test files into various "buckets" for the tests to be executed in
multiple processes. Therefore, this commit adds a similar
`--profile=[COUNT]` option to `turbo_rspec` but will only profile the
tests being executed. Examples:
`LOAD_PLUGINS=1 bin/turbo_rspec --profile plugins/*/spec/system`
or
`LOAD_PLUGINS=1 bin/turbo_rspec --profile=20 plugins/*/spec/system`
What is the problem?
In the test environment, we were calling `SiteSetting.setting` directly
to introduce new site settings. However, this leads to changes in the state of the SiteSettings
hash that is stored in memory as tests run. Changing or leaking state
when running tests is one of the major contributors to test flakiness.
An example of how this resulted in test flakiness is our `spec/integrity/i18n_spec.rb` spec file, which
had a test case that would fail because a new "plugin_setting" site
setting was registered in another test case but did not
have translations set.
What is the fix?
There are a couple of changes being introduced in this commit:
1. Make `SiteSetting.setting` a private method as it is not safe to be
exposed as a public method of the `SiteSetting` class
2. Change test cases to use existing site settings in Discourse instead
of creating custom site settings. Existing site settings are not
removed often so we don't really need to dynamically add new site
settings in test cases. Even if the site settings being used in test
cases are removed, updating the test cases to rely on other site
settings is a very easy change.
3. Set up a plugin instance in the test environment as a "fixture"
instead of having each test create its own plugin instance.
Legal topics, such as the Terms of Service and Privacy Policy topics
do not make sense if the entity creating the community is not a company.
These topics will be created and updated only when the company name is
present and deleted when it is not.
In some cases reverse chronological order can be very important.
- Oldest post by sam
- Oldest topic by sam
Prior to these new filters we had no way of searching for them.
Now the 2 new orders `order:oldest` and `order:oldest_topic` can be used
to find the oldest posts and topics.
* Update spec/lib/search_spec.rb
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* Update spec/lib/search_spec.rb
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
---------
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* FIX: Video thumbnails can have duplicates
It's possible that a duplicate video or even a very similar video could
generate the same video thumbnail. Because video thumbnails are mapped
to their corresponding video by using the video sha1 in the thumbnail
filename we need to allow for duplicate thumbnails otherwise even when a
thumbnail has been generated for a topic it will not be mapped
correctly.
This will also allow you to re-upload a video on the same topic to
regenerate the thumbnail.
* fix typo
This commit makes some fundamental changes to how hashtag cooking and
icon generation works in the new experimental hashtag autocomplete mode.
Previously we cooked the appropriate SVG icon with the cooked hashtag,
though this has proved inflexible especially for theming purposes.
Instead, we now cook a data-ID attribute with the hashtag and add a new
span as an icon placeholder. This is replaced on the client side with an
icon (or a square span in the case of categories) via
the decorateCooked API for posts and chat messages.
This client side logic uses the generated hashtag, category, and channel
CSS classes added in a previous commit.
This is missing changes to the sidebar to use the new generated CSS
classes and also colors and the split square for categories in the
hashtag autocomplete menu -- I will tackle this in a separate PR so it
is clearer.
This regressed in eec10efc3d, meaning that backend plugin spec failures in CI were not failing the spec suite.
Fixes recent regressions and skips two of them - to be handled next week.
---------
Co-authored-by: Andrei Prigorshnev <a.prigorshnev@gmail.com>
https://meta.discourse.org/t/improving-mailman-email-parsing/253041
When mirroring a public mailing list which uses mailman, there were some cases where the incoming email was not associated with the proper user.
As it happens, for various (undetermined) reasons, the email address of the sender is often not in the `From` header but can be in any of the following headers: `Reply-To`, `CC`, `X-Original-From`, `X-MailFrom`.
It might be in other headers as well, but those were the ones we found the most reliable.
The old method updated only existing records, without considering that
new tags might have been created or some tags might not exist anymore.
This was usually not a problem because the stats were also updated by
other code paths.
However, the ensure consistency job should be more solid and help when
other code paths fail or after importing data.
Also, category tag stats should be updated whenever the other
category stats are updated.
Currently, processing emails that are blank or have a nil value for the mail will cause several errors.
This update allows emails with a blank body or a missing sender to log the blank email error to the mail logs rather than raising an error.
In #21498, we split `BaseStore#download` into a "safe" version which returns nil on errors, and an "unsafe" version which raises an exception, which was the old behaviour of `#download`.
This change updates call sites that used the old `#download`, which raised exceptions, to use the new `#download!` to preserve behaviour (and silence deprecation warnings.)
It also silences the deprecation warning in tests.
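A minimal before/after sketch of the call-site change being described (the store and upload objects here are placeholders):
```ruby
# Before: relied on the old #download behaviour, which raised on errors
# and now emits a deprecation warning.
file = Discourse.store.download(upload)

# After: the raising behaviour is explicit.
file = Discourse.store.download!(upload)
```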
* DEV: add new thread icon
* FIX: Use new thread icon, fix typo in SVG
UX: move the thread list icon to the right of
the collapse button
---------
Co-authored-by: Martin Brennan <martin@discourse.org>
* FIX: Displaying the wrong number of minimum tags in the composer
When the minimum number of tags set for the category is larger than the minimum number of tags
set in the category tag-groups, the composer was displaying the wrong value.
This commit fixes the value displayed in the composer to show the max value between the required
for the category and the tag-groups set for the category.
This bug was reported on Meta in https://meta.discourse.org/t/tags-from-multiple-tag-groups-required-only-suggest-select-at-least-one-tag/263817
* FIX: Limiting tags in categories not working as expected
When a category was restricted to a tag group A, which was set to only allow
one tag from the group per topic, selecting a tag belonging only to A returned
other tags from A that also belonged to other group/s (if any).
Example:
Tag group A: alpha, beta, gamma, epsilon, delta
Tag group B: alpha, beta, gamma
Both tag groups set to only allow one tag from the group per topic.
If Category 1 was set to only allow tags from the tag group A, and the first tag
selected was epsilon, then, because they also belonged to tag group B, the tags
alpha, beta, and gamma were still returned as valid options when they should not be.
This commit ensures that once a tag from a tag group that restricts its tags to
one per topic is selected, no other tag from this group is returned.
This bug was reported on Meta in https://meta.discourse.org/t/limiting-tags-to-categories-not-working-as-expected/263143.
* FIX: Moving topics does not prompt to add required tag for new category
When a topic moved from a category to another, the tag requirements
of the new category were not being checked.
This allowed a topic to be created and moved to a category:
- that limited the tags to a tag group, with the topic containing tags
not allowed.
- that required N tags from a tag group, with the topic not containing
the required tags.
This bug was reported on Meta in https://meta.discourse.org/t/moving-tagged-topics-does-not-prompt-to-add-required-tag-for-new-category/264138.
* FIX: Editing topics with tag groups from parents allows incorrect tagging
When there was a combination between parent tags defined in a tag group
set to allow only one tag from the group per topic, and other tag groups
relying on this restriction to combine the children tag types with the
parent tag, editing a topic could allow the user to insert an invalid
combination of these tags.
Example:
Automakers tag group: landhover, toyota
- group set to limit one tag from the group per topic
Toyota models group: land-cruiser, hilux, corolla
Landhover models group: evoque, defender, discovery
If a topic was initially set up with the tags toyota, land-cruiser it was
possible to edit it by removing the tag toyota and adding the tag landhover
and other landhover model tags like evoque for example.
In this case, the topic would end up with the tags toyota, land-cruiser,
landhover, evoque because Discourse will automatically insert the
missing parent tag toyota when it detects the tag land-cruiser.
This combination of tags would violate the restriction specified in
the Automakers tag group resulting in an invalid combination of tags.
This commit enforces that the "one tag from the group per topic"
restriction is verified before updating the topic tags and also
make sure the verification checks the compatibility of parent tags that
would be automatically inserted.
After the changes, the user will receive an error similar to:
The tags land-cruiser, landhover cannot be used simultaneously.
Please include only one of them.
Watched words were converted to regular expressions containing \W, which
handles only ASCII characters. Using [^[:word:]] instead ensures that
UTF-8 characters are also handled correctly.
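A minimal illustration of the difference (not the production watched-words code):
```ruby
# Ruby's \w covers only ASCII word characters, so a word containing "ü"
# does not fully match; the POSIX class [[:word:]] is Unicode-aware.
/\A\w+\z/.match?("über")          # => false
/\A[[:word:]]+\z/.match?("über")  # => true
```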
* Color for turbo_rspec in CI (`progress` and `documentation` formats)
* Show "DONE" only when `documentation` formatter is used
* Fix formatting
* Collapse RSpec commands
* Add line wrapping to the `progress` formatter (to mitigate GH Actions issue)
- Update welcome topic copy
- Edit the welcome topic automatically when the title or description changes
- Remove “Create your Welcome Topic” banner/CTA
- Add "edit welcome topic" user tip
### Background
When SSRF detection fails, the exception bubbles all the way up, causing a log alert. This isn't actionable, and should instead be ignored. The existing `rescue` does already ignore network errors, but fails to account for SSRF exceptions coming from `FinalDestination`.
### What is this change?
This PR does two things.
---
Firstly, it introduces a common root exception class, `FinalDestination::SSRFError` for SSRF errors. This serves two functions: 1) it makes it easier to rescue both errors at once, which is generally what one wants to do and 2) prevents having to dig deep into the class hierarchy for the constant.
This change is fully backwards compatible thanks to how inheritance and exception handling works.
---
Secondly, it rescues this new exception in `UserAvatar.import_url_for_user`, which is causing sporadic errors to be logged in production. After this SSRF errors are handled the same as network errors.
After this change, in order to join a chat channel, a user needs to be in a group with at least “Reply” permission for the category. If the user only has “See” permission, they are able to preview the channel, but not join it or send messages. The auto-join function also follows this new restriction.
---------
Co-authored-by: Martin Brennan <martin@discourse.org>
A category's slug can be encoded when
`SiteSetting.slug_generation_method` has been set to "encoded". As a
result, we have to support non-ASCII characters as well.
This commit adds support for excluding categories when using the
`category:` filter with the `-` prefix. For example,
`-category:category-slug` will exclude all topics that belong to the
category with slug "category-slug" and all of its sub-categories.
To only exclude a particular category and not all of its sub-categories,
the `-` prefix can be used with the `=` prefix. For example,
`-=category:category-slug` will only exclude topics that belong to the
category with slug "category-slug". Topics in the sub-categories of
"category-slug" will still be included.
Specifying more than two tag names when using the `tag:` filter was not
working because of a bug in the code where only the first two values in
the `tag:` filter were being selected.
What is the problem?
Consider the following timeline:
1. OP starts a topic.
2. Troll responds snarkily.
3. Flagger flags the post as “inappropriate”.
4. Admin agrees and hides the post.
5. Troll ninja-edits the post within the grace period, but still snarky.
6. Flagger flags the post as inappropriate again.
The current behaviour is that the flagger is met with an error saying the post has been reviewed and can't be flagged again for the same reason.
The desired behaviour is that after someone has edited a post, it should be flaggable again.
Why is this happening?
This is related to the ninja-edit feature, where within a set grace period no new revision is created, but a new revision is required to flag the same post for the same reason.
So essentially there is a window between the naughty corner cooldown where a flagged post can't be edited, and the ninja-edit grace period, where an edit can be made without a new revision. Posts that are edited within this window can't be re-flagged by the same user.
|-----------------|-------------------------------|
^ Flag accepted | ~~~~~~~~~~~~~ 🥷🏻 ~~~~~~~~~~~~ |
| ^ Editing grace period over
^ Naughty corner cooldown over
How does this fix it?
We already create a new revision when ninja-editing a post with a pending flag. The issue above happens only in the case where the flag is already accepted.
This change extends the existing behaviour so that a new revision is created when ninja-editing any flagged post, regardless of the status of the flag. (Deleted flags excluded.)
This should also help with posterity, avoiding situations where a successfully flagged post looks innocuous in the history because it was ninja-edited, and vice versa.
This header is used by Microsoft Exchange to indicate when certain types of
autoresponses should not be generated for an email.
It triggers our "is this mail autogenerated?" detection, but should not be used
for this purpose.
Clearing modifiers during a plugin spec run will affect all future specs. Instead, this commit introduces a more surgical `.unregister_modifier` API which plugins can use if they need to add/remove a modifier during a specific spec.
`TopicQuery#latest_results`, which was being used by
`TopicQuery#list_filter`, defaults to ordering by `Topic#bumped_at` in
descending order, and that was taking precedence over the order scopes
being applied by `TopicsFilter`.
This allows multiple orderings to be specified using a comma separated string.
For example, `order:created,views` would order the topics by
`Topic#created_at` and then by `Topic#views`.
An older change to image optimisation caused the selector that adds lightboxing not to apply to quoted images. This fixes that; the old selector condition is no longer applicable because optimisation now occurs in a separate place.
This change allows quoted images to be opened in a lightbox.
This new modifier can be used by plugins to modify search ordering.
Specifically plugins such as discourse_solved can amend search ordering
so solved topics bump to the top.
Also correct edge case where low and high sort priority categories did not
order correctly when it came to closed/archived
This commit adds support for the following ordering filters:
1. `order:activity` which orders the topics by `Topic#bumped_at` in descending order
2. `order:activity-asc` which orders the topics by `Topic#bumped_at` in ascending order
3. `order:latest-post` which orders the topics by `Topic#last_posted_at` in descending order
4. `order:latest-post-asc` which orders the topics by `Topic#last_posted_at` in ascending order
5. `order:created` which orders the topics by `Topic#created_at` in descending order
6. `order:created-asc` which orders the topics by `Topic#created_at` in ascending order
7. `order:views` which orders the topics by `Topic#views` in descending order
8. `order:views-asc` which orders the topics by `Topic#views` in ascending order
9. `order:likes` which orders the topics by `Topic#likes` in descending order
10. `order:likes-asc` which orders the topics by `Topic#likes` in ascending order
11. `order:likes-op` which orders the topics by `Post#like_count` of the first post in the topic in descending order
12. `order:likes-op-asc` which orders the topics by `Post#like_count` of the first post in the topic in ascending order
13. `order:posters` which orders the topics by `Topic#participant_count` in descending order
14. `order:posters-asc` which orders the topics by `Topic#participant_count` in ascending order
15. `order:category` which orders the topics by `Category#name` of the topic's category in descending order
16. `order:category-asc` which orders the topics by `Category#name` of the topic's category in ascending order
Multiple order filters can be composed together and the ordering is applied based on the position of the filter
in the query string. For example, `order:views order:created` will order the topics by `Topic#views` in descending order
and then by `Topic#created_at` in descending order.
This commit adds support for the following date filters:
1. `activity-before:<YYYY-MM-DD>` which filters for topics that have been bumped at or before given date
2. `activity-after:<YYYY-MM-DD>` which filters for topics that have been bumped at or after given date
3. `created-before:<YYYY-MM-DD>` which filters for topics that have been created at or before given date
4. `created-after:<YYYY-MM-DD>` which filters for topics that have been created at or after given date
5. `latest-post-before:<YYYY-MM-DD>` which filters for topics with the
latest post posted at or before given date
6. `latest-post-after:<YYYY-MM-DD>` which filters for topics with the
latest post posted at or after given date
If the filter has an invalid value, i.e. a string that cannot be converted
into a proper date in the `YYYY-MM-DD` format, the filter will be ignored.
If a filter is specified multiple times, only the last
occurrence of each filter will be taken into consideration.
Large or broken images are removed from oneboxes, but sometimes images
were removed when they were oneboxed too. The reason is that images can
be oneboxed by the AllowlistedGenericOnebox or ImageOnebox and only
AllowlistedGenericOnebox was handled correctly.
- Move the old `define_include_method` arg to a `respect_plugin_enabled` kwarg
- Introduce an `include_condition` kwarg which can be passed a lambda with inclusion logic. The lambda will be run via `instance_exec` in the context of the serializer instance (see the sketch below)
This is backwards compatible - old-style invocations will trigger a deprecation message
Update chat and poll plugins to new pattern
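As a rough illustration of the new style in a plugin.rb (the field name, setting, and inclusion logic here are hypothetical, not taken from the chat or poll changes):
```ruby
# The include_condition lambda is instance_exec'd in the serializer,
# so `object` and `scope` are available inside it.
add_to_serializer(
  :post,
  :my_plugin_field,
  include_condition: -> { SiteSetting.my_plugin_enabled && object.wiki },
) do
  object.custom_fields["my_plugin_field"]
end
```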
By default, the Sprockets DirectiveProcessor introduces a newline between possible 'header' comments and the rest of the JS file. This causes sourcemaps to be offset by 1 line, and therefore breaks browser tooling. We know that Ember-Cli assets do not use Sprockets directives, so we can totally bypass the DirectiveProcessor for those files.
We're using v3 of Sprockets, which is no longer supported - upstreaming a fix will be difficult. Long term, we intend to move away from sprockets.
This PR adds the ability to destroy reviewables for a passed user via the API. This was not possible before, as this action was reserved for reviewables you created yourself.
If a user is an admin and calls the `#destroy` action from the API, they are able to destroy a reviewable for a passed user. A user can be targeted by passing either their:
- username
- external_id (for SSO)
to the request.
In the case you attempt to destroy a non-personal reviewable and
- You are not an admin
- You do not access the `#destroy` action via the API
the request will fail with a `Discourse::InvalidAccess` (403) error and the reviewable will not be destroyed.
Responding to negative behaviour tends to elicit more of the same. Common wisdom states: "don't feed the trolls".
This change codifies that advice by introducing a new nudge when hitting the reply button on a flagged post. It will be shown if either the current user, or two other users (configurable via a site setting) have flagged the post.
This is to help generate random channels and chat
messages for local dev. This was removed in 12a18d4d55
presumably because it was not worth refactoring at the
time.
I've only added these tasks:
- `rake chat:message:populate\[113,20\]` (channel_id, count)
- Generates the count of messages for a channel ID provided,
otherwise uses a random channel and 200 count.
- `rake chat:category_channel:populate`
- Creates a chat channel for a random category.
- `rake chat:thread:populate\[132,5\]` (channel_id, message_count)
- Creates a thread with N messages in the specified channel,
and enables threading in that channel if necessary
It switches to a different command for detecting the current git branch because the old command always returned HEAD as branch when the git repository is on a detached head (e.g. tag). The new command doesn't return a branch when the repository is on a detached head, which allows us to fall back to the `version` variable that is stored in the git config since https://github.com/discourse/discourse_docker/pull/707. It contains the value of the `version` from `app.yml`.
It also includes a small change to specs, because our tests usually run on specific commits instead of a branch or tag, so Discourse.git_branch always returns "unknown". We can use the "unknown" branch for tests, so it makes sense to ignore it only in other envs.
This fixes a 500 error that occurs when adding a tag to a category's
restricted tag list if the category's restricted tags already included a
synonym tag.
* DEV: Support `likes-(min:max):<count>` on `/filter` route
This commit adds support for the following filters:
1. `likes-min`
2. `likes-max`
3. `views-min`
4. `views-max`
5. `likes-op-min`
6. `likes-op-max`
If the filter has an invalid value, i.e. a string that cannot be converted
into an integer, the filter will be ignored.
If a filter is specified multiple times, only the last
occurrence of each filter will be taken into consideration.
This commit adds support for the `posters-min:<count>` and
`posters-max:<count>` filters for the topics filtering query language.
`posters-min:1` will filter for topics with at least one poster while
`posters-max:3` will filter for topics with a maximum of 3 posters.
If the filter has an invalid value, i.e. a string that cannot be converted
into an integer, the filter will be ignored.
If a filter is specified multiple times, only the last
occurrence of each filter will be taken into consideration.
This commit adds support for the `posts-min:<count>` and
`posts-max:<count>` filters for the topics filtering query language.
`posts-min:1` will filter for topics with at least one post while
`posts-max:3` will filter for topics with a maximum of 3 posts.
If the filter has an invalid value, i.e. a string that cannot be converted
into an integer, the filter will be ignored.
If a filter is specified multiple times, only the last
occurrence of each filter will be taken into consideration.
When we "pull hotlinked images" on onebox images, they are added to the uploads table and their dominant color is calculated. This commit adds the data to the HTML so that it can be used by the client in the same way as non-onebox images. It also adds specific handling to the new `discourse-lazy-videos` plugin.
This feature will allow sites to define which emoji are not allowed. Emoji in this list should be excluded from the set we show in the core emoji picker used in the composer for posts when emoji are enabled. And they should not be allowed to be chosen to be added to messages or as reactions in chat.
This feature prevents denied emoji from appearing in the following scenarios:
- topic title and page title
- private messages (topic title and body)
- inserting emojis into a chat
- reacting to chat messages
- using the emoji picker (composer, user status etc)
- using search within emoji picker
It also takes into account the various ways that emojis can be accessed, such as:
- emoji autocomplete suggestions
- emoji favourites (auto populates when adding to emoji deny list for example)
- emoji inline translations
- emoji skintones (ie. for certain hand gestures)
Why this change?
Previously `TopicsFilter` was designed in such a way that we act on a
filter sequentially based on the order it was matched. However, this
made it hard to support filters composition where a similar filter may
be present further in the query string. Because of this limitation, I
previously introduced a private API `TopicsFilter.register_scope` which
allows us to act on a filter only after the entire query string has been
scanned. However, I felt that it made the code complicated and hard to
reason about.
In this commit, I've changed it such that we scan through the entire
query string and group the values of each filter together. This allows
us to act on the values of a given filter in one go, which I find easier
to reason about. This also opens up the possibility of ignoring
certain filters when they have been specified multiple times.
This commit adds support for the `created-by:<username>` query filter
which will return topics created by the specified user. Multiple
usernames can be specified by comma separating the usernames like so:
`created-by:username1,username2`. This will filter for topics created by
either of the specified users. Multiple `created-by:<username>` can also
be composed together. `created-by:username1 created-by:username2` is
equivalent to `created-by:username1,username2`.
This was inadvertently removed in 4c46c7e. In very specific scenarios,
this could be used to execute arbitrary JavaScript.
Only affects instances where SVGs are allowed as uploads and CDN is not
configured.
Previously, Discourse's password hashing was hard-coded to a specific algorithm and parameters. Any changes to the algorithm or parameters would essentially invalidate all existing user passwords.
This commit introduces a new `password_algorithm` column on the `users` table. This persists the algorithm/parameters which were used to generate the hash for a given user. All existing rows in the users table are assumed to be using Discourse's current algorithm/parameters. With this data stored per-user in the database, we'll be able to keep existing passwords working while adjusting the algorithm/parameters for newly hashed passwords.
Passwords which were hashed with an old algorithm will be automatically re-hashed with the new algorithm when the user next logs in.
Values in the `password_algorithm` column are based on the PHC string format (https://github.com/P-H-C/phc-string-format/blob/master/phc-sf-spec.md). Discourse's existing algorithm is described by the string `$pbkdf2-sha256$i=64000,l=32$`
To introduce a new algorithm and start using it, make sure it's implemented in the `PasswordHasher` library, then update `User::TARGET_PASSWORD_ALGORITHM`.
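For reference, the parameters encoded in that string map directly onto Ruby's standard PBKDF2 helper; a standalone sketch (not Discourse's `PasswordHasher` code):
```ruby
require "openssl"

salt = OpenSSL::Random.random_bytes(16)

# "$pbkdf2-sha256$i=64000,l=32$": 64,000 iterations of PBKDF2-HMAC-SHA256
# producing a 32-byte hash.
digest =
  OpenSSL::KDF.pbkdf2_hmac(
    "s3cret password",
    salt: salt,
    iterations: 64_000,
    length: 32,
    hash: "sha256",
  )

hex_hash = digest.unpack1("H*")
```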
This commit adds support for the `in:<topic notification level>` query
filter. As an example, `in:tracking` will filter for topics that the
user is watching. Filtering for multiple topic notification levels can
be done by comma separating the topic notification level keys. For
example, `in:muted,tracking` or `in:muted,tracking,watching`.
Alternatively, the user can also compose multiple filters with `in:muted
in:tracking` which translates to the same behaviour as
`in:muted,tracking`.
This commit adds support for the `in:pinned` filter to the topics filtering
query language. When the filter is present, it will filter for topics
where `Topic#pinned_until` is greater than `Topic#pinned_at`.
This adds a SiteSetting which, when enabled, creates a small_action post for tag/category changes to the topic. It uses `topic.add_moderator_post`, and passes raw text in to describe the change.
This was introduced to the standard library in Ruby 2.4. In my testing, it produces the same result, and is around 8x faster than our pure-ruby implementation
Before this commit, composing multiple category filters with a query such as category:category1 and category:category2 would not return any results. This is because we were filtering for topics that belonged to both category1 and category2, which is impossible since a topic can only belong to a single category.
With this commit, specifying a query like category:category1 category:category2 will now translate to filtering for topics that belong to either the category1 or category2 category.
Invite only and Discourse connect could not be enabled at the same time
because of some legacy reason. This is a follow up commit to ce04db8,
355d51a and 40f6ceb.
Followup to #20915. If we're grouping search results that don't rely on core's search, we won't have access to pg headlines. This is now configurable via the constructor, defaulting to `SiteSetting.use_pg_headlines_for_excerpt`
We use schema_migration_details to determine the age of a site in multiple migrations. This commit moves the logic into a dedicated `Migration::Helpers` module so that it doesn't need to be re-implemented every time.
This commit adds support for filtering for topics in specific
subcategories via the categories filter query language.
For example: `category:documentation:admins` will filter for topics and
subcategory topics in
the category with slug "admins" whose parent category has the slug
"documentation".
The `=` prefix can also be used such that
`=category:documentation:admins` will exclude subcategory topics of the
category with slug "admins" whose parent category has the slug
"documentation".
On the `/filter` route, the categories filtering query language is now
supported in the input per the example provided below:
```
category:bug => topics in the bug category AND all subcategories
=category:bug => topics in the bug category excluding subcategories
category:bug,feature => allow for categories either in bug or feature
=category:bug,feature => allow for exact categories match excluding sub cats
categories: => alias for category
```
Currently, composing multiple category filters is not supported as we
have yet to determine what behaviour it should result in. For example,
`category:bug category:feature` would filter for topics that are in both
the `bug` and `feature` categories, but a topic cannot belong to two
categories, so no results would be returned.
Add a modifier that will allow us to tune the results returned by suggested.
At the moment the modifier allows us to toggle including random results.
This was created for the discourse-ai module. It needs to switch off random
results when it returns related topics.
Longer term we can use it to toggle unread/new and other aspects.
This also demonstrates how to test the contract when adding modifiers.
This was added in d3f02a1270
for hashtags but its usage was later removed in
b2acc416e7. It was removed because
serializing the user does not include things like their
secure_categories.
It is not used by any other plugins or themes, and can cause
issues where it will error when operating on a null user. Better
to just pass in the user_id and use it to look up a user
directly in a PrettyText::Helper
Similar to the _map added for group_list SiteSettings in
e62e93f83a, this commit adds
the same extension for simple and compact `list` type SiteSettings,
so developers do not have to do the `.to_s.split("|")` dance
themselves all the time.
For example:
```
SiteSetting.markdown_linkify_tlds
=> "com|net|org|io|onion|co|tv|ru|cn|us|uk|me|de|fr|fi|gov|ddd"
SiteSetting.markdown_linkify_tlds_map
=> ["com", "net", "org", "io", "onion", "co", "tv", "ru", "cn", "us", "uk", "me", "de", "fr", "fi", "gov"]
```
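A rough sketch of the kind of generated reader this describes (the real implementation lives in the site settings machinery and may differ):
```ruby
# For a pipe-delimited list setting, expose a companion <name>_map reader
# that returns the split values.
SiteSetting.define_singleton_method(:markdown_linkify_tlds_map) do
  SiteSetting.markdown_linkify_tlds.to_s.split("|")
end

SiteSetting.markdown_linkify_tlds_map.first # => "com"
```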
There is no need to validate the user's emails when
promoting/demoting their trust level; this can cause
issues in things like Jobs::Tl3Promotions. We don't
need to fail in that case when all we are doing is changing
the trust level.
Introduces a new API for plugin data modification without class-based extension overhead.
This commit introduces a new API that allows plugins to modify data in cases where they return different data rather than additional data, as is common with filtered_registers in DiscoursePluginRegistry. This API removes the need for defining class-based extension points.
When a plugin registers a modifier, it will automatically be called if the plugin is enabled. The core will then modify the parameter sent to it using the block registered by the plugin:
```ruby
DiscoursePluginRegistry.register_modifier(plugin_instance, :magic_sum_modifier) { |a, b| a + b }
sum = DiscoursePluginRegistry.apply_modifier(:magic_sum_modifier, 1, 2)
expect(sum).to eq(3)
```
Key features of these modifiers:
- Operate in a stack (first registered, first called)
- Automatically disabled when the plugin is disabled
- Pass the cumulative result of all block invocations to the caller
The following are the changes being introduced in this commit:
1. Instead of mapping the query language to various query params on the
client side, we've decided that the benefits of having a more robust
query language far outweigh the benefits of having more human readable query params in the URL.
As such, the `/filter` route will just accept a single `q` query param
and the query string will be parsed on the server side.
2. On the `/filter` route, the tags filtering query language is now
supported in the input per the example provided below:
```
tags:bug+feature tagged both bug and feature
tags:bug,feature tagged either bug or feature
-tags:bug+feature excluding topics tagged bug and feature
-tags:bug,feature excluding topics tagged bug or feature
```
The `tags` filter can also be specified multiple
times in the query string like so `tags:bug tags:feature` which will
filter topics that contain both the `bug` tag and `feature` tag. More
complex query like `tags:bug+feature -tags:experimental` will also work.
This change sets the ground work for allowing us to filter topics list
by tags in the following ways:
1. Filter for topics that match all tags in a given set of tags
2. Filter for topics that match any tag in a given set of tags
3. Exclude topics that match all tags in a given set of tags
4. Exclude topics that match any tag in a given set of tags
Before, incorrectly filled fields were marked with a red border. Now, additional information is displayed under the field to notify the user what is incorrect.
/t/93696
When we renamed `default_categories_regular` to `default_categories_normal`, we missed a site setting validation method. This allowed duplicate category ids in the `default_categories_normal` site setting and caused a problem in the user registration process.
5176c689e9
When setting the ACL for optimized images after setting the
ACL for the linked upload (e.g. via the SyncACLForUploads job),
we were using the optimized image path as the S3 key. This worked
for single sites; however, it would fail silently for multisite
sites since the path would be incorrect, because the Discourse.store.upload_path
was not included.
For example, something like this:
somecluster1/optimized/2X/1/3478534853498753984_2_1380x300.png
Instead of:
somecluster1/uploads/somesite1/2X/1/3478534853498753984_2_1380x300.png
The silent failure is still intentional, since we don't want to
break other things because of ACL updates, but now we will update
the ACL correctly for optimized images on multisite sites.
Sites with many categories, many of them muted by default (see
`default_categories_muted`), reported bad performance when requesting
the homepage as an anonymous user. This was the case because of the
long query that iterated over topics and categories trying to remove
those from the muted categories.
Our SafeMigrate system is designed to prevent tables/columns being dropped in pre-deploy migrations. Its regex-based detection was triggering incorrectly on `ALTER COLUMN DROP NOT NULL`.
There are many situations that may cause users to lose permission to
send messages in a chat channel. Until now we have relied on security
checks in `Chat::ChatChannelFetcher` to remove channels which the
user may have a `UserChatChannelMembership` record for but which
they do not have access to.
This commit takes a more proactive approach. Now any of the following
`DiscourseEvent` triggers may cause `UserChatChannelMembership`
records to be deleted:
* `category_updated` - Permissions of the category changed
(i.e. CategoryGroup records changed)
* `user_removed_from_group` - Means the user may not be able to access the
channel based on `GroupUser` or also `chat_allowed_groups`
* `site_setting_changed` - The `chat_allowed_groups` was updated, some
users may no longer be in groups that can access chat.
* `group_destroyed` - Means the user may not be able to access the
channel based on `GroupUser` or also `chat_allowed_groups`
All of these are handled in a distinct service run in a background
job. Users removed are logged via `StaffActionLog` and then we
publish messages on a per-channel basis to users who had their
memberships deleted.
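A rough sketch of what one of these hooks might look like; the job name and event payload shape are assumptions for illustration, not the actual chat code:
```ruby
# When a user is removed from a group, queue the service that re-checks
# their chat channel memberships in the background.
DiscourseEvent.on(:user_removed_from_group) do |group_user|
  Jobs.enqueue(
    :chat_auto_remove_memberships, # hypothetical job name
    user_id: group_user.user_id,
    group_id: group_user.group_id,
  )
end
```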
When the user has a channel that they were kicked from open, we show
a dialog saying "You no longer have access to this channel".
When they click OK we redirect them either:
* To their first other public channel, if they follow any
* To the chat browse page if they don't
This is to save on tons of requests from kicked out users getting messages
from other channels.
When the user does not have the kicked channel open, we can just
silently yoink it out of their sidebar and turn off subscriptions.
There was a lot of duplication in the svg parsing and coercion code. This reduces that duplication and causes svg sprite parsing to happen earlier so that more computation is cached.
Our schema allows `category.topic_id` to be NULL. Null values shouldn't actually happen in production, but it is very common in tests because `Fabricate(:category)` skips creating the definition topic to improve performance. Before this commit, a NULL category.topic_id would cause all subcategory topics to be excluded from a TopicQuery result. This is because, in postgres, `NULL <> anything` is falsy. Instead, we can use `IS DISTINCT FROM`, which will return true when NULL is compared to a non-NULL value.
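The difference in condensed form, where `scope` stands in for the TopicQuery relation (an illustration, not the exact query):
```ruby
# NULL <> x evaluates to NULL in SQL, so topics whose category has a NULL
# topic_id silently drop out of the result.
scope.where("topics.id <> categories.topic_id")

# IS DISTINCT FROM treats NULL as comparable, so those topics are kept.
scope.where("topics.id IS DISTINCT FROM categories.topic_id")
```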
Why is this change required?
Prior to this change, we would list all group messages that a user
has access to in the user menu messages notifications panel dropdown.
However, this did not respect the topic's notification level setting, and
group messages which the user has set to the 'normal' notification level were
being displayed.
What does this commit do?
With this commit, we no longer display all group messages that a user
has access to. Instead, we only display group messages that a user is
watching in the user menu messages notifications panel dropdown.
Internal Ref: /t/94392
Follow up to 4d2a95ffe6. Sometimes
due to the original UploadReference migration or other issues,
multiple UploadReference records can have the exact same
created_at date and time. To tiebreak and correct the SQL order
when this happens, we can add a secondary `id ASC` ordering
when we check for the first upload reference.
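A condensed sketch of the tie-broken lookup this describes (not necessarily the exact query in core):
```ruby
first_reference =
  UploadReference
    .where(upload_id: upload.id)
    .order(created_at: :asc, id: :asc) # id ASC breaks created_at ties deterministically
    .first
```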
When this was introduced way back in b96c10a903
I didn't have any frame of reference for what these max rate
limit numbers should be, so 10 seemed like a reasonable limit
until a real-world case where this did not make sense came
along.
The time has come.
Moving these into site settings, which are hidden since in most
cases there is no need to change these.
Similar spirit to e195e6f614,
this moves the Bookmarkable registration to DiscoursePluginRegistry
so plugins which are not enabled do not register additional
bookmarkable classes.
When a theme setting of type `upload` has a default upload, it should return the URL of the specified default upload until a custom upload is used for the setting. However, currently this isn't the case and we get null instead of the default upload URL.
The reason for this is because the `super` method of `#value` already returns the default upload URL (if there's one), so we can't pass that to `cdn_url` which expects an upload ID:
c961dcc757/lib/theme_settings_manager.rb (L212)
This commit fixes the bug by skipping the call to `cdn_url` when we fallback to the default upload for the setting value.
* UX: add type tag and design update
* UX: clarify status copy in reviewQ
* DEV: switch to selectKit
* UX: color approve/reject buttons in RQ
* DEV: regroup actions
* UX: add type tag and design update
* UX: clarify status copy in reviewQ
* Join questions for flagged post with "or" with new I18n function
* Move ReviewableScores component out of context
* Add CSS classes to reviewable-item based on human type
* UX: add table header for scoring
* UX: don't display % score
* UX: prefix modifier class with dash
* UX: reviewQ flag table styling
* UX: consistent use of ignore icon
* DEV: only show context question on pending status
* UX: only show table headers on pending status
* DEV: reviewQ regroup actions for hidden posts
* UX: reviewQ > approve/reject buttons
* UX: reviewQ add fadeout
* UX: reviewQ styling
* DEV: move scores back into component
* UX: reviewQ mobile styling
* UX: score table on mobile
* UX: reviewQ > move meta info outside table
* UX: reviewQ > score layout fixes
* DEV: readd `agree_and_keep` and fix the spec tests.
* Fix the spec tests
* fix the qunit test
* DEV: readd deleting replies
* UX: reviewQ copy tweaks
* DEV: readd test for ignore + delete replies
* Remove old
* FIX: Add perform_ignore back in for backwards compat
* DEV: add an action alias `ignore` for `ignore_and_do_nothing`.
---------
Co-authored-by: Martin Brennan <martin@discourse.org>
Co-authored-by: Vinoth Kannan <svkn.87@gmail.com>
Follow up to 098ab29d41. Since
we just used a `cattr_reader` on `About` this was not safe
for multisite, since some sites could have the chat plugin
enabled and some may not. Using `DiscoursePluginRegistry` gets
around this issue, and makes it so the chat stats only show
for a site if `chat_enabled` is true.
Sometimes we get Maps URLs containing a zoom level as a float (17.5z and
not 17z) but this doesn’t work with our current onebox implementation.
While Google accepts those float zoom levels, it automatically removes
the floating part of the URL (so when visiting a Maps URL containing
17.5z, the URL will be rewritten shortly after as 17z). When putting a
float zoom level in an embedded URL, this actually breaks (the Maps API
returns a 400 error).
This patch addresses the issue by allowing the onebox engine to match on
a zoom level expressed as a float, but we only keep the integer part, thus
rendering maps properly.
The ensure_consistency rake task was not marking posted as true for post authors in the TopicUser table, post migration. Create another step to set posted='t'.
Forcing the distributed mutex to raise when a notify reviewable job is running
leads to excessive errors in the logs under many conditions.
The new pattern:
1. Optimises the counting of reviewables so it is a lot faster
2. Holds the distributed lock for 2 minutes (max)
The downside is the job queue can get blocked up when tons of notify
reviewables are running at the same time. However this should be very
rare in the real world, as we only notify when stuff is flagged, which
is fairly infrequent.
This also gives a fair bit more time for the notifications, which may be
a little slow on large sites with tons of mods.
This commit implements many changes to topic and comments embedding. It
deprecates the class_name field from EmbeddableHost and suggests using
the className parameter. The discourse_username parameter has also been
deprecated; the username will now be fetched from the embedded site, from
the author or discourse-username meta tag.
See the updated code sample from Admin > Customize > Embedding page.
* FEATURE: Add className parameter for Discourse embed
* DEV: Hide class_name from EmbeddableHost
* DEV: Deprecate class_name field of EmbeddableHost
* FEATURE: Use either author or discourse-username meta tag
* DEV: Deprecate discourse_username parameter
* DEV: Improve embed code sample
This commit introduces a few experimental changes to the New topics list and "Everything" link in the sidebar:
1. Make the New topics list include unread topics
2. Make the Everything section in the sidebar link to the New topics list (`/new`)
3. Remove "unread" or "new" text next to the count and keep the count
4. The count is a sum of new and unread topics counts
All of these changes are behind an off-by-default feature flag. I've not written extensive tests for these changes because they're highly experimental.
Internal topic: t/77234.
When invoking e.g. `can_see?(Foo.new)`, the guardian checks if there's a method `#can_see_foo?` defined and if so uses that to determine whether the user can see it or not.
When such a method is not defined, the guardian currently returns `true`, but it is probably a better call (pun intended) to make it "safe by default" and return `false` instead. I.e. if you can't explicitly see it, you can't see it at all.
This change makes the change to `Guardian#can_see?` to fall back to `false` if no visibility check method is defined.
For `#can_see_user?` and `#can_see_tag?` we don't have any particular logic that prevents viewing. We previously relied on the implicit `true` value, but since that has now changed to `false`, I have explicitly implemented these two methods in the `UserGuardian` and `TagGuardian` modules. If in the future we want to add some logic for it, this would be the place.
To be clear, **the behaviour remains the same**, but the `true` value is now explicit rather than implicit.
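A condensed sketch of the fallback being described (the real Guardian code is more involved):
```ruby
def can_see?(obj)
  return false if obj.nil?

  method_name = :"can_see_#{obj.class.name.underscore}?"
  # Fall back to "not visible" when no type-specific check is defined.
  respond_to?(method_name) ? public_send(method_name, obj) : false
end
```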
We were only supporting the main name of each HighlightJS language. So, by default, you could not use `js` or `jsx` to highlight Javascript, given they are aliases for `javascript`.
This PR adds a list of aliases as a constant to core (built via a rake task), and then checks against the `highlighted_languages` site settings plus the list of aliases when processing a code block.
This commit 57caf08e13 broke
`bin/turbo_rspec` timing recording via `TurboTests::Runner`,
because we changed to using all `spec/*` folders except
`spec/system` as default for the runner, rather than
the old `['spec']` array, which is what `TurboTests::Runner`
was relying on to determine whether to record test run
time with `ParallelTests::RSpec::RuntimeLogger`.
Instead, we can just pass a new `use_runtime_info` boolean to the
runner class and use it when running against the default set of
spec files using `bin/turbo_rspec` and the turbo rspec rake task.
The current default timeout is hardcoded to 2 seconds which is proving
too low for certain cases, and resulting in sporadic timeouts due to slow DNS queries.
As of ba3f62f576, handlebars templates are colocated with js files so the path to hbs templates referenced by this rake task is no longer valid. This commit fixes the path to hbs templates and updates a couple of files that are generated by the rake task.
We call `post.update_uploads_secure_status` in both
`PostCreator` and `PostRevisor`. Only the former was checking
if `SiteSetting.secure_uploads?` was enabled, but the latter
was not. There is no need to enqueue the job
`UpdatePostUploadsSecureStatus` if secure_uploads is not
enabled for the site.
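A sketch of the guard being described; the method body here is assumed rather than copied from core:
```ruby
# In Post: skip enqueueing the job entirely when secure uploads are disabled.
def update_uploads_secure_status
  return unless SiteSetting.secure_uploads?
  Jobs.enqueue(:update_post_uploads_secure_status, post_id: id)
end
```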
- Reduce duplication of terms in the post index from unlimited to 6. This will
result in a reduced index size and reduced weighting for posts containing
a huge amount of duplicate terms. (E.g. a post containing "sam sam sam sam
sam sam sam sam" will index as "sam sam sam sam sam sam", only including
the word up to 6 times.) This corrects a flaw where title weighting could
be ignored.
- Prioritize exact matches of words in titles. Our search always performs
a prefix match. However we want to give special weight to exact title matches
meaning that a search for "sum" will find topics such as "the sum of us" vs
"summer in spring".
- Pick up fixes to our search algorithm which are missing from old indexes.
Specifically pick up the fix that indexes URLs properly. (`https://happy.com`
was stemmed to `happi` in keywords and then was not searchable)
see also:
https://meta.discourse.org/t/refinements-to-search-being-tested-on-meta/254158
Indexing will take a while and work in batches, in the background.
* Add username and name_or_username variables to SystemMessage defaults
* Allow username and name variables on welcome_user email template overrides
* Satisfy linting
* Add test
This commit adds backend support for a new topics list that combines both the current unread and new topics lists. We're going to experiment with this new list (name TBD) internally and decide if this feature is something that we want to fully build.
Internal topic: t/77234.
Currently, clicking on the unsubscribe link with a `key` associated with a
deleted topic results in an HTTP 500 response.
This change fixes that by skipping any attempt to run the topic-related flow if the topic
isn't present.
* FIX: do not notify admins on suppressed categories
Avoid notifying admins on categories where they are not explicit members
in cases where SiteSetting.suppress_secured_categories_from_admin is
enabled.
This helps keep the notification stream clean and avoids admins mistakenly
being invited to discussions that should be suppressed.
This is a combined work of Martin Brennan, Loïc Guitaut, and Joffrey Jaffeux.
---
This commit implements a base service object when working in chat. The documentation is available at https://discourse.github.io/discourse/chat/backend/Chat/Service.html
Generating documentation has been made as part of this commit with a bigger goal in mind of generally making it easier to dive into the chat project.
Working with services generally involves 3 parts:
- The service object itself, which is a series of steps where few of them are specialized (model, transaction, policy)
```ruby
class UpdateAge
  include Chat::Service::Base

  model :user, :fetch_user
  policy :can_see_user
  contract
  step :update_age

  class Contract
    attribute :age, :integer
  end

  def fetch_user(user_id:, **)
    User.find_by(id: user_id)
  end

  def can_see_user(guardian:, **)
    guardian.can_see_user(user)
  end

  def update_age(age:, **)
    user.update!(age: age)
  end
end
```
- The `with_service` controller helper, handling success and failure of the service within a service and making easy to return proper response to it from the controller
```ruby
def update
  with_service(UpdateAge) do
    on_success { render_serialized(result.user, BasicUserSerializer, root: "user") }
  end
end
```
- Rspec matchers and steps inspector, improving the dev experience while creating specs for a service
```ruby
RSpec.describe(UpdateAge) do
  subject(:result) do
    described_class.call(guardian: guardian, user_id: user.id, age: age)
  end

  fab!(:user) { Fabricate(:user) }
  fab!(:current_user) { Fabricate(:admin) }

  let(:guardian) { Guardian.new(current_user) }
  let(:age) { 1 }

  it { expect(user.reload.age).to eq(age) }
end
```
Note in case of unexpected failure in your spec, the output will give all the relevant information:
```
1) UpdateAge when no channel_id is given is expected to fail to find a model named 'user'
Failure/Error: it { is_expected.to fail_to_find_a_model(:user) }
Expected model 'foo' (key: 'result.model.user') was not found in the result object.
[1/4] [model] 'user' ❌
[2/4] [policy] 'can_see_user'
[3/4] [contract] 'default'
[4/4] [step] 'update_age'
/Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/update_age.rb:32:in `fetch_user': missing keyword: :user_id (ArgumentError)
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:202:in `instance_exec'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:202:in `call'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:219:in `call'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:417:in `block in run!'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:417:in `each'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:417:in `run!'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:411:in `run'
from <internal:kernel>:90:in `tap'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:302:in `call'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/spec/services/update_age_spec.rb:15:in `block (3 levels) in <main>'
```
The #pluck_first freedom patch, first introduced by @danielwaterworth, has served us well and is used widely throughout both core and plugins. It seems to have been a common enough use case that Rails 6 introduced its own method #pick with the exact same implementation. This allows us to retire the freedom patch and switch over to the built-in ActiveRecord method.
There is no replacement for #pluck_first!, but a quick search shows we are using it in a very limited capacity, and in some cases incorrectly (by assuming a nil return rather than an exception), which can quite easily be replaced with #pick plus some extra handling.
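For illustration, the built-in replacement and the extra handling mentioned for the bang variant (the model and conditions are placeholders):
```ruby
# Old freedom patch: User.where(admin: true).pluck_first(:username)
# Rails 6 equivalent:
User.where(admin: true).pick(:username) # => first matching username, or nil

# #pluck_first! has no direct replacement; #pick plus explicit handling works:
User.where(admin: true).pick(:username) ||
  raise(ActiveRecord::RecordNotFound, "no admin found")
```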
Ember CLI will automatically run babel transformations in parallel when the config is 'serializable', and can therefore be applied in multiple processes automatically. If any plugin is defined in an unserializable way, parallelisation will be disabled.
Our discourse-widget-hbs transformer was causing parallelisation to be disabled. This commit fixes that, and also enables the throwUnlessParallelizable flag so that we catch this kind of issue more easily in future.
This commit also refactors our deprecation silencing system into its own file, and uses a fake babel plugin to ensure deprecations are silenced in babel worker processes.
In our GitHub CI jobs, this doubles the speed of ember builds (1m30s -> 45s). It should also improve production deploy times, and cold-start dev builds.
This PR is a major change to Sass compilation in Discourse.
The new version of sass-ruby moves to dart-sass, putting us back on a supported version of Sass. It does so while keeping compatibility with the existing method signatures, so minimal change is needed in Discourse.
This moves us
From:
- sassc 2.0.1 (Feb 2019)
- libsass 3.5.2 (May 2018)
To:
- dart-sass 1.58
This update applies the following breaking changes:
>
> These breaking changes are coming soon or have recently been released:
>
> [Functions are stricter about which units they allow](https://sass-lang.com/documentation/breaking-changes/function-units) beginning in Dart Sass 1.32.0.
>
> [Selectors with invalid combinators are invalid](https://sass-lang.com/documentation/breaking-changes/bogus-combinators) beginning in Dart Sass 1.54.0.
>
> [/ is changing from a division operation to a list separator](https://sass-lang.com/documentation/breaking-changes/slash-div) beginning in Dart Sass 1.33.0.
>
> [Parsing the special syntax of @-moz-document will be invalid](https://sass-lang.com/documentation/breaking-changes/moz-document) beginning in Dart Sass 1.7.2.
>
> [Compound selectors could not be extended](https://sass-lang.com/documentation/breaking-changes/extend-compound) in Dart Sass 1.0.0 and Ruby Sass 4.0.0.
SCSS files have been migrated automatically using `sass-migrator division app/assets/stylesheets/**/*.scss`
Allows users to configure their own custom sidebar sections with links within a Discourse instance. Links can be passed as a relative path, for example "/tags", or as a full URL.
Only the path is saved in the DB, so when the Discourse domain is changed, links will still be valid.
The feature is hidden behind SiteSetting.enable_custom_sidebar_sections. This hidden setting determines the group whose members have access to this new feature.
Previously due to an error archived topics were more prominent in search
than closed topics.
This amends our internal logic to ensure archived topics are bumped down
the list.
Previously to_tsquery would split terms and join with &
In PG 14 terms are split and use <-> which means followed directly by.
In PG 13:
discourse_test=# SELECT to_tsquery('english', '''hello world''');
to_tsquery
---------------------
'hello' & 'world'
(1 row)
In PG 14:
discourse_test=# SELECT to_tsquery('english', '''hello world''');
to_tsquery
---------------------
'hello' <-> 'world'
(1 row)
The change is very unobtrusive: we simply amend our to_tsquery to behave like
it used to and make no use of the `<->` operator.
More detail at: https://akorotkov.github.io/blog/2021/05/22/pg-14-query-parsing/
Note that plainto_tsquery used elsewhere in Discourse keeps the exact
same function.
This also corrects a faulty test that was passing by a fluke on older
versions of PG.
We've had a couple of problems with the R2 gem where it generated a broken RTL CSS bundle that caused a badly broken layout when Discourse is used in an RTL language, see a3ce93b and 5926386. For this reason, we're replacing R2 with `rtlcss` that can handle modern CSS features better than R2 does.
`rtlcss` is written in JS and available as an npm package. Calling `rtlcss` from rubyland is done via the `rtlcss_wrapper` gem which contains a distributable copy of the `rtlcss` package and loads/calls it with Mini Racer. See https://github.com/discourse/rtlcss_wrapper for more details.
Internal topic: t/76263.
`--d-hover` is calculated to be equivalent to primary-100 in light mode, or primary-low in dark mode
`--d-selected` is calculated to be equivalent to primary-low in light mode, or primary-100 in dark mode
`lib/color_math` is introduced to provide some utilities for making these calculations.
The new `prioritize_exact_search_match` can be used to force the search
algorithm to prioritize exact term matches in title when ranking results.
This is scoped narrowly to titles for cases such as a topic titled:
"organisation chart" and a search of "org chart".
If we scoped this wider, all discussion about "org chart" would float to
the top and leave a very common title de-prioritized.
This is a hidden site setting and it has some performance impact due
to double ranking.
That said, the performance impact is somewhat mitigated because ranking on
title alone is a very cheap operation.
Many users seem surprised by prefix matching in search leading to
unexpected results.
Over the years we would always return results starting with a search term
rather than requiring exact matches.
Meaning a search for `abra` would find `abracadabra`.
This introduces the Site Setting `enable_search_prefix_matching` which
defaults to true. (behavior unchanged)
We plan to experiment on select sites with exact matches to see if the
results are less surprising
* FIX: Ensure soft-deleted topics can be deleted
The topic was not found during the deletion process because it was
deleted and `@post.topic` was nil.
* DEV: Use @topic instead of finding the topic every time
Under some situations, we would inadvertently return a public (unauthenticated) result to an authenticated API request. This commit adds the `Api-Key` header to our anonymous cache bypass logic.
Under scenarios of extremely high load where large numbers of `Reviewable*` items are being created, it has been observed that multiple instances of the `NotifyReviewable` job may run simultaneously.
These jobs will work satisfactorily if the concurrency is limited to 1, and the different types of jobs (items reviewable by admins, vs moderators, vs particular groups, etc.) are run eventually.
This change introduces a new option to `DistributedMutex` which allows the `max_get_lock_attempts` to be specified. If the number is exceeded an error will be raised, which will cause Sidekiq to requeue the job. Sidekiq has existing logic to back-off on retry times for jobs that have failed multiple times.
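A sketch of how the new option might be used inside the job (the key name and attempt count are illustrative):
```ruby
DistributedMutex.synchronize("notify_reviewable", max_get_lock_attempts: 3) do
  # count pending reviewables and notify admins / moderators / groups
end
```
If the lock cannot be obtained within the allowed attempts, the raised error causes Sidekiq to requeue the job, and its usual retry back-off spaces out subsequent attempts.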
There was an issue where if hashtag-cooked HTML was sent
to the ExcerptParser without the keep_svg option, we would
end up with empty </use> and </svg> tags on the parts of the
excerpt where the hashtag was, in this case when a post
push notification was sent.
Fixed this, and also added a way to only display a plaintext
version of the hashtag for cases like this via PrettyText#excerpt.
When checking whether an existing upload should be secure
based on upload references, do not count deleted posts, since
there is still a reference attached to them. This can lead to
issues where e.g. an upload is used for a post then later on
a custom emoji.
Currently, `Tag#topic_count` is a count of all regular topics regardless of whether the topic is in a read restricted category or not. As a result, any user can technically poll a sensitive tag to determine if a new topic is created in a category which the user does not have access to. We classify this as a minor leak of sensitive information.
The following changes are introduced in this commit:
1. Introduce `Tag#public_topic_count` which only counts topics that have been tagged with a given tag in public categories.
2. Rename `Tag#topic_count` to `Tag#staff_topic_count` which counts the same way as `Tag#topic_count`. In other words, it counts all topics tagged with a given tag regardless of the category the topic is in. The rename is also done so that we indicate that this column contains sensitive information.
3. Change all previous spots which relied on `Tag#topic_count` to rely on `Tag.topic_column_count(guardian)`, which will return the right "topic count" column to use based on the current scope.
4. Introduce `SiteSetting.include_secure_categories_in_tag_counts` site setting to allow site administrators to always display the tag topics count using `Tag#staff_topic_count` instead.
This fixes a longstanding issue for sites with the
secure_uploads setting enabled. What would happen is a scenario
like this, since we did not check all places an upload could be
linked to whenever we used UploadSecurity to check whether an
upload should be secure:
* Upload is created and used for site setting, set to secure: false
since site setting uploads should not be secure. Let's say favicon
* Favicon for the site is used inside a post in a private category,
e.g. via a Onebox
* We changed the secure status for the upload to true, since it's been
used in a private category and we don't check whether its originator
was a public place
* The site favicon breaks :'(
This was a source of constant consternation. Now, when an upload is _not_
being created, and we are checking if an existing upload should be
secure, we now check to see what the first record in the UploadReference
table is for that upload. If it's something public like a site setting,
then we will never change the upload to `secure`.
Prior to this change, we were parsing `Post#cooked` every time we
serialize a post to extract the usernames of mentioned users in the
post. However, the only reason we have to do this is to support
displaying a user's status beside each mention in a post on the client side when
the `enable_user_status` site setting is enabled. When
`enable_user_status` is disabled, we should avoid having to parse
`Post#cooked` since there is no point in doing so.
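A minimal sketch of the guard, with hypothetical helper names, to show the shape of the optimization:

```rb
# Hypothetical sketch: only pay the cost of parsing cooked HTML when
# user status can actually be displayed next to mentions.
def mentioned_users
  return [] unless SiteSetting.enable_user_status
  usernames = extract_mentions(object.cooked) # hypothetical helper that parses the cooked HTML
  User.where(username_lower: usernames.map(&:downcase))
end
```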
Since the new hashtag format has been added, we want site
admins to be able to rebake old posts with the old hashtag
format. This can now be done with `rake hashtags:mark_old_format_for_rebake`
which goes and marks posts with the old cooked version of hashtags
in this format for rebake:
```
<a class=\"hashtag\" href=\"/c/ux/14\">#<span>ux</span></a>
```
c.f. https://meta.discourse.org/t/what-rebake-is-required-for-the-new-autocomplete-styling/249642/12
This commit fixes the following issue:
* User creates a post
* Akismet, or some other mechanism like requiring posts to be approved,
puts the post in the review queue, deleting it
* Admin approves the post
* Email is never sent to mailing list mode subscribers
We intentionally do not enqueue this for every single post when
recovering a topic (i.e. recovering the first post) since the topics
could have a lot of posts with emails already sent, and we don't want
to clog sidekiq with thousands of notify jobs.
TL4 users can already list and unlist topics, but they can't see
the unlisted topics. This change brings things to parity by allowing
TL4 users to also see unlisted topics.
So it can easily be overwritten in a plugin for example.
### Added more tests to provide better coverage
We previously only had `u.silenced_till IS NULL` but I made it consistent with pretty much every other place where we check for "active" users.
These two new lines do change the query a tiny bit though.
**Before**
- You could not get the badge if you were currently silenced (no matter what period is being checked)
- You could get the badge if you were suspended 😬
**After**
- You can't get the badge if you were silenced during the past year
- You can't get the badge if you were suspended during the past year
### Improved the performance of the query by using `NOT EXISTS` instead of `LEFT JOIN / COUNT() = 0`
There is no difference in behaviour between
```sql
LEFT JOIN user_badges AS ub ON ub.user_id = u.id AND ...
[...]
HAVING COUNT(ub.*) = 0
```
and
```sql
NOT EXISTS (SELECT 1 FROM user_badges AS ub WHERE ub.user_id = u.id AND ...)
```
The only difference is performance-wise. The `NOT EXISTS` version is 10-30% faster on very large databases (i.e. posts and users in the millions). I checked on 3 of the largest datasets I could find.
When the thread is aborted, an exception is raised before the `start` of a job is set, and therefore raises an exception in the `ensure` block. This commit checks that `start` exists, and also adds `abort_on_exception=true` so that this issue would have caused test failures.
The `enable_new_notifications_menu` site setting allows sites that have
`navigation_menu` set to `legacy` to use the redesigned notifications
menu before switching to the new sidebar navigation menu.
This is a very subtle one. Setting the redirect URL is done by passing
a hash through a Discourse event. This has been broken on Ruby 2 since
support for keyword arguments in events was added.
In Ruby 2 the last argument is cast to keyword arguments if it is a
hash. The key point here is that this creates a new copy of the hash, so
what the plugin is modifying is not the hash that was passed.
This patch introduces a cookies rotator as indicated in the Rails
upgrade guide. This allows migrating from the old SHA1 digest to the
new SHA256 digest.
This will give us some aggregate stats on the defer queue performance.
It is limited to 100 entries (for safety) which is stored in an LRU cache.
Scheduler::Defer.stats can then be used to get an array that denotes:
- number of runs and completions (queued, finished)
- error count (errors)
- total duration (duration)
We can look later at exposing these metrics to gain visibility on the reason
the defer queue is clogged.
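A hedged example of reading the stats (the exact shape of each entry is an assumption based on the fields listed above):

```rb
# Assumes each entry exposes the fields described above; the real structure may differ.
Scheduler::Defer.stats.each do |entry|
  puts "#{entry[:queued]} queued, #{entry[:finished]} finished, " \
       "#{entry[:errors]} errors, #{entry[:duration]}s total"
end
```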
There was an issue with channel archiving, where at times the topic
creation could fail which left the archive in a bad state, as read-only
instead of archived. This commit does several things:
* Changes the ChatChannelArchiveService to validate the topic being
created first and if it is not valid report the topic creation errors
in the PM we send to the user
* Changes the UI message in the channel with the archive status to reflect
that topic creation failed
* Validate the new topic when starting the archive process from the UI,
and show the validation errors to the user straight away instead of
creating the archive record and starting the process
This also fixes another issue in the discourse_dev config, which was
failing because YAML parsing no longer enables all classes by default,
which was making the seeding rake task for chat fail.
* Firefox now finally returns PerformanceMeasure from performance.measure
* Some TODOs were really more NOTE or FIXME material or no longer relevant
* retain_hours is not needed in ExternalUploadsManager, it doesn't seem like anywhere in the UI sends this as a param for uploads
* https://github.com/discourse/discourse/pull/18413 was merged so we can remove JS test workaround for settings
Currently when generating a onebox for Discourse topics, some important
context is missing such as categories and tags.
This patch addresses this issue by introducing a new onebox engine
dedicated to displaying this information when available. To get this
new information, categories and tags are exposed in the topic metadata
as opengraph tags.
If unaccent is called with quote-like Unicode characters, it can
generate invalid queries because some of the quotes transformed by
unaccent are not escaped, and to_tsquery fails because of the bad input.
This commit replaces more quote-like Unicode characters before
unaccent is called.
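For illustration, the kind of normalization involved (a sketch, not the exact character list used in the commit):

```rb
# Sketch: map quote-like Unicode characters to plain ASCII quotes before
# the term reaches unaccent/to_tsquery.
def normalize_quotes(term)
  term.gsub(/[‘’‚‛]/, "'").gsub(/[“”„‟]/, '"')
end

normalize_quotes("can’t") # => "can't"
```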
Fixes the support for kwargs in `DiscourseEvent.trigger()` on Ruby 3, e.g.
```rb
DiscourseEvent.trigger(:before_system_message_sent, message_type: type, recipient: @recipient, post_creator_args: post_creator_args, params: method_params)
```
Fixes https://github.com/discourse/discourse-local-site-contacts
If a secure upload's access_control_post was trashed, and an anon user
tried to look at that upload, they would get a 500 error rather than
the correct 403 because of an error inside the PostGuardian logic.
This commit does a couple of things:
1. Changes the limit of tags to include a subject for a
notification email to the `max_tags_per_topic` setting
instead of the arbitrary 3 limit
2. Adds both an X-Discourse-Tags and X-Discourse-Category
custom header to outbound emails containing the tags
and category from the subject, so people on mail clients
that allow advanced filtering (i.e. not Gmail) can filter
mail by tags and category, which is useful for mailing
list mode users
c.f. https://meta.discourse.org/t/headers-for-email-notifications-so-that-gmail-users-can-filter-on-tags/249982/17
This commit fixes an issue where the chat message bookmarks
did not respect the user's `bookmark_auto_delete_preference`
which they select in their user preference page.
Also, it changes the default for that value to "keep bookmark and clear reminder"
rather than "never", which ends up leaving a lot of expired bookmark
reminders around which are a pain to clean up.
When sending emails out via group SMTP, if we
are sending them to non-staged users we want
to mask those emails with BCC, just so we don't
expose them to anyone we shouldn't. Staged users
are ones that have likely only interacted with
support via email, and will likely include other
people who were CC'd on the original email to the
group.
Co-authored-by: Martin Brennan <martin@discourse.org>
This commit changes the default return value of `Auth::ManagedAuthenticator#primary_email_verified?` to false. We're changing the default to force developers to think about email verification when building a new authentication method. All existing authenticators (in core and official plugins) have been updated to explicitly define the `primary_email_verified?` method in their subclass of `Auth::ManagedAuthenticator` (example commit 65f57a4d05).
Internal topic: t/82084.
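A hedged example of what an explicit override can look like in an authenticator subclass (the provider payload field is an assumption):

```rb
# Hedged sketch of an authenticator explicitly declaring when emails are verified.
class MyProviderAuthenticator < Auth::ManagedAuthenticator
  def name
    "my_provider"
  end

  def primary_email_verified?(auth_token)
    # Assumption: this provider reports verification in its raw payload.
    auth_token.dig(:extra, :raw_info, :email_verified) == true
  end
end
```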
The `TopicView#bookmarks` method is called by `TopicViewSerializer` and `PostSerializer`
so we want to avoid running a meaningless query when the user is not
present.
When loading posts in a topic, the topic level guardian
checks are run multiple times even though all the posts belong to the
same topic. Profiling in production revealed that this accounted for a
significant amount of request time for a user that is not staff or anon.
Therefore, we're optimizing this by memoizing the topic-level
calls in `PostGuardian`. Specifically, the result of
`TopicGuardian#can_see_topic?` and `PostGuardian#can_create_post?`
method calls are memoized per topic.
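A minimal sketch of the memoization pattern (method and variable names are illustrative, not the exact ones used):

```rb
# Illustrative per-topic memoization: repeated checks for posts in the same
# topic reuse the first result instead of re-running the topic-level checks.
def can_see_topic_memoized?(topic)
  @can_see_topic ||= {}
  @can_see_topic.fetch(topic.id) { |id| @can_see_topic[id] = can_see_topic?(topic) }
end
```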
Locally profiling shows a significant improvement for normal users
loading a topic with 100 posts.
Benchmark script command: `ruby script/bench.rb --unicorn --skip-bundle-assets --iterations 100`
Before:
```
topic user:
50: 114
75: 117
90: 122
99: 209
topic.json user:
50: 67
75: 69
90: 72
99: 162
```
After:
```
topic user:
50: 101
75: 104
90: 107
99: 184
topic.json user:
50: 53
75: 53
90: 56
99: 138
```
This commit removes 3 redundant DB queries when loading posts.
1. `@posts` will eventually have to be loaded so we can avoid two
additional queries.
2. No need to preload topic association of posts as we're already
dealing with a fixed topic in `TopicView`.
In some situations (e.g. disaster recovery), it may make sense to spin up a temporary readonly version of a cluster. In that situation, the s3 `expire_missing_assets` job would delete assets which are still in use by the canonical read-write version of the cluster.
To avoid that, this commit will skip deletion if the site is currently in readonly mode.
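A minimal sketch of the guard (the surrounding job code is elided):

```rb
# Sketch: skip S3 asset expiry entirely when this cluster is a readonly copy,
# so assets still used by the canonical read-write cluster are never deleted.
def execute(args)
  return if Discourse.readonly_mode?
  # ... existing expire_missing_assets logic ...
end
```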
1. Fix bug where we were not waiting for all unicorn workers to start up
before running benchmarks.
2. Fix a bug where headers were not used when benchmarking. Admin
benchmarks were basically running as anon user.
3. Disable rate limits when in profile env. We're pretty much going to
hit the rate limit every time as a normal user.
4. Benchmark against a topic with a fixed posts count of 100. Previously the profiling script was just randomly creating posts
and we would benchmark against a topic with a posts count of 30.
Sometimes, the script failed because no topic with a posts count of 30
existed.
5. Benchmarks are now run against a normal user on top of anon and
admin.
6. Add script option to select tests that should be run.
# Context
When a topic is reviewable by a group we give those group moderators some admin abilities including the ability to delete a topic.
# Problem
There are two main problems:
1. Currently when a group moderator deletes a topic they are redirected to root (not the same for staff)
2. Viewing the categories deleted topics (`c/foo/1/?status=deleted`) does not display the deleted topic to the group moderator (not the same for staff).
# Fix
If the `deleted_by` user is part of a group that matches the `reviewable_by_group` on a topic then don't redirect. This is the default interaction for staff to give them the ability to do things like restore the topic in case it was accidentally deleted.
To render the deleted topics as expected for the group moderator I am utilizing [the guardian scope of `guardian.can_see_deleted_topics?` for said category](https://github.com/discourse/discourse/pull/19618/files#diff-288e61b8bacdb29d9c2e05b42da6837b0036dcf1867332d977ca7c5e74a44297R802-R803)
Previously, browser logs would be printed to STDOUT halfway through the test run. This commit changes the behaviour so that the logs are included in the failure summary along with other rspec failure information.
There is an issue where chat message processing breaks due to
unhandled `SocketError` exceptions originating in the SSRF check,
specifically in `FinalDestination::Resolver`.
This change gives `FinalDestination::SSRFDetector` a new error class
to wrap the `SocketError` in, and has the `RetrieveTitle` class
handle that error gracefully.
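A hedged sketch of the wrapping pattern; the class and method names below are illustrative rather than the exact ones introduced:

```rb
require "socket"

# Illustrative: wrap the low-level SocketError in a domain-specific error so
# callers like RetrieveTitle can rescue it without swallowing unrelated failures.
module FinalDestination
  module SSRFDetector
    class LookupFailedError < StandardError; end

    def self.lookup_ips(hostname)
      Addrinfo.getaddrinfo(hostname, nil).map(&:ip_address)
    rescue SocketError => e
      raise LookupFailedError, "DNS lookup failed for #{hostname}: #{e.message}"
    end
  end
end
```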
A few specs in `dashboard_controller_spec.rb` set some state in redis but don't clean it up afterwards, which causes other specs to fail when they're run after `dashboard_controller_spec.rb`.
Related commit: 18467d4.
At the time of writing, this is what the `TopicPosterSerializer` looks
like:
```
class TopicPosterSerializer < ApplicationSerializer
attributes :extras, :description
has_one :user, serializer: PosterSerializer
has_one :primary_group, serializer: PrimaryGroupSerializer
has_one :flair_group, serializer: FlairGroupSerializer
end
```
Within `PosterSerializer`, the `primary_group` and `flair_group`
associations are requested on the `user` object. However, the associations
have not been loaded on the `user` object at this point, leading to the
N+1 queries problem. One may wonder
why the associations have not been loaded when the `TopicPosterSerializer`
has `has_one :primary_group` and `has_one :flair_group`. It turns out that `TopicPoster`
is just a struct containing the `user`, `primary_group` and
`flair_group` objects. The `primary_group` and `flair_group`
ActiveRecord objects are loaded separately in `UserLookup` and not preloaded when querying for
the users. This is done for performance reasons so that we are able to
load the `primary_group` and `flair_group` records in a single query
without duplication.
We were adding to the resolver's work queue before setting up the `@lookup` and `@parent` information. That could lead to the lookup being performed on the wrong (or `nil`) hostname. This also lead to some flakiness in specs.
This task sometimes fails in CI due to temporary network issues. Retrying twice should help resolve those situations without needing to manually restart the job.
* UX: Wizard Step Enhancements
- Remove illustrations
- Add Emoji graphic to top of steps
- Add description below step title
- Move point of contact to last step
* Move step count to header, plus some button navigation tweaks
* add remaining emoji to step headers
* fix button logic on steps
* Update Point of Contact
* remove automated messages field
* adjust styling for counter, title, and emoji
* Update wording for logos
* Fix tests
* fix prettier
* fix specs
* set same width for steps except for styling screen
* use sentence case; remove duplicate copy under your organization fields
* fix missing buttons on small screens
* add spacing to buttons; adjust font weight to labels
* adjust styling for community logo step; use sentence case for button
* update copy for point of contact text helper
* use sentence case for field labels
* fix ui tests
* use btn-back class to fix ui tests
* reduce bottom margin for toggle fields
* clean up
Co-authored-by: Ella <ella.estigoy@gmail.com>
Follow up to a review in #18937, this commit changes the HashtagAutocompleteService to no longer use class variables to register hashtag data sources or types in context priority order. This is to address multisite concerns, where one site could e.g. have chat disabled and another might not. The filtered plugin registers I added will not be included if the plugin is disabled.
In both ChatMessage#rebake! and in ChatMessageProcessor
when we were calling ChatMessage.cook we were missing the
user_id to cook with, which causes missed hashtag cooks
because of missing permissions.
Previously, a restricted category chat channel was available to all groups - even `readonly`. From now on, only users who belong to a group with `create_post` or `full` permissions can access that chat channel.
* DEV: Remove enable_whispers site setting
Whispers are enabled as long as there is at least one group allowed to
whisper, see whispers_allowed_groups site setting.
* DEV: Always enable whispers for admins if at least one group is allowed.
We were changing the user's user_option.bookmark_auto_delete_preference
to whatever they changed it to in the bookmark modal to use as default
for future bookmarks. However this was leading to a lot of confusion
since if you wanted to set it for one bookmark you had to remember to
change it back on the next one.
This commit removes that automatic functionality, and instead moves
the bookmark auto delete preference to User Preferences > Interface
in an explicit dropdown.
This commit adds a new notification that gets sent to admins when the site gets new features after an upgrade/deploy. Clicking on the notification takes the admin to the admin dashboard at `/admin` where they can see the new features under the "New Features" section.
Internal topic: t/87166.
This introduces another "section" of queries to the
hashtag autocomplete search, which returns results for
each type that start with the search term. So now results
will be in this order, and within these sections ordered
by the types in priority order:
1. Exact matches sorted by type
2. "starts with" sorted by type
3. Everything else sorted by type then name within type
Depending on the current state of things, sometimes the homepage style
wouldn't update because we were incorrectly blocking updates to the
`desktop_category_page_style` site setting if the first item in the top
menu was 'categories'.
Added a test case to handle this situation.
See https://meta.discourse.org/t/248354
When originally introduced, `attempts` was only used in the read-only check
context.
With the introduction of the exponential backoff in cda370db, `attempts` was
also used to count loop iterations, but was left inside the if block instead of
being incremented every loop, meaning the exponential backoff was only
happening when the site was recently readonly.
Co-authored-by: jbrw <jamie.wilson@discourse.org>
* FEATURE: Add chat and sidebar toggles to the setup wizard
- Fix css alignment
- Add Enable Chat Toggle
- Add Enable Sidebar Toggle
* Check for the chat plugin
* Account for new sidebar step
* update chat and sidebar description
* UI: add checkmark as a visual indicator that it is enabled
* use new navigation_menu site setting for enabling the sidebar
* fix tests
* Add tests
* Update lib/wizard/step_updater.rb
Use HEADER_DROPDOWN instead of LEGACY
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* Fix spec. Use HEADER_DROPDOWN instead of LEGACY
Co-authored-by: Ella <ella.estigoy@gmail.com>
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
The tsquery used for searching is generated using functions from both
Ruby and PostgreSQL (for example, the unaccent function). Depending on the
term used, it generated an invalid tsquery. For example "can’t"
generated "''can''t''" instead of "''can''''t''".
* FIX: Use Category.secured(guardian) for hashtag datasource
Follow up to comments in #19219, changing the category
hashtag datasource to use Category.secured(guardian) instead
of Site.new(guardian).categories here since the latter does
more work for not much benefit, and the query time is the
same. Also eliminates some Hash -> Model back and forth
busywork. Add some more specs too.
* FIX: Server-side hashtag lookup cooking user loading
When we were using the PrettyText.options.currentUser
and parsing back and forth with JSON for the hashtag
lookups server-side, we had a bug where the user's
secure categories were not loaded since we never actually
loaded a User model from the database, only parsed it
from JSON.
This commit fixes the issue by instead using the
PrettyText.options.userId and looking up the user directly
from the database when calling hashtag_lookup via the
PrettyText::Helpers code when cooking server-side. Added
the missing spec to check for this as well.
If configured, this will be used for static JS assets which are stored on S3. This can be useful if you want to use different CDN providers/configuration for Uploads and JS
This commit allows us to type # in the UI and present autocomplete
results immediately with the following logic for the topic composer,
and reversed for the chat composer:
* Categories the user can access and has not muted sorted by `topic_count`
* Tags the user can access and has not muted sorted by `topic_count`
* Chat channels the user is a member of sorted by `messages_count`
So in effect, we allow searching for hashtags without a search term.
To do this we add a new `search_without_term` to each data source so
each one can define how it wants to handle this logic.
This new site setting replaces the
`enable_experimental_sidebar_hamburger` and `enable_sidebar` site
settings as the sidebar feature exits the experimental phase.
Note that we're replacing this without deprecation since the previous
site setting was considered experimental.
Internal Ref: /t/86563
When under extremely high load, it has been observed that updating the categories table when creating a new topic can become a bottleneck.
This change will reduce the two updates to one when a new topic is created within a category, and therefore should help with performance when under extremely high load.
Note that we don't have a database table and a model for post mentions yet, and I decided to implement this without adding one to avoid heavy data migrations. Still, we may want to add such a model later; it would be convenient, and we already have such a model for mentions in chat.
Note that status appears on all mentions on all posts in a topic, except in the case where you have just posted a new post and it appears at the bottom of the topic. On such posts, status won't be shown immediately for now (you'll need to reload the page to see the status). I'll take care of it in one of the following PRs.
In some cases (e.g. user notification emails) we
are passing an excerpted/stripped version of the
post HTML to Email::Styles, at which point the
<span> elements surrounding the hashtag text have
been stripped. This caused an error when trying to
remove that element to replace the text.
Instead we can just remove all elements inside
a.hashtag-cooked and replace with the raw #hashtag
text which will work in more cases.
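A hedged Nokogiri sketch of the more robust replacement (the selector comes from the description; the attribute fallback is an assumption):

```rb
# Sketch: replace each cooked hashtag link's children with plain "#slug" text,
# which works whether or not the inner <span> survived excerpting.
doc = Nokogiri::HTML5.fragment(html)
doc.css("a.hashtag-cooked").each do |anchor|
  slug = anchor["data-slug"] || anchor.text.strip.delete_prefix("#") # data-slug is an assumption
  anchor.inner_html = "##{slug}"
end
```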
This is closer to git's redirect following behaviour. We prevented git
following redirects when we clone in order to prevent SSRF attacks.
Follow-up-to: 291bbc4fb9
When doing local oneboxes we sometimes want to allow
SVGs in the final preview HTML. The main case currently
is for the new cooked hashtags, which include an SVG
icon.
SVGs will be included in local oneboxes via `ExcerptParser` _only_
if they have the d-icon class, and if the caller for `post.excerpt`
specifies the `keep_svg: true` option.
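For example, a caller opting in might look like this (a hedged sketch of the call shape):

```rb
# Hedged sketch: keep d-icon SVGs when excerpting for a local onebox preview.
excerpt_html = post.excerpt(500, keep_svg: true)
```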
Adds stats for API and user API requests similar to regular page views.
This comes with a new report to visualize API requests per day like the
consolidated page views one.
This commit introduces a new API for registering callbacks, which we'll execute when a user gets destroyed and the `delete_posts` option is true. The chat plugin registers one callback and queues a job to destroy every message from that user in batches.
This reverts commit a71f6cf09b.
The github UI had an error I didn't notice which resulted
in a security commit being merged _after_ the bump, now
I have to redo the bump.
Previously we would unconditionally fetch all images via HTTP to grab
original sizing from cooked post processor in 2 different spots.
This was wasteful as we already calculate and cache this info in upload records.
This also simplifies some specs and reduces use of mocks.
This hasn't been necessary for many years, and is no longer supported following 84bec1cb. Only extremely old plugins might be trying to do this. All the affected open-source plugins I can find have already been updated.
We now use Ember CLI (core/plugins) and DiscourseJSProcessor (themes) for all Ember and template compilation. This commit removes the remnants of the legacy Sprockets-based Ember compilation system.
Sprockets, and its DiscourseJSProcessor-based Babel transformations, is still in use for a few assets. Ideally that will be removed/replaced in the near future.
Raw paths like `/test/path` are not supported natively in the CSP. This commit prepends the site's base URL to these paths. This allows plugins to add 'local' assets to the CSP without needing to hardcode the site's hostname.
We're going to change the default return value of the `primary_email_verified?` method of `Auth::ManagedAuthenticator` to false, so we need to explicitly define the method on authenticators to return true where it makes sense to do so.
Internal topic: t/82084.
In some setups the keys start with "original/" and "optimized/" and in some setups the key is something like "foo/original/", so lets make the filter less strict.
This will be used by plugins to handle the client side of their custom
post validations without having to overwrite the whole composer save
action as it was done in other plugins.
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
This commit fleshes out and adds functionality for the new `#hashtag` search and
lookup system, still hidden behind the `enable_experimental_hashtag_autocomplete`
feature flag.
**Serverside**
We have two plugin API registration methods that are used to define data sources
(`register_hashtag_data_source`) and hashtag result type priorities depending on
the context (`register_hashtag_type_in_context`). Reading the comments in plugin.rb
should make it clear what these are doing. Reading the `HashtagAutocompleteService`
in full will likely help a lot as well.
Each data source is responsible for providing its own **lookup** and **search**
method that returns hashtag results based on the arguments provided. For example,
the category hashtag data source has to take into account parent categories and
how they relate, and each data source has to define their own icon to use for the
hashtag, and so on.
The `Site` serializer has two new attributes that source data from `HashtagAutocompleteService`.
There is `hashtag_icons` that is just a simple array of all the different icons that
can be used for allowlisting in our markdown pipeline, and there is `hashtag_context_configurations`
that is used to store the type priority orders for each registered context.
When sending emails, we cannot render the SVG icons for hashtags, so
we need to change the HTML hashtags to the normal `#hashtag` text.
**Markdown**
The `hashtag-autocomplete.js` file is where I have added the new `hashtag-autocomplete`
markdown rule, and like all of our rules this is used to cook the raw text on both the clientside
and on the serverside using MiniRacer. Only on the server side do we actually reach out to
the database with the `hashtagLookup` function; on the clientside we just render a plainer
version of the hashtag HTML. Only in the composer preview do we do further lookups based
on this.
This rule is the first one (that I can find) that uses the `currentUser` based on a passed
in `user_id` for guardian checks in markdown rendering code. This is the `last_editor_id`
for both the post and chat message. In some cases we need to cook without a user present,
so the `Discourse.system_user` is used in this case.
**Chat Channels**
This also contains the changes required for chat so that chat channels can be used
as a data source for hashtag searches and lookups. This data source will only be
used when `enable_experimental_hashtag_autocomplete` is `true`, so we don't have
to worry about channel results suddenly turning up.
------
**Known Rough Edges**
- Onebox excerpts will not render the icon svg/use tags, I plan to address that in a follow up PR
- Selecting a hashtag + pressing the Quote button will result in weird behaviour, I plan to address that in a follow up PR
- Mixed hashtag contexts for hashtags without a type suffix will not work correctly, e.g. #ux which is both a category and a channel slug will resolve to a category when used inside a post or within a [chat] transcript in that post. Users can get around this manually by adding the correct suffix, for example ::channel. We may get to this at some point in future
- Icons will not show for the hashtags in emails since SVG support is so terrible in email (this is not likely to be resolved, but still noting for posterity)
- Additional refinements and review fixes will follow.
The UPDATE statement could lock the `uploads` table for a very long time
when the `verification_status` of lots of uploads changed. Splitting up
and simplifying the UPDATE solves that problem.
Also, this change ensures that only the needed data from the inventory
gets inserted into the `TEMP TABLE`. For example, there's no need to
have records for optimized images in that table when the `uploads` table
gets updated.
The hidden site setting `suppress_secured_categories_from_admin` will
suppress visibility of categories without explicit access from admins
in a few key areas (category drop downs and topic lists)
It is not intended to be a security wall since admins can amend any site
setting. Instead it is a feature that allows hiding the categories from the
UI.
Admins will still be able to see topics in categories without explicit
access using direct URLs or flags.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
These two dropdown fields in the setup wizard are pre-populated, and
there is no way to de-select a value, you can only change it. So we can
remove the required attribute so that an asterisk doesn't show up in the
UI.
Previously we were forcing node's max-old-space-size to be 2GB. This override was added in a01b1dd6 to avoid issues caused by a lower default node heap_size_limit on machines with less memory.
This commit makes that `max-old-space-size` override more specific so that it only applies to machines with less memory. Other machines will use Node's defaults.
The override is also lowered to 1GB. This is still high enough for the build to complete, while reducing memory usage.
https://meta.discourse.org/t/245547
* FEATURE: Default Composer Category Site Setting
- Create the default_composer_category site setting
- Replace general_category_id logic for auto selecting the composer
category
- Prevent Uncategorized from being selected if not allowed
- Add default_composer_category option to seeded categories
- Create a migration to populate the default_composer_category site
setting if there is a general_category_id populated
- Added some tests
* Add missing translation for the new site setting
* fix some js tests
* Just check that the header value is null
Currently, moderators are able to set primary group for users
irrespective of the `moderators_manage_categories_and_groups` site
setting value.
This change updates Guardian implementation to honour it.
* Remove old bookmark column ignores to follow up b22450c7a8
* Change some group site setting checks to use the _map helper
* Remove old secure_media helper stub for chat
* Change attr_accessor to attr_reader for preloaded_custom_fields to follow up 70af45055a
Previously the stylesheet cachebusting hash was based on the maximum mtime of files. This works well in development and during in-container updates (e.g. via docker_manager). However, when a fresh docker image is created for each deploy, the file mtimes will change even if the contents have not.
This commit changes the production logic to calculate the cachebuster from the filenames and contents of the relevant assets. This should be consistent across deploys, thereby improving cache hits and improving page load times.
- Ensure it works with prefixed S3 buckets
- Perform a sanity check that all current assets are present on S3 before starting deletion
- Remove the lifecycle rule configuration and delete expired assets immediately. This task should be run post-deploy anyway, so adding a 10-day window is not required
Since the system user is a regular user, it can have its
`allow_private_messages` user option turned off. Combined with our current
`can_send_private_message?(Discourse.system_user)` check inside the
CurrentUserSerializer, this prevents any user from sending messages in
the UI if the system user is not accepting PMs.
This commit adds a new `can_send_private_messages?` method to
the Guardian, which can be used in serializers and not depend
on the system user. When the user actually sends a message
we still rely on the old `can_send_private_message?(target)`
call to see if they are allowed to send the message to the target.
The new method is just to say they can "generally" send
private messages.
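The distinction between the two checks, for illustration:

```rb
# General capability check, suitable for serializers (no target user involved).
guardian.can_send_private_messages?

# Per-target check, still used at send time to validate the actual recipient.
guardian.can_send_private_message?(target_user)
```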
Previously, we didn't have a site-wide setting to set the default behavior for user profile visibility and user presence features. But we already have a user preference for that.
This task is supposed to skip uploading if the asset is already present in S3. However, when a bucket 'folder path' was configured, this logic was broken and so the assets would be re-uploaded every time.
This commit fixes that logic to include the bucket 'folder path' in the check
This should fix fetching from gitlab.
In order to get SSRF protection, we had to prevent redirects when cloning via git, but some repos are behind redirects and we want to support those too. We use `FinalDestination` before cloning to try to simulate git with redirects, but this isn't quite how git works, so there's some discrepancies between our SSRF protected cloning behavior and normal git behavior that I'm trying to work around.
This is a temporary fix. It would be better to use `FinalDestination` to simulate the first request that git makes. I aim to make it work like that in the not too distant future, but this is better for now.
Depends on: #18806
We have a banner that prompts to edit the welcome topic, so let's not
show it in the topic list until it has been edited. Previously this
banner covered the welcome topic, now the banner will be above the topic
list, so we need to hide the welcome topic.
Before this commit, there was no way for us to efficiently check which
topics in an array a user can see. Therefore, this commit
introduces the `TopicGuardian#can_see_topic_ids` method which accepts an
array of `Topic#id`s and filters out the ids which the user is not
allowed to see. The `TopicGuardian#can_see_topic_ids` method is meant to
maintain feature parity with `TopicGuardian#can_see_topic?` at all
times so a consistency check has been added in our tests to ensure that
`TopicGuardian#can_see_topic_ids` returns the same result as
`TopicGuardian#can_see_topic?`. In the near future, the plan is for us
to switch to `TopicGuardian#can_see_topic_ids` completely but I'm not
doing that in this commit as we have to be careful with the performance
impact of such a change.
This method is currently not being used in the current commit but will
be relied on in a subsequent commit.
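A hedged usage sketch (the keyword-argument form is an assumption):

```rb
# Sketch: filter a batch of topic ids down to just the ones this user may see.
visible_topic_ids = guardian.can_see_topic_ids(topic_ids: topics.map(&:id))
```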
Linking a commit from a GitHub pull request used to include the complete commit
message instead of just the first line. Now the first line is used as the title,
and the rest of the commit message is added to the body of the Onebox.
When PostRevisor is called with 'skip_validations: true' it can save
the post twice and one of the calls passes the correct 'validate: false'
argument, but the other one does not.
The filenames (minus the extensions) were being used as keys in a hash to pass to Terser, which meant that colocated connector files would overwrite each other. This commit moves the un-colocating earlier in the pipeline so that the fixed filenames are passed to Terser.
Followup to be3d6a56ce
This commit adds a new `/hashtag/search` endpoint and both
relevant JS and ruby plugin APIs to handle plugins adding their
own data sources and priority orders for types of things to search
when `#` is pressed.
A `context` param is added to `setupHashtagAutocomplete` which
a corresponding chat PR https://github.com/discourse/discourse-chat/pull/1302
will now use.
The UI calls `registerHashtagSearchParam` for each context that will
require a `#` search (e.g. the topic composer), for each type of record that
the context needs to search for, as well as a priority order for that type. Core
uses this call to add the `category` and `tag` data sources to the topic composer.
The `register_hashtag_data_source` ruby plugin API call is for plugins to
add a new data source for the hashtag searching endpoint, e.g. discourse-chat
may add a `channel` data source.
This functionality is hidden behind the `enable_experimental_hashtag_autocomplete`
flag, except for the change to `setupHashtagAutocomplete` since only core and
discourse-chat are using that function. Note this PR does **not** include required
changes for hashtag lookup or new styling.
Theme javascript is now minified using Terser, just like our core/plugin JS bundles. This reduces the amount of data sent over the network.
This commit also introduces sourcemaps for theme JS. Browser developer tools will now be able show each source file separately when browsing, and also in backtraces.
For theme test JS, the sourcemap is inlined for simplicity. Network load is not a concern for tests.
Previously, compiling theme 'extra_js' was done with a number of steps. Each theme_field would be compiled into its own value_baked column, and then the JavascriptCache content would be built by concatenating all of those compiled values.
This commit streamlines things by removing the value_baked step. The raw value of all extra_js theme_fields are passed directly to the ThemeJavascriptCompiler, and then the result is stored in the JavascriptCache.
In itself, this commit should not cause any behavior change. It is designed to open the door to more advanced compilation features which have interdependencies between different source files (e.g. template colocation, sourcemaps).
RS256 was added for Windows Hello and as a side effect we speculatively added
RS384 and RS512. These ciphers were not tested and are now failing on solo
keys. It may be the case that the ciphers are not configured correctly on
our side. It may be the case that this is a Solo key bug.
Regardless, we are removing the ciphers and will only consider adding them
again if absolutely needed.
The previous implementation would attempt to fetch groups using the end-user's Google auth token. This only worked for admin accounts, or users with 'delegated' access to the `admin.directory.group.readonly` API.
This commit changes the approach to use a single 'service account' for fetching the groups. This removes the need to add permissions to all regular user accounts. I'll be updating the [meta docs](https://meta.discourse.org/t/226850) with instructions on setting up the service account.
This is technically a breaking change in behavior, but the existing implementation was marked experimental, and is currently unusable in production google workspace environments.
Previously, when the array had both nil and string values it returned the error "comparison of NilClass with String failed". Now I added the `.compact` method to prevent this issue as per @martin-brennan's suggestion https://github.com/discourse/discourse/pull/18431#discussion_r984204788
* Revert "Revert "FEATURE: Preload resources via link header (#18475)" (#18511)"
This reverts commit 95a57f7e0c.
* put behind feature flag
* env -> global setting
* declare global setting
* forgot one spot
* FEATURE: Hide Privacy Policy and TOS topics
As a way to simplify new sites this change will hide the privacy policy
and the TOS topics from the topic list. They can still be accessed and
edited though.
* add tests
Experiment with moving from preload tags in the document head to preload information in the response headers.
While this is a minor improvement in most browsers (headers are parsed before the response body), this allows smart proxies like Cloudflare to "learn" from those headers and build HTTP 103 Early Hints for subsequent requests to the same URI, which will allow the user agent to download and parse our JS/CSS while we are waiting for the server to generate and stream the HTML response.
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>