When we added direct S3 uploads to Discourse, which use
presigned URLs, we never accounted for the dualstack
endpoints that S3 provides for IPv6.
This commit fixes the issue by using the dualstack endpoints
for presigned URLs and requests in the
get-presigned-put and batch-presign-urls endpoints that back
direct uploads to S3.
It also makes regular S3 requests, for `put` and so on, use
dualstack URLs. There doesn't seem to be a downside to
doing this, but a number of specs needed to be updated to reflect it.
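For reference, a hedged sketch of a dualstack presigned PUT using the aws-sdk-s3 gem directly; Discourse wraps this in its own S3 helpers, so the bucket and key below are purely illustrative.
```ruby
require "aws-sdk-s3"

# Enabling dualstack at the client level makes both regular requests and
# presigned URLs resolve to the IPv4/IPv6 dualstack endpoint.
client = Aws::S3::Client.new(
  region: "us-east-1",
  use_dualstack_endpoint: true
)

presigner = Aws::S3::Presigner.new(client: client)
url = presigner.presigned_url(
  :put_object,
  bucket: "my-uploads-bucket",
  key: "uploads/default/original/example.png",
  expires_in: 300
)
# => "https://my-uploads-bucket.s3.dualstack.us-east-1.amazonaws.com/..."
```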
This PR makes a small visual change to the new feature item on the `/admin/whats-new` page: when a feature is gated behind an experimental site setting, the feature item now shows an "Experimental" indicator.
This commit changes the uploads:secure_upload_analyse_and_update
and uploads:disable_secure_uploads tasks so they no longer rebake
affected posts inline. Rebaking inline just took way too long, and if
the task stalled you couldn't be sure whether the rest of it had completed.
Instead, we can update the baked_version of affected posts and
let our PeriodicalUpdates job gradually rebake them. I also added
warnings suggesting that the rebake_old_posts_count site setting and
the max_old_rebakes_per_15_minutes global setting be increased before
doing this.
For good measure, the affected post IDs are written to a JSON file too.
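A minimal sketch of the approach, assuming a hypothetical `affected_post_ids` array; lowering baked_version lets the PeriodicalUpdates job pick the posts up for rebaking over time.
```ruby
# Keep a record of what was touched.
File.write("/tmp/secure_upload_affected_posts.json", affected_post_ids.to_json)

# Mark posts as stale instead of rebaking them inline; PeriodicalUpdates
# rebakes outdated posts at a rate bounded by rebake_old_posts_count and
# max_old_rebakes_per_15_minutes.
Post.where(id: affected_post_ids).update_all(baked_version: 0)
```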
This commit contains two changes to how our site setting
keyword system works:
1. Crowdin, our translation provider, does not support YAML lists,
so we are changing site setting keywords in server.en.yml to
be pipe-separated (|).
2. It's unclear to translators what they are supposed to do with
aliases of site settings whose name has changed, e.g.
min_trust_level_for_here_mention. Instead of getting these as
keywords from the yml file, we can derive them from
SiteSettings::DeprecatedSettings automatically and still use
them for client-side search.
These changes should help improve the situation for translators.
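A rough sketch of the two changes; `keyword_string`, `setting_name` and the exact shape of SiteSettings::DeprecatedSettings are assumptions for illustration, not the real implementation.
```ruby
# 1. Keywords for a setting are now a single pipe-separated string in
#    server.en.yml, so they get split at lookup time.
keywords = keyword_string.to_s.split("|").map(&:strip)

# 2. Renamed-setting aliases come from the deprecation list instead of
#    being duplicated as keywords in the yml file.
aliases =
  SiteSettings::DeprecatedSettings::SETTINGS
    .select { |old_name, new_name, *| new_name.to_s == setting_name }
    .map { |old_name, *| old_name }

search_keywords = keywords + aliases
```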
The `categories_only_optimized` category page style was introduced
in commit d37a0d401c. This commit makes
sure that style is enforced for users who can see more than 1000 categories,
in order to keep the `/categories` page functional.
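A hedged sketch of the enforcement, with a hypothetical helper; the real check lives wherever the category page style is resolved.
```ruby
CATEGORY_STYLE_OPTIMIZATION_THRESHOLD = 1000 # illustrative constant name

def effective_category_page_style(guardian)
  # Force the optimized style for users who can see a very large number of
  # categories, regardless of the configured style.
  if Category.secured(guardian).count > CATEGORY_STYLE_OPTIMIZATION_THRESHOLD
    "categories_only_optimized"
  else
    SiteSetting.desktop_category_page_style
  end
end
```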
This commit removes the feature flag for the new /about page, enabling it for all sites, and removes the code for the old /about page.
Internal topic: t/140413.
We decided to make contracts immutable once their validations have run.
It doesn't make much sense to modify a contract value
outside the contract itself.
If any processing is needed, it should happen inside the
contract.
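A minimal sketch of the resulting behavior, not the actual implementation: once validations have run, the contract is frozen, so later mutation fails.
```ruby
contract = MyContract.new(username: "alice") # hypothetical contract class
contract.valid?            # runs the validations
contract.username = "bob"  # raises FrozenError (or equivalent) afterwards
```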
Followup 30fdd7738e
Adds a new site setting and corresponding user preference
to disable smart lists. By default they are enabled, because
this is a better experience for most users; a small number of
users would prefer not to have them enabled.
Smart lists automatically append a new item to a
list started in the composer when Enter is pressed. If
Enter is pressed on an empty list item, the item is cleared.
This setting will be removed when the new composer is complete.
This patch replaces the parameters provided to a service through
`params` with the contract object.
That way, it allows better consistency when accessing input params. For
example, if you have a service without a contract, to access a
parameter, you need to use `params[:my_parameter]`. But with a contract,
you do this through `contract.my_parameter`. Now, with this patch,
you’ll be able to access it through `params.my_parameter` or
`params[:my_parameter]`.
Some methods have been added to the contract object to better mimic a
Hash. That way, when accessing/using `params`, you don’t have to think
too much about it:
- `params.my_key` is also accessible through `params[:my_key]`.
- `params.my_key = value` can also be done through `params[:my_key] =
value`.
- `#slice` and `#merge` are available.
- `#to_hash` has been implemented, so the contract object will be
automatically cast as a hash by Ruby depending on the context. For
example, with an AR model, you can do this: `user.update(**params)`.
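To make the list above concrete, here is an illustration of the Hash-like access (the key name and the model call are placeholders):
```ruby
params.my_key               #=> "value"
params[:my_key]             #=> "value", same lookup via #[]
params[:my_key] = "other"   # same effect as params.my_key = "other"
params.slice(:my_key)       # subset of the params
user.update(**params)       # #to_hash lets Ruby splat the contract
```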
Currently in services, we don't make a distinction between input
parameters, options and dependencies.
This can lead to user input modifying the service behavior, when that
was not the developer's intention.
This patch addresses the issue by changing how data is provided to
services:
- `params` is now used to hold all data coming from outside (typically
user input from a controller) and a contract will take its values from
`params`.
- `options` is a new key to provide options to a service. This typically
allows changing a service behavior at runtime. It is, of course,
totally optional.
- `dependencies` is actually anything else provided to the service (like
`guardian`) and available directly from the context object.
The `service_params` helper in controllers has been updated to reflect
those changes, so most of the existing services didn’t need specific
changes.
The options block has the same DSL as contracts, as it’s also based on
`ActiveModel`. There aren’t any validations, though. Here’s an example:
```ruby
options do
  attribute :allow_changing_hidden, :boolean, default: false
end
```
And here’s an example of how to call a service with the new keys:
```ruby
MyService.call(params: { key1: value1, … }, options: { my_option: true }, guardian:, …)
```
This fixes a bug introduced in https://github.com/discourse/discourse/pull/29244.
When the experiment toggle button was introduced, new features did not look right when the toggle button was not available.
In addition, the plugin name can be an empty string; in that case, information about new features should still be displayed.
Currently, when calling a service with its block form, a `#result`
method is automatically created on the caller object. Even though it has
never clashed so far, it could.
This patch removes that method and instead uses a more conventional
approach: the result object is now provided as an argument to the
main block. This means that if we need to access the result object in an
outcome block, it will be done like this from now on:
```ruby
MyService.call(params) do |result|
  on_success do
    # do something with the result object
    do_something(result)
  end
end
```
In the same vein, this patch introduces the ability to match keys from
the result object in the outcome blocks, like we already do with step
definitions in a service. For example:
```ruby
on_success do |model:, contract:|
  do_something(model, contract)
end
```
Instead of
```ruby
on_success do
  do_something(result.model, result.contract)
end
```
Database dumps sometimes reference functions in the `discourse_functions` schema. It's possible that some of these functions have been dropped in a newer version of Discourse. In that case, restoring an older backup will fail with an `ERROR: function discourse_functions.something_something() does not exist` error. The restore functionality contains a workaround for that problem, but it didn't work with functions created in plugin migrations.
This commit adds support for temporarily creating missing `discourse_functions` from plugins. It also adds a simple check for whether the DB migration file even contains the required `DROPPED_TABLES` or `DROPPED_COLUMNS` constant. We don't need to create an instance of the DB migration class unless one of those constants is used. This makes the restore slightly faster and works around a problem with migrations that don't implement `up` or `down` methods (e.g. `BackfillChatChannelAndThreadLastMessageIdsPostMigrate`).
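A minimal sketch of the constant check, assuming hypothetical `migration_source` and `migration_class` variables; the real code lives in the restore logic.
```ruby
# Only migrations that actually declare one of the constants are worth
# instantiating; everything else can be skipped during restore.
if migration_source.match?(/\b(DROPPED_TABLES|DROPPED_COLUMNS)\b/)
  migration = migration_class.new
  # ...read DROPPED_TABLES / DROPPED_COLUMNS from it and temporarily
  # recreate the missing discourse_functions stubs as needed...
end
```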
Adds a toggle button to enable the experimental site setting directly from the "What's new" announcement.
The toggle button is displayed when:
- the site setting exists and is a boolean;
- the potentially required plugin is enabled.
* FIX: participating user statistics...
... were miscounting
- bots
- anonymous users
- suspended users
There's now a "valid_users" function that holds the AR query for valid users and is used in all of the "users", "active_users", and "participating_users" queries.
Internal ref - t/138435
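A purely hypothetical shape of that shared query, for illustration only (name, joins and conditions are not the real implementation):
```ruby
def valid_users
  User
    .where("users.id > 0") # bots and the system user have non-positive ids
    .where("users.suspended_till IS NULL OR users.suspended_till < ?", Time.zone.now)
    .joins("LEFT JOIN anonymous_users au ON au.user_id = users.id")
    .where("au.id IS NULL") # excludes shadow accounts used for anonymous posting
end
```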
- Add concurrency when running on multisite clusters (default 10, configurable via the THEME_UPDATE_CONCURRENCY env var)
- Add a version cache for the duration of the rake task. This avoids duplicating work when many sites in the cluster have the same theme installed and it is already up to date
- Update the output to be more concurrency-friendly (all `puts`, no `print`)
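A sketch of the concurrency approach using concurrent-ruby; the `sites` enumeration and `update_remote_themes_for` helper are placeholders rather than the real task code.
```ruby
require "concurrent"

concurrency = (ENV["THEME_UPDATE_CONCURRENCY"] || 10).to_i
pool = Concurrent::FixedThreadPool.new(concurrency)
version_cache = Concurrent::Map.new # shared for the duration of the task

sites.each do |db|
  pool.post { update_remote_themes_for(db, version_cache) }
end

pool.shutdown
pool.wait_for_termination
```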
* FEATURE: Create rake task for db migrations in plugins
Before, the dev experience was clunky: we had to create a migration file in core and
move it to the plugin.
Now this process is automated: we still create the migration file in core,
but the rake task moves it to the plugin.
The usage is:
```
rake plugin:generate_migration[plugin_name,migration_name,migration_args]
rake plugin:generate_migration[discourse-automation,add_group_id_to_automation_rule,"group_id:integer"]
```
* DEV: change rake to be a generator for plugin migrations
* DEV: trying to add extra class option to migration generator
* DEV: revert to have only `plugin_migration_generator`
* DEV: remove rake task for plugin migration creation
* DEV: remove migration_generator.rb
* DEV: remove if because options with `plugin_name` will always be true
We want to allow lightboxing of smaller images, even if they are below the minimum size for image thumbnail generation.
This change sets a minimum threshold of 100 x 100 pixels for triggering the lightbox.
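A minimal sketch of the new guard, with hypothetical names; the real check lives wherever lightbox candidates are decided during post processing.
```ruby
MIN_LIGHTBOX_SIZE = 100 # px, illustrative constant name

def lightbox_candidate?(width, height)
  width >= MIN_LIGHTBOX_SIZE && height >= MIN_LIGHTBOX_SIZE
end
```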
---------
Co-authored-by: Régis Hanol <regis@hanol.fr>
This patch improves the custom `array` type available in contracts.
It can now split strings on `|` in addition to `,`, and, to be more
consistent, it also tries to cast the resulting items to integers.
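A rough sketch of the casting behavior, not the actual type class:
```ruby
def cast_array(value)
  value.to_s.split(/[|,]/).map do |item|
    item.match?(/\A-?\d+\z/) ? item.to_i : item
  end
end

cast_array("1|2|3") #=> [1, 2, 3]
cast_array("1,2,3") #=> [1, 2, 3]
cast_array("a|b")   #=> ["a", "b"]
```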
Theme modifiers can now be defined as theme settings, which allows
site operators to override the behavior of theme modifiers.
The new syntax is:
```
{
  ...
  "modifiers": {
    "modifier_name": {
      "type": "setting",
      "value": "setting_name"
    }
  }
}
```
This also introduces a new theme modifier for serialize_post_user_badges. The badge name must match the name of the badge in the badges table. The client side is updated to load this new data from the post stream serializer.
Co-authored-by: David Taylor <david@taylorhq.com>
Constants should only be assigned once. The logical OR assignment
of a constant is a relic from before we used zeitwerk for
autoloading, when bugs could cause a file to be loaded twice, resulting in
constant redefinition warnings.
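In code, the cleanup amounts to:
```ruby
# Before: a relic of pre-zeitwerk days, when a file could be loaded twice
# MY_CONSTANT ||= "value".freeze

# After: the constant is assigned exactly once
MY_CONSTANT = "value".freeze
```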
- limits security key deletes to second factor keys
- also deletes backup codes (lingering backup codes break the login flow entirely)
* Add spec for rake task to disable 2FA for a user
Currently, when the MessageFormat compiler fails on some translations,
we just have the raw output from the compiler in the logs, and that's not
always very helpful.
Now, when there is an error, we iterate over the translation keys and
try to compile them one by one. When we detect one that is failing, it's
added to a list that is now output in the logs. That way, it's easier
to know which keys are not properly translated, and the problems can be
addressed more quickly.
---
The previous implementation of this patch had a bug: it wasn't handling
locales with a country/region code properly. So instead of iterating over
the problematic keys, it was raising an error.
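A sketch of the per-key compilation loop, using a hypothetical `compile_messageformat` helper around the real compiler:
```ruby
broken_keys = []

translations.each do |key, message|
  compile_messageformat(locale, key => message)
rescue StandardError
  broken_keys << key
end

if broken_keys.any?
  Rails.logger.warn("MessageFormat compilation failed for: #{broken_keys.join(", ")}")
end
```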
If a plugin's JS failed to load for some reason, most commonly
ad blockers, the entire admin interface would break. This is because
we add links in the sidebar to the admin routes of plugins that define
them.
We already have a fix for this in the plugin list, which shows a warning
to the admin. This fix just prevents the broken link from rendering
in the sidebar when the route is not valid.
* Add migrations to ensure password hash is synced across users & user_passwords
* Persist password-related data in user_passwords instead of users
* Merge User#expire_old_email_tokens with User#expire_tokens_if_password_changed
* Add post deploy migration to mark password-related columns from users table as read-only
* Refactor UserPassword#confirm_password? and make the changes required to accommodate hashing the password after validations
In our production environment, we have been seeing Sidekiq processes
getting stuck randomly when a USR1 signal is sent to the Unicorn master
process. We have not been able to identify the root cause of why the
Sidekiq process gets stuck. We however noticed that when the Unicorn
master process receives a USR1 signal, it will reopen the logs for the
Unicorn master process first before sending a USR1 signal for the
Unicorn worker processes to reopen the logs. We figured that we should
do the same for the Sidekiq process when a USR1 signal is received.
In this commit, we introduce an arbitrary delay of 1 second before
the Sidekiq process reopens its log files, so as to allow enough time for the Unicorn
master to finish reopening its logs first.
We also skip reopening logs for the Sidekiq process if the `DISCOURSE_LOG_SIDEKIQ`
env var is not present, because there are no logs to reopen.
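A sketch of the behavior described above; `reopen_sidekiq_logs` is a placeholder for whatever actually reopens the files.
```ruby
Signal.trap("USR1") do
  next unless ENV["DISCOURSE_LOG_SIDEKIQ"] # nothing to reopen otherwise

  Thread.new do
    sleep 1 # give the Unicorn master time to finish reopening its own logs
    reopen_sidekiq_logs
  end
end
```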
There have been too many flaky tests as a result of state leaking in
Redis, so it is easier to resolve them by ensuring we flush Redis'
database between tests.
Locally on my machine, calling `Discourse.redis.flushdb` takes around
0.1ms, which means this change will have very little impact on test
runtimes.
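In RSpec terms, the idea amounts to something like this (the exact hook location in the test setup may differ):
```ruby
RSpec.configure do |config|
  config.after(:each) do
    Discourse.redis.flushdb # ~0.1ms locally, so negligible per-test cost
  end
end
```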
While using `OpenStruct` is convenient, it's generally not a good idea, as
it usually leads to performance problems;
even the `OpenStruct` source code basically says to avoid it.
Since the context object is crucial in our services, this patch replaces
`OpenStruct` with a custom implementation instead.
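A rough sketch of the direction, not the actual class: a small Hash-backed object in place of OpenStruct.
```ruby
class Context
  def initialize(attributes = {})
    @attributes = attributes.transform_keys(&:to_sym)
  end

  def [](key)
    @attributes[key.to_sym]
  end

  def method_missing(name, *args, &block)
    @attributes.key?(name) ? @attributes[name] : super
  end

  def respond_to_missing?(name, include_private = false)
    @attributes.key?(name) || super
  end
end
```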
When a post has some replies and the user clicks the button to show them, we would load ALL the replies. This could lead to a DoS if there were a very large number of replies.
This adds support for pagination of these post replies.
Internal ref t/129773
FIX: Duplicated parent posts
DEV: Query refactor
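A hypothetical shape of the paginated replies query; the association, parameter names and page size are illustrative only.
```ruby
REPLIES_PER_PAGE = 20 # illustrative page size

def paginated_replies(post, offset: 0)
  post.replies
      .order(:post_number)
      .offset(offset)
      .limit(REPLIES_PER_PAGE)
end
```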
XHR requests are handled differently by the application, and the
responses do not have any preloaded data, so the cache key needs to
differentiate between those requests.
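A minimal sketch of the idea with a hypothetical key builder; the point is only that XHR and non-XHR requests must not share a cache entry.
```ruby
def anonymous_cache_key(request)
  mode = request.xhr? ? "xhr" : "html"
  "anon-cache:#{mode}:#{request.host}#{request.fullpath}"
end
```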
Remove emoji.clear cache calls, as data.js.es6.erb hasn't existed in a while.
Emoji data is now compiled separately via JavaScript rake tasks.
Skip db and redis precompilation when no db is present.