Currently, when the MessageFormat compiler fails on some translations,
we just have the raw output from the compiler in the logs and that’s not
always very helpful.
Now, when there is an error, we iterate over the translation keys and
try to compile them one by one. When we detect one that fails, it’s
added to a list that is now output in the logs. That way, it’s easier
to know which keys are not properly translated, and the problems can be
addressed more quickly.
---
The previous implementation of this patch had a bug: it wasn’t handling
locales with a country/region code properly. So instead of iterating over
the problematic keys, it was raising an error.
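A minimal sketch of the per-key fallback; `compileMessageFormat` and the shape of the translation map are assumptions, not the actual Discourse internals:
```
// Sketch only: the compiler is passed in as a plain function so the
// example is self-contained.
type Translations = Record<string, string>;

function findFailingKeys(
  translations: Translations,
  compileMessageFormat: (message: string) => void
): string[] {
  const failedKeys: string[] = [];

  for (const [key, message] of Object.entries(translations)) {
    try {
      // Compile each translation individually instead of the whole bundle.
      compileMessageFormat(message);
    } catch {
      // Collect the key so it can be reported in the logs.
      failedKeys.push(key);
    }
  }

  return failedKeys;
}
```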
When a post has some replies, and the user clicks the button to show them, we would load ALL the replies. This could lead to a DoS if there were a very large number of replies.
This adds support for pagination to these post replies.
Internal ref t/129773
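As a rough illustration of the pagination approach; the endpoint and the `page`/`limit` parameter names here are hypothetical, not the actual route added by this change:
```
// Sketch only: loads one page of replies per request instead of all of them.
async function loadReplies(postId: number, page: number, limit = 20) {
  const response = await fetch(
    `/posts/${postId}/replies?page=${page}&limit=${limit}`
  );

  if (!response.ok) {
    throw new Error(`Failed to load replies: ${response.status}`);
  }

  return response.json();
}
```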
XHR requests are handled differently by the application and the
responses do not have any preloaded data so the cache key needs to
differentiate between those requests.
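A minimal sketch of the idea, using a hypothetical cache-key builder rather than the real caching code:
```
// Sketch only: XHR responses carry no preloaded data, so they must not be
// served from (or stored under) the same key as full page responses.
interface CacheKeyParts {
  path: string;
  isXhr: boolean;
}

function cacheKeyFor(parts: CacheKeyParts): string {
  const xhrSegment = parts.isXhr ? "xhr" : "page";
  return `${xhrSegment}:${parts.path}`;
}
```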
Currently, when the MessageFormat compiler fails on some translations,
we just have the raw output from the compiler in the logs and that’s not
always very helpful.
Now, when there is an error, we iterate over the translation keys and
try to compile them one by one. When we detect one that fails, it’s
added to a list that is now output in the logs. That way, it’s easier
to know which keys are not properly translated, and the problems can be
addressed more quickly.
This check was using the wrong scope, causing problems in certain edge cases, for example:
1. Admin adds an "on signup" field that isn't editable after signup.
2. Admin adds a "for all users" field.
3. User goes and fills in the "for all users" field from step 2.
4. User is now stuck on the required fields page without any fields showing.
With this change, we only consider "for all users" fields when asking if required custom fields are filled in.
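A minimal sketch of the narrowed check; the field names and requirement values are illustrative, not the actual model:
```
// Sketch only: decide whether the "required fields" page should block the user.
interface UserField {
  name: string;
  requirement: "on_signup" | "for_all_users" | "optional";
  value: string | null;
}

function hasUnfilledRequiredFields(fields: UserField[]): boolean {
  return fields
    // Only "for all users" fields count here; "on signup" fields that are
    // not editable after signup are ignored.
    .filter((field) => field.requirement === "for_all_users")
    .some((field) => field.value === null || field.value === "");
}
```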
We were running into errors running `ember build` on machines with high
CPU counts. It was then noted that `thread-loader`, which embroider uses, defaults to spinning
up x workers where x is the number of physical CPU cores - 1. That is
probably too many, so we set out to find an optimal count to set for
the `JOBS` env, which embroider will use to set the number of
`thread-loader` workers.
I first built an image using the following Dockerfile.
```
FROM discourse/base:release
RUN cd /var/www/discourse && sudo -EH -u discourse bundle exec rake plugin:install_all_official
RUN cd /var/www/discourse && sudo -EH -u discourse bundle exec rake assets:precompile:prereqs
```
I then ran the following command on my M3 Max MacBook Pro, which has 14
physical CPU cores.
```
for j in 1 2 4 8 14; do echo "JOBS=$j"; time docker run --rm -it -e JOBS=$j test:latest /bin/bash -c "su discourse -c 'cd /var/www/discourse && bundle exec rake assets:precompile:build'"; done
```
These are the results I got:
```
JOBS=1 0.04s user 0.03s system 0% cpu 1:01.92 total
JOBS=2 0.04s user 0.02s system 0% cpu 42.605 total
JOBS=4 0.04s user 0.02s system 0% cpu 37.012 total
JOBS=8 0.04s user 0.02s system 0% cpu 35.199 total
JOBS=14 0.04s user 0.02s system 0% cpu 37.941 total
```
We think JOBS=2 is a good default when the `JOBS` env has not been set.
Anything above that just consumes more resources for little benefit.
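A minimal sketch of how such a default could be applied; this is illustrative only, not Embroider's actual code:
```
// Sketch only: pick the thread-loader worker count from the JOBS env,
// falling back to 2 when it is unset or invalid.
function workerCount(jobsEnv: string | undefined): number {
  const jobs = Number(jobsEnv);
  // Higher values gave little benefit in the timings above.
  return Number.isFinite(jobs) && jobs > 0 ? jobs : 2;
}

// e.g. workerCount(process.env.JOBS)
```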
* DEV: Migrate notifications#id to bigint (#28444)
The `notifications.id` column is the column most likely to run out of
values. This is because it is an `int` column, which can only hold 2147483647
values, and many notifications are generated on a regular basis in an
active community. This commit migrates the column to `bigint`.
These migrations do not use `ALTER TABLE ... COLUMN ... TYPE` in order
to avoid the `ACCESS EXCLUSIVE` lock on the entire table. Instead, they
create a new `bigint` column, copy the values to the new column and
then set the new column as the primary key (see the sketch after this list).
Related columns (see `user_badges`, `shelved_notifications`) will
be migrated in a follow-up commit.
* DEV: Fix bigint notifications id migration to deal with public schema (#28538)
Follow up to 799a45a291
* DEV: Migrate shelved_notifications#notification_id to bigint (#28549)
The `notifications.id` column was migrated to `bigint` in the previous commit
799a45a291.
* DEV: Fix annotations (#28569)
Follow-up to ec8ba5a0b9
* DEV: Migrate user_badges#notification_id to bigint (#28546)
The `notifications.id` column was migrated to `bigint` in the previous commit
799a45a291. This commit migrates one of
the related columns, `user_badges.notification_id`, to `bigint`.
* DEV: Migrate `User#seen_notification_id` to `bigint` (#28572)
`Notification#id` was migrated to `bigint` in 799a45a291
* DEV: Migrate `Chat::NotificationMention#notification_id` to `bigint` (#28571)
`Notification#id` was migrated to `bigint` in 799a45a291
---------
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
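The lock-avoiding approach described in the first commit above, sketched as illustrative SQL issued through a generic query function; the column and constraint names are assumptions, not the actual migration:
```
// Sketch only: the three steps are add column, copy values, swap primary key.
async function migrateNotificationsIdToBigint(
  query: (sql: string) => Promise<void>
) {
  // 1. Add a new bigint column instead of ALTER ... TYPE, which would take
  //    an ACCESS EXCLUSIVE lock on the whole table.
  await query("ALTER TABLE notifications ADD COLUMN new_id BIGINT");

  // 2. Copy the existing values into the new column.
  await query("UPDATE notifications SET new_id = id WHERE new_id IS NULL");

  // 3. Make the new column the primary key.
  await query("ALTER TABLE notifications DROP CONSTRAINT notifications_pkey");
  await query("ALTER TABLE notifications RENAME COLUMN id TO old_id");
  await query("ALTER TABLE notifications RENAME COLUMN new_id TO id");
  await query("ALTER TABLE notifications ADD PRIMARY KEY (id)");
}
```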
In the formkit conversion in 2ca06ba236,
we missed setting a type for the UppyImageUploader for badges. We were
also not passing down `image_url` as form data, so when we used
`data.image` for that field, the badge was not updating in the UI after
page loads and the image URL was not loading for the preview.
Co-authored-by: Martin Brennan <martin@discourse.org>
We need a way to disable certain checks programmatically, e.g. on Discourse hosting. This PR adds a configuration option for this and makes it so that disabled checks aren't run as part of `#run_all`.
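A minimal sketch of the run-all behaviour, with illustrative names; the real checks and `#run_all` live server-side:
```
// Sketch only: checks disabled via configuration are skipped entirely.
interface Check {
  name: string;
  enabled: boolean;
  run: () => void;
}

function runAll(checks: Check[]) {
  for (const check of checks) {
    if (!check.enabled) {
      continue;
    }
    check.run();
  }
}
```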
QUnit tests are failing in different ways on Chromium in Debian
bookworm. We have no interest in figuring out why, as it is not a good
use of our time, and the long-term plan is to switch to Chrome for Testing
anyway.
This has been split out from https://github.com/discourse/discourse/pull/28051
so we can use this same code in plugin specs before merging the core PR.
It adds some helpers for creating local backup temp files
and cleaning them up.
We changed the design of the member access wizard step to use toggle groups instead of switches. To support existing designs for notices, we need another plugin outlet.
Merged in main here. This is a backport to stable.
Following a recent refactor, some methods from `FlagSettings` have been
renamed (`custom_types` -> `additional_message_types`). The
`PostActionType` model was still using `custom_types`, but when the renaming
was done, it was renamed to `with_additional_message` instead of
`additional_message_types`, which, under the right circumstances, would
raise an error.
* SECURITY: Update default allowed iframes list
Change the default iframe URL list so that all entries include 3 slashes.
* SECURITY: limit group tag's name length
Limit the size of a group tag's name to 100 characters.
Internal ref - t/130059
* SECURITY: Improve sanitization of SVGs in Onebox (stable)
---------
Co-authored-by: Blake Erickson <o.blakeerickson@gmail.com>
Co-authored-by: Régis Hanol <regis@hanol.fr>
Co-authored-by: David Taylor <david@taylorhq.com>
* SECURITY: Update default allowed iframes list
Change the default iframe URL list so that all entries include 3 slashes.
* SECURITY: limit group tag's name length
Limit the size of a group tag's name to 100 characters.
Internal ref - t/130059
* SECURITY: Improve sanitization of SVGs in Onebox
---------
Co-authored-by: Blake Erickson <o.blakeerickson@gmail.com>
Co-authored-by: Régis Hanol <regis@hanol.fr>
Co-authored-by: David Taylor <david@taylorhq.com>
Followup 4aea12fdcb
In certain config areas (like About) we want to be able
to fetch specific site settings by name. Sometimes we also
need to be able to fetch hidden settings, for example when
a config area is still experimental.
Splitting out a different endpoint for this purpose
allows us to be stricter with what we return for config
areas without affecting the main site settings UI or revealing
hidden settings before they are ready.
The `addCommunitySectionLink` API function accepts a second argument that determines whether the link should be added to the primary or the secondary (more) section. Due to a bug, all links were being mounted in the secondary section.
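A rough usage example in a theme/plugin initializer; the link properties and version string shown here are illustrative assumptions:
```
// Sketch only: passing `true` as the second argument targets the
// secondary ("more") section; omitting it targets the primary section.
import { withPluginApi } from "discourse/lib/plugin-api";

export default {
  name: "my-community-link",

  initialize() {
    withPluginApi("1.8.0", (api) => {
      api.addCommunitySectionLink(
        {
          name: "my-link",
          route: "discovery.latest",
          title: "Latest topics",
          text: "Latest",
        },
        true
      );
    });
  },
};
```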
Since switching to MaxMind permalinks to download the databases in
7079698cdf, we have received multiple
reports of rebuilds failing because `maxminddb:refresh` runs during
the rebuild, and a failed database download makes the whole rebuild
fail.
Downloading the MaxMind databases should not sit in the critical rebuild
path, but since we are close to the Discourse 3.3 release, we have opted
to just rescue all errors encountered when downloading the databases.
In the near future, after the Discourse 3.3 release, we will look
at moving the MaxMind database download out of the rebuild path.
We have a dedicated admin page (`/admin/customize/email_templates`) that lets admins customize all emails that Discourse sends to users. The way this page works is that it lists all translation strings that are used for emails, and the list of translation strings is currently hardcoded and hasn't been updated in years. We've had a number of new emails that Discourse sends, so we should add those templates to the list to let admins customize them easily.
Meta topic: https://meta.discourse.org/t/3-2-x-still-ignores-some-custom-email-templates/308203.
In this case, there is no 'nearPost' param in the URL. Instead, the server preloads a post-stream with whichever page of posts is requested. We can check for that situation using `postStream.firstPostPresent`.
Also updates the widget-header version to fetch a value from the service on initial render, instead of relying on the observer triggering.
Followup to bdec564d14
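A minimal sketch of the check described above; `isShowingFirstPage` is a hypothetical helper, not the actual code touched by this change:
```
// Sketch only: without a `nearPost` URL param, the server preloads whichever
// page of posts was requested, so inspect the preloaded stream directly.
function isShowingFirstPage(
  nearPost: number | undefined,
  postStream: { firstPostPresent: boolean }
): boolean {
  return nearPost === undefined && postStream.firstPostPresent;
}
```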