* DEV: refactor rake asset precompile tasks
add a separate ember build task that does not depend on the rails env,
allowing us to compile assets without db+redis connections
rename EMBER_CLI_COMPILE_DONE to SKIP_EMBER_CLI_COMPILE
better semantics in build steps
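A minimal sketch of the resulting task layout, assuming hypothetical task names and a hypothetical build command (not the actual implementation):
```
# lib/tasks/assets.rake (sketch)
task "assets:precompile:ember_build" do
  unless ENV["SKIP_EMBER_CLI_COMPILE"] == "1"
    # Runs the ember-cli build directly; no :environment dependency, so no
    # db/redis connection is required to compile assets.
    system("yarn --cwd app/assets/javascripts/discourse ember build -prod", exception: true)
  end
end

# The full precompile still loads the Rails environment, but the ember build
# can now run (or be skipped via SKIP_EMBER_CLI_COMPILE) independently.
task "assets:precompile" => ["assets:precompile:ember_build", "environment"]
```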
This PR addresses the push to unify the icon representing AI throughout Discourse by using the discourse-sparkles icon.
The icon is being moved to core so that core features that were previously using the "magic" icon can be updated as well.
In two places, "magic" -> "discourse-sparkles":
1. topic summaries
2. (unreleased) chat summaries example
The UrlHelper#escape_uri helper has been deprecated and replaced by UrlHelper#normalized_encode, and was marked for removal in 3.0. This PR removes the method.
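For callers, the change is a one-line swap (illustrative):
```
# Before (deprecated, now removed):
UrlHelper.escape_uri("/uploads/default/original/my image.png")

# After:
UrlHelper.normalized_encode("/uploads/default/original/my image.png")
```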
* FIX: Secure upload post processing race condition
This commit fixes a couple of issues.
A little background -- when uploads are created in the composer
for posts, regardless of whether the upload will eventually be
marked secure or not, if secure_uploads is enabled we always mark
the upload secure at first. This is so the upload is by default
protected, regardless of post type (regular or PM) or category.
This was causing issues in some rare occasions though because
of the order of operations of our post creation and processing
pipeline. When creating a post, we enqueue a sidekiq job to
post-process the post which does various things including
converting images to lightboxes. We were also enqueuing a job
to update the secure status for all uploads in that post.
Sometimes the secure status job would run before the post process
job, marking uploads as _not secure_ in the background and changing
their ACL before the post processor ran, which meant the users
would see a broken image in their posts. This commit fixes that issue
by always running the upload security changes inline _within_ the
cooked_post_processor job.
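A rough sketch of that ordering change, assuming hypothetical method names for the post-processing steps:
```
class CookedPostProcessor
  def initialize(post)
    @post = post
  end

  def post_process
    update_uploads_secure_status # now runs inline, before lightboxing
    convert_images_to_lightboxes
  end

  private

  def update_uploads_secure_status
    @post.uploads.each { |upload| upload.update_secure_status(source: "post processor") }
  end

  def convert_images_to_lightboxes
    # ...unchanged image/lightbox handling
  end
end
```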
The other issue was that the lightbox wrapper link for images in
the post would end up with a URL like this:
```
href="/secure-uploads/original/2X/4/4e1f00a40b6c952198bbdacae383ba77932fc542.jpeg"
```
Since we weren't actually using the `upload.url` to pass to
`UrlHelper.cook_url` here, we weren't converting this href to the CDN
URL if the post was not in a secure context (the UrlHelper does not
know how to convert a secure-uploads URL to a CDN one). Now we
always end up with the correct lightbox href. This was less of an issue
than the other one, since the secure-uploads URL works even when the
upload has become non-secure, but it was a good inconsistency to fix
anyway.
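Illustratively, the lightbox wrapper href is now built from the upload's own URL so the helper can substitute the CDN URL (the keyword argument here is an assumption):
```
def lightbox_href_for(upload, post)
  # UrlHelper can rewrite upload.url to the CDN URL when the post is not in
  # a secure context; it cannot do that for a /secure-uploads/ path.
  UrlHelper.cook_url(upload.url, secure: post.with_secure_uploads?)
end
```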
As of #23867 this is now a real package, so this updates the imports to
use the real package name rather than relying on the alias. The
package was renamed because `I18n` is not a valid NPM package name;
NPM package names must be all lowercase.
This commit also introduces an eslint rule to prevent importing from
the old I18n path.
For themes/plugins, the old 'i18n' name remains functional.
There are cases where a user can copy image markdown from a public
post (such as via the discourse-templates plugin) into a PM which
is then sent via an email. Since a PM is a secure context (via the
.with_secure_uploads? check on Post), the image will get a secure
URL in the PM post even though the backing upload is not secure.
This fixes the bug in that case where the image would be stripped
from the email (since it had a /secure-uploads/ URL) but not re-attached
further down the line using the secure_uploads_allow_embed_images_in_emails
setting because the upload itself was not secure.
The flow in Email::Sender for doing this is still not ideal, but
there are chicken and egg problems around when to strip the images,
how to fit in with other attachments and email size limits, and
when to apply the images inline via Email::Styles. It's convoluted,
but at least this fixes the Template use case for now.
Why this change?
This ensures that malicious requests cannot end up causing the logs to
quickly fill up. The default chosen is sufficient for most legitimate
requests to the Discourse application.
When truncation happens, parsing of logs in supported formats like
lograge may break down.
Why this change?
The `PostsController#create` action allows arbitrary topic custom fields
to be set by any user that can create a topic. Without any restrictions,
this opens us up to potential security issues where plugins may be using
topic custom fields in security sensitive areas.
What does this change do?
1. This change introduces the `register_editable_topic_custom_field` plugin
API which allows plugins to register topic custom fields that are
editable either by staff users only or all users. The registered
editable topic custom fields are stored in `DiscoursePluginRegistry` and
exposed via a new method, `Topic#editable_custom_fields`, which is then
used in the `PostsController#create` controller action. When an unpermitted custom field is present in the `meta_data` params,
a 400 response code is returned (see the sketch after this list).
2. Removes all references to `meta_data` on a topic, as the term is confusing:
we actually mean topic custom fields.
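A hedged sketch of how a plugin might use the new API; the `staff_only:` keyword is an assumption based on the "staff users only or all users" description:
```
# plugin.rb
after_initialize do
  # Editable by any user who can create the topic:
  register_editable_topic_custom_field(:delivery_date)

  # Editable by staff only:
  register_editable_topic_custom_field(:internal_review_note, staff_only: true)
end
```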
Preloading just metadata is not always respected by browsers, and
sometimes the whole video will be downloaded. This switches to using a
placeholder image for the video and only loads the video when the play
button is clicked.
Currently, `window.I18n` is defined in an old school hand written
script, inlined into locale/*.js by the Rails asset pipeline, and
then the global variable is shimmed into a pseudo AMD module later
in `module-shims.js`.
This approach has some problems – for one thing, when we add a new
V2 addon (e.g. in #23859), Embroider/Webpack is stricter about its
dependencies and won't let you `import from "I18n";` when `"I18n"`
isn't listed as one of its `dependencies` or `peerDependencies`.
This moves `I18n` into a real package – `discourse-i18n`. (I was
originally planning to keep the `I18n` name since it's a private
package anyway, but NPM packages are supposed to have lower case
names and that may cause problems with other tools.)
This package defines and exports a regular class, but also defines
the default global instance for backwards compatibility. We should
use the exported class in tests to make one-off instances without
mutating the global instance and having to clean it up after the
test run. However, I did not attempt that refactor in this PR.
Since `discourse-i18n` is now included by the app, the locale
scripts need to be loaded after the app chunks. Since no "real"
work happens until later on when we kick things off in the boot
script, the order in which the script tags appear shouldn't be a
problem. Alternatively, we can rework the locale bundles to be more
lazy like everything else, and require/import them into the app.
I avoided renaming the imports in this commit since that would be
quite noisy and drowns out the actual changes here. Instead, I used
a Webpack alias to redirect the current `"I18n"` import to the new
package for the time being. In a separate commit later on, I'll
rename all the imports in one shot and remove the alias. As always,
plugins and the legacy bundles (admin/wizard) still rely on the
runtime AMD shims.
For the most part, I avoided refactoring the actual I18n code too
much other than making it a class, and some light stuff like `var`
into `let`.
However, now that it is in a reasonable format to work with (no
longer inside the global script context!) it may also be a good
opportunity to refactor and make clear what is intended to be
public API vs internal implementation details.
Speaking of, I took the liberty of making `PLACEHOLDER`, `SEPARATOR`
and `I18nMissingInterpolationArgument` actual constants since it
seemed pretty clear to me those were just previously stashed on to
the `I18n` global to avoid polluting the global namespace, rather
than something we expect the consumers to set/replace.
This is part 2 (of 3) for passkeys support.
This adds a hidden site setting plus routes and controller actions.
1. registering passkeys
Passkeys are registered in a two-step process. First, `create_passkey`
returns details for the browser to create a passkey. This includes
- a challenge
- the relying party ID and Origin
- the user's secure identifier
- the supported algorithms
- the user's existing passkeys (if any)
Then the browser creates a key with this information, and submits it to
the server via `register_passkey`.
2. authenticating passkeys
A similar process happens here as well. First, a challenge is created
and sent to the browser. Then the browser makes a public key credential
and submits it to the server via `passkey_auth_perform`.
3. renaming/deleting passkeys
These routes allow changing the name of a key and deleting it.
4. checking if session is trusted for sensitive actions
Since a passkey is a password replacement, we want to confirm the user's identity before allowing passkeys to be added or deleted. The `u/trusted-session` GET route returns success if the user has confirmed their session (and failure if they haven't). In the frontend (in the next PR), we use these routes to show the password confirmation screen.
The `/u/confirm-session` route allows the user to confirm their session with a password. That functionality already existed in core under the 2FA flow, but it has been abstracted into its own route here so it can be used independently. (An illustrative route sketch follows below.)
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
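An illustrative route sketch of the endpoints described above; the paths, verbs, and controller mappings are assumptions, only the action names come from this description:
```
# config/routes.rb (sketch)
Discourse::Application.routes.draw do
  post   "u/create_passkey"     => "users#create_passkey"         # 1. challenge + options for the browser
  post   "u/register_passkey"   => "users#register_passkey"       # 1. store the browser-created credential
  post   "session/passkey/auth" => "session#passkey_auth_perform" # 2. authenticate with an existing passkey
  put    "u/rename_passkey/:id" => "users#rename_passkey"         # 3. rename a key (action name assumed)
  delete "u/delete_passkey/:id" => "users#delete_passkey"         # 3. delete a key (action name assumed)
  get    "u/trusted-session"    => "users#trusted_session"        # 4. is this session confirmed?
  post   "u/confirm-session"    => "users#confirm_session"        # 4. confirm the session with a password
end
```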
For the admin plugin list we want to be able to link to
a meta topic for plugins, but we have no standard way to
do this at the moment. This adds support for meta_topic_id
alongside other plugin metadata like authors, URL etc,
that gets built into a Meta topic URL in the serializer.
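For example, a plugin could declare it in the metadata comment block at the top of its `plugin.rb` (the values here are made up):
```
# name: my-plugin
# about: An example plugin
# version: 0.1
# authors: Jane Doe
# url: https://github.com/discourse/my-plugin
# meta_topic_id: 12345
```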
To match discourse_theme CLI behavior, we should skip hidden files/directories (e.g. `.git`), and two regular directories: `node_modules/` and `src/`.
Without these excludes, it's very easy for a theme to hit the file count limit. e.g. when trying this with discourse-kanban-board, I got:
> The number of files (20366) in the theme has exceeded the maximum allowed number of files (1024)
We have a custom implementation of #symbolize_keys in our Onebox helpers. This is likely a legacy from when Onebox was a standalone gem. This change replaces all usages with either #deep_symbolize_keys from ActiveSupport, or the appropriate option to the JSON parser gem used.
We have a custom implementation of #blank? in our Onebox helpers. This is likely a legacy from when Onebox was a standalone gem. This change replaces all usages with the respective incarnations of #blank?, #present?, and #presence from ActiveSupport. It also changes a bunch of "unless blank" checks to "if present".
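Typical call-site changes look something like this (a hedged before/after; the old helper calls are shown commented out for comparison):
```
require "json"
require "active_support/core_ext/hash/keys"    # deep_symbolize_keys
require "active_support/core_ext/object/blank" # blank?/present?/presence

raw    = '{"title": "Example", "type": "article"}'
parsed = JSON.parse(raw)

# Before (custom helpers):
#   data = Onebox::Helpers.symbolize_keys(parsed)
#   next if Onebox::Helpers.blank?(parsed["title"])

data  = parsed.deep_symbolize_keys             # ActiveSupport
data  = JSON.parse(raw, symbolize_names: true) # or the parser option directly
title = parsed["title"].presence               # nil when blank
```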
This is part 1 of 3, split up of PR #23529. This PR refactors the
webauthn code to support passkey authentication/registration.
Passkeys aren't used yet, that is coming in PRs 2 and 3.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
This commit adds support for an optional `prompt` parameter in the
payload of the /session/sso_provider endpoint. If an SSO Consumer
adds a `prompt=none` parameter to the encoded/signed `sso` payload,
then Discourse will avoid trying to login a not-logged-in user:
* If the user is already logged in, Discourse will immediately
redirect back to the Consumer with the user's credentials in a
signed payload, as usual.
* If the user is not logged in, Discourse will immediately redirect
back to the Consumer with a signed payload bearing the parameter
`failed=true`.
This allows the SSO Consumer to simply test whether or not a user is
logged in, without forcing the user to try to log in. This is useful
when the SSO Consumer allows both anonymous and authenticated access.
(E.g., users that are already logged-in to Discourse can be seamlessly
logged-in to the Consumer site, and anonymous users can remain
anonymous until they explicitly ask to log in.)
This feature is similar to the `prompt=none` functionality in an
OpenID Connect Authentication Request; see
https://openid.net/specs/openid-connect-core-1_0.html#AuthRequest
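A hedged consumer-side sketch of adding the parameter to the signed payload; the URLs and secret are placeholders, and the payload encoding follows the standard DiscourseConnect provider scheme (base64-encoded query string signed with HMAC-SHA256):
```
require "base64"
require "cgi"
require "openssl"
require "securerandom"
require "uri"

sso_secret = "shared-secret-with-discourse" # placeholder

payload = {
  nonce: SecureRandom.hex,
  return_sso_url: "https://consumer.example.com/sso/callback",
  prompt: "none", # only report login state; never force the user through login
}

sso = Base64.strict_encode64(URI.encode_www_form(payload))
sig = OpenSSL::HMAC.hexdigest("sha256", sso_secret, sso)

redirect_to = "https://discourse.example.com/session/sso_provider?sso=#{CGI.escape(sso)}&sig=#{sig}"
```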
If a user is somehow looking at an old version of the page and attempts
to like a post they have already liked, display a more reasonable error message.
Previously we would display:
> You are not permitted to view the requested resource.
New error message is:
> Oops! You already performed this action. Can you try refreshing the page?
Triggering this error condition is very tricky: you need to stop the
message bus. A possible real-world cause could be bad network connectivity.
We will soon be dropping support for `/theme-qunit` in production, so this will start failing if we don't remove it. Plus, we now have system specs which verify the end-to-end functionality of the Theme QUnit system.
This was the last thing which was using the legacy `run-qunit` script, so that can also be dropped.
* FIX: min_personal_message_post_length not applying to first post
Due to the way PostCreator is wired, we were not applying min_personal_message_post_length
to the first post.
This meant that admins could not configure it so PMs have different
limits.
The code was already pretending that this worked, but it had no reliable way
of figuring out whether we were dealing with a private message.
This commit adds limits to themes and theme components on the:
- file size of about.json and .discourse-compatibility
- file size of theme assets
- number of files in a theme
This extends search so it can have consumers that:
1. Can split off "term" from various advanced filters and orders
2. Can build a relation of either order or filter
It also moves a lot of stuff around in the search class for clarity.
Two new APIs are exposed:
`.apply_filter` to apply all the special filters to a posts/topics relation
`.apply_order` to force a particular order (eg: order:latest)
This can then be used by semantic search in Discourse AI
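A hedged sketch of how a consumer such as Discourse AI's semantic search might use the two APIs; the receiver construction and exact arguments are assumptions:
```
search = Search.new("ruby tips order:latest", guardian: Guardian.new(user))

relation = Post.joins(:topic)
relation = search.apply_filter(relation) # split off the term, apply category:/tag:/in: style filters
relation = search.apply_order(relation)  # force a particular order, e.g. order:latest
```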
Why this change?
Currently, we do not have an easy way to test themes and theme components
using Rails system tests. While we support QUnit acceptance tests for
themes and theme components, QUnit acceptance tests stub out the server,
and setting up the fixtures for server responses is difficult and can lead to a
frustrating experience. System tests on the other hand allow authors to
set up the test fixtures using our fabricator system which is much
easier to use.
What does this change do?
In order for us to allow authors to run system tests with their themes
installed, we are adding an `upload_theme` helper that is made available
when writing system tests. The `upload_theme` helper requires a single
`directory` parameter, where `directory` is the local directory of the
theme, and returns a `Theme` record.
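A hedged usage sketch from a theme's own system spec; the selector, spec path, and directory handling are made up:
```
# spec/system/homepage_banner_spec.rb in the theme repository
RSpec.describe "Homepage banner", type: :system do
  fab!(:user) { Fabricate(:user) }
  let!(:theme) { upload_theme(File.expand_path("../..", __dir__)) } # the theme's local directory

  it "shows the banner on the homepage" do
    sign_in(user)
    visit "/"
    expect(page).to have_css(".homepage-banner")
  end
end
```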
This comment isn't necessarily on a line by itself, so we need to remove the `^` from the regex. This fixes `EMBER_ENV=development bin/rake assets:precompile`.
Until now, we have allowed testing themes in production environments via `/theme-qunit`. This was made possible by hacking the ember-cli build so that it would create the `tests.js` bundle in production. However, this is fundamentally problematic because a number of test-specific things are still optimized out of the Ember build in production mode. It also makes asset compilation significantly slower, and makes it more difficult for us to update our build pipeline (e.g. to introduce Embroider).
This commit removes the ability to run qunit tests in production builds of the JS app when the Embroider flag is enabled. If a production instance of Discourse exists exclusively for the development of themes (e.g. discourse.theme-creator.io) then it can add `EMBER_ENV: development` to its `app.yml` file. This will build the entire app in development mode, and has a significant performance impact. This must not be used for real production sites.
This commit also refactors many of the request specs into system specs. This means that the tests are guaranteed to have Ember assets built, and is also a better end-to-end test than simply checking for the presence of certain `<script>` tags in the HTML.
This method is used by assets:precompile to decide whether to apply `terser` to a file. Embroider chunks do not necessarily start with `chunk.`, and so they were incorrectly being re-terser'd by our assets:precompile task. This is inefficient, and also led to broken sourcemaps on some assets.
This is a follow up to 9caba30d5c
In that commit, we were migrating the database but we didn't actually
ensure that the database was created and that plugins were updated
before the databases were migrated.
## What is the context here?
The `docker.rake` Rakefile contains Rake tasks that are meant to be run
in the `discourse/discourse_test:release` Docker image. For example, we
have the `docker:test` Rake task that makes it easier to run the test
suite for a particular Discourse commit.
Why are we introducing a `docker:test:setup` Rake task?
While we have the `docker:test` Rake task, it is very limited in the
test commands that can be executed. It is very useful for automated
testing but not very useful for running tests in the development
environment. Therefore, we are introducing a `docker:test:setup` rake
task that can be used to set up the test environment for running tests.
The envisioned example usage is something like this:
```
docker run -d --name=discourse_test --entrypoint=/sbin/boot discourse/discourse_test:release
docker exec -u discourse:discourse discourse_test ruby script/docker_test.rb --no-tests
docker exec -u discourse:discourse discourse_test bundle exec rake docker:test:setup
docker exec -u discourse:discourse discourse_test bundle exec rspec <path to file>
```
Currently, if the review queue has both a flagged post and a flagged chat message, one of the two will have some of its action labels replaced by those of the other. In other words, the labels are getting mixed up. For example, a flagged chat message might show up with an action labelled "Delete post".
This is happening because when using bundles, we are sending along the actions in a separate part of the response, so they can be shared by many reviewables. The bundles then index into this bag of actions by their ID, which is something generic describing the server action, e.g. "agree_and_delete".
The problem here is the same action can have different labels depending on the type of reviewable. Now that the bag of actions contains multiple actions with the same ID, which one is chosen is arbitrary. I.e. it doesn't distinguish based on the type of the reviewable.
This change adds an additional field to the actions, server_action, which now contains what used to be the ID. Meanwhile, the ID has been turned into a concatenation of the reviewable type and the server action, e.g. post-agree_and_delete.
This still provides the upside of denormalizing the actions while allowing for different reviewable types to have different labels and descriptions.
At first I thought I would prepend the reviewable type to the ID, but this doesn't work well because the ID is used on the server-side to determine which actions are possible, and these need to be shared between different reviewables. Hence the introduction of server_action, which now serves that purpose.
I also thought about changing the way that the bundle indexes into the bag of actions, but this is happening through some EmberJS mechanism, so we don't own that code.
This was causing the following notice to be printed out when running system specs:
```
I did no detect a custom `config/dev.yml` file, creating one for you where you can amend defaults.
```
(since 61571bee43)
This adds a new secure_uploads_pm_only site setting. When secure_uploads
is true and this setting is enabled, only uploads created in PMs will be marked
secure; no uploads in secure categories will be marked as secure, and
the login_required site setting has no bearing on upload security
either.
This is meant to be a stopgap solution to prevent secure uploads
in a single place (private messages) for sensitive admin data exports.
Ideally we would want a more comprehensive way of saying that certain
upload types get secured (a hybrid/mixed-mode secure uploads), but for
now this will do the trick.
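A hedged sketch of the gating this implies (method and attribute names are assumptions, not the actual implementation):
```
def mark_upload_secure?(post)
  return false unless SiteSetting.secure_uploads

  if SiteSetting.secure_uploads_pm_only
    # Only PM uploads are secured; secure categories and login_required are ignored.
    post.topic.private_message?
  else
    post.topic.private_message? ||
      post.topic.category&.read_restricted? ||
      SiteSetting.login_required
  end
end
```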
Admins are always able to send PMs, so it doesn't make
sense that they shouldn't be able to convert topics just
because they aren't in personal_message_enabled_groups.
Previously we would respect it if the filter was `nil`, but if `default` was explicitly passed then it would ignore the category order settings. This explicit passing of `filter=default` happens for some types of navigations in the JS app.
This extends the fix from 92bc61b4be
The theme tests we use for the smoke-test typically take 3-4 seconds to complete. This commit reduces the timeout from 10 minutes to 20 seconds, so that failures are detected more quickly.
Our Ember build compiles assets into multiple chunks. In the past, we used the output from ember-auto-import-chunks-json-generator to give Rails a map of those chunks. However, that addon is specific to ember-auto-import, and is not compatible with Embroider.
Instead, we can switch to parsing the html files which are output by ember-cli. These are guaranteed to have the correct JS files in the correct place. A <discourse-chunked-script> will allow us to easily identify which chunks belong to which entrypoint.
In future, as we update more entrypoints to be compiled by Embroider/Webpack, we can easily introduce new wrappers.
Previously applied in 2c58d45 and reverted in 24d46fd. This version has been updated for subfolder support.
New headers were added to upload PUT requests as part of a MinIO update (cf42466). This change updates the asset bucket CORS ruleset to allow the new headers in the preflight request.
See https://dev.discourse.org/t/111136
Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
They're both constant per-instance values, so there is no need to store them
in the session. This also makes the code a bit more readable by moving
the `session_challenge_key` method up to the `DiscourseWebauthn` module.
Our Ember build compiles assets into multiple chunks. In the past, we used the output from `ember-auto-import-chunks-json-generator` to give Rails a map of those chunks. However, that addon is specific to ember-auto-import, and is not compatible with Embroider.
Instead, we can switch to parsing the html files which are output by ember-cli. These are guaranteed to have the correct JS files in the correct place. A `<discourse-chunked-script>` will allow us to easily identify which chunks belong to which entrypoint.
In future, as we update more entrypoints to be compiled by Embroider/Webpack, we can easily introduce new wrappers.
Follow-up to eea74e0e32. Site settings
which are a list without a `list_type` should also have the `_map`
extension added, which returns an array based on `split("|")`.
For example:
```
SiteSetting.post_menu_map
=> ["read", "like"]
```
* DEV: Add rake command to help detect dead settings
Some Site Settings may still exist but are no longer being used in the
core discourse code or in related plugins. This rake task will help
identify any unused (aka: dead) settings by using the `rg` command to
search for them.
You can execute the rake task by using this command:
`LOAD_PLUGINS=1 bin/rails "site_settings:find_dead"`
* Add env variable, apply feedback
When navigating around, we make ajax requests with a parameter like `?filter=latest`. This results in the TopicQuery being set up with `filter: "latest"` as a string. The logic introduced in fd9a5bc0 checks for equality with `:latest` and `:unseen` symbols, which didn't work correctly in this situation
This commit makes the logic detect both strings and symbols, and adds a spec for the behaviour.
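A minimal sketch of the detection change (the method name is a placeholder): normalise the filter before comparing so "latest"/"unseen" strings behave like the :latest/:unseen symbols.
```
def latest_or_unseen?(filter)
  %i[latest unseen].include?(filter&.to_sym)
end
```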
This adds support for oneboxing WEBP and AVIF images in posts, and fixes
downloading remote images in those formats too.
Reported in https://meta.discourse.org/t/-/276433?u=falco
Reverts e2705df and re-lands #23187 and #23219.
The issue was the incorrect order of execution of Rails' `assets:precompile` task in our own precompilation stack.
Co-authored-by: David Taylor <david@taylorhq.com>
This commit adds some system specs to test uploads with
direct to S3 single and multipart uploads via uppy. This
is done with minio as a local S3 replacement. We are doing
this to catch regressions when uppy dependencies need to
be upgraded or we change uppy upload code, since before
this there was no way to know outside manual testing whether
these changes would cause regressions.
Minio's server lifecycle and the installed binaries are managed
by the https://github.com/discourse/minio_runner gem, though the
binaries are already installed on the discourse_test image we run
GitHub CI from.
These tests will only run in CI unless you specifically use the
CI=1 or RUN_S3_SYSTEM_SPECS=1 env vars.
For a history of experimentation here see https://github.com/discourse/discourse/pull/22381
Related PRs:
* https://github.com/discourse/minio_runner/pull/1
* https://github.com/discourse/minio_runner/pull/2
* https://github.com/discourse/minio_runner/pull/3