Switches to using a dialog to confirm a session (i.e. sudo mode for
account changes where we want to be extra sure the current user is who
they say they are) to match what we do with passkeys.
The subscriptions manager has been a pain since the beginning. One of the problems is that threads and channels behave mostly the same, but with various small differences which I expect to increase over time.
Trying to handle this with subclasses has proven to be a mistake. This commit now uses a dedicated class for each case (channel, thread), which for now contains a lot of duplication. That duplication might be reduced in the future, but it has the merit of making each case very simple to reason about.
This refactor fixes a bug introduced in 90efdd7f9d which was causing the wrong channel to be unsubscribed; this shouldn't be possible anymore. We had tests for this which were disabled due to flakiness; I will consider re-enabling them in the future.
Other notes:
- notices had been added to the subscriptions manager service; they have been moved into their own dedicated service: `ChatChannelNoticesManager`
- the `(each model)` trick used in `<ChatChannel />` since 90efdd7f9d to ensure atomicity has been applied to `<ChatThread />` too (sketched below)
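For reference, the `(each model)` trick looks roughly like this (a minimal sketch, not the actual `<ChatChannel />`/`<ChatThread />` templates; the markup inside is hypothetical):
```gjs
import { array } from "@ember/helper";

// Because {{#each}} keys items by identity, swapping @channel tears down the
// whole block and rebuilds it from scratch, so listeners and child state can
// never leak from one channel/thread into the next.
<template>
  {{#each (array @channel) as |channel|}}
    <div class="chat-channel" data-id={{channel.id}}>
      {{yield channel}}
    </div>
  {{/each}}
</template>
```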
https://github.com/discourse/discourse/pull/22622 accidentally introduced an `@action` decorator inside the actions hash, which does not work. This commit modernizes the component by removing the actions hash altogether.
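For context, the broken vs. fixed patterns look roughly like this (simplified; not the actual component from that PR):
```js
import Component from "@ember/component";
import { action } from "@ember/object";

export default class ExampleComponent extends Component {
  // Broken: placing @action on a method inside the classic `actions` hash
  // does nothing useful — decorators only apply to class members.
  //
  // actions: {
  //   @action
  //   save() {}
  // }

  // Fixed: remove the actions hash and decorate a class method instead,
  // which templates can call directly, e.g. {{on "click" this.save}}.
  @action
  save() {
    // ...handle the click...
  }
}
```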
Previously, we were parsing webpack JS chunk filenames from the HTML files which ember-cli generates. This worked OK for simple entrypoints, but falls apart once we start using async `import()`, whose chunks are not included in the HTML.
This commit uses the stats plugin to generate an assets.json file, and updates Rails to parse it instead of the HTML. Caching on the Rails side is also improved to avoid reading from the filesystem multiple times per request in development.
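A minimal sketch of the build-side idea, assuming a hand-rolled plugin (the real build likely uses an off-the-shelf stats plugin, and the manifest shape is illustrative):
```js
// Emits an assets.json mapping every chunk — including async import() chunks,
// which never appear in the generated HTML — to the files it produces.
class AssetsJsonPlugin {
  apply(compiler) {
    compiler.hooks.emit.tap("AssetsJsonPlugin", (compilation) => {
      const manifest = {};
      for (const chunk of compilation.chunks) {
        // Anonymous async chunks have no name; fall back to the chunk id.
        manifest[chunk.name || chunk.id] = [...chunk.files];
      }
      const json = JSON.stringify(manifest, null, 2);
      compilation.assets["assets.json"] = {
        source: () => json,
        size: () => json.length,
      };
    });
  }
}

module.exports = AssetsJsonPlugin;
```
Rails then only has to read and memoize this one JSON file, instead of scraping chunk names out of the generated HTML.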
Co-authored-by: Godfrey Chan <godfreykfc@gmail.com>
Chat redesign work to improve chat navigation:
- New header title with channel name (thread list on mobile)
- New header title without channel name (thread list on full page chat)
- Removes the close button on threads (mobile only)
- Updates to back button route within thread (mobile), taking user to:
  - The thread index, if they accessed the thread from the thread index.
  - The channel itself, if they accessed the thread directly from the channel.
  - The channel itself, if they accessed the thread from a notification.
- Show thread title in chat drawer header
- Properly convert emoji in thread titles in chat header (all devices)
- Upgrades various templates to use gjs format.
Follow-up to b53449eac9.
It was too easy to add broken routes which would break
configuration for the whole site, so now we validate Ember
routes on save.
This commit fixes an issue where certain actions (deleting/recovering
a post, moving posts) updated the topic_users.bookmarked column to the
wrong value. This was happening because the SyncTopicUserBookmarked
job was not taking topic-level bookmarks into account, so if a user
had a topic bookmark but no post bookmarks in the topic, they would
have topic_users.bookmarked set to false, which meant the bookmark
would no longer show in the /bookmarks list.
To reproduce before the fix:
* Bookmark a topic and don’t bookmark any posts within
* Delete or recover any post in the topic
cf. https://meta.discourse.org/t/disappearing-bookmarks-and-expected-behavior-of-bookmarks/264670/36
This commit adds an `enable_s3_transfer_acceleration` site setting,
which is hidden to begin with. We are adding this because using
https://aws.amazon.com/s3/transfer-acceleration/ can drastically
speed up uploads, sometimes by as much as 70%, depending on the
target bucket region. This is important for
us because we have direct S3 multipart uploads enabled everywhere
on our hosting.
To start, we only want this on the uploads bucket, not the backup one.
Also, this will accelerate both uploads **and** downloads, depending
on whether a presigned URL is used for downloading. This is the case
when secure uploads is enabled, not anywhere else at this time. To
enable the S3 acceleration on downloads more generally would be a
more in-depth change, since we currently store S3 Upload record URLs
like this:
```
url: "//test.s3.dualstack.us-east-2.amazonaws.com/original/2X/6/123456.png"
```
For acceleration, `s3.dualstack` would need to be changed to `s3-accelerate.dualstack`
here.
Note that for this to have any effect, Transfer Acceleration must be enabled
on the S3 bucket used for uploads per https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration-examples.html.
In some browsers, 2FA login was failing with a "request is already
pending" error. This only applied on Chrome, and only when
`experimental_passkeys` was enabled. It was due to the fact that the
WebAuthn API only supports one auth attempt at a time, so the security
key call needs to abort the passkey's conditional UI request before
starting.
I am not sure we can test this. We have system specs that simulate
webauthn credentials and they didn't catch this (probably because the
simulation covers the whole flow).
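The fix boils down to the standard WebAuthn abort pattern; a sketch under assumed names (the helpers below are illustrative, not the exact Discourse code):
```js
// Track the passkey conditional UI request so another flow can cancel it.
let conditionalAbortController = null;

export function startPasskeyConditionalUI(publicKeyOptions) {
  conditionalAbortController = new AbortController();
  return navigator.credentials.get({
    publicKey: publicKeyOptions,
    mediation: "conditional",
    signal: conditionalAbortController.signal,
  });
}

export function abortPasskeyConditionalUI() {
  // The WebAuthn API only allows one pending request per page, so the
  // security key flow must abort the conditional UI request before it
  // calls navigator.credentials.get() itself.
  conditionalAbortController?.abort();
  conditionalAbortController = null;
}
```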
With Embroider, we can rely on async `import()` to do the splitting
for us.
This commit extracts from `pretty-text` all the parts that are
meant to be loaded async into a new `discourse-markdown-it` package
that is also a V2 addon (meaning that all files are presumed unused
until they are imported, aka "static").
Mostly I tried to keep the very Discourse-specific stuff (accessing
site settings and loading plugin features) inside Discourse proper,
while the new package aims to bear some resemblance to a
general-purpose library, a MarkdownIt++ if you will. It is far from
perfect because of how all the "options" stuff works, but I think
it's a good start for more refactorings (clearing up the interfaces)
to happen later.
With this, pretty-text and app/lib/text are mostly a kitchen sink
of loosely related text processing utilities.
After the refactor, a lot more of the code related to setting up the
engine is now loaded lazily, which should be a pretty nice win. I
also noticed that we are currently pulling in the `xss` library at
initial load to power the "sanitize" stuff, but I suspect with a
similar refactoring effort those usages can be removed too. (See
also #23790).
This PR does not attempt to fix the sanitize issue, but I think it
sets things up on the right trajectory for that to happen later.
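Illustratively, consumers can now pull the engine in lazily (the export surface shown here is assumed, not the package's exact API):
```js
// The dynamic import() gives Embroider a split point: the whole
// discourse-markdown-it chunk is only downloaded the first time
// something actually needs to cook text.
let enginePromise;

export function loadMarkdownEngine() {
  enginePromise ??= import("discourse-markdown-it");
  return enginePromise;
}

export async function cookAsync(raw, options) {
  const engine = await loadMarkdownEngine();
  // Hypothetical call — the real setup wires in site settings and
  // plugin features before cooking.
  return engine.cook(raw, options);
}
```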
Co-authored-by: David Taylor <david@taylorhq.com>
We have identified some third-party analytics scripts which do things like `window.I18n = window.I18n` 🤦‍♂️. This leaves the window object with an `I18n` property that holds no usable value, yet `"I18n" in globalThis` returns true.
This commit checks whether `window.I18n` is a truthy value.
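A tiny illustration of why the `in` check was insufficient:
```js
// Simulate the offending third-party script on a page where I18n has not
// loaded yet — this defines the property without giving it a usable value.
window.I18n = window.I18n;

"I18n" in globalThis;  // true  -> the old guard was fooled
Boolean(window.I18n);  // false -> the new truthiness check is not
```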
When Discourse first introduced brotli support, reverse-proxy/CDN support for passing through the accept-encoding header to our NGINX server was very poor. Therefore, a separate `/brotli_assets/...` path was introduced to serve the brotli assets. This worked well, but introduced additional complexity and inconsistencies.
Nowadays, Brotli encoding is well supported, so we don't need the separate paths any more. Requests can be routed to the asset `.js` URLs, and NGINX will serve the brotli/gzip version of the asset automatically.
This commit starts from a simple observation: cooking messages on the hot path can be slow, especially with a lot of mentions.
To move cooking from the hot path, this commit has made the following changes:
- updating the cooked content, inserting mentions and notifying users of new mentions have been moved inside the `process_message` job. This happens right after the `Chat::MessageProcessor` run, which is where the cooking happens.
- the similar existing code in `rebake!` has also been moved to rely on the `process_message` job only
- refactored `create_mentions` and `update_mentions` into a single `upsert_mentions`, which can be called in both cases
- allows services to decide if their job runs inline or later. This avoids needing to know that you have to use `Jobs.run_immediately!` in this case; in tests it will run inline by default
- made various frontend changes to make the chat-channel component lifecycle clearer. We had to handle `did-update @channel`, which was super awkward and was creating bugs with listeners; the changes in this PR surfaced those bugs as failing specs
- adds new `-processed` (and `-not-processed`) classes on the chat message, which give us a good lifecycle hook in system specs
Nowadays, themes/plugins have their templates compiled at build-time, so there is no need for us to carry the template compiler on the frontend during tests
The motivation of this PR is to remove our dependence on Ember's 'named outlets', which are removed in Ember 4+.
At a high-level, the changes can be summarized as:
- The top-level `discovery` route is totally emptied of all logic. The HTML structure of the template is moved into the `<Discovery::Layout />` component for use by child routes.
- `AbstractTopicRoute` and `AbstractCategoryRoute` routes now both lean on the `DiscoverySortableController` and associated template. This controller is where most of the logic from the old top-level `discovery` controller has ended up.
- All navigation controllers/templates have been replaced with components. `navigation/categories`, `navigation/category` and `navigation/default` were very similar, and so they've all been combined into `<Navigation::Default>`. `navigation/filter` gets its own component.
- The `discovery/topics` controller/template have been moved into a new `<Discovery::Topics>` component.
Various other parts of the app have been tweaked to support these changes, but I've tried to keep that to a minimum.
Anything from `<TopicList>` down is untouched, which should hopefully mean that a large proportion of topic-list-customizing themes are unaffected.
For more information, see https://meta.discourse.org/t/282816
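For readers unfamiliar with the pattern, here is a generic sketch of swapping a named-outlet layout for a component with named blocks (this is not the real `<Discovery::Layout />`, just an illustration of the technique):
```gjs
import Component from "@glimmer/component";

// The HTML skeleton that previously lived in the top-level route template
// (and was filled via named outlets) becomes a component; child routes
// render it directly and pass their content as named blocks, e.g.
// <:navigation> and <:list>.
export default class LayoutExample extends Component {
  <template>
    <div class="container">
      <section class="navigation">{{yield to="navigation"}}</section>
      <section class="list-area">{{yield to="list"}}</section>
    </div>
  </template>
}
```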
For deprecated site settings, we log a warning when
the old setting is used. However, when we convert all the client
settings to JSON, we create a lot of log noise like this:
> Deprecation notice: `SiteSetting.anonymous_posting_min_trust_level` has been deprecated.
We don't need to do this because we are just dumping the JSON.
We updated scheduled admin checks to run concurrently in their own jobs. The main reason for this was so that we can implement re-check functionality for especially flaky checks (e.g. the group e-mail credentials check).
This works in the following way:
1. The check declares its retry policy using class methods.
2. A block can be yielded to if there are problems, but before they are committed to Redis.
3. The job uses this block to either a) schedule a retry if there are any remaining or b) do nothing and let the check commit.
When submitting files through the form template upload field, we were having an issue where, although a validation error message was being presented to the user, the upload still came through, because `PickFilesButton`'s validation happened **after** the Uppy mixin finished the upload and hit `uploadDone`.
This PR adds a new overridable method to the Uppy mixin and overrides it with the custom validation, which now happens before the file is sent.
Additionally, we're now also using `uploadingOrProcessing` as the source of truth to show the upload/uploading label, which seems more reliable.
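Conceptually, the new hook works like this (a simplified sketch; the method and internal names are illustrative rather than the exact mixin API):
```js
import Mixin from "@ember/object/mixin";

export default Mixin.create({
  // Overridable hook that runs before files are handed to uppy; the default
  // accepts everything. The form template upload field overrides this with
  // its custom validation.
  validateBeforeUpload(/* files */) {
    return true;
  },

  _startUpload(files) {
    if (!this.validateBeforeUpload(files)) {
      return; // validation failed: nothing is uploaded, uploadDone never fires
    }
    // ...existing behaviour: add the files to uppy and begin the upload...
  },
});
```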
This work includes the following updates for the new lightbox:
- Show carousel by default if there is more than one image in a post
- Removes toggling of carousel on mobile - now always open if there is more than one image
- Updates swipe down gesture on mobile to close lightbox (previously used to toggle carousel)
- Removes swipe up gesture on mobile (was previously used to close lightbox)
This change removes the background image (which is the small version of the uploaded image) from the lightbox backdrop.
Now a solid color (dark grey) is used for the backdrop so we can distinguish between the lightbox's head, body and footer.
This PR does some preparatory refactoring of scheduled admin checks in order for us to be able to do custom retry strategies for some of them.
Instead of running all checks in sequence inside a single, scheduled job, the scheduled job spawns one new job per check.
In order to be concurrency-safe, we need to change the existing Redis data structure from a string (of serialized JSON) to a list of strings (of serialized JSON).