The automation plugin has 4 custom field types that are array-typed. However, array-typed custom fields are deprecated and should be migrated to the JSON type.
This commit does a couple of things:
1. Migrate all four custom fields to JSON
2. Fix a couple of small bugs that have been discovered while migrating the custom fields to JSON (see the comments on this commit's PR for details https://github.com/discourse/discourse/pull/26939)
The watched word group's create, update, and delete action logs were missing translations. This PR adds those strings and uses the group key instead of the watched word key where needed.
This reverts commit 0f4520867b.
This has led to two problems:
1. An incompatibility with Cloudflare's "auto minify" feature. Cloudflare has deprecated that feature because of its incompatibility with modern JS syntax, but unfortunately it will remain enabled on existing properties until 2024-08-05.
2. Discourse fails to boot in Safari 15. This is strange, because Safari does support all the required features in our production JS bundles. Even more strangely, things start working as soon as you open the developer tools. That suggests the cause could be a Safari bug rather than a simple incompatibility.
Reverting while we work out a path forward on both those issues.
- adds a `@groupIdentifier` property which will ensure that two menus of the same group are not expanded at the same time
- adds a `@class` property which will be applied to the trigger and the content
- adds a `@triggerClass` property which will be applied to the trigger
- adds a `@contentClass` property which will be applied to the content
- removes `extraClassName`
Some combinations of start_date and frequency/interval values can cause a recurring automation rule to either trigger before its start_date or never trigger. Example repros:
- Configure a recurring automation with an hourly recurrence and a start_date several days ahead. The automation starts running hourly immediately, even though the start_date is several days in the future.
- Configure a recurring automation with a weekly recurrence and a start_date several weeks ahead. The automation never triggers, even after the start_date has passed.
Both scenarios have the same cause: the automation plugin doesn't use the start_date as the date of the first run; instead it calculates the first run date from the frequency/interval values relative to the current time.
This PR fixes this bug by adding an explicit check for start_date and using it as the first run's date if it's ahead of the current time.
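For illustration, here is a minimal sketch of the scheduling check (not the plugin's actual code; `next_interval_run` stands in for whatever the frequency/interval math produces):
```ruby
# A minimal sketch of the fix, not the automation plugin's actual code.
# `next_interval_run` stands in for the date produced by the
# frequency/interval calculation relative to the current time.
def first_run_at(start_date, next_interval_run, now: Time.now)
  # If the rule's start_date is still in the future, the first run must
  # happen at start_date, not at the next interval computed from `now`.
  start_date && start_date > now ? start_date : next_interval_run
end

now = Time.now
# Hourly recurrence configured 3 days ahead: the first run is the
# start_date, not one hour from now.
first_run_at(now + 3 * 86_400, now + 3600, now: now) # => start_date
```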
It's a temporary solution while I work on a better one. The problem here is quite tricky: we are showing a modal from a modal, but if we close the first modal before the second one is shown, we destroy the menu holding the first modal, which prevents the second modal from being shown.
One possible solution would be to refactor d-modal's show function. At the moment, awaiting show resolves when the modal is closed, not when the modal has been inserted into the DOM, which means we don't have a clean moment to close the d-menu.
The second issue is that even though it's possible to have multiple modals on screen, the modal-closing logic assumes only one active modal at a time.
Cases like the glimmer-site-header are complex because the swiped area is not the moved target; for now it's simpler not to apply the body scroll lock automatically.
A new property is now available on the swipe modifier: `{{swipe @lockBody=false}}`
Note: I tried to write tests for this modifier in the past, but they were very inconsistent on CI and caused a lot of flaky failures, which is why there are no tests for now. I might try to write them again using system specs.
Previously, we only set the duration and interval values in the constructor, so when the initial values were updated in the form, the changes were not reflected in the UI. This PR fixes the issue by using getters ("get" methods) instead.
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
This change adds a wrapper link around the thread list details on mobile to make the click area larger.
We also change the child div elements to spans to ensure valid HTML, since the link is an inline element and divs are block-level elements.
---------
Co-authored-by: chapoi <101828855+chapoi@users.noreply.github.com>
This commit switches `DiscourseIpInfo.mmdb_download` to use the
permalinks supplied by MaxMind to download the MaxMind databases as
specified in
https://dev.maxmind.com/geoip/updating-databases#directly-downloading-databases
which states:
```
To directly download databases, follow these steps:
1. In the "Download Links" column, click "Get Permalink(s)" for the desired database.
2. Copy the permalink(s) provided in the modal window.
3. Provide your account ID and your license key using Basic Authentication to authenticate.
```
Previously we were downloading from
`https://download.maxmind.com/app/geoip_download`, but this URL is not
documented anywhere in MaxMind's docs, so it can in theory break in the
future without warning. Therefore, we are taking a proactive approach and
downloading the databases the way MaxMind recommends instead of relying
on a hidden URL. The old way of downloading the databases with only a
license key will be deprecated in 3.3 and removed in 3.4.
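As a rough sketch of the documented flow (not the actual `DiscourseIpInfo.mmdb_download` implementation; the permalink, account ID, and license key below are placeholders following the format described in the linked docs):
```ruby
# A rough sketch only, not the actual DiscourseIpInfo.mmdb_download code.
# The permalink, account ID, and license key are placeholders.
require "net/http"
require "uri"

def download_mmdb(permalink, account_id, license_key, destination)
  uri = URI.parse(permalink)

  Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
    request = Net::HTTP::Get.new(uri.request_uri)
    # Per the MaxMind docs, permalinks authenticate via Basic Authentication
    # using the account ID and license key.
    request.basic_auth(account_id, license_key)

    http.request(request) do |response|
      # A production client would also follow redirects here.
      raise "Download failed: #{response.code}" unless response.is_a?(Net::HTTPSuccess)

      File.open(destination, "wb") do |file|
        response.read_body { |chunk| file.write(chunk) }
      end
    end
  end
end

# download_mmdb(
#   "https://download.maxmind.com/geoip/databases/GeoLite2-City/download?suffix=tar.gz",
#   ENV["MAXMIND_ACCOUNT_ID"],
#   ENV["MAXMIND_LICENSE_KEY"],
#   "GeoLite2-City.tar.gz",
# )
```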
In 95a82d608d, we lowered the default for
`Onebox.options.max_download_kb` from 10mb to 2mb for security hardening
purposes. However, this resulted in multiple bug reports where seemingly
normal URLs stopped being oneboxed. It turns out that lowering
`Onebox.options.max_download_kb` caused `Onebox::Helpers::DownloadTooLarge`
to be raised more often in `Onebox::Helpers.fetch_response`, which
`Onebox::Helpers.fetch_html_doc` relies on. When
`Onebox::Helpers::DownloadTooLarge` is raised in
`Onebox::Helpers.fetch_response`, we throw away whatever response body we
have already downloaded at that point. This is not ideal, because Nokogiri
can parse incomplete HTML documents and there is a very good chance that
the incomplete HTML document still contains the information we need for
oneboxing.
Therefore, this commit updates `Onebox::Helpers.fetch_html_doc` so that it
no longer throws away the response body when its size exceeds
`Onebox.options.max_download_size`. Instead, we take whatever response we
have and let Nokogiri parse it.
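Conceptually the change amounts to something like the following sketch (not the actual Onebox code; the chunked input stands in for the size-limited download in `fetch_response`):
```ruby
# A conceptual sketch only, not the actual Onebox code: the point is that a
# too-large download no longer discards the bytes already received.
require "nokogiri"

def parse_html_allowing_partial(chunks, max_bytes)
  body = +""

  chunks.each do |chunk|
    body << chunk
    # Previously, exceeding the limit raised DownloadTooLarge and the whole
    # body was thrown away; now we simply stop downloading and keep the
    # partial document.
    break if body.bytesize > max_bytes
  end

  # Nokogiri can parse incomplete HTML, and the metadata oneboxing needs
  # (title, OpenGraph tags) usually appears early in the page.
  Nokogiri::HTML(body)
end

chunks = [
  "<html><head><title>Example</title>",
  "<meta property='og:title' content='Example'>",
  "<p>lots more content that never finishes downloading",
]
parse_html_allowing_partial(chunks, 40).title # => "Example"
```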
Fixes a bug I stumbled upon in dev env:
```
Error: Assertion Failed: You attempted to update <discourse@model:user::ember337>.status to "[object Object]", but it is being tracked by a tracking context, such as a template, computed property, or observer. In order to make sure the context updates properly, you must invalidate the property when updating it. You can mark the property as `@tracked`, or use `@ember/object#set` to do this.
```
This can happen for various reasons, including rate limiting and middleware bugs. This should resolve the warning we're seeing in the logs:
```
RequestTracker.get_data failed : NoMethodError : undefined method `[]' for nil:NilClass
```
We were missing two `getURL` calls.
The test is now written for a subfolder setup, but that's good enough: if it works with a subfolder, it works without one, while the opposite isn't necessarily true.
decorator-transforms (https://github.com/ef4/decorator-transforms) is a modern replacement for babel's plugin-proposal-decorators. It provides a decorator implementation using modern browser features, without needing to enable babel's full suite of class feature transformations. This improves the developer experience and performance.
In local testing with Google's 'tachometer' tool, this reduces Discourse's 'init-to-render' time by around 3-4% (230ms -> 222ms).
It reduces our initial gzip'd JS payloads by 3.2% (2.43MB -> 2.35MB), or 7.5% (14.5MB -> 13.4MB) uncompressed.
The DropdownMenu component is meant as a way to describe the content of menus.
Syntax:
```
<DropdownMenu as |dm|>
  <dm.item class="test">
    First
  </dm.item>
  <dm.divider class="foo" />
  <dm.item class="bar">
    Second
  </dm.item>
</DropdownMenu>
```
This commit updates
`PageObjects::Components::NavigationMenu::Base#click_edit_tags_button`
to wait for `.sidebar-tags-form` to be present before returning. This is
essential because the client-side app has to load the tags from the
server when the modal is opened. If we don't wait for all the tags to be
loaded, it becomes hard to reason about the state of the UI when
interacting with the modal. In the case of the flaky "allows a user to
deselect all tags in the modal which will display the site's top tags"
test, the system test was interacting with the UI while the tags were
still loading.
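A minimal sketch of the pattern (not the actual page object code; the edit button selector is illustrative):
```ruby
# A minimal sketch of the pattern, not the actual page object code; the edit
# button selector is illustrative.
def click_edit_tags_button
  find(".sidebar-section-header-button").click # illustrative selector
  # Capybara's `find` retries until the element appears (or times out), so
  # returning only after this call guarantees the tags form has loaded.
  find(".sidebar-tags-form")
  self
end
```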