* DEV: Removal of create_post_for_category_and_tag_changes setting
reverting commit: #65f35e1
and adding a migration to remove the setting
ref: t/132320
* DEV: change checks for zeros to check for nils
* DEV: remove create_post_for_category_and_tag_changes migration file
If anything goes wrong, we can always revert back to the previous state.
This commit fixes a bug in the redesigned about page where, if there's no banner image configured for the page, the top of the page where the banner goes is occupied by a large white space. Additionally, this commit fixes a related bug in the admin config area for the /about page where it's not possible to remove the uploaded banner image.
### Why?
Before, all flags were static. Therefore, they were stored in class variables and serialized by SiteSerializer. Recently, we added an option for admins to add their own flags or disable existing flags. Therefore, the class variable had to be dropped because it was unsafe for a multisite environment. However, it started causing performance problems.
### Solution
When the new Flag system is used, instead of using PostActionType, we can serialize Flags and use a fragment cache for performance reasons.
At the same time, we still support the deprecated `replace_flags` API call. When it is used, we fall back to the old solution and the admin cannot add custom flags. In a couple of months, we will be able to drop that API function and clean up that code properly. However, because it may still be used, a Redis cache was introduced to improve performance.
To test backward compatibility, you can add this code to any plugin:
```ruby
replace_flags do |flag_settings|
  flag_settings.add(
    4,
    :inappropriate,
    topic_type: true,
    notify_type: true,
    auto_action_type: true,
  )
  flag_settings.add(1001, :trolling, topic_type: true, notify_type: true, auto_action_type: true)
end
```
When upgrading to Rails 7.1, we had some problems because we were using
several tagged loggers at the same time. They were all added to the main
broadcast logger shipped with Rails, but the Rails 7.1 codebase contains
a bug that makes a request run as many times as there are tagged loggers.
The fix was to use the code from the Rails 7.2 codebase.
This patch adds a small spec to ensure the behavior stays correct in the
future.
This change ensures native push notifications respect the push_notification_time_window_mins site setting. Previously, only web push notifications accounted for the delay; now we bring more consistency between Discourse in the browser and the Hub by applying the same delay strategy to both forms of push notifications.
Adds a new statistic (hidden from the UI, but available via the API) that tracks daily participating users.
A user is considered "participating" if they have:
- Reacted to a post
- Replied to a topic
- Created a new topic
- Created a new PM
- Sent a chat message
- Reacted to a chat message
Internal ref - t/131013
This commit continues on work laid out by 6039b513fe to redesign the /about page. In this commit, we add sections for showing the site admins and moderators.
The lists of admins and moderators display the 10 most recently seen admins/moderators, with a button to display the rest of admins or moderators. Admins or moderators that have not logged in to the site in the last year will not be shown. Clicking on an admin's or moderator's name/avatar will show their user card.
In development, Ember raises an error when previously-used values are updated during a render. This is to avoid 'backtracking', where parts of templates have to be re-rendered multiple times. In general, this kind of pattern should be avoided, and Ember's warning helps us do that.
However, for the deprecation warning banner, it is quite reasonable for some rendering to trigger a deprecation, and thereby require the global-notice to be re-rendered. We can use our `DeferredTrackedSet` to achieve that. Its `.add` method will delay adding an item to the Set until after the current render has completed.
Very similar to the move up/down flag problem fixed here: https://github.com/discourse/discourse/pull/28272
Those are the steps to toggle the flag:
1. click toggle - `saving` CSS class is added;
2. request to backend;
3. `saving` CSS class is removed.
And the check for whether the flag was toggled was:
```ruby
def has_saved_flag?(key)
  has_css?(".admin-flag-item.#{key}.saving")
  has_no_css?(".admin-flag-item.#{key}.saving")
end
```
If the save action is very fast, then the `saving` class is removed before the first check.
Therefore, I decided to invert it and add a `saved` CSS class once the action is finished.
Then we can have a quick positive check:
```ruby
def has_saved_flag?(key)
  has_css?(".admin-flag-item.#{key}.saved")
end
```
Currently, in the list controller, when encountering an unsafe redirect
error, a 404 is rendered. The problem is that it’s done in a way that it
also logs a fatal error (because a `Discourse::NotFound` exception was
raised inside a `rescue_from` block).
This patch addresses that issue by simply rendering a 404 without
raising any error.
This commit adds a `MiniSchedulerLongRunningJobLogger` class which will
poll every 60 seconds for mini_scheduler jobs which are stuck. When it
detects that a job is stuck, it will log a warning message with the
current backtrace of the thread that is executing the job.
Note that for scheduled jobs which are executed at a frequency of less
than 30 minutes, we will log when the job has been executing for 30
minutes.
For scheduled jobs executed at a frequency of between 30 minutes and 2 hours, we will
log when the job has been executing for a duration greater than its
specified frequency.
For scheduled jobs executed at a frequency greater than 2 hours, we will
log as long as the job has been executing for more than 2 hours.
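A minimal sketch of that threshold logic, assuming ActiveSupport duration helpers; the method name is illustrative rather than taken from the actual class:
```ruby
# Illustrative only: pick the "stuck" threshold based on a job's configured frequency.
def stuck_threshold(frequency)
  if frequency < 30.minutes
    30.minutes
  elsif frequency < 2.hours
    frequency
  else
    2.hours
  end
end

# A job would be considered stuck once it has been running longer than the threshold,
# e.g. (Time.zone.now - started_at) > stuck_threshold(frequency)
```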
Those are the steps to move the flag:
1. open menu;
2. click move up - `saving` CSS class is added;
3. request to backend;
4. `saving` CSS class is removed.
To check if the action was finished we are using this method:
```ruby
def move_up(key)
  open_flag_menu(key)
  find(".admin-flag-item__move-up").click
  has_saved_flag?(key)
  self
end

def has_saved_flag?(key)
  has_css?(".admin-flag-item.#{key}.saving")
  has_no_css?(".admin-flag-item.#{key}.saving")
end
```
However, sometimes specs were failing with `expected to find CSS ".admin-flag-item.spam.saving" but there were no matches`
I think that the problem is with those 2 lines:
```ruby
find(".admin-flag-item__move-up").click
has_closed_flag_menu?
```
If the save action is very fast, then the `saving` class is removed before the first check.
Therefore, to determine that the move action is finished, I am checking if the menu is closed.
Currently, when a badly named category slug is provided, it can lead to
an infinite redirect.
This patch addresses the issue by properly unescaping `request.fullpath`
so the path is successfully rewritten and the redirect happens as
expected.
Currently, to handle stub topics after merging, there are only options to (1) never delete a stub topic and (2) delete a stub topic after X days. This adds the option to immediately delete a stub topic upon merge.
---------
Co-authored-by: Mark VanLandingham <markvanlan@gmail.com>
Co-authored-by: Renato Atilio <renato@discourse.org>
This commit continues on work laid out by 6039b513fe to redesign the /about page. In this commit, we add the site age and a section on the right hand side to show site activities/statistics such as topics, posts, sign-ups, likes etc.
Following a recent refactor, some methods from `FlagSettings` have been
renamed (`custom_types` -> `additional_message_types`). The
`PostActionType` model was using `custom_types` but when the renaming
was done, it was renamed to `with_additional_message` instead of
`additional_message_types`, which under the right circumstances will
raise an error.
Admins can create up to 50 custom flags. The limit exists for performance reasons.
When the limit is reached, the "Add" button is disabled and the backend is protected by a guardian.
This commit patches `Net::HTTP` to reduce the default timeouts of 60
seconds when we are processing a request. There are certain routes in
Discourse which make external requests, and if the proper timeouts are
not set, we risk having the Unicorn master process force restarting the
Unicorn workers once the `30` seconds timeout is reached. This can
potentially become a vector for DoS attacks and this commit is aimed at
reducing the risk here.
We support a low-level construct called "inline checks", which you can use to register a problem ad-hoc from within application code.
Problems registered by inline checks never show up in the admin dashboard. This is because, when loading the dashboard, we run all realtime checks and look for problems, and because of an oversight we considered inline checks to be "realtime", causing them to be run and clear their problem status.
To fix this, we no longer consider inline checks to be realtime, which prevents them from running when loading the admin dashboard.
This change is mainly a refactor of the desktop notifications service to improve readability and have standardised values for tracking state for the current user with regard to the Notification API and Push API.
Also improves readability when handling push notification jobs, especially in scenarios where the push_notification_time_window_mins site setting is set to 0, which will allow sending push notifications instantly.
We were writing theme-transpiler JS files to the filesystem on a per-process basis, and then immediately reading them back in. Plus, there was no cleanup mechanism, so the tmp directory would grow indefinitely.
This commit refactors things so that the `build.js` script outputs the theme-transpiler source to stdout. That way, we can read it directly into the process, and then into mini-racer, without needing to go via the filesystem. No cleanup required!
In production, the theme-transpiler is still cached in a file during `assets:precompile`
In the formkit conversion in 2ca06ba236
we missed setting a type for the UppyImageUploader for badges. Also,
we were not passing down the `image_url` as form data, so when we used
`data.image` for that field the badge was not updating in the UI after
page loads and the image URL was not loading for preview.
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
Before this commit, running `rspec --seed 22953 --format documentation spec/requests/admin/site_texts_controller_spec.rb:191 spec/lib/freedom_patches/translate_accelerator_spec.rb:109` will fail.
Setting `I18n.config.available_locales` is equivalent to hard coding the
locales for the entire process. It should not be set so that `I18n` will
fallback to `backend.locales`.
Sometimes the backtrace is quite big for failing specs. This env var
(RSPEC_EXCLUDE_NOISE_IN_BACKTRACE) can be set to 1 to remove everything
but spec or application code from the backtrace in RSpec. This makes it
easier to see where the actual failure is coming from; most of the time
all the gem paths are noise.
When creating a shared draft, we record topic view stats on the draft and then pass those on when the draft is published, inflating the actual view count.
This fixes that by not registering topic views if the topic is a shared draft.
When `SiteSetting.review_every_post` is true and the category requires topic approval (`require_topic_approval`), the system creates two reviewable items.
1. Firstly, because the category needs approval, the `ReviewableQueuePost` record is created - at this stage, no topic is created.
2. An admin approves the review. The topic and first post are created.
3. Because `review_every_post` is true, the `queue_for_review_if_possible` callback is evaluated and a `ReviewablePost` is created.
4. Then `ReviewableQueuePost` is linked to the newly generated topic and post.
At the beginning, we were thinking about hooking into those guards:
```ruby
def self.queue_for_review_if_possible(post, created_or_edited_by)
  return unless SiteSetting.review_every_post
  return if post.post_type != Post.types[:regular] || post.topic.private_message?
  return if Reviewable.pending.where(target: post).exists?
  ...
```
And add something like
```ruby
return if Reviewable.approved.where(target: post).exists?
```
However, because the callback happens in step 3, before the `ReviewableQueuePost` is linked to the `Topic`, it was not possible.
Therefore, when `ReviewableQueuePost` is creating a `Topic`, a new option called `:reviewed_queued_post` is passed to `PostCreator` to avoid creating a second `Reviewable`.
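A hedged sketch of how that option could be passed; apart from `:reviewed_queued_post`, the names below are illustrative and not taken from the actual implementation:
```ruby
approved_by = Discourse.system_user # the acting admin in this illustration
payload = { "raw" => "Approved post body", "title" => "Approved topic title" }

# When approving the queued post, create the topic and post with the extra option…
PostCreator.new(
  approved_by,
  title: payload["title"],
  raw: payload["raw"],
  reviewed_queued_post: true, # signals that this post already went through review
).create

# …so queue_for_review_if_possible can bail out early, e.g.:
# return if opts[:reviewed_queued_post]
```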
Currently, descriptions for flag types aren’t interpolated, returning
`%{base_path}` in their string, for example. This breaks the navigation
on the sites.
The behavior changed probably because of an upgrade of Ruby, as two
hashes were passed to `I18n.t` (`vars` and `default`) without using the
splat operator.
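A small sketch of the difference; the key and variables are illustrative, not the exact call site:
```ruby
vars = { base_path: Discourse.base_path }
default_text = "Flag this post"

# Before: the interpolation variables and the default were passed as two separate
# hashes, so %{base_path} ended up verbatim in the rendered string.
#   I18n.t("flags.inappropriate.description", vars, default: default_text)

# After: splat the variables so everything arrives as keyword arguments.
I18n.t("flags.inappropriate.description", **vars, default: default_text)
```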
When using `Discourse.cache.fetch` with an expiry, there's a potential for a race condition due to how we read the data from redis.
The code used to be
```ruby
raw = redis.get(key) if !force
entry = read_entry(key) if raw
return entry if raw && !(entry == :__corrupt_cache__)
```
with `read_entry` defined as follow
```ruby
def read_entry(key)
if data = redis.get(key)
Marshal.load(data)
end
rescue => e
:__corrupt_cache__
end
```
If the value at "key" expired in redis between `raw = redis.get` and `entry = read_entry`, the `entry` variable would be `nil` despite `raw` having a value.
We would then proceed to return `entry` (which is `nil`) thinking it had a value, when it didn't.
The first `redis.get` can be skipped altogether and we can rely only on `read_entry` to read the data from redis. Thus avoiding the race condition and removing the double read operations.
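A minimal sketch of what the fixed read path inside `fetch` could look like, relying on `read_entry` alone (illustrative, not the exact Discourse implementation):
```ruby
# Only one Redis read: read_entry both fetches and deserializes the value,
# so there is no window for the key to expire between two reads.
if !force
  entry = read_entry(key)
  return entry if entry && entry != :__corrupt_cache__
end
```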
Internal ref - t/132507
* SECURITY: Update default allowed iframes list
Change the default iframe URL list so that each entry includes 3 slashes.
* SECURITY: limit group tag's name length
Limit the size of a group tag's name to 100 characters.
Internal ref - t/130059
* SECURITY: Improve sanitization of SVGs in Onebox
---------
Co-authored-by: Blake Erickson <o.blakeerickson@gmail.com>
Co-authored-by: Régis Hanol <regis@hanol.fr>
Co-authored-by: David Taylor <david@taylorhq.com>
Followup 4aea12fdcb
In certain config areas (like About) we want to be able
to fetch specific site settings by name. In this case,
sometimes we need to be able to fetch hidden settings,
in cases where a config area is still experimental.
Splitting out a different endpoint for this purpose
allows us to be stricter with what we return for config
areas without affecting the main site settings UI or revealing
hidden settings before they are ready.
Since switching to MaxMind permalinks to download the databases in
7079698cdf, we have received multiple
reports about rebuilds failing: `maxminddb:refresh` runs during
the rebuild, and failing to download the databases causes the rebuild
to fail.
Downloading MaxMind databases should not sit in the critical rebuild
path, but since we are close to the Discourse 3.3 release, we have opted
to just rescue all errors encountered when downloading the databases.
In the near future, after the Discourse 3.3 release, we will look at
moving the downloading of MaxMind databases out of the rebuild path.
We have a dedicated admin page (`/admin/customize/email_templates`) that lets admins customize all emails that Discourse sends to users. The way this page works is that it lists all translations strings that are used for emails, and the list of translation strings is currently hardcoded and hasn't been updated in years. We've had a number of new emails that Discourse sends, so we should add those templates to the list to let admins easily customize those templates.
Meta topic: https://meta.discourse.org/t/3-2-x-still-ignores-some-custom-email-templates/308203.
* FIX: Ensure JsLocaleHelper only outputs up-to-date translations
The old implementation forgot to filter out deprecated
translations, causing these translations to incorrectly override the new
locale in the frontend.
This commit fills in the forgotten where clause, filtering only the
up-to-date part.
Related meta topic: https://meta.discourse.org/t/outdated-translation-replacement-causing-missing-translation/314352
This patch fixes the `i18n:check` rake task which has been broken by
the `MessageFormat` upgrade.
It also adds a spec to ensure we generate valid MF code for all our
available locales.
Currently, when adding translation overrides, values aren’t validated
for MF strings. This results in being able to add invalid plural keys or
even strings containing invalid syntax.
This patch addresses this issue by compiling the string when saving an
override if the key is detected as an MF one.
If there’s an error from the compiler, it’s added to the model errors,
which in turn is displayed to the user in the admin UI, helping them to
understand what went wrong.
When we show user tips, we immediately send an AJAX request to mark the
tip as seen. This is done in the background. However, when system tests
are run, sometimes that request is not completed before the test ends.
This causes the test to be flakey.
One way to fix this is to force the system test run to wait for the AJAX
request to complete. However, this is not ideal because it makes the
test suite slower on each run.
Instead, this commit removes the flakey assertion and adds an alternative
assertion in the frontend tests that ensures the background request is
sent when the user tip is shown.
Form Kit is our new form library/framework for unifying the way forms look across Discourse. The admin config area for the /about page is a new form that isn't currently used, so it makes sense for it to be one of the first forms to be migrated to Form Kit to test the library.
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
We can get translations with invalid plural keys from Crowdin
or from custom overrides. Currently, this will raise an error and the
locales won’t be outputted at all.
This patch addresses this issue by using the new `strict: false` option
of our `messageformat-wrapper` gem, allowing locales to be generated even
if there are invalid plural keys present.
* FIX: Add post id to the anchor to prevent two identical anchors
We generate anchors for headings in posts. This works fine if there is
only one post in a topic with anchors. The problem comes when you have
two or more posts with the same heading. PrettyText generates anchors
based on the heading text using the raw content of each post, so it is
entirely possible to generate the same anchor for two posts in the same
topic, especially for topics with template replies:
```
Post1:
# heading
context

Post2:
# heading
context
```
When both posts are on the page at the same time, the anchor will only
work for the first post, according to the [HTML specification](https://html.spec.whatwg.org/multipage/browsing-the-web.html#scroll-to-the-fragment-identifier).
> If there is an a element in the document tree whose root is document
> that has a name attribute whose value is equal to fragment, then
> return the *first* such element in tree order.
This bug is particularly serious in forums with non-Latin languages,
such as Chinese. We do not generate slugs for Chinese, which results in
the heading anchors being completely dependent on their order.
```ruby
[2] pry(main)> PrettyText.cook("# 中文")
=> "<h1><a name=\"h-1\" class=\"anchor\" href=\"#h-1\"></a>中文</h1>"
```
Therefore, the anchors in the two posts must be in exactly the same
order, causing almost all of the anchors in the second post to be
invalid.
This commit solves this problem by adding the `post_id` to the anchor.
The new anchor generation method will add `p-{post_id}` as a prefix when
post_id is available:
```ruby
[3] pry(main)> PrettyText.cook("# 中文", post_id: 1234)
=> "<h1><a name=\"p-1234-h-1\" class=\"anchor\" href=\"#p-1234-h-1\"></a>中文</h1>"
```
This way we can ensure that each anchor name only appears once on the
same topic. Using post id also prevents the potential possibility of the
same anchor name when splitting/merging topics.
We are investigating a memory leak in Sidekiq and saw the following line
when comparing heap dumps over time.
`Allocated IMEMO 14775 objects of size 591000/7389528 (in bytes) at:
/var/www/discourse/app/jobs/onceoff/onceoff.rb:36`
The line in question was doing a `.select { |klass| klass < self }` on
`ObjectSpace.each_object(Class)`. For some reason, this allocates a
whole bunch of `IMEMO` objects, which are instruction sequence objects.
Instead of diving deeper into why this might be leaking, we can just
save our time by switching to an implementation that is more efficient
and does not require looping through a ton of objects.
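One way to avoid the `ObjectSpace` scan is to record subclasses as they are defined; this is a sketch of the general technique, not necessarily the exact change that landed:
```ruby
module Jobs
  class Onceoff
    # Record every subclass at definition time so enumerating onceoff jobs is a
    # cheap array read instead of an ObjectSpace.each_object(Class) scan.
    def self.inherited(subclass)
      super
      onceoff_job_klasses << subclass
    end

    def self.onceoff_job_klasses
      @onceoff_job_klasses ||= []
    end
  end
end
```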
Followup e954eb234e
Adds a test for the defer_track_view method on topic controller
to check that the early returns (nexts) work correctly
without errors.
Previously in these 2 PRs, we introduced a new site setting `SiteSetting.enforce_second_factor_on_external_auth`.
https://github.com/discourse/discourse/pull/27547
https://github.com/discourse/discourse/pull/27674
When disabled, it should enforce 2FA for local login with username and password and skip the requirement when authenticating with oauth2.
We stored information about the login method in a secure session but it is not reliable. Therefore, information about the login method is moved to the database.
Previously, we couldn't change the user agent name dynamically for onebox requests. In this commit, a new hidden site setting `onebox_user_agent` is created to override the default user agent value specified in the [initializer](c333e9d6e6/config/initializers/100-onebox_options.rb (L15)).
Co-authored-by: Régis Hanol <regis@hanol.fr>
- Ensure main title is set as 'not visible' when removed from DOM
- `deactivate` -> `willTransition` to ensure proper behavior when navigating between multiple topics
Followup to bdec564d14
- Move topic-title on-screen detection to intersection-observer (via new modifier), and add a boolean to header service which indicates whether it's on-screen
- Move scroll-direction from Mixin to dedicated service. Teach it to pause scroll monitoring while transitions are in progress, to avoid reporting false changes in scroll direction. Also resets to a 'neutral' state after each navigation, which indicates that the user has not yet scrolled
- When entering a topic view, notify the header service which post is being targeted. It can then make an educated guess about whether the topic title is likely to be in-view
- Update header service `topicInfoVisible` to be a declarative getter, based on the three refactored sources of truth mentioned above
- Update legacy widget header to use the header service for topic info
All of these changes mean that the header no longer 'flickers' when navigating into topics on mobile. As well as the improved UX, this should also improve our Cumulative Layout Shift (CLS) web vital metrics.
* FEATURE: Clean up previously logged information after permanently deleting posts
When soft deleting a topic or post, we log some details in the
staff log, including the raw content of the post. Before this commit, we
did not clear the information in these records. Therefore, after
permanently deleting the post, `UserHistory` still retained a copy of the
permanently deleted post. This is an unexpected behaviour and may raise
some potential legal issues.
This commit adds a behavior that when a post is permanently deleted, the
details column of the `UserHistory` associated with the post will be
overwritten to "(permanently deleted)". At the same time, for permanent
deletion, a new `action_id` is introduced to distinguish it from soft
deletion.
Related meta topic: https://meta.discourse.org/t/introduce-a-way-to-also-permanently-delete-the-sensitive-info-from-the-staff-logs/292546
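A hedged sketch of that clean-up; only `UserHistory#details` and the "(permanently deleted)" value come from the description above, the rest is illustrative:
```ruby
# When `post` is permanently destroyed, scrub the staff-log entries that still
# contain its raw content.
UserHistory.where(post_id: post.id).update_all(details: "(permanently deleted)")
```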
There is a bug with chat type flags - "An error occurred: Applies to is not included in the list"
`Flag.valid_applies_to_types` is a set of core types and types registered by plugins: `Set.new(DEFAULT_VALID_APPLIES_TO | DiscoursePluginRegistry.flag_applies_to_types)`.
Using a lambda should ensure that valid values are calculated dynamically.
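A sketch of the difference, kept deliberately simple (the validation wiring in the actual model may differ):
```ruby
# Before: evaluated once at class load time, so types registered later by
# plugins (e.g. chat) never make it into the set.
STATIC_TYPES = Set.new(DEFAULT_VALID_APPLIES_TO | DiscoursePluginRegistry.flag_applies_to_types)

# After: wrapped in a lambda, so the set is recomputed whenever it is needed
# and picks up anything plugins have registered in the meantime.
valid_applies_to_types = -> do
  Set.new(DEFAULT_VALID_APPLIES_TO | DiscoursePluginRegistry.flag_applies_to_types)
end
valid_applies_to_types.call # evaluated lazily, on every call
```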
This commit promotes the new topic bulk action
menu introduced in 89883b2f51
to the main method of bulk selecting and performing
actions on topics. The site setting flag gating this
feature is deleted, and the old bulk select code is
deleted as well.
The new modal shows a loading spinner while operations
are taking place, allows selecting the action from a dropdown
instead of having a 2-step modal flow,
and also supports additional options for some operations, e.g.
allowing Close silently.
Replaces the existing topic map with the experimental-topic-map made by @awesomerobot.
---------
Co-authored-by: awesomerobot <kris.aubuchon@discourse.org>
This commit introduces the foundation for a new design for the /about page that we're currently working on. The current version will remain available and still be the default until we finish the new version and are ready to roll it out. To opt into the new version right now, add one or more groups to the `experimental_redesigned_about_page_groups` site setting, and members of those groups will get the new version.
Internal topic: t/128545.
This ensures that Elasticsearch doesn't parse it as an object. There are
too many combinations of job opts, so we don't want Elasticsearch to
parse and index this field as an object.
This improves the `TextSentinel` so that we don't consider CJK text as being uppercase and thus failing the validator.
It also optimizes the entropy computation by using native ruby `.bytes` to get all the bytes from the text.
It also tweaks the `seems_pronounceable?` and `seems_unpretentious?` checks to use the `\p{Alnum}` unicode regexp group to account for non-Latin languages.
Reference - https://meta.discourse.org/t/body-seems-unclear-error-when-users-are-typing-in-chinese/88715
Inspired by https://github.com/discourse/discourse/pull/27900
Co-authored-by: Paulo Magalhaes <mentalstring@gmail.com>
* FEATURE: Add logging for CustomEmoji
We didn't provide any logs for CustomEmoji before, nor did we record the
person who added any emoji in the database. As a result, the staff had
no way to trace back who added a certain emoji.
This commit adds a new column `user_id` to `custom_emojis` to record the
creator of an emoji. At the same time, a log is added for staff logs to
record who added or deleted a custom emoji.
If a user has a required action, e.g. adding a 2FA method or filling in new required fields, we disable client-side routing except to allowed pages.
This led to a situation where a user might navigate away from e.g. the profile page to look at the new ToS, and then be "stuck" due to not knowing how to get back to accept the new terms.
This PR makes it so that if you click any restricted link, instead of doing nothing we transition the user back to the page where they can take the required action.
User actions can trigger functions that render changes to the screen within the same cycle (e.g. pressing the reply button will cause the login modal to pop up), potentially impacting performance and causing some jank on slower devices.
This change inserts runAfterFramePaint where certain actions are triggered. Below are some screenshots indicating an improved INP for some of the buttons affected on controls with the highest INPs. The two places where this is added help with several actions, e.g. user + group cards, generic button action usage.
When tags contain an underscore, we should allow filtering in the same way. Previously, due to the regex, tags with underscores were not being found when filtering.
This commit ensures that we reset the `missing_s3_uploads` status count
if there are no inventory files which are at least 2 days older than the
site's restored date.
Otherwise, a site with missing uploads that was subsequently restored will
continue to report missing uploads for 2 days.
Followup 560e8aff75
The linked commit allowed oneboxing private GitHub PRs,
issues, commits, and so on, but it didn't actually allow
oneboxing the root repo, e.g. https://github.com/discourse/discourse-reactions
We didn't have an engine for this, we were relying on OpenGraph
tags on the HTML rendering of the page like we do with other
oneboxes.
To fix this, we needed a new github engine for repos specifically.
Also, this commit adds a `data-github-private-repo` attribute to
PR, issue, and repo onebox HTML so we have an indicator of
whether the repo was private, which can be used for theme components
and so on.
Our old group SMTP SSL option was a checkbox,
but this was not ideal because there are actually
3 different ways SSL can be used when sending
SMTP:
* None
* SSL/TLS
* STARTTLS
We got around this before with specific overrides
for Gmail, but it's not flexible enough and now people
want to use other providers. It's best to be clear,
though it is a technical detail. We provide a way
to test the SMTP settings before saving them so there
should be little chance of messing this up.
This commit also converts GroupEmailSettings to a glimmer
component.
Allow admins to create a custom flag which requires an additional message.
I decided to rename the old `custom_flag` into `require_message` as it is more descriptive.
# Context
Currently, there is no way to add a custom filter to the experimental `/filter` endpoint. While you can implement a custom `status:`, there is no way to include the user's input in a custom query.
# PR
This PR adds the ability to implement a custom filter, e.g. `CUSTOM_FILTER:foo` (a usage sketch follows the list below).
- Add `add_filter_custom_filter` for extension
- Add specs
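A hedged sketch of how a plugin might use the new extension point; the exact block signature (the current scope plus the user-supplied values) is an assumption based on the description above:
```ruby
# In a plugin's plugin.rb. With this registered, `/filter?q=foo:100` could narrow
# the list to topics with at least 100 views (the behaviour shown is illustrative).
add_filter_custom_filter("foo") do |scope, values|
  scope.where("topics.views >= ?", values.first.to_i)
end
```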
This PR introduces FormKit, a component-based form library designed to simplify form creation and management. This library provides a single `Form` component, various field components, controls, validation mechanisms, and customization options. Additionally, it includes helpers to facilitate testing and writing specifications for forms.
1. **Form Component**:
- The main component that encapsulates form logic and structure.
- Yields various utilities like `Field`, `Submit`, `Alert`, etc.
**Example Usage**:
```gjs
import Form from "discourse/form";

<template>
  <Form as |form|>
    <form.Field
      @name="username"
      @title="Username"
      @validation="required"
      as |field|
    >
      <field.Input />
    </form.Field>

    <form.Field @name="age" @title="Age" as |field|>
      <field.Input @type="number" />
    </form.Field>

    <form.Submit />
  </Form>
</template>
```
2. **Validation**:
- Built-in validation rules such as `required`, `number`, `length`, and `url`.
- Custom validation callbacks for more complex validation logic.
**Example Usage**:
```javascript
validateUsername(name, value, data, { addError }) {
  if (data.bar / 2 === value) {
    addError(name, "That's not how maths work.");
  }
}
```
```hbs
<form.Field @name="username" @validate={{this.validateUsername}} />
```
3. **Customization**:
- Plugin outlets for extending form functionality.
- Styling capabilities through propagated attributes.
- Custom controls with properties provided by `form` and `field`.
**Example Usage**:
```hbs
<Form class="my-form" as |form|>
<form.Field class="my-field" as |field|>
<MyCustomControl id={{field.id}} @onChange={{field.set}} />
</form.Field>
</Form>
```
4. **Helpers for Testing**:
- Test assertions for form and field validation.
**Example usage**:
```javascript
assert.form().hasErrors("the form shows errors");
assert.form().field("foo").hasValue("bar", "user has set the value");
```
- Helper for interacting with the form
**Example usage**:
```javascript
await formKit().field("foo").fillIn("bar");
```
5. **Page Object for System Specs**:
- Page objects for interacting with forms in system specs.
- Methods for submitting forms, checking alerts, and interacting with fields.
**Example Usage**:
```ruby
form = PageObjects::Components::FormKit.new(".my-form")
form.submit
expect(form).to have_an_alert("message")
```
**Field Interactions**:
```ruby
field = form.field("foo")
expect(field).to have_value("bar")
field.fill_in("bar")
```
6. **Collections handling**:
- A specific component to handle array of objects
**Example Usage**:
```gjs
<Form @data={{hash foo=(array (hash bar=1) (hash bar=2))}} as |form|>
  <form.Collection @name="foo" as |collection|>
    <collection.Field @name="bar" @title="Bar" as |field|>
      <field.Input />
    </collection.Field>
  </form.Collection>
</Form>
```
Previously, we did not log any topic slow mode changes. This allowed
some malicious (or just careless) TL4 users to delete slow modes created
by moderators at will. Administrators could not see who changed the slow
mode unless they had SQL knowledge and used Data Explorer.
This commit enables logging who turns slow mode on, off, or changes it.
Related meta topic: https://meta.discourse.org/t/why-is-there-no-record-of-who-added-or-removed-slow-mode/316354
Followup 7b627dc14b
In this other commit, I changed the email settings validator
to always use the `login` authentication method for
office365 and outlook, but I didn't change the actual
group SMTP mailer to do this.
This commit fixes that issue and does some minor refactoring.
This commit updates `TopicQuery.validators` to cover all of the
public options listed in `TopicQuery.public_valid_options`. This is done
to fix the app returning a 500 response code when an invalid value, such
as a hash, is passed as a query param when accessing the various topic
list routes.
Originally in 964da21817
we hid the SMTPAuthenticationError message except in
very specific cases. However this message often contains
helpful information from the mail provider, for example
here is a response from Office365:
> 535 5.7.139 Authentication unsuccessful, user is locked by your
> organization's security defaults policy. Contact your administrator.
So, we will show the error message in the modal UI instead
of suppressing it with a generic message, to be more helpful.
The watched words controller's creation function, `create_or_update_word()`, doesn't validate the size of the `replacement` parameter (unlike the `word` parameter) when creating a "replace" watched word. So anyone with moderator privileges can create watched words with an almost unlimited number of characters.
This commit updates `StaticController#enter` to not redirect to invalid
paths when the `redirect` param is set. Instead it should redirect to `/` when the
`redirect` param is invalid.
Followup 560e8aff75
GitHub auth tokens cannot be made with permissions to
access multiple organisations. This is quite limiting.
This commit changes the site setting to be a "secret list"
type, which allows for a key/value mapping where the value
is treated like a password in the UI.
Now when a GitHub URL is requested for oneboxing, the
org name from the URL is used to determine which token
to use for the request.
Just in case anyone used the old site setting already,
there is a migration to create a `default` entry
with that token in the new list setting, and for
a period of time we will consider that token valid to
use for all GitHub oneboxes as well.
### What is the problem?
We have recently added a new option to add user fields required for existing users. This is in contrast to requiring fields only on sign-up.
This revealed an existing problem. Consider the following:
1. User A signs up.
2. Admin adds a new user field required on sign-up. (Should not apply to User A since they already signed up.)
3. User A tries to update their profile.
**Expected behaviour:**
No problem.
**Actual behaviour:**
User A receives an error saying they didn't fill up all required fields.
### How does this fix it?
When updating a profile, we only check that required fields that are "for all users" are filled. Additionally, we check that fields that were required on sign-up and have previously been filled are not blanked out.
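A rough sketch of that rule, with illustrative names rather than the actual Discourse implementation:
```ruby
# Returns the fields that block the profile update.
def required_field_violations(user, attributes, fields_for_all_users, signup_only_fields)
  # "for all users" fields must always be present…
  violations = fields_for_all_users.select { |field| attributes[field.id].blank? }

  # …and sign-up-only fields that already had a value must not be blanked out.
  violations += signup_only_fields.select { |field|
    user.user_fields[field.id.to_s].present? && attributes[field.id].blank?
  }

  violations
end
```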
Currently, when a plugin registers a new reviewable type or extends a
list method (through `register_reviewable_type` and `extend_list_method`
respectively), the new array is statically computed and always returns
the same value. It will continue to return the same value even if the
plugin is disabled (it can be a problem in a multisite env too).
To address this issue, this patch changes how `extend_list_method`
works. It’s now using `DiscoursePluginRegistry.define_filtered_register`
to create a register on the fly and store the extra values from various
plugins. It then combines the original values with the ones from the
registry. The registry is already aware of disabled plugins, so when a
plugin is disabled, its registered values won’t be returned.
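A sketch of the pattern; the register and type names here are illustrative:
```ruby
# Define a register whose reads automatically filter out entries that were
# registered by plugins that are currently disabled.
DiscoursePluginRegistry.define_filtered_register :reviewable_types

# A plugin adds its own type (typically from its plugin.rb):
DiscoursePluginRegistry.register_reviewable_type(ReviewableChatMessage, plugin)

# The list method then combines the static core values with whatever is
# currently registered and enabled:
def self.types
  BASE_TYPES | DiscoursePluginRegistry.reviewable_types.to_a
end
```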
This commit improves the logging of Sidekiq errors when
`ENABLE_LOGSTASH_LOGGER` is set to 1. Prior to this change, we would
only log the message and the backtrace. After this change, useful
information like `job.class`, `job.opts`, `job.problem_db`,
`exception.class` and `exception.message` are included in the log line
as well.
Both office365 and outlook SMTP servers need LOGIN
SMTP authentication instead of PLAIN (which is what
we are using by default). This commit uses that
unconditionally for these servers, and also makes
sure to use STARTTLS for them too.
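A sketch of the idea; the host list and settings hash are illustrative, not the exact code that landed:
```ruby
# Office365 and Outlook reject PLAIN auth, so force LOGIN and STARTTLS for them.
def adjust_smtp_settings!(settings)
  if %w[smtp.office365.com smtp-mail.outlook.com].include?(settings[:address])
    settings[:authentication] = :login
    settings[:enable_starttls_auto] = true
  end
  settings
end
```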
Before Rails 7.1, the `config.i18n.raise_on_missing_translations` option
was raising only in controllers and views, now it’s anywhere in the app.
It means it raises each time `#description` is called for a setting that
is missing a proper description (and we have a ton of them). Most of the
time it’s fine, as those are usually settings that aren’t shown to the
user.
We can’t just let the code blow up every time there’s a setting with a
missing description, that’s why it’s currently returning an empty
string when the translation is missing.
However, this silently broke our I18n integrity spec that was relying on
the old “Translation missing” message to detect missing translations.
This patch addresses this issue by checking the description isn’t an
empty string. It caught a missing translation by the way.
This changes how we present attachments from incoming emails: they are now "hidden" in a "[details]" block so they don't "hang" at the end of the post.
This is especially useful when using Discourse as a support tool where email is the main communication channel. For various reasons, images are often duplicated by email user agents, and hiding them behind the details block helps keep the conversation focused on the issue at hand.
Internal ref t/122333
This patch upgrades the MessageFormat library to version 3.3.0 from
0.1.5.
Our `I18n.messageFormat` method signature is unchanged, and now uses the
new API under the hood.
We don’t need dedicated locale files for handling pluralization rules
anymore as everything is now included by the library itself.
The compilation of the messages now happens through our
`messageformat-wrapper` gem. It then outputs an ES module that includes
all its needed dependencies.
Most of the changes happen in `JsLocaleHelper` and in the `ExtraLocales`
controller.
A new method called `.output_MF` has been introduced in
`JsLocaleHelper`. It handles all the fetching, compiling and
transpiling to generate the proper MF messages in JS. Overrides and
fallbacks are also handled directly in this method.
The other main change is that now the MF translations are served through
the `ExtraLocales` controller instead of being statically compiled in a
JS file, then having to patch the messages using overrides and
fallbacks. Now the MF translations are just another bundle that is
created on the fly and cached by the client.
Drafts used to be deleted instead of being destroyed. The callbacks that
clean up the upload references were not being called. As a result, the
upload references were not cleaned up and uploads were not deleted
either. This has been partially fixed in 9655bf3e.
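The difference, in ActiveRecord terms (the `Draft` scope shown is only illustrative):
```ruby
scope = Draft.where(user_id: user.id, draft_key: draft_key)

scope.delete_all  # bare SQL DELETE: callbacks skipped, upload references left behind
scope.destroy_all # instantiates each record and runs the callbacks that clean up upload references
```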
This commit updates `DiscourseLogstashLogger#add_with_opts` to avoid
logging messages that match regexp patterns configured in
`Logster.store.ignore`. Those error logs are mostly triggered by clients
and do not serve any useful purpose.
This commit updates `DiscourseLogstashLogger` to add the
`exception_class` and `exception_message` field to the log line when the
`progname` of the log message is `web-exception` which is Logster's
logging of exceptions during a web request.
The `exception_class` and `exception_message` fields allow consumers of
the logs to easily group logs together.
This commit adds the ability to onebox private GitHub
commits, pull requests, issues, blobs, and actions using
a new `github_onebox_access_token` site setting. The token
must be set up correctly to have access to the repos needed.
To do this successfully with the Oneboxer, we need to skip
redirects on the github.com host, otherwise we get a 404
on the URL before it is translated into a GitHub API URL
and has the appropriate headers added.
Handles the cases where the section titles are Unicode-only strings, allowing them to be expanded separately if the Unicode string contains letters.
Also prevents a sidebar section with a hidden header from being displayed collapsed.
Followup 3ff7ce78e7
Basing this setting on referrer was too brittle --
the referrer header can easily be omitted or changed.
Instead, for the small amount of use cases that this
site setting serves, we can use a group-based setting
instead, changing it to `cross_origin_opener_unsafe_none_groups`.
When a crawler visits a topic that has a deleted author, it would error because the `show.html.erb` view was expecting a user to always be present.
This ensures we don't render the "author" metadata when the author of the topic has been deleted.
Internal ref t/132508
This change eliminates a couple of instances where subfolder URLs are badly formatted. In most cases we can use `Discourse.base_url_no_prefix` to prevent adding the subfolder to the base URL.
Adds a report to show the top 100 most viewed topics in a date range,
combining logged in and anonymous views. Can be filtered by category.
This is a followup to 527f02e99f
and d1191b7f5f. We are also going to
be able to see this data in a new topic map, but this admin report
helps to see an overview across the forum for a date range.
This commit adds a hidden `s3_inventory_bucket_region` site setting to
specify the region of the `s3_inventory_bucket` when the `S3Inventory`
class initializes an instance of the `S3Helper`. By default, the
`S3Helper` class uses the value of the `s3_region` site setting but the
region of the `s3_inventory_bucket` is not always the same as the
`s3_region` configured.
Background:
In order to redrive failed webhook events, an operator has to go through and click on each. This PR is adding a mechanism to retry all failed events to help resolve issues quickly once the underlying failure has been resolved.
What is the change?:
Previously, we had to redeliver each webhook event individually. This merge adds a 'Redeliver Failed' button next to the webhook event filter to redeliver all failed events. If there are no failed webhook events to redeliver, 'Redeliver Failed' is disabled. If you click it, a window pops up asking the operator to confirm. Failed webhook events will be added to the queue and the webhook event list will show the redelivery progress. Every minute, a job will run to go through 20 events to redeliver. Every hour, a job will clean up redelivering events which have been stored for more than 8 hours.
This is a follow up to 005f623c42 where
we want to truncate the user agent string instead of nulling out the
column when the user agent string is too long. By truncating, we still
retain information that can be useful.
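A tiny sketch of the change, assuming the 2000-character limit mentioned in the related SearchLog commit further down:
```ruby
MAX_USER_AGENT_LENGTH = 2000
user_agent = request.user_agent

# Before: the whole value was discarded when it exceeded the limit.
#   user_agent = nil if user_agent && user_agent.length > MAX_USER_AGENT_LENGTH

# After: keep the first 2000 characters so the information is retained.
user_agent = user_agent&.truncate(MAX_USER_AGENT_LENGTH)
```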
Adds experimental_flags_admin_page_enabled_groups (default "")
to remove the Moderation Flags link from the admin sidebar for now;
there are still a few bugfixes that need to be done before we
are comfortable with turning this on more widely. This is
a _temporary_ flag; we will remove it once the feature
is more stable.
Add a new column - `user_agent` - to the `SearchLog` table.
This column can be null as we only allow the user-agent string to have a max length of 2000 characters. In case the user-agent string surpasses the max characters allowed, we simply nullify the value and save/write the log as normal.
The user serializer's groups method previously relied on members_visible_groups to determine the groups the user should be able to see; however, this setting was intended for visibility of group members (which is entirely different).
The result of this could be seen when choosing a primary group from user preferences -> account: due to the serializer, the group name was not visible when members_visible_groups was set to owners.
In our official Docker image, running git commands results in the
following error:
```
fatal: detected dubious ownership in repository at '/var/www/discourse'
To add an exception for this directory, call:
git config --global --add safe.directory /var/www/discourse
```
This commit rewrites `DiscourseLogstashLogger` to not be an instance
of `LogstashLogger`. The reason we don't want it to be an instance of
`LogstashLogger` is because we want the new logger to be chained to
Logster's logger which can then pass down useful information like the
request's env and error backtraces which Logster has already gathered.
Note that this commit does not bother to maintain backwards
compatibility and drops the `LOGSTASH_URI` and `UNICORN_LOGSTASH_URI`
ENV variables which were previously used to configure the destination in
which `logstash-logger` would send the logs to. Instead, we introduce
the `ENABLE_LOGSTASH_LOGGER` ENV variable to replace both ENV and remove
the need for the log paths to be specified. Note that the previous
feature was considered experimental as stated in d888d3c54c
and the new feature should be considered experimental as well. The code
may be moved into a plugin in the future.
Instead of the deprecated 'min trust level to allow ignore' in order to reduce the number of deprecation notices in the logs.
This tweaks a few serializers so that the `can_ignore_users?` property always comes from the server and is properly used on the client side.
SiteSetting.hide_user_profiles_from_public raises a Forbidden error, which prevents our after_action that adds the noindex header from triggering.
This fix makes sure that the noindex header gets added via a before_action instead.
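A sketch of the shape of the fix; the callback name is illustrative:
```ruby
class UsersController < ApplicationController
  # An after_action never runs once the action raises (e.g. the Forbidden error
  # from hide_user_profiles_from_public), so the header is now added up front.
  before_action :add_noindex_header
end
```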
When a tag preference in group and site settings are both used with the same default notification level, it breaks new user signups because it tries to create duplicate records in the tag_users table, which can't happen because we have a unique index set.
If an existing user (John) accepts an invite created by Kenny to a group, John may be seen as invited by Kenny, despite already having an account on the site.
This fix removes the bug by excluding invites from the invited_by determination when they come after the user's creation date. The delay buffer in the query accounts for invites that also create the user at the same time.
Badges can have their associated image uploads deleted. When this happens, any user who has that badge will have their profile page error out.
After this fix, when deleting an upload that's associated with a badge, we nullify the foreign key ID on the badge. This makes the existing safeguard work correctly.
There's currently a race condition in the following spec:
65be7a7880/spec/system/admin_about_config_area_spec.rb (L70-L95)
where the form can be saved before the image uploader field has finished uploading the selected image, causing the assertion at line 94 to fail with the following error:
```
Failure/Error: expect(SiteSetting.about_banner_image.sha1).to eq(Upload.generate_digest(image_file))
NoMethodError:
undefined method `sha1' for nil
[Screenshot Image]: /__w/discourse/discourse/tmp/capybara/failures_r_spec_example_groups_admin_about_config_area_page_the_general_settings_card_can_saves_its_fields_to_their_corresponding_site_settings_312.png
~~~~~~~ JS LOGS ~~~~~~~
http://localhost:31338/assets/vendor.js 15902:14 "WARNING: uppy needs a unique id, pass one in to the component implementing this mixin"
~~~~~ END JS LOGS ~~~~~
./spec/system/admin_about_config_area_spec.rb:94:in `block (3 levels) in <main>'
./spec/rails_helper.rb:552:in `block (3 levels) in <top (required)>'
./spec/rails_helper.rb:552:in `block (2 levels) in <top (required)>'
./spec/rails_helper.rb:513:in `block (3 levels) in <top (required)>'
./spec/rails_helper.rb:503:in `block (2 levels) in <top (required)>'
./spec/rails_helper.rb:460:in `block (2 levels) in <top (required)>'
./vendor/bundle/ruby/3.3.0/gems/webmock-3.23.1/lib/webmock/rspec.rb:39:in `block (2 levels) in <top (required)>'
```
This PR fixes the problem by making the system test wait for the image to finish uploading (with 10 seconds timeout) before carrying out the rest of the system test.
Followup 2f2da72747
When the "Consolidated Pageviews with Browser Detection (Experimental)"
report was introduced, we started counting the original
"page_view_logged_in" and "page_view_anon" ApplicationRequest
data as "Other Pageviews", subtracting
"page_view_anon_browser" and "page_view_logged_in_browser" from
this number.
However we unknowingly automatically started counting these
browser-based page views, which are a subset of the total
"page_view_logged_in" and "page_view_anon" counts, in the
original "Pageviews" report, leading to double counting
which meant that when you looked at the data for each
report side-by-side the data didn't add up.
This commit fixes the issue by not counting the "browser"
pageviews in the Pageviews report, and making the code where
we were only counting certain types of requests for this
report more plain, explicitly stating which types of requests
we want.
When a topic embed is run with either no tags argument or a nil tags argument,
this should not affect any existing tags.
Only update topic tags when the tags argument is explicitly empty.
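A minimal sketch of the guard, assuming `DiscourseTagging.tag_topic_by_names` is what applies the tags (the surrounding embed code is not shown):
```ruby
# nil (or absent) tags leave the topic's existing tags untouched; an explicitly
# empty array still goes through and clears them.
unless tags.nil?
  DiscourseTagging.tag_topic_by_names(topic, Guardian.new(user), tags)
end
```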
Before checking if flags were reordered on the topic page, we need to ensure that the reorder action has finished. To achieve this, a "saving" CSS class is added and then removed when the AJAX call is completed.
Followup 2f2da72747
This commit moves topic view tracking from happening
every time a Topic is requested, which is susceptible
to inflating numbers of views from web crawlers, to
our request tracker middleware.
In this new location, topic views are only tracked when
the following headers are sent:
* HTTP_DISCOURSE_TRACK_VIEW - This is sent on every page navigation when
clicking around the ember app. We count these as browser page views
because we know it comes from the AJAX call in our app. The topic ID
is extracted from HTTP_DISCOURSE_TRACK_VIEW_TOPIC_ID
* HTTP_DISCOURSE_DEFERRED_TRACK_VIEW - Sent when MessageBus initializes
after first loading the page to count the initial page load view. The
topic ID is extracted from HTTP_DISCOURSE_DEFERRED_TRACK_VIEW.
This will bring topic views more in line with the change we
made to page views in the referenced commit and result in
more realistic topic view counts.
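A sketch of how the middleware could pull the topic ID out of those headers; the header names come from the list above, everything else is illustrative:
```ruby
topic_id =
  if env["HTTP_DISCOURSE_TRACK_VIEW"]
    env["HTTP_DISCOURSE_TRACK_VIEW_TOPIC_ID"]
  elsif env["HTTP_DISCOURSE_DEFERRED_TRACK_VIEW"]
    # for deferred tracking the header value itself carries the topic ID
    env["HTTP_DISCOURSE_DEFERRED_TRACK_VIEW"]
  end

track_topic_view(topic_id.to_i, env) if topic_id # track_topic_view is illustrative
```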
By default, secure sessions expire after 1 hour.
For OAuth authentication, it should expire at the same time as the authentication cookie - `SiteSetting.maximum_session_age.hours`.
It is possible that the forum will not have persistent sessions, based on the `persistent_sessions` site setting. In that case, with the next username and password authentication, we need to reset the information about OAuth.
Bug introduced in this PR - https://github.com/discourse/discourse/pull/27547
* FIX: Division by zero error on WebHookEventsDailyAggregate
* DEV: Update implementation of WebHookEventsDailyAggregate to handle division by zero error
I am changing many of these to notes or resolving them as-is.
Most of these I have not actively worked on in years, so someone
else can work on them when we get to these areas again.
This commit continues work laid out by ffec8163b0 for the admin config page for the /about page. The last commit set up the user interface, and this one sets up all the wiring needed to make the input fields and save buttons actually work.
Internal topic: t/128544.
While creating a new category, if the user clicked the `minimum_required_tags` input but didn't specify a value, it returned the "PG::NotNullViolation: null value in column 'minimum_required_tags'" error.
When bad data is provided in the URI for redirecting to a category,
Rails raises an `ActionController::Redirecting::UnsafeRedirectError`
error, leading to a 500 error.
This patch catches the exception to render a 404 instead.
Currently redirecting to an external URL through a permalink doesn’t
work because Rails raises a
`ActionController::Redirecting::UnsafeRedirectError` error.
This wasn’t the case before we upgraded to Rails 7.0.
This patch fixes the issue by using `allow_other_host: true` on the
redirect.
If, for whatever reason, the user's locale is "blank" and an admin is accepting their group membership request, there will be an error because we're generating posts with the locale of the recipient.
In order to fix this, we now use the `user.effective_locale` which takes care of multiple things, including returning the default locale when the user's locale is blank.
Internal ref - t/132347
When using the full page search and filtering down to a specific topic, the sort order was overridden to sort by "post_number".
This was confusing because we allow different types of sort order on the full search page.
This fixes it by only sorting by post_number when there's no "global" sort order defined.
Since the "new topic map" uses the search endpoint behind the scenes, this also fixes the "most likes" popup.
Context - https://meta.discourse.org/t/searching-order-seems-to-be-broken-when-searching-in-topic/312303
* DEV: add db migration to filter out invalid csp script source values
* DEV: insert UserHistory row during data migration to track old value for content_security_policy_script_src site setting
When visiting a user profile, and then opening the search, there's an option to filter down by posts made by that user.
When clicking that option, it used to pre-fill the "search bar" with "@<username>" to filter down the search.
This restores that behaviour and adds a system spec to ensure it doesn't regress.
Context - https://meta.discourse.org/t/in-posts-by-search-option-does-not-work-when-clicked/312916
We want to allow admins to make new required fields apply to existing users. In order for this to work we need to have a way to make those users fill up the fields on their next page load. This is very similar to how adding a 2FA requirement post-fact works. Users will be redirected to a page where they can fill up the remaining required fields, and until they do that they won't be able to do anything else.
Followup 96a0781bc1
When sending emails where secure uploads is enabled
and secure_uploads_allow_embed_images_in_emails is
true, we attach the images to the email, and we
do some munging with the final email so the structure
of the MIME parts looks like this:
```
multipart/mixed
multipart/alternative
text/plain
text/html
image/jpeg
image/jpeg
image/png
```
However, we were not specifying the `boundary` of the
`multipart/mixed` main content-type of the email, so
sometimes the email would come through appearing to
have an empty body with the entire thing attached as
one attachment, and some mail parsers considered the
entire email as the "epilogue" and/or "preamble".
This commit fixes the issue by specifying the boundary
in the content-type header per https://www.w3.org/Protocols/rfc1341/7_2_Multipart.html
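A sketch of the header fix using the Mail gem, assuming the message has already been rebuilt as `multipart/mixed`:
```ruby
# Restate the top-level content type with the boundary the Mail gem generated for
# the body, so parsers can locate the individual MIME parts.
message.content_type = "multipart/mixed; boundary=\"#{message.body.boundary}\""
```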
Adds a checkbox to filter untranslated text strings in the admin UI, behind a hidden site setting `admin_allow_filter_untranslated_text` that defaults to `false`.
Followup to 0e1102b332
Minor followup, makes the condition check against the
boolean val, see the difference here:
```ruby
!SiteSetting.enforce_second_factor_on_external_auth && "true"
=> "true"
```
vs:
```ruby
!SiteSetting.enforce_second_factor_on_external_auth && "true" == "true"
=> true
```
* Removed the link from the title, so the settings can only be accessed via the settings button on the right
* Added an icon to the "Learn more" link to indicate that it opens a new window
* Made various styling adjustments
* DEV: Upgrade Rails to 7.1
* FIX: Remove references to `Rails.logger.chained`
`Rails.logger.chained` was provided by Logster before Rails 7.1
introduced their broadcast logger. Now all the loggers are added to
`Rails.logger.broadcasts`.
Some code in our initializers was still using `chained` instead of
`broadcasts`.
* DEV: Make parameters optional to all FakeLogger methods
* FIX: Set `override_level` on Logster loggers (#27519)
A followup to f595d599dd
* FIX: Don’t duplicate Rack response
---------
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
In some instances, the `modifications` of `tags` haven't been properly serialized as a Ruby array but rather as a string (I've seen `""`, `"[]"`, and `"[\"\"]"`).
This generates an error when we try to `filter_tags` and remove `hidden_tags` (which is an array) from `tags` which might be a string.
Internal ref - t/131126
I wasn't able to figure out the root cause of this so I reverted the behavior that was introduced ~6 years ago in f2c060bdf2
This ensures that the theme id is resolved as early as possible in the
request cycle. This is necessary for the custom homepage to skip
preloading the wrong data.
Previously the filter route was not setting the topic list, which meant that
keyboard navigation using "G" "J" was not functioning.
This amends it by ensuring the list is set after looking up the model.
* DEV: Upgrade Rails to 7.1
* FIX: Remove references to `Rails.logger.chained`
`Rails.logger.chained` was provided by Logster before Rails 7.1
introduced their broadcast logger. Now all the loggers are added to
`Rails.logger.broadcasts`.
Some code in our initializers was still using `chained` instead of
`broadcasts`.
* DEV: Make parameters optional to all FakeLogger methods
* FIX: Set `override_level` on Logster loggers (#27519)
A followup to f595d599dd
* FIX: Don’t duplicate Rack response
---------
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
In this PR we introduced `enforce_second_factor_on_external_auth` setting https://github.com/discourse/discourse/pull/27506
When it is set to false and the user is authenticated via OAuth, then we should not enforce the 2FA configuration.
We recently fixed a problem where secure upload images weren't re-attached when sending the activity summary e-mail.
This fix contained a bug that would lead to multiple copies of the e-mail body being included (one per attachment). This is because #fix_parts_after_attachments! was called once per attachment, adding more parts to the multipart e-mail each time.
This PR fixes that by:
- adding a failing test case for the above;
- moving the looping over multiple posts into #fix_parts_after_attachments! itself.
When a post is cooked the links are extracted and `TopicLink` instances
are created for each of them. These links are used in various places,
including the topic view, user summary page, etc.
In the previous commit 48e5d1a, hotlinked images from Oneboxes were excluded from the extracted text, but hotlinked images turned into Lightboxes were still being extracted.
The test that checks that securely uploaded images are re-attached to the digest e-mail wasn't rendering the actual digest e-mail template. This change fixes that.
Many site settings can be destructive or have huge side-effects
for a site that the admin may not be aware of when changing them.
This commit introduces a `requires_confirmation` attribute that
can be added to any site setting. When it is true, a confirmation
dialog will open if that setting is changed in the admin UI,
optionally with a custom message that is defined in client.en.yml.
If the admin does not confirm, we reset the setting to its previous
clean value and do not save the new value.
Some tooling may rely on an unsafe-none cross origin opener policy to work. This change adds a hidden site setting that can be used to list referrers where we add this header instead of the default one configured in cross_origin_opener_policy_header.
For Topic Embeds, we would prefer <article> to be the main article in a topic, rather than a table cell <td> with potentially a lot of data. However, in an example URL like here, the table cell (the very large code snippet) is seen as the Topic Embed's article because of the content weight determined by the Readability library we use.
In the newly released 0.7.1 cantino/ruby-readability#94, the library has a new option to exclude its default <td> element from content weighting. This is more in line with the original library, which only weighted <p>. So this PR excludes the td, as seen in the tests, to allow the actual article to be seen as the article. This PR also adds the details tag to the allow-list.
Followup 6b872c4c53
Even though we were showing a validation error for a reject
reason that was too long, we were still sending an email and
doing other operations on the user who is being rejected.
This commit fixes this by validating the reviewable model
before attempting to do anything else after the reason is set.
A new admin setting called `enforce_second_factor_on_external_auth`. It allows users to authenticate using external providers even when 2FA is forced with the `enforce_second_factor` site setting.
* Revert "FIX: Set `override_level` on Logster loggers (#27519)"
This reverts commit c1b0488c54.
* Revert "DEV: Make parameters optional to all FakeLogger methods"
This reverts commit 3318dad7b4.
* Revert "FIX: Remove references to `Rails.logger.chained`"
This reverts commit f595d599dd.
* Revert "DEV: Upgrade Rails to 7.1"
This reverts commit 081b00391e.
`Rails.logger.chained` was provided by Logster before Rails 7.1
introduced their broadcast logger. Now all the loggers are added to
`Rails.logger.broadcasts`.
Some code in our initializers was still using `chained` instead of
`broadcasts`.
Currently when a cache entry is corrupt, we log the event without doing
anything else. It means the cache is still corrupt, and the proper value
isn’t computed again.
Normally, it’s very rare that the cache becomes corrupt, but it can happen
when upgrading Rails for example and the cache format changes. This is
normally handled automatically by Rails but since we’re using a custom
cache class, we have to do it ourselves.
This patch takes the same approach the Rails team did, when a cache
entry is corrupt, we treat it as a miss, recomputing the proper value
and caching it in the new format.
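A simplified sketch of the approach (the method and error names here are hypothetical, not Discourse's actual cache internals):
```ruby
def fetch(key, &block)
  raw = redis.get(key)
  return deserialize(raw) if raw

  write_back(key, block.call)
rescue DeserializationError # hypothetical error raised for a corrupt entry
  # Treat the corrupt entry as a miss: recompute and cache in the new format.
  write_back(key, block.call)
end

def write_back(key, value)
  redis.set(key, serialize(value))
  value
end
```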
Wasn't quite handling the cases where a closing bracket `]` was used in the value of one of the attributes.
```markdown
[chat quote=user channel="[broken]"]
```
Would not be correctly parsed because we would _greedily_ use the first `]` as the end of the tag even though it might be a valid character when inside proper quotes.
c39a4de139/app/assets/javascripts/discourse-markdown-it/src/features/bbcode-block.js (L62)
Re-wrote the `parseBBCodeTag` to properly handle the following cases
- A closing tag (aka `[/name]`) which are easy since they don't have any attributes
- An old `[quote=...]` format we used that doesn't use quotes but still has various attributes of the form `key:value`
- All three valid BBCode opening tag formats we support
- `[name]` without any attributes
- `[name=foo]` with a default value
- `[name foo=bar]` with some attributes
Ended up having to fix/rewrite the few bbcode rules that were using the `parseBBCodeTag` function, namely `d-wrap` and `discourse-local-dates`.
While working on this, I think I also found a way to get rid of the shims we had in place so that plugins could use the `parseBBCodeTag` function.
Reference - https://meta.discourse.org/t/having-a-right-bracket-in-a-channel-name-breaks-all-quotes-from-that-channel/308439
* Load search results in displayed order so that when more categories are loaded on scroll, they appear at the end.
* Limit the number of subcategories that are shown per category and display 'show more' links.
This commit adds ability to fetch a subset of site settings from the `/admin/site_settings` endpoint so that it can be used in all places where the client app needs access to a subset of the site settings.
Additionally, this commit also introduces a new service class called `UpdateSiteSetting` that encapsulates all the logic that surrounds updating a site setting so that it can be used to update site setting(s) anywhere in the backend. This service comes in handy with, for example, the controller for the flags admin config area which may need to update some site settings related to flags.
Internal topic: t/130713.
This commit introduces the `sidekiq_report_long_running_jobs_minutes`
global setting which allows a site administrator to log a warning in the
Rails log when a Sidekiq job has been running for too long.
The warning is logged with the backtrace of the thread that is
processing the Sidekiq job to make it easier to figure out what a
sidekiq job is stuck on.
When we turn on settings automatically for customers,
we sometimes use `.set_and_log` which will make a staff
action log for the site setting change. This is fine, but
there is no context for customers.
This change allows setting a message with `.set_and_log`, which
will be stored in the `details` column of the staff action log
created, which will show up on `/admin/logs/staff_action_logs`
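A hedged example of the call (the setting name and exact argument order are illustrative):
```ruby
# The message ends up in the `details` column of the staff action log entry.
SiteSetting.set_and_log(
  :enable_whispers, # illustrative setting
  true,
  Discourse.system_user,
  "Enabled as part of the agreed support plan", # context shown to admins in the log
)
```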
---------
Co-authored-by: Kelv <kelv@discourse.org>
In #26642 we introduced a change that re-attaches securely uploaded images in the digest e-mail. However, this change assumed that the type argument to the Email::Sender constructor would be a symbol, but when it is coming from the UserEmail job it is a string. This PR fixes that.
This introduces the syntax of
`category:a,b,c` which will search across multiple categories.
Previously there was no way to allow search across a wide selection of
categories.
After working on the webhook events filter by status, I noticed that the 'Delivered' and 'Failed' options did not pass the status param when loading more than fifty webhook events, causing all webhook events to load regardless of their status after the first batch.
This PR adds the webhook event status to the params when loading more than fifty webhook events.
This commit tries another work around for the `Socket::ResolutionError: getaddrinfo: Temporary failure in name resolution`
error we are seeing on CI.
The problem with the previous workaround is that `Capybara.using_session` will attempt to resolve `localhost`
before yielding the block which means our retry code is not hit.
This problem may be related to https://bugs.ruby-lang.org/issues/20172
which hints at us potentially not being able to spin up threads on CI so
I'm adding a debugging statement when stuff fails.
decorator-transforms (https://github.com/ef4/decorator-transforms) is a modern replacement for babel's plugin-proposal-decorators. It provides a decorator implementation using modern browser features, without needing to enable babel's full suite of class feature transformations. This improves the developer experience and performance.
In local testing with Google's 'tachometer' tool, this reduces Discourse's 'init-to-render' time by around 3-4% (230ms -> 222ms).
It reduces our initial gzip'd JS payloads by 3.2% (2.43MB -> 2.35MB), or 7.5% (14.5MB -> 13.4MB) uncompressed.
This was previously reverted in 97847f6. This version includes a babel transformation which works around the bug in Safari <= 15.
For Cloudflare compatibility issues, check https://meta.discourse.org/t/311390
Not all HTML elements are converted into Markdown. Some are kept as HTML.
Without this fix XML/HTML entities that are formatted as text instead of code are swallowed by Discourse.
This also fixes quotes in the `title` attribute of the `<abbr>` tag.
Previously `HtmlToMarkdown` always converted HTML tables into Markdown tables. That led to some badly formatted Markdown tables, e.g. when the table contained `rowspan` or `colspan`. This solves the issue by using very basic HTML tables in those cases.
When running ` rspec spec/services/external_upload_manager_spec.rb`
in the development environment, tests were failing with the following
error:
```
NameError:
uninitialized constant FakeS3::Aws
```
This commit introduces a hidden `s3_inventory_bucket` site setting which
replaces the `enable_s3_inventory` and `s3_configure_inventory_policy`
site settings.
The reason `enable_s3_inventory` and `s3_configure_inventory_policy`
site settings are removed is because this feature has technically been
broken since it was introduced. When the `enable_s3_inventory` feature
is turned on, the app will configure a daily inventory policy for the
`s3_upload_bucket` bucket and store the inventories under a prefix in
the bucket. The problem here is that once the inventories are created,
there is nothing cleaning up all these inventories so whoever that has
enabled this feature would have been paying the cost of storing a whole
bunch of inventory files which are never used. Given that we have not
received any complaints about inventory files inflating S3 storage costs,
we think that it is very likely that this feature is no longer being
used and we are looking to drop support for this feature in the not too
distant future.
For now, we will still support a hidden `s3_inventory_bucket` site
setting which site administrators can configure via the
`DISCOURSE_S3_INVENTORY_BUCKET` env.
This is a follow up to 9ff0805a1d. We
noticed that `localhost` can fail to resolve in other spots of the app
and not just in selenium-webdriver.
From the failing tests we have seen, the `getaddrinfo: Temporary failure in name resolution` error is only
seen from within the `Capybara.using_session` block. This commit aims to
ensure that `localhost` can be resolved after the new session is started.
* FEATURE: Add Filter for Webhook Events by Status
* Fixing multiple issues
* Lint
* Fixing multiple issues
* Change the range of the status for webhook events
We want to get rid of the old topic bulk actions modal
and use the new dropdown (currently gated behind
experimental_topic_bulk_actions_enabled_groups). To do
this we need to use the new dropdown in all places in the
UI.
This commit changes the full page search UI to use the new
topic bulk actions dropdown if experimental_topic_bulk_actions_enabled_groups
is enabled, and makes some minor refactors to make this work.
Also add a spec for both the old and new functionality.
This commit fixes a problem where the user will not be able to reset
their password when they only have security keys and backup codes
configured.
This commit also makes the following changes/fixes:
1. Splits password reset system tests to
`spec/system/forgot_password_spec.rb` instead of missing the system
tests in `spec/system/login_spec.rb` which is mainly used to test
the login flow.
2. Fixes a UX issue where the `Use backup codes` or `Use authenticator
app` text is shown on the reset password form when the user does
not have either backup codes or an authenticator app configured.
The mistake was made when flags were moved to the database. The `notify_moderators` ("something else") flag should be in the last position on the list.
This commit contains 3 changes:
- update fixtures order;
- remove position and enable from fixtures (they can be overridden by admin and we don't want seed to restore them);
- migration to fix data if the order was not changed by admin.
This commit updates the `UserPasswordExpirer.expire_user_password`
method to update `UserPassword#password_expired_at` when an existing
`UserPassword` record exists with the same `password_salt`,
`password_hash` and `password_algorithm`. This is to prevent the unique
validation error on `UserPassword#user_id` and
`UserPassword#password_hash` from being raised when the method is called
twice for a user that has not changed their password.
This commit includes various UX improvements to the reset
password page:
* Introduce a `hide-application-header-buttons` helper to do the following:
* Hide Sign Up and Log In buttons, they are not necessary on this flow
* Hide the sidebar, it is a distraction on this flow
* Improve messaging when a 2FA confirmation is required first
* Improve display of server-side ActiveRecord model validation errors
in password form, e.g. instead of "is the same as your current password"
we do "The password is the same as your current password"
* Move password tip to next line below input and move caps lock hint
inline with Show/Hide password toggle
* Add system specs for 2FA flow on reset password page
* Fixes a computed property conflict issue on the password reset
page when toggling 2FA methods
Continued work on moderate flags UI.
In this PR admins are allowed to change the order of flags. The notify user flag is always on top but all other flags can be moved.
On Github Actions, system tests which use `Capybara#using_session` are
failing intermittently with the error "Socket::ResolutionError: getaddrinfo: Temporary failure in name resolution"
when `Selenium::WebDriver::Platform.localhost` tries to resolve
`localhost`.
Too much time has been spent trying to figure out why so we are giving
up here and just retrying the resolution of `localhost` on Github
Actions.
This makes it more obvious what's happening, and makes it much less likely that users will send repeated reset emails (and thereby hit the rate limit)
Followup to e97ef7e9af
This commit adds the ability for site administrators to mark users'
passwords as expired. Note that this commit does not add any client side
interface to mark a user's password as expired.
The following changes are introduced in this commit:
1. Adds a `user_passwords` table and `UserPassword` model. While the
`user_passwords` table is currently used to only store expired
passwords, it will be used in the future to store a user's current
password as well.
2. Adds a `UserPasswordExpirer.expire_user_password` method which can
be used from the Rails console to mark a user's password as expired.
3. Updates `SessionsController#create` to check that the user's current
password has not been marked as expired after confirming the
password. If the password is determined to be expired based on the
existence of a `UserPassword` record with the `password_expired_at`
column set, we will not log the user in and will display a password
expired notice. A forgot password email is automatically sent out to
the user as well.
Even when the admin sidebar sections are collapsed, they should expand while filtering. When the filter is removed, sections should go back to the previous state.
In addition, trim whitespace from the filter section.
This commit re-introduces the "Move to Inbox" and "Move to Archive"
bulk topic actions, which we had in the old modal but had not yet added
to the new "experimental" dropdown, which isn't really experimental at
this point.
Once this is merged we can remove the old modal and only
rely on the new dropdown.
Prior to this fix we were opening a modal before closing the `DMenu` modal; given that `DModal` expects only one modal at a time, it was closing the latest modal and instantly closing the one we had just opened.
This commit introduces the following changes which allows a site
administrator to mark `Upload` records with the `s3_file_missing`
verification status which will result in the `Upload` record being ignored when
`Discourse.store.list_missing_uploads` is run on a site where S3 uploads
are enabled and `SiteSetting.enable_s3_inventory` is set to `true`.
1. Introduce `s3_file_missing` to `Upload.verification_statuses`
2. Introduce `Upload.mark_invalid_s3_uploads_as_missing` which updates
`Upload#verification_status` of all `Upload` records from `invalid_etag` to `s3_file_missing`.
3. Introduce `rake uploads:mark_invalid_s3_uploads_as_missing` Rake task
which allows a site administrator to change `Upload` records with
`invalid_etag` verification status to the `s3_file_missing`
verification status.
4. Update `S3Inventory` to ignore `Upload` records with the
`s3_file_missing` verification status.
When uploading a video, the composer will now show a thumbnail image in
the composer preview instead of just the video placeholder image.
If `enable_diffhtml_preview` is enabled the video will be rendered in
the composer preview and is playable.
Introduced back in 2022 in
e3d495850d,
our new more specific message-id format for inbound and
outbound emails has now been in use for a very long time,
we can remove the support for the old formats:
`topic/:topic_id/:post_id.:random@:host`
`topic/:topic_id@:host`
`topic/:topic_id.:random@:host`
We are seeing a weird resolution error on Github actions with the
following backtrace:
```
Failure/Error:
visit File.join(
GlobalSetting.relative_url_root || "",
"/session/#{user.encoded_username}/become.json?redirect=false",
)
Socket::ResolutionError:
getaddrinfo: Temporary failure in name resolution
```
Switch to use `127.0.0.1` instead of forcing a name resolution.
In 958437e7dd we ensured that the email summaries are properly sent based on 'digest_attempted_at' for people who barely/never visit the forum.
This fixed the "frequency" of the email summaries but introduced a bug where the digest would be sent even though there wasn't anything new since for some users.
The logic we use to compute the threshold date for the content to be included in the digest was
```ruby
@since = opts[:since] || user.last_seen_at || user.user_stat&.digest_attempted_at || 1.month.ago
```
It was working as expected for users who have never been seen, but for users who had connected at least once, we would use their "last_seen_at" date as the "threshold date" for the content to be sent in a summary 😬
This fix changes the logic to pick the most recent date amongst `last_seen_at`, `digest_attempted_at` and `1.month.ago` (see the sketch after this list), so it correctly handles cases where
- user has never been seen nor emailed a summary
- user has been seen in a while but has recently been sent a summary
- user has been sent a summary recently but hasn't been seen in a while.
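A sketch of the corrected computation (the exact expression in the codebase may differ slightly):
```ruby
# Pick the most recent of the three candidate dates instead of stopping at
# last_seen_at; variable names mirror the snippet above.
@since =
  opts[:since] ||
    [user.last_seen_at, user.user_stat&.digest_attempted_at, 1.month.ago].compact.max
```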
This commit moves the logic for crawler rate limits out of the application controller and into the request tracker middleware. The reason for this move is to apply rate limits to all crawler requests instead of just the requests that make it to the application controller. Some requests are served early from the middleware stack without reaching the Rails app for performance reasons (e.g. `AnonymousCache`) which results in crawlers getting 200 responses even though they've reached their limits and should be getting 429 responses.
Internal topic: t/128810.
This gives us daily fidelity of topic view stats.
A new table stores a row per topic viewed per day, tracking
anonymous and logged-in views.
We also have a new endpoint `/t/ID/views-stats.json` to get the statistics for the topic.
When selecting some text inside a post, we offer the ability to "fast edit" the selected text without opening the composer.
However, there are certain cases where this isn't working quite as expected, due to the fact that we have some text in the "cooked" version of the post that isn't literally in the "raw" version of the post.
This ensures that whenever someone selects text within
- a quote
- a onebox
- an encrypted message
- a "cooked" date
we directly show the composer instead of showing the fast edit modal and then leaving the user with an invisible error.
Internal ref. t/128400
This commit updates `S3Inventory#files` to ignore S3 inventory files
which have a `last_modified` timestamp that is not at least 2 days
older than the `BackupMetadata.last_restore_date` timestamp.
This check was previously only in `Jobs::EnsureS3UploadsExistence` but
`S3Inventory` can also be used via Rake tasks so this protection needs
to be in `S3Inventory` and not in the scheduled job.
After flags were moved to the database, with each save they are changing the available PostActionTypes. Therefore, flag specs should clear the state before and after each example, not just before.
In addition, we need to clear `nil` counts for dynamically created flags from the serializer.
* FEATURE: add agree and edit
adds agree and edit - an alias for agree and keep -- but with a client action to
edit the post in the composer before the flag is agreed with
---------
Co-authored-by: Juan David Martinez <juan@discourse.org>
We're planning to implement a feature that allows adding required fields for existing users. This PR does some preparatory refactoring to make that possible. There should be no changes to existing behaviour. Just a small update to the admin UI.
This commit updates `Post#each_upload_url` to reject URLs that do not
have a host which matches `Discourse.current_hostname` but follows the
`/uploads/short-url` uploads URL format. This situation most commonly
happens when users copy upload URLs between different Discourse
sites.
For plugins with only an "enabled" site setting, it doesn't
make sense to take them to the site settings page, since the
toggle switch in the list can be used to change enabled/disabled.
This will not be the case for plugins that have their own custom
config page (like Automation), but we will deal with this when
we actually overhaul this plugin to use the new show page.
Also adds another rspec fixture of a test plugin.
This PR introduces a basic AdminNotice model to store these notices. Admin notices are categorized by their source/type (currently only notices from problem checks). They also have a priority.
* DEV: allow reply_by_email, visit_link_to_respond strings to be modified by plugins
* DEV: separate visit_link_to_respond and reply_by_email modifiers out
This PR aims to add bulk actions to the user's bookmarks.
With this feature, all users can select multiple bookmarks and perform the "delete" or "clear reminders" actions in bulk.
Instead of creating two separate Topics when a user (1) requests to join a group and (2) gets accepted in, this makes the acceptance message into a Post under the origin group request Topic.
This fixes `PrettyText.make_all_links_absolute` to better handle subfolder setups.
In subfolder, when given the cooked version of a post, links to mentions includes the `Discourse.base_path` prefix. Adding the `Discourse.base_url` was doubling the `Discourse.base_path`.
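For illustration (the hostname and subfolder here are hypothetical), the doubling looked like this:
```ruby
Discourse.base_path # => "/forum"
Discourse.base_url  # => "https://example.com/forum"

# A cooked mention link already includes the base_path:
link = "/forum/u/alice"

# Naively prepending base_url doubles the subfolder prefix:
Discourse.base_url + link # => "https://example.com/forum/forum/u/alice"
```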
The issue was hidden by the specs, which were stubbing `Discourse.base_url` instead of relying on `Discourse.base_path`.
This fixes the "algorithm" used in `PrettyText.make_all_links_absolute` to better handle this case and corrects the specs to properly handle subfolder cases.
There are lots of changes in the specs due to a refactoring to use squiggly heredoc strings for easier reading and less escaping.
- FIX: properly scope category changes to what the current user can see
- UX: previous category is now highlighted in "red", new category is highlighted in "green"
- PERF: no need to serialize the categories
- FIX: properly track wiki
- FIX: properly track post_type (aka. Staff Color)
- FIX: properly track making a topic a PM
- FIX: never show the category changes when a topic is made a PM
- PERF: post_revision serializer is now leaner (never includes title changes when post_number > 1, never includes user changes if there aren't any)
- UX: always sort the tags by name
Note this may have performance issues in some cases, will need to be monitored
Prior to this change we were bracketing on windows of 50 ids. They may end up
containing zero of the posts we are searching for, leading to posts.rss and .json returning
no results.
- avoids Post.last.id, which is expensive
- orders by id desc, which is better because we bracket on id
(experimental)
The initial implementation of glimmer topic-list and related components. Does not include new APIs and isn't compatible with existing customization. That's gonna come in future PRs.
Enabled by adding groups to `experimental_glimmer_topic_list_groups` setting.
AuthProvider#enabled_setting=, used primarily by plugins, has been deprecated since version 2.9, in favour of Authenticator#enabled?. This PR confirms we are seeing no more usage and removes the method.
In 07ecbb5a3b we ensured the mentions in a group's activity page worked properly but we missed adding proper support for infinite loading.
The client is using the `before` parameter instead of the `before_post_id` to do the pagination.
This adds support for `before` as well as some tests to ensure it doesn't regress.
I also added tests to the group's activity posts as well since those were missing.
Finally I deleted some unused code (`group.messages_for`) which is not used anymore.
Context - https://meta.discourse.org/t/-/308044/9
Whenever one creates, updates, or deletes a post, we should keep the `topic.word_count` counter in sync.
Context - https://meta.discourse.org/t/-/308062
- login with username/password
- login with username/password and 2FA
- login with username/password and backup code
- login with magic link
- login with magic link and 2FA
- login with magic link and backup code
- login when 2FA is required
- reset password
---
- signup and activate account
- signup with invite code
- signup with invite link
- signup and approve account
- signup and auto approve account
- signup with blocked domain
---
- basic login with Facebook
- basic login with Google
- basic login with Github
- basic login with Twitter
- basic login with Discord
- basic login with Linkedin
When converting a PM to a public topic (and vice versa), if there was a validation error (like a topic already used, or a tag required or not allowed) the error message wasn't bubbled up nor shown to the user.
This fix ensures we properly stop the conversion whenever a validation error happens and bubble up the errors back to the user so they can be informed.
Internal ref - t/128795
When "unicode_usernames" is enabled, calling the "user_path" helper with a username containing some non ASCII character will break due to the route constraint we have on username.
This fixes the issue by always encoding the username before passing it to the "user_path" helper.
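A minimal sketch of the idea, using the stdlib's `ERB::Util.url_encode` for illustration (the actual fix may rely on a Discourse-specific helper):
```ruby
require "erb"

# Percent-encode the username so non-ASCII characters satisfy the route
# constraint on :username.
user_path(ERB::Util.url_encode(user.username))
```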
Internal ref - t/127547
This commit updates `FinalDestination#get` to not forward the
`Authorization` header on redirects since most HTTP clients I tested, like
curl and wget, do not forward it.
This also fixes a recent problem in `DiscourseIpInfo.mmdb_download`
where we will fail to download the databases when both `GlobalSetting.maxmind_account_id` and
`GlobalSetting.maxmind_license_key` have been set. The failure is due to
the bug above where the redirected URL given by MaxMind does not accept
an `Authorization` header.
The users directory is updated on a daily cadence. However, when a site is new and doesn't have many users, it can be confusing that a user who has just joined doesn't show up in the directory until a day after they join. To eliminate this confusion, this commit triggers a refresh of the users directory as soon as a user joins, if the site is in bootstrap mode. The reason for the conditional trigger is that refreshing the users directory is an expensive operation and doing it often on a large site with many users could lead to performance problems.
Internal topic: t/126076.
This commit changes request method for "categories/search" from GET to
POST to make sure that long filters can be passed to the server. For
example, category selectors with many categories are setting the full
list of selected category IDs to ensure these are filtered out from the
list of choices. This can result in a long URL that exceeds the maximum
length.
Some sites are still on the legacy "hamburger dropdown"
navigation_menu setting. In this case to avoid confusion,
we want to show both the sidebar icon and the header dropdown
hamburger when visiting the admin portal. Otherwise, the
hamburger switches sides from right to left for admins
and takes on different behaviour.
The hamburger in this case _only_ shows the main panel, not
other sidebar panels like the admin one.
The watched word group's create, update and delete action logs were missing translations. This PR adds those strings and uses the group key instead of the watched word key where needed.
This reverts commit 0f4520867b.
This has led to two problems:
1. An incompatibility with Cloudflare's "auto minify" feature. They've deprecated this feature because of incompatibility with modern JS syntax. But unfortunately it will remain enabled on existing properties until 2024-08-05.
2. Discourse fails to boot in Safari 15. This is strange, because Safari does support all the required features in our production JS bundles. Even more strangely, things start working as soon as you open the developer tools. That suggests the cause could be a Safari bug rather than a simple incompatibility.
Reverting while we work out a path forward on both those issues.
This commit switches `DiscourseIpInfo.mmdb_download` to use the
permalinks supplied by MaxMind to download the MaxMind databases as
specified in
https://dev.maxmind.com/geoip/updating-databases#directly-downloading-databases
which states:
```
To directly download databases, follow these steps:
1. In the "Download Links" column, click "Get Permalink(s)" for the desired database.
2. Copy the permalink(s) provided in the modal window.
3. Provide your account ID and your license key using Basic Authentication to authenticate.
```
Previously we were downloading from `https://download.maxmind.com/app/geoip_download`, but this is not
documented anywhere in MaxMind's docs so this URL can in theory break
in the future without warning. Therefore, we are taking a proactive
approach to download the databases from MaxMind the recommended way
instead of relying on a hidden URL. This old way of downloading the
databases with only a license key will be deprecated in 3.3 and be
removed in 3.4.
In 95a82d608d, we lowered the default for
`Onebox.options.max_download_kb` from 10mb to 2mb for security hardening
purposes. However, this resulted in multiple bug reports where seemingly
normal URLs stopped being oneboxed. It turns out that lowering
`Onebox.options.max_download_kb` resulted in `Onebox::Helpers::DownloadTooLarge` being raised
more often for more URLs in `Onebox::Helpers.fetch_response` which
`Onebox::Helpers.fetch_html_doc` relies on. When
`Onebox::Helpers::DownloadTooLarge` is raised in
`Onebox::Helpers.fetch_response`, we throw away whatever response body
which we have already downloaded at that point. This is not ideal
because Nokogiri can parse incomplete HTML documents and there is a
really high chance that the incomplete HTML document still contains the
information which we need for oneboxing.
Therefore, this commit updates `Onebox::Helpers.fetch_html_doc` to not
throw away the response body when the size of the response body exceeds
`Onebox.options.max_download_size`. Instead, we just take whatever
response which we have and get Nokogiri to parse it.
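The key observation is that Nokogiri tolerates truncated markup; a small self-contained illustration:
```ruby
require "nokogiri"

# Even an HTML document cut off mid-stream parses fine, and the <head>
# metadata that oneboxing needs usually sits near the top anyway.
truncated_html = <<~HTML
  <html><head><title>Example page</title>
  <meta property="og:title" content="Example page">
HTML

doc = Nokogiri::HTML(truncated_html)
doc.at("title").text                           # => "Example page"
doc.at("meta[property='og:title']")["content"] # => "Example page"
```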
This can happen for various reasons including rate limiting and middleware bugs. This should resolve the warning we're seeing in the logs
```
RequestTracker.get_data failed : NoMethodError : undefined method `[]' for nil:NilClass
```
decorator-transforms (https://github.com/ef4/decorator-transforms) is a modern replacement for babel's plugin-proposal-decorators. It provides a decorator implementation using modern browser features, without needing to enable babel's full suite of class feature transformations. This improves the developer experience and performance.
In local testing with Google's 'tachometer' tool, this reduces Discourse's 'init-to-render' time by around 3-4% (230ms -> 222ms).
It reduces our initial gzip'd JS payloads by 3.2% (2.43MB -> 2.35MB), or 7.5% (14.5MB -> 13.4MB) uncompressed.
DropdownMenu component is meant as a way to describe the content of menus.
Syntax:
```
<DropdownMenu as |dm|>
<dm.item class="test">
First
</dm.item>
<dm.divider class="foo" />
<dm.item class="bar">
Second
</dm.item>
</DropdownMenu>
```
This commit updates
`PageObjects::Components::NavigationMenu::Base#click_edit_tags_button`
to wait for `.sidebar-tags-form` to be present before returning. This is
essential because the client side app has to load the tags from the
server when the modal is open. If we don't wait for all the tags to be
loaded, it makes it hard to reason about the state of the UI when
interacting with the modal. In the case of "allows a user to deselect all tags in the modal which will display the site's top tags", which
was flaky, the system test was interacting with the UI while the tags were
still loading.
This keeps coming up in user testing as something
we want to get rid of. The `navigation_menu` setting
has been set to sidebar by default for some time now,
and we are rolling out admin sidebar widely. It just
doesn't make sense to let people turn this off in
the first step of the wizard -- we _want_ people to
use the sidebar.
Menus and tooltips are now appended to their own portals. The services are now solely responsible for managing the instances; prior to this commit, services could manage one instance, but the DMenu and DTooltip components could also take over, which could cause unexpected states.
This change also allows nested menus/tooltips.
Other notable changes:
- a few months ago core copied the CloseOnClickOutside modifier from float-kit without removing the float-kit one; this commit now only uses the core one.
- the close function is now truly async
- the close function accepts an instance or an identifier as a parameter
Whenever a post already failed "lightweight" validations, we skip all the expensive validations (those that cook the post or run SQL queries) so that we reply as soon as possible.
Also skip validating polls when there's no "[/poll]" in the raw.
Internal ref - t/115890
This spec has been failing forever on my machine. I guess I have a "better" version of pngquant?
---------
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
If there's ever a circular reference in categories, don't go into an infinite loop when generating the category slug.
Instead, keep track of parent ids and bail out as soon as we encounter one more than once.
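A hedged sketch of the guard (not the exact Discourse implementation):
```ruby
require "set"

# Walk up the parent chain, remembering the ids we've already visited, and
# bail out as soon as one repeats so a circular reference can't loop forever.
def slug_path(category)
  seen = Set.new
  slugs = []
  while category && !seen.include?(category.id)
    seen << category.id
    slugs.unshift(category.slug)
    category = category.parent_category
  end
  slugs.join("/")
end
```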
In #22851 we added a dependent strategy for deleting upload references when a draft is destroyed. This, however, didn't catch all cases, because we still have some code that issues DELETE drafts queries directly to the database. Specifically in the weekly cleanup job handled by Draft#cleanup!.
This PR fixes that by turning the raw query into an ActiveRecord #destroy_all, which will invoke the dependent strategy that ultimately deletes the upload references. It also includes a post migration to clear orphaned upload references that are already in the database.
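A hedged illustration of the change (the retention condition is made up):
```ruby
# Before: a raw DELETE bypasses ActiveRecord callbacks and dependent strategies.
# DB.exec("DELETE FROM drafts WHERE updated_at < ?", 6.months.ago)

# After: destroy_all instantiates each Draft, so the `dependent: :destroy`
# strategy on its upload references runs and cleans them up too.
Draft.where("updated_at < ?", 6.months.ago).destroy_all
```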
In this PR we started redirecting to the guide page after the wizard - https://github.com/discourse/discourse/pull/26696
The guide will require a rebrand, and until it is ready, we should redirect to `/latest`.
This commit introduces the `run_theme_migration` spec helper to allow
theme developers to write RSpec tests for theme migrations. For example,
this allows the following RSpec test to be written in themes:
```
RSpec.describe "0003-migrate-small-links-setting migration" do
let!(:theme) { upload_theme_component }
it "should set target property to `_blank` if previous target component is not valid or empty" do
theme.theme_settings.create!(
name: "small_links",
theme: theme,
data_type: ThemeSetting.types[:string],
value: "some text, #|some text 2, #, invalid target",
)
run_theme_migration(theme, "0003-migrate-small-links-setting")
expect(theme.settings[:small_links].value).to eq(
[
{ "text" => "some text", "url" => "#", "target" => "_blank" },
{ "text" => "some text 2", "url" => "#", "target" => "_blank" },
],
)
end
end
```
This change is being introduced because we realised that writing just
javascript tests for the migrations is insufficient, since javascript
tests do not ensure that the migrated theme settings can actually be
successfully saved into the database. Hence, we are introducing this
helper as a way for theme developers to write "end-to-end" migration
tests.
* Simplify config nav link generation to always inject the Settings
tab
* Auto-redirect to the first non-settings config link (if there is one)
when the user lands on /admin/plugins/:plugin_id
* Add `extras` to admin plugin serializer so plugins can add more
data on first load
* Add PikadayCalendar page object for system specs, extracted from the
CalendarDateTimePicker to make it more generic.