This commit adds a `MiniSchedulerLongRunningJobLogger` class which will
poll every 60 seconds for mini_scheduler jobs which are stuck. When it
detects that a job is stuck, it will log a warning message with the
current backtrace of the thread that is executing the job.
Note that for scheduled jobs executed at a frequency of less than 30
minutes, we will log once the job has been executing for 30 minutes.
For scheduled jobs executed at a frequency between 30 minutes and 2
hours, we will log once the job has been executing for longer than its
specified frequency.
For scheduled jobs executed at a frequency greater than 2 hours, we will
log once the job has been executing for more than 2 hours.
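A minimal sketch of that threshold rule (the helper and constants below are illustrative, not the actual implementation):
```ruby
# Illustrative sketch of the stuck-job threshold described above.
MIN_THRESHOLD = 30 * 60      # 30 minutes, in seconds
MAX_THRESHOLD = 2 * 60 * 60  # 2 hours, in seconds

# `frequency` is the job's scheduled interval in seconds.
def stuck_threshold(frequency)
  frequency.clamp(MIN_THRESHOLD, MAX_THRESHOLD)
end

stuck_threshold(5 * 60)       # => 1800 (log after 30 minutes)
stuck_threshold(60 * 60)      # => 3600 (log after the job's own frequency)
stuck_threshold(6 * 60 * 60)  # => 7200 (log after 2 hours)
```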
This commit continues the work laid out in 6039b513fe to redesign the /about page. In this commit, we add the site age and a section on the right-hand side showing site activities/statistics such as topics, posts, sign-ups, likes, etc.
Admins can create up to 50 custom flags; the limit exists for performance reasons.
When the limit is reached, the "Add" button is disabled and the backend is protected by a guardian check.
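A rough sketch of how such a cap could be enforced on the model (the class, scope, and validation below are hypothetical):
```ruby
# Hypothetical sketch: enforce the custom flag cap at the model level.
class Flag < ActiveRecord::Base
  MAX_CUSTOM_FLAGS = 50

  validate :within_custom_flag_limit, on: :create

  private

  def within_custom_flag_limit
    # `system: false` is a hypothetical way to scope to custom flags.
    if Flag.where(system: false).count >= MAX_CUSTOM_FLAGS
      errors.add(:base, "cannot have more than #{MAX_CUSTOM_FLAGS} custom flags")
    end
  end
end
```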
This commit patches `Net::HTTP` to reduce its default timeouts of 60
seconds while we are processing a request. Certain routes in Discourse
make external requests, and if proper timeouts are not set, we risk the
Unicorn master process force-restarting the Unicorn workers once the
30-second timeout is reached. This can potentially become a vector for
DoS attacks, and this commit is aimed at reducing that risk.
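A minimal sketch of what such a patch can look like (the timeout values and module name are illustrative, not the actual patch):
```ruby
require "net/http"

# Illustrative sketch: lower Net::HTTP's 60-second defaults.
module ShorterNetHttpTimeouts
  def initialize(*args)
    super
    self.open_timeout = 10   # default is 60 seconds
    self.read_timeout = 10   # default is 60 seconds
    self.write_timeout = 10  # default is 60 seconds
  end
end

Net::HTTP.prepend(ShorterNetHttpTimeouts)
```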
The Safari 15 bugfix has been rolled into @babel/preset-env in the most recent version, so we no longer need to carry our vendored copy.
This commit updates @babel/preset-env, runs `npx yarn-deduplicate yarn.lock`, and removes the vendored transform.
This commit also refactors our theme transpiler to use @babel/preset-env, with the same list of target browsers as our ember-cli build uses. This means we no longer need to maintain a separate list of babel transforms for themes.
We were writing theme-transpiler JS files to the filesystem on a per-process basis, and then immediately reading them back in. Plus, there was no cleanup mechanism, so the tmp directory would grow indefinitely.
This commit refactors things so that the `build.js` script outputs the theme-transpiler source to stdout. That way, we can read it directly into the process, and then into mini-racer, without needing to go via the filesystem. No cleanup required!
In production, the theme-transpiler is still cached in a file during `assets:precompile`.
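Conceptually, the new flow is something like this (the script path is illustrative):
```ruby
require "open3"
require "mini_racer"

# Illustrative sketch: build the theme-transpiler and load the emitted
# source straight into mini_racer, without any intermediate file.
source, status = Open3.capture2("node", "build.js")
raise "theme-transpiler build failed" if !status.success?

context = MiniRacer::Context.new
context.eval(source)
```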
When `SiteSetting.review_every_post` is true and the category requires topic approval (`require_topic_approval`), the system creates two reviewable items:
1. First, because the category needs approval, a `ReviewableQueuedPost` record is created; at this stage, no topic is created.
2. An admin approves the review. The topic and first post are created.
3. Because `review_every_post` is true, the `queue_for_review_if_possible` callback is evaluated and a `ReviewablePost` is created.
4. Then the `ReviewableQueuedPost` is linked to the newly created topic and post.
Initially, we considered hooking into these guards:
```ruby
def self.queue_for_review_if_possible(post, created_or_edited_by)
  return unless SiteSetting.review_every_post
  return if post.post_type != Post.types[:regular] || post.topic.private_message?
  return if Reviewable.pending.where(target: post).exists?
  ...
```
And add something like:
```ruby
return if Reviewable.approved.where(target: post).exists?
```
However, because the callback happens at step 3, before the `ReviewableQueuedPost` is linked to the `Topic`, this was not possible.
Therefore, when a `ReviewableQueuedPost` creates a `Topic`, a new option called `:reviewed_queued_post` is passed to `PostCreator` to avoid creating a second `Reviewable`.
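Schematically, the approach looks like this (names simplified; not the exact code):
```ruby
# Illustrative sketch: when a ReviewableQueuedPost is approved, the new
# option is passed through to PostCreator...
PostCreator.new(user, raw: raw, topic_id: topic_id, reviewed_queued_post: true).create

# ...which then skips creating a second reviewable for that post:
def queue_for_review_if_possible
  return if @opts[:reviewed_queued_post] # already reviewed as a queued post
  ReviewablePost.queue_for_review_if_possible(@post, @user)
end
```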
We have been seeing `Zlib::BufError` when running the `assets:precompile` rake
task.
```
I, [2024-07-30T05:19:58.807019 #1059] INFO -- : Writing /var/www/discourse/public/assets/scripts/discourse-test-listen-boot-9b14a0fc65c689577e6a428dcfd680205516fe211700a71c7adb5cbcf4df2cc5.js
rake aborted!
Zlib::BufError: buffer error (Zlib::BufError)
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache/file_store.rb:100:in `<<'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache/file_store.rb:100:in `set'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache.rb:212:in `set'
```
The hypothesis here is that a thread-safety issue is causing the
problem, since we download the Maxmind databases in a thread and run
decompression operations once the gzip file is downloaded.
In the near term, we plan to move downloading of Maxmind databases out
of the Rake task into a scheduled job so this patch should be considered
a temporary solution.
The trade-off here is that build time will increase slightly, since we
are no longer downloading the Maxmind databases while precompiling
assets at the same time.
When using `Discourse.cache.fetch` with an expiry, there's a potential for a race condition due to how we read the data from redis.
The code used to be
```ruby
raw = redis.get(key) if !force
entry = read_entry(key) if raw
return entry if raw && !(entry == :__corrupt_cache__)
```
with `read_entry` defined as follows:
```ruby
def read_entry(key)
  if data = redis.get(key)
    Marshal.load(data)
  end
rescue => e
  :__corrupt_cache__
end
```
If the value at "key" expired in redis between `raw = redis.get` and `entry = read_entry`, the `entry` variable would be `nil` despite `raw` having a value.
We would then proceed to return `entry` (which is `nil`) thinking it had a value, when it didn't.
The first `redis.get` can be skipped altogether, and we can rely only on `read_entry` to read the data from redis, thus avoiding the race condition and removing the double read operation.
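With the fix, the read path boils down to a single read, roughly:
```ruby
# Simplified sketch of the fixed read path: one redis read only.
if !force
  entry = read_entry(key)
  return entry if entry && entry != :__corrupt_cache__
end
```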
Internal ref - t/132507
* SECURITY: Update default allowed iframes list
Change the default iframe URL list so that every entry includes 3 slashes.
* SECURITY: limit group tag's name length
Limit the size of a group tag's name to 100 characters.
Internal ref - t/130059
* SECURITY: Improve sanitization of SVGs in Onebox
---------
Co-authored-by: Blake Erickson <o.blakeerickson@gmail.com>
Co-authored-by: Régis Hanol <regis@hanol.fr>
Co-authored-by: David Taylor <david@taylorhq.com>
Since switching to Maxmind permalinks for downloading the databases in
7079698cdf, we have received multiple
reports of rebuilds failing: `maxminddb:refresh` runs during the
rebuild, and a failure to download the databases causes the rebuild to
fail.
Downloading Maxmind databases should not sit in the critical rebuild
path, but since we are close to the Discourse 3.3 release, we have opted
to simply rescue all errors encountered when downloading the databases.
In the near future, after the Discourse 3.3 release, we will look at
moving the downloading of Maxmind databases out of the rebuild path.
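In spirit, the stop-gap amounts to something like this (the task body is simplified and the download call is illustrative):
```ruby
# Simplified sketch: swallow download failures so rebuilds keep going.
task "maxminddb:refresh" => :environment do
  begin
    DiscourseIpInfo.mmdb_download("GeoLite2-City") # illustrative call
  rescue => e
    STDERR.puts "Failed to download Maxmind database: #{e.class} #{e.message}"
  end
end
```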
We have a dedicated admin page (`/admin/customize/email_templates`) that lets admins customize all emails that Discourse sends to users. The way this page works is that it lists all translation strings used for emails, and that list is currently hardcoded and hasn't been updated in years. We've added a number of new emails that Discourse sends, so we should add those templates to the list to let admins easily customize them.
Meta topic: https://meta.discourse.org/t/3-2-x-still-ignores-some-custom-email-templates/308203.
* FIX: Ensure JsLocaleHelper only outputs up-to-date translations
The old implementation forgot to filter out deprecated translations,
causing them to incorrectly override the new locale strings on the
frontend.
This commit adds the missing where clause, filtering down to only the
up-to-date translations.
Related meta topic: https://meta.discourse.org/t/outdated-translation-replacement-causing-missing-translation/314352
This patch fixes the `i18n:check` rake task which has been broken by
the `MessageFormat` upgrade.
It also adds a spec to ensure we generate valid MF code for all our
available locales.
We can get translations with invalid plural keys from Crowdin
or from custom overrides. Currently, this raises an error and the
locales won't be output at all.
This patch addresses the issue by using the new `strict: false` option
of our `messageformat-wrapper` gem, allowing locales to be generated
even if invalid plural keys are present.
* FIX: Add post id to the anchor to prevent two identical anchors
We generate anchors for headings in posts. This works fine if there is
only one post in a topic with anchors. The problem comes when you have
two or more posts with the same heading. PrettyText generates anchors
based on the heading text using the raw context of each post, so it is
entirely possible to generate the same anchor for two posts in the same
topic, especially for topics with template replies:
Post1:
# heading
context
Post2:
# heading
context
When both posts are on the page at the same time, the anchor will only
work for the first post, according to the [HTML specification](https://html.spec.whatwg.org/multipage/browsing-the-web.html#scroll-to-the-fragment-identifier).
> If there is an a element in the document tree whose root is document
> that has a name attribute whose value is equal to fragment, then
> return the *first* such element in tree order.
This bug is particularly serious in forums with non-Latin languages,
such as Chinese. We do not generate slugs for Chinese, which results in
the heading anchors being completely dependent on their order.
```ruby
[2] pry(main)> PrettyText.cook("# 中文")
=> "<h1><a name=\"h-1\" class=\"anchor\" href=\"#h-1\"></a>中文</h1>"
```
Therefore, the anchors in the two posts will be exactly the same, in
exactly the same order, causing almost all of the anchors in the second
post to be invalid.
This commit solves this problem by adding the `post_id` to the anchor.
The new anchor generation method adds `p-{post_id}` as a prefix when
`post_id` is available:
```ruby
[3] pry(main)> PrettyText.cook("# 中文", post_id: 1234)
=> "<h1><a name=\"p-1234-h-1\" class=\"anchor\" href=\"#p-1234-h-1\"></a>中文</h1>"
```
This way we can ensure that each anchor name appears only once in the
same topic. Using the post id also prevents potential anchor-name
collisions when splitting or merging topics.
Previously in these 2 PRs, we introduced a new site setting `SiteSetting.enforce_second_factor_on_external_auth`.
https://github.com/discourse/discourse/pull/27547
https://github.com/discourse/discourse/pull/27674
When disabled, it should enforce 2FA for local logins with username and password, and skip the requirement when authenticating with OAuth2.
We previously stored information about the login method in a secure session, but that is not reliable. Therefore, the information about the login method is moved to the database.
Previously, we couldn't change the user agent name dynamically for onebox requests. In this commit, a new hidden site setting `onebox_user_agent` is created to override the default user agent value specified in the [initializer](c333e9d6e6/config/initializers/100-onebox_options.rb (L15)).
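For example, the request options can now fall back along these lines (a sketch; the default string and option plumbing are simplified):
```ruby
# Sketch: prefer the hidden site setting when it is set, otherwise keep
# the default user agent from the initializer (default string made up).
user_agent =
  SiteSetting.onebox_user_agent.presence || "Discourse Forum Onebox"
```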
Co-authored-by: Régis Hanol <regis@hanol.fr>
* FEATURE: Clean up previously logged information after permanently deleting posts
When soft deleting a topic or post, we log some details in the
staff log, including the raw content of the post. Before this commit, we
did not clear the information in these records. Therefore, after
permanently deleting a post, `UserHistory` still retained a copy of the
permanently deleted post. This is unexpected behaviour and may raise
some potential legal issues.
This commit changes the behaviour so that when a post is permanently
deleted, the details column of the `UserHistory` rows associated with
the post is overwritten with "(permanently deleted)". At the same time,
a new `action_id` is introduced for permanent deletion to distinguish it
from soft deletion.
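The cleanup itself amounts to something like this (a sketch; the scope is approximated):
```ruby
# Sketch: overwrite the logged details for the permanently deleted post.
UserHistory
  .where(post_id: post.id, action: UserHistory.actions[:delete_post])
  .update_all(details: "(permanently deleted)")
```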
Related meta topic: https://meta.discourse.org/t/introduce-a-way-to-also-permanently-delete-the-sensitive-info-from-the-staff-logs/292546
This ensures that elasticsearch doesn't parse it as an object. There
are too many combinations of job opts, so we don't want elasticsearch to
parse and index this field as an object.
This improves the `TextSentinel` so that we don't consider CJK text as being uppercase and thus failing the validator.
It also optimizes the entropy computation by using native ruby `.bytes` to get all the bytes from the text.
It also tweaks the `seems_pronounceable?` and `seems_unpretentious?` checks to use the `\p{Alnum}` unicode regexp group to account for non-latin languages.
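A few examples of what these checks look like in plain Ruby (illustrative, not the exact TextSentinel code):
```ruby
text = "这是一段中文"

# CJK characters have no case, so they match neither \p{Upper} nor
# \p{Lower} and should not be treated as an all-caps body:
text.match?(/\p{Upper}/) # => false

# \p{Alnum} matches letters and digits in any script, unlike [a-z0-9]:
text.match?(/\p{Alnum}/) # => true

# Distinct-byte count as a crude entropy measure:
text.bytes.uniq.size
```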
Reference - https://meta.discourse.org/t/body-seems-unclear-error-when-users-are-typing-in-chinese/88715
Inspired by https://github.com/discourse/discourse/pull/27900
Co-authored-by: Paulo Magalhaes <mentalstring@gmail.com>
We are investigating reports of errors with messageformat compilation following 301713ef. This commit does not fix the issue, but it introduces some basic error handling to avoid completely breaking affected sites.
We will have a fix for the root cause soon.
When tags contain an underscore, we should allow filtering on them in the same way; previously, due to the regex, tags with underscores were not found when filtering.
This commit ensures that we reset the `missing_s3_uploads` status count
if there are no inventory files which are at least 2 days older than the
site's restored date.
Otherwise, a site with missing uploads that was subsequently restored
would continue to report missing uploads for 2 days.
Followup 560e8aff75
The linked commit allowed oneboxing private GitHub PRs,
issues, commits, and so on, but it didn't actually allow
oneboxing the root repo, e.g. https://github.com/discourse/discourse-reactions
We didn't have an engine for this; we were relying on OpenGraph
tags in the HTML rendering of the page, like we do with other
oneboxes.
To fix this, we needed a new GitHub engine specifically for repos.
Also, this commit adds a `data-github-private-repo` attribute to
PR, issue, and repo onebox HTML so we have an indicator of
whether the repo was private, which can be used for theme components
and so on.
Our old group SMTP SSL option was a checkbox,
but this was not ideal because there are actually
3 different ways SSL can be used when sending
SMTP:
* None
* SSL/TLS
* STARTTLS
We previously got around this with specific overrides
for Gmail, but that's not flexible enough now that people
want to use other providers. It's best to be explicit,
even though it is a technical detail. We provide a way
to test the SMTP settings before saving them, so there
should be little chance of messing this up.
This commit also converts GroupEmailSettings to a glimmer
component.
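For reference, the three modes map onto Net::SMTP roughly like this (a sketch, not the actual Discourse code):
```ruby
require "net/smtp"

# Sketch: how the three SSL modes translate to Net::SMTP (simplified).
def configure_ssl(smtp, ssl_mode)
  case ssl_mode
  when :none     then smtp.disable_starttls
  when :ssl_tls  then smtp.enable_tls       # implicit TLS, usually port 465
  when :starttls then smtp.enable_starttls  # upgrade plaintext via STARTTLS
  end
end

configure_ssl(Net::SMTP.new("smtp.example.com", 587), :starttls)
```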
Allow admins to create custom flags that require an additional message.
I decided to rename the old `custom_flag` to `require_message`, as it is more descriptive.
# Context
Currently there is no way to add a custom filter to the experimental `/filter` endpoint. While you can implement a custom `status:`, there is no way to include the user's input in a custom query.
# PR
This PR adds the ability to implement a custom filter, e.g. `CUSTOM_FILTER:foo`:
- Add `add_filter_custom_filter` for extension (see the sketch after this list)
- Add specs
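A sketch of what registering such a filter from a plugin could look like (the filter name, block signature, and query are assumptions):
```ruby
# plugin.rb — hypothetical example; the block signature is an assumption.
add_filter_custom_filter("min_views") do |scope, values|
  scope.where("topics.views >= ?", values.first.to_i)
end
```
With this, `/filter?q=min_views:100` would restrict the list to topics with at least 100 views.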
Followup 7b627dc14b
In this other commit, I changed the email settings validator
to always use the `login` authentication method for
office365 and outlook, but I didn't change the actual
group SMTP mailer to do this.
This commit fixes that issue and does some minor refactoring.
This commit updates `TopicQuery.validators` to cover all of the
public options listed in `TopicQuery.public_valid_options`. This is done
to fix the app returning a 500 response code when an invalid value, such
as a hash, is passed as a query param when accessing the various topic
list routes.
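The validators are essentially per-option type checks, along these lines (an illustration, not the actual table):
```ruby
# Illustration: per-option validators reject non-scalar values (such as
# hashes) before they reach the query builder.
integer = ->(value) { Integer === value || value.to_s.match?(/\A\d+\z/) }
string  = ->(value) { String === value }

VALIDATORS = { "page" => integer, "q" => string }

def valid_option?(key, value)
  validator = VALIDATORS[key]
  validator.nil? || validator.call(value)
end

valid_option?("page", { "evil" => "hash" }) # => false
```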