`ember exam` points all browsers at a single ember-cli server process. With very high parallelism, we've started seeing `ChunkLoadError` exceptions, which may indicate that the server is being overloaded. This caps the browser count at 8 to avoid that.
Followup 76c56c8284
The change introduced in the commit above made it so
expired bookmark reminders were cleared when using the bulk
action menu for bookmarks. However, this also affected
clearing reminders for bookmarks when sending notifications.
When clearing bookmark reminders after sending notifications,
we take the auto delete preference into account:
* never - The bookmark `reminder_at` date should not be cleared,
and the bookmark is kept.
* clear_reminder - The bookmark `reminder_at` date is cleared and
the bookmark is kept.
The `never` option made it so "expired" bookmark reminders showed
on the user's bookmark list.
This commit fixes that regression and only forces clearing of
`reminder_at` when using the bookmark bulk action service.
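A rough sketch of the intended behavior (the method and option names
here are illustrative, not necessarily the real ones):
```ruby
# Only the bulk action service passes force_clear_reminder: true; the
# notification path respects the user's auto delete preference.
def clear_reminder!(bookmark, force_clear_reminder: false)
  if force_clear_reminder ||
       bookmark.auto_delete_preference == Bookmark.auto_delete_preferences[:clear_reminder]
    bookmark.update!(reminder_at: nil)
  end
end
```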
This commit improves `TopicsController#show` to not load suggested and
related topics unless the last page of the topic is being viewed.
Previously, we avoided loading suggested and related topics by the use
of conditionals in the `TopicViewSerializer` to avoid calling
`TopicView#suggested_topics` and `TopicView#related_topics`. However,
this pattern is not reliable as the methods can still be called from
other spots in the code base. Instead, we ensure that
`TopicView#include_suggested` and `TopicView#include_related` are set
correctly on the instance of `TopicView` which ensures that for the
given instance, `TopicView#suggested_topics` and
`TopicView#related_topics` will be a noop.
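For illustration, the guard looks something like this (a sketch, not
the exact code):
```ruby
class TopicView
  def suggested_topics
    # noop unless explicitly enabled for this instance
    return nil if !@include_suggested
    @suggested_topics ||= TopicQuery.new(@user).list_suggested_for(topic)
  end
end
```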
What did this fix?
===============
Previously, we only triggered this event in the `user.logged_out` method.
This resulted in the event being triggered only when the user was logged
out by an administrator or when the site had strict logout mode enabled.
This bug affected customers who managed user status via webhooks.
meta topic: https://meta.discourse.org/t/user-log-out-event-not-triggered-in-webhooks/249464
Previously, we were silently failing when a theme install hit SSRF protection or when the `git clone` command failed for some reason. This commit updates these failures to raise exceptions, so they provide more useful error messages.
This commit implements 2 new metrics/stats on the /about page for the _estimated_ numbers of unique visitors from the EU and the rest of the world. This new feature is currently off by default, but it can be enabled by turning on the hidden `display_eu_visitor_stats` site setting via the rails console.
There are a number of assumptions that we're making here in order to estimate the number of unique visitors, specifically:
1. we're assuming that the average number of page views per anonymous visitor is similar to the average number of page views that a logged-in visitor makes, and
2. we're assuming that the ratio of logged in visitors from the EU is similar to the ratio of anonymous visitors from the EU
Discourse keeps track of the number of both logged-in and anonymous page views, and also the number of unique logged-in visitors and where they're from. So with those numbers and the assumptions above, we can estimate the number of unique anonymous visitors from the EU and the rest of the world.
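In other words, the estimate boils down to arithmetic along these lines (a sketch with illustrative numbers, not real data):
```ruby
member_pageviews  = 90_000.0 # logged-in page views in the period
unique_members    = 3_000    # unique logged-in visitors
eu_unique_members = 900      # unique logged-in visitors from the EU
anon_pageviews    = 120_000.0

pageviews_per_visitor = member_pageviews / unique_members       # assumption 1
est_unique_anons      = anon_pageviews / pageviews_per_visitor  # => 4000.0
eu_ratio              = eu_unique_members.to_f / unique_members # assumption 2
est_unique_anons_eu   = est_unique_anons * eu_ratio             # => 1200.0
est_unique_anons_rest = est_unique_anons - est_unique_anons_eu  # => 2800.0
```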
Internal topic: t/128480.
This patch allows using an AR relation as a model in services without
fetching associated records. The relation is only checked for
emptiness; if it is empty, the execution will stop at that point, as
expected.
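For example, a service might look like this (a hypothetical sketch,
assuming the usual `model` step API):
```ruby
class DeactivateStaleUsers
  include Service::Base

  model :users, :fetch_users

  private

  # Returning a relation no longer loads the records; the service only
  # checks emptiness and stops early when nothing matches.
  def fetch_users
    User.real.where("last_seen_at < ?", 2.years.ago)
  end
end
```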
This commit introduces a new frontend API to add custom items to the "Site activity" section in the new /about page. The new API is called `addAboutPageActivity` and it works alongside the `register_stat` server-side API which serializes the data that the frontend API consumes. More details of how the two APIs work together are in the JSDoc comment above the API function definition.
Internal topic: t/128545/9.
This commit fixes two codepaths which were incorrectly handling capitalized usernames because we were mixing `username_lower` with the original-case username.
Also adds two specs for these cases.
This patch removes two freedom patches:
- `mail_disable_starttls.rb`: this has been fixed in the 2.8 release of
the mail gem, so we don’t need it anymore.
- `rails4.rb`: those methods have been deprecated for a while now and
should have been dropped with Discourse v3.2.
In development mode and when a developer's email is configured as part
of `Rails.configuration.developer_emails`, the user can be trusted and
should not be required to be an admin user.
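Roughly, the condition is (a sketch, not the exact call site):
```ruby
def trusted_developer?(user)
  Rails.env.development? &&
    Rails.configuration.developer_emails.include?(user.email)
end
```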
* System user attachment size WIP
* spec check
* controller update
* add max to system_user_max_attachment_size_kb
* DEV: update to use static method for `max_attachment_size_for_user`
add test to use large image.
add check for failure.
* DEV: update `system_user_max_attachment_size_kb` default value to 0
remove unnecessary test.
update tests to reflect the new default value of `system_user_max_attachment_size_kb`
* DEV: update maximum_file_size to check when an attachment is made by a system user
Add tests for when `system_user_max_attachment_size_kb` is over and under the limit
Add test for checking interaction with `max_attachment_size_kb`
* DEV: move `max_attachment_size_for_user` to private methods
* DEV: turn `max_attachment_size_for_user` into a static method
* DEV: typo in test case
* DEV: move max_attachment_size_for_user to private class method
* Revert "DEV: move max_attachment_size_for_user to private class method"
This reverts commit 5d5ae0b715.
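Taken together, the list above implies a limit selection roughly like
this (a sketch; the actual method body isn't shown here):
```ruby
# System users get their own cap (the default of 0 disables it), and
# the effective limit never drops below max_attachment_size_kb.
def self.max_attachment_size_for_user(user)
  if user.id == Discourse::SYSTEM_USER_ID &&
       SiteSetting.system_user_max_attachment_size_kb > 0
    [
      SiteSetting.system_user_max_attachment_size_kb,
      SiteSetting.max_attachment_size_kb,
    ].max
  else
    SiteSetting.max_attachment_size_kb
  end
end
```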
---------
Co-authored-by: Gabriel Grubba <gabriel@discourse.org>
* DEV: Removal of create_post_for_category_and_tag_changes setting
reverting commit: #65f35e1
and adding a migration to remove the setting
ref: t/132320
* DEV: change checks for zeros to check for nils
* DEV: remove create_post_for_category_and_tag_changes migration file
If anything goes wrong, we can always revert to the previous state.
This commit updates `Demon::Base#start` to call `Discourse.before_fork`
before forking. According to the docs in `mini_racer`, we need to
"Dispose manually of all MiniRacer::Context objects prior to forking".
This commit is motivated by a segmentation fault which we are seeing in
production when killing a daemon process. Backtrace of the core dump
includes traces of `mini_racer` so we think this is the cause. Note that
we are not 100% sure if this will fix the issue.
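The change amounts to something like this (a simplified sketch of
`Demon::Base#start`):
```ruby
def start
  return if @pid
  # Per the mini_racer docs: dispose of all MiniRacer::Context objects
  # prior to forking. Discourse.before_fork takes care of that.
  Discourse.before_fork
  @pid = fork { run }
end
```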
### Why?
Before, all flags were static, so they were stored in class variables and serialized by SiteSerializer. Recently, we added an option for admins to add their own flags or disable existing flags, which meant the class variable had to be dropped because it was unsafe for a multisite environment. However, this started causing performance problems.
### Solution
When the new Flag system is used, instead of using PostActionType, we can serialize Flags and use a fragment cache for performance reasons.
At the same time, we still support the deprecated `replace_flags` API call. When it is used, we fall back to the old solution and the admin cannot add custom flags. In a couple of months, we will be able to drop that API function and clean up that code properly. However, because it may still be used, a Redis cache was introduced to improve performance.
To test backward compatibility, you can add this code to any plugin:
```ruby
replace_flags do |flag_settings|
flag_settings.add(
4,
:inappropriate,
topic_type: true,
notify_type: true,
auto_action_type: true,
)
flag_settings.add(1001, :trolling, topic_type: true, notify_type: true, auto_action_type: true)
end
```
Adds a new statistic (hidden from the UI, but available via the API) that tracks daily participating users.
A user is considered "participating" if they have:
- Reacted to a post
- Replied to a topic
- Created a new topic
- Created a new PM
- Sent a chat message
- Reacted to a chat message
Internal ref - t/131013
This commit adds a `MiniSchedulerLongRunningJobLogger` class which will
poll every 60 seconds for mini_scheduler jobs which are stuck. When it
detects that a job is stuck, it will log a warning message with the
current backtrace of the thread that is executing the job.
Note that for scheduled jobs executed at a frequency of less than 30
minutes, we will log when the job has been executing for 30 minutes.
For scheduled jobs executed at a frequency between 30 minutes and 2
hours, we will log when the job has been executing for a duration
greater than its specified frequency.
For scheduled jobs executed at a frequency of more than 2 hours, we will
log once the job has been executing for more than 2 hours.
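Put differently, the warning threshold for a job with a given frequency
is (a sketch using ActiveSupport durations):
```ruby
def warning_threshold(frequency)
  if frequency < 30.minutes
    30.minutes # floor: never warn before 30 minutes
  elsif frequency <= 2.hours
    frequency  # warn once the job overruns its own schedule
  else
    2.hours    # cap: warn after 2 hours at most
  end
end
```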
This commit continues the work laid out in 6039b513fe to redesign the /about page. In this commit, we add the site age and a section on the right-hand side to show site activities/statistics such as topics, posts, sign-ups, likes etc.
Admins can create up to 50 custom flags. The limit exists for performance reasons.
When the limit is reached, the "Add" button is disabled and the backend is protected by a guardian check.
This commit patches `Net::HTTP` to reduce the default timeouts of 60
seconds when we are processing a request. There are certain routes in
Discourse which make external requests and if the proper timeouts are
not set, we risk having the Unicorn master process force-restart the
Unicorn workers once the 30-second timeout is reached. This can
potentially become a vector for DoS attacks and this commit is aimed at
reducing the risk here.
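The general shape of such a patch (the timeout values here are
illustrative, not the ones Discourse ships):
```ruby
require "net/http"

module ShorterNetHTTPTimeouts
  def initialize(*args)
    super
    self.open_timeout = 10  # default is 60
    self.read_timeout = 10  # default is 60
    self.write_timeout = 10 # default is 60
  end
end

Net::HTTP.prepend(ShorterNetHTTPTimeouts)
```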
The Safari 15 bugfix has been rolled into @babel/preset-env in the most recent version, so we no longer need to carry our vendored copy.
This commit updates @babel/preset-env, runs `npx yarn-deduplicate yarn.lock`, and removes the vendored transform.
This commit also refactors our theme transpiler to use @babel/preset-env, with the same list of target browsers as our ember-cli build uses. This means we no longer need to maintain a separate list of babel transforms for themes.
We were writing theme-transpiler JS files to the filesystem on a per-process basis, and then immediately reading them back in. Plus, there was no cleanup mechanism, so the tmp directory would grow indefinitely.
This commit refactors things so that the `build.js` script outputs the theme-transpiler source to stdout. That way, we can read it directly into the process, and then into `mini_racer`, without needing to go via the filesystem. No cleanup required!
In production, the theme-transpiler is still cached in a file during `assets:precompile`.
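Conceptually, the new flow looks like this (a simplified sketch;
`build.js` is the script named above):
```ruby
# Read the theme-transpiler bundle straight from the build script's
# stdout into a MiniRacer context - no temp files involved.
source = Discourse::Utils.execute_command("node", "build.js")
context = MiniRacer::Context.new
context.eval(source)
```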
When `SiteSetting.review_every_post` is true and the category has `require_topic_approval` enabled, the system creates two reviewable items.
1. First, because the category needs approval, a `ReviewableQueuedPost` record is created - at this stage, no topic is created.
2. An admin approves the review. The topic and first post are created.
3. Because `review_every_post` is true, the `queue_for_review_if_possible` callback is evaluated and a `ReviewablePost` is created.
4. Then the `ReviewableQueuedPost` is linked to the newly generated topic and post.
At the beginning, we were thinking about hooking into those guards:
```ruby
def self.queue_for_review_if_possible(post, created_or_edited_by)
return unless SiteSetting.review_every_post
return if post.post_type != Post.types[:regular] || post.topic.private_message?
return if Reviewable.pending.where(target: post).exists?
...
```
And add something like
```ruby
return if Reviewable.approved.where(target: post).exists?
```
However, because the callback in step 3 happens before the `ReviewableQueuedPost` is linked to the `Topic`, that was not possible.
Therefore, when a `ReviewableQueuedPost` is creating a `Topic`, a new option called `:reviewed_queued_post` is passed to `PostCreator` to avoid creating a second `Reviewable`.
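A sketch of the approval path with the new option (everything beyond
the option name is illustrative):
```ruby
# When the approved queued post creates its topic, the new option tells
# PostCreator not to queue a second review for the same post.
PostCreator.create!(
  user,
  title: "A previously queued topic",
  raw: "The approved post body",
  reviewed_queued_post: true, # the new option from this commit
)
```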
We have been seeing `ZLib::BufError` when running the `assets:precompile` rake
task.
```
I, [2024-07-30T05:19:58.807019 #1059] INFO -- : Writing /var/www/discourse/public/assets/scripts/discourse-test-listen-boot-9b14a0fc65c689577e6a428dcfd680205516fe211700a71c7adb5cbcf4df2cc5.js
rake aborted!
Zlib::BufError: buffer error (Zlib::BufError)
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache/file_store.rb:100:in `<<'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache/file_store.rb:100:in `set'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache.rb:212:in `set'
```
The hypothesis here is that a thread-safety issue is causing the
problem, since we download the Maxmind databases in a thread and run
decompression operations once the gzip file is downloaded.
In the near term, we plan to move downloading of Maxmind databases out
of the Rake task into a scheduled job so this patch should be considered
a temporary solution.
The trade-off here is that build time will slightly increase since we
are no longer downloading Maxmind databases while precompiling assets
at the same time.
When using `Discourse.cache.fetch` with an expiry, there's a potential for a race condition due to how we read the data from redis.
The code used to be
```ruby
raw = redis.get(key) if !force
entry = read_entry(key) if raw
return entry if raw && !(entry == :__corrupt_cache__)
```
with `read_entry` defined as follows:
```ruby
def read_entry(key)
if data = redis.get(key)
Marshal.load(data)
end
rescue => e
:__corrupt_cache__
end
```
If the value at "key" expired in redis between `raw = redis.get` and `entry = read_entry`, the `entry` variable would be `nil` despite `raw` having a value.
We would then proceed to return `entry` (which is `nil`) thinking it had a value, when it didn't.
The first `redis.get` can be skipped altogether; we can rely solely on `read_entry` to read the data from redis, which avoids the race condition and removes the duplicate read operation.
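The fixed read path is then a single call (sketch):
```ruby
if !force
  # Single redis.get inside read_entry; no window for the key to expire
  # between two reads.
  entry = read_entry(key)
  return entry if entry && entry != :__corrupt_cache__
end
```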
Internal ref - t/132507
* SECURITY: Update default allowed iframes list
Change the default iframe URL list so that every entry includes 3 slashes.
* SECURITY: limit group tag's name length
Limit the size of a group tag's name to 100 characters.
Internal ref - t/130059
* SECURITY: Improve sanitization of SVGs in Onebox
---------
Co-authored-by: Blake Erickson <o.blakeerickson@gmail.com>
Co-authored-by: Régis Hanol <regis@hanol.fr>
Co-authored-by: David Taylor <david@taylorhq.com>
Since switching to Maxmind permalinks to download the databases in
7079698cdf, we have received multiple
reports of rebuilds failing because `maxminddb:refresh` runs during
the rebuild and any failure to download the databases causes the
rebuild to fail.
Downloading Maxmind databases should not sit in the critical rebuild
path, but since we are close to the Discourse 3.3 release, we have opted
to just rescue all errors encountered when downloading the databases.
In the near future, after the Discourse 3.3 release, we will look at
moving the downloading of Maxmind databases out of the rebuild path.