This was fixed previously but must have regressed: we
are showing a darker grey background around the
"Only show overridden" checkbox for our Settings tab
in config pages.
This PR fixes a recent regression in e37952c9db that reverted a fix made in 1c4d5dae1c, which allowed async calls to finish before removing in-progress uploads.
The `max_compress?` logic is totally broken, at least when used for brotli compression: we are only seeing 4 assets subjected to the max compression level in production. Instead of fixing the broken logic, we should just drop this unnecessary complexity, because things are easier to reason about when we only have one compression level to deal with across all assets.
Now that we run the `upload` method in different threads, we need to synchronize writes to `STDOUT`, which we can do by using a `Logger`.
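As a rough sketch (the file list and the `upload` call are made-up names, not the actual code), Ruby's `Logger` serializes writes through an internal mutex, so one shared instance is safe across threads:

```ruby
require "logger"

# Logger's log device wraps each write in a mutex, so concurrent
# threads can share one instance without interleaving lines.
logger = Logger.new(STDOUT)

files = %w[a.js b.js c.js]

threads =
  files.map do |file|
    Thread.new do
      # the real work would call upload(file) here (hypothetical name)
      logger.info("uploaded #{file}")
    end
  end
threads.each(&:join)
```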
Follow-up to 49e8353959
The test was flaky and failing with the following error:
```
Failure/Error:
klass
.connection
.select_raw(relation.arel) do |result, _|
result.type_map = DB.type_map
result.nfields == 1 ? result.column_values(0) : result.values
end
NoMethodError:
undefined method `select_raw' for nil
./lib/freedom_patches/fast_pluck.rb:60:in `pluck'
./vendor/bundle/ruby/3.3.0/gems/activerecord-7.2.2.1/lib/active_record/relation/calculations.rb:354:in `pick'
./app/models/web_crawler_request.rb:27:in `request_id'
./app/models/web_crawler_request.rb:31:in `rescue in request_id'
./app/models/web_crawler_request.rb:26:in `request_id'
./app/models/web_crawler_request.rb:19:in `write_cache!'
./app/models/concerns/cached_counting.rb:135:in `block (3 levels) in flush_to_db'
./vendor/bundle/ruby/3.3.0/gems/rails_multisite-6.1.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
./vendor/bundle/ruby/3.3.0/gems/rails_multisite-6.1.0/lib/rails_multisite/connection_management.rb:21:in `with_connection'
./app/models/concerns/cached_counting.rb:134:in `block (2 levels) in flush_to_db'
./app/models/concerns/cached_counting.rb:124:in `each'
./app/models/concerns/cached_counting.rb:124:in `block in flush_to_db'
./lib/distributed_mutex.rb:53:in `block in synchronize'
./lib/distributed_mutex.rb:49:in `synchronize'
./lib/distributed_mutex.rb:49:in `synchronize'
./lib/distributed_mutex.rb:34:in `synchronize'
./app/models/concerns/cached_counting.rb:120:in `flush_to_db'
./app/models/concerns/cached_counting.rb:187:in `perform_increment!'
./app/models/web_crawler_request.rb:15:in `increment!'
./lib/middleware/request_tracker.rb:74:in `log_request'
./lib/middleware/request_tracker.rb:409:in `block in log_later'
./lib/scheduler/defer.rb:125:in `block in do_work'
./vendor/bundle/ruby/3.3.0/gems/rails_multisite-6.1.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
./vendor/bundle/ruby/3.3.0/gems/rails_multisite-6.1.0/lib/rails_multisite/connection_management.rb:21:in `with_connection'
./lib/scheduler/defer.rb:119:in `do_work'
./lib/scheduler/defer.rb:105:in `block (2 levels) in start_thread'
```
This was due to running the defer thread in an async manner, which is not representative of the production environment. It also revealed a spot in our code base where writes happen in a GET request, which can cause requests to fail if ActiveRecord is in readonly mode.
The directory items controller specs that have a search param did not match how things work in production. In a non-test environment the UserSearch class depends on the `user_search_data` table being populated, so the tests I corrected now use this table as well to match reality.
Also added a new test to cover the 20-user limit for search results that currently exists. This 20-user limit is on the line between a bug and a feature, but it is how things currently work, so we should document that. We have plans to increase this limit, as documented here: https://meta.discourse.org/t/296485
This PR is a no-op and only changes the tests.
Co-authored-by: brrusselburg <25828824+brrusselburg@users.noreply.github.com>
We now extensively reference the `{ i18n }` named export of the `discourse-i18n` package, instead of calling `I18n.t` directly. That means that mutations of `I18n.t` no longer have any impact on most of the app.
This commit updates the verbose localisation logic to be switched by a boolean instead of a method mutation.
When a post containing an apostrophe (') is cooked, the apostrophe is converted to the "typographic" version (’), because we enable markdown-it's **typographer** mode by default in Discourse.
When you select text that contains such an apostrophe and then try to save your fast edit, it fails miserably without any error.
That's because when you select text from the DOM, it uses the cooked version, which has the typographic apostrophe.
When you save your fast edit, we fetch the raw version of the post, which has the "regular" apostrophe. Thus doing `raw.replace(selectedText, editedText)` doesn't work because `raw` has the regular apostrophe but `selectedText` has the typographic apostrophe.
Since it's somewhat complicated to handle all typographic characters (we would basically have to reverse the process done in `custom-typographer-replacements.js`), we instead bail out and show the composer when we detect such a character in the selection.
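To make the mismatch concrete, here is the same failure sketched in Ruby (`String#sub` stands in for the `raw.replace(...)` call above; the strings are made up):

```ruby
raw      = "don't panic"  # raw markdown keeps the plain apostrophe (')
selected = "don’t"        # selection comes from the cooked HTML, with the typographic one (’)

raw.sub(selected, "do not") # => "don't panic" (no match, nothing replaced)
```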
Internal ref - t/143836
This reverts commit 766ff723f8.
Ensure that we create the Sidekiq log file before opening it for logging. This avoids any issue with the log file not being present when we initialize an instance of the `Logger`.
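A minimal sketch of the ordering (the path is assumed for illustration only):

```ruby
require "fileutils"
require "logger"

log_path = "log/sidekiq.log" # assumed path, for illustration only

# Create the directory and the file up front so Logger never opens a
# path that does not exist yet.
FileUtils.mkdir_p(File.dirname(log_path))
FileUtils.touch(log_path)

logger = Logger.new(log_path)
logger.info("Sidekiq booted")
```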
Some pages, like the new/edit item pages, should not display the admin header, so a new `@shouldDisplay` attribute was added.
As a proof of concept, the flags page was updated.
This PR resolves an issue where the "Experimental" badge would break onto a new line when the title was too long, causing the badge text to separate from the icon. The fix ensures the badge text and icon remain aligned, even with longer titles.
In 6cafe59c76, we added a monkey patch to
`Unicorn::HttpServer#murder_lazy_workers` to log a message and send a
`USR2` signal to the Unicorn worker process when that process is 2
seconds away from being timed out by the Unicorn master process.
However, we ended up logging multiple messages and sending multiple
USR2 signals during the 2 seconds before the Unicorn worker process hit
the timeout.
To overcome this problem, we will now set an instance variable on the
`Unicorn::Worker` instance and use it to ensure that the log message is
only logged once and the USR2 signal to the Unicorn worker process is
only sent once as well.
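A hedged sketch of the one-shot guard (the instance variable and method names here are assumptions, not the actual patch):

```ruby
# Mark the worker the first time we warn, so the log message and the
# USR2 signal fire at most once during the final 2 seconds.
def warn_about_impending_timeout(worker, pid, logger)
  return if worker.instance_variable_get(:@timeout_warning_sent)

  worker.instance_variable_set(:@timeout_warning_sent, true)
  logger.warn("worker pid=#{pid} is close to the Unicorn timeout, sending USR2")
  Process.kill("USR2", pid)
end
```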
In `Jobs::Base::JobInstrumenter.raw_log`, we were creating an instance
of `Queue`, pushing messages onto it, and popping them off in a separate
thread. However, this complexity is not necessary when we can just
write directly to the logger without much overhead. This is how all
logging is done in other parts of the app as well.
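In sketch form (the logger destination is assumed), the simplification boils down to writing straight to the shared `Logger`, which already handles its own locking:

```ruby
require "logger"

LOGGER = Logger.new("log/job_instrumentation.log") # assumed destination

# No Queue, no consumer thread: Logger#<< appends the raw message and
# synchronizes concurrent writers itself.
def raw_log(message)
  LOGGER << message
end

raw_log("job finished\n")
```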
We identify and deny blocked crawlers in anonymous_cache. Separating out the notion of the crawler identifier lets plugins override it if they do more advanced detection.
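Roughly, the seam looks like this (all names here are illustrative, not the actual Discourse API):

```ruby
# The identifier extraction lives in its own method so a plugin can
# override just this step, e.g. to combine the user agent with other
# request headers.
module CrawlerBlocking
  def crawler_identifier(env)
    env["HTTP_USER_AGENT"].to_s
  end

  def blocked_crawler?(env, blocked_patterns)
    identifier = crawler_identifier(env)
    blocked_patterns.any? { |pattern| identifier.match?(pattern) }
  end
end
```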
In 806e37aaec, I improved the conflict handling when editing a post to account for title and tags.
This fixes an edge case where a topic has a hidden tag the current editor can't see. When they submit their edit, we automatically add the hidden tags back before checking against the tags stored in the database.
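Conceptually (with made-up tag names):

```ruby
tags_in_database = %w[support internal]
submitted_tags   = %w[support]  # the editor cannot see "internal", so it is missing
hidden_tags      = %w[internal] # added back server-side before the comparison

effective_tags = (submitted_tags + hidden_tags).uniq
effective_tags.sort == tags_in_database.sort # => true, so no spurious conflict
```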
Reported in https://meta.discourse.org/t/341375
Since 3e7f0867ea, I started seeing some specs fail due to the following error
```plain
Error occurred while rendering: top-level application > discourse-root > topic > discourse-topic > topic-navigation > plugin-outlet > plugin-connector > topic-presence-display
/assets/plugins/discourse-presence.js - Uncaught TypeError: Cannot destructure property 'whisperer' of 'this.currentUser' as it is null.
```
For some reason I can't fathom, the presence components seem to be rendered even though they're using outlets that are only rendered when a user is signed in... 🤷‍♂️
Lost too much time trying to reproduce, so I ended up adding an `if (!this.currentUser) { return; }` condition to both "presence display" components to (hopefully) fix these flakes.
This change creates a shallow copy of the public message channels so we don't change the original array while sorting.
Without this change, the publicMessageChannels getter cache gets invalidated and cached again with the sorted channels array instead, which causes a bug where the sidebar channel list sorting order is updated by activity when the user interacts with the chat drawer.
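The underlying pattern, sketched here in Ruby (`sort!` mutates its receiver in place the same way JavaScript's `Array#sort` does; the channel names are made up):

```ruby
channels = ["general", "design", "dev"] # stands in for the cached getter result

# channels.sort! would reorder the cached array itself; copying first
# leaves the cached ordering intact for every other consumer.
sorted = channels.dup.sort
channels # => ["general", "design", "dev"], unchanged
```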
We've seen abuse of user profiles in some communities, where bios and other fields are used in malicious ways, such as malware distribution. A common pattern across all the abuse cases we've seen is that the malicious actors tend to have 0 posts and a low trust level.
To eliminate this abuse vector, or at least make it much less effective, we're making the following changes to user profiles:
1. Anonymous, TL0 and TL1 users cannot see any user profiles for users with 0 posts except for staff users
2. Anonymous and TL0 users can only see profiles of TL1 users and above
Users can always see their own profile, and they can still hide their profiles via the "Hide my public profile" preference. Staff can always see any user's profile.
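A hedged sketch of these rules combined (method and attribute names are assumptions, not the actual implementation):

```ruby
# viewer is nil for anonymous visitors
def can_see_profile?(viewer, target)
  return true if viewer == target || viewer&.staff?
  return false if target.profile_hidden?
  return true if target.staff?

  viewer_tl = viewer ? viewer.trust_level : 0 # anonymous behaves like TL0

  return false if target.post_count == 0 && viewer_tl <= 1 # rule 1
  return false if target.trust_level < 1 && viewer_tl < 1  # rule 2
  true
end
```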
Internal topic: t/142853.
Followup 35ecd0335c
Since we have the moderators_manage_categories_and_groups setting,
more than just admins can manage groups, so we need to allow others to
see this Automatic tooltip as well.
Also fixes an inconsistency with canManageGroup between the User
model and the Group controller; the latter is correct, allowing management
of automatic groups if the can_admin_group permission is true.
The modal was disabling body scroll lock, and the select-kit collection was not whitelisted, which was preventing users from scrolling a select-kit collection on iOS.