This should resolve these warnings under Ruby 3.1:
```
warning: Passing safe_level with the 2nd argument of ERB.new is deprecated
```
Unfortunately, Sprockets 3.x has not seen a rubygems release since 2018, so we need to fetch these improvements via git.
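The Gemfile pin looks roughly like this (the branch shown is illustrative; the actual change targets a specific ref on the upstream repository):
```
# Gemfile -- fetch sprockets from git instead of rubygems.org
gem "sprockets", git: "https://github.com/rails/sprockets", branch: "3.x"
```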
In Discourse, there are many migration files where we CREATE INDEX CONCURRENTLY which requires us to set disable_ddl_transaction!. Setting disable_ddl_transaction! in a migration file runs the SQL statements outside of a transaction. The implication of this is that there is no ROLLBACK should any of the SQL statements fail.
We have seen lock timeouts occurring when running CREATE INDEX CONCURRENTLY. When that happens, the index would still have been created but marked as invalid by Postgres.
Per the Postgres documentation:
> If a problem arises while scanning the table, such as a deadlock or a uniqueness violation in a unique index, the CREATE INDEX command will fail but leave behind an “invalid” index. This index will be ignored for querying purposes because it might be incomplete; however it will still consume update overhead.
> The recommended recovery method in such cases is to drop the index and try again to perform CREATE INDEX CONCURRENTLY. (Another possibility is to rebuild the index with REINDEX INDEX CONCURRENTLY.)
When such scenarios happen, we are supposed to either drop and create the index again or run a REINDEX operation. However, I noticed today that we have not been doing so in Discourse. Instead, we’ve been incorrectly working around the problem by checking for the index's existence before creating it in order to make the migration idempotent. What this potentially means is that we might have invalid indexes lying around in the database, which PG will ignore for querying purposes.
This commit adds a migration which queries for all the
invalid indexes in the `public` namespace and reindexes them.
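A minimal sketch of what such a migration can look like; the class name and exact query are illustrative, not the actual Discourse migration, and REINDEX INDEX CONCURRENTLY assumes Postgres 12+:
```
class ReindexInvalidIndexes < ActiveRecord::Migration[7.0]
  # REINDEX ... CONCURRENTLY cannot run inside a transaction block.
  disable_ddl_transaction!

  def up
    # pg_index.indisvalid is false for indexes left behind by a failed
    # CREATE INDEX CONCURRENTLY.
    invalid_indexes = select_values(<<~SQL)
      SELECT ic.relname
      FROM pg_index i
      JOIN pg_class ic ON ic.oid = i.indexrelid
      JOIN pg_namespace n ON n.oid = ic.relnamespace
      WHERE NOT i.indisvalid AND n.nspname = 'public'
    SQL

    invalid_indexes.each do |index_name|
      execute("REINDEX INDEX CONCURRENTLY \"#{index_name}\"")
    end
  end

  def down
  end
end
```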
Since the poll post handler runs very early in the post creation
process, it's possible to run the handler on an obviously invalid post.
This change ensures the post's `raw` value is present before
proceeding.
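Conceptually, the fix is just an early-return guard at the top of the handler (illustrative, not the exact code):
```
# Bail out before doing any poll extraction on a post with no raw content.
return if post.raw.blank?
```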
Data Explorer queries have a `user_id` assigned to each query created. DE Reports can be bookmarked for later reference.
When creating the bookmark notification there was the possibility of a notification error being thrown (making the notification menu inaccessible) due to a DE Query not having an owner (associated user_id). This can happen in a couple of ways:
- having a query created by a user who was later deleted, leaving the query without ownership
- having a TA create a query for a customer using a temporary account that was later deleted, leaving the query without ownership
Since there are cases where `bookmark.user` is not valid, this PR makes `bookmark.user.username` optional for a bookmark notification. As [tested](https://github.com/discourse/discourse/pull/19851/files#diff-5b5154de37f96988d551feff6f1dfe5ba804fbcbc1c33b5478dde02a447a634f), when a username is not present we will still render the `content` of the notification minus the username. This creates a safe fallback when looking up users that no longer exist.
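Conceptually the lookup becomes nil-safe, along these lines (the surrounding code is illustrative):
```
# `bookmark.user` may be nil for orphaned queries; use safe navigation and
# let the notification renderer omit the username when it is absent.
display_username = bookmark.user&.username
```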
Our fork was needed for OpenSSL 3 and Ruby 2.X compatibility.
The OpenSSL 3 part was merged into the gem for version 3.
Discourse dropped support for Ruby 2.X.
That means we don't need our fork anymore.
This is a very subtle one. Setting the redirect URL is done by passing
a hash through a Discourse event. This has been broken on Ruby 2 since
support for keyword arguments in events was added.
In Ruby 2, the last argument is cast to keyword arguments if it is a
hash. The key point here is that this creates a new copy of the hash, so
what the plugin is modifying is not the hash that was passed.
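The behaviour is easy to reproduce outside Discourse with a plain Ruby method that accepts keyword arguments:
```
# Under Ruby 2, the trailing hash is cast to keyword arguments, which
# copies it, so the method mutates a copy rather than the caller's hash.
def handler(**opts)
  opts[:redirect_url] = "/latest"
end

payload = {}
handler(payload)
payload[:redirect_url] # => nil on Ruby 2 -- the mutation hit the copy
```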
This patch introduces a cookies rotator as indicated in the Rails
upgrade guide. This allows migrating from the old SHA1 digest to the
new SHA256 digest.
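The initializer follows the pattern from the Rails upgrade guide, roughly like this (salt and config names are the standard Rails ones; the actual Discourse initializer may differ):
```
# config/initializers/cookie_rotator.rb
Rails.application.config.after_initialize do
  Rails.application.config.action_dispatch.cookies_rotations.tap do |cookies|
    config = Rails.application.config.action_dispatch
    key_generator = ActiveSupport::KeyGenerator.new(
      Rails.application.secret_key_base,
      iterations: 1000,
      hash_digest_class: OpenSSL::Digest::SHA1
    )
    key_len = ActiveSupport::MessageEncryptor.key_len

    # Keep accepting cookies written with the old SHA1-derived secrets
    # while new cookies are written with the SHA256 digest.
    cookies.rotate :encrypted,
      key_generator.generate_key(config.authenticated_encrypted_cookie_salt, key_len)
    cookies.rotate :signed,
      key_generator.generate_key(config.signed_cookie_salt)
  end
end
```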
The spec was flaky because it was dependent on order; when usernames
got high enough sequence numbers in them, we would get this error:
> expected to find text "bruce99, bruce100" in "bruce100, bruce99"
Also move selectors into page object and use them in the
spec instead.
The latter can be called directly from the Topic page object,
so we can remove some duplication between the two. There are
levels of page objects (e.g. entire page, component, complete flow)
and it's perfectly valid to call one from another.
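As for the flakiness fix itself, an order-independent assertion could look like this (the selector is hypothetical):
```
# Compare the rendered usernames as a set instead of as a single string.
usernames = page.find(".topic-map-users").text.split(", ")
expect(usernames).to contain_exactly("bruce99", "bruce100")
```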
We've been doing some work to support new keyword argument semantics in Ruby 3. As part of that, we made some changes to `DiscourseEvent::TestHelper`. The backwards compatibility fix doesn't work if the method is called with an empty hash as the final argument. This fix adds a valid option to the final hash in that particular test.
In "GlobalSetting.redirect_avatar_requests" mode, when the application gets
an avatar request it returns a "redirect" to the S3 CDN.
This shields the application from caching avatars and downloading from S3.
However clients will make 2 requests per avatar. (one to get redirect,
second to get avatar)
A one hour cache on a redirect means there may be an increase in CDN
traffic, given more clients will ask for the redirect every hour.
This may also lead to an increase in origin requests to the application.
To mitigate lets cache the CDN URL for 1 day.
The downside is that any changes to S3 CDN need extra care to allow for
the extra 1 day delay. (leave data around for 1 extra day)
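Illustratively, the controller change amounts to a longer `expires_in` on the redirect response (the URL helper shown is hypothetical):
```
# Cache the redirect for a day instead of an hour; any S3 CDN change must
# now tolerate clients following the old redirect for up to a day.
expires_in 1.day, public: true
redirect_to cdn_avatar_url, allow_other_host: true # cdn_avatar_url is hypothetical
```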
This will give us some aggregate stats on the defer queue performance.
It is limited to 100 entries (for safety), stored in an LRU cache.
`Scheduler::Defer.stats` can then be used to get an array that denotes:
- number of runs and completions (queued, finished)
- error count (errors)
- total duration (duration)
We can later look at exposing these metrics to gain visibility into why
the defer queue is clogged.
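A hypothetical read of the stats, assuming an array of entries keyed as listed above:
```
# Print a rough summary of each deferred block's counters.
Scheduler::Defer.stats.each do |entry|
  puts "queued=#{entry[:queued]} finished=#{entry[:finished]} " \
       "errors=#{entry[:errors]} duration=#{entry[:duration]}s"
end
```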
0403cda1d1 introduced a regression where
topics in non-read-restricted categories had their TopicTrackingState
MessageBus messages published with the `group_ids: [nil]` option. This
essentially means that no one would be able to view the message.
There was an issue with channel archiving, where at times the topic
creation could fail, leaving the archive in a bad state: read-only
instead of archived. This commit does several things:
* Changes the `ChatChannelArchiveService` to validate the topic being
  created first and, if it is not valid, report the topic creation errors
  in the PM we send to the user
* Changes the UI message in the channel with the archive status to reflect
that topic creation failed
* Validates the new topic when starting the archive process from the UI,
  and shows the validation errors to the user straight away, instead of
  creating the archive record and starting the process
This also fixes another issue in the discourse_dev config, which was
failing because YAML parsing no longer enables all classes by default,
which was making the seeding rake task for chat fail.
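The YAML failure comes from Psych 4 (the default in Ruby 3.1), where `YAML.load` behaves like `safe_load` and rejects non-basic types unless they are permitted explicitly. The fix is along these lines (file name and class list illustrative):
```
# Permit the extra classes the seed data actually uses.
YAML.safe_load(File.read("config/chat_seed.yml"), permitted_classes: [Symbol, Time])
```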
* Firefox now finally returns PerformanceMeasure from performance.measure
* Some TODOs were really more NOTE or FIXME material or no longer relevant
* `retain_hours` is not needed in `ExternalUploadsManager`; nothing in the UI seems to send this as a param for uploads
* https://github.com/discourse/discourse/pull/18413 was merged so we can remove JS test workaround for settings
Currently when generating a onebox for Discourse topics, some important
context is missing such as categories and tags.
This patch addresses the issue by introducing a new onebox engine
dedicated to displaying this information when available. To make this
new information available, categories and tags are exposed as OpenGraph
tags in the topic metadata.
* FIX: Channel archive N1 when serializing current user
The `ChatChannelSerializer` serializes the archive for the
channel if it is present, however this was causing an N1 for
the current user serializer in the case of DM channels, which
were not doing `includes(:chat_channel_archive)` in the
`ChatChannelFetcher`.
DM channels cannot be archived, so we can just never try to serialize
the archive for DM channels in `ChatChannelSerializer`, which
removes the N1.
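A sketch of that guard, assuming Discourse's `include_*?` serializer convention and a `direct_message_channel?` predicate on the channel:
```
class ChatChannelSerializer < ApplicationSerializer
  attributes :archive

  # DM channels cannot be archived, so never touch (and thus never load)
  # the archive association for them.
  def include_archive?
    !object.direct_message_channel? && object.chat_channel_archive.present?
  end
end
```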
* DEV: Add N1 performance spec for latest.html preloading
We modify the current user serializer in chat, so it's a good
idea to have some N1 performance specs to avoid regressions
here.
When a topic belongs to a category that is read-restricted but no groups
have been granted permission, publishing certain topic tracking state
updates for the topic will result in the `MessageBus::InvalidMessageTarget` error being raised,
because we're passing `nil` to `group_ids`, which is not supported by
MessageBus.
This commit ensures that for said category above, we will publish the
updates to the admin groups.
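Conceptually, the fix falls back to the admins group instead of passing `nil` (the helpers exist in Discourse, but this exact usage is illustrative):
```
# Never hand MessageBus a nil group id; fall back to admins for
# read-restricted categories with no group permissions.
group_ids = category.secure_group_ids.to_a.compact
group_ids = [Group::AUTO_GROUPS[:admins]] if group_ids.empty?
MessageBus.publish(channel, message.as_json, group_ids: group_ids)
```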
```
class Jobs::DummyDelayedJob < Jobs::Base
  def execute(args = {})
  end
end

RSpec.describe "Jobs.run_immediately!" do
  before { Jobs.run_immediately! }

  it "explodes" do
    current_user = Fabricate(:user)
    Jobs.enqueue_in(1.seconds, :dummy_delayed_job)
    sign_in(current_user)
  end
end
```
The test above will fail with the following error if `ActiveRecord::Base.connection_handler.clear_active_connections!` is called before the configured Capybara server checks out a connection from the connection pool.
```
ActiveRecord::ActiveRecordError:
Cannot expire connection, it is owned by a different thread: #<Thread:0x00007f437391df58@puma srv tp 001 /home/tgxworld/.asdf/installs/ruby/3.1.3/lib/ruby/gems/3.1.0/gems/puma-6.0.2/lib/puma/thread_pool.rb:106 sleep_forever>. Current thread: #<Thread:0x00007f437d6cfc60 run>.
```
We're not exactly sure if this is an ActiveRecord bug or not, but we've
invested too much time into investigating this problem. Fundamentally,
we also no longer understand why `ActiveRecord::Base.connection_handler.clear_active_connections!` is being called in an ensure block
within `Jobs::Base#perform`, which was added in
ceddb6e0da 10 years ago. This
commit moves the logic for running jobs immediately out of the
`Jobs::Base#perform` method into a new `Jobs::Base#perform_immediately` method so that
`ActiveRecord::Base.connection_handler.clear_active_connections!` is not
called. This change will only impact the test environment.
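The rough shape of the change (illustrative only; the real method mirrors more of `#perform`'s behaviour):
```
class Jobs::Base
  # Run the job inline, skipping #perform's ensure block and therefore
  # ActiveRecord::Base.connection_handler.clear_active_connections!.
  def perform_immediately(opts = {})
    execute(opts.with_indifferent_access)
  end
end
```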
If unaccent is called with quote-like Unicode characters, it can
generate invalid queries because some of the quotes transformed by
unaccent are not escaped, and to_tsquery then fails because of the bad input.
This commit replaces more quote-like Unicode characters before
unaccent is called.
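Illustratively, the normalization is a gsub over the offending characters before the term reaches unaccent (the character set shown is an example, not the exact list from the commit):
```
# Map quote-like Unicode characters to a plain apostrophe up front, so
# unaccent never emits an unescaped quote into to_tsquery's input.
QUOTE_LIKE = /[\u02BB\u02BC\u02BD\u2018\u2019\u201A\u201B]/
term = "don’t".gsub(QUOTE_LIKE, "'") # => "don't"
```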