Specifying wildcard characters which also happen to be regex
meta characters for `auto_approve_email_domains`, `allowed_email_domains`
and `blocked_email_domains` site settings currently breaks email
validation.
This change prevents these characters from being specified for these
site settings. It does this by switching the site setting type
from `list` to `host_list`. The `host_list` validator checks for these
characters.
In addition, this change also improves the site setting descriptions and
introduces a migration to fix existing records.
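As a rough illustration of the failure mode (a minimal sketch, not Discourse's actual validator), a regex meta character in the domain list produces an invalid pattern once the list is interpolated into a regular expression:

```rb
# Minimal sketch: the domain list ends up inside a regular expression, so a
# wildcard value such as "*.example.com" makes the pattern invalid.
domains = "*.example.com|discourse.org"

begin
  regexp = Regexp.new("\\A[^@]+@(#{domains})\\z", Regexp::IGNORECASE)
  regexp.match?("user@discourse.org")
rescue RegexpError => e
  puts "email validation breaks: #{e.message}"
end
```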
This commit fixes the following issue:
* User creates a post
* Akismet, or another mechanism such as post approval requirements, puts
  the post in the review queue, which deletes it
* Admin approves the post
* Email is never sent to mailing list mode subscribers
We intentionally do not enqueue this for every single post when
recovering a topic (i.e. recovering the first post) since the topics
could have a lot of posts with emails already sent, and we don't want
to clog sidekiq with thousands of notify jobs.
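A rough sketch of the intent, with the recovery hook simplified for illustration (`Jobs::NotifyMailingListSubscribers` is the real job; the hook shown here is assumed):

```rb
# Illustrative hook: enqueue the mailing list email for the single recovered
# post, without fanning out over every post when a whole topic is recovered.
def after_post_recovered(post)
  Jobs.enqueue(:notify_mailing_list_subscribers, post_id: post.id)
end
```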
We refer to the channel name rather than title elsewhere
(including the new channel modal), so we should be consistent.
Title is an internal abstraction, since DM channels cannot have
names (currently).
Also change the name field on channel edit to an input type="text"
rather than a textarea, since we don't want a huge input here.
Added in c2013865d7,
this migration was supposed to only turn off the hashtag
setting for existing sites (since that was the old default)
but it's doing it for new ones too because we run all migrations
on new sites.
Instead, we should skip this change when the first migration was
only just created, meaning it's a new site.
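A hedged sketch of the guard (the class name, time window and SQL are illustrative, not the exact migration):

```rb
class TurnOffHashtagSettingForExistingSites < ActiveRecord::Migration[7.0]
  def up
    # On a brand-new site every migration runs at once, so the earliest
    # recorded migration is only seconds old; in that case keep the new default.
    earliest =
      execute("SELECT created_at FROM schema_migration_details ORDER BY created_at LIMIT 1")
        .first&.[]("created_at")
    return if earliest.nil? || Time.zone.parse(earliest.to_s) > 1.hour.ago

    # Existing site: persist the old default explicitly.
    # (data_type 5 is assumed here to be the boolean site setting type.)
    execute(<<~SQL)
      INSERT INTO site_settings (name, data_type, value, created_at, updated_at)
      SELECT 'enable_experimental_hashtag_autocomplete', 5, 'f', NOW(), NOW()
      WHERE NOT EXISTS (
        SELECT 1 FROM site_settings WHERE name = 'enable_experimental_hashtag_autocomplete'
      )
    SQL
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```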
If the enable_experimental_hashtag_autocomplete setting is
enabled, then we should autolink hashtag references to the
archived channels (e.g. #blah::channel) for a nicer UX, and
just show the channel name if not (since doing #channelName
can lead to weird inconsistent results).
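Roughly, the reference format is picked based on the setting; a minimal sketch, with method and attribute names assumed rather than taken from the chat code:

```rb
# Assumed helper: how the archived channel is referenced in the generated text.
def channel_reference(channel)
  if SiteSetting.enable_experimental_hashtag_autocomplete
    "##{channel.slug}::channel"
  else
    channel.name
  end
end
```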
In Discourse, there are many migration files where we CREATE INDEX CONCURRENTLY which requires us to set disable_ddl_transaction!. Setting disable_ddl_transaction! in a migration file runs the SQL statements outside of a transaction. The implication of this is that there is no ROLLBACK should any of the SQL statements fail.
We have seen lock timeouts occurring when running CREATE INDEX CONCURRENTLY. When that happens, the index would still have been created but marked as invalid by Postgres.
Per the postgres documentation:
> If a problem arises while scanning the table, such as a deadlock or a uniqueness violation in a unique index, the CREATE INDEX command will fail but leave behind an “invalid” index. This index will be ignored for querying purposes because it might be incomplete; however it will still consume update overhead.
> The recommended recovery method in such cases is to drop the index and try again to perform CREATE INDEX CONCURRENTLY. (Another possibility is to rebuild the index with REINDEX INDEX CONCURRENTLY.)
When such scenarios happen, we are supposed to either drop and create the index again or run a REINDEX operation. However, I noticed today that we have not been doing so in Discourse. Instead, we've been incorrectly working around the problem by checking for the index's existence before creating it in order to make the migration idempotent. What this potentially means is that we might have invalid indexes lying around in the database which PG will ignore for querying purposes.
This commit adds a migration which queries for all the
invalid indexes in the `public` namespace and reindexes them.
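A sketch of the idea, assuming the standard pg_index/pg_class catalogs (the real migration may differ in details such as the class name or schema qualification); REINDEX CONCURRENTLY needs PostgreSQL 12+ and cannot run inside a transaction, hence `disable_ddl_transaction!`:

```rb
class ReindexInvalidIndexes < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def up
    # Invalid indexes are flagged by Postgres via pg_index.indisvalid.
    invalid_index_names = execute(<<~SQL).map { |row| row["index_name"] }
      SELECT pg_class.relname AS index_name
      FROM pg_index
      JOIN pg_class ON pg_class.oid = pg_index.indexrelid
      JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace
      WHERE pg_namespace.nspname = 'public' AND NOT pg_index.indisvalid
    SQL

    invalid_index_names.each do |index_name|
      execute("REINDEX INDEX CONCURRENTLY \"#{index_name}\"")
    end
  end

  def down
  end
end
```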
Since the poll post handler runs very early in the post creation
process, it's possible to run the handler on an obviously invalid post.
This change ensures the post's `raw` value is present before
proceeding.
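A minimal sketch of such a guard, with the handler shape assumed (the real handler lives in the poll plugin):

```rb
def extract_polls(post)
  # Bail out before parsing when the post has no raw content yet.
  return [] if post.blank? || post.raw.blank?
  # ... parse the [poll] markup from post.raw here ...
  []
end
```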
This is a very subtle one. Setting the redirect URL is done by passing
a hash through a Discourse event. This is broken on Ruby 2 since the
support for keyword arguments in events was added.
In Ruby 2 the last argument is cast to keyword arguments if it is a
hash. The key point here is that this cast creates a new copy of the
hash, so what the plugin is modifying is not the hash that was passed.
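A minimal illustration of the language behaviour described above (this is not the DiscourseEvent implementation, just the Ruby semantics):

```rb
# A trailing hash is captured as keyword arguments and rebuilt into a *new*
# hash before the listener sees it, so the caller's hash is never mutated.
def trigger(*args, **kwargs, &listener)
  listener.call(*args, **kwargs)
end

redirect = { url: nil }
trigger(redirect) { |opts| opts[:url] = "/latest" }

redirect[:url] # => "/latest" on Ruby 3, but nil on Ruby 2.7 (a copy was modified)
```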
We've been doing some work to support new keyword argument semantics in Ruby 3. As part of that we made some changes to `DiscourseEvent::TestHelper`. The backwards compatibility fix doesn't work if the method is called with an empty hash as the final argument. This fix adds a valid option to the final hash in the particular test.
There was an issue with channel archiving, where at times the topic
creation could fail which left the archive in a bad state, as read-only
instead of archived. This commit does several things:
* Changes the ChatChannelArchiveService to validate the topic being
  created first and, if it is not valid, report the topic creation errors
  in the PM we send to the user (a rough sketch follows this list)
* Changes the UI message in the channel with the archive status to reflect
that topic creation failed
* Validate the new topic when starting the archive process from the UI,
and show the validation errors to the user straight away instead of
creating the archive record and starting the process
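A rough sketch of the validation step, assuming attribute names like these on the archive record (they may differ from the actual service):

```rb
def destination_topic_valid?(archive)
  topic = Topic.new(
    title: archive.destination_topic_title,
    user: Discourse.system_user,
    category_id: archive.destination_category_id,
  )
  return true if topic.valid?

  # Surface the errors in the PM we send to the user instead of leaving the
  # channel stuck in a read-only state.
  archive.update!(archive_error: topic.errors.full_messages.join(", "))
  false
end
```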
This also fixes another issue in the discourse_dev config: YAML parsing
no longer enables all classes by default, which was making the seeding
rake task for chat fail.
Currently, `Tag#topic_count` is a count of all regular topics regardless of whether the topic is in a read restricted category or not. As a result, any user can technically poll a sensitive tag to determine whether a new topic has been created in a category which the user does not have access to. We classify this as a minor leak of sensitive information.
The following changes are introduced in this commit:
1. Introduce `Tag#public_topic_count`, which only counts topics which have been tagged with a given tag in public categories.
2. Rename `Tag#topic_count` to `Tag#staff_topic_count` which counts the same way as `Tag#topic_count`. In other words, it counts all topics tagged with a given tag regardless of the category the topic is in. The rename is also done so that we indicate that this column contains sensitive information.
3. Change all previous spots which relied on `Tag#topic_count` to rely on `Tag.topic_column_count(guardian)`, which will return the right "topic count" column to use based on the current scope (a sketch follows this list).
4. Introduce `SiteSetting.include_secure_categories_in_tag_counts` site setting to allow site administrators to always display the tag topics count using `Tag#staff_topic_count` instead.
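A hedged sketch of the column-selection helper referenced in point 3; the actual implementation may differ in details:

```rb
class Tag < ActiveRecord::Base
  def self.topic_column_count(guardian)
    if guardian&.is_staff? || SiteSetting.include_secure_categories_in_tag_counts
      "staff_topic_count"
    else
      "public_topic_count"
    end
  end
end
```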
0403cda1d1 introduced a regression where
topics in non read-restricted categories have their TopicTrackingState
MessageBus messages published with the `group_ids: [nil]` option. This
essentially means that no one would be able to view the message.
* FIX: Channel archive N1 when serializing current user
The `ChatChannelSerializer` serializes the archive for the
channel if it is present, however this was causing an N1 for
the current user serializer in the case of DM channels, which
were not doing `includes(:chat_channel_archive)` in the
`ChatChannelFetcher`.
DM channels cannot be archived, so we can just never try to serialize
the archive for DM channels in `ChatChannelSerializer`, which
removes the N1 (a rough sketch follows below).
* DEV: Add N1 performance spec for latest.html preloading
We modify current user serializer in chat, so it's a good
idea to have some N1 performance specs to avoid regressions
here.
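A sketch of the serializer change from the FIX item above, with the serializer DSL details assumed rather than copied from the chat code:

```rb
class ChatChannelSerializer < ApplicationSerializer
  has_one :chat_channel_archive, serializer: ChatChannelArchiveSerializer, embed: :objects

  # DM channels cannot be archived, so never attempt to serialize an archive
  # for them; the association is simply never touched in that case.
  def include_chat_channel_archive?
    !object.direct_message_channel? && object.chat_channel_archive.present?
  end
end
```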
When a topic belongs to a category that is read restricted but permission
has not been granted to any groups, publishing certain topic tracking state
updates for the topic will result in the `MessageBus::InvalidMessageTarget`
error being raised because we're passing `nil` to `group_ids`, which is not
supported by MessageBus.
This commit ensures that for said category above, we will publish the
updates to the admin groups.
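An illustrative sketch of the fallback (not the exact TopicTrackingState code):

```rb
# Never hand MessageBus a nil group id: when a read restricted category
# grants access to no groups, restrict the message to admins instead.
def self.secure_category_group_ids(topic)
  category = topic.category
  return nil if category.blank? || !category.read_restricted

  group_ids = category.secure_group_ids.to_a
  group_ids.presence || [Group::AUTO_GROUPS[:admins]]
end
```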
```rb
class Jobs::DummyDelayedJob < Jobs::Base
def execute(args = {})
end
end
RSpec.describe "Jobs.run_immediately!" do
before { Jobs.run_immediately! }
it "explodes" do
current_user = Fabricate(:user)
Jobs.enqueue_in(1.seconds, :dummy_delayed_job)
sign_in(current_user)
end
end
```
The test above will fail with the following error if `ActiveRecord::Base.connection_handler.clear_active_connections!` is called before the configured Capybara server checks out a connection from the connection pool.
```
ActiveRecord::ActiveRecordError:
Cannot expire connection, it is owned by a different thread: #<Thread:0x00007f437391df58@puma srv tp 001 /home/tgxworld/.asdf/installs/ruby/3.1.3/lib/ruby/gems/3.1.0/gems/puma-6.0.2/lib/puma/thread_pool.rb:106 sleep_forever>. Current thread: #<Thread:0x00007f437d6cfc60 run>.
```
We're not exactly sure whether this is an ActiveRecord bug, but we've
invested too much time into investigating this problem. Fundamentally, we
also no longer understand why
`ActiveRecord::Base.connection_handler.clear_active_connections!` is being
called in an ensure block within `Jobs::Base#perform`, which was added in
ceddb6e0da 10 years ago. This commit moves the logic for running jobs
immediately out of the `Jobs::Base#perform` method and into a new
`Jobs::Base#perform_immediately` method so that
`ActiveRecord::Base.connection_handler.clear_active_connections!` is not
called. This change only impacts the test environment.
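A rough sketch of the split (illustrative; the real method handles more than this):

```rb
module Jobs
  class Base
    # Used only when Jobs.run_immediately! is active in tests: execute the job
    # directly instead of going through #perform and its ensure block, so
    # clear_active_connections! is never reached.
    def perform_immediately(opts = {})
      execute(opts.with_indifferent_access)
    end
  end
end
```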
If unaccent is called with quote-like Unicode characters, it can
generate invalid queries because some of the quotes produced by
unaccent are not escaped, and to_tsquery fails because of the bad input.
This commit replaces more quote-like Unicode characters before
unaccent is called.
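A hedged sketch of the idea; the character list and method name are illustrative, not the exact search code:

```rb
# Pre-"unaccent" quote-like characters ourselves, doubling them so that
# to_tsquery receives them already escaped.
QUOTE_LIKE_CHARACTERS = /['\u2018\u2019\u201b\u02bb\u02bc\u02bd\u2032]/

def escape_quotes(term)
  term.gsub(QUOTE_LIKE_CHARACTERS, "''")
end
```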
These accidental inclusions are mostly no-ops (because the method name is also included as an explicit symbol). The mistakes were made more obvious because syntax_tree adjusted the indentation of these methods.
Fixes the support for kwargs in `DiscourseEvent.trigger()` on Ruby 3, e.g.
```rb
DiscourseEvent.trigger(:before_system_message_sent, message_type: type, recipient: @recipient, post_creator_args: post_creator_args, params: method_params)
```
Fixes https://github.com/discourse/discourse-local-site-contacts
If a secure upload's access_control_post was trashed, and an anon user
tried to look at that upload, they would get a 500 error rather than
the correct 403 because of an error inside the PostGuardian logic.
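An illustrative sketch of the guard (the method name is hypothetical; the real check lives in PostGuardian):

```rb
# A trashed access control post simply means "no access" for an anonymous
# user, rather than an exception from a missing association.
def can_see_secure_upload?(upload)
  post = upload.access_control_post
  return false if post.blank? || post.trashed?
  can_see_post?(post)
end
```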