If a codeblock contains **exactly** the same markdown as an image which has been retrieved by the 'pull hotlinked' job, then the codeblock's contents are incorrectly replaced with the new URL. This commit adds failing (skipped) tests for this issue.
This is a bug that happens only when the current date is less than 90 days from a date on which the time zone transitions into or out of Daylight Saving Time.
In these conditions, bulk invites show the time of day of their expiration as being 1 hour later than the current time, whereas it should match the time of day the invite was generated.
This is because the server has not been using the user's timezone when calculating the expiration time of day. This PR fixes the issue by considering the user's timezone when doing the date math.
https://meta.discourse.org/t/bulk-invite-logic-to-generate-expire-date-bug/274689
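A minimal sketch of the idea using ActiveSupport (the dates, zone, and variable names are illustrative, not the actual implementation): doing the calendar math in the user's zone keeps the local time of day stable across a DST transition.

```ruby
require "active_support/all"

# Naive math in the server's zone: the UTC clock time stays fixed, so the
# user's local expiry time shifts by an hour if DST starts within the 90 days.
utc_expiry = Time.utc(2023, 2, 1, 10, 0) + 90.days

# Doing the same math in the user's zone preserves their local time of day.
user_expiry = Time.use_zone("America/New_York") do
  Time.zone.local(2023, 2, 1, 10, 0) + 90.days
end
```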
After fbe0e4c we always pass a block into these methods,
so `yield` inside the export methods works and there is no longer
any need to wrap them in enumerators.
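A hedged before/after sketch (method names and data are hypothetical) of the simplification this allows:

```ruby
ROWS = [[1, "admin"], [2, "moderator"]]

# Before: callers might not pass a block, so we returned an Enumerator.
def each_row
  return enum_for(:each_row) unless block_given?
  ROWS.each { |row| yield row }
end

# After fbe0e4c a block is always given, so plain yield suffices.
def each_row_simplified
  ROWS.each { |row| yield row }
end

each_row_simplified { |row| puts row.inspect }
```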
Our code assumed the content_range interval was inclusive, but they are open-ended due to Postgres' [discrete range types](https://www.postgresql.org/docs/current/rangetypes.html#RANGETYPES-DISCRETE), meaning [1,2] will be represented as [1,3).
It also fixes some flaky tests caused by test data not being correctly set up and the registry not being reset after each test.
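A small Ruby illustration of the half-open bounds described above (the helper is hypothetical):

```ruby
# [1,2] comes back from Postgres as the half-open Ruby range 1...3.
pg_range = 1...3

# Recover the inclusive upper bound our code expects.
def inclusive_upper(range)
  range.exclude_end? ? range.last - 1 : range.last
end

inclusive_upper(pg_range) # => 2, the last value actually covered
```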
When we receive the stream parameter, we'll queue a job that periodically publishes partial updates, and after the summarization finishes, a final one with the completed version, plus metadata.
`summary-box` listens to these updates via MessageBus, and updates state accordingly.
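A hedged sketch of the publishing side (the channel name and payload shape are illustrative, not the actual implementation):

```ruby
topic_id = 123
channel = "/summaries/topic-#{topic_id}"

# Periodic partial updates while the summarization job runs...
MessageBus.publish(channel, done: false, partial_summary: "This topic covers…")

# ...then a final message with the completed version plus metadata.
MessageBus.publish(channel, done: true,
                   summary: "This topic covers A, B and C.",
                   generated_by: "model-name")
```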
A previous change updated `ReviewableQueuedPost`'s `created_by`
to be consistent with other reviewable types. It assigns
the creator of the post being queued to `target_created_by` and sets
the `created_by` to the creator of the reviewable itself.
This fix updates some of the `created_by` references missed during the
initial fix.
We recently introduced this advice to admins when some translation overrides are outdated or using unknown interpolation keys.
However, we missed the case where the original translation key has been renamed or removed altogether. When this happens, the overrides are no longer visible in the admin interface, leading to the confusing situation where we say there are outdated translations, but none are shown.
Because we don't explicitly handle this case, some deleted translations were incorrectly marked as having unknown interpolation keys. (This is because I18n.t will return a string like "Translation missing: foo", which obviously has no interpolation keys inside.)
This change adds an additional status, `deprecated`, for TranslationOverride, and the job that checks them will check for this status first, taking precedence over `invalid_interpolation_keys`. Since the advice only checks for the `outdated` and `invalid_interpolation_keys` statuses, this fixes the problem.
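A hedged sketch of the precedence (`I18n.exists?` is the real i18n-gem predicate; the other helper predicates are hypothetical):

```ruby
def refresh_status(override)
  if !I18n.exists?(override.translation_key)
    :deprecated                    # key was renamed or removed in core
  elsif unknown_interpolation_keys?(override)
    :invalid_interpolation_keys
  elsif default_changed?(override)
    :outdated
  else
    :up_to_date
  end
end
```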
This PR adds a feature to help admins stay up-to-date with their translations. We already have protections preventing admins from problems when they update their overrides. This change adds some protection in the other direction (where translations change in core due to an upgrade) by creating a notice for admins when defaults have changed.
Terms:
- In the case where Discourse core changes the default translation, the translation override is considered "outdated".
- In the case above where interpolation keys were changed from the ones the override is using, it is considered "invalid".
- If none of the above applies, the override is considered "up to date".
How does it work?
There are a few pieces that make this work:
- When an admin creates or updates a translation override, we store the original translation at the time of write. (This is used to detect changes later on.)
- There is a background job that runs once every day and checks for outdated and invalid overrides, and marks them as such.
- When there are any outdated or invalid overrides, a notice is shown in admin dashboard with a link to the text customization page.
Known limitations
The notice on the dashboard links to the text customization page for the default locale. Given there might be invalid overrides in multiple languages, I'm not sure what we could do here. This is left as a consideration for future improvement.
What is the problem?
When an admin changes the default_sidebar_categories or default_sidebar_tags site settings and opts to backfill the setting,
we currently enqueue a Sidekiq job to run the backfilling operation. When an admin changes those settings multiple times
within a short time frame, multiple Sidekiq jobs with different backfilling parameters will be enqueued.
This is problematic if multiple jobs are executed concurrently as it may lead to situations where a job
with “outdated” site setting values is completed after a job with the “latest” site setting values.
What is the fix?
By setting `cluster_concurrency` to `1`, we ensure that only one such
backfilling job will execute across all the Sidekiq processes that are
deployed at any point in time. Since Sidekiq pops jobs off in the order
in which they are pushed, limiting the cluster concurrency here
allows us to execute the enqueued `Jobs::BackfillSidebarSiteSettings`
jobs serially.
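Roughly, the declaration on the job class looks like this (the `execute` body is elided and illustrative):

```ruby
class Jobs::BackfillSidebarSiteSettings < Jobs::Base
  # Allow at most one execution across every Sidekiq process in the
  # cluster, so queued backfills run serially in push order.
  cluster_concurrency 1

  def execute(args)
    # backfill sidebar defaults using the latest site setting values...
  end
end
```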
While we are unable to support OAUTH2 with POP3 (due to upstream dependency ruby/net-pop#16), we are adding support for mail poller plugins. This makes it possible to write a plugin which uses other means (the Microsoft Graph SDK, for example) to poll emails from a mailbox.
The idea is that a plugin defines a class which inherits from Email::Poller and defines a poll_mailbox static method which returns an array of strings. The plugin can then call register_mail_poller(<class_name>) to have it registered. All the configuration (OAuth2 tokens, email, etc.) can be managed by site settings defined in the plugin.
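A hedged sketch of the described plugin API (`fetch_via_graph_api` is a hypothetical helper standing in for whatever transport the plugin uses):

```ruby
class MicrosoftGraphPoller < Email::Poller
  def self.poll_mailbox
    # Must return the raw emails as an array of strings.
    fetch_via_graph_api
  end
end

register_mail_poller(MicrosoftGraphPoller)
```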
This change adds support for retroactively updating display names in the new quote format when a user's name is changed. This happens through a background job that is triggered by a callback when a user is saved with a new name.
Anonymization is among the most expensive operations we can perform, with
extreme potential to impact the database. To mitigate risk, we only allow a
single anonymization to run concurrently across the entire cluster.
This commit introduces support for `cluster_concurrency 1`. When you set that on a Job it will only allow 1 concurrent execution per cluster.
SearchIndexer is only automatically disabled in `before_all` and `before` blocks, which means at the start
of test runs. Enabling the SearchIndexer in one `fab!` block will affect
all other `fab!` blocks, which is not ideal as we may be indexing stuff
for search when we don't need to.
On the client side, message-bus subscriptions and the reviewable count UI are based on the 'redesigned_user_menu_enabled' boolean. We need to use the same logic on the server side to ensure things work correctly when legacy navigation is used alongside the new user menu.
Currently, we’re performing a check when a user is suspended in the
`UserEmail` job and we’re assuming a `post` is always available, which
is not the case. The code indeed breaks when the job is called with the
`account_suspended` type option.
This patch fixes this issue by making the check use the safe navigation
operator, thus making it work when `post` is not provided.
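A minimal, self-contained illustration of the safe navigation pattern (the job's actual check differs):

```ruby
Post = Struct.new(:topic)

def topic_missing?(post)
  # `post&.topic` returns nil instead of raising when `post` is nil,
  # as happens for the `account_suspended` type.
  post&.topic.nil?
end

topic_missing?(nil)                 # => true, no NoMethodError
topic_missing?(Post.new("a topic")) # => false
```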
There is no need to validate the user's emails when
promoting/demoting their trust level. Doing so can cause
issues in things like Jobs::Tl3Promotions; we don't
need to fail in that case when all we are doing is changing
the trust level.
Currently, when a suspended user belongs to a group PM (private message
with more than two people in it) and a staff member sends a message to
this group PM, then the suspended user will receive an email.
This happens because a suspended user can only receive emails from staff
members. But in this case, this can be seen as a bug, as the expected
behavior would instead be to not send any email to the suspended user. A
staff member can participate in active discussions like any other
member and so their messages in this context shouldn’t be treated
differently than the ones from regular users.
This patch addresses this issue by checking whether a suspended user is
receiving a message from a group PM. If that's the case, an email won't
be sent, regardless of whether the post originated from a staff member.
* UX: add type tag and design update
* UX: clarify status copy in reviewQ
* DEV: switch to selectKit
* UX: color approve/reject buttons in RQ
* DEV: regroup actions
* UX: add type tag and design update
* UX: clarify status copy in reviewQ
* Join questions for flagged post with "or" with new I18n function
* Move ReviewableScores component out of context
* Add CSS classes to reviewable-item based on human type
* UX: add table header for scoring
* UX: don't display % score
* UX: prefix modifier class with dash
* UX: reviewQ flag table styling
* UX: consistent use of ignore icon
* DEV: only show context question on pending status
* UX: only show table headers on pending status
* DEV: reviewQ regroup actions for hidden posts
* UX: reviewQ > approve/reject buttons
* UX: reviewQ add fadeout
* UX: reviewQ styling
* DEV: move scores back into component
* UX: reviewQ mobile styling
* UX: score table on mobile
* UX: reviewQ > move meta info outside table
* UX: reviewQ > score layout fixes
* DEV: readd `agree_and_keep` and fix the spec tests.
* Fix the spec tests
* fix the qunit test
* DEV: readd deleting replies
* UX: reviewQ copy tweaks
* DEV: readd test for ignore + delete replies
* Remove old
* FIX: Add perform_ignore back in for backwards compat
* DEV: add an action alias `ignore` for `ignore_and_do_nothing`.
---------
Co-authored-by: Martin Brennan <martin@discourse.org>
Co-authored-by: Vinoth Kannan <svkn.87@gmail.com>
- Reduce duplication of terms in post index from unlimited to 6. This will
result in reduced index size and reduced weighting for posts containing
a huge amount of duplicate terms. (E.g. a post containing "sam sam sam sam
sam sam sam sam" will index as "sam sam sam sam sam sam", only including
the word up to 6 times.) This corrects a flaw where title weighting could
be ignored.
- Prioritize exact matches of words in titles. Our search always performs
a prefix match, but we want to give special weight to exact title matches,
so that a search for "sum" ranks topics such as "the sum of us" above
"summer in spring".
- Pick up fixes to our search algorithm which are missing from old indexes.
Specifically pick up the fix that indexes URLs properly. (`https://happy.com`
was stemmed to `happi` in keywords and then was not searchable)
see also:
https://meta.discourse.org/t/refinements-to-search-being-tested-on-meta/254158
Indexing will take a while and work in batches, in the background.
So it can easily be overridden in a plugin, for example.
### Added more tests to provide better coverage
We previously only had `u.silenced_till IS NULL`, but I made it consistent with pretty much every other place where we check for "active" users.
These two new lines do change the query a tiny bit though.
**Before**
- You could not get the badge if you were currently silenced (no matter what period is being checked)
- You could get the badge if you were suspended 😬
**After**
- You can't get the badge if you were silenced during the past year
- You can't get the badge if you were suspended during the past year
### Improved the performance of the query by using `NOT EXISTS` instead of `LEFT JOIN / COUNT() = 0`
There is no difference in behaviour between
```sql
LEFT JOIN user_badges AS ub ON ub.user_id = u.id AND ...
[...]
HAVING COUNT(ub.*) = 0
```
and
```sql
NOT EXISTS (SELECT 1 FROM user_badges AS ub WHERE ub.user_id = u.id AND ...)
```
The only difference is performance-wise: `NOT EXISTS` is 10-30% faster on very large databases (i.e. with posts and users in the millions). I checked on 3 of the largest datasets I could find.
```
class Jobs::DummyDelayedJob < Jobs::Base
  def execute(args = {})
  end
end

RSpec.describe "Jobs.run_immediately!" do
  before { Jobs.run_immediately! }

  it "explodes" do
    current_user = Fabricate(:user)
    Jobs.enqueue_in(1.seconds, :dummy_delayed_job)
    sign_in(current_user)
  end
end
```
The test above will fail with the following error if `ActiveRecord::Base.connection_handler.clear_active_connections!` is called before the configured Capybara server checks out a connection from the connection pool.
```
ActiveRecord::ActiveRecordError:
Cannot expire connection, it is owned by a different thread: #<Thread:0x00007f437391df58@puma srv tp 001 /home/tgxworld/.asdf/installs/ruby/3.1.3/lib/ruby/gems/3.1.0/gems/puma-6.0.2/lib/puma/thread_pool.rb:106 sleep_forever>. Current thread: #<Thread:0x00007f437d6cfc60 run>.
```
We're not exactly sure whether this is an ActiveRecord bug, but we've
invested too much time into investigating this problem. Fundamentally,
we also no longer understand why `ActiveRecord::Base.connection_handler.clear_active_connections!` is being called in an ensure block
within `Jobs::Base#perform`, which was added in
ceddb6e0da 10 years ago. This
commit moves the logic for running jobs immediately out of the
`Jobs::Base#perform` method into a new `Jobs::Base#perform_immediately` method such that
`ActiveRecord::Base.connection_handler.clear_active_connections!` is not
called. This change only impacts the test environment.
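A hedged sketch of the resulting split (simplified; `perform_job` is a hypothetical stand-in for the shared body):

```ruby
class Jobs::Base
  # Used by the test environment when Jobs.run_immediately! is active;
  # crucially, it does not clear active connections afterwards.
  def perform_immediately(opts)
    perform_job(opts)
  end

  def perform(opts)
    perform_job(opts)
  ensure
    ActiveRecord::Base.connection_handler.clear_active_connections!
  end
end
```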
When sending emails out via group SMTP, if we
are sending them to non-staged users we want
to mask those emails with BCC, just so we don't
expose them to anyone we shouldn't. Staged users
are ones that have likely only interacted with
support via email, and will likely include other
people who were CC'd on the original email to the
group.
Co-authored-by: Martin Brennan <martin@discourse.org>
A few specs in `dashboard_controller_spec.rb` set some state in redis but don't clean it up afterwards, which causes other specs to fail when they're run after `dashboard_controller_spec.rb`.
Related commit: 18467d4.
* DEV: Skip push notifications for active online users
Currently, users with active push subscriptions get push notifications
regardless of their "presence" on the site.
This change introduces a `push_notification_time_window_mins`
site setting which is used in conjunction with a user's `last_seen_at` to
determine if push notifications should be sent. A user is considered to
be actively online if their `last_seen_at` is within `push_notification_time_window_mins`
minutes. `push_notification_time_window_mins` is set to 10 by default.
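A hedged sketch of the gating logic (the site setting name comes from this commit; the method name is illustrative):

```ruby
def send_push?(user)
  window = SiteSetting.push_notification_time_window_mins.minutes
  # Skip the push when the user was seen within the window,
  # i.e. they are actively online.
  user.last_seen_at.nil? || user.last_seen_at < window.ago
end
```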
* DEV: Remove client param for push_notification_time_window_mins site setting
Co-authored-by: Bianca Nenciu <nbianca@users.noreply.github.com>
This commit adds a new notification that gets sent to admins when the site gets new features after an upgrade/deploy. Clicking on the notification takes the admin to the admin dashboard at `/admin` where they can see the new features under the "New Features" section.
Internal topic: t/87166.
FEATURE: Chat and Sidebar are now on by default
- Set the sidebar site setting to be enabled by default
- Set the chat site setting to be enabled by default
- Updated existing specs that assumed the original default
- Use a migration to keep old defaults for existing sites
This new site setting replaces the
`enable_experimental_sidebar_hamburger` and `enable_sidebar` site
settings as the sidebar feature exits the experimental phase.
Note that we're replacing this without deprecation since the previous
site setting was considered experimental.
Internal Ref: /t/86563
Users who can access the review queue can claim pending reviewables, which means a claimed reviewable can only be handled by the user who claimed it. Currently, we show claimed reviewables in the user menu, but this can be annoying for other reviewers because they can't do anything about a reviewable claimed by someone else. So this PR makes sure that the user menu only shows reviewables that are claimed by nobody or claimed by the current user.
Internal topic: t/77235.
While load testing our user creation code path in production, we
identified that executing the DB statement to update the `Group#user_count` column within a
transaction is creating a bottleneck for us. This is because the
creation of a user and addition of the user to the relevant groups are
done in a transaction. When we execute the DB statement to update
`Group#user_count` for the relevant group, a row level lock is held
until the transaction completes. This row level lock acts like a global
lock when the server is creating users that will be added to the same
group in quick succession.
Instead of updating the counter cache within a transaction which the
default ActiveRecord `counter_cache` option does, we simply update the
counter cache outside of the committing transaction.
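A hedged sketch of the idea (simplified and hypothetical in shape): keep the `Group#user_count` update out of the user-creation transaction so the row lock on `groups` is held only briefly.

```ruby
User.transaction do
  user.save!
  group.group_users.create!(user: user) # no counter_cache update here
end

# Incremented after the transaction commits, outside the long-held lock.
Group.where(id: group.id).update_all("user_count = user_count + 1")
```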
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
Previously we would unconditionally fetch all images via HTTP to grab the
original sizing in 2 different spots in the cooked post processor.
This was wasteful as we already calculate and cache this info in upload records.
This also simplifies some specs and reduces use of mocks.
Adds a new upload field for a second dark mode category logo.
This alternative will be used when the browser is in dark mode (similar to the global site setting for a dark logo).
Skipped invites were not counted at all, and some invites could generate
more than one error, resulting in a grand total that was not equal to
the count of bulk invites.
This commit renames all secure_media related settings to secure_uploads_* along with the associated functionality.
This is being done because "media" does not really cover it, we aren't just doing this for images and videos etc. but for all uploads in the site.
Additionally, in future we want to secure more types of uploads, and enable a kind of "mixed mode" where some uploads are secure and some are not, so keeping media in the name is just confusing.
This also keeps compatibility with the `secure-media-uploads` path, and changes new
secure URLs to be `secure-uploads`.
Deprecated settings:
* secure_media -> secure_uploads
* secure_media_allow_embed_images_in_emails -> secure_uploads_allow_embed_images_in_emails
* secure_media_max_email_embed_image_size_kb -> secure_uploads_max_email_embed_image_size_kb
See https://meta.discourse.org/t/discourse-email-messages-are-incorrectly-threaded/233499
for thorough reasoning.
This commit changes how we generate Message-IDs and do email
threading for emails sent from Discourse. The main changes are
as follows:
* Introduce an outbound_message_id column on Post that
is either a) filled with a Discourse-generated Message-ID
the first time that post is used for an outbound email
or b) filled with an original Message-ID from an external
mail client or service if the post was created from an
incoming email.
* Change Discourse-generated Message-IDs to be more consistent
and static, in the format `discourse/post/:post_id@:host`
* Do not send References or In-Reply-To headers for emails sent
for the OP of topics.
* Make sure that In-Reply-To is filled with either a) the OP's
Message-ID if the post is not a direct reply or b) the parent
post's Message-ID
* Make sure that In-Reply-To has all referenced posts' Message-IDs
* Make sure that References is filled with a chain of Message-IDs
from the OP down to the parent post of the new post.
We also are keeping X-Discourse-Post-Id and X-Discourse-Topic-Id,
headers that we previously removed, for easier visual debugging
of outbound emails.
Finally, we backfill the `outbound_message_id` for posts that have
a linked `IncomingEmail` record, using the `message_id` of that record.
We do not need to do that for posts that don't have an incoming email
since they are backfilled at runtime if `outbound_message_id` is missing.
`delete_previous!` deletes existing topics even when we cannot send a new one due to the `limit_once_per` option. The dashboard problems PM gets deleted the next time the job runs (every 30 minutes), so the inbox could be empty when
admins click on the summary notification.
Our internal implementation of #perform on jobs performs remapping.
This happens because we do "exception aggregation":
scheduled jobs run on every site in the multisite cluster, and we report
one error per site that failed. During this aggregation we reshape the
context from the original object shape returned by mini_scheduler.
The new integration test ensures this interface will remain stable even if
decoupled parts of the code change shape.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Mutating the `raw` variable in place like this caused issues upstream, where the modification was not persisted. Instead, we should allocate a new string, like the other replacement methods do.
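A minimal illustration of the difference (the URLs are made up):

```ruby
raw = "![img](upload://abc)"

# In-place mutation alters the string object shared with the caller:
raw.gsub!("upload://abc", "upload://def")

# Allocating a new string, like the other replacement methods, does not:
new_raw = raw.gsub("upload://def", "upload://ghi")
```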
If an image is oneboxed directly, then we should replace the onebox URL with a markdown image tag. This ensures that the wrapper link points to the downloaded version rather than the original.
This regressed in bf6f8299
It's already included in the `ignored_columns` list in the group model. 03ffb0bf27/app/models/group.rb (L9)
Also, removed the `MigrateGroupFlairImages` onceoff job and spec.
This table holds associations between uploads and other models. This can be used to prevent removing uploads that are still in use.
* DEV: Create upload_references
* DEV: Use UploadReference instead of PostUpload
* DEV: Use UploadReference for SiteSetting
* DEV: Use UploadReference for Badge
* DEV: Use UploadReference for Category
* DEV: Use UploadReference for CustomEmoji
* DEV: Use UploadReference for Group
* DEV: Use UploadReference for ThemeField
* DEV: Use UploadReference for ThemeSetting
* DEV: Use UploadReference for User
* DEV: Use UploadReference for UserAvatar
* DEV: Use UploadReference for UserExport
* DEV: Use UploadReference for UserProfile
* DEV: Add method to extract uploads from raw text
* DEV: Use UploadReference for Draft
* DEV: Use UploadReference for ReviewableQueuedPost
* DEV: Use UploadReference for UserProfile's bio_raw
* DEV: Do not copy user uploads to upload references
* DEV: Copy post uploads again after deploy
* DEV: Use created_at and updated_at from uploads table
* FIX: Check if upload site setting is empty
* DEV: Copy user uploads to upload references
* DEV: Make upload extraction less strict
Previously, with the default `editing_grace_period`, hotlinked images were pulled 5 minutes after a post was created. This delay was added to reduce the chance of automated edits clashing with user edits.
This commit refactors things so that we can pull hotlinked images immediately. URLs are immediately updated in the post's `cooked` HTML. The post's raw markdown is updated later, after the `editing_grace_period`.
This involves a number of behind-the-scenes changes including:
- Schedule Jobs::PullHotlinkedImages immediately after Jobs::ProcessPost. Move scheduling to after the `update_column` call to avoid race conditions
- Move raw changes into a separate job, which is delayed until after the ninja-edit window
- Move disable_if_low_on_disk_space logic into the `pull_hotlinked_images` job
- Move raw-parsing/replacing logic into `InlineUpload` so it can be easily be shared between `UpdateHotlinkedRaw` and `PullUserProfileHotlinkedImages`
Incorporates learnings from /t/64227:
* Changes the code to set access control posts in the rake
task to be an efficient UPDATE SQL query.
The original version was timing out with 312017 post uploads;
the new query took ~3s to run.
* Changes the code to mark uploads as secure/not secure in
the rake task to be an efficient UPDATE SQL query rather than
using UploadSecurity. This took a very long time previously,
and now takes only a few seconds.
* Spread out ACL syncing for uploads into jobs with batches of
100 uploads at a time, so they can be parallelized instead
of having to wait ~1.25 seconds for each ACL to be changed
in S3 serially (see the sketch after this list).
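A hedged sketch of that fan-out (the job name and upload scope are illustrative):

```ruby
Upload.where(secure: true).in_batches(of: 100) do |batch|
  Jobs.enqueue(:sync_acls_for_uploads, upload_ids: batch.pluck(:id))
end
```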
One issue that still remains is post rebaking. Doing this serially
is painfully slow. We have a way to do this in sidekiq via PeriodicalUpdates
but this is limited by max_old_rebakes_per_15_minutes. It would
be better to fan this rebaking out into jobs like we did for the
ACL sync, but that should be done in another PR.
This commit migrates all bookmarks to be polymorphic (using the
bookmarkable_id and bookmarkable_type) columns. It also deletes
all the old code guarded behind the use_polymorphic_bookmarks setting
and changes that setting to true for all sites and by default for
the sake of plugins.
No data is deleted in the migrations, the old post_id and for_topic
columns for bookmarks will be dropped later on.
- Only validate if custom_fields are loaded, so that we don't trigger a db query
- Only validate public user fields, not all custom_fields
This commit also reverts the unrelated spec changes in ba148e08, which were required to work around these issues
Currently we don’t apply watched words to custom user fields nor user
profile fields.
This led to users being able to use blocked words in their bio, location
or some custom user fields.
This patch addresses this issue by adding some validations so it’s not
possible anymore to save the User model or the UserProfile model if they
contain blocked words.
A bit of a mixed bag, this addresses several edge areas of bookmarks and makes them compatible with polymorphic bookmarks (hidden behind the `use_polymorphic_bookmarks` site setting). The main ones are:
* ExportUserArchive compatibility
* SyncTopicUserBookmarked job compatibility
* Sending different notifications for the bookmark reminders based on the bookmarkable type
* Import scripts compatibility
* BookmarkReminderNotificationHandler compatibility
This PR also refactors the `register_bookmarkable` API so it accepts a class descended from a `BaseBookmarkable` class instead. This was done because we kept having to add more and more lambdas/properties inline and it was very messy, so a factory pattern is cleaner. The classes can be tested independently as well.
Some later PRs will address some other areas like the discourse narrative bot, advanced search, reports, and the .ics endpoint for bookmarks.
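A hedged sketch of the class-based registration described above (the method names are illustrative, not the exact `BaseBookmarkable` contract):

```ruby
class PostBookmarkable < BaseBookmarkable
  def self.model
    Post
  end

  def self.serializer
    UserPostBookmarkSerializer
  end
end

register_bookmarkable(PostBookmarkable)
```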
`TestLogger` was responsible for some flaky spec runs:
```
Error during failsafe response: undefined method `debug' for #<TestLogger:0x0000556c4b942cf0 @warnings=1>
Did you mean? debugger
```
This commit also cleans up other uses of `FakeLogger`
When changing upload security using `Upload#update_secure_status`,
we may not have the context of how an upload is being created, because
this code path can be run through scheduled jobs. When calling
update_secure_status, the normal ActiveRecord validations are run,
and ours include validating extensions. In some cases the upload
is created in an automated way, such as user export zips, and the
security is applied later, even though the extension would be
prohibited in a normal upload.
This caused the upload to fail validation on `update_secure_status`,
causing the security change to silently fail. This fixes the issue
by skipping the file extension validation when the upload security
is being changed.
Sometimes we need to update a _lot_ of ACLs on S3 (such as when secure media
is enabled), and since it takes ~1s per upload to update the ACL, this is
best spread out over many jobs instead of having to do the whole thing serially.
In future, it will be better to have a job that can be run based on
a column on uploads (e.g. acl_stale) so we can track progress, similar
to how we can set the baked_version to nil to rebake posts.
raw_html posts (i.e. those which are pulled as part of our comments integration) don't go through our markdown pipeline, so `upload://` URLs are not supported. Running pull_hotlinked_images will break any images in the post.
In future we may add support for pulling hotlinked images in these posts. But for now, disabling it will stop it breaking images.
Under some conditions, replacing an `<img` with `![]()` can break rendering, and make the image disappear.
Context at https://meta.discourse.org/t/152801
The user can select what happens with a bookmark after it expires. A new
option allows the bookmark's reminder to be kept even after it has expired.
After a bookmark's reminder notification is created, the reminder date
will be highlighted in red until the user resets the reminder date.
The user can do that using the new Clear Reminder button from the dropdown.
It's very easy to forget to add `require 'rails_helper'` at the top of every core/plugin spec file, and omissions can cause some very confusing/sporadic errors.
By setting this flag in `.rspec`, we can remove the need for `require 'rails_helper'` entirely.
This allows text editors to use correct syntax coloring for the heredoc sections.
Heredoc tag names we use:
languages: SQL, JS, RUBY, LUA, HTML, CSS, SCSS, SH, HBS, XML, YAML/YML, MF, ICS
other: MD, TEXT/TXT, RAW, EMAIL
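For example, a tagged heredoc that editors can highlight as SQL:

```ruby
query = <<~SQL
  SELECT id, username
  FROM users
  WHERE admin
SQL
```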
This commit introduces two new APIs for handling unused uploads: one
can be used to exclude uploads in bulk when the data model allows, and
the other one excludes uploads one by one.
Breakdown of fixes in this commit:
* `UserStat#topic_count` was not updated when visibility of
the topic changed.
* `UserStat#post_count` was not updated when post was hidden or
unhidden.
* `TopicConverter` was only incrementing or decrementing the counts by 1
even if a user had multiple posts in the topic.
* The commit turns off the verbose logging by default as it is just
noise to normal users who are not debugging this problem.
This commit introduces our own handling and warning for Sidekiq's new 'non-json-serializable' warning. This decouples us from Sidekiq's own deprecation cycle, and allows us to use our own deprecation system. It also means that the dump/parse happens in test mode, which will help us to catch occurrences before they reach production.
This commit introduces scheduled problem checks for the admin dashboard, which are long running or otherwise cumbersome problem checks that will be run every 10 minutes rather than every time the dashboard is loaded. If these scheduled checks add a problem, the problem will remain until it is cleared or until the scheduled job runs again.
An example of a check that should be scheduled is validating credentials against an external provider.
This commit also introduces the concept of a `priority` to the problems generated by `AdminDashboardData` and the scheduled checks. This is `low` by default, and can be set to `high`, but this commit does not change any part of the UI with this information, only adds a CSS class.
I will be making a follow up PR to check group SMTP credentials.
We previously used ConsolidateNotifications with a threshold of 1 to re-use an existing notification and bump it to the top instead of creating a new one. It produces some jumpiness in the user notification list, and it relies on updating the `created_at` attribute, which is a bit hacky.
As a better alternative, we're introducing a new plan that deletes all the previous versions of the notification, then creates a new one.
We send the reminder using the GroupMessage class, which supports removing previous messages. We can't match them by raw because they could mention different moderators. Also, I had to change the subject to remove dynamically generated values, which is necessary for finding them.
Currently the Message-IDs we send out for outbound email
are not unique; for a post they look like:
topic/TOPIC_ID/POST_ID@HOST
And for a topic they look like:
topic/TOPIC_ID@HOST
This commit changes the outbound Message-IDs to also have
a random suffix before the host, so the new format is
like this:
topic/TOPIC_ID/POST_ID.RANDOM_SUFFIX@HOST
Or:
topic/TOPIC_ID.RANDOM_SUFFIX@HOST
This should help with email deliverability. This change
is backwards-compatible, the old Message-ID format will
still be recognized in the mail receiver flow, so people
will still be able to reply using Message-IDs, In-Reply-To,
and References headers that have already been sent.
This commit also refactors Message-ID related logic
to a central location, and adds judicious amounts of
tests and documentation.