This change is mainly a refactor of the desktop notifications service to improve readability and standardise the values used to track the current user's state with regard to the Notification API and Push API.
It also improves readability when handling push notification jobs, especially in scenarios where the `push_notification_time_window_mins` site setting is set to 0, which allows push notifications to be sent instantly.
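As a rough illustration of that last point (a sketch only, not the actual service code; the helper name and arguments are assumptions), the time-window rule boils down to:
```
# Sketch only (not the actual notification service code): the "instant push"
# rule when the time window setting is 0.
def push_instantly?(seconds_since_user_seen, window_mins)
  # A window of 0 minutes means there is no delay: always push immediately.
  # Otherwise only push once the user has been inactive for the whole window.
  window_mins.zero? || seconds_since_user_seen >= window_mins * 60
end

puts push_instantly?(30, 0)  # => true  (window of 0: push instantly)
puts push_instantly?(30, 10) # => false (user was active 30 seconds ago)
```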
* FIX: Ensure JsLocaleHelper only outputs up-to-date translations
The old implementation did not filter out deprecated translations, causing
them to incorrectly override the current locale in the frontend.
This commit adds the missing where clause so that only up-to-date
translations are included.
Related meta topic: https://meta.discourse.org/t/outdated-translation-replacement-causing-missing-translation/314352
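A minimal sketch of the kind of filter this adds (plain Ruby stand-in; the real code scopes ActiveRecord translation override records, and the `:deprecated` status value here is illustrative):
```
# Sketch only: hash layout and status values are illustrative.
overrides = [
  { key: "js.topic.title", value: "Titre", status: :up_to_date },
  { key: "js.topic.reply", value: "Répondre", status: :deprecated },
]

# The previously missing filter: keep only up-to-date overrides so a
# deprecated value can no longer clobber the current locale string.
current = overrides.reject { |o| o[:status] == :deprecated }
puts current.map { |o| o[:key] }.inspect # => ["js.topic.title"]
```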
In development, classes are lazy loaded, so `Jobs::Onceoff.onceoff_job_klasses`
may not have been set. This is not a problem in production because classes
are eager loaded.
Follow-up to f4d06f195d
We are investigating a memory leak in Sidekiq and saw the following line
when comparing heap dumps over time.
```
Allocated IMEMO 14775 objects of size 591000/7389528 (in bytes) at:
/var/www/discourse/app/jobs/onceoff/onceoff.rb:36
```
The line in question was doing a `.select { |klass| klass < self }` on
`ObjectSpace.each_object(Class)`. For some reason this allocates a large
number of `IMEMO` objects, which are instruction sequence objects.
Instead of diving deeper into why this might be leaking, we can save time
by switching to an implementation that is more efficient and does not
require looping through a huge number of objects.
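A minimal sketch of the kind of replacement described here, assuming a registry populated via Ruby's `inherited` hook (class names are illustrative, not the actual Discourse code):
```
require "set"

# Sketch only: illustrative class names, not the actual Discourse fix.
class OnceoffJob
  def self.onceoff_job_klasses
    @onceoff_job_klasses ||= Set.new
  end

  # Each subclass registers itself as soon as it is defined, so there is no
  # need to walk ObjectSpace.each_object(Class) to discover them later.
  def self.inherited(klass)
    super
    OnceoffJob.onceoff_job_klasses << klass
  end
end

class MigrateWidgets < OnceoffJob; end

puts OnceoffJob.onceoff_job_klasses.to_a.inspect # => [MigrateWidgets]
```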
Background:
In order to redrive failed webhook events, an operator currently has to go through and click on each one. This PR adds a mechanism to retry all failed events, to help resolve issues quickly once the underlying failure has been resolved.
What is the change?:
Previously, we had to redeliver each webhook event individually. This merge adds a 'Redeliver Failed' button next to the webhook event filter to redeliver all failed events. If there are no failed webhook events to redeliver, 'Redeliver Failed' is disabled. Clicking it opens a confirmation dialog for the operator. Failed webhook events are added to a queue, and the webhook event list shows the redelivery progress. Every minute, a job runs through 20 queued events and redelivers them. Every hour, another job cleans up redelivery records that have been stored for more than 8 hours.
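A rough sketch of the two jobs described above (class names, data layout and constants are assumptions for illustration, not the actual implementation):
```
# Sketch only: illustrative names and structure, not the actual job classes.
class RedeliverFailedWebhookEvents
  BATCH_SIZE = 20 # redeliver at most this many queued events per run

  def execute(queue)
    queue.shift(BATCH_SIZE).each { |event| redeliver(event) }
  end

  def redeliver(event)
    puts "redelivering webhook event #{event[:id]}"
  end
end

class CleanUpRedeliveringWebhookEvents
  MAX_AGE_SECONDS = 8 * 60 * 60 # drop queued records older than 8 hours

  def execute(queue, now: Time.now)
    queue.reject! { |event| now - event[:queued_at] > MAX_AGE_SECONDS }
  end
end

queue = [
  { id: 1, queued_at: Time.now },
  { id: 2, queued_at: Time.now - 9 * 60 * 60 }, # queued 9 hours ago
]
CleanUpRedeliveringWebhookEvents.new.execute(queue) # removes event 2
RedeliverFailedWebhookEvents.new.execute(queue)     # redelivers event 1
```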
I am changing many of these to notes or resolving them as-is. I have not
actively worked on most of these in years, so someone else can pick them
up when we get to these areas again.
This commit introduces a hidden `s3_inventory_bucket` site setting which
replaces the `enable_s3_inventory` and `s3_configure_inventory_policy`
site settings.
The `enable_s3_inventory` and `s3_configure_inventory_policy` site
settings are being removed because this feature has technically been
broken since it was introduced. When the `enable_s3_inventory` feature
is turned on, the app configures a daily inventory policy for the
`s3_upload_bucket` bucket and stores the inventories under a prefix in
the bucket. The problem is that once the inventories are created,
nothing cleans them up, so whoever enabled this feature would have been
paying the cost of storing a growing pile of inventory files which are
never used. Given that we have not received any complaints about
inventory files inflating S3 storage costs, we think it is very likely
that this feature is no longer being used, and we are looking to drop
support for it in the not too distant future.
For now, we will still support a hidden `s3_inventory_bucket` site
setting which site administrators can configure via the
`DISCOURSE_S3_INVENTORY_BUCKET` environment variable.
This commit updates `S3Inventory#files` to ignore S3 inventory files
whose `last_modified` timestamp is not at least 2 days older than the
`BackupMetadata.last_restore_date` timestamp.
This check was previously only in `Jobs::EnsureS3UploadsExistence` but
`S3Inventory` can also be used via Rake tasks so this protection needs
to be in `S3Inventory` and not in the scheduled job.
This PR introduces a basic AdminNotice model to store these notices. Admin notices are categorized by their source/type (currently only notices from problem checks). They also have a priority.
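For illustration only, such a notice might carry fields along these lines (the attribute names and values below are assumptions, not the actual AdminNotice schema):
```
# Sketch only: attribute names and values are assumptions for illustration.
AdminNotice = Struct.new(:source, :identifier, :priority, :details, keyword_init: true)

notice = AdminNotice.new(
  source: :problem_check,           # currently the only source of notices
  identifier: "hypothetical_check", # which notice this is
  priority: :high,                  # notices also carry a priority
  details: { message: "Something needs admin attention" }
)
puts notice.priority # => high
```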
This commit introduces a few changes as a result of customers having
difficulty finding out why a topic was relisted.
In one case, if a user edited the OP of a topic that was
unlisted and hidden because of too many flags, the topic
would get relisted by directly changing topic.visible,
instead of going via TopicStatusUpdater.
To improve tracking we:
* Introduce a visibility_reason_id on topic which functions
in a similar way to hidden_reason_id on post; this column is
set from the various places we change topic visibility
* Fix Post#unhide!, which was directly modifying topic.visible;
instead we use TopicStatusUpdater, which sets visibility_reason_id
and also makes a small action post
* Show the reason topic visibility changed when hovering the
unlisted icon in topic status on topic titles
In a large forum with millions of users and millions of user_fields,
updating the list of dropdown user field options currently results in a
502 due to the large number of fields.
This commit moves the indexing into a job.
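A minimal sketch of the pattern (illustrative names, not the actual Discourse job): the web request only enqueues the work, and a background worker does the heavy lifting later:
```
# Sketch only: illustrative names, not the actual Discourse job classes.
class IndexUserFieldOptions
  QUEUE = Queue.new

  # The web request only enqueues the work and returns immediately,
  # instead of reindexing millions of rows inline (which caused the 502).
  def self.enqueue(user_field_id)
    QUEUE << user_field_id
  end

  # A background worker drains the queue later.
  def self.work_off
    reindex(QUEUE.pop) until QUEUE.empty?
  end

  def self.reindex(user_field_id)
    puts "reindexing options for user field #{user_field_id}"
  end
end

IndexUserFieldOptions.enqueue(42)
IndexUserFieldOptions.work_off
```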
Previously, when a new site was created, after the first admin login no one would receive notifications to review the user approval queue, since only moderators would receive the PMs about it. Also, this PR changes the `pending_users_reminder_delay_minutes` site setting to 5 minutes while the site is in bootstrap mode.
This adds a hidden site setting, `skip_email_bulk_invites`.
If set to `true`, the `BulkInvite` job will pass the value to `Invite`, meaning the generated invite won't trigger an email notification being sent to the newly invited user.
(This is useful if you want to manage the sending of the invite emails outside of Discourse.)
Doing the following renames:
Jobs::ProblemChecks → Jobs::RunProblemChecks
Jobs::ProblemCheck → Jobs::RunProblemCheck
This is to disambiguate the ProblemCheck class name, ease fuzzy finding, and avoid needing to use :: in a bunch of places.
As part of the problem checks refactoring, we're moving some data to be DB-backed. In this PR it's the tracking of problem check execution: when it was last run, when the last problem occurred, when it should run next, how many consecutive checks had problems, etc.
This allows us to implement the `perform_every` feature in scheduled problem checks, for checks that don't need to be run every 10 minutes.
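A minimal sketch of the idea, assuming hypothetical field and method names: each check gets a DB-backed tracker row, and the scheduler only runs a check once its next run time has passed:
```
# Sketch only: illustrative names, not the actual tracker model.
Tracker = Struct.new(:identifier, :perform_every, :last_run_at, :blips, keyword_init: true) do
  def next_run_at
    last_run_at + perform_every
  end

  def due?(now = Time.now)
    last_run_at.nil? || now >= next_run_at
  end

  # Record an execution: bump the timestamp and count consecutive problems.
  def record_run!(problem:, now: Time.now)
    self.last_run_at = now
    self.blips = problem ? blips + 1 : 0
  end
end

tracker = Tracker.new(identifier: :hypothetical_check, perform_every: 24 * 60 * 60, last_run_at: nil, blips: 0)
puts tracker.due?  # => true, never run before
tracker.record_run!(problem: true)
puts tracker.due?  # => false until 24 hours have passed
puts tracker.blips # => 1 consecutive problem so far
```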
Now forums can enroll their sites to be showcased in the Discourse [Discover](https://discourse.org/discover) directory. Once they enable the `include_in_discourse_discover` site setting to enroll their forum, the `CallDiscourseHub` job will ping the `api.discourse.org/api/discover/enroll` endpoint. The Discourse Hub will then fetch the basic details from the forum and add it to the review queue. If the site is approved, the forum details will be displayed on the `/discover` page.
Also, remove the experimental setting and simply use top_menu for feature detection.
This means that when people eventually enable the hot top menu, there will
be topics in it.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Previously, problem checks were all added as either class methods or blocks in AdminDashboardData. Another set of class methods was used to add and run problem checks.
As of this PR, problem checks are promoted to first-class citizens. Each problem check receives its own class. This class of course contains the implementation for running the check, but also configuration items like retry strategies (for scheduled checks).
In addition, the parent class ProblemCheck also serves as a registry for checks. For example, we can get a list of all existing check classes through `ProblemCheck.checks`, or just the ones running on a schedule through `ProblemCheck.scheduled`.
After this refactor, the task of adding a new check is significantly simplified. You add a class that inherits from ProblemCheck, implement it, add a test, and you're good to go.
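For illustration, a new check might look roughly like this; the base class shown is a stand-in and the helper methods are assumptions based on the description above, not the exact ProblemCheck API:
```
# Sketch only: a stand-in base class with hypothetical helpers, to show the
# shape of a check; the real ProblemCheck API may differ.
class ProblemCheck
  class << self
    attr_accessor :perform_every
  end

  def no_problem!
    nil
  end

  def problem!(message)
    message
  end
end

class HypotheticalDiskSpaceCheck < ProblemCheck
  # A configuration item: scheduled daily rather than every 10 minutes.
  self.perform_every = 24 * 60 * 60

  def call
    free_disk_percentage > 10 ? no_problem! : problem!("Free disk space is below 10%")
  end

  private

  def free_disk_percentage
    42 # a real check would inspect the filesystem
  end
end

puts HypotheticalDiskSpaceCheck.new.call.inspect # => nil (no problem found)
```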
A while ago we increased group SMTP read and open timeouts
to address issues we were seeing with Gmail sometimes taking
a really long time to respond. The commit was:
3e639e4aa7
Now, we want to increase all SMTP read timeouts to 30s,
since 5s is sometimes too low, and the Ruby Net::SMTP
stdlib also defaults to 30s.
Also, we want to slightly tweak the group SMTP email job
so it does not fail if the IncomingEmail log fails to be created, or if
a ReadTimeout is encountered, to avoid retrying the job in Sidekiq
and sending the same email out again.
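A minimal sketch of the "don't retry on these errors" idea (illustrative structure, not the actual group SMTP job):
```
require "net/smtp"

# Sketch only: illustrative structure, not the actual group SMTP email job.
class HypotheticalGroupSmtpJob
  def execute(args)
    send_mail(args)
  rescue Net::ReadTimeout => e
    # Swallow the timeout instead of raising: raising would make Sidekiq
    # retry the job and potentially send the same email again.
    warn "group SMTP send timed out: #{e.message}"
  end

  def send_mail(args)
    # The real job delivers via SMTP (with a 30s read timeout) and creates
    # an IncomingEmail log entry; failures creating that log are likewise
    # rescued rather than failing the whole job.
    raise Net::ReadTimeout if args[:simulate_timeout]
  end
end

HypotheticalGroupSmtpJob.new.execute(simulate_timeout: true) # warns, does not raise
```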
We just completed the 3.2 release, which marks a good time to drop some previously deprecated columns.
The column has been marked in ignored_columns, so it has been inaccessible to application code since then. There's a tiny risk that this might break a Data Explorer query, but given the nature of the column, the years of disuse, and the fact that such a breakage wouldn't be critical, we accept it.
When exporting a CSV file, if the size of the file exceeded
max_export_file_size_kb, we would still send the PM saying the export
succeeded, with a broken link to a missing export file. This change
ensures that a failure message is sent instead.
Currently, when exporting a list of users and there is an error, we just
log that there was an error, but we don't show what the issue is in the
logs, which makes it really hard to debug in production. This change
outputs the error details to the logs.
We want to exclude the system user from group user counts, since intuitively admins wouldn't include them.
Originally this was accomplished by booting said system user from the groups, but this caused problems, because the system user needs TL group membership to perform certain tasks.
After this PR, the system user is still in the TL groups, but is excluded when refreshing the user count.
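A rough sketch of the counting change (plain Ruby stand-in for the SQL refresh; identifying the system user by a reserved ID is an assumption for illustration):
```
# Sketch only: the real refresh happens in SQL over group_users.
SYSTEM_USER_ID = -1 # assumed reserved ID for the system user

group_users = [
  { group: "trust_level_1", user_id: SYSTEM_USER_ID },
  { group: "trust_level_1", user_id: 101 },
  { group: "trust_level_1", user_id: 102 },
]

# The system user keeps its membership but is left out of the count.
user_count = group_users.count { |gu| gu[:user_id] != SYSTEM_USER_ID }
puts user_count # => 2
```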
This introduces a new experimental hot sort ordering.
It attempts to float top conversations by prioritizing topics with lots of recent activity (likes and users responding).
The schedule that updates hot topics is disabled unless the hidden site setting `experimental_hot_topics` is enabled.
You can control "decay" with `hot_topic_gravity` and "recency" with `hot_topics_recent_days`.
Data is stored in the new `topic_hot_scores` table and you can check it out on the `/hot` route once
enabled.
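As a rough illustration of a gravity-based hot score (this exact formula is an assumption for illustration, not necessarily the one used for `topic_hot_scores`): recent activity is divided by the topic's age raised to a gravity exponent, so scores decay over time, and a higher gravity makes them decay faster:
```
# Sketch only: an assumed gravity-style formula, not the actual computation.
def hot_score(likes:, responders:, age_hours:, gravity: 1.8)
  activity = likes + responders
  activity / ((age_hours + 2)**gravity)
end

fresh = hot_score(likes: 10, responders: 5, age_hours: 2)
old   = hot_score(likes: 10, responders: 5, age_hours: 48)
puts fresh > old # => true: the same activity ranks higher when it is recent
```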
---------
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
Why this change?
On CI, we have been seeing the "handles job concurrency" test timing out
after 45 seconds. Upon closer inspection of `Jobs::Base#perform`
when cluster concurrency has been set, we see that a thread is spun up
to extend the expiry of a Redis key by 120 seconds every 60 seconds
while the job is still being executed. The thread looks like this before
the fix:
```
keepalive_thread =
  Thread.new do
    while parent_thread.alive? && !finished
      Discourse.redis.without_namespace.expire(cluster_concurrency_redis_key, 120)
      sleep 60
    end
  end
```
In an ensure block of `Jobs::Base#perform`, the thread is stopped by doing
something like this:
```
finished = true
keepalive_thread.wakeup
keepalive_thread.join
```
If the thread is sleeping, `keepalive_thread.wakeup` will interrupt the
`sleep` call and run the next iteration, causing the thread to
complete. However, there is a timing issue at play here. If
`keepalive_thread.wakeup` is called at a time when the thread is not
sleeping, it will have no effect and the thread may end up sleeping for
60 seconds, which is longer than our 45 second timeout on CI.
What does this change do?
1. Change `sleep 60` to sleep in 1 second intervals, checking each time
whether the job has finished (see the sketch after this list).
2. Add `use_redis_snapshotting` to `Jobs::Base` spec since Redis is
involved in scheduling and we want to ensure we don't leak Redis
keys.
3. Add `ConcurrentJob.stop!` and `thread.join` to the `ensure` block in the "handles job concurrency"
test, since a failing expectation would otherwise prevent us from cleaning
up the thread we created in the test.
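A sketch of the revised keepalive loop from point 1, following the variable names in the snippet above (the exact slicing is an assumption):
```
keepalive_thread =
  Thread.new do
    while parent_thread.alive? && !finished
      Discourse.redis.without_namespace.expire(cluster_concurrency_redis_key, 120)

      # Sleep in 1 second slices instead of a single 60 second sleep, so the
      # thread notices `finished` within about a second even if `wakeup`
      # arrives while the thread is not sleeping.
      60.times do
        break if finished
        sleep 1
      end
    end
  end
```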
This commit fixes an issue where, when certain actions were performed
(deleting/recovering a post, moving posts), we updated the
topic_users.bookmarked column to the wrong value. This was happening
because the SyncTopicUserBookmarked job was not taking topic-level
bookmarks into account, so if a user had a Topic bookmark and no
Post bookmarks in the topic, they would have
topic_users.bookmarked set to false, which meant the bookmark would
no longer show in the /bookmarks list.
To reproduce before the fix:
* Bookmark a topic and don’t bookmark any posts within
* Delete or recover any post in the topic
c.f. https://meta.discourse.org/t/disappearing-bookmarks-and-expected-behavior-of-bookmarks/264670/36
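A minimal sketch of the corrected sync logic (plain Ruby stand-in; the real job works over the bookmarks table in SQL, and the polymorphic layout shown is an assumption):
```
# Sketch only: illustrates counting both Topic and Post level bookmarks,
# which is what the real SyncTopicUserBookmarked job now needs to do.
def bookmarked_in_topic?(bookmarks, user_id:, topic_id:, post_ids:)
  bookmarks.any? do |b|
    next false unless b[:user_id] == user_id

    (b[:bookmarkable_type] == "Topic" && b[:bookmarkable_id] == topic_id) ||
      (b[:bookmarkable_type] == "Post" && post_ids.include?(b[:bookmarkable_id]))
  end
end

bookmarks = [{ user_id: 7, bookmarkable_type: "Topic", bookmarkable_id: 99 }]

# Before the fix the job only looked at Post bookmarks, so this user's
# topic_users.bookmarked would have been flipped to false.
puts bookmarked_in_topic?(bookmarks, user_id: 7, topic_id: 99, post_ids: [1, 2, 3]) # => true
```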