Subclasses must call #delete_user_actions inside build_actions to support user deletion. The method adds a delete user bundle, which has a delete and a delete + block option. Every subclass is responsible for implementing these actions.
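A minimal sketch of that wiring, assuming a Reviewable-style subclass; the class name is made up and the exact method signatures may differ:
```
# Sketch only: a subclass opting into user deletion by calling
# delete_user_actions from build_actions.
class ReviewableExample < Reviewable
  def build_actions(actions, guardian, args)
    return unless pending?

    # Adds the delete user bundle (delete, and delete + block).
    delete_user_actions(actions) if guardian.can_delete_user?(target_created_by)
  end
end
```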
Email change requests are never deleted, regardless of whether they completed successfully. These abandoned requests have the disadvantage of showing up as unconfirmed emails on the user's preferences page.
Recalculating a ReviewableFlaggedPost's score after rejecting or ignoring it sets the score to 0, which means we can't find those reviewables after reviewing: they no longer surpass the minimum priority threshold and are hidden.
Additionally, we only want to use agreed flags when calculating the different priority thresholds.
This filter hides reviewables with a score lower than the "reviewable_low_priority_threshold" setting. We only use reviewables that already met this threshold to calculate the Medium and High priority filters.
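Illustratively (not the exact query), the filter amounts to a scope along these lines, keyed off the setting named above:
```
# Hide anything below the low-priority threshold; only reviewables that pass
# this filter feed the Medium and High priority threshold calculations.
min_score = SiteSetting.reviewable_low_priority_threshold
visible_reviewables = Reviewable.where("score >= ?", min_score)
```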
Currently the process of adding a custom image to a badge is quite clunky: you have to upload your image to a topic, then copy the image URL and paste it into a text field. Besides being clunky, if the topic or post that contains the image is deleted, the image will be garbage-collected in a few days and the badge will lose its image, because the application is not aware that the image is referenced by a badge.
This commit improves that by adding a proper image uploader widget for badge images.
* FIX: Reduce the time_read threshold to one minute.
Five minutes is too much and could fill the queue with false positives.
* Update spec/jobs/enqueue_suspect_users_spec.rb
Co-authored-by: Arpit Jalan <arpit@techapj.com>
Completing the discobot tutorial gives you ~3m of reading time, so we set the limit at 5m. Additionally, we use an "OR" clause to cover the case when you just scroll through a single topic.
The original PR was reverted because of a broken migration: https://github.com/discourse/discourse/pull/12058
I fixed it by adding this line:
```
AND topics.id IN(SELECT id FROM topics ORDER BY created_at DESC LIMIT :max_new_topics)
```
This time the query left joins only a limited number of topics. I tested it on a few databases and it worked quite smoothly.
Follow-up to https://github.com/discourse/discourse/pull/11968
Dismiss all new topics using the same DismissTopicService. In addition, MessageBus receives the exact topic ids that should be marked as `seen`.
This is an attempt to simplify the logic around dismissing new topics so that one solution works in all places - dismissing all new topics, dismissing new topics in a specific category, or even in a specific tag.
Lots of changes but it's mostly a refactoring.
The interesting part of the fix is the 'load_problem_<model>_ids' methods.
They will now return records with no associated search data, so those records can be properly indexed for search.
This "bad" state usually happens after a migration.
This adds a new table, UserNotificationSchedules, which stores Monday-Friday start and end times during which each user would like to receive notifications (with a boolean `enabled` flag to turn the schedule off). There is then a background job that runs every day and creates do_not_disturb_timings for each user with an enabled notification schedule. The job schedules timings 2 days in advance. The job is designed so that it can be run at any point in time, and it will not create duplicate records.
When a user saves their notification schedule, the schedule processing service will run and schedule do_not_disturb_timings. If the user should be in DND due to their schedule, the user is immediately put in DND (MessageBus publishes this state).
The UI for a user's notification schedule is in user -> preferences -> notifications. By default every day is 8am - 5pm when first enabled.
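A rough sketch of the daily job described above; the job class name and the `timings_for` helper are made up, but the idempotent find_or_create_by is the important part:
```
# Sketch: runs daily and creates do_not_disturb_timings two days ahead for
# every enabled schedule; find_or_create_by keeps reruns from duplicating rows.
class Jobs::CreateDoNotDisturbTimings < ::Jobs::Scheduled
  every 1.day

  def execute(args)
    UserNotificationSchedule.where(enabled: true).find_each do |schedule|
      # `timings_for` stands in for whatever expands the per-day start/end
      # times into concrete DND windows covering the next 2 days.
      schedule.timings_for(2.days.from_now).each do |starts_at, ends_at|
        schedule.user.do_not_disturb_timings.find_or_create_by(
          starts_at: starts_at,
          ends_at: ends_at
        )
      end
    end
  end
end
```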
This should make it easier to track down how the incoming email was created, which happens in one of four locations:
- The POP3 poller (which picks up replies sent via email)
- The admin email controller #handle_mail (which is where hosted mail is sent)
- The IMAP sync tool
- The group SMTP mailer, which sends emails when replying to IMAP topics, pre-emptively creating IncomingEmail records to avoid double syncing
Moves the topic timer jobs from being scheduled ahead of time with enqueue_at to a 5 minute scheduled run like bookmark reminders, in a new job called Jobs::EnqueueTopicTimers. Backwards compatibility is maintained by checking whether an existing topic timer job is already enqueued in sidekiq for the timer, and only running it inside the new job if it is not.
The functionality to close/open a topic if it is in the opposite state still remains in the after_save block of TopicTimer, with further commentary, which is used for Open/Close Temporarily.
This also removes the ensure_consistency! functionality of topic timers, as it is no longer needed; the new job will always pick up the timers because they are no longer stored in sidekiq's fragile scheduled state.
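A sketch of the new runner; the two helpers here are hypothetical stand-ins for the real lookup and dispatch logic:
```
class Jobs::EnqueueTopicTimers < ::Jobs::Scheduled
  every 5.minutes

  def execute(args)
    TopicTimer.where("execute_at <= ?", Time.zone.now).find_each do |timer|
      # Backwards compatibility: if the old enqueue_at job is still sitting
      # in sidekiq's scheduled set, let it run there instead.
      next if legacy_job_enqueued?(timer) # hypothetical helper

      enqueue_timer_job(timer) # hypothetical helper dispatching the typed job
    end
  end
end
```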
This adds a safe default so that, when the POP3 polling option is set up, emails more than one week old are not processed. This avoids the situation where pointing at an older mailbox causes us to process every email in it, sending error emails to the senders of any messages that cannot be parsed successfully.
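A sketch of the age guard, assuming the raw message is parsed with the Mail gem; the helper name is made up:
```
require "mail"

# Skip messages received more than a week ago instead of replaying an entire
# old mailbox through the email receiver.
def too_old_to_process?(raw_email)
  received_at = Mail.new(raw_email).date # Date header, if present
  received_at.present? && received_at < 1.week.ago
end
```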
My initial implementation didn't consider this case. We should skip imported users whenever the "imported_id" custom field is present, even if there are other custom fields.
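A sketch of the corrected check; the method name is illustrative:
```
# Imported users carry an "imported_id" custom field, so they are skipped
# even when other custom fields are present.
def imported_user?(user)
  user.custom_fields.key?("imported_id")
end
```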
- IgnoredUser records should all now have an expiring_at value. This commit enforces that in the DB and fixes any corrupt rows (see the migration sketch after this list)
- Changes to the ignored user list are now handled by the `/u/{username}/notification_level` endpoint. This allows setting expiration dates on the ignore. This commit removes the old logic for saving a list of usernames in the user preferences.
- Many specs were calling `IgnoredUser.create`. This commit changes them to use `Fabricate(:ignored_user)` for consistency
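A sketch of the enforcement from the first point above: backfill rows missing an expiration, then add the NOT NULL constraint. The class name and the backfill window are illustrative:
```
class EnforceExpiringAtOnIgnoredUsers < ActiveRecord::Migration[6.0]
  def up
    # Fix corrupt rows before the constraint lands; 4 months mirrors the
    # old cleanup limit mentioned below.
    execute <<~SQL
      UPDATE ignored_users
      SET expiring_at = created_at + INTERVAL '4 months'
      WHERE expiring_at IS NULL
    SQL

    change_column_null :ignored_users, :expiring_at, false
  end

  def down
    change_column_null :ignored_users, :expiring_at, true
  end
end
```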
b8c676e7 added the 'forever' option to the UI, and this is correctly stored in the database. However, we had a hard-coded limit of 4 months in the cleanup job. This commit removes the limit, so ignores can last forever.
The dependency on gifsicle and the allow_animated_avatars and allow_animated_thumbnails site settings were all removed. Animated GIF images are still allowed, but the optimized images generated from them (which were used for avatars and thumbnails) are no longer animated.
The added 'animated' column is populated by extracting information using FastImage. This field is used to selectively reoptimize old animations; that process happens in the background.
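A sketch of how the flag might be derived, assuming FastImage exposes an animated? check; the helper name is made up:
```
require "fastimage"

# Populate the 'animated' flag: true only for GIFs that FastImage reports
# as animated.
def animated_image?(path)
  FastImage.type(path) == :gif && FastImage.animated?(path) == true
end
```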
When posts or topics are deleted we don't want to immediately delete associated bookmarks, so we have a grace period to recover them and their reminders if the post or topic is un-deleted. This PR adds a task to the Weekly scheduled job to go and delete bookmarks attached to posts or topics deleted > 3 days ago.
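A sketch of the cleanup query, assuming bookmarks still reference post_id and topic_id directly:
```
grace_period = 3.days.ago

# Bookmarks whose post was soft-deleted more than 3 days ago.
Bookmark
  .where(post_id: Post.unscoped.where("deleted_at < ?", grace_period).select(:id))
  .delete_all

# Bookmarks whose topic was soft-deleted more than 3 days ago.
Bookmark
  .where(topic_id: Topic.unscoped.where("deleted_at < ?", grace_period).select(:id))
  .delete_all
```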
Previously, Jobs::EnqueueDigestEmails would enqueue a digest job for every user, even if there are no topics to send. The digest job would exit, no email would send, and last_emailed_at would not change. 30 minutes later, Jobs::EnqueueDigestEmails would run again and re-enqueue jobs for the same users.
120fa8ad introduced a temporary mitigation for this issue, by randomly selecting a subset of those users each time.
This commit adds a new `digest_attempted_at` column to the `user_stats` table. This column is updated every time a digest job completes for a user. Using this, we can avoid scheduling digest jobs for the same user every 30 minutes. This also removes the random user selection in 120fa8ad, and instead prioritizes users who had digests attempted the longest time ago.
To avoid blocking the sidekiq queue, a limit of 10,000 digests per 30 minutes is introduced.
This acts as a safety measure that makes sure we don't keep pouring oil on
a fire.
On multisite installations it is recommended to set the number much lower so that no single site dominates the backlog. A reasonable default for multisites may be 100-500.
This can be controlled with the environment var
DISCOURSE_MAX_DIGESTS_ENQUEUED_PER_30_MINS_PER_SITE
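A sketch of the selection this enables, assuming the env var surfaces through GlobalSetting the way DISCOURSE_-prefixed vars usually do; the real query has more eligibility conditions:
```
max_digests = GlobalSetting.max_digests_enqueued_per_30_mins_per_site.to_i

# Users whose digest was attempted longest ago (or never) go first, capped
# per 30-minute run so the sidekiq queue can't be flooded.
target_user_ids = UserStat
  .order("digest_attempted_at ASC NULLS FIRST")
  .limit(max_digests)
  .pluck(:user_id)
```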
After restoring a backup it takes up to 48 hours for uploads stored on S3 to appear in the S3 inventory. This change prevents alerts about missing uploads by preventing the EnsureS3UploadsExistence job from running in the first 48 hours after a restore. During the restore it deletes the count of missing uploads from the PluginStore, so that an alert isn't triggered by an old number.
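A sketch of the guard inside the job; the last-restore lookup is a stand-in for however that timestamp is actually retrieved:
```
def execute(args)
  last_restore = BackupMetadata.last_restore_date # assumed lookup
  return if last_restore.present? && last_restore > 48.hours.ago

  # ... compare uploads against the S3 inventory as before ...
end
```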
With the addition of `PostSearchData#private_message`, a partial
index consisting of only search data from regular posts can be created.
The partial index helps to speed up searches on large sites, since PG will not have to do an index scan on the entire search data index, which has proven to be a bottleneck.
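A sketch of such a partial index as a Rails migration, assuming `private_message` is a boolean on post_search_data and search_data is a GIN-indexed tsvector; all names are illustrative:
```
class AddRegularPostSearchDataIndex < ActiveRecord::Migration[6.0]
  disable_ddl_transaction!

  def change
    # Partial index: only rows for regular (non-PM) posts are included, so
    # regular-post searches no longer touch index entries for private messages.
    add_index :post_search_data,
              :search_data,
              using: :gin,
              where: "NOT private_message",
              algorithm: :concurrently,
              name: "idx_regular_post_search_data"
  end
end
```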
Convert all IMAP logging to write to a database table for easier inspection. These logs are cleaned up daily if they are > 5 days old.
Logs can easily be watched in dev by setting DISCOURSE_DEV_LOG_LEVEL="debug" and running `tail -f development.log | grep IMAP`
Was tired of seeing the following warnings in the logs
```
/discourse/app/jobs/scheduled/old_keys_reminder.rb:7: warning: already initialized constant Jobs::OldKeysReminder::OLD_CREDENTIALS_PERIOD
/discourse/app/jobs/scheduled/old_keys_reminder.rb:7: warning: previous definition of OLD_CREDENTIALS_PERIOD was here
```