There is a site setting reply_by_email_enabled which when combined with reply_by_email_address creates a Reply-To header in emails in the format "test+%{reply_key}@test.com" along with a PostReplyKey record, so when replying Discourse knows where to route the reply.
However, this conflicts with the IMAP implementation. Since we are sending the email for a group via SMTP and from their actual email account, we want all replies to go to that email account as well so the IMAP sync job can pick them up and put them in the correct place. So if the group has IMAP enabled and configured, the Reply-To header will point at the group's address instead.
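For illustration, the intended Reply-To selection looks roughly like this; `imap_enabled?` and `email_username` are assumed names, not necessarily the real ones:

```ruby
# Hypothetical sketch of choosing the Reply-To address for an outgoing group email.
def reply_to_address(group, reply_key)
  if group.imap_enabled?
    # Replies must go back to the group's own mailbox so the IMAP sync job can pick them up.
    group.email_username
  elsif SiteSetting.reply_by_email_enabled
    # e.g. "test+%{reply_key}@test.com" -> "test+abc123@test.com"
    SiteSetting.reply_by_email_address.sub("%{reply_key}", reply_key)
  end
end
```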
This PR also makes a further fix to 64b0b50 by using the correct recipient user for the PostReplyKey record. If the post user is used we encounter this error:
```ruby
if destination.user_id != user.id && !forwarded_reply_key?(destination, user)
  raise ReplyUserNotMatchingError, "post_reply_key.user_id => #{destination.user_id.inspect}, user.id => #{user.id.inspect}"
end
```
This is because the user above is found from the from_address, but the destination (the PostReplyKey) was created from post.user, and these will often be different people.
Our Email::Sender class accepts an optional user argument, which is used to create a PostReplyKey record when present. This record is used to sub out the %{reply_key} placeholder in the Reply-To mail header, so if we do not pass in the user we get a broken Reply-To header.
This is especially problematic in the IMAP group SMTP situation, because these emails go to customers that we are replying to, and when they reply to us the email bounces! This fixes the issue by passing user to the Email::Sender when sending a group_smtp email but there is still more to do in another PR.
This optional user on Email::Sender is a bit of a footgun IMO, especially because in most places we use it there is a user we can source. I would like to follow up with another PR that makes the parameter required, so we don't end up with these reply issues again down the line.
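For illustration, the fixed call roughly looks like this; the mailer name and arguments here are assumptions, not the exact call site:

```ruby
# Hypothetical sketch: the third argument is the recipient the PostReplyKey should belong to,
# so the %{reply_key} placeholder in the Reply-To header can be substituted correctly.
message = GroupSmtpMailer.send_mail(group, recipient.email, post)
Email::Sender.new(message, :group_smtp, recipient).send
```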
The dependency on gifsicle and the allow_animated_avatars and allow_animated_thumbnails site settings were all removed. Animated GIF images are still allowed, but the optimized images generated from them (used for avatars and thumbnails) are no longer animated.
The new 'animated' column is populated by extracting information using FastImage. This field was used to selectively reoptimize old animations; this process happens in the background.
When posts or topics are deleted we don't want to immediately delete associated bookmarks, so we have a grace period to recover them and their reminders if the post or topic is un-deleted. This PR adds a task to the Weekly scheduled job to go and delete bookmarks attached to posts or topics deleted > 3 days ago.
Previously, Jobs::EnqueueDigestEmails would enqueue a digest job for every user, even if there were no topics to send. The digest job would exit, no email would be sent, and last_emailed_at would not change. 30 minutes later, Jobs::EnqueueDigestEmails would run again and re-enqueue jobs for the same users.
120fa8ad introduced a temporary mitigation for this issue, by randomly selecting a subset of those users each time.
This commit adds a new `digest_attempted_at` column to the `user_stats` table. This column is updated every time a digest job completes for a user. Using this, we can avoid scheduling digest jobs for the same user every 30 minutes. This also removes the random user selection in 120fa8ad, and instead prioritizes users who had digests attempted the longest time ago.
To avoid blocking the Sidekiq queue, a limit of 10,000 digests per 30 minutes is introduced. This acts as a safety measure that makes sure we don't keep pouring oil on a fire. On multisites it is recommended to set the number much lower so individual sites do not dominate the backlog. A reasonable default for multisites may be 100-500.
This can be controlled with the environment var
DISCOURSE_MAX_DIGESTS_ENQUEUED_PER_30_MINS_PER_SITE
See https://meta.discourse.org/t/changing-a-users-email/164512 for additional context.
Previously when an admin user changed a user's email we assumed that they would need a password reset too because they likely did not have access to their account. This proved to be incorrect, as there are other reasons a user needs admin to change their email. This PR:
* Changes the admin change-email flow so the user is sent an email to confirm the change
* We now record who requested the email change
* If the requesting user is an admin and not the user themselves, we note this in the email sent to the user
* We also make the confirm-change email route accessible to anonymous users, so the user can click it even if they cannot access their account. If there is a logged-in user, we make sure the confirmation matches the current user.
* FEATURE: Export the entire user profile as json, not just bio/website
* FEATURE: Add session log information to user export
Even though the columns are named 'auth_token' etc, the content is not actually usable to log into the forum with. Despite all that, it is still truncated for export, to avoid any 'token hash cracking' situations.
This PR removes the user reminder topic timers, because that system has been supplanted and improved by bookmark reminders. The option is removed from the UI and all existing user reminder topic timers are migrated to bookmark reminders.
The migration does the following (a rough sketch follows the list):
* Gets all topic_timers with status_type 5 (reminders)
* Gets all bookmarks where the user ID and topic ID match
* Loops through the found topic timers
* If there is no bookmark for the OP of the topic, then we just create a bookmark with a reminder
* If there is a bookmark for the OP of the topic and it does **not** have a reminder set, then just
update it with the topic timer reminder
* If there is a bookmark for the OP of the topic with a reminder then just discard the topic timer
* Cancels all outstanding user reminder topic timers
* **Trashes (not deletes) all user reminder topic timers**
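A rough sketch of that logic, with assumed model and column names (the real migration also cancels the outstanding jobs and handles edge cases not shown here):

```ruby
# Hypothetical sketch only; status_type 5 and the branching come from the list above.
TopicTimer.where(status_type: 5).find_each do |timer|
  first_post = Post.find_by(topic_id: timer.topic_id, post_number: 1)
  next if first_post.blank?

  bookmark = Bookmark.find_by(user_id: timer.user_id, post_id: first_post.id)
  if bookmark.nil?
    Bookmark.create!(user_id: timer.user_id, post_id: first_post.id, reminder_at: timer.execute_at)
  elsif bookmark.reminder_at.blank?
    bookmark.update!(reminder_at: timer.execute_at)
  end
  # If the bookmark already has a reminder, the topic timer is simply discarded.

  timer.trash! # trashed, not deleted, as noted above
end
```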
Notes:
* For now I have left the user reminder topic timer job class in place; this is so the jobs can be cancelled in the migration. It and the specs will be deleted in the next PR.
* At a later date I will write a migration to delete all trashed user topic timers. They are not deleted here in case there are data issues and they need to be recovered.
* A future PR will change the UI of the topic timer modal to make it look more like the bookmark modal.
After restoring a backup it takes up to 48 hours for uploads stored on S3 to appear in the S3 inventory. This change prevents alerts about missing uploads by preventing the EnsureS3UploadsExistence job from running in the first 48 hours after a restore. During the restore it deletes the count of missing uploads from the PluginStore, so that an alert isn't triggered by an old number.
It is possible for a user record to exist without an email; if so, we should not enqueue a job to download their gravatar.
This commit resolves this error that can occur:
```
Job exception: undefined method `email' for nil:NilClass
/var/www/discourse/app/models/user.rb:1204:in `email'
/var/www/discourse/app/jobs/regular/update_gravatar.rb:12:in `execute'
```
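A minimal sketch of the kind of guard this implies; the method and job argument names here are assumptions:

```ruby
# Hypothetical: skip scheduling the gravatar download when the user has no email.
def refresh_gravatar(user)
  return if user.blank? || user.email.blank?
  Jobs.enqueue(:update_gravatar, user_id: user.id)
end
```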
This commit also fixes the original spec, which was actually wrong. The job was never enqueued in the original spec, so the gravatar was never actually updated; the test was checking that the two values were the same, but they were both null and never updated, so of course they were the same!
A new test has also been added to make sure the gravatar job isn't
enqueued when a user's email is missing.
* FEATURE: Use predictable filenames inside the user archive export
* FEATURE: Include badges in user archive export
* FEATURE: Add user_visits table to the user archive export
This is in preparation for improvements to the user archive export data.
Some refactors happened along the way, including calling the different _export methods 'components' of the zip file.
Additionally, make the test for post export much more comprehensive.
Copy sources:
app/jobs/regular/export_csv_file.rb
spec/jobs/export_csv_file_spec.rb
With the addition of `PostSearchData#private_message`, a partial
index consisting of only search data from regular posts can be created.
The partial index helps speed up searches on large sites since PG will not have to do an index scan on the entire search data index, which has been shown to be a bottleneck.
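A hypothetical migration illustrating the idea; the actual index name, column, and predicate may differ:

```ruby
class AddRegularPostSearchDataIndex < ActiveRecord::Migration[6.0]
  def change
    # Partial index: only search data for regular (non private message) posts is indexed,
    # so searches on public content no longer scan the entire search data index.
    add_index :post_search_data,
              :search_data,
              using: :gin,
              where: "NOT private_message",
              name: "idx_regular_post_search_data"
  end
end
```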
Convert all IMAP logging to write to a database table for easier inspection. These logs are cleaned up daily if they are > 5 days old.
Logs can easily be watched in dev by setting `DISCOURSE_DEV_LOG_LEVEL="debug"` and running `tail -f development.log | grep IMAP`.
* FEATURE: don't notify about changed tags for a private message
Only staff members watching the specific tag should receive a notification.
* FIX: remove other category which is not used
* FIX: improved specs to ensure that the revision was successful
When a tab is open but left unattended for a while, the red, green, and blue
pills tend to go out of sync.
So whenever we open the notifications menu, we sync up the notification count (i.e. the blue and green pills) with the server.
However, the reviewable count (i.e. the red pill) is not a notification and
is located in the hamburger menu. This commit adds a new route on the server
side to retrieve the reviewable count for the current user and a ping
(refreshReviewableCount) from the client side to sync the reviewable count
whenever they open the hamburger menu.
REFACTOR: I also refactored the hamburger-menu widget code to prevent repetitive uses
of "this.".
PERF: I improved the performance of the 'notify_reviewable' job by doing only 1 query
to the database to retrieve all the pending reviewables and then tallying based on the
various rights.
This commit should cause no functional change
- Split into functions to avoid deep nesting
- Register custom field type, and remove manual json parse/serialize
- Recover from deleted upload records
Also adds a test to ensure pull_hotlinked_images redownloads secure images only once
Was tired of seeing the following warnings in the logs
```
/discourse/app/jobs/scheduled/old_keys_reminder.rb:7: warning: already initialized constant Jobs::OldKeysReminder::OLD_CREDENTIALS_PERIOD
/discourse/app/jobs/scheduled/old_keys_reminder.rb:7: warning: previous definition of OLD_CREDENTIALS_PERIOD was here
```
* FEATURE: notify admins about old credentials
Security and API keys should be renewed periodically.
This additional notification should help admins keep their Discourse safe and secure.
Previously the pull hotlinked images job was skipped after system edits. This ensured that we never had an infinite loop of system-edit/pull-hotlinked/system-edit/pull-hotlinked etc.
A side effect was that edits made by system for any other reason (e.g. API, removing full quotes) would prevent pulling hotlinked images. This commit removes the system edit check, and replaces it with another method to avoid an infinite job scheduling loop.
* DEV: new S3 backup layout
Currently, with $S3_BACKUP_BUCKET of "bucket/backups", multisite backups
end up in "bucket/backups/backups/dbname/" and single-site will be in
"bucket/backups/".
Both _should_ be in "bucket/backups/dbname/"
- remove MULTISITE_PREFIX,
- always include dbname,
- method to move to the new prefix
- job to call the method
* SPEC: add tests for `VacateLegacyPrefixBackups` onceoff job.
Co-authored-by: Vinoth Kannan <vinothkannan@vinkas.com>
When running jobs in tests, we use `Jobs.run_immediately!`. This means that jobs are run synchronously when they are enqueued. Jobs sometimes enqueue other jobs, which are also executed synchronously. This means that the outermost job will block until the inner jobs have finished executing. In some cases (e.g. process_post with hotlinked images) this can lead to a deadlock.
This commit changes the behavior slightly. Now we will never run jobs inside other jobs. Instead, we will queue them up and run them sequentially in the order they were enqueued. As a whole, they are still executed synchronously. Consider the example
```ruby
class Jobs::InnerJob < Jobs::Base
def execute(args)
puts "Running inner job"
end
end
class Jobs::OuterJob < Jobs::Base
def execute(args)
puts "Starting outer job"
Jobs.enqueue(:inner_job)
puts "Finished outer job"
end
end
Jobs.enqueue(:outer_job)
puts "All jobs complete"
```
The old behavior would result in:
```
Starting outer job
Running inner job
Finished outer job
All jobs complete
```
The new behavior will result in:
```
Starting outer job
Finished outer job
Running inner job
All jobs complete
```
It might happen that some User records have no associated primary email, in which case we never want to send them a digest.
Also added a new "user_email_no_email" skipped email log to ensure these cases
are properly handled and surfaced.
* FEATURE: notify admins about old credentials
Security and API keys should be renewed periodically.
This additional notification should help admins keep their Discourse safe and secure.
* PERF: Dematerialize topic_reply_count
It's only ever used for trust level promotions that run daily, or compared to 0. We don't need to track it on every post creation.
* UX: Add symbol in TL3 report if topic reply count is capped
* DEV: Drop user_stats.topic_reply_count column
This introduces new APIs for obtaining optimized thumbnails for topics. There are a few building blocks required for this:
- Introduces new `image_upload_id` columns on the `posts` and `topics` table. This replaces the old `image_url` column, which means that thumbnails are now restricted to uploads. Hotlinked thumbnails are no longer possible. In normal use (with pull_hotlinked_images enabled), this has no noticeable impact
- A migration attempts to match existing urls to upload records. If a match cannot be found then the posts will be queued for rebake
- Optimized thumbnails are generated during post_process_cooked. If thumbnails are missing when serializing a topic list, then a sidekiq job is queued
- Topic lists and topics now include a `thumbnails` key, which includes all the available images:
```
"thumbnails": [
{
"max_width": null,
"max_height": null,
"url": "//example.com/original-image.png",
"width": 1380,
"height": 1840
},
{
"max_width": 1024,
"max_height": 1024,
"url": "//example.com/optimized-image.png",
"width": 768,
"height": 1024
}
]
```
- Themes can request additional thumbnail sizes by using a modifier in their `about.json` file:
```
"modifiers": {
  "topic_thumbnail_sizes": [
    [200, 200],
    [800, 800]
  ],
  ...
```
Remember that these are generated asynchronously, so your theme should include logic to fall back to other available thumbnails if your requested size has not yet been generated
- Two new raw plugin outlets are introduced, to improve the customisability of the topic list. `topic-list-before-columns` and `topic-list-before-link`
We have the `# frozen_string_literal: true` comment on all our
files. This means all string literals are frozen. There is no need
to call #freeze on any literals.
For files with `# frozen_string_literal: true`
```
puts %w{a b}[0].frozen?
=> true
puts "hi".frozen?
=> true
puts "a #{1} b".frozen?
=> true
puts ("a " + "b").frozen?
=> false
puts (-("a " + "b")).frozen?
=> true
```
For more details see: https://samsaffron.com/archive/2018/02/16/reducing-string-duplication-in-ruby
The main thrust of this PR is to take all the conditional checks based on the `enable_bookmarks_with_reminders` away and only keep the code from the `true` path, making bookmarks with reminders the core bookmarks feature. There is also a migration to create `Bookmark` records out of `PostAction` bookmarks for a site.
### Summary
* Remove logic based on whether enable_bookmarks_with_reminders is true. This site setting is now obsolete; the old bookmark functionality is being removed. Retain the setting and set the value to `true` in a migration.
* Use the code from the rake task to create a database migration that creates bookmarks from post actions.
* Change the bookmark report to read from the new table.
* Get rid of old endpoints for bookmarks
* Link to the new bookmarks list from the user summary page
Within 24 hours of signing up, new users were losing their
default trust level of 3. With this fix, demotions from
trust level 3 won't happen when the "default trust level"
setting is 3 or 4.
Previously all topic posters would be added which could lead to major performance issues. Now if there are too many posters, only the acting user will be added as a participant.
The process_post job uses CookedPostProcessor which also uses a
DistributedMutex. There's no good reason for the timeout of the outer
lock to be smaller than the timeout of the inner lock.
If the “secure media” site setting is enabled then ALL files uploaded to Discourse (images, video, audio, pdf, txt, zip etc. etc.) will follow the secure media rules. The “prevent anons from downloading files” setting will no longer have any bearing on upload security. Basically, the feature will more appropriately be called “secure uploads” instead of “secure media”.
This is being done because there are communities out there that would like all attachments and media to be secure based on category rules but still allow anonymous users to download attachments in public places, which is not possible in the current arrangement.
* This PR implements the scheduling and notification system for bookmark reminders. Every 5 minutes a scheduled job runs to check for any reminders due before now, limited to **300** reminders at a time. Any leftover reminders will be sent in the next run. This is to avoid having to deal with fickle Sidekiq and reminders in the far-flung future, which would necessitate having a background job anyway to clean up any missing `enqueue_at` reminders.
* If a reminder is sent its `reminder_at` time is cleared and the `reminder_last_sent_at` time is filled in. Notifications are only user-level notifications for now.
* All JavaScript and frontend code related to displaying the bookmark reminder notification is contained here. The reminder functionality is now re-enabled in the bookmark modal as well.
* This PR also implements the "Remind me next time I am at my desktop" bookmark reminder functionality. When the user is on a mobile device they are able to select this option. When they choose this option we set a key in Redis saying they have a pending at desktop reminder. The next time they change devices we check if the new device is desktop, and if it is we send reminders using a DistributedMutex. There is also a job to ensure consistency of these reminders in Redis (in case Redis drops the ball) and the at desktop reminders expire after 20 days.
* Also in this PR is a fix to delete all Bookmarks for a user via `UserDestroyer`
If a post is being cooked twice (for example after an edit), there is a chance that the 'raw' and 'cooked' columns will become inconsistent. This reduces the chances of that happening.
Rails has an odd behavior for calling .delete_all on a has_many relation - the
default behavior is to nullify the foreign key fields instead of actually
'DELETE'ing the records.
Additionally, publishing a shared draft topic creates a PostRevision that the
NotifyPostRevision job picks up which is then promptly deleted.
Use destroy_all when cleaning up the revisions and have the NotifyPostRevision
job tolerate deleted PostRevision records.
This takes a small performance hit (several SQL DELETEs instead of just one)
but shouldn't be too much of an issue (high cardinalities range from 30-100).
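To illustrate the difference on a has_many association (hypothetical relation):

```ruby
# Without a :dependent option, delete_all on a has_many association nullifies the
# foreign key (post_id) on the associated rows instead of deleting them.
post.post_revisions.delete_all

# destroy_all issues one DELETE per record (running callbacks), which is what we want here.
post.post_revisions.destroy_all
```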
Introduces a new site setting `max_notifications_per_user`.
Out-of-the-box this is set to 10,000. If a user exceeds this number of
notifications, we will delete the oldest notifications keeping only 10,000.
To disable this safeguard set the setting to 0.
Enforcement happens weekly.
This is in place to protect the system from pathological states where a
single user has enormous amounts of notifications causing various queries
to time out. In practice nobody looks back more than a few hundred notifications.
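A rough sketch of the weekly enforcement for a single user, under assumed scoping (the real query may differ):

```ruby
# Hypothetical: keep only the newest `limit` notifications for a user.
limit = SiteSetting.max_notifications_per_user
if limit > 0
  cutoff = user.notifications
               .order(created_at: :desc)
               .offset(limit - 1)
               .limit(1)
               .pluck(:created_at)
               .first
  # Delete everything older than the limit-th newest notification.
  user.notifications.where("created_at < ?", cutoff).delete_all if cutoff
end
```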
Previously we had many places in the app that called `hostname` to get the hostname of the server. This commit replaces that pattern in 2 ways:
1. We cache the result in `Discourse.os_hostname` so it is only ever computed once
2. We prefer Socket.gethostname, which avoids spawning a shell command
This improves performance, as we are no longer spawning hostname processes throughout the app's lifetime.
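A minimal sketch of the cached lookup; the fallback branch is an assumption:

```ruby
require "socket"

module Discourse
  def self.os_hostname
    @os_hostname ||=
      begin
        Socket.gethostname
      rescue StandardError
        # Assumed fallback if Socket is unavailable for some reason.
        `hostname`.strip
      end
  end
end
```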
Basically, say you had already downloaded a certain image from a certain URL
using pull_hotlinked_images and the onebox. The upload would be stored
by its sha as an upload record. Whenever you linked to the same URL again
in a post (e.g. in our case an og:image on review.discourse) we would reuse the original upload record because of the sha1.
However when you turned on secure media this could cause problems as
the first post that uses that upload after secure media is enabled
will set the access control post for the upload to the new post.
Then if the post is deleted every single onebox/link to that same image
URL will fail forever with 403 as the secure-media-uploads URL fails
if the access control post has been deleted.
To fix this when cooking posts and pulling hotlinked images, we only
allow using an original upload by URL if its access control post
matches the current post, and if the original_sha1 is filled in,
meaning it was uploaded AFTER secure media was enabled. Otherwise, we just redownload the media again to be safe, as the URL will always be new then.
Regression was created here:
https://github.com/discourse/discourse/pull/8750
When a tag or category is added and the user is watching that category/tag, we changed the notification type to `edited` instead of `new post`.
However, the logic here should be a little more sophisticated. If the user has already seen the post, the notification should be `edited`; when the user hasn't yet seen the post, the notification should be "new reply". An example of the latter is when a topic is in a private category and set to be published later. In that case, we modify an existing topic, but for the user it is like a new post.
Discussion on meta:
https://meta.discourse.org/t/publication-of-timed-topics-dont-trigger-new-topic-notifications/139335/13
Previously the badge was granted one month after the last time the badge was granted. The exact date shifted by one day each month. The new logic tries to grant the badge always at the beginning of a new month by looking at new users of the previous month. The "granted at" date is set to the end of the previous month.
With this change the script:
* Actually removes original large-sized images
* Doesn't save processed files if their size has increased
* Prevents inconsistent state
When pull_hotlinked_images tried to run on posts with secure media (which had already been downloaded from external sources) we were getting a 404 when trying to download the image because the secure endpoint doesn't allow anon downloads.
Also, we were getting into an infinite loop of pull_hotlinked_images because the job didn't consider the secure media URLs as "downloaded" already so it kept trying to download them over and over.
In this PR I have also refactored secure-media-upload URL checks and mutations into a single source of truth in Upload, adding a SECURE_MEDIA_ROUTE constant to check URLs against as well.
Add TopicUploadSecurityManager to handle post moves. When a post moves around or a topic changes between categories and public/private message status the uploads connected to posts in the topic need to have their secure status updated, depending on the security context the topic now lives in.
Let's say post #2 quotes post #1. If a user decides to quote the quote in post #2, it should keep the information of post #1 ("user_1, post: 1, topic: X") instead of replacing it with the current post's info ("user_2, post: 2, topic: X").
There is a feature where, when a tag or category is added to a topic, users who are watching that category or tag are notified. The problem is that it uses the default notification type "new post". It would be better to use "new post" only when there really is a new post and "edited" when categories or tags were modified.
### General Changes and Duplication
* We now consider a post `with_secure_media?` if it is in a read-restricted category.
* When uploading we now set an upload's secure status straight away.
* When uploading, if `SiteSetting.secure_media` is enabled we do not check whether the upload already exists using the `sha1` digest of the upload. The `sha1` column of the upload is filled with a `SecureRandom.hex(20)` value, which is the same length as `Upload::SHA1_LENGTH`. The `original_sha1` column is filled with the _real_ sha1 digest of the file (see the sketch after this list).
* Whether an upload `should_be_secure?` is now determined by whether the `access_control_post` is `with_secure_media?` (if there is no access control post then we leave the secure status as is).
* When serializing the upload, we now cook the URL if the upload is secure. This is so it shows up correctly in the composer preview, because we set secure status on upload.
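A rough sketch of the upload creation behaviour described above; variable names are assumptions and the real code does more than this:

```ruby
# Hypothetical illustration: with secure media on, the real digest goes to original_sha1
# and sha1 gets a random value of the same length, so sha1-based deduplication never matches.
if SiteSetting.secure_media
  upload.original_sha1 = sha1_digest # the real SHA1 of the file
  upload.sha1 = SecureRandom.hex(20) # random, 40 hex chars == Upload::SHA1_LENGTH
else
  upload.sha1 = sha1_digest
end
```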
### Viewing Secure Media
* The secure-media-upload URL will take the post that the upload is attached to into account via `Guardian.can_see?` for access permissions
* If there is no `access_control_post` then we just deliver the media. This should be a rare occurrence and shouldn't cause issues as the `access_control_post` is set when `link_post_uploads` is called via `CookedPostProcessor`
### Removed
We no longer do any of these because we do not reuse uploads by sha1 if secure media is enabled.
* We no longer have a way to prevent cross-posting of a secure upload from a private context to a public context.
* We no longer have to set `secure: false` for uploads when uploading for a theme component.
* UI: Mass grant a badge from the admin ui
* Send the uploaded CSV and badge ID to the backend
* Read the CSV and grant badge in batches
* UX: Communicate the result to the user
* Don't award if badge is disabled
* Create a 'send_notification' method to remove duplicated code, slightly shrink badge image. Replace router transition with href.
* Dynamically discover current route
The following methods have long been deprecated in ruby due to flaws in their implementation per http://blade.nagaokaut.ac.jp/cgi-bin/vframe.rb/ruby/ruby-core/29293?29179-31097:
URI.escape
URI.unescape
URI.encode
URI.unencode
escape/encode are just aliases for one another. This PR uses the Addressable gem to replace these methods with its own encode, unencode, and encode_component methods where appropriate.
I have put all references to Addressable::URI here into the UrlHelper to keep them corralled in one place to make changes to this implementation easier.
Addressable is now also an explicit gem dependency.
We like to stay as close as possible to the latest rubocop because the cops keep getting better.
This update required some code changes; specifically, the default is now to avoid explicit returns where an implicit return will do.
This also renames a few rules.
When uploading a file to a theme component, if that file already exists and has been marked as secure, we now automatically mark the file as secure: false, change the ACL, log the action as the user, and rebake the posts for the upload.
The secure media functionality relied on `SiteSetting.enable_s3_uploads?` which, as we found in dev, did not take into account global S3 settings via `GlobalSetting.use_s3?`. We now use `SiteSetting.Upload.enable_s3_uploads` instead to be more consistent.
Also, we now validate `enable_s3_uploads` changes, because if `GlobalSetting.use_s3?` is true users should NOT be enabling S3 uploads manually.
This PR introduces a new secure media setting. When enabled, it prevents unauthorized access to media uploads (files of type image, video and audio). When the `login_required` setting is enabled, all media uploads will be protected from unauthorized (anonymous) access. When `login_required` is disabled, only media in private messages will be protected from unauthorized access.
A few notes:
- the `prevent_anons_from_downloading_files` setting no longer applies to audio and video uploads
- the `secure_media` setting can only be enabled if S3 uploads are already enabled and configured
- upload records have a new column, `secure`, which is a boolean `true/false` of the upload's secure status
- when creating a public post with an upload that has already been uploaded and is marked as secure, the post creator will raise an error
- when enabling or disabling the setting on a site with existing uploads, the rake task `uploads:ensure_correct_acl` should be used to update all uploads' secure status and their ACL on S3
I introduced a regression in 17366d3bcc (diff-ddeebb36d131f89ca91be9d04c2baefaR10).
When a tag was added, people watching that specific tag were notified, but so were people watching the specific category.
Therefore, `notify_post_users` should accept options for who should be notified.
So when a `category` is added to the topic, users watching the topic and users watching the category are notified.
When a `tag` is added to the topic, users watching the topic and users watching the tag are notified.
Finally, when a new post is created, everybody is notified: topic watchers, category watchers, and tag watchers.
* Fix user title logic when badge name customized
* Fix an issue where a user's title was not considered a badge-granted title when the user used a badge for their title and the badge name was customized. This affected the effectiveness of revoke_ungranted_titles!, which only operates on badge_granted_titles.
* When a user's title is set now it is considered a badge_granted_title if the badge name OR the badge custom name from TranslationOverride is the same as the title
* When a user's badge is revoked we now also revoke their title if the user's title matches the badge name OR the badge custom name from TranslationOverride
* Add a user history log when the title is revoked to remove confusion about why titles are revoked
* Add granted_title_badge_id to user_profile; now when we set badge_granted_title on a user profile while updating a user's title based on a badge, we also remember which badge matched the title
* When badge name (or custom text) changes update titles of users in a background job
* When the name of a badge changes, or in the case of system badges when their custom translation text changes, then we need to update the title of all corresponding users who have a badge_granted_title and matching granted_title_badge_id. In the case of system badges we need to first get the proper badge ID based on the translation key e.g. badges.regular.name
* Add migration to backfill all granted_title_badge_ids for both normal badge name titles and titles using custom badge text.
Issue was mentioned in this [meta topic](https://meta.discourse.org/t/send-a-notification-to-watching-users-when-adding-tag/125314)
It works well when a category is changed because the NotifyCategoryChange job already has this code:
```
if post&.topic&.visible?
  post_alerter = PostAlerter.new
  post_alerter.notify_post_users(post, User.where(id: args[:notified_user_ids]))
  post_alerter.notify_first_post_watchers(post, post_alerter.category_watchers(post.topic))
end
```
The NotifyTagChange job was missing the call to notify post users, so it worked only when your notification was set to `watching first post`.
- Allow revoking keys without deleting them
- Auto-revoke keys after a period of no use (default 6 months)
- Allow multiple keys per user
- Allow attaching a description to each key, for easier auditing
- Log changes to keys in the staff action log
- Move all key management to one place, and improve the UI
Adds the settings:
raw_email_max_length, raw_rejected_email_max_length, delete_rejected_email_after_days.
These settings control retention of the "raw" email logs.
raw_email_max_length ensures that if we get an incoming email that is huge, we truncate it, removing uploads from the raw log.
raw_rejected_email_max_length introduces an even more aggressive truncation for rejected incoming mail.
delete_rejected_email_after_days controls how many days we will keep rejected emails for (default 90).
Previously, every hour we would run a full scan of the entire DB searching for expired uploads that need to be moved to the tombstone folder.
This commit amends it so we only run the job twice per clean_orphan_uploads_grace_period_hours. There is an upper bound of 7 days, so even if the grace period is set really high the job will still run at least once a week.
By default we have a 48 hour grace period, so this amends the cleanup to run daily instead of hourly. This eliminates 23 runs per day of this ultra expensive query.
Doing .pluck(:column).first is a very common pattern in Discourse and in most cases a limit clause isn't being added. Instead of adding a limit clause to all these call sites, this commit adds two new methods to ActiveRecord::Relation:
pluck_first, equivalent to limit(1).pluck(*columns).first
and pluck_first! which, like other finder methods, raises an exception
when no record is found
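A sketch of what those helpers amount to, based on the description above (the shipped implementation may differ in details):

```ruby
class ActiveRecord::Relation
  # Equivalent to limit(1).pluck(*columns).first
  def pluck_first(*columns)
    limit(1).pluck(*columns).first
  end

  # Like other finder bang methods, raises when no record is found.
  def pluck_first!(*columns)
    result = limit(1).pluck(*columns)
    raise ActiveRecord::RecordNotFound if result.empty?
    result.first
  end
end
```

Usage then becomes e.g. `Topic.where(id: topic_id).pluck_first(:title)` instead of `Topic.where(id: topic_id).pluck(:title).first`.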
* FIX: Do not encode the URL twice
Now that we encode slugs in the server we don't need this anymore.
Reverts fe5na33
* FIX: More places do deal with encoded slugs
* the param is a string now, not a hash
* FIX: Handle the nil slug on /categories
* DEV: Add seeded? method to identify default categories
* DEV: Use SiteSetting to keep track of seeded categories
When an admin changed the site setting slug_generation_method to encoded, we weren't really encoding the slug, just allowing non-ASCII (unicode) characters in the slug.
That causes problems when a user posts a link to a topic without the slug, as our topic controller tries to redirect the user to the correct URL containing the slug with unicode characters. Having unicode in the Location header of a response is an RFC violation, and some browsers end up in a redirection loop.
Bug report: https://meta.discourse.org/t/-/125371?u=falco
This commit also checks if a site uses encoded slugs and clears all saved slugs in the db so they can be regenerated by a onceoff job.
* FEATURE: Adds an extra protection layer when decompressing files.
* Rename exporter/importer to zip importer. Update old locale
* Added a new composite class to decompress a file with multiple strategies
* Set max file size inside a site setting
* Ensure that file is deleted after compression
* Sanitize path and files before compressing/decompressing
Zeitwerk simplifies working with dependencies in dev and makes reloading class chains easier.
We no longer need to use Rails "require_dependency" anywhere and instead can just use standard
Ruby patterns to require files.
This is a far reaching change and we expect some followups here.
Previously, calculating thresholds for reviewables was done based on the
50th and 85th percentile across all reviewables. However, many forum
owners provided feedback that these thresholds were too easy to hit, in
particular when it came to auto hiding content.
The calculation has been adjusted to base the priorities on reviewables
that have a minimum of 2 scores (flags). This should push the number of flags required to hide something higher than before.
On forums with very few flags you don't want to calculate averages
because they won't be very useful. Stick with the defaults until we hit
15 reviewables at least.
This reverts commit e805d44965.
We now have mechanisms in place to ensure heartbeat will always
be scheduled even if the scheduler is overloaded per: 098f938b
Databases can have a lot of user actions; self-joining and running an aggregate on millions of rows can be very costly.
This optimisation reduces the regular window of consistency down to 13 hours, which ensures the job runs much faster.
* FIX: Heartbeat check per sidekiq process
* Rename method
* Remove heartbeat queues of previous bootups
* Regis feedback
* Refactor before_start
* Update lib/demon/sidekiq.rb
Co-Authored-By: Régis Hanol <regis@hanol.fr>
* Update lib/demon/sidekiq.rb
Co-Authored-By: Régis Hanol <regis@hanol.fr>
* Expire redis keys after 3600 seconds
* Don't use redis to store the list of queues
No need to let notifications stay around when users can't access
a topic after it was converted into a PM or posts were moved
into a restricted topic.
Also makes sure that moving to a new topic correctly uses the
guardian for the first post by enqueuing jobs outside of a
transaction.
DEV: deprecate `invite.via_email` in favor of `invite.emailed_status`
This commit adds a new column `emailed_status` in `invites` table for
tracking email sending status.
0 - not required
1 - pending
2 - bulk pending
3 - sending
4 - sent
For normal email invites, the invite record is created with emailed_status set to 'pending'.
When bulk invites are sent, the invite record is created with emailed_status set to 'bulk pending'.
For invites that generate a link, the invite record is created with emailed_status set to 'not required'.
When the invite email is queued, emailed_status is updated to 'sending'.
Once the email is sent via the `InviteEmail` job, the invite's emailed_status is updated to 'sent'.
* Revert "Revert "FEATURE: admin/user exports are compressed using the zip format (#7784)""
This reverts commit f89bd55576.
* Replace .tar.zip with .zip
* FEATURE: admin/user exports are compressed using the zip format
* Update translations. Theme exporter now exports .zip file. Theme importer supports .zip and .gz files
* Fix controller test, updated locale and skip saving the csv export to disk
* Support private uploads in S3
* Use localStore for local avatars
* Add job to update private upload ACL on S3
* Test multisite paths
* update ACL for private uploads in migrate_to_s3 task
The site settings beginning with "topic views heat" and "topic post like
heat" are set to defaults when installing Discourse, but there has not
been a process or guidance for updating these values based on
community activity.
This feature will update them once a month. The low, medium, and
high settings will be based on the minimums of the 45th, 25th, and
10th percentile topics respectively, so that 45% of topics will have
some "heat".
Disable automatic changes with the automatic_topic_heat_values setting.
This does two things
1. Our "index grace period" has been wound down to 1 day, there is no point
keeping a bloated index for a week, usually when people delete stuff they
mean for it to be removed
2. We were never dropping deleted posts from the index, only posts from
deleted topics
These changes speed up search a tiny bit and reduce background work.
The `AutoQueueHandler` will ignore really old flags. In that case, don't
notify the user that the moderator is looking into it. They probably
never saw it because it didn't meet the reviewable minimum priority.
Historically we would keep the user data export posts around but delete
the uploads.
This leaves a lot of broken uploads in the system.
This rake task allows us to clean up the old mess.
This reduces the chance of errors where consumers of strings mutate inputs, and reduces the memory usage of the app.
The test suite passes now, but there may be some stuff left, so we will run a few sites on a branch prior to merging.
We found score hard to understand. It is still there behind the scenes
for sorting purposes, but it is no longer shown.
You can now filter by minimum priority (low, med, high) instead of
score.
This removes all uses of both `send` and `public_send` from consumers of
SiteSetting and instead introduces a `get` helper for dynamic lookup
This leads to much cleaner and safer code long term as we are always explicit
to test that a site setting is really there before sending an arbitrary
string to the class
It also removes a couple of risky stubs from the auth provider test
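For example (illustrative only):

```ruby
# Before: arbitrary dynamic dispatch against SiteSetting.
SiteSetting.public_send(setting_name)

# After: explicit helper that checks the setting really exists before looking it up.
SiteSetting.get(setting_name)
```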
After careful analysis of large data-sets it became apparent that avg_time
had no impact whatsoever on "best of" topic scoring. Calculating avg_time
was a very costly operation especially on large databases.
We have some longer term plans of introducing other weighting that is read
time based into our scoring for "best of" and "top" topics, but in the
interim to stop a large amount of work that is not achieving any value we
are removing the jobs.
Column removal will follow once we decide on a new replacement metric.
`Upload#url` is likely to change from time to time. When it does change, we don't want to have to look through multiple tables to ensure that the URLs are all up to date. Instead, we simply associate uploads properly with `UserProfile` so that we do not have to replicate the URLs in that table.
* Fix handling SNS notifications for AWS SES
This fixes detection of email bounces by:
- removing the hard requirement for the email ID; the ID in the webhook message never equals the one in email_log
- getting bounce_score from user stats instead of a nonexistent field in the webhook message
* Remove empty line
* Prettify access to EmailLog for parsing SNS notification
Co-Authored-By: SystemZ <SystemZ@users.noreply.github.com>