Basically, say you had already downloaded a certain image from a certain URL
using pull_hotlinked_images and the onebox. The upload would be stored
by its sha1 as an upload record. Whenever you linked to the same URL again
in a post (e.g. in our case an og:image on review.discourse) we would
reuse the original upload record because of the sha1.

However, when you turned on secure media this could cause problems, as
the first post that uses that upload after secure media is enabled
will set the access control post for the upload to the new post.
Then, if that post is deleted, every single onebox/link to that same image
URL will fail forever with a 403, as the secure-media-uploads URL fails
when the access control post has been deleted.

To fix this, when cooking posts and pulling hotlinked images, we only
allow reusing an original upload by URL if its access control post
matches the current post and its original_sha1 is filled in,
meaning it was uploaded AFTER secure media was enabled. Otherwise
we just redownload the media again to be safe, as the URL will always
be new in that case.
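A sketch of the reuse check (hypothetical helper name; the real logic lives in the post cooking / pull_hotlinked_images path):

```
# Illustrative only: reuse an existing upload for a hotlinked URL only if it
# is already access-controlled by this post and was uploaded after secure
# media was enabled (original_sha1 present). Otherwise return nil so the
# media gets redownloaded.
def reusable_upload_for(existing_upload, current_post)
  return nil if existing_upload.nil?

  if existing_upload.access_control_post_id == current_post.id &&
     existing_upload.original_sha1.present?
    existing_upload
  end
end
```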
Regression was created here:
https://github.com/discourse/discourse/pull/8750
When a tag or category is added and the user is watching that category/tag,
we changed the notification type to `edited` instead of `new post`.
However, the logic here should be a little more sophisticated.
If the user has already seen the post, the notification should be `edited`.
However, when the user hasn't yet seen the post, the notification should be
"new reply". A case for that is when, for example, a topic is in a private
category and set for publishing later. In that case we modify an
existing topic, but for the user it is like a new post.
Discussion on meta:
https://meta.discourse.org/t/publication-of-timed-topics-dont-trigger-new-topic-notifications/139335/13
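A minimal sketch of the intended decision (`seen_post?` is a hypothetical helper; the type names are the usual Discourse ones):

```
# Sketch only: pick the notification type based on whether the user has
# already seen the post.
def notification_type_for(user, post)
  if seen_post?(user, post)
    Notification.types[:edited]
  else
    Notification.types[:posted] # i.e. treat it as a new post/reply for this user
  end
end
```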
Previously the badge was granted one month after the last time the badge was granted, and the exact date shifted by one day each month. The new logic tries to always grant the badge at the beginning of a new month by looking at new users of the previous month. The "granted at" date is set to the end of the previous month.
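For illustration, the previous-month window and the backdated "granted at" date can be computed like this (plain Ruby sketch, not the job's actual code):

```
require "date"

today            = Date.today
this_month_start = Date.new(today.year, today.month, 1)
prev_month_end   = this_month_start - 1  # last day of the previous month
prev_month_start = Date.new(prev_month_end.year, prev_month_end.month, 1)

# New users created in [prev_month_start, this_month_start) receive the badge,
# with granted_at backdated to the end of the previous month.
granted_at = prev_month_end
```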
When pull_hotlinked_images tried to run on posts with secure media (which had already been downloaded from external sources) we were getting a 404 when trying to download the image, because the secure endpoint doesn't allow anonymous downloads.
Also, we were getting into an infinite loop of pull_hotlinked_images because the job didn't consider the secure media URLs as already "downloaded", so it kept trying to download them over and over.
In this PR I have also refactored secure-media-upload URL checks and mutations into a single source of truth in Upload, adding a SECURE_MEDIA_ROUTE constant to check URLs against.
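A minimal sketch of that single source of truth (the PR adds the constant to Upload; the module and route value here are illustrative):

```
module UploadSecurity
  SECURE_MEDIA_ROUTE = "secure-media-uploads"

  # Treat URLs under the secure media route as already downloaded so the
  # pull_hotlinked_images job neither 404s on them nor loops forever.
  def self.secure_media_url?(url)
    url.include?(SECURE_MEDIA_ROUTE)
  end
end
```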
There is a feature where, when a tag or category is added to a topic,
users who are watching that category or tag are notified.
The problem is that it uses the default notification type "new post".
It would be better to use "new post" only when there really is a new
post and "edited" when categories or tags were modified.
### General Changes and Duplication
* We now consider a post `with_secure_media?` if it is in a read-restricted category.
* When uploading we now set an upload's secure status straight away.
* When uploading if `SiteSetting.secure_media` is enabled, we do not check to see if the upload already exists using the `sha1` digest of the upload. The `sha1` column of the upload is filled with a `SecureRandom.hex(20)` value which is the same length as `Upload::SHA1_LENGTH`. The `original_sha1` column is filled with the _real_ sha1 digest of the file (see the sketch after this list).
* Whether an upload `should_be_secure?` is now determined by whether the `access_control_post` is `with_secure_media?` (if there is no access control post then we leave the secure status as is).
* When serializing the upload, we now cook the URL if the upload is secure. This is so it shows up correctly in the composer preview, because we now set the secure status at upload time.
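A rough sketch of the sha1 behaviour described in the bullets above (plain Ruby, illustrative only):

```
require "digest"
require "securerandom"

secure_media_enabled = true # e.g. SiteSetting.secure_media

file_digest = Digest::SHA1.file("image.png").hexdigest

if secure_media_enabled
  sha1          = SecureRandom.hex(20) # random, but the same 40-char length as a sha1
  original_sha1 = file_digest          # the real digest of the file
else
  sha1          = file_digest          # allows deduplicating uploads by sha1
  original_sha1 = nil
end
```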
### Viewing Secure Media
* The secure-media-upload URL takes the post that the upload is attached to into account via `Guardian.can_see?` for access permissions (sketched below).
* If there is no `access_control_post` then we just deliver the media. This should be a rare occurrence and shouldn't cause issues, as the `access_control_post` is set when `link_post_uploads` is called via `CookedPostProcessor`.
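A sketch of that access check (method and helper names are illustrative, not the actual controller code):

```
def serve_secure_media(upload, guardian)
  post = upload.access_control_post

  if post.nil? || guardian.can_see?(post)
    deliver_media(upload) # hypothetical delivery helper
  else
    render_403            # hypothetical "forbidden" response
  end
end
```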
### Removed
We no longer do any of these because we do not reuse uploads by sha1 if secure media is enabled.
* We no longer have a way to prevent cross-posting of a secure upload from a private context to a public context.
* We no longer have to set `secure: false` for uploads when uploading for a theme component.
* UI: Mass grant a badge from the admin ui
* Send the uploaded CSV and badge ID to the backend
* Read the CSV and grant the badge in batches (see the sketch after this list)
* UX: Communicate the result to the user
* Don't award if badge is disabled
* Create a 'send_notification' method to remove duplicated code, slightly shrink badge image. Replace router transition with href.
* Dynamically discover current route
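A sketch of the CSV batching step (helper names and column layout are assumptions, not the actual backend code):

```
require "csv"

BATCH_SIZE = 200
badge_id   = 42 # assumed: the badge ID sent from the admin UI

# One username or email per row in the uploaded CSV.
entries = CSV.read("grant_badge.csv").map { |row| row.first.to_s.strip }.reject(&:empty?)

entries.each_slice(BATCH_SIZE) do |batch|
  batch.each { |entry| grant_badge_to(entry, badge_id) } # stand-in for the real granter
end
```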
The secure media functionality relied on `SiteSetting.enable_s3_uploads?` which, as we found in dev, did not take into account global S3 settings via `GlobalSetting.use_s3?`. We now use `SiteSetting.Upload.enable_s3_uploads` instead to be more consistent.
Also, we now validate `enable_s3_uploads` changes, because if `GlobalSetting.use_s3?` is true users should NOT be enabling S3 uploads manually.
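The intent of that validation, as a sketch (not the actual validator class):

```
# Reject manually enabling S3 uploads when S3 is already forced globally.
def enable_s3_uploads_valid?(new_value)
  return true unless new_value
  !GlobalSetting.use_s3?
end
```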
I introduced a regression here: 17366d3bcc (diff-ddeebb36d131f89ca91be9d04c2baefaR10)
When a tag is added, people watching that specific tag are notified, but so are people watching the specific category.
Therefore, `notify_post_users` should accept options specifying who should be notified (a sketch follows below).
So when a `category` is added to the topic, users watching the topic and users watching the category are notified.
When a `tag` is added to the topic, users watching the topic and users watching the tag are notified.
Finally, when a new post is created, everybody is notified: topic watchers, category watchers, and tag watchers.
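A sketch of the shape this could take (keyword names are illustrative, not the exact arguments added to `notify_post_users`):

```
# Illustrative only: let callers choose which watcher groups to notify.
def notify_post_users(post, notified_users,
                      include_topic_watchers: true,
                      include_category_watchers: true,
                      include_tag_watchers: true)
  watchers = []
  watchers.concat(topic_watchers(post.topic))    if include_topic_watchers
  watchers.concat(category_watchers(post.topic)) if include_category_watchers
  watchers.concat(tag_watchers(post.topic))      if include_tag_watchers
  # ...create notifications for `notified_users` plus `watchers`...
end
```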
* Fix user title logic when badge name customized
* Fix an issue where a user's title was not considered a badge-granted title when the user used a badge for their title and the badge name was customized. This affected the effectiveness of revoke_ungranted_titles!, which only operates on badge_granted_titles.
* When a user's title is set, it is now considered a badge_granted_title if the badge name OR the badge's custom name from TranslationOverride is the same as the title (see the sketch after this list)
* When a user's badge is revoked we now also revoke their title if the user's title matches the badge name OR the badge custom name from TranslationOverride
* Add a user history log when the title is revoked to remove confusion about why titles are revoked
* Add granted_title_badge_id to user_profile; now, when we set badge_granted_title on a user profile while updating a user's title based on a badge, we also remember which badge matched the title
* When badge name (or custom text) changes update titles of users in a background job
* When the name of a badge changes, or in the case of system badges when their custom translation text changes, we need to update the title of all corresponding users who have a badge_granted_title and a matching granted_title_badge_id. In the case of system badges we first need to get the proper badge ID based on the translation key, e.g. badges.regular.name
* Add migration to backfill all granted_title_badge_ids for both normal badge name titles and titles using custom badge text.
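A sketch of the matching rule (with a hypothetical `custom_badge_name` helper standing in for the TranslationOverride lookup):

```
# Illustrative: a title counts as badge-granted when it matches either the
# badge's name or its customized name from TranslationOverride.
def badge_granted_title?(title, badge)
  return false if title.blank?
  title == badge.name || title == custom_badge_name(badge)
end
```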
Issue was mentioned in this [meta topic](https://meta.discourse.org/t/send-a-notification-to-watching-users-when-adding-tag/125314)
It is working well when a category is changed, because the NotifyCategoryChange job already had that code:
```
if post&.topic&.visible?
post_alerter = PostAlerter.new
post_alerter.notify_post_users(post, User.where(id: args[:notified_user_ids]))
post_alerter.notify_first_post_watchers(post, post_alerter.category_watchers(post.topic))
end
```
The NotifyTagChange job was missing the `notify_post_users` call, so it only worked when your notification level was set to `watching first post`.
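The fix mirrors that snippet in the NotifyTagChange job, roughly (a sketch based on the code above, not a verbatim diff):

```
if post&.topic&.visible?
  post_alerter = PostAlerter.new
  post_alerter.notify_post_users(post, User.where(id: args[:notified_user_ids]))
  post_alerter.notify_first_post_watchers(post, post_alerter.tag_watchers(post.topic))
end
```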
Previously, every hour we would run a full scan of the entire DB searching
for expired uploads that need to be moved to the tombstone folder.
This commit amends it so we only run the job twice per
clean_orphan_uploads_grace_period_hours.
There is an upper bound of 7 days, so even if the grace period is set really
high it will still run at least once a week.
By default we have a 48 hour grace period, so this amends the cleanup to run
daily instead of hourly. This eliminates 23 of the 24 daily runs of this
ultra expensive query.
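In effect, the interval between runs works out like this (illustrative Ruby, not the scheduler code itself):

```
SEVEN_DAYS_IN_HOURS = 7 * 24

# Run the cleanup twice per grace period, but never less than weekly.
def cleanup_interval_hours(grace_period_hours)
  [grace_period_hours / 2, SEVEN_DAYS_IN_HOURS].min
end

cleanup_interval_hours(48)   # => 24  (daily, with the default 48 hour grace period)
cleanup_interval_hours(2000) # => 168 (still at least once a week)
```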
Zeitwerk simplifies working with dependencies in dev and makes reloading class chains easier.
We no longer need to use Rails "require_dependency" anywhere and can instead use standard
Ruby patterns to require files.
This is a far reaching change and we expect some followups here.
Previously, calculating thresholds for reviewables was done based on the
50th and 85th percentile across all reviewables. However, many forum
owners provided feedback that these thresholds were too easy to hit, in
particular when it came to auto hiding content.
The calculation has been adjusted to base the priorities on reviewables
that have a minimum of 2 scores (flags). This should push the number of
flags required to hide something higher than before.
On forums with very few flags you don't want to calculate averages
because they won't be very useful. Stick with the defaults until we have
at least 15 reviewables.
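A sketch of the adjusted calculation in plain Ruby (illustrative only; the real implementation differs):

```
# Base the thresholds on the 50th/85th percentile of total scores, counting
# only reviewables with at least 2 flags, and keep the defaults until there
# are at least 15 such reviewables.
def priority_thresholds(scores_per_reviewable, defaults)
  eligible = scores_per_reviewable.select { |scores| scores.size >= 2 }
  return defaults if eligible.size < 15

  totals = eligible.map(&:sum).sort
  { medium: percentile(totals, 0.50), high: percentile(totals, 0.85) }
end

def percentile(sorted_values, fraction)
  sorted_values[(fraction * (sorted_values.size - 1)).round]
end
```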
This reverts commit e805d44965.
We now have mechanisms in place to ensure heartbeat will always
be scheduled even if the scheduler is overloaded per: 098f938b
* FIX: Heartbeat check per sidekiq process
* Rename method
* Remove heartbeat queues of previous bootups
* Regis feedback
* Refactor before_start
* Update lib/demon/sidekiq.rb
Co-Authored-By: Régis Hanol <regis@hanol.fr>
* Update lib/demon/sidekiq.rb
Co-Authored-By: Régis Hanol <regis@hanol.fr>
* Expire redis keys after 3600 seconds
* Don't use redis to store the list of queues
DEV: deprecate `invite.via_email` in favor of `invite.emailed_status`
This commit adds a new column `emailed_status` to the `invites` table for
tracking email sending status.
0 - not required
1 - pending
2 - bulk pending
3 - sending
4 - sent
For normal email invites, the invite record is created with emailed_status
set to 'pending'.
When bulk invites are sent, the invite record is created with emailed_status
set to 'bulk pending'.
For invites that generate a link, the invite record is created with
emailed_status set to 'not required'.
When the invite email is queued, emailed_status is updated to 'sending'.
Once the email is sent via the `InviteEmail` job, the invite's emailed_status
is updated to 'sent'.
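For reference, the statuses above as a Ruby mapping (a sketch; the exact constant name and location in the codebase may differ):

```
EMAILED_STATUS = {
  not_required: 0,
  pending:      1,
  bulk_pending: 2,
  sending:      3,
  sent:         4
}

# Typical transitions, per the description above:
#   normal email invite -> :pending, then :sending, then :sent
#   bulk invite         -> :bulk_pending
#   invite link         -> :not_required
```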