Within 24 hours of signing up, new users were losing their
default trust level of 3. With this fix, demotions from
trust level 3 won't happen when the "default trust level"
setting is 3 or 4.
Previously, all topic posters would be added, which could lead to major performance issues. Now, if there are too many posters, only the acting user is added as a participant.
The process_post job uses CookedPostProcessor, which also uses a
DistributedMutex. There's no good reason for the timeout of the outer
lock to be smaller than the timeout of the inner lock.
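As a rough illustration of the constraint (a sketch only, assuming `DistributedMutex.synchronize` accepts a `validity` lock-timeout argument, with illustrative key names, durations, and a `post` variable in scope):

```ruby
# Sketch: the outer lock held for the whole job must live at least as long as
# the inner lock taken inside CookedPostProcessor, otherwise the outer lock
# can expire while the inner work is still running.
OUTER_VALIDITY = 10.minutes # ActiveSupport durations, as in a Rails app
INNER_VALIDITY = 1.minute   # must not exceed OUTER_VALIDITY

DistributedMutex.synchronize("process_post_#{post.id}", validity: OUTER_VALIDITY) do
  # ... CookedPostProcessor takes its own, shorter-lived lock internally ...
  DistributedMutex.synchronize("cook_post_#{post.id}", validity: INNER_VALIDITY) do
    # cooking / post-processing work happens here
  end
end
```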
If the “secure media” site setting is enabled then ALL files uploaded to Discourse (images, video, audio, pdf, txt, zip etc. etc.) will follow the secure media rules. The “prevent anons from downloading files” setting will no longer have any bearing on upload security. Basically, the feature will more appropriately be called “secure uploads” instead of “secure media”.
This is being done because there are communities out there that would like all attachments and media to be secure based on category rules but still allow anonymous users to download attachments in public places, which is not possible in the current arrangement.
* This PR implements the scheduling and notification system for bookmark reminders. Every 5 minutes a scheduled job runs to check for any reminders that need to be sent at or before the current time, limited to **300** reminders at a time (see the sketch after this list). Any leftover reminders will be sent in the next run. This is to avoid having to deal with fickle Sidekiq and reminders in the far-flung future, which would necessitate having a background job anyway to clean up any missing `enqueue_at` reminders.
* If a reminder is sent its `reminder_at` time is cleared and the `reminder_last_sent_at` time is filled in. Notifications are only user-level notifications for now.
* All JavaScript and frontend code related to displaying the bookmark reminder notification is contained here. The reminder functionality is now re-enabled in the bookmark modal as well.
* This PR also implements the "Remind me next time I am at my desktop" bookmark reminder functionality. When the user is on a mobile device, they are able to select this option. When they choose it, we set a key in Redis saying they have a pending at-desktop reminder. The next time they change devices, we check whether the new device is a desktop, and if it is we send the reminders using a DistributedMutex. There is also a job to ensure consistency of these reminders in Redis (in case Redis drops the ball), and the at-desktop reminders expire after 20 days.
* Also in this PR is a fix to delete all Bookmarks for a user via `UserDestroyer`.
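A minimal sketch of the scheduled run described in the first bullet. The column names (`reminder_at`, `reminder_last_sent_at`) come from this PR; the class name, query shape, and the `send_reminder_notification` helper are illustrative:

```ruby
# Runs every 5 minutes; picks up at most 300 due reminders per run and leaves
# the rest for the next run.
class BookmarkReminderNotifications
  MAX_REMINDERS_PER_RUN = 300

  def execute
    Bookmark
      .where("reminder_at IS NOT NULL AND reminder_at <= ?", Time.zone.now)
      .order(:reminder_at)
      .limit(MAX_REMINDERS_PER_RUN)
      .each do |bookmark|
        send_reminder_notification(bookmark) # user-level notification (placeholder helper)
        bookmark.update!(
          reminder_at: nil,                  # cleared so the reminder is not sent again
          reminder_last_sent_at: Time.zone.now
        )
      end
  end
end
```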
If a post is being cooked twice (for example after an edit), there is a
chance that the 'raw' and 'cooked' columns end up inconsistent. This reduces
the chances of that happening.
Rails has odd behavior when calling .delete_all on a has_many relation: the
default behavior is to nullify the foreign key fields instead of actually
'DELETE'ing the records.
Additionally, publishing a shared draft topic creates a PostRevision that the
NotifyPostRevision job picks up which is then promptly deleted.
Use destroy_all when cleaning up the revisions and have the NotifyPostRevision
job tolerate deleted PostRevision records.
This takes a small performance hit (several SQL DELETEs instead of just one)
but shouldn't be too much of an issue (even high cardinalities only range from 30 to 100).
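For context, a small illustration of the Rails behavior described above (model names are stand-ins for the real associations):

```ruby
# With a plain has_many (no :dependent option), collection.delete_all falls
# back to the :nullify strategy: it UPDATEs the foreign key to NULL instead
# of issuing DELETEs.
class Post < ActiveRecord::Base
  has_many :post_revisions
end

post.post_revisions.delete_all  # UPDATE post_revisions SET post_id = NULL WHERE post_id = ...
post.post_revisions.destroy_all # loads each record and DELETEs it, running callbacks
```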
Introduces a new site setting `max_notifications_per_user`.
Out of the box this is set to 10,000. If a user exceeds this number of
notifications, we will delete the oldest notifications, keeping only the newest 10,000.
To disable this safeguard set the setting to 0.
Enforcement happens weekly.
This is in place to protect the system from pathological states where a
single user has enormous amounts of notifications causing various queries
to time out. In practice nobody looks back more than a few hundred notifications.
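Roughly, the weekly enforcement boils down to something like the following (a sketch assuming the limit comes straight from the new site setting; the real query may differ):

```ruby
# For each user over the limit, keep the newest `limit` notifications and
# delete everything older.
limit = SiteSetting.max_notifications_per_user

if limit > 0
  stale_ids = Notification
    .where(user_id: user.id)      # `user` here stands for one offending user
    .order("created_at DESC")
    .offset(limit)                # skip the newest `limit` notifications
    .pluck(:id)

  stale_ids.each_slice(1000) { |ids| Notification.where(id: ids).delete_all }
end
```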
Previously we had many places in the app that called `hostname` to get the
hostname of the server. This commit replaces that pattern in two ways:
1. We cache the result in `Discourse.os_hostname` so it is only ever computed once
2. We prefer Socket.gethostname, which avoids shelling out to an external command
This improves performance, as we are no longer spawning `hostname` processes throughout
the app's lifetime.
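The cached helper is roughly of this shape (approximate; the real implementation may differ slightly):

```ruby
require "socket"

module Discourse
  def self.os_hostname
    @os_hostname ||=
      begin
        Socket.gethostname # plain syscall, no process spawned
      rescue
        `hostname`.strip   # fall back to shelling out only if the syscall fails
      end
  end
end
```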
Basically, say you had already downloaded a certain image from a certain URL
using pull_hotlinked_images and the onebox. The upload would be stored
by its sha1 as an upload record. Whenever you linked to the same URL again
in a post (e.g. in our case an og:image on review.discourse) we would
reuse the original upload record because of the sha1.
However when you turned on secure media this could cause problems as
the first post that uses that upload after secure media is enabled
will set the access control post for the upload to the new post.
Then if the post is deleted every single onebox/link to that same image
URL will fail forever with 403 as the secure-media-uploads URL fails
if the access control post has been deleted.
To fix this, when cooking posts and pulling hotlinked images we only
allow reusing an original upload by URL if its access control post
matches the current post and its original_sha1 is filled in,
meaning it was uploaded AFTER secure media was enabled. Otherwise
we just redownload the media again to be safe, as the URL will always
be new then.
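The reuse check amounts to something like this (a sketch; the variable names and the `download_and_create_upload` helper are illustrative, not the exact CookedPostProcessor code):

```ruby
existing = Upload.find_by(origin: src) # an upload previously downloaded from this URL

reusable =
  existing.present? &&
  existing.access_control_post_id == post.id && # access control post matches this post
  existing.original_sha1.present?               # i.e. uploaded AFTER secure media was enabled

# Otherwise redownload the media to be safe; the new URL gets its own upload record.
upload = reusable ? existing : download_and_create_upload(src)
```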
The regression was introduced here:
https://github.com/discourse/discourse/pull/8750
When a tag or category is added and the user is watching that category/tag,
we changed the notification type to `edited` instead of `new post`.
However, the logic here should be a little more sophisticated.
If the user has already seen the post, the notification should be `edited`.
However, when the user hasn't yet seen the post, the notification should be "new
reply". A case for that is when, for example, a topic is in a private
category and set for publishing later. In that case we modify an
existing topic, but for the user it is like a new post.
Discussion on meta:
https://meta.discourse.org/t/publication-of-timed-topics-dont-trigger-new-topic-notifications/139335/13
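The intended rule can be sketched like this (a simplification; `Notification.types` and `TopicUser#last_read_post_number` exist in Discourse, the rest is illustrative):

```ruby
# Notify as "edited" only if the user has already read this post;
# otherwise treat it as a brand new post for them.
already_seen = TopicUser
  .where(user_id: user.id, topic_id: post.topic_id)
  .where("last_read_post_number >= ?", post.post_number)
  .exists?

notification_type = already_seen ? Notification.types[:edited] : Notification.types[:posted]
```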
Previously the badge was granted one month after the last time the badge was granted. The exact date shifted by one day each month. The new logic tries to grant the badge always at the beginning of a new month by looking at new users of the previous month. The "granted at" date is set to the end of the previous month.
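With ActiveSupport the new date math is roughly (illustrative only):

```ruby
# Grant at the start of a new month, looking back at the previous month,
# and backdate "granted at" to the end of that month.
previous_month = 1.month.ago
range_start    = previous_month.beginning_of_month
range_end      = previous_month.end_of_month
granted_at     = range_end
```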
With this change the script:
* Actually removes original large-sized images
* Doesn't save processed files if their size has increased
* Prevents inconsistent state
When pull_hotlinked_images tried to run on posts with secure media (which had already been downloaded from external sources) we were getting a 404 when trying to download the image because the secure endpoint doesn't allow anon downloads.
Also, we were getting into an infinite loop of pull_hotlinked_images because the job didn't consider the secure media URLs as "downloaded" already, so it kept trying to download them over and over.
In this PR I have also refactored the secure-media-upload URL checks and mutations into a single source of truth in `Upload`, adding a `SECURE_MEDIA_ROUTE` constant to check URLs against (see the sketch below).
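The single-source-of-truth check in `Upload` looks roughly like this (a sketch; the constant's exact value and the helper name are assumptions):

```ruby
class Upload < ActiveRecord::Base
  SECURE_MEDIA_ROUTE = "secure-media-uploads"

  # One place to decide whether a URL is already a secure media URL, so jobs
  # like pull_hotlinked_images can treat it as "already downloaded".
  def self.secure_media_url?(url)
    url.include?(SECURE_MEDIA_ROUTE)
  end
end
```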
Add TopicUploadSecurityManager to handle post moves. When a post moves around, or a topic changes between categories or between public and private message status, the uploads connected to posts in the topic need to have their secure status updated depending on the security context the topic now lives in.
Let's say post #2 quotes post #1. If a user decides to quote the
quote in post #2, it should keep the information of post #1
("user_1, post: 1, topic: X") instead of replacing it with the current post's
info ("user_2, post: 2, topic: X").
There is a feature where, when a tag or category is added to a topic,
users who are watching that category or tag are notified.
The problem is that it uses the default notification type "new post".
It would be better to use "new post" only when there really is a new
post, and "edited" when categories or tags were modified.
### General Changes and Duplication
* We now consider a post `with_secure_media?` if it is in a read-restricted category.
* When uploading we now set an upload's secure status straight away.
* When uploading, if `SiteSetting.secure_media` is enabled, we do not check whether the upload already exists using the `sha1` digest of the upload. The `sha1` column of the upload is filled with a `SecureRandom.hex(20)` value, which is the same length as `Upload::SHA1_LENGTH`, and the `original_sha1` column is filled with the _real_ sha1 digest of the file (see the sketch after this list).
* Whether an upload `should_be_secure?` is now determined by whether the `access_control_post` is `with_secure_media?` (if there is no access control post then we leave the secure status as is).
* When serializing the upload, we now cook the URL if the upload is secure. This is so it shows up correctly in the composer preview, because we set the secure status at upload time.
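A simplified sketch of the sha1 handling from the list above (not the exact Discourse code; `file` stands for the uploaded tempfile):

```ruby
require "digest"
require "securerandom"

real_sha1 = Digest::SHA1.file(file.path).hexdigest

attrs =
  if SiteSetting.secure_media
    {
      sha1: SecureRandom.hex(20), # same length as a real sha1 digest, but random,
                                  # so identical files are never deduplicated/reused
      original_sha1: real_sha1,   # the true digest is preserved here
      secure: true
    }
  else
    { sha1: real_sha1 }           # normal path: deduplicate by the real digest
  end

upload = Upload.create!(attrs)
```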
### Viewing Secure Media
* The secure-media-upload URL will take the post that the upload is attached to into account via `Guardian.can_see?` for access permissions (see the sketch below).
* If there is no `access_control_post` then we just deliver the media. This should be a rare occurrence and shouldn't cause issues, as the `access_control_post` is set when `link_post_uploads` is called via `CookedPostProcessor`.
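The access check behind the secure-media-upload route is roughly as follows (the controller shape, lookup, and `serve_file` helper are illustrative; `Guardian#can_see?` is the real Discourse permission check):

```ruby
def show_secure
  upload = Upload.find_by(sha1: params[:sha1])
  raise Discourse::NotFound if upload.blank?

  post = upload.access_control_post

  if post.nil? || Guardian.new(current_user).can_see?(post)
    serve_file(upload)             # no ACL post recorded, or the user may see the post
  else
    raise Discourse::InvalidAccess # post exists but this user may not see it
  end
end
```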
### Removed
We no longer do any of these because we do not reuse uploads by sha1 if secure media is enabled.
* We no longer have a way to prevent cross-posting of a secure upload from a private context to a public context.
* We no longer have to set `secure: false` for uploads when uploading for a theme component.
* UI: Mass grant a badge from the admin UI
* Send the uploaded CSV and badge ID to the backend
* Read the CSV and grant the badge in batches (see the sketch after this list)
* UX: Communicate the result to the user
* Don't award if badge is disabled
* Create a 'send_notification' method to remove duplicated code, slightly shrink badge image. Replace router transition with href.
* Dynamically discover current route
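A hedged sketch of the batched grant (`BadgeGranter.grant` and `User.find_by_email` exist in Discourse; the wrapper method name and batch size are illustrative):

```ruby
require "csv"

def mass_award(badge, csv_path, batch_size: 200)
  return unless badge.enabled? # don't award if the badge is disabled

  CSV.foreach(csv_path).each_slice(batch_size) do |rows|
    rows.each do |row|
      email = row.first.to_s.strip.downcase
      next if email.empty?

      user = User.find_by_email(email)
      BadgeGranter.grant(badge, user) if user
    end
  end
end
```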
The following methods have long been deprecated in Ruby due to flaws in their implementation, per http://blade.nagaokaut.ac.jp/cgi-bin/vframe.rb/ruby/ruby-core/29293?29179-31097:
URI.escape
URI.unescape
URI.encode
URI.unencode
escape/encode are just aliases for one another. This PR uses the Addressable gem to replace these methods with its own encode, unencode, and encode_component methods where appropriate.
I have put all references to Addressable::URI into the UrlHelper to keep them corralled in one place and make changes to this implementation easier.
Addressable is now also an explicit gem dependency.
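For reference, the Addressable methods used as replacements behave like this (illustrative usage; in this PR the calls live behind UrlHelper as described above):

```ruby
require "addressable/uri"

Addressable::URI.encode("https://example.com/path with spaces")
# => "https://example.com/path%20with%20spaces"

Addressable::URI.unencode("path%20with%20spaces")
# => "path with spaces"

# encode_component escapes everything outside the given character class,
# e.g. spaces in a query string component:
Addressable::URI.encode_component(
  "a param with spaces",
  Addressable::URI::CharacterClasses::QUERY
)
```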
We like to stay as close as possible to the latest RuboCop because the cops
keep getting better.
This update required some code changes; specifically, the new default is to avoid
explicit returns where an implicit return does the job.
This update also renames a few rules.
When uploading a file to a theme component, if the file already exists and has already been marked as secure, we now automatically mark the file as secure: false, change the ACL, and log the action as the user (and also rebake the posts for the upload).