* DEV: Reserve webhook event types to be used in plugins
Based on feedback on the following PRs:
https://github.com/discourse/discourse-solved/pull/85
https://github.com/discourse/discourse-assign/pull/61
This commit reserves IDs to be used for webhook event types to ensure
that some other webhook or plugin doesn't end up using the same ID (see
the sketch at the end of this entry).
* Fix broken test
I don't think this test has to cover ALL event types to verify that this
feature is working. Now that we have added event types that plugins use,
this test was failing due to missing fabricators that exist only in the
respective plugins.
* Remove the loop and just test the first record
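A hypothetical sketch of the reservation; the numeric values below are invented for the example, not the actual reserved IDs:

```ruby
# Illustrative only: core reserves constants on WebHookEventType so that
# plugins such as discourse-solved and discourse-assign can't collide
# with core or with each other. The IDs here are made up.
class WebHookEventType < ActiveRecord::Base
  TOPIC  = 1
  POST   = 2
  # ... existing core event types ...
  SOLVED = 12 # reserved for discourse-solved
  ASSIGN = 13 # reserved for discourse-assign
end
```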
Introduces `/user-cards.json`
Also allows the client-side user model to be passed an existing promise when loading, so that multiple models can share the same AJAX request
Meta: https://meta.discourse.org/t/improve-error-message-when-not-including-name-setting-up-totp/143339
* When the user creates a TOTP second factor method, we want
to show them a nicer error if they forget to add a name
or the code from the app, instead of the missing-parameter error
* Also add a client-side check for this and for the security key name;
no need to bother the server if we can help it
Anonymous users could query the invite JSON and see counts and
summaries, which is not allowed in the UX of Discourse.
This commit has those endpoints return a 403 unless the user is
allowed to invite.
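A minimal sketch of the guard, using the existing Guardian check (the before_action name is illustrative):

```ruby
# Sketch: invite endpoints raise Discourse::InvalidAccess (rendered as a
# 403) unless the current user is allowed to invite.
before_action :ensure_can_see_invites

def ensure_can_see_invites
  raise Discourse::InvalidAccess.new unless guardian.can_invite_to_forum?
end
```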
When secure media is enabled and an attachment is marked as secure, we want to use the full URL instead of the short URL so we get the same access-control-post protections as secure media uploads.
* Add uploads:sync_s3_acls rake task to ensure the ACLs in S3 have the correct setting (public-read or private) based on upload security (see the sketch after this list)
* Improved uploads:disable_secure_media to be more efficient and provide better messages to the user.
* Rename uploads:ensure_correct_acl task to uploads:secure_upload_analyse_and_update as it does more than check the ACL
* Many improvements to uploads:secure_upload_analyse_and_update
* Make sure that upload.access_control_post is unscoped so deleted posts are still fetched, because they still affect the security of the upload.
* Add escape hatch for capture_stdout in the form of RAILS_ENABLE_TEST_STDOUT. If provided, the capture_stdout code will be ignored, so you can see the output if you need to.
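A minimal sketch of the ACL sync task, assuming the S3 store exposes an update_upload_ACL method:

```ruby
# Sketch: walk every upload and re-apply the ACL matching its current
# security flag (private when secure, public-read otherwise).
task "uploads:sync_s3_acls" => :environment do
  raise "This task only makes sense for external (S3) storage" unless Discourse.store.external?

  Upload.find_each do |upload|
    Discourse.store.update_upload_ACL(upload)
  end
end
```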
* FIX: We need to skip users with associated reviewables when auto-approving them
* Update spec/initializers/track_setting_changes_spec.rb
Co-authored-by: Robin Ward <robin.ward@gmail.com>
Introduces a new site setting `max_notifications_per_user`.
Out of the box, this is set to 10,000. If a user exceeds this number of
notifications, we will delete the oldest notifications, keeping only 10,000.
To disable this safeguard set the setting to 0.
Enforcement happens weekly.
This is in place to protect the system from pathological states where a
single user has enormous amounts of notifications causing various queries
to time out. In practice nobody looks back more than a few hundred notifications.
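The actual scheduled job may differ, but the weekly enforcement is roughly:

```ruby
# Sketch: for each user over the limit, delete everything older than
# their newest `limit` notifications. A limit of 0 disables the cleanup.
limit = SiteSetting.max_notifications_per_user

if limit > 0
  User.find_each do |user|
    # id of the first notification past the keep window, if any
    threshold = user.notifications.order(id: :desc).offset(limit).limit(1).pluck(:id).first
    user.notifications.where("id <= ?", threshold).delete_all if threshold
  end
end
```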
When looking for the first paragraph with content in a post,
it was matching the lightboxed image paragraph as "<p></p>".
Fix that and other potential empty paragraphs with the
p:not(:empty) selector.
Add a new selector to find the image links in lightboxed
images as valid content for emails.
* Also fixes an issue where, if a WebP image was downloaded as a
hotlinked image, marked secure, and then sent in an email, it was
not being redacted because WebP was not a supported media format
in FileHelper
* WebP was originally removed as an image format in
https://github.com/discourse/discourse/pull/6377
and there was a spec to make sure a .bin WebP
file did not get renamed from its type to webp.
However, we want to support WebP images now to make
sure they are properly redacted if secure media is
on, so change the example in the spec to use TIFF,
another banned format, instead
* Attachments (non-media files) were being marked as secure if just
SiteSetting.prevent_anons_from_downloading_files was enabled. This
was not correct: nothing should be marked as actually "secure" in
the DB unless secure media is enabled (see the sketch below)
* Also add a proper standalone spec file for the upload security class
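A simplified sketch of the corrected rule (the method shape is illustrative):

```ruby
# Sketch: prevent_anons_from_downloading_files alone no longer makes an
# attachment "secure" in the DB; secure media must be enabled first.
def should_be_secure?(upload)
  return false unless SiteSetting.secure_media?
  # ...the remaining checks (PM / private category, attachment type, etc.)
  true
end
```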
This is because the TOTP gem treats a colon in the issuer as an
addressable protocol separator. The solution for now is to remove the
colon from the issuer name.
Changing the issuer changes the token values, but it was already
completely broken for colons, so this should not break anyone new.
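A sketch with the rotp gem (ROTP::Base32.random_base32 is the rotp < 6 helper; newer versions call it .random):

```ruby
require "rotp"

# Sketch: strip colons from the issuer before building the otpauth:// URI,
# since a colon in the issuer breaks the generated URL.
secret = ROTP::Base32.random_base32
issuer = "My Site: Staging".gsub(":", "") # => "My Site Staging"
totp   = ROTP::TOTP.new(secret, issuer: issuer)
totp.provisioning_uri("user@example.com")
```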
A follow-up correction to https://github.com/discourse/discourse/pull/9001.
When an admin changes a staff member's email, still enforce the old-email confirmation. Only allow auto-confirmation of a new email by an admin if the target user is not also an admin. If an admin gets locked out of their email, the site admin can use the rails console to solve the issue in a pinch.
When an admin changes a user's email from that user's preferences page:
* The user will not be sent an email to confirm that their
email is changing. They will be sent a reset password email
so they can set the password for their account at the new
email address.
* The user will still be sent an email to their old email to inform
them that it was changed.
* Admin and staff users still need to follow the same old + new
confirm process, as do users changing their own email.
If a group mention could trigger a notification, the preview gave it an `<a>`
tag with the `.notify` class, but when cooked it would display differently.
This patch makes the server-side cooking match the client preview.
Following on from my earlier PR #8973, also reject an upload as secure if its origin URL contains images/emoji. We still check Emoji.all first to try to be canonical.
This may be a little heavy-handed (e.g. if an external URL followed this same path it would be a false positive), but there are a lot of emoji aliases where the actual emoji URL is one image, yet another image that should not be secure aliases it. For example, slight_smile.png does not show up in Emoji.all BUT slightly_smiling_face does, and it aliases slight_smile; e.g. /images/emoji/twitter/slight_smile.png?v=9 and /images/emoji/twitter/slightly_smiling_face.png?v=9 are equivalent.
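A simplified sketch of that check order (the method name is illustrative):

```ruby
# Sketch: an upload whose origin points at a packaged emoji path is never
# secure. Emoji.all is checked first to stay canonical; the path check
# catches aliases like slight_smile that Emoji.all misses.
def based_on_regular_emoji?(origin)
  return false if origin.blank?
  return true if Emoji.all.any? { |emoji| origin.end_with?(emoji.url) }
  origin.include?("images/emoji")
end
```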
The rake task was broken, because the addition of the
UploadSecurity check returned true/false instead of the
upload ID to determine which uploads to set secure.
Also, it was rebaking the posts in the wrong place and
pretty inefficiently at that, and it was rebaking before
the upload was changed to secure in the DB.
This also updates the task to set the access_control_post_id
for all uploads. The first post the upload is linked to is used
for the access control. If the upload doesn't get changed to
secure, this doesn't affect anything.
Added a spec for the rake task to cover common cases.
Sometimes PullHotlinkedImages pulls down a site emoji and creates a new upload record for it. When this happens, the upload is not created via the normal path that custom emoji follow, so we need to check in UploadSecurity whether the origin of the upload is based on a regular site emoji. If it is, we never want to mark it as secure (we don't want emoji to become inaccessible from other posts because of secure media).
This only became apparent because the uploads:ensure_correct_acl rake task uses UploadSecurity to check whether an upload should be secure, which would have marked a whole bunch of regular old emoji as secure.
Now if a group is visible but unmentionable, users can search for it
when composing by typing with `@`, but it will be rendered without the
grey background color.
It will also no longer pop up a JIT warning saying "You are about to
mention X people" because the group will not be mentioned.
After adding a tag as a synonym of another tag,
both tags will have the wrong topic counts. It's
corrected within 12 hours by the EnsureDbConsistency
job. This fix ensures the topic counts are updated
much sooner.
* FIX: when an unread reply notification exists, don't create a new one
From time to time, a user creates a reply post and then wants to add additional details, so they edit the existing post and, for example, add a quote from a previous one.
In that situation, if the user to whom the reply was directed already has an unread notification, we should not create a new one.
That behaviour was mentioned here: https://meta.discourse.org/t/reply-then-edit-to-add-quote-notification-redundancy/138358
* FIX: don't create a new notification if one already exists (see the sketch below)
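A sketch of the guard, assuming the standard Notification columns:

```ruby
# Sketch: don't stack a second "replied" notification on top of an unread
# one for the same post.
def create_reply_notification(user, post)
  return if Notification.exists?(
    user_id: user.id,
    topic_id: post.topic_id,
    post_number: post.post_number,
    notification_type: Notification.types[:replied],
    read: false
  )

  Notification.create!(
    user_id: user.id,
    topic_id: post.topic_id,
    post_number: post.post_number,
    notification_type: Notification.types[:replied],
    data: { display_username: post.user.username }.to_json
  )
end
```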
* Because custom emoji count as post "uploads" we were
marking them as secure when updating the secure status for post uploads.
* We were also giving them an access control post id, which meant
broken image previews from 403 errors in the admin custom emoji list.
* We now check whether an upload is used as a custom emoji and, if so, do not
assign the access control post and never mark it as secure (see the sketch below).
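A sketch of the exclusion; update_secure_status stands in for whatever recomputes the flag:

```ruby
# Sketch: skip uploads that back a custom emoji; they never get an
# access control post and are never marked secure.
custom_emoji_upload_ids = CustomEmoji.pluck(:upload_id)

post.uploads.each do |upload|
  next if custom_emoji_upload_ids.include?(upload.id)
  upload.update!(access_control_post_id: post.id) if upload.access_control_post_id.nil?
  upload.update_secure_status
end
```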
### UI Changes
If `SiteSetting.enable_bookmarks_with_reminders` is enabled:
* Clicking "Bookmark" on a topic will create a new Bookmark record instead of a post + user action
* Clicking "Clear Bookmarks" on a topic will delete all the new Bookmark records on a topic
* The topic bookmark buttons control the post bookmark flags correctly and vice-versa
Disabled selecting the "reminder type" for bookmarks in the UI because the backend functionality (sending users notifications, etc.) is not done yet.
### Other Changes
* Added delete bookmark route (but no UI yet)
* Added a rake task to sync the old PostAction bookmarks to the new Bookmark table, which can be run as many times as we want for a site (it will not create duplicates); see the sketch below.
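A sketch of that sync task; the task name and Bookmark columns are assumptions:

```ruby
# Sketch: copy old PostAction bookmarks into the new table. Re-running is
# safe because find_or_create_by! will not duplicate rows.
task "bookmarks:sync_bookmarks" => :environment do
  bookmark_type = PostActionType.types[:bookmark]

  PostAction.where(post_action_type_id: bookmark_type, deleted_at: nil).find_each do |pa|
    post = Post.find_by(id: pa.post_id)
    next if post.nil?

    Bookmark.find_or_create_by!(user_id: pa.user_id, post_id: post.id, topic_id: post.topic_id)
  end
end
```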
The commit 75069ff179
allows users to remove their primary group, but this introduced a bug:
editing any other profile info, like location or website (a form on a
separate page from the flair dropdown), would cause the selected flair
to be removed.
This fix ensures that if the `primary_group_id` parameter is missing
from the update payload it does not remove the existing
`primary_group_id`. It will only remove the `primary_group_id` if it is
present in the payload and empty.
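A minimal sketch of the fix:

```ruby
# Sketch: only touch primary_group_id when the key is present in the
# payload; an absent key leaves the current value alone.
if attributes.key?(:primary_group_id)
  user.primary_group_id = attributes[:primary_group_id].presence
end
```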
This is not used in core or official plugins, and has been printing a deprecation notice since v2.3.0beta4. All OpenID 2.0 code and dependencies have been dropped. The user_open_ids table remains for now, in case anyone has missed the deprecation notice, and needs to migrate their data.
Context at https://meta.discourse.org/t/-/113249
This commit removes logic about spoilers because it should live inside
of the discourse-spoiler-alert plugin.
This PR:
https://github.com/discourse/discourse-spoiler-alert/pull/38
also completely removes spoilers from excerpts in order to keep them
from leaking in topic previews and notifications.
Extracted from #8772
This will allow developers (in rails development mode only) to log pre-loaded JSON app data to the browser console for inspection.
When a tag is restricted to a secured category that the user can't see,
the message was saying that it wasn't restricted to any categories.
Now it will say it's restricted to categories you can't access.
Some auth providers (e.g. Auth0 with default configuration) send the email address in the name field. In Discourse, the name field is made public, so this commit adds a safeguard to prevent emails being made public.
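A sketch of the safeguard; the auth_result fields follow the shape of Discourse's Auth::Result:

```ruby
# Sketch: never accept a "name" that is really the email address.
name  = auth_result.name
email = auth_result.email
name  = nil if name.present? && email.present? && name.casecmp?(email)
```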
* DEV: Use Ember 3.12.2
* Add Ember version to ThemeField's DEPENDENT_CONSTANTS
* DEV: Use `id` instead of `elementId` (See: https://github.com/emberjs/ember.js/issues/18147)
* FIX: Don't leak event listeners (bug introduced in 999e2ff)
For some reason, we have two ways of associating "custom fields" with a new topic:
using 'meta_data' and 'custom_fields'.
However, if we were to provide both arguments, the 'meta_data' would be overwritten
by any 'custom_fields' provided.
This commit ensures we can use both and merges the 'custom_fields' with the 'meta_data'.
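A sketch of the merge; letting 'custom_fields' still win on conflicting keys is an assumption for illustration:

```ruby
# Sketch: combine both sources instead of letting one clobber the other.
meta_data     = opts[:meta_data] || {}
custom_fields = opts[:custom_fields] || {}
topic.custom_fields = meta_data.merge(custom_fields)
```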
This fix ensures that the site setting `post_edit_time_limit` does not
bypass the limit of the site setting `min_trust_to_edit_post`. This
prevents a bug where users who did not meet the minimum trust level to
edit could edit the title of topics.
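A simplified sketch of the combined check:

```ruby
# Sketch: the time-limit grace can never override the trust-level floor.
def can_edit_post?(user, post)
  return false if user.trust_level < SiteSetting.min_trust_to_edit_post
  SiteSetting.post_edit_time_limit <= 0 ||
    post.created_at >= SiteSetting.post_edit_time_limit.minutes.ago
end
```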
Previously you'd get a server side generic error due to a password check
failing. Now the input element has a maxlength attribute and the server
side will respond with a nicer error message if the value is too long.
If our reply tree somehow ends up with cycles or other odd
structures, we only want to consider a reply once, at the first
level in the tree that it appears.
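A sketch of a cycle-safe walk; breadth-first order means the first occurrence is also the shallowest:

```ruby
require "set"

# Sketch: each post is visited once, at the first level it appears,
# so cycles and diamonds in the reply graph cannot recurse.
def reply_tree(root)
  seen   = Set.new([root.id])
  queue  = [[root, 0]]
  result = []

  until queue.empty?
    post, depth = queue.shift
    result << [post, depth]
    post.replies.each do |reply|
      next unless seen.add?(reply.id) # add? returns nil when already seen
      queue << [reply, depth + 1]
    end
  end

  result
end
```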
* DEV: Add data-notification-level attribute to category UI
* Show muted categories on the category page by default
This reverts commit ed9c21e42c.
* Remove redundant spec - muted categories are now visible by default
It seems in some situations replies have been moved to other topics but
the `PostReply` table has not been updated. I will try to fix this in a
follow-up PR, but for now this fix ensures that every time we ask a post
for its replies that we restrict it to the same topic.
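A minimal sketch of the scoping:

```ruby
# Sketch: always restrict a post's replies to its own topic, even if
# stale PostReply rows point elsewhere.
def replies_for(post)
  post.replies.where(topic_id: post.topic_id)
end
```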
This commit adds support for an optional "logout" parameter in the
payload of the /session/sso_provider endpoint. If an SSO Consumer
adds a "logout=true" parameter to the encoded/signed "sso" payload,
then Discourse will treat the request as a logout request instead
of an authentication request. The logout flow works something like
this:
* User requests logout at SSO-Consumer site (e.g., clicks "Log me out!"
on web browser).
* SSO-Consumer site does whatever it does to destroy User's session on
the SSO-Consumer site.
* SSO-Consumer then redirects browser to the Discourse sso_provider
endpoint, with a signed request bearing "logout=true" in addition
to the usual nonce and the "return_sso_url".
* Discourse destroys User's discourse session and redirects browser back
to the "return_sso_url".
* SSO-Consumer site does whatever it does --- notably, it cannot request
SSO credentials from Discourse without the User being prompted to login
again.
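A sketch of the consumer side of this flow; the hosts and shared secret are placeholders, and the signing is the usual SSO HMAC-SHA256:

```ruby
require "base64"
require "cgi"
require "openssl"
require "securerandom"

# Sketch: build a signed sso payload carrying logout=true and redirect
# the browser to Discourse's /session/sso_provider endpoint.
secret  = "sso-shared-secret" # placeholder: your shared SSO secret
raw     = "nonce=#{SecureRandom.hex(16)}" \
          "&return_sso_url=#{CGI.escape("https://consumer.example.com/logged-out")}" \
          "&logout=true"
payload = Base64.strict_encode64(raw)
sig     = OpenSSL::HMAC.hexdigest("sha256", secret, payload)

redirect_url =
  "https://discourse.example.com/session/sso_provider?sso=#{CGI.escape(payload)}&sig=#{sig}"
```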
* The spec checking whether the upload security status changes when a
topic switches between PM and public
is already covered extensively in topic_upload_security_manager_spec.rb
If someone only had security keys enabled, the icon to say they had 2FA enabled would not show in the admin staff user list. It would only show if they had TOTP enabled.
When we change an upload's sha1 (e.g. when resizing images) it won't match the data in the most recent S3 inventory index. With this change, uploads that have been updated since the inventory was generated are ignored.
When FinalDestination is given a URL, it encodes it before doing anything else. However, S3 presigned URLs should not be messed with in any way, otherwise we can end up with 400 errors when downloading the URL, e.g.
<Error><Code>InvalidToken</Code><Message>The provided token is malformed or otherwise invalid.</Message>
The automatically generated signature of a presigned URL covers the exact URL, so the URL must be preserved as-is.
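Roughly the shape of the fix; the detection heuristic shown is an assumption:

```ruby
require "addressable/uri"

# Sketch: a SigV4 presigned URL is passed through untouched, because any
# re-encoding invalidates its signature.
def escape_url(url)
  return url if url.include?("X-Amz-Signature")
  Addressable::URI.encode(url)
end
```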
This should make the importer more resilient to incomplete or damaged
backups. It will disable some validations and attempt to automatically
repair category permissions before importing.
For example, /t/ URLs were being replaced if they contained secure-media-uploads, so if you made a topic called "Secure Media Uploads Are Cool", the View Topic link in the user notifications would be stripped out.
Refactored code so this secure URL detection happens in one place.
When 'categories topics' setting is set to 0, the system will
automatically try to find a value to keep the two columns (categories
and topics) symmetrical.
The value is computed as 1.5x the number of top-level categories, and at
least 5 topics will always be returned (see the sketch below).
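A sketch of the computation:

```ruby
# Sketch: fallback value when the 'categories topics' setting is 0.
def topics_per_category(top_level_category_count)
  [5, (1.5 * top_level_category_count).to_i].max
end

topics_per_category(2) # => 5
topics_per_category(8) # => 12
```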
Previously, if a user somehow created a blank markdown document using tag
tricks (e.g. `<p></p><p></p><p></p><p></p><p></p><p></p>` and so on), we would
completely strip the document down to blank on post process due to a onebox
hack.
Needs a follow-up because I am still unclear about the reason for stripping
empty paragraphs, and it can cause some unclear cases when we re-cook posts.
Basically, say you had already downloaded a certain image from a certain URL
using pull_hotlinked_images and the onebox. The upload would be stored
by its sha as an upload record. Whenever you linked to the same URL again
in a post (e.g. in our case an og:image on review.discourse) we
would reuse the original upload record because of the sha1.
However when you turned on secure media this could cause problems as
the first post that uses that upload after secure media is enabled
will set the access control post for the upload to the new post.
Then if the post is deleted every single onebox/link to that same image
URL will fail forever with 403 as the secure-media-uploads URL fails
if the access control post has been deleted.
To fix this, when cooking posts and pulling hotlinked images, we only
allow reusing an original upload by URL if its access control post
matches the current post and its original_sha1 is filled in,
meaning it was uploaded AFTER secure media was enabled. Otherwise
we just redownload the media again to be safe, as the URL will always
be new then. (See the sketch below.)
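A sketch of the reuse rule:

```ruby
# Sketch: with secure media on, only reuse an existing upload for a
# hotlinked URL when it is already tied to this post and was uploaded
# after secure media was enabled (original_sha1 present).
def can_reuse_hotlinked_upload?(existing_upload, post)
  return true unless SiteSetting.secure_media?
  existing_upload.access_control_post_id == post.id &&
    existing_upload.original_sha1.present?
end
```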
The regression was introduced here:
https://github.com/discourse/discourse/pull/8750
When a tag or category is added and the user is watching that category/tag,
we changed the notification type to `edited` instead of `new post`.
However, the logic here should be a little more sophisticated.
If the user has already seen the post, the notification should be `edited`.
However, when the user hasn't yet seen the post, the notification should be
"new reply". A case for that is when, for example, a topic is in a private
category and set for publishing later. In that case we modify an
existing topic, but for the user it is like a new post.
Discussion on meta:
https://meta.discourse.org/t/publication-of-timed-topics-dont-trigger-new-topic-notifications/139335/13
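A sketch of the seen/unseen decision, using PostTiming as the "has the user read this post" signal:

```ruby
# Sketch: a PostTiming row means the user has already seen the post.
def notification_type_for(user, post)
  seen = PostTiming.exists?(
    topic_id: post.topic_id,
    post_number: post.post_number,
    user_id: user.id
  )
  seen ? Notification.types[:edited] : Notification.types[:posted]
end
```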
Previously the badge was granted one month after the last time the badge was granted. The exact date shifted by one day each month. The new logic tries to grant the badge always at the beginning of a new month by looking at new users of the previous month. The "granted at" date is set to the end of the previous month.
Adds a new route `/u/{username}/card.json`, which has a reduced number of fields. This change is behind a hidden site setting, so we can test compatibility before rolling out.
The new search modifier `in:all` can be used to include both public and personal messages in the same search.
Co-authored-by: adam j hartz <hz@mit.edu>
When pull_hotlinked_images tried to run on posts with secure media (which had already been downloaded from external sources) we were getting a 404 when trying to download the image because the secure endpoint doesn't allow anon downloads.
Also, we were getting into an infinite loop of pull_hotlinked_images because the job didn't consider the secure media URLs as "downloaded" already so it kept trying to download them over and over.
In this PR I have also refactored secure-media-upload URL checks and mutations into a single source of truth in Upload, adding a SECURE_MEDIA_ROUTE constant to check URLs against too.
* FEATURE: Replace existing badge owners when using the bulk award feature
* Use ActiveRecord to sanitize title update query, Change replace checkbox text
Co-Authored-By: Robin Ward <robin.ward@gmail.com>
Co-authored-by: Robin Ward <robin.ward@gmail.com>
* DEV: Add a fake Mutex for concurrency testing with Fibers (sketched at the end of this entry)
* DEV: Support running in sleep order in concurrency tests
* FIX: A separate FallbackHandler should be used for each redis pair
This commit refactors the FallbackHandler and Connector:
* There were two different ways to determine whether the redis master
was up. There is now one way and it is the responsibility of the
new RedisStatus class.
* A background thread would be created whenever `verify_master` was
called unless the thread already existed. The thread would
periodically check the status of the redis master. However, checking
that a thread is `alive?` is an ineffective way of determining
whether it will continue to check the redis master in the future
since the thread may be in the process of winding down.
Now, this thread is created when the recorded master status goes from
up to down. Since this thread runs the only part of the code that is
able to bring the recorded status up again, we ensure that only one
thread is probing the redis master at a time and that there is always
a thread probing redis master when it is recorded as being down.
* Each time the status of the redis master was checked periodically, it
would spawn a new thread and immediately join on it. I assume this
happened to isolate the check from the current execution, but since
the join rethrows exceptions in the parent thread, this was not
effective.
* The logic for falling back was spread over the FallbackHandler and
the Connector. The connector is now a dumb object that delegates
responsibility for determining the status of redis to the
FallbackHandler.
* Previously, failing to connect to a master redis instance when it was
not recorded as down would raise an exception. Now, this exception is
passed to `Discourse.warn_exception` and the connection is made to
the slave.
This commit introduces the FallbackHandlers singleton:
* It is responsible for holding the set of FallbackHandlers.
* It adds callbacks to the fallback handlers for when a redis master
comes up or goes down. Main redis and message bus redis may exist on
different or the same redis hosts and so these callbacks may all
exist on the same FallbackHandler or on separate ones.
These objects are tested using fake concurrency provided by the
Concurrency module:
* An `around(:each)` hook is used to cause each test to run inside a
Scenario so that the test body, mocking cleanup and `after(:each)`
callbacks are run in a different Fiber.
* Therefore, halting the Execution abruptly (so that
the fibers aren't run to completion) prevents the mocking cleanup
and `after(:each)` callbacks from running. I have tried to prevent
this by recovering from all exceptions during an Execution.
* FIX: Create frozen copies of passed in config where possible
* FIX: extract start_reset method and remove method used by tests
Co-authored-by: Daniel Waterworth <me@danielwaterworth.com>
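A rough illustration of the fake Fiber-based Mutex mentioned above; this is not the actual implementation, just the cooperative idea:

```ruby
# Illustrative fake Mutex for deterministic Fiber-based tests: "blocking"
# parks the current fiber, and unlock resumes the next waiter.
class FakeMutex
  def initialize
    @owner   = nil
    @waiters = []
  end

  def lock
    while @owner
      @waiters << Fiber.current
      Fiber.yield # park until a holder releases the lock
    end
    @owner = Fiber.current
  end

  def unlock
    raise ThreadError, "not the owner" unless @owner == Fiber.current
    @owner = nil
    @waiters.shift&.resume
  end

  def synchronize
    lock
    begin
      yield
    ensure
      unlock
    end
  end
end
```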
Add TopicUploadSecurityManager to handle post moves. When a post moves around or a topic changes between categories and public/private message status the uploads connected to posts in the topic need to have their secure status updated, depending on the security context the topic now lives in.
When we were pulling hotlinked images for oneboxes in the CookedPostProcessor, we were using the direct S3 URL, which returned a 403 error and thus did not set widths and heights of the images. We now cook the URL first based on whether the upload is secure before handing off to FastImage.