We previously used ConsolidateNotifications with a threshold of 1 to re-use an existing notification and bump it to the top instead of creating a new one. This produces some jumpiness in the user's notification list, and it relies on updating the `created_at` attribute, which is a bit hacky.
As a better alternative, we're introducing a new plan that deletes all the previous versions of the notification, then creates a new one.
* REFACTOR: Improve support for consolidating notifications.
Before this commit, we didn't have a single way of consolidating notifications. For notifications like group summaries, we manually removed old ones before creating a new one. On the other hand, we used an after_create callback for likes and group membership requests, which caused unnecessary work, as we needed to delete the record we had just created in order to replace it with a consolidated one.
We now have all the consolidation rules centralized in a single place: the consolidation planner class. Other parts of the app looking to create a consolidable notification can do so by calling Notification#consolidate_or_save!, instead of the default Notification#create! method.
Finally, we added two more rules: one for re-using existing group summaries and another for deleting duplicated dashboard-problems PM notifications when the user is tracking the moderator's inbox. Setting the threshold to one forces the planner to apply this rule every time.
I plan to add plugin support for adding custom rules in another PR to keep this one relatively small.
* DEV: Introduces a plugin API for consolidating notifications.
This commit removes the `Notification#filter_by_consolidation_data` scope since plugins may need to define their own criteria. The Plan class now receives two blocks: one to query for an already consolidated notification, which we'll try to update, and another to query for existing ones to consolidate.
It also receives a consolidation window, which accepts an ActiveSupport::Duration object and filters notifications to those created within that window.
This commit adds token_hash and scope columns to the email_tokens table.
token_hash is a replacement for the token column, to avoid storing email
tokens in plaintext, which can pose a security risk. The new scope column
ensures that email tokens cannot be used to perform a different action
than the one intended.
To sum up, this commit:
* Adds token_hash and scope to email_tokens
* Reuses code that schedules critical_user_email
* Refactors EmailToken.confirm and EmailToken.atomic_confirm methods
* Periodically cleans old, unconfirmed or expired email tokens
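As a rough sketch of how hashed, scoped tokens can work (the helper names below are illustrative, not the exact Discourse implementation):

```ruby
require "digest"
require "securerandom"

# Issue a token: the raw value goes into the email link; only its hash is stored.
def issue_email_token(user, scope)
  raw = SecureRandom.hex(16)
  EmailToken.create!(
    user: user,
    email: user.email,
    token_hash: Digest::SHA256.hexdigest(raw),
    scope: scope
  )
  raw
end

# Confirm a token: hash the incoming value and require a matching scope, so a
# password-reset token cannot be replayed to, say, confirm an email change.
def confirm_email_token(raw, scope)
  EmailToken.find_by(token_hash: Digest::SHA256.hexdigest(raw), scope: scope)
end
```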
Use @here to mention all users that are allowed on the topic, directly or
through a group, and who liked or read the topic. Only the first 10 users
will be notified.
We are pushing /notification-alert/#{user_id} and /notification/#{user_id}
messages to MessageBus from both PostAlerter and User#publish_notification_state.
This can cause memory issues on large sites with many users. This commit
stems the bleeding by only sending these alert messages if the user
in question has been seen in the last 30 days, which eliminates a large
chunk of users on some sites.
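A minimal sketch of the guard, assuming a `last_seen_at` timestamp on the user (method and payload names are illustrative):

```ruby
# Skip the MessageBus publish entirely for users not seen in the last 30 days.
def publish_notification_state(user)
  return if user.last_seen_at.nil? || user.last_seen_at < 30.days.ago

  MessageBus.publish(
    "/notification/#{user.id}",
    { unread_notifications: user.unread_notifications },
    user_ids: [user.id]
  )
end
```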
When 31035010af was done, it failed to take into account the case where the
smtp_enabled site setting was true but the topic had no allowed groups / no
incoming email record, which caused errors even for topics that had nothing
to do with group SMTP.
When there are multiple groups on a topic, we were selecting
the first from the topic allowed groups to act as the sender
email address when sending group SMTP replies via PostAlerter.
However, this was not ordered, and since there is no created_at
column on TopicAllowedGroup we cannot order this nicely, which
caused just a random group to be used (based on whatever postgres
decided it felt like that morning).
This commit changes the group used for SMTP sending to be the
group using the email_username of the to address of the first
incoming email for the topic, if there is more than one allowed
group on the topic. Otherwise it just uses the only SMTP-enabled
group.
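A sketch of that selection logic, assuming an `smtp_enabled` flag on groups and the usual IncomingEmail columns (names are illustrative):

```ruby
# Pick the SMTP sender group: match the "to" address of the topic's first
# incoming email against group email_usernames, else use the single
# SMTP-enabled group.
def smtp_sender_group(topic)
  groups = topic.allowed_groups.where(smtp_enabled: true)
  return groups.first if groups.size <= 1

  first_email = IncomingEmail.where(topic_id: topic.id).order(:created_at).first
  to_address = first_email&.to_addresses.to_s.split(";").first
  groups.find_by(email_username: to_address) || groups.first
end
```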
This PR introduces a new `enable_experimental_backup_uploads` site setting (default false and hidden), which when enabled alongside `enable_direct_s3_uploads` will allow for direct S3 multipart uploads of backup .tar.gz files.
To make multipart external uploads work with both the S3BackupStore and the S3Store, I've had to move several methods out of S3Store and into S3Helper, including:
* presigned_url
* create_multipart
* abort_multipart
* complete_multipart
* presign_multipart_part
* list_multipart_parts
Then, S3Store and S3BackupStore either delegate directly to S3Helper or have their own special methods to call S3Helper for these methods. FileStore.temporary_upload_path no longer depends on upload_path and can now be used interchangeably between the stores. A similar change was made on the frontend, moving the multipart-related JS code out of ComposerUppyUpload and into a mixin of its own, so it can also be used by UppyUploadMixin.
Some changes to ExternalUploadManager had to be made here as well. The backup direct uploads do not need an Upload record made for them in the database, so they can be moved to their final S3 resting place when completing the multipart upload.
This changeset is not perfect; it introduces some special cases in UploadController to handle backups that were previously in BackupController, because UploadController is where the multipart routes are located. A subsequent pull request will pull these routes into a module or some other sharing pattern, along with hooks, so the backup controller and the upload controller (and any future controllers that may need them) can include these routes in a nicer way.
Calling create_notification_alert could still send a notification to a
suspended user. This just moves the check for whether the user is suspended
to right before sending the notification.
The file size error messages for max_image_size_kb and
max_attachment_size_kb are shown to the user in the KB
format, regardless of how large the limit is. Since we
are going to support uploading much larger files soon,
this KB-based limit soon becomes unfriendly to the end
user.
For example, if the max attachment size is set to 512000
KB, this is what the user sees:
> Sorry, the file you are trying to upload is too big (maximum
size is 512000KB)
This makes the user do math. In almost all file explorers that
a regular user would be familiar with, the file size is shown
in a format based on the largest applicable unit (e.g. KB, MB, GB).
This commit changes the behaviour to output a humanized file size
instead of the raw KB. For the above example, it would now say:
> Sorry, the file you are trying to upload is too big (maximum
size is 512 MB)
This humanization also handles decimals, e.g. 1536KB = 1.5 MB
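A rough sketch of that humanization (the real code may simply lean on ActiveSupport's number_to_human_size helper; this standalone version reproduces the examples above):

```ruby
def humanize_kb(kb)
  units = %w[KB MB GB TB]
  size = kb.to_f
  unit = 0
  # Step up a unit each time the value crosses 1000 of the current unit.
  while size >= 1000 && unit < units.length - 1
    size /= 1000
    unit += 1
  end
  format("%g %s", size.round(1), units[unit])
end

humanize_kb(512_000) # => "512 MB"
humanize_kb(1536)    # => "1.5 MB"
```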
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary S3 upload into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` endpoint is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. Then, if the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than the one they first specified, or, in the case of multipart uploads, uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
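A sketch of that completion-time check (the constant, method, and error names are illustrative):

```ruby
class SizeMismatchError < StandardError; end

BAN_MINUTES = 5 # assumed default; configurable in the real implementation

def validate_size!(stub, s3_object)
  return if s3_object.content_length == stub.filesize

  s3_object.delete # remove the mismatched file from S3
  Discourse.redis.setex("upload_ban:#{stub.created_by_id}", BAN_MINUTES * 60, "1")
  raise SizeMismatchError,
        "expected #{stub.filesize} bytes, got #{s3_object.content_length}"
end
```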
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
When merging two user accounts, don't merge the source user's email address if the target user is not a human.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
We shouldn't be checking if a user is allowed to do an action in the logger. We should be checking it just before we perform the action. In fact, guardians in the logger can make things even worse in case of a security bug. Let's say we forgot to check user's permissions before performing some action, but we still have a call to the guardian in the logger. In this case, a user would perform the action anyway, and this action wouldn't even be logged!
I've checked all cases and I confirm that we're safe to delete these calls from the logger.
I've added two calls to guardians in admin/user_controller. We didn't have security bugs there, because regular users can't access admin/... routes at all. But it's good to have guardian calls in these methods anyway; neighboring methods have them.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that go to a temporary location in S3 with the direct-to-S3 code; they are eventually deleted either a) when the direct upload is completed or b) after a certain period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file directly to S3 using the presigned URL. Once the upload is complete we go to the next stage.
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100MB, defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download it or generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3, then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby (see the sketch after this list). The browser SHA1 checksum is stored on the object in S3 as metadata and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally we return the serialized upload record back to the client
There are several errors that could happen that are handled by `UploadsController` as well.
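A sketch of the checksum comparison from path 2 above (the metadata key name is an assumption):

```ruby
require "digest"

class ChecksumMismatchError < StandardError; end

def verify_checksum!(s3_object, downloaded_file)
  browser_sha1 = s3_object.metadata["sha1-checksum"]
  return if browser_sha1.nil? || browser_sha1.empty? # some browsers can't generate it

  actual_sha1 = Digest::SHA1.file(downloaded_file.path).hexdigest
  raise ChecksumMismatchError if browser_sha1 != actual_sha1
end
```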
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
In badge queries for 'First Share' and 'Nice/Good/Great Share' badges,
check that the user exists.
For 'Nice+ Share' badges, also grant badges if the number of shares is
equal to the threshold count, to better match the descriptions.
Configuring staged users to watch categories and tags is a way to sign
them up to get many emails. These emails may be unwanted and get marked
as spam, hurting the site's email deliverability.
Users can opt-in to email notifications by logging on to their
account and configuring their own preferences.
If staff need to be able to configure these preferences on behalf of
staged users, the "allow changing staged user tracking" site setting
can be enabled. Default is to not allow it.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
There was a bug with changing timestamps using the topic wrench button. Under some circumstances, a topic was disappearing from the top of the latest tab after changing timestamps. Steps to reproduce:
- Choose a topic on the latest tab (the topic should be created some time ago, but has recent posts)
- Change topic timestamps (for example, move them one day forward).
- Go back to the latest tab and see that topic has disappeared.
This PR fixes this. We were setting topic.bumped_at to the timestamp the user specified in the modal. This is incorrect. Instead, we should be setting topic.bumped_at to the created_at timestamp of the last regular (not a whisper and so on) post on the topic.
Currently when bulk-awarding a badge that can be granted multiple times, users in the CSV file are granted the badge once no matter how many times they're listed in the file and only if they don't have the badge already.
This PR adds a new option to the Badge Bulk Award feature so that it's possible to grant users a badge even if they already have the badge and as many times as they appear in the CSV file.
Mixing multisite and standard specs can lead to issues (e.g. when using `fab!`)
Disabled the upcoming rubocop rule (https://github.com/discourse/rubocop-discourse/pull/11) for two files that have thoroughly tangled both types of specs.
For other private messages we have the site setting
personal_email_time_window_seconds (default 20s) which allows
people to edit their post etc. before the email is sent.
This PR makes the Jobs::GroupSmtpEmail enqueuer in the
PostAlerter use the same delay.
This PR backtracks a fair bit on this one https://github.com/discourse/discourse/pull/13220/files.
Instead of sending the group SMTP email for each user via `UserNotifications`, we are changing to send only one email with the existing `Jobs::GroupSmtpEmail` job and `GroupSmtpMailer`. We are changing this job and mailer along with `PostAlerter` to make the first topic allowed user the `to_address` for the email and any other `topic_allowed_users` to be the CC address on the email. This is to cut down on emails sent via SMTP, which is subject to daily limits from providers such as Gmail. We log these details in the `EmailLog` table now.
In addition to this, we have changed `PostAlerter` to no longer rely on incoming email addresses for sending the `GroupSmtpEmail` job. This was unreliable as a user's email could have changed in the meantime. It was also a little overcomplicated to use the incoming email records -- it is far simpler to just use topic allowed users.
This also adds a fix to include cc_addresses in the EmailLog.addressed_to_user scope.
Anonymizing a user changed their email address, destroyed all
associated InvitedUser records, but did not destroy the invites
associated with the user's email.
When a group only has SMTP enabled and not IMAP, we do not
want to enqueue the :group_smtp_email job because using the group's
SMTP credentials for sending user_private_message emails is
handled by the UserNotifications class.
We do not want the :group_smtp_email job to be enqueued because
that uses a reply key instead of the group.email_username
for the reply-to address which is not what we want for SMTP
only, and also creates an IncomingEmail record to prevent IMAP
double syncing which we do not need either.
There is an open question about what happens when IMAP is
enabled after SMTP has been enabled for a while, and also questions
around whether we could do away with :group_smtp_email altogether
and handle everything via EmailLog and UserNotifications, adding
additional columns to the former and modifying the Imap::Sync
class to take this into account... a lot more testing
for IMAP needs to be done to answer those questions.
For now, this fix should be sufficient to get the correct
reply-to address for user_private_response messages sent in
response to emails sent directly to the group's
email_username SMTP address.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
This overhauls the user interface for the group email settings management, aiming to make it a lot easier to test the settings entered and confirm they are correct before proceeding. We do this by forcing the user to test the settings before they can be saved to the database. It also includes some quality of life improvements around setting up IMAP and SMTP for our first supported provider, GMail. This PR does not remove the old group email config, that will come in a subsequent PR. This is related to https://meta.discourse.org/t/imap-support-for-group-inboxes/160588 so read that if you would like more backstory.
### UI
Both site settings of `enable_imap` and `enable_smtp` must be true to test this. You must enable SMTP first to enable IMAP.
You can prefill the SMTP settings with GMail configuration. To proceed with saving these settings you must test them, which is handled by the EmailSettingsValidator.
If there is an issue with the configuration or credentials a meaningful error message should be shown.
IMAP settings must also be validated when IMAP is enabled, before saving.
When saving IMAP, we fetch the mailboxes for that account and populate them. This mailbox must be selected and saved for IMAP to work (the feature acts as though it is disabled until the mailbox is selected and saved).
### Database & Backend
This adds several columns to the Groups table. The purpose of this change is to make it much more explicit that SMTP/IMAP is enabled for a group, rather than relying on settings not being null. Also included is an UPDATE query to backfill these columns. These columns are automatically filled when updating the group.
For GMail, we now filter the mailboxes returned. This is so users cannot use a mailbox like Sent or Trash for syncing, which would generally be disastrous.
There is a new group endpoint for testing email settings. This may be useful in the future for other places in our UI, at which point it can be extracted to a more generic endpoint or module to be included.
Previously we would retry push notifications indefinitely for all errors
except ExpiredSubscription.
Under certain conditions other persistent errors may arise, such as a
persistent rate limit.
If we track more than 3 errors in a period of time longer than a day, we
will delete the subscription.
Also performs a bit of internal cleanup to ensure protected methods really
are private.
This PR improves the UI of bulk select so that its context is applied to the Dismiss Unread and Dismiss New buttons. Regular users (not just staff) are now able to use topic bulk selection on the /new and /unread routes to perform these dismiss actions more selectively.
For Dismiss Unread, there is a new count in the text of the button and in the modal when one or more topics are selected with the bulk select checkboxes.
For Dismiss New, there is a count in the button text, and we have added functionality to the server side to accept an array of topic ids to dismiss new for, instead of always having to dismiss all new, the same as the bulk dismiss unread functionality. To clean things up, the `DismissTopics` service has been rolled into the `TopicsBulkAction` service.
We now also show the top Dismiss/Dismiss New button based on whether the bottom one is in the viewport, not just based on the topic count.
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
We have a few places in the code where we need to validate various email related settings, and will have another soon with the improved group email settings UI. This PR introduces a class which can validate POP3, IMAP, and SMTP credentials and also provide a friendly error message for issues if they must be presented to an end user.
This PR does not change any existing code to use the new service. I have added a TODO to change POP3 validation and the email test rake task to use the new validator post-release.
* FIX: Link notification to first unread post
If a topic with a few posts was posted in a watched category or with a
watched tag, the created notification would always point to the last
post, instead of pointing to the first one.
The root cause is that the query that fetched the first unread post
uses 'TopicUser' records, and those are not created by default for
users watching a category or tag. In this case, it should use the
'CategoryUser' or 'TagUser' records.
* DEV: Use named bind variables
When the admin creates a new custom field they can specify if that field should be searchable or not.
That setting is taken into consideration for quick search results.
This PR adds a new category setting which is a column in the `categories` table, `allow_unlimited_owner_edits_on_first_post`.
What this does is:
* Inside the `can_edit_post?` method of `PostGuardian`, if the current user editing a post is the owner of the post, it is the first post, and the topic's category has `allow_unlimited_owner_edits_on_first_post`, then we bypass the check for `LimitedEdit#edit_time_limit_expired?` on that post.
* Also, similar to wiki topics, in `PostActionNotifier#after_create_post_revision` we send a notification to all users watching a topic when the OP is edited in a topic with the category setting `allow_unlimited_owner_edits_on_first_post` enabled.
This is useful for forums where there is a Marketplace or similar category, where topics are created and then updated indefinitely by the OP rather than the OP making new topics or additional replies. In a way this acts similar to a wiki that only one person can edit.
Mailing list mode can generate significant email volume, especially on sites with a large user base. Disable mailing list mode via site settings by default so sites don't experience an unexpectedly large cost from outgoing email.
The urls that we generate for mobile post notifications don't take into
account the subfolder path if a site happens to have one configured. When
this happens, tapping a new mobile notification takes you to a url that
doesn't work because it is missing the subfolder portion.
I honestly think this should be handled in the Post model like we do
with the Topic model. `Post.url` should know how to handle subfolder
installs, but that seemed like a very risky change because there are
lots of other places in the codebase where we tack on the base_path and
I didn't want to risk duplicating it.
I also found a small typo in the topics controller spec.
We were sending 2 emails for user silencing if a message was provided in the UI. Also, always send an email for user silencing and user suspension with a reason, regardless of whether a message was provided.
Previously we were checking truthiness in some places, and `== true` in
others. That can lead to some inconsistent UX where the interface says
the email is valid, but account creation fails.
This commit ensures values are boolean when set, and raises an error for
other value types.
If this safety check is triggered, it means the specific auth provider
needs to be updated to pass booleans.
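A minimal sketch of the enforcement (the setter shape is illustrative):

```ruby
def email_valid=(value)
  # Accept only real booleans; reject truthy stand-ins like "true" or 1.
  unless value == true || value == false
    raise ArgumentError, "email_valid must be true or false, got #{value.inspect}"
  end
  @email_valid = value
end
```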
The bug was mentioned on meta: https://meta.discourse.org/t/pressing-dismiss-new-doesnt-clear-new-topics/179858
The problem is that sometimes the user has TopicUser records with `last_read_post_number` set to NULL. In that case, the topic is still "new" to them and should be dismissed when they click the dismiss button.
In addition, I added that condition to the post migration and bumped the number to fix existing records. The migration is written to be idempotent, so it will do no harm to already-deployed instances.
Previously when inheriting category auto-close settings for a topic, those settings were disrupted if another topic timer was assigned or if a topic was closed then manually re-opened.
This PR makes it so that when a topic is manually re-opened the topic auto-close settings are inherited from the category. However, they will now be based on the topic created_at date. As an example, for a topic with a category auto close hours setting of 72 (3 days):
* Topic was created on 2021-02-15 08:00
* Topic was closed on 2021-02-16 10:00
* Topic was opened again on 2021-02-17 06:00
Now, the topic will inherit the auto close timer again and will close automatically at **2021-02-18 08:00**, which is based on the creation date. If the current date and time is greater than the original auto-close time (e.g. we were at 2021-02-20 13:45) then no auto-close timer is created.
Note, this will not happen if the topic category auto-close setting is "based on last post".
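A sketch of the re-open logic under those rules (assuming the category columns named here; the timer call is illustrative):

```ruby
def inherit_auto_close_on_reopen(topic)
  category = topic.category
  return if category&.auto_close_hours.blank?
  return if category.auto_close_based_on_last_post # rule does not apply

  # Base the close time on topic creation, not on when it was re-opened.
  close_at = topic.created_at + category.auto_close_hours.hours
  return if close_at < Time.zone.now # original window already elapsed

  topic.set_or_create_timer(TopicTimer.types[:close], close_at)
end
```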
Original PR was reverted because of broken migration https://github.com/discourse/discourse/pull/12058
I fixed it by adding this line
```
AND topics.id IN(SELECT id FROM topics ORDER BY created_at DESC LIMIT :max_new_topics)
```
This time it is left joining a limited number of topics. I tested it on a few databases and it worked quite smoothly.
Follow up https://github.com/discourse/discourse/pull/11968
Dismiss all new topics using the same DismissTopicService. In addition, MessageBus receives exact topic ids which should be marked as `seen`.
* FEATURE: Ability to dismiss new topics in a specific tag
Follow up of https://github.com/discourse/discourse/pull/11927
Using the same mechanism to dismiss new topics in a tag.
* FIX: respect when category and tag are selected
The 'Discourse SSO' protocol is being rebranded to DiscourseConnect. This should help to reduce confusion when 'SSO' is used in the generic sense.
This commit aims to:
- Rename `sso_` site settings. DiscourseConnect specific ones are prefixed `discourse_connect_`. Generic settings are prefixed `auth_`
- Add (server-side-only) backwards compatibility for the old setting names, with deprecation notices
- Copy `site_settings` database records to the new names
- Rename relevant translation keys
- Update relevant translations
This commit does **not** aim to:
- Rename any Ruby classes or methods. This might be done in a future commit
- Change any URLs. This would break existing integrations
- Make any changes to the protocol. This would break existing integrations
- Change any functionality. Further normalization across DiscourseConnect and other auth methods will be done separately
The risks are:
- There is no backwards compatibility for site settings on the client-side. Accessing auth-related site settings in Javascript is fairly rare, and an error on the client side would not be security-critical.
- If a plugin is monkey-patching parts of the auth process, changes to locale keys could cause broken error messages. This should also be unlikely. The old site setting names remain functional, so security-related overrides will remain working.
A follow-up commit will be made with a post-deploy migration to delete the old `site_settings` rows.
This PR allows entering a float value for topic timers, e.g. 0.5 for 30 minutes when entering hours, or 0.5 for 12 hours when entering days. This is achieved by adding a new column to store the duration of a topic timer in minutes, instead of the ambiguous mix of hours and days used before.
This PR has omitted the post migration to delete the duration column in topic timers; it will be done in a subsequent PR to ensure that no data is lost if the UPDATE query to set duration_minutes fails.
I have to keep the old `duration` keyword in set_or_create_topic_timer for backwards compat, and will remove it at a later date after plugins are updated.
This is an attempt to simplify the logic around dismissing new topics so that one solution works in all places: dismiss all new, dismiss new in a specific category, or even in a specific tag.
This adds a new table, UserNotificationSchedules, which stores Monday-Friday start and end times during which each user would like to receive notifications (with a boolean `enabled` to turn the schedule off). There is then a background job that runs every day and creates do_not_disturb_timings for each user with an enabled notification schedule. The job schedules timings 2 days in advance. The job is designed so that it can be run at any point in time, and it will not create duplicate records.
When a user saves their notification schedule, the schedule processing service will run and schedule do_not_disturb_timings. If the user should be in DND due to their schedule, the user will immediately be put in DND (message bus publishes this state).
The UI for a user's notification schedule is in user -> preferences -> notifications. By default every day is 8am - 5pm when first enabled.
There was an issue that occurred with this order of operations:
* An IMAP topic was created by emailing a group
* A second user was invited to the topic (not the OP and not the group)
* A user with access to the group replies to the topic
* The second user receives a user_private_message notification email because of their involvement in the topic
* The second user replies to the email via email
This new reply would then notify the other group PM users, except for those who emailed the group topic directly, which is handled via the group SMTP mailer. However, because the new post already has an incoming email record (it was parsed by Email::Receiver via POP3), the group SMTP section of the post alerter is skipped, and the group's email address is not ignored for the user_private_message notification.
This PR fixes it so the group is never sent an email via the PM notification. This is important because any new emails in the group's IMAP inbox will be picked up by the Imap::Sync code and created as a new topic, which is not at all desirable.
Also in this PR I split up the group SMTP specs in the post alerter a bit more to make them easier to read, so each only tests one thing.
Fixes bug introduced by bd25627198
What happens is we send notifications to everyone involved in the group inbox topic about new posts, however we pass the param `skip_send_email_to: email_addresses`. In the above commit I removed the group email address from this `email_addresses` array. This breaks the IMAP inbox because we email the group with the reply, and the IMAP sync tool finds this email and opens a new unrelated topic with it.
This PR fixes a race condition with the IMAP notification code. In the `Email::Receiver` we call the `NewPostManager` to create the post and enqueue jobs and sends alerts via `PostAlerter`. However, if the post alerter reaches the `notify_pm_users` and the `group_notifying_via_smtp` method _before_ the incoming email is updated with the post and topic, we unnecessarily send a notification to the person who just posted. The result of this is that the IMAP syncer re-imports the email sent to the user about their own post, which looks like this in the group inbox:
To fix this, we skip the jobs enqueued by `NewPostManager` and only enqueue them with `PostJobsEnqueuer` manually _after_ the incoming email record has been updated with the post and topic.
Other improvements:
* Moved code to calculate email addresses from `IncomingEmail` records into the topic, with a group passed in, for easier testing and debugging. It is not the responsibility of the post alerter to figure this stuff out.
* Add shortcut methods on `IncomingEmail` to split or provide an empty array for to and cc addresses to avoid repetition.
It used to insert block Oneboxes inside paragraphs, which resulted in
invalid HTML. This required additional parsing to remove empty
paragraphs, and the resulting HTML could still be invalid.
This commit ensures that block Oneboxes are inserted correctly, by
splitting the paragraph containing the link and putting the block
between the two halves. Paragraphs left with nothing but whitespace
will be removed.
Follow up to 7f3a30d79f.
This commit adds an additional find_user_by_email hook to ManagedAuthenticator so that GitHub login can continue to support secondary email addresses
The github_user_infos table will be dropped in a follow-up commit.
This is the last core authenticator to be migrated to ManagedAuthenticator 🎉
- IgnoredUser records should all now have an expiring_at value. This commit enforces that in the DB, and fixes any corrupt rows
- Changes to the ignored user list are now handled by the `/u/{username}/notification_level` endpoint. This allows setting expiration dates on the ignore. This commit removes the old logic for saving a list of usernames in the user preferences.
- Many specs were calling `IgnoredUser.create`. This commit changes them to use `Fabricate(:ignored_user)` for consistency
Now that we have support for user-selectable color schemes, it makes sense
to simplify seeding and theme updates in the wizard.
We now:
- seed only one theme, named "Default" (previously "Light")
- seed a user-selectable Dark color scheme
- rename the "Themes" wizard step to "Colors"
- update the default theme's color scheme if a default is set
(a new theme is created if there is no default)
Previously, any errors in those files would e.g. blow up the update process in docker_manager.
Now it prints out an error and proceeds as if there was no compatibility file.
Includes:
* DEV: Extract setup_git_repo
* DEV: Use `Dir.mktmpdir`
* DEV: Default to `main` branch (The latest versions of git already do this, so to avoid problems do this by default)
* PERF: avoid lookbehinds when indexing search
Previously we used `EmailCook.url_regexp`, a regex that relied on lookbehinds.
Unfortunately certain strings could lead to pathological behavior, causing
CPU usage to skyrocket and the regex replace to take a very, very long time.
EmailCook still needs a fix, but it is less urgent because it already splits
input into single lines. That said, we will correct that as well in a separate PR.
The new implementation is far more naive and relies on the extra spaces the
search indexer inserts.
Extracted commonly used spec helpers into spec/support/uploads_helpers.rb, removed unused stubs and let definitions. Makes it easier to write new S3-related specs without copy and pasting setup steps from other specs.
The NewUserOfTheMonth badge is part of the Badges::GettingStarted group. This group is skipped in BadgeGranter if the user skips the new user tips. However, the NewUserOfTheMonth badge granter job does not account for this. Instead, it notifies the user that they've received the badge even when it was not granted.
This commit introduces a simple fix to allow granting of this badge even to users who skipped the new user tips.
Overall, stubs lead to long-term instability; this helps stabilize them.
Also fixed a flaky spec around plugin hooks.
It was relying on `.posts` on the system_user, which could already be loaded
and invalid. Instead we now load it by hand.
DEV: add plugin hooks for silence message parameters
Allows plugins to add and update extra silence message params for custom
i18n vars
Allows plugins to override system messages via `message_title` and
`message_raw` parameters. We can later expose these params where necessary via event
hooks. Exposes the parameter for the user_silenced trigger.
* FIX: Unlike own posts on ownership transfer
If a user has liked a post that has passed the
`post_undo_action_window_mins` system setting window, and you transfer
ownership of that post to that user, they become the owner of a post they
have liked but cannot unlike, resulting in weird UI behavior. This commit
fixes this issue.
The existing tests didn't check for the timeout window for unliking
posts so I added that in.
I couldn't find a good way to do this logic inside of the guardian class
so rather than duplicating behavior of the `PostActionDestroyer` class
inside of the `PostOwnerChanger` I decided to pass in a "bypass"
variable that could be used to check if the calling class is the
'post_owner_changer' and bypass the guardian instead. I went this route
because the guardian `can_delete_post_action` method has no way of
distinguishing how to allow a user to be able to unlike their own posts
after the timeout window but only on a post owner change.
* use an options hash instead
* FEATURE: don't notify about changed tags for a private message
Only staff members watching the specific tag should receive a notification
* FIX: remove other category which is not used
* FIX: improved specs to ensure that the revise was successful
In the near future, we will be switching to PG headlines to generate the
search blurb. As such, we need to replace audio and video links in the
raw data used for headline generation. This also means that we avoid
replacing links each time we need to generate the blurb.
We do prefix matching in search so there is no need to inject the extra
terms.
Before:
```
"'discourse':10,11 'discourse.org':10,11 'org':10,11 'test':8A,10,11 'test.discourse.org':10,11 'titl':4A 'uncategor':9B"
```
After:
```
"'discourse.org':10,11 'org':10,11 'test':8A 'test.discourse.org':10,11 'titl':4A 'uncategor':9B"
```
Indexing query strings in URLs produces inconsistent results in PG and
pollutes the search data for really little gain.
The following seems to work as expected...
```
discourse_development=# SELECT TO_TSVECTOR('https://www.discourse.org?test=2&test2=3');
to_tsvector
------------------------------------------------------
'2':3 '3':5 'test':2 'test2':4 'www.discourse.org':1
```
However, once a path is present
```
discourse_development=# SELECT TO_TSVECTOR('https://www.discourse.org/latest?test=2&test2=3');
to_tsvector
----------------------------------------------------------------------------------------------
'/latest?test=2&test2=3':3 'www.discourse.org':2 'www.discourse.org/latest?test=2&test2=3':1
```
The lexeme contains both the path and the query string.
```
discourse_development=# SELECT alias, lexemes FROM TS_DEBUG('www.discourse.org');
alias | lexemes
-------+---------------------
host | {www.discourse.org}
discourse_development=# SELECT TO_TSVECTOR('www.discourse.org');
to_tsvector
-----------------------
'www.discourse.org':1
```
Given the above lexeme, we will inject additional lexemes by splitting
the host on `.`. The actual tsvector stored will look something like
```
tsvector
---------------------------------------
'discourse':1 'discourse.org':1 'org':1 'www':1 'www.discourse.org':1
```
This is causing issues when purging old users, if they are set up in the
exact condition where they will be demoted into another group, but also
do not have a primary email.
This reverts commit 20780a1eee.
* SECURITY: re-adds accidentally reverted commit:
03d26cd6: ensure embed_url contains valid http(s) uri
* when the merge commit e62a85cf was reverted, git chose the 2660c2e2 parent to land on
instead of the 03d26cd6 parent (which contains security fixes)
* PERF: Dematerialize topic_reply_count
It's only ever used for trust level promotions that run daily, or compared to 0. We don't need to track it on every post creation.
* UX: Add symbol in TL3 report if topic reply count is capped
* DEV: Drop user_stats.topic_reply_count column
This ensures that at a minimum you are notified once a day of
repeat edits by the same user.
Long term we may consider winding this down to say 1 hour or
making it configurable.
Due to a refactor in e90f9e5cc4 we stopped notifying about edits if
a user liked a post and then edited it.
The like could have happened a long time ago so this gets extra
confusing.
This change makes the suppression more deliberate. We only want
to suppress quote/link/mention if the user already got a reply
notification.
We can expand this suppression if it is not enough.
It's possible to cause a 500 error by putting weird characters in the
input field for updating a user's website on their profile.
Normal invalid input like not including the domain extension is already
handled by the user_profile model validation. This fix ensures a server
error doesn't occur for weird input characters.
It previously failed to match URLs with characters other than `[a-zA-z0-9\.\/:-]`. This meant that `PullHotlinkedImages` would sometimes download an external image and then never use it in any posts.
* FIX: FlagSockpuppets should not flag a post if a post of that user was already rejected by staff
* Update spec/services/flag_sockpuppets_spec.rb
Co-Authored-By: Robin Ward <robin.ward@gmail.com>
Co-authored-by: Robin Ward <robin.ward@gmail.com>
This fix ensures that if a staged user is linked to or quoted they won't
be emailed about it.
A staged user could email into a category, and another user could quote
them inside of a completely different category and we don't want a
staged user to receive an email for this.
Bug report:
https://meta.discourse.org/t/-/145202/9
For example given a custom badge with SQL:
```
SELECT 1
-- I am a comment
```
You end up with
```
FROM (SELECT 1
-- I am a comment) q
```
This fix adds newlines so you end up with the now-valid:
```
FROM (
SELECT 1
-- I am a comment
) q
```
A new `duration` attribute is introduced for the `set_or_create_timer` method in commit aad12822b7 for "based on last post" and "auto delete replies" topic timers.
* This PR implements the scheduling and notification system for bookmark reminders. Every 5 minutes a schedule runs to check any reminders that need to be sent before now, limited to **300** reminders at a time. Any leftover reminders will be sent in the next run. This is to avoid having to deal with fickle sidekiq and reminders in the far-flung future, which would necessitate having a background job anyway to clean up any missing `enqueue_at` reminders.
* If a reminder is sent its `reminder_at` time is cleared and the `reminder_last_sent_at` time is filled in. Notifications are only user-level notifications for now.
* All JavaScript and frontend code related to displaying the bookmark reminder notification is contained here. The reminder functionality is now re-enabled in the bookmark modal as well.
* This PR also implements the "Remind me next time I am at my desktop" bookmark reminder functionality. When the user is on a mobile device they are able to select this option. When they choose this option we set a key in Redis saying they have a pending at desktop reminder. The next time they change devices we check if the new device is desktop, and if it is we send reminders using a DistributedMutex. There is also a job to ensure consistency of these reminders in Redis (in case Redis drops the ball) and the at desktop reminders expire after 20 days.
* Also in this PR is a fix to delete all Bookmarks for a user via `UserDestroyer`
* FIX: when unread reply notification exists don't create new
From time to time, a user creates a reply and then wants to add additional details. They edit the existing post and, for example, add a quote from a previous one.
In that situation, if the user to whom the reply was directed already has an unread notification, we should not create a new one.
That behaviour was mentioned here: https://meta.discourse.org/t/reply-then-edit-to-add-quote-notification-redundancy/138358
* FIX: don't create a new notification if one already exists
The commit: 75069ff179
allows users to remove their primary group, but this introduced a bug:
editing any other profile info, like location or website (a form on a
separate page from the flair dropdown), would cause the selected flair
to be removed.
This fix ensures that if the `primary_group_id` parameter is missing
from the update payload it does not remove the existing
`primary_group_id`. It will only remove the `primary_group_id` if it is
present in the payload and empty.
This is not used in core or official plugins, and has been printing a deprecation notice since v2.3.0beta4. All OpenID 2.0 code and dependencies have been dropped. The user_open_ids table remains for now, in case anyone has missed the deprecation notice, and needs to migrate their data.
Context at https://meta.discourse.org/t/-/113249
* FEATURE: Replace existing badge owners when using the bulk award feature
* Use ActiveRecord to sanitize title update query, Change replace checkbox text
Co-Authored-By: Robin Ward <robin.ward@gmail.com>
Co-authored-by: Robin Ward <robin.ward@gmail.com>
Group membership and `CategoryUser` notification level should be
respected to determine whether to notify staged users about activity in
private categories, instead of only ever generating notifications for staged
users' own topics (which has been the behaviour since
0c4ac2a7bc)
Let's say post #2 quotes post #1. If a user decides to quote the
quote in post #2, it should keep the information of post #1
("user_1, post: 1, topic: X"), instead of replacing with current post
info ("user_2, post: 2, topic: X").
This fix allows a user to remove their currently assigned primary group
if the Site Setting `user selected primary groups` is enabled.
Before this fix, if a user selected "none" for their primary group it
would silently fail and never be updated.
This is required for people using apps with custom protocols. We still verify the entire URL (including protocol) against the site setting value.
Refactored wildcard_url_checker so that it always returns a boolean, rather than sometimes returning a regex match.
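A sketch of a boolean-returning checker using Regexp#match? (the pattern handling here is illustrative):

```ruby
def wildcard_url_checker(url, allowed_pattern)
  regex = /\A#{Regexp.escape(allowed_pattern).gsub('\*', '.*')}\z/
  regex.match?(url) # match? returns true/false, never MatchData or an index
end

wildcard_url_checker("myapp://auth/callback", "myapp://*") # => true
```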
* FEATURE: Ability to add components to all themes
This is the first functional step from this topic: https://dev.discourse.org/t/adding-a-theme-component-is-too-much-work/15398/16
The idea here is that when a new component is added, the user can easily assign it to all themes (parents).
To achieve that, I needed to change a site-setting component to accept a `setDefaultValues` action and a `setDefaultValuesLabel` translated label.
Also, I needed to add `allowAny` option to disable that for theme selector.
I also refactored backend to accept both parent and child ids with one method to avoid duplication (Renamed `add_child_theme!` to more general `add_relative_theme!`)
* FIX: Improvement after code review
* FIX: Improvement after code review2
* FIX: use mapBy and filterBy directly
Previously people were not consistent about mocking which left internals in
a fragile state when running subfolder specs.
This introduces a simple helper `set_subfolder` which you can use to set
the subfolder for the spec. It takes care of proper configuration of subfolder
and teardown.
```
# usage
set_subfolder "/my_amazing_subfolder"
```
You should no longer stub base_uri or global_settings
* Fix user title logic when badge name customized
* Fix an issue where a user's title was not considered a badge granted title when the user used a badge for their title and the badge name was customized. This affected the effectiveness of revoke_ungranted_titles!, which only operates on badge_granted_titles.
* When a user's title is set now it is considered a badge_granted_title if the badge name OR the badge custom name from TranslationOverride is the same as the title
* When a user's badge is revoked we now also revoke their title if the user's title matches the badge name OR the badge custom name from TranslationOverride
* Add a user history log when the title is revoked to remove confusion about why titles are revoked
* Add granted_title_badge_id to user_profile: now when we set badge_granted_title on a user profile while updating a user's title based on a badge, we also remember which badge matched the title
* When badge name (or custom text) changes update titles of users in a background job
* When the name of a badge changes, or in the case of system badges when their custom translation text changes, then we need to update the title of all corresponding users who have a badge_granted_title and matching granted_title_badge_id. In the case of system badges we need to first get the proper badge ID based on the translation key e.g. badges.regular.name
* Add migration to backfill all granted_title_badge_ids for both normal badge name titles and titles using custom badge text.
Issue was mentioned in this [meta topic](https://meta.discourse.org/t/send-a-notification-to-watching-users-when-adding-tag/125314)
It is working well when category is changed because NotifyCategoryChange job already got that code:
```
if post&.topic&.visible?
post_alerter = PostAlerter.new
post_alerter.notify_post_users(post, User.where(id: args[:notified_user_ids]))
post_alerter.notify_first_post_watchers(post, post_alerter.category_watchers(post.topic))
end
```
For the NotifyTagChange job, notifying post users was missing, so it worked only when your notification level was set to `watching first post`
- Allow revoking keys without deleting them
- Auto-revoke keys after a period of no use (default 6 months)
- Allow multiple keys per user
- Allow attaching a description to each key, for easier auditing
- Log changes to keys in the staff action log
- Move all key management to one place, and improve the UI
This is a major change to draft internals. Previously there were quite a
few cases where the draft system would say "draft saved", when in fact
we just skipped saving.
This commit ensures the draft system deals with draft ownership handover in
a predictable way.
For example:
- Window 1 editing draft
- Window 2 editing same draft at the same time
Previously we would allow window 1 and 2 to just fight over the same draft,
each window overwriting it over and over.
This commit introduces an ownership concept where either window 1 or 2 wins,
and the user is prompted in the losing window to reload the screen to correct the issue.
This also corrects edge cases where a user could have multiple browser windows
open, post in one window, and later post in the second window. Previously
drafts would break in the second window; this corrects it.
* FEATURE: Site setting/ui to allow users to set their primary group
* prettier and remove logic from account template
* added 1 to 43 to make web_hook_user_serializer_spec pass
Doing .pluck(:column).first is a very common pattern in Discourse, and in
most cases a limit clause isn't being added. Instead of adding a limit
clause to all these callsites, this commit adds two new methods to
ActiveRecord::Relation:
pluck_first, equivalent to limit(1).pluck(*columns).first
and pluck_first! which, like other finder methods, raises an exception
when no record is found
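A sketch of the two methods (the real patch lives on ActiveRecord::Relation; shown here as a monkey patch):

```ruby
class ActiveRecord::Relation
  # Equivalent to limit(1).pluck(*columns).first
  def pluck_first(*columns)
    limit(1).pluck(*columns).first
  end

  # Like other bang finder methods, raises when no record is found.
  def pluck_first!(*columns)
    items = limit(1).pluck(*columns)
    raise ActiveRecord::RecordNotFound if items.empty?
    items.first
  end
end

# Usage: User.where(admin: true).pluck_first(:id)
```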
Zeitwerk simplifies working with dependencies in dev and makes reloading class chains easier.
We no longer need to use Rails "require_dependency" anywhere and instead can just use standard
Ruby patterns to require files.
This is a far reaching change and we expect some followups here.
- Client-side censoring fixed for non-Chrome browsers. (Regular expression rewritten to avoid lookbehind)
- Regex generation is now done on the server, to reduce repeated logic, and make it easier to extend in plugins
- Censor tests are moved to ruby, to ensure everything works end-to-end
- If "watched words regular expressions" is enabled, warn the admin when the generated regex is invalid
* REFACTOR: Rename SiteSetting.disable_edit_notifications to disable_system_edit_notifications
- The older name could cause some confusion because the setting does not disable all edit notifications, only system ones.
* FIX: Add frozen_string_literal: true in the migration
* DEV: Deprecate 'disable_edit_notifications'
This feature adds the ability to customize the HTML part of all emails using a custom HTML template and optionally some CSS to style it. The CSS will be parsed and converted into inline styles because CSS is poorly supported by email clients. When writing the custom HTML and CSS, be aware of what email clients support. Keep customizations very simple.
Customizations can be added and edited in Admin > Customize > Email Style.
Since the summary email is already heavily styled, there is a setting to disable custom styles for summary emails called "apply custom styles to digest" found in Admin > Settings > Email.
As part of this work, RTL locales are now rendered correctly for all emails.
This commit contains 3 features:
- FEATURE: Allow downloading watched words
This introduces a button that allows admins to download watched words per action in a `.txt` file.
- FEATURE: Allow clearing watched words in bulk
This adds a "Clear All" button that clears all deleted words per action (e.g. block, flag etc.)
- FEATURE: List all blocked words contained in the post when it's blocked
When a post is rejected because it contains one or more blocked words, the error message now lists all the blocked words contained in the post.
-------
This also changes the format of the file for importing watched words from `.csv` to `.txt`, so it becomes consistent with the extension of the file when watched words are exported.
Created a rake task for destroying multiple categories along with any
subcategories and topics that belong to those categories.
Also created a rake task for listing all of your categories.
Refactored existing destroy rake tasks to use a new logging method that
allows puts output in the console but prevents it from showing in
the specs.
Context: https://meta.discourse.org/t/121589
This new setting option lets group owners message/mention large groups
without granting that privilege to all members.
This can cause unbounded CPU usage in some cases, and excessive logging in other cases. This commit moves redis readonly information into the local process, but maintains the DistributedCache for postgres readonly state.
The site settings beginning with "topic views heat" and "topic post like
heat" are set to defaults when installing Discourse, but there has not
been a process or guidance for updating these values based on
community activity.
This feature will update them once a month. The low, medium, and
high settings will be based on the minimums of the 45th, 25th, and
10th percentile topics respectively, so that 45% of topics will have
some "heat".
Disable automatic changes with the automatic_topic_heat_values setting.
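A sketch of how the thresholds could be derived with percentile_disc, ordering topics from hottest down (the exact query and filters are illustrative):

```ruby
# 45% / 25% / 10% of topics sit at or above these view counts.
low, medium, high = DB.query_single(<<~SQL)
  SELECT percentile_disc(0.45) WITHIN GROUP (ORDER BY views DESC),
         percentile_disc(0.25) WITHIN GROUP (ORDER BY views DESC),
         percentile_disc(0.10) WITHIN GROUP (ORDER BY views DESC)
  FROM topics
  WHERE deleted_at IS NULL
SQL

SiteSetting.topic_views_heat_low = low
SiteSetting.topic_views_heat_medium = medium
SiteSetting.topic_views_heat_high = high
```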
Previous to this fix, if a post had the text www.test.com/abc it would fail
to index.
This also simplifies the rules to avoid full url parsing which can be
expensive
Previously we used custom fields to denote that a user was anonymous. This
was risky in that custom fields are prone to race conditions and are not a
properly dedicated store, missing constraints and so on.
The new table `anonymous_users` is properly protected. There is only one
possible shadow account per user, which is enforced using a constraint.
Every anonymous user will have a unique row in the new table.