We have a `cleanup!` class method on bookmarks that deletes
bookmarks X days after their related record (post/topic) is
deleted. This commit changes this method to use the
registered_bookmarkables instead, so each bookmarkable
type can delete related bookmarks in its own way.
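As a rough sketch (assuming a `cleanup_deleted` hook on each bookmarkable; names are illustrative):

```
# Each registered bookmarkable type deletes its own orphaned bookmarks.
def self.cleanup!
  Bookmark.registered_bookmarkables.each(&:cleanup_deleted)
end
```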
Due to some changes, we started sending push notifications for other
families of notifications. There are about 30 possible notification
types you could get, and some can be pushed.
This fallback means that if for any reason we are unable to find an icon
for a push notification, we just fall back to the Discourse logo.
Also go with a simple reply icon for watching first post.
Note that in production `image_url` can raise an exception if an image is
missing. This is not the case in test / development.
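A minimal sketch of the fallback, assuming an `icon_for` helper (illustrative, not the actual code):

```
# Resolve an icon for a push notification; if image_url raises (which
# can happen in production for missing images), use the site logo.
def push_icon(notification_type)
  image_url(icon_for(notification_type))
rescue StandardError
  SiteSetting.site_logo_url
end
```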
Censored watched words were not censored inside the titles of inline
oneboxes. Malicious users could exploit this behaviour to insert bad
words. The same issue was fixed for regular Oneboxes in commit
d184fe59ca.
Previously, with the default `editing_grace_period`, hotlinked images were pulled 5 minutes after a post is created. This delay was added to reduce the chance of automated edits clashing with user edits.
This commit refactors things so that we can pull hotlinked images immediately. URLs are immediately updated in the post's `cooked` HTML. The post's raw markdown is updated later, after the `editing_grace_period`.
This involves a number of behind-the-scenes changes including:
- Schedule Jobs::PullHotlinkedImages immediately after Jobs::ProcessPost. Move scheduling to after the `update_column` call to avoid race conditions
- Move raw changes into a separate job, which is delayed until after the ninja-edit window
- Move disable_if_low_on_disk_space logic into the `pull_hotlinked_images` job
- Move raw-parsing/replacing logic into `InlineUpload` so it can easily be shared between `UpdateHotlinkedRaw` and `PullUserProfileHotlinkedImages`
This makes it easier to find PMs involving a particular user, for
example by searching for `in:messages thisUser` (previously, that query
would only return results in posts where `thisUser` was in the post body).
The cache was causing state to leak between tests since the `WatchedWord` record in the DB would have been rolled back but `WordWatcher` still had the word in the cache.
7a284164 previously switched the UserDestroyer to use find_each when iterating over UserHistory records. Unfortunately, since this logic is wrapped in a transaction, this didn't actually solve the memory usage problem. ActiveRecord maintains references to all modified models within a transaction.
This commit updates the logic to use a single SQL query rather than updating models one by one.
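A sketch of the difference, with an illustrative column value:

```
# Before: loads and mutates every record; the surrounding transaction
# keeps a reference to each model, so memory still grows.
UserHistory.where(acting_user_id: user.id).find_each do |record|
  record.update!(details: "(removed)")
end

# After: a single UPDATE statement; no models are instantiated or retained.
UserHistory.where(acting_user_id: user.id).update_all(details: "(removed)")
```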
These validate/after_create/after_destroy methods were added
back in b8828d4a2d, before
the RegisteredBookmarkable API and pattern were nailed down.
This commit updates BookmarkManager to call out to the
relevant bookmarkable for these and bookmark_metadata for
consistency.
We have not used anything related to bookmarks for PostAction
or UserAction records since 2020; bookmarks are their own thing
now. Deleting all this is just cleaning up old cruft.
The latest redis gem introduces a block form of multi / pipelined; this was
incorrectly passed through and not namespaced.
The fix also updates logster; we had held off on upgrading it due to missing
functions.
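A rough sketch of the namespacing fix for the block form (the `namespaced` proxy is an assumed name, not the actual DiscourseRedis code):

```
# Wrap the object yielded by the block form so every key it sees is
# still prefixed, instead of passing the raw connection through.
def multi
  if block_given?
    # `namespaced` (assumed helper) returns a proxy that prefixes keys.
    @redis.multi { |transaction| yield namespaced(transaction) }
  else
    @redis.multi
  end
end
```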
A bit of a mixed bag, this addresses several edge areas of bookmarks and makes them compatible with polymorphic bookmarks (hidden behind the `use_polymorphic_bookmarks` site setting). The main ones are:
* ExportUserArchive compatibility
* SyncTopicUserBookmarked job compatibility
* Sending different notifications for the bookmark reminders based on the bookmarkable type
* Import scripts compatibility
* BookmarkReminderNotificationHandler compatibility
This PR also refactors the `register_bookmarkable` API so it accepts a class descended from a `BaseBookmarkable` class instead. This was done because we kept having to add more and more lambdas/properties inline and it was very messy, so a factory pattern is cleaner. The classes can be tested independently as well.
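Registration under the factory pattern might look roughly like this (hook names are assumptions based on the description above):

```
# A bookmarkable type is a class descending from BaseBookmarkable,
# instead of an inline pile of lambdas and properties.
class PostBookmarkable < BaseBookmarkable
  def self.model
    Post
  end

  def self.serializer
    UserPostBookmarkSerializer
  end

  def self.preload_associations
    [:topic]
  end
end

Bookmark.register_bookmarkable(PostBookmarkable)
```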
Some later PRs will address some other areas like the discourse narrative bot, advanced search, reports, and the .ics endpoint for bookmarks.
This bug was causing double events to be fired, as :user_badge_granted is already called when a `user_badge` is created. Moreover, the signature of the block in the UserBadge code is `badge_id, user_id`, not `badge, user_id`.
* hidden siteSetting to enable experimental sidebar
* user preference to enable experimental sidebar
* `experimental_sidebar_enabled` attribute for current user
* Empty glimmer component for Sidebar
This pull request follows on from https://github.com/discourse/discourse/pull/16308. This one does the following:
* Changes `BookmarkQuery` to allow for querying more than just Post and Topic bookmarkables
* Introduces a `Bookmark.register_bookmarkable` method which requires a model, serializer, fields and preload includes for searching. These registered `Bookmarkable` types are then used when validating new bookmarks, and also when determining which serializer to use for the bookmark list. The `Post` and `Topic` bookmarkables are registered by default.
* Adds new specific types for Post and Topic bookmark serializers along with preloading of associations in `UserBookmarkList`
* Changes to the user bookmark list template to allow for more generic bookmarkable types alongside the Post and Topic ones which need to display in a particular way
All of these changes are gated behind the `use_polymorphic_bookmarks` site setting, apart from the .hbs changes where I have updated the original `UserBookmarkSerializer` with some stub methods.
Following this PR will be several plugin PRs (for assign, chat, encrypt) that will register their own bookmarkable types or otherwise alter the bookmark serializers in their own way, also gated behind `use_polymorphic_bookmarks`.
This commit also removes `BookmarkQuery.preloaded_custom_fields` and the functionality surrounding it. It was added in 0cd502a558 but was only used by one plugin (discourse-assign), where it has since been removed; no plugins use it now, so we don't need it anymore.
`find_each` ignores the `order` scope and triggers a noisy warning in
production:
`Scoped order is ignored, it's forced to be batch order.`
Follow-up to 7a284164ce
This commit improves the logic for rolling up screened IPv4
addresses and extends it to IPv6. IPv4 addresses will roll up
to /24 at most. IPv6 can roll up to /48 at most. The log message that is
generated contains the list of original IPs and the new subnet.
All users are members of the EVERYONE group, but this group is special and
is omitted from the group_users table. When checking permissions we need to
make sure we also add a bypass.
This also fixes a very buggy test in post_alerter; it was confirming the
broken behavior due to the fabricator flow.
When the test defined the tag group, the everyone group automatically had full
access, and the additional fabricated permission just added one more group. After
the fix was made to the code, the test started failing. Fabricators can be risky.
When emailing a group inbox and including other support-type
emails (or even just regular ones with autoresponders) in the
CC field, each automated reply to the group inbox triggered
more emails to be sent out to all CC addresses to notify them
of the new reply, which in turn caused more automated emails
to be sent to the group inbox.
This commit fixes the issue by preventing any emails from being sent
by the PostAlerter when the new post has an incoming email record
which is_auto_generated, which we detect in Email::Receiver.
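Conceptually, the guard is a single check (a sketch; the real call sites in PostAlerter differ):

```
def should_email_about?(post)
  # Email::Receiver flags auto-generated messages (autoresponders etc.)
  # on the IncomingEmail record; never fan out emails for those.
  !post.incoming_email&.is_auto_generated
end
```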
Fixes the issue where making a user the owner of a post doesn't
cause the concerned topic to be listed in the new owner's `My Posts`
top menu filter,
per https://meta.discourse.org/t/199369
Under some conditions, replacing an `<img` with `![]()` can break rendering, and make the image disappear.
Context at https://meta.discourse.org/t/152801
The search_ignore_accents site setting can be used to make the search
indexer remove the accents before indexing the content. The unaccent
function from PostgreSQL is better than Ruby's unicode_normalize(:nfkd).
Discourse users and associated accounts are created or updated when a
user logs in or connects the account using their account preferences.
This new API can be used to create associated accounts and users too,
if necessary.
This allows text editors to use correct syntax coloring for the heredoc sections.
Heredoc tag names we use:
languages: SQL, JS, RUBY, LUA, HTML, CSS, SCSS, SH, HBS, XML, YAML/YML, MF, ICS
other: MD, TEXT/TXT, RAW, EMAIL
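For example, tagging a SQL heredoc lets the editor highlight the embedded query:

```
# The heredoc tag doubles as a language hint for text editors.
DB.exec(<<~SQL, user_id: user.id)
  UPDATE user_stats
  SET post_count = 0
  WHERE user_id = :user_id
SQL
```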
We added this constraint in 5bd55acf83,
but it is causing problems on hosted sites and is catching the
issue too far down the line. This commit removes the constraint
for now, and also fixes an issue found with PostDestroyer,
which wasn't using the UserStatCountUpdater when updating post_count
and was thus causing negative numbers to occur.
Breakdown of fixes in this commit:
* `UserStat#topic_count` was not updated when visibility of
the topic changed.
* `UserStat#post_count` was not updated when post was hidden or
unhidden.
* `TopicConverter` was only incrementing or decrementing the counts by 1
even if a user had multiple posts in the topic.
* The commit turns off the verbose logging by default as it is just
noise to normal users who are not debugging this problem.
In ab5361d69a, we rescue from the PG error,
but the transaction is already aborted, causing any DB query after it to
fail. As such, we avoid triggering the error in the first place by
checking that we would not be inserting a negative number into the
counter cache.
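A sketch of the guard (simplified from the real counter cache path):

```
# Clamp instead of writing a negative value; a violated constraint would
# abort the transaction and make every subsequent query fail.
def safe_decrement!(user_stat, column, by = 1)
  new_value = [user_stat.public_send(column) - by, 0].max
  user_stat.update_column(column, new_value)
end
```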
Follow-up to ab5361d69a
There are still spots in the code base which result in us trying to turn the post and topic counts negative. However,
we have a job that runs on a daily basis which corrects the counts. Therefore, avoid raising an error for now
and log the exception instead.
Random strings can result in much longer tsvectors. For example,
parsing a Base64 string of ~600kb can result in a tsvector of over 1MB,
which is the maximum size of a tsvector.
Follow-up-to: 823c3f09d4
Ensures that `UserStat#post_count` and `UserStat#topic_count` do not
go below 0. When they do, as happened here, it usually means we have bugs
in our code, since we're usually coding with the assumption that the
counts aren't negative.
In order to support the constraints, our post and topic fabricators in
tests will now automatically increment the count for the respective
user's `UserStat` as well. We have to do this because our fabricators
bypass `PostCreator`, which holds the responsibility of updating `UserStat#post_count` and
`UserStat#topic_count`.
This commit fixes a bug where our `HTMLScrubber` was only searching
for emoji img tags that contain only the "emoji" class. However, our emoji image tags
may contain more than just the "emoji" class, like "only-emoji" when an
emoji exists by itself on a single line.
Also:
* Remove an unused method (#fill_email)
* Replace a method that was used just once (#generate_username) with `SecureRandom.alphanumeric`
* Remove obsolete dev puma `tmp/restart` file logic
* File.exists? is deprecated and removed in Ruby 3.2 in favor of
File.exist?
* Dir.exists? is deprecated and removed in Ruby 3.2 in favor of
Dir.exist?
You can add callbacks that get called before updating an already consolidated notification or before creating a new consolidated one.
Instances of this rule can add callbacks to access the old notifications about to be destroyed, or the consolidated notification, and add additional data inside the data hash, instead of having to execute extra queries when adding this logic inside the `set_mutations` block.
I plan to use this in an upcoming discourse-reactions PR, where I want to like a post without notifying the user, so I can instead create a reaction notification.
Additionally, we decouple the a11y attributes from the icon itself, which will let us extend the widget's icon without losing them.
This PR moves the behavior out of the PostAlerter. We delete an existing liked notification and set the `username2` attribute to the previous `display_username`. We repeat this process unless the last notification is old enough, or it is no longer among the most recent ones.
We previously used ConsolidateNotifications with a threshold of 1 to re-use an existing notification and bump it to the top instead of creating a new one. It produced some jumpiness in the user notification list, and it relied on updating the `created_at` attribute, which is a bit hacky.
As a better alternative, we're introducing a new plan that deletes all the previous versions of the notification, then creates a new one.
We send the reminder using the GroupMessage class, which supports removing previous messages. We can't match them by raw because they could mention different moderators. Also, I had to change the subject to remove dynamically generated values, which is necessary for finding them.
This commit introduces a new site setting "google_oauth2_hd_groups". If enabled, group information will be fetched from Google during authentication, and stored in the Discourse database. These 'associated groups' can be connected to a Discourse group via the "Membership" tab of the group preferences UI.
The majority of the implementation is generic, so we will be able to add support for more authentication methods in the near future.
https://meta.discourse.org/t/managing-group-membership-via-authentication/175950
* REFACTOR: Improve support for consolidating notifications.
Before this commit, we didn't have a single way of consolidating notifications. For notifications like group summaries, we manually removed old ones before creating a new one. On the other hand, we used an after_create callback for likes and group membership requests, which caused unnecessary work, as we need to delete the record we created to replace it with a consolidated one.
We now have all the consolidation rules centralized in a single place: the consolidation planner class. Other parts of the app looking to create a consolidable notification can do so by calling Notification#consolidate_or_save!, instead of the default Notification#create! method.
Finally, we added two more rules: one for re-using existing group summaries and another for deleting duplicated dashboard problems PM notifications when the user is tracking the moderator's inbox. Setting the threshold to one forces the planner to apply this rule every time.
I plan to add plugin support for adding custom rules in another PR to keep this one relatively small.
* DEV: Introduces a plugin API for consolidating notifications.
This commit removes the `Notification#filter_by_consolidation_data` scope since plugins may need to define their own criteria. The Plan class now receives two blocks: one to query for an already consolidated notification, which we'll try to update, and another to query for existing notifications to consolidate.
It also receives a consolidation window, which accepts an ActiveSupport::Duration object, and filters notifications created since that value.
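A plugin registration might look roughly like this; the keyword argument names are assumptions based on the description, not the final API:

```
# Two blocks: one finds an already-consolidated notification to update,
# the other finds existing notifications to roll up. The window filters
# notifications created within the given duration.
plan = Notifications::ConsolidateNotifications.new(
  from: Notification.types[:liked],
  to: Notification.types[:liked_consolidated],
  threshold: 10,
  consolidation_window: 1.day,
  consolidated_query_blk: ->(notifications, data) do
    notifications.where("data::json ->> 'display_username' = ?", data[:display_username])
  end,
  unconsolidated_query_blk: ->(notifications, data) do
    notifications.where("data::json ->> 'display_username' = ?", data[:display_username])
  end,
)
```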
This commit adds token_hash and scope columns to the email_tokens table.
token_hash is a replacement for the token column, to avoid storing email
tokens in plaintext, which can pose a security risk. The new scope column
ensures that email tokens cannot be used to perform a different action
than the one intended.
To sum up, this commit:
* Adds token_hash and scope to email_tokens
* Reuses code that schedules critical_user_email
* Refactors EmailToken.confirm and EmailToken.atomic_confirm methods
* Periodically cleans old, unconfirmed or expired email tokens
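Conceptually, the token_hash mechanism works like this (a sketch, not the exact EmailToken implementation):

```
require "digest"
require "securerandom"

token = SecureRandom.hex(16)                 # sent in the email link only
token_hash = Digest::SHA256.hexdigest(token) # the only value persisted

# Confirming re-hashes the presented token and compares digests, so a
# leaked database dump cannot be replayed.
Digest::SHA256.hexdigest(token) == token_hash # => true
```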
Use @here to mention all users who were allowed to the topic directly or
through a group, and who liked or read the topic. Only the first 10 users
will be notified.
We are pushing /notification-alert/#{user_id} and /notification/#{user_id}
messages to MessageBus from both PostAlerter and User#publish_notification_state.
This can cause memory issues on large sites with many users. This commit
stems the bleeding by only sending these alert messages if the user
in question has been seen in the last 30 days, which eliminates a large
chunk of users on some sites.
When 31035010af
was done, it failed to take into account the case where the smtp_enabled
site setting was true but the topic had no allowed groups / no
incoming email record, which caused errors even for topics with
nothing to do with group SMTP.
When there are multiple groups on a topic, we were selecting
the first from the topic allowed groups to act as the sender
email address when sending group SMTP replies via PostAlerter.
However, this was not ordered, and since there is no created_at
column on TopicAllowedGroup we cannot order this nicely, which
caused just a random group to be used (based on whatever postgres
decided it felt like that morning).
This commit changes the group used for SMTP sending to be the
group using the email_username of the to address of the first
incoming email for the topic, if there is more than one allowed
group on the topic. Otherwise it just uses the only SMTP-enabled
group.
This PR introduces a new `enable_experimental_backup_uploads` site setting (default false and hidden), which when enabled alongside `enable_direct_s3_uploads` will allow for direct S3 multipart uploads of backup .tar.gz files.
To make multipart external uploads work with both the S3BackupStore and the S3Store, I've had to move several methods out of S3Store and into S3Helper, including:
* presigned_url
* create_multipart
* abort_multipart
* complete_multipart
* presign_multipart_part
* list_multipart_parts
Then, S3Store and S3BackupStore either delegate directly to S3Helper or have their own special methods to call S3Helper for these methods. FileStore.temporary_upload_path has also removed its dependence on upload_path, and can now be used interchangeably between the stores. A similar change was made in the frontend as well, moving the multipart related JS code out of ComposerUppyUpload and into a mixin of its own, so it can also be used by UppyUploadMixin.
Some changes to ExternalUploadManager had to be made here as well. The backup direct uploads do not need an Upload record made for them in the database, so they can be moved to their final S3 resting place when completing the multipart upload.
This changeset is not perfect; it introduces some special cases in UploadController to handle backups that were previously in BackupController, because UploadController is where the multipart routes are located. A subsequent pull request will pull these routes into a module or some other sharing pattern, along with hooks, so the backup controller and the upload controller (and any future controllers that may need them) can include these routes in a nicer way.
This commit introduces a new s3:ensure_cors_rules rake task
that is run as a prerequisite to s3:upload_assets. This rake
task calls out to the S3CorsRulesets class to ensure that
the 3 relevant sets of CORS rules are applied, depending on
site settings:
* assets
* direct S3 backups
* direct S3 uploads
This works for both Global S3 settings and Database S3 settings
(the latter set directly via SiteSetting).
As it is, only one rule can be applied, which is generally
the assets rule as it is called first. This commit changes
the ensure_cors! method to be able to apply new rules as
well as the existing ones.
This commit also slightly changes the existing rules to cover
direct S3 uploads via uppy, especially multipart, which requires
some more headers.
Instead of using image-uploader, which relies on the old
UploadMixin, we can now use the uppy-image-uploader which
uses the new UppyUploadMixin which is stable enough and
supports both regular XHR uploads and direct S3 uploads,
controlled by a site setting (default to XHR).
At some point it may make sense to rename uppy-image-uploader
back to image-uploader, once we have gone through plugins
etc. and allowed a bit of a deprecation period.
This commit also fixes `for_private_message`, `for_site_setting`,
and `pasted` flags not being sent via uppy uploads onto the
UploadCreator, both via regular XHR uploads and also through
external/multipart uploads.
The uploaders changed are:
* site setting images
* badge images
* category logo
* category background
* group flair
* profile background
* profile card background
Calling create_notification_alert could still send a notification to a
suspended user. This just moves the check for whether the user is
suspended to right before sending the notification.
It allows saving a local date to a calendar.
A modal gives the option to pick between ICS and Google. The user's choice can be remembered as a default for future actions.
Also promote the `create_notification_alert` and `push_notification`
methods from instance methods to class methods so that plugins can call
them. This is temporary until we add a more comprehensive API for
extending `PostAlerter`.
Discourse regularly sends messages to admins when potential problems persist. Most of the time they have exactly the same content. In that case, when there are no replies, the old message should be trashed before a new one is created.
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary upload S3 into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` endpoint is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. If the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than what they first specified, or, in the case of multipart uploads, uploads larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
When merging two user accounts, don't merge the source user's email address if the target user is not a human.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Long posts may have `cooked` fields that produce tsvectors longer than
the maximum size of 1MiB (1,048,576 bytes). This commit uses just the
first million characters of the scrubbed cooked text for indexing.
Truncating to exactly 1MiB (1_048_576 characters) would not be sufficient
because the output tsvector may sometimes be longer than the input; using
one million characters gives us some breathing room.
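A sketch of the truncation:

```
MAX_INDEXED_CHARS = 1_000_000

def truncate_for_index(scrubbed_cooked)
  # The tsvector cap is 1_048_576 bytes and the output can be longer
  # than the input, so one million characters leaves breathing room.
  scrubbed_cooked[0...MAX_INDEXED_CHARS]
end
```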
We shouldn't be checking if a user is allowed to do an action in the logger. We should be checking it just before we perform the action. In fact, guardians in the logger can make things even worse in case of a security bug. Let's say we forgot to check user's permissions before performing some action, but we still have a call to the guardian in the logger. In this case, a user would perform the action anyway, and this action wouldn't even be logged!
I've checked all cases and I confirm that we're safe to delete these calls from the logger.
I've added two calls to guardians in admin/user_controller. We didn't have security bugs there, because regular users can't access admin/... routes at all. But it's good to have calls to the guardian in these methods anyway; neighboring methods have them.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file direct to S3 using the presigned URL. Once the upload is complete we go to the next stage.
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100mb defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download and generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3 then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby. The browser SHA1 checksum is stored on the object in S3 with metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally we return the serialized upload record back to the client
There are several errors that could happen that are handled by `UploadsController` as well.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
Configuring staged users to watch categories and tags is a way to sign
them up to get many emails. These emails may be unwanted and get marked
as spam, hurting the site's email deliverability.
Users can opt-in to email notifications by logging on to their
account and configuring their own preferences.
If staff need to be able to configure these preferences on behalf of
staged users, the "allow changing staged user tracking" site setting
can be enabled. Default is to not allow it.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
There was a bug with changing timestamps using the topic wrench button. Under some circumstances, a topic was disappearing from the top of the latest tab after changing timestamps. Steps to reproduce:
- Choose a topic on the latest tab (the topic should be created some time ago, but has recent posts)
- Change the topic timestamps (for example, move them one day forward)
- Go back to the latest tab and see that the topic has disappeared.
This PR fixes this. We were setting topic.bumped_at to the timestamp the user specified in the modal. This is incorrect. Instead, we should be setting topic.bumped_at to the created_at timestamp of the last regular (not a whisper and so on) post in the topic.
Currently when bulk-awarding a badge that can be granted multiple times, users in the CSV file are granted the badge once no matter how many times they're listed in the file and only if they don't have the badge already.
This PR adds a new option to the Badge Bulk Award feature so that it's possible to grant users a badge even if they already have the badge and as many times as they appear in the CSV file.
User flair was given by the user's primary group. This PR separates the
two, adds a new field to the user model for the flair group ID, and users
can now select their flair in user preferences.
For other private messages we have the site setting
personal_email_time_window_seconds (default 20s) which allows
people to edit their post etc. before the email is sent.
This PR makes the Jobs::GroupSmtpEmail enqueuer in the
PostAlerter use the same delay.
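Roughly, the enqueue becomes (the job argument names here are illustrative):

```
def enqueue_group_smtp_email(group, post, to_address)
  # Delay by the same grace window used for personal message emails,
  # so senders can edit their post before anything goes out.
  Jobs.enqueue_in(
    SiteSetting.personal_email_time_window_seconds,
    :group_smtp_email,
    group_id: group.id,
    post_id: post.id,
    email: to_address,
  )
end
```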
This PR backtracks a fair bit on this one https://github.com/discourse/discourse/pull/13220/files.
Instead of sending the group SMTP email for each user via `UserNotifications`, we are changing to send only one email with the existing `Jobs::GroupSmtpEmail` job and `GroupSmtpMailer`. We are changing this job and mailer along with `PostAlerter` to make the first topic allowed user the `to_address` for the email and any other `topic_allowed_users` to be the CC address on the email. This is to cut down on emails sent via SMTP, which is subject to daily limits from providers such as Gmail. We log these details in the `EmailLog` table now.
In addition to this, we have changed `PostAlerter` to no longer rely on incoming email addresses for sending the `GroupSmtpEmail` job. This was unreliable, as a user's email could have changed in the meantime. It was also a little overcomplicated to use the incoming email records -- it is far simpler to reason about just using topic allowed users.
This also adds a fix to include cc_addresses in the EmailLog.addressed_to_user scope.
The generated regular expressions did not contain \b, so they matched
any text that contained the word, even if it was only a substring of
another word.
For example, if "art" was a watched word, a post containing the word
"artist" matched.
Subclasses must call #delete_user_actions inside build_actions to support user deletion. The method adds a delete user bundle, which has a delete and a delete + block option. Every subclass is responsible for implementing these actions.
Notifying about a tag change sometimes resulted in loading a large
number of users in memory just to perform an exclusion. This commit
prefers to do inclusion (i.e. instead of exclude users X, do include
users in groups Y) and does it in SQL to avoid fetching unnecessary
data that is later discarded.
When a group only has SMTP enabled and not IMAP, we do not
want to enqueue the :group_smtp_email job because using the group's
SMTP credentials for sending user_private_message emails is
handled by the UserNotifications class.
We do not want the :group_smtp_email job to be enqueued because
that uses a reply key instead of the group.email_username
for the reply-to address which is not what we want for SMTP
only, and also creates an IncomingEmail record to prevent IMAP
double syncing which we do not need either.
There is an open question about what happens when IMAP is
enabled after SMTP has been enabled for a while, and also questions
around whether we could do away with :group_smtp_email altogether
and handle everything via EmailLog and UserNotifications, adding
additional columns to the former and modifying the Imap::Sync
class to take this into account... a lot more testing
for IMAP needs to be done to answer those questions.
For now, this fix should be sufficient to get the correct
reply-to address for user_private_response messages sent in
response to emails sent directly to the group's
email_username SMTP address.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
This PR changes the `UserNotification` class to send outbound `user_private_message` using the group's SMTP settings, but only if:
* The first allowed_group on the topic has SMTP configured and enabled
* SiteSetting.enable_smtp is true
* The group does not have IMAP enabled, if this is enabled the `GroupSMTPMailer` handles things
The email is sent using the group's `email_username` as both the `from` and `reply-to` address, so when the user replies from their email it will go through the group's SMTP inbox, which needs to have email forwarding set up to send the message on to a location (such as a hosted site email address like meta@discoursemail.com) where it can be POSTed into discourse's handle_mail route.
Also includes a fix to `EmailReceiver#group_incoming_emails_regex` to include the `group.email_username` so the group does not get a staged user created and invited to the topic (which was a problem for IMAP), as well as updating `Group.find_by_email` to find using the `email_username` as well for inbound emails with that as the TO address.
#### Note
This is safe to merge without impacting anyone seriously. If people had SMTP enabled for a group they would have IMAP enabled too currently, and that is a very small amount of users because IMAP is an alpha product, and also because the UserNotification change has a guard to make sure it is not used if IMAP is enabled for the group. The existing IMAP tests work, and I tested this functionality by manually POSTing replies to the SMTP address into my local discourse.
There will probably be more work needed on this, but it needs to be tested further in a real hosted environment to continue.
It was not clear that "replace" watched words can be used to replace text
with URLs. This introduces a new watched word type that makes this easier
to understand.
This overhauls the user interface for the group email settings management, aiming to make it a lot easier to test the settings entered and confirm they are correct before proceeding. We do this by forcing the user to test the settings before they can be saved to the database. It also includes some quality of life improvements around setting up IMAP and SMTP for our first supported provider, GMail. This PR does not remove the old group email config, that will come in a subsequent PR. This is related to https://meta.discourse.org/t/imap-support-for-group-inboxes/160588 so read that if you would like more backstory.
### UI
Both site settings of `enable_imap` and `enable_smtp` must be true to test this. You must enable SMTP first to enable IMAP.
You can prefill the SMTP settings with GMail configuration. To proceed with saving these settings you must test them, which is handled by the EmailSettingsValidator.
If there is an issue with the configuration or credentials a meaningful error message should be shown.
IMAP settings must also be validated when IMAP is enabled, before saving.
When saving IMAP, we fetch the mailboxes for that account and populate them. This mailbox must be selected and saved for IMAP to work (the feature acts as though it is disabled until the mailbox is selected and saved):
### Database & Backend
This adds several columns to the Groups table. The purpose of this change is to make it much more explicit that SMTP/IMAP is enabled for a group, rather than relying on settings not being null. Also included is an UPDATE query to backfill these columns. These columns are automatically filled when updating the group.
For GMail, we now filter the mailboxes returned. This is so users cannot use a mailbox like Sent or Trash for syncing, which would generally be disastrous.
There is a new group endpoint for testing email settings. This may be useful in the future for other places in our UI, at which point it can be extracted to a more generic endpoint or module to be included.
Previously we would retry push notifications indefinitely for all errors
except ExpiredSubscription.
Under certain conditions other persistent errors may arise, such as a persistent
rate limit.
If we track more than 3 errors in a period of time longer than a day, we will
delete the subscription.
Also performs a bit of internal cleanup to ensure protected methods really
are protected.
This PR improves the UI of bulk select so that its context is applied to the Dismiss Unread and Dismiss New buttons. Regular users (not just staff) are now able to use topic bulk selection on the /new and /unread routes to perform these dismiss actions more selectively.
For Dismiss Unread, there is a new count in the text of the button and in the modal when one or more topic is selected with the bulk select checkboxes.
For Dismiss New, there is a count in the button text, and we have added functionality to the server side to accept an array of topic ids to dismiss new for, instead of always having to dismiss all new, the same as the bulk dismiss unread functionality. To clean things up, the `DismissTopics` service has been rolled into the `TopicsBulkAction` service.
We now also show the top Dismiss/Dismiss New button based on whether the bottom one is in the viewport, not just based on the topic count.
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
Watched words are always treated as regular expressions, regardless of
whether watched_words_regular_expressions is enabled. Internally, wildcard
characters are replaced with a regular expression that matches any
non-whitespace character.
* FIX: Hide tag watched words if tagging is disabled
These 'autotag' words were shown even if tagging was disabled.
* FIX: Make autotag watched words case insensitive
This commit also fixes the bug where no tag was applied if no other tag
was already present.
We have a few places in the code where we need to validate various email related settings, and will have another soon with the improved group email settings UI. This PR introduces a class which can validate POP3, IMAP, and SMTP credentials and also provide a friendly error message for issues if they must be presented to an end user.
This PR does not change any existing code to use the new service. I have added a TODO to change POP3 validation and the email test rake task to use the new validator post-release.
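Intended usage is along these lines (the signature is an assumption, not the final API):

```
begin
  EmailSettingsValidator.validate_smtp(
    host: "smtp.gmail.com",
    port: 587,
    username: group.email_username,
    password: group.email_password,
  )
rescue StandardError => e
  # The validator maps provider errors (bad credentials, wrong host or
  # port, etc.) to a message that is safe to show an end user.
  friendly_message = e.message
end
```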
* FIX: Link notification to first unread post
If a topic with a few posts was posted in a watched category or with a
watched tag, the created notification would always point to the last
post, instead of pointing to the first one.
The root cause is that the query that fetched the first unread post
uses 'TopicUser' records, and those are not created by default for
users watching a category or tag. In this case, it should use the
'CategoryUser' or 'TagUser' records.
* DEV: Use named bind variables
The message in logs will now look like:
```
BadgeGranter::GrantError: Failed to backfill 'Some Badge' badge: {:post_ids=>[]}. Reason: ERROR: column "email" does not exist
LINE 6: ...t id as user_id, current_timestamp as granted_at, email from...
```
When the admin creates a new custom field they can specify if that field should be searchable or not.
That setting is taken into consideration for quick search results.
Previously, watched words ignored topic titles when applying auto-tagging rules.
The copy has also been improved to reflect how the system behaves:
the text now hints that we are only watching the first post.
Rails 6.1.3.1 deprecates a few APIs and has some internal changes that break our test suite, so this commit fixes all the deprecations and errors; Discourse should now be fully compatible with Rails 6.1.3.1. We also have a new release of the rails_failover gem that's compatible with Rails 6.1.3.1.
This PR adds a new category setting which is a column in the `categories` table, `allow_unlimited_owner_edits_on_first_post`.
What this does is:
* Inside the `can_edit_post?` method of `PostGuardian`, if the current user editing a post is the owner of the post, it is the first post, and the topic's category has `allow_unlimited_owner_edits_on_first_post`, then we bypass the check for `LimitedEdit#edit_time_limit_expired?` on that post.
* Also, similar to wiki topics, in `PostActionNotifier#after_create_post_revision` we send a notification to all users watching a topic when the OP is edited in a topic with the category setting `allow_unlimited_owner_edits_on_first_post` enabled.
This is useful for forums where there is a Marketplace or similar category, where topics are created and then updated indefinitely by the OP rather than the OP making new topics or additional replies. In a way this acts similar to a wiki that only one person can edit.
To add an extra layer of security, we sanitize settings before shipping them to the client. We don't sanitize those that have the "html" type.
The CookedPostProcessor already uses Loofah for sanitization, so I chose to also use it for this. I added it to our Gemfile, since previously it was only installed as a transitive dependency.
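A sketch of the sanitization step:

```
require "loofah"

# Strip unsafe tags and attributes from a setting value before it is
# shipped to the client; settings with the "html" type skip this.
def sanitized_value(value)
  Loofah.fragment(value.to_s).scrub!(:strip).to_s
end

sanitized_value(%(<b onclick="evil()">hi</b>)) # => "<b>hi</b>"
```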
Count errors on updating themes in the error bucket. Otherwise,
there was a chance that this could hide errors, e.g. if a deploy key to a
private repo were deleted. Admins probably would like to know about this.
This PR allows invitations to be used when the DiscourseConnect SSO is enabled for a site (`enable_discourse_connect`) and local logins are disabled. Previously invites could not be accepted with SSO enabled simply because we did not have the code paths to handle that logic.
The invitation methods that are supported include:
* Inviting people to groups via email address
* Inviting people to topics via email address
* Using invitation links generated by the Invite Users UI in the /my/invited/pending route
The flow works like this:
1. User visits an invite URL
2. The normal invitation validations (redemptions/expiry) happen at that point
3. We store the invite key in a secure session
4. The user clicks "Accept Invitation and Continue" (see below)
5. The user is redirected to /session/sso then to the SSO provider URL then back to /session/sso_login
6. We retrieve the invite based on the invite key in secure session. We revalidate the invitation. We show an error to the user if it is not valid. An additional check here for invites with an email specified is to check the SSO email matches the invite email
7. If the invite is OK we create the user via the normal SSO methods
8. We redeem the invite and activate the user. We clear the invite key in secure session.
9. If the invite had a topic we redirect the user there, otherwise we redirect to /
Note that we decided for SSO-based invites the `must_approve_users` site setting is ignored, because the invite is a form of pre-approval, and because regular non-staff users cannot send out email invites or generally invite to the forum in this case.
Also deletes some group invite checks as per https://github.com/discourse/discourse/pull/12353
Currently the process of adding a custom image to a badge is quite clunky; you have to upload your image to a topic, and then copy the image URL and paste it into a text field. Besides being clunky, if the topic or post that contains the image is deleted, the image will be garbage-collected in a few days and the badge will lose its image, because the application is not aware that the image is referenced by a badge.
This commit improves that by adding a proper image uploader widget for badge images.
We were sending 2 emails for user silencing if a message was provided in the UI. Also, always send an email for user silencing and user suspension with a reason, regardless of whether a message was provided.
This commit includes other various improvements to watched words.
The auto_silence_first_post_regex site setting was removed because it
overlapped with the 'require approval' watched words.
Previously we were checking truthiness in some places, and `== true` in
others. That can lead to some inconsistent UX where the interface says
the email is valid, but account creation fails.
This commit ensures values are boolean when set, and raises an error for
other value types.
If this safety check is triggered, it means the specific auth provider
needs to be updated to pass booleans.
The bug was mentioned on meta: https://meta.discourse.org/t/pressing-dismiss-new-doesnt-clear-new-topics/179858
The problem is that sometimes the user has TopicUser records with `last_read_post_number` set to NULL. In that case, the topic is still "new" to them and should be dismissed when they click the dismiss button.
In addition, I added that condition to the post_migration and bumped the number to fix existing records. The migration is written to be idempotent, so it will do no harm to already deployed instances.
Previously when inheriting category auto-close settings for a topic, those settings were disrupted if another topic timer was assigned or if a topic was closed then manually re-opened.
This PR makes it so that when a topic is manually re-opened the topic auto-close settings are inherited from the category. However, they will now be based on the topic created_at date. As an example, for a topic with a category auto close hours setting of 72 (3 days):
* Topic was created on 2021-02-15 08:00
* Topic was closed on 2021-02-16 10:00
* Topic was opened again on 2021-02-17 06:00
Now, the topic will inherit the auto close timer again and will close automatically at **2021-02-18 08:00**, which is based on the creation date. If the current date and time is greater than the original auto-close time (e.g. we were at 2021-02-20 13:45) then no auto-close timer is created.
Note, this will not happen if the topic category auto-close setting is "based on last post".
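A sketch of the rule, using simplified category fields:

```
# When a topic is manually re-opened, derive the auto-close time from
# the topic's creation date rather than from "now".
def inherited_close_time(topic)
  hours = topic.category&.auto_close_hours
  return if hours.blank?
  return if topic.category.auto_close_based_on_last_post # handled elsewhere

  close_at = topic.created_at + hours.hours
  close_at if close_at > Time.zone.now # otherwise, create no timer
end
```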
Updating a topic's visibility did not increase or decrease the
topic_count of a category, but Category.update_stats does ignore
unlisted topics, which resulted in inconsistencies when deleting
such topics.
The original PR was reverted because of a broken migration https://github.com/discourse/discourse/pull/12058
I fixed it by adding this line:
```
AND topics.id IN(SELECT id FROM topics ORDER BY created_at DESC LIMIT :max_new_topics)
```
This time it left joins a limited number of topics. I tested it on a few databases and it worked quite smoothly.
Follow up https://github.com/discourse/discourse/pull/11968
Dismiss all new topics using the same DismissTopicService. In addition, MessageBus receives the exact topic ids which should be marked as `seen`.
* FEATURE: Ability to dismiss new topics in a specific tag
Follow up of https://github.com/discourse/discourse/pull/11927
Using the same mechanism to dismiss new topics in a tag.
* FIX: respect when a category and a tag are selected
The 'Discourse SSO' protocol is being rebranded to DiscourseConnect. This should help to reduce confusion when 'SSO' is used in the generic sense.
This commit aims to:
- Rename `sso_` site settings. DiscourseConnect specific ones are prefixed `discourse_connect_`. Generic settings are prefixed `auth_`
- Add (server-side-only) backwards compatibility for the old setting names, with deprecation notices
- Copy `site_settings` database records to the new names
- Rename relevant translation keys
- Update relevant translations
This commit does **not** aim to:
- Rename any Ruby classes or methods. This might be done in a future commit
- Change any URLs. This would break existing integrations
- Make any changes to the protocol. This would break existing integrations
- Change any functionality. Further normalization across DiscourseConnect and other auth methods will be done separately
The risks are:
- There is no backwards compatibility for site settings on the client-side. Accessing auth-related site settings in Javascript is fairly rare, and an error on the client side would not be security-critical.
- If a plugin is monkey-patching parts of the auth process, changes to locale keys could cause broken error messages. This should also be unlikely. The old site setting names remain functional, so security-related overrides will remain working.
A follow-up commit will be made with a post-deploy migration to delete the old `site_settings` rows.
This PR allows entering a float value for topic timers, e.g. 0.5 for 30 minutes when entering hours, or 0.5 for 12 hours when entering days. This is achieved by adding a new column to store the duration of a topic timer in minutes, instead of the ambiguous hours-or-days it could be before.
This PR has omitted the post migration to delete the duration column in topic timers; it will be done in a subsequent PR to ensure that no data is lost if the UPDATE query to set duration_minutes fails.
I have to keep the old duration keyword in set_or_create_topic_timer for backwards compat; I will remove it at a later date after plugins are updated.
This is an attempt to simplify the logic around dismissing new topics, so one solution works in all places: dismiss all new, dismiss new in a specific category, or even in a specific tag.
This adds a new table, UserNotificationSchedules, which stores Monday-Friday start and end times when each user would like to receive notifications (with a boolean `enabled` to turn the schedule off). There is then a background job that runs every day and creates do_not_disturb_timings for each user with an enabled notification schedule. The job schedules timings 2 days in advance. The job is designed so that it can be run at any point in time, and it will not create duplicate records.
When a users saves their notification schedule, the schedule processing service will run and schedule do_not_disturb_timings. If the user should be in DND due to their schedule, the user will immediately be put in DND (message bus publishes this state).
The UI for a user's notification schedule is in user -> preferences -> notifications. By default every day is 8am - 5pm when first enabled.
There was an issue that occurred with this order of operations:
* An IMAP topic was created by emailing a group
* A second user was invited to the topic (not the OP and not the group)
* A user with access to the group replies to the topic
* The second user receives a user_private_message notification email because of their involvement in the topic
* The second user replies to the email via email
This new reply would then go and notify the other group PM users, except for those who emailed the group topic directly, which is handled via the group SMTP mailer. However, because the new post already has an incoming email record (it was parsed by Email::Receiver via POP3), the group SMTP section of the post alerter is skipped, and the group's email address is not ignored for the user_private_message notification.
This PR fixes it so the group is not ever sent an email via the PM notification. This is important because any new emails in the group's IMAP inbox will be picked up by the Imap::Sync code and created as a new topic which is not at all desirable.
Also in this PR I split up the specs for group SMTP in the post alerter a bit more, to make them easier to read and so they each only test one thing.
Fixes bug introduced by bd25627198
What happens is we send notifications to everyone involved in the group inbox topic about new posts, however we pass the param `skip_send_email_to: email_addresses`. In the above commit I removed the group email address from this `email_addresses` array. This breaks the IMAP inbox because we email the group with the reply, and the IMAP sync tool finds this email and opens a new unrelated topic with it.
This PR fixes a race condition with the IMAP notification code. In the `Email::Receiver` we call the `NewPostManager` to create the post, enqueue jobs, and send alerts via `PostAlerter`. However, if the post alerter reaches the `notify_pm_users` and the `group_notifying_via_smtp` methods _before_ the incoming email is updated with the post and topic, we unnecessarily send a notification to the person who just posted. The result of this is that the IMAP syncer re-imports the email sent to the user about their own post into the group inbox.
To fix this, we skip the jobs enqueued by `NewPostManager` and only enqueue them with `PostJobsEnqueuer` manually _after_ the incoming email record has been updated with the post and topic.
Other improvements:
* Moved code to calculate email addresses from `IncomingEmail` records into the topic, with a group passed in, for easier testing and debugging. It is not the responsibility of the post alerter to figure this stuff out.
* Add shortcut methods on `IncomingEmail` to split or provide an empty array for to and cc addresses to avoid repetition.
Splits the `ToggleTopicClosed` job into two distinct `OpenTopic` and `CloseTopic` jobs to make the code clearer. The old job cannot be deleted yet because of outstanding sidekiq schedules, so a todo has been added to do so later this year.
Also replaced mentions of `topic_status_update` with `topic_timer` in some files, because the `topic_status_update` model is obsolete and replaced by topic timer.
Added some shortcut methods for checking if a topic is open/whether a user can change an open topic.
Passes by_user to :user_unsilenced so plugins can detect whether
a silence was done automatically (by the system user) or manually (by a non-system user).
Also adds the ability to pass details in the action logger params so custom loggers
can pass their own details, e.g. in custom silence logs.
This commit adds an additional find_user_by_email hook to ManagedAuthenticator so that GitHub login can continue to support secondary email addresses
The github_user_infos table will be dropped in a follow-up commit.
This is the last core authenticator to be migrated to ManagedAuthenticator 🎉
- IgnoredUser records should all now have an expiring_at value. This commit enforces that in the DB, and fixes any corrupt rows
- Changes to the ignored user list are now handled by the `/u/{username}/notification_level` endpoint. This allows setting expiration dates on the ignore. This commit removes the old logic for saving a list of usernames in the user preferences.
- Many specs were calling `IgnoredUser.create`. This commit changes them to use `Fabricate(:ignored_user)` for consistency
The NewUserOfTheMonth badge is part of the Badges::GettingStarted group. This group is skipped in BadgeGranter if the user skips the new user tips. However, the NewUserOfTheMonth badge granter job does not account for this: it notifies users that they've received the badge even when they did not.
This commit introduces a simple fix to allow granting this badge even to users who skipped the new user tips.
DEV: add plugin hooks for silence message parameters
Allows plugins to add and update extra silence message params for custom
i18n vars.
Allows plugins to override system messages via the `message_title` and
`message_raw` parameters. We can later expose these params where necessary via event
hooks. Exposes the parameters for the user_silenced trigger.
Like the "default watching" and "default tracking" categories options, support for "regular" categories is now added. It will be useful for sites that are muted by default. The user option will be displayed only if the `mute_all_categories_by_default` site setting is enabled.
* FIX: Unlike own posts on ownership transfer
If a user has liked a post that is past the
`post_undo_action_window_mins` window and you transfer ownership
of that post to them, they end up owning a post they have
liked but cannot unlike, resulting in weird UI behavior. This commit
fixes that issue.
The existing tests didn't check the timeout window for unliking
posts, so I added that in.
I couldn't find a good way to express this logic inside the guardian
class, so rather than duplicating the behavior of `PostActionDestroyer`
inside `PostOwnerChanger`, I decided to pass in a "bypass" variable that
lets the destroyer skip the guardian when called from the post owner
changer. I went this route because the guardian's
`can_delete_post_action` method has no way to allow a user to unlike
their own post after the timeout window only in the case of an
ownership change.
* use an options hash instead
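Roughly, the destroyer call made during the ownership transfer ends up looking like this; the option and method names here are illustrative, not the exact ones used:

```
# Inside PostOwnerChanger: remove the new owner's own like, skipping the
# guardian's can_delete_post_action check via an options hash.
PostActionDestroyer.new(
  new_owner,
  post,
  PostActionType.types[:like],
  skip_delete_check: true # hypothetical option name
).perform
```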
Enabling the moderators_manage_categories_and_groups site setting will allow moderator users to create/manage groups.
* show New Group form to moderators
* Allow moderators to update groups and read logs, where appropriate
* Rename site setting from create -> manage
* improved tests
* Migration should rename old log entries
* Log group changes, even if those changes mean you can no longer see the group
* Slight reshuffle
* Route to /g if they no longer have permissions to view the group
* FEATURE: don't notify about changed tags for a private message
Only staff members watching the specific tag should receive a notification
* FIX: remove other category, which is not used
* FIX: improved specs to ensure that the revision was successful
In the near future, we will be switching to PG headlines to generate the
search blurb. As such, we need to replace audio and video links in the
raw data used for headline generation. This also means that we avoid
replacing links each time we need to generate the blurb.
* DEV: Show message when cannot invite user to PM
When inviting a user to a PM, return a message that says, "Sorry, this
user can't be invited." if that user has muted the inviter or the
inviter is not on their allowed PM users list.
* Minor refactor & improved some text
We do prefix matching in search, so there is no need to inject the extra
terms.
Before:
```
"'discourse':10,11 'discourse.org':10,11 'org':10,11 'test':8A,10,11 'test.discourse.org':10,11 'titl':4A 'uncategor':9B"
```
After:
```
"'discourse.org':10,11 'org':10,11 'test':8A 'test.discourse.org':10,11 'titl':4A 'uncategor':9B"
```
* FEATURE: Allow List for PMs
This feature adds a new user setting, disabled by default, that
allows users to specify a list of people who are allowed to send them
private messages. This way they don't have to maintain a large list of
users they don't want to hear from, and can instead just list the people
they know they do want to hear from. Staff will still always be able to
send messages to the user.
* Update PR based on feedback
```
discourse_development=# SELECT alias, lexemes FROM TS_DEBUG('www.discourse.org');
alias | lexemes
-------+---------------------
host | {www.discourse.org}
discourse_development=# SELECT TO_TSVECTOR('www.discourse.org');
to_tsvector
-----------------------
'www.discourse.org':1
```
Given the above lexeme, we will inject additional lexemes by splitting
the host on `.`. The actual tsvector stored will look something like
```
tsvector
---------------------------------------
'discourse':1 'discourse.org':1 'org':1 'www':1 'www.discourse.org':1
```
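The splitting itself amounts to something like this (a sketch of the idea, not the exact implementation):

```
def host_lexemes(host)
  parts = host.split(".")
  # Emit every trailing domain suffix plus the individual labels.
  suffixes = (0...parts.length).map { |i| parts[i..-1].join(".") }
  (suffixes + parts).uniq
end

host_lexemes("www.discourse.org")
# => ["www.discourse.org", "discourse.org", "org", "www", "discourse"]
```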
There is a feature in search where we take over from the tokenizer
in Postgres and attempt to inject more words into the index.
For example, sam.i.am will inject the words "i" and "am".
This is not ideal because there are many edge cases and it can
cause extreme index bloat.
This is an opening-move commit to make the behavior configurable; over
the next few weeks we will evaluate and decide whether to disable it by
default or simply remove it.
This reverts commit 20780a1eee.
* SECURITY: re-adds accidentally reverted commit:
03d26cd6: ensure embed_url contains valid http(s) uri
* when the merge commit e62a85cf was reverted, git chose the 2660c2e2 parent to land on
instead of the 03d26cd6 parent (which contains security fixes)
This ensures that, at a minimum, you are notified once a day of
repeat edits by the same user.
Long term we may consider winding this down to, say, 1 hour, or
making it configurable.
Due to a refactor in e90f9e5cc4 we stopped notifying on edits if
a user liked a post and then edited it.
The like could have happened a long time ago, which makes this extra
confusing.
This change makes the suppression more deliberate: we only want
to suppress quote/link/mention notifications if the user already got a
reply notification.
We can expand this suppression if it is not enough.
We have the `# frozen_string_literal: true` comment on all our
files. This means all string literals are frozen. There is no need
to call #freeze on any literals.
For files with `# frozen_string_literal: true`
```
puts %w{a b}[0].frozen?
=> true
puts "hi".frozen?
=> true
puts "a #{1} b".frozen?
=> false
puts ("a " + "b").frozen?
=> false
puts (-("a " + "b")).frozen?
=> true
```
For more details see: https://samsaffron.com/archive/2018/02/16/reducing-string-duplication-in-ruby
It's possible to cause a 500 error by putting weird characters in the
input field for updating a user's website on their profile.
Normal invalid input, like not including the domain extension, is already
handled by the user_profile model validation. This fix ensures a server
error doesn't occur for weird input characters.
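A minimal sketch of the guard; the helper name is hypothetical, and the real fix may normalize the field elsewhere:

```
require "uri"

# Return nil for input that URI cannot parse, instead of letting
# URI::InvalidURIError bubble up as a 500.
def normalized_website(value)
  URI.parse(value).to_s
rescue URI::InvalidURIError
  nil # invalid input falls through to the model validation error path
end
```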
It previously failed to match URLs with characters other than `[a-zA-z0-9\.\/:-]`. This meant that `PullHotlinkedImages` would sometimes download an external image and then never use it in any posts.
* FEATURE: Add created_at column to user_badges
This makes scraping for newly granted badges easier.
Patch from @eviltrout applied.
Co-authored-by: Robin Ward <robin.ward@gmail.com>
If the feature is enabled, staff members can construct a URL and publish a
topic for others to browse without the regular Discourse chrome.
This is useful if you want to use Discourse like a CMS and publish
topics as articles, which can then be embedded into other systems.
* FIX: FlagSockpuppets should not flag a post if a post by that user was already rejected by staff
* Update spec/services/flag_sockpuppets_spec.rb
Co-Authored-By: Robin Ward <robin.ward@gmail.com>
Co-authored-by: Robin Ward <robin.ward@gmail.com>
This fix ensures that if a staged user is linked to or quoted, they won't
be emailed about it.
A staged user could email into a category, and another user could quote
them in a completely different category; we don't want the
staged user to receive an email for that.
Bug report:
https://meta.discourse.org/t/-/145202/9
For example, given a custom badge with SQL:
```
SELECT 1
-- I am a comment
```
You end up with the invalid:
```
FROM (SELECT 1
-- I am a comment) q
```
This fix adds newlines so you end up with the now-valid:
```
FROM (
SELECT 1
-- I am a comment
) q
```
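In other words, the query builder now wraps the user-supplied SQL with newlines, roughly:

```
# A `--` comment on the last line can no longer swallow the closing paren.
def wrap_badge_sql(sql)
  "FROM (\n#{sql}\n) q"
end
```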
* This PR implements the scheduling and notification system for bookmark reminders. Every 5 minutes a schedule runs to check for any reminders that need to be sent before now, limited to **300** reminders at a time (a sketch of this schedule follows the list). Any leftover reminders will be sent in the next run. This is to avoid having to deal with fickle Sidekiq scheduling for reminders in the far-flung future, which would necessitate having a background job anyway to clean up any missing `enqueue_at` reminders.
* If a reminder is sent its `reminder_at` time is cleared and the `reminder_last_sent_at` time is filled in. Notifications are only user-level notifications for now.
* All JavaScript and frontend code related to displaying the bookmark reminder notification is contained here. The reminder functionality is now re-enabled in the bookmark modal as well.
* This PR also implements the "Remind me next time I am at my desktop" bookmark reminder functionality. When the user is on a mobile device they are able to select this option. When they choose it, we set a key in Redis saying they have a pending at-desktop reminder. The next time they change devices we check if the new device is a desktop, and if it is we send the reminders using a DistributedMutex. There is also a job to ensure consistency of these reminders in Redis (in case Redis drops the ball), and the at-desktop reminders expire after 20 days.
* Also in this PR is a fix to delete all Bookmarks for a user via `UserDestroyer`
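A sketch of the 5-minute schedule described in the first point; the class and constant names are approximations of the real job:

```
module Jobs
  class BookmarkReminderNotifications < Jobs::Scheduled
    every 5.minutes

    MAX_REMINDERS_PER_RUN = 300

    def execute(args)
      # Send at most 300 due reminders; leftovers go out on the next run.
      Bookmark
        .where("reminder_at IS NOT NULL AND reminder_at <= ?", Time.zone.now)
        .order("reminder_at ASC")
        .limit(MAX_REMINDERS_PER_RUN)
        .each { |bookmark| BookmarkReminderNotificationHandler.send_notification(bookmark) }
    end
  end
end
```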
* FIX: when an unread reply notification exists, don't create a new one
From time to time, a user creates a reply and then wants to add additional details. They edit the existing post and, for example, add a quote from a previous one.
In that situation, if the user to whom the reply was directed already has an unread notification, we should not create a new one.
That behaviour was mentioned here: https://meta.discourse.org/t/reply-then-edit-to-add-quote-notification-redundancy/138358
* FIX: don't create a new notification if one already exists
The commit 75069ff179
allows users to remove their primary group, but it introduced a bug
where editing any other profile info, like location or website (a form
on a separate page from the flair dropdown), would cause the selected
flair to be removed.
This fix ensures that if the `primary_group_id` parameter is missing
from the update payload it does not remove the existing
`primary_group_id`. It will only be removed if the parameter is
present in the payload and empty.
This is not used in core or official plugins, and has been printing a deprecation notice since v2.3.0beta4. All OpenID 2.0 code and dependencies have been dropped. The user_open_ids table remains for now, in case anyone has missed the deprecation notice, and needs to migrate their data.
Context at https://meta.discourse.org/t/-/113249
Previously, `notify_first_post_users` was loading all users into memory simultaneously, which can cause Sidekiq to run out of memory for large sites. `notify_post_users` was loading every user one-by-one in a loop.
This commit makes both these functions load users in batches of 100. This should make the memory usage of `notify_first_post_users` lower, and reduce the number of queries required in `notify_post_users`.
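Conceptually the change is from loading everything at once to batched iteration, along these lines; `notify_about_first_post` is a stand-in for the real notification call:

```
# Before: User.where(id: user_ids).each { ... } loads all users at once.
# After: load and notify in batches of 100.
User.where(id: user_ids).find_in_batches(batch_size: 100) do |batch|
  batch.each { |user| notify_about_first_post(user, post) }
end
```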
The regression was created here:
https://github.com/discourse/discourse/pull/8750
When a tag or category is added and the user is watching that
category/tag, we changed the notification type to `edited` instead of
`new post`.
However, the logic should be a little more sophisticated.
If the user has already seen the post, the notification should be
`edited`. However, when the user hasn't yet seen the post, the
notification should be `new reply`. A case for this is when, for
example, a topic is in a private category and set for publishing later.
We modify an existing topic, but for the user it is like a new post.
Discussion on meta:
https://meta.discourse.org/t/publication-of-timed-topics-dont-trigger-new-topic-notifications/139335/13
* FEATURE: Replace existing badge owners when using the bulk award feature
* Use ActiveRecord to sanitize title update query, Change replace checkbox text
Co-Authored-By: Robin Ward <robin.ward@gmail.com>
Co-authored-by: Robin Ward <robin.ward@gmail.com>
There is a feature where, when a tag or category is added to a topic,
users who are watching that category or tag are notified.
The problem is that it uses the default notification type "new post".
It would be better to use "new post" only when there really is a new
post and "edited" when categories or tags were modified.
This fix allows a user to remove their currently assigned primary group
if the Site Setting `user selected primary groups` is enabled.
Before this fix, if a user selected "none" for their primary group it
would silently fail and never be updated.
* UI: Mass grant a badge from the admin ui
* Send the uploaded CSV and badge ID to the backend
* Read the CSV and grant badge in batches
* UX: Communicate the result to the user
* Don't award if badge is disabled
* Create a 'send_notification' method to remove duplicated code, slightly shrink badge image. Replace router transition with href.
* Dynamically discover current route
This is required for people using apps with custom protocols. We still verify the entire URL (including protocol) against the site setting value.
Refactored wildcard_url_checker so that it always returns a boolean, rather than sometimes returning a regex match.
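A sketch of that refactor; `allowed_patterns` is a stand-in for however the site setting value is expanded into regexes:

```
# `any?` plus `match?` yields true/false rather than a MatchData object.
def wildcard_url_checker(url)
  allowed_patterns.any? { |pattern| url.match?(pattern) }
end
```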
We like to stay as close as possible to the latest RuboCop because the cops
keep getting better.
This update required some code changes; specifically, the default is now to
avoid explicit returns where an implicit return will do.
This also renames a few rules.
This feature adds the ability to define synonyms for tags, and the ability to merge one tag into another while keeping it as a synonym. For example, tags named "js" and "java-script" can be synonyms of "javascript". When searching and creating topics using synonyms, they will be mapped to the base tag.
Along with this change is a new UI found on each tag's page (for example, `/tags/javascript`) where more information about the tag can be shown. It will list the synonyms, which categories it's restricted to (if any), and which tag groups it belongs to (if tag group names are public on the `/tags` page by enabling the "tags listed by group" setting). Staff users will be able to manage tags in this UI, merge tags, and add/remove synonyms.
When uploading a file to a theme component, if that file already exists and has been marked as secure, we now automatically mark the file as secure: false, change the ACL, log the action as the user, and rebake the posts for the upload.
* Add timezone to user_options table
* Also migrate existing timezone values from UserCustomField,
which is where the discourse-calendar plugin is storing them
* Allow user to change their core timezone from Profile
* Auto guess & set timezone on login & invite accept & signup
* Serialize user_options.timezone for group members. This is so discourse-group-timezones can access the core user timezone, as it is being removed in discourse-calendar.
* Annotate user_option with timezone
* Validate timezone values
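The validation in the last point could look roughly like this, assuming zone names are checked against Rails' `ActiveSupport::TimeZone` lookup (the real implementation may differ):

```
class TimezoneValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    return if value.blank?
    # ActiveSupport::TimeZone[] returns nil for unknown identifiers.
    if ActiveSupport::TimeZone[value].blank?
      record.errors.add(attribute, "is not a valid timezone")
    end
  end
end
```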