On the client side, message-bus subscriptions and the reviewable count UI are based on the 'redesigned_user_menu_enabled' boolean. We need to use the same logic on the server side to ensure things work correctly when legacy navigation is used alongside the new user menu.
2e78045a fixed the anonymization job so that it correctly updated self-mentions, which are not logged in the post_actions table. The solution was to scan the entire `posts` table with a `raw ILIKE` query. On sites with many posts, this can take a very long time.
This commit updates the job to take a two-pass approach:
First, we update posts based on the post_actions table. This is much more efficient than a full table scan, and takes care of all 'non-self' mentions.
Then, we make a second pass using the `raw ILIKE` approach. Since we already took care of most posts, we can scope this down to self-mentions only. By filtering the query to a specific posts.user_id, it is significantly more performant than a full table scan.
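A minimal sketch of that scoped second pass, with hypothetical `user`, `prev_username`, and `new_username` values (the real job's query and replacement logic differ in detail):
```ruby
# Scoped second pass: only this user's own posts, so the ILIKE never
# becomes a full table scan. All names here are illustrative.
Post
  .where(user_id: user.id)
  .where("raw ILIKE ?", "%@#{prev_username}%")
  .find_each do |post|
    post.update!(raw: post.raw.gsub("@#{prev_username}", "@#{new_username}"))
  end
```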
Currently, we’re performing a check when a user is suspended in the
`UserEmail` job and we’re assuming a `post` is always available, which
is not the case. The code indeed breaks when the job is called with the
`account_suspended` type option.
This patch fixes this issue by making the check use the safe navigation
operator, so it also works when `post` is not provided.
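For reference, the operator simply short-circuits to `nil` instead of raising:
```ruby
post&.topic # => nil when post is nil
post.topic  # => NoMethodError when post is nil
```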
Currently, when a suspended user belongs to a group PM (private message
with more than two people in it) and a staff member sends a message to
this group PM, then the suspended user will receive an email.
This happens because a suspended user can only receive emails from staff
members. But in this case, this can be seen as a bug as the expected
behavior would be instead to not send any email to the suspended user. A
staff member can participate in active discussions like any other
member and so their messages in this context shouldn’t be treated
differently than the ones from regular users.
This patch addresses this issue by checking whether a suspended user is
receiving a message from a group PM. If that's the case, an email won't
be sent, regardless of whether the post originated from a staff member.
* UX: add type tag and design update
* UX: clarify status copy in reviewQ
* DEV: switch to selectKit
* UX: color approve/reject buttons in RQ
* DEV: regroup actions
* UX: add type tag and design update
* UX: clarify status copy in reviewQ
* Join questions for flagged post with "or" with new I18n function
* Move ReviewableScores component out of context
* Add CSS classes to reviewable-item based on human type
* UX: add table header for scoring
* UX: don't display % score
* UX: prefix modifier class with dash
* UX: reviewQ flag table styling
* UX: consistent use of ignore icon
* DEV: only show context question on pending status
* UX: only show table headers on pending status
* DEV: reviewQ regroup actions for hidden posts
* UX: reviewQ > approve/reject buttons
* UX: reviewQ add fadeout
* UX: reviewQ styling
* DEV: move scores back into component
* UX: reviewQ mobile styling
* UX: score table on mobile
* UX: reviewQ > move meta info outside table
* UX: reviewQ > score layout fixes
* DEV: readd `agree_and_keep` and fix the spec tests.
* Fix the spec tests
* fix the qunit test
* DEV: readd deleting replies
* UX: reviewQ copy tweaks
* DEV: readd test for ignore + delete replies
* Remove old
* FIX: Add perform_ignore back in for backwards compat
* DEV: add an action alias `ignore` for `ignore_and_do_nothing`.
---------
Co-authored-by: Martin Brennan <martin@discourse.org>
Co-authored-by: Vinoth Kannan <svkn.87@gmail.com>
Forcing the distributed mutex to raise when a notify reviewable job is running
leads to excessive errors in the logs under many conditions.
The new pattern:
1. Optimises the counting of reviewables so it is a lot faster
2. Holds the distributed lock for 2 minutes (max)
The downside is the job queue can get blocked up when tons of notify
reviewables are running at the same time. However this should be very
rare in the real world, as we only notify when stuff is flagged which
is fairly infrequent.
This also gives a fair bit more time for the notifications, which may be
a little slow on large sites with tons of mods.
The #pluck_first freedom patch, first introduced by @danielwaterworth, has served us well and is used widely throughout both core and plugins. It seems to have been a common enough use case that Rails 6 introduced its own method #pick with the exact same implementation. This allows us to retire the freedom patch and switch over to the built-in ActiveRecord method.
There is no replacement for #pluck_first!, but a quick search shows we are using this in a very limited capacity, and in some cases incorrectly (by assuming a nil return rather than an exception), which can quite easily be replaced with #pick plus some extra handling.
Posts with self-mentions aren't updated with username updates. This happens
because mention `UserAction` entries aren't logged for self-mentions.
This change updates the lookup of `Post` and `PostRevision` with mentions to bypass
`UserAction` entries.
Under scenarios of extremely high load where large numbers of `Reviewable*` items are being created, it has been observed that multiple instances of the `NotifyReviewable` job may run simultaneously.
These jobs will work satisfactorily if the concurrency is limited to 1, and the different types of jobs (items reviewable by admins, vs moderators, vs particular groups, etc.) are run eventually.
This change introduces a new option to `DistributedMutex` which allows the `max_get_lock_attempts` to be specified. If the number is exceeded an error will be raised, which will cause Sidekiq to requeue the job. Sidekiq has existing logic to back-off on retry times for jobs that have failed multiple times.
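A hedged sketch of how the option might be used (the exact keyword name and call shape in `DistributedMutex` may differ):
```ruby
# Give up after a few lock attempts; the raised error makes Sidekiq
# requeue the job with its usual retry backoff. Names are illustrative.
DistributedMutex.synchronize("notify_reviewable", max_get_lock_attempts: 3) do
  notify_reviewable_users # hypothetical helper
end
```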
So it can easily be overwritten in a plugin for example.
### Added more tests to provide better coverage
We previously only had `u.silenced_till IS NULL` but I made it consistent with pretty much every other place where we check for "active" users.
These two new lines do change the query a tiny bit though.
**Before**
- You could not get the badge if you were currently silenced (no matter what period is being checked)
- You could get the badge if you were suspended 😬
**After**
- You can't get the badge if you were silenced during the past year
- You can't get the badge if you were suspended during the past year
### Improved the performance of the query by using `NOT EXISTS` instead of `LEFT JOIN / COUNT() = 0`
There is no difference in behaviour between
```sql
LEFT JOIN user_badges AS ub ON ub.user_id = u.id AND ...
[...]
HAVING COUNT(ub.*) = 0
```
and
```sql
NOT EXISTS (SELECT 1 FROM user_badges AS ub WHERE ub.user_id = u.id AND ...)
```
The only difference is performance-wise. The `NOT EXISTS` is 10-30% faster on very large databases (i.e. posts and users in the millions). I checked on 3 of the largest datasets I could find.
```
class Jobs::DummyDelayedJob < Jobs::Base
def execute(args = {})
end
end
RSpec.describe "Jobs.run_immediately!" do
before { Jobs.run_immediately! }
it "explodes" do
current_user = Fabricate(:user)
Jobs.enqueue_in(1.seconds, :dummy_delayed_job)
sign_in(current_user)
end
end
```
The test above will fail with the following error if `ActiveRecord::Base.connection_handler.clear_active_connections!` is called before the configured Capybara server checks out a connection from the connection pool.
```
ActiveRecord::ActiveRecordError:
Cannot expire connection, it is owned by a different thread: #<Thread:0x00007f437391df58@puma srv tp 001 /home/tgxworld/.asdf/installs/ruby/3.1.3/lib/ruby/gems/3.1.0/gems/puma-6.0.2/lib/puma/thread_pool.rb:106 sleep_forever>. Current thread: #<Thread:0x00007f437d6cfc60 run>.
```
We're not exactly sure if this is an ActiveRecord bug or not but we've
invested too much time into investigating this problem. Fundamentally,
we also no longer understand why `ActiveRecord::Base.connection_handler.clear_active_connections!` is being called in an ensure block
within `Jobs::Base#perform` which was added in
ceddb6e0da 10 years ago. This
commit moves the logic for running jobs immediately out of the
`Jobs::Base#perform` method into another `Jobs::Base#perform_immediately` method such that
`ActiveRecord::Base.connection_handler.clear_active_connections!` is not
called. This change will only impact the test environment.
When sending emails out via group SMTP, if we
are sending them to non-staged users we want
to mask those emails with BCC, just so we don't
expose them to anyone we shouldn't. Staged users
are ones that have likely only interacted with
support via email, and will likely include other
people who were CC'd on the original email to the
group.
Co-authored-by: Martin Brennan <martin@discourse.org>
* DEV: Skip push notifications for active online users
Currently, users with active push subscriptions get push notifications
regardless of their "presence" on the site.
This change introduces a `push_notification_time_window_mins`
site setting which is used in conjunction with a user's `last_seen_at` to
determine if push notifications should be sent. A user is considered to
be actively online if their `last_seen_at` is within `push_notification_time_window_mins`
minutes (10 by default); a sketch of this check follows below.
* DEV: Remove client param for push_notification_time_window_mins site setting
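A sketch of the described check, assuming hypothetical helper names:
```ruby
# A user is "actively online" when last seen within the window; push
# notifications are skipped for them. `actively_online?` is illustrative.
def actively_online?(user)
  user.last_seen_at.present? &&
    user.last_seen_at > SiteSetting.push_notification_time_window_mins.minutes.ago
end

send_push_notification(user, payload) unless actively_online?(user) # hypothetical sender
```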
Co-authored-by: Bianca Nenciu <nbianca@users.noreply.github.com>
This commit adds a new notification that gets sent to admins when the site gets new features after an upgrade/deploy. Clicking on the notification takes the admin to the admin dashboard at `/admin` where they can see the new features under the "New Features" section.
Internal topic: t/87166.
This new site setting replaces the
`enable_experimental_sidebar_hamburger` and `enable_sidebar` site
settings as the sidebar feature exits the experimental phase.
Note that we're replacing them without deprecation since the previous
site settings were considered experimental.
Internal Ref: /t/86563
Users who can access the review queue can claim pending reviewables, which means a claimed reviewable can only be handled by the user who claimed it. Currently, we show all claimed reviewables in the user menu, but this can be annoying for other reviewers because they can't do anything about a reviewable claimed by someone else. So this PR makes sure that we only show in the user menu reviewables that are unclaimed or claimed by the current user.
Internal topic: t/77235.
While load testing our user creation code path in production, we
identified that executing the DB statement to update the `Group#user_count` column within a
transaction is creating a bottleneck for us. This is because the
creation of a user and addition of the user to the relevant groups are
done in a transaction. When we execute the DB statement to update
`Group#user_count` for the relevant group, a row level lock is held
until the transaction completes. This row level lock acts like a global
lock when the server is creating users that will be added to the same
group in quick succession.
Instead of updating the counter cache within a transaction which the
default ActiveRecord `counter_cache` option does, we simply update the
counter cache outside of the committing transaction.
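A rough sketch of the pattern, with illustrative model calls:
```ruby
# Inside the transaction: create the user and the membership, but do not
# touch Group#user_count, so no row-level lock is taken on the group.
User.transaction do
  user.save!
  group.group_users.create!(user: user)
end

# After commit: bump the counter without holding a lock for the whole transaction.
Group.where(id: group.id).update_all("user_count = user_count + 1")
```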
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
The previous sidebar default tags and categories implementation did not
allow for a user to configure their sidebar to have no categories or
tags. This commit changes how the defaults are applied. When a user is being created,
we create the SidebarSectionLink records based on the `default_sidebar_categories` and
`default_sidebar_tags` site settings. SidebarSectionLink records are
only created for categories and tags which the user has visibility on at
the point of user creation.
With this change, we're also adding the ability for admins to apply
changes to the `default_sidebar_categories` and `default_sidebar_tags`
site settings historically when changing their site setting. When a new
category/tag has been added to the default, the new category/tag will be
added to the sidebar for all users if the admin elects to apply the changes historically.
Likewise, when a tag/category is removed, the tag/category will be
removed from the sidebar for all users if the admin elects to apply the
changes historically.
Internal Ref: /t/73500
Previously, when categories were not muted by default, we were sending messages about unmuted topics (topics where the user explicitly set the notification level to watching).
The same mechanism can be used to fix a bug: when the user was explicitly watching a topic but the category was muted, the user was not informed about new replies.
Skipped invites were not counted at all and some invites could generate
more than one error and resulted in a grand total that was not equal to
the count of bulk invites.
This commit renames all secure_media related settings to secure_uploads_* along with the associated functionality.
This is being done because "media" does not really cover it, we aren't just doing this for images and videos etc. but for all uploads in the site.
Additionally, in future we want to secure more types of uploads, and enable a kind of "mixed mode" where some uploads are secure and some are not, so keeping media in the name is just confusing.
This also keeps compatibility with the `secure-media-uploads` path, and changes new
secure URLs to be `secure-uploads`.
Deprecated settings:
* secure_media -> secure_uploads
* secure_media_allow_embed_images_in_emails -> secure_uploads_allow_embed_images_in_emails
* secure_media_max_email_embed_image_size_kb -> secure_uploads_max_email_embed_image_size_kb
We previously had a system which would generate a 10x10px preview of images and add their URLs in a data-small-upload attribute. The client would then use that as the background-image of the `<img>` element. This works reasonably well on fast connections, but on slower connections it can take a few seconds for the placeholders to appear. The act of loading the placeholders can also break or delay the loading of the 'real' images.
This commit replaces the placeholder logic with a new approach. Instead of a 10x10px preview, we use imagemagick to calculate the average color of an image and store it in the database. The hex color value is then added as a `data-dominant-color` attribute on the `<img>` element, and the client can use this as a `background-color` on the element while the real image is loading. That means no extra HTTP request is required, and so the placeholder color can appear instantly.
Dominant color will be calculated:
1. When a new upload is created
2. During a post rebake, if the dominant color is missing from an upload, it will be calculated and stored
3. Every 15 minutes, 25 old upload records are fetched and their dominant color calculated and stored. (part of the existing PeriodicalUpdates job)
Existing posts will continue to use the old 10x10px placeholder system until they are next rebaked
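A rough sketch of the ImageMagick technique (not the exact Discourse implementation), assuming a trusted local `path`:
```ruby
# Average the whole image down to one pixel, then read its 8-bit hex value
# from ImageMagick's text output, e.g. "#A1B2C3".
def dominant_color(path)
  out = `convert #{path} -resize 1x1 -depth 8 txt:-`
  out[/#(\h{6})/, 1]
end
```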
We observed that some sites seemingly put us in a tarpit when we attempt to pull hotlinked images. Increasing the timeout will help in these situations.
When a topic was published from a shared draft and it had tags, the
users watching the tags were not notified. The problem was that the
topics are usually created in a secret category and publishing just
moves an existing topic to the target category, without making any
changes to the tags.
It's already included in the `ignored_columns` list in the group model. 03ffb0bf27/app/models/group.rb (L9)
Also, removed the `MigrateGroupFlairImages` onceoff job and spec.
This table holds associations between uploads and other models. This can be used to prevent removing uploads that are still in use.
* DEV: Create upload_references
* DEV: Use UploadReference instead of PostUpload
* DEV: Use UploadReference for SiteSetting
* DEV: Use UploadReference for Badge
* DEV: Use UploadReference for Category
* DEV: Use UploadReference for CustomEmoji
* DEV: Use UploadReference for Group
* DEV: Use UploadReference for ThemeField
* DEV: Use UploadReference for ThemeSetting
* DEV: Use UploadReference for User
* DEV: Use UploadReference for UserAvatar
* DEV: Use UploadReference for UserExport
* DEV: Use UploadReference for UserProfile
* DEV: Add method to extract uploads from raw text
* DEV: Use UploadReference for Draft
* DEV: Use UploadReference for ReviewableQueuedPost
* DEV: Use UploadReference for UserProfile's bio_raw
* DEV: Do not copy user uploads to upload references
* DEV: Copy post uploads again after deploy
* DEV: Use created_at and updated_at from uploads table
* FIX: Check if upload site setting is empty
* DEV: Copy user uploads to upload references
* DEV: Make upload extraction less strict
This commit introduces a new site setting: `block_hotlinked_media`. When enabled, all attempts to hotlink media (images, videos, and audio) will fail, and be replaced with a linked placeholder. Exceptions to the rule can be added via `block_hotlinked_media_exceptions`.
`download_remote_image_to_local` can be used alongside this feature. In that case, hotlinked images will be blocked immediately when the post is created, but will then be replaced with the downloaded version a few seconds later.
This implementation is purely server-side, and does not impact the composer preview.
Technically, there are two stages to this feature:
1. `PrettyText.sanitize_hotlinked_media` is called during `PrettyText.cook`, and whenever new images are introduced by Onebox. It will iterate over all src/srcset attributes in the post HTML and check if they're allowed. If not, the attributes will be removed and replaced with a `data-blocked-hotlinked-src(set)` attribute
2. In the `CookedPostProcessor`, we iterate over all `data-blocked-hotlinked-src(set)` attributes and check whether we have a downloaded version of the media. If yes, we update the src to use the downloaded version. If not, the entire media element is replaced with a placeholder. The placeholder is labelled 'external media', and is a link to the offsite media.
Previously, with the default `editing_grace_period`, hotlinked images were pulled 5 minutes after a post is created. This delay was added to reduce the chance of automated edits clashing with user edits.
This commit refactors things so that we can pull hotlinked images immediately. URLs are immediately updated in the post's `cooked` HTML. The post's raw markdown is updated later, after the `editing_grace_period`.
This involves a number of behind-the-scenes changes including:
- Schedule Jobs::PullHotlinkedImages immediately after Jobs::ProcessPost. Move scheduling to after the `update_column` call to avoid race conditions
- Move raw changes into a separate job, which is delayed until after the ninja-edit window
- Move disable_if_low_on_disk_space logic into the `pull_hotlinked_images` job
- Move raw-parsing/replacing logic into `InlineUpload` so it can easily be shared between `UpdateHotlinkedRaw` and `PullUserProfileHotlinkedImages`
Previously this mapping of **cooked** images was only being run for oneboxes. Now it runs for all images, so we can transform hotlinked images without needing to immediately update `raw`
Incorporates learnings from /t/64227:
* Changes the code to set access control posts in the rake
task to be an efficient UPDATE SQL query.
The original version was timing out with 312017 post uploads;
the new query took ~3s to run.
* Changes the code to mark uploads as secure/not secure in
the rake task to be an efficient UPDATE SQL query rather than
using UploadSecurity. This took a very long time previously,
and now takes only a few seconds.
* Spread out ACL syncing for uploads into jobs with batches of
100 uploads at a time, so they can be parallelized instead
of having to wait ~1.25 seconds for each ACL to be changed
in S3 serially.
One issue that still remains is post rebaking. Doing this serially
is painfully slow. We have a way to do this in sidekiq via PeriodicalUpdates
but this is limited by max_old_rebakes_per_15_minutes. It would
be better to fan this rebaking out into jobs like we did for the
ACL sync, but that should be done in another PR.
This commit migrates all bookmarks to be polymorphic (using the
bookmarkable_id and bookmarkable_type) columns. It also deletes
all the old code guarded behind the use_polymorphic_bookmarks setting
and changes that setting to true for all sites and by default for
the sake of plugins.
No data is deleted in the migrations, the old post_id and for_topic
columns for bookmarks will be dropped later on.
We have not used anything related to bookmarks for PostAction
or UserAction records since 2020, bookmarks are their own thing
now. Deleting all this is just cleaning up old cruft.
A bit of a mixed bag, this addresses several edge areas of bookmarks and makes them compatible with polymorphic bookmarks (hidden behind the `use_polymorphic_bookmarks` site setting). The main ones are:
* ExportUserArchive compatibility
* SyncTopicUserBookmarked job compatibility
* Sending different notifications for the bookmark reminders based on the bookmarkable type
* Import scripts compatibility
* BookmarkReminderNotificationHandler compatibility
This PR also refactors the `register_bookmarkable` API so it accepts a class descended from a `BaseBookmarkable` class instead. This was done because we kept having to add more and more lambdas/properties inline and it was very messy, so a factory pattern is cleaner. The classes can be tested independently as well.
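A hedged sketch of the class-based registration (the exact `BaseBookmarkable` interface may differ from this):
```ruby
# Each bookmarkable type packages its model, serializer, and behaviour in
# one class instead of a pile of inline lambdas. Method names are illustrative.
class PostBookmarkable < BaseBookmarkable
  def self.model
    Post
  end

  def self.serializer
    PostBookmarkSerializer
  end

  def self.reminder_handler(bookmark)
    # send the post-specific reminder notification
  end
end

register_bookmarkable(PostBookmarkable)
```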
Some later PRs will address some other areas like the discourse narrative bot, advanced search, reports, and the .ics endpoint for bookmarks.
This will make future changes to the 'pull hotlinked images' system easier. This commit should not introduce any functional change.
For now, the old post_custom_field data is kept in the database. This will be dropped in a future commit.
* FEATURE: Let sites add a sitemap.xml file.
This PR adds the same features discourse-sitemap provides to core. Sitemaps are only added to the robots.txt file if the `enable_sitemap` setting is enabled and `login_required` disabled.
After merging discourse/discourse-sitemap#34, this change will take priority over the sitemap plugin because it will disable itself. We're also using the same sitemaps table, so our migration won't try to create it
again using `if_not_exists: true`.
Sometimes we need to update a _lot_ of ACLs on S3 (such as when secure media
is enabled), and since it takes ~1s per upload to update the ACL, this is
best spread out over many jobs instead of having to do the whole thing serially.
In future, it will be better to have a job that can be run based on
a column on uploads (e.g. acl_stale) so we can track progress, similar
to how we can set the baked_version to nil to rebake posts.
raw_html posts (i.e. those which are pulled as part of our comments integration) don't go through our markdown pipeline, so `upload://` URLs are not supported. Running pull_hotlinked_images will break any images in the post.
In future we may add support for pulling hotlinked images in these posts. But for now, disabling it will stop it breaking images.
This commit introduces a new use_polymorphic_bookmarks site setting
that is default false and hidden, that will be used to help continuous
development of polymorphic bookmarks. This setting **should not** be
enabled anywhere in production yet, it is purely for local development.
This commit uses the setting to enable create/update/delete actions
for polymorphic bookmarks on the server and client side. The bookmark
interactions on topics/posts are all usable. Listing, searching,
sending bookmark reminders, and other edge cases will be handled
in subsequent PRs.
Comprehensive UI tests will be added in the final PR -- we already
have them for regular bookmarks, so it will just be a matter of
changing them to be for polymorphic bookmarks.
The meaning of reminder_at and reminder_last_sent_at changed after
commit 6d422a8033. A bookmark reminder
will fire only if reminder_last_sent_at is null, but before that it
fired every time reminder_at was set. This is no longer true because
sometimes reminder_at continues to exist even after a reminder fired.
The user can select what happens with a bookmark after it expires. A new
option allows the bookmark's reminder to be kept even after it has expired.
After a bookmark's reminder notification is created, the reminder date
will be highlighted in red until the user resets the reminder date.
User can do that using the new Clear Reminder button from the dropdown.
This allows text editors to use correct syntax coloring for the heredoc sections.
Heredoc tag names we use:
languages: SQL, JS, RUBY, LUA, HTML, CSS, SCSS, SH, HBS, XML, YAML/YML, MF, ICS
other: MD, TEXT/TXT, RAW, EMAIL
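For example, a SQL heredoc tagged for highlighting:
```ruby
# Editors that recognize the SQL tag will highlight the heredoc body as SQL:
query = <<~SQL
  SELECT id FROM users WHERE admin
SQL
```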
We validate the *format* of email addresses in many places with a match against
a regex, often with very slightly different syntax.
Adding a separate EmailAddressValidator simplifies the code in a few spots and
feels cleaner.
Deprecated the old location in case someone is using it in a plugin.
No functionality change is in this commit.
Note: the regex used at the moment does not support using address literals, e.g.:
* localpart@[192.168.0.1]
* localpart@[2001:db8::1]
This commit introduces two new APIs for handling unused uploads: one
can be used to exclude uploads in bulk when the data model allows,
and the other excludes uploads one by one.
Whenever we got a bounced email in the Email::Receiver we
previously would just set bounced: true on the EmailLog and
discard the status/diagnostic code. This commit changes this
flow to store the bounce error code (defined in the RFC at
https://www.iana.org/assignments/smtp-enhanced-status-codes/smtp-enhanced-status-codes.xhtml)
not just in the Email::Receiver, but also via webhook events
from other mail services and from SNS.
This commit does not surface the bounce error in the UI,
we can do that later if necessary.
This commit introduces our own handling and warning for Sidekiq's new 'non-json-serializable' warning. This decouples us from Sidekiq's own deprecation cycle, and allows us to use our own deprecation system. It also means that the dump/parse happens in test mode, which will help us to catch occurrences before they reach production.
Job arguments go via JSON, and so symbols will appear as strings in the Job's `#execute` method. The latest version of Sidekiq has started warning about this to reduce developer confusion.
The latest version of Sidekiq introduced a warning when jobs are queued with arguments which 'do not stringify to JSON safely'. In the vast majority of cases, this is because a hash is passed with symbols as keys. When those args are passed to the job, the keys will be stringified.
Our job wrapper already takes care of this issue by calling '.with_indifferent_access' on the args before passing them to `#execute`, so we don't need to change anything about our use. All we need to do is satisfy Sidekiq's warning system by 'stringifying' all the keys before enqueuing the job.
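A sketch of the idea, assuming ActiveSupport's `deep_stringify_keys` is available (Discourse's real enqueue wrapper differs in detail):
```ruby
# Stringify every key up front so Sidekiq's JSON-safety check stays quiet;
# on the other side, #execute still gets indifferent access via the wrapper.
def enqueue(job_class, opts = {})
  Sidekiq::Client.enqueue(job_class, opts.deep_stringify_keys)
end
```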
* File.exists? is deprecated and removed in Ruby 3.2 in favor of
File.exist?
* Dir.exists? is deprecated and removed in Ruby 3.2 in favor of
Dir.exist?
This commit introduces scheduled problem checks for the admin dashboard, which are long running or otherwise cumbersome problem checks that will be run every 10 minutes rather than every time the dashboard is loaded. If these scheduled checks add a problem, the problem will remain until it is cleared or until the scheduled job runs again.
An example of a check that should be scheduled is validating credentials against an external provider.
This commit also introduces the concept of a `priority` to the problems generated by `AdminDashboardData` and the scheduled checks. This is `low` by default, and can be set to `high`, but this commit does not change any part of the UI with this information, only adds a CSS class.
I will be making a follow up PR to check group SMTP credentials.
We send the reminder using the GroupMessage class, which supports removing previous messages. We can't match them by raw because they could mention different moderators. Also, I had to change the subject to remove dynamically generated values, which is necessary for finding them.
This commit introduces a new site setting "google_oauth2_hd_groups". If enabled, group information will be fetched from Google during authentication, and stored in the Discourse database. These 'associated groups' can be connected to a Discourse group via the "Membership" tab of the group preferences UI.
The majority of the implementation is generic, so we will be able to add support to more authentication methods in the near future.
https://meta.discourse.org/t/managing-group-membership-via-authentication/175950
The old OnceOff job could perform pretty slowly on sites with millions of emails.
The new implementation operates in batches in a migration, minimizing locking.
This commit adds token_hash and scopes columns to email_tokens table.
token_hash is a replacement for the token column to avoid storing email
tokens in plaintext as it can pose a security risk. The new scope column
ensures that email tokens cannot be used to perform a different action
than the one intended.
To sum up, this commit:
* Adds token_hash and scope to email_tokens
* Reuses code that schedules critical_user_email
* Refactors EmailToken.confirm and EmailToken.atomic_confirm methods
* Periodically cleans old, unconfirmed or expired email tokens
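A minimal sketch of the hashing scheme (the digest choice and scope representation are illustrative):
```ruby
# Only the digest is persisted; the raw token goes into the email, and a
# lookup hashes the presented token before comparing.
raw_token  = SecureRandom.hex(16)
token_hash = Digest::SHA256.hexdigest(raw_token)

EmailToken.create!(user: user, token_hash: token_hash, scope: "password_reset")
```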
When this setting is turned on, it will check that normalized emails
are unique. Normalized emails are emails without any dots or plus
aliases; for example, `john.doe+news@example.com` normalizes to `johndoe@example.com`.
This setting can be used to block use of aliases of the same email
address.
Sometimes, a user may have a malformed email such as
`test@test.com<mailto:test@test.com` as their email address,
and as a topic participant they will be included as a CC email
when sending a GroupSmtpEmail. This causes the CC parsing to
fail and further down the line in Email::Sender the code
to check the CC addresses expects an array but gets a string
instead because of the parse failure.
Instead, we can just check if the CC addresses are valid
and drop them if they are not in the GroupSmtpEmail job.
An upstream validation bug in the aws-sdk-sns library could enable RCE under certain circumstances. This commit updates the upstream gem, and adds additional validation to provide defense-in-depth.
This reverts commit f5cf647e57.
The gem breaks usage of Rails URL helpers when used outside views and
controllers, for example in
88ecb83382/app/models/upload.rb (L239-L242)
the `upload_short_path` method call fails with an undefined method
exception when this gem is enabled.
The lazy route initialization cuts down the boot time of Rails.
On my local system it cuts out 200ms of boot time taking me from 3.2 to 3 seconds.
This is not a radically enormous amount of time, but paper cuts add up, and a faster boot in dev will make everyone happy.
TBD if we want to also include this in production.
Gem is heavily maintained by @amatsuda, last commit 3 days ago.
We don't actually use the reminder_type for bookmarks anywhere;
we are just storing it. It has no bearing on the UI. It used
to be relevant with the at_desktop bookmark reminders (see
fa572d3a7a)
This commit marks the column as readonly, ignores it, and removes
the index, and it will be dropped in a later PR. Some plugins
are relying on reminder_type partially so some stubs have been
left in place to avoid errors.
We don't need no stinkin' denormalization! This commit ignores
the topic_id column on bookmarks, to be deleted at a later date.
We don't really need this column and it's better to rely on the
post.topic_id as the canonical topic_id for bookmarks, then we
don't need to remember to update both columns if the bookmarked
post moves to another topic.
Discourse regularly sends messages to admins when potential problems persist. Most of the time these messages have exactly the same content. In that case, when there are no replies, the old message should be trashed before a new one is created.
PresenceChannel aims to be a generic system for allowing the server, and end-users, to track the number and identity of users performing a specific task on the site. For example, it might be used to track who is currently 'replying' to a specific topic, editing a specific wiki post, etc.
A few key pieces of information about the system:
- PresenceChannels are identified by a name of the format `/prefix/blah`, where `prefix` has been configured by some core/plugin implementation, and `blah` can be any string the implementation wants to use.
- Presence is a boolean thing - each user is either present, or not present. If a user has multiple clients 'present' in a channel, they will be deduplicated so that the user is only counted once
- Developers can configure the existence and configuration of channels 'just in time' using a callback. The result of this is cached for 2 minutes.
- Configuration of a channel can specify permissions in a similar way to MessageBus (public boolean, a list of allowed_user_ids, and a list of allowed_group_ids). A channel can also be placed in 'count_only' mode, where the identity of present users is not revealed to end-users.
- The backend implementation uses redis lua scripts, and is designed to scale well. In the future, hard limits may be introduced on the maximum number of users that can be present in a channel.
- Clients can enter/leave at will. If a client has not marked itself 'present' in the last 60 seconds, they will automatically 'leave' the channel. The JS implementation takes care of this regular check-in.
- On the client-side, PresenceChannel instances can be fetched from the `presence` ember service. Each PresenceChannel can be entered/left/subscribed/unsubscribed, and the service will automatically deduplicate information before interacting with the server.
- When a client joins a PresenceChannel, the JS implementation will automatically make a GET request for the current channel state. To avoid this, the channel state can be serialized into one of your existing endpoints, and then passed to the `subscribe` method on the channel.
- The PresenceChannel JS object is an ember object. The `users` and `count` property can be used directly in ember templates, and in computed properties.
- It is important to make sure that you `unsubscribe()` and `leave()` any PresenceChannel objects after use
An example implementation may look something like this. On the server:
```ruby
register_presence_channel_prefix("site") do |channel|
next nil unless channel == "/site/online"
PresenceChannel::Config.new(public: true)
end
```
And on the client, a component could be implemented like this:
```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";
export default Component.extend({
presence: service(),
init() {
this._super(...arguments);
this.set("presenceChannel", this.presence.getChannel("/site/online"));
},
didInsertElement() {
this.presenceChannel.enter();
this.presenceChannel.subscribe();
},
willDestroyElement() {
this.presenceChannel.leave();
this.presenceChannel.unsubscribe();
},
});
```
With this template:
```handlebars
Online: {{presenceChannel.count}}
<ul>
{{#each presenceChannel.users as |user|}}
<li>{{avatar user imageSize="tiny"}} {{user.username}}</li>
{{/each}}
</ul>
```
This bug was introduced by f66007ec83.
In PostJobsEnqueuer we previously did not fire the after_post_create
event and after_topic_create event for private message topics. This was
changed in the above commit in order to publish message bus messages
for topic tracking state updates. Unfortunately this caused the
NotifyMailingListSubscribers job to be enqueued for all posts including
private messages, and admins and the users involved in the PMs got
emailed the contents of the PMs if they had mailing list mode enabled.
Luckily the impact of this was mitigated by a Guardian#can_see? check
for each mailing list mode user in the NotifyMailingListSubscribers job.
We never want to notify mailing list mode subscribers for private messages
so an early return has been added there, plus the logic in PostJobsEnqueuer
has been fixed, and tests have been added to that class where there were
none before.
There are certain design decisions that were made in this commit.
Private messages implement their own version of topic tracking state because there are significant differences between regular and private_message topics. Regular topics have to track categories and tags while private messages do not. It is much easier to design the new topic tracking state if we maintain two different classes, instead of trying to mash these two worlds together.
One MessageBus channel per user and one MessageBus channel per group. This allows each user and each group to have their own channel backlog instead of having one global channel which requires the client to filter away unrelated messages.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file direct to S3 using the presigned URL. Once the upload is complete we go to the next stage.
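A hedged sketch of issuing such a presigned PUT with aws-sdk-s3 (the bucket name and key layout are illustrative, not the exact Discourse code):
```ruby
require "aws-sdk-s3"
require "securerandom"

# Sign a short-lived PUT for a key under temp/; the browser uploads the
# file body directly to this URL, bypassing the Rails app entirely.
presigner = Aws::S3::Presigner.new
url = presigner.presigned_url(
  :put_object,
  bucket: "my-uploads",
  key: "temp/#{SecureRandom.hex(8)}/avatar.png",
  expires_in: 600
)
```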
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100mb defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download and generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3 then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby. The browser SHA1 checksum is stored on the object in S3 with metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally we return the serialized upload record back to the client
There are several errors that could happen that are handled by `UploadsController` as well.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
* FIX: Clear stale status of reloaded reviewables
Navigating away from and back to the reviewables reloaded Reviewable
records, but did not clear the "stale" attribute.
* FEATURE: Show user who last acted on reviewable
When a user acts on a reviewable, all other clients are notified and a
generic "reviewable was resolved by someone" notice is shown instead of
the buttons. There is no need to keep the username of the acting user
secret.
Currently when bulk-awarding a badge that can be granted multiple times, users in the CSV file are granted the badge once no matter how many times they're listed in the file and only if they don't have the badge already.
This PR adds a new option to the Badge Bulk Award feature so that it's possible to grant users a badge even if they already have the badge and as many times as they appear in the CSV file.
This PR fixes a couple of issues related to group SMTP:
1. When running the group SMTP job, we were exiting early if the email was for the OP because of an IMAP race condition. However this causes issues when replying as a new topic for an existing SMTP topic, as the recipient does not get the OP email which can cause threading problems.
2. When sending emails for a new topic spun out like the issue in 1., we are not maintaining the original subject/topic title because that is based on the incoming email record, which we were not doing because the group SMTP email was never sent because of issue 1.
There was an issue with the TopicTimerEnqueuer where any timer
that failed to enqueue_typed_job with an error would prevent
all other pending timers after the one that errored from running.
To mitigate this we just capture the error and log it (so we can
still fix it if needed for bug crushing) and proceed with the
rest of the timer enqueues.
The commit https://github.com/discourse/discourse/pull/13544 highlighted
this issue originally in hosted sites.
Use the `sidekiq_retry_in` code from Jobs::UserEmail in group SMTP. Also we don't need to keep `seconds_to_delay` -- sidekiq uses the default delay calculation if you return 0 or nil from the block. See 3330df0ee3/lib/sidekiq/job_retry.rb (L216-L234) for sidekiq default retry delay logic.
I experimented with extracting this into a concern or a module, but `sidekiq_retry_in` is quite magic and it would not allow me to abstract it away into a module that calls some method specifically on the child job class.
I would love to write tests for this, but it does not seem possible (not sure if it's because of our test
setup) to write tests that exercise sidekiq's retry capability, and I am not sure if we should anyway. The initial addition
to UserEmail did not test this functionality:
d224966a0e
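For reference, the hook looks like this on a plain Sidekiq worker (the class name and error handling here are illustrative):
```ruby
require "sidekiq"
require "net/http"

class GroupSmtpEmailJob
  include Sidekiq::Worker

  # Return a delay in seconds, or nil/0 to fall back to Sidekiq's
  # default exponential backoff.
  sidekiq_retry_in do |count, exception|
    case exception
    when Net::ReadTimeout then 60 * (count + 1) # linear backoff for timeouts
    end
  end
end
```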
Skip group SMTP email (and add log) if:
* topic is deleted
* post is deleted
* smtp has been disabled for the group
Skip without log if:
* enable_smtp site setting is false
* disable_emails site setting is yes
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
This PR backtracks a fair bit on this one https://github.com/discourse/discourse/pull/13220/files.
Instead of sending the group SMTP email for each user via `UserNotifications`, we are changing to send only one email with the existing `Jobs::GroupSmtpEmail` job and `GroupSmtpMailer`. We are changing this job and mailer along with `PostAlerter` to make the first topic allowed user the `to_address` for the email and any other `topic_allowed_users` to be the CC address on the email. This is to cut down on emails sent via SMTP, which is subject to daily limits from providers such as Gmail. We log these details in the `EmailLog` table now.
In addition to this, we have changed `PostAlerter` to no longer rely on the email addresses from incoming email records when sending the `GroupSmtpEmail` job. This was unreliable as a user's email could have changed in the meantime. Also, it was a little overcomplicated to use the incoming email records -- it is far simpler to just use topic allowed users.
This also adds a fix to include cc_addresses in the EmailLog.addressed_to_user scope.
* FEATURE: Staff can receive pending user reminders more frequently.
We now express the "pending_users_reminder_delay" in minutes instead of hours so staff can have finer control over the delay.
We need to keep in mind that the reminders could still take up to 20 minutes, even when using a lower value. We send them from a scheduled job.
* Migrate to a new site setting for the reminders delay
At the moment this only implements the server side, as my need is for automation purposes. However, it should probably be added to the UI too, as it's unexpected to have pinned_until and no bannered_until.
Anonymizing a user changed their email address, destroyed all
associated InvitedUser records, but did not destroy the invites
associated with the user's email.
When we call Bookmark.cleanup! we want to make sure that
topic_user.bookmarked is updated for topics linked to the
bookmarks that were deleted. Also when PostDestroyer calls
destroy and recover. We have a job for this already --
SyncTopicUserBookmarked -- so we just utilize that.
Subclasses must call #delete_user_actions inside build_actions to support user deletion. The method adds a delete user bundle, which has a delete and a delete + block option. Every subclass is responsible for implementing these actions.
Notifying about a tag change sometimes resulted in loading a large
number of users in memory just to perform an exclusion. This commit
prefers to do inclusion (i.e. instead of exclude users X, do include
users in groups Y) and does it in SQL to avoid fetching unnecessary
data that is later discarded.
The previous commits removed reviewables leading to a bad user
experience. This commit updates the status, replaces actions with a
message and greys out the reviewable.
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
Email change requests are never deleted, whether or not they completed
successfully. The abandoned requests have the disadvantage of
showing up as unconfirmed emails on the user's preferences page.
Recalculating a ReviewableFlaggedPost's score after rejecting or ignoring it sets the score as 0, which means that we can't find them after reviewing. They don't surpass the minimum priority threshold and are hidden.
Additionally, we only want to use agreed flags when calculating the different priority thresholds.
* FEATURE: add support for like webhooks
Add support for like webhooks. Webhook events are only sent when the user
is a member of one of the groups in the defined webhook group filters.
This also fixes group webhook events, as before this was never used, and
the logic was not correct.
This filter hides reviewables with a score lower than the "reviewable_low_priority_threshold" setting. We only use reviewables that already met this threshold to calculate the Medium and High priority filters.
It used to check if an upload record exists, which is wrong because an
invalid upload record exists even if the upload was not created.
The other improvement is a better log message.
Previously, watched words ignored topic titles when applying auto-tagging rules.
The copy has also been improved to reflect how the system behaves:
the text now hints that we only watch the first post.
Scopes are incredibly annoying to preload; simply adding :user_emails is not
enough.
Instead of relying on scopes, simply iterate through user_emails, which is
properly preloaded.
This removes 2 * N+1 when generating user reports.
When posts are moved from one topic to another, the `topic_user.bookmarked` column for all users in the new and the old topic needs to be resynced; for example, if a user bookmarks post 12 in topic 1 and the post is then moved to topic 2, the topic_user record for topic 1 should no longer be marked bookmarked. A background job has been added to sync the column for a specified topic, or for no topic at all, which does it for all topics like the migration.
Also includes a migration that we have run in the past to fix bad data.
----
This has been addressed in other places in the past:
https://github.com/discourse/discourse/pull/10211
https://github.com/discourse/discourse/pull/10188
When bulk inviting, the uploaded CSV file may contain wrong values for
the user fields. This tries to automatically correct them by finding
the most similar option (by ignoring the case).
Admins can use bulk invites to pre-populate user fields. The imported
CSV file must have a header with "email" column (first position) and
names of the user fields (exact match).
Under the hood, the bulk invite will create staged users and populate
the user fields of those.
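For illustration, such a file might look like this (the `favorite_food` user field name is hypothetical):
```
email,favorite_food
jane@example.com,pizza
john@example.com,tacos
```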