Not when doing a site-wide search like we do in the Directory.
This solves the following spec failure:
```
1) DirectoryItemsController with data finds user by name
   Failure/Error: expect(json['directory_items'].length).to eq(1)
     expected: 1
          got: 0
     (compared using ==)
   # ./spec/requests/directory_items_controller_spec.rb:88:in `block (3 levels) in <main>'
   # ./spec/rails_helper.rb:271:in `block (2 levels) in <top (required)>'
   # ./bundle/ruby/2.7.0/gems/webmock-3.11.1/lib/webmock/rspec.rb:37:in `block (2 levels) in <top (required)>'
```
When doing a user search (e.g. when mentioning a user) we will not prioritize
users who haven't been seen in over a year.
REFACTOR the user-search specs to be more precise regarding the ordering
Lots of changes but it's mostly a refactoring.
The interesting parts that were fixed are the `load_problem_<model>_ids` methods.
They will now return records with no search data associated so they can be properly indexed for the search.
This "bad" state usually happens after a migration.
Improvements to make console access to IncomingEmail more pleasant, and to stop certain IMAP logs from landing in the DB because they just create too much noise.
This adds a new table UserNotificationSchedules which stores Monday-Friday start and end times during which each user would like to receive notifications (with a boolean `enabled` flag to turn the schedule off). There is then a background job that runs every day and creates do_not_disturb_timings for each user with an enabled notification schedule. The job schedules timings 2 days in advance. The job is designed so that it can be run at any point in time, and it will not create duplicate records.
When a user saves their notification schedule, the schedule processing service will run and schedule do_not_disturb_timings. If the user should be in DND due to their schedule, the user will immediately be put in DND (message bus publishes this state).
The UI for a user's notification schedule is in user -> preferences -> notifications. By default every day is 8am - 5pm when first enabled.
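A rough sketch of what such a table might look like (the column names and defaults here are assumptions based on the description above, not the exact Discourse schema):
```ruby
# Hypothetical migration; the real schema may differ.
class CreateUserNotificationSchedules < ActiveRecord::Migration[6.0]
  def change
    create_table :user_notification_schedules do |t|
      t.references :user, null: false
      t.boolean :enabled, null: false, default: false
      # one start/end pair per weekday, stored as minutes from midnight
      (0..6).each do |day|
        t.integer "day_#{day}_start_time", null: false
        t.integer "day_#{day}_end_time", null: false
      end
      t.timestamps
    end
  end
end
```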
This should make it easier to track down how the incoming email was created, which happens in one of four locations:
* The POP3 poller (which picks up reply-via-email replies)
* The admin email controller's #handle_mail (which is where hosted mail is sent)
* The IMAP sync tool
* The group SMTP mailer, which sends emails when replying to IMAP topics, pre-emptively creating IncomingEmail records to avoid double syncing
Moves the topic timer jobs from being scheduled ahead of time with enqueue_at to a 5-minute scheduled run, like bookmark reminders, in a new job called Jobs::EnqueueTopicTimers. Backwards compatibility is maintained by checking whether an existing topic timer job is already enqueued in sidekiq for the timer and, if it is not, running the timer inside the new job.
The functionality to close/open a topic if it is in the opposite state still remains in the after_save block of TopicTimer, with further commentary, which is used for Open/Close Temporarily.
This also removes the ensure_consistency! functionality of topic timers as it is no longer needed; the new job will always pick up the timers because they are no longer stored in a fragile sidekiq state.
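The shape of the new job, roughly (the scheduled-job boilerplate follows the usual Discourse pattern; the query and helper names inside `execute` are illustrative, not the exact implementation):
```ruby
module Jobs
  class EnqueueTopicTimers < ::Jobs::Scheduled
    every 5.minutes

    def execute(args)
      # Illustrative: run every timer that is due, unless a legacy enqueue_at
      # job for it is still sitting in sidekiq (backwards compatibility).
      TopicTimer.where("execute_at <= ?", Time.zone.now).find_each do |timer|
        run_timer(timer) unless legacy_job_enqueued?(timer)
      end
    end
  end
end
```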
This PR fixes a race condition in the IMAP notification code. In `Email::Receiver` we call `NewPostManager` to create the post, enqueue jobs, and send alerts via `PostAlerter`. However, if the post alerter reaches the `notify_pm_users` and `group_notifying_via_smtp` methods _before_ the incoming email is updated with the post and topic, we unnecessarily send a notification to the person who just posted. The result of this is that the IMAP syncer re-imports the email sent to the user about their own post, which then shows up again in the group inbox.
To fix this, we skip the jobs enqueued by `NewPostManager` and only enqueue them with `PostJobsEnqueuer` manually _after_ the incoming email record has been updated with the post and topic.
Other improvements:
* Moved code to calculate email addresses from `IncomingEmail` records into the topic, with a group passed in, for easier testing and debugging. It is not the responsibility of the post alerter to figure this stuff out.
* Add shortcut methods on `IncomingEmail` to split or provide an empty array for to and cc addresses to avoid repetition.
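A minimal sketch of the kind of shortcut methods the last bullet describes, assuming the addresses are stored as a `;`-joined string:
```ruby
class IncomingEmail < ActiveRecord::Base
  def to_addresses_split
    to_addresses&.split(";") || []
  end

  def cc_addresses_split
    cc_addresses&.split(";") || []
  end
end
```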
Feature for `Must Approve Users` setup. When a user is rejected, a staff member can optionally set a reason for audit purposes. In addition, feedback email can be sent to the user.
Meta: https://meta.discourse.org/t/account-rejection-email/103112/8
Splits the `ToggleTopicClosed` job into two distinct `OpenTopic` and `CloseTopic` jobs to make the code clearer. The old job cannot be deleted yet because of outstanding sidekiq schedules, so a todo has been added to do so later this year.
Also replaced mentions of `topic_status_update` with `topic_timer` in some files, because the `topic_status_update` model is obsolete and replaced by topic timer.
Added some shortcut methods for checking if a topic is open/whether a user can change an open topic.
Fixes a rare race condition causing the `Imap::Sync` class to create an incoming email and associated post/topic, which then kicks off the PostAlerter to notify others in the PM about a reply in the topic. But the post is the OP, so this is not necessary (the person emailing the IMAP inbox already knows about the OP). Basically, we should never be sending the group SMTP email for the first post in a topic.
Also in this PR:
* Custom attribute accessors for the to/from/cc addresses on `IncomingEmail`, to parse them from an array to a joined string so the logic for this is only in one place (a sketch follows this list).
* Store extra detail against the `IncomingEmail` created in `GroupSmtpMailer`
* Regex-test the Mail `Reply-To` header as a string instead of a `Field`, which fixes `warning: deprecated Object#=~ is called on Mail::Field; it always returns nil`
* Add DEBUG_IMAP to log all IMAP logs as warnings for easier debugging
* Changed the Rails logging to `ImapSyncLog` in the `GroupSmtpMailer`
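For the first bullet above, a custom accessor along these lines would do it (a sketch, assuming the addresses live in a `;`-joined string column):
```ruby
# Accepts either an array of addresses or an already-joined string.
def to_addresses=(addresses)
  addresses = addresses.map(&:downcase).join(";") if addresses.is_a?(Array)
  super(addresses)
end
```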
- Only initialize the S3Helper when needed
- Skip initializing the S3Helper for S3Store#cdn_url
- Allow cook_url to be passed a `local` hint to skip unnecessary checks
These 2 indexes optimise performance on profile pages.
The summary page displays:
1. A list of "Top Link" - links sorted by number of clicks posted by user
2. A list of "Top Replies" - replies made by a user that go the most hearts
These two areas could devolve into full index or table scans, new indexes are there to avoid this cost on large dbs
One minor downside is that storage requirements go a tiny bit up to maintain the new indexes
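For illustration only, the two indexes could look something like this in a migration (table and column names are assumptions based on the description above, not the actual definitions):
```ruby
class AddUserSummaryIndexes < ActiveRecord::Migration[6.0]
  def change
    # "Top Links": links posted by a user, ordered by click count
    add_index :topic_links, [:user_id, :clicks]
    # "Top Replies": a user's posts, ordered by like count
    add_index :posts, [:user_id, :like_count]
  end
end
```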
* DEV: Remove with_deleted workarounds for old Rails version
These workarounds using private APIs are no longer required in the latest version of Rails. The referenced issue (https://github.com/rails/rails/issues/4306) was closed in 2013. The acts_as_paranoid workaround which this was based on was removed for rails > 5.
Switching to using a scope also allows us to use it within a `belongs_to` relation (e.g. in the Poll model). This avoids issues which can be caused by unscoping all `where` clauses.
Predicates are not necessarily strings, so calling `.join(" AND ")` can sometimes cause weird errors. If we use `WhereClause#ast`, and then `.to_sql` we achieve the same thing with fully public APIs, and it will work successfully for all predicates.
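A sketch of the approach (the example relation and output are illustrative):
```ruby
scope = Post.where("deleted_at IS NOT NULL").where("user_id = 1")

# Instead of joining the raw predicates with " AND " (they are not always
# strings), build the SQL from the where clause's AST:
scope.where_clause.ast.to_sql
# => roughly "(deleted_at IS NOT NULL) AND (user_id = 1)"
```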
If we clear the in-process cache first, it might get re-filled from the
DB before we clear the DB cache. This would be more likely on high-traffic
sites.
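In other words, the ordering becomes (method names are illustrative):
```ruby
def clear_cache!
  clear_db_cache!          # clear the shared, DB-backed cache first
  clear_in_process_cache!  # then the local cache, so it cannot be re-filled
                           # from stale DB data in between
end
```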
* FIX: 'false' value was treated as a truthy value
For example, latest.json?no_subcategories=false used to set no_subcategories to the string value 'false', which is truthy rather than false.
* DEV: Remove dead code
* FIX: Redirect to /none under the right conditions
These conditions are:
- neither /all nor /none is present
- only for default filter
* FIX: Build correct topic list filter
/none was never added to the topic list filter
* FIX: Do not show count for subcategories if 'none' category
* FIX: preload_key must contain /none if no_subcategories
Because system badges ship with every instance of Discourse, we've opted to define the name, description, and long description in our locale files to promote translation into other languages. When an admin visited the overview page of a system badge in their admin panel, they were met with disabled inputs for these text properties. The problem is that we failed to educate the admin that the text needs to be managed via the site text customization settings.
This change adds a small "Customize Text" link under these inputs that takes the admin to the specific site text customization where they can make the desired changes.
The notification is created by a job. If the job is evaluated before the changes are committed to the database, the notification will have an incorrect URL.
Therefore, the job should be enqueued in the enqueue_jobs method, which is triggered after the transaction:
```ruby
Topic.transaction do
  move_posts_to topic
end
add_allowed_users(participants) if participants.present? && @move_to_pm
enqueue_jobs(topic)
```
I improved the specs a little bit to ensure that the destination topic_id is set. However, those tests pass even without the code improvements; I couldn't find an easy way to "delay" the database transaction.
Meta: https://meta.discourse.org/t/bug-with-notifications-for-moved-posts/168937
When the invite was being redeemed and the ReviewableUser record status
for the invited user was not pending, an error was raised.
This commit makes sure that we only look for a ReviewableUser
record with status pending and update it to approved.
On category create, an exception will be thrown in this job because the
save transaction hasn't completed yet and the job cannot find the
category id. To prevent this we can use the Rails 6 `after_save_commit`
hook, which fires after the category save transaction has finished for
both update and create actions.
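A minimal sketch of the hook (`after_save_commit` is the Rails 6 API referred to above; the callback name is illustrative):
```ruby
class Category < ActiveRecord::Base
  # Fires only after the surrounding transaction commits, for both create and
  # update, so the enqueued job can always find the category by id.
  after_save_commit :enqueue_index_job # callback name is illustrative
end
```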
Previously thumbnails were only preloaded for queries using `TopicQuery#default_results`, which meant that requests for PM topic lists would lead to N+1 queries.
This commit moves the preloading into TopicList#load_topics, along with other similar preloads (e.g. plugin custom fields)
The direct call to `ActiveRecord::Associations::Preloader#preload` is necessary because `@topics` can be an array, not an `ActiveRecord::Relation`
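The direct call looks roughly like this under the Rails 6 preloader API (the association name is an example):
```ruby
# Works even when `@topics` is a plain Array rather than an
# ActiveRecord::Relation.
ActiveRecord::Associations::Preloader.new.preload(@topics, :topic_thumbnails)
```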
Themes marked for auto update will be automatically updated when
Discourse is updated. This is triggered by discourse_docker or
docker_manager running Rake task 'themes:update'.
* FIX: Store Reviewable's force_review as a boolean.
Using the `force_review` flag raises the score to hit the minimum visibility threshold. This strategy turned out to be ineffective on sites with a high number of flags, where these values could rapidly fluctuate.
This change adds a `force_review` column on the reviewables table and modifies the `Reviewable#list_for` method to show these items when passing the `status: :pending` option, even if the score is not high enough. ReviewableQueuedPosts and ReviewableUsers are always created using this option.
Rapid concurrent SSO attempts are something that happens quite frequently
in the wild at large enough scale.
When this happens, operations such as adding a user to a group can
fire concurrently, causing the user to be added to the same group twice and
erroring out.
To avoid all concurrency issues here we protect with a coarse distributed
mutex. This heavily mitigates the risk around concurrent group additions and
concurrent updates to user related records.
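The protection is roughly the following (`DistributedMutex` is Discourse's Redis-backed mutex; the key and block body are illustrative):
```ruby
DistributedMutex.synchronize("sso_lookup_or_create_user_#{external_id}") do
  # look up or create the user, then apply group additions, custom fields, etc.
  lookup_or_create_user(ip_address)
end
```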
This commit adds an additional find_user_by_email hook to ManagedAuthenticator so that GitHub login can continue to support secondary email addresses
The github_user_infos table will be dropped in a follow-up commit.
This is the last core authenticator to be migrated to ManagedAuthenticator 🎉
PostDestroyer should accept an option to permanently destroy a post from the database. In addition, when the first post is destroyed it destroys the whole topic.
Currently, that feature is limited to private messages and the creator of the post. It will be used by discourse-encrypt to explode encrypted private messages.
Ensure we do not respect max_tags_in_filter_list when showing the list of PM tags.
This filter is used on a full page view and there is no point limiting it to a small number.
The expectation is that PM tags are very rarely used, so a hard limit of 1000 should be safe for now.
It used to simply say "not allowed" without giving any hint what the
problem could be. This commit refactors the code and tries to improve
readability.
- IgnoredUser records should all now have an expiring_at value. This commit enforces that in the DB, and fixes any corrupt rows
- Changes to the ignored user list are now handled by the `/u/{username}/notification_level` endpoint. This allows setting expiration dates on the ignore. This commit removes the old logic for saving a list of usernames in the user preferences.
- Many specs were calling `IgnoredUser.create`. This commit changes them to use `Fabricate(:ignored_user)` for consistency
A site owner attempting to use both the email_subject site setting and translation overrides for normal post notification email subjects would find themselves frustrated at the lack of template argument parity.
Make all the variables available for translation overrides by adding the subject variables to the custom interpolation keys list and applying them.
Reported at https://meta.discourse.org/t/customize-subject-format-for-standard-emails/20801/47?u=riking
This commit adds a site setting `auto_close_topics_create_linked_topic`
which when enabled works in conjunction with `auto_close_topics_post_count`
setting and creates a new linked topic for the topic just closed.
The auto-created new topic contains links to all the previous topics,
and the topic titles are appended with `(Part {n})`.
The setting is enabled by default.
There is a site setting reply_by_email_enabled which when combined with reply_by_email_address creates a Reply-To header in emails in the format "test+%{reply_key}@test.com" along with a PostReplyKey record, so when replying Discourse knows where to route the reply.
However this conflicts with the IMAP implementation. Since we are sending the email for a group via SMTP and from their actual email account, we want all replies to go to that email account as well, so the IMAP sync job can pick them up and put them in the correct place. So if the group has IMAP enabled and configured, then the Reply-To header will be correct.
This PR also makes a further fix to 64b0b50 by using the correct recipient user for the PostReplyKey record. If the post user is used we encounter this error:
```ruby
if destination.user_id != user.id && !forwarded_reply_key?(destination, user)
  raise ReplyUserNotMatchingError, "post_reply_key.user_id => #{destination.user_id.inspect}, user.id => #{user.id.inspect}"
end
```
This is because the user above is found from the from_address, but the destination, which is the PostReplyKey, is created from post.user; these can be different people.
Resending an invite moved the expiration date into the future, but did not
revalidate the invite. For example, if an invite was sent to an email,
invalidated, and then resent, it would still be left invalidated.
* FEATURE - Add SiteSettings to control JPEG image quality
`recompress_original_jpg_quality` - the maximum quality of a newly
uploaded file.
`image_preview_jpg_quality` - the maximum quality of OptimizedImages
This consolidates logic used to match routes in ApiKey, UserApiKey and DefaultCurrentUserProvider. This reduces duplicated logic, and will allow UserApiKeysScope to easily re-use the parameter matching logic from ApiKeyScope
Adds a new slow mode for topics that are heating up. Users will have to wait for a period of time before being able to post again.
We store this interval inside the topics table and track the last time a user posted using the last_posted_at datetime in the TopicUser relation.
The dependency on gifsicle and the allow_animated_avatars and allow_animated_thumbnails
site settings were all removed. Animated GIF images are still allowed, but
the generated optimized images are no longer animated for those (they were
used for avatars and thumbnails).
The added 'animated' column is populated by extracting information using FastImage.
This field is used to selectively reoptimize old animations. This process
happens in the background.
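The detection is roughly the following (`FastImage.animated?` is the FastImage helper; how the result is stored is illustrative):
```ruby
require "fastimage"

# FastImage.animated? returns true/false (or nil if it cannot tell);
# the result is what gets stored in the new 'animated' column.
animated = FastImage.animated?("/path/to/image.gif")
```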
Now that we have support for user-selectable color schemes, it makes sense
to simplify seeding and theme updates in the wizard.
We now:
- seed only one theme, named "Default" (previously "Light")
- seed a user-selectable Dark color scheme
- rename the "Themes" wizard step to "Colors"
- update the default theme's color scheme if a default is set
(a new theme is created if there is no default)
When posts or topics are deleted we don't want to immediately delete associated bookmarks, so we have a grace period to recover them and their reminders if the post or topic is un-deleted. This PR adds a task to the Weekly scheduled job to go and delete bookmarks attached to posts or topics deleted > 3 days ago.
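A loose sketch of that cleanup, assuming soft-deleted posts and topics keep a `deleted_at` timestamp (the real task's scopes and conditions may differ):
```ruby
grace_period = 3.days.ago

Bookmark.where(<<~SQL, grace_period, grace_period).delete_all
  post_id IN (SELECT id FROM posts WHERE deleted_at < ?)
  OR topic_id IN (SELECT id FROM topics WHERE deleted_at < ?)
SQL
```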
DEV: Replace instances of Discourse.base_uri with Discourse.base_path
This is clearer because the base_uri is actually just a path prefix. This continues the work started in 555f467.
Previously, Jobs::EnqueueDigestEmails would enqueue a digest job for every user, even if there are no topics to send. The digest job would exit, no email would send, and last_emailed_at would not change. 30 minutes later, Jobs::EnqueueDigestEmails would run again and re-enqueue jobs for the same users.
120fa8ad introduced a temporary mitigation for this issue, by randomly selecting a subset of those users each time.
This commit adds a new `digest_attempted_at` column to the `user_stats` table. This column is updated every time a digest job completes for a user. Using this, we can avoid scheduling digest jobs for the same user every 30 minutes. This also removes the random user selection in 120fa8ad, and instead prioritizes users who had digests attempted the longest time ago.
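Prioritization can then be expressed in the query, roughly like this (the interval and conditions are illustrative, not the exact ones used):
```ruby
# Only users whose digest has not been attempted recently, oldest attempts first.
UserStat
  .where("digest_attempted_at IS NULL OR digest_attempted_at < ?", 24.hours.ago)
  .order("digest_attempted_at ASC NULLS FIRST")
```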
See https://meta.discourse.org/t/changing-a-users-email/164512 for additional context.
Previously, when an admin user changed a user's email, we assumed that they would need a password reset too because they likely did not have access to their account. This proved to be incorrect, as there are other reasons a user might need an admin to change their email. This PR:
* Changes the admin change email for user flow so the user is sent an email to confirm the change
* We now record who requested the email change
* If the request was made by an admin and not the user themselves, we note this in the email sent to the user
* We also make the confirm change email route open to anonymous users, so it can be clicked by the user even if they do not have access to their account. If there is a logged in user we make sure the confirmation matches the current user.
Previously, moving a category into another one that already had a child category with that name (but with a non-conflicting slug) would cause a 500 error:
```
# PG::UniqueViolation:
# ERROR: duplicate key value violates unique constraint "unique_index_categories_on_name"
# DETAIL: Key (COALESCE(parent_category_id, '-1'::integer), name)=(5662, Amazing Category 0) already exists.
```
It now returns 422, and shows the same message as when you're renaming a category: "Category Name has already been taken".
Upload.secure_media_url? raised an exception when the URL was invalid,
which was an issue in some situations where secure media URLs must be
removed.
For example, sending digests used PrettyText.strip_secure_media,
which used Upload.secure_media_url? to replace secure media with
placeholders. If the URL was invalid, then an exception would be raised
and left unhandled.
Now, in UrlHelper.rails_route_from_url, we instead return nil if there is something wrong with the URL.
Co-authored-by: Bianca Nenciu <nenciu.bianca@gmail.com>
`self.class` here evaluates to `Class` and then we're calling `define_method` on it which means all classes will have those methods defined in them. For example:
```
~/discourse(master*) » rails c
Loading development environment (Rails 6.0.3.3)
[1] pry(main)> Integer.methods
=> [:sqrt,
:yaml_tag,
:email_domains_blacklist=,
:email_domains_whitelist=,
:unicode_username_character_whitelist=,
:user_website_domains_whitelist=,
:whitelisted_link_domains=,
:email_domains_blacklist,
:email_domains_whitelist,
:unicode_username_character_whitelist,
...
...
```
The fix is to use `self.define_singleton_method` instead.
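A simplified reconstruction of the bug and the fix (the class and setting name here stand in for the real code):
```ruby
class SiteSetting
  def self.setup_setting(name)
    # BAD: inside a class method `self` is SiteSetting, so `self.class` is
    # Class, and this would define the method on every class in the system:
    #   self.class.define_method(name) { ... }

    # GOOD: defines the method only on SiteSetting itself.
    define_singleton_method(name) do
      "value of #{name}"
    end
  end
end

SiteSetting.setup_setting(:email_domains_blacklist)
SiteSetting.email_domains_blacklist           # => "value of email_domains_blacklist"
Integer.respond_to?(:email_domains_blacklist) # => false, as it should be
```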
Use the names as provided by discourse-fonts and remove the
translated strings.
It also ensures that the selected font is present, in case a font
is removed in the future.
This is a little bit of refactoring. Core Discourse should have a default promotion message for TL2.
In addition, when the Discobot plugin is enabled, the user is invited to advanced training.
This PR removes the user reminder topic timers, because that system has been supplanted and improved by bookmark reminders. The option is removed from the UI and all existing user reminder topic timers are migrated to bookmark reminders.
Migration does this (a rough sketch follows the list):
* Get all topic_timers with status_type 5 (reminders)
* Get all bookmarks where the user ID and topic ID match
* Loop through the found topic timers:
* If there is no bookmark for the OP of the topic, then we just create a bookmark with a reminder
* If there is a bookmark for the OP of the topic and it does **not** have a reminder set, then just
update it with the topic timer reminder
* If there is a bookmark for the OP of the topic with a reminder then just discard the topic timer
* Cancel all outstanding user reminder topic timers
* **Trash (not delete) all user reminder topic timers**
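A very rough sketch of that flow (model and column usage here follows the description above, not the real migration):
```ruby
TopicTimer.where(status_type: 5).find_each do |timer| # 5 = user reminder
  first_post = timer.topic.first_post
  bookmark = Bookmark.find_by(user_id: timer.user_id, post_id: first_post.id)

  if bookmark.nil?
    Bookmark.create!(user_id: timer.user_id, post_id: first_post.id, reminder_at: timer.execute_at)
  elsif bookmark.reminder_at.blank?
    bookmark.update!(reminder_at: timer.execute_at)
  end
  # if the bookmark already has a reminder, the topic timer is simply discarded

  timer.trash!
end
```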
Notes:
* For now I have left the user reminder topic timer job class in place; this is so the jobs can be cancelled in the migration. It and the specs will be deleted in the next PR.
* At a later date I will write a migration to delete all trashed user topic timers. They are not deleted here in case there are data issues and they need to be recovered.
* A future PR will change the UI of the topic timer modal to make it look more like the bookmark modal.
Currently, if a group's visibility is set to "Group owners, members" then the mods can't view those group pages. The same rule is applied for members visibility setting too.
This reverts commit 7fc7090 and fixes the failing spec tests.
Currently, if a group's visibility is set to "Group owners, members" then the mods can't view those group pages. The same rule is applied for members visibility setting too.
After restoring a backup it takes up to 48 hours for uploads stored on S3 to appear in the S3 inventory. This change prevents alerts about missing uploads by preventing the EnsureS3UploadsExistence job from running in the first 48 hours after a restore. During the restore it deletes the count of missing uploads from the PluginStore, so that an alert isn't triggered by an old number.
Admins can currently add the bookmarks discovery route link
to the homepage interface, but users can't presently select
that as their default home view. This change facilitates that,
adding the option to the existing Default Home Page dropdown on
the User Preferences Interface page.
If a user has already read all group messages, we will never update the
`first_pm_unread_at` column since the previous query will not return the
group_user. Instead, we should update `first_pm_unread_at` to the
current timestamp if the user has read everything.
Follow-up to 9b75d95fc6
It is possible for a user to exist without an email; if so, we should
not enqueue a job to download their gravatar.
This commit resolves this error that can occur:
```
Job exception: undefined method `email' for nil:NilClass
/var/www/discourse/app/models/user.rb:1204:in `email'
/var/www/discourse/app/jobs/regular/update_gravatar.rb:12:in `execute'
```
This commit also fixes the original spec, which was actually wrong. The
job was never enqueued in the original spec, so the gravatar was never
actually updated; the test was checking whether the two values were the
same, but they were both null and never updated, so of course they were
the same!
A new test has also been added to make sure the gravatar job isn't
enqueued when a user's email is missing.
Previously, in some cases the test suite could fail due to a bad entry in
redis left over from previous tests.
This ensures the correct cache is expired when needed.
It additionally improves performance of the redis check.
This commit addresses an issue where it is possible that there could
be multiple topic timer jobs running to close a topic, or a weird race
condition causing a topic that was just closed to be re-opened.
By moving the logic from the TopicTimer model into the topic timer
controller endpoint, we isolate the code used for setting an
auto-open or an auto-close timer to just that functionality, making the
topic timer background jobs safer if multiple are running.
If in the future we would like this logic back in the model, a
refactor will be needed where we actually pass in the auto-close and
auto-open action instead of mixing it with the close and open
action that is currently being passed to the controller.