Job arguments are serialized to JSON, so symbols will appear as strings in the Job's `#execute` method. The latest version of Sidekiq has started warning about this to reduce developer confusion.
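A minimal sketch of the round trip, assuming a Discourse-style job where the args hash has indifferent access; the job name and argument are illustrative:
```ruby
# Hypothetical job showing the JSON round trip; names are illustrative.
class Jobs::ExampleJob < ::Jobs::Base
  def execute(args)
    # :cleanup was passed as a symbol, but arrives as the string "cleanup"
    # after being serialized to JSON and back by the queue.
    args[:reason] # => "cleanup"
  end
end

Jobs.enqueue(:example_job, reason: :cleanup)
```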
This commit introduces scheduled problem checks for the admin dashboard: long-running or otherwise cumbersome problem checks that run every 10 minutes rather than every time the dashboard is loaded. If a scheduled check adds a problem, the problem will remain until it is cleared or until the scheduled job runs again.
An example of a check that should be scheduled is validating credentials against an external provider.
This commit also introduces the concept of a `priority` for the problems generated by `AdminDashboardData` and the scheduled checks. This is `low` by default and can be set to `high`. This commit does not change any part of the UI based on this information; it only adds a CSS class.
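A hypothetical scheduled check might register a problem like this; the `AdminDashboardData` calls shown are assumptions for illustration rather than the exact interface:
```ruby
# Hypothetical scheduled problem check; the API names below are illustrative.
problem = AdminDashboardData::Problem.new(
  "Group SMTP credentials could not be validated",
  priority: "high" # defaults to "low" when not specified
)
AdminDashboardData.add_found_scheduled_check_problem(problem)
```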
I will be making a follow-up PR to check group SMTP credentials.
We send the reminder using the GroupMessage class, which supports removing previous messages. We can't match previous messages by raw content because they could mention different moderators. I also had to change the subject to remove dynamically generated values, which is necessary for finding them.
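A hedged sketch of what sending the reminder could look like; the option names, especially the one for trashing the earlier unreplied reminder, are assumptions for illustration rather than the real GroupMessage API:
```ruby
# Hedged sketch; option names are illustrative, not the exact GroupMessage API.
GroupMessage.create(
  Group[:moderators].name,
  :pending_users_reminder,
  limit_once_per: false,  # assumed: allow the reminder to be re-sent
  remove_previous: true   # assumed: trash the previous reminder if it has no replies
)
```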
This commit introduces a new site setting "google_oauth2_hd_groups". If enabled, group information will be fetched from Google during authentication, and stored in the Discourse database. These 'associated groups' can be connected to a Discourse group via the "Membership" tab of the group preferences UI.
The majority of the implementation is generic, so we will be able to add support to more authentication methods in the near future.
https://meta.discourse.org/t/managing-group-membership-via-authentication/175950
This commit adds token_hash and scope columns to the email_tokens table.
token_hash is a replacement for the token column, to avoid storing email tokens in plaintext, which can pose a security risk. The new scope column ensures that email tokens cannot be used to perform a different action than the one intended (a small sketch of the hashing scheme follows the summary below).
To sum up, this commit:
* Adds token_hash and scope to email_tokens
* Reuses code that schedules critical_user_email
* Refactors EmailToken.confirm and EmailToken.atomic_confirm methods
* Periodically cleans old, unconfirmed or expired email tokens
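A minimal sketch of the hashing scheme described above, assuming a SHA-256 digest; the column usage mirrors the description rather than the exact EmailToken code:
```ruby
require "securerandom"
require "digest"

# Generate a random token for the email link, but persist only its hash.
token = SecureRandom.hex(16)

EmailToken.create!(
  user: user,                                  # illustrative record
  token_hash: Digest::SHA256.hexdigest(token), # never store the raw token
  scope: "password_reset"                      # assumed scope value for illustration
)

# Confirming requires the raw token from the link and a matching scope.
EmailToken.find_by(
  token_hash: Digest::SHA256.hexdigest(params[:token]),
  scope: "password_reset"
)
```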
Discourse regularly sends a message to admins when potential problems persist. Most of the time these messages have exactly the same content. In that case, when there are no replies, the old message should be trashed before a new one is created.
PresenceChannel aims to be a generic system for allowing the server, and end users, to track the number and identity of users performing a specific task on the site. For example, it might be used to track who is currently 'replying' to a specific topic, editing a specific wiki post, etc.
A few key pieces of information about the system:
- PresenceChannels are identified by a name of the format `/prefix/blah`, where `prefix` has been configured by some core/plugin implementation, and `blah` can be any string the implementation wants to use.
- Presence is boolean - each user is either present or not present. If a user has multiple clients 'present' in a channel, they will be deduplicated so that the user is only counted once
- Developers can configure the existence and configuration of channels 'just in time' using a callback. The result of this is cached for 2 minutes.
- Configuration of a channel can specify permissions in a similar way to MessageBus (public boolean, a list of allowed_user_ids, and a list of allowed_group_ids). A channel can also be placed in 'count_only' mode, where the identity of present users is not revealed to end-users.
- The backend implementation uses redis lua scripts, and is designed to scale well. In the future, hard limits may be introduced on the maximum number of users that can be present in a channel.
- Clients can enter/leave at will. If a client has not marked itself 'present' in the last 60 seconds, they will automatically 'leave' the channel. The JS implementation takes care of this regular check-in.
- On the client side, PresenceChannel instances can be fetched from the `presence` ember service. Each PresenceChannel can be entered/left/subscribed/unsubscribed, and the service will automatically deduplicate information before interacting with the server.
- When a client joins a PresenceChannel, the JS implementation will automatically make a GET request for the current channel state. To avoid this, the channel state can be serialized into one of your existing endpoints, and then passed to the `subscribe` method on the channel.
- The PresenceChannel JS object is an ember object. The `users` and `count` properties can be used directly in ember templates, and in computed properties.
- It is important to make sure that you `unsubscribe()` and `leave()` any PresenceChannel objects after use
An example implementation may look something like this. On the server:
```ruby
register_presence_channel_prefix("site") do |channel|
  next nil unless channel == "/site/online"
  PresenceChannel::Config.new(public: true)
end
```
And on the client, a component could be implemented like this:
```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";
export default Component.extend({
  presence: service(),

  init() {
    this._super(...arguments);
    this.set("presenceChannel", this.presence.getChannel("/site/online"));
  },

  didInsertElement() {
    this.presenceChannel.enter();
    this.presenceChannel.subscribe();
  },

  willDestroyElement() {
    this.presenceChannel.leave();
    this.presenceChannel.unsubscribe();
  },
});
```
With this template:
```handlebars
Online: {{presenceChannel.count}}
<ul>
  {{#each presenceChannel.users as |user|}}
    <li>{{avatar user imageSize="tiny"}} {{user.username}}</li>
  {{/each}}
</ul>
```
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on, only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. It is used to keep track of uploads that are sent to a temporary location in S3 by the direct-to-S3 code; the stubs are eventually deleted a) when the direct upload is completed and b) after a certain period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the client side (e.g. the SHA1 checksum described below). It also creates an `ExternalUploadStub` that stores the details of the temp object key and the file being uploaded.
Once the client side has this URL, uppy uploads the file directly to S3 using the presigned URL. Once the upload is complete, we go to the next stage.
### Completing a direct S3 upload
Once the upload to S3 is done, we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100 MB, defined by `ExternalUploadManager::DOWNLOAD_LIMIT`), we do not download it or generate its SHA1. Instead we create the `Upload` record via `UploadCreator`, simply copy the object to its final destination on S3, then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by Ruby (see the sketch after this list). The browser SHA1 checksum is stored on the object in S3 as metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file, if there are no changes (such as resizing) to the file in `UploadCreator`, we follow the same copy + delete temp path that we do for files that are too large.
3. Finally, we return the serialized upload record back to the client.
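A hedged sketch of the checksum comparison from step 2; the metadata key and the mismatch handling are assumptions for illustration:
```ruby
require "digest"

# Hedged sketch: compare the browser-provided SHA1 (stored as S3 object
# metadata by the UppyChecksum plugin) with the SHA1 of the downloaded file.
# The metadata key name and the error class are illustrative.
browser_sha1 = s3_object.metadata["sha1-checksum"]
actual_sha1  = Digest::SHA1.file(downloaded_file.path).hexdigest

if browser_sha1.present? && browser_sha1 != actual_sha1
  # Abort instead of creating an Upload record for a corrupted file.
  raise ExternalUploadManager::ChecksumMismatchError
end
```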
Several errors that could happen along the way are also handled by `UploadsController`.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jQuery file uploader errors.
There was an issue with the TopicTimerEnqueuer where any timer that raised an error in enqueue_typed_job would prevent all other pending timers after it from running. To mitigate this, we capture the error and log it (so we can still investigate and fix it if needed) and proceed with the rest of the timer enqueues.
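A minimal sketch of the rescue-and-continue behaviour described above; the surrounding loop and log message are illustrative rather than the exact TopicTimerEnqueuer code:
```ruby
# Minimal sketch: a failure for one timer no longer aborts the whole run.
timers_to_enqueue.each do |timer|
  begin
    enqueue_typed_job(timer)
  rescue => err
    Discourse.warn_exception(
      err,
      message: "Failed to enqueue topic timer #{timer.id}" # illustrative message
    )
    # continue with the remaining timers instead of stopping here
  end
end
```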
The commit in https://github.com/discourse/discourse/pull/13544 originally highlighted this issue on hosted sites.
* FEATURE: Staff can receive pending user reminders more frequently.
We now express the "pending_users_reminder_delay" in minutes instead of hours, so staff can have finer control over the delay.
Keep in mind that the reminders could still take up to 20 minutes to arrive even with a lower value, because we send them from a scheduled job.
* Migrate to a new site setting for the reminders delay
Subclasses must call `#delete_user_actions` inside `build_actions` to support user deletion. The method adds a delete-user bundle with two options: delete, and delete + block. Every subclass is responsible for implementing these actions.
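A hedged sketch of what a subclass might look like; the guardian check and the exact `build_actions` details are assumptions for illustration:
```ruby
# Hypothetical reviewable subclass; the guardian check shown is illustrative.
class ReviewableExample < Reviewable
  def build_actions(actions, guardian, args)
    return unless pending?

    # Adds the delete-user bundle (delete, delete + block) described above.
    delete_user_actions(actions) if guardian.can_delete_user?(target_created_by)
  end
end
```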
Email change requests are never deleted, whether or not they completed successfully. Abandoned requests have the disadvantage of showing up as unconfirmed emails on the user's preferences page.
Recalculating a ReviewableFlaggedPost's score after rejecting or ignoring it sets the score to 0, which means we can't find those reviewables afterwards; they don't surpass the minimum priority threshold and are hidden.
Additionally, we only want to use agreed flags when calculating the different priority thresholds.
This filter hides reviewables with a score lower than the "reviewable_low_priority_threshold" setting. We only use reviewables that already met this threshold to calculate the Medium and High priority filters.
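A hedged sketch of the low-priority filter described above; the query is illustrative rather than the exact Reviewable code:
```ruby
# Hedged sketch: hide reviewables that score below the configured threshold.
Reviewable.where("score >= ?", SiteSetting.reviewable_low_priority_threshold)
```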
Currently the process of adding a custom image to a badge is quite clunky; you have to upload your image to a topic, then copy the image URL and paste it into a text field. Besides being clunky, if the topic or post that contains the image is deleted, the image will be garbage-collected in a few days and the badge will lose its image, because the application is not aware that the image is referenced by a badge.
This commit improves that by adding a proper image uploader widget for badge images.
* FIX: Reduce the time_read threshold to one minute.
Five minutes is too much and could fill the queue with false positives.
* Update spec/jobs/enqueue_suspect_users_spec.rb
Co-authored-by: Arpit Jalan <arpit@techapj.com>
Completing the discobot tutorial gives you roughly 3 minutes of reading time, so we set the limit at 5 minutes. Additionally, we use an "OR" clause to cover the case when you just scroll through a single topic.
The original PR was reverted because of a broken migration: https://github.com/discourse/discourse/pull/12058
I fixed it by adding this line:
```sql
AND topics.id IN(SELECT id FROM topics ORDER BY created_at DESC LIMIT :max_new_topics)
```
This time it left joins a limited number of topics. I tested it on a few databases and it worked quite smoothly.
Follow-up to https://github.com/discourse/discourse/pull/11968
Dismiss all new topics using the same DismissTopicService. In addition, MessageBus receives the exact topic ids which should be marked as `seen`.
This is an attempt to simplify the logic around dismissing new topics so that one solution works in all places - dismiss all new, dismiss new in a specific category, or even in a specific tag.
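A hedged sketch of publishing the exact dismissed topic ids over MessageBus; the channel name and payload shape are assumptions for illustration:
```ruby
# Hedged sketch: tell the client exactly which topics to mark as seen.
# Channel name and payload keys are illustrative, not the exact Discourse ones.
MessageBus.publish(
  "/topic-tracking-state/#{user.id}",
  { message_type: "dismiss_new", payload: { topic_ids: dismissed_topic_ids } },
  user_ids: [user.id]
)
```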
Lots of changes but it's mostly a refactoring.
The interesting part that was fixed is the `load_problem_<model>_ids` methods.
They now return records with no associated search data, so those records can be properly indexed for search.
This "bad" state usually happens after a migration.
This adds a new table, UserNotificationSchedules, which stores Monday-Friday start and end times during which each user would like to receive notifications (with an `enabled` boolean to turn the schedule off). There is then a background job that runs every day and creates do_not_disturb_timings for each user with an enabled notification schedule. The job schedules timings 2 days in advance, and it is designed so that it can be run at any point in time without creating duplicate records.
When a user saves their notification schedule, the schedule processing service runs and schedules do_not_disturb_timings. If the user should be in DND due to their schedule, they are immediately put in DND (MessageBus publishes this state).
The UI for a user's notification schedule is in user -> preferences -> notifications. By default, every day is 8am - 5pm when first enabled.
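A hedged sketch of the idempotent daily pass described above; the helper for computing each day's window and the `find_or_create_by!` shape are assumptions for illustration:
```ruby
# Hedged sketch: create DND timings two days ahead without duplicating rows.
# Model, column, and helper names are illustrative, not the exact schema.
UserNotificationSchedule.where(enabled: true).find_each do |schedule|
  (Time.zone.today..Time.zone.today + 2).each do |day|
    starts_at, ends_at = schedule.dnd_window_for(day) # assumed helper

    schedule.user.do_not_disturb_timings.find_or_create_by!(
      starts_at: starts_at,
      ends_at: ends_at
    )
  end
end
```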
This should make it easier to track down how the incoming email was created, which is one of four locations:
- The POP3 poller (which picks up reply-by-email replies)
- The admin email controller's #handle_mail (which is where hosted mail is sent)
- The IMAP sync tool
- The group SMTP mailer, which sends emails when replying to IMAP topics, pre-emptively creating IncomingEmail records to avoid double syncing
Moves the topic timer jobs from being scheduled ahead of time with enqueue_at to a 5 minute scheduled run like bookmark reminders, in a new job called Jobs::EnqueueTopicTimers. Backwards compatibility is maintained by checking whether an existing topic timer job is already enqueued in Sidekiq for the timer and, if it is, not running it inside the new job.
The functionality to close/open a topic if it is in the opposite state still remains in the after_save block of TopicTimer (with further commentary); it is used for Open/Close Temporarily.
This also removes the ensure_consistency! functionality of topic timers as it is no longer needed; the new job will always pick up the timers because they are no longer stored in Sidekiq's fragile scheduled state.
This adds a safe default so that, when the POP3 polling option is set up, emails older than one week are not processed. This avoids the situation where an older mailbox is used, which would cause us to process every email in that mailbox and send out error emails to the senders of emails which cannot be parsed successfully.
My initial implementation didn't consider this case. We should skip imported users if the "imported_id" field is present, even if there are other custom fields.
- IgnoredUser records should all now have an expiring_at value. This commit enforces that in the DB, and fixes any corrupt rows
- Changes to the ignored user list are now handled by the `/u/{username}/notification_level` endpoint. This allows setting expiration dates on the ignore. This commit removes the old logic for saving a list of usernames in the user preferences.
- Many specs were calling `IgnoredUser.create`. This commit changes them to use `Fabricate(:ignored_user)` for consistency
b8c676e7 added the 'forever' option to the UI, and this is correctly stored in the database. However, we had a hard-coded limit of 4 months in the cleanup job. This commit removes the limit, so ignores can last forever.
The dependency on gifsicle, and the allow_animated_avatars and allow_animated_thumbnails site settings, were all removed. Animated GIF images are still allowed, but the generated optimized images (which were used for avatars and thumbnails) are no longer animated.
The new 'animated' field is populated by extracting information using FastImage. This field is used to selectively reoptimize old animations; that process happens in the background.
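A small sketch of the detection step, assuming FastImage's `animated?` helper is what populates the field (the exact call site is not shown in this commit message):
```ruby
require "fastimage"

# Returns true for animated images (e.g. multi-frame GIFs), false otherwise.
FastImage.animated?("path/to/upload.gif")
```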
When posts or topics are deleted, we don't want to immediately delete associated bookmarks, so we have a grace period to recover them and their reminders if the post or topic is un-deleted. This PR adds a task to the Weekly scheduled job to delete bookmarks attached to posts or topics deleted more than 3 days ago.
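A hedged sketch of the cleanup described above; the joins and column names are assumptions for illustration rather than the exact job code:
```ruby
# Hedged sketch: remove bookmarks whose post or topic was deleted > 3 days ago.
grace_period = 3.days.ago

# Bookmarks on posts deleted more than 3 days ago.
Bookmark
  .joins("INNER JOIN posts ON posts.id = bookmarks.post_id")
  .where("posts.deleted_at < ?", grace_period)
  .destroy_all

# Bookmarks whose whole topic was deleted more than 3 days ago.
Bookmark
  .joins("INNER JOIN posts ON posts.id = bookmarks.post_id")
  .joins("INNER JOIN topics ON topics.id = posts.topic_id")
  .where("topics.deleted_at < ?", grace_period)
  .destroy_all
```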