Previously, with the default `editing_grace_period`, hotlinked images were pulled 5 minutes after a post is created. This delay was added to reduce the chance of automated edits clashing with user edits.
This commit refactors things so that we can pull hotlinked images immediately. URLs are immediately updated in the post's `cooked` HTML. The post's raw markdown is updated later, after the `editing_grace_period`.
This involves a number of behind-the-scenes changes including:
- Schedule Jobs::PullHotlinkedImages immediately after Jobs::ProcessPost. Move scheduling to after the `update_column` call to avoid race conditions
- Move raw changes into a separate job, which is delayed until after the ninja-edit window
- Move disable_if_low_on_disk_space logic into the `pull_hotlinked_images` job
- Move raw-parsing/replacing logic into `InlineUpload` so it can easily be shared between `UpdateHotlinkedRaw` and `PullUserProfileHotlinkedImages`
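As a rough sketch, the new flow looks something like this (job names match the classes above; the argument shape is an assumption):

```ruby
# Minimal sketch, assuming Discourse's Jobs.enqueue / Jobs.enqueue_in helpers.
def enqueue_hotlinked_image_jobs(post)
  # Rewrite URLs in the cooked HTML immediately...
  Jobs.enqueue(:pull_hotlinked_images, post_id: post.id)

  # ...but defer the raw markdown rewrite until the editing grace period is over.
  Jobs.enqueue_in(
    SiteSetting.editing_grace_period.seconds,
    :update_hotlinked_raw,
    post_id: post.id,
  )
end
```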
Incorporates learnings from /t/64227:
* Changes the code that sets access control posts in the rake
task to use an efficient UPDATE SQL query.
The original version was timing out with 312017 post uploads;
the new query took ~3s to run.
* Changes the code that marks uploads as secure/not secure in
the rake task to use an efficient UPDATE SQL query rather than
UploadSecurity. This previously took a very long time,
and now takes only a few seconds.
* Spread out ACL syncing for uploads into jobs with batches of
100 uploads at a time, so they can be parallelized instead
of having to wait ~1.25 seconds for each ACL to be changed
in S3 serially.
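A hedged sketch of the batching approach (the job name mirrors the ACL sync job described above; treat the query and argument shape as assumptions):

```ruby
# Fan ACL syncing out into jobs of 100 uploads each so they can run in parallel.
Upload.where(secure: true).select(:id).find_in_batches(batch_size: 100) do |batch|
  Jobs.enqueue(:sync_acls_for_uploads, upload_ids: batch.map(&:id))
end
```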
One issue that still remains is post rebaking. Doing this serially
is painfully slow. We have a way to do this in Sidekiq via PeriodicalUpdates,
but this is limited by `max_old_rebakes_per_15_minutes`. It would
be better to fan this rebaking out into jobs like we did for the
ACL sync, but that should be done in another PR.
This commit migrates all bookmarks to be polymorphic (using the
bookmarkable_id and bookmarkable_type columns). It also deletes
all the old code guarded behind the use_polymorphic_bookmarks setting
and changes that setting to true for all sites and by default for
the sake of plugins.
No data is deleted in the migrations; the old post_id and for_topic
columns for bookmarks will be dropped later on.
- Only validate if custom_fields are loaded, so that we don't trigger a db query
- Only validate public user fields, not all custom_fields
This commit also reverts the unrelated spec changes in ba148e08, which were required to work around these issues
Currently we don’t apply watched words to custom user fields or user
profile fields.
This led to users being able to use blocked words in their bio, location
or some custom user fields.
This patch addresses this issue by adding some validations so it’s not
possible anymore to save the User model or the UserProfile model if they
contain blocked words.
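The validation could look roughly like this (a minimal sketch; the validator name is illustrative, and the exact return shape of WordWatcher's `should_block?` is an assumption):

```ruby
# Rejects any value containing a blocked watched word.
class WatchedWordsValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    # Assumption: should_block? returns the blocked word(s) found, or nil.
    blocked = WordWatcher.new(value.to_s).should_block?
    record.errors.add(attribute, :contains_blocked_words) if blocked.present?
  end
end
```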
A bit of a mixed bag, this addresses several edge areas of bookmarks and makes them compatible with polymorphic bookmarks (hidden behind the `use_polymorphic_bookmarks` site setting). The main ones are:
* ExportUserArchive compatibility
* SyncTopicUserBookmarked job compatibility
* Sending different notifications for the bookmark reminders based on the bookmarkable type
* Import scripts compatibility
* BookmarkReminderNotificationHandler compatibility
This PR also refactors the `register_bookmarkable` API so it accepts a class descended from a `BaseBookmarkable` class instead. This was done because we kept having to add more and more lambdas/properties inline and it was very messy, so a factory pattern is cleaner. The classes can be tested independently as well.
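A rough sketch of the new shape (method names are based on the description above and may not match the final `BaseBookmarkable` contract exactly):

```ruby
# Each bookmarkable type is now a class rather than a pile of inline lambdas.
class PostBookmarkable < BaseBookmarkable
  def self.model
    Post
  end

  def self.serializer
    UserPostBookmarkSerializer
  end

  def self.reminder_handler(bookmark)
    # send the post-specific reminder notification
  end
end

# register_bookmarkable(PostBookmarkable)
```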
Some later PRs will address some other areas like the discourse narrative bot, advanced search, reports, and the .ics endpoint for bookmarks.
`TestLogger` was responsible for some flaky specs runs:
```
Error during failsafe response: undefined method `debug' for #<TestLogger:0x0000556c4b942cf0 @warnings=1>
Did you mean? debugger
```
This commit also cleans up other uses of `FakeLogger`
When changing upload security using `Upload#update_secure_status`,
we may not have the context of how an upload is being created, because
this code path can be run through scheduled jobs. When calling
update_secure_status, the normal ActiveRecord validations are run,
and ours include validating extensions. In some cases the upload
is created in an automated way, such as user export zips, and the
security is applied later, with the extension prohibited from
use when normally uploading.
This caused the upload to fail validation on `update_secure_status`,
causing the security change to silently fail. This fixes the issue
by skipping the file extension validation when the upload security
is being changed.
Sometimes we need to update a _lot_ of ACLs on S3 (such as when secure media
is enabled), and since it takes ~1s per upload to update the ACL, this is
best spread out over many jobs instead of having to do the whole thing serially.
In future, it will be better to have a job that can be run based on
a column on uploads (e.g. acl_stale) so we can track progress, similar
to how we can set the baked_version to nil to rebake posts.
raw_html posts (i.e. those which are pulled as part of our comments integration) don't go through our markdown pipeline, so `upload://` URLs are not supported. Running pull_hotlinked_images will break any images in the post.
In future we may add support for pulling hotlinked images in these posts. But for now, disabling it will stop it breaking images.
Under some conditions, replacing an `<img` with `![]()` can break rendering, and make the image disappear.
Context at https://meta.discourse.org/t/152801
The user can select what happens with a bookmark after it expires. A new
option allows the bookmark's reminder to be kept even after it has expired.
After a bookmark's reminder notification is created, the reminder date
will be highlighted in red until the user resets the reminder date.
Users can do that using the new Clear Reminder button in the dropdown.
It's very easy to forget to add `require 'rails_helper'` at the top of every core/plugin spec file, and omissions can cause some very confusing/sporadic errors.
By setting this flag in `.rspec`, we can remove the need for `require 'rails_helper'` entirely.
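The flag in question is rspec's standard `--require` option, which in `.rspec` looks like this:

```
--require rails_helper
```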
This allows text editors to use correct syntax coloring for the heredoc sections.
Heredoc tag names we use:
languages: SQL, JS, RUBY, LUA, HTML, CSS, SCSS, SH, HBS, XML, YAML/YML, MF, ICS
other: MD, TEXT/TXT, RAW, EMAIL
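For example (`DB.exec` is Discourse's mini_sql helper; the query itself is illustrative):

```ruby
# Naming the heredoc tag SQL lets editors highlight the embedded query.
DB.exec(<<~SQL, user_id: user.id)
  UPDATE user_stats
  SET post_count = 0
  WHERE user_id = :user_id
SQL
```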
This commit introduces two new APIs for handling unused uploads: one
can be used to exclude uploads in bulk when the data model allows, and
the other one excludes uploads one by one.
Breakdown of fixes in this commit:
* `UserStat#topic_count` was not updated when visibility of
the topic changed.
* `UserStat#post_count` was not updated when post was hidden or
unhidden.
* `TopicConverter` was only incrementing or decrementing the counts by 1
even if a user has multiple posts in the topic.
* The commit turns off the verbose logging by default as it is just
noise to normal users who are not debugging this problem.
This commit introduces our own handling and warning for Sidekiq's new 'non-json-serializable' warning. This decouples us from Sidekiq's own deprecation cycle, and allows us to use our own deprecation system. It also means that the dump/parse happens in test mode, which will help us to catch occurrences before they reach production.
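Conceptually, the check is a dump/parse round-trip on job arguments, along these lines (a sketch only; the middleware name and warning text are illustrative, not Discourse's actual implementation):

```ruby
require "json"

# Sidekiq client middleware that flags non-JSON-serializable job arguments.
class EnsureJsonSerializableArgs
  def call(worker_class, job, queue, redis_pool)
    round_tripped = JSON.parse(JSON.dump(job["args"]))
    if round_tripped != job["args"]
      Discourse.deprecate("Job arguments must be JSON-serializable")
    end
    yield
  end
end
```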
This commit introduces scheduled problem checks for the admin dashboard, which are long running or otherwise cumbersome problem checks that will be run every 10 minutes rather than every time the dashboard is loaded. If these scheduled checks add a problem, the problem will remain until it is cleared or until the scheduled job runs again.
An example of a check that should be scheduled is validating credentials against an external provider.
This commit also introduces the concept of a `priority` to the problems generated by `AdminDashboardData` and the scheduled checks. This is `low` by default, and can be set to `high`, but this commit does not change any part of the UI with this information, only adds a CSS class.
I will be making a follow up PR to check group SMTP credentials.
We previously used ConsolidateNotifications with a threshold of 1 to re-use an existing notification and bump it to the top instead of creating a new one. It produces some jumpiness in the user notification list, and it relies on updating the `created_at` attribute, which is a bit hacky.
As a better alternative, we're introducing a new plan that deletes all the previous versions of the notification, then creates a new one.
We send the reminder using the GroupMessage class, which supports removing previous messages. We can't match them by raw because they could mention different moderators. Also, I had to change the subject to remove dynamically generated values, which is necessary for finding them.
Currently the Message-IDs we send out for outbound email
are not unique; for a post they look like:
topic/TOPIC_ID/POST_ID@HOST
And for a topic they look like:
topic/TOPIC_ID@HOST
This commit changes the outbound Message-IDs to also have
a random suffix before the host, so the new format is
like this:
topic/TOPIC_ID/POST_ID.RANDOM_SUFFIX@HOST
Or:
topic/TOPIC_ID.RANDOM_SUFFIX@HOST
This should help with email deliverability. This change
is backwards-compatible; the old Message-ID format will
still be recognized in the mail receiver flow, so people
will still be able to reply using Message-IDs, In-Reply-To,
and References headers that have already been sent.
This commit also refactors Message-ID related logic
to a central location, and adds judicious amounts of
tests and documentation.
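For illustration, generating the new format might look like this (a sketch; the suffix length and helper names are assumptions):

```ruby
# New-style outbound Message-ID with a random suffix for uniqueness.
def message_id_for(post)
  suffix = SecureRandom.hex(12)
  "<topic/#{post.topic_id}/#{post.id}.#{suffix}@#{Discourse.current_hostname}>"
end
```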
* REFACTOR: Improve support for consolidating notifications.
Before this commit, we didn't have a single way of consolidating notifications. For notifications like group summaries, we manually removed old ones before creating a new one. On the other hand, we used an after_create callback for likes and group membership requests, which caused unnecessary work, as we had to delete the record we had just created in order to replace it with a consolidated one.
We now have all the consolidation rules centralized in a single place: the consolidation planner class. Other parts of the app looking to create a consolidable notification can do so by calling Notification#consolidate_or_save!, instead of the default Notification#create! method.
Finally, we added two more rules: one for re-using existing group summaries and another for deleting duplicate dashboard-problem PM notifications when the user is tracking the moderator's inbox. Setting the threshold to one forces the planner to apply this rule every time.
I plan to add plugin support for adding custom rules in another PR to keep this one relatively small.
* DEV: Introduces a plugin API for consolidating notifications.
This commit removes the `Notification#filter_by_consolidation_data` scope since plugins may need to define their own criteria. The Plan class now receives two blocks: one to query for an already consolidated notification, which we'll try to update, and another to query for existing notifications to consolidate.
It also receives a consolidation window, which accepts an ActiveSupport::Duration object, and only considers notifications created within that window.
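A hedged sketch of what building such a plan might look like (parameter names are assumptions based on the description above, not the exact API):

```ruby
# A plan that folds individual like notifications into one consolidated entry.
plan = Notifications::ConsolidateNotifications.new(
  from: Notification.types[:liked],
  to: Notification.types[:liked_consolidated],
  threshold: 10,                # consolidate once this many accumulate
  consolidation_window: 1.day,  # only consider notifications this recent
)
# The two query blocks described above would be attached to the plan here.
```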
This commit adds token_hash and scopes columns to email_tokens table.
token_hash is a replacement for the token column to avoid storing email
tokens in plaintext as it can pose a security risk. The new scope column
ensures that email tokens cannot be used to perform a different action
than the one intended.
To sum up, this commit:
* Adds token_hash and scope to email_tokens
* Reuses code that schedules critical_user_email
* Refactors EmailToken.confirm and EmailToken.atomic_confirm methods
* Periodically cleans old, unconfirmed or expired email tokens
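The hashing step could look like this (a sketch assuming SHA-256; only the hash ever touches the database):

```ruby
require "securerandom"
require "digest"

token = SecureRandom.hex(20)                  # sent to the user by email
token_hash = Digest::SHA256.hexdigest(token)  # stored in email_tokens

# Confirming later: hash the presented token and look it up.
email_token = EmailToken.find_by(token_hash: Digest::SHA256.hexdigest(params[:token]))
```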
Currently, Discourse rate limits all incoming requests by the IP address they
originate from regardless of the user making the request. This can be
frustrating if there are multiple users using Discourse simultaneously while
sharing the same IP address (e.g. employees in an office).
This commit implements a new feature to make Discourse apply rate limits by
user id rather than IP address for users at or higher than the configured trust
level (1 is the default).
For example, let's say a Discourse instance is configured to allow 200 requests
per minute per IP address, and we have 10 users at trust level 4 using
Discourse simultaneously from the same IP address. Before this feature, the 10
users could only make a total of 200 requests per minute before they got rate
limited. But with the new feature, each user is allowed to make 200 requests
per minute because the rate limits are applied on user id rather than the IP
address.
The minimum trust level for applying user-id-based rate limits can be
configured by the `skip_per_ip_rate_limit_trust_level` global setting. The
default is 1, but it can be changed by either adding the
`DISCOURSE_SKIP_PER_IP_RATE_LIMIT_TRUST_LEVEL` environment variable with the
desired value to your `app.yml`, or changing the setting's value in the
`discourse.conf` file.
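For example, to raise the threshold to trust level 2 (both forms shown; syntax per the paths described above):

```
# app.yml, under the env section:
DISCOURSE_SKIP_PER_IP_RATE_LIMIT_TRUST_LEVEL: 2

# or in discourse.conf:
skip_per_ip_rate_limit_trust_level = 2
```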
Requests made with API keys are still rate limited by IP address and the
relevant global settings that control API keys rate limits.
Before this commit, Discourse's auth cookie (`_t`) was simply a 32-character
string that Discourse used to look up the current user in the database, and the
cookie contained no additional information about the user. However, we had to
change the cookie content in this commit so we could identify the user from the
cookie without making a database query before the rate limits logic and avoid
introducing a bottleneck on busy sites.
Besides the 32-character auth token, the cookie now includes the user id,
trust level and the cookie's generation date, and we encrypt/sign the cookie to
prevent tampering.
Internal ticket number: t54739.
Sometimes, a user may have a malformed email such as
`test@test.com<mailto:test@test.com` as their email address,
and as a topic participant will be included as a CC email
when sending a GroupSmtpEmail. This causes the CC parsing to
fail and further down the line in Email::Sender the code
to check the CC addresses expects an array but gets a string
instead because of the parse failure.
Instead, we can just check if the CC addresses are valid
and drop them if they are not in the GroupSmtpEmail job.
This commit removes the recipient's username from the
respond to / participants list that is shown at the bottom
of user notification emails. For example if the recipient's
username was jsmith, and there were participants ljones and
bmiller, we currently show this:
> "reply to this email to respond to jsmith, ljones, bmiller"
or
> "Participants: jsmith, ljones, bmiller"
However this is a bit redundant, as you are not replying to
yourself here if you are the recipient user. So we omit the
recipient user's username from this list, which is only used
in the text of the email and not elsewhere.
We don't actually use the reminder_type for bookmarks anywhere;
we are just storing it. It has no bearing on the UI. It used
to be relevant with the at_desktop bookmark reminders (see
fa572d3a7a)
This commit marks the column as readonly, ignores it, and removes
the index; the column will be dropped in a later PR. Some plugins
are relying on reminder_type partially so some stubs have been
left in place to avoid errors.
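Ignoring the column uses the standard Rails mechanism for phasing a column out ahead of dropping it:

```ruby
class Bookmark < ActiveRecord::Base
  # Rails stops reading/writing this column, so it can be dropped safely later.
  self.ignored_columns = ["reminder_type"]
end
```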
We don't need no stinkin' denormalization! This commit ignores
the topic_id column on bookmarks, to be deleted at a later date.
We don't really need this column and it's better to rely on the
post.topic_id as the canonical topic_id for bookmarks, then we
don't need to remember to update both columns if the bookmarked
post moves to another topic.
Discourse regularly sends messages to admins when potential problems persist. Most of the time they have exactly the same content. In that case, when there are no replies, the old one should be trashed before a new one is created.
This bug was introduced by f66007ec83.
In PostJobsEnqueuer we previously did not fire the after_post_create
event and after_topic_create event for private message topics. This was
changed in the above commit in order to publish message bus messages
for topic tracking state updates. Unfortunately this caused the
NotifyMailingListSubscribers job to be enqueued for all posts including
private messages, and admins and the users involved in the PMs got
emailed the contents of the PMs if they had mailing list mode enabled.
Luckily the impact of this was mitigated by a Guardian#can_see? check
for each mailing list mode user in the NotifyMailingListSubscribers job.
We never want to notify mailing list mode subscribers for private messages
so an early return has been added there, plus the logic in PostJobsEnqueuer
has been fixed, and tests have been added to that class where there were
none before.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file direct to S3 using the presigned URL. Once the upload is complete we go to the next stage.
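A hedged sketch of the presigned-URL step with aws-sdk-s3 (`filename`, `bucket_name`, the key layout, and the expiry are illustrative; Discourse's S3 helpers wrap this):

```ruby
require "aws-sdk-s3"
require "securerandom"

s3 = Aws::S3::Resource.new(region: "us-east-1")
key = "temp/#{SecureRandom.hex}/#{filename}"  # temp folder inside the bucket path
object = s3.bucket(bucket_name).object(key)

# The client PUTs the file directly to this URL; no bytes pass through the app.
presigned_url = object.presigned_url(:put, expires_in: 600)
```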
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100MB, defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download it or generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3, then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby. The browser SHA1 checksum is stored on the object in S3 with metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally we return the serialized upload record back to the client
There are several errors that could happen that are handled by `UploadsController` as well.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
We now use the group's full name in group SMTP emails, so we are dropping the `via #{site_name}` suffix. If group owners still want this they can just change the full name of the group.
Currently when bulk-awarding a badge that can be granted multiple times, users in the CSV file are granted the badge once no matter how many times they're listed in the file and only if they don't have the badge already.
This PR adds a new option to the Badge Bulk Award feature so that it's possible to grant users a badge even if they already have the badge and as many times as they appear in the CSV file.
This PR fixes a couple of issues related to group SMTP:
1. When running the group SMTP job, we were exiting early if the email was for the OP because of an IMAP race condition. However this causes issues when replying as a new topic for an existing SMTP topic, as the recipient does not get the OP email which can cause threading problems.
2. When sending emails for a new topic spun out as in issue 1, we were not maintaining the original subject/topic title, because that is based on the incoming email record, which was never created since the group SMTP email was never sent (issue 1).
There was an issue with the TopicTimerEnqueuer where any timer
that failed to enqueue_typed_job with an error would prevent
all other pending timers after the one that errored from running.
To mitigate this we just capture the error and log it (so we can
still fix it if needed for bug crushing) and proceed with the
rest of the timer enqueues.
The commit https://github.com/discourse/discourse/pull/13544 highlighted
this issue originally in hosted sites.
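The shape of the fix (a sketch; the logging helper and loop structure are assumptions):

```ruby
# Isolate each timer so one bad enqueue can't block the rest.
timers.each do |timer|
  begin
    enqueue_typed_job(timer)
  rescue => e
    Discourse.warn_exception(e, message: "Failed to enqueue topic timer #{timer.id}")
    # carry on with the remaining pending timers
  end
end
```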
Skip group SMTP email (and add log) if:
* topic is deleted
* post is deleted
* smtp has been disabled for the group
Skip without log if:
* enable_smtp site setting is false
* disable_emails site setting is yes
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
We do not want to show the In Reply To section of the
group SMTP email template, it is similar to Context Posts
which we removed and is unnecessary.
This PR also removes the link to staged user profiles in
the email; their email addresses will just be converted
to regular mailto: links.
This PR makes several changes to the group SMTP email contents to make it look more like a support inbox message.
* Remove the context posts, they only add clutter to the email and replies
* Display email addresses of staged users instead of odd generated usernames
* Add a "please reply above this line" message to sent emails
This PR backtracks a fair bit on this one https://github.com/discourse/discourse/pull/13220/files.
Instead of sending the group SMTP email for each user via `UserNotifications`, we are changing to send only one email with the existing `Jobs::GroupSmtpEmail` job and `GroupSmtpMailer`. We are changing this job and mailer along with `PostAlerter` to make the first topic allowed user the `to_address` for the email and any other `topic_allowed_users` to be the CC address on the email. This is to cut down on emails sent via SMTP, which is subject to daily limits from providers such as Gmail. We log these details in the `EmailLog` table now.
In addition to this, we have changed `PostAlerter` to no longer rely on incoming email addresses for sending the `GroupSmtpEmail` job. This was unreliable as a user's email could have changed in the meantime. Also it was a little overcomplicated to use the incoming email records -- it is far simpler to reason about to just use topic allowed users.
This also adds a fix to include cc_addresses in the EmailLog.addressed_to_user scope.
* FEATURE: Staff can receive pending user reminders more frequently.
We now express the "pending_users_reminder_delay" in minutes instead of hours so staff can have finer control over the delay.
We need to keep in mind that the reminders could still take up to 20 minutes, even when using a lower value. We send them from a scheduled job.
* Migrate to a new site setting for the reminders delay
For now this only implements the server side of it, as my need is for automation purposes. However, it should probably be added to the UI too, as it's unexpected to have pinned_until and no bannered_until.
When we call Bookmark.cleanup! we want to make sure that
topic_user.bookmarked is updated for topics linked to the
bookmarks that were deleted. Also when PostDestroyer calls
destroy and recover. We have a job for this already --
SyncTopicUserBookmarked -- so we just utilize that.
Leaking state and non-obvious order (before :all runs *before* RailsHelper.test_setup) are not worth it.
A replacement PR for #13370. Fixes some flaky specs, e.g.
```
bin/rspec './spec/components/freedom_patches/translate_accelerator_spec.rb[1:3]' './spec/jobs/clean_up_user_export_topics_spec.rb[1:1]' --tag ~type:multisite --seed 35994
```
Also included:
* DEV: No need for locale reset (we do it anyway in rails_helper in `test_setup`)
Using 1 as the default value is confusing for some people as low-score flags are hidden unless staff uses the "(any)" priority filter. Let's change it to 0 and let every site adjust the setting to match their needs.
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
Email change requests are never deleted, whether or not they completed
successfully. The abandoned requests have the disadvantage of
showing up as unconfirmed emails on the user's preferences page.
Recalculating a ReviewableFlaggedPost's score after rejecting or ignoring it sets the score to 0, which means that we can't find them after reviewing. They don't surpass the minimum priority threshold and are hidden.
Additionally, we only want to use agreed flags when calculating the different priority thresholds.
* FEATURE: add support for like webhooks
Add support for like webhooks. Webhook events are only sent when the user
is a member of one of the groups in the defined webhook group filters.
This also fixes group webhook events; before, this was never used, and
the logic was not correct.
This filter hides reviewables with a score lower than the "reviewable_low_priority_threshold" setting. We only use reviewables that already met this threshold to calculate the Medium and High priority filters.
It used to check if an upload record exists, which is wrong because an
invalid upload record can exist even when the upload was not actually created.
The other improvement is a better log message.
Previously, watched words ignored topic titles when applying auto-tagging rules.
The copy has also been improved to reflect how the system behaves:
the text now hints that we only watch the first post.
When posts are moved from one topic to another, the `topic_user.bookmarked` column for all users in the new and the old topic needs to be resynced. For example, if a user bookmarks post 12 in topic 1 and it is then moved to topic 2, the topic_user record for topic 1 should no longer be marked bookmarked. A background job has been added to sync the column for a specified topic, or, when no topic is given, for all topics, as the migration did.
Also includes a migration that we have run in the past to fix bad data.
----
This has been addressed in other places in the past:
https://github.com/discourse/discourse/pull/10211
https://github.com/discourse/discourse/pull/10188
We introduced a cap on the number of bookmarks the user can add in be145ccf2f. However this has caused unintended side effects; when the `jobs/scheduled/bookmark_reminder_notifications.rb` runs we get this error for users who already had more bookmarks than the limit:
> Job exception: Validation failed: Sorry, you have too many bookmarks, visit #{url}/my/activity/bookmarks to remove some.
This is because the `clear_reminder!` call was triggering a bookmark validation, which raised an error because the user already had too many, holding up other reminders.
This PR also adds `max_bookmarks_per_user` hidden site setting (default 2000). This replaces the BOOKMARK_LIMIT const so we can raise it for certain sites.
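The fix, roughly, is to clear the reminder without re-running validations (a sketch; the exact column names are assumptions):

```ruby
# Bypass model validations: a scheduled reminder shouldn't fail just because
# the user is over the (new) bookmark cap.
def clear_reminder!
  update_columns(reminder_at: nil, reminder_last_sent_at: Time.zone.now)
end
```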
When bulk inviting, the uploaded CSV file may contain wrong values for
the user fields. This tries to automatically correct them by finding
the most similar option (ignoring case).
Admins can use bulk invites to pre-populate user fields. The imported
CSV file must have a header with "email" column (first position) and
names of the user fields (exact match).
Under the hood, the bulk invite will create staged users and populate
the user fields of those.
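For example, for a site with hypothetical "Favorite color" and "Pronouns" user fields, the CSV might look like:

```
email,Favorite color,Pronouns
jane@example.com,purple,she/her
john@example.com,green,he/him
```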
The user mailing list mode continued to be silently enabled, and
the UserEmail job checked only that flag, ignoring the
disable_mailing_list_mode site setting.
An additional migration was added to set disable_mailing_list_mode
to false if any users had already enabled mailing list mode.
Currently the process of adding a custom image to a badge is quite clunky; you have to upload your image to a topic, and then copy the image URL and paste it in a text field. Besides being clunky, if the topic or post that contains the image is deleted, the image will be garbage-collected in a few days and the badge will lose its image, because the application is not aware that the image is referenced by a badge.
This commit improves that by adding a proper image uploader widget for badge images.
It was used both when inviting from a topic page and when creating
invites with "Send to topic on first login", while it should be used
only in the former case.
Onebox content may only be resolved during the process_post job. Since oneboxes can change the content of the excerpt, we need to make sure the excerpt is updated accordingly.
* FIX: Reduce the time_read threshold to one minute.
Five minutes is too much and could fill the queue with false positives.
* Update spec/jobs/enqueue_suspect_users_spec.rb
Co-authored-by: Arpit Jalan <arpit@techapj.com>
Completing the discobot tutorial gives you ~3m of reading time, so we set the limit at 5m. Additionally, we use an "OR" clause to cover the case when you just scroll through a single topic.
Previously when inheriting category auto-close settings for a topic, those settings were disrupted if another topic timer was assigned or if a topic was closed then manually re-opened.
This PR makes it so that when a topic is manually re-opened the topic auto-close settings are inherited from the category. However, they will now be based on the topic created_at date. As an example, for a topic with a category auto close hours setting of 72 (3 days):
* Topic was created on 2021-02-15 08:00
* Topic was closed on 2021-02-16 10:00
* Topic was opened again on 2021-02-17 06:00
Now, the topic will inherit the auto close timer again and will close automatically at **2021-02-18 08:00**, which is based on the creation date. If the current date and time is greater than the original auto-close time (e.g. we were at 2021-02-20 13:45) then no auto-close timer is created.
Note, this will not happen if the topic category auto-close setting is "based on last post".
The original PR was reverted because of a broken migration: https://github.com/discourse/discourse/pull/12058
I fixed it by adding this line:
```
AND topics.id IN(SELECT id FROM topics ORDER BY created_at DESC LIMIT :max_new_topics)
```
This time it is left joining a limited number of topics. I tested it on a few databases and it worked quite smoothly.
Follow-up to https://github.com/discourse/discourse/pull/11968
Dismiss all new topics using the same DismissTopicService. In addition, MessageBus receives the exact topic ids that should be marked as `seen`.
The bug was mentioned on meta https://meta.discourse.org/t/users-are-seeing-handling-of-unhandled-tag-again/155367
It was related to users who are watching a specific topic. In that case, when a hidden tag was added to or removed from the topic, they were notified by `NotifyTagChangeJob`.
That job should take hidden tags into consideration: if all changed tags belong to a hidden group, users who don't belong to that group should be excluded.
At the same time, if a tag visible to anyone is added or removed, users watching the topic should still be notified.
The 'Discourse SSO' protocol is being rebranded to DiscourseConnect. This should help to reduce confusion when 'SSO' is used in the generic sense.
This commit aims to:
- Rename `sso_` site settings. DiscourseConnect specific ones are prefixed `discourse_connect_`. Generic settings are prefixed `auth_`
- Add (server-side-only) backwards compatibility for the old setting names, with deprecation notices
- Copy `site_settings` database records to the new names
- Rename relevant translation keys
- Update relevant translations
This commit does **not** aim to:
- Rename any Ruby classes or methods. This might be done in a future commit
- Change any URLs. This would break existing integrations
- Make any changes to the protocol. This would break existing integrations
- Change any functionality. Further normalization across DiscourseConnect and other auth methods will be done separately
The risks are:
- There is no backwards compatibility for site settings on the client-side. Accessing auth-related site settings in JavaScript is fairly rare, and an error on the client side would not be security-critical.
- If a plugin is monkey-patching parts of the auth process, changes to locale keys could cause broken error messages. This should also be unlikely. The old site setting names remain functional, so security-related overrides will remain working.
A follow-up commit will be made with a post-deploy migration to delete the old `site_settings` rows.
This PR allows entering a float value for topic timers e.g. 0.5 for 30 minutes when entering hours, 0.5 for 12 hours when entering days. This is achieved by adding a new column to store the duration of a topic timer in minutes instead of the ambiguous both hours and days that it could be before.
This PR omits the post migration to delete the duration column in topic timers; it will be done in a subsequent PR to ensure that no data is lost if the UPDATE query to set duration_minutes fails.
I have to keep the old `duration` keyword in `set_or_create_topic_timer` for backwards compatibility; it will be removed at a later date once plugins are updated.
This is an attempt to simplify the logic around dismissing new topics, so that one solution works in all places: dismiss all new, dismiss new in a specific category, or even in a specific tag.
This PR revamps the topic timer UI, using the time shortcut selector from the bookmark modal.
* Fixes an issue where the duration of hours/days after last reply or auto delete replies was not enforced to be > 0
* Fixes an issue where the timer dropdown options were not reloaded correctly if the topic status changed in the background (use `MessageBus` to publish topic state in the open/close timer jobs)
* Moved the duration input and the "based on last post" option from the `future-date-input` component, as it was only used for topic timers. Also moved out the notice that is displayed which was also only relevant for topic timers.