* DEV: Show warning message when using ember css selectors
When editing the theme CSS via the admin UI, a warning message
will be displayed if it detects that the `#emberXXX` or `.ember-view`
CSS selectors are being used. These are dynamic selectors that Ember
generates; they can change, so they should not be used.
* Update error message text to be more helpful
* Display a warning instead of erroring out
This allows the theme to still be saved, but a warning is displayed.
Updated the tests to check for the error message.
Updated the pre tag CSS so that it wraps long messages.
If no email is provided, email_valid should be set to false, so that
Discourse can prompt the user for an email and verify it.
This fixes signups via twitter for accounts with no email address.
Previously we would always take the first image in a post to use as the
thumbnail. On media-heavy sites, users may want to manually select a
specific image as the topic thumbnail. This commit allows this to be
done via a `|thumbnail` attribute in markdown.
For example, in this case, bbb would be chosen as the thumbnail:
```
![alttext|100x100](upload://aaa)
![alttext|100x100|thumbnail](upload://bbb)
```
Follow up https://github.com/discourse/discourse/pull/11968
Dismiss all new topics using the same DismissTopicService. In addition, MessageBus receives exact topic ids which should be marked as `seen`.
The bug was mentioned on meta https://meta.discourse.org/t/users-are-seeing-handling-of-unhandled-tag-again/155367
It was related to users who are watching a specific topic. In that case, when a hidden tag was added to or removed from the topic, they were notified by `NotifyTagChangeJob`.
That job should take hidden tags into consideration. If all changed tags belong to a hidden group, it should exclude users who do not belong to that group.
At the same time, if a tag visible to everyone is added or removed, users watching the topic should still be notified.
You can use `discourse restore --location=local FILENAME` if you want to restore a backup that is stored locally even though the `backup_location` has the value `s3`.
* FEATURE: Ability to dismiss new topics in a specific tag
Follow up of https://github.com/discourse/discourse/pull/11927
Using the same mechanism to disable new topics in a tag.
* FIX: respect when category and tag are selected
* DEV: Escape backslashes in curl example
We need to escape these backslashes otherwise they get filtered out when
generating the api docs.
* FIX: uniqItems should be uniqueItems
The 'Discourse SSO' protocol is being rebranded to DiscourseConnect. This should help to reduce confusion when 'SSO' is used in the generic sense.
This commit aims to:
- Rename `sso_` site settings. DiscourseConnect specific ones are prefixed `discourse_connect_`. Generic settings are prefixed `auth_`
- Add (server-side-only) backwards compatibility for the old setting names, with deprecation notices
- Copy `site_settings` database records to the new names
- Rename relevant translation keys
- Update relevant translations
This commit does **not** aim to:
- Rename any Ruby classes or methods. This might be done in a future commit
- Change any URLs. This would break existing integrations
- Make any changes to the protocol. This would break existing integrations
- Change any functionality. Further normalization across DiscourseConnect and other auth methods will be done separately
The risks are:
- There is no backwards compatibility for site settings on the client side. Accessing auth-related site settings in JavaScript is fairly rare, and an error on the client side would not be security-critical.
- If a plugin is monkey-patching parts of the auth process, changes to locale keys could cause broken error messages. This should also be unlikely. The old site setting names remain functional, so security-related overrides will remain working.
A follow-up commit will be made with a post-deploy migration to delete the old `site_settings` rows.
This PR allows entering a float value for topic timers e.g. 0.5 for 30 minutes when entering hours, 0.5 for 12 hours when entering days. This is achieved by adding a new column to store the duration of a topic timer in minutes instead of the ambiguous mix of hours and days that it could be before.
This PR has omitted the post migration to delete the duration column in topic timers; it will be done in a subsequent PR to ensure that no data is lost if the UPDATE query to set duration_minutes fails.
I have to keep the old duration keyword in set_or_create_topic_timer for backwards compatibility; it will be removed at a later date after plugins are updated.
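As an illustration of the conversion (a minimal sketch with a hypothetical helper, not the actual implementation), the float input is normalized into minutes depending on the unit being entered:
```ruby
# Hypothetical helper, not the actual implementation: normalize a float
# entered as hours or days into whole minutes.
def duration_minutes_for(value, unit)
  case unit
  when :hours then (value * 60).round       # 0.5 hours -> 30 minutes
  when :days  then (value * 24 * 60).round  # 0.5 days  -> 720 minutes
  else raise ArgumentError, "unknown unit: #{unit.inspect}"
  end
end

duration_minutes_for(0.5, :hours) # => 30
duration_minutes_for(0.5, :days)  # => 720
```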
If a list of email addresses is pasted into a group’s Add Members form
that has one or more email addresses of users who already belong to the
group and all other email addresses are for users who do not yet exist
on the forum then no invites were being sent. This commit ensures that
we send invites to new users.
This is an attempt to simplify the logic around dismissing new topics so that one solution works in all places: dismiss all new, dismiss new in a specific category, or even in a specific tag.
This moves all the rate limiting for user second factor (based on `params[:second_factor_token]` existing) to one place, which rate limits by IP and also by username if a user is found.
A more general, lower-level change in addition to #11950.
Most code paths already check if SSO is enabled or if local logins are disabled before trying to create an email invite.
This is a safety net to ensure no invalid invites sneak by.
Also includes:
FIX: Don't allow to bulk invite when SSO is on (or when local logins are disabled)
This mirrors can_invite_to_forum? and other email invite code paths.
Issue originally reported in https://meta.discourse.org/t/bypass-sso-by-adding-unkown-email-to-group/177339
Inviting people via email address to a group when SSO is enabled (or local logins are disabled) led to a situation where user records were being created bypassing single sign-on.
We already prevent that in most places. This adds required checks to `GroupsController`.
This pull request contains a series of improvements to group
settings and member management, such as:
- Showing which users have set a group as primary
- Moving similar settings together under Effects
- Adding bulk select and actions to members page
This PR revamps the topic timer UI, using the time shortcut selector from the bookmark modal.
* Fixes an issue where the duration of hours/days after last reply or auto delete replies was not enforced to be > 0
* Fixed an issue where the timer dropdown options were not reloaded correctly if the topic status changes in the background (use `MessageBus` to publish topic state in the open/close timer jobs)
* Moved the duration input and the "based on last post" option from the `future-date-input` component, as it was only used for topic timers. Also moved out the notice that is displayed which was also only relevant for topic timers.
Disabling shared drafts used to leave topics in an inconsistent state
where they were not displayed as shared drafts and thus there was no
way of publishing them. Moreover, they were accessible just to users
who have permissions to create shared drafts.
This commit adds another permission check that is used for most
operations and the old can_create_shared_draft? remains used just when
creating a new shared draft.
Using "UrlHelper#absolute" returns the S3 URL, which is fine for the client because it modifies it to use the CDN instead. On the other hand, this replacement doesn't happen when the URL is server-side rendered, returning a 403 for the system's avatar.
* document user endpoints, allow for empty request/response bodies
* document more user endpoints, improve debugging output if no details are specified
* document some more user endpoints
* minor cleanup
* FIX: flakey tests due to bad regex
Adds a new column/setting to groups, allow_unknown_sender_topic_replies, which is default false. When enabled, this scenario is allowed via IMAP:
* OP sends an email to the support email address which is synced to a group inbox via IMAP, creating a group topic
* Group user replies to the group topic
* An email notification is sent to the OP of the topic via GroupSMTPMailer
* The OP has several email accounts and the reply is sent to all of them, or they forward their reply to another email account
* The OP replies from a different email address than the one they originally used (gloria@gmail.com instead of gloria@hey.com for example)
* A new staged user is created, the new reply is accepted and added to the topic, and the staged user is added to the topic's allowed users
Without allow_unknown_sender_topic_replies enabled the new reply creates an entirely new topic (because the email address it is sent from is not previously part of the topic email chain).
This PR adds security_last_changed_at and security_last_changed_reason to uploads. This has been done to make it easier to track down why an upload's secure column has changed and when. This necessitated a refactor of the UploadSecurity class to provide reasons why the upload security would have changed.
As well as this, a source is now provided from the location which called for the upload's security status to be updated, as there are several (e.g. post creator, topic security updater, rake tasks, manual change).
Not when doing a site-wide search like we do in the Directory.
This solves the following spec failure:
```
1) DirectoryItemsController with data finds user by name
   Failure/Error: expect(json['directory_items'].length).to eq(1)
     expected: 1
          got: 0
     (compared using ==)
   # ./spec/requests/directory_items_controller_spec.rb:88:in `block (3 levels) in <main>'
   # ./spec/rails_helper.rb:271:in `block (2 levels) in <top (required)>'
   # ./bundle/ruby/2.7.0/gems/webmock-3.11.1/lib/webmock/rspec.rb:37:in `block (2 levels) in <top (required)>'
```
When doing a user search (e.g. when mentioning a user) we will not prioritize
users who haven't been seen in over a year.
REFACTOR the user-search specs to be more precise regarding the ordering
This cookie is only used during login. Having it persist after that can
cause some unusual behavior, especially for sites with short session
lengths.
We were already deleting the cookie following a new signup, but not for
existing users.
This commit moves the cookie deletion logic out of the erb template, and
adds logic and tests to ensure it is always deleted consistently.
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
A simplified version of the logic used in the function before my fix is as follows:
```ruby
result = []
things = [0, 1, 2, 3]
max_values = 2
every = (things.size.to_f / max_values).ceil
things.each_with_index do |t, index|
  next unless (t % every) === 0
  result << t
end
p result # [0, 2]
# 3 doesn’t get included
```
The problem is that if you get unlucky twice you won't get the last tuple(s) and might get a very erroneous date.
Double unlucky:
- the last tuple's index % the computed `every` !== 0, so you don't get the last tuple
- the last tuple is related to a post with a very different date than the previous tuples (one year of difference in our case)
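A hypothetical sketch of the kind of fix this implies (not the actual patch): always keep the last element, even when it does not land on the sampling step.
```ruby
# Hypothetical sketch, not the actual fix: sample every Nth element but
# always include the last one so the final tuple is never dropped.
things = [0, 1, 2, 3]
max_values = 2
every = (things.size.to_f / max_values).ceil

result = things.each_with_index.select do |t, index|
  (index % every).zero? || index == things.size - 1
end.map(&:first)

p result # [0, 2, 3] - the last tuple is kept
```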
Lots of changes but it's mostly a refactoring.
The interesting part that was fixed is the `load_problem_<model>_ids` methods.
They will now return records with no associated search data so they can be properly indexed for search.
This "bad" state usually happens after a migration.
- Read in schemas from actual json files instead of a ruby hash. This is
helpful because we will be automatically generating .json schema files
from json responses and don't want to manually write ruby hash schema
files.
- Create a helper method for rspec schema validation tests to dry up code
* DEV: Add schema checking to api doc testing
This commit improves upon rswag which lacks schema checking. rswag
really only checks that the HTTP status matches, but this change adds
in the json-schema_builder gem which also has schema validation.
Now we can define schemas for each of our requests/responses in the
`spec/requests/api/schemas` directory which will make our documentation
specs a lot cleaner.
If we update a serializer by either adding or removing an attribute the
tests will now fail (this is a good thing!). Also if you change the type
of an attribute say from an array to a string the tests will now fail.
This will help significantly with keeping the docs in sync with actual
code changes! Now if you change how an endpoint will respond you will
have to update the docs too in order for the tests to pass. :D
This PR is inspired by:
https://www.tealhq.com/post/how-teal-keeps-their-api-tests-and-documentation-in-sync
* Swap out json schema validator gem
Swapped out the outdated json-schema_builder gem with the json_schemer
gem.
* Add validation fields to schema
In order to have "strict" validation we need to add
`additionalProperties: false` to the schema, and we need to specify
which attributes are required.
Updated the debugging test output to print out the error details if
there are any.
Improvements to make console access to IncomingEmail more pleasant, and stopping certain IMAP logs from landing in the DB because they just create too much noise.
This adds a new table UserNotificationSchedules which stores Monday-Friday start and end times during which each user would like to receive notifications (with a boolean `enabled` to remove the use of the schedule). There is then a background job that runs every day and creates do_not_disturb_timings for each user with an enabled notification schedule. The job schedules timings 2 days in advance. The job is designed so that it can be run at any point in time, and it will not create duplicate records.
When a users saves their notification schedule, the schedule processing service will run and schedule do_not_disturb_timings. If the user should be in DND due to their schedule, the user will immediately be put in DND (message bus publishes this state).
The UI for a user's notification schedule is in user -> preferences -> notifications. By default every day is 8am - 5pm when first enabled.
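A minimal sketch of the idempotent scheduling step (assuming the `do_not_disturb_timings` association; not the actual job code):
```ruby
# Minimal sketch, not the actual job: create the DND timing for a schedule
# window only if it does not already exist, so repeated runs are safe.
def ensure_dnd_timing!(user, starts_at, ends_at)
  user.do_not_disturb_timings.find_or_create_by!(
    starts_at: starts_at,
    ends_at: ends_at
  )
end
```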
This should make it easier to track down how the incoming email was created, which is one of four locations:
* The POP3 poller (which picks up reply-via-email replies)
* The admin email controller #handle_mail (which is where hosted mail is sent)
* The IMAP sync tool
* The group SMTP mailer, which sends emails when replying to IMAP topics, pre-emptively creating IncomingEmail records to avoid double syncing
There was an issue that occurred with this order of operations:
* An IMAP topic was created by emailing a group
* A second user was invited to the topic (not the OP and not the group)
* A user with access to the group replies to the topic
* The second user receives a user_private_message notification email because of their involvement in the topic
* The second user replies to the email via email
This new reply would then go and notify the other group PM users, except for those who emailed the group topic directly, which is handled via the group SMTP mailer. However, because the new post already has an incoming email (it was parsed via Email::Receiver from POP3), the group SMTP section of the post alerter is skipped, and the group's email address is not ignored for the user_private_message notification.
This PR fixes it so the group is not ever sent an email via the PM notification. This is important because any new emails in the group's IMAP inbox will be picked up by the Imap::Sync code and created as a new topic which is not at all desirable.
Also in this PR I split up the specs a bit more for group SMTP in the post alerter to make them easier to read and they each only test one thing.
As per @davidtaylorhq's comment at 6e2be3e#r46069906, this fixes an oversight where, if login_required is enabled and an anon user follows a confirm-new-email link, they are forced to log in, which is not what the intent of #10830 was.
It returned a 429 error code with a 'Retry-After' header if a
RateLimiter::LimitExceeded was raised and unhandled, but the header was
missing if the request was limited in the 'RequestTracker' middleware.
Moves the topic timer jobs from being scheduled ahead of time with enqueue_at to a 5 minute scheduled run like bookmark reminders, in a new job called Jobs::EnqueueTopicTimers. Backwards compatibility is maintained by checking if an existing topic timer job is enqueued in sidekiq for the timer, and if it is not running it inside the new job.
The functionality to close/open a topic if it is in the opposite state still remains in the after_save block of TopicTimer, with further commentary, which is used for Open/Close Temporarily.
This also removes the ensure_consistency! functionality of topic timers as it is no longer needed; the new job will always pick up the timers because they are not stored in a fragile state of sidekiq.
This adds a safe default, when the pop3 polling option is set up, to not process pop3 emails that are more than 1 week old. This is to avoid the situation where an older mailbox is used, which causes us to go and process all emails in that mailbox, sending out error emails to the senders of emails which cannot be parsed successfully.
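A minimal sketch of the age cutoff (hypothetical helper, not the actual poller code):
```ruby
require "time"

MAX_AGE = 7 * 24 * 60 * 60 # one week, in seconds

# Hypothetical helper, not the actual poller: skip mail older than the cutoff.
def too_old_to_process?(mail_date, now = Time.now)
  return false if mail_date.nil?
  (now - mail_date) > MAX_AGE
end

too_old_to_process?(Time.now - 2 * 24 * 60 * 60)  # => false (2 days old)
too_old_to_process?(Time.now - 10 * 24 * 60 * 60) # => true  (10 days old)
```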
Admins can now edit translations in different languages without having to change their locale. We display a warning when there's a fallback language set.
Transient errors in the migration are ignored, silently corrupting
data. The migration is also incomplete and misses many sources of
uploads, which will lead to an incorrect expectation of independence
from the remote object storage after announcing that the migration
was successful, regardless of whether transient errors permanently
corrupted the data.
Remove this migration until such time as it is re-written to
follow the same pattern as the migration to S3, moving the
core logic out of the task.
Fixes bug introduced by bd25627198
What happens is we send notifications to everyone involved in the group inbox topic about new posts, however we pass the param `skip_send_email_to: email_addresses`. In the above commit I removed the group email address from this `email_addresses` array. This breaks the IMAP inbox because we email the group with the reply, and the IMAP sync tool finds this email and opens a new unrelated topic with it.
This results in some fun disasters if allowed to happen. For now, just issue an oblique error message; a localized message will be added on the client.
This PR fixes a race condition with the IMAP notification code. In the `Email::Receiver` we call the `NewPostManager` to create the post and enqueue jobs and sends alerts via `PostAlerter`. However, if the post alerter reaches the `notify_pm_users` and the `group_notifying_via_smtp` method _before_ the incoming email is updated with the post and topic, we unnecessarily send a notification to the person who just posted. The result of this is that the IMAP syncer re-imports the email sent to the user about their own post, which looks like this in the group inbox:
To fix this, we skip the jobs enqueued by `NewPostManager` and only enqueue them with `PostJobsEnqueuer` manually _after_ the incoming email record has been updated with the post and topic.
Other improvements:
* Moved code to calculate email addresses from `IncomingEmail` records into the topic, with a group passed in, for easier testing and debugging. It is not the responsibility of the post alerter to figure this stuff out.
* Add shortcut methods on `IncomingEmail` to split or provide an empty array for to and cc addresses to avoid repetition.
Added GET user by external_id to the api docs.
Fixed `/users/{username}` docs to be `/u/{username}`
Extracted out common user response into a shared helper.
Feature for the `Must Approve Users` setup. When a user is rejected, a staff member can optionally set a reason for audit purposes. In addition, a feedback email can be sent to the user.
Meta: https://meta.discourse.org/t/account-rejection-email/103112/8
* FEATURE: Import attachments
* FEATURE: Add support for importing multiple forums in one
* FEATURE: Add support for category and tag mapping
* FEATURE: Import groups
* FIX: Add spaces around images
* FEATURE: Custom mapping of user rank to trust levels
* FIX: Do not fail import if it cannot import polls
* FIX: Optimize existing records lookup
Co-authored-by: Gerhard Schlager <mail@gerhard-schlager.at>
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
It used to change the category of the topic, instead of the destination
category (topic.category_id instead of topic.shared_draft.category_id).
The shared drafts controls were displayed only if the current category
matched the 'shared drafts category', which was not true for shared
drafts that had their categories changed (affected by the previous bug).
This PR adds functionality for the IMAP sync code to detect if a UID that is missing from the mail group mailbox is in the Spam/Junk folder for the mail account, and if so delete the associated Discourse topic. This is identical to what we do for emails that are moved to Trash.
If an email is missing but not in Spam or Trash, then we mark the incoming email record with imap_missing: true. This may be used in future to further filter or identify these emails, and perhaps go hunting for them in the email account in bulk.
Note: This adds some code duplication because the trash and spam email detection and handling is very similar. I intend to do more refactors/improvements to the IMAP sync code in time because there is a lot of room for improvement.
If a group you're a member of is invited to a PM, you can no longer remove yourself from it. This means you won't be able to remove the message from your inbox, and even if you archive it, it'll come back once someone replies.
Splits the `ToggleTopicClosed` job into two distinct `OpenTopic` and `CloseTopic` jobs to make the code clearer. The old job cannot be deleted yet because of outstanding sidekiq schedules, so a todo has been added to do so later this year.
Also replaced mentions of `topic_status_update` with `topic_timer` in some files, because the `topic_status_update` model is obsolete and replaced by topic timer.
Added some shortcut methods for checking if a topic is open/whether a user can change an open topic.
If the sliding window size is N seconds, then a moment at the Nth second
should be considered as the moment outside of the sliding window.
Otherwise, if the sliding window is already full, at the Nth second,
a new call wouldn't be allowed, but a time to wait before the next call
would be equal to zero, which is confusing.
In other words, the end of the time range shouldn't be included in the
sliding window.
Let's say we start at the second 0, and the sliding window size is 10
seconds. In the current version of rate limiter, this sliding window will
be considered as a time range [0, 10] (including the end of the range),
which actually is 11 seconds in length.
After this fix, the time range will be considered as [0, 10)
(excluding the end of the range), which is exactly 10 seconds in length.
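An illustrative sketch of the boundary (not the actual RateLimiter code):
```ruby
WINDOW = 10 # seconds

# Illustrative sketch, not the actual RateLimiter: a past call counts against
# the limit only while it is strictly less than WINDOW seconds old, so the
# end of the range is excluded from the sliding window.
def counts_against_limit?(call_time, now)
  now - call_time < WINDOW
end

counts_against_limit?(0, 9.9) # => true  (still inside the window)
counts_against_limit?(0, 10)  # => false (exactly WINDOW seconds old, outside)
```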
It was a problem because during this operation only the first frame
is kept. This commit removes the alternative solution to check if a GIF
image is animated.
* DEV: TopicTrackingState calls should happen in the background
It was observed that calling TopicTrackingState on popular topics could result in a large number of calls to redis, resulting in slow response times when posting replies.
These calls should be moved to a background job.
* DEV: PostUpdateTopicTrackingState should execute on default queue
Those fail on the buggy i18n release (1.8.6) and pass on 1.8.5, 1.8.7 (the revert release), and with the second stab at thread safety on the current master (63a79cb929)
Fixes a rare race condition causing the `Imap::Sync` class to create an incoming email and associated post/topic, which then kicks off the PostAlerter to notify others in the PM about a reply in the topic, but for the OP which is not necessary (because the person emailing the IMAP inbox already knows about the OP). Basically, we should never be sending the group SMTP email for the first post in a topic.
Also in this PR:
* Custom attribute accessors for the to/from/cc addresses on `IncomingEmail`, to parse them from an array to a joined string so the logic for this is only in one place.
* Store extra detail against the `IncomingEmail` created in `GroupSmtpMailer`
* regex test Mail header Reply-To as string instead of Field, which fixes `warning: deprecated Object#=~ is called on Mail::Field; it always returns nil`
* Add DEBUG_IMAP to log all IMAP logs as warnings for easier debugging
* Changed the Rails logging to `ImapSyncLog` in the `GroupSmtpMailer`
* FIX: Inline onebox should use encoding from Content-Type header when present
* Use Regexp.last_match(1)
Signed-off-by: OsamaSayegh <asooomaasoooma90@gmail.com>
- Only initialize the S3Helper when needed
- Skip initializing the S3Helper for S3Store#cdn_url
- Allow cook_url to be passed a `local` hint to skip unnecessary checks
Posts from topics with an 'auto delete replies timer' with more than
skip_auto_delete_reply_likes likes will no longer be deleted. If 0,
all posts will be deleted.
Final sigma is not lower cased correctly in Ruby causing issues with routing.
This works around the issue by downcasing all usernames containing a sigma using JS.
Googlebot handles no-index headers very elegantly. It advises to leave as many routes as possible open and uses headers for high fidelity rules regarding indexes.
Discourse adds special `x-robot-tags` noindex headers to users, badges, groups, search and tag routes.
Following up on b52143feff we now have it so Googlebot gets special handling.
Rest of the crawlers get a far more aggressive disallow list to protect against excessive crawling.
My initial implementation didn't consider this case. We should skip imported users if the "imported_id" field is present, even if there are other custom fields.
You can now use `@me` to search for posts created by yourself, this is particularly handy if you have a long username.
`@me rainbow` will find all posts you created with the word rainbow.
Also cleans up test suite so it has no warnings.
`/srv/status` routes should not be cached at all. Also, we want to
decouple the route from Redis, which `AnonymousCache` relies on. The
`/srv/status` route should continue to return a success response even if Redis
is down.
This is an edge-case of 9fb3629. An admin could set the shared draft category to one where both TL2 and TL3 users have access but only give shared draft access to TL3 users. If something like this happens, we need to make sure that TL2 users won't be able to see them, and they won't be listed on latest.
Before this change, `SharedDrafts` were lazily created when a destination category was selected. We now create it alongside the topic and set the destination to the same shared draft category.
* FEATURE: Allow category group moderators to list/unlist topics
If enabled via SiteSettings, a user belonging to a group which has been granted category group moderator privileges should be able to list/unlist topics belonging to the appropriate category.
It used to insert block Oneboxes inside paragraphs, which resulted in
invalid HTML. This needed an additional parsing pass to remove empty
paragraphs, and the resulting HTML could still be invalid.
This commit ensures that block Oneboxes are inserted correctly, by
splitting the paragraph containing the link and putting the block
between the two halves. Paragraphs left with nothing but whitespace will
be removed.
Follow up to 7f3a30d79f.
* FIX: 'false' value was treated as a truthy value
For example, latest.json?no_subcategories=false used to have set
no_subcategories to the string value of 'false', which is not false.
* DEV: Remove dead code
* FIX: Redirect to /none under the right conditions
These conditions are:
- neither /all or /none present
- only for default filter
* FIX: Build correct topic list filter
/none was never added to the topic list filter
* FIX: Do not show count for subcategories if 'none' category
* FIX: preload_key must contain /none if no_subcategories
25563357 moved the logout redirect logic from the client-side to the server-side. Unfortunately the login_required check was lost during the refactoring, which meant that non-login-required sites would redirect to `/login` after logout, and immediately restart the login process. Depending on the SSO implementation, that can make it impossible for users to log out cleanly.
This commit restores the login_required check, and prevents the potential redirect loop.
This commit is dedicated to https://twitter.com/FiloSottile/status/1335666583126073354 for reminding me that like timestamps are valuable data.
Likes additionally include the topic_id and post_number of the acted post, to aid in analysis. Flag export does not include the disposition by staff.
When jobs are enqueued inside a transaction, it's possible that they will be executed before the necessary data is available in the database. This commit ensures all jobs are enqueued in an ActiveRecord after_commit hook.
One potential downside here is if the job fails to enqueue, the transaction will no longer be aborted. However, the chance of that happening is reasonably low, and the impact is significantly lower than the current issue where jobs are scheduled before their data is ready.
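A minimal sketch of the pattern (generic model and a hypothetical `:process_post` job, not the actual Discourse code): enqueueing from an after_commit hook means the job only runs once the data is visible to other database connections.
```ruby
# Minimal sketch with a hypothetical :process_post job: the enqueue happens
# in after_commit, so by the time the job runs the surrounding transaction
# has been committed and the record is readable from another connection.
class ExamplePost < ActiveRecord::Base
  after_commit :enqueue_processing, on: :create

  private

  def enqueue_processing
    Jobs.enqueue(:process_post, post_id: id)
  end
end
```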
Previously it matched the behavior of standard ActiveRecord after_commit callbacks. They do not work well within `joinable: false` nested transactions. Now `DB.after_commit` callbacks will only be run when the outermost transaction has been committed.
Tests always run inside transactions, so this also introduces some logic to run callbacks once the test-wrapping transaction is reached.
We should always hide user_id in response when `hide_email_address_taken` setting is enabled. Currently, it can be used to determine if the email was used or not.
This ensures that users are only served cached content in their own language. This commit also refactors to make use of the `Discourse.cache` framework rather than direct redis access
* DEV: More robust processing of URLs
The previous `UrlHelper.encode_component(CGI.unescapeHTML(UrlHelper.unencode(uri)))` method would naively process URLs, which could result in a badly formed response.
`Addressable::URI.normalized_encode(uri)` appears to deal with these edge-cases in a more robust way.
* DEV: onebox should use UrlHelper
* DEV: fix spec
* DEV: Escape output when rendering local links
The notification is created by a job. If the job is evaluated before the changes are committed to the database, the notification will have an incorrect URL.
Therefore, the job should be enqueued in the enqueue_jobs method, which is triggered after the transaction:
```ruby
Topic.transaction do
move_posts_to topic
end
add_allowed_users(participants) if participants.present? && @move_to_pm
enqueue_jobs(topic)
```
I improved the specs a little bit to ensure that the destination topic_id is set. However, those tests pass even without the code improvements. I couldn't find an easy way to "delay" the database transaction.
Meta: https://meta.discourse.org/t/bug-with-notifications-for-moved-posts/168937
You can let non-staff users use shared drafts by modifying the `shared_drafts_min_trust_level` site setting. These users must have access to the shared draft category.
* FEATURE: Allow Category Group Moderators to edit topic titles
Adds category group moderators to the topic guardian’s `can_edit` method.
The value of `can_edit` is returned by the topic view serializer, and this value determines whether the current user can edit the title/category/tags of the topic directly (which category group moderators could already do by editing the first post of a topic).
Note that the value of `can_edit` is now always returned by the topic view serializer (ie, for both true and false values) to cover the case where a topic is moved out of a category that a category group moderator has permissions on, so that when the topic is reloaded the UI picks up that `can_edit` is now false, and thus the edit icon should no longer be displayed.
* DEV: Add a comment explaining why `can_edit` is always returned
When the invite was being redeemed and the ReviewableUser record status
for the invited user was not pending, an error was being raised.
This commit makes sure that we only look for a ReviewableUser
record with status pending and update it to approved.
If a category and a sub-category have the same slug, adding a background to one of them will also show it on the other one. This was introduced in 8e3f667 to fix a discrepancy, which was later fixed in 214b4c3.
* FEATURE: onebox for local categories
This commit adjusts the category onebox to look more like the category boxes do on the category page.
Co-authored-by: Jordan Vidrine <jordan@jordanvidrine.com>
Force pushing a commit to a theme repository used to break the updater,
because the system was not able to count the commits behind the old and
new version. This operation failed because a force push deleted the old
commits.
The user was prompted with a simple "500 server error" message.
- Display reason for validation error when logging in via an authenticator
- Fix email validation handling for 'Discourse SSO', and add a spec
Previously, validation errors (e.g. blocked or already-taken emails) would raise a generic error with no useful information.
When calculating whether the attached uploads went over the SiteSetting.email_total_attachment_size_limit_kb.kilobytes limit, we were using the original_upload for the calculation instead of the actual attached upload, which will be smaller in most cases because it can be an optimized image.
At the moment, when filtering by group, the directory will unconditionally return the current user at the top of the list. This is quite unexpected, given that the user is deliberately trying to filter the list. This commit makes sure the 'include current user' logic only triggers for unfiltered directories
This adds a new min_trust_level_to_allow_ignore site setting that enables admins to control the point at which a user is allowed to ignore other users.
After the search term is parsed for advanced search filters, the term may
become empty. Later, the same term will be passed to Discourse.route_for
which will raise an ArgumentError.
> URI(nil)
ArgumentError: bad argument (expected URI object or URI string)
* FEATURE: display error if Oneboxing fails due to HTTP error
- display warning if onebox URL is unresolvable
- display warning if attributes are missing
* FEATURE: Use new Instagram oEmbed endpoint if access token is configured
Instagram requires an Access Token to access their oEmbed endpoint. The requirements (from https://developers.facebook.com/docs/instagram/oembed/) are as follows:
- a Facebook Developer account, which you can create at developers.facebook.com
- a registered Facebook app
- the oEmbed Product added to the app
- an Access Token
- The Facebook app must be in Live Mode
The generated Access Token, once added to SiteSetting.facebook_app_access_token, will be passed to onebox. Onebox can then use this token to access the oEmbed endpoint to generate a onebox for Instagram.
* DEV: update user agent string
* DEV: don’t do HEAD requests against news.yahoo.com
* DEV: Bump onebox version from 2.1.5 to 2.1.6
* DEV: Avoid re-reading templates
* DEV: Tweaks to onebox mustache templates
* DEV: simplified error message for missing onebox data
* Apply suggestions from code review
Co-authored-by: Gerhard Schlager <mail@gerhard-schlager.at>
Allowing the editing of remote themes has been something Discourse has advised against for some time. This commit removes the ability to edit or upload files to remote themes from Admin > Customize to enforce the recommended practice.
* FIX: Store Reviewable's force_review as a boolean.
Using the `force_review` flag raises the score to hit the minimum visibility threshold. This strategy turned out to be ineffective on sites with a high number of flags, where these values could rapidly fluctuate.
This change adds a `force_review` column on the reviewables table and modifies the `Reviewable#list_for` method to show these items when passing the `status: :pending` option, even if the score is not high enough. ReviewableQueuedPosts and ReviewableUsers are always created using this option.
CookedPostProcessor replaces all large images with their optimized
versions, but for GIF images the optimized version is limited to the first
frame only. This caused animations in cooked posts to require a click
to bring up the lightbox and start playing.
Now that we have dark logo settings in core, we can relatively easily ensure that static pages (such as the 404 page) use a logo that is appropriate for the given light or dark color scheme.
Animated emojis were converted to static images. This commit moves the
responsibility onto site admins to optimize their animated emojis before
uploading them (gifsicle is no longer used).
Plugin client.en.yml and server.en.yml can now be client/server-(1-100).en.yml. 1 is the lowest priority, and 100 is the highest priority. This allows plugins to set their priority higher than other plugins, so that they can override each other's translations.
It used to simply say "title is invalid" without giving any hint what
the problem could be. This commit adds different errors messages for
all caps titles, low entropy titles or titles with very long words.
We updated the versions of moment and moment-timezone, as our current versions are outdated, making Discourse Dates broken in places where the timezone had updates, like here in Brazil.
This also updates highlightJS to the latest version and corrects a test that relied on a locale no longer supported in
moment.
Previously, `/u/by-external/{id}` would only work for 'Discourse SSO' systems. This commit adds a new 'provider' parameter to the URL: `/u/by-external/{provider}/{id}`
This is compatible with all auth methods which have migrated to the 'ManagedAuthenticator' pattern. That includes all core providers, and also popular plugins such as discourse-oauth2-basic and discourse-openid-connect.
The new route is admin-only, since some authenticators use sensitive information like email addresses as the external id.
This commit adds an additional find_user_by_email hook to ManagedAuthenticator so that GitHub login can continue to support secondary email addresses
The github_user_infos table will be dropped in a follow-up commit.
This is the last core authenticator to be migrated to ManagedAuthenticator 🎉
PostDestroyer should accept the option to permanently destroy post from the database. In addition, when the first post is destroyed it destroys the whole topic.
Currently, that feature is limited to private messages and creator of the post. It will be used by discourse-encrypt to explode encrypted private messages.
When embedding secure images which have been oneboxed, we checked to see if the image's parent's parent had the class onebox-body. This was not always effective as if the image does not get resized/optimized then it does not have the aspect-image div wrapping it. This would cause the image to embed in the email but be huge.
This PR changes the check to see if any of the image's ancestors have the class onebox-body, or if the image has the onebox-avatar class to account for variations in HTML structure.
Per Google, sites are encouraged to upgrade from Universal Analytics v3 `analytics.js` to v4 `gtag.js` for Google Analytics tracking. We're giving admins the option to stay on the v3 API or migrate to v4. Admins can change the implementation they're using via the `ga_version` site setting. Eventually Google will deprecate v3, but our implementation gives admins the choice on what to use for now.
We chose this implementation to make the change less error prone, as many site admins are using custom events via the v3 UA API. With the site setting defaulted to `v3_analytics`, site analytics won't break until the admin is ready to make the migration.
Additionally, in the v4 implementation, we do not enable automatic pageview tracking (on by default in the v4 API). Instead we rely on Discourse's page change API to report pageviews on transition to avoid double-tracking.
It used to simply say "not allowed" without giving any hint what the
problem could be. This commit refactors the code and tries to improve
readability.
There are issues around displaying images on published pages when secure media is enabled. This PR temporarily makes it appear as if published pages are enabled if secure media is also enabled.
* FEATURE - allow category group moderators to delete topics
* Allow individual posts to be deleted
* DEV - refactor for new `can_moderate_topic?` method
Previous to this change our anonymous rate limits acted as a throttle.
The new implementation means we now also count rate-limited requests towards
the limit.
This means that if an anonymous user is hammering the server, it will not be
able to get any requests through until its traffic subsides.
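An illustrative sketch of the difference (not the actual middleware): every attempt is recorded, including rejected ones, so a client that keeps hammering stays blocked until its traffic stops.
```ruby
# Illustrative sketch, not the actual middleware: record every attempt,
# including rejected ones, so hammering keeps the window full.
def allowed?(timestamps, now, limit:, window:)
  timestamps.reject! { |t| now - t >= window } # drop attempts outside the window
  timestamps << now                            # count this attempt regardless of outcome
  timestamps.size <= limit
end

calls = []
allowed?(calls, 0, limit: 2, window: 10) # => true
allowed?(calls, 1, limit: 2, window: 10) # => true
allowed?(calls, 2, limit: 2, window: 10) # => false (still counted toward the limit)
```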
When the linked topic is created we'll not hardcode the topic title and
let onebox work its magic instead so that the title can be updated
automatically.
Users could be silenced or suspended by two staff members at the same time and
would not be aware of it. This commit shows an error message if another penalty
has been applied.
- IgnoredUser records should all now have an expiring_at value. This commit enforces that in the DB, and fixes any corrupt rows
- Changes to the ignored user list are now handled by the `/u/{username}/notification_level` endpoint. This allows setting expiration dates on the ignore. This commit removes the old logic for saving a list of usernames in the user preferences.
- Many specs were calling `IgnoredUser.create`. This commit changes them to use `Fabricate(:ignored_user)` for consistency
A site owner attempting to use both the email_subject site setting and translation overrides for normal post notification
email subjects would find themselves frustrated at the lack of template argument parity.
Make all the variables available for translation overrides by adding the subject variables to the custom interpolation keys list and applying them.
Reported at https://meta.discourse.org/t/customize-subject-format-for-standard-emails/20801/47?u=riking
This commit adds a site setting `auto_close_topics_create_linked_topic`
which when enabled works in conjunction with `auto_close_topics_post_count`
setting and creates a new linked topic for the topic just closed.
The auto-created new topic contains a link for all the previous topics
and the topic titles are appended with `(Part {n})`.
The setting is enabled by default.
We had an issue where onebox thumbnail was too large and thus was optimized, and we are using the image URLs in post to redact and re-embed, based on the sha1 in the URL. Optimized image URLs have extra stuff on the end like _99x99 so we were not parsing out the sha1 correctly. Another issue I found was for posts that have giant images, the original was being used to embed in the email and thus would basically never get included because it is huge.
For example the URL 787b17ea61_2_690x335.jpeg was not parsed correctly; we would end up with 787b17ea6140f4f022eb7f1509a692f2873cfe35_2_690x335.jpeg as the sha1 which would not find the image to re-embed that was already attached to the email.
This fix will use the first optimized image of the detected upload when we are redacting and then re-embedding to make sure we are not sending giant things in email. Also, I detect if it is a onebox thumbnail or the site icon and force appropriate sizes and styles.
This reverts commit e3de45359f.
We need to improve our strategy by adding a cache breaker with this change ... some assets on CDNs and clients may have incorrect CORS headers which can cause stuff to break.
There is a site setting reply_by_email_enabled which when combined with reply_by_email_address creates a Reply-To header in emails in the format "test+%{reply_key}@test.com" along with a PostReplyKey record, so when replying Discourse knows where to route the reply.
However this conflicts with the IMAP implementation. Since we are sending the email for a group via SMTP and from their actual email account, we want all replies to go to that email account as well so the IMAP sync job can pick them up and put them in the correct place. So if the group has IMAP enabled and configured, then the reply-to header will be correct.
This PR also makes a further fix to 64b0b50 by using the correct recipient user for the PostReplyKey record. If the post user is used we encounter this error:
```ruby
if destination.user_id != user.id && !forwarded_reply_key?(destination, user)
  raise ReplyUserNotMatchingError, "post_reply_key.user_id => #{destination.user_id.inspect}, user.id => #{user.id.inspect}"
end
```
This is because the user above is found from the from_address, but the destination which is the PostReplyKey is made by the post.user, which will be different people.
* PERF: we don't need to use a huge image to test thumbnails
Generating images with 5000x5000 dimensions is an expensive operation.
Using smaller images reduce the time of model spec from 11s to 3s and integration spec from 6s to 2s.
Documenting a few more endpoints so that our api docs can be
automatically generated. Made a couple other minor changes, like
including the "OK" example for our default success response.
Resending an invite moved the expire date in the future, but did not
invalidate it. For example, if an invite was sent to an email,
invalidated and then resent, it would still be left invalidated.
* FEATURE - Add SiteSettings to control JPEG image quality
`recompress_original_jpg_quality` - the maximum quality of a newly
uploaded file.
`image_preview_jpg_quality` - the maximum quality of OptimizedImages
* FEATURE: allow category group moderators to edit posts
If the `enable_category_group_moderation` SiteSetting is enabled, posts should be editable by those belonging to the appropriate groups.
Trying to include this attribute when topic_user is nil causes an error when visiting a topic as anon. Additionally, we don't display the slow mode banner for these users.
`max-width: 50%; max-height: 400px;` is a good fallback, however, if width and height are given and are smaller than fallback - we should persist that smaller size.
Our Email::Sender class accepts an optional user argument, which is used to create a PostReplyKey record when present. This record is used to sub out the %{reply_key} placeholder in the Reply-To mail header, so if we do not pass in the user we get a broken Reply-To header.
This is especially problematic in the IMAP group SMTP situation, because these emails go to customers that we are replying to, and when they reply to us the email bounces! This fixes the issue by passing user to the Email::Sender when sending a group_smtp email but there is still more to do in another PR.
This Email::Sender optional user is a bit of a footgun IMO, especially because most of the time we use it there is a user we can source. I would like to do another PR for this after this one to make the parameter not optional, so we don't end up with these reply issues down the line again.
This consolidates logic used to match routes in ApiKey, UserApiKey and DefaultCurrentUserProvider. This reduces duplicated logic, and will allow UserApiKeysScope to easily re-use the parameter matching logic from ApiKeyScope
Adds a new slow mode for topics that are heating up. Users will have to wait for a period of time before being able to post again.
We store this interval inside the topics table and track the last time a user posted using the last_posted_at datetime in the TopicUser relation.
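A minimal sketch of the check (hypothetical helper, not the actual guardian/validator code): a user may post again only once the configured interval has elapsed since their last post in the topic.
```ruby
require "time"

# Hypothetical helper, not the actual implementation: compare the time since
# the user's last post with the topic's slow mode interval.
def can_post_again?(slow_mode_seconds, last_posted_at, now = Time.now)
  return true if slow_mode_seconds.to_i <= 0 || last_posted_at.nil?
  now - last_posted_at >= slow_mode_seconds
end

can_post_again?(3600, Time.now - 600)  # => false (only 10 minutes have passed)
can_post_again?(3600, Time.now - 7200) # => true
```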
It pauses Sidekiq, clears Redis (namespaced to the current site), clears Sidekiq jobs for the current site, restores the database and unpauses Sidekiq. Previously it stayed paused until the end of the restore.
Redis is cleared because we don't want any old data lying around (e.g. old Sidekiq jobs). Most data in Redis is prefixed with the name of the multisite, but Sidekiq jobs in a multisite are all stored in the same keys. So, deleting those jobs requires a little bit more logic.
The dependency on gifsicle, and the allow_animated_avatars and allow_animated_thumbnails
site settings, were all removed. Animated GIF images are still allowed, but
the generated optimized images are no longer animated for those (which were
used for avatars and thumbnails).
The added 'animated' column is populated by extracting information using FastImage.
This field was used to selectively reoptimize old animations. This process
happens in the background.
Now that we have support for user-selectable color schemes, it makes sense
to simplify seeding and theme updates in the wizard.
We now:
- seed only one theme, named "Default" (previously "Light")
- seed a user-selectable Dark color scheme
- rename the "Themes" wizard step to "Colors"
- update the default theme's color scheme if a default is set
(a new theme is created if there is no default)
When posts or topics are deleted we don't want to immediately delete associated bookmarks, so we have a grace period to recover them and their reminders if the post or topic is un-deleted. This PR adds a task to the Weekly scheduled job to go and delete bookmarks attached to posts or topics deleted > 3 days ago.
Per Google, sites are encouraged to upgrade from `analytics.js` to `gtag.js` for Google Analytics tracking. This commit updates core Discourse to use the new `gtag.js` API Google is asking sites to use. This API has feature parity with `analytics.js` but does not use trackers.
Creates a BabelHelper builder using a default list of plugins, to ensure the transpiled code is always using the same plugins instead of different plugins in different cases.
When the server gets overloaded and lots of requests start queuing, the server
will attempt to shed load by returning 429 errors on background requests.
The client can flag a request as background by setting the header:
`Discourse-Background` to `true`
Out-of-the-box we shed load when the queue time goes above 0.5 seconds.
The only request we shed at the moment is the request to load up a new post
when someone posts to a topic.
We can extend this as we go with a more general pattern on the client.
Previous to this change, rate limiting would "break" the post stream which
would make suggested topics vanish and users would have to scroll the page
to see more posts in the topic.
The server needs this protection for cases where tons of clients are navigated
to a topic and a new post is made. This can lead to a self-inflicted denial
of service if enough clients are viewing the topic.
Due to the internal security design of Discourse it is hard for a large
number of clients to share a channel where we would pass the full post body
via the message bus.
It also renames (and deprecates) triggerNewPostInStream to triggerNewPostsInStream
This allows us to load a batch of new posts cleanly, so the controller can
keep track of a backlog
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
A follow-up to a follow-up. (6932a373a3 and 572da7a57b)
Our `discourse_test` Docker image uses git 2.20.1 released on Dec 15, 2018. It does not support `git branch --show-current`. (it was added in 2.22.0)
Previously, any errors in those files would e.g. blow up the update process in docker_manager.
Now it prints out an error and proceeds as if there was no compatibility file.
Includes:
* DEV: Extract setup_git_repo
* DEV: Use `Dir.mktmpdir`
* DEV: Default to `main` branch (The latest versions of git already do this, so to avoid problems do this by default)
* FIX: second factor cannot be enabled if SSO is enabled
If `enable_sso` setting is enabled then admin should not be able to
enable `enforce_second_factor` setting as that will lock users out.
Co-authored-by: Robin Ward <robin.ward@gmail.com>
DEV: Replace instances of Discourse.base_uri with Discourse.base_path
This is clearer because the base_uri is actually just a path prefix. This continues the work started in 555f467.
See https://meta.discourse.org/t/email-address-change-confirmation-email-not-sent-but-every-other-notification-emails-are/165358
In short: with disable emails set to non-staff, email address change confirmation emails (those sent to the new address) are not sent for staff or admin members.
This was happening because we were looking up the staff user with the to_address of the email, but the to address was the new email address (since we are sending a confirm-email-change email), and thus the user could not be found. We didn't need to do this anyway, because we are already passing the user into the Email::Sender class.
* PERF: avoid lookbehinds when indexing search
Previously we used `EmailCook.url_regexp`; this regex used lookbehinds.
Unfortunately certain strings could lead to pathological behavior, causing
CPU to skyrocket and the regex replace to take a very, very long time.
EmailCook still needs a fix, but it is less urgent because it already splits
into single lines. That said, we will correct that as well in a separate PR.
New implementation is far more naive and relies on the extra spaces search
indexer inserts.
When that site setting is enabled, the category counts (new/unread)
include the subcategory definition topics, but the topics aren't included
in the list. This fixes that discrepancy.
Previously, Jobs::EnqueueDigestEmails would enqueue a digest job for every user, even if there are no topics to send. The digest job would exit, no email would send, and last_emailed_at would not change. 30 minutes later, Jobs::EnqueueDigestEmails would run again and re-enqueue jobs for the same users.
120fa8ad introduced a temporary mitigation for this issue, by randomly selecting a subset of those users each time.
This commit adds a new `digest_attempted_at` column to the `user_stats` table. This column is updated every time a digest job completes for a user. Using this, we can avoid scheduling digest jobs for the same user every 30 minutes. This also removes the random user selection in 120fa8ad, and instead prioritizes users who had digests attempted the longest time ago.
To avoid blocking the sidekiq queue a limit of 10,000 digests per 30 minutes
is introduced.
This acts as a safety measure that makes sure we don't keep pouring oil on
a fire.
On multisites it is recommended to set the number way lower so sites do not
dominate the backlog. A reasonable default for multisites may be 100-500.
This can be controlled with the environment var
DISCOURSE_MAX_DIGESTS_ENQUEUED_PER_30_MINS_PER_SITE
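A hypothetical sketch of the selection (assuming the UserStat model and the Jobs.enqueue helper; not the actual query): users whose digests were attempted the longest time ago, or never, are picked first, capped per 30-minute run.
```ruby
# Hypothetical sketch, not the actual query: prioritize users whose digests
# were attempted longest ago (or never), capped per 30-minute run.
max_digests = ENV.fetch("DISCOURSE_MAX_DIGESTS_ENQUEUED_PER_30_MINS_PER_SITE", 10_000).to_i

user_ids = UserStat
  .order("digest_attempted_at ASC NULLS FIRST")
  .limit(max_digests)
  .pluck(:user_id)

user_ids.each { |id| Jobs.enqueue(:user_email, type: :digest, user_id: id) }
```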
See https://meta.discourse.org/t/changing-a-users-email/164512 for additional context.
Previously when an admin user changed a user's email we assumed that they would need a password reset too because they likely did not have access to their account. This proved to be incorrect, as there are other reasons a user needs admin to change their email. This PR:
* Changes the admin change email for user flow so the user is sent an email to confirm the change
* We now record who the email change request was requested by
* If the requested by user is admin and not the user we note this in the email sent to the user
* We also make the confirm change email route open to anonymous users, so it can be clicked by the user even if they do not have access to their account. If there is a logged in user we make sure the confirmation matches the current user.
* FEATURE: Export the entire user profile as json, not just bio/website
* FEATURE: Add session log information to user export
Even though the columns are named 'auth_token' etc, the content is not actually usable to log into the forum with. Despite all that, it is still truncated for export, to avoid any 'token hash cracking' situations.
Allows site administrators to pick different fonts for headings in the wizard and in their site settings. Also correctly displays the header logos in wizard previews.
When enable_personal_messages was disabled, moderators could not see
the private messages for the "moderators" group. The link was displayed
on the client side, but the checks on the server side did not allow it.
Previously this site setting `embed unlisted` defaulted to false and
empty topics would be generated for embed, but those topics tend to take
up a lot of room on the topic lists.
This new default creates invisible topics by default until they receive
their first reply.
Previously, moving a category into another one, that already had a child category of that name (but with a non-conflicting slug) would cause a 500 error:
```
# PG::UniqueViolation:
# ERROR: duplicate key value violates unique constraint "unique_index_categories_on_name"
# DETAIL: Key (COALESCE(parent_category_id, '-1'::integer), name)=(5662, Amazing Category 0) already exists.
```
It now returns 422, and shows the same message as when you're renaming a category: "Category Name has already been taken".
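A hedged sketch of surfacing the conflict as a validation error (and therefore a 422) instead of letting the unique index raise. The scope mirrors the index in the error above; the actual validation in core may differ.
```
# Illustrative sketch: validate name uniqueness within the parent category
# so the save fails cleanly before the database constraint is hit.
class Category < ActiveRecord::Base
  validates :name,
            uniqueness: { scope: :parent_category_id, case_sensitive: false }
end
```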
UploadRecovery only worked on missing Upload records. Now it also works with existing ones that have an invalid_etag status.
The specs (particularly the ones that test the S3 path) are a bit of a stub-a-palooza, but that's how much this class interacts with the outside world. 🤷♂️
Upload.secure_media_url? raised an exception when the URL was invalid,
which was an issue in some situations where secure media URLs must be
removed.
For example, sending digests used PrettyText.strip_secure_media,
which used Upload.secure_media_url? to replace secure media with
placeholders. If the URL was invalid, then an exception would be raised
and left unhandled.
Now UrlHelper.rails_route_from_url returns nil instead if there is something wrong with the URL.
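A simplified sketch of the defensive behaviour, assuming plain URI parsing plus Rails route recognition; the real UrlHelper does more than this.
```
# Sketch: return nil for URLs that cannot be parsed or routed instead of
# raising, so callers like PrettyText.strip_secure_media can simply skip them.
def rails_route_from_url(url)
  path = URI.parse(url).path
  Rails.application.routes.recognize_path(path)
rescue URI::InvalidURIError, ActionController::RoutingError
  nil
end
```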
Co-authored-by: Bianca Nenciu <nenciu.bianca@gmail.com>
Error messages for exceeded rate limits and invalid parameters always used the English locale instead of the default locale or the current user's locale.
The download link on the lightbox for images was not downloading the image if the upload was marked secure, because the code in the upload controller route was not respecting the dl=1 param for force download.
This PR fixes this so the download link works for secure images as well as regular lightboxed images.
See https://meta.discourse.org/t/changing-a-users-email/164512 for context.
When an admin changes an email for a user, we were incorrectly sending the password reset email to the user's old address. Also, the new email does not take effect until the password reset process is complete, so this PR adds some notes for the admin to make this clearer.
Extracted commonly used spec helpers into spec/support/uploads_helpers.rb, removed unused stubs and let definitions. Makes it easier to write new S3-related specs without copy and pasting setup steps from other specs.
The NewUserOfTheMonth badge is part of the Badges::GettingStarted group. This group is skipped in BadgeGranter if the user skips the new user tips. However, the NewUserOfTheMonth badge granter job does not account for this. Instead, it notifies the user they've received the badge even if they did not.
This commit introduces a simple fix to allow granting of this badge even to users who skipped the new user tips.
This allows administrators to stop the automatic redirect to an external authenticator. It only takes effect when there is a single authentication method and the site is login_required.
This is a little bit of refactoring. Core Discourse should have a default promotion message for TL2.
In addition, when the Discobot plugin is enabled, the user is invited to advanced training.
Since 9e4ed03, moderators can view groups with visibility level set to "Group owners, members and moderators".
This fixes an issue where moderators can see the group in /g but then get a 404 when clicking on individual groups.
The typographer can change " to ” which leads to breakage in the parser.
For now, at least codify this behavior. Longer term we want to re-prioritize the typographer so
it always runs after BBCode parsing.
Previously attributes such as `[test a='a"a' b="a'a"]` were not correctly
handled.
This amends the regex parser to ensure it correctly parses attributes
without breaking on the first nested quote.
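As an illustration, a minimal quote-aware attribute pattern might look like the following. This is only a sketch of the idea, not the actual parser: each quoted form is matched separately so a double-quoted value may contain single quotes and vice versa.
```
# Match name="..." or name='...' without stopping at the first nested quote.
ATTR_PATTERN = /(\w+)=(?:"([^"]*)"|'([^']*)')/

def parse_attributes(str)
  str.scan(ATTR_PATTERN).map { |name, dq, sq| [name, dq || sq] }.to_h
end

parse_attributes(%q{a='a"a' b="a'a"})
# => { "a" => "a\"a", "b" => "a'a" }
```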
To check if a post contains any embedded media, we check whether the "image_sizes" attribute is present in the new post manager arguments.
We want to see oneboxed links, but we only store the raw content of the post. To work around this, I extracted the onebox logic from the composer editor into a module.
With secure media and the UploadSecurity class, we need a nice way for plugins to register custom upload types that should be considered public and never secure.
We are making the changes from the PR #10563 the default behaviour. Now, if secure media is enabled, secure images will be embedded in emails by default instead of redacting them and displaying a message. This will be a nicer overall experience by default, and for forums that want to be super strict with redaction this setting can always be disabled.
Groups page was loading fields that are only used on the group show
page, so move those fields to the GroupShowSerializer.
Also only fetch the default category and tag notifications once.
This is intended for use by plugins which are building their own topic lists, and want to include PMs alongside regular topics (e.g. discourse-assign). It does not get used directly in core.
Overall, stubs lead to long-term instability; this helps stabilize them.
Also fixed a flaky spec around plugin hooks.
It was relying on `.posts` on system_user, which could already be loaded
and invalid. Instead we now load it by hand.
This PR removes the user reminder topic timers, because that system has been supplanted and improved by bookmark reminders. The option is removed from the UI and all existing user reminder topic timers are migrated to bookmark reminders.
Migration does this (a rough sketch follows the notes below):
* Gets all topic_timers with status_type 5 (reminders)
* Gets all bookmarks where the user ID and topic ID match
* Loops through the found topic timers
* If there is no bookmark for the OP of the topic, then we just create a bookmark with a reminder
* If there is a bookmark for the OP of the topic and it does **not** have a reminder set, then just
update it with the topic timer reminder
* If there is a bookmark for the OP of the topic with a reminder then just discard the topic timer
* Cancels all outstanding user reminder topic timers
* **Trashes (not deletes) all user reminder topic timers**
Notes:
* For now I have left the user reminder topic timer job class in place; this is so the jobs can be cancelled in the migration. It and the specs will be deleted in the next PR.
* At a later date I will write a migration to delete all trashed user topic timers. They are not deleted here in case there are data issues and they need to be recovered.
* A future PR will change the UI of the topic timer modal to make it look more like the bookmark modal.
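Here is the rough sketch referenced above. Model and column names follow the description (status_type 5 for reminders, `execute_at` on the timer, `reminder_at` on the bookmark), but this is illustrative only and skips edge cases the real migration handles.
```
# Illustrative sketch of the migration loop, not the exact data migration.
TopicTimer.where(status_type: 5).find_each do |timer| # 5 = user reminder
  first_post = timer.topic.first_post
  bookmark = Bookmark.find_by(user_id: timer.user_id, post_id: first_post.id)

  if bookmark.nil?
    Bookmark.create!(user_id: timer.user_id, post_id: first_post.id,
                     reminder_at: timer.execute_at)
  elsif bookmark.reminder_at.blank?
    bookmark.update!(reminder_at: timer.execute_at)
  end
  # Bookmarks that already have a reminder keep it; the timer is simply dropped.

  timer.trash! # trashed, not deleted, so the data can be recovered if needed
end
```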
Editing a post didn't update the `post_uploads` right away. Instead it relied on the `CookedPostProcessor`. This can lead to an inconsistent state if uploads are added or removed during an edit and, for some reason, the `ProcessPost` job doesn't run (successfully). This inconsistency leads to missing uploads, because the newly added uploads appear to be unused and will be deleted by the `CleanUpUploads` job. In addition to that, uploads, which got removed during the edit, appear to be still in use and won't be deleted by the background job.
This commit ensures that the `post_uploads` are updated during the edit without relying on a background job.
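A hedged sketch of the idea: reconcile the join table at save time rather than waiting for the background job. `extract_upload_ids` is a hypothetical helper standing in for however core resolves upload references; the real implementation differs.
```
# `extract_upload_ids` is a hypothetical helper that resolves upload:// short
# URLs in the raw to Upload ids.
upload_ids = extract_upload_ids(post.raw)

# Remove rows for uploads no longer referenced by the post...
PostUpload.where(post_id: post.id).where.not(upload_id: upload_ids).delete_all

# ...and add rows for newly referenced uploads.
existing = PostUpload.where(post_id: post.id).pluck(:upload_id)
(upload_ids - existing).each do |upload_id|
  PostUpload.create!(post_id: post.id, upload_id: upload_id)
end
```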
In some cases Discourse admins may opt for sessions not to persist when a
browser is closed.
This is particularly useful in healthcare and education settings where
computers are shared among multiple workers.
By default the `persistent_sessions` site setting is enabled; to opt out, you
must disable the site setting.
Currently, if a group's visibility is set to "Group owners, members", then moderators can't view those group pages. The same rule is applied to the members visibility setting too.
This reverts commit 7fc7090 and fixes the failing spec tests.
Moderators should not be able to see `UserSerializer#group_users` and `UserSerializer#second_factor_enabled` of other users.
Impact of leaking this is low because the information leaked is not
exploitable.
Currently, if a group's visibility is set to "Group owners, members", then moderators can't view those group pages. The same rule is applied to the members visibility setting too.
After restoring a backup it takes up to 48 hours for uploads stored on S3 to appear in the S3 inventory. This change prevents alerts about missing uploads by preventing the EnsureS3UploadsExistence job from running in the first 48 hours after a restore. During the restore it deletes the count of missing uploads from the PluginStore, so that an alert isn't triggered by an old number.
We must guarantee that "rel=noopener" is set if "target=_blank" is present, which is not always the case for trusted users. Also, if the link contains the "nofollow" attribute, it has to have the "ugc" attribute as well.
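A hedged sketch of the invariant being enforced, using Nokogiri; this is not the exact Discourse sanitizer code, just an illustration of the two rules.
```
require "nokogiri"

# Every target="_blank" link must carry rel="noopener", and any rel containing
# "nofollow" must also contain "ugc".
def enforce_link_rel(html)
  fragment = Nokogiri::HTML.fragment(html)
  fragment.css("a").each do |a|
    rel = (a["rel"] || "").split
    rel << "noopener" if a["target"] == "_blank" && !rel.include?("noopener")
    rel << "ugc" if rel.include?("nofollow") && !rel.include?("ugc")
    a["rel"] = rel.join(" ") unless rel.empty?
  end
  fragment.to_html
end
```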
If a user has always read all group messages, we will never update the
`first_pm_unread_at` column, since the previous query will not return the
group_user. Instead, we should update `first_pm_unread_at` to the
current timestamp if the user has read everything.
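A minimal sketch of that fallback under assumed names; `fully_read_user_ids` is hypothetical, and the real query derives it from the group's read state.
```
# Bump first_pm_unread_at for members with nothing unread so the column
# keeps advancing even when every message has been read.
def touch_first_pm_unread_at(group, fully_read_user_ids)
  GroupUser
    .where(group_id: group.id, user_id: fully_read_user_ids)
    .update_all(first_pm_unread_at: Time.zone.now)
end
```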
Follow-up to 9b75d95fc6
In c6ceda8c, a bug was introduced where an admin searching for their own
private messages would actually end up searching through all private
messages on the site.
Follow-up to c6ceda8c4e
This PR introduces a few important changes to secure media redaction in emails. First of all, two new site settings have been introduced:
* `secure_media_allow_embed_images_in_emails`: If enabled we will embed secure images in emails instead of redacting them.
* `secure_media_max_email_embed_image_size_kb`: The cap to the size of the secure image we will embed, defaulting to 1mb, so the email does not become too big. Max is 10mb. Works in tandem with `email_total_attachment_size_limit_kb`.
`Email::Sender` will now attach images to the email based on these settings. The sender will also call `inline_secure_images` in `Email::Styles` after secure media is redacted and attachments are added to replace redaction messages with attached images. I went with attachment and `cid` URLs because base64 image support is _still_ flaky in email clients.
All redaction of secure media is now handled in `Email::Styles` and calls out to `PrettyText.strip_secure_media` to do the actual stripping and replacing with placeholders. `app/mailers/group_smtp_mailer.rb` and `app/mailers/user_notifications.rb` no longer do any stripping because they are earlier in the pipeline than `Email::Styles`.
Finally the redaction notice has been restyled and includes a link to the media that the user can click, which will show it to them if they have the necessary permissions.
![image](https://user-images.githubusercontent.com/920448/92341012-b9a2c380-f0ff-11ea-860e-b376b4528357.png)
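A hedged sketch of the attach-and-reference step using the mail gem's inline attachments; `message`, `upload_path`, `redaction_placeholder`, and `html_body` are illustrative names, and the real code lives in `Email::Sender` and `Email::Styles`.
```
# Attach the secure image inline and reference it by its cid: URL.
message.attachments.inline["secure-image.png"] = File.binread(upload_path)
cid_url = message.attachments["secure-image.png"].url # => "cid:..."

# The styles step can then swap the redaction placeholder for the attached image.
html_body = html_body.gsub(redaction_placeholder, %(<img src="#{cid_url}">))
```
Inline attachments referenced via `cid:` URLs are used instead of base64 data URIs because, as noted above, base64 image support is still flaky in email clients.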
- Lets child components extend color definitions
- Includes default theme color definitions
- Fails gracefully on color stylesheet SCSS errors
- Includes theme variables when extending colors
There is a request spec that was ignored with the `xit` flag almost a
year ago, and every time you generate the api docs with
```
rake rswag:specs:swaggerize
```
it shows the output of this pending test. I guess I finally got sick
of looking at it, so here is a fix for it.
Original Commit: d84c34ad75
This PR ensures that new bookmarks cannot be created for deleted posts and topics, and also makes sure that if a bookmark was created and the topic then deleted, the show topic page does not error when trying to retrieve the bookmark's reminder time.
It is possible that a user could exist without an email; if so, we should
not enqueue a job to download their gravatar.
This commit resolves this error that can occur:
```
Job exception: undefined method `email' for nil:NilClass
/var/www/discourse/app/models/user.rb:1204:in `email'
/var/www/discourse/app/jobs/regular/update_gravatar.rb:12:in `execute'
```
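A minimal sketch of the guard, assuming a small wrapper around the enqueue call; the actual change sits wherever the job is enqueued in core.
```
# Skip the gravatar download entirely when the user has no email on record.
def enqueue_gravatar_update(user, avatar)
  return if user.nil? || user.email.blank?
  Jobs.enqueue(:update_gravatar, user_id: user.id, avatar_id: avatar.id)
end
```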
This commit also fixes the original spec, which was actually wrong. The
job was never enqueued in the original spec, so the gravatar was never
actually updated; the test checked whether the two values were the
same, but they were both null and never updated, so of course they were
the same!
A new test has also been added to make sure the gravatar job isn't
enqueued when a user's email is missing.
DEV: add plugin hooks for silence message parameters
Allows plugins to add and update extra silence message params for custom
i18n vars.
Allows plugins to override system messages via the `message_title` and
`message_raw` parameters. We can later expose these params where necessary via event
hooks. Exposes the parameters for the user_silenced trigger.
When a category is removed from `auto_watch_category` we remove the
CategoryUser. However, there are still TopicUser records with the notification level
set to `watching`, which was inherited from the category.
We should move them back to `regular` unless they were modified by the user.
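A hedged sketch of the cleanup under assumed names: `removed_user_ids` and `category` are hypothetical locals, and the enum lookups follow Discourse conventions but may differ from the real query. Only rows whose notification reason was the auto-watch are reverted, so levels the user chose explicitly are left alone.
```
# Revert topic-level watching that was inherited from the category.
watching = TopicUser.notification_levels[:watching]
regular  = TopicUser.notification_levels[:regular]

TopicUser
  .where(topic_id: Topic.where(category_id: category.id).select(:id))
  .where(user_id: removed_user_ids)
  .where(notification_level: watching,
         notifications_reason_id: TopicUser.notification_reasons[:auto_watch_category])
  .update_all(notification_level: regular)
```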
* FEATURE: Use predictable filenames inside the user archive export
* FEATURE: Include badges in user archive export
* FEATURE: Add user_visits table to the user archive export
Previously in some cases the test suite could fail due to a bad entry in
redis from previous tests
This ensures the correct cache is expired when needed
Additionally improves performance of the redis check
"en_US" doesn't contain most of the translations, so it falls back to "en". But that behavior stopped translation overrides to work for pluralized strings in "en_US", because it relies on existing translations. This fixes it by looking up the existing translation in all fallback locales.
This is in preparation for improvements to the user archive export data.
Some refactors happened along the way, including calling the different _export methods 'components' of the zip file.
Additionally, make the test for post export much more comprehensive.
Copy sources:
app/jobs/regular/export_csv_file.rb
spec/jobs/export_csv_file_spec.rb
This commit adds a new site setting "allowed_onebox_iframes". By default, all onebox iframes are allowed. When the list of domains is restricted, Onebox will automatically skip engines which require those domains, and use a fallback engine.
Originally disabled in 0c0192e. Upload specs now use separate paths for each spec worker.
Fixes an issue in UploadRecovery#recover_from_local – it didn't take into account the testing infix (e.g. test_0) in the uploads/tombstone paths.
With the addition of `PostSearchData#private_message`, a partial
index consisting of only search data from regular posts can be created.
The partial index helps to speed up searches on large sites since PG
will not have to do an index scan on the entire search data index, which
has been shown to be a bottleneck.
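Sketched as a Rails migration, the partial index might look like the following; the index name is illustrative and the actual migration in core may differ.
```
# Index only regular (non private message) search data.
class AddRegularPostSearchDataIndex < ActiveRecord::Migration[6.0]
  def change
    add_index :post_search_data,
              :search_data,
              using: :gin,
              where: "NOT private_message",
              name: :idx_regular_post_search_data
  end
end
```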
These fields are required when using the UI, and if the `suspend_until`
param isn't provided the user is never actually suspended, so we should
require these fields when suspending a user.
This commit addresses an issue where multiple topic timer jobs could be
running to close a topic, or a race condition could cause a topic that
was just closed to be re-opened.
By moving the logic from the Topic Timer model into the Topic Timer
controller endpoint we isolate the code that is used for setting an
auto-open or an auto-close timer to just that functionality, making the
topic timer background jobs safer if multiple are running.
Possibly in the future, if we would like this logic back in the model, a
refactor will be needed where we actually pass in the auto-close and
auto-open action instead of mixing it with the close and open
action that is currently being passed to the controller.