PresenceChannel aims to be a generic system for allowing the server, and end-users, to track the number and identity of users performing a specific task on the site. For example, it might be used to track who is currently 'replying' to a specific topic, editing a specific wiki post, etc.
A few key pieces of information about the system:
- PresenceChannels are identified by a name of the format `/prefix/blah`, where `prefix` has been configured by some core/plugin implementation, and `blah` can be any string the implementation wants to use.
- Presence is boolean - each user is either present or not present. If a user has multiple clients 'present' in a channel, they will be deduplicated so that the user is only counted once
- Developers can define the existence and configuration of channels 'just in time' using a callback. The result is cached for 2 minutes.
- Configuration of a channel can specify permissions in a similar way to MessageBus (public boolean, a list of allowed_user_ids, and a list of allowed_group_ids). A channel can also be placed in 'count_only' mode, where the identity of present users is not revealed to end-users.
- The backend implementation uses redis lua scripts, and is designed to scale well. In the future, hard limits may be introduced on the maximum number of users that can be present in a channel.
- Clients can enter/leave at will. If a client has not marked itself 'present' in the last 60 seconds, they will automatically 'leave' the channel. The JS implementation takes care of this regular check-in.
- On the client-side, PresenceChannel instances can be fetched from the `presence` ember service. Each PresenceChannel can be entered/left/subscribed/unsubscribed, and the service will automatically deduplicate information before interacting with the server.
- When a client joins a PresenceChannel, the JS implementation will automatically make a GET request for the current channel state. To avoid this, the channel state can be serialized into one of your existing endpoints, and then passed to the `subscribe` method on the channel.
- The PresenceChannel JS object is an ember object. The `users` and `count` properties can be used directly in ember templates and in computed properties.
- It is important to make sure that you `unsubscribe()` and `leave()` any PresenceChannel objects after use.
An example implementation may look something like this. On the server:
```ruby
register_presence_channel_prefix("site") do |channel|
  next nil unless channel == "/site/online"
  PresenceChannel::Config.new(public: true)
end
```
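A channel restricted to a group, with user identities hidden, can be configured the same way. This sketch assumes the `allowed_group_ids` and `count_only` options described above map directly to keyword arguments:
```ruby
register_presence_channel_prefix("secret") do |channel|
  next nil unless channel == "/secret/online"
  # Only members of the given groups may be present, and end-users
  # will only see the count, not who is present.
  PresenceChannel::Config.new(allowed_group_ids: [42], count_only: true)
end
```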
And on the client, a component could be implemented like this:
```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";
export default Component.extend({
  presence: service(),

  init() {
    this._super(...arguments);
    this.set("presenceChannel", this.presence.getChannel("/site/online"));
  },

  didInsertElement() {
    this.presenceChannel.enter();
    this.presenceChannel.subscribe();
  },

  willDestroyElement() {
    this.presenceChannel.leave();
    this.presenceChannel.unsubscribe();
  },
});
```
With this template:
```handlebars
Online: {{presenceChannel.count}}
<ul>
  {{#each presenceChannel.users as |user|}}
    <li>{{avatar user imageSize="tiny"}} {{user.username}}</li>
  {{/each}}
</ul>
```
The generate_presigned_put endpoint for direct external uploads
(such as the one for the uppy-image-uploader) records allowed
S3 metadata values on the uploaded object. We use this to store
the sha1-checksum generated by the UppyChecksum plugin, for later
comparison in ExternalUploadManager.
However, we were not doing this for the create_multipart endpoint,
so the checksum was never captured and compared correctly.
Also includes a fix to make sure UppyChecksum is the last preprocessor to run.
It is important that the UppyChecksum preprocessor is the last one to
be added; the preprocessors are run in order and since other preprocessors
may modify the file (e.g. the UppyMediaOptimization one), we need to
checksum once we are sure the file data has "settled".
Users can invite people to a topic and they will be automatically
redirected to the topic when logging in after signing up. This commit
ensures an "invited_to_topic" notification is created when the invite is
redeemed.
The same notification is used for the "Notify" sharing method that is
found in the share topic modal.
This bug was introduced by f66007ec83.
In PostJobsEnqueuer we previously did not fire the after_post_create
event and after_topic_create event for private message topics. This was
changed in the above commit in order to publish message bus messages
for topic tracking state updates. Unfortunately this caused the
NotifyMailingListSubscribers job to be enqueued for all posts including
private messages, and admins and the users involved in the PMs got
emailed the contents of the PMs if they had mailing list mode enabled.
Luckily the impact of this was mitigated by a Guardian#can_see? check
for each mailing list mode user in the NotifyMailingListSubscribers job.
We never want to notify mailing list mode subscribers for private messages
so an early return has been added there, plus the logic in PostJobsEnqueuer
has been fixed, and tests have been added to that class where there were
none before.
Since ad3ec5809f when a user chooses
the Dismiss New... option in the New topic list, we send a request
to topics/reset-new.json with ?tracked=false as the only parameter.
This then uses Topic as the scope for topics to dismiss, with no
other limitations. When we do topic_scope.pluck(:id), it gets the
ID of every single topic in the database (that is not deleted) to
pass to TopicsBulkAction, causing a huge query with severe performance
issues.
This commit changes the default scope to use
`TopicQuery.new(current_user).new_results(limit: false)`
which should only use the topics in the user's New list, which
will be a much smaller list, depending on the user's "new_topic_duration_minutes"
setting.
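For illustration, a minimal sketch of the change (the surrounding controller code is hypothetical):
```ruby
# Before (sketch): every undeleted topic in the database was plucked.
topic_ids = Topic.pluck(:id)

# After: only topics in the user's New list are considered, which
# respects the user's "new_topic_duration_minutes" setting.
topic_scope = TopicQuery.new(current_user).new_results(limit: false)
topic_ids = topic_scope.pluck(:id)
```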
This is unnecessary, as when the temporary key is created
in S3Store we already include the s3_bucket_folder_path, and
the key will always start with temp/ to assist with lifecycle
rules for multipart uploads.
This was affecting Discourse.store.object_from_path,
Discourse.store.signed_url_for_path, and possibly others.
See also: e0102a5
There are certain design decisions that were made in this commit.
Private messages implement their own version of topic tracking state because there are significant differences between regular and private_message topics. Regular topics have to track categories and tags while private messages do not. It is much easier to design the new topic tracking state if we maintain two different classes, instead of trying to mash these two worlds together.
One MessageBus channel per user and one MessageBus channel per group. This allows each user and each group to have their own channel backlog instead of having one global channel which requires the client to filter away unrelated messages.
Previously we had temp/ in the middle of the S3 key path like so
* /uploads/default/temp/randomstring/test.png (normal site)
* /sitename/uploads/default/temp/randomstring/test.png (s3 folder path site)
* /standard10/uploads/sitename/temp/randomstring/test.png (multisite site)
However this necessitates making a lifecycle rule to clean up incomplete
S3 multipart uploads for every site, something which we cannot do. It makes
much more sense to have a structure with /temp at the start of the key,
which is what this commit does:
* /temp/uploads/default/randomstring/test.png (normal site)
* /temp/sitename/uploads/default/randomstring/test.png (s3 folder path site)
* /temp/standard10/uploads/sitename/randomstring/test.png (multisite site)
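A sketch of the resulting key layout (the helper name is hypothetical):
```ruby
# Keys now begin with "temp/", so a single bucket-wide lifecycle rule
# matching that prefix can clean up incomplete multipart uploads.
def temp_key(base_path, random_string, filename)
  File.join("temp", base_path, random_string, filename)
end

temp_key("uploads/default", "randomstring", "test.png")
# => "temp/uploads/default/randomstring/test.png"
```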
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary upload S3 into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` endpoint is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. Then, if the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than what they first specified, or in the case of multipart uploads uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
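A rough sketch of that validation (names are illustrative, not the actual `ExternalUploadManager` code):
```ruby
# Compare the size recorded when the stub was created with the final
# Content-Length of the object in S3.
actual_size = s3_object.content_length
if actual_size != external_upload_stub.filesize
  s3_object.delete
  # ...rate limit the user's uploads for N (default 5) minutes...
  raise "Upload size mismatch: expected #{external_upload_stub.filesize}, got #{actual_size}"
end
```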
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
When emails were forwarded to a group inbox by the email address
of the group, for example when an email ends up in spam and must
be manually forwarded to the group+site@discoursemail.com address,
the OP of the topic ended up being the group's email address instead
of the sender who originally sent the email to the group inbox.
This commit detects that an email has been forwarded using existing
tools, and if the from address matches one of the group incoming
email addresses, then we look at the forwarded email's from address
and use that instead for the incoming email from address as well as
the staged/regular user used for the Topic.user.
This will make it much cleaner to forward emails into a group inbox,
and will prevent issues with PostAlerter where the OP is double-notified
for these emails.
Currently, pinned topics are ordered by the `bumped_at` column. This behavior is not desired because it gives admins no control over the order of pinned topics. This PR makes pinned topics ordered by the `pinned_at` column. A topic that is pinned last appears first in topic lists. If an admin wants an already pinned topic to appear first in the list of pinned topics, they'll have to unpin that topic and pin it again.
Meta topic: https://meta.discourse.org/t/how-do-i-set-the-order-of-pinned-topics/16935/23?u=osama.
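A sketch of the ordering change (the real query lives in the topic list code):
```ruby
# Before (sketch): pinned topics were ordered by bumped_at like everything else.
# After: among pinned topics, the most recently pinned comes first.
pinned_topics = Topic.where.not(pinned_at: nil).order(pinned_at: :desc)
```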
* FIX: Revoking admin or moderator status doesn't require refresh to delete/anonymize/merge user
On the /admin/users/<id>/<username> page, there are action buttons that are either visible or hidden depending on a few fields from the AdminDetailsSerializer: `can_be_deleted`, `can_be_anonymized`, `can_be_merged`, `can_delete_all_posts`.
These fields are updated when granting/revoking admin or moderator status. However, those updates were not being reflected on the page. E.g. if a user is granted moderation privileges, the 'anonymize user' and 'merge' buttons still appear on the page, which is inconsistent with the backend state of the user. It requires refreshing the page to update the state.
This commit fixes that issue, by syncing the client model state with the server state when handling a successful response from the server. Now, when revoking privileges, the buttons automatically appear without refreshing the page. Similarly, when granting moderator privileges, the buttons automatically disappear without refreshing the page.
* Add detailed user response to spec for changed routes.
Add tests to verify that the revoke_moderation, grant_moderation, and revoke_admin routes return a response formatted according to the AdminDetailedUserSerializer.
Emails can include the marker in a different language, depending on
site and user settings. The email receiver always looked for the marker
in the default language.
This is useful in the DiscourseHub mobile app. Currently the app queries
the `about.json` endpoint, which can raise a CORS issue in some cases,
for example when the site only accepts logins from an external provider.
Allow admins to configure exceptions to our Rails rate limiter.
Configuration happens in environment variables, and works with both
IPs and CIDR blocks.
Example:
```
env:
DISCOURSE_MAX_REQS_PER_IP_EXCEPTIONS: >-
14.15.16.32/27
216.148.1.2
```
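A sketch of how such an exception list can be matched against a request IP using Ruby's standard `IPAddr` (not the actual middleware code):
```ruby
require "ipaddr"

exceptions =
  ENV["DISCOURSE_MAX_REQS_PER_IP_EXCEPTIONS"].to_s.split.map { |v| IPAddr.new(v) }

def exempt_from_rate_limit?(exceptions, request_ip)
  ip = IPAddr.new(request_ip)
  exceptions.any? { |entry| entry.include?(ip) }
end

exempt_from_rate_limit?(exceptions, "216.148.1.2") # => true
exempt_from_rate_limit?(exceptions, "14.15.16.40") # => true (inside 14.15.16.32/27)
```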
`nullable` is no longer a valid type, and types also can't be an empty
string, so this brings a number of type issues into compliance with the
openapi spec.
When a staff member clicks on a user's number of flagged posts, we redirect them to the review queue, so it makes sense to count the number of items there to calculate the count.
We used to look at post action items to calculate this number, which doesn't match the number of items in the queue if old flags exist.
When a post is flagged with the reason of 'Something Else' a brief message can be added by the user which subsequently creates a `meta_topic` private message. The group `moderators` is automatically added to this topic.
If category group moderation is enabled, and the post belongs to a category with a reviewable group, that group should also be added to the meta_topic.
Note: This extends the `notify_moderators` logic, and will add the reviewable group to the meta_topic, regardless of the settings of that group.
The latest openapi spec version is v3.1.0
https://spec.openapis.org/oas/v3.1.0
Specifying the latest version will allow our openapi spec linter to use
this version and allow us to use the new type format that permits
specifying a type as "null" (e.g. `type: ["string", "null"]`), which we
need because sometimes our api responses include null values instead of
a "string", "integer", or "object" type.
See: https://stackoverflow.com/a/48114322/588458
In 2018, a check was added so that the TL1 welcome message is sent unless the user has already been granted the BasicBadge.
I think we should also check whether the BasicBadge is even enabled. Otherwise, each time a group is assigned to a user and their trust level is recalculated, they will receive a welcome message.
When a user signs up via an external auth method, a new link is added to the signup modal which allows them to connect an existing Discourse account. This will only happen if:
- There is at least 1 other auth method available
and
- The current auth method permits users to disconnect/reconnect their accounts themselves
This handles a few edge cases which are extremely rare (due to the UI layout), but still technically possible:
- Ensure users are authenticated before attempting association.
- Add a message and logic for when a user already has an association for a given auth provider.
We weren't calling clear_all! for the rate limiter, which
was the first problem. The second problem was that it is
very odd to do state cleanup before tests instead of after,
so the disabling and clear_all! were moved to after.
This reverts a part of changes introduced by https://github.com/discourse/discourse/pull/13947
In that PR I:
1. Disallowed topic feature links for TL-0 users
2. Additionally, disallowed just putting any URL in topic titles for TL-0 users
Actually, we don't need the second part. It introduced unnecessary complexity for no good reason. In fact, it tries to do the job that anti-spam plugins (like Akismet plugin) should be doing.
This PR reverts this second change.
This disallows putting URLs in topic titles for TL0 users, which means that:
If a TL-0 user puts a link into the title, a topic featured link won't be generated (as if it was disabled in the site settings)
Server methods for creating and updating topics will refuse featured links when they are called by TL-0 users
TL-0 users won't be able to put any link into the topic title. For example, the title "Hey, take a look at https://my-site.com" will be rejected.
Also, it slightly improves server behavior when creating or updating featured links on topics in categories with featured links disabled. Before, the server just silently ignored a featured link field that was passed to it; now it returns a 422 response.
* FIX: Update draft count when sequence is increased
Sometimes users ended up having a draft count higher than the actual
number of drafts.
* FIX: Do not update draft count twice
The call to DraftSequence.next! above already does it.
Group flair is not removed when removing a user from the group, since the `before_save` callback methods are not triggered when using the `update_columns` method.
Discourse automatically sends a private message after backup or
restore finished. The private message used to contain the log inline
even when it was very long. A very long log can create issues because
the length of the post will be over the maximum allowed length of a
post. When that happens, Discourse will try to create an upload with
the logs. If that fails, it will trim the log and inline it.
Inlining secure images with the same name was not possible because they
were indexed by filename. If an email contained two files with the same
name, only the first image was used for both of them. The other file
was still attached to the email.
While merging two user accounts, don't merge the source user's email address if the target user is not a human.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
When the Reply-To header is present for incoming emails we
want to use it instead of the from address. This is usually the
case when forwarding an email via a mailing list into Discourse.
For now we are only using the Reply-To header if the email has
been forwarded via Google Groups, which is why we are checking the
X-Original-From header too. In future we may want to use the Reply-To
header in more cases.
Searching in a category looked only one level down, ignoring the site
setting max_category_nesting. The user interface did not support the
third level of categories and did not display them in the "Categorized"
input of the advanced search options.
* FEATURE: Onebox can match engines based on the content_type
`FinalDestination` now returns the `content_type` of a resolved URL.
`Oneboxer` passes this value to `Onebox` itself. Onebox engines can now specify a `matches_content_type` regex of content_types that the engine can handle, regardless of the URL.
`ImageOnebox` will match URLs with a content type of `image/png`, `jpg`, `gif`, `bmp`, `tif`, etc.
This will allow images that exist at a URL without a file type extension to be correctly rendered, assuming a valid `content_type` is returned.
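A sketch of what an engine opting in might look like (simplified; the real engine internals differ):
```ruby
class Onebox::Engine::ImageOnebox
  include Onebox::Engine

  # Match on the resolved content type, regardless of the URL's extension.
  matches_content_type(/\Aimage\/(png|jpe?g|gif|bmp|tiff?)/)
end
```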
When a post is created, the draft sequence is increased and then older
drafts are automatically deleted by executing a raw SQL query. This
skipped the Draft model callbacks and did not update the user's draft
count.
I fixed another problem related to a raw SQL query in the
Draft.cleanup! method.
We shouldn't be checking if a user is allowed to do an action in the logger. We should be checking it just before we perform the action. In fact, guardians in the logger can make things even worse in case of a security bug. Let's say we forgot to check user's permissions before performing some action, but we still have a call to the guardian in the logger. In this case, a user would perform the action anyway, and this action wouldn't even be logged!
I've checked all cases and I confirm that we're safe to delete these calls from the logger.
I've added two calls to guardians in admin/user_controller. We didn't have security bugs there, because regular users can't access admin/... routes at all. But it's good to have calls to the guardian in these methods anyway; neighboring methods have them.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file directly to S3 using the presigned URL. Once the upload is complete we go to the next stage.
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100mb defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download and generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3 then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby. The browser SHA1 checksum is stored on the object in S3 with metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally we return the serialized upload record back to the client
There are several errors that could happen that are handled by `UploadsController` as well.
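A sketch of the checksum comparison on the small-file path (the metadata access is illustrative; the metadata key is the one written by UppyChecksum):
```ruby
require "digest"

browser_sha1 = s3_object.metadata["sha1-checksum"]
actual_sha1 = Digest::SHA1.file(downloaded_file.path).hexdigest

# Some browsers never generate the checksum, so only fail on a real mismatch.
if browser_sha1.present? && browser_sha1 != actual_sha1
  raise "SHA1 mismatch: #{browser_sha1} vs #{actual_sha1}"
end
```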
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
This commit adds the number of drafts a user has next to the "Draft"
label in the user preferences menu and activity tab. The count is
updated via MessageBus when a draft is created or destroyed.
In badge queries for 'First Share' and 'Nice/Good/Great Share' badges,
check that the user exists.
For 'Nice+ Share' badges, also grant badges if the number of shares is
equal to the threshold count, to better match the descriptions.
* DEV: Return 400 instead of 500 for invalid top period
This change will prevent a fatal 500 error when passing in an invalid
period param value to the `/top` route.
* Check if the method exists first
I couldn't get `ListController.respond_to?` to work, but was still able
to check if the method exists with
`ListController.action_methods.include?`. This way we can avoid relying
on the `NoMethodError` exception which may be raised during the course
of executing the method.
* Just check if the period param value is valid
* Use the new TopTopic.validate_period method
Prior to this fix, post whisperers in personal messages were revealed in
the topic's participants list even though non-staff users are unable to
see the whisper.
* Copy remove_member to new `leave` method
* Remove unneeded code from the leave method
* Rearrange the leave method
* Remove unneeded code from the remove_member method
* Add tests
* Implement on the client side
Using an invalid value was allowed. This commit tries to automatically
fix the color by adding a missing # symbol, or shows an error to the
user if that is not possible and the value is not a CSS color either.
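A minimal sketch of the fix-up logic (the actual validation also accepts named CSS colors):
```ruby
# Prepend a missing "#" when the value is otherwise a valid hex color.
def normalize_color(value)
  hex = value.to_s.strip
  return hex if hex.start_with?("#")
  return "##{hex}" if hex.match?(/\A(\h{3}|\h{6})\z/)
  nil # the caller shows an error unless the value is a CSS color name
end

normalize_color("aa00ff") # => "#aa00ff"
normalize_color("zzz")    # => nil
```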
* Copy the add_members method to the new join method
* Remove unneeded code from the join method
* Rearrange the join method
* Remove unneeded stuff from the add_members method
* Extract add_user_to_group method
* Implement on the client side
* Tests
* Don't inline users.uniq
* Return promise from join.then()
* Remove unnecessary begin and end
* Revert "Return promise from join.then()"
This reverts commit bda84d8d
* Remove variable already_in_group
Will show the last 6 seen users as filtering suggestions when typing @ in quick search. (Previously the user suggestion required a character after the @.)
This also adds a default limit of 6 to the user search query, previously the backend was returning 20 results but a maximum of 6 results was being shown anyway.
When configured, all topics in the category inherit the slow mode
duration from the category's default.
Note that currently there is no way to remove the slow mode from the
topics once it has been set.
When the Forever option is selected for suspending a user, the user is suspended for 1000 years. Without customizing the site’s text, this time period is displayed to the user in the suspension email that is sent to the user, and if the user attempts to log back into the site. Telling someone that they have been suspended for 1000 years seems likely to come across as a bad attempt at humour.
This PR special-cases these messages when a user is suspended or silenced forever.
We now use the group's full name in group SMTP emails, so we are dropping the "via #{site_name}" suffix. If group owners still want this, they can just change the full name of the group.
Configuring staged users to watch categories and tags is a way to sign
them up to get many emails. These emails may be unwanted and get marked
as spam, hurting the site's email deliverability.
Users can opt-in to email notifications by logging on to their
account and configuring their own preferences.
If staff need to be able to configure these preferences on behalf of
staged users, the "allow changing staged user tracking" site setting
can be enabled. Default is to not allow it.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Flips content_security_policy_frame_ancestors default to enabled, and
removes HTTP_REFERER checks on embed requests, as the new referer
privacy options made the check fragile.
There was a bug with changing timestamps using the topic wrench button. Under some circumstances, a topic was disappearing from the top of the latest tab after changing timestamps. Steps to reproduce:
- Choose a topic on the latest tab (the topic should be created some time ago, but has recent posts)
- Change topic timestamps (for example, move them one day forward).
- Go back to the latest tab and see that topic has disappeared.
This PR fixes this. We were setting topic.bumped_at to the timestamp the user specified in the modal. This is incorrect. Instead, we should be setting topic.bumped_at to the created_at timestamp of the last regular (not a whisper and so on) post in the topic.
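A sketch of the corrected assignment (scope names are approximate):
```ruby
# bumped_at should track the last regular post, not the user-picked timestamp.
last_regular_post =
  topic.posts.where(post_type: Post.types[:regular]).order(:created_at).last
topic.update!(bumped_at: last_regular_post.created_at)
```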
* No need to return anything except a status code from the server
* Switch a badge state before sending a request and then switch it back in case of an error
Currently when bulk-awarding a badge that can be granted multiple times, users in the CSV file are granted the badge once no matter how many times they're listed in the file and only if they don't have the badge already.
This PR adds a new option to the Badge Bulk Award feature so that it's possible to grant users a badge even if they already have the badge and as many times as they appear in the CSV file.
A couple of weeks ago we made a change that skipped compressing assets used by the theme qunit page: https://github.com/discourse/discourse/pull/13619. This is a follow-up PR to stop the application helper from generating the assets for the theme qunit page with `.br` or `.gzip` extensions when a site uses S3 as a CDN.
Fixes two issues:
- ignores invalid XML in custom icon sprite SVG file (and outputs an error if sprite was uploaded via admin UI)
- clears SVG sprite cache when deleting an `icons-sprite` upload in a theme
This PR fixes a couple of issues related to group SMTP:
1. When running the group SMTP job, we were exiting early if the email was for the OP because of an IMAP race condition. However this causes issues when replying as a new topic for an existing SMTP topic, as the recipient does not get the OP email which can cause threading problems.
2. When sending emails for a new topic spun out as in issue 1., we were not maintaining the original subject/topic title, because that is based on the incoming email record, which was never created since the group SMTP email was never sent because of issue 1.
By default, Twitter will return the URL for the avatar image of the tweet poster as the `og:image` value.
However, if the `user_generated` attribute is true, we should not use this as the avatar URL, as it will be a URL of an image in the tweet itself (e.g., an image belonging to a tweeted news story).
Previously we would re-calculate topic_user.liked for all users who have ever viewed the source or destination topic. This can be very expensive on large sites. Instead, we can use the array of moved post ids to find which users are actually affected by the move, and restrict the update query to only check/update their records.
On an example site this reduced the `update_post_action_cache` time from ~27s to 300ms
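A rough sketch of the idea (not the actual query):
```ruby
# Only users who acted on the moved posts can have a stale `liked` flag.
affected_user_ids =
  PostAction.where(post_id: moved_post_ids).distinct.pluck(:user_id)

# Restrict the check/update to those users' rows in the two topics.
scope = TopicUser.where(
  topic_id: [source_topic.id, destination_topic.id],
  user_id: affected_user_ids
)
# ...recalculate topic_user.liked for `scope` only...
```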
This PR adds the first use of Uppy in our codebase, hidden behind an `enable_experimental_image_uploader` site setting. When the setting is enabled only the user card background uploader will use the new uppy-image-uploader component added in this PR.
I've introduced an UppyUpload mixin that has feature parity with the existing Upload mixin, and improves it slightly to deal with multiple/single file distinctions and validations better. For now, this just supports the XHRUpload plugin for uppy, which keeps our existing POST to /uploads.json.
If a user had a staged account and logged in using a third party service,
a different username was suggested. This change will try to use the
username given by the authentication provider first, then the current
staged username, and lastly suggest a new one.
The date shown in topic timeline was one day later if the post at that
position was made near midnight. This happened because the days number
was rounded down.
When a staged user tried to redeem an invite, a different username was
suggested and manually typing the staged username failed because the
username was not available.
When secure media is enabled or when upload secure status
is updated, we also try and update the upload ACL. However
if the object storage provider does not implement this we
get an Aws::S3::Errors::NotImplemented error. This PR handles
this error so the update_secure_status method does not error
out and still returns whether the secure status changed.
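A sketch of the rescue (method and variable names are illustrative):
```ruby
begin
  s3_object.acl.put(acl: secure ? "private" : "public-read")
rescue Aws::S3::Errors::NotImplemented => e
  # Some S3-compatible providers don't support ACLs; carry on so
  # update_secure_status can still report whether the status changed.
  Rails.logger.warn("Could not update upload ACL: #{e.message}")
end
```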
User flair was given by user's primary group. This PR separates the
two, adds a new field to the user model for flair group ID and users
can select their flair from user preferences now.
There was an issue with the TopicTimerEnqueuer where any timer
that failed to enqueue_typed_job with an error would prevent
all other pending timers after the one that errored from running.
To mitigate this we just capture the error and log it (so we can
still fix it if needed for bug crushing) and proceed with the
rest of the timer enqueues.
The commit https://github.com/discourse/discourse/pull/13544 highlighted
this issue originally in hosted sites.
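A sketch of the mitigation:
```ruby
# Isolate each timer so one failure can't block the rest of the queue.
timers.each do |timer|
  begin
    enqueue_typed_job(timer)
  rescue => e
    # Log so the underlying bug can still be found and fixed.
    Rails.logger.error("Failed to enqueue topic timer #{timer.id}: #{e.message}")
  end
end
```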
Mixing multisite and standard specs can lead to issues (e.g. when using `fab!`)
Disabled the (upcoming https://github.com/discourse/rubocop-discourse/pull/11) rubocop rule for two files that have thoroughly tangled both types of specs.
* FEATURE: Redirect logged in user to invite topic
Users who were already logged in and were given an invite link to a
topic used to see an error message saying that they already have an
account and cannot redeem the invite. This commit amends that behavior
and redirects the user directly to the topic, if they can see it.
* FEATURE: Add logged in user to invite groups
Users who were already logged in and were given an invite link to a
group used to see an error message saying that they already have an
account and cannot redeem the invite. This commit amends that behavior
and adds the user to the group.
Take 2 of https://github.com/discourse/discourse/pull/13466.
Fixes a few issues with the original PR:
- color definition stylesheet target now includes the theme id, to avoid themes set to use the default color scheme loading the same stylesheet
- changes the internal cache key for color definition stylesheet to reset the pre-existing cache
And also move all the "top topics by period" routes to query string param.
/top/monthly => /top?period=monthly
/c/:slug/:id/l/top/monthly => /c/:slug/:id/l/top?period=monthly
/tag/:slug/l/top/daily => /tag/:slug/l/top?period=daily (new)
Users can invite people to topics from secured category, but they will
not be redirected to the topic after signing up unless they have the
permissions to view the topic. This commit shows a warning when an invite
is saved if the topic is in a secured category and none of the invite
groups are allowed to see it.
Skip group SMTP email (and add log) if:
* topic is deleted
* post is deleted
* smtp has been disabled for the group
Skip without log if:
* enable_smtp site setting is false
* disable_emails site setting is yes
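A sketch of the resulting guards (the `skip` logging helper is hypothetical):
```ruby
def execute(args)
  return if !SiteSetting.enable_smtp            # skip silently
  return if SiteSetting.disable_emails == "yes" # skip silently

  return skip(:post_deleted) if post.blank?
  return skip(:topic_deleted) if post.topic.blank?
  return skip(:group_smtp_disabled) if !group.smtp_enabled

  # ...send the group SMTP email...
end
```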
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
For other private messages we have the site setting
personal_email_time_window_seconds (default 20s) which allows
people to edit their post etc. before the email is sent.
This PR makes the Jobs::GroupSmtpEmail enqueuer in the
PostAlerter use the same delay.
This is just a hunch, but this is quite a complex test.
I think there is some timing issue where the jobs
enqueued for generating the thumbnails via the serializer's
thumbnails method aren't finished in time before
we do the json[:thumbnails] check. Split the tests up
into two: one checking that the right jobs are enqueued,
and another, with Jobs.run_immediately!, that checks that
json[:thumbnails] is correct.
We renamed the site setting for this long ago, but there
were a few places left in the code base where "ninja edit"
needed to be turned into "grace period". Doing this here
to avoid combative language.
We want to put the name of the trust level in to generated URLs, not the human-readable form.
i.e.:
`/admin/users/list/newuser`
rather than:
`/admin/users/list/new user`
Updated the context name, and fixed a typo that was the source of flakiness.
The error was:
```
1) TopicView with a few sample posts #first_post_bookmark_reminder_at gets the first post bookmark reminder at for the user
Failure/Error: expect(second[:reminder_at]).to eq_time(bookmark1.reminder_at)
2021-07-01 06:49:40 UTC is not within 1 millisecond of 2021-07-01 06:49:39 UTC
# ./spec/components/topic_view_spec.rb:426:in `block (4 levels) in <main>'
# ./spec/rails_helper.rb:279:in `block (2 levels) in <top (required)>'
# ./bundle/ruby/2.7.0/gems/webmock-3.13.0/lib/webmock/rspec.rb:37:in `block (2 levels) in <top (required)>'
```
When a topic is fully merged into another topic we close it and schedule it for deletion. But last time I changed this place I added a bug: when merging all posts in a topic except the first one, the topic was being closed too.
If the OP is not merged into another topic, the original topic shouldn't be closed and marked for deletion. This PR fixes this.
We recently changed (https://github.com/discourse/discourse/pull/13407) the behaviour of the topic level bookmark button. That PR made the button open the edit bookmark modal when there is only one bookmark on the topic, instead of just removing that bookmark as it did before.
This PR fixes the following problems that weren't taken into account in the previous PR:
1. Everything should work fine even on very big topics, when a bookmarked post is unloaded from the post stream. I've added code that loads the post we need and makes everything work as expected
2. When at least one bookmark on the topic has a reminder, we should always be showing the icon with a clock on the topic level bookmark button
3. We should show correct tooltips for the topic level bookmark button
We do not want to show the In Reply To section of the
group SMTP email template, it is similar to Context Posts
which we removed and is unnecessary.
This PR also removes the link to staged user profiles in
the email; their email addresses will just be converted
to regular mailto: links.
This PR makes several changes to the group SMTP email contents to make it look more like a support inbox message.
* Remove the context posts, they only add clutter to the email and replies
* Display email addresses of staged users instead of odd generated usernames
* Add a "please reply above this line" message to sent emails
This PR backtracks a fair bit on this one https://github.com/discourse/discourse/pull/13220/files.
Instead of sending the group SMTP email for each user via `UserNotifications`, we are changing to send only one email with the existing `Jobs::GroupSmtpEmail` job and `GroupSmtpMailer`. We are changing this job and mailer along with `PostAlerter` to make the first topic allowed user the `to_address` for the email and any other `topic_allowed_users` to be the CC address on the email. This is to cut down on emails sent via SMTP, which is subject to daily limits from providers such as Gmail. We log these details in the `EmailLog` table now.
In addition to this, we have changed `PostAlerter` to no longer rely on incoming email email addresses for sending the `GroupSmtpEmail` job. This was unreliable as a user's email could have changed in the meantime. Also it was a little overcomplicated to use the incoming email records -- it is far simpler to reason about to just use topic allowed users.
This also adds a fix to include cc_addresses in the EmailLog.addressed_to_user scope.
* FEATURE: Staff can receive pending user reminders more frequently.
We now express the "pending_users_reminder_delay" in minutes instead of hours so staff can have finer control over the delay.
We need to keep in mind that the reminders could still take up to 20 minutes, even when using a lower value. We send them from a scheduled job.
* Migrate to a new site setting for the reminders delay
At the moment it only implements the server side of this, as my need is for automation purposes. However it should probably be added to the UI too, as it's unexpected to have pinned_until and no bannered_until.
Badges that are awarded multiple times can be favorite and not favorite
at the same time. This caused a few problems when users tried to favorite
them, as they were counted multiple times or their state was incorrectly
displayed.
Previously, we were storing custom svg sprite paths in the cache. This is a problem because sprites in themes get stored as uploads, and the returned paths were files in the temporary download cache which could sometimes be cleaned up, resulting in a broken cache.
I previously tried to fix this by skipping the missing files and clearing the cache, but that didn't work out well with CDNs. This PR stores the contents of the files in the custom_svg_sprites cache to avoid the problem of missing temp files.
Also, plugin custom icons are only included if the plugin is enabled.
In #12841, we started setting the ReviewableQueuedPost's target and topic after approving it instead of storing them in the payload. As a result, the reviewable_counts query started to include queued posts.
When a category is set to require approval, every post has an associated reviewable. Counting each post's associated queued post is unnecessary in this case, so I added a WHERE clause to skip them.
This adds the following columns to EmailLog:
* cc_addresses
* cc_user_ids
* topic_id
* raw
This is to bring the EmailLog table closer in parity to
IncomingEmail so it can be better utilized for Group SMTP
and IMAP mailing.
The raw column contains the full content of the outbound email,
but _only_ if the new hidden site setting
enable_raw_outbound_email_logging is enabled. Most sites do not
need it, and it's mostly required for IMAP and SMTP sending.
In the next pull request, there will be a migration to backfill
topic_id on the EmailLog table, at which point we can remove the
topic fallback method on EmailLog.
Before this change, calling `StyleSheet::Manager.stylesheet_details`
for the first time resulted in multiple queries to the database. This is
because the code was modelled in a way where each `Theme` was loaded
from the database one at a time.
This PR restructures the code such that it allows us to load all the
theme records in a single query. It also allows us to eager load the
required associations upfront. In order to achieve this, I removed the
support of loading multiple themes per request. It was initially added
to support user selectable theme components but the feature was never
completed and abandoned because it wasn't a feature that we thought was
worth building.
We already reject email replies to public topics via `SiteSetting.disallow_reply_by_email_after_days` and raising the `OldDestinationError`. This PR introduces similar behaviour for group inboxes, but without the rejection, and **only when SMTP is enabled for the group**.
If a reply is sent via email and the post is older than `SiteSetting.disallow_reply_by_email_after_days` days ago, then we create a new topic instead of making a reply in the old one and link back to the original topic. This is done to prevent long running group inbox discussions.
If a user posted a URL that appeared inside a Onebox, then the user
got a duplicate link notice. This was fixed by skipping those links in
Ruby.
If a user posted a URL that was Oneboxed and contained other links that
appeared in previous posts, then the user got a duplicate link notice.
This was fixed by skipping those links in JavaScript.
The generated regular expressions did not contain \b, so they matched
any text that contained the word, even if it was only a substring of
another word.
For example, if "art" was a watched word, a post containing the word
"artist" matched.
When we are emailing people from a group inbox, we are having
a PM conversation with them, as a support account would. In this
case mailing list headers do not make sense. It is not like a forum
topic where you may have tens or hundreds of participants -- it is a
conversation between the group and a small handful of people
directly contacting the group, often just one person.
The only header left intact was List-Unsubscribe, which is important
for letting people opt out of notifications.
* FIX: Allow SVG uploads if dimensions are a fraction of a unit
`UploadCreator` counts the number of pixels in a file to determine if it is valid. `pixels` is calculated by multiplying the width and height of the image, as determined by FastImage.
SVG files can have their width/height expressed in a variety of different units of measurement. For example, ‘px’, ‘in’, ‘cm’, ‘mm’, ‘pt’, ‘pc’, etc are all valid within SVG files. If an image has a width of `0.5in`, FastImage may interpret this as being a width of `0`, meaning it will report the `size` as being `0`.
However, we don’t need to concern ourselves with the number of ‘pixels’ in an SVG file, as that is irrelevant for this file format, so we can skip over the check for `pixels == 0` when processing this file type.
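A sketch of the guard (the real check lives in `UploadCreator`):
```ruby
# `pixels` is width * height as reported by FastImage; an SVG sized in
# physical units (e.g. 0.5in) can legitimately report 0 here.
def pixels_check_required?(extension)
  extension != "svg"
end

raise "invalid image" if pixels_check_required?(extension) && pixels == 0
```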
* DEV: Speed up getting SVG dimensions
The `-ping` flag prevents the entire image from being rasterized before a result is returned. See:
https://imagemagick.org/script/command-line-options.php#ping
The first thing we needed here was an enum rather than a boolean to determine how a directory_column was created. Now we have `automatic`, `user_field` and `plugin` directory columns.
This plugin API assumes that the plugin has added a migration to add a column to the `directory_items` table.
This was created to be initially used by discourse-solved. PR with API usage - https://github.com/discourse/discourse-solved/pull/137/
Anonymizing a user changed their email address, destroyed all
associated InvitedUser records, but did not destroy the invites
associated to user's email.
Profiling showed that we were spending roughly 10% of a request's time creating all
the ActiveRecord objects for categories in the `Site` model on a site with 61 categories.
Instead of querying for the categories each time based on which categories the user can see,
we can just preload all of the categories upfront and filter out the
categories that the user can not see.
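A sketch of the approach (association and guardian method names are approximate):
```ruby
# Load every category once, with its associations eager loaded, then
# filter in memory instead of issuing a per-user visibility query.
all_categories = Category.includes(:uploaded_logo, :uploaded_background).to_a
visible_categories = all_categories.select { |c| guardian.can_see_category?(c) }
```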
When dismissing new topics for the Tracked filter, the dismiss was
limited to 30 topics which is the default per page count for TopicQuery.
This happened even if you specified which topic IDs you were
selectively dismissing. This PR fixes that bug, and also moves
the per_page_count into a DEFAULT_PER_PAGE_COUNT for the TopicQuery
so it can be stubbed in tests.
Also moves the unused stub_const method into the spec helpers
for cases like this; it is much better to handle this in one place
with an ensure. In a follow up PR I will clean up other specs that
do the same thing and make them use stub_const.
When we call Bookmark.cleanup! we want to make sure that
topic_user.bookmarked is updated for topics linked to the
bookmarks that were deleted. Also when PostDestroyer calls
destroy and recover. We have a job for this already --
SyncTopicUserBookmarked -- so we just utilize that.
Subclasses must call #delete_user_actions inside build_actions to support user deletion. The method adds a delete user bundle, which has a delete and a delete + block option. Every subclass is responsible for implementing these actions.
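A sketch of a subclass wiring this in (the subclass and signatures are illustrative):
```ruby
class ReviewableExample < Reviewable
  def build_actions(actions, guardian, args)
    # ...type-specific actions...

    # Adds the delete user bundle, with "delete" and "delete + block" options.
    delete_user_actions(actions) if guardian.can_delete_user?(target_created_by)
  end
end
```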
Leaking state and non-obvious order (before :all runs *before* RailsHelper.test_setup) are not worth it.
A replacement PR for #13370. Fixes some flaky specs, e.g.
```
bin/rspec './spec/components/freedom_patches/translate_accelerator_spec.rb[1:3]' './spec/jobs/clean_up_user_export_topics_spec.rb[1:1]' --tag ~type:multisite --seed 35994
```
Also included:
* DEV: No need for locale reset (we do it anyway in rails_helper in `test_setup`)
Since we use the event to perform additional validations on the file, we should check if it added any errors to the upload before saving it. This change makes the UploadCreator more consistent since we no longer have to rely on exceptions.
Adds a new `smtp_group_id` column to `EmailLog` which is filled in if the mail `from_address` matches a group's `email_username`. This is for easier debugging, so we know which emails have been sent via group SMTP.
When replying to a user_private_message email originating from
a group PM that does _not_ have a reply key (e.g. when replying
directly to the group's SMTP address), we were mistakenly linking
the new post created from the reply to the OP and the user who
created the topic, based on the first IncomingEmail message ID in
the topic, rather than using the correct reply to user and post number
that the user actually replied to.
We now use the In-Reply-To header to look up the corresponding EmailLog
record when the user who replied was sent a user_private_message email,
and use the post from that as the reply_to_user/post.
This also removes superfluous filtering of incoming_email records. After
already filtering by message_id and then addressed_to_user (which only
returns incoming emails where the to, from, or cc address includes any
of the user's emails), we were filtering again but in the ruby code for
the exact same conditions. After removing this all existing tests still
pass.
If force_https is enabled, all resources (including markdown preview and so on) will be accessed using HTTPS.
If for any reason you attempt to link to content that is not reachable over HTTPS, that content may appear broken.
The `ember_jquery` bundle contains production builds of Ember and jQuery
which doesn't work with tests. This commits introduces a new
`theme_qunit_vendor` bundle which is copy of the `vendor` bundle but
doesn't contain `ember_jquery`.
This commit is a partial revert of
409c8585e4
The error was:
```
Jobs::Onceoff can run all once off jobs without errors
Failure/Error: j.new.execute_onceoff(nil)
TypeError:
can't create instance of singleton class
# ./spec/integrity/onceoff_integrity_spec.rb:13:in `new'
# ./spec/integrity/onceoff_integrity_spec.rb:13:in `block (3 levels) in <main>'
# ./spec/integrity/onceoff_integrity_spec.rb:12:in `each'
# ./spec/integrity/onceoff_integrity_spec.rb:12:in `block (2 levels) in <main>'
# ./spec/rails_helper.rb:279:in `block (2 levels) in <top (required)>'
# ./bundle/ruby/2.7.0/gems/webmock-3.13.0/lib/webmock/rspec.rb:37:in `block (2 levels) in <top (required)>'
```
Sometimes the class found by `ObjectSpace.each_object(Class)` would be e.g:
`#<Class:#<Jobs::MigrateBadgeImageToUploads:0x00007f96f8277400>>`
…instead of e.g:
`#<Jobs::MigrateBadgeImageToUploads:0x00007f96ffa59540>`
This commit changes the `#select` to filter out those classes.
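A sketch of the filter:
```ruby
# Drop singleton classes so each once-off job class appears exactly once.
job_classes = ObjectSpace.each_object(Class).select do |klass|
  klass < ::Jobs::Onceoff && !klass.singleton_class?
end
```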
* FIX: Quoting Oneboxed content should exclude formatting
When a post is quoted that includes Oneboxed content, we should not include the formatting generated by the Onebox. Rather, we should attempt to collapse the link referenced by the Onebox to a single line text link.
* DEV: fix tests
Over the years we have found that a few communities never discovered tags.
Instead of having them default off we now have them default on, ensuring
that everyone finds out about them.
Co-authored-by: Dan Ungureanu <dan@ungureanu.me>
IMDb movie links were being rendered as posters. This was because
IMDb was sending `og:type` as `image` randomly in some cases. To
fix this we'll now default all IMDb links as article type. This will
ensure that the IMDb onebox link includes all the information instead
of showing just a poster without any context.
When a group only has SMTP enabled and not IMAP, we do not
want to enqueue the :group_smtp_email job because using the group's
SMTP credentials for sending user_private_message emails is
handled by the UserNotifications class.
We do not want the :group_smtp_email job to be enqueued because
that uses a reply key instead of the group.email_username
for the reply-to address which is not what we want for SMTP
only, and also creates an IncomingEmail record to prevent IMAP
double syncing which we do not need either.
There is an open question about what happens when IMAP is
enabled after SMTP has been enabled for a while, and also questions
around whether we could do away with :group_smtp_email altogether
and handle everything via EmailLog and UserNotifications, adding
additional columns to the former and modifying the Imap::Sync
class to take this into account...a lot more further testing
for IMAP needs to be done to answer those questions.
For now, this fix should be sufficient to get the correct
reply-to address for user_private_response messages sent in
response to emails sent directly to the group's
email_username SMTP address.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
In Ember CLI addons get put into the vendor bundle, as opposed to their
own bundle like we're doing in the Rails app. We never use pretty-text
without our vendor bundle so this should have no difference on
performance.
We need to keep the pretty-text bundle for server side cooking.
This PR changes the `UserNotification` class to send outbound `user_private_message` using the group's SMTP settings, but only if:
* The first allowed_group on the topic has SMTP configured and enabled
* SiteSetting.enable_smtp is true
* The group does not have IMAP enabled, if this is enabled the `GroupSMTPMailer` handles things
The email is sent using the group's `email_username` as both the `from` and `reply-to` address, so when the user replies from their email it will go through the group's SMTP inbox, which needs to have email forwarding set up to send the message on to a location (such as a hosted site email address like meta@discoursemail.com) where it can be POSTed into discourse's handle_mail route.
Also includes a fix to `EmailReceiver#group_incoming_emails_regex` to include the `group.email_username` so the group does not get a staged user created and invited to the topic (which was a problem for IMAP), as well as updating `Group.find_by_email` to find using the `email_username` as well for inbound emails with that as the TO address.
#### Note
This is safe to merge without impacting anyone seriously. If people had SMTP enabled for a group they would have IMAP enabled too currently, and that is a very small amount of users because IMAP is an alpha product, and also because the UserNotification change has a guard to make sure it is not used if IMAP is enabled for the group. The existing IMAP tests work, and I tested this functionality by manually POSTing replies to the SMTP address into my local discourse.
There will probably be more work needed on this, but it needs to be tested further in a real hosted environment to continue.
This improves the display of available tags in categories that are
configured to require at least (x) tags from a tag group.
There are two changes included:
- regular users will now see all the available tags in the required tag
group (previously they could see a max. of 5 tags)
- staff users will now see the tags from the required tag group when
the tag group contains more tags than the default limit (also set to 5)
Both changes only apply to the default query (i.e. no search terms).
It used to require SiteSetting.min_trust_level_to_allow_invite to
invite a user to a group, even if the user existed and the inviter was
a group owner.
Users who use encoded slugs on their sites sometimes run into a 500 error when pasting a link to another topic in a post. The problem happens when generating a backward "reflection" link that would appear in a linked topic. The link URL is restricted at the database level to 500 characters in length. At first glance, it should work since we have a restriction on topic title length.
But it doesn't work when a site uses encoded slugs, like here (take a look at the URL). The link to a topic, in this case, can be much longer than 500 characters.
By the way, an error happens only when generating a "reflection" link and doesn't happen with a direct link, since we truncate direct links. That works because, in this case, the original long link is still present in the post body and can be used for navigation. But we can't do the same for backward "reflection" links (without rewriting their implementation); the whole link must be saved to the database.
The simplest and cleanest solution will be just to remove the restriction on the database level. Abuse is impossible here since we are already protected by the restriction on topic title length. There aren’t performance benefits in using length-constrained columns in Postgres, in fact, length-constrained columns need a few extra CPU cycles to check the length when storing data.
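A sketch of such a migration (the table and column names are assumptions):
```ruby
class RemoveUrlLengthLimitFromTopicLinks < ActiveRecord::Migration[6.1]
  def up
    # Drop the 500-character limit; length is already bounded in practice
    # by the restriction on topic title length.
    change_column :topic_links, :url, :text
  end

  def down
    change_column :topic_links, :url, :string, limit: 500
  end
end
```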
When a topic is fully merged into another topic we close it and schedule it for deletion. But, because of a bug, if the merged topic contained moderator actions or small actions it wouldn't be closed. This change fixes that problem.
An important note: in general, we don't want to close a topic after moving posts if it still contains regular posts or whispers. But when we move posts to a private message, we don't want the notice about it to be publicly visible, so in that case we use whispers with action_code == 'split_topic' instead of small actions, and we should ignore this specific kind of whisper when deciding whether to close the merged topic.
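A sketch of the resulting "should we close the merged topic?" check (names illustrative, not the exact implementation):

```ruby
# Sketch: a topic counts as fully merged when no regular posts or
# whispers remain, ignoring whispers that are just 'split_topic'
# notices. Moderator/small actions never block closing.
def fully_merged?(topic)
  topic.posts.none? do |post|
    blocking =
      post.post_type == Post.types[:regular] ||
      post.post_type == Post.types[:whisper]
    blocking && post.action_code != "split_topic"
  end
end
```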
It was not clear that replace watched words can be used to replace text
with URLs. This introduces a new watched word type that makes it easier
to understand.
* FIX: return an empty result if response from Amazon is missing attributes
Check we have the basic attributes required to construct a Onebox for Amazon.
This is an attempt to handle scenarios where we receive a valid 200-status response from an Amazon request that does not include the data we’re expecting.
* Update lib/onebox/engine/amazon_onebox.rb
Co-authored-by: Régis Hanol <regis@hanol.fr>
Refactors `TrustLevel` and moves translations from server to client
Additional changes:
* "staff" and "admin" wasn't translatable in site settings
* it replaces a concatenated string with a translation
* uses translation for trust levels in users_by_trust_level report
* adds a DB migration to rename keys of translation overrides affected by this commit
In Ember CLI, the vendor bundle includes Ember/jQuery, so this brings
our app closer to that configuration.
We have a couple of pages (Reset Password / Confirm New Email) where we need
`ember_jquery` without vendor, so the file still exists for those cases.
Some specs failed when `LOAD_PLUGINS=1` was set while migrating the test DB and the narrative-bot plugin disabled the `send_welcome_message` site setting.
On the web, we display only an excerpt in a monospace font and the rest
of the body is hidden under ellipsis. The email displayed both of them
and it did not use the same style. This commit leaves only the excerpt
in emails and makes it use a monospace font to display it.
When a topic is fully merged into another topic we close it. Now we want also to set a timer for deleting this topic. By default, stub topics will be deleted in 7 days. Users can change this period or disable auto-deleting by setting the period to 0.
This overhauls the user interface for the group email settings management, aiming to make it a lot easier to test the settings entered and confirm they are correct before proceeding. We do this by forcing the user to test the settings before they can be saved to the database. It also includes some quality of life improvements around setting up IMAP and SMTP for our first supported provider, GMail. This PR does not remove the old group email config, that will come in a subsequent PR. This is related to https://meta.discourse.org/t/imap-support-for-group-inboxes/160588 so read that if you would like more backstory.
### UI
Both the `enable_imap` and `enable_smtp` site settings must be true to test this. You must enable SMTP first to enable IMAP.
You can prefill the SMTP settings with GMail configuration. To proceed with saving these settings you must test them, which is handled by the EmailSettingsValidator.
If there is an issue with the configuration or credentials a meaningful error message should be shown.
IMAP settings must also be validated when IMAP is enabled, before saving.
When saving IMAP, we fetch the mailboxes for that account and populate them. A mailbox must be selected and saved for IMAP to work (the feature acts as though it is disabled until the mailbox is selected and saved).
### Database & Backend
This adds several columns to the Groups table. The purpose of this change is to make it much more explicit that SMTP/IMAP is enabled for a group, rather than relying on settings not being null. Also included is an UPDATE query to backfill these columns. These columns are automatically filled when updating the group.
For GMail, we now filter the mailboxes returned. This is so users cannot use a mailbox like Sent or Trash for syncing, which would generally be disastrous.
There is a new group endpoint for testing email settings. This may be useful in the future for other places in our UI, at which point it can be extracted to a more generic endpoint or module to be included.
The default `allow_title` column value is `true` for the regular and leader badges. After an admin disables it, the seed method was enabling it again during upgrades. We shouldn't do that for existing badges.
We need to be careful when stubbing this method. SessionController#become won't be defined if production is set to true, so if these tests run first, calling #sign_in will fail for other tests.
Calling sign_in before stubbing guarantees the method is defined because the check happens when the class is loaded.
Discourse shouldn't dynamically calculate the path of uploads and optimized images after a file has been stored on disk or S3. Otherwise it might calculate the wrong path if the SHA1 or extension stored in the database doesn't match the actual file path.
* FIX: Improve GitHub folder regexp in Onebox
It used to match any GitHub URL that was not matched by the other GitHub
Oneboxes and it did not do a good job at handling those. With this
change, the generic Onebox will handle the remaining URLs.
* FEATURE: Add Onebox for GitHub Actions
* FEATURE: Add Onebox for PR check runs
* FIX: Remove image from GitHub folder Oneboxes
It is a generic, auto-generated image which does not provide any value.
* DEV: Add tests
* FIX: Strip HTML comments from PR body
Previously we would retry push notifications indefinitely for all errors
except ExpiredSubscription.
Under certain conditions other persistent errors may arise, such as a persistent
rate limit.
If we track more than 3 errors in a period of time longer than a day, we will
delete the subscription.
Also performs a bit of internal cleanup to ensure protected methods really
are private.
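A sketch of the retry/deletion policy described above (thresholds from this description; column names are assumptions):

```ruby
# Sketch: count errors per subscription and delete it once more than
# 3 errors span more than a day. `error_count` / `first_error_at`
# are assumed columns.
MAX_ERRORS = 3

def track_push_error!(subscription)
  subscription.error_count += 1
  subscription.first_error_at ||= Time.zone.now

  errors_span_a_day = Time.zone.now - subscription.first_error_at > 1.day
  if subscription.error_count > MAX_ERRORS && errors_span_a_day
    subscription.destroy!
  else
    subscription.save!
  end
end
```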
Admins can visit an approved queued topic from the review queue by clicking its title. We no longer store the created post and topic ids in the reviewable's payload object. Instead, we set the `topic_id` and `target_id` attributes.
Using 1 as the default value is confusing for some people as low-score flags are hidden unless staff uses the "(any)" priority filter. Let's change it to 0 and let every site adjust the setting to match their needs.
* Move onebox gem in core library
* Update template file path
* Remove warning for onebox gem caching
* Remove onebox version file
* Remove onebox gem
* Add sanitize gem
* Require onebox library in lazy-yt plugin
* Remove onebox web specific code
This code was used in standalone onebox Sinatra application
* Merge Discourse specific AllowlistedGenericOnebox engine in core
* Fix onebox engine filenames to match class name casing
* Move onebox specs from gem into core
* DEV: Rename `response` helper to `onebox_response`
Fixes a naming collision.
* Require rails_helper
* Don't use `before/after(:all)`
* Whitespace
* Remove fakeweb
* Remove poor unit tests
* DEV: Re-add fakeweb, plugins are using it
* Move onebox helpers
* Stub Instagram API
* FIX: Follow additional redirect status codes (#476)
Don’t throw errors if we encounter 303, 307 or 308 HTTP status codes in responses
* Remove an empty file
* DEV: Update the license file
Using the copy from https://choosealicense.com/licenses/gpl-2.0/#
Hopefully this will enable GitHub to show the license UI?
* DEV: Update embedded copyrights
* DEV: Add Onebox copyright notice
* DEV: Add MIT license, convert COPYRIGHT.txt to md
* DEV: Remove an incorrect copyright claim
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
Co-authored-by: jbrw <jamie@goatforce5.org>
There are two methods which the server uses to verify an invite is being redeemed with a matching email:
1) The email token, supplied via a `?t=` parameter
2) The validity of the email, as provided by the auth provider
Only one of these needs to be true for the invite to be redeemed successfully on the server. The frontend logic was previously only checking (2). This commit updates the frontend logic to match the server.
This commit does not affect the invite redemption logic. It only affects the 'show' endpoint, and the UI.
The previous commits removed reviewables leading to a bad user
experience. This commit updates the status, replaces actions with a
message and greys out the reviewable.
This PR improves the UI of bulk select so that its context is applied to the Dismiss Unread and Dismiss New buttons. Regular users (not just staff) are now able to use topic bulk selection on the /new and /unread routes to perform these dismiss actions more selectively.
For Dismiss Unread, there is a new count in the text of the button and in the modal when one or more topics are selected with the bulk select checkboxes.
For Dismiss New, there is a count in the button text, and we have added functionality to the server side to accept an array of topic ids to dismiss new for, instead of always having to dismiss all new, the same as the bulk dismiss unread functionality. To clean things up, the `DismissTopics` service has been rolled into the `TopicsBulkAction` service.
We now also show the top Dismiss/Dismiss New button based on whether the bottom one is in the viewport, not just based on the topic count.
Based on feedback from Matt Haughey, we don't need to use so many words when describing a deleted topic or post.
Co-authored-by: Martin Brennan <martin@discourse.org>
We support two types of custom excerpts. They can be <div class="excerpt"> or <span class="excerpt">: b21f74060e/lib/excerpt_parser.rb (L120)
We also ignore the max excerpt length for custom excerpts, but we forgot to handle the div variant when ignoring the max length.
When editing the first post for the topic we do two AJAX requests
to two separate controllers in this order:
PUT /t/topic-name
PUT /posts/2489523
This causes two post revisor calls, which end up triggering the
:post_edited DiscourseEvent twice. This is then picked up and sent
as a WebHook event twice. However we do not need to send a :post_edited
webhook event if the first post is being edited and topic_changed is
true from the :post_edited DiscourseEvent, because a second event will
shortly come through for just the post.
See https://meta.discourse.org/t/post-webhook-fires-two-times-on-post-edited-for-first-post-in-a-topic/162408
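A sketch of the described guard (the helper name is hypothetical):

```ruby
# Sketch: skip the duplicate web hook when the first post's edit also
# changed the topic, since a post-only event follows shortly after.
DiscourseEvent.on(:post_edited) do |post, topic_changed|
  next if post.is_first_post? && topic_changed
  enqueue_post_edited_web_hook(post) # hypothetical helper
end
```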
Continued on from https://github.com/discourse/discourse/pull/10590
When uploads are created from the composer (editing or creating a post),
for sites with secure uploads enabled we assume security by default and
that new upload is set to secure. When the post is created, we then
check whether the post uploads _actually_ need to be secure and adjust
accordingly.
We were not doing this when revising a post, so when a new upload was
created when editing a post in a public topic, the secure status stayed
true erroneously causing issues with image previews, among other things.
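The fix amounts to running the same security re-check on revise as on create, roughly (a sketch; setting and helper names are assumptions):

```ruby
# Sketch: after revising, re-evaluate whether the post's uploads
# actually need to be secure, mirroring the create path.
def after_post_revise(post)
  return unless SiteSetting.secure_media? # setting name assumed
  post.uploads.each(&:update_secure_status)
end
```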
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
Under certain conditions, admins would miss messages when posting actions in
topics where they have permission.
This also fixes an error where we would sometimes explode when publishing to
an empty group.
Previously, we only precompiled the CSS for parent themes but not for
child themes. As a result, the CSS for child themes was being compiled
during the first request, which made the response time for that request
high.
Watched words are always compiled to regular expressions, regardless of
whether watched_words_regular_expressions is enabled. Internally, wildcard
characters are replaced with a regular expression that matches any
non-whitespace character.
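Conceptually, the wildcard expansion works along these lines (a sketch; the helper name is illustrative):

```ruby
# Sketch: each '*' in a watched word matches any run of
# non-whitespace characters; everything else is escaped literally.
def wildcard_to_regexp(word)
  escaped = word.split("*", -1).map { |part| Regexp.escape(part) }.join('\S*')
  Regexp.new("\\b#{escaped}\\b", Regexp::IGNORECASE)
end

wildcard_to_regexp("dis*rse") # => /\bdis\S*rse\b/i
```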
* FIX: Hide tag watched words if tagging is disabled
These 'autotag' words were shown even if tagging was disabled.
* FIX: Make autotag watched words case insensitive
This commit also fixes a bug where no tag was applied if no other tag
was already present.
Email change requests were never deleted, whether or not they completed
successfully. Abandoned requests have the disadvantage of showing up as
unconfirmed emails on the user's preferences page.
This is a recent regression introduced by https://github.com/discourse/discourse/pull/12937 which makes it so that when looking at a user profile that is not your own, specifically the category and tag notification settings, you would see your own settings instead of the target user. This is only a problem for admins because regular users cannot see these details for other users.
The issue was that we were using `scope` in the serializer, which refers to the current user, rather than using a scope for the target user via `Guardian.new(user)`.
However, on further inspection, the `notification_levels_for` method on `TagUser` and `CategoryUser` did not actually need to accept a Guardian instance; all it used it for was `guardian.anonymous?`, which is just a fancy way of saying `user.blank?`. The method now accepts a user instead, and the serializer passes the target user in.
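The shape of that change, roughly (a sketch; the method body and column names are illustrative, not the exact implementation):

```ruby
# Sketch of the signature change described above.
class TagUser < ActiveRecord::Base
  # Before: notification_levels_for(guardian), used only for
  # guardian.anonymous?. After: accept the target user directly.
  def self.notification_levels_for(user)
    return {} if user.blank? # equivalent to guardian.anonymous?
    where(user: user).pluck(:tag_id, :notification_level).to_h
  end
end
```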
* DEV: Allow wildcards in Oneboxer optional domain Site Settings
Allows a wildcard to be used as a subdomain on Oneboxer-related SiteSettings, e.g.:
- `force_get_hosts`
- `cache_onebox_response_body_domains`
- `force_custom_user_agent_hosts`
* DEV: fix typos
* FIX: Try doing a GET after receiving a 500 error from a HEAD
By default we try to do a `HEAD` request. If this results in a 500 error response, we should try a `GET` instead.
* DEV: `force_get_hosts` should be a hidden setting
* DEV: Oneboxer Strategies
Have an alternative oneboxing ‘strategy’ (i.e., set of options) to use when an attempt to generate a Onebox fails. Keep track of any non-default strategies that were used on a particular host, and use that strategy for that host in the future.
Initially, the alternate strategy (`force_get_and_ua`) forces the FinalDestination step of Oneboxing to do a `GET` rather than `HEAD`, and forces a custom user agent.
* DEV: change stubbed return code
The stubbed status code needs to be a value not recognized by FinalDestination
We have a few places in the code where we need to validate various email related settings, and will have another soon with the improved group email settings UI. This PR introduces a class which can validate POP3, IMAP, and SMTP credentials and also provide a friendly error message for issues if they must be presented to an end user.
This PR does not change any existing code to use the new service. I have added a TODO to change POP3 validation and the email test rake task to use the new validator post-release.
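Hypothetical usage of such a validator (the exact API and helper names here are assumptions):

```ruby
# Illustrative only: how a shared email settings validator might be
# called from a controller before saving group SMTP settings.
begin
  EmailSettingsValidator.validate_smtp(
    host: "smtp.gmail.com",
    port: 587,
    username: group.email_username,
    password: group.email_password,
  )
rescue => err
  # Surface a friendly, end-user-facing message (helper name assumed).
  render_json_error(friendly_email_settings_error(err))
end
```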
* FIX: Ensure the same email cannot be invited twice
When creating a new invite with a duplicated email, the old invite will
be updated and returned. When updating an invite with a duplicated email
address, an error will be returned.
* FIX: not Ember helper does not exist
* FIX: Sync can_invite_to_forum? and can_invite_to?
The two methods should perform the same basic set of checks, such as
checking the must_approve_users site setting.
Ideally, one of the methods would call the other one or be merged and
that will happen in the future.
* FIX: Show invite to group if user is group owner
* FIX: flaky specs after topic view custom filters
When ensuring TopicView class variables return to their original state, it should use an empty Hash instead of an empty Array. See
https://github.com/discourse/discourse/blob/master/lib/topic_view.rb#L60
* FIX: convert to string for topic view custom filter
Recalculating a ReviewableFlaggedPost's score after rejecting or ignoring it sets the score as 0, which means that we can't find them after reviewing. They don't surpass the minimum priority threshold and are hidden.
Additionally, we only want to use agreed flags when calculating the different priority thresholds.
Some emails coming in via the mail receiver can still end up
with bad encoding when trying to enqueue the job. This catches
the last encoding issue, forces ISO-8859-1, and encodes to
UTF-8 to circumvent the issue.
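The forced re-encoding amounts to something like this sketch:

```ruby
# Sketch of the fallback: every byte sequence is valid ISO-8859-1, so
# this conversion cannot fail and always yields valid UTF-8.
def force_utf8(raw)
  raw.force_encoding(Encoding::ISO_8859_1).encode(Encoding::UTF_8)
end
```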
Currently, when the target is not available we're returning the error message "`You are not permitted to view the requested resource`" which is not clear.
We have found that when receiving and posting inbound emails to the handle_mail route, it is better to POST the payload as a base64-encoded string to avoid strange encoding issues. This introduces a new `email_encoded` param and maintains the legacy `email` param, showing a deprecation warning. Eventually the old `email` param will be dropped and `email_encoded` will be the only way to POST into handle_mail.
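For example, a mail receiver might move from the raw `email` param to something like this (a sketch; URL and auth handling are illustrative):

```ruby
require "base64"
require "net/http"
require "uri"

# Sketch: POST the raw message base64-encoded to sidestep transport
# encoding issues. Endpoint path and credentials handling are assumed.
raw_email = File.read("message.eml")
uri = URI("https://forum.example.com/admin/email/handle_mail")
Net::HTTP.post_form(uri, "email_encoded" => Base64.strict_encode64(raw_email))
```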
The user may have changed their category or tag tracking settings since a topic was tracked/watched based on those settings in the past. In that case we need to alter the reason message we show them otherwise it is very confusing for the end user to be told they are tracking a topic because of a category, when they are no longer tracking that category.
For example: "You will see a count of new replies because you are tracking this category." becomes: "You will see a count of new replies because you were tracking this category in the past."
To do this, it was necessary to add tag and category tracking info to current user serializer. I improved the serializer code so it only does 3 SQL queries instead of 9 to get the tracking information for tags and categories for the current user.
Uncategorized was sometimes visible even if allow_uncategorized_topics
was false. This was especially happening on mobile, if at least one
topic was uncategorized.
* FIX: Link notification to first unread post
If a topic with a few posts was posted in a watched category or with a
watched tag, the created notification would always point to the last
post, instead of pointing to the first one.
The root cause is that the query that fetched the first unread post
uses 'TopicUser' records, and those are not created by default for
users watching a category or tag. In this case, it should use the
'CategoryUser' or 'TagUser' records.
* DEV: Use named bind variables
nil was converted to "" and the matching regex would then return []; calling max on that empty array produced nil.
Example exception:
```
NoMethodError (undefined method `<=' for nil:NilClass)
lib/text_sentinel.rb:71:in `seems_unpretentious?'
lib/validators/quality_title_validator.rb:13:in `validate_each'
lib/topic_creator.rb:25:in `valid?'
```
* FEATURE: add support for like webhooks
Add support for like webhooks. Webhook events are only sent when the user
is a member of one of the groups in the defined webhook group filters.
This also fixes group webhook events, which were previously never fired
because the logic was not correct.
Sometimes, parts of the application pass in the locale as a string, not a symbol. This was causing the translate_accelerator to cache two versions of the locale separately: one cache for the symbol version, and one cache for the string version. For example, in a running production process:
```
irb(main):001:0> I18n.instance_variable_get(:@loaded_locales)
=> [:en, "en"]
```
This commit ensures the `locale` key is always converted to a symbol, and adds a spec to ensure the same locale cannot appear twice in `@loaded_locales`.
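The fix amounts to normalizing at the cache boundary, along these lines (a sketch; the body is illustrative):

```ruby
# Sketch: normalize the locale to a symbol before it becomes a cache
# key, so "en" and :en share a single entry in @loaded_locales.
def ensure_loaded!(locale)
  locale = locale.to_sym
  @loaded_locales << locale unless @loaded_locales.include?(locale)
  # ... load and cache translations for `locale` ...
end
```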