By default, Rails only includes the `Vary: Accept` header in responses when the `Accept` header is included in the request. This means that proxies/browsers may cache a response to a request with a missing `Accept` header, and then later serve that cached version for a request which **does** supply the `Accept` header. This can lead to some very unexpected behavior in browsers.
This commit adds the `Vary: Accept` header to all responses, even if the `Accept` header is not present in the request. If a format parameter (e.g. a `.json` suffix) is included in the path, the `Vary: Accept` header is still omitted. (The format parameter takes precedence over any `Accept` header, so the response no longer varies based on it.)
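A minimal sketch of the idea (hypothetical, not the actual Discourse patch): a Rails `after_action` that appends `Vary: Accept` unless the request used an explicit format suffix.

```ruby
class ApplicationController < ActionController::Base
  after_action :append_vary_accept

  private

  def append_vary_accept
    # A `.json`-style suffix takes precedence over the Accept header,
    # so the response does not vary on Accept in that case.
    return if params[:format].present?

    existing = response.headers["Vary"]
    response.headers["Vary"] = [existing, "Accept"].compact.join(", ")
  end
end
```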
`/u/username/invited.json?filter=expired` and `/u/username/invited.json?filter=pending` APIs are already returning data to admins. However, the `can_see_invite_details?` boolean was false, which prevented the Ember frontend from showing the tabs correctly. This commit updates the guardian method to match reality.
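A hedged sketch of the kind of guardian change described (the real method body may differ):

```ruby
class Guardian
  # Hypothetical: admins who can already fetch the invited lists should
  # also pass this check, not just the user looking at their own invites.
  def can_see_invite_details?(user)
    is_admin? || is_me?(user)
  end
end
```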
* FIX: do not display add to calendar for past dates
There is no value in saving past dates into the calendar.
* FIX: remove postId and move ICS to frontend
PostId is not necessary, and removing it makes the solution more generic for dates which don't belong to a specific post.
Also, the ICS file can be generated in JavaScript to avoid calling the backend.
Sometimes administrators want to permanently delete posts and topics
from the database. To make sure that this is done for a good reason,
administrators can do this only after one minute has passed since the
post was deleted, or immediately if it is done by a different administrator.
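A hedged sketch of that rule; the constant and method names are illustrative:

```ruby
MINIMUM_DELETE_AGE = 1.minute # illustrative constant

def can_permanently_delete_post?(post)
  return false unless is_admin?
  return false if post.deleted_at.blank?

  # Another admin's deletion can be purged immediately; your own
  # deletion only after the cooling-off period has passed.
  post.deleted_by_id != user.id || post.deleted_at < MINIMUM_DELETE_AGE.ago
end
```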
- Allow the `/presence/get` endpoint to return multiple channels in a single request (limited to 50)
- When multiple presence channels are initialized in a single Ember runloop, batch them into a single GET request
- Introduce the `presence-pretender` to allow easy testing of PresenceChannel-related features
- Introduce a `use_cache` boolean (default true) on the server-side PresenceChannel initializer. Useful during testing.
Change the type for default_calendar to a string.
The type specified for the default calendar in the api docs wasn't a
valid type. The linting in the api docs repo reports:
```
`type` can be one of the following only: "object", "array", "string", "number", "integer", "boolean", "null".
```
This linting currently is only in the `discourse_api_docs` repo.
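For illustration, the corrected schema fragment would be along these lines (the surrounding structure is assumed):

```ruby
user_option_schema = {
  type: :object,
  properties: {
    default_calendar: { type: :string }, # previously declared with an invalid type
  },
}
```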
* DEV: Add include_subcategories param to api docs
Adding the `include_subcategories=true` query param to the
`/categories.json` api docs.
Follow up to: fe676f334a
* fix spec
It allows saving a local date to a calendar.
The modal gives the option to pick between ICS and Google. The user's choice can be remembered as the default for subsequent actions.
* FEATURE: Return subcategories on categories endpoint
When using the API, subcategories will now be returned nested inside of
each category response under the `subcategory_list` param. We already
return all the subcategory ids under the `subcategory_ids` param, but
you then would have to make multiple separate API calls to fetch each of
those subcategories. This way you can get **ALL** of the categories
along with their subcategories in a single API response.
The UI will not be affected by this change because you need to pass in
the `include_subcategories=true` param in order for subcategories to be
returned.
In a follow-up PR I'll add the API scoping for fetching categories so
that a readonly API key can be used for the `/categories.json` endpoint. This
endpoint should be used instead of the `/site.json` endpoint for
fetching a site's categories and subcategories.
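For illustration, fetching everything in one call might look like this (host and credentials are placeholders):

```ruby
require "net/http"
require "json"

# Hypothetical host and credentials; `include_subcategories=true` is the
# param added in this commit.
uri = URI("https://forum.example.com/categories.json?include_subcategories=true")
request = Net::HTTP::Get.new(uri)
request["Api-Key"] = ENV["DISCOURSE_API_KEY"]
request["Api-Username"] = "system"

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
categories = JSON.parse(response.body).dig("category_list", "categories") || []

categories.each do |category|
  subcategories = category["subcategory_list"] || []
  puts "#{category["name"]}: #{subcategories.size} subcategories"
end
```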
* Update PR based on feedback
- Have spec check for specific subcategory
- Move comparison check out of loop
- Only populate subcategory list if option present
- Remove empty array initialization
- Update api spec to allow null response
* More PR updates based on feedback
- Use a category serializer for the subcategory_list
- Don't include the subcategory_list param if empty
- For the spec check for the subcategory by id
- Fix spec to account for param not present when empty
This commit refactors can_invite_to? to use
can_invite_to_forum? for checking the site-wide permissions and then
performs topic-specific checks.
Similarly, can_invite_to? is always used with a topic object and this is
now enforced.
There was another problem before: the `must_approve_users` site setting
was not checked when inviting users to the forum, but was checked when
inviting to a topic.
Another minor security issue was that group owners could invite to
group topics even if they did not have the minimum trust level to do
it.
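A simplified sketch of the resulting structure (not the full guardian):

```ruby
def can_invite_to?(topic)
  return false unless authenticated?
  return false unless can_invite_to_forum? # site-wide checks, incl. must_approve_users
  return false unless can_see_topic?(topic) # a topic object is now mandatory

  # ...followed by topic-specific checks (private message rules,
  # group ownership, minimum trust level, and so on).
  true
end
```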
It was possible to see notifications of other users using routes:
- notifications/responses
- notifications/likes-received
- notifications/mentions
- notifications/edits
We weren't showing anything private (like notifications about private messages), only things that are publicly available in other places. Still, it feels strange that it's possible to look at someone else's notifications. Additionally, there is a risk that we could unintentionally leak something on these pages in the future.
This commit restricts these routes.
The file size error messages for max_image_size_kb and
max_attachment_size_kb are shown to the user in the KB
format, regardless of how large the limit is. Since we
are going to support uploading much larger files soon,
this KB-based limit soon becomes unfriendly to the end
user.
For example, if the max attachment size is set to 512000
KB, this is what the user sees:
> Sorry, the file you are trying to upload is too big (maximum
size is 512000KB)
This makes the user do math. In almost all file explorers that
a regular user would be familiar with, the file size is shown
using the largest applicable unit (e.g. KB, MB, GB).
This commit changes the behaviour to output a humanized file size
instead of the raw KB. For the above example, it would now say:
> Sorry, the file you are trying to upload is too big (maximum
size is 512 MB)
This humanization also handles decimals, e.g. 1536KB = 1.5 MB
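A minimal sketch of the humanization, using base-1024 steps to match the 1536 KB example (Discourse's actual helper may round and format differently):

```ruby
def humanized_size(kilobytes)
  units = %w[KB MB GB TB]
  size = kilobytes.to_f
  unit = units.shift
  while size >= 1024 && units.any?
    size /= 1024
    unit = units.shift
  end
  formatted = (size % 1).zero? ? size.to_i : size.round(1)
  "#{formatted} #{unit}"
end

humanized_size(1536) # => "1.5 MB"
```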
Allows creating a bookmark with the `for_topic` flag introduced in d1d2298a4c set to true. This happens when clicking on the Bookmark button in the topic footer when no other posts are bookmarked. In a later PR, when clicking on these topic-level bookmarks the user will be taken to the last unread post in the topic, not the OP. Only the OP can have a topic level bookmark, and users can also make a post-level bookmark on the OP of the topic.
I had to do some pretty heavy refactors because most of the bookmark code in the JS topics controller was centred around instances of Post JS models, but the topic level bookmark is not centred around a post. Some refactors were just for readability as well.
Also removes some missed reminderType code from the purge in 41e19adb0d
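For illustration, the topic-level bookmark described above boils down to something like this on the server (the exact creation path differs):

```ruby
# A topic-level bookmark always hangs off the OP, with the
# `for_topic` flag (introduced in d1d2298a4c) set.
Bookmark.create!(
  user: current_user,
  post: topic.first_post,
  for_topic: true,
)
```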
Due to the way that rswag expands shared components we were getting this
warning when linting our api docs:
```
Component: "user_response" is never used.
```
This change refactors the `api/users_spec.rb` file so that it uses the
new way of doing things with a separate `user_get_response.json` schema
file rather than the old way of loading a shared response inside of the
swagger_helper.rb file.
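For illustration, the new pattern looks roughly like this (the `load_spec_schema` helper name is an assumption):

```ruby
path "/users/{username}.json" do
  get "Get a user by username" do
    response "200", "user found" do
      # The schema lives in its own user_get_response.json file now,
      # instead of a shared component in swagger_helper.rb.
      schema load_spec_schema("user_get_response")
      run_test!
    end
  end
end
```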
The color_scheme_id needs to be an integer not a string.
One of the failing tests that surfaced this error:
https://github.com/discourse/discourse/runs/3598414971
It reported:
`POSSIBLE ISSUE W/: /user_themes/0/color_scheme_id`
And this is part of the site.json response:
```
...
"user_themes"=>[{"theme_id"=>149, "name"=>"Cool theme 111", "default"=>false, "color_scheme_id"=>37}]
...
```
We don't actually use the reminder_type for bookmarks anywhere;
we are just storing it. It has no bearing on the UI. It used
to be relevant with the at_desktop bookmark reminders (see
fa572d3a7a).
This commit marks the column as readonly, ignores it, and removes
the index; the column will be dropped in a later PR. Some plugins
are relying on reminder_type partially so some stubs have been
left in place to avoid errors.
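For illustration, ignoring a column ahead of dropping it is a standard Rails pattern (shown as a sketch, not the exact Discourse change):

```ruby
class Bookmark < ActiveRecord::Base
  # Readers/writers for reminder_type go away; the physical column can
  # then be dropped in a later migration without breaking running deploys.
  self.ignored_columns = %w[reminder_type]
end
```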
We don't need no stinkin' denormalization! This commit ignores
the topic_id column on bookmarks, to be deleted at a later date.
We don't really need this column; it's better to rely on
post.topic_id as the canonical topic_id for bookmarks, so we
don't need to remember to update both columns if the bookmarked
post moves to another topic.
Administrators can use second factor to confirm granting admin access
without using email. The old method of confirmation via email is still
used as a fallback when second factor is unavailable.
The previous excerpt was a simple truncated raw message. Starting with
this commit, the raw content of the draft is cooked and an excerpt is
extracted from it. The logic for extracting the excerpt mimics the
`ExcerptParser` class, but is a much simpler implementation that does
not cover all of its functionality.
The two draft controllers have been merged into one and the /draft.json
route has been changed to /drafts.json to be consistent with the other
route names.
When a user archives a personal message, they are redirected back to the
inbox and will refresh the list of the topics for the given filter.
Publishing an event to the user results in an incorrect incoming message
because the list of topics has already been refreshed.
This does mean that if a user has two tabs opened, the non-active tab
will not receive the incoming message but at this point we do not think
the technical trade-offs are worth it to support this feature. We
basically have to somehow exclude a client from an incoming message,
which is not easy to do.
Follow-up to fc1fd1b416
At this point in time, we do not think that unread and new are worth
supporting when an admin is looking at another user's messages.
Follow-up to fc1fd1b416
From the openapi spec:
https://spec.openapis.org/oas/latest.html#fixed-fields-7
each endpoint needs to have an `operationId`:
> Unique string used to identify the operation. The id MUST be unique
> among all operations described in the API. The operationId value is
> case-sensitive. Tools and libraries MAY use the operationId to uniquely
> identify an operation, therefore, it is RECOMMENDED to follow common
> programming naming conventions.
Running the linter on our openapi.json file with this command:
`npx @redocly/openapi-cli lint openapi.json`
produced the following warning on all of our endpoints:
> Operation object should contain `operationId` field
This commit resolves these warnings by adding an operationId field to
each endpoint.
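For illustration, in rswag this amounts to declaring an id on each operation (the id value here is made up):

```ruby
path "/users/{id}.json" do
  get "Get a user by id" do
    operationId "getUser" # unique, case-sensitive id for this operation
    response "200", "user found" do
      run_test!
    end
  end
end
```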
Previously, a change to a group's `default_notification_level` only affected users added to the group after the change.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
PresenceChannel aims to be a generic system for allowing the server, and end-users, to track the number and identity of users performing a specific task on the site. For example, it might be used to track who is currently 'replying' to a specific topic, editing a specific wiki post, etc.
A few key pieces of information about the system:
- PresenceChannels are identified by a name of the format `/prefix/blah`, where `prefix` has been configured by some core/plugin implementation, and `blah` can be any string the implementation wants to use.
- Presence is a boolean thing - each user is either present, or not present. If a user has multiple clients 'present' in a channel, they will be deduplicated so that the user is only counted once
- Developers can configure the existence and configuration of channels 'just in time' using a callback. The result of this is cached for 2 minutes.
- Configuration of a channel can specify permissions in a similar way to MessageBus (public boolean, a list of allowed_user_ids, and a list of allowed_group_ids). A channel can also be placed in 'count_only' mode, where the identity of present users is not revealed to end-users.
- The backend implementation uses redis lua scripts, and is designed to scale well. In the future, hard limits may be introduced on the maximum number of users that can be present in a channel.
- Clients can enter/leave at will. If a client has not marked itself 'present' in the last 60 seconds, it will automatically 'leave' the channel. The JS implementation takes care of this regular check-in.
- On the client-side, PresenceChannel instances can be fetched from the `presence` ember service. Each PresenceChannel can be entered/left/subscribed/unsubscribed, and the service will automatically deduplicate information before interacting with the server.
- When a client joins a PresenceChannel, the JS implementation will automatically make a GET request for the current channel state. To avoid this, the channel state can be serialized into one of your existing endpoints, and then passed to the `subscribe` method on the channel.
- The PresenceChannel JS object is an ember object. The `users` and `count` properties can be used directly in ember templates, and in computed properties.
- It is important to make sure that you `unsubscribe()` and `leave()` any PresenceChannel objects after use
An example implementation may look something like this. On the server:
```ruby
register_presence_channel_prefix("site") do |channel|
  next nil unless channel == "/site/online"
  PresenceChannel::Config.new(public: true)
end
```
And on the client, a component could be implemented like this:
```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";

export default Component.extend({
  presence: service(),

  init() {
    this._super(...arguments);
    this.set("presenceChannel", this.presence.getChannel("/site/online"));
  },

  didInsertElement() {
    this.presenceChannel.enter();
    this.presenceChannel.subscribe();
  },

  willDestroyElement() {
    this.presenceChannel.leave();
    this.presenceChannel.unsubscribe();
  },
});
```
With this template:
```handlebars
Online: {{presenceChannel.count}}
<ul>
  {{#each presenceChannel.users as |user|}}
    <li>{{avatar user imageSize="tiny"}} {{user.username}}</li>
  {{/each}}
</ul>
```
The generate_presigned_put endpoint for direct external uploads
(such as the one for the uppy-image-uploader) records allowed
S3 metadata values on the uploaded object. We use this to store
the sha1-checksum generated by the UppyChecksum plugin, for later
comparison in ExternalUploadManager.
However, we were not doing this for the create_multipart endpoint,
so the checksum was never captured and compared correctly.
Also includes a fix to make sure UppyChecksum is the last preprocessor to run.
It is important that the UppyChecksum preprocessor is the last one to
be added; the preprocessors are run in order and since other preprocessors
may modify the file (e.g. the UppyMediaOptimization one), we need to
checksum once we are sure the file data has "settled".
Users can invite people to a topic and they will be automatically
redirected to the topic when logging in after signing up. This commit
ensures an "invited_to_topic" notification is created when the invite is
redeemed.
The same notification is used for the "Notify" sharing method that is
found in share topic modal.
Since ad3ec5809f, when a user chooses
the Dismiss New... option in the New topic list, we send a request
to topics/reset-new.json with ?tracked=false as the only parameter.
This then uses Topic as the scope for topics to dismiss, with no
other limitations. When we do topic_scope.pluck(:id), it gets the
ID of every single topic in the database (that is not deleted) to
pass to TopicsBulkAction, causing a huge query with severe performance
issues.
This commit changes the default scope to use
`TopicQuery.new(current_user).new_results(limit: false)`,
which only includes the topics in the user's New list. That is a much
smaller set, whose size depends on the user's "new_topic_duration_minutes"
setting.
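A hedged sketch of the fix (simplified; the surrounding controller code is omitted and names are assumed):

```ruby
def reset_new
  # Previously the untracked branch scoped to `Topic` (every undeleted
  # topic); now it is limited to the user's New list.
  topic_scope = TopicQuery.new(current_user).new_results(limit: false)
  topic_ids = topic_scope.pluck(:id)
  TopicsBulkAction.new(current_user, topic_ids, type: "dismiss_new").perform!
  render json: success_json
end
```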
There are certain design decisions that were made in this commit.
Private messages implement their own version of topic tracking state because there are significant differences between regular and private_message topics. Regular topics have to track categories and tags while private messages do not. It is much easier to design the new topic tracking state if we maintain two different classes, instead of trying to mash these two worlds together.
One MessageBus channel per user and one MessageBus channel per group. This allows each user and each group to have their own channel backlog instead of having one global channel which requires the client to filter away unrelated messages.
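For illustration, publishing to per-user and per-group channels might look like this (the channel names are assumptions):

```ruby
payload = { topic_id: topic.id, message_type: "new_topic" }

# One backlog per user and per group means clients no longer have to
# filter unrelated messages out of a single global channel.
MessageBus.publish("/private-message-topic-tracking-state/user/#{user.id}",
                   payload, user_ids: [user.id])
MessageBus.publish("/private-message-topic-tracking-state/group/#{group.id}",
                   payload, group_ids: [group.id])
```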
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary upload S3 into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` endpoint is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. Then, if the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than what they first specified, or in the case of multipart uploads uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
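A hedged sketch of that validation (helper and error names are illustrative):

```ruby
SIZE_MISMATCH_BAN_MINUTES = 5 # the default mentioned above

def verify_filesize!(stub, s3_object)
  return if stub.filesize == s3_object.content_length

  # The client uploaded something other than what it declared: drop the
  # object and temporarily block the user from further uploads.
  s3_object.delete
  ban_user_from_external_uploads!(stub.created_by, minutes: SIZE_MISMATCH_BAN_MINUTES) # hypothetical helper
  raise StandardError, "upload size mismatch"
end
```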
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.