This is so we can join the Notification table onto the
Bookmark table. A slight refactor was needed to ensure
that the required values are always included and the
consumer does not need to think about this.
The discourse-chat and discourse-data-explorer plugins
will be updated to take advantage of this commit.
Tag edit notifications are either created by PostActionNotifier or
PostRevisor. PostActionNotifier already checks if the site setting is
enabled, but PostRevisor scheduled a NotifyTagChange job without
checking disable_tags_edit_notifications.
This commit introduces a new plugin API to register
a group of stats that will be included in about.json
and also conditionally in the site about UI at /about.
The usage is like this:
```ruby
register_about_stat_group("chat_messages", show_in_ui: true) do
  {
    last_day: 1,
    "7_days" => 10,
    "30_days" => 100,
    count: 1000,
    previous_30_days: 120
  }
end
```
In reality the stats will be generated any way the implementer
chooses within the plugin. The `last_day`, `7_days`, `30_days`, and `count`
keys must be present but apart from that additional stats may be added.
Only those core 4 stat keys will be shown in the UI, but everything will be shown
in about.json.
The stat group name is used to prefix the stats in about.json like so:
```json
"chat_messages_last_day": 2322,
"chat_messages_7_days": 2322,
"chat_messages_30_days": 2322,
"chat_messages_count": 2322,
```
The `show_in_ui` option (default false) is used to determine whether the
group of stats is shown on the site About page in the Site Statistics
table. Some stats may be needed purely for reporting purposes and thus
do not need to be shown in the UI to admins/users. An extension to the Site
serializer, `displayed_about_plugin_stat_groups`, has been added so this
can be inspected on the client-side.
* FIX: properly validate multiselect user fields on user creation
* Add test cases
* FIX: don't check multiselect user fields for watched words
* Clarify/simplify tests
* Roll back apply_watched_words changes
This method no longer needs to deal with arrays for now. If/when
we add new user fields that use them, we can deal with it then.
The `unread_not_too_old` attribute is a little odd because there should never be a case where
the user's first_unread_at column is less than the `Topic#updated_at`
column of an unread topic. The `unread_not_too_old` attribute is causing
a bug where topic states synced into `TopicTrackingState` do not appear
as unread because the attribute does not exist on a normal `Topic`
object and hence is never set.
It makes more sense to use user_ids for the UserCommScreener
introduced in fa5f3e228c since
in most cases the ID will be available, not the username. This
was discovered while starting work on a plugin that will
use this. In the cases where only usernames are available
the extra query is negligible.
The idea behind this refactor is to centralise all of the user ignoring / muting / disallow PM checks in a single place, so they can be used consistently in core as well as for plugins like chat, while improving the main bulk of the checks to run in a single fast non-AR query.
Also fixed up the invite error when someone is muting/ignoring the user that is trying to invite them to the topic.
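A hedged usage sketch (the keyword arguments follow the commit's description; treat the exact API as an assumption):
```ruby
# Screen a batch of target users against the acting user's mute/ignore/
# disallow-PM settings in one fast non-AR query, keyed by user IDs.
screener = UserCommScreener.new(acting_user: acting_user, target_user_ids: target_user_ids)
```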
Currently we only apply watched words of the `Block` type to custom user
fields and user profile fields.
This patch enables all rules to be applied such as `Censor` or
`Replace`.
Previously the spec was hardcoded to expect a certain compiled result. The precise result will change as we update to more recent Ember versions.
Instead, we can compile via the special theme compiler, and then compile our expected result via the normal compiler. If they match, then things are working as intended. This technique should be safe across template-compiler upgrades.
* FIX: min/max username length limits weren't validated
The custom validators introduced in e0d7cda made it so we ignored the min
and max values set on site_settings.yml. That change allowed admins to
set values outside of the range defined on the yaml file.
Related to https://meta.discourse.org/t/group-names-with-more-than-60-characters-broken/232115?u=falco
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
The site.json endpoint was missing some optional fields on the
categories property that is returned. This commit documents the
`parent_category_id` which is present if there are subcategories and it
also documents the `custom_fields` property, which the chat plugin is
using and which was causing some flaky tests.
Mutating the `raw` variable like this would cause issues upstream, meaning that the modification is not persisted. Instead, we should allocate a new string like the other replacement methods.
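For illustration, a minimal sketch of the difference in plain Ruby (names hypothetical):
```ruby
# In-place mutation: every other holder of this string object is affected,
# which is what caused the issues upstream described above.
raw.gsub!(pattern, replacement)

# Allocating a new string, like the other replacement methods do, leaves
# the original object untouched for other readers.
raw = raw.gsub(pattern, replacement)
```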
* FIX: Posts can belong to hard-deleted topics
This was a problem when serializing deleted posts because they might
belong to a topic that was permanently deleted. This caused the DB
lookup to fail immediately and raise an exception. In this case, the
endpoint returned a 404.
* FIX: Remove N+1 queries
Deleted topics were not loaded because of the default scope that
filters out all deleted topics. It executed a query for each deleted
topic.
If an image is oneboxed directly, then we should replace the onebox URL with a markdown image tag. This ensures that the wrapper link points to the downloaded version rather than the original.
This regressed in bf6f8299
Logging out failed when the current user was cached by an instance of `Auth::DefaultCurrentUserProvider` and `#log_off_user` was called on a different instance of that class.
Co-authored-by: Sam <sam.saffron@gmail.com>
This happened when a middleware accessed the `currentUser` before a controller had a chance to populate the `action_dispatch.request.path_parameters` env variable. In that case Discourse would always cache `nil` as `currentUser`.
We previously relied on CSS animation-delay for the splash. This means that we can get inconsistent results based on device/network conditions.
This PR moves us to a more consistent timing based on {request time + 2 seconds}
Internal topic: /t/65378/65
Before, whispers were only available for staff members.
Config has been changed to allow configuring privileged groups with access to whispers. A post migration was added to move from the old setting to the new one.
I considered having a boolean column `whisperer` on the user model, similar to `admin/moderator`, for performance reasons. Finally, I decided to keep looking up groups, as queries are only done for the current user, and I didn't notice any N+1 queries.
Automatically updates data in the stats section of the topic.
It will automatically update the following information: likes, replies, and last reply (timestamp and user).
* DEV: Fix flaky admin badges.json api docs spec
This commit is to fix this incredibly vague error message:
```
Failure/Error: expect(valid).to eq(true)
expected: true
got: false
```
From this test:
> Assertion: badges /admin/badges.json get success response behaves like
> a JSON endpoint response body matches the documented response schema
I was finally able to repro locally using parallel tests:
```
RAILS_ENV=test bundle exec ./bin/turbo_rspec
```
I *think* the parallel tests might be swallowing the `puts` output, but
when I also specified the individual spec file
```
RAILS_ENV=test bundle exec ./bin/turbo_rspec spec/requests/api/badges_spec.rb
```
It revealed the issue:
```
VALIDATION DETAILS: {"missing_keys"=>["i18n_name"]}
```
```ruby
...
def include_i18n_name?
object.system?
end
```
Looks like if the "system" user isn't being used the `i18n_name` won't
be returned in the json response so we shouldn't mark it as a required
attribute.
* Switch to using fab!
When using `let(:badge)` to fabricate a test badge it wouldn't be
returned from the controller, but switching to using `fab!` allows it to
be returned in the json data giving us a non-system badge to test
against.
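A minimal sketch of the switch (badge name hypothetical):
```ruby
# let: lazily created inside each example, so the badge may not exist
# when the controller renders the list.
let(:badge) { Fabricate(:badge, name: "CustomBadge") }

# fab!: created up-front before the examples run, so the endpoint
# returns it, giving us a non-system badge in the JSON response.
fab!(:badge) { Fabricate(:badge, name: "CustomBadge") }
```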
We have a `cleanup!` class method on bookmarks that deletes
bookmarks X days after their related record (post/topic) is
deleted. This commit changes this method to use the
registered_bookmarkables for this instead, and each bookmarkable
type can delete related bookmarks in their own way.
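A hedged sketch of the delegation (method names follow the commit's description; the exact wiring is an assumption):
```ruby
# Each registered bookmarkable type cleans up its own orphaned
# bookmarks (e.g. bookmarks whose post or topic was deleted).
def self.cleanup!
  Bookmark.registered_bookmarkables.each(&:cleanup_deleted)
end
```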
When calling the API to delete a user:
```
curl -X DELETE "https://discourse.example.com/admin/users/159.json" \
-H "Content-Type: multipart/form-data;" \
-H "Api-Key: ***" \
-H "Api-Username: ***" \
-F "delete_posts=true" \
-F "block_email=false" \
-F "block_urls=false" \
-F "block_ip=false"
```
Setting the parameters `block_email`, `block_urls` and `block_ip` explicitly to `false` did not work because the values weren't being parsed as booleans.
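The fix is presumably along these lines, using Rails' boolean casting (a sketch, not the exact patch; parameter handling assumed):
```ruby
require "active_model"

# Multipart form values arrive as strings; cast them before use.
ActiveModel::Type::Boolean.new.cast("false") # => false
ActiveModel::Type::Boolean.new.cast("true")  # => true

block_email = ActiveModel::Type::Boolean.new.cast(params[:block_email])
```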
This PR introduces a new hidden site setting that allows admins to display a splash screen while site assets load.
The splash screen can be enabled via the `splash_screen` hidden site setting.
Once site assets load, the splash screen is automatically removed.
To control the loading text that shows in the splash screen, you can change the `preloader_text` translation string in admin > customize > text
Future versions of redis will validate this port number causing the tests
relying on this to fail with:
```
Redis::CommandError:
ERR Invalid master port
```
Also change from an IPv4 address that might feasibly be in use to an IPv6
random ULA address that almost *certainly* won't be.
It's already included in the `ignored_columns` list in the group model. 03ffb0bf27/app/models/group.rb (L9)
Also, removed the `MigrateGroupFlairImages` onceoff job and spec.
In certain situations, a logged in user can redeem an invite with an email that
either doesn't match the invite's email or does not adhere to the email domain
restriction of an invite link. The impact of this flaw is aggravated
when the invite has been configured to add the user that accepts the
invite into restricted groups.
The category description fields as part of the rswag specs for the
site.json endpoint were flaky. Removing the `required` attribute allows
us to still document that these fields exist, but that depending on
certain site settings they may not be present in the response.
Certain rogue bots such as Yandex may send across invalid CSP reports
when CSP report collection is enabled.
This ensures that invalid reports will not cause log floods and simply
returns a 422 error.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
At some point in the past we decided to rename the 'regular' notification state of topics/categories to 'normal'. However, some UI copy was missed when the initial renaming was done so this commit changes the spots that were missed to the new name.
This is prerequisite work to introduce a splash screen while site assets load.
The only change this commit introduces is that it ensures we add the defer attribute to core/plugin/theme .js files. This will allow us to insert markup before the browser starts evaluating those scripts later on. It has no visual or functional impact on core.
This will not have any impact on how themes and plugins work. The only exception is themes loading external scripts in the </head> theme field directly via script tags. Everything will work the same but those would need to add the defer attribute if they want to keep the benefits introduced in this PR.
```
WARNING: Using `expect { }.not_to raise_error(SpecificErrorClass)` risks false positives, since literally any other error would cause the expectation to pass, including those raised by Ruby (e.g. `NoMethodError`, `NameError` and `ArgumentError`), meaning the code you are intending to test may not even get reached. Instead consider using `expect { }.not_to raise_error` or `expect { }.to raise_error(DifferentSpecificErrorClass)`. This message can be suppressed by setting: `RSpec::Expectations.configuration.on_potential_false_positives = :nothing`. Called from /var/www/discourse/spec/lib/retrieve_title_spec.rb:155:in `block (3 levels) in <main>'.
```
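The warning points at the fix directly; a sketch of the before/after in the spec (the specific error class is illustrative):
```ruby
# Before: passes if *any* error is raised, even an unrelated NameError,
# so the code under test may never be reached.
expect { RetrieveTitle.crawl(url) }.not_to raise_error(Net::ReadTimeout)

# After: assert that no error at all is raised.
expect { RetrieveTitle.crawl(url) }.not_to raise_error
```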
This doesn't cope well with gzipped, binary, or large responses. Ideally we would teach FinalDestination to safely retrieve and decode some of the response body. But for now, let's remove the broken implementation.
This reverts commit 94c3bbc2d1.
At this current point in time, we do not have enough data on whether
this centralisation is worth the trade-offs of coupling features into a single
channel.
Presence endpoints are often called asynchronously at the same time as other requests, and never need to modify the session. Skipping ensures that an unneeded cookie rotation doesn't race against another request and cause issues.
This change brings presence in line with message-bus's behaviour.
When sending emails with delivery_method_options -> return_response
set to true, the SMTP sending code inside Mail will return the SMTP
response when calling deliver! for mail within the app. This commit
ensures that Email::Sender captures this response if it is returned
and stores it against the EmailLog created for the sent email.
A follow up PR will make this visible within the admin email UI.
As part of this commit, a bug where updating a tag's notification level on the server side does not update the state of the user's tag notification levels on the client side is fixed too.
We do not zero-pad our base62 short URLs, so there is no guarantee that the length is 27. Instead, let's greedily match all consecutive base62 characters and look for a matching upload.
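A hedged sketch of the greedy matching (the regex is illustrative; the decoder method name is an assumption):
```ruby
# Greedily capture every consecutive base62 character instead of
# assuming a fixed, zero-padded 27-character length.
encoded = url[%r{/uploads/short-url/([0-9a-zA-Z]+)}, 1]
upload = Upload.find_by(sha1: Upload.sha1_from_base62_encoded(encoded)) if encoded
```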
This reverts bd32656157 and 36f5d5eada.
* FIX: Fix a bug that is accessing the values in a hash wrongly and write tests
I decided to write tests in order to be confident in my refactor that's in the next commit.
Meanwhile I have discovered a potential bug. The `title_attr` key was accessed as a string,
but all the keys are actually symbols so it was never evaluated to be true.
```
irb(main):025:0> d = {key: 'value'}
=> {:key=>"value"}
irb(main):026:0> d['key']
=> nil
irb(main):027:0> d[:key]
=> "value"
```
* DEV: Extract methods for readability
I will be adding a new method following the conventions in place for adding a new normalizer. And this will make the readability of the `raw` block even more difficult; so I am extracting self contained private methods beforehand.
* FEATURE: Parse JSON-LD and introduce Movie object
JSON LD data is very easily transferable to Ruby objects because they contain types. If these types are mapped to Ruby objects, it is also better to make all the parsed data very explicit and easily extendable.
JSON-LD has many more standardized item types, with a full list here: https://schema.org/docs/full.html
However in order to decrease the scope, I only adapted the movie type.
* DEV: Change inheritance between normalizers
Normalizers are not supposed to have inheritance relationships amongst each other. They are all normalizers, but they all normalize separate protocols. This is why I chose to extract a parent class and relieve Open Graph of that responsibility. Removing the parent class altogether could also be a possibility, but I am keeping the scope limited to having a more accurate representation of the normalizers while making it easier to add a new one.
* Lint changes
* Bring back the Oembed OpenGraph inheritance
There is one test that caught that this inheritance was necessary. I still think modelling wise this inheritance shouldn't exist, but this can be tackled separately.
* Return empty hash if the json received is invalid
Before this change if there was a parsing error with JSON it would throw an exception. The goal of this commit is to rescue that exception and then log a warning. I chose to use Discourse's logger wrapper `warn_exception` to have the backtrace and not just used Rails logger. I considered raising an `InvalidParameters` error however if the JSON here is invalid it should not block showing of the Onebox, so logging is enough.
* Prep to support more JSONLD schema types with case
* Extract mustache template object created from JSONLD
The `WebhookController` inherits directly from `ActionController::Base`. Since Rails 5.2, forgery protection has been enabled by default. When we applied those new defaults in 0403a8633b, it took effect on this controller and broke integrations.
This commit explicitly disables CSRF protection on these webhook routes, and updates the specs so they'll catch this kind of regression in future.
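A minimal sketch of that kind of opt-out (the Rails API is standard; the controller body is illustrative):
```ruby
class WebhookController < ActionController::Base
  # Webhook callers cannot supply a CSRF token, so opt these routes out
  # of the forgery protection enabled by default since Rails 5.2.
  skip_forgery_protection
end
```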
This PR changes the rescue block to rescue only Net::TimeoutError exceptions and removes the log line to prevent cluttering the logs with errors that are ignored. Other errors can bubble up because they're errors we probably want to know about.
This table holds associations between uploads and other models. This can be used to prevent removing uploads that are still in use.
* DEV: Create upload_references
* DEV: Use UploadReference instead of PostUpload
* DEV: Use UploadReference for SiteSetting
* DEV: Use UploadReference for Badge
* DEV: Use UploadReference for Category
* DEV: Use UploadReference for CustomEmoji
* DEV: Use UploadReference for Group
* DEV: Use UploadReference for ThemeField
* DEV: Use UploadReference for ThemeSetting
* DEV: Use UploadReference for User
* DEV: Use UploadReference for UserAvatar
* DEV: Use UploadReference for UserExport
* DEV: Use UploadReference for UserProfile
* DEV: Add method to extract uploads from raw text
* DEV: Use UploadReference for Draft
* DEV: Use UploadReference for ReviewableQueuedPost
* DEV: Use UploadReference for UserProfile's bio_raw
* DEV: Do not copy user uploads to upload references
* DEV: Copy post uploads again after deploy
* DEV: Use created_at and updated_at from uploads table
* FIX: Check if upload site setting is empty
* DEV: Copy user uploads to upload references
* DEV: Make upload extraction less strict
Following the Rails 7 upgrade, the `DISCOURSE_SMTP_ENABLE_START_TLS`
setting doesn’t work anymore. This is because Rails upgraded the
`net-smtp` gem to the 0.3.1 version which enables `starttls` by default.
The `mail` gem doesn’t support this new behavior yet and doesn’t know
how to disable TLS. This should be fixed in an upcoming release.
Meanwhile applying this patch allows us to get back the previous
behavior which is expected by many.
This commit introduces a new site setting: `block_hotlinked_media`. When enabled, all attempts to hotlink media (images, videos, and audio) will fail, and be replaced with a linked placeholder. Exceptions to the rule can be added via `block_hotlinked_media_exceptions`.
`download_remote_image_to_local` can be used alongside this feature. In that case, hotlinked images will be blocked immediately when the post is created, but will then be replaced with the downloaded version a few seconds later.
This implementation is purely server-side, and does not impact the composer preview.
Technically, there are two stages to this feature:
1. `PrettyText.sanitize_hotlinked_media` is called during `PrettyText.cook`, and whenever new images are introduced by Onebox. It will iterate over all src/srcset attributes in the post HTML and check if they're allowed. If not, the attributes will be removed and replaced with a `data-blocked-hotlinked-src(set)` attribute
2. In the `CookedPostProcessor`, we iterate over all `data-blocked-hotlinked-src(set)` attributes and check whether we have a downloaded version of the media. If yes, we update the src to use the downloaded version. If not, the entire media element is replaced with a placeholder. The placeholder is labelled 'external media', and is a link to the offsite media.
* FIX: Email Send post has already been taken error
Adding a failing test first before coming up with a good solution.
Related: 357011eb3b
The above commit changed
```
PostReplyKey.find_or_create_by_safe!
```
to
```
PostReplyKey.create_or_find_by!
```
But I don't think it is working as a 1-1 replacement because of the
`Validation failed: Post has already been taken` error we are receiving
with this change. Also we need to make sure we don't re-introduce any
concurrency issues.
Reported: https://meta.discourse.org/t/224706/13
* Remove rails unique constraint and rely on db index
I believe this is what is causing `create_or_find_by!` to fail. Because
we have a unique constraint in the db I think we can remove this rails
unique constraint?
* clean up spec wording
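Why removing the model-level uniqueness validation helps, sketched (the validation shown is hypothetical): `create_or_find_by!` INSERTs first and relies on the database unique index raising `ActiveRecord::RecordNotUnique` to fall back to a find, whereas a Rails uniqueness validation raises `ActiveRecord::RecordInvalid` before the INSERT ever reaches the database, so the find path never runs.
```ruby
class PostReplyKey < ActiveRecord::Base
  # Removed (sketch): validates :post_id, uniqueness: { scope: :user_id }
  # With the validation gone, create_or_find_by! can INSERT, rescue the
  # DB-level ActiveRecord::RecordNotUnique, and find the existing row,
  # avoiding the concurrency issues of SELECT-then-INSERT.
end
```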
This commit resolves a bug where users are not auto approved based on
`SiteSetting.auto_approve_email_domains` when
`SiteSetting.must_approve_users` has been enabled.
When a site has `SiteSetting.invite_only` enabled, we create a
`ReviewableUser` record when activating a user if the user is not
approved. Therefore, we need to approve the user when redeeming an
invite.
There are some uncertainties surrounding why a `ReviewableRecord` is
created for a user in an invite-only site but this commit does not seek
to address that.
Follow-up to 7c4e2d33fa
Twitter does not allow SVGs to be used for twitter:image
metadata (see https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/markup)
so we should fall back to the site logo if the image option
provided to `crawlable_meta_data` or SiteSetting.site_twitter_summary_large_image_url
is an SVG, and do not add the meta tag for twitter:image at all
if the site logo is an SVG.
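A hedged sketch of the fallback chain (names and ordering assumed):
```ruby
# Walk the candidates until we find a non-SVG image; emit no
# twitter:image tag at all if even the site logo is an SVG.
candidates = [
  opts[:image],
  SiteSetting.site_twitter_summary_large_image_url,
  SiteSetting.site_logo_url
]
twitter_image = candidates.find { |url| url.present? && !url.downcase.end_with?(".svg") }
```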
This security fix affects sites which have `SiteSetting.must_approve_users`
enabled. There are intentional and unintentional cases where invited
users can be auto approved and are deemed to have skipped the staff approval process.
Instead of trying to reason about when auto-approval should happen, we have decided that
enabling the `must_approve_users` setting going forward will just mean that all new users
must be explicitly approved by a staff user in the review queue. The only case where users are auto
approved is when the `auto_approve_email_domains` site setting is used.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
The server-side implementation had unintentionally changed to include `-{id}` at the end of the body class name. This change meant that the JS client was unaware of the class, and didn't remove it when navigating away from the category page.
This commit fixes the server-side implementation to match the client
Due to some changes we started notifying via push notifications on other
families of notifications. There are a total of about 30 or so possible
notifications you could get; some can be pushed.
This fallback means that if for any reason we are unable to find an icon
for a push notification we just fallback to the Discourse logo.
Also go with a simple reply icon for watching first post.
Note that in production `image_url` can raise an exception if an image is
missing. This is not the case in test / development.
Previously we limited Discourse Connect provider to 1 secret per domain.
This made it pretty awkward to cycle secrets in environments where config
takes time to propagate
This change allows for the same domain to have multiple secrets
Also fixes the internal implementation of DiscourseConnectProvider, which was
not thread safe as it leaned on class variables to ferry data around
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Co-authored-by: David Taylor <david@taylorhq.com>
In 7328a2bfb0 we changed the
InlineOneboxer#onebox_for method to run the title of the
onebox through WatchedWord#censor_text. However, it is
allowable for the title to be nil, which was causing this
error in production:
> NoMethodError : undefined method gsub for nil:NilClass
We just need to check whether the title is nil before trying
to censor it.
Previously we hardcoded the DOWNLOAD_URL_EXPIRES_AFTER_SECONDS const
inside S3Helper to be 5 minutes (300 seconds). For various reasons,
some hosted sites may need this to be longer for other integrations.
The maximum expiry time for presigned URLs is 1 week (which is
604800 seconds), so that has been added as a validation on the
setting as well. The setting is hidden because 99% of the time
it should not be changed.
Censored watched words were not censored inside the titles of inline
oneboxes. Malicious users could exploit this behaviour to insert bad
words. The same issue has been fixed for regular Oneboxes in commit
d184fe59ca.
When searching for PMs or PMs in a group inbox, results in the header search were not being limited to 5 with a "More" link to the full page search. This PR fixes that.
It also simplifies the logic and updates the search API docs to include recently added `in:messages` and `group_messages:groupname` options.
When saving / creating bookmarks, we have code to save
the user's preference of bookmark_auto_delete_preference
to their user_options.
Unfortunately this can cause weirdness when plugins
have code using BookmarkManager to set the auto delete preference for
only a specific bookmark.
This commit introduces a save_user_preferences option (false
by default) so that this user preference is not saved unless
specified by the consumer of BookmarkManager, so plugins will
not have to worry about it.
Previously, with the default `editing_grace_period`, hotlinked images were pulled 5 minutes after a post is created. This delay was added to reduce the chance of automated edits clashing with user edits.
This commit refactors things so that we can pull hotlinked images immediately. URLs are immediately updated in the post's `cooked` HTML. The post's raw markdown is updated later, after the `editing_grace_period`.
This involves a number of behind-the-scenes changes including:
- Schedule Jobs::PullHotlinkedImages immediately after Jobs::ProcessPost. Move scheduling to after the `update_column` call to avoid race conditions
- Move raw changes into a separate job, which is delayed until after the ninja-edit window
- Move disable_if_low_on_disk_space logic into the `pull_hotlinked_images` job
- Move raw-parsing/replacing logic into `InlineUpload` so it can be easily be shared between `UpdateHotlinkedRaw` and `PullUserProfileHotlinkedImages`
Meta topic: https://meta.discourse.org/t/prevent-to-linkify-when-there-is-a-redirect/226964/2?u=osama.
This commit adds a new site setting `block_onebox_on_redirect` (default off) for blocking oneboxes (full and inline) of URLs that redirect. Note that an initial http → https redirect is still allowed if the redirect location is identical to the source (minus the scheme of course). For example, if a user includes a link to `http://example.com/page` and the link resolves to `https://example.com/page`, then the link will onebox (assuming it can be oneboxed) even if the setting is enabled. The reason for this is that a user may type out a short, memorable URL with http, and since a lot of sites support TLS with http traffic automatically redirected to https, we should still allow the URL to onebox.
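A minimal sketch of the one redirect that stays allowed (helper name hypothetical):
```ruby
# Allow exactly one kind of redirect: http -> https with an otherwise
# identical URL. Anything else blocks the onebox when the setting is on.
def scheme_upgrade_only?(source, destination)
  source.sub(%r{\Ahttp://}i, "https://") == destination
end

scheme_upgrade_only?("http://example.com/page", "https://example.com/page")  # => true
scheme_upgrade_only?("http://example.com/page", "https://example.com/other") # => false
```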
Incorporates learnings from /t/64227:
* Changes the code to set access control posts in the rake
task to be an efficient UPDATE SQL query.
The original version was timing out with 312017 post uploads,
the new query took ~3s to run.
* Changes the code to mark uploads as secure/not secure in
the rake task to be an efficient UPDATE SQL query rather than
using UploadSecurity. This took a very long time previously,
and now takes only a few seconds.
* Spread out ACL syncing for uploads into jobs with batches of
100 uploads at a time, so they can be parallelized instead
of having to wait ~1.25 seconds for each ACL to be changed
in S3 serially.
One issue that still remains is post rebaking. Doing this serially
is painfully slow. We have a way to do this in sidekiq via PeriodicalUpdates
but this is limited by max_old_rebakes_per_15_minutes. It would
be better to fan this rebaking out into jobs like we did for the
ACL sync, but that should be done in another PR.
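The shape of the set-based update, sketched with Discourse's mini_sql `DB` interface (exact SQL assumed):
```ruby
# One UPDATE joining post_uploads, instead of loading and saving each of
# the ~312k uploads through ActiveRecord one at a time.
DB.exec(<<~SQL)
  UPDATE uploads
  SET access_control_post_id = post_uploads.post_id
  FROM post_uploads
  WHERE uploads.id = post_uploads.upload_id
    AND uploads.access_control_post_id IS NULL
SQL
```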
This commit migrates all bookmarks to be polymorphic (using the
bookmarkable_id and bookmarkable_type) columns. It also deletes
all the old code guarded behind the use_polymorphic_bookmarks setting
and changes that setting to true for all sites and by default for
the sake of plugins.
No data is deleted in the migrations, the old post_id and for_topic
columns for bookmarks will be dropped later on.
Previously we were only applying the restriction to `a[href]` and `img[src]`. This commit ensures we apply the same logic to all allowlisted media src attributes.
This makes it easier to find PMs involving a particular user, for
example by searching for `in:messages thisUser` (previously, that query
would only return results in posts where `thisUser` was in the post body).
The title had to be added both to the 404 page generated by the server
side, displayed when the user reaches a bad page directly, and to the 404
page rendered by Ember when a user reaches a missing topic while
navigating the forum.
- Only validate if custom_fields are loaded, so that we don't trigger a db query
- Only validate public user fields, not all custom_fields
This commit also reverts the unrelated spec changes in ba148e08, which were required to work around these issues
The cache was causing state to leak between tests since the `WatchedWord` record in the DB would have been rolled back but `WordWatcher` still had the word in the cache.
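A hedged sketch of the kind of reset this implies (hook placement and method name are assumptions):
```ruby
RSpec.configure do |config|
  # The WatchedWord row is rolled back with the DB transaction, but the
  # in-memory word cache is not - clear it so state can't leak.
  config.before(:each) { WordWatcher.clear_cache! }
end
```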
The logic in 06893380 only works for `.js` files. It breaks down for `.br.js` and `.gz.js` files. This commit makes things more robust by extracting only the base_url from the service-worker JS, and taking the map filename from the original `sourceMappingURL` comment.
7a284164 previously switched the UserDestroyer to use find_each when iterating over UserHistory records. Unfortunately, since this logic is wrapped in a transaction, this didn't actually solve the memory usage problem. ActiveRecord maintains references to all modified models within a transaction.
This commit updates the logic to use a single SQL query, rather than updating models one-by-one
22a7905f restructured how we load Ember CLI assets in production. Unfortunately, it also broke sourcemaps for those assets. This commit fixes that regression via a couple of changes:
- It adds the necessary `.map` paths to `config.assets.precompile`
- It swaps Sprockets' default `SourcemappingUrlProcessor` with an extended version which maintains relative URLs of maps
`script_asset_path('.../blah.js.map')` was appending `.js`, which would result in a filename like `.js.map.js`. It would also lose the `/assets` prefix, since the map files are not included in the sprockets manifest.
This commit updates the sourceMappingURL rewriting logic to calculate the service-worker's own JS url, and then append `.map`.
When we build and send emails using MessageBuilder and Email::Sender
we add custom headers defined in SiteSetting.email_custom_headers.
However this was causing errors in cases where the custom headers
defined a header that we already specify in outbound emails (e.g.
the Precedence: list header for topic/post emails).
This commit makes it so we always use the header value defined in Discourse
core if there is a duplicate, discarding the custom header value
from the site setting.
cf. https://meta.discourse.org/t/email-notifications-fail-if-duplicate-headers-exist/222960/14
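A minimal sketch of the precedence rule (variable names hypothetical):
```ruby
# Custom headers from the site setting must never clobber headers that
# core already set (e.g. the Precedence: list header).
custom_headers.each do |name, value|
  next if headers[name].present? # core's value wins on duplicates
  headers[name] = value
end
```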
We have a .ics endpoint for user bookmarks, this
commit makes it so polymorphic bookmarks work on
that endpoint, using the serializer associated with
the RegisteredBookmarkable.
Currently the only way to allow tagging on pms is to use the `allow_staff_to_tag_pms` site setting. We are removing that site setting and replacing it with `pm_tags_allowed_for_groups` which will allow for non staff tagging. It will be group based permissions instead of requiring the user to be staff.
If the existing value of `allow_staff_to_tag_pms` is `true` then we include the `staff` groups as a default for `pm_tags_allowed_for_groups`.
Currently we don’t apply watched words to custom user fields or user
profile fields.
This led to users being able to use blocked words in their bio, location
or some custom user fields.
This patch addresses this issue by adding some validations so it’s not
possible anymore to save the User model or the UserProfile model if they
contain blocked words.
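A hedged sketch of such a validation (`WordWatcher#word_matches_for_action?` exists in core, but treat the exact wiring as an assumption):
```ruby
class UserProfile < ActiveRecord::Base
  validate :bio_has_no_blocked_words

  private

  # Reject the record if the bio contains any Block-type watched word.
  def bio_has_no_blocked_words
    return if bio_raw.blank?
    if WordWatcher.new(bio_raw).word_matches_for_action?(:block)
      errors.add(:bio_raw, :contains_blocked_words)
    end
  end
end
```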
This allows the category_id filter for the bookmark
report to work with polymorphic bookmarks. Honestly this
is a little hardcode-y at the moment but until we go and
make this report a lot more flexible with more filters
I don't think it's worth the work to add extra interfaces
to RegisteredBookmarkable and BaseBookmarkable to make
this more flexible. This is enough for now.
Having this as a const was causing issues on some sites, because this really is heavily
dependent on upload speed. We request 5-10 URLs at a time with this endpoint; for
a 1.5GB upload with 5mb parts this could mean 60 requests to the server to get all
the part URLs. If the user's upload speed is super fast they may request all 60
batches in a minute, if it is slow they may request 5 batches in a minute.
The other external upload endpoints are not hit as often, so they can stay as constant
values for now. This commit also increases the default to 20 requests/minute.
We have not used anything related to bookmarks for PostAction
or UserAction records since 2020, bookmarks are their own thing
now. Deleting all this is just cleaning up old cruft.
Latest redis introduces a block form of multi / pipelined; this was incorrectly
passed through and not namespaced.
The fix also updates Logster; we held off on upgrading it due to missing functions.
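For reference, the block form in question; keys used inside the block also need the namespace prefix (sketch):
```ruby
# redis-rb queues commands on the transaction object passed to the
# block, so namespacing must be applied to these keys as well.
Discourse.redis.multi do |transaction|
  transaction.set("key_a", "1")
  transaction.incr("key_b")
end
```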
This fixes a corner case of the perf optimization in d4e35f5.
When you have the same post showing in multiple tabs/devices and like
said post in one place, we updated the like count but didn't flip the
`acted` bool in the front-end. This caused a small visual desync.
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
A bit of a mixed bag, this addresses several edge areas of bookmarks and makes them compatible with polymorphic bookmarks (hidden behind the `use_polymorphic_bookmarks` site setting). The main ones are:
* ExportUserArchive compatibility
* SyncTopicUserBookmarked job compatibility
* Sending different notifications for the bookmark reminders based on the bookmarkable type
* Import scripts compatibility
* BookmarkReminderNotificationHandler compatibility
This PR also refactors the `register_bookmarkable` API so it accepts a class descended from a `BaseBookmarkable` class instead. This was done because we kept having to add more and more lambdas/properties inline and it was very messy, so a factory pattern is cleaner. The classes can be tested independently as well.
Some later PRs will address some other areas like the discourse narrative bot, advanced search, reports, and the .ics endpoint for bookmarks.
Admins won't be able to disable strip_image_metadata if they don't
disable composer_media_optimization_image_enabled first, since the latter
will strip the same metadata on the client during upload, making disabling
the former have no effect.
Bug report at https://meta.discourse.org/t/-/223350
When a user views a topic that contains a topic timer to publish to a
restricted category, an error occurs on the client side because the user
does not have access to information about the category.
This commit fixes it such that the topic timer is not shown to the user
if the user does not have access to the category.
`TestLogger` was responsible for some flaky specs runs:
```
Error during failsafe response: undefined method `debug' for #<TestLogger:0x0000556c4b942cf0 @warnings=1>
Did you mean? debugger
```
This commit also cleans up other uses of `FakeLogger`
This was due to a server-side bug when unicode usernames are enabled. We
were double encoding the unicode username in the URL, resulting in an
invalid URL.
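An illustration of the double encoding with standard CGI escaping:
```ruby
require "cgi"

username = "español"
CGI.escape(username)             # => "espa%C3%B1ol"    (correct, single-encoded)
CGI.escape(CGI.escape(username)) # => "espa%25C3%25B1ol" (double-encoded, invalid link)
```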
This happened only for languages other than "en" and when `I18n.t` was called without any interpolation keys. The lib still tried to interpolate keys because it interpreted the `overrides` option as an interpolation key.
When an admin enters a badly formed regular expression in the
permalink_normalizations site setting, a RegexpError exception is
generated every time a URL is normalized (see Permalink.normalize_url).
The new validator validates every regular expression present in the
setting value (delimited by '|').
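A hedged sketch of what the validator checks (the `pattern/replacement` rule format follows `Permalink.normalize_url`; the code is illustrative):
```ruby
# Each rule is "pattern/replacement"; compile each pattern and reject
# the whole setting value if any pattern fails to compile.
def valid_value?(value)
  value.split("|").all? do |rule|
    Regexp.new(rule.split("/", 2).first.to_s)
    true
  rescue RegexpError
    false
  end
end
```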
If the identity provider does not provide a precise username value, then we should use our UserNameSuggester to generate one and use it for the override. This makes the override consistent with initial account creation.
This will make future changes to the 'pull hotlinked images' system easier. This commit should not introduce any functional change.
For now, the old post_custom_field data is kept in the database. This will be dropped in a future commit.
This commit introduces a new site setting: `use_name_for_username_suggestions` (default true)
Admins can disable it if they want to stop using Name values when generating usernames for users. This can be useful if you want to keep real names private-by-default or, when used in conjunction with the `use_email_for_username_and_name_suggestions` setting, you would prefer to use email-based username suggestions.
* hidden siteSetting to enable experimental sidebar
* user preference to enable experimental sidebar
* `experimental_sidebar_enabled` attribute for current user
* Empty glimmer component for Sidebar
We were calling `dup` on the hash and using that to check for changes. However, we were not duplicating the values, so changes to arrays or nested hashes would not be detected.
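An illustration of the failure mode in plain Ruby:
```ruby
h = { "tags" => ["a"] }
snapshot = h.dup        # shallow copy: the array is shared
h["tags"] << "b"
snapshot == h           # => true, so the change goes undetected

# Duplicating the values too restores change detection:
snapshot = h.transform_values(&:dup)
h["tags"] << "c"
snapshot == h           # => false
```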
This pull request follows on from https://github.com/discourse/discourse/pull/16308. This one does the following:
* Changes `BookmarkQuery` to allow for querying more than just Post and Topic bookmarkables
* Introduces a `Bookmark.register_bookmarkable` method which requires a model, serializer, fields and preload includes for searching. These registered `Bookmarkable` types are then used when validating new bookmarks, and also when determining which serializer to use for the bookmark list. The `Post` and `Topic` bookmarkables are registered by default.
* Adds new specific types for Post and Topic bookmark serializers along with preloading of associations in `UserBookmarkList`
* Changes to the user bookmark list template to allow for more generic bookmarkable types alongside the Post and Topic ones which need to display in a particular way
All of these changes are gated behind the `use_polymorphic_bookmarks` site setting, apart from the .hbs changes where I have updated the original `UserBookmarkSerializer` with some stub methods.
Following this PR will be several plugin PRs (for assign, chat, encrypt) that will register their own bookmarkable types or otherwise alter the bookmark serializers in their own way, also gated behind `use_polymorphic_bookmarks`.
This commit also removes `BookmarkQuery.preloaded_custom_fields` and the functionality surrounding it. It was added in 0cd502a558 but only used by one plugin (discourse-assign) where it has since been removed, and is now used by no plugins. We don't need it anymore.
A category_required_tag_group should always have an associated tag_group. However, this is only enforced at the application layer, so it's technically possible for the database to include a category_required_tag_group without a matching tag_group.
Previously that situation would cause the whole site to go offline. With this change, it will cause some unexpected behavior, but the site serializer will not raise an error.
Previously, accessing the Rails app directly in development mode would give you assets from our 'legacy' Ember asset pipeline. The only way to run with Ember CLI assets was to run ember-cli as a proxy. This was quite limiting when working on things which are bypassed when using the ember-cli proxy (e.g. changes to `application.html.erb`). Also, since `ember-auto-import` introduced chunking, visiting `/theme-qunit` under Ember CLI was failing to include all necessary chunks.
This commit teaches Sprockets about our Ember CLI assets so that they can be used in development mode, and are automatically collected up under `/public/assets` during `assets:precompile`. As a bonus, this allows us to remove all the custom manifest modification from `assets:precompile`.
The key changes are:
- Introduce a shared `EmberCli.enabled?` helper
- When ember-cli is enabled, add ember-cli `/dist/assets` as the top-priority Rails asset directory
- Have ember-cli output a `chunks.json` manifest, and teach `preload_script` to read it and append the correct chunks to their associated `afterFile`
- Remove most custom ember-cli logic from the `assets:precompile` step. Instead, rely on Rails to take care of pulling the 'precompiled' assets into the `public/assets` directory. Move the 'renaming' logic to runtime, so it can be used in development mode as well.
- Remove fingerprinting from `ember-cli-build`, and allow Rails to take care of things
Long-term, we may want to replace Sprockets with the lighter-weight Propshaft. The changes made in this commit have been made with that long-term goal in mind.
tldr: when you visit the rails app directly, you'll now be served the current ember-cli assets. To keep these up-to-date make sure either `ember serve`, or `ember build --watch` is running. If you really want to load the old non-ember-cli assets, then you should start the server with `EMBER_CLI_PROD_ASSETS=0`. (the legacy asset pipeline will be removed very soon)
* Adds a hidden site setting: `max_participant_names`
* Replaces duplicate code in `GroupSmtpMailer` and `UserNotifications`
* Groups are sorted by the number of users (decreasing)
* Replaces the query to count users of each group with `Group#user_count`
* Users are sorted by their last reply in the topic (most recent first)
* Adds lots of tests
We intend to switch to the `:json` serializer, which will stringify all keys. However, we need a clean revert path. This commit ensures that our `_t` cookie handling works with both marshal (the current default) and json (the new default) serialization.
This PR enables custom email dark mode styles by default that were added here.
There is currently poor support for dark mode queries in mail clients. The main beneficiaries of these changes will be Apple Mail and Outlook.
Enjoy the darkness 🕶️
When changing upload security using `Upload#update_secure_status`,
we may not have the context of how an upload is being created, because
this code path can be run through scheduled jobs. When calling
update_secure_status, the normal ActiveRecord validations are run,
and ours include validating extensions. In some cases the upload
is created in an automated way, such as user export zips, and the
security is applied later, with the extension prohibited from
use when normally uploading.
This caused the upload to fail validation on `update_secure_status`,
causing the security change to silently fail. This fixes the issue
by skipping the file extension validation when the upload security
is being changed.
Previously, 'crop' would resize the image to have the requested width, then crop the height to the requested value. This works when cropping images vertically, but not when cropping them horizontally.
For example, trying to crop a 500x500 image to 200x500 was actually resulting in a 200x200 image. Having an OptimizedImage with width/height columns mismatching the actual OptimizedImage width/height causes some unusual issues.
This commit ensures that a call to `OptimizedImage.crop(from, to, width, height)` will always return an image of the requested width/height. The `w x h^` syntax defines minimum width/height, while maintaining aspect ratio.
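For reference, the ImageMagick geometry this maps to (file paths illustrative):
```ruby
# "200x500^" resizes so the image *covers* 200x500 while keeping aspect
# ratio; the centered crop then trims the overflow to exactly 200x500:
#   convert in.png -resize '200x500^' -gravity center -crop 200x500+0+0 out.png
OptimizedImage.crop("in.png", "out.png", 200, 500)
```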
This commit fixes two issues at play. The first was introduced
in f6c852b (or maybe not introduced
but rather revealed). When a user posted a new message in a topic,
they received the unread topic tracking state MessageBus message,
and the Unread (X) indicator was incremented by one, because with the
aforementioned perf commit we "guess" the correct last read post
for the user, because we no longer calculate individual users' read
status there. This meant that every time a user posted in a topic
they tracked, the unread indicator was incremented. To get around
this, we can just exclude the user who created the post from the
target users of the unread state message.
The second issue was related to the private message topic tracking
state, and was somewhat similar. Whenever a user created a new private
message, the New (X) indicator was incremented, and could not be
cleared until the page was refreshed. To solve this, we just don't
update the topic state for the user when the new_topic tracking state
message comes through if the user who created the topic is the
same as the current user.
cf. https://meta.discourse.org/t/bottom-of-topic-shows-there-is-1-unread-remaining-when-there-are-actually-0-unread-topics-remaining/220817
It used to show the warning that said only members of certain groups
could view the topic even if the group "everyone" was listed in
the category's permission list.
This PR includes the `suspended` attribute in the post serializer so it can be checked in the post widget to add a CSS class name.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Discourse has the Discourse Connect Provider protocol that makes it possible to
use a Discourse instance as an identity provider for external sites. As a
natural extension to this protocol, this PR adds a new feature that makes it
possible to use Discourse as a 2FA provider as well as an identity provider.
The rationale for this change is that it's very difficult to implement 2FA
support in a website and if you have multiple websites that need to have 2FA,
it's unrealistic to build and maintain a separate 2FA implementation for each
one. But with this change, you can piggyback on Discourse to take care of all
the 2FA details for you for as many sites as you wish.
To use Discourse as a 2FA provider, you'll need to follow this guide:
https://meta.discourse.org/t/-/32974. It walks you through what you need to
implement on your end/site and how to configure your Discourse instance. Once
you're done, there is only one additional thing you need to do which is to
include `require_2fa=true` in the payload that you send to Discourse.
When Discourse sees `require_2fa=true`, it'll prompt the user to confirm their
2FA using whatever methods they've enabled (TOTP or security keys), and once
they confirm they'll be redirected back to the return URL you've configured and
the payload will contain `confirmed_2fa=true`. If the user has no 2FA methods
enabled however, the payload will not contain `confirmed_2fa`, but it will
contain `no_2fa_methods=true`.
You'll need to be careful to re-run all the security checks and ensure the user
can still access the resource on your site after they return from Discourse.
This is very important because there's nothing that guarantees the user that
will come back from Discourse after they confirm 2FA is the same user that
you've redirected to Discourse.
Internal ticket: t62183.
This commit improves the logic for rolling up IPv4 screened IP
addresses and extends it to IPv6. IPv4 addresses will roll up only
up to /24. IPv6 can roll up to /48 at most. The log message that is
generated contains the list of original IPs and the new subnet.
* FEATURE: Let sites add a sitemap.xml file.
This PR adds the same features discourse-sitemap provides to core. Sitemaps are only added to the robots.txt file if the `enable_sitemap` setting is enabled and `login_required` disabled.
After merging discourse/discourse-sitemap#34, this change will take priority over the sitemap plugin because it will disable itself. We're also using the same sitemaps table, so our migration won't try to create it
again using `if_not_exists: true`.
Sometimes we need to update a _lot_ of ACLs on S3 (such as when secure media
is enabled), and since it takes ~1s per upload to update the ACL, this is
best spread out over many jobs instead of having to do the whole thing serially.
In future, it will be better to have a job that can be run based on
a column on uploads (e.g. acl_stale) so we can track progress, similar
to how we can set the baked_version to nil to rebake posts.
After this commit, category group permissions can only be seen by users
that are allowed to manage a category. In the past, we inadvertently
included a category's group permissions settings in `CategoriesController#show`
and `CategoriesController#find_by_slug` endpoints for normal users when
those settings are only a concern to users that can manage a category.
The permissions for the 'everyone' group were not serialized because
the list of groups a user can view did not include it. This bug was
introduced in commit dfaf9831f7.
Due to the default CSP, web workers instantiated from CDN-based assets are still
treated as "same-origin", meaning that we had no way of safely instantiating
a web worker from a theme.
This limits the theme system and adds the arbitrary restriction that WASM-based
components cannot be safely used.
To resolve this limitation all js assets in about.json are also cached on
local domain.
```json
{
  "name": "Header Icons",
  "assets": {
    "worker": "assets/worker.js"
  }
}
```
This can then be referenced in JS via `settings.theme_uploads_local.worker`.
local_js_assets are unconditionally served from the site directly and
bypass the entire CDN, using the pre-existing JavascriptCache
Previous to this change this code was completely dormant on sites which
used s3 based uploads, this reuses the very well tested and cached asset
system on s3 based sites.
Note, when creating local_js_assets it is highly recommended to keep the
assets lean and keep all the heavy lifting in CDN-based assets. For example
wasm files can still live on the CDN but the lean worker that loads it can
live on local.
This change unlocks wasm in theme components, so wasm is now also allowed
in `theme_authorized_extensions`
* more usages of upload.content
* add a specific test for upload.content
* Adjust logic to ensure that after upgrades we still get a cached local js
on save
There are still some, but those are in actual code that's used outside core, so the change there would need to go through the deprecation cycle. That's a task for another day.
Previously we only supported a single 'required tag group' for a category. This commit allows admins to specify multiple required tag groups, each with their own minimum tag count.
A new category_required_tag_groups database table replaces the existing columns on the categories table. Data is automatically migrated.
Previous to this change an optimisation stripped crawler content from
all mobile browsers.
This had a side effect that meant that when we dropped support for an old
mobile platform we would stop rendering topic and topic list pages.
The new implementation ensures we only perform the optimisation on modern
mobile browsers.
This patch removes some of our freedom patches that have been deprecated
for some time now.
Some of them have been updated so we’re not shipping code based on an
old version of Rails.
All users are members of the EVERYONE group, but this group is special and
is omitted from the group_users table. When checking permission we need to
make sure we also add a bypass.
This also fixes a very buggy test in post_alerter; it was confirming the
broken behavior due to the fabricator flow.
When it defined the tag group, the everyone group automatically had full access,
and the additional permission fabricated just added one more group. After the
fix was made to the code, the test started failing. Fabricators can be risky.
This did not work properly every time because the destination URL was
saved in a cookie and that can be lost for various reasons. This commit
redirects the user to invited topic if it exists.
raw_html posts (i.e. those which are pulled as part of our comments integration) don't go through our markdown pipeline, so `upload://` URLs are not supported. Running pull_hotlinked_images will break any images in the post.
In future we may add support for pulling hotlinked images in these posts. But for now, disabling it will stop it breaking images.
When emailing a group inbox and including other support-type
emails (or even just regular ones with autoresponders) in the
CC field, each automated reply to the group inbox triggered
more emails to be sent out to all CC addresses to notify them
of the new reply, which in turn caused more automated emails
to be sent to the group inbox.
This commit fixes the issue by preventing any emails being sent
by the PostAlerter when the new post has an incoming email record
which is_auto_generated, which we detect in Email::Receiver.
Via the API it is possible to create a user with an integer username. So
123 instead of "123". This causes the following 500 error:
```
NoMethodError (undefined method `unicode_normalize' for 1:Integer)
app/models/user.rb:276:in `normalize_username'
```
See: https://meta.discourse.org/t/222281
Fixes the issue where making a user x the owner of a post doesn't
cause the concerned topic to be listed in the new owner's `My Posts`
top menu filter
per https://meta.discourse.org/t/199369
can_permanently_delete field in Post and TopicViewDetails serializers
cannot use Guardian's can_permanently_delete because their use is
different. The field from the serializers is used to show the button
and the button is shown even if the post cannot be removed forever
because not enough time has passed since it was first deleted. The
guardian method is used by the controller to check that the post can
really be deleted.
Previous to this change, if any of the assets had extensions that were not
allowed, they would simply be silently ignored. This could lead to broken
themes that are very hard to debug.
Our group fabrication creates groups with name "my_group_#{n}" where n
is the sequence number of the group being created. However, this can
cause the test to be flaky if and when a group with name `my_group_10`
is created as it will be ordered before
`my_group_9`. This commit makes the group names deterministic to
eliminate any flakiness.
This reverts commit 558bc6b746.
In certain instances when viewing a category, the name of a group with
restricted visibility may be revealed to users who do not have the
required permission.
This commit introduces a new use_polymorphic_bookmarks site setting
that is default false and hidden, that will be used to help continuous
development of polymorphic bookmarks. This setting **should not** be
enabled anywhere in production yet, it is purely for local development.
This commit uses the setting to enable create/update/delete actions
for polymorphic bookmarks on the server and client side. The bookmark
interactions on topics/posts are all usable. Listing, searching,
sending bookmark reminders, and other edge cases will be handled
in subsequent PRs.
Comprehensive UI tests will be added in the final PR -- we already
have them for regular bookmarks, so it will just be a matter of
changing them to be for polymorphic bookmarks.
Under some conditions, replacing an `<img` with `![]()` can break rendering, and make the image disappear.
Context at https://meta.discourse.org/t/152801
* DEV: add testing for multi del on keys
Following #15905 we were missing some tests, this covers cases where
del is used in the form of .del(key1,key2)
Tags (and tag groups) can be configured so that they can only be used in specific categories and (optionally) restrict topics in these categories to be able to add/use only these tags. These restrictions work as expected when a topic is created without going through the review queue; however, if the topic has to be reviewed by a moderator then these restrictions currently aren't checked before the topic is sent to the review queue; they're only checked later when a moderator tries to approve the topic. This is a problem because if a user manages to submit a topic that doesn't meet the restrictions, moderators won't be able to approve it and it'll be stuck in the review queue.
This PR prevents topics that don't meet the tags requirements from being sent to the review queue and shows the poster an error message that indicates which tags cannot be used.
Internal ticket: t60562.
Previously, our `upload://` protocol urls were only supported in markdown image tags. This meant that our PullHotlinkedImages job was forced to convert `<img` tags to markdown. Depending on the exact syntax, this can actually cause the image to break.
This commit adds support for `upload://` inside regular HTML `<img` tags. In a future commit, we'll be able to use this to make our PullHotlinkedImages job much more robust.
Context at https://meta.discourse.org/t/152801
Since we give a 200 response for login errors, we should be checking
whether the error key exists in each case or not.
Some tests were broken, because they weren't checking.
* FIX: Redirect if Discourse-Xhr-Redirect is present
`handleRedirect` was passed a wrong argument type (a string) instead of
a jqXHR object and missed the fields checked in the condition, thus always
evaluating to `false`.
* FIX: Add `errors` field to the group update confirmation
An explicit confirmation about the effect of the group update is
required if the default notification level changes. Previously, if the
confirmation was missing, the API endpoint failed silently, returning
a 200 response code and a `user_count` field. This change ensures that
a proper error code (422) is returned, along with a descriptive error
message and the additional information in the `user_count` field.
This commit also refactors the API endpoint to use the
`Discourse-Xhr-Redirect` header to redirect the user if the group is
no longer visible.
PostAnalyzer and CookedPostProcessor both replace URLs with oneboxes.
PostAnalyzer did not use the max_oneboxes_per_post site setting, and
CookedPostProcessor replaced at most max_oneboxes_per_post URLs, ignoring
the oneboxes that were already replaced by PostAnalyzer.
If the crawled page returned an error, `FinalDestination#safe_get`
yielded `nil` for `uri` and `chunk` arguments. Another problem is that
`get` did not handle the case when `safe_get` failed and did not return
the `location` and `set_cookie` headers.
Previously we would issue a 403 for all invalid routes under `/tags/c/...`, which is not semantically correct. In some cases, these 403'd routes would then be handled successfully in the Ember app, leading to some very confusing behavior.
If a group's messageable_level is set to nobody then staff should not be able to send PMs to it.
Co-authored-by: Martin Brennan <martin@discourse.org>
* DEV: Allow params to be passed on topic redirects
There are several places where we redirect a url to a standard topic url
like `/t/:slug/:topic_id` but we weren't always passing query parameters
to the new url.
This change allows a few more query params to be included on the
redirect. The new params that are permitted are page, print, and
filter_top_level_replies. Any new params will need to be specified.
This also prevents the odd trailing empty page param that would
sometimes appear on a redirect. `/t/:slug/:id.json?page=`
* rubocop: fix missing space after comma
* fix another page= reference
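A rough sketch of the permitting logic described above (not the exact implementation):
```ruby
# permit only a fixed set of query params and drop blank values,
# which also avoids the trailing `?page=` case
permitted = params.permit(:page, :print, :filter_top_level_replies).to_h
permitted.delete_if { |_, v| v.blank? }

url = topic.relative_url
url += "?#{permitted.to_query}" if permitted.present?
redirect_to url
```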
* FEATURE: use canonical links in posts.rss feed
Previously we used non-canonical links in posts.rss.
These links get crawled frequently by crawlers when discovering new
content, forcing crawlers to hop to non-canonical pages just to end up
visiting canonical pages.
This uses up expensive crawl time and adds load on Discourse sites.
Old links were of the form:
`https://DOMAIN/t/SLUG/43/21`
New links are of the form:
`https://DOMAIN/t/SLUG/43?page=2#post_21`
This also adds a post_id identifier element to the crawler view that was
missing.
Note, to avoid very expensive N+1 queries required to figure out the
page a post is on during rss generation, we cache that information.
There is a smart "cache breaker" which ensures worst case scenario is
a "page drift" - meaning we would publicize a post is on page 11 when
it is actually on page 10 due to post deletions. Cache holds for up to
12 hours.
Change only impacts public post RSS feeds (`/posts.rss`)
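For illustration, the page math behind the new link format looks roughly like this (assuming the default of 20 posts per page):
```ruby
# e.g. post_number 21 with 20 posts per page lands on page 2
posts_per_page = 20
page = ((post_number - 1) / posts_per_page) + 1
url = "#{Discourse.base_url}/t/#{slug}/#{topic_id}?page=#{page}#post_#{post_number}"
```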
This update topic route has never worked. Better late than never. I am
in favor of using non-slug urls when using the api so I do think we
should fix this route.
Just thought I would update the `:id` param to `:topic_id` here in the
routes file instead of updating the controller to handle both params.
Added a spec to test this route.
Also added the same constraint we have on other topic routes to ensure
we only pass in an ID that is a digit.
The `blocked onebox domains` setting lets site owners change what sites
are allowed to be oneboxed. When a link is entered into a post,
Discourse checks the domain of the link against that setting and blocks
the onebox if the domain is blocked. But if there's a chain of
redirects, then only the final destination website is checked against
the site setting.
This commit amends that behavior so that every website in the redirect
chain is checked against the site setting, and if anything is blocked
the original link doesn't onebox at all in the post. The
`Discourse-No-Onebox` header is also checked in every response and the
onebox is blocked if the header is set to "1".
Additionally, Discourse will now include the `Discourse-No-Onebox`
header with every response if the site requires login to access content.
This is done to signal to a Discourse instance that it shouldn't attempt
to onebox other Discourse instances if they're login-only. Non-Discourse
websites can also include that header if they don't wish to have
Discourse onebox their content.
Internal ticket: t59305.
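A rough sketch of the per-hop check described above (helper names hypothetical):
```ruby
# every hop in the redirect chain is now checked, not just the final
# destination; helper names are illustrative
def oneboxable?(redirect_chain)
  redirect_chain.all? do |response|
    !blocked_onebox_domain?(response.uri.hostname) &&
      response.headers["Discourse-No-Onebox"] != "1"
  end
end
```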
Previously, if an admin user tried to add/remove
users to another user's ignored list, they would
be added to the admin's own ignore list because the
controller used current_user. Now, for admins only,
a source_user_id parameter can be passed through,
which will be used to ignore the target user on
behalf of that source user.
The user can select what happens with a bookmark after it expires. A new
option allows a bookmark's reminder to be kept even after it has expired.
After a bookmark's reminder notification is created, the reminder date
will be highlighted in red until the user resets the reminder date.
The user can do that using the new Clear Reminder button from the dropdown.
The search_ignore_accents site setting can be used to make the search
indexer remove the accents before indexing the content. The unaccent
function from PostgreSQL is better than Ruby's unicode_normalize(:nfkd).
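An illustrative comparison of the two approaches (`DB.query_single` is Discourse's MiniSql helper; assumes the unaccent extension is available):
```ruby
# PostgreSQL's unaccent extension
DB.query_single("SELECT unaccent('déjà vu')").first   # => "deja vu"

# Ruby equivalent: decompose, then strip combining marks
"déjà vu".unicode_normalize(:nfkd).gsub(/\p{Mn}/, "") # => "deja vu"
```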
When creating files with create-multipart, if the file
size was somehow zero we were showing a very unhelpful
error message to the user. Now we show a nicer message,
and proactively don't call the API if we know the file
size is 0 bytes in JS, along with extra console logging
to help with debugging.
Some product pages on Amazon are using a new HTML structure, meaning the previous Onebox engine was unable to gather the price and/or description. This change should allow these pages to be Oneboxed.
This change adds support for the categories endpoint to have an api
scope. Only adds GET scope for listing categories and for fetching a
single category.
See: https://meta.discourse.org/t/218080/4
This PR adds an extra description to the 2FA page when granting a user admin access. It also introduces a general system for adding customized descriptions that can be used by future actions.
(Follow-up to dd6ec65061)
Discourse users and associated accounts are created or updated when a
user logs in or connects an account using their account preferences.
This new API can be used to create associated accounts and users too,
if necessary.
Our @mention user search prioritized users based on prefix matches.
So if searching for `sa`, we will display `sam`, then `asam`, in that order.
Previously, we did not prioritize group matches based on prefix. This change ensures better parity.
Implementation notes:
1. User search only prioritizes based on username prefix, not name prefix. TBD if we want to change that.
2. @mention on client side will show 0 group matches if we fill up all the spots with user matches. TBD if we want to unconditionally show the first / second group match.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
This feature was rarely used, could be used to spam users, and offered
no way to add context about why the user was notified of a topic. A
simple private message that includes the link and a personalized message
can be used instead.
* DEV: Show only top level replies
Adds a new query param to the topic view so that we can filter out posts
that aren't top level replies. If a post is a reply to another post
instead of the original topic post we should not include it in the
response if the `filter_top_level_replies` query param is present.
* add rspec test
It's very easy to forget to add `require 'rails_helper'` at the top of every core/plugin spec file, and omissions can cause some very confusing/sporadic errors.
By setting this flag in `.rspec`, we can remove the need for `require 'rails_helper'` entirely.
This allows text editors to use correct syntax coloring for the heredoc sections.
Heredoc tag names we use:
languages: SQL, JS, RUBY, LUA, HTML, CSS, SCSS, SH, HBS, XML, YAML/YML, MF, ICS
other: MD, TEXT/TXT, RAW, EMAIL
Previously, email validations could fire when deleting posts if, for
whatever reason, any user validations failed on the user object.
This kind of condition could happen in core due to a corruption of a
user record, or via a plugin that introduces a new validation on User.
Lib specs moved in 45cc16098d
Move the new selectable_avatars_mode_validator_spec to the new location
Remove the old selectable_avatars_enabled_validator_spec
follow-up of d1bdb6c65d
* FEATURE: upload an avatar option for uploading avatars with selectable avatars
Allow staff or users at or above a trust level to upload avatars even when the site
has selectable avatars enabled.
Everyone can still pick from the list of avatars. The option to upload is shown
below the selectable avatar list.
refactored boolean site setting into an enum with the following values:
disabled: No selectable avatars enabled (default)
everyone: Show selectable avatars, and allow everyone to upload custom avatars
tl1: Show selectable avatars, but require tl1+ and staff to upload custom avatars
tl2: Show selectable avatars, but require tl2+ and staff to upload custom avatars
tl3: Show selectable avatars, but require tl3+ and staff to upload custom avatars
tl4: Show selectable avatars, but require tl4 and staff to upload custom avatars
staff: Show selectable avatars, but only allow staff to upload custom avatars
no_one: Show selectable avatars. No users can upload custom avatars
Co-authored-by: Régis Hanol <regis@hanol.fr>
Currently, providing things like `filter[%24acunetix]=1` to
`UserActionsController#index` will throw an exception because instead of
getting a string as expected, we get a hash.
This patch simply uses `#permit` from strong parameters properly: first
we apply it to the whole set of parameters, so that it filters the keys
we're interested in. By doing this, if a value is a hash for example, the
whole key/value pair will be ignored completely.
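A small sketch of the behavior (parameter names as in the controller; values illustrative):
```ruby
# with `filter[%24acunetix]=1`, params[:filter] arrives as a Hash;
# permitting :filter as a scalar drops the key/value pair entirely
permitted = params.permit(:filter, :offset, :username)
permitted[:filter] # => nil instead of { "$acunetix" => "1" }
```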
This commit handles the edge case where a draft is lost with no warning if the user edits the title (or category/tags) of a topic while they're replying to the same topic. Repro steps are as follows:
1. Start replying to a topic and type enough to get a draft saved.
2. Scroll up to the topic title and click the pencil icon next to the topic title, change the title, category and/or tags, and then save the changes.
3. Reload the page and you'll see that the draft is gone.
This happens because we only allow 1 draft per topic per user, and when you edit the title of a topic that you're replying to, from the server's perspective it looks as if you've submitted your reply, so it advances the draft sequence for the topic and deletes the draft.
The fix in this commit makes `PostRevisor` skip advancing the draft sequence when a topic's title is edited using the pencil button next to the title.
Internal ticket: t60854.
Co-authored-by: Robin Ward <robin.ward@gmail.com>
This option will make it so the [quote] bbcode will always
include the HTML link to the quoted post, even if a topic_id
is not provided in the PrettyText#cook options. This is so
[quote] bbcode can be used in other places, like chat messages,
that always need the link and do not have an "off-topic" ID
to use.
Previously, cached counting made redis calls in the main thread and performed
the flush in the main thread.
This could lead to pathological states under extremely heavy load.
This refactor reduces load and cleans up the interface
Previously we were publishing one messagebus message per user which was 'tracking' a topic. On large sites, this can easily be 1000+ messages. The important information in the message is common between all users, so we can manage with a single message on a shared channel, which will be much more efficient.
For user-specific values (notification_level and last_read_post_number), the JS app can infer values which are 'good enough'. Correct values will be loaded as soon as a topic-list containing the topic is visited.
aa1442fdc3 split theme stylesheets so that every component gets its own stylesheet. Therefore, there is now no need for parent themes to collate the settings/variables of its children during scss compilation.
Technically this is a breaking change for any themes which depend on the settings/variables of their child components. That was never a supported/recommended arrangement, so we don't expect this to cause issues.
If a theme is updated to introduce a new setting AND immediately make use of it in a stylesheet, then an error was being shown. This is because the stylesheet compilation was using the theme's cached settings, and the cache is only cleared **after** the theme has finished compiling.
This commit updates the SCSS compilation to use uncached values for settings. A similar fix was applied to other parts of theme compilation back in 2020: (a51b8d9c66)
The previous Discourse.cache usage was different to how other theme-related caching is handled, and also required reaching out to redis every time. The common theme cache is held in memory (as a DistributedCache).
This commit makes sure that the email log's bounce_error_code
conforms to the SMTP error code RFC on save, so that
it is always in the format X.X.X or XXX without any
additional string details. Also included is a migration
to fix this issue for past records.
Similar site settings exist for likes and edits and the new ones work
in a similar way.
By default, users below TL2 have a limit of 20. The limit is multiplied
by 1.5 for TL2 users (up to 30), by 2 for TL3 users (up to 40) and by 3
for TL4 users (up to 60).
We validate the *format* of email addresses in many places with a match against
a regex, often with very slightly different syntax.
Adding a separate EmailAddressValidator simplifies the code in a few spots and
feels cleaner.
Deprecated the old location in case someone is using it in a plugin.
No functionality change is in this commit.
Note: the regex used at the moment does not support using address literals, e.g.:
* localpart@[192.168.0.1]
* localpart@[2001:db8::1]
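A sketch of the consolidated check (treat the method name as illustrative):
```ruby
EmailAddressValidator.valid_value?("user@example.com")   # => true
# address literals are not supported by the current regex (see note above)
EmailAddressValidator.valid_value?("user@[192.168.0.1]") # => false
```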
Themes often cache `nil` values in a DistributedCache. This bug meant that we were re-calculating some values on every request, AND triggering message-bus publishing on every request.
This fix should provide a significant performance improvement for busy sites.
2FA support in Discourse was added and grown gradually over the years: we first
added support for TOTP for logins, then we implemented backup codes, and last
but not least, security keys. 2FA usage was initially limited to logging in,
but it has been expanded and we now require 2FA for risky actions such as
adding a new admin to the site.
As a result of this gradual growth of the 2FA system, technical debt has
accumulated to the point where it has become difficult to require 2FA for more
actions. We now have 5 different 2FA UI implementations and each one has to
support all 3 2FA methods (TOTP, backup codes, and security keys) which makes
it difficult to maintain a consistent UX for these different implementations.
Moreover, there is a lot of repeated logic in the server-side code behind these
5 UI implementations which hinders maintainability even more.
This commit is the first step towards repaying the technical debt: it builds a
system that centralizes as much as possible of the 2FA server-side logic and
UI. The 2 main components of this system are:
1. A dedicated page for 2FA with support for all 3 methods.
2. A reusable server-side class that centralizes the 2FA logic (the
`SecondFactor::AuthManager` class).
From a top-level view, the 2FA flow in this new system looks like this:
1. User initiates an action that requires 2FA;
2. Server is aware that 2FA is required for this action, so it redirects the
user to the 2FA page if the user has a 2FA method, otherwise the action is
performed;
3. User submits the 2FA form on the page;
4. Server validates the 2FA and if it's successful, the action is performed and
the user is redirected to the previous page.
A more technically-detailed explanation/documentation of the new system is
available as a comment at the top of the `lib/second_factor/auth_manager.rb`
file. Please note that the details are not set in stone and will likely change
in the future, so please don't use the system in your plugins yet.
Since this is a new system that needs to be tested, we've decided to migrate
only the 2FA for adding a new admin to the new system at this time (in this
commit). Our plan is to gradually migrate the remaining 2FA implementations to
the new system.
For screenshots of the 2FA page, see PR #15377 on GitHub.
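A rough sketch of how a controller action opts into the new flow, based on the description above (details may change, per the note in `lib/second_factor/auth_manager.rb`):
```ruby
# illustrative only: the action runs after 2FA succeeds, or immediately
# if the user has no second factor configured
def grant_admin
  run_second_factor!(SecondFactor::Actions::GrantAdmin)
  render json: success_json
end
```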
* FIX: Don't accept accents in slug if generation_method == 'ascii'
Fixes bug reported in:
- https://meta.discourse.org/t/404-when-trying-to-edit-category-with-accent-in-slug/214762
- https://meta.discourse.org/t/formatting-and-accents-in-urls/215734/5
Assuming `SiteSetting.slug_generation_method == 'ascii'`:
If the user provides a slug containing non-ascii characters while
creating the category, the user will receive a 404 error just
after saving the category since the slug will be escaped anyway but
Category.find_by_slug_path won't escape the category slug
causing the Edit Page of the category to be inaccessible.
This commit checks the provided slug and raises an error if the
provided slug contains non-ascii characters, ensuring that the
provided value is consistent with the site settings.
It also changes Category.find_by_slug_path to always escape the slug,
since, if present, it is escaped anyway in Category.ensure_slug, to
prevent the 404 in the Edit Category Page in case the user already
has some category with a non-ascii slug.
* Removed trailing whitespace
When a parent or grandparent category is muted, the category should be muted as well.
Still, this can be overridden by setting an individual subcategory notification level.
A CategoryUser record is not created; mute for subcategories is purely virtual.
This can happen if the topic to which a user is invited is in a private
category and the user was not invited to one of the groups that can see
that specific category.
This used to be a warning and this commit makes it an error.
These calls were originally introduced to ensure that any stale users were cleaned up regularly. This is quite an expensive process to run on every `GET /presence/get` call, and will also cause errors during readonly mode.
Since the original introduction of this logic, we added the `Jobs::PresenceChannelAutoLeave` which runs every minute. That should be enough to clean up any stale users.
Note that users which explicitly `leave` a channel are still removed immediately. This auto_leave logic just takes care of clients which have disappeared without leaving.
This commit introduces two new APIs for handling unused uploads: one
can be used to exclude uploads in bulk when the data model allows it, and
the other one excludes uploads one by one.
We added this constraint in 5bd55acf83
but it is causing problems in hosted sites and is catching the
issue too far down the line. This commit removes the constraint
for now, and also fixes an issue found with PostDestroyer
which wasn't using the UserStatCountUpdater when updating post_count
and thus was causing negative numbers to occur.
Previously calls such as `Emoji["smile"]` would force a full dehydration of
objects from Redis.
This introduces version-safe site and global emoji caches so lookups are
cheap. It eliminates iterating through the list of emojis and pulling from
redis.
Distributed cache uses a normalized name as the key and stores an Array tuple
with version and Emoji. Successful hits always confirm version matches.
Interface to Emoji object remains unchanged.
We opted for 2 caches to improve reuse on multisites. Misses, though, will be
stored in both caches. If there is a hit on the global cache we can avoid
looking it up in the site-local cache and storing a miss there.
Previously we were calling `EXPIRE` every time we incremented a given key. Instead, we can call EXPIRE once when the key is first populated. A Lua script is used to make this as efficient as possible.
Consumers of this Concern use daily keys. Since we're now calling EXPIRE only at the beginning of the day, rather than throughout the day, the expire time has been increased from 3 to 4 days.
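A sketch of the pattern (not the exact script used by the Concern):
```ruby
# set the TTL only when INCR creates the key, so EXPIRE runs once per
# key instead of once per increment
INCR_EXPIRE_ONCE = <<~LUA
  local value = redis.call("INCR", KEYS[1])
  if value == 1 then
    redis.call("EXPIRE", KEYS[1], ARGV[1])
  end
  return value
LUA

Discourse.redis.eval(INCR_EXPIRE_ONCE, keys: ["counter:2022-03-01"], argv: [4.days.to_i])
```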
Whenever we got a bounced email in the Email::Receiver we
previously would just set bounced: true on the EmailLog and
discard the status/diagnostic code. This commit changes this
flow to store the bounce error code (defined in the RFC at
https://www.iana.org/assignments/smtp-enhanced-status-codes/smtp-enhanced-status-codes.xhtml)
not just in the Email::Receiver, but also via webhook events
from other mail services and from SNS.
This commit does not surface the bounce error in the UI,
we can do that later if necessary.
Without this parameter, requests for sourcemaps on shared-CDN multisites will not be routed to the correct database, resulting in a 404.
The stylesheet content now depends on the site hostname, so the hostname has been added to the digest.
We serve `service-worker.js` in an unusual way, which means that the sourcemap is not available on an adjacent path. This means that the browser fails to fetch the map, and shows an error in the console.
This commit re-writes the source map reference in the static_controller to be an absolute link to the asset (including the appropriate CDN, if enabled), and adds a spec for the behavior.
It's important to do this at runtime, rather than JS precompile time, so that changes to CDN configuration do not require re-compilation to take effect.
- Update UI to improve contrast
- Make it clear that the message is only shown to administrators
- Add theme name and id to the console output
- Parse the error backtrace to identify the theme-id for post-decoration errors
- Improve console output to include the theme name / URL
- Add `?safe_mode=no_custom` to the admin panel link, so that it will work even if the theme is causing the site to break
The chat quoting mechanism will need to be able to generate
markdown for all kinds of uploads. The UploadMarkdown class
was missing generation for video and audio uploads. This
commit adds that in, and also expands the server-side regex
recognition of FileHelper types to match those in uploads.js,
and adds a spec for UploadMarkdown.
When a site has secure media enabled and a post is created with secure
media, we were incorrectly cooking custom emoji URLs and using the
secure URL for those emojis, even though they should not be
considered secure (their corresponding upload records in the
database are _not_ secure). Now, instead of relying on the blanket
post.with_secure_media? boolean for the secure: param, we also
make sure the image whose URL is being cooked is
_not_ a custom emoji.
This is intended for use by plugins which are building their own
topic lists, and want to include PMs alongside regular topics (e.g.
discourse-assign). It does not get used directly in core.
* DEV: Document external topic id endpoints
This commit documents the existing Create Topic endpoint with the
`external_id` param and documents the new get topic by external id
endpoint.
It also refactors the existing topic show endpoint to use the new format
where we load the expected json schema response from a file.
See: 71f7f7ed49
* clean up unused test variables
Breakdown of fixes in this commit:
* `UserStat#topic_count` was not updated when visibility of
the topic changed.
* `UserStat#post_count` was not updated when post was hidden or
unhidden.
* `TopicConverter` was only incrementing or decrementing the counts by 1
even if a user has multiple posts in the topic.
* The commit turns off the verbose logging by default as it is just
noise to normal users who are not debugging this problem.
This adds logic to create an `InvitedUser` record, increase
`redemption_count` and create an `:invitee_accepted` notification to let
the inviter know that the invitee used the invite.
Initial support for this was implemented in commit 9969631.
In ab5361d69a, we rescue from the PG error,
but the transaction is already aborted, causing any DB query after it to
fail. As such, we avoid triggering the error in the first place by
checking that we would not be inserting a negative number into the
counter cache.
Follow-up to ab5361d69a
Allows to write custom code blocks:
````
```mermaid height=200,foo=bar
test
```
````
Which will then get converted to:
```
<pre data-code-wrap="mermaid" data-code-height="200" data-code-foo="bar">
<code class="lang-nohighlight">
test
</code>
</pre>
```
Sorting group members always kept the group owners at the top of
the list. This commit keeps the group owners at the top of the list only
when no explicit order is given.
This commit adds a new advance_draft option to PostCreator that controls
whether the draft sequence will be advanced or not. If the draft sequence
is advanced then the old drafts will be cleared. This used to happen for
posts created by plugins or through the API, which cleared user drafts
by mistake.
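A sketch of how a plugin or API caller can now opt out (option name per the description above):
```ruby
PostCreator.new(
  user,
  topic_id: topic.id,
  raw: "Posted by a plugin",
  advance_draft: false # don't advance the draft sequence / clear drafts
).create
```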
There are still spots in the code base which result in us trying to turn the post and topic count negative. However,
we have a job that runs on a daily basis which will correct the counts. Therefore, avoid raising an error for now
and log the exception instead.
* FEATURE: Add external_id to topics
This commit allows for topics to be created and fetched by an
external_id. These changes are API only for now as there aren't any
frontend changes.
* add annotations
* add external_id to this spec
* Several PR feedback changes
- Add guardian to find topic
- 403 is returned for not found as well now
- add `include_external_id?`
- external_id is now case insensitive
- added test for posts_controller
- added test for topic creator
- created constant for max length
- check that it redirects to the correct path
- restrain external id in routes file
* remove puts
* fix tests
* only check for external_id in webhook if exists
* Update index to exclude external_id if null
* annotate
* Update app/controllers/topics_controller.rb
We need to check whether the topic is present first before passing it to the guardian.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* Apply suggestions from code review
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Follow up to 6f7364e48b to add a spec
that tests the full authentication of a Windows Hello algorithm (-257)
webauthn verification. The test added in that commit only tested that
we know about that algorithm, not whether it was actually usable.
* FEATURE: RS512, RS384 and RS256 COSE algorithms
These algorithms are not implemented by cose-ruby, but used in the web
authentication API and were marked as supported.
* FEATURE: Use all algorithms supported by cose-ruby
Previously only a subset of the algorithms were allowed.
This commit introduces our own handling and warning for Sidekiq's new 'non-json-serializable' warning. This decouples us from Sidekiq's own deprecation cycle, and allows us to use our own deprecation system. It also means that the dump/parse happens in test mode, which will help us to catch occurrences before they reach production.
Our discourse_public_exceptions middleware is designed to catch bubbled exceptions from lower in the stack, and then use `ApplicationController.rescue_with_handler` to render an appropriate error response.
When the request itself is invalid, we had an escape-hatch to skip re-dispatching the request to ApplicationController. However, it was possible to work around this by 'layering' the errors. For example, if you made a request which resulted in a 404, but **also** had some other invalidity, the escape hatch would not be triggered.
This commit ensures that these kinds of 'layered' errors are properly handled, without logging warnings. It also adds detection for invalid JSON bodies and badly-formed multipart requests.
The user-facing behavior is unchanged. This commit simply prevents warnings being logged for invalid requests.
This commit allows group SMTP emails to be sent with a
different from email address that has been set up as an
alias in the email provider. Emails from the alias will
be grouped correctly using Message-IDs in the mail client,
and replies to the alias go into the correct group inbox.
Ensures that `UserStat#post_count` and `UserStat#topic_count` do not
go below 0. When they do, as happened here, we tend to have bugs in our
code since we're usually coding with the assumption that the counts aren't
negative.
In order to support the constraints, our post and topic fabricators in
tests will now automatically increment the count for the respective
user's `UserStat` as well. We have to do this because our fabricators
bypass `PostCreator`, which holds the responsibility of updating `UserStat#post_count` and
`UserStat#topic_count`.
* Chinese segmentation will continue to rely on cppjieba
* Japanese segmentation will use our port of TinySegmenter
* Korean currently does not rely on segmentation which was dropped in c677877e4f
* SiteSetting.search_tokenize_chinese_japanese_korean has been split
into SiteSetting.search_tokenize_chinese and
SiteSetting.search_tokenize_japanese respectively
- Limit bulk re-invite to 1 time per day
- Move bulk invite by csv behind a site setting (hidden by default)
- Bump invite expiry from 30 -> 90 days
## Updates to rate_limiter
When limiting reinvites I found that **staff** are never limited in any way. So I updated the **rate_limiter** model to allow for a few things (see the sketch after the list below):
- add an optional param of `staff_limit`, which (when included and passed values, and the user passes `.staff?`) will override the default `max` & `secs` values and apply them to the user.
- in the case you **do** pass values to `staff_limit` but the user **does not** pass `staff?`, the standard `max` & `secs` values will be applied to the user.
This should give us enough flexibility to
1. continue to apply a strict rate limit to a standard user
2. but also apply a secondary (less strict) limit to staff
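An illustrative use of the new option, assuming the shape described above:
```ruby
RateLimiter.new(
  current_user,
  "bulk-reinvite-per-day",
  1,     # max for regular users
  1.day, # window
  staff_limit: { max: 10, secs: 1.day } # looser limit when current_user.staff?
).performed!
```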
Non-staff users are not allowed to see whispers, so this change prevents
non-staff users from seeing a like count that does not make sense to
them. In the future, we might consider adding another like count column
for staff users.
Follow-up to 4492718864
When creating a direct message to a group with group SMTP
set up, and adding another person to that message in the OP,
we send an email to the second person in the OP via the group_smtp
job. This in turn creates an IncomingEmail record to guard against
IMAP double sync.
The issue with this was that this IncomingEmail (which is essentially
a placeholder/dummy one) was having its Message-ID used as the canonical
References Message-ID for subsequent emails sent out to user_private_message
recipients (such as members of the group), causing threading issues in
the mail client. The canonical <topic/ID@HOST> format should be used
instead for these cases.
This commit fixes the issue by only using the IncomingEmail for the
OP's Message-ID if the OP was created via our handle_mail email receiver
pipeline. It does not make sense to use it in other cases.
When the record is not saved, we should display a proper message.
One potential reason can be plugins; for example, discourse-calendar specifies that only the first post can contain an event.
* FIX: Remove svg icons from webmanifest shortcuts
While SVGs are valid in the webmanifest, Chromium has not implemented
support for it in this specific manifest member.
Revert when https://bugs.chromium.org/p/chromium/issues/detail?id=1091612
lands.
* fix test
In an earlier PR, we decided that we only want to block a domain if
the blocked domain in the SiteSetting is the final destination (/t/59305). That
PR used `FinalDestination#get`. `resolve`, however, is used in several places
but blocks domains along the redirect chain when certain options are provided.
This commit changes the default options for `resolve` to not do that. Existing
users of `FinalDestination#resolve` are
- `Oneboxer#external_onebox`
- our onebox helper `fetch_html_doc`, which is used in amazon, standard embed
and youtube
- these folks already go through `Oneboxer#external_onebox` which already
blocks correctly
Sometimes plugins need to have additional data or options available
when rendering custom markdown features/rules that are not available
on the default opts.discourse object. These additional options should
be namespaced to the plugin adding them.
```ruby
Site.markdown_additional_options["chat"] = { limited_pretty_text_markdown_rules: [] }
```
These are passed down to markdown rules on opts.discourse.additionalOptions.
The main motivation for adding this is the chat plugin, which currently stores
chat_pretty_text_features and chat_pretty_text_markdown_rules on
the Site object via additions to the serializer, and the Site object is
not accessible to import via markdown rules (either through
Site.current() or through container.lookup). So, to have this working
for both front + backend code, we need to attach these additional options
from the Site object onto the markdown options object.
When staff visit the user profile of another user, the `email` field
in the model is empty. In this case, staff cannot send the password
reset email because nothing is passed in the `login` field.
This commit changes the behavior for staff users to allow resetting the
password by username instead.
This commit fixes a bug where our `HTMLScrubber` was only searching
for emoji img tags which contain only the "emoji" class. However, our emoji image tags
may contain more than just the "emoji" class, like "only-emoji" when an
emoji exists by itself on a single line.
If a model class calls preload_custom_fields twice, then
we have to clear this; otherwise the fields are cached inside the
already existing proxy and no new ones are added, so when we check
for custom_fields[KEY] an error is likely to occur.
In the unlikely, but possible, scenario where a user has no email_tokens, and has an invite record for their email address, login would fail. This commit fixes the `Invite` `user_doesnt_already_exist` validation so that it only applies to new invites, or when changing the email address.
This regressed in d8fe0f4199 (based on `git bisect`)
The UI used to request a password reset by username when the user was
logged in. This did not work when the hide_email_already_taken site setting
was enabled, which disables the lookup-by-username functionality.
This commit also introduces a check to ensure that the parameter is an
email when hide_email_already_taken is enabled as the single allowed
type is email (no usernames are allowed).
Our previous implementation used a simple `blocked_domain_array.include?(hostname)`
so some values were not matching. Additionally, in some configurations like ours, we'd used
"cat.*.dog.com" with the assumption we'd support globbing.
This change implicitly allows globbing by blocking "http://a.b.com" if "b.com" is a blocked
domain but does not actively do anything for "*".
An upcoming change might include frontend validation for values that can be inserted.
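A minimal sketch of the implicit globbing described above (helper name hypothetical):
```ruby
def domain_blocked?(hostname, blocked_domains)
  blocked_domains.any? { |d| hostname == d || hostname.end_with?(".#{d}") }
end

domain_blocked?("a.b.com", ["b.com"])  # => true
domain_blocked?("notb.com", ["b.com"]) # => false
```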
* FIX: Tag watching for everyone tag groups
Tags in tag groups that have permissions set to everyone were not able
to be saved correctly. A user on their preferences page would mark the
tags that they wanted to save, but the watched_tags in the response
would be empty. This did not apply to admins, just regular users. Even
though the watched tags were being saved in the db, the user serializer
response was filtering them out. When a user refreshed their preferences
pages it would show zero watched tags.
This appears to be a regression introduced by:
0f598ca51e
The issue that needed to be fixed is that we don't track the "everyone"
group (which has an id of 0) in the group_users table. This is because
everyone has access to it, so why fill a row for every single user, that
would be a lot. The fix was to update the query to include tag groups
that had permissions set to the "everyone" group (group_id 0).
I also added another check to the existing spec for updating
watched tags for tags that aren't in a tag group so that it checks the
response body. I then added a new spec which updates watched tags for
tags in a tag group which has permissions set to everyone.
* Resolve failing tests
Improve SQL query syntax for including the "everyone" group with the id
of 0.
This commit also fixes a few failing tests that were introduced. It
turns out that the Fabrication of the Tag Group Permissions was faulty.
What happens when creating the tag groups without any permissions is
that it sets the permission to "everyone". If we then follow up with
fabricating a tag group permission on the tag group instead of having a
single permission it will have 2 (everyone + the group specified)! We
don't want this. To fix it I removed the fabrication of tag group
permissions and just set the permissions directly when creating the tag
group.
* Use response.parsed_body instead of JSON.parse
* FIX: Mark invites flash messages as HTML safe.
This change should be safe as all user inputs included in the errors are sanitized before sending it back to the client.
Context: https://meta.discourse.org/t/html-tags-are-explicit-after-latest-update/214220
* If somebody adds a new error message that includes user input and doesn't sanitize it, using html-safe suddenly becomes unsafe again. As an extra layer of protection, we make the client sanitize the error message received from the backend.
* Escape user input instead of sanitizing
* FEATURE: Export topics to markdown
The route `/raw/TOPIC_ID` will now export whole topics (paginated to 100
posts) in a markdown format.
See https://meta.discourse.org/t/-/152185/12
If the SiteSetting `allowed_onebox_iframes` contains a value of `*`, it will use the values of `all_iframe_origins` during the Oneboxing process. If `all_iframe_origins` itself contains a value of `*`, `origins_to_regexes` will try to return a "catch-all" regex.
Other code assumes `origins_to_regexes` will return an array, so this change ensures the `*` case will return an array containing only the catch-all regex.
1. `html_doc.css('.Box.md')` always returns a truthy value (e.g. `[]`) so the second branch of the if-elsif never ran
2. `node&.css('text()')` was invalid code that would raise an error
3. Matching on h3 elements is no longer correct with the current html structure returned by GitHub
* FIX: Pass category and tag IDs to the emit webhook event job.
Like webhooks won't fire when they're scoped to specific categories or tags because we're not passing the data to the job that emits it.
* Update config/initializers/012-web_hook_events.rb
Co-authored-by: Dan Ungureanu <dan@ungureanu.me>
Co-authored-by: Dan Ungureanu <dan@ungureanu.me>
This reverts commit 2c7906999a.
The changes break some things in local development (putting JS files
into minified files, not allowing debugger, and others)
This reverts commit ea84a82f77.
This is causing problems with `/theme-qunit` on legacy, non-ember-cli production sites. Reverting while we work on a fix
This is quite complex as it means that in production we have to build
Ember CLI test files and allow them to be used by our Rails application.
There is a fair bit of glue we can remove in the future once we move to
Ember CLI completely.
An admin could search for all screened ip addresses in a block by
using wildcards. 192.168.* returned all IPs in range 192.168.0.0/16.
This feature allows admins to search for a single IP address in all
screened IP blocks. 192.168.0.1 returns all IP blocks that match it,
for example 192.168.0.0/16.
* FEATURE: Remove roll up button for screened IPs
* FIX: Match more specific screened IP address first
The new warnings cover more cases and are more accurate. Most of the
warnings will be visible only to staff members because otherwise they
would leak information about users' preferences.
This commit extends the options which can be passed to
`PrettyText.markdown` so that the Markdown-it rules and Discourse
Markdown plugins used when rendering a text can be customized.
Currently, this extension is mainly used by plugins.
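An illustrative call; the option names here are assumptions based on the description, not confirmed API:
```ruby
PrettyText.markdown(
  raw,
  features_override: %w[emoji],        # hypothetical: limit Discourse markdown features
  markdown_it_rules: %w[emphasis link] # hypothetical: limit markdown-it rules
)
```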
The warnings on git 2.28+ are:
```
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
```
Also:
* Remove an unused method (#fill_email)
* Replace a method that was used just once (#generate_username) with `SecureRandom.alphanumeric`
* Remove an obsolete dev puma `tmp/restart` file logic
Adding a spec documenting the delete post API endpoint for our api
docs. As part of this, added detailed info for the `force_destroy`
parameter for permanently deleting a post.