Commit fcc2e7ebbf, which promoted polymorphic bookmarks, did not
correctly set the username for the quick access bookmark menu
based on the new serializer values, so the username was not shown
in that menu. This commit fixes it, adds tests for the menu, and
updates the user fixtures to reflect the current state of the
bookmarks endpoint.
Given this html:
```
<aside class="quote no-group">
<blockquote>
<aside class="quote no-group">
<blockquote>
<p dir="ltr">test</p>
</blockquote>
</aside>
<p dir="ltr">test2</p>
</blockquote>
</aside>
```
The result was invalid markdown:
```
[quote]
[quote]
> test
> [/quote]
>
>
>
> test2
[/quote]
```
Now the result is:
```
[quote]
[quote]
test
[/quote]
test2
[/quote]
```
The bookmarkable_type was being used instead of the
bookmarkable_url for the bookmark link in the quick access menu,
leading to links like /ChatMessage. This fixes the issue; a
follow-up PR with tests for the quick access menu will come later.
In f00e282067 we added a DELETE query to remove duplicate
for_topic bookmarks; we just need this further refinement to the
WHERE clause to avoid deleting post bookmarks.
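For illustration, the refinement amounts to constraining the duplicate cleanup to topic-level bookmarks. The sketch below is hypothetical (the real migration's table aliases and duplicate-detection columns may differ); the important part is the extra for_topic conditions in the WHERE clause:
```
# Hypothetical sketch only, not the actual migration SQL.
DB.exec(<<~SQL)
  DELETE FROM bookmarks b1
  USING bookmarks b2
  WHERE b1.user_id = b2.user_id
    AND b1.post_id = b2.post_id
    AND b1.id < b2.id
    AND b1.for_topic -- only clean up duplicate topic-level bookmarks...
    AND b2.for_topic -- ...so ordinary post bookmarks are never deleted
SQL
```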
These incorrect paths were causing the regular jobs to be loaded into a `Jobs::Jobs` module in development mode, which led to various weird issues.
https://meta.discourse.org/t/228155
Looking up values from the `emojiStore` calls out to the browser's localStorage API and then decodes a JSON blob. This makes it relatively slow.
Previously we were doing this lookup in the emoji-picker's `init()` function, even if `isActive` was false. If many inactive emoji pickers are rendered simultaneously (e.g. for discourse-chat reactions), this performance hit quickly adds up.
This commit updates the service to notify about changes, and uses a computed property to provide a cached value in the emoji-picker.
Describes the behaviour and configuration of the cache_critical_dns
script, mainly cribbed from commit messages. Tries to make this program
a bit less of an enigma.
In 7328a2bfb0 we changed the
InlineOneboxer#onebox_for method to run the title of the
onebox through WatchedWord#censor_text. However, it is
allowable for the title to be nil, which was causing this
error in production:
> NoMethodError: undefined method `gsub' for nil:NilClass
We just need to check whether the title is nil before trying
to censor it.
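A minimal sketch of the guard, using the censoring call as it is named in this commit message (the real call site in InlineOneboxer is simplified):
```
# Sketch only: skip censoring entirely when the onebox title is nil.
def censored_title(title)
  return nil if title.nil? # previously nil reached censor_text and blew up on gsub
  WatchedWord.censor_text(title)
end
```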
Previously we hardcoded the DOWNLOAD_URL_EXPIRES_AFTER_SECONDS const
inside S3Helper to be 5 minutes (300 seconds). For various reasons,
some hosted sites may need this to be longer for other integrations.
The maximum expiry time for presigned URLs is 1 week (which is
604800 seconds), so that has been added as a validation on the
setting as well. The setting is hidden because 99% of the time
it should not be changed.
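As a rough sketch of the shape of the change (the setting name below is an assumption, not necessarily the real one), the hardcoded constant gives way to a site setting whose value is capped at one week by validation:
```
# Sketch only: expiry now comes from a hidden, validated site setting
# instead of the old DOWNLOAD_URL_EXPIRES_AFTER_SECONDS = 300 constant.
# The 1 week ceiling (604800 seconds) is enforced by a max validation.
def presigned_download_url(s3_object)
  s3_object.presigned_url(
    :get,
    expires_in: SiteSetting.s3_presigned_get_url_expires_after_seconds,
  )
end
```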
Censored watched words were not being censored inside the titles
of inline oneboxes. Malicious users could exploit this behaviour to
insert bad words. The same issue was fixed for regular Oneboxes in
commit d184fe59ca.
`run-qunit.js` does not expect QUnit tests to start automatically but
our wizard QUnit setup did not respect the `qunit_disable_auto_start`
URL param. Hence, tests would start running automatically, and when a
subsequent `QUnit.start()` call was made we ended up with a
`QUnit.start cannot be called inside a test context.` error.
This error can be consistently reproduced in the `discourse:discourse_test` container but not in
the local development environment. I do not know why, and did not feel
it was important to find out at this point.
There is no need for the extra protection on the client side if there is
a bug on the server side. In fact, we want the bug to be surfaced so
that it can be fixed on the server side.
Sometimes we need to render the icon as a call to action to create
a bookmark, at which point the bookmark does not yet exist, so we
just show the normal bookmark icon and a "create" title.
Also adds CSS classes for the existing and not-yet-existing bookmark
states, for styling.
This improves the bookmark-icon title to be more like the titles of
the post bookmark icons, including the specially formatted date as
well as the name of the bookmark.
When searching for PMs or PMs in a group inbox, results in the header search were not being limited to 5 with a "More" link to the full page search. This PR fixes that.
It also simplifies the logic and updates the search API docs to include recently added `in:messages` and `group_messages:groupname` options.
When saving / creating bookmarks, we have code to save
the user's preference of bookmark_auto_delete_preference
to their user_options.
Unfortunately this can cause weirdness when plugins
have code using BookmarkManager to set the auto delete preference for
only a specific bookmark.
This commit introduces a save_user_preferences option (false by
default) so that this user preference is only saved when the
consumer of BookmarkManager explicitly asks for it; plugins will
not have to worry about it.
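Roughly, the new option looks like the sketch below (the method signature and internals are simplified from the real manager):
```
# Simplified sketch of BookmarkManager with the new opt-in flag.
def create_for(bookmarkable:, name: nil, options: {}, save_user_preferences: false)
  bookmark = Bookmark.create!(
    user: @user,
    bookmarkable: bookmarkable,
    name: name,
    auto_delete_preference: options[:auto_delete_preference],
  )

  if save_user_preferences && options[:auto_delete_preference]
    # only persisted to user_options when the caller explicitly opts in
    @user.user_option.update!(
      bookmark_auto_delete_preference: options[:auto_delete_preference],
    )
  end

  bookmark
end
```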
The `PG::Connection#ping` method is only reliable for checking if the
given host is accepting connections, and not if the authentication
details are valid.
This extends the healthcheck to confirm that the auth details are
able to both create a connection and execute queries against the
database.
We expect the empty query to return an empty result set, so we can
assert on that. If a failure occurs for any reason, the healthcheck will
return false.
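A minimal sketch of that check (connection parameters assumed):
```
require "pg"

# Sketch: authenticate and run a query instead of trusting PG::Connection#ping.
def postgres_healthy?(host:, user:, password:, dbname:)
  conn = PG::Connection.new(
    host: host, user: user, password: password, dbname: dbname, connect_timeout: 2,
  )
  conn.exec("").none? # an empty query should yield an empty result set
rescue PG::Error
  false # bad auth, refused connection, or query failure all count as unhealthy
ensure
  conn&.close
end
```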
Previously, with the default `editing_grace_period`, hotlinked images were pulled 5 minutes after a post is created. This delay was added to reduce the chance of automated edits clashing with user edits.
This commit refactors things so that we can pull hotlinked images immediately. URLs are immediately updated in the post's `cooked` HTML. The post's raw markdown is updated later, after the `editing_grace_period`.
This involves a number of behind-the-scenes changes including:
- Schedule Jobs::PullHotlinkedImages immediately after Jobs::ProcessPost. Move scheduling to after the `update_column` call to avoid race conditions
- Move raw changes into a separate job, which is delayed until after the ninja-edit window
- Move disable_if_low_on_disk_space logic into the `pull_hotlinked_images` job
- Move raw-parsing/replacing logic into `InlineUpload` so it can be easily be shared between `UpdateHotlinkedRaw` and `PullUserProfileHotlinkedImages`
Previously this mapping of **cooked** images was only being run for oneboxes. Now it runs for all images, so we can transform hotlinked images without needing to immediately update `raw`
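A rough sketch of the two-phase scheduling described above (job names follow this commit; the enqueue calls are simplified):
```
# Sketch only: update cooked right away, defer the raw rewrite until the
# editing grace period ("ninja edit" window) has passed.
Jobs.enqueue(:pull_hotlinked_images, post_id: post.id)
Jobs.enqueue_in(
  SiteSetting.editing_grace_period,
  :update_hotlinked_raw,
  post_id: post.id,
)
```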
This feature was only demuxing stdout, not stderr. That means stdout and stderr output appears out-of-order, which makes debugging migrations very confusing.
In future we may want to add stderr support to the demuxing. But right now, the concurrency variable is hard-coded to 1. Therefore the easiest fix is to bypass the demuxing.
Meta topic: https://meta.discourse.org/t/prevent-to-linkify-when-there-is-a-redirect/226964/2?u=osama.
This commit adds a new site setting `block_onebox_on_redirect` (default off) for blocking oneboxes (full and inline) of URLs that redirect. Note that an initial http → https redirect is still allowed if the redirect location is identical to the source (minus the scheme, of course). For example, if a user includes a link to `http://example.com/page` and the link resolves to `https://example.com/page`, then the link will onebox (assuming it can be oneboxed) even if the setting is enabled. The reason for this is that a user may type out a URL with http (because it is short and memorable), and since a lot of sites support TLS with http traffic automatically redirected to https, we should still allow the URL to onebox.
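The allowed exception can be pictured as a check like the one below (a sketch, not the actual implementation in the onebox/redirect-following code):
```
require "uri"

# Sketch only: allow a redirect when it merely upgrades http to https and
# leaves everything else about the URL unchanged.
def scheme_upgrade_only?(source, destination)
  src = URI.parse(source)
  dst = URI.parse(destination)
  src.scheme == "http" && dst.scheme == "https" &&
    src.host == dst.host && src.path == dst.path && src.query == dst.query
end

scheme_upgrade_only?("http://example.com/page", "https://example.com/page") # => true
```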
This component will be useful for chat. This commit also moves the
definitions of the icons used with and without reminders to the
bookmark model as constants, so they can easily be referenced in
other places.