Now that we've switched to Ember CLI, these things are no longer used.
- These sprockets manifests are superseded by the assets generated by Ember CLI
- These vendored scripts are now fetched by ember-auto-import at compile time
At some point in the past we decided to rename the 'regular' notification state of topics/categories to 'normal'. However, some UI copy was missed in the initial renaming, so this commit updates the spots that were missed to use the new name.
Anyone still using `EMBER_CLI_PROD_ASSETS=0` in development or production will be gracefully switched to Ember CLI. In development, a repeated message will be logged to STDERR.
Similarly, passing `QUNIT_EMBER_CLI=0` to the qunit rake task will now do nothing. A warning will be printed, and ember-cli mode will be used. Note that we've chosen not to fail the task, so that existing plugin/theme CI jobs don't immediately start failing. We may switch to a hard fail in the coming days/weeks.
This doesn't cope well with gzipped, binary, or large responses. Ideally we would teach FinalDestination to safely retrieve and decode some of the response body. But for now, let's remove the broken implementation.
* The `javascript:update` rake task failed because recent versions of chart.js use a lowercase filename (`chart.min.js` instead of `Chart.min.js`)
* Changed `loadScript()` to use lowercase keys to lookup scripts
* `svg-arrow.css` seems to have changed slightly (linebreak at the end of file)
When sending emails with delivery_method_options -> return_response
set to true, the SMTP sending code inside the Mail gem returns the SMTP
response when calling deliver! for mail within the app. This commit
ensures that Email::Sender captures this response if it is returned
and stores it against the EmailLog created for the sent email.
A follow up PR will make this visible within the admin email UI.
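A rough sketch of the capture (illustrative method and column names, not the exact Discourse code):
```ruby
require "net/smtp"

# Illustrative sketch only: capture the SMTP response returned by Mail when
# delivery_method_options includes return_response: true, and store it on the
# EmailLog row. Method and column names here are assumptions.
def deliver_and_record(message, email_log)
  response = message.deliver! # a Net::SMTP::Response when return_response is true

  if response.is_a?(Net::SMTP::Response)
    # e.g. "250 2.0.0 OK 1652345678 abcdef.123 - gsmtp"
    email_log.update!(smtp_transaction_response: response.string.strip)
  end

  response
end
```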
* FIX: Fix a bug where hash values were accessed with the wrong key type, and add tests
I decided to write tests in order to be confident in the refactor that comes in the next commit.
In the process I discovered a potential bug. The `title_attr` key was accessed as a string,
but all the keys are actually symbols, so the condition never evaluated to true.
```ruby
irb(main):025:0> d = {key: 'value'}
=> {:key=>"value"}
irb(main):026:0> d['key']
=> nil
irb(main):027:0> d[:key]
=> "value"
```
* DEV: Extract methods for readability
I will be adding a new method, following the conventions in place, for adding a new normalizer. That would make the `raw` block even harder to read, so I am extracting self-contained private methods beforehand.
* FEATURE: Parse JSON-LD and introduce Movie object
JSON-LD data maps very easily to Ruby objects because it contains types. Mapping these types to Ruby objects also makes all the parsed data explicit and easy to extend.
JSON-LD has many more standardized item types; the full list is here: https://schema.org/docs/full.html
However, in order to decrease the scope, I only adapted the Movie type.
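For illustration, mapping a Movie JSON-LD payload to a plain Ruby object could look like this (a sketch; class and attribute names are assumptions):
```ruby
require "json"

# Illustrative sketch: map a JSON-LD payload with "@type": "Movie" to an
# explicit Ruby object. Class and attribute names here are assumptions.
class Movie
  attr_reader :name, :image, :description, :rating

  def initialize(json)
    @name = json["name"]
    @image = json["image"]
    @description = json["description"]
    @rating = json.dig("aggregateRating", "ratingValue")
  end
end

raw = <<~JSON
  { "@type": "Movie", "name": "Arrival", "aggregateRating": { "ratingValue": "8.0" } }
JSON

parsed = JSON.parse(raw)
movie = Movie.new(parsed) if parsed["@type"] == "Movie"
```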
* DEV: Change inheritance between normalizers
Normalizers are not supposed to have inheritance relationships amongst each other. They are all normalizers, but they each normalize a separate protocol. This is why I chose to extract a parent class and relieve Open Graph of that responsibility. Removing the parent class altogether could also be a possibility, but I am keeping the scope limited to having a more accurate representation of the normalizers while making it easier to add a new one.
* Lint changes
* Bring back the Oembed OpenGraph inheritance
There is one test that caught that this inheritance was necessary. I still think that, modelling-wise, this inheritance shouldn't exist, but this can be tackled separately.
* Return empty hash if the json received is invalid
Before this change, if there was a JSON parsing error, it would raise an exception. The goal of this commit is to rescue that exception and log a warning. I chose to use Discourse's logger wrapper `warn_exception` to get the backtrace, rather than just using the Rails logger. I considered raising an `InvalidParameters` error, however if the JSON here is invalid it should not block showing the Onebox, so logging is enough.
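Roughly along these lines (a sketch, not the exact code):
```ruby
# Sketch of the approach: rescue invalid JSON, log with a backtrace via
# Discourse's warn_exception wrapper, and fall back to an empty hash so the
# onebox can still render.
def parsed_json_ld(raw)
  JSON.parse(raw)
rescue JSON::ParserError => e
  Discourse.warn_exception(e, message: "Invalid JSON-LD in onebox payload")
  {}
end
```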
* Prep to support more JSONLD schema types with case
* Extract mustache template object created from JSONLD
This PR changes the rescue block to rescue only Net::TimeoutError exceptions and removes the log line, to avoid cluttering the logs with errors that are ignored. Other errors can bubble up because they're errors we probably want to know about.
This table holds associations between uploads and other models. It can be used to prevent removing uploads that are still in use (see the sketch after the commit list below).
* DEV: Create upload_references
* DEV: Use UploadReference instead of PostUpload
* DEV: Use UploadReference for SiteSetting
* DEV: Use UploadReference for Badge
* DEV: Use UploadReference for Category
* DEV: Use UploadReference for CustomEmoji
* DEV: Use UploadReference for Group
* DEV: Use UploadReference for ThemeField
* DEV: Use UploadReference for ThemeSetting
* DEV: Use UploadReference for User
* DEV: Use UploadReference for UserAvatar
* DEV: Use UploadReference for UserExport
* DEV: Use UploadReference for UserProfile
* DEV: Add method to extract uploads from raw text
* DEV: Use UploadReference for Draft
* DEV: Use UploadReference for ReviewableQueuedPost
* DEV: Use UploadReference for UserProfile's bio_raw
* DEV: Do not copy user uploads to upload references
* DEV: Copy post uploads again after deploy
* DEV: Use created_at and updated_at from uploads table
* FIX: Check if upload site setting is empty
* DEV: Copy user uploads to upload references
* DEV: Make upload extraction less strict
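As mentioned above the list, the table can be used to guard against deleting uploads that are still referenced; a minimal sketch of such a check (illustrative only):
```ruby
# Illustrative sketch: skip removal of any upload that still has a reference
# recorded in upload_references.
Upload.where("created_at < ?", 30.days.ago).find_each do |upload|
  next if UploadReference.exists?(upload_id: upload.id)
  upload.destroy!
end
```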
Following the Rails 7 upgrade, the `DISCOURSE_SMTP_ENABLE_START_TLS`
setting doesn’t work anymore. This is because Rails upgraded the
`net-smtp` gem to the 0.3.1 version which enables `starttls` by default.
The `mail` gem doesn’t support this new behavior yet and doesn’t know
how to disable TLS. This should be fixed in an upcoming release.
Meanwhile applying this patch allows us to get back the previous
behavior which is expected by many.
This commit introduces a new site setting: `block_hotlinked_media`. When enabled, all attempts to hotlink media (images, videos, and audio) will fail, and be replaced with a linked placeholder. Exceptions to the rule can be added via `block_hotlinked_media_exceptions`.
`download_remote_image_to_local` can be used alongside this feature. In that case, hotlinked images will be blocked immediately when the post is created, but will then be replaced with the downloaded version a few seconds later.
This implementation is purely server-side, and does not impact the composer preview.
Technically, there are two stages to this feature:
1. `PrettyText.sanitize_hotlinked_media` is called during `PrettyText.cook`, and whenever new images are introduced by Onebox. It will iterate over all src/srcset attributes in the post HTML and check if they're allowed. If not, the attributes will be removed and replaced with a `data-blocked-hotlinked-src(set)` attribute
2. In the `CookedPostProcessor`, we iterate over all `data-blocked-hotlinked-src(set)` attributes and check whether we have a downloaded version of the media. If yes, we update the src to use the downloaded version. If not, the entire media element is replaced with a placeholder. The placeholder is labelled 'external media', and is a link to the offsite media.
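A simplified sketch of stage 1, iterating over the post HTML (the real code lives in `PrettyText.sanitize_hotlinked_media`; the `allowed_src` check below is a stand-in for the allowlist driven by `block_hotlinked_media_exceptions`):
```ruby
require "nokogiri"

# Sketch of stage 1: move disallowed src/srcset attributes onto
# data-blocked-hotlinked-* attributes. `allowed_src` is a stand-in for the
# real allowlist check.
def sanitize_hotlinked_media(html, &allowed_src)
  doc = Nokogiri::HTML5.fragment(html)

  doc.css("img, source, video, audio").each do |node|
    %w[src srcset].each do |attr|
      value = node[attr]
      next if value.nil? || allowed_src.call(value)

      node.remove_attribute(attr)
      node["data-blocked-hotlinked-#{attr}"] = value
    end
  end

  doc.to_html
end
```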
Twitter does not allow SVGs to be used for twitter:image
metadata (see https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/markup),
so we fall back to the site logo if the image option
provided to `crawlable_meta_data` or SiteSetting.site_twitter_summary_large_image_url
is an SVG, and omit the twitter:image meta tag entirely
if the site logo is also an SVG.
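The gist of the fallback logic (a sketch; `SiteSetting.site_logo_url` is an assumed helper name for the site logo):
```ruby
# Sketch of the twitter:image fallback: never emit an SVG, prefer the site
# logo, and omit the tag entirely when even the logo is an SVG.
def twitter_image_for(image_url)
  candidate = image_url.presence || SiteSetting.site_twitter_summary_large_image_url
  candidate = SiteSetting.site_logo_url if candidate.to_s.end_with?(".svg")
  candidate unless candidate.to_s.end_with?(".svg")
end
```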
This commit seeks to only handle the `f=tracked` and `filter=tracked`
query params for a topic list. There are other "hidden" filters for a
topic list which can be activated by passing the right query param to
the request. However, they are hidden because there is no way to
activate those filters via the UI. We are handling the `f=tracked`
filter because we will soon be adding a link that allows a user to
quickly view their tracked topics.
Previously we limited the Discourse Connect provider to 1 secret per domain.
This made it pretty awkward to cycle secrets in environments where config
takes time to propagate.
This change allows the same domain to have multiple secrets.
It also fixes the internal implementation of DiscourseConnectProvider, which was
not thread-safe as it leaned on class variables to ferry data around.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Co-authored-by: David Taylor <david@taylorhq.com>
In 7328a2bfb0 we changed the
InlineOneboxer#onebox_for method to run the title of the
onebox through WatchedWord#censor_text. However, it is
allowable for the title to be nil, which was causing this
error in production:
> NoMethodError : undefined method gsub for nil:NilClass
We just need to check whether the title is nil before trying
to censor it.
Previously we hardcoded the DOWNLOAD_URL_EXPIRES_AFTER_SECONDS const
inside S3Helper to be 5 minutes (300 seconds). For various reasons,
some hosted sites may need this to be longer for other integrations.
The maximum expiry time for presigned URLs is 1 week (which is
604800 seconds), so that has been added as a validation on the
setting as well. The setting is hidden because 99% of the time
it should not be changed.
Censored watched words were not censored inside the titles of inline
oneboxes. Malicious users could exploit this behaviour to insert bad
words. The same issue has been fixed for regular Oneboxes in commit
d184fe59ca.
When searching for PMs or PMs in a group inbox, results in the header search were not being limited to 5 with a "More" link to the full page search. This PR fixes that.
It also simplifies the logic and updates the search API docs to include recently added `in:messages` and `group_messages:groupname` options.
When saving / creating bookmarks, we have code to save
the user's preference of bookmark_auto_delete_preference
to their user_options.
Unfortunately this can cause weirdness when plugins
have code using BookmarkManager to set the auto delete preference for
only a specific bookmark.
This commit introduces a save_user_preferences option (false
by default) so that this user preference is not saved unless
specified by the consumer of BookmarkManager, so plugins will
not have to worry about it.
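For plugins, usage would look roughly like this (a sketch; the exact BookmarkManager signature may differ):
```ruby
# Sketch: a plugin setting a per-bookmark auto-delete preference without
# overwriting the user's saved preference. Argument names are illustrative.
BookmarkManager.new(user).create_for(
  bookmarkable_id: post.id,
  bookmarkable_type: "Post",
  options: {
    auto_delete_preference: Bookmark.auto_delete_preferences[:when_reminder_sent],
    save_user_preferences: false, # the new default; shown here for clarity
  },
)
```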
Previously, with the default `editing_grace_period`, hotlinked images were pulled 5 minutes after a post is created. This delay was added to reduce the chance of automated edits clashing with user edits.
This commit refactors things so that we can pull hotlinked images immediately. URLs are immediately updated in the post's `cooked` HTML. The post's raw markdown is updated later, after the `editing_grace_period`.
This involves a number of behind-the-scenes changes including:
- Schedule Jobs::PullHotlinkedImages immediately after Jobs::ProcessPost. Move scheduling to after the `update_column` call to avoid race conditions
- Move raw changes into a separate job, which is delayed until after the ninja-edit window
- Move disable_if_low_on_disk_space logic into the `pull_hotlinked_images` job
- Move raw-parsing/replacing logic into `InlineUpload` so it can be easily be shared between `UpdateHotlinkedRaw` and `PullUserProfileHotlinkedImages`
Previously this mapping of **cooked** images was only being run for oneboxes. Now it runs for all images, so we can transform hotlinked images without needing to immediately update `raw`
This feature was only demuxing stdout, not stderr. That meant stdout and stderr output appeared out of order, which makes debugging migrations very confusing.
In future we may want to add stderr support to the demuxing. But right now, the concurrency variable is hard-coded to 1. Therefore the easiest fix is to bypass the demuxing.
Meta topic: https://meta.discourse.org/t/prevent-to-linkify-when-there-is-a-redirect/226964/2?u=osama.
This commit adds a new site setting `block_onebox_on_redirect` (default off) for blocking oneboxes (full and inline) of URLs that redirect. Note that an initial http → https redirect is still allowed if the redirect location is identical to the source (minus the scheme, of course). For example, if a user includes a link to `http://example.com/page` and the link resolves to `https://example.com/page`, then the link will onebox (assuming it can be oneboxed) even if the setting is enabled. The reason for this is that a user may type out a URL (i.e. the URL is short and memorizable) with http, and since a lot of sites support TLS with http traffic automatically redirected to https, we should still allow the URL to onebox.
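The http → https exception boils down to a comparison along these lines (sketch):
```ruby
require "uri"

# Sketch: a redirect counts as "identical minus the scheme" when the rest of
# the URL matches and the only change is http -> https.
def scheme_only_upgrade?(source, destination)
  src, dest = URI.parse(source), URI.parse(destination)

  src.scheme == "http" && dest.scheme == "https" &&
    [src.host, src.path, src.query] == [dest.host, dest.path, dest.query]
end

scheme_only_upgrade?("http://example.com/page", "https://example.com/page") # => true
```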
Incorporates learnings from /t/64227:
* Changes the code to set access control posts in the rake
task to be an efficient UPDATE SQL query (see the sketch after this list).
The original version was timing out with 312017 post uploads;
the new query took ~3s to run.
* Changes the code to mark uploads as secure/not secure in
the rake task to be an efficient UPDATE SQL query rather than
using UploadSecurity. This took a very long time previously,
and now takes only a few seconds.
* Spread out ACL syncing for uploads into jobs with batches of
100 uploads at a time, so they can be parallelized instead
of having to wait ~1.25 seconds for each ACL to be changed
in S3 serially.
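The access-control update boils down to a single SQL statement; roughly (a sketch, column names approximate):
```ruby
# Sketch: set the access control post for uploads in one SQL statement instead
# of iterating over posts/uploads in Ruby.
DB.exec(<<~SQL)
  UPDATE uploads
  SET access_control_post_id = post_uploads.post_id
  FROM post_uploads
  WHERE uploads.id = post_uploads.upload_id
    AND uploads.access_control_post_id IS NULL
SQL
```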
One issue that still remains is post rebaking. Doing this serially
is painfully slow. We have a way to do this in sidekiq via PeriodicalUpdates
but this is limited by max_old_rebakes_per_15_minutes. It would
be better to fan this rebaking out into jobs like we did for the
ACL sync, but that should be done in another PR.
This commit migrates all bookmarks to be polymorphic (using the
bookmarkable_id and bookmarkable_type columns). It also deletes
all the old code guarded behind the use_polymorphic_bookmarks setting
and changes that setting to true for all sites and by default, for
the sake of plugins.
No data is deleted in the migrations; the old post_id and for_topic
columns for bookmarks will be dropped later on.
22a7905f restructured how we load Ember CLI assets in production. Unfortunately, it also broke sourcemaps for those assets. This commit fixes that regression via a couple of changes:
- It adds the necessary `.map` paths to `config.assets.precompile`
- It swaps Sprockets' default `SourcemappingUrlProcessor` with an extended version which maintains relative URLs of maps
Enabled extensions are not saved in backups and thus not created when
restoring a newer backup (that has a new extension) to an old site
(that does not have the migration).
When we build and send emails using MessageBuilder and Email::Sender
we add custom headers defined in SiteSetting.email_custom_headers.
However this was causing errors in cases where the custom headers
defined a header that we already specify in outbound emails (e.g.
the Precedence: list header for topic/post emails).
This commit makes it so we always use the header value defined in Discourse
core if there is a duplicate, discarding the custom header value
from the site setting.
cf. https://meta.discourse.org/t/email-notifications-fail-if-duplicate-headers-exist/222960/14
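In essence, the merge now prefers header values already set by core; a sketch (assuming a `custom_headers` hash parsed from the site setting):
```ruby
# Sketch: only apply a custom header from email_custom_headers when the
# outbound message doesn't already define it.
custom_headers.each do |name, value|
  message.header[name] = value if message.header[name].nil?
end
```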
These validate/after_create/after_destroy methods were added
back in b8828d4a2d before
the RegisteredBookmarkable API and pattern was nailed down.
This commit updates BookmarkManager to call out to the
relevant bookmarkable for these and bookmark_metadata for
consistency.
Currently the only way to allow tagging on PMs is to use the `allow_staff_to_tag_pms` site setting. We are removing that site setting and replacing it with `pm_tags_allowed_for_groups`, which will allow for non-staff tagging. Permissions will be group-based instead of requiring the user to be staff.
If the existing value of `allow_staff_to_tag_pms` is `true` then we include the `staff` groups as a default for `pm_tags_allowed_for_groups`.
Having this as a const was causing issues on some sites, because the request rate really is
heavily dependent on upload speed. We request 5-10 URLs at a time with this endpoint; for
a 1.5GB upload with 5MB parts this could mean 60 requests to the server to get all
the part URLs. If the user's upload speed is super fast they may request all 60
batches in a minute; if it is slow they may request 5 batches in a minute.
The other external upload endpoints are not hit as often, so they can stay as constant
values for now. This commit also increases the default to 20 requests/minute.
We have not used anything related to bookmarks for PostAction
or UserAction records since 2020; bookmarks are their own thing
now. Deleting all this is just cleaning up old cruft.
The latest redis gem introduces a block form of multi / pipelined; this was incorrectly
passed through and not namespaced.
The fix also updates logster; we had held off on upgrading it due to missing functions.
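For reference, the block form looks like this; after the fix, keys used inside the block are namespaced like everything else (a sketch):
```ruby
# Sketch: the block form of pipelined/multi introduced by newer redis-rb.
# Keys used inside the block must still receive the site namespace.
Discourse.redis.pipelined do |pipeline|
  pipeline.set("recently_used_key", "value")
  pipeline.expire("recently_used_key", 60)
end
```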
This fixes a corner case of the perf optimization in d4e35f5.
When you have the same post showing in multiple tabs/devices and like
said post in one place, we updated the like count but didn't flip the
`acted` bool on the front-end. This caused a small visual desync.
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
A bit of a mixed bag, this addresses several edge areas of bookmarks and makes them compatible with polymorphic bookmarks (hidden behind the `use_polymorphic_bookmarks` site setting). The main ones are:
* ExportUserArchive compatibility
* SyncTopicUserBookmarked job compatibility
* Sending different notifications for the bookmark reminders based on the bookmarkable type
* Import scripts compatibility
* BookmarkReminderNotificationHandler compatibility
This PR also refactors the `register_bookmarkable` API so it accepts a class descended from a `BaseBookmarkable` class instead. This was done because we kept having to add more and more lambdas/properties inline and it was very messy, so a factory pattern is cleaner. The classes can be tested independently as well.
Some later PRs will address some other areas like the discourse narrative bot, advanced search, reports, and the .ics endpoint for bookmarks.
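Under the refactored API, a plugin supplies a class rather than a bag of lambdas; roughly (a sketch, all names hypothetical):
```ruby
# Sketch of the factory-style registration: a class descended from
# BaseBookmarkable describes how its model is serialized and preloaded.
# Class, model, and method names below are hypothetical.
class VideoBookmarkable < BaseBookmarkable
  def self.model
    Video
  end

  def self.serializer
    UserVideoBookmarkSerializer
  end

  def self.preload_associations
    [:thumbnail_upload]
  end
end

Bookmark.register_bookmarkable(VideoBookmarkable)
```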
Admins won't be able to disable strip_image_metadata if they don't
disable composer_media_optimization_image_enabled first, since the latter
will strip the same metadata on the client during upload, making disabling
the former have no effect.
Bug report at https://meta.discourse.org/t/-/223350
This happened only for languages other than "en" and when `I18n.t` was called without any interpolation keys. The lib still tried to interpolate keys because it interpreted the `overrides` option as an interpolation key.
When an admin enters a badly formed regular expression in the
permalink_normalizations site setting, a RegexpError exception is
generated every time a URL is normalized (see Permalink.normalize_url).
The new validator validates every regular expression present in the
setting value (delimited by '|').
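The validator essentially does something like this (a sketch):
```ruby
# Sketch: check that every rule in the pipe-delimited setting value compiles.
# For brevity this treats the whole rule as a regex; the real validator only
# needs to compile the pattern part of each rule.
def valid_value?(value)
  value.to_s.split("|").all? do |rule|
    Regexp.new(rule)
    true
  rescue RegexpError
    false
  end
end
```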
If the identity provider does not provide a precise username value, then we should use our UserNameSuggester to generate one and use it for the override. This makes the override consistent with initial account creation.
c1db9687 introduced a postgres enum type. Our database restore logic did not handle custom types correctly, and would therefore raise a 'type already exists' error when restoring any backup.
This commit adds restore handling for enums, mirroring the similar logic for tables and views.
This will make future changes to the 'pull hotlinked images' system easier. This commit should not introduce any functional change.
For now, the old post_custom_field data is kept in the database. This will be dropped in a future commit.
This isn't a complete fix, it doesn't enable live reloading of color
definition stylesheets. But at least now when working on WCAG overrides
the developer won't need to restart the server to see changes.
Note this commit also introduces a new {{d-popover}} component, example usage:
```hbs
{{#d-popover |state|}}
{{d-button label="foo.things" class="d-popover-trigger"}}
<div class="d-popover-content">
Some content
</div>
{{/d-popover}}
```
This commit introduces a new site setting: `use_name_for_username_suggestions` (default true)
Admins can disable it if they want to stop using Name values when generating usernames for users. This can be useful if you want to keep real names private-by-default or, when used in conjunction with the `use_email_for_username_and_name_suggestions` setting, you would prefer to use email-based username suggestions.
The Mozilla Thunderbird email client adds links into HTML as follows:
`<p>The link: <a class="moz-txt-link-freetext" href="https://google.com">https://google.com</a></p>`
Current filtering rules strip out the link, leaving only `<p>The link: </p>`.
This commit properly strips only the unnecessary information: the quote prefix, signature, and forwarded message header.
This pull request follows on from https://github.com/discourse/discourse/pull/16308. This one does the following:
* Changes `BookmarkQuery` to allow for querying more than just Post and Topic bookmarkables
* Introduces a `Bookmark.register_bookmarkable` method which requires a model, serializer, fields and preload includes for searching. These registered `Bookmarkable` types are then used when validating new bookmarks, and also when determining which serializer to use for the bookmark list. The `Post` and `Topic` bookmarkables are registered by default.
* Adds new specific types for Post and Topic bookmark serializers along with preloading of associations in `UserBookmarkList`
* Changes to the user bookmark list template to allow for more generic bookmarkable types alongside the Post and Topic ones which need to display in a particular way
All of these changes are gated behind the `use_polymorphic_bookmarks` site setting, apart from the .hbs changes where I have updated the original `UserBookmarkSerializer` with some stub methods.
Following this PR will be several plugin PRs (for assign, chat, encrypt) that will register their own bookmarkable types or otherwise alter the bookmark serializers in their own way, also gated behind `use_polymorphic_bookmarks`.
This commit also removes `BookmarkQuery.preloaded_custom_fields` and the functionality surrounding it. It was added in 0cd502a558 but only used by one plugin (discourse-assign) where it has since been removed, and is now used by no plugins. We don't need it anymore.
Previously, accessing the Rails app directly in development mode would give you assets from our 'legacy' Ember asset pipeline. The only way to run with Ember CLI assets was to run ember-cli as a proxy. This was quite limiting when working on things which are bypassed when using the ember-cli proxy (e.g. changes to `application.html.erb`). Also, since `ember-auto-import` introduced chunking, visiting `/theme-qunit` under Ember CLI was failing to include all necessary chunks.
This commit teaches Sprockets about our Ember CLI assets so that they can be used in development mode, and are automatically collected up under `/public/assets` during `assets:precompile`. As a bonus, this allows us to remove all the custom manifest modification from `assets:precompile`.
The key changes are:
- Introduce a shared `EmberCli.enabled?` helper
- When ember-cli is enabled, add ember-cli `/dist/assets` as the top-priority Rails asset directory
- Have ember-cli output a `chunks.json` manifest, and teach `preload_script` to read it and append the correct chunks to their associated `afterFile`
- Remove most custom ember-cli logic from the `assets:precompile` step. Instead, rely on Rails to take care of pulling the 'precompiled' assets into the `public/assets` directory. Move the 'renaming' logic to runtime, so it can be used in development mode as well.
- Remove fingerprinting from `ember-cli-build`, and allow Rails to take care of things
Long-term, we may want to replace Sprockets with the lighter-weight Propshaft. The changes made in this commit have been made with that long-term goal in mind.
tldr: when you visit the rails app directly, you'll now be served the current ember-cli assets. To keep these up-to-date make sure either `ember serve`, or `ember build --watch` is running. If you really want to load the old non-ember-cli assets, then you should start the server with `EMBER_CLI_PROD_ASSETS=0`. (the legacy asset pipeline will be removed very soon)
This reverts commit 01107e418e.
We have seen some random occurrences of corrupted assets, and think it may be related to the sprockets 4 update. Reverting for investigation
We intend to switch to the `:json` serializer, which will stringify all keys. However, we need a clean revert path. This commit ensures that our `_t` cookie handling works with both marshal (the current default) and json (the new default) serialization.
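Conceptually, reads of the decrypted cookie data now need to tolerate both key styles; a rough sketch (not the actual implementation):
```ruby
# Sketch: the marshal serializer round-trips symbol keys, while the json
# serializer stringifies them, so normalize before lookup.
data = cookies.encrypted[:_t]
data = data.with_indifferent_access if data.is_a?(Hash)
token = data && data[:token]
```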