IMDb movie links were being rendered as posters. This was because
IMDb was intermittently sending `og:type` as `image`. To
fix this, we now default all IMDb links to the article type. This will
ensure that the IMDb onebox link includes all the information instead
of showing just a poster without any context.
This PR changes the `UserNotification` class to send outbound `user_private_message` using the group's SMTP settings, but only if:
* The first allowed_group on the topic has SMTP configured and enabled
* SiteSetting.enable_smtp is true
* The group does not have IMAP enabled; if IMAP is enabled, the `GroupSMTPMailer` handles things
The email is sent using the group's `email_username` as both the `from` and `reply-to` address, so when the user replies from their email client it goes through the group's SMTP inbox. That inbox needs email forwarding set up to send the message on to a location (such as a hosted site email address like meta@discoursemail.com) where it can be POSTed into Discourse's handle_mail route.
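A rough sketch of the guard and SMTP options described above (the `smtp_enabled`/`imap_enabled` flags and the options hash are assumed names, not necessarily the exact implementation):
```ruby
# Sketch only: pick the group whose SMTP settings should be used, if any.
def smtp_group_for(topic)
  group = topic.allowed_groups.first
  return nil unless SiteSetting.enable_smtp
  return nil unless group&.smtp_enabled # assumed flag
  return nil if group.imap_enabled      # GroupSMTPMailer handles IMAP groups
  group
end

# Mail goes out from (and replies go back to) the group's address over its own SMTP server.
def smtp_delivery_options(group)
  {
    address: group.smtp_server,
    port: group.smtp_port,
    user_name: group.email_username,
    password: group.email_password
  }
end
```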
Also includes a fix to `EmailReceiver#group_incoming_emails_regex` to include the `group.email_username`, so the group does not get a staged user created and invited to the topic (which was a problem for IMAP). `Group.find_by_email` has also been updated to look up groups by `email_username` for inbound emails with that as the TO address.
#### Note
This is safe to merge without seriously impacting anyone. Currently, anyone with SMTP enabled for a group would also have IMAP enabled, and that is a very small number of users because IMAP is an alpha product; in addition, the UserNotification change has a guard to make sure it is not used if IMAP is enabled for the group. The existing IMAP tests pass, and I tested this functionality by manually POSTing replies to the SMTP address into my local Discourse.
There will probably be more work needed on this, but it needs to be tested further in a real hosted environment to continue.
Setting a key/value pair in DistributedCache involves waiting on the
write to Redis to finish. In most cases, we don't need to wait on the
setting of the cache to finish. We just need to take our return value
and move on.
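As a rough illustration of the idea (a hypothetical helper, not the actual DistributedCache change):
```ruby
# Hypothetical helper: compute the value, hand the Redis-backed cache write to a
# background thread, and return immediately instead of blocking on the write.
def set_without_waiting(cache, key)
  value = yield
  Thread.new { cache[key] = value } # fire-and-forget
  value
end

# usage:
# categories = set_without_waiting(cache, "category_list") { Category.all.to_a }
```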
This improves the display of available tags in categories that are
configured to require at least (x) tags from a tag group.
There are two changes included:
- regular users will now see all the available tags in the required tag
group (previously they could see a max. of 5 tags)
- staff users will now see the tags from the required tag group when
the tag group contains more tags than the default limit (also set to 5)
Both changes only apply to the default query (i.e. no search terms).
It was not clear that replace watched words can be used to replace text
with URLs. This introduces a new watched word type that makes it easier
to understand.
The query is being executed each time we try to generate the link path
for a stylesheet within the duration of a request. Categories are not
updated that often, so repeating this query multiple times per request is
wasteful.
At the time of this commit, there is a `publish_discourse_stylesheet`
ActiveRecord callback on the `Category` model which clears the cache of
`Stylesheet::Manager` each time a category is saved.
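A minimal sketch of that caching shape (the method names here are assumptions):
```ruby
# Memoize the expensive category query until the next cache clear.
class Stylesheet::Manager
  def self.cache
    @cache ||= {}
  end

  def self.category_ids
    cache[:category_ids] ||= Category.pluck(:id)
  end

  def self.clear_cache!
    @cache = {}
  end
end

# The existing Category callback then keeps the cache honest:
# after_save { Stylesheet::Manager.clear_cache! }
```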
`SvgSprite.custom_sprite_paths` does not just fetch custom SVG sprites
from themes. It also picks up any custom SVG icons from a pre-defined
plugin folder.
Follow-up to 0700e9382d
The XML parsing of SVGs is done whenever the cache expires or on the
first load after a reboot. In one of our production instances, parsing
ranges from 30ms to 70ms which is not ideal. Instead, we've decided to
make a small memory trade off here by memoizing the core SVGs once on
boot to avoid parsing of the SVG files during the duration of a request.
The memoized hash will take up 57440 bytes (0.05744 megabytes).
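A rough sketch of the boot-time memoization (the paths and constant name are assumptions):
```ruby
require "nokogiri"

# Parse the core sprite files once at boot and keep the parsed XML around,
# instead of re-parsing whenever the cache expires.
module SvgSprite
  CORE_SVGS = Dir.glob("#{Rails.root}/vendor/assets/svg-icons/**/*.svg").to_h do |path|
    [File.basename(path, ".svg"), Nokogiri::XML(File.read(path))]
  end
end
```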
I merged this PR yesterday (https://github.com/discourse/discourse/pull/12958), finally thinking this was done, but then a wild performance regression occurred. These are the problem methods:
1aa20bd681/app/serializers/topic_tracking_state_serializer.rb (L13-L21)
Turns out date comparison is super expensive on the backend _as well as_ the frontend.
The fix was to just move the `treat_as_new_topic_start_date` into the SQL query rather than using the slower `UserOption#treat_as_new_topic_start_date` method in ruby. After this change, 1% of the total time is spent with the `created_in_new_period` comparison instead of ~20%.
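A very rough sketch of the approach, not the production query (the real cutoff is derived from the user's new-topic duration option; here it is simplified to show the comparison moving into SQL):
```ruby
# Compute the boolean per row in the database so the serializer just reads it,
# instead of comparing dates per topic in Ruby. Column names simplified.
rows = DB.query(<<~SQL, user_id: user.id)
  SELECT topics.id,
         topics.created_at >= GREATEST(u.previous_visit_at, u.created_at) AS created_in_new_period
  FROM topics
  JOIN users u ON u.id = :user_id
SQL
```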
----
History:
Original PR which had to be reverted **https://github.com/discourse/discourse/pull/12555**. See the description there for what this PR is achieving, plus below.
The issue with the original PR is addressed in 92ef54f402
If you went to the `x unread` link for a tag Chrome would freeze up and possibly crash, or eventually unfreeze after nearly 10 mins. Other routes for unread/new were similarly slow. From profiling the issue was the `sync` function of `topic-tracking-state.js`, which calls down to `isNew` which in turn calls `moment`, a change I had made in the PR above. The time it takes locally with ~1400 topics in the tracking state is 2.3 seconds.
To solve this issue, I have moved these calculations for "created in new period" and "unread not too old" into the tracking state serializer.
When I was looking at the profiler I also noticed this issue which was just compounding the problem. Every time we modify topic tracking state we recalculate the sidebar tracking/everything/tag counts. However this calls `forEachTracked` and `countTags` which can be quite expensive as they go through the whole tracking state (and were also calling the removed moment functions).
I added some logs and this was being called 30 times when navigating to a new /unread route because `sync` is being called from `build-topic-route` (one for each topic loaded due to pagination). So I just added a debounce here and it makes things even faster.
Finally, I changed topic tracking state to use a Map so counting the state keys is faster (Maps have `.size`, whereas for plain objects you have to do `Object.keys(obj).length`, which is O(n)).
* FIX: return an empty result if response from Amazon is missing attributes
Check that we have the basic attributes required to construct a Onebox for Amazon.
This is an attempt to handle scenarios where we receive a valid 200-status response from an Amazon request that does not include the data we’re expecting.
* Update lib/onebox/engine/amazon_onebox.rb
Co-authored-by: Régis Hanol <regis@hanol.fr>
Refactors `TrustLevel` and moves translations from server to client
Additional changes:
* "staff" and "admin" wasn't translatable in site settings
* it replaces a concatenated string with a translation
* uses translation for trust levels in users_by_trust_level report
* adds a DB migration to rename keys of translation overrides affected by this commit
On the web, we display only an excerpt in a monospace font and the rest
of the body is hidden under ellipsis. The email displayed both of them
and it did not use the same style. This commit leaves only the excerpt
in emails and makes it use a monospace font to display it.
Original PR which had to be reverted **https://github.com/discourse/discourse/pull/12555**. See the description there for what this PR is achieving, plus below.
The issue with the original PR is addressed in 92ef54f402
If you went to the `x unread` link for a tag Chrome would freeze up and possibly crash, or eventually unfreeze after nearly 10 mins. Other routes for unread/new were similarly slow. From profiling the issue was the `sync` function of `topic-tracking-state.js`, which calls down to `isNew` which in turn calls `moment`, a change I had made in the PR above. The time it takes locally with ~1400 topics in the tracking state is 2.3 seconds.
To solve this issue, I have moved these calculations for "created in new period" and "unread not too old" into the tracking state serializer.
When I was looking at the profiler I also noticed this issue which was just compounding the problem. Every time we modify topic tracking state we recalculate the sidebar tracking/everything/tag counts. However this calls `forEachTracked` and `countTags` which can be quite expensive as they go through the whole tracking state (and were also calling the removed moment functions).
I added some logs and this was being called 30 times when navigating to a new /unread route because `sync` is being called from `build-topic-route` (one for each topic loaded due to pagination). So I just added a debounce here and it makes things even faster.
Finally, I changed topic tracking state to use a Map so counting the state keys is faster (Maps have `.size`, whereas for plain objects you have to do `Object.keys(obj).length`, which is O(n)).
This overhauls the user interface for the group email settings management, aiming to make it a lot easier to test the settings entered and confirm they are correct before proceeding. We do this by forcing the user to test the settings before they can be saved to the database. It also includes some quality of life improvements around setting up IMAP and SMTP for our first supported provider, GMail. This PR does not remove the old group email config, that will come in a subsequent PR. This is related to https://meta.discourse.org/t/imap-support-for-group-inboxes/160588 so read that if you would like more backstory.
### UI
Both the `enable_imap` and `enable_smtp` site settings must be true to test this. You must enable SMTP before you can enable IMAP.
You can prefill the SMTP settings with GMail configuration. To proceed with saving these settings you must test them, which is handled by the EmailSettingsValidator.
If there is an issue with the configuration or credentials a meaningful error message should be shown.
IMAP settings must also be validated when IMAP is enabled, before saving.
When saving IMAP, we fetch the mailboxes for that account and populate them. A mailbox must be selected and saved for IMAP to work (the feature acts as though it is disabled until the mailbox is selected and saved).
### Database & Backend
This adds several columns to the Groups table. The purpose of this change is to make it much more explicit that SMTP/IMAP is enabled for a group, rather than relying on settings not being null. Also included is an UPDATE query to backfill these columns. These columns are automatically filled when updating the group.
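A hypothetical migration sketch of that idea (the column names and backfill conditions are assumptions):
```ruby
class AddSmtpImapEnabledToGroups < ActiveRecord::Migration[6.0]
  def up
    add_column :groups, :smtp_enabled, :boolean, default: false, null: false
    add_column :groups, :imap_enabled, :boolean, default: false, null: false

    # Backfill: treat groups with complete SMTP credentials as enabled.
    execute <<~SQL
      UPDATE groups
      SET smtp_enabled = true
      WHERE smtp_server IS NOT NULL AND smtp_port IS NOT NULL
        AND email_username IS NOT NULL AND email_password IS NOT NULL
    SQL
  end

  def down
    remove_column :groups, :smtp_enabled
    remove_column :groups, :imap_enabled
  end
end
```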
For GMail, we now filter the mailboxes returned. This is so users cannot use a mailbox like Sent or Trash for syncing, which would generally be disastrous.
There is a new group endpoint for testing email settings. This may be useful in the future for other places in our UI, at which point it can be extracted to a more generic endpoint or module to be included.
Discourse shouldn't dynamically calculate the path of uploads and optimized images after a file has been stored on disk or S3. Otherwise it might calculate the wrong path if the SHA1 or extension stored in the database doesn't match the actual file path.
* FIX: Improve GitHub folder regexp in Onebox
It used to match any GitHub URL that was not matched by the other GitHub
Oneboxes and it did not do a good job at handling those. With this
change, the generic Onebox will handle the remaining URLs.
* FEATURE: Add Onebox for GitHub Actions
* FEATURE: Add Onebox for PR check runs
* FIX: Remove image from GitHub folder Oneboxes
It is a generic, auto-generated image which does not provide any value.
* DEV: Add tests
* FIX: Strip HTML comments from PR body
* Move onebox gem into core library
* Update template file path
* Remove warning for onebox gem caching
* Remove onebox version file
* Remove onebox gem
* Add sanitize gem
* Require onebox library in lazy-yt plugin
* Remove onebox web specific code
This code was used in the standalone onebox Sinatra application
* Merge Discourse specific AllowlistedGenericOnebox engine in core
* Fix onebox engine filenames to match class name casing
* Move onebox specs from gem into core
* DEV: Rename `response` helper to `onebox_response`
Fixes a naming collision.
* Require rails_helper
* Don't use `before/after(:all)`
* Whitespace
* Remove fakeweb
* Remove poor unit tests
* DEV: Re-add fakeweb, plugins are using it
* Move onebox helpers
* Stub Instagram API
* FIX: Follow additional redirect status codes (#476)
Don’t throw errors if we encounter 303, 307 or 308 HTTP status codes in responses
* Remove an empty file
* DEV: Update the license file
Using the copy from https://choosealicense.com/licenses/gpl-2.0/#
Hopefully this will enable GitHub to show the license UI?
* DEV: Update embedded copyrights
* DEV: Add Onebox copyright notice
* DEV: Add MIT license, convert COPYRIGHT.txt to md
* DEV: Remove an incorrect copyright claim
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
Co-authored-by: jbrw <jamie@goatforce5.org>
This PR improves the UI of bulk select so that its context is applied to the Dismiss Unread and Dismiss New buttons. Regular users (not just staff) are now able to use topic bulk selection on the /new and /unread routes to perform these dismiss actions more selectively.
For Dismiss Unread, there is a new count in the text of the button and in the modal when one or more topics are selected with the bulk select checkboxes.
For Dismiss New, there is a count in the button text, and we have added functionality to the server side to accept an array of topic ids to dismiss new for, instead of always having to dismiss all new topics, matching the bulk dismiss unread functionality. To clean things up, the `DismissTopics` service has been rolled into the `TopicsBulkAction` service.
We now also show the top Dismiss/Dismiss New button based on whether the bottom one is in the viewport, not just based on the topic count.
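A hedged sketch of the server-side call (the operation name is an assumption):
```ruby
# Dismiss "new" for an explicit list of topic ids via the bulk action service.
TopicsBulkAction.new(current_user, topic_ids, type: "dismiss_new").perform!
```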
Re-lands the change initially proposed on #8359 but without a new nginx
location block, so it has less change surface.
Co-authored-by: Jeff Wong <awole20@gmail.com>
Based on feedback from Matt Haughey, we don't need to use so many words when describing a deleted topic or post.
Co-authored-by: Martin Brennan <martin@discourse.org>
The main image_optim gem now includes the timeout feature
that we had in our fork. So it is now safe to switch off of our fork and
back to the image_optim gem.
This is the link to the commit in the image_optim repo that adds the
timeout option:
ec3767dde0
One difference with the new timeout implementation is that image_optim
now handles the timeout exceptions instead of bubbling them up:
1ed0328587/lib/image_optim.rb (L128-L129)
```
rescue Errors::TimeoutExceeded
handler.result
```
So a timeout will just return `nil`, which is the same response as when it
couldn't optimize an image. I don't think we were really watching for
or doing anything about these timeout warnings in our logs, so I think
this is an okay change to have, and we will have fewer warnings in our
logs now too.
When testing theme components in development, it doesn't make sense to use the `test` environment. The `test` environment almost certainly has 0 themes installed.
This change still works fine when using the `themes:install_and_test` rake task, because that rake task explicitly specifies environment/database-config.
We support two types of custom excerpts. It can be `<div class="excerpt">` or `<span class="excerpt">`: b21f74060e/lib/excerpt_parser.rb (L120)
We also ignore the max excerpt length for custom excerpts, but we forgot to process the `div` variant when ignoring the max length.
When editing the first post for the topic we do two AJAX requests
to two separate controllers in this order:
PUT /t/topic-name
PUT /posts/2489523
This causes two post revisor calls, which end up triggering the
:post_edited DiscourseEvent twice. This is then picked up and sent
as a WebHook event twice. However we do not need to send a :post_edited
webhook event if the first post is being edited and topic_changed is
true from the :post_edited DiscourseEvent, because a second event will
shortly come through for just the post.
See https://meta.discourse.org/t/post-webhook-fires-two-times-on-post-edited-for-first-post-in-a-topic/162408
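A sketch of the guard described above (the listener shape and helper name are illustrative assumptions):
```ruby
DiscourseEvent.on(:post_edited) do |post, topic_changed|
  # Skip the duplicate event: a second :post_edited will shortly follow for just the post.
  next if post.is_first_post? && topic_changed
  enqueue_post_edited_web_hook(post) # hypothetical helper
end
```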
Continued on from https://github.com/discourse/discourse/pull/10590
It used to allow adding email addresses to a group even if invites were
disabled for the site. Now users cannot input an email address
if they cannot invite.
The second thing this commit improves is the message that is displayed
to the user when they hit the invite rate limit.
When uploads are created from the composer (editing or creating a post),
for sites with secure uploads enabled we assume security by default and
that new upload is set to secure. When the post is created, we then
check whether the post uploads _actually_ need to be secure and adjust
accordingly.
We were not doing this when revising a post, so when a new upload was
created when editing a post in a public topic, the secure status stayed
true erroneously causing issues with image previews, among other things.
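A minimal sketch of the idea, assuming `Upload#update_secure_status` is the same re-check used after post creation (the exact hook point in the revise path is elided):
```ruby
# After revising, re-evaluate whether each upload on the post still needs to be secure.
def update_uploads_secure_status(post)
  post.uploads.each(&:update_secure_status)
end
```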
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
- Task name is themes:qunit, not themes:unit
- Some shells try to expand the square brackets. The whole thing should be enclosed in quotes to avoid this
Previously, we only precompiled the CSS for parent themes but not for
the child themes. As a result, the CSS for child themes was being
compiled during the first request, which made the response time high for
that request.
Watched words are always converted to regular expressions, regardless of whether
watched_words_regular_expressions is enabled. Internally, wildcard
characters are replaced with a regular expression that matches any
non-whitespace character.
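A rough illustration of that wildcard conversion (not the exact code):
```ruby
word  = "foo*bar"
# Each "*" becomes \S* — it matches anything except whitespace.
regex = Regexp.new(word.split("*").map { |part| Regexp.escape(part) }.join('\S*'))

regex.match?("fooXYZbar") # => true
regex.match?("foo bar")   # => false (the wildcard does not cross whitespace)
```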
* FIX: Hide tag watched words if tagging is disabled
These 'autotag' words were shown even if tagging was disabled.
* FIX: Make autotag watched words case insensitive
This commit also fixes a bug where no tag was applied if no other tag
was already present.
* DEV: Allow wildcards in Oneboxer optional domain Site Settings
Allows a wildcard to be used as a subdomain on Oneboxer-related SiteSettings, e.g.:
- `force_get_hosts`
- `cache_onebox_response_body_domains`
- `force_custom_user_agent_hosts`
* DEV: fix typos
* FIX: Try doing a GET after receiving a 500 error from a HEAD
By default we try to do a `HEAD` request. If this results in a 500 error response, we should try to do a `GET`
* DEV: `force_get_hosts` should be a hidden setting
* DEV: Oneboxer Strategies
Have an alternative oneboxing ‘strategy’ (i.e., set of options) to use when an attempt to generate a Onebox fails. Keep track of any non-default strategies that were used on a particular host, and use that strategy for that host in the future.
Initially, the alternate strategy (`force_get_and_ua`) forces the FinalDestination step of Oneboxing to do a `GET` rather than `HEAD`, and forces a custom user agent.
* DEV: change stubbed return code
The stubbed status code needs to be a value not recognized by FinalDestination
In production, each Unicorn child process will currently hold the
default locale in memory on first load. Instead, we should preload it in
the Unicorn master process so that the memory is immediately shared when
forking.
Also, translations were previously only memoized on first load, which added
considerable overhead to the first few requests after a fresh
boot.
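A minimal sketch of the idea (not the exact initializer): touch the default locale once in the master process so translations are loaded before forking and shared via copy-on-write.
```ruby
# Run in the Unicorn master before forking workers.
I18n.t("time.formats.short", locale: SiteSetting.default_locale)
```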
We have a few places in the code where we need to validate various email related settings, and will have another soon with the improved group email settings UI. This PR introduces a class which can validate POP3, IMAP, and SMTP credentials and also provide a friendly error message for issues if they must be presented to an end user.
This PR does not change any existing code to use the new service. I have added a TODO to change POP3 validation and the email test rake task to use the new validator post-release.
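A hedged sketch of what SMTP credential validation can look like (this is not the actual EmailSettingsValidator API):
```ruby
require "net/smtp"

def smtp_credentials_valid?(host:, port:, username:, password:)
  smtp = Net::SMTP.new(host, port)
  smtp.enable_starttls_auto
  smtp.start("localhost", username, password, :plain)
  smtp.finish
  true
rescue Net::SMTPAuthenticationError, Net::OpenTimeout, SocketError
  false
end
```
The real class also needs to map exceptions like these to friendly, user-facing error messages.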
* FIX: Ensure the same email cannot be invited twice
When creating a new invite with a duplicated email, the old invite will
be updated and returned. When updating an invite with a duplicated email
address, an error will be returned.
* FIX: not Ember helper does not exist
* FIX: Sync can_invite_to_forum? and can_invite_to?
The two methods should perform the same basic set of checks, such as
checking the must_approve_users site setting.
Ideally, one of the methods would call the other one or be merged and
that will happen in the future.
* FIX: Show invite to group if user is group owner
* FIX: flaky specs after topic view custom filters
When ensuring TopicView class variables return to their original state, it should use an empty Hash instead of an empty Array. See
https://github.com/discourse/discourse/blob/master/lib/topic_view.rb#L60
* FIX: convert to string for topic view custom filter
We have found when receiving and posting inbound emails to the handle_mail route, it is better to POST the payload as a base64 encoded string to avoid strange encoding issues. This introduces a new param of `email_encoded` and maintains the legacy param of email, showing a deprecation warning. Eventually the old param of `email` will be dropped and the new one `email_encoded` will be the only way to handle_mail.
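An example of posting inbound mail with the new param (the route path is assumed from the description and the API-key headers are elided):
```ruby
require "base64"
require "net/http"

raw_email = File.read("incoming.eml")
uri = URI("https://forum.example.com/admin/email/handle_mail")

Net::HTTP.post_form(uri, "email_encoded" => Base64.strict_encode64(raw_email))
```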
It's been a while since we supported IE11, so it should be safe to remove
IntersectionObserver now.
From a TODO task in this repo:
> drop when we eventually drop IE11
Announcement of when we removed IE11 support:
https://meta.discourse.org/t/137984/40?u=blake
nil was converted to "" and the matching regex would return [], which was then converted to nil by the max call.
Example exception:
```
NoMethodError (undefined method `<=' for nil:NilClass)
lib/text_sentinel.rb:71:in `seems_unpretentious?'
lib/validators/quality_title_validator.rb:13:in `validate_each'
lib/topic_creator.rb:25:in `valid?'
```
Previous refactors have lost the usage of read_timeout in `FileHelper.download`, and `FinalDestination` was incorrectly using `Net::HTTP.start` by setting `open_timeout` in the block instead of directly during the invocation.
Couldn't figure out how to write a good test for this without slowing down the spec.
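The gist of the fix, as a sketch: pass the timeouts to `Net::HTTP.start` directly rather than setting them inside the block after the connection has already been opened.
```ruby
require "net/http"

uri = URI("https://example.com/")
Net::HTTP.start(uri.host, uri.port,
                use_ssl: uri.scheme == "https",
                open_timeout: 5, read_timeout: 10) do |http|
  http.request(Net::HTTP::Get.new(uri))
end
```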
This commit makes a few changes to improve boot time in development environments. It will have no effect on production boot times.
- Skip the SchemaCache warmup. In development mode, the SchemaCache is refreshed every time there is a code change, so warmup is of limited use.
- Skip warming up PrettyText. This adds ~2s to each web worker's boot time. The vast majority of requests do not use PrettyText, so it is more efficient to defer its warmup until it's needed.
- Skip the intentional 1 second pause during Unicorn worker forking. The comment (which also exists in Unicorn's documentation) suggests this works around a Unix signal handling bug, but I haven't been able to locate any more information. Skipping it in dev will significantly speed up boot. If we start to see issues, we can revert this change.
On my machine, this improves `/bin/unicorn` boot time from >10s to ~4s
Sometimes, parts of the application pass in the locale as a string, not a symbol. This was causing the translate_accelerator to cache two versions of the locale separately: one cache for the symbol version, and one cache for the string version. For example, in a running production process:
```
irb(main):001:0> I18n.instance_variable_get(:@loaded_locales)
=> [:en, "en"]
```
This commit ensures the `locale` key is always converted to a symbol, and adds a spec to ensure the same locale cannot appear twice in `@loaded_locales`
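A minimal sketch of the normalization (the method and loading step are illustrative stand-ins):
```ruby
def ensure_loaded!(locale)
  locale = locale.to_sym
  load_locale(locale) unless @loaded_locales.include?(locale) # load_locale stands in for the real loading step
end
```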
In development we regularly restart/reload Rails, which wipes out the schema cache. This then has to be regenerated using DDL queries on the database.
Instead, we can make use of the `rake db:schema:cache:dump` command. This will dump the schema cache to a YAML file, and then load it when needed. This is significantly faster than rebuilding the cache from DDL queries every time.
This commit allows site admins to run theme tests in production via a new `/theme-qunit` route. When you visit `/theme-qunit`, you'll see a list of the themes/components installed on your site that have tests, and from there you can select a theme or component to run its tests.
We also have a new rake task `themes:install_and_test` that can be used to install a list of themes/components on a temporary database and run the tests of the themes/components that are installed. This rake task can be useful when upgrading/deploying a Discourse instance to make sure that the installed themes/components are compatible with the new Discourse version being deployed, and if the tests fail you can abort the build/deploy process so you don't end up with a broken site.
Plugins always store their stylesheets under `/assets/stylesheets`, so we can make the glob pattern much more specific. In my local development environment, this increases the speed of `Stylesheet::Manager.max_file_mtime` from ~65ms to ~3ms (20x faster). This significantly improves stylesheet regeneration time, and the responsiveness of the theme admin UI.
Note that this will have negligible effect in production, because in production the value of `max_file_mtime` is aggressively cached.
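An illustrative narrowing of the glob (the exact pattern in core may differ):
```ruby
# Before — walks every file under every plugin:
Dir.glob("#{Rails.root}/plugins/*/**/*.scss")

# After — only the stylesheets directory plugins actually use:
Dir.glob("#{Rails.root}/plugins/*/assets/stylesheets/**/*.scss").map { |f| File.mtime(f) }.max
```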
This setting allows admins to activate/deactivate automatic trimming of incoming email.
There are instances where it does wonders in trimming all the garbage content and other
instances where it's so bad that it trims the most important part of the email.
FIX: don't remove hidden content using the style attribute when converting HTML to Markdown.
The regexp used was doing more harm than good. It was way too broad.
FIX: properly elide signatures from emails sent with Front App.
This is fairly safe as Front App nicely identifies signatures in the HTML part.
The aim of this PR is to improve the topic tracking state JavaScript code and test coverage so further modifications can be made in plugins and in core. This is focused on making topic tracking state changes easier to respond to with callbacks, and changing it so all state modifications go through a single method instead of modifying `this.state` all over the place. I have also tried to improve documentation, make the code clearer and easier to follow, and make it clear what are public and private methods.
The changes I have made here should not break backwards compatibility, though there is no way to tell for sure if other plugin/theme authors are using tracking state methods that are essentially private methods. Any name changes made in the tracking-state.js code have been reflected in core.
----
We now have a `_trackedTopicLimit` in the tracking state. Previously, if a topic was neither new nor unread it was removed from the tracking state; now it is only removed if we are tracking more than `_trackedTopicLimit` topics (which is set to 4000). This is so plugins/themes adding topics with `TopicTrackingState.register_refine_method` can add topics to track that aren't necessarily new or unread, e.g. for totals counts.
Anywhere where we were doing `tracker.states["t" + data.topic_id] = newObject` has now been changed to flow through central `modifyState` and `modifyStateProp` methods. This is so state objects are not modified until they need to be (e.g. sometimes properties are set based on certain conditions) and also so we can run callback functions when the state is modified.
I added `onStateChange` and `onMessageIncrement` methods to register callbacks that are called when the state is changed and when the message count is incremented, respectively. This was done so we no longer need to do things like `@observes("trackingState.states")` in other Ember classes.
I split up giant functions like `sync` and `establishChannels` into smaller functions for readability and testability, and renamed many small functions to _functionName to designate them as private functions which should not be called by consumers of `topicTrackingState`. Public functions are now all documented (well...at least the ones that are not immediately obvious).
----
On the backend side, I have changed the MessageBus publish events for TopicTrackingState to send back tags and tag IDs for more channels, and done some extra code cleanup and refactoring. Plugins may override `TopicTrackingState.report` so I have made its footprint as small as possible and externalised the main parts of it into other methods.
When building the `scss_load_paths`, we were creating a full export of the theme (including uploads), and not cleaning it up. With many uploads, this can be extremely slow (because it downloads every upload from S3), and the lack of cleanup could cause a disk to fill up over time.
This commit updates the ZipExporter to provide a `with_export_dir` API, which takes care of cleanup. It also adds a kwarg which allows exporting only extra_scss fields. This should make things much faster for themes with many uploads.
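Hypothetical usage of the new API (the kwarg name is an assumption):
```ruby
exporter = ThemeStore::ZipExporter.new(theme)

exporter.with_export_dir(extra_scss_only: true) do |dir|
  scss_load_paths << dir
end
# The export directory is cleaned up when the block returns.
```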
When the admin creates a new custom field they can specify if that field should be searchable or not.
That setting is taken into consideration for quick search results.
For sites with login_required set to true, counting anonymous pageviews is
confusing. Requests to /login and other pages would make it look like
anonymous users have access to the site's content.
This commit allows site admins to run theme tests in production via a new `/theme-qunit` route. When you visit `/theme-qunit`, you'll see a list of the themes/components installed on your site that have tests, and from there you can select a theme or component to run its tests.
We also have a new rake task `themes:install_and_test` that can be used to install a list of themes/components on a temporary database and run the tests of the themes/components that are installed. This rake task can be useful when upgrading/deploying a Discourse instance to make sure that the installed themes/components are compatible with the new Discourse version being deployed, and if the tests fail you can abort the build/deploy process so you don't end up with a broken site.
* DEV: Give a nicer error when `--proxy` argument is missing
* DEV: Improve Ember CLI's bootstrap logic
Instead of having Ember CLI know which URLs to proxy or not, have it try
the URL with a special header `HTTP_X_DISCOURSE_EMBER_CLI`. If present,
and Discourse thinks we should bootstrap the application, it will
instead stop rendering and return a HTTP HEAD with a response header
telling Ember CLI to bootstrap.
In other words, any time Rails would otherwise serve up the HTML for the
Ember app, it stops and says "no, you do it."
* DEV: Support asset filters by path using a new options object
Without this, Ember CLI's bootstrap would not get the assets it wants
because the path it was requesting was different than the browser path.
This adds an optional request header to fix it.
So far this is only used by the styleguide.
Note that this commit also fixes various mistakes in emojis.
Some of them have been fixed manually in db.json/data.js/groups.json and will need to be fixed in the emoji-db gem.
* FEATURE: Review every post using the review queue.
If the `review_every_post` setting is enabled, posts created and edited by regular users are sent to the review queue so staff can review them. We'll skip PMs and posts created or edited by TL4 or staff users.
Staff can choose to:
- Approve the post (nothing happens)
- Approve and restore the post (if deleted)
- Approve and unhide the post (if hidden)
- Reject and delete it
- Reject and keep deleted (if deleted)
- Reject and suspend the user
- Reject and silence the user
* Update config/locales/server.en.yml
Co-authored-by: Robin Ward <robin.ward@gmail.com>
Rails 6.1.3.1 deprecates a few APIs and has some internal changes that break our test suite, so this commit fixes all the deprecations and errors and now Discourse should be fully compatible with Rails 6.1.3.1. We also have a new release of the rails_failover gem that's compatible with Rails 6.1.3.1.
There is a category setting that enforces 1 or more tags must be added to a topic from a specific tag group before creating it. This validation was not being run before the topic was being sent to a review queue for categories that have that setting enabled.
There was an existing validation in `TopicCreator` but it was not correct; it was only validating when the tags did _not_ exist and also only happened on `create`. I now run the validation in `TopicCreator.valid?`.
I also improved the error message shown to the user when they have not added the tags required (showing the tag names from the tag group), and changed the composer tag selector to not show "optional" if there are N tags required from a certain group.
* DEV: ensures stylesheet watcher isn't crashing with gems plugins
This bug surfaced because discourse_dev now includes an auth plugin, which was being loaded with a relative path instead of an absolute path, e.g.:
`Users/bob/.gem/ruby/2.6.6/gems/discourse_dev-0.1.0/auth/plugin.rb`
Instead of
`/Users/bob/.gem/ruby/2.6.6/gems/discourse_dev-0.1.0/auth/plugin.rb`
Let's say you want to use a gem in a plugin that has a dependency.
You would write something like this:
```ruby
gem 'dependency-gem', '1.2.3'
gem 'amazing-gem', '4.5.6'
```
However, since plugin gems are installed in a `gems` directory created
inside the plugins directory, when it comes time
to install the `amazing-gem`, it won't be able to find the `dependency-gem`.
This fixes that issue by adding the `gems` plugins directory to the global gem path.
Also fixed a frozen string error when specifying a source.
CookedPostProcessor used Loofah to parse the cooked content of a post
and Nokogiri to parse cooked Oneboxes. Even though Loofah is built on
top of Nokogiri, replacing an element from the cooked post (a Nokogiri
node) with a parsed onebox (a Loofah node) produced a strange result
which included XML namespaces. Removing the mix and using Loofah
to parse Oneboxes fixed the problem.
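A sketch of the change in spirit (simplified markup; not the actual CookedPostProcessor code):
```ruby
require "loofah"

cooked = Loofah.fragment('<p><a class="onebox" href="https://example.com">example</a></p>')
onebox = Loofah.fragment('<aside class="onebox">…</aside>') # parsed with Loofah, not Nokogiri

# Both nodes now come from the same library, so the replacement produces clean HTML.
cooked.css("a.onebox").first.replace(onebox)
```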
* FIX: Use theme color for anchor icon
* FIX: Do not count anchor links
* FIX: Do not count hashtags links either
* DEV: Add tests for link_count
* FIX: Disable anchors in quotes and preview
* FIX: Try building some anchor slugs for unicode
* DEV: Fix tests
This PR adds a new category setting which is a column in the `categories` table, `allow_unlimited_owner_edits_on_first_post`.
What this does is:
* Inside the `can_edit_post?` method of `PostGuardian`, if the current user editing a post is the owner of the post, it is the first post, and the topic's category has `allow_unlimited_owner_edits_on_first_post`, then we bypass the check for `LimitedEdit#edit_time_limit_expired?` on that post.
* Also, similar to wiki topics, in `PostActionNotifier#after_create_post_revision` we send a notification to all users watching a topic when the OP is edited in a topic with the category setting `allow_unlimited_owner_edits_on_first_post` enabled.
This is useful for forums where there is a Marketplace or similar category, where topics are created and then updated indefinitely by the OP rather than the OP making new topics or additional replies. In a way this acts similar to a wiki that only one person can edit.
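A hypothetical condensation of the PostGuardian bypass described above:
```ruby
def bypass_edit_time_limit?(post, user)
  post.is_first_post? &&
    post.user_id == user.id &&
    post.topic&.category&.allow_unlimited_owner_edits_on_first_post
end
```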
Followup to 5deda5ef3e
The first argument to `Open3.capture3` can be an environment variable hash. In this case, we need to insert the `timeout` command after the env hash.
This commit allows themes and theme components to have QUnit tests. To add tests to your theme/component, create a top-level directory in your theme and name it `test`, and Discourse will save all the files in that directory (and its sub-directories) as "tests files" in the database. While tests files/directories are not required to be organized in a specific way, we recommend that you follow Discourse core's tests [structure](https://github.com/discourse/discourse/tree/master/app/assets/javascripts/discourse/tests).
Writing theme tests should be identical to writing plugins or core tests; all the `import` statements and APIs that you see in core (or plugins) to define/setup tests should just work in themes.
You do need a working Discourse install to run theme tests, and you have 2 ways to run theme tests:
* In the browser at the `/qunit` route. `/qunit` will run tests of all active themes/components as well as core and plugins. The `/qunit` route now accepts `theme_name` or `theme_url` params that you can use to run tests of a specific theme/component like so: `/qunit?theme_name=<your_theme_name>`.
* In the command line using the `themes:qunit` rake task. This task is meant to run tests of a single theme/component so you need to provide it with a theme name or URL like so: `bundle exec rake themes:qunit[name=<theme_name>]` or `bundle exec rake themes:qunit[url=<theme_url>]`.
There are some refactors to how Discourse processes JavaScript that comes with themes/components, and these refactors may break your JS customizations; see https://meta.discourse.org/t/upcoming-core-changes-that-may-break-some-themes-components-april-12/186252?u=osama for details on how you can check if your themes/components are affected and what you need to do to fix them.
This commit also improves theme error handling in Discourse. We will now be able to catch errors that occur when theme initializers are run and prevent them from breaking the site and other themes/components.
`GlobalSetting.relative_url_root` comes from the destination site. We
can't be sure whether it was the same on the original site. It's safer
to use a wildcard here, so we can backup/restore sites with different
relative_url_root values.
Previously, certain images could cause convert / identify to run for unreasonable
amounts of time
This adds a maximum amount of time these commands can run prior to forcing
them to stop
Previously we used the raw data indexed to generate blurbs even for cases
when Chinese/Korean/Japanese text was used.
This caused superfluous spaces to show up in excerpts.
To add an extra layer of security, we sanitize settings before shipping them to the client. We don't sanitize those that have the "html" type.
The CookedPostProcessor already uses Loofah for sanitization, so I chose to also use it for this. I added it to our Gemfile since it was previously only installed as a transitive dependency.
This commit allows themes and theme components to have QUnit tests. To add tests to your theme/component, create a top-level directory in your theme and name it `test`, and Discourse will save all the files in that directory (and its sub-directories) as "tests files" in the database. While tests files/directories are not required to be organized in a specific way, we recommend that you follow Discourse core's tests [structure](https://github.com/discourse/discourse/tree/master/app/assets/javascripts/discourse/tests).
Writing theme tests should be identical to writing plugins or core tests; all the `import` statements and APIs that you see in core (or plugins) to define/setup tests should just work in themes.
You do need a working Discourse install to run theme tests, and you have 2 ways to run theme tests:
* In the browser at the `/qunit` route. `/qunit` will run tests of all active themes/components as well as core and plugins. The `/qunit` route now accepts `theme_name` or `theme_url` params that you can use to run tests of a specific theme/component like so: `/qunit?theme_name=<your_theme_name>`.
* In the command line using the `themes:qunit` rake task. This task is meant to run tests of a single theme/component so you need to provide it with a theme name or URL like so: `bundle exec rake themes:qunit[name=<theme_name>]` or `bundle exec rake themes:qunit[url=<theme_url>]`.
There are some refactors to internal code that's responsible for processing themes/components in Discourse, most notably:
* `<script type="text/discourse-plugin">` tags are automatically converted to modules.
* The `theme-settings` service is removed in favor of a simple `lib` file responsible for managing theme settings. This was done to allow us to register/lookup theme settings very early in our Ember app lifecycle and because there was no reason for it to be an Ember service.
These refactors should be 100% backward compatible and invisible to theme developers.
This feature used to be controlled by two site settings
enable_personal_email_messages and min_trust_to_send_email_messages.
I removed enable_personal_email_messages and unhid
min_trust_to_send_email_messages to simplify the process of
enabling / disabling this feature.
* FEATURE: Cache successful HTTP GET requests during Oneboxing
Some oneboxes may fail when making excessive and/or odd requests against the target domains. This change provides a simple mechanism to cache the results of successful GET requests as part of the oneboxing process, with the goal of reducing repeated requests and ultimately improving the rate of successful oneboxing.
To enable:
Set `SiteSetting.cache_onebox_response_body` to `true`
Add the domains you’re interested in caching to `SiteSetting.cache_onebox_response_body_domains`, e.g. `example.com|example.org|example.net`
Optionally set `SiteSetting.cache_onebox_user_agent` to a user agent string of your choice to use when making requests against domains in the above list.
* FIX: Swap order of duration and value in redis call
The correct order for `setex` arguments is `key`, `duration`, and `value`.
Duration and value had been flipped; however, the code would not have thrown an error because we were caching the value of `1.day.to_i` for a period of 1 second… The intention appears to be to set a value of 1 (purely as a flag) for a period of 1 day.
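The corrected call shape (the key name here is illustrative): redis-rb's `setex` takes `(key, ttl_in_seconds, value)`.
```ruby
Discourse.redis.setex("onebox_cache_flag:#{domain}", 1.day.to_i, 1)
```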
This is called in DiscourseTagging.tag_topic_by_names only after
all the validations etc. have been passed, and after topic.tags = X
has been called (because this is when the associations are created/
destroyed). The event has the topic, then a second param with the
old and new tag names in arrays for easy inspection.
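An example listener (the event name and payload shape are assumptions based on the description above):
```ruby
DiscourseEvent.on(:topic_tags_changed) do |topic, changes|
  Rails.logger.info(
    "Tags on topic #{topic.id} changed: #{changes[:old_tag_names]} -> #{changes[:new_tag_names]}"
  )
end
```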
I was adding specs to ensure that post actions and uploads are removed for permanently deleted posts.
I noticed that post revisions were not permanently destroyed. I added a migration to fix old data.
The instance of the PostRevisor is passed to the post_edited
event. It is useful to know what has happened to the topic in
this event (we already pass a boolean for topic_changed? but that
is not so helpful by itself).
* DEV: upgrade mini_sql
Even though we are not planning on using this quite yet, mini_sql now supports
prepared statements.
Would like this upgrade merged so we can do some benchmarking.
Note, this will not work with pg_bouncer, but sites that are not using it
may benefit from the feature.
* implement multisite friendly prepared statements
Fixes `Rack::Lint::LintError: a header value must be a String, but the value of 'Retry-After' is a Integer`. (see: 14a236b4f0/lib/rack/lint.rb (L676))
I found it when I got flooded by those warnings a while back in a test-related accident 😉 (Ember CLI tests were hitting a local Rails server at a fast rate)
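The fix boils down to stringifying the header value before it reaches Rack (variable names illustrative):
```ruby
response.headers["Retry-After"] = retry_after_seconds.to_s
```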
browser-update script does not work correctly in some very old browsers
because the contents of `<noscript>` are not accessible in JavaScript.
For these browsers, the server can display the crawler page and add the
browser update notice.
Simply loading the browser-update script in the crawler view is not a
solution because that means all crawlers will also see it.
The regular expression to detect private IP addresses did not always detect them successfully.
Changed to use Ruby's built-in IPAddr.new(ip_address).private? method instead,
which does the same thing but covers all cases.
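For example:
```ruby
require "ipaddr"

IPAddr.new("10.0.0.1").private?    # => true
IPAddr.new("192.168.1.5").private? # => true
IPAddr.new("8.8.8.8").private?     # => false
```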
Users can now pin bookmarks from their bookmark list. This will anchor the bookmark to the top of the list, and show a pin icon next to it. This also applies in the nav bookmarks panel. If there are multiple pinned bookmarks they sort by last updated order.
This PR adds MethodProfiler.output_sql_to_stderr! for easier debugging of SQL queries and their timings from the console.
This is almost the same as ensure_discourse_instrumentation! but should not
be used in production (save for debugging in the console), and it only instruments
PostgreSQL queries. It logs all SQL queries run and their durations
between start and stop.
It also works for super long running queries. If you interrupt the long-running
query the latest query data will still be logged after stopping the profiler.
Usage:
```
MethodProfiler.output_sql_to_stderr!(filter_transactions: true)
MethodProfiler.start
# some code that runs queries
timings = MethodProfiler.stop
```
This PR allows invitations to be used when the DiscourseConnect SSO is enabled for a site (`enable_discourse_connect`) and local logins are disabled. Previously invites could not be accepted with SSO enabled simply because we did not have the code paths to handle that logic.
The invitation methods that are supported include:
* Inviting people to groups via email address
* Inviting people to topics via email address
* Using invitation links generated by the Invite Users UI in the /my/invited/pending route
The flow works like this:
1. User visits an invite URL
2. The normal invitation validations (redemptions/expiry) happen at that point
3. We store the invite key in a secure session
4. The user clicks "Accept Invitation and Continue" (see below)
5. The user is redirected to /session/sso then to the SSO provider URL then back to /session/sso_login
6. We retrieve the invite based on the invite key in secure session. We revalidate the invitation. We show an error to the user if it is not valid. An additional check here for invites with an email specified is to check the SSO email matches the invite email
7. If the invite is OK we create the user via the normal SSO methods
8. We redeem the invite and activate the user. We clear the invite key in secure session.
9. If the invite had a topic we redirect the user there, otherwise we redirect to /
Note that we decided for SSO-based invites the `must_approve_users` site setting is ignored, because the invite is a form of pre-approval, and because regular non-staff users cannot send out email invites or generally invite to the forum in this case.
Also deletes some group invite checks as per https://github.com/discourse/discourse/pull/12353
Highlight.js changed their default branch from master to main. This switches to the @highlightjs/cdn-assets package, thus sidestepping the problem. It's a slightly cleaner integration though (no need to build locally anymore).
Component SCSS compilation should use the current theme's SCSS color
variables as a fallback before using the default core colors.
This is mostly a backwards-compatibility fix, new themes and components
should use CSS custom properties, which offer better support for on-the-fly
color scheme changes (dark mode support, etc.).
This is not a security issue because regular users are not allowed to insert FA icons anywhere in the app. Admins can insert icons via custom badges, but they already have the ability to create themes with JS.
Get rid of deprecation related to Zeitwerk autoloader.
Original PR was reverted because of multisite bug #12381 - thank you @davidtaylorhq for fixing it.
I added the last commit to fix that multisite problem.
Previously, if the topic embed request had a missing username parameter
and SiteSetting.embed_by_username was empty, we would fail to create the
new topic and not show any errors.
Now we will fall back using the following priority:
1. Username parameter
2. SiteSetting.embed_by_username
3. SiteSetting.site_contact_username
4. system user
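A sketch of that fallback chain (not the exact controller code):
```ruby
username = params[:username].presence ||
           SiteSetting.embed_by_username.presence ||
           SiteSetting.site_contact_username.presence

embed_user = User.find_by_username(username) || Discourse.system_user
```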
Staff can send a post to the review queue by clicking the "Flag Post" button next to "Take Action...". Clicking it flags the post using the "Notify moderators" score type and hides it. A custom message will be sent to the user.
It has been observed that doing a HEAD against an Amazon store URL may result in a 405 error being returned.
Skipping the HEAD request may result in an improved oneboxing experience when requesting these URLs.
The user and an admin could create multiple email change requests for
the same user. If any of the requests was validated and it became
primary, the other request could not be deleted anymore.
Added a new task to test if indexes are coherent with a blank database.
This allows us to detect cases where indexes have somehow gotten out of sync.
Use FIX_INDEXES=1 or `rake db:validate_indexes[fix]` to correct the issues it finds.
Detects:
- Badly named indexes that need to be renamed
- Missing indexes
- Extra indexes
Can correct all 3 with the fix option
* FEATURE: allow category group moderators to pin/unpin topics
Category group moderators should be able to pin/unpin any topics within a category where they have appropriate category group moderator permissions.