Set `DEBUG_NODE=1` when running `rake smoke:test` and use your favorite tool to debug the smoke tests. See https://nodejs.org/en/docs/guides/debugging-getting-started/ for more information.
The debugger will break at the beginning of the smoke tests when the environment variable is set.
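For example:

```
DEBUG_NODE=1 rake smoke:test
```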
Commit f69dacf979 introduced a bug to the system: restore still works fine for multisite, but stopped working for non-multisite.
The reason is that the `establish_connection` method gained a check for whether the multisite instance is available:
```
def self.instance
  @instance
end

def self.establish_connection(opts)
  @instance.establish_connection(opts) if @instance
end
```
However, the `reload` method doesn't have that check:
```
def self.reload
  @instance = new(instance.config_filename)
end
```
To solve it, let's ensure we are in a multisite environment before calling `reload`, as sketched below.
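A minimal sketch of that guard, mirroring the check `establish_connection` already has (the actual fix may differ):

```
def self.reload
  # Only reload when a multisite instance actually exists
  @instance = new(instance.config_filename) if @instance
end
```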
While editing the first post, it didn't bump the topic when a new post revision was created, because we wrongly assumed that the hidden tags had changed even when no tags were updated.
Doing `.pluck(:column).first` is a very common pattern in Discourse, and in most cases a limit clause isn't being added. Instead of adding a limit clause to all these call sites, this commit adds two new methods to `ActiveRecord::Relation`:
`pluck_first`, equivalent to `limit(1).pluck(*columns).first`,
and `pluck_first!` which, like other finder methods, raises an exception when no record is found.
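A minimal sketch of what the helpers amount to, based on the equivalences above (the actual implementation may differ):

```
class ActiveRecord::Relation
  def pluck_first(*columns)
    limit(1).pluck(*columns).first
  end

  def pluck_first!(*columns)
    items = limit(1).pluck(*columns)
    # Like other bang finder methods, raise when no record matches
    raise ActiveRecord::RecordNotFound if items.empty?
    items.first
  end
end
```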
- Increase size of the reviewable's conversation excerpt to prevent truncation of the new copy
- Remove the `domain` parameter from the `flag_linked_posts_as_spam` method in the user model since it is no longer needed
- Remove the `domain` interpolation variable from all translation files
- Add "All posts from this user that include links should be reviewed." to server.en.yml for added clarity on why the posts entered the queue
Trying to truncate encoded slugs will mean that we have to keep the URL
valid, which can be tricky as you have to be aware of multibyte
characters.
Since we already have upper bounds for the title, the slug won't grow to more than `title * 6` in the worst case. The slug column in the topics table can store that just fine.
Added a test to ensure that a generated slug is a valid URL too, so we
don't introduce regressions in the future.
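A hypothetical spec along those lines (fabricator, title, and route are assumptions, not the actual test):

```
it "always generates a slug that forms a valid URL" do
  topic = Fabricate(:topic, title: "熱帶風暴 Tropical Storm")
  # URI.parse raises URI::InvalidURIError if raw unicode leaks into the path
  expect { URI.parse("/t/#{topic.slug}/#{topic.id}") }.not_to raise_error
end
```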
Under exceptional situations the automatic draft feature can fail.
This new **hidden, default off** site setting `backup_drafts_to_pm_length` will automatically back up any draft that is saved by the system to a dedicated PM (originating from self).
The body of that PM will contain the text of the reply.
We can enable this feature strategically on sites exhibiting problems to diagnose issues with the draft system and offer a recourse to users who appear to lose drafts. We automatically checkpoint these drafts every 5 minutes, forcing a new revision, so you can revert to older content.
Longer term, we are considering automatically enabling this kind of feature for extremely long drafts, where the risk of losing days of writing is really high.
This reverts commit ab74a50d85.
We really want to upgrade Redis, but discovered some edge cases around failover that we need to test.
Holding off on the upgrade until a bit more testing happens.
This feature amends it so that, instead of using one challenge and honeypot statically per site, we have rotating honeypot and challenge values which change every hour.
This means you must grab a fresh copy of the honeypot and challenge values once an hour or account registration will be rejected.
We also now cycle the value of the challenge after a successful account registration, forcing an extra call to `hp.json` between account registrations.
The client has been made aware of these changes.
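One hypothetical way to derive a value that rotates hourly (this is not the actual implementation, which may instead store expiring values server-side):

```
require "digest"

def hourly_challenge(secret)
  window = Time.now.to_i / 3600 # integer bucket that changes once per hour
  Digest::SHA1.hexdigest("#{secret}:#{window}")
end
```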
Additionally this contains a JavaScript workaround for:
https://bugs.chromium.org/p/chromium/issues/detail?id=987293
This is client-side code, specific to the Chrome user agent, that swaps a PASSWORD-type honeypot for a TEXT-type honeypot.
When an admin changes the site setting `slug_generation_method` to encoded, we weren't really encoding the slug, just allowing non-ASCII (unicode) characters in it.
That causes problems when a user posts a link to a topic without the slug, as our topic controller tries to redirect the user to the correct URL, which contains the slug with unicode characters. Having unicode in the Location header of a response is an RFC violation, and some browsers end up in a redirect loop.
Bug report: https://meta.discourse.org/t/-/125371?u=falco
This commit also checks if a site uses encoded slugs and clears all saved slugs in the db so they can be regenerated using an onceoff job.
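For illustration, percent-encoding keeps the Location header ASCII-only (`URI.encode_www_form_component` stands in for whatever encoder the commit actually uses):

```
require "uri"

URI.encode_www_form_component("日本語のタイトル")
# => "%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%81%AE%E3%82%BF%E3%82%A4%E3%83%88%E3%83%AB"
```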
Our instance used for template rendering needs a lock to ensure there is no race condition where rendering happens on two threads at the same time.
This can lead to local poisoning, which can cause unexpected results in emails.
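A minimal sketch of that kind of guard, assuming a shared renderer object (names here are illustrative, not the actual code):

```
RENDER_MUTEX = Mutex.new

def render_email_template(renderer, template, assigns)
  # Serialize access so two threads never render on the shared instance at once
  RENDER_MUTEX.synchronize { renderer.render(template, assigns) }
end
```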
* [WIP] - default turbo spec env to test
* FEATURE: support for --fast-fail in bin/turbo_rspec
* fast-fail -> fail_fast to match rspec
* Moved thread killing outside of fail-fast check
* Removed failure_count incrementation from fast_fail_met
If the setting is turned on, the user will receive information about the subject: whether it was deleted or requires special access via a group (only if the group is public). Otherwise, the user will receive a generic 404 error message. For now, this change affects only the topics and categories controllers.
This commit also tries to refactor some of the code related to error
handling. To make error pages more consistent (design-wise), the actual
error page will be rendered server-side.
Using popups is becoming increasingly rare. Full page redirects are already used on mobile, and for some providers. This commit removes all logic related to popup authentication, leaving only the full page redirect method.
For more info, see https://meta.discourse.org/t/do-we-need-popups-for-login/127988
Currently, if you try to run `./bin/turbo_rspec`, you will get this error: `There are pending migrations, run rake parallel:migrate`.
The reason is that the command runs in `development` mode, which includes plugin migration files in `ActiveRecord::Migrator.migrations_paths`:
```
["db/migrate",
"/home/lis2/projects/discourse/plugins/discourse-details/db/migrate",
"/home/lis2/projects/discourse/plugins/discourse-details/db/post_migrate",
"/home/lis2/projects/discourse/plugins/discourse-local-dates/db/migrate",
"/home/lis2/projects/discourse/plugins/discourse-local-dates/db/post_migrate",
...
]
```
A workaround would be to run the command in the test environment, like `RAILS_ENV=test ./bin/turbo_rspec`.
Instead, in this PR I propose overriding `migrations_paths` so that only Discourse core migrations are checked.
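A rough sketch of that override (where it hooks into `bin/turbo_rspec`, and the writability of the accessor, are assumptions):

```
# Ignore plugin migrations when checking for pending migrations;
# only consider Discourse core paths.
ActiveRecord::Migrator.migrations_paths = ["db/migrate", "db/post_migrate"]
```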
We preload to ensure as much memory as possible is reused from the unicorn master by the various workers (sidekiq, unicorn) using copy-on-write.
This migrates the preloading code into the Discourse module for easier reuse and adds 3 notable preloading changes:
1. We attempt to localize a string on each site, ensuring we warm up i18n
2. We preload all our templates (compiling .erb to classes)
3. We warm up our search tokenizer, which uses cppjieba, a large memory consumer; this will only cause a warm-up on CJK sites or sites with the special site setting enabled.
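An illustrative sketch of the shape of those warm-up steps (all names below are assumptions, not the actual code):

```
module Discourse
  def self.preload_rails!
    # 1. Localize one string so i18n data is loaded before forking
    I18n.t(:posts)

    # 2. Compile .erb templates to classes pre-fork so workers share
    #    the compiled code via copy-on-write

    # 3. Tokenize a CJK string once so cppjieba loads its large
    #    dictionaries in the master process, not in every worker
  end
end
```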
Previous to this fix we were leaking methods on the internal action view template class per render.
This caused email generation to be very slow and a steady memory leak in the application in sidekiq when sending out emails.
The behavior change is new to Rails 6, so this fix does not need to be backported into stable.
It was possible to add a category to more than one default group, e.g. "default categories muted" and "default categories watching first post".
The bug was caused by category validations inadvertently comparing strings and numbers.
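For illustration, the class of bug described (values made up):

```
muted_ids = ["5", "7"]          # ids arriving as strings, e.g. from params
category_id = 5                 # numeric id from the model
muted_ids.include?(category_id) # => false, so the duplicate check never fired
```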
Overwriting the same file with `convert` does not always work as expected.
Using a temporary file as the destination of the downsize makes this operation much more reliable.
Also switched to using the (more aggressive) 50% resize instead of halving the number of pixels.
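A sketch of the pattern (paths illustrative; the real code may drive ImageMagick differently):

```
require "tempfile"
require "fileutils"

source_path = "image.png" # placeholder

temp = Tempfile.new(%w[downsized .png])
# Write the resized image to a separate destination instead of in place
system("convert", source_path, "-resize", "50%", temp.path)
FileUtils.mv(temp.path, source_path)
```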