Previously, every hour we would run a full scan of the entire DB searching
for expired uploads that need to be moved to the tombstone folder.
This commit amends it so we only run the job twice per clean_orphan_uploads_grace_period_hours.
There is an upper bound of 7 days, so even if the grace period is set really
high, the job will still run at least once a week.
By default we have a 48 hour grace period, so this amends the cleanup to run
daily instead of hourly, eliminating 23 of the 24 daily runs of this ultra
expensive query.
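A minimal sketch of that scheduling rule, assuming the setting above and an illustrative helper name (the actual scheduler code differs):
```
# Run the cleanup twice per grace period, but never less often than weekly.
def orphan_cleanup_interval
  grace_period = SiteSetting.clean_orphan_uploads_grace_period_hours.hours
  [grace_period / 2, 7.days].min
end
```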
The query that counts how many new users there are since a given date
is expensive. It's the least personalized stat and the one we fall back
to last when no better number can be found for the target user.
We give up some accuracy so we can aggressively cache the user counts
that appear in this email.
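A hedged sketch of that caching, assuming a daily TTL and an illustrative helper; the cache key and scope are not the exact change:
```
# Cache the "new users since X" count instead of recomputing it for every email.
def new_user_count_since(date)
  Rails.cache.fetch("new-users-since-#{date.to_date}", expires_in: 1.day) do
    User.real.where("created_at > ?", date).count
  end
end
```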
Break up the single large example into multiple examples, using fab! to maintain performance. On my machine this speeds up the test slightly, and it also makes it more readable.
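For reference, a minimal illustration of the fab! pattern (not the spec that was actually split): fab! fabricates the record once for the whole group instead of once per example.
```
describe "summary email" do
  fab!(:user) { Fabricate(:user) } # created once for the group, reused by every example

  it "addresses the user by username" do
    expect(user.username).to be_present
  end

  it "uses the user's email address" do
    expect(user.email).to include("@")
  end
end
```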
Commit f69dacf979 introduced a bug: restore still works fine for multisite, but stopped working for non-multisite.
The reason is that the `establish_connection` method got a check for whether the multisite instance is available:
```
def self.instance
@instance
end
def self.establish_connection(opts)
@instance.establish_connection(opts) if @instance
end
```
However, the reload method doesn't have that check:
```
def self.reload
@instance = new(instance.config_filename)
end
```
To solve it, let's ensure we are in a multisite environment before calling reload.
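A hedged sketch of that guard; the exact call site in the restore code and the multisite check used are assumptions:
```
# Only ask the multisite connection manager to reload when we are actually
# running in a multisite environment, mirroring the guard in establish_connection.
RailsMultisite::ConnectionManagement.reload if Rails.configuration.multisite
```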
This fix ensures that searches that contain a null byte return a 400
error instead of causing a 500 error.
For some reason, under RSpec we will reach the raise statement inside
the `rescue_from ArgumentError` block, but outside of RSpec the raise
statement is not executed, so a 500 is thrown instead of
reaching the `rescue_from Discourse::InvalidParameters` block in
the application controller.
This fix raises Discourse::InvalidParameters directly from the search
controller instead of relying on `PG::Connection.escape_string` to
raise the `ArgumentError`.
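A minimal sketch of that check, assuming the search term arrives in params[:q]; the exact action and parameter name may differ:
```
# Reject null bytes up front with a 400 instead of letting them blow up later.
if params[:q].present? && params[:q].include?("\u0000")
  raise Discourse::InvalidParameters.new(:q)
end
```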
The payload of a notification webhook is pointless without
knowing which user the notification is for. This fix adds user_id to
the notification serializer so that when you receive a notification
webhook you can properly identify which user the notification is for.
See
https://meta.discourse.org/t/getting-the-target-user-for-notification-webhook-events/129052?u=blake
for more details.
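A sketch of the serializer change; the attribute list is abbreviated and only user_id is the addition being described:
```
class NotificationSerializer < ApplicationSerializer
  attributes :id,
             :user_id, # lets webhook consumers identify the target user
             :notification_type,
             :read,
             :created_at,
             :data
end
```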
While editing the first post, the topic wasn't bumped when a new post revision was created, because we wrongly assumed that the hidden tags had changed even when no tags were updated.
Doing .pluck(:column).first is a very common pattern in Discourse, and in
most cases a limit clause isn't being added. Instead of adding a limit
clause at all these call sites, this commit adds two new methods to
ActiveRecord::Relation:
pluck_first, equivalent to limit(1).pluck(*columns).first,
and pluck_first!, which, like other finder methods, raises an exception
when no record is found.
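A sketch of what these helpers look like under those definitions; the exception raised by pluck_first! is an assumption:
```
class ActiveRecord::Relation
  def pluck_first(*attributes)
    limit(1).pluck(*attributes).first
  end

  def pluck_first!(*attributes)
    rows = limit(1).pluck(*attributes)
    raise ActiveRecord::RecordNotFound if rows.empty?
    rows.first
  end
end
```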
Truncating encoded slugs means we have to keep the URL
valid, which can be tricky as you have to be aware of multibyte
characters.
Since we already have an upper bound on the title length, the slug won't grow
to more than 6 times the title length in the worst case. The slug column in the
topic table can store that just fine.
Added a test to ensure that a generated slug is a valid URL too, so we
don't introduce regressions in the future.
Under exceptional situations the automatic draft feature can fail.
This new **hidden, default off** site setting
`backup_drafts_to_pm_length` will automatically back up any draft that is
saved by the system to a dedicated PM (originating from self).
The body of that PM will contain the text of the reply.
We can enable this feature strategically on sites exhibiting problems, to
diagnose issues with the draft system and offer a recourse to users who
appear to lose drafts. We automatically checkpoint these drafts every 5
minutes, forcing a new revision, so you can revert to old content.
Longer term we are considering automatically enabling this kind of feature
for extremely long drafts, where the risk of losing days of writing is
really high.
This feature amends it so that instead of using one static challenge and
honeypot per site, we have rotating honeypot and challenge values which
change every hour.
This means you must grab a fresh copy of the honeypot and challenge values once
an hour or account registration will be rejected.
We also now cycle the value of the challenge after a successful account
registration, forcing an extra call to hp.json between account registrations.
The client has been made aware of these changes.
Additionally, this contains a JavaScript workaround for:
https://bugs.chromium.org/p/chromium/issues/detail?id=987293
This is client side code, specific to the Chrome user agent, that swaps
a PASSWORD type honeypot with a TEXT type honeypot.
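Purely as an illustration of the hourly rotation (not the actual implementation; the secret source and digest are assumptions):
```
require "digest"

# Stable within an hour, changes on the hour, so registrations have to fetch
# a fresh copy from hp.json at most once per hour.
def hourly_challenge_value(secret)
  hour_bucket = Time.now.to_i / 3600
  Digest::SHA256.hexdigest("#{secret}:#{hour_bucket}")
end
```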
After a small conversation, we decided we can set `public_file_server.enabled` to false in the `test` environment so that it has the same value as in `production`.
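The change amounts to one line in the test environment config (the path shown is the Rails default):
```
# config/environments/test.rb
Rails.application.configure do
  # Match production: let the web server, not Rails, serve static files.
  config.public_file_server.enabled = false
end
```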
When a category has a subcategory, we ensure that nobody who can see the
subcategory is unable to see the parent. However, we don't take into account
the fact that, when no CategoryGroups exist, the default is that
everyone has full permissions.
Moving posts also moves the read state (`topic_users` table) to the destination topic. This changes that behavior so that only users who posted in the destination topic keep the original topic's notification level (probably "watching"); the notification level for all other users will be set to "regular".
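A hedged sketch of that rule; the model names are Discourse's, but the exact query in the post mover is an assumption:
```
# After moving the read state, downgrade everyone who never posted in the
# destination topic to the "regular" notification level.
TopicUser
  .where(topic_id: destination_topic.id)
  .where.not(user_id: destination_topic.posts.select(:user_id))
  .update_all(notification_level: TopicUser.notification_levels[:regular])
```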
Previously the `local_cdn_url` method didn't return the correct CDN URL, so we had written a few incorrect spec tests too.

f92a6f7ac5228342177bf089d269e2f69a69e2f5
When an admin changes the site setting slug_generation_method to
encoded, we weren't really encoding the slug, but just allowing non-ASCII
(unicode) characters in the slug.
That brings problems when a user posts a link to a topic without the slug, as
our topic controller tries to redirect the user to the correct URL containing
the slug with unicode characters. Having unicode in the Location header of a
response is an RFC violation and some browsers end up in a redirection loop.
Bug report: https://meta.discourse.org/t/-/125371?u=falco
This commit also checks whether a site uses encoded slugs and clears all saved
slugs in the DB so they can be regenerated by a onceoff job.
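To illustrate the difference (not the actual Slug code): merely allowing unicode leaves raw multibyte characters in the slug, while encoding percent-escapes them into plain ASCII that is safe in a Location header.
```
require "cgi"

title = "日本語のタイトル"

allowed_unicode = title.tr(" ", "-") # raw unicode: invalid in a Location header
encoded         = CGI.escape(title)  # "%E6%97%A5%E6%9C%AC..." plain ASCII, header-safe
```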