In exceptional situations, the automatic draft feature can fail.
This new **hidden, default off** site setting,
`backup_drafts_to_pm_length`, will automatically back up any draft that is
saved by the system to a dedicated PM (originating from self).
The body of that PM will contain the text of the reply.
We can enable this feature selectively on sites exhibiting problems to
diagnose issues with the draft system and offer recourse to users who
appear to lose drafts. We automatically checkpoint these drafts every 5
minutes, forcing a new revision each time so you can revert to old
content.
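A minimal sketch of what this checkpointing could look like, assuming a PostRevisor-style API with a `force_new_version` option; the method names and the "0 means off" convention are illustrative, not the shipped implementation:

```ruby
# Illustrative only: backs up a draft into the dedicated PM's first post,
# forcing a fresh revision when the last one is older than five minutes.
CHECKPOINT_INTERVAL = 5 * 60 # seconds

def backup_draft_to_pm(post, draft_text, now: Time.now)
  return if SiteSetting.backup_drafts_to_pm_length == 0 # assumed: 0 disables
  return if draft_text.length < SiteSetting.backup_drafts_to_pm_length

  last_revised_at = post.last_version_at || post.created_at
  force_new_version = (now - last_revised_at) >= CHECKPOINT_INTERVAL

  # Forcing a new version keeps old checkpoints revertable.
  PostRevisor.new(post).revise!(
    post.user,
    { raw: draft_text },
    force_new_version: force_new_version
  )
end
```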
Longer term, we are considering automatically enabling this kind of feature
for extremely long drafts, where the risk of losing days of writing is
high.
This reverts commit ab74a50d85.
We really want to upgrade Redis, but discovered some edge cases
around failover that we need to test.
Holding off on the upgrade until a bit more testing happens.
* FIX: Do not encode the URL twice
Now that we encode slugs on the server, we don't need this anymore.
Reverts fe5na33
* FIX: More places to deal with encoded slugs
* the param is a string now, not a hash
* FIX: Handle the nil slug on /categories
* DEV: Add `seeded?` method to identify default categories
* DEV: Use SiteSetting to keep track of seeded categories (sketched below)
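For illustration, tracking seeded categories via site settings could look roughly like this; the specific setting names are assumptions:

```ruby
class Category < ActiveRecord::Base
  # A category counts as "seeded" if its id is recorded in one of the
  # site settings that remember the auto-created default categories.
  def seeded?
    [
      SiteSetting.uncategorized_category_id,
      SiteSetting.meta_category_id,
      SiteSetting.staff_category_id
    ].include?(id)
  end
end
```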
Slugs can be the empty string, but the added index didn't account for
that. This commit changes the migration, stopping it from being unique
so that it can be applied everywhere, and adds another migration that
recreates the index properly.
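Roughly, the pair of migrations might look like this; table, column, and index names here are illustrative:

```ruby
# First migration: non-unique, so it applies cleanly even where
# duplicate (or empty) slugs already exist.
class AddIndexCategoriesOnSlug < ActiveRecord::Migration[6.0]
  def change
    add_index :categories, [:parent_category_id, :slug],
              name: "index_categories_on_slug"
  end
end

# Follow-up migration: recreate the index as unique, but only for
# non-empty slugs, via a partial index.
class RecreateIndexCategoriesOnSlug < ActiveRecord::Migration[6.0]
  def change
    remove_index :categories, name: "index_categories_on_slug"
    add_index :categories, [:parent_category_id, :slug],
              name: "unique_index_categories_on_slug",
              unique: true,
              where: "slug != ''"
  end
end
```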
From the better_errors README:
> Better Errors works by leaving a lot of context in server process memory. If you're using a web server that runs multiple "workers" it's likely that a second request (as happens when you click on a stack frame) will hit a different worker. That worker won't have the necessary context in memory, and you'll see a Session Expired message.
This feature amends it so that, instead of using one static challenge and
honeypot per site, we have rotating honeypot and challenge values that
change every hour.
This means you must grab a fresh copy of the honeypot and challenge values
once an hour, or account registration will be rejected.
We also now cycle the value of the challenge after a successful account
registration, forcing an extra call to hp.json between account registrations.
The client has been made aware of these changes.
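A minimal sketch of the rotation, assuming Redis-backed storage; the module, key, and method names are illustrative:

```ruby
require "securerandom"

module HoneypotRotation
  HONEYPOT_KEY  = "session_honeypot"
  CHALLENGE_KEY = "session_challenge"

  # Both values live in Redis with a one-hour TTL, so clients must
  # refresh them via hp.json at least once an hour.
  def self.honeypot_value(redis)
    rotating_value(redis, HONEYPOT_KEY)
  end

  def self.challenge_value(redis)
    rotating_value(redis, CHALLENGE_KEY)
  end

  # Cycle the challenge after a successful registration, forcing the
  # next signup to fetch a fresh value.
  def self.cycle_challenge!(redis)
    redis.del(CHALLENGE_KEY)
  end

  def self.rotating_value(redis, key)
    redis.get(key) || begin
      value = SecureRandom.hex
      redis.setex(key, 3600, value) # expire after an hour
      value
    end
  end
end
```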
Additionally this contains a JavaScript workaround for:
https://bugs.chromium.org/p/chromium/issues/detail?id=987293
This is client-side code, specific to the Chrome user agent, that swaps
a PASSWORD type honeypot for a TEXT type honeypot.
After a small conversation, we decided that we can set `public_file_server.enabled` to false in the `test` environment so it has the same value as in `production`.
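In Rails terms this is a one-line change, shown here against a standard `config/environments/test.rb`:

```ruby
# config/environments/test.rb
Rails.application.configure do
  # Match production: do not let Rails serve static files itself.
  config.public_file_server.enabled = false
end
```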
- `site.json` now returns a list of group objects, not a `group_names` array (a6714e25)
- `c/1/show.json` now includes `custom_fields: {}`, even if no fields exist (b8bd0316)
Previously, some local micro-benchmarks revealed it was not giving any perf
benefits.
Now that we have upgraded to 2.6.5, we are seeing some segfaults.
No need to carry this dependency around anymore.
We can re-evaluate in the future if it improves perf and the segfaults are fixed.
When a category has a subcategory, we ensure that everyone who can see
the subcategory can also see the parent. However, we didn't take into
account the fact that, when no CategoryGroups exist, the default is that
everyone has full permissions.
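A sketch of the corrected visibility check, under the assumption that a category with no CategoryGroup rows is visible to everyone; the model and method names are illustrative:

```ruby
# Returns :everyone when a category has no explicit permissions,
# otherwise the ids of the groups allowed to see it.
def visible_group_ids(category)
  return :everyone if category.category_groups.empty?
  category.category_groups.pluck(:group_id)
end

# Every group that can see the subcategory must also see the parent.
def subcategory_permissions_valid?(subcategory)
  parent = subcategory.parent_category
  return true if parent.nil?

  parent_ids = visible_group_ids(parent)
  return true if parent_ids == :everyone

  child_ids = visible_group_ids(subcategory)
  child_ids != :everyone && (child_ids - parent_ids).empty?
end
```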
Moving posts also moves the read state (`topic_users` table) to the destination topic. This changes that behavior so that only users who posted in the destination topic keep the original topic's notification level (probably "watching"); the notification level for all other users is set to "regular".
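A rough sketch of the change, assuming Discourse-style `TopicUser` records and notification levels; the exact query is an illustration, not the shipped code:

```ruby
# After moving posts, downgrade read state for everyone who has not
# actually posted in the destination topic.
def downgrade_moved_notification_levels(destination_topic)
  poster_ids = destination_topic.posts.distinct.pluck(:user_id)

  TopicUser
    .where(topic_id: destination_topic.id)
    .where.not(user_id: poster_ids)
    .update_all(notification_level: TopicUser.notification_levels[:regular])
end
```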
Previously the `local_cdn_url` method didn't return the correct CDN URL, so we had written a few incorrect spec tests too.

f92a6f7ac5228342177bf089d269e2f69a69e2f5