Some cloud providers (e.g. Google Memorystore) do not support any CLIENT commands.
By setting `:id` to nil in the redis config hash we can avoid these commands.
This adds a special global setting GCE users can enable:
`DISCOURSE_REDIS_SKIP_CLIENT_COMMANDS = true`
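A minimal sketch of how this could wire through to the client options, assuming a `GlobalSetting.redis_skip_client_commands` accessor (in Discourse, `DISCOURSE_`-prefixed env vars map to `GlobalSetting` methods); redis-rb only issues `CLIENT SETNAME` when `:id` is present:

```ruby
# sketch: with :id set to nil, redis-rb skips CLIENT SETNAME entirely
config = {
  host: GlobalSetting.redis_host,
  port: GlobalSetting.redis_port,
  db:   GlobalSetting.redis_db
}
config[:id] = nil if GlobalSetting.redis_skip_client_commands
redis = Redis.new(config)
```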
This is a possible solution for https://meta.discourse.org/t/user-api-keys-specification/48536/19
This allows user-api-key requests to work without a redirect url.
Instead, the encrypted payload will just be displayed after creation (which can be copied
and pasted into an env var for a CLI, for example).
Also: Show instructions when creating user-api-key w/out redirect
This adds a view to show instructions when requesting a user-api-key
without a redirect. It adds an ERB template and a JSON format.
It also adds an i18n key, `user_api_key.instructions`, to server.en.yml.
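A rough sketch of the controller branch, assuming the redirect arrives in an `auth_redirect` param and `@payload` holds the encrypted payload (param and template names here are illustrative, not the exact implementation):

```ruby
# sketch: render instructions inline when no redirect url was supplied
if params[:auth_redirect].blank?
  respond_to do |format|
    format.html { render :show, locals: { payload: @payload } }
    format.json { render json: { payload: @payload } }
  end
else
  redirect_to "#{params[:auth_redirect]}?payload=#{CGI.escape(@payload)}"
end
```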
We have a periodic job that regularly rebakes old posts. This is
used to trickle in updates to cooked markdown. The problem is that each rebake
can issue multiple background jobs (post process and pull hotlinked images).
Previously we had no per-cluster limit, so a cluster running 100s of sites could
flood the sidekiq queue with rebake-related jobs.
The new system introduces a hard limit of 300 rebakes per 15 minutes across a
cluster to ensure the sidekiq queue is not dominated by this.
We also reduced `rebake_old_posts_count` to 80, which is a safer default.
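A sketch of how the cap could be enforced with Discourse's `RateLimiter` helper, where `global: true` keys the counter per cluster rather than per site (the key name and query are illustrative):

```ruby
# sketch: stop rebaking once the cluster-wide 15 minute budget is spent
limiter = RateLimiter.new(
  nil, "global_periodical_rebake_limit", 300, 15.minutes, global: true
)

Post.where("baked_version IS NULL OR baked_version < ?", Post::BAKED_VERSION)
    .order(updated_at: :asc)
    .limit(SiteSetting.rebake_old_posts_count)
    .each do |post|
  break if !limiter.can_perform?
  post.rebake!
  limiter.performed!
end
```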
This avoids `require_dependency` for method_profiler and the anon cache.
It means that if there is any change to these files, the reloader will not pick it up.
Previously the reloader was picking up the anon cache twice, causing it to double load on boot.
This caused warnings.
Long term, my plan is to give up on `require_dependency` and instead use:
https://github.com/Shopify/autoload_reloader
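Concretely, that means loading these files with plain `require` once at boot, a sketch (file paths are illustrative):

```ruby
# before: tracked by the Rails reloader, could be picked up twice
# require_dependency 'method_profiler'
# require_dependency 'middleware/anonymous_cache'

# after (sketch): loaded exactly once at boot, invisible to the reloader,
# so edits to these files need a full restart to take effect
require 'method_profiler'
require 'middleware/anonymous_cache'
```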
This validation makes sure that the s3_upload_bucket and the
s3_backup_bucket have different values. The backup bucket is
allowed to be a subfolder of the upload bucket. The other way
around is forbidden because the backup system searches by
prefix and would return all files stored within the backup
bucket and its subfolders.
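A minimal sketch of the rule, assuming plain string bucket values (the helper name is hypothetical; the real change lives in a settings validator):

```ruby
# sketch: backup-inside-upload is fine, upload-inside-backup is not,
# and identical buckets are rejected outright
def backup_bucket_allowed?(upload_bucket, backup_bucket)
  upload = upload_bucket.to_s.downcase
  backup = backup_bucket.to_s.downcase

  return false if upload.empty? || backup.empty?
  return false if upload == backup                  # must differ
  return false if upload.start_with?("#{backup}/")  # prefix search would list uploads

  true
end

backup_bucket_allowed?("bucket", "bucket/backups")  # => true
backup_bucket_allowed?("bucket/uploads", "bucket")  # => false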
* Dashboard no longer times out when Amazon S3 is used for backups
* Storage stats are now a proper report with the same caching rules
* Changing the backup_location or s3_backup_bucket, or creating or deleting backups, removes the report from the cache
* It shows the number of backups and the backup location
* It shows the used space for the correct backup location instead of always showing used space on local storage
* It shows the date of the last backup as a relative date
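A sketch of the invalidation side of the list above, assuming a cache key per report and Discourse's `DiscourseEvent` hooks (the key and helper names are illustrative):

```ruby
# sketch: drop the cached storage_stats report when anything affecting it changes
def clear_storage_stats_cache
  Discourse.cache.delete("reports:storage_stats")
end

# e.g. wired to site setting changes; backup create/destroy would call
# clear_storage_stats_cache the same way
DiscourseEvent.on(:site_setting_changed) do |name, _old_value, _new_value|
  clear_storage_stats_cache if %i[backup_location s3_backup_bucket].include?(name)
end
```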
This provides us with instrumentation that went missing after the Rails upgrade.
The latest version of Rails uses exec_params internally, which is no longer
routed to the methods intercepted by mini profiler 1.0.0.
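A sketch of the kind of interception this restores, aliasing `PG::Connection#exec_params` the way rack-mini-profiler patches the other query methods (treat this as illustrative, not the shipped patch):

```ruby
# sketch: route exec_params through the profiler like exec/async_exec
class PG::Connection
  alias_method :exec_params_without_profiling, :exec_params

  def exec_params(*args, &blk)
    return exec_params_without_profiling(*args, &blk) unless SqlPatches.should_measure?

    start = Time.now
    result = exec_params_without_profiling(*args, &blk)
    elapsed_ms = (Time.now - start) * 1000.0
    Rack::MiniProfiler.record_sql(args[0], elapsed_ms)
    result
  end
end
```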
The wizard searches for:
* a topic with the "is_welcome_topic" custom field
* a topic with the correct slug for the current default locale
* a topic with the correct slug for the English locale
* the oldest globally pinned topic
It gives up if it doesn't find any of the above.
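A sketch of that lookup chain (the custom field name is from this change; the i18n key and scopes are illustrative):

```ruby
# sketch: try each strategy in order, returning the first match
def find_welcome_topic
  if topic_id = TopicCustomField.where(name: "is_welcome_topic").pluck(:topic_id).first
    return Topic.find_by(id: topic_id)
  end

  Topic.find_by(slug: I18n.t("welcome_topic_slug", locale: SiteSetting.default_locale)) ||
    Topic.find_by(slug: I18n.t("welcome_topic_slug", locale: :en)) ||
    Topic.where(pinned_globally: true).order(:created_at).first
end
```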