This allows us to run regular rebakes without starving the normal queue.
It also adds the ability to specify a queue with `Jobs.enqueue`, so
we can enqueue a job at a lower priority using the `queue` arg.
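A rough usage sketch (the job symbol and queue name here are illustrative,
not taken from this patch):

```ruby
# Enqueue a rebake-style job on a dedicated low-priority queue so it
# does not compete with jobs on the default queue. Both :process_post
# and the "low" queue name are assumptions for illustration.
Jobs.enqueue(:process_post, post_id: post.id, queue: "low")
```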
Before this patch, a high trust level user could flag something
and have an action taken immediately, while also skipping the flag queue.
Now, if a TL3/TL4 flag causes an action, the flag will skip the minimum
visibility check and allow staff to review it.
FIX: buildTranslationTree was erroring when translations overlapped (e.g. ":-)" and ":-))")
FIX: emoji translations weren't working properly when translations overlapped
Previously we only allowed one image optimization per machine, which meant
there was cross-talk between avatar resizing and Sidekiq. This could lead to
significant starvation when the optimized image version changed, which in
turn could block the Sidekiq queue.
This increases the allowed load on machines, but that is preferable to
having cross-talk between avatar resizing and Sidekiq.
Some cloud providers (e.g. Google Memorystore) do not support any CLIENT commands.
By setting :id to nil in the redis config hash we can avoid these commands.
This adds a special global setting GCE users can enable:
`DISCOURSE_REDIS_SKIP_CLIENT_COMMANDS = true`
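A minimal sketch of how this could map onto the redis-rb config (the
`GlobalSetting` accessor name is assumed from the env var above):

```ruby
# redis-rb issues CLIENT SETNAME on connect when :id is set; forcing
# :id to nil skips the call, which Google Memorystore cannot handle.
redis_config = { host: GlobalSetting.redis_host, port: GlobalSetting.redis_port }
redis_config[:id] = nil if GlobalSetting.redis_skip_client_commands # assumed accessor
Redis.new(redis_config)
```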
We have a periodic job that regularly rebakes old posts. This is used to
trickle in updates to cooked markdown. The problem is that each rebake can
issue multiple background jobs (post processing and pulling hotlinked
images). Previously we had no per-cluster limit, so a cluster running
hundreds of sites could flood the Sidekiq queue with rebake-related jobs.
The new system introduces a hard limit of 300 rebakes per 15 minutes across
a cluster to ensure Sidekiq is not dominated by this.
We also reduced `rebake_old_posts_count` to 80, which is a safer default.
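An illustrative sketch of such a cluster-wide cap (the Redis key and method
names are hypothetical; the 300 / 15-minute budget is from this commit):

```ruby
MAX_REBAKES_PER_PERIOD = 300
PERIOD_SECONDS = 15 * 60

# Shared Redis counter: every site in the cluster increments the same
# key, so the rebake budget is enforced cluster-wide rather than per site.
def rebake_allowed?(redis)
  count = redis.incr("global_rebake_count")
  redis.expire("global_rebake_count", PERIOD_SECONDS) if count == 1
  count <= MAX_REBAKES_PER_PERIOD
end
```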
This reverts commit 993f847a2c.
There is an edge case where the link click redirect fails when the URL has a trailing slash. We need to figure out a better fix for this.
Previously we had no idea which algorithm generated a thumbnail; this starts tracking the version.
We also bumped the version to force all optimized images to be regenerated. This is important because we recently introduced pngquant, which results in much smaller images.
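A sketch of what version tracking enables (the regeneration helper is
hypothetical; `OptimizedImage` is the existing model):

```ruby
OPTIMIZED_IMAGE_VERSION = 2 # bump to invalidate previously generated thumbnails

def thumbnail_for(upload, width, height)
  thumb = OptimizedImage.find_by(upload_id: upload.id, width: width, height: height)
  # A nil or stale version means the thumbnail predates the current
  # algorithm (e.g. pre-pngquant) and must be rebuilt.
  if thumb.nil? || thumb.version.to_i < OPTIMIZED_IMAGE_VERSION
    thumb = regenerate_thumbnail(upload, width, height) # hypothetical helper
  end
  thumb
end
```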
This feature runs optimized images through pngquant, which yields significant size savings for resized images. Effectively the only impact is that the color palette on small resized images is reduced to 256 colors.
To ensure safety we only apply this optimization to images smaller than 500KB.
This commit also makes a bunch of image specs less fragile.
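A sketch of the size guard described above (the helper name is illustrative;
the 500KB cutoff is from the commit):

```ruby
MAX_PNGQUANT_BYTES = 500_000

def pngquant!(path)
  return unless File.extname(path).downcase == ".png"
  return unless File.size(path) < MAX_PNGQUANT_BYTES # safety cutoff from above
  # Quantize to a <= 256-color palette, writing to a temp file first,
  # then swap it in over the original.
  tmp = "#{path}.tmp"
  system("pngquant", "--force", "--output", tmp, "--", path)
  File.rename(tmp, path) if File.exist?(tmp)
end
```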
Previously, if an upload was missing its width and height we would calculate
them on first use, but we forgot to save the result to the database.
This was particularly bad on the home page, because older category images
were missing dimensions.
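A sketch of the fix (the helper and path accessor are illustrative; FastImage
is assumed for sizing):

```ruby
def dimensions_for(upload)
  if upload.width.blank? || upload.height.blank?
    w, h = FastImage.size(upload.local_path) # hypothetical path accessor
    # The missing piece before this fix: persist the computed values so
    # every later lookup reads them straight from the database.
    upload.update_columns(width: w, height: h)
  end
  [upload.width, upload.height]
end
```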
This generates a 10x10 PNG thumbnail for each lightboxed image.
If Image Lazy Loading is enabled (via the IntersectionObserver API) then
we'll load the low-res version while the image is offscreen. As the image
scrolls into view we'll swap it for the high-res version.
We use a WeakMap to track the old image attributes. This uses much less
memory than storing them as `data-*` attributes and swapping them
back and forth all the time.
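The server-side placeholder generation could look roughly like this (the
ImageMagick invocation is illustrative; the commit does not show its exact
resize pipeline):

```ruby
# Shrink the lightboxed original down to a 10x10 PNG; "!" forces the
# exact geometry and -strip drops metadata to keep the placeholder tiny.
def generate_placeholder(source, dest)
  system("convert", source, "-resize", "10x10!", "-strip", dest)
end
```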
* The dashboard doesn't time out anymore when Amazon S3 is used for backups
* Storage stats are now a proper report with the same caching rules
* Changing backup_location or s3_backup_bucket, or creating and deleting backups, removes the report from the cache
* It shows the number of backups and the backup location
* It shows the used space for the correct backup location instead of always showing used space on local storage
* It shows the date of the last backup as a relative date
`SiteSerializer#is_readonly` is cached for an anonymous user so we have
to clear the cache when disabling readonly mode. Otherwise, the site may
appear to be in readonly mode for an extended period of time.
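A sketch of the fix (the cache-clearing hook is the important part; the flag
key and method names are assumed):

```ruby
def disable_readonly_mode
  Discourse.redis.del(READONLY_MODE_KEY) # hypothetical flag key
  # Without this, anonymous visitors keep being served the cached
  # `is_readonly: true` site payload until it expires on its own.
  Site.clear_anon_cache!
end
```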