Previously we were only catching one type of data export; the new job will
catch every CSV export we have.
The job is pretty safe, as it filters on the system user id / a PM with a
particular slug.
Historically we would keep the user data export posts around but delete
the uploads.
This leaves a lot of broken uploads in the system.
This rake task allows us to clean up the old mess.
The filename on disk may not match the SHA of the file in some old 1.x setups.
This will attempt to recover the file even if the SHA1 mismatches. We had an
old bug that caused this.
This also adds `uploads:fix_relative_upload_links`, which attempts to replace
URLs of the format `/upload/default/...` with `upload://`.
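A rough sketch of what such a task can do; the matching pattern and the
`Upload#short_url` helper are assumptions here, not the actual implementation:

```ruby
# Rewrite relative upload links in post raw to the short upload:// form.
Post.where("raw LIKE '%/upload/default/%'").find_each do |post|
  new_raw = post.raw.gsub(%r{/upload/default/[^\s)]+}) do |relative_url|
    upload = Upload.find_by(url: relative_url)
    upload ? upload.short_url : relative_url # leave the link alone if we can't map it
  end

  post.update!(raw: new_raw) if new_raw != post.raw
end
```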
Rebaking posts can be expensive, so instead of blocking here we simply mark
posts for rebake.
We can then work through them faster in other jobs, and this should not
hold up a datacenter migration.
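For illustration, marking for rebake typically just clears the baked version so
the background jobs re-render the posts later; `posts` below stands for whatever
relation the migration touches, and the `baked_version` column name is an
assumption about the schema:

```ruby
# Blocking approach: rebake inline while migrating (expensive).
posts.find_each(&:rebake!)

# Cheaper approach used here: flag the posts and let the periodic
# rebake jobs re-render them in the background.
posts.update_all(baked_version: nil)
```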
Previously this rake task would only run on a single site, which is a bit
misleading.
This also adds `VERBOSE=1 rake posts:missing_uploads`, which will provide a
full report of missing uploads.
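A sketch of how a task like this can cover every site in a multisite setup,
assuming the standard `RailsMultisite::ConnectionManagement.each_connection`
helper; `find_missing_uploads` is a hypothetical helper for the report:

```ruby
desc "Report posts with missing uploads on every site"
task "posts:missing_uploads" => :environment do
  verbose = ENV["VERBOSE"].present?

  RailsMultisite::ConnectionManagement.each_connection do |db|
    missing = find_missing_uploads # hypothetical: returns [post_id, url] pairs
    puts "#{db}: #{missing.size} missing uploads"
    missing.each { |post_id, url| puts "  post #{post_id}: #{url}" } if verbose
  end
end
```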
This allows you to wait up to N seconds for the smoke test URL to come up.
In some cases you want to kick off the smoke test before the smoke test env
is ready to accept connections.
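A minimal sketch of the wait loop; the `WAIT_FOR_URL_SECONDS` variable name and
the URL handling are assumptions, not the real flags:

```ruby
require "net/http"

# Poll the smoke test URL until it answers or the timeout elapses.
url = URI(ENV["URL"] || "http://localhost:3000")
deadline = Time.now + ENV.fetch("WAIT_FOR_URL_SECONDS", "0").to_i

begin
  Net::HTTP.get_response(url)
rescue Errno::ECONNREFUSED, SocketError
  if Time.now < deadline
    sleep 1
    retry
  end
  raise "Smoke test URL #{url} did not come up in time"
end
```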
This reduces the chance of errors where consumers of strings mutate inputs,
and reduces the memory usage of the app.
The test suite passes now, but there may be some stuff left, so we will run
a few sites on a branch prior to merging.
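Presumably this is about freezing string literals (an assumption on my part);
a frozen literal turns accidental in-place mutation of an input into an
immediate error, and identical literals can share one object:

```ruby
# frozen_string_literal: true

GREETING = "hello"

def shout(str)
  str.upcase! # mutates the caller's string in place
end

shout(GREETING) # => raises FrozenError instead of silently corrupting GREETING
```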
`#find` raises an error if the id given to it is invalid. As a result,
the conditional to check whether a `group` or `badge` is `present?` will
not be executed if any of the ids are invalid.
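For context, this is standard ActiveRecord behavior: `find` raises on a missing
id, while `find_by` returns `nil`, which is what lets the `present?` guard
actually run. A sketch of the difference (the param name is a placeholder):

```ruby
# Raises ActiveRecord::RecordNotFound for an invalid id,
# so the present? guard below is never reached.
badge = Badge.find(params[:badge_id])
raise Discourse::InvalidParameters.new(:badge_id) unless badge.present?

# Returns nil for an invalid id, letting the guard do its job.
badge = Badge.find_by(id: params[:badge_id])
raise Discourse::InvalidParameters.new(:badge_id) unless badge.present?
```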
Follow up to
6ba914033c.
#b9d82818 makes enormous improvements to our bootstrap time; however, we are
going to keep compression for now despite the cost and watch it for a few weeks.
* Do not brotli all locales in precompile
* Try without gzip
* uglify without compressing, always gzip
* skip uglify for unused locales
* FIX: Uglifier needs harmony for ES6 compatibility (see the sketch after this list)
* Use node uglifier if available
* Minor refactor
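On the harmony point: the Uglifier gem needs its ES6 ("harmony") mode enabled to
minify modern syntax, roughly like this (the exact config location is an
assumption about where this is set):

```ruby
# config/environments/production.rb (location is an assumption)
config.assets.js_compressor = Uglifier.new(harmony: true)
```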
There is no point moving all optimized image files to the tombstone when the
store is changing. Also, `destroy_all` can easily blow memory since we are not
loading in batches.
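For illustration, the memory issue is that `destroy_all` instantiates every
matching record at once; iterating in batches keeps the working set small. A
sketch (the scope is a stand-in):

```ruby
scope = OptimizedImage.all # stand-in for whatever scope is being removed

# destroy_all loads every matching record before destroying any,
# which can blow memory on large sites.
scope.destroy_all

# find_each loads records in batches (1000 by default), keeping memory flat.
scope.find_each { |image| image.destroy! }
```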
This removes all uses of both `send` and `public_send` from consumers of
SiteSetting and instead introduces a `get` helper for dynamic lookup.
This leads to much cleaner and safer code in the long term, as we always
explicitly test that a site setting really exists before sending an arbitrary
string to the class.
It also removes a couple of risky stubs from the auth provider test.
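A minimal sketch of what such a `get` helper can look like; the exact
validation performed (the `has_setting?` guard) is an assumption here:

```ruby
# Validate the name before dispatching, instead of public_send-ing an
# arbitrary string at the class.
def self.get(name)
  raise Discourse::InvalidParameters.new(name) unless has_setting?(name)
  public_send(name)
end

# e.g. SiteSetting.get(setting_name) rather than SiteSetting.public_send(setting_name)
```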