Commit Graph

159 Commits

Martin Brennan
e4350bb966
FEATURE: Direct S3 multipart uploads for backups (#14736)
This PR introduces a new `enable_experimental_backup_uploads` site setting (default false and hidden), which when enabled alongside `enable_direct_s3_uploads` will allow for direct S3 multipart uploads of backup .tar.gz files.

To make multipart external uploads work with both the S3BackupStore and the S3Store, I've had to move several methods out of S3Store and into S3Helper, including:

* presigned_url
* create_multipart
* abort_multipart
* complete_multipart
* presign_multipart_part
* list_multipart_parts

S3Store and S3BackupStore now either delegate directly to S3Helper or wrap these calls in their own specialized methods. FileStore.temporary_upload_path no longer depends on upload_path, so it can be used interchangeably between the stores. A similar change was made on the frontend: the multipart-related JS code moved out of ComposerUppyUpload and into a mixin of its own, so it can also be used by UppyUploadMixin.
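
A minimal sketch of that delegation pattern, with method names taken from the list above but signatures and bodies assumed for illustration:

```
# Hypothetical sketch of the delegation described above; method names come from
# the list in this commit message, but signatures and bodies are assumed.
class S3Helper
  def create_multipart(key, content_type)
    # In the real code this would call the AWS SDK's create_multipart_upload.
    { upload_id: "example-upload-id", key: key, content_type: content_type }
  end

  def abort_multipart(key:, upload_id:)
    # Would call abort_multipart_upload on the S3 client.
    { aborted: true, key: key, upload_id: upload_id }
  end
end

class S3Store
  def initialize(s3_helper = S3Helper.new)
    @s3_helper = s3_helper
  end

  # Delegate straight to the shared helper so S3BackupStore can reuse the same code.
  def create_multipart(key, content_type)
    @s3_helper.create_multipart(key, content_type)
  end

  def abort_multipart(key:, upload_id:)
    @s3_helper.abort_multipart(key: key, upload_id: upload_id)
  end
end

p S3Store.new.create_multipart("backups/default/backup.tar.gz", "application/gzip")
```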

Some changes to ExternalUploadManager had to be made here as well. The backup direct uploads do not need an Upload record made for them in the database, so they can be moved to their final S3 resting place when completing the multipart upload.

This changeset is not perfect; it introduces some special cases in UploadController to handle backups that were previously handled in BackupController, because UploadController is where the multipart routes are located. A subsequent pull request will pull these routes into a module or some other sharing pattern, along with hooks, so the backup controller and the upload controller (and any future controllers that may need them) can include these routes in a nicer way.
2021-11-11 08:25:31 +10:00
Martin Brennan
9a72a0945f
FIX: Ensure CORS rules exist for S3 using rake task (#14802)
This commit introduces a new s3:ensure_cors_rules rake task
that is run as a prerequisite to s3:upload_assets. This rake
task calls out to the S3CorsRulesets class to ensure that
the 3 relevant sets of CORS rules are applied, depending on
site settings:

* assets
* direct S3 backups
* direct S3 uploads

This works for both Global S3 settings and Database S3 settings
(the latter set directly via SiteSetting).

As it stands, only one rule can be applied, which is
generally the assets rule as it is applied first. This
commit changes the ensure_cors! method to be able to apply
new rules as well as the existing ones.

This commit also slightly changes the existing rules to cover
direct S3 uploads via uppy, especially multipart, which requires
some more headers.
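
A hedged, plain-Ruby sketch of the selection logic such a task could perform; the ruleset names and helper below are assumptions, not the actual S3CorsRulesets API:

```
# Hypothetical sketch of choosing which CORS rulesets to apply before uploading
# assets; the names and apply_ruleset helper are illustrative only.
def rulesets_to_apply(backup_uploads_enabled:, direct_uploads_enabled:)
  rules = [:assets] # always needed for CDN-served assets
  rules << :backup_direct_uploads if backup_uploads_enabled
  rules << :direct_uploads if direct_uploads_enabled
  rules
end

def apply_ruleset(name)
  puts "applying CORS ruleset: #{name}" # would call S3CorsRulesets in the real task
end

# s3:ensure_cors_rules would run this as a prerequisite of s3:upload_assets.
rulesets_to_apply(backup_uploads_enabled: true, direct_uploads_enabled: true)
  .each { |name| apply_ruleset(name) }
```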
2021-11-08 09:16:38 +10:00
Martin Brennan
a059c7251f
DEV: Add tests to S3Helper.ensure_cors and move rules to class (#14767)
In preparation for adding automatic CORS rules creation
for direct S3 uploads, I am adding tests here and moving the
CORS rule definitions into a dedicated class so they are all
in the one place.

There is a problem with ensure_cors! as well -- if there is
already a CORS rule defined (presumably the asset one) then
we do nothing and do not apply the new rule. This means that
the S3BackupStore.ensure_cors method does nothing right now
if the assets rule is already defined, and it will mean the
same for any direct S3 upload rules I add for uppy. We need
to be able to add more rules, not just one.

This is not a problem on our hosting because we define the
rules at an infra level.
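
A hedged sketch of the "add more rules, not just one" behaviour; the class and the in-memory merging are illustrative stand-ins, not the real S3Helper code:

```
# Hypothetical sketch of merging new CORS rules with rules that already exist on
# the bucket, instead of bailing out whenever any rule is present. The @rules
# array stands in for the AWS SDK's get_bucket_cors / put_bucket_cors calls.
class CorsApplier
  def initialize(existing_rules = [])
    @rules = existing_rules
  end

  def ensure_cors!(new_rules)
    missing = new_rules - @rules
    return false if missing.empty? # everything requested is already applied

    @rules += missing              # would be persisted with put_bucket_cors
    true
  end

  attr_reader :rules
end

applier = CorsApplier.new([{ allowed_methods: ["GET"], allowed_origins: ["*"] }])
applier.ensure_cors!([{ allowed_methods: ["PUT", "POST"], allowed_origins: ["*"] }])
p applier.rules # both the existing assets rule and the new upload rule remain
```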
2021-11-01 08:23:13 +10:00
Martin Brennan
b659e94a8e
DEV: Delete vacate_legacy_prefix_backups code (#14735)
This code, introduced in 3037617327, is no
longer needed, as all of the backups have been
migrated.
2021-10-28 07:53:21 +10:00
Bianca Nenciu
e2c415457c
FEATURE: Attach backup log as upload (#13849)
Discourse automatically sends a private message after a backup or
restore finishes. The private message used to contain the log inline
even when it was very long. A very long log can create issues because
the post would exceed the maximum allowed post length. Now, when the
log is too long, Discourse tries to create an upload with the logs.
If that fails, it trims the log and inlines it.
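
A simplified sketch of that fallback chain; the names and the length limit are assumptions for illustration, not the actual implementation:

```
# Hypothetical sketch of the fallback order described above: inline the log if
# it fits, otherwise attach it as an upload, otherwise trim it and inline that.
MAX_POST_LENGTH = 32_000 # assumed limit, not Discourse's actual setting value

def logs_for_message(logs)
  return logs if logs.length <= MAX_POST_LENGTH

  upload_url = try_create_log_upload(logs)
  return "[backup.log](#{upload_url})" if upload_url

  logs[0, MAX_POST_LENGTH] + "\n(log trimmed)"
end

def try_create_log_upload(_logs)
  nil # pretend the upload creation failed so the trimming path runs
end

puts logs_for_message("x" * 40_000).length
```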
2021-08-03 20:06:50 +03:00
Gerhard Schlager
a37b5b177b
FIX: Remapping of uploads could fail during restore of backup (#13897)
Sidekiq should be paused until the end of the restore, otherwise it might interfere (deadlock) with the remap of upload URLs in posts.
2021-07-30 11:25:28 +02:00
David Taylor
d2c5165052 FIX: Check all migrations for dropped columns/tables during restore
Previously only post-deploy migrations were being checked for DROPPED_(COLUMNS|TABLES) constants
2021-06-23 17:43:38 +01:00
David Taylor
3436fef5e3
FIX: Update database_restorer to avoid shell use (#12731)
Follow-up to 0ec5fd5262
2021-04-16 13:39:45 +01:00
Gerhard Schlager
58c218a4bf
FIX: Remap old S3 endpoints during backup restore (#12276)
It also starts outputting exceptions to the console.
2021-03-03 21:10:09 +01:00
Gerhard Schlager
154bfcf750
FEATURE: Include details about S3 backup storage errors (#12257) 2021-03-02 15:29:37 +01:00
Gerhard Schlager
4b05fc2d2d
FIX: Restoring backup via UI was broken (#12089) 2021-02-15 16:46:44 +01:00
Gerhard Schlager
0b05302cfe FIX: Restoring could fail due to missing path 2021-02-09 17:28:03 +01:00
Gerhard Schlager
4f5ea4fbde FIX: Restoring backup could fail due to missing uploads
Clearing the theme and emoji caches might require uploaded files.
2021-02-09 17:28:03 +01:00
Gerhard Schlager
4d719725c8
FEATURE: Allow overriding the backup location when restoring via CLI (#12015)
You can use `discourse restore --location=local FILENAME` if you want to restore a backup that is stored locally even though the `backup_location` has the value `s3`.
2021-02-09 16:02:44 +01:00
Gerhard Schlager
d5ef6188ed
PERF: Disable Sidekiq only during database restore (#10857)
It pauses Sidekiq, clears Redis (namespaced to the current site), clears Sidekiq jobs for the current site, restores the database and unpauses Sidekiq. Previously it stayed paused until the end of the restore.

Redis is cleared because we don't want any old data lying around (e.g. old Sidekiq jobs). Most data in Redis is prefixed with the name of the multisite, but Sidekiq jobs in a multisite are all stored in the same keys. So, deleting those jobs requires a little bit more logic.
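
Sketched in Ruby, the sequence above looks roughly like this (the method is an illustrative outline, not the restorer's actual code):

```
# Hypothetical outline of the narrowed pause window: Sidekiq is paused only
# around the database restore itself, and the site's Redis data and queued
# Sidekiq jobs are cleared before the dump is restored. Each puts stands in
# for the real operation.
def restore_database!
  puts "pause Sidekiq"
  puts "clear Redis keys namespaced to the current site"
  puts "delete the current site's Sidekiq jobs from the shared queue keys"
  puts "restore the database dump"
ensure
  puts "unpause Sidekiq" # previously this stayed paused until the end of the whole restore
end

restore_database!
```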
2020-10-16 15:19:02 +02:00
Gerhard Schlager
1febf11362 FIX: Backup didn't work anymore after a running backup was canceled 2020-10-13 19:48:53 +02:00
Gerhard Schlager
ac70c48be4 FIX: Prevent "uploads are missing in S3" alerts after restoring a backup
After restoring a backup, it takes up to 48 hours for uploads stored on S3 to appear in the S3 inventory. This change avoids alerts about missing uploads by preventing the EnsureS3UploadsExistence job from running in the first 48 hours after a restore. During the restore, it deletes the count of missing uploads from the PluginStore so that an alert isn't triggered by an old number.
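
A hedged sketch of the 48-hour guard; the key name and the PluginStore shape below are assumptions for illustration:

```
# Hypothetical sketch: skip the missing-uploads check if a restore finished
# within the last 48 hours.
require "time"

LAST_RESTORE_KEY = "last_restore_date" # assumed key name

def skip_missing_uploads_check?(plugin_store, now: Time.now)
  restored_at = plugin_store[LAST_RESTORE_KEY]
  restored_at && now - Time.parse(restored_at) < 48 * 3600
end

store = { LAST_RESTORE_KEY => (Time.now - 3600).iso8601 } # restored one hour ago
puts skip_missing_uploads_check?(store) # => true, so EnsureS3UploadsExistence does nothing
```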
2020-09-10 21:37:48 +02:00
Gerhard Schlager
c0cd69d280
DEV: Restoring backups didn't work on macOS
This is a follow-up to f51ccea028 because specs were failing on macOS. macOS uses bsdtar by default, which doesn't support the --transform option; the -s option is supported and works the same way.
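
A sketch of picking the flag when building the tar command in Ruby; the patterns and detection below are assumed examples, not the exact ones used by the backup code:

```
# Hypothetical sketch: pick the path-rewriting flag supported by the local tar.
# GNU tar uses --transform with a sed-style expression; bsdtar (the macOS
# default) uses -s with an equivalent substitution pattern.
def tar_rename_args(bsdtar:)
  if bsdtar
    ["-s", "/old/new/"]
  else
    ["--transform", "s/old/new/"]
  end
end

on_macos = RUBY_PLATFORM.include?("darwin")
p ["tar", "--create", "--file", "backup.tar", *tar_rename_args(bsdtar: on_macos), "old"]
```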
2020-08-27 19:08:47 +02:00
Gerhard Schlager
f51ccea028
FIX: Backups should use relative paths for local uploads
This also ensures that restoring a backup works when it was created with the wrong upload paths in the time between ab4c0a4970 (shortly after v2.6.0.beta1) and this fix.
2020-08-21 15:22:28 +02:00
Guo Xiang Tan
c7f1777ac9
DEV: No need to pause Sidekiq during backups.
Pausing was only done for historical reasons, when we stopped all writes
to the DB during a backup by setting the database to readonly mode.
2020-07-29 10:52:31 +08:00
Gerhard Schlager
ab4c0a4970 FEATURE: Create SQL-only backup if there are no uploads
It doesn't make sense to compress the database dump twice if the backup doesn't contain any uploaded files.
2020-07-07 16:23:47 +02:00
Gerhard Schlager
fc8e842773 FIX: Sometimes not all output of psql was logged during restores
There was a race condition which could prevent Discourse from logging the last couple of lines of output from psql.
2020-06-30 16:52:50 +02:00
Gerhard Schlager
859d9b75a7 FIX: Restoring backup from PG12 could fail on PG10
The `EXECUTE FUNCTION` syntax for `CREATE TRIGGER` statements was introduced in PostgreSQL 11. We need to replace `EXECUTE FUNCTION` with `EXECUTE PROCEDURE` in order to be able to restore backups created with PG12 on PG10.
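
A minimal illustration of that substitution applied to a single dump line; the real restore code rewrites the whole dump:

```
# Hypothetical illustration: rewrite PG12 trigger syntax so a PG10 server accepts it.
line = "CREATE TRIGGER t AFTER INSERT ON posts FOR EACH ROW EXECUTE FUNCTION f();"
puts line.gsub("EXECUTE FUNCTION", "EXECUTE PROCEDURE")
```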
2020-06-16 16:04:14 +02:00
Andrew Schleifer
b2c94cc8ea FIX: do not migrate backups in the new prefix 2020-06-12 02:56:07 +00:00
Andrew Schleifer
74d28a43d1
new S3 backup layout (#9830)
* DEV: new S3 backup layout

Currently, with $S3_BACKUP_BUCKET of "bucket/backups", multisite backups
end up in "bucket/backups/backups/dbname/" and single-site will be in
"bucket/backups/".

Both _should_ be in "bucket/backups/dbname/"
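
A small sketch of the target layout under an assumed prefix helper (the change list below is what actually implements it):

```
# Hypothetical sketch of the new layout: the prefix always includes the database
# name, with no extra "backups" segment for multisite installs.
def backup_prefix(base_prefix, db_name)
  "#{base_prefix}/#{db_name}/"
end

puts backup_prefix("bucket/backups", "default") # => "bucket/backups/default/"
puts backup_prefix("bucket/backups", "otherdb") # => "bucket/backups/otherdb/"
```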

- remove MULTISITE_PREFIX
- always include dbname
- method to move to the new prefix
- job to call the method

* SPEC: add tests for `VacateLegacyPrefixBackups` onceoff job.

Co-authored-by: Vinoth Kannan <vinothkannan@vinkas.com>
2020-05-29 00:28:23 +05:30
Michael Brown
d9a02d1336
Revert "Revert "Merge branch 'master' of https://github.com/discourse/discourse""
This reverts commit 20780a1eee.

* SECURITY: re-adds accidentally reverted commit:
  03d26cd6: ensure embed_url contains valid http(s) uri
* when the merge commit e62a85cf was reverted, git chose the 2660c2e2 parent to land on
  instead of the 03d26cd6 parent (which contains security fixes)
2020-05-23 00:56:13 -04:00
Jeff Atwood
20780a1eee Revert "Merge branch 'master' of https://github.com/discourse/discourse"
This reverts commit e62a85cf6f, reversing
changes made to 2660c2e21d.
2020-05-22 20:25:56 -07:00
Gerhard Schlager
0a700d81fc FIX: Restoring backups could fail for database dumps > 8GiB
This is a temporary fix until we ship a new image with bsdtar.
2020-05-19 22:36:59 +02:00
Gerhard Schlager
6d5e9db883 FIX: Restoring backup didn't clear cached translation overrides 2020-05-18 18:51:51 +02:00
Dan Ungureanu
3d9c320aab
PERF: Cache Category.subcategory_ids (#9350)
Also reset category cache after backup restore.
2020-04-09 15:42:24 +03:00
Gerhard Schlager
ad6709772a PERF: Backup with lots of uploads stored on S3 was slow
Creating the backup needs a lot more disk space after this change, but it is a lot faster.
2020-04-03 18:13:34 +02:00
Gerhard Schlager
13b4eb9cce FIX: Restore failed if schema contained objects not owned by the current DB user 2020-04-01 18:04:43 +02:00
Roman Rizzi
57321b90f0
Completely remove read only mode during backups (#9279) 2020-03-27 07:38:55 -03:00
Gerhard Schlager
8022e51179 FIX: Failed to restore backups from versions without translation overrides
Rails calls I18n.translate during initialization and by default translation overrides are used. Database migrations would fail if the system tried to migrate from an old version that didn't have the `translation_overrides` table with all its columns yet.

This makes restoring really old backups work again. Running `DISABLE_TRANSLATION_OVERRIDES=1 rake db:migrate` will allow you to upgrade such an old database as well.
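
A hedged sketch of how such an escape hatch could short-circuit override lookups; the method names are assumptions, not Discourse's actual initializer:

```
# Hypothetical sketch: skip database-backed translation overrides when the
# environment variable is set, so migrating a very old schema doesn't touch
# the translation_overrides table.
def translation_overrides_enabled?
  ENV["DISABLE_TRANSLATION_OVERRIDES"].nil?
end

def translate(key, fallback)
  return fallback unless translation_overrides_enabled?
  lookup_override(key) || fallback
end

def lookup_override(_key)
  nil # would query the translation_overrides table, which may not exist yet
end

ENV["DISABLE_TRANSLATION_OVERRIDES"] = "1"
puts translate("system_messages.backup_failed", "Backup failed") # skips the DB lookup
```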
2020-03-14 00:00:22 +01:00
Roman Rizzi
87687c0819
Drop unnecessary readonly_during_backup setting (#9112) 2020-03-06 14:29:00 -03:00
Gerhard Schlager
8fa8bab9ff FIX: Don't optimize icons during db:migrate when restoring backup
Uploads are extracted after the DB migration, so optimizing icons during db:migrate could fail during the restore. Site icons now get optimized after the uploads are extracted.
2020-03-04 16:59:49 +01:00
Gerhard Schlager
7c30986b5e FIX: Failed to notify user after restoring backup 2020-01-25 22:07:41 +01:00
Gerhard Schlager
f216c6d60b FEATURE: Drop "backup" schema 7 days after restore
The "backup" schema is used to rollback a failed restore. It isn't useful after a longer period of time and turns into a waste of disk space.
2020-01-16 17:48:47 +01:00
Gerhard Schlager
5e3fc31f2c DEV: Less hacky way of rolling back DB changes
Some specs use psql to test database restores, and dropping the table after the test needs to happen outside of rspec because of transactions. The previous attempt led to some changes being stored in the test database.
2020-01-15 23:37:42 +01:00
Gerhard Schlager
68a7ae3091 REFACTOR: Simplify backup version check
Adds specs for an invalid version number in the metadata file.
Follow-up to c3cd2389fe
2020-01-15 23:37:40 +01:00
Martin Brennan
c3cd2389fe SECURITY: use strict JSON parsing when parsing backup metadata 2020-01-15 11:24:41 +01:00
Gerhard Schlager
e474cda321 REFACTOR: Restoring of backups and migration of uploads to S3 2020-01-14 11:41:35 +01:00
Vinoth Kannan
3b7f5db5ba
FIX: parallel spec system needs a dedicated upload folder for each worker. (#8547) 2019-12-18 11:21:57 +05:30
David Taylor
481efebe76
DEV: Update backup/restore pipeline to avoid cd (#8347) 2019-11-13 15:52:28 +00:00
Justin DiRose
c3f06943c7
FIX: Account for empty uploads directory upon backup restore (#8262)
This commit fixes a case where backup restores would fail if the uploads/default directory is empty.
2019-10-30 09:33:07 -05:00
Krzysztof Kotlarek
f530378df3 FIX: Restore for non-multisite is not raising an error on reconnect step (#8237)
Commit f69dacf979 introduced a bug to the system.

Restore works fine for multisite; however, it stopped working for non-multisite.

The reason is that the `establish_connection` method gained a check for whether the multisite instance is available:
```
    def self.instance
      @instance
    end

    def self.establish_connection(opts)
      @instance.establish_connection(opts) if @instance
    end
```
However, the reload method doesn't have that check:
```
    def self.reload
      @instance = new(instance.config_filename)
    end
```

To solve it, we ensure we are in a multisite environment before calling reload.
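
A hypothetical, self-contained sketch of that guard; the class mirrors the snippets quoted above, with just enough filled in to run:

```
# Hypothetical sketch of the fix: only reload when a multisite instance exists,
# so non-multisite restores no longer crash on instance.config_filename.
class ConnectionManagement
  def self.instance
    @instance
  end

  def self.load_settings!(config_filename)
    @instance = new(config_filename)
  end

  def self.reload
    @instance = new(instance.config_filename)
  end

  attr_reader :config_filename

  def initialize(config_filename)
    @config_filename = config_filename
  end
end

# The guard in the restore code: skip the reload entirely for non-multisite setups.
ConnectionManagement.reload if ConnectionManagement.instance
puts "skipped reload for non-multisite" if ConnectionManagement.instance.nil?
```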
2019-10-24 11:46:22 +11:00
Krzysztof Kotlarek
f69dacf979 FIX: Reconnect in restore process connects to correct DB (#8218)
The simplified flow of a restore looks like this:
```
migrate_database
reconnect
extract_uploads
```

The problem of being connected to the incorrect database started with this fix: https://github.com/discourse/discourse/commit/025d4ee91f4

The dump task reconnects to the default database: https://github.com/rails/rails/blob/master/activerecord/lib/active_record/railties/databases.rake#L429

Then we try to reconnect to the original database with this code:
```
def reconnect_database
  log "Reconnecting to the database..."
  RailsMultisite::ConnectionManagement::establish_connection(db: @current_db)
end
```

This reconnect does not switch us back to the correct database because of this check:
https://github.com/discourse/rails_multisite/blob/master/lib/rails_multisite/connection_management.rb#L181
Basically, it finds an existing handler and assumes we are already connected to the correct DB, so this step is skipped.

To solve it, we can reload RailsMultisite::ConnectionManagement, which creates a new instance of that class:
https://github.com/discourse/rails_multisite/blob/master/lib/rails_multisite/connection_management.rb#L38
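
A hypothetical, self-contained sketch of the fixed reconnect step; the module below stands in for RailsMultisite::ConnectionManagement, and in the real fix the gem's reload rebuilds its handlers so establish_connection actually switches back to @current_db:

```
# Hypothetical stand-in for RailsMultisite::ConnectionManagement.
module ConnectionManagement
  def self.reload
    puts "rebuilding multisite connection handlers"
  end

  def self.establish_connection(db:)
    puts "connecting to #{db}"
  end
end

def reconnect_database(current_db)
  puts "Reconnecting to the database..."
  ConnectionManagement.reload # drop the cached handler before reconnecting
  ConnectionManagement.establish_connection(db: current_db)
end

reconnect_database("example_site")
```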
2019-10-23 17:23:50 +11:00
Roman Rizzi
01bc465db8
DEV: Split max decompressed setting for themes and backups (#8179) 2019-10-11 14:38:10 -03:00
Roman Rizzi
5357ab3324
SECURITY: Safely decompress backups when restoring. (#8166)
* SECURITY: Safely decompress backups when restoring.

* Fix tests and update theme_controller_spec to work with zip files instead of .tar.gz
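
A hedged sketch of one safeguard that a maximum-decompressed-size setting implies; the limit and entry shape below are assumptions for illustration:

```
# Hypothetical sketch: stop extracting once the cumulative decompressed output
# exceeds a cap, so a malicious archive can't fill the disk.
MAX_DECOMPRESSED_BYTES = 1024 * 1024 * 1024 # assumed 1 GiB cap

def safe_extract(entries)
  total = 0
  entries.each do |entry|
    total += entry[:size]
    raise "decompressed size exceeds the allowed maximum" if total > MAX_DECOMPRESSED_BYTES
    # ... write the entry to disk ...
  end
  total
end

puts safe_extract([{ size: 10_000 }, { size: 20_000 }])
```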
2019-10-09 11:41:16 -03:00
Krzysztof Kotlarek
427d54b2b0 DEV: Upgrading Discourse to Zeitwerk (#8098)
Zeitwerk simplifies working with dependencies in dev and makes it easier to reload class chains.

We no longer need to use Rails "require_dependency" anywhere and instead can just use standard 
Ruby patterns to require files.

This is a far reaching change and we expect some followups here.
2019-10-02 14:01:53 +10:00