The TestIntegration/FsMkdir/FsPutFiles/FromRoot/ListR integration test
fails because there is a broken bucket in the test account which
support haven't been able to remove.
This tries to fix the integration tests by only allowing one
premiumizeme test to run at a time, in the hope that this will stop
rclone hitting the rate limits and breaking the tests.
See: #5734
The TestIntegration/FsMkdir/FsPutFiles/PublicLink test doesn't work on
a standard onedrive account; it returns:
accessDenied: accountUpgradeRequired: Account Upgrade is required for this operation.
See: #5734
The TestIntegration/FsMkdir/FsPutFiles/PublicLink test doesn't work on
a standard dropbox account, only on an enterprise account, because the
test sets expiry dates.
See: #5734
- set up correct path encoding (fixes backend test FsEncoding)
- ignore range option if file is empty (fixes VFS test TestFileReadAtZeroLength)
- cleanup stray files left after failed upload (fixes test FsPutError)
- rebase code on master, adapt backend for rclone context passing
- translate Siad errors to rclone native FS errors in the sia errorHandler (see the sketch below)
- TestSia: return proper backend options from the script
- TestSia: use an up-to-date AntFarm image, nebulouslabs/siaantfarm is stale
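
As an illustration of the error translation mentioned above, here is a
minimal sketch; the Siad error strings and the exact mapping are
assumptions for illustration, not the backend's actual code:

    // Sketch: map Siad API errors onto rclone's native fs errors so
    // the rest of rclone (and the tests) see the errors they expect.
    package sia

    import (
        "strings"

        "github.com/rclone/rclone/fs"
    )

    func errorHandler(err error) error {
        if err == nil {
            return nil
        }
        msg := err.Error()
        switch {
        case strings.Contains(msg, "path does not exist"),
            strings.Contains(msg, "no file known"):
            return fs.ErrorObjectNotFound
        case strings.Contains(msg, "no directory known"):
            return fs.ErrorDirNotFound
        default:
            return err
        }
    }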
Some storage providers, e.g. S3, don't have an efficient rename operation.
Before this change, when chunker finished an upload, the server-side copy
and delete operations that renamed temporary chunks to their final names
could take a significant amount of time.
This PR records a transaction identifier (versioning) in the metadata
of chunker composite objects, aiming to remove the need for rename
operations on such backends.
This approach will be triggered by the new "transactions" configuration
option, which can be "rename" (the default) or "norename".
We implement the new approach for uploads (Put operations).
The chunker Move operation still uses the rename operation of the
underlying backend. Filling this gap is left for a later PR.
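
A minimal sketch of the idea (the chunk name format and metadata
handling here are illustrative assumptions, not the actual chunker
code):

    package chunker

    import "fmt"

    // chunkName sketches the "transactions" idea. In "rename" mode
    // temporary chunks carry a random transaction suffix which is
    // stripped by a server-side copy+delete once the upload succeeds.
    // In "norename" mode the suffix is kept and the transaction id is
    // recorded in the composite object's metadata instead, so no
    // rename is needed on backends where that is expensive.
    func chunkName(base string, part int, txn string, norename bool) string {
        name := fmt.Sprintf("%s.rclone_chunk.%03d", base, part)
        if norename && txn != "" {
            name += "_" + txn
        }
        return name
    }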
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
This includes an HDFS docker image to use with the integration tests.
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
Occasionally the b2 tests fail because the integration tests don't
retry hard enough with their new setting of -list-retries 3. Override
this setting to 5 for the b2 tests only.
Running all the tests for 1fichier takes too long due to the directory
reading rate limiter.
The backend tests do complete in a reasonable time (21 mins).
This is what I wrote to Digital Ocean support on July 10, 2020 - alas
it didn't result in the rate limits dropping, so reluctantly I'm going
to remove DO from the integration tests since they never pass and have
no hope of ever passing while this rate limit is in effect.
----
Somewhere towards the end of June 2020 or the start of July 2020 my
integration tests between rclone ( https://rclone.org ) and Digital
Ocean started failing.
I tried moving the tests to different regions (currently they are
using AMS1 because I'm in Europe) with no improvement.
Rclone seems to be hitting this rate limit as documented here:
https://www.digitalocean.com/docs/spaces/#limits
- 2 COPYs per 5 minutes on any individual object in a Space
Rclone creates small objects about 100 bytes in size and renames them
a few times - this involves using the COPY call as S3 does not have a
rename API. The tests do this more than twice per object, so they hit
the 5 minute limit, I think. Rclone does exponential backoff but fails
after 10 retries, without ever reaching a 5 minute delay.
Having a 5 minute lockout on an S3 compatible API is surprising!
Rclone runs integration tests against about 30 other providers, none
of which have a rate limit like this.
I understand the need for a COPY rate limit as server side copying
large files can be resource intensive. However, a 5 minute lockout for
copying 100 byte files seems excessive!
Might I humbly suggest that you reduce or eliminate this rate limit
for small files?
----
This was the reply:
Unfortunately it is not possible to raise this limit or remove it
currently on our platform. I do see how this would interfere with type
of applications that need to copy many small files and will be happy
to take the feedback to our engineering team to see how we can improve
the spaces system in the future
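
The "rename" mentioned in the letter above is really a server-side
copy followed by a delete, since S3 has no rename call. A minimal
sketch with the AWS SDK for Go (bucket and key names are
placeholders):

    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        svc := s3.New(session.Must(session.NewSession()))
        bucket, oldKey, newKey := "my-bucket", "old-name", "new-name"

        // "Rename" step 1: server-side copy to the new key. This is
        // the COPY call that the rate limit applies to.
        _, err := svc.CopyObject(&s3.CopyObjectInput{
            Bucket:     aws.String(bucket),
            CopySource: aws.String(bucket + "/" + oldKey),
            Key:        aws.String(newKey),
        })
        if err != nil {
            log.Fatal(err)
        }
        // "Rename" step 2: delete the original object.
        _, err = svc.DeleteObject(&s3.DeleteObjectInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(oldKey),
        })
        if err != nil {
            log.Fatal(err)
        }
    }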
This has been reported to Wasabi and they've confirmed it as a known
issue: multipart uploads can't be 0 sized, even though that is
incompatible with AWS S3.
changes:
- chunker: remove GetTier and SetTier
- remove wdmrcompat metaformat
- remove fastopen strategy
- make hash_type option non-advanced
- advertise hash support when possible
- add metadata field "ver", run strict checks (see the sketch below)
- describe internal behavior in comments
- improve documentation
note:
wdmrcompat used to write the file name in the metadata, so the maximum
metadata size was 1K; removing it allows the size to be capped at 200
bytes now.
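
To make the strict checks concrete, here is a minimal sketch of
reading the metadata; the field set and limits are illustrative
assumptions, not the actual format:

    package chunker

    import (
        "encoding/json"
        "errors"
    )

    // metaSimpleJSON sketches the small JSON metadata object kept for
    // each composite file; only "ver" comes from the commit above,
    // the other field is an assumption.
    type metaSimpleJSON struct {
        Ver  int   `json:"ver"`
        Size int64 `json:"size"`
    }

    // parseMetadata rejects anything that does not look like a
    // well-formed metadata object of a known version.
    func parseMetadata(data []byte) (*metaSimpleJSON, error) {
        var m metaSimpleJSON
        if err := json.Unmarshal(data, &m); err != nil {
            return nil, err
        }
        if m.Ver != 1 {
            return nil, errors.New("unsupported metadata version")
        }
        if m.Size < 0 {
            return nil, errors.New("negative file size in metadata")
        }
        return &m, nil
    }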
This was started by Fionera, finished off by Laura with fixes and more
docs from Nick.
Co-authored-by: Fionera <fionera@fionera.de>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
Before this change, backend integration tests depended on each other,
so tests could not be retried.
After this change we nest tests to ensure that tests are provided with
the starting state they expect.
Tell the integration test runner that it can also retry backend tests.
This also includes bin/test_independence.go, which runs each test
individually for a backend to prove that they are independent.
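
A minimal illustration of the nesting, using Go subtests (the test
bodies are placeholders):

    package fstests

    import "testing"

    // TestIntegration sketches how nesting lets each parent provide
    // the state its children expect: FsMkdir creates the directory
    // that FsPutFiles needs, and FsPutFiles uploads the files that
    // ListR needs, so any subtest can be retried from a known state.
    func TestIntegration(t *testing.T) {
        t.Run("FsMkdir", func(t *testing.T) {
            // ... make the test directory ...
            t.Run("FsPutFiles", func(t *testing.T) {
                // ... upload the test files ...
                t.Run("ListR", func(t *testing.T) {
                    // ... list recursively and check the results ...
                })
            })
        })
    }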
- Make integration tests use a config file
- Output individual logs for each test
- Make HTML report and open browser
- Optionally email and upload results