Before this change, if you tried to create a bucket that already
existed but was owned by someone else, rclone did not return an error.

Rclone will now return an error in that case on providers which signal
ownership correctly, that is, providers which return either the
AlreadyOwnedByYou error code or no error at all when you create an
existing bucket that you already own.
This introduces a new provider quirk which has been set or cleared for
as many providers as could be tested. It can be overridden by the
--s3-use-already-exists flag.
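To illustrate the idea, here is a minimal standalone sketch (the error
type, helper name and quirk flag are invented; only the S3 error codes
are real, and this is not rclone's actual code):

```
package main

import (
	"errors"
	"fmt"
)

// bucketError is a stand-in for a provider error carrying an S3-style
// error code.
type bucketError struct{ code string }

func (e bucketError) Error() string { return e.code }

// interpretCreateBucketError sketches the quirk: useAlreadyExists should be
// true only for providers known to report bucket ownership reliably.
func interpretCreateBucketError(err error, useAlreadyExists bool) error {
	var be bucketError
	if err == nil || !errors.As(err, &be) {
		return err
	}
	switch be.code {
	case "BucketAlreadyOwnedByYou":
		return nil // we already own the bucket, so treat as success
	case "BucketAlreadyExists":
		if useAlreadyExists {
			return err // someone else owns the bucket
		}
		return nil // provider can't tell us who owns it: old behaviour
	}
	return err
}

func main() {
	err := interpretCreateBucketError(bucketError{"BucketAlreadyExists"}, true)
	fmt.Println(err) // BucketAlreadyExists
}
```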
Fixes #7351
Before this change, attempting to server-side copy a Google Form would
give this error:

No export formats found for "application/vnd.google-apps.form"

Adding this flag allows the form to be server-side copied but not
downloaded.
Fixes #6302
Summary:
In cases where cmount is not available on macOS, we alias nfsmount to the mount command and transparently start the NFS server and mount it to the target directory.
The NFS server is started on localhost on a random port so it is reasonably secure.
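For reference, binding to port 0 on localhost is the usual Go idiom for
getting a random free port that is not exposed on external interfaces;
a minimal standalone sketch (not the code in this change):

```
package main

import (
	"fmt"
	"net"
)

func main() {
	// Port 0 asks the OS for a free ephemeral port; binding to localhost
	// keeps the listener off external interfaces.
	l, err := net.Listen("tcp", "localhost:0")
	if err != nil {
		panic(err)
	}
	defer l.Close()
	fmt.Println("NFS server would listen on", l.Addr())
}
```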
Test Plan:
```
go run rclone.go mount --http-url https://beta.rclone.org :http: nfs-test
```
Added mount tests:
```
go test ./cmd/nfsmount
```
Summary:
This adds a new command to serve any remote over NFS. It is only useful on newer macOS versions where FUSE mounts are not available.
* Added the willscott/go-nfs dependency and updated go.mod and go.sum
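For reference, a minimal sketch of serving a billy filesystem over NFS
with this dependency, based on the example in the go-nfs README
(nfshelper.NewNullAuthHandler, nfshelper.NewCachingHandler and
nfs.Serve are assumed from there; the in-memory filesystem and port
handling are illustrative, not rclone's actual code):

```
package main

import (
	"fmt"
	"net"

	"github.com/go-git/go-billy/v5/memfs"
	nfs "github.com/willscott/go-nfs"
	nfshelper "github.com/willscott/go-nfs/helpers"
)

func main() {
	// Listen on a random localhost-only port.
	listener, err := net.Listen("tcp", "localhost:0")
	if err != nil {
		panic(err)
	}
	fmt.Println("serving NFS on", listener.Addr())

	// Serve an in-memory billy filesystem; rclone serves its VFS instead.
	handler := nfshelper.NewNullAuthHandler(memfs.New())
	cached := nfshelper.NewCachingHandler(handler, 1024)
	panic(nfs.Serve(listener, cached))
}
```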
Test Plan:
```
go run rclone.go serve nfs --http-url https://beta.rclone.org :http:
```
Test that it is serving correctly by mounting the NFS directory.
```
mkdir nfs-test
mount -oport=58654,mountport=58654 localhost: nfs-test
```
Then we can list the mounted directory to see it is working.
```
ls nfs-test
```
billy defines a common file system interface that is used in multiple
Go packages.

vfs.Handle already implemented most of billy.File; only two methods
needed to be added to make it compliant. An interface check is added
as well.

This is preliminary work for adding the serve nfs command.
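For reference, the compile-time interface check uses the standard Go
blank-identifier idiom. A standalone sketch with io.ReadWriteCloser
standing in for billy.File and an invented handle type (not the actual
rclone declaration):

```
package main

import "io"

// handle is a stand-in for the vfs handle type.
type handle struct{}

func (h *handle) Read(p []byte) (int, error)  { return 0, io.EOF }
func (h *handle) Write(p []byte) (int, error) { return len(p), nil }
func (h *handle) Close() error                { return nil }

// This assignment compiles to nothing but fails the build if *handle ever
// stops satisfying the interface - the same trick used for billy.File here.
var _ io.ReadWriteCloser = (*handle)(nil)

func main() {}
```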
This fixes a subtle bug where the dir modification time was not updated
when the dir already existed in the cache. It is only noticeable when
clients use the dir modification time to invalidate their cache.
The Name() method was originally left out and defaulted to the embedded
base type, which always returns an empty string. This triggered
incorrect behavior in serve nfs, which relies on the Name() of the
interface to figure out which file it is modifying.

The method is copied from the RWFileHandle struct.

An extra assert was added to the tests.
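The underlying pitfall is Go struct embedding: the embedded type's
method is promoted unless the outer type defines its own. A minimal
sketch with invented names (not the actual vfs types):

```
package main

import "fmt"

// base plays the role of the embedded default implementation whose Name()
// always returns an empty string.
type base struct{}

func (base) Name() string { return "" }

// handle embeds base; without the override below it would silently inherit
// the empty result, which is what confused serve nfs.
type handle struct {
	base
	name string
}

// Name overrides the promoted default so callers can tell which file the
// handle refers to.
func (h handle) Name() string { return h.name }

func main() {
	fmt.Println(handle{name: "file.txt"}.Name()) // prints: file.txt
}
```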
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output DumpMode will be output as a string
instead of an integer. This is a lot more convenient for the user.
Integer inputs are still accepted, so the fallout from this should be
minimal.
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output CacheMode will be output as a string
instead of an integer. This is a lot more convenient for the user.
Integer inputs are still accepted, so the fallout from this should be
minimal.
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output CutoffMode, LogLevel and TerminalColorMode
will be output as strings instead of integers. This is a lot more
convenient for the user. Integer inputs are still accepted, so the
fallout from this should be minimal.
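The usual way to get this behaviour in Go is custom JSON marshalling on
the option type: marshal to the symbolic name, but accept either a
string or an integer when unmarshalling. A hedged sketch with an
invented mode type (not rclone's actual DumpMode/CacheMode code):

```
package main

import (
	"encoding/json"
	"fmt"
)

// mode is an invented enum-like option type.
type mode int

var modeNames = []string{"off", "minimal", "full"}

// MarshalJSON emits the symbolic name, so rc options/get shows a string.
func (m mode) MarshalJSON() ([]byte, error) {
	if int(m) < 0 || int(m) >= len(modeNames) {
		return json.Marshal(int(m))
	}
	return json.Marshal(modeNames[m])
}

// UnmarshalJSON accepts either form, keeping integer inputs working.
func (m *mode) UnmarshalJSON(b []byte) error {
	var s string
	if err := json.Unmarshal(b, &s); err == nil {
		for i, name := range modeNames {
			if name == s {
				*m = mode(i)
				return nil
			}
		}
		return fmt.Errorf("unknown mode %q", s)
	}
	var i int
	if err := json.Unmarshal(b, &i); err != nil {
		return err
	}
	*m = mode(i)
	return nil
}

func main() {
	out, _ := json.Marshal(mode(2))
	fmt.Println(string(out)) // "full"

	var m mode
	_ = json.Unmarshal([]byte("1"), &m) // integer input still accepted
	fmt.Println(m)                      // 1
}
```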
Before this change, backend types were printed incorrectly as the name
of the Go type, not what was defined by the Type() method.

This was not working because the Type() method was not being called;
however the method needed to be defined on a non-pointer type due to
the way the options are handled.
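The receiver distinction matters because a method defined on the
pointer type is not in the method set of the value type, so a type
assertion against a value misses it. An illustrative sketch with
invented types (not the actual option code):

```
package main

import "fmt"

type typer interface{ Type() string }

// valueOpt defines Type on the value receiver, so both valueOpt and
// *valueOpt satisfy typer.
type valueOpt struct{}

func (valueOpt) Type() string { return "custom-type-name" }

// ptrOpt defines Type on the pointer receiver only, so a plain ptrOpt
// value does not satisfy typer.
type ptrOpt struct{}

func (*ptrOpt) Type() string { return "custom-type-name" }

func describe(v interface{}) string {
	if t, ok := v.(typer); ok {
		return t.Type()
	}
	return fmt.Sprintf("%T", v) // falls back to the Go type name
}

func main() {
	fmt.Println(describe(valueOpt{})) // custom-type-name
	fmt.Println(describe(ptrOpt{}))   // main.ptrOpt
}
```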
In v1.63 memory usage in the b2 backend was limited to `--transfers` *
`--b2-chunk-size`.

However in v1.64 this was raised to `--transfers` * `--b2-chunk-size`
* `--b2-upload-concurrency`.

The default value for `--b2-upload-concurrency` was accidentally set
quite high at 16, which meant that by default rclone could use up to
6.4GB of memory!
The new default sets a more reasonable (but still high) max memory of 1.6GB.
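To put numbers on that (assuming the stock defaults of `--transfers 4`
and `--b2-chunk-size 96M`): 4 × 96 MiB × 16 = 6144 MiB, roughly the
6.4GB above, while cutting the concurrency to 4 gives
4 × 96 MiB × 4 = 1536 MiB, roughly 1.6GB.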
Before this change, the lock was held while the upload URL was being
fetched from the server.
This meant that any other threads were blocked from getting upload
URLs unnecessarily.
It also increased the potential for deadlock.
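The shape of the fix is the standard pattern of dropping the mutex
before a slow network round trip. A toy sketch with invented names (not
the b2 backend's actual code):

```
package main

import (
	"sync"
	"time"
)

// uploadURLs is a toy pool of upload URLs guarded by a mutex.
type uploadURLs struct {
	mu   sync.Mutex
	urls []string
}

func (u *uploadURLs) get() string {
	u.mu.Lock()
	if n := len(u.urls); n > 0 {
		url := u.urls[n-1]
		u.urls = u.urls[:n-1]
		u.mu.Unlock()
		return url
	}
	// Release the lock before the slow network call so other uploaders
	// are not blocked while a fresh URL is fetched.
	u.mu.Unlock()
	return fetchUploadURL()
}

// fetchUploadURL pretends to be the API call that gets a new upload URL.
func fetchUploadURL() string {
	time.Sleep(10 * time.Millisecond)
	return "https://upload.example/endpoint"
}

func main() {
	u := &uploadURLs{}
	_ = u.get()
}
```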
Before this change, the maximum number of connections was set to 10.
This meant that b2 could deadlock while doing multipart uploads due to
a lock being held longer than it should have been.
The most useful addition is the file created timestamp, but a timestamp
for when the file was uploaded is also included.

This currently supports a rather minimalistic set of metadata items;
see PR #6359 for some thoughts about possible extensions.
In this commit:
5f938fb9ed s3: fix "Entry doesn't belong in directory" errors when using directory markers
We checked that the remote had the prefix and then changed the remote
before removing the prefix. This sometimes caused:
panic: runtime error: slice bounds out of range [56:55]
The fix is to do the modification of the remote after removing the
prefix.
See: https://forum.rclone.org/t/cryptcheck-panic-runtime-error-slice-bounds-out-of-range/41977
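To make the ordering bug concrete, here is a toy reproduction (names
invented; it deliberately panics with the same kind of out-of-range
slice):

```
package main

import (
	"fmt"
	"strings"
)

// decryptName stands in for the rewrite that happened between the prefix
// check and the prefix strip; like a decrypted name, the result is shorter
// than the original.
func decryptName(s string) string {
	return "dir/f"
}

func main() {
	const prefix = "long-encrypted-directory-name/"
	remote := prefix + "encrypted-file-name"

	if strings.HasPrefix(remote, prefix) {
		remote = decryptName(remote) // bug: remote is rewritten too early
		// remote is now shorter than the prefix, so this slice panics with
		// "slice bounds out of range". The fix is to strip the prefix first
		// and only then rewrite the remainder.
		fmt.Println(remote[len(prefix):])
	}
}
```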
This was always the intention; it was just implemented wrong.
This shortens the s3 docs by 1369 lines, bringing them down to just
about half the size.
Fixes #7325
Before this change the b2 backend listed all the buckets to turn a
single bucket name into an ID.
However on July 26, 2018 a parameter was added to the list buckets API
to make listing all the buckets unnecessary.
This code sets the bucketName parameter so that only the results for
the desired bucket are returned.
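For reference, the request body ends up looking roughly like this
(field names per the public b2_list_buckets API; the struct is
illustrative, not the backend's actual type):

```
package main

import (
	"encoding/json"
	"fmt"
)

// listBucketsRequest sketches the JSON body sent to b2_list_buckets.
type listBucketsRequest struct {
	AccountID  string `json:"accountId"`
	BucketName string `json:"bucketName,omitempty"` // the 2018 filter parameter
}

func main() {
	body, _ := json.Marshal(listBucketsRequest{
		AccountID:  "ACCOUNT_ID", // placeholder
		BucketName: "my-bucket",  // only this bucket is returned
	})
	fmt.Println(string(body)) // {"accountId":"ACCOUNT_ID","bucketName":"my-bucket"}
}
```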