Before this change, if --create-empty-src-dirs was specified, bisync would
include directories in the list of deltas to evaluate by their modtime,
relative to the prior sync. This was unnecessary, as rclone does not yet
support setting modtime for directories.
After this change, we skip directories when comparing modtimes. (In other
words, we care only if a directory is created or deleted, not whether it is
newer or older.)
Before this change, if there were changes to sync, bisync listed each path
twice: once before the sync and once after. The second listing caused quite
a lot of problems, in addition to making each run much slower and more
expensive. A serious side-effect was that file changes could slip through
undetected if they happened to occur while a sync was running (between the
first and second listing snapshots).
After this change, the second listing is eliminated by getting the underlying
sync operation to report back a list of what it changed. Not only is this more
efficient, but also much more robust to concurrent modifications. It should no
longer be necessary to avoid making changes while bisync is running -- it will
simply learn about those changes and handle them on the next run.
Additionally, this also makes --check-sync usable again.
For further discussion, see:
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=5.%20Final%20listings%20should%20be%20created%20from%20initial%20snapshot%20%2B%20deltas%2C%20not%20full%20re%2Dscans%2C%20to%20avoid%20errors%20if%20files%20changed%20during%20sync
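For example, the listings check can now be run on its own again, without doing
a sync (paths are illustrative):
```
rclone bisync /path/to/local remote:path --check-sync=only
```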
Allows rclone sync to accept the same output file flags as rclone check,
for the purpose of writing results to a file.
A new --dest-after option is also supported, which writes a list file using
the same ListFormat flags as lsf (including customizable options for hash,
modtime, etc.) Conceptually it is similar to rsync's --itemize-changes, but
not identical -- it should output an accurate list of what will be on the
destination after the sync.
Note that it has a few limitations, and certain scenarios
are not currently supported:
- --max-duration / CutoffModeHard
- --compare-dest / --copy-dest (because equal() is called multiple times for the
  same file)
- server-side moves of an entire dir at once (because we never get the individual
  file objects in the dir)
- High-level retries, because there would be dupes
- Possibly some error scenarios that didn't come up in the tests
Note also that each file is logged during the sync, as opposed to after, so it
is most useful as a predictor of what SHOULD happen to each file
(which may or may not match what actually DID).
Only rclone sync is currently supported -- support for copy and move may be
added in the future.
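For illustration, a sync writing both a combined change report and a dest-after
listing might look like this (file and path names are placeholders):
```
rclone sync /path/to/src remote:dst --combined changes.log --dest-after dest-after.txt
```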
Before this change, bisync handled copies and deletes in separate operations.
After this change, they are combined in one sync operation, which is faster
and also allows bisync to support --track-renames and --backup-dir.
Bisync uses a --files-from filter containing only the paths bisync has
determined need to be synced. Just like in sync (but in both directions),
if a path is present on the dst but not the src, it's interpreted as a delete
rather than a copy.
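For example, rename tracking can now be enabled on a bisync run (paths are
illustrative):
```
rclone bisync /path/to/local remote:path --track-renames
```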
Before this change, lsf's time format was hard-coded to "2006-01-02 15:04:05",
regardless of the Fs's precision. After this change, a new optional
--time-format flag is added to allow customizing the format (the default is
unchanged).
Examples:
rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'
rclone lsf remote:path --format pt --time-format RFC3339
rclone lsf remote:path --format pt --time-format DateOnly
rclone lsf remote:path --format pt --time-format max
--time-format max will automatically truncate '2006-01-02 15:04:05.000000000'
to the maximum precision supported by the remote.
This updates the direct dependencies.
The latest github.com/willscott/go-nfs has changed the interface
slightly so this implements a dummy InvalidateHandle method in order
to satisfy it.
Before this change, listing a subdirectory gave errors like this:
Entry doesn't belong in directory "" (contains subdir) - ignoring
It also did full recursive listings when it didn't need to.
This was caused by the code using the underlying Fs to do recursive
listings on bucket-based backends.
Using both the VFS and the underlying Fs is a mistake, so this patch
removes the code which uses the underlying Fs and just uses the VFS.
Fixes #7500
- make compile on all unix OSes - this will make the docs appear on linux and rclone.org!
- add --sudo flag for use with the mount command (see the example after this list)
- improve error reporting
- fix option handling
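If, as the surrounding entries suggest, these changes apply to the nfsmount
command, usage with the new flag might look like this (remote and mountpoint
are placeholders):
```
rclone nfsmount remote: /path/to/mountpoint --sudo
```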
Before this change, overwriting an existing file with a 0 length file
didn't update the file size.
This change corrects the issue and makes sure the file is truncated
properly.
This was discovered by the full integration tests.
Before this change, serve s3 would return NoSuchKey errors when a
non-existent prefix was listed.
This change fixes it to return an empty list like AWS does.
This was discovered by the full integration tests.
- Changes
- Rename `--s3-authkey` to `--auth-key` to get it out of the s3 backend namespace (see the example after this list)
- Enable `Content-MD5` integrity checks
- Remove locking after code audit
- Documentation
- Factor out documentation into separate file
- Add Quickstart to docs
- Add Bugs section to docs
- Add experimental tag to docs
- Add rclone provider to s3 backend docs
- Fixes
- Correct quirks in s3 backend
- Change fmt.Printlns into fs.Logs
- Make metadata storage per backend not global
- Log on startup if anonymous access is enabled
- Coding style fixes
- rename fs to vfs to avoid confusion with the rest of the rclone code
- rename db to b for *s3Backend
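With the rename, starting the server with a credential pair might look like
this (the key pair shown is a placeholder):
```
rclone serve s3 remote:path --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY
```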
Fixes #7062
- add context to log and fallthrough to error log level
- test: use rclone random lib to generate random strings
- calculate hash from vfs cache if file is uploading
- add server started log with server url
- remove md5 hasher
The upstream library rclone uses for rclone mount no longer supports
freebsd. Not only is it broken, but it no longer compiles.
This patch disables rclone mount for freebsd.
However all is not lost for freebsd users - compiling rclone with the
`cmount` tag (e.g. `go install -tags cmount`) will install a working
`rclone mount` command which uses cgofuse and the libfuse C library
directly.
Note that the binaries from rclone.org will not have mount support as
we don't have a freebsd build machine in CI and it is very hard to
cross compile cmount.
See: https://github.com/bazil/fuse/issues/280
Fixes #5843
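For freebsd users building it themselves, something along these lines should
work, assuming Go, a C compiler and the fuse development headers are available
(the module path and version are the usual ones, not specific to this change):
```
go install -tags cmount github.com/rclone/rclone@latest
rclone mount remote: /path/to/mountpoint
```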
ncdu stores the position that it was in for each directory. However
doing a rescan can cause those positions to be out of range if the
number of files decreased in a directory. When re-entering the
directory, this causes an index out of range error.
This fixes the problem by detecting the index out of range and
flushing the saved directory position.
See: https://forum.rclone.org/t/slice-bounds-out-of-range-during-ncdu/42492/
This was caused by a change to the upstream library
ProtonMail/go-crypto checking the flags on the keys more strictly.
However the signing key for rclone is very old and does not have those
flags. Adding those flags using `gpg --edit-key` and the `change-usage`
subcommand (removing the signing capability, saving and quitting, then
re-adding it, saving and quitting again) caused the key to work.
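The editing session was roughly along these lines (a sketch reconstructed from
the description above; the key ID is a placeholder and exact prompts vary by
gpg version):
```
gpg --edit-key KEYID
gpg> change-usage   # deselect the Sign capability, then finish
gpg> save
gpg --edit-key KEYID
gpg> change-usage   # re-select the Sign capability
gpg> save
```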
This also adds tests for the verification and adds the selfupdate
tests into the integration test harness as they had been disabled on
CI because they rely on external sources and are sometimes unreliable.
Fixes #7373
With automount the target mount drive appears twice in /proc/self/mountinfo.
379 27 0:70 / /mnt/rclone rw,relatime shared:433 - autofs systemd-1 rw,fd=57,...
566 379 0:90 / /mnt/rclone rw,nosuid,nodev,relatime shared:488 - fuse.rclone remote: rw,...
Before this fix we only looked for the mount once in
/proc/self/mountinfo. It finds the automount line and since this
doesn't have fs type rclone it concludes the mount isn't ready yet.
This patch makes rclone look through all the mounts and if any of them
have fs type rclone it concludes the mount is ready.
See: https://forum.rclone.org/t/systemd-mount-works-but-automount-does-not/42287/
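To reproduce the observation above, listing the mountinfo entries for the
mountpoint is enough (the mountpoint is illustrative):
```
grep /mnt/rclone /proc/self/mountinfo
```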
Before this change, if a hardlink command was issued, rclone would
just ignore it and not return an error.
This changes any unknown operations (including hardlink) to return an
unsupported error.
Summary:
In cases where cmount is not available on macOS, we alias nfsmount to the mount command and transparently start the NFS server and mount it on the target dir.
The NFS server is started on localhost on a random port so it is reasonably secure.
Test Plan:
```
go run rclone.go mount --http-url https://beta.rclone.org :http: nfs-test
```
Added mount tests:
```
go test ./cmd/nfsmount
```
Summary:
Adding a new command to serve any remote over NFS. This is only useful for new macOS versions where FUSE mounts are not available.
* Added willscott/go-nfs dependency and updated go.mod and go.sum
Test Plan:
```
go run rclone.go serve nfs --http-url https://beta.rclone.org :http:
```
Test that it is serving correctly by mounting the NFS directory.
```
mkdir nfs-test
mount -oport=58654,mountport=58654 localhost: nfs-test
```
Then we can list the mounted directory to see it is working.
```
ls nfs-test
```
This was always the intention; it was just implemented wrong.
This shortens the s3 docs by 1369 lines, bringing them down to about half
the size.
Fixes #7325
Before this change, bisync ignored the dryRun parameter (but only when it was
specified via the rc).
This change fixes the issue, so that the dryRun rc parameter is equivalent to
the --dry-run flag.
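For example, a dry run over the rc might then look like this (paths are
illustrative, assuming the sync/bisync rc endpoint):
```
rclone rc sync/bisync path1=/path/to/path1 path2=remote:path2 dryRun=true
```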