Before this change, if mkdir failed it returned the error to the
user.
After this change, if the error indicates that the directory
already exists then the error is not returned to the user.
This fixes a race condition when two rclone threads are trying to
create the same directory.
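A minimal sketch of the idea in Go (standard library only, not rclone's actual code): treat "already exists" as success so concurrent creators don't race.

    package local

    import "os"

    // mkdirIgnoreExisting creates dir, tolerating a concurrent create
    // by another thread by ignoring "already exists" errors.
    func mkdirIgnoreExisting(dir string) error {
        err := os.Mkdir(dir, 0777)
        if err != nil && !os.IsExist(err) {
            return err
        }
        return nil // created, or another thread created it first
    }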
Fixes an issue with the spacing between icon and text in backend docs headers.
This reverts the changes from PR #5889 and #5701, which aligned menu/dropdown items when
icons have different sizes, and implements an alternative fix which gives slightly better
results and is also more of a native Font Awesome solution:
Font Awesome icons are designed on a grid and share a consistent height. But they vary in
width depending on how wide or narrow each symbol is. If you prefer to work with icons
that have a consistent width, adding fa-fw will render each icon using the same width.
A very common mistake for new users of rclone is to use a remote name
without a colon. This can be on the command line or in the config when
setting up a crypt backend.
This change checks to see if the user uses a path which matches a
remote name and gives a NOTICE like this if they do:
NOTICE: "remote" refers to a local folder, use "remote:" to refer to your remote or "./remote" to hide this warning
See: https://forum.rclone.org/t/sync-to-onedrive-personal-lands-file-in-localfilesystem-but-not-in-onedrive/32956
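A hypothetical sketch of the check (names are invented for illustration, not rclone's actual code):

    package fspath

    import (
        "log"
        "strings"
    )

    // warnIfRemoteName prints a NOTICE when a colon-less path exactly
    // matches the name of a configured remote.
    func warnIfRemoteName(path string, remotes []string) {
        if strings.ContainsRune(path, ':') || strings.HasPrefix(path, "./") {
            return
        }
        for _, name := range remotes {
            if path == name {
                log.Printf("NOTICE: %q refers to a local folder, use %q to refer to your remote or %q to hide this warning",
                    path, name+":", "./"+path)
                return
            }
        }
    }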
In this commit
8d1fff9a82 local: obey file filters in listing to fix errors on excluded files
We started using filters in the local backend so the user could short
circuit troublesome files/directories at a low level.
However this caused a number of integration tests to fail. This turned
out to be in backends wrapping the local backend. For example the
combine backend test failed because it changes the paths passed to the
local backend so they no longer match the paths in the current filter.
To fix this, a new feature flag `FilterAware` was added and the
UseFilter context flag is only passed to backends which support it. As
the wrapping backends don't support the flag, this fixes the problems
in the integration tests.
In future the wrapping backends could modify the active filters to
match the path modifications and then they could set the FilterAware
flag.
See #6376
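A simplified sketch of the gating idea (the context key and helper names here are invented, not the exact rclone internals):

    package features

    import "context"

    type useFilterKey struct{}

    // Features models the backend capability flags; FilterAware is the
    // flag added by this change.
    type Features struct{ FilterAware bool }

    type Backend interface{ Features() Features }

    // WithUseFilter marks a context as wanting filtered listings.
    func WithUseFilter(ctx context.Context) context.Context {
        return context.WithValue(ctx, useFilterKey{}, true)
    }

    // ForBackend drops the UseFilter flag for backends which don't
    // declare themselves filter aware, e.g. wrappers like combine.
    func ForBackend(ctx context.Context, b Backend) context.Context {
        use, _ := ctx.Value(useFilterKey{}).(bool)
        if use && !b.Features().FilterAware {
            return context.WithValue(ctx, useFilterKey{}, false)
        }
        return ctx
    }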
Before this change we assumed that github.com/Unknwon/goconfig was
threadsafe as documented.
However, it turns out that it is not threadsafe, and looking at the
code it appears that making it threadsafe might be quite hard.
So this change increases the lock coverage in configfile to cover the
goconfig uses also.
Fixes #6378
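A minimal sketch of the approach, assuming goconfig's real GetValue signature but simplified relative to rclone's configfile:

    package configfile

    import (
        "sync"

        "github.com/Unknwon/goconfig"
    )

    type Storage struct {
        mu sync.Mutex // now held across all goconfig calls too
        gc *goconfig.ConfigFile
    }

    func (s *Storage) GetValue(section, key string) (string, bool) {
        s.mu.Lock()
        defer s.mu.Unlock()
        value, err := s.gc.GetValue(section, key)
        return value, err == nil
    }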
Before this fix, the chunksize calculator was using the previous size
of the object, not the new size, to calculate the chunk sizes.
This meant that uploading a replacement object which needed a new
chunk size would fail, using too many parts.
This fix changes the calculator to take the size explicitly.
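Illustrative sketch of the calculation with the size passed in explicitly (simplified; the parameter names are assumptions):

    package chunksize

    // calculator doubles the chunk size until an object of the given
    // size fits within maxParts parts.
    func calculator(size, defaultChunkSize, maxParts int64) int64 {
        chunkSize := defaultChunkSize
        for size/chunkSize >= maxParts {
            chunkSize *= 2
        }
        return chunkSize
    }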
Before this change, if rclone was run with `-M` on a filesystem
without xattr support, it would error out.
This patch makes rclone detect "not supported" errors and disable
xattrs from then on. It prints a single ERROR level message about this.
See: https://forum.rclone.org/t/metadata-update-local-s3/32277/7
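A sketch of the detection, assuming Linux error codes (simplified relative to the actual patch):

    package local

    import (
        "errors"
        "log"
        "sync"
        "syscall"
    )

    var (
        xattrDisabled bool
        xattrOnce     sync.Once
    )

    // handleXattrErr disables xattr support on the first "not
    // supported" error and logs a single ERROR message.
    func handleXattrErr(err error) error {
        if errors.Is(err, syscall.ENOTSUP) || errors.Is(err, syscall.EOPNOTSUPP) {
            xattrOnce.Do(func() {
                log.Printf("ERROR: xattrs not supported - disabling: %v", err)
                xattrDisabled = true
            })
            return nil
        }
        return err
    }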
A change in github.com/anacrolix/dms altered upnp.ServiceURN to include a
namespace identifier. This identifier was previously hardcoded, but is
now parsed out of the URN. The old SOAP action header parsing logic was
duplicated in rclone and did not handle this field, so the resulting
responses included a URN with an empty namespace identifier, breaking clients.
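For context, a UPnP service URN has the form urn:<namespace>:service:<type>:<version>, e.g. urn:schemas-upnp-org:service:ContentDirectory:1, and the SOAPACTION header carries the URN plus an action name. A sketch of parsing that keeps the namespace intact (illustrative, not the dms library's actual code):

    package upnp

    import (
        "fmt"
        "strings"
    )

    // parseSOAPAction splits a header like
    // "urn:schemas-upnp-org:service:ContentDirectory:1#Browse"
    // into the full service URN and the action name, so the namespace
    // identifier is preserved when the URN is echoed back.
    func parseSOAPAction(header string) (urn, action string, err error) {
        header = strings.Trim(header, `"`)
        urn, action, ok := strings.Cut(header, "#")
        if !ok {
            return "", "", fmt.Errorf("invalid SOAPACTION header %q", header)
        }
        return urn, action, nil
    }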
Before this change, rclone serve sftp, when used with a new rclone
(one in which the md5sum/sha1sum detection was reworked to just run a
plain `md5sum`/`sha1sum` command in
3ea82032e7 sftp: support md5/sha1 with rsync.net #3254),
failed to signal to the remote that md5sum/sha1sum wasn't supported,
despite the mechanism added in
71e172a139 serve/sftp: support empty "md5sum" and "sha1sum" commands
It unconditionally returned good hashes even if the remote being served
didn't support the hash type in question.
This fix checks that the hash type is supported and, if not, returns an
error:
MD5 hash not supported
When the backend is first contacted, this will cause the sftp backend
to detect that the hash type isn't available.
Unfortunately the wrong state may already have been cached, so editing
or remaking the config may be necessary to fix it.
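Conceptually the check is like this, assuming rclone's fs and hash packages (a sketch, not the literal patch):

    package sftp

    import (
        "errors"

        "github.com/rclone/rclone/fs"
        "github.com/rclone/rclone/fs/hash"
    )

    // checkHashSupported errors out instead of fabricating a hash the
    // served filesystem cannot compute.
    func checkHashSupported(f fs.Fs, ht hash.Type) error {
        if !f.Hashes().Contains(ht) {
            return errors.New(ht.String() + " hash not supported")
        }
        return nil
    }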
Before this fix, the parsing code gave an error like this:
parsing "2022-08-02 07:00:00" as fs.Time failed: expected newline
This was due to the Scan call failing to read all the data.
This patch fixes that and redoes the tests.
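For illustration, "expected newline" is the error fmt's Scanln family returns when input is left over after the scanned tokens; assuming the old code used something like fmt.Sscanln, the space inside the timestamp triggers it:

    package main

    import "fmt"

    func main() {
        var s string
        // Sscanln reads "2022-08-02", then finds " 07:00:00" where it
        // expects the end of the line, so it returns an error.
        _, err := fmt.Sscanln("2022-08-02 07:00:00", &s)
        fmt.Println(s, err) // 2022-08-02 expected newline
    }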
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur since the
go runtime automatically decompressed the object on download.
If --s3-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.
If --s3-decompress is not set, the compressed files will be downloaded
as-is, providing compressed objects with intact size and hash
information.
See #2658
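The effect on the object metadata is roughly this (a sketch with invented names, not the actual s3 backend code):

    package s3

    type object struct {
        contentEncoding string
        size            int64  // -1 means unknown
        md5             string // "" means unknown
    }

    // fixupDecompressed erases size/hash for gzip-encoded objects when
    // --s3-decompress is set, since the go runtime hands back the
    // decompressed bytes, which no longer match the stored metadata.
    func fixupDecompressed(o *object, decompress bool) {
        if decompress && o.contentEncoding == "gzip" {
            o.size = -1
            o.md5 = ""
        }
    }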