Includes adding support for the additional size input suffixes Mi and MiB, treated as equivalent to M.
Extends binary suffix output with the letter i, e.g. Ki and Mi.
Centralizes creation of bit/byte unit strings.
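A minimal sketch of the intended behaviour, assuming the suffixes are simple power-of-two multipliers and that Mi and MiB map to the same factor as M (the function below is illustrative, not rclone's internal size-suffix code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize converts a human readable size such as "100M", "100Mi" or
// "100MiB" into a number of bytes. M, Mi and MiB all mean 1024*1024.
func parseSize(s string) (int64, error) {
	multipliers := map[string]int64{
		"K": 1 << 10, "Ki": 1 << 10, "KiB": 1 << 10,
		"M": 1 << 20, "Mi": 1 << 20, "MiB": 1 << 20,
		"G": 1 << 30, "Gi": 1 << 30, "GiB": 1 << 30,
	}
	num := strings.TrimRight(s, "KMGiB") // strip any suffix characters
	suffix := s[len(num):]
	mult := int64(1)
	if suffix != "" {
		m, ok := multipliers[suffix]
		if !ok {
			return 0, fmt.Errorf("unknown size suffix %q", suffix)
		}
		mult = m
	}
	n, err := strconv.ParseFloat(num, 64)
	if err != nil {
		return 0, err
	}
	return int64(n * float64(mult)), nil
}

func main() {
	for _, s := range []string{"100M", "100Mi", "100MiB"} {
		n, err := parseSize(s)
		fmt.Println(s, n, err) // all three print 104857600
	}
}
```

So `100M`, `100Mi` and `100MiB` all parse to the same number of bytes.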
This includes an HDFS docker image to use with the integration tests.
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
Yandex appears to ignore MIME types set as part of a PUT request or
as part of a PATCH request.
The docs make no mention of being able to set a MIME type, so set
WriteMimeType=false to indicate that the backend can't set MIME types on
uploaded files.
Create a full loop of documentation linking rclone about, the backends
overview and the individual backend pages.
Discussion:
https://github.com/rclone/rclone/pull/4774 relates.
This change was previously requested, in part, under
https://github.com/rclone/rclone/pull/4801.
Notes:
Introduce a tentative draft see-link format at the end of sections to
try, rather than lots of in-paragraph links.
Update about.go, including a link to the list of backends not supporting
about.
In the list of backends not supporting about, include a link to the
about command reference.
I appreciate there may be decisions to make going forward about whether
command links should be code formatted and use proper pretty URL links,
but I have fudged that for now.
Update backend pages that do not support about with the wording
previously used for ftp - it is in passive voice, but I can live with it
(my own wording and fault). The note is applied to a Limitations
section. If one does not already exist it is created (even if there are
other limitations with their own sections).
This implements `rclone cleanup` to remove multipart uploads over 24
hours old. It also implements the backend command
`list-multipart-uploads` to see which ones are available and `cleanup`
to delete them with a configurable expiry interval.
See #4302
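A minimal sketch of the expiry logic, assuming the backend can list its
pending multipart uploads with their start times; the type and helper
below are illustrative, and the 24h default stands in for the
configurable expiry:

```go
package main

import (
	"fmt"
	"time"
)

// pendingUpload is an illustrative stand-in for whatever the backend
// returns for an unfinished multipart upload.
type pendingUpload struct {
	ID      string
	Started time.Time
}

// expired returns the uploads that started more than maxAge ago and are
// therefore candidates for removal by cleanup.
func expired(uploads []pendingUpload, maxAge time.Duration, now time.Time) []pendingUpload {
	var out []pendingUpload
	for _, u := range uploads {
		if now.Sub(u.Started) > maxAge {
			out = append(out, u)
		}
	}
	return out
}

func main() {
	now := time.Now()
	uploads := []pendingUpload{
		{ID: "a", Started: now.Add(-48 * time.Hour)},
		{ID: "b", Started: now.Add(-1 * time.Hour)},
	}
	// With the default 24h expiry only upload "a" would be aborted.
	for _, u := range expired(uploads, 24*time.Hour, now) {
		fmt.Println("would abort", u.ID)
	}
}
```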
This was started by Fionera, finished off by Laura with fixes and more
docs from Nick.
Co-authored-by: Fionera <fionera@fionera.de>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
* Fix errcheck and golint warnings
* Remove unused constants and fix comments
* Parse error responses properly
* Fix Open with RangeOption
* Fix Move, Copy and DirMove
* Implement DirCacheFlush
* Check interfaces are correct
* Remove debugs and update overview
* Correct feature flags
* Pare replacement characters down to the minimum set
* Add to the integration tests
* Add Mkdir, Rmdir, Purge, Delete, SetModTime, Copy, Move, DirMove
* Update file size after upload
* Add Open seek
* Set private permission for new folder and uploaded file
* Add docs
* Update List function
* Fix UserSessionInfo struct
* Fix socket leaks
* Don’t close resp.Body in Open method
* Get hash when listing files
* Implement about for:
  * local, crypt, cache, drive, swift, hubic, onedrive, pcloud, dropbox
* Implement `--json` and `--full` flags for `rclone about`
* Change the About interface to return a Usage structure (see the sketch below)
* Remove operations.About as it is too thin an interface
* Implement Integration test
Relates to #1138 and #1564
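A rough sketch of the shape this implies - an optional About method
returning a Usage structure whose fields a backend fills in as best it
can (names and fields here are illustrative, not a copy of rclone's
actual definitions):

```go
package about

import "context"

// Usage reports how much space a remote has. Pointer fields let a
// backend omit any figure it cannot supply.
type Usage struct {
	Total   *int64 // total bytes available
	Used    *int64 // bytes in use
	Trashed *int64 // bytes in the trash
	Other   *int64 // other usage, e.g. gmail in google drive
	Free    *int64 // bytes free
	Objects *int64 // number of objects stored
}

// Abouter is the optional interface a backend implements to report usage.
type Abouter interface {
	About(ctx context.Context) (*Usage, error)
}
```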
* Use a single object upload when a file is less than or equal to 67108864 bytes (64 MiB)
* Use a multipart upload when a file is larger than 67108864 bytes, and
calculate its MD5SUM during the upload (see the sketch after this list)
* For Mkdir and Rmdir, add a wait for the QingStor service to sync its
status, to handle extreme cases such as creating a just-deleted bucket
or deleting a just-created bucket
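A minimal sketch of the upload path decision and of hashing while
streaming, assuming a 64 MiB single-upload cutoff; putObject and
putMultipart are hypothetical stand-ins for the real QingStor calls:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"strings"
)

const singleUploadCutoff = 64 * 1024 * 1024 // 67108864 bytes

// upload picks a single or multipart upload based on size and returns
// the MD5 of the data, computed as it streams through.
func upload(in io.Reader, size int64) (string, error) {
	hasher := md5.New()
	tee := io.TeeReader(in, hasher) // hash the bytes as they are uploaded

	var err error
	if size <= singleUploadCutoff {
		err = putObject(tee, size) // hypothetical single PUT
	} else {
		err = putMultipart(tee, size) // hypothetical multipart upload
	}
	if err != nil {
		return "", err
	}
	return hex.EncodeToString(hasher.Sum(nil)), nil
}

// Stand-ins for the real backend calls.
func putObject(in io.Reader, size int64) error    { _, err := io.Copy(io.Discard, in); return err }
func putMultipart(in io.Reader, size int64) error { _, err := io.Copy(io.Discard, in); return err }

func main() {
	sum, _ := upload(strings.NewReader("hello"), 5)
	fmt.Println("md5:", sum)
}
```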
* Fixup bitrot (rclone and Azure library)
* Implement Copy
* Add modtime to metadata under the mtime key as RFC3339Nano (see the sketch after this list)
* Make multipart upload work
* Make it pass the integration tests
* Fix uploading of zero length blobs
* Rename to azureblob as it seems likely we will do azurefile
* Add docs
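A small illustration of the mtime metadata convention - storing the
modification time as RFC3339Nano and parsing it back losslessly (a
sketch, not the backend's actual metadata plumbing):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Store the object's modification time in blob metadata under "mtime".
	modTime := time.Date(2017, 7, 25, 16, 5, 23, 123456789, time.UTC)
	metadata := map[string]string{
		"mtime": modTime.Format(time.RFC3339Nano),
	}
	fmt.Println(metadata["mtime"]) // 2017-07-25T16:05:23.123456789Z

	// Read it back when listing the blob to recover the exact time.
	parsed, err := time.Parse(time.RFC3339Nano, metadata["mtime"])
	if err != nil {
		panic(err)
	}
	fmt.Println(parsed.Equal(modTime)) // true, nanosecond precision kept
}
```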
Add a new package qingstor to support the QingStor API.
Add new unit tests for it and run them through; some test cases are
commented out because of QingStor-specific features.
Add new docs for it.
* Fix remaining problems
* Refactor to make testing easier and add a test suite
* Make path parsing more robust.
* Add single file operations
* Add MimeType reading for objects
* Add documentation
* Note go1.7+ is required to build
* Add support to the hashing module
* Add dbhashsum to list the hashes
* Add support to the dropbox module
This means objects uploaded to and downloaded from Dropbox will have
their hashes checked.
Note that after this change local objects calculate MD5, SHA1 and
DBHASH, which is excessive and needs to be fixed.
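For reference, the Dropbox content hash splits the content into 4 MiB
blocks, takes the SHA-256 of each block, and then takes the SHA-256 of
the concatenated block digests. A minimal sketch of that algorithm (not
the rclone hashing module's code):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"strings"
)

const dropboxBlockSize = 4 * 1024 * 1024 // 4 MiB

// dropboxContentHash computes the Dropbox content hash of r: SHA-256 of
// each 4 MiB block, then SHA-256 of the concatenated block digests.
func dropboxContentHash(r io.Reader) (string, error) {
	outer := sha256.New()
	buf := make([]byte, dropboxBlockSize)
	for {
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			block := sha256.Sum256(buf[:n])
			outer.Write(block[:])
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		}
		if err != nil {
			return "", err
		}
	}
	return hex.EncodeToString(outer.Sum(nil)), nil
}

func main() {
	sum, _ := dropboxContentHash(strings.NewReader("hello world"))
	fmt.Println(sum)
}
```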