Commit Graph

146 Commits

Author SHA1 Message Date
kelv
9e87f5090f s3: add requester pays option - fixes #301 2020-12-27 15:43:44 +00:00
Anagh Kumar Baranwal
8a429d12cf s3: Added error handling for error code 429 indicating too many requests
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-12-01 18:13:31 +00:00
Nick Craig-Wood
9d574c0d63 fshttp: read config from ctx not passed in ConfigInfo #4685 2020-11-26 16:40:12 +00:00
Nick Craig-Wood
2e21c58e6a fs: deglobalise the config #4685
This is done by making fs.Config private and attaching it to the
context instead.

The Config should be obtained with fs.GetConfig and fs.AddConfig
should be used to get a new mutable config that can be changed.
2020-11-26 16:40:12 +00:00
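
A minimal sketch of the new pattern (fs.GetConfig and fs.AddConfig are the functions named above; the surrounding code is illustrative only):

    import (
        "context"

        "github.com/rclone/rclone/fs"
    )

    func tweakTransfers(ctx context.Context) context.Context {
        // Read the config attached to the context (defaults if none).
        ci := fs.GetConfig(ctx)
        _ = ci.Transfers // e.g. inspect the current transfer concurrency

        // Get a child context carrying a fresh mutable copy of the config.
        ctx, ci = fs.AddConfig(ctx)
        ci.Transfers = 8 // only callers holding this ctx see the change
        return ctx
    }
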
Nick Craig-Wood
76ee3060d1 s3: Add MD5 metadata to objects uploaded with SSE-AWS/SSE-C
Before this change, small objects uploaded with SSE-AWS/SSE-C would
not have MD5 sums.

This change adds metadata for these objects in the same way that the
metadata is stored for multipart uploaded objects.

See: #1824 #2827
2020-11-25 12:28:02 +00:00
Nick Craig-Wood
4bb241c435 s3: store md5 in the Object rather than the ETag
This enables us to set the md5 so that it can be cached.

See: #1824 #2827
2020-11-25 12:28:02 +00:00
Nick Craig-Wood
a06f4c2514 s3: fix hashes on small files with aws:kms and sse-c
If rclone is configured for server side encryption - either aws:kms or
sse-c (but not sse-s3) then don't treat the ETags returned on objects
as MD5 hashes.

This fixes the upload of small files.

Fixes #1824
2020-11-25 12:28:02 +00:00
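
A hedged sketch of the guard this implies - the Options fields here are assumed names, not necessarily the backend's actual ones:

    // etagIsMD5 reports whether the ETag can be trusted as an MD5 sum.
    // opt.ServerSideEncryption and opt.SSECustomerKey are assumed names.
    func etagIsMD5(opt *Options) bool {
        if opt.ServerSideEncryption == "aws:kms" || opt.SSECustomerKey != "" {
            return false // SSE rewrites the ETag, so it isn't the MD5
        }
        return true // plain and sse-s3 objects keep an MD5 ETag
    }
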
Nick Craig-Wood
53aa03cc44 s3: complete sse-c implementation
This can now complete all operations with SSE-C enabled.

Fixes #2827
See: https://forum.rclone.org/t/issues-with-aws-s3-sse-c-getting-strange-log-entries-and-errors/20553
2020-11-25 12:28:02 +00:00
Nick Craig-Wood
8b96933e58 fs: Add context to fs.Features.Fill & fs.Features.Mask #3257 #4685 2020-11-09 18:05:54 +00:00
Nick Craig-Wood
d846210978 fs: Add context to NewFs #3257 #4685
This adds a context.Context parameter to NewFs and related calls.

This is necessary as part of reading config from the context -
backends need to be able to read the global config.
2020-11-09 18:05:54 +00:00
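
The resulting constructor shape, shown as a sketch of the contract described above:

    import (
        "context"

        "github.com/rclone/rclone/fs"
        "github.com/rclone/rclone/fs/config/configmap"
    )

    // NewFs now receives a context, from which the backend can read the
    // global config.
    func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
        ci := fs.GetConfig(ctx) // global config comes via the context
        _ = ci
        // ... construct and return the backend ...
        return nil, nil
    }
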
Josh Soref
0a6196716c docs: style: avoid double-nesting parens
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
a15f50254a docs: grammar: if, then
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
5d4f77a022 docs: grammar: Oxford comma
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
a089de0964 docs: grammar: uncountable: links
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
3068ae8447 docs: grammar: count agreement: files
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
67ff153b0c docs: grammar: article: a-file
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
e4a87f772f docs: spelling: e.g.
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
d4f38d45a5 docs: spelling: high-speed
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
bbe7eb35f1 docs: spelling: server-side
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
d0888edc0a Spelling fixes
Fix spelling of: above, already, anonymous, associated,
authentication, bandwidth, because, between, blocks, calculate,
candidates, cautious, changelog, cleaner, clipboard, command,
completely, concurrently, considered, constructs, corrupt, current,
daemon, dependencies, deprecated, directory, dispatcher, download,
eligible, ellipsis, encrypter, endpoint, entrieslist, essentially,
existing writers, existing, expires, filesystem, flushing, frequently,
hierarchy, however, implementation, implements, inaccurate,
individually, insensitive, longer, maximum, metadata, modified,
multipart, namedirfirst, nextcloud, obscured, opened, optional,
owncloud, pacific, passphrase, password, permanently, persimmon,
positive, potato, protocol, quota, receiving, recommends, referring,
requires, revisited, satisfied, satisfies, satisfy, semver,
serialized, session, storage, strategies, stringlist, successful,
supported, surprise, temporarily, temporary, transactions, unneeded,
update, uploads, wrapped

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-14 15:21:31 +01:00
Anagh Kumar Baranwal
fc5b14b620
s3: Added --s3-disable-http2 to disable http/2
Fixes #4673

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-10-13 17:11:22 +01:00
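
For background, Go's standard library disables HTTP/2 on a transport by clearing TLSNextProto; a sketch of the mechanism such a flag plausibly toggles (not the backend's actual code):

    import (
        "crypto/tls"
        "net/http"
    )

    func newHTTP1Client() *http.Client {
        // A non-nil, empty TLSNextProto map stops HTTP/2 negotiation,
        // forcing HTTP/1.1 over TLS (the stdlib-documented mechanism).
        t := &http.Transport{
            TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
        }
        return &http.Client{Transport: t}
    }
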
Anagh Kumar Baranwal
e3a5bb9b48 s3: Add missing regions for AWS
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-10-06 16:54:42 +01:00
Christopher Stewart
f3cf6fcdd7
s3: fix spelling mistake
Fix spelling mistake "patific" => "pacific"
2020-09-18 12:03:13 +01:00
wjielai
22937e8982
docs: add Tencent COS to s3 provider list - fixes #4468
* add Tencent COS to s3 provider list.

Co-authored-by: wjielai <wjielai@tencent.com>
2020-09-08 16:34:25 +01:00
Nick Craig-Wood
725ae91387 s3: reduce the default --s3-copy-cutoff to < 5GB
The maximum value for the --s3-copy-cutoff should be 5GiB as tested
with AWS S3.

However b2 have implemented this as 5GB rather than 5GiB, so having
the default at 5 GiB makes the b2-to-s3 server-side copy of a large
file fail by default.

This patch sets the default to 4768 MiB which is slightly less than
5GB.

This should have very little effect on anything.

In future rclone could lower this limit further if Copy can multithread.

See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/76
2020-09-01 18:53:29 +01:00
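
The arithmetic behind 4768 MiB (a check on the numbers above, not part of the commit):

    4768 MiB = 4768 × 1024 × 1024 = 4,999,610,368 bytes
    5 GB                          = 5,000,000,000 bytes
    5 GiB                         = 5,368,709,120 bytes

so 4768 MiB stays just under the stricter 5 GB interpretation.
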
Nick Craig-Wood
b7dd3ce608 s3: preserve metadata when doing multipart copy
Before this change the s3 multipart server side copy was not
preserving the metadata of the object. This was most noticeable
because the modtime was not preserved.

This change fetches the metadata from the object before starting the
copy and overwrites it if required.

It will also mean any other metadata is preserved.

See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/70
2020-09-01 18:39:30 +01:00
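
A hedged sketch of the fetch-then-copy flow with the AWS SDK for Go (variable names, error handling, and the surrounding setup are illustrative; the backend's real code differs):

    head, err := svc.HeadObjectWithContext(ctx, &s3.HeadObjectInput{
        Bucket: aws.String(srcBucket),
        Key:    aws.String(srcKey),
    })
    if err != nil {
        return err
    }
    // Start the multipart copy with the source metadata attached so it
    // survives the copy (this is what preserves the modtime).
    _, err = svc.CreateMultipartUploadWithContext(ctx, &s3.CreateMultipartUploadInput{
        Bucket:      aws.String(dstBucket),
        Key:         aws.String(dstKey),
        Metadata:    head.Metadata,
        ContentType: head.ContentType,
    })
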
Egor Margineanu
921e384c4d
s3: update IBM COS endpoints - fixes #4522 2020-08-30 17:21:11 +01:00
Nick Craig-Wood
801a820c54 s3: fix detection of bucket existing
This reverts part of

151f03378f s3: fix upload of single files into buckets without create permission

This erroneously assumed that a HEAD request on a non-existent object
would return "NotFound" only if the bucket was found. In fact it also
returns "NotFound" when the bucket isn't found.

This will break the fix for #4297 - however that can be made to work
using the new --s3-assume-bucket-exists flag
2020-08-21 13:28:08 +01:00
Nick Craig-Wood
d5f4c74697 s3: implement cleanup and backend command to list & remove multipart uploads
This implements `rclone cleanup` to remove multipart uploads over 24
hours old. It also implements the backend command
`list-multipart-uploads` to see which ones are available and `cleanup`
to delete them with a configurable expiry interval.

See #4302
2020-07-28 11:37:46 +01:00
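
Typical invocations might look like this (the `-o max-age` option is the configurable expiry mentioned above; treat the exact spelling as illustrative):

    rclone backend list-multipart-uploads s3:bucket
    rclone backend cleanup s3:bucket -o max-age=24h
    rclone cleanup s3:bucket
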
Nick Craig-Wood
2288a5c617 s3: implement profile and shared_credentials_file options
Previously it was impossible to use two different profiles at the same
time - these config vars enable that.

See: https://forum.rclone.org/t/s3-source-destination-named-profile/17417
2020-07-28 11:32:32 +01:00
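
For example, two remotes could then point at different profiles in the same credentials file (remote names here are made up):

    [s3-work]
    type = s3
    provider = AWS
    profile = work
    shared_credentials_file = ~/.aws/credentials

    [s3-personal]
    type = s3
    provider = AWS
    profile = personal
    shared_credentials_file = ~/.aws/credentials
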
Nick Craig-Wood
f406dbbb4d s3: add --s3-no-check-bucket for minimising rclone transactions and perms
Fixes #4449
2020-07-27 17:49:40 +01:00
Nick Craig-Wood
80d2f38192 s3: fix bucket Region auto detection when Region unset in config #2915
Previous to this fix, if neither Region nor Endpoint was set, then we
set the endpoint to "https://s3.amazonaws.com/".

This is unnecessary because if the Region alone isn't set then we set
it to "us-east-1", which has the same endpoint.

Having the endpoint set breaks the bucket region auto detection with
the error "Failed to update region for bucket: can't set region to
"xxx" as endpoint is set".

This fix removes that check.
2020-07-10 17:16:59 +01:00
Nick Craig-Wood
c820576329 fs: define SlowModTime and SlowHash features in the relevant backends 2020-06-30 12:01:36 +01:00
David
9058ec32e1 s3: Use regional s3 us-east-1 endpoint 2020-06-26 16:25:52 +01:00
Nick Craig-Wood
fd7c63bc78 s3: add backend restore command to restore objects from GLACIER
See: https://forum.rclone.org/t/rclone-settier-fails-with-scaleway-entitytoolarge/17384
2020-06-25 21:33:23 +01:00
Nick Craig-Wood
5f75444ef6 s3: cancel in progress multipart uploads and copies on rclone exit #4300 2020-06-25 12:55:56 +01:00
Nick Craig-Wood
85bcacac90 s3: Cap expiry duration to 1 Week and return error when sharing dir 2020-06-18 17:50:50 +01:00
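
The 1 week cap matches the documented maximum lifetime of S3 presigned URLs; a sketch of the capping logic using the AWS SDK for Go (variable names are illustrative):

    const maxExpiry = 7 * 24 * time.Hour // S3 presigned URL ceiling
    if expiry > maxExpiry {
        expiry = maxExpiry
    }
    req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    })
    link, err := req.Presign(expiry)
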
Vincent Feltz
f4d7e41f24 s3: add Scaleway provider - fixes #4338 2020-06-13 11:55:37 +01:00
Nick Craig-Wood
2ea15a72bc s3: fix --header-upload - Fixes #4303
Before this change we were setting the headers on the PUT
request for normal and multipart uploads. For normal uploads this caused the error

    403 Forbidden: There were headers present in the request which were not signed

After this fix we set the headers in the object upload request itself
as the s3 SDK expects.

This means that we only support a limited range of headers

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Amz-Tagging
- X-Amz-Meta-

Note that the last of those is for setting custom metadata in the form
"X-Amz-Meta-Key: value".

This now works for both multipart and single-part uploads.

See also #59
2020-06-10 12:28:48 +01:00
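
Given the supported list above, usage is along these lines (paths and header values are illustrative):

    rclone copy --header-upload "Cache-Control: max-age=3600" \
                --header-upload "X-Amz-Meta-Source: laptop" \
                /local/dir s3:bucket/dir
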
Kamil Trzciński
7458d37d2a
s3: add max_upload_parts support - fixes #4159
* s3: add `max_upload_parts` support

This allows configuring the maximum number of chunks used to upload a file:

- Supports Scaleway, which currently has a limit of 1k chunks
- Reduces cost on S3, where each request costs money, at the expense of memory used

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2020-06-08 18:22:34 +01:00
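
A worked example of the trade-off (numbers are illustrative; rclone is expected to raise the chunk size when a file would otherwise need more parts than allowed):

    1000 parts × 5 MiB default chunk ≈ 4.88 GiB maximum object size
    50 GiB ÷ 1000 parts = 51.2 MiB minimum chunk size for a 50 GiB file
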
Roman Kredentser
c0521791db s3: implement link sharing with PublicLink 2020-06-05 14:51:05 +01:00
Nick Craig-Wood
151f03378f s3: fix upload of single files into buckets without create permission
Before this change, attempting to upload a single file into an s3
bucket which did not have create permission gave an "AccessDenied:
Access Denied" error when rclone tried to create the bucket.

This was masked until e2bf91452a fixed the
reporting of bucket creation errors.

This fix marks the bucket as OK if a fetch on an object indicates it
is OK. This stops rclone thinking it has to create the bucket in the
first place.

Fixes #4297
2020-06-02 14:33:21 +01:00
Martin Michlmayr
4aee962233 doc: fix typos throughout docs and code 2020-05-20 15:54:51 +01:00
Nick Craig-Wood
8a58e0235d s3: don't leak memory or tokens in edge cases for multipart upload 2020-05-14 07:48:18 +01:00
Nick Craig-Wood
4e869e03f7 s3: improve docs for --s3-disable-checksum 2020-04-28 17:47:10 +01:00
Tim Gallant
5cb7229a16 s3: add support for HTTPOption 2020-04-23 11:07:21 +01:00
Nick Craig-Wood
f8039deb7c s3: fix detection of BucketAlreadyOwnedByYou and BucketAlreadyExists error
This was being silently ignored until this commit

e2bf91452a s3: report errors on bucket creation (mkdir) correctly
2020-04-22 18:14:03 +01:00
Nick Craig-Wood
e2bf91452a s3: report errors on bucket creation (mkdir) correctly
Before this fix errors on bucket creation were being silently
swallowed.

See: https://forum.rclone.org/t/rclone-with-brand-new-aws-account-for-s3/15590
2020-04-15 13:13:13 +01:00
Michał Matczuk
6893ce0bbf s3: do not resize buf on put to memBuf
This is handled by the Pool implementation.
2020-04-11 16:35:48 +01:00
Michał Matczuk
399cf18013 s3: use single memory pool
Previously we had a map of pools for different chunk sizes.
In practice the mapping is not very useful and requires a lock.
Pools of sizes other than ChunkSize can only happen when we have a huge file (over 10k * ChunkSize),
and we would need a bunch of identically sized huge files for the mapping to pay off.
In such a case most likely ChunkSize should be increased.

The mapping and its lock are replaced with a single initialised pool for ChunkSize; in other cases a pool is allocated and freed on a per-file basis.
2020-04-11 16:34:05 +01:00
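
A hedged sketch of the single-pool shape this describes (rclone's own lib/pool differs in detail; chunkSize here stands in for the configured ChunkSize):

    import "sync"

    // One shared pool for the common chunkSize; other sizes fall back to
    // a plain allocation, so the hot path needs no lock or size lookup.
    var chunkPool = sync.Pool{
        New: func() interface{} { return make([]byte, chunkSize) },
    }

    func getBuf(size int) []byte {
        if size == chunkSize {
            return chunkPool.Get().([]byte)
        }
        return make([]byte, size) // rare: huge-file case, freed per file
    }

    func putBuf(buf []byte) {
        if len(buf) == chunkSize {
            chunkPool.Put(buf)
        }
    }
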