Google Cloud Client Libraries for Go

Go packages for Google Cloud Platform services.

import "cloud.google.com/go"

To install the packages on your system, use go get:

$ go get -u cloud.google.com/go/...

NOTE: Some of these packages are under development, and may occasionally
make backwards-incompatible changes.

NOTE: This GitHub repo is a mirror of https://code.googlesource.com/gocloud.

News

May 18, 2018

v0.23.0

  • bigquery: Add DDL stats to query statistics.
  • bigtable:
    • cbt: Add cells-per-column limit for row lookup.
    • cbt: Make it possible to combine read filters.
  • dlp: v2beta2 client removed. Use the v2 client instead.
  • firestore, spanner: Fix compilation errors due to protobuf changes.

May 8, 2018

v0.22.0

  • bigtable:

    • cbt: Support cells per column limit for row read.
    • bttest: Correctly handle empty RowSet.
    • Fix ReadModifyWrite operation in emulator.
    • Fix API path in GetCluster.
  • bigquery:

    • BEHAVIOR CHANGE: Retry on 503 status code.
    • Add dataset.DeleteWithContents.
    • Add SchemaUpdateOptions for query jobs.
    • Add Timeline to QueryStatistics.
    • Add more stats to ExplainQueryStage.
    • Support Parquet data format.
  • datastore:

    • Support omitempty for times.
  • dlp:

    • BREAKING CHANGE: Remove v1beta1 client. Please migrate to the v2 client,
      which is now out of beta.
    • Add v2 client.
  • firestore:

    • BEHAVIOR CHANGE: Treat set({}, MergeAll) as valid.
  • iam:

    • Support JWT signing via SignJwt callopt.
  • profiler:

    • BEHAVIOR CHANGE: PollForSerialOutput returns an error when context.Done.
    • BEHAVIOR CHANGE: Increase the initial backoff to 1 minute.
    • Avoid returning empty serial port output.
  • pubsub:

    • BEHAVIOR CHANGE: Don't backoff during next retryable error once stream is healthy.
    • BEHAVIOR CHANGE: Don't backoff on EOF.
    • pstest: Support Acknowledge and ModifyAckDeadline RPCs.
  • redis:

    • Add v1 beta Redis client.
  • spanner:

    • Support SessionLabels.
  • speech:

    • Add api v1 beta1 client.
  • storage:

    • BEHAVIOR CHANGE: Retry reads when retryable error occurs.
    • Fix delete of object in requester-pays bucket.
    • Support KMS integration.

April 9, 2018

v0.21.0

  • bigquery:

    • Add OpenCensus tracing.
  • firestore:

    • BREAKING CHANGE: If a document does not exist, return a DocumentSnapshot
      whose Exists method returns false. DocumentRef.Get and Transaction.Get
      return the non-nil DocumentSnapshot in addition to a NotFound error.
      DocumentRef.GetAll and Transaction.GetAll return a non-nil
      DocumentSnapshot instead of nil.
    • Add DocumentIterator.Stop. Call Stop whenever you are done with a
      DocumentIterator.
    • Added Query.Snapshots and DocumentRef.Snapshots, which provide realtime
      notification of updates. See https://cloud.google.com/firestore/docs/query-data/listen.
    • Canceling an RPC now always returns a grpc.Status with codes.Canceled.
  • spanner:

    • Add CommitTimestamp, which supports inserting the commit timestamp of a
      transaction into a column.

March 22, 2018

v0.20.0

  • bigquery: Support SchemaUpdateOptions for load jobs.

  • bigtable:

    • Add SampleRowKeys.
    • cbt: Support union, intersection GCPolicy.
    • Retry admin RPCs.
    • Add trace spans to retries.
  • datastore: Add OpenCensus tracing.

  • firestore:

    • Fix queries involving Null and NaN.
    • Allow Timestamp protobuffers for time values.
  • logging: Add a WriteTimeout option.

  • spanner: Support Batch API.

  • storage: Add OpenCensus tracing.

February 26, 2018

v0.19.0

  • bigquery:

    • Support customer-managed encryption keys.
  • bigtable:

    • Improved emulator support.
    • Support GetCluster.
  • datastore:

    • Add general mutations.
    • Support pointer struct fields.
    • Support transaction options.
  • firestore:

    • Add Transaction.GetAll.
    • Support document cursors.
  • logging:

    • Support concurrent RPCs to the service.
    • Support per-entry resources.
  • profiler:

    • Add config options to disable heap and thread profiling.
    • Read the project ID from $GOOGLE_CLOUD_PROJECT when it's set.
  • pubsub:

    • BEHAVIOR CHANGE: Release flow control after ack/nack (instead of after the
      callback returns).
    • Add SubscriptionInProject.
    • Add OpenCensus instrumentation for streaming pull.
  • storage:

    • Support CORS.

January 18, 2018

v0.18.0

  • bigquery:

    • Marked stable.
    • Schema inference of nullable fields supported.
    • Added TimePartitioning to QueryConfig.
  • firestore: Data provided to DocumentRef.Set with a Merge option can contain
    Delete sentinels.

  • logging: Clients can accept parent resources other than projects.

  • pubsub:

    • pubsub/pstest: A lightweight fake for pubsub. Experimental; feedback welcome.
    • Support updating more subscription metadata: AckDeadline,
      RetainAckedMessages and RetentionDuration.
  • oslogin/apiv1beta: New client for the Cloud OS Login API.

  • rpcreplay: A package for recording and replaying gRPC traffic.

  • spanner:

    • Add a ReadWithOptions that supports a row limit, as well as an index.
    • Support query plan and execution statistics.
    • Added OpenCensus support.
  • storage: Clarify checksum validation for gzipped files (it is not validated
    when the file is served uncompressed).

December 11, 2017

v0.17.0

  • firestore BREAKING CHANGES:
    • Remove UpdateMap and UpdateStruct; rename UpdatePaths to Update.
      Change
      docref.UpdateMap(ctx, map[string]interface{}{"a.b": 1})
      to
      docref.Update(ctx, []firestore.Update{{Path: "a.b", Value: 1}})

      Change
      docref.UpdateStruct(ctx, []string{"Field"}, aStruct)
      to
      docref.Update(ctx, []firestore.Update{{Path: "Field", Value: aStruct.Field}})

    • Rename MergePaths to Merge; require args to be FieldPaths.

    • A value stored as an integer can be read into a floating-point field, and vice versa.

  • bigtable/cmd/cbt:
    • Support deleting a column.
    • Add regex option for row read.
  • spanner: Mark stable.
  • storage:
    • Add Reader.ContentEncoding method.
    • Fix handling of SignedURL headers.
  • bigquery:
    • If Uploader.Put is called with no rows, it returns nil without making a
      call.
    • Schema inference supports the "nullable" option in struct tags for
      non-required fields.
    • TimePartitioning supports "Field".

Older news

Supported APIs

Google API            Status  Package
BigQuery              stable  cloud.google.com/go/bigquery
Bigtable              stable  cloud.google.com/go/bigtable
Container             alpha   cloud.google.com/go/container/apiv1
Data Loss Prevention  alpha   cloud.google.com/go/dlp/apiv2beta1
Datastore             stable  cloud.google.com/go/datastore
Debugger              alpha   cloud.google.com/go/debugger/apiv2
ErrorReporting        alpha   cloud.google.com/go/errorreporting
Firestore             beta    cloud.google.com/go/firestore
Language              stable  cloud.google.com/go/language/apiv1
Logging               stable  cloud.google.com/go/logging
Monitoring            beta    cloud.google.com/go/monitoring/apiv3
OS Login              alpha   cloud.google.com/compute/docs/oslogin/rest
Pub/Sub               beta    cloud.google.com/go/pubsub
Spanner               stable  cloud.google.com/go/spanner
Speech                stable  cloud.google.com/go/speech/apiv1
Storage               stable  cloud.google.com/go/storage
Translation           stable  cloud.google.com/go/translate
Video Intelligence    beta    cloud.google.com/go/videointelligence/apiv1beta1
Vision                stable  cloud.google.com/go/vision/apiv1

Alpha status: the API is still being actively developed. As a
result, it might change in backward-incompatible ways and is not recommended
for production use.

Beta status: the API is largely complete, but still has outstanding
features and bugs to be addressed. There may be minor backwards-incompatible
changes where necessary.

Stable status: the API is mature and ready for production use. We will
continue addressing bugs and feature requests.

Documentation and examples are available at
https://godoc.org/cloud.google.com/go

Visit or join the
google-api-go-announce group
for updates on these packages.

Go Versions Supported

We support the two most recent major versions of Go. If Google App Engine uses
an older version, we support that as well. You can see which versions are
currently supported by looking at the lines following go: in
.travis.yml.

Authorization

By default, each API uses Google Application Default Credentials
for authorization when calling the API endpoints. This allows your
application to run in many environments without requiring explicit configuration.

client, err := storage.NewClient(ctx)

To authorize using a
JSON key file,
pass
option.WithServiceAccountFile
to the NewClient function of the desired package. For example:

client, err := storage.NewClient(ctx, option.WithServiceAccountFile("path/to/keyfile.json"))

You can exert more control over authorization by using the
golang.org/x/oauth2 package to
create an oauth2.TokenSource. Then pass
option.WithTokenSource
to the NewClient function:

tokenSource := ...
client, err := storage.NewClient(ctx, option.WithTokenSource(tokenSource))
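
For example, here is a minimal sketch (one possible approach, not the only one) that
derives a TokenSource from Application Default Credentials via the
golang.org/x/oauth2/google package; the storage scope shown is illustrative:

// import "golang.org/x/oauth2/google"
tokenSource, err := google.DefaultTokenSource(ctx, storage.ScopeReadWrite)
if err != nil {
	log.Fatal(err)
}
client, err := storage.NewClient(ctx, option.WithTokenSource(tokenSource))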

Cloud Datastore

Example Usage

First create a datastore.Client to use throughout your application:

client, err := datastore.NewClient(ctx, "my-project-id")
if err != nil {
	log.Fatal(err)
}

Then use that client to interact with the API:

type Post struct {
	Title       string
	Body        string `datastore:",noindex"`
	PublishedAt time.Time
}
keys := []*datastore.Key{
	datastore.NameKey("Post", "post1", nil),
	datastore.NameKey("Post", "post2", nil),
}
posts := []*Post{
	{Title: "Post 1", Body: "...", PublishedAt: time.Now()},
	{Title: "Post 2", Body: "...", PublishedAt: time.Now()},
}
if _, err := client.PutMulti(ctx, keys, posts); err != nil {
	log.Fatal(err)
}
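
To read an entity back later, here is a minimal sketch using one of the keys from the
example above:

var post Post
if err := client.Get(ctx, datastore.NameKey("Post", "post1", nil), &post); err != nil {
	log.Fatal(err)
}
fmt.Println(post.Title)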

Cloud Storage

Example Usage

First create a storage.Client to use throughout your application:

client, err := storage.NewClient(ctx)
if err != nil {
	log.Fatal(err)
}
// Read object1 from the bucket.
rc, err := client.Bucket("bucket").Object("object1").NewReader(ctx)
if err != nil {
	log.Fatal(err)
}
defer rc.Close()
body, err := ioutil.ReadAll(rc)
if err != nil {
	log.Fatal(err)
}
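
Writing an object is similar; here is a minimal sketch (the bucket and object names are
illustrative):

wc := client.Bucket("bucket").Object("object2").NewWriter(ctx)
if _, err := wc.Write([]byte("hello world")); err != nil {
	log.Fatal(err)
}
// Close flushes the data and completes the upload; check its error too.
if err := wc.Close(); err != nil {
	log.Fatal(err)
}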

Cloud Pub/Sub

Example Usage

First create a pubsub.Client to use throughout your application:

client, err := pubsub.NewClient(ctx, "project-id")
if err != nil {
	log.Fatal(err)
}

Then use the client to publish and subscribe:

// Publish "hello world" on topic1.
topic := client.Topic("topic1")
res := topic.Publish(ctx, &pubsub.Message{
	Data: []byte("hello world"),
})
// The publish happens asynchronously.
// Later, you can get the result from res:
...
msgID, err := res.Get(ctx)
if err != nil {
	log.Fatal(err)
}

// Use a callback to receive messages via subscription1.
sub := client.Subscription("subscription1")
err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
	fmt.Println(m.Data)
	m.Ack() // Acknowledge that we've consumed the message.
})
if err != nil {
	log.Println(err)
}
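
The snippets above assume that topic1 and subscription1 already exist. If not, they can
be created first; a minimal sketch (the topic and subscription IDs are illustrative):

topic, err := client.CreateTopic(ctx, "topic1")
if err != nil {
	log.Fatal(err)
}
if _, err := client.CreateSubscription(ctx, "subscription1", pubsub.SubscriptionConfig{Topic: topic}); err != nil {
	log.Fatal(err)
}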

Cloud BigQuery

Example Usage

First create a bigquery.Client to use throughout your application:

c, err := bigquery.NewClient(ctx, "my-project-ID")
if err != nil {
	// TODO: Handle error.
}

Then use that client to interact with the API:

// Construct a query.
q := c.Query(`
    SELECT year, SUM(number)
    FROM [bigquery-public-data:usa_names.usa_1910_2013]
    WHERE name = "William"
    GROUP BY year
    ORDER BY year
`)
// Execute the query.
it, err := q.Read(ctx)
if err != nil {
	// TODO: Handle error.
}
// Iterate through the results.
for {
	var values []bigquery.Value
	err := it.Next(&values)
	if err == iterator.Done {
		break
	}
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(values)
}
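
To write data, you can stream rows with an Uploader; here is a minimal sketch (the
dataset, table, and struct are illustrative and assumed to match an existing table's schema):

type Score struct {
	Name string
	Num  int
}
u := c.Dataset("my_dataset").Table("scores").Uploader()
// The schema is inferred from the struct; rows are streamed to the table.
if err := u.Put(ctx, []*Score{{Name: "bill", Num: 12}}); err != nil {
	// TODO: Handle error.
}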

Stackdriver Logging

Example Usage

First create a logging.Client to use throughout your application:

ctx := context.Background()
client, err := logging.NewClient(ctx, "my-project")
if err != nil {
	// TODO: Handle error.
}

Usually, you'll want to add log entries to a buffer to be periodically flushed
(automatically and asynchronously) to the Stackdriver Logging service.

logger := client.Logger("my-log")
logger.Log(logging.Entry{Payload: "something happened!"})
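
If you prefer the standard library's *log.Logger interface, you can wrap a Logger; a
minimal sketch (the severity is illustrative):

stdlg := logger.StandardLogger(logging.Info)
stdlg.Println("something happened!")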

Close your client before your program exits, to flush any buffered log entries.

err = client.Close()
if err != nil {
	// TODO: Handle error.
}

Cloud Spanner

Example Usage

First create a spanner.Client to use throughout your application:

client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
if err != nil {
	log.Fatal(err)
}
// Simple Reads And Writes
_, err = client.Apply(ctx, []*spanner.Mutation{
	spanner.Insert("Users",
		[]string{"name", "email"},
		[]interface{}{"alice", "a@example.com"})})
if err != nil {
	log.Fatal(err)
}
row, err := client.Single().ReadRow(ctx, "Users",
	spanner.Key{"alice"}, []string{"email"})
if err != nil {
	log.Fatal(err)
}
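
For read-modify-write workflows, you can group reads and buffered mutations in a
read-write transaction. Here is a minimal sketch (the table and column names follow the
example above):

_, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
	row, err := txn.ReadRow(ctx, "Users", spanner.Key{"alice"}, []string{"email"})
	if err != nil {
		return err
	}
	var email string
	if err := row.Column(0, &email); err != nil {
		return err
	}
	// Buffer a mutation; it is applied atomically when the transaction commits.
	return txn.BufferWrite([]*spanner.Mutation{
		spanner.Update("Users", []string{"name", "email"}, []interface{}{"alice", email}),
	})
})
if err != nil {
	log.Fatal(err)
}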

Contributing

Contributions are welcome. Please see the
CONTRIBUTING
document for details. We're using Gerrit for our code reviews. Please don't open pull
requests against this repo; new pull requests will be automatically closed.

Please note that this project is released with a Contributor Code of Conduct.
By participating in this project you agree to abide by its terms.
See Contributor Code of Conduct
for more information.