Commit 6e3be7f3 authored by Yong Tang

Update vendor with `go dep`

This fix updates vendor with `go dep`
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
parent e08fb277


sudo: false
language: go
go:
- 1.6
- 1.7
- 1.8
install:
- go get -v cloud.google.com/go/...
script:
- openssl aes-256-cbc -K $encrypted_912ff8fa81ad_key -iv $encrypted_912ff8fa81ad_iv -in key.json.enc -out key.json -d
- GCLOUD_TESTS_GOLANG_PROJECT_ID="dulcet-port-762" GCLOUD_TESTS_GOLANG_KEY="$(pwd)/key.json"
  go test -race -v cloud.google.com/go/...
env:
matrix:
# The GCLOUD_TESTS_API_KEY environment variable.
secure: VdldogUOoubQ60LhuHJ+g/aJoBiujkSkWEWl79Zb8cvQorcQbxISS+JsOOp4QkUOU4WwaHAm8/3pIH1QMWOR6O78DaLmDKi5Q4RpkVdCpUXy+OAfQaZIcBsispMrjxLXnqFjo9ELnrArfjoeCTzaX0QTCfwQwVmigC8rR30JBKI=
# This is the official list of cloud authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS files.
# See the latter for an explanation.
# Names should be added to this file as:
# Name or Organization <email address>
# The email address is not required for organizations.
Filippo Valsorda <hi@filippo.io>
Google Inc.
Ingo Oeser <nightlyone@googlemail.com>
Palm Stone Games, Inc.
Paweł Knap <pawelknap88@gmail.com>
Péter Szilágyi <peterke@gmail.com>
Tyler Treat <ttreat31@gmail.com>
# Contributing
1. Sign one of the contributor license agreements below.
1. `go get golang.org/x/review/git-codereview` to install the code reviewing tool.
1. You will need to ensure that your `GOBIN` directory (by default
`$GOPATH/bin`) is in your `PATH` so that git can find the command.
1. Get the cloud package by running `go get -d cloud.google.com/go`.
1. If you have already checked out the source, make sure that the remote git
origin is https://code.googlesource.com/gocloud:
git remote set-url origin https://code.googlesource.com/gocloud
1. Make sure your auth is configured correctly by visiting
https://code.googlesource.com, clicking "Generate Password", and following
the directions.
1. Make changes and create a change by running `git codereview change <name>`,
provide a commit message, and use `git codereview mail` to create a Gerrit CL.
1. Keep amending to the change with `git codereview change` and mail as you receive
feedback. Each new mailed amendment will create a new patch set for your change in
Gerrit. (A consolidated command sequence is sketched below.)
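For reference, the workflow above boils down to roughly this command sequence (the change name is a placeholder):
``` sh
# Install the review tool and fetch the package.
go get golang.org/x/review/git-codereview
go get -d cloud.google.com/go
# Point the git origin at the Gerrit host.
git remote set-url origin https://code.googlesource.com/gocloud
# Create (or amend) a change, then mail it as a Gerrit patch set.
git codereview change my-change
git codereview mail
```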
## Integration Tests
In addition to the unit tests, you may run the integration test suite.
To run the integration tests, you must create and configure a project in the
Google Developers Console.
After creating a project, you must [create a service account](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount).
Ensure the project-level **Owner** [IAM role](https://console.cloud.google.com/iam-admin/iam/project)
(or the **Editor** and **Logs Configuration Writer** roles) is added to the
service account.
Once you have created a project, set the following environment variables to be able to
run against the actual APIs (an example export sequence follows the list).
- **GCLOUD_TESTS_GOLANG_PROJECT_ID**: Developers Console project's ID (e.g. bamboo-shift-455)
- **GCLOUD_TESTS_GOLANG_KEY**: The path to the JSON key file.
- **GCLOUD_TESTS_API_KEY**: Your API key.
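For example, in a POSIX shell (all values are placeholders):
``` sh
export GCLOUD_TESTS_GOLANG_PROJECT_ID=bamboo-shift-455
export GCLOUD_TESTS_GOLANG_KEY=$HOME/key.json
export GCLOUD_TESTS_API_KEY=your-api-key
```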
Install the [gcloud command-line tool][gcloudcli] to your machine and use it
to create some resources used in integration tests.
From the project's root directory:
``` sh
# Set the default project in your env.
$ gcloud config set project $GCLOUD_TESTS_GOLANG_PROJECT_ID
# Authenticate the gcloud tool with your account.
$ gcloud auth login
# Create the indexes used in the datastore integration tests.
$ gcloud preview datastore create-indexes datastore/testdata/index.yaml
# Create a Google Cloud storage bucket with the same name as your test project,
# and with the Stackdriver Logging service account as owner, for the sink
# integration tests in logging.
$ gsutil mb gs://$GCLOUD_TESTS_GOLANG_PROJECT_ID
$ gsutil acl ch -g cloud-logs@google.com:O gs://$GCLOUD_TESTS_GOLANG_PROJECT_ID
# Create a Spanner instance for the spanner integration tests.
$ gcloud beta spanner instances create go-integration-test --config regional-us-central1 --nodes 1 --description 'Instance for go client test'
# NOTE: Spanner instances are priced by the node-hour, so you may want to delete
# the instance after testing with 'gcloud beta spanner instances delete'.
```
Once you've set the environment variables, you can run the integration tests by
running:
``` sh
$ go test -v cloud.google.com/go/...
```
## Contributor License Agreements
Before we can accept your pull requests you'll need to sign a Contributor
License Agreement (CLA):
- **If you are an individual writing original source code** and **you own the
intellectual property**, then you'll need to sign an [individual CLA][indvcla].
- **If you work for a company that wants to allow you to contribute your work**,
then you'll need to sign a [corporate CLA][corpcla].
You can sign these electronically (just scroll to the bottom). After that,
we'll be able to accept your pull requests.
## Contributor Code of Conduct
As contributors and maintainers of this project,
and in the interest of fostering an open and welcoming community,
we pledge to respect all people who contribute through reporting issues,
posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.
We are committed to making participation in this project
a harassment-free experience for everyone,
regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information,
such as physical or electronic
addresses, without explicit permission
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct.
By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently
applying these principles to every aspect of managing this project.
Project maintainers who do not follow or enforce the Code of Conduct
may be permanently removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported by opening an issue
or contacting one or more of the project maintainers.
This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0,
available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/)
[gcloudcli]: https://developers.google.com/cloud/sdk/gcloud/
[indvcla]: https://developers.google.com/open-source/cla/individual
[corpcla]: https://developers.google.com/open-source/cla/corporate
# People who have agreed to one of the CLAs and can contribute patches.
# The AUTHORS file lists the copyright holders; this file
# lists people. For example, Google employees are listed here
# but not in AUTHORS, because Google holds the copyright.
#
# https://developers.google.com/open-source/cla/individual
# https://developers.google.com/open-source/cla/corporate
#
# Names should be added to this file as:
# Name <email address>
# Keep the list alphabetically sorted.
Alexis Hunt <lexer@google.com>
Andreas Litt <andreas.litt@gmail.com>
Andrew Gerrand <adg@golang.org>
Brad Fitzpatrick <bradfitz@golang.org>
Burcu Dogan <jbd@google.com>
Dave Day <djd@golang.org>
David Sansome <me@davidsansome.com>
David Symonds <dsymonds@golang.org>
Filippo Valsorda <hi@filippo.io>
Glenn Lewis <gmlewis@google.com>
Ingo Oeser <nightlyone@googlemail.com>
Johan Euphrosine <proppy@google.com>
Jonathan Amsterdam <jba@google.com>
Luna Duclos <luna.duclos@palmstonegames.com>
Magnus Hiie <magnus.hiie@gmail.com>
Michael McGreevy <mcgreevy@golang.org>
Omar Jarjur <ojarjur@google.com>
Paweł Knap <pawelknap88@gmail.com>
Péter Szilágyi <peterke@gmail.com>
Sarah Adams <shadams@google.com>
Thanatat Tamtan <acoshift@gmail.com>
Toby Burress <kurin@google.com>
Tuo Shan <shantuo@google.com>
Tyler Treat <ttreat31@gmail.com>
# This file configures AppVeyor (http://www.appveyor.com),
# a Windows-based CI service similar to Travis.
# Identifier for this run
version: "{build}"
# Clone the repo into this path, which conforms to the standard
# Go workspace structure.
clone_folder: c:\gopath\src\cloud.google.com\go
environment:
GOPATH: c:\gopath
GCLOUD_TESTS_GOLANG_PROJECT_ID: dulcet-port-762
GCLOUD_TESTS_GOLANG_KEY: c:\gopath\src\cloud.google.com\go\key.json
KEYFILE_CONTENTS:
secure: IvRbDAhM2PIQqzVkjzJ4FjizUvoQ+c3vG/qhJQG+HlZ/L5KEkqLu+x6WjLrExrNMyGku4znB2jmbTrUW3Ob4sGG+R5vvqeQ3YMHCVIkw5CxY+/bUDkW5RZWsVbuCnNa/vKsWmCP+/sZW6ICe29yKJ2ZOb6QaauI4s9R6j+cqBbU9pumMGYFRb0Rw3uUU7DKmVFCy+NjTENZIlDP9rmjANgAzigowJJEb2Tg9sLlQKmQeKiBSRN8lKc5Nq60a+fIzHGKvql4eIitDDDpOpyHv15/Xr1BzFw2yDoiR4X1lng0u7q0X9RgX4VIYa6gT16NXBEmQgbuX8gh7SfPMp9RhiZD9sVUaV+yogEabYpyPnmUURo0hXwkctKaBkQlEmKvjHwF5dvbg8+yqGhwtjAgFNimXG3INrwQsfQsZskkQWanutbJf9xy50GyWWFZZdi0uT4oXP/b5P7aklPXKXsvrJKBh7RjEaqBrhi86IJwOjBspvoR4l2WmcQyxb2xzQS1pjbBJFQfYJJ8+JgsstTL8PBO9d4ybJC0li1Om1qnWxkaewvPxxuoHJ9LpRKof19yRYWBmhTXb2tTASKG/zslvl4fgG4DmQBS93WC7dsiGOhAraGw2eCTgd0lYZOhk1FjWl9TS80aktXxzH/7nTvem5ohm+eDl6O0wnTL4KXjQVNSQ1PyLn4lGRJ5MNGzBTRFWIr2API2rca4Fysyfh/UdmazPGlNbY9JPGqb9+F04QzLfqm+Zz/cHy59E7lOSMBlUI4KD6d6ZNNKNRH+/g9i+fSiyiXKugTfda8KBnWGyPwprxuWGYaiQUGUYOwJY5R6x5c4mjImAB310V+Wo33UbWFJiwxEDsiCNqW1meVkBzt2er26vh4qbgCUIQ3iM3gFPfHgy+QxkmIhic7Q1HYacQElt8AAP41M7cCKWCuZidegP37MBB//mjjiNt047ZSQEvB4tqsX/OvfbByVef+cbtVw9T0yjHvmCdPW1XrhyrCCgclu6oYYdbmc5D7BBDRbjjMWGv6YvceAbfGf6ukdB5PuV+TGEN/FoQ1QTRA6Aqf+3fLMg4mS4oyTfw5xyYNbv3qoyLPrp+BnxI53WB9p0hfMg4n9FD6NntBxjDq+Q3Lk/bjC/Y4MaRWdzbMzF9a0lgGfcw9DURlK5p7uGJC9vg34feNoQprxVEZRQ01cHLeob6eGkYm4HxSRx8JY39Mh+9wzJo+k/aIvFleNC3e35NOrkXr6wb5e42n2DwBdPqdNolTLtLFRglAL1LTpp27UjvjieWJAKfoDTR5CKl01sZqt0wPdLLcvsMj6CiPFmccUIOYeZMe86kLBD61Qa5F1EwkgO3Om2qSjW96FzL4skRc+BmU5RrHlAFSldR1wpUgtkUMv9vH5Cy+UJdcvpZ8KbmhZ2PsjF7ddJ1ve9RAw3cP325AyIMwZ77Ef1mgTM0NJze6eSW1qKlEsgt1FADPyeUu1NQTA2H2dueMPGlArWTSUgyWR9AdfpqouT7eg0JWI5w+yUZZC+/rPglYbt84oLmYpwuli0z8FyEQRPIc3EtkfWIv/yYgDr2TZ0N2KvGfpi/MAUWgxI1gleC2uKgEOEtuJthd3XZjF2NoE7IBqjQOINybcJOjyeB5vRLDY1FLuxYzdg1y1etkV4XQig/vje
install:
# Info for debugging.
- echo %PATH%
- go version
- go env
- go get -v -d -t ./...
# Provide a build script, or AppVeyor will call msbuild.
build_script:
- go install -v ./...
- echo %KEYFILE_CONTENTS% > %GCLOUD_TESTS_GOLANG_KEY%
test_script:
- go test -v ./...
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloud_test
import (
"cloud.google.com/go/datastore"
"golang.org/x/net/context"
"google.golang.org/api/option"
)
func Example_applicationDefaultCredentials() {
// Google Application Default Credentials is the recommended way to authorize
// and authenticate clients.
//
// See the following link on how to create and obtain Application Default Credentials:
// https://developers.google.com/identity/protocols/application-default-credentials.
client, err := datastore.NewClient(context.Background(), "project-id")
if err != nil {
// TODO: handle error.
}
_ = client // Use the client.
}
func Example_serviceAccountFile() {
// Use a JSON key file associated with a Google service account to
// authenticate and authorize. Service Account keys can be created and
// downloaded from https://console.developers.google.com/permissions/serviceaccounts.
//
// Note: This example uses the datastore client, but the same steps apply to
// the other client libraries underneath this package.
client, err := datastore.NewClient(context.Background(),
"project-id", option.WithServiceAccountFile("/path/to/service-account-key.json"))
if err != nil {
// TODO: handle error.
}
_ = client // Use the client.
}
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
// TODO(mcgreevy): support dry-run mode when creating jobs.
import (
"fmt"
"google.golang.org/api/option"
"google.golang.org/api/transport"
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
)
const prodAddr = "https://www.googleapis.com/bigquery/v2/"
// ExternalData is a table which is stored outside of BigQuery. It is implemented by GCSReference.
type ExternalData interface {
externalDataConfig() bq.ExternalDataConfiguration
}
const Scope = "https://www.googleapis.com/auth/bigquery"
const userAgent = "gcloud-golang-bigquery/20160429"
// Client may be used to perform BigQuery operations.
type Client struct {
service service
projectID string
}
// NewClient constructs a new Client which can perform BigQuery operations.
// Operations performed via the client are billed to the specified GCP project.
func NewClient(ctx context.Context, projectID string, opts ...option.ClientOption) (*Client, error) {
o := []option.ClientOption{
option.WithEndpoint(prodAddr),
option.WithScopes(Scope),
option.WithUserAgent(userAgent),
}
o = append(o, opts...)
httpClient, endpoint, err := transport.NewHTTPClient(ctx, o...)
if err != nil {
return nil, fmt.Errorf("dialing: %v", err)
}
s, err := newBigqueryService(httpClient, endpoint)
if err != nil {
return nil, fmt.Errorf("constructing bigquery client: %v", err)
}
c := &Client{
service: s,
projectID: projectID,
}
return c, nil
}
// Close closes any resources held by the client.
// Close should be called when the client is no longer needed.
// It need not be called at program exit.
func (c *Client) Close() error {
return nil
}
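As a quick usage sketch (not part of this diff): constructing a Client as defined above, assuming a service-account key file; the project ID and key path are placeholders.
``` go
package main

import (
	"log"

	"golang.org/x/net/context"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	// Caller-supplied options are appended after the defaults set in
	// NewClient (endpoint, scope, user agent), so they can override them.
	client, err := bigquery.NewClient(ctx, "my-project-id",
		option.WithServiceAccountFile("/path/to/key.json"))
	if err != nil {
		log.Fatalf("bigquery.NewClient: %v", err)
	}
	// Close is a no-op today, per the comment above, but calling it keeps
	// callers correct if it ever acquires resources.
	defer client.Close()
	_ = client // use the client
}
```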
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
)
// CopyConfig holds the configuration for a copy job.
type CopyConfig struct {
// JobID is the ID to use for the copy job. If unset, a job ID will be automatically created.
JobID string
// Srcs are the tables from which data will be copied.
Srcs []*Table
// Dst is the table into which the data will be copied.
Dst *Table
// CreateDisposition specifies the circumstances under which the destination table will be created.
// The default is CreateIfNeeded.
CreateDisposition TableCreateDisposition
// WriteDisposition specifies how existing data in the destination table is treated.
// The default is WriteAppend.
WriteDisposition TableWriteDisposition
}
// A Copier copies data into a BigQuery table from one or more BigQuery tables.
type Copier struct {
CopyConfig
c *Client
}
// CopierFrom returns a Copier which can be used to copy data into a
// BigQuery table from one or more BigQuery tables.
// The returned Copier may optionally be further configured before its Run method is called.
func (t *Table) CopierFrom(srcs ...*Table) *Copier {
return &Copier{
c: t.c,
CopyConfig: CopyConfig{
Srcs: srcs,
Dst: t,
},
}
}
// Run initiates a copy job.
func (c *Copier) Run(ctx context.Context) (*Job, error) {
conf := &bq.JobConfigurationTableCopy{
CreateDisposition: string(c.CreateDisposition),
WriteDisposition: string(c.WriteDisposition),
DestinationTable: c.Dst.tableRefProto(),
}
for _, t := range c.Srcs {
conf.SourceTables = append(conf.SourceTables, t.tableRefProto())
}
job := &bq.Job{Configuration: &bq.JobConfiguration{Copy: conf}}
setJobRef(job, c.JobID, c.c.projectID)
return c.c.service.insertJob(ctx, c.c.projectID, &insertJobConf{job: job})
}
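A short usage sketch of the Copier flow above, consistent with the package documentation later in this diff; dataset and table IDs are placeholders.
``` go
package main

import (
	"golang.org/x/net/context"

	"cloud.google.com/go/bigquery"
)

// copyTable follows the Copier flow above: construct, configure, Run.
func copyTable(ctx context.Context, client *bigquery.Client) (*bigquery.Job, error) {
	ds := client.Dataset("my_dataset")
	copier := ds.Table("dest").CopierFrom(ds.Table("src"))
	// Optional: override the defaults (CreateIfNeeded, WriteAppend).
	copier.WriteDisposition = bigquery.WriteTruncate
	return copier.Run(ctx)
}
```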
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"reflect"
"testing"
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
)
func defaultCopyJob() *bq.Job {
return &bq.Job{
Configuration: &bq.JobConfiguration{
Copy: &bq.JobConfigurationTableCopy{
DestinationTable: &bq.TableReference{
ProjectId: "d-project-id",
DatasetId: "d-dataset-id",
TableId: "d-table-id",
},
SourceTables: []*bq.TableReference{
{
ProjectId: "s-project-id",
DatasetId: "s-dataset-id",
TableId: "s-table-id",
},
},
},
},
}
}
func TestCopy(t *testing.T) {
testCases := []struct {
dst *Table
srcs []*Table
config CopyConfig
want *bq.Job
}{
{
dst: &Table{
ProjectID: "d-project-id",
DatasetID: "d-dataset-id",
TableID: "d-table-id",
},
srcs: []*Table{
{
ProjectID: "s-project-id",
DatasetID: "s-dataset-id",
TableID: "s-table-id",
},
},
want: defaultCopyJob(),
},
{
dst: &Table{
ProjectID: "d-project-id",
DatasetID: "d-dataset-id",
TableID: "d-table-id",
},
srcs: []*Table{
{
ProjectID: "s-project-id",
DatasetID: "s-dataset-id",
TableID: "s-table-id",
},
},
config: CopyConfig{
CreateDisposition: CreateNever,
WriteDisposition: WriteTruncate,
},
want: func() *bq.Job {
j := defaultCopyJob()
j.Configuration.Copy.CreateDisposition = "CREATE_NEVER"
j.Configuration.Copy.WriteDisposition = "WRITE_TRUNCATE"
return j
}(),
},
{
dst: &Table{
ProjectID: "d-project-id",
DatasetID: "d-dataset-id",
TableID: "d-table-id",
},
srcs: []*Table{
{
ProjectID: "s-project-id",
DatasetID: "s-dataset-id",
TableID: "s-table-id",
},
},
config: CopyConfig{JobID: "job-id"},
want: func() *bq.Job {
j := defaultCopyJob()
j.JobReference = &bq.JobReference{
JobId: "job-id",
ProjectId: "client-project-id",
}
return j
}(),
},
}
for _, tc := range testCases {
s := &testService{}
c := &Client{
service: s,
projectID: "client-project-id",
}
tc.dst.c = c
copier := tc.dst.CopierFrom(tc.srcs...)
tc.config.Srcs = tc.srcs
tc.config.Dst = tc.dst
copier.CopyConfig = tc.config
if _, err := copier.Run(context.Background()); err != nil {
t.Errorf("err calling Run: %v", err)
continue
}
if !reflect.DeepEqual(s.Job, tc.want) {
t.Errorf("copying: got:\n%v\nwant:\n%v", s.Job, tc.want)
}
}
}
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"reflect"
"testing"
"time"
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
)
type createTableRecorder struct {
conf *createTableConf
service
}
func (rec *createTableRecorder) createTable(ctx context.Context, conf *createTableConf) error {
rec.conf = conf
return nil
}
func TestCreateTableOptions(t *testing.T) {
s := &createTableRecorder{}
c := &Client{
projectID: "p",
service: s,
}
ds := c.Dataset("d")
table := ds.Table("t")
exp := time.Now()
q := "query"
if err := table.Create(context.Background(), TableExpiration(exp), ViewQuery(q), UseStandardSQL()); err != nil {
t.Fatalf("err calling Table.Create: %v", err)
}
want := createTableConf{
projectID: "p",
datasetID: "d",
tableID: "t",
expiration: exp,
viewQuery: q,
useStandardSQL: true,
}
if !reflect.DeepEqual(*s.conf, want) {
t.Errorf("createTableConf: got:\n%v\nwant:\n%v", *s.conf, want)
}
sc := Schema{fieldSchema("desc", "name", "STRING", false, true)}
if err := table.Create(context.Background(), TableExpiration(exp), sc); err != nil {
t.Fatalf("err calling Table.Create: %v", err)
}
want = createTableConf{
projectID: "p",
datasetID: "d",
tableID: "t",
expiration: exp,
// No need for an elaborate schema, that is tested in schema_test.go.
schema: &bq.TableSchema{
Fields: []*bq.TableFieldSchema{
bqTableFieldSchema("desc", "name", "STRING", "REQUIRED"),
},
},
}
if !reflect.DeepEqual(*s.conf, want) {
t.Errorf("createTableConf: got:\n%v\nwant:\n%v", *s.conf, want)
}
partitionCases := []struct {
timePartitioning TimePartitioning
expectedExpiration time.Duration
}{
{TimePartitioning{}, time.Duration(0)},
{TimePartitioning{time.Second}, time.Second},
}
for _, c := range partitionCases {
if err := table.Create(context.Background(), c.timePartitioning); err != nil {
t.Fatalf("err calling Table.Create: %v", err)
}
want = createTableConf{
projectID: "p",
datasetID: "d",
tableID: "t",
timePartitioning: &TimePartitioning{c.expectedExpiration},
}
if !reflect.DeepEqual(*s.conf, want) {
t.Errorf("createTableConf: got:\n%v\nwant:\n%v", *s.conf, want)
}
}
}
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"time"
"golang.org/x/net/context"
"google.golang.org/api/iterator"
)
// Dataset is a reference to a BigQuery dataset.
type Dataset struct {
ProjectID string
DatasetID string
c *Client
}
type DatasetMetadata struct {
CreationTime time.Time
LastModifiedTime time.Time // When the dataset or any of its tables were modified.
DefaultTableExpiration time.Duration
Description string // The user-friendly description of this dataset.
Name string // The user-friendly name for this dataset.
ID string
Location string // The geo location of the dataset.
Labels map[string]string // User-provided labels.
// TODO(jba): access rules
}
// Dataset creates a handle to a BigQuery dataset in the client's project.
func (c *Client) Dataset(id string) *Dataset {
return c.DatasetInProject(c.projectID, id)
}
// DatasetInProject creates a handle to a BigQuery dataset in the specified project.
func (c *Client) DatasetInProject(projectID, datasetID string) *Dataset {
return &Dataset{
ProjectID: projectID,
DatasetID: datasetID,
c: c,
}
}
// Create creates a dataset in the BigQuery service. An error will be returned
// if the dataset already exists.
func (d *Dataset) Create(ctx context.Context) error {
return d.c.service.insertDataset(ctx, d.DatasetID, d.ProjectID)
}
// Delete deletes the dataset.
func (d *Dataset) Delete(ctx context.Context) error {
return d.c.service.deleteDataset(ctx, d.DatasetID, d.ProjectID)
}
// Metadata fetches the metadata for the dataset.
func (d *Dataset) Metadata(ctx context.Context) (*DatasetMetadata, error) {
return d.c.service.getDatasetMetadata(ctx, d.ProjectID, d.DatasetID)
}
// Table creates a handle to a BigQuery table in the dataset.
// To determine if a table exists, call Table.Metadata.
// If the table does not already exist, use Table.Create to create it.
func (d *Dataset) Table(tableID string) *Table {
return &Table{ProjectID: d.ProjectID, DatasetID: d.DatasetID, TableID: tableID, c: d.c}
}
// Tables returns an iterator over the tables in the Dataset.
func (d *Dataset) Tables(ctx context.Context) *TableIterator {
it := &TableIterator{
ctx: ctx,
dataset: d,
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(
it.fetch,
func() int { return len(it.tables) },
func() interface{} { b := it.tables; it.tables = nil; return b })
return it
}
// A TableIterator is an iterator over Tables.
type TableIterator struct {
ctx context.Context
dataset *Dataset
tables []*Table
pageInfo *iterator.PageInfo
nextFunc func() error
}
// Next returns the next result. Its second return value is Done if there are
// no more results. Once Next returns Done, all subsequent calls will return
// Done.
func (it *TableIterator) Next() (*Table, error) {
if err := it.nextFunc(); err != nil {
return nil, err
}
t := it.tables[0]
it.tables = it.tables[1:]
return t, nil
}
// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
func (it *TableIterator) PageInfo() *iterator.PageInfo { return it.pageInfo }
func (it *TableIterator) fetch(pageSize int, pageToken string) (string, error) {
tables, tok, err := it.dataset.c.service.listTables(it.ctx, it.dataset.ProjectID, it.dataset.DatasetID, pageSize, pageToken)
if err != nil {
return "", err
}
for _, t := range tables {
t.c = it.dataset.c
it.tables = append(it.tables, t)
}
return tok, nil
}
// Datasets returns an iterator over the datasets in the Client's project.
func (c *Client) Datasets(ctx context.Context) *DatasetIterator {
return c.DatasetsInProject(ctx, c.projectID)
}
// DatasetsInProject returns an iterator over the datasets in the provided project.
func (c *Client) DatasetsInProject(ctx context.Context, projectID string) *DatasetIterator {
it := &DatasetIterator{
ctx: ctx,
c: c,
projectID: projectID,
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(
it.fetch,
func() int { return len(it.items) },
func() interface{} { b := it.items; it.items = nil; return b })
return it
}
// DatasetIterator iterates over the datasets in a project.
type DatasetIterator struct {
// ListHidden causes hidden datasets to be listed when set to true.
ListHidden bool
// Filter restricts the datasets returned by label. The filter syntax is described in
// https://cloud.google.com/bigquery/docs/labeling-datasets#filtering_datasets_using_labels
Filter string
ctx context.Context
projectID string
c *Client
pageInfo *iterator.PageInfo
nextFunc func() error
items []*Dataset
}
// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
func (it *DatasetIterator) PageInfo() *iterator.PageInfo { return it.pageInfo }
func (it *DatasetIterator) Next() (*Dataset, error) {
if err := it.nextFunc(); err != nil {
return nil, err
}
item := it.items[0]
it.items = it.items[1:]
return item, nil
}
func (it *DatasetIterator) fetch(pageSize int, pageToken string) (string, error) {
datasets, nextPageToken, err := it.c.service.listDatasets(it.ctx, it.projectID,
pageSize, pageToken, it.ListHidden, it.Filter)
if err != nil {
return "", err
}
for _, d := range datasets {
d.c = it.c
it.items = append(it.items, d)
}
return nextPageToken, nil
}
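A sketch of draining a TableIterator with the iterator.Done convention documented on Next above:
``` go
package main

import (
	"golang.org/x/net/context"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

// listTables drains a TableIterator, stopping on iterator.Done as
// documented on Next above.
func listTables(ctx context.Context, d *bigquery.Dataset) ([]*bigquery.Table, error) {
	var tables []*bigquery.Table
	it := d.Tables(ctx)
	for {
		t, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return nil, err
		}
		tables = append(tables, t)
	}
	return tables, nil
}
```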
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"errors"
"strconv"
"testing"
"golang.org/x/net/context"
itest "google.golang.org/api/iterator/testing"
)
// listTablesServiceStub services listTables requests by returning data from an in-memory list of tables.
type listTablesServiceStub struct {
expectedProject, expectedDataset string
tables []*Table
service
}
func (s *listTablesServiceStub) listTables(ctx context.Context, projectID, datasetID string, pageSize int, pageToken string) ([]*Table, string, error) {
if projectID != s.expectedProject {
return nil, "", errors.New("wrong project id")
}
if datasetID != s.expectedDataset {
return nil, "", errors.New("wrong dataset id")
}
const maxPageSize = 2
if pageSize <= 0 || pageSize > maxPageSize {
pageSize = maxPageSize
}
start := 0
if pageToken != "" {
var err error
start, err = strconv.Atoi(pageToken)
if err != nil {
return nil, "", err
}
}
end := start + pageSize
if end > len(s.tables) {
end = len(s.tables)
}
nextPageToken := ""
if end < len(s.tables) {
nextPageToken = strconv.Itoa(end)
}
return s.tables[start:end], nextPageToken, nil
}
func TestTables(t *testing.T) {
t1 := &Table{ProjectID: "p1", DatasetID: "d1", TableID: "t1"}
t2 := &Table{ProjectID: "p1", DatasetID: "d1", TableID: "t2"}
t3 := &Table{ProjectID: "p1", DatasetID: "d1", TableID: "t3"}
allTables := []*Table{t1, t2, t3}
c := &Client{
service: &listTablesServiceStub{
expectedProject: "x",
expectedDataset: "y",
tables: allTables,
},
projectID: "x",
}
msg, ok := itest.TestIterator(allTables,
func() interface{} { return c.Dataset("y").Tables(context.Background()) },
func(it interface{}) (interface{}, error) { return it.(*TableIterator).Next() })
if !ok {
t.Error(msg)
}
}
type listDatasetsFake struct {
service
projectID string
datasets []*Dataset
hidden map[*Dataset]bool
}
func (df *listDatasetsFake) listDatasets(_ context.Context, projectID string, pageSize int, pageToken string, listHidden bool, filter string) ([]*Dataset, string, error) {
const maxPageSize = 2
if pageSize <= 0 || pageSize > maxPageSize {
pageSize = maxPageSize
}
if filter != "" {
return nil, "", errors.New("filter not supported")
}
if projectID != df.projectID {
return nil, "", errors.New("bad project ID")
}
start := 0
if pageToken != "" {
var err error
start, err = strconv.Atoi(pageToken)
if err != nil {
return nil, "", err
}
}
var (
i int
result []*Dataset
nextPageToken string
)
for i = start; len(result) < pageSize && i < len(df.datasets); i++ {
if df.hidden[df.datasets[i]] && !listHidden {
continue
}
result = append(result, df.datasets[i])
}
if i < len(df.datasets) {
nextPageToken = strconv.Itoa(i)
}
return result, nextPageToken, nil
}
func TestDatasets(t *testing.T) {
service := &listDatasetsFake{projectID: "p"}
client := &Client{service: service}
datasets := []*Dataset{
{"p", "a", client},
{"p", "b", client},
{"p", "hidden", client},
{"p", "c", client},
}
service.datasets = datasets
service.hidden = map[*Dataset]bool{datasets[2]: true}
c := &Client{
projectID: "p",
service: service,
}
msg, ok := itest.TestIterator(datasets,
func() interface{} { it := c.Datasets(context.Background()); it.ListHidden = true; return it },
func(it interface{}) (interface{}, error) { return it.(*DatasetIterator).Next() })
if !ok {
t.Fatalf("ListHidden=true: %s", msg)
}
msg, ok = itest.TestIterator([]*Dataset{datasets[0], datasets[1], datasets[3]},
func() interface{} { it := c.Datasets(context.Background()); it.ListHidden = false; return it },
func(it interface{}) (interface{}, error) { return it.(*DatasetIterator).Next() })
if !ok {
t.Fatalf("ListHidden=false: %s", msg)
}
}
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*
Package bigquery provides a client for the BigQuery service.
Note: This package is in beta. Some backwards-incompatible changes may occur.
The following assumes a basic familiarity with BigQuery concepts.
See https://cloud.google.com/bigquery/docs.
Creating a Client
To start working with this package, create a client:
ctx := context.Background()
client, err := bigquery.NewClient(ctx, projectID)
if err != nil {
// TODO: Handle error.
}
Querying
To query existing tables, create a Query and call its Read method:
q := client.Query(`
SELECT year, SUM(number) as num
FROM [bigquery-public-data:usa_names.usa_1910_2013]
WHERE name = "William"
GROUP BY year
ORDER BY year
`)
it, err := q.Read(ctx)
if err != nil {
// TODO: Handle error.
}
Then iterate through the resulting rows. You can store a row using
anything that implements the ValueLoader interface, or with a slice or map of bigquery.Value.
A slice is simplest:
for {
var values []bigquery.Value
err := it.Next(&values)
if err == iterator.Done {
break
}
if err != nil {
// TODO: Handle error.
}
fmt.Println(values)
}
You can also use a struct whose exported fields match the query:
type Count struct {
Year int
Num int
}
for {
var c Count
err := it.Next(&c)
if err == iterator.Done {
break
}
if err != nil {
// TODO: Handle error.
}
fmt.Println(c)
}
You can also start the query running and get the results later.
Create the query as above, but call Run instead of Read. This returns a Job,
which represents an asynchronous operation.
job, err := q.Run(ctx)
if err != nil {
// TODO: Handle error.
}
Get the job's ID, a printable string. You can save this string to retrieve
the results at a later time, even in another process.
jobID := job.ID()
fmt.Printf("The job ID is %s\n", jobID)
To retrieve the job's results from the ID, first look up the Job:
job, err = client.JobFromID(ctx, jobID)
if err != nil {
// TODO: Handle error.
}
Use the Job.Read method to obtain an iterator, and loop over the rows.
Query.Read is just a convenience method that combines Query.Run and Job.Read.
it, err = job.Read(ctx)
if err != nil {
// TODO: Handle error.
}
// Proceed with iteration as above.
Datasets and Tables
You can refer to datasets in the client's project with the Dataset method, and
in other projects with the DatasetInProject method:
myDataset := client.Dataset("my_dataset")
yourDataset := client.DatasetInProject("your-project-id", "your_dataset")
These methods create references to datasets, not the datasets themselves. You can have
a dataset reference even if the dataset doesn't exist yet. Use Dataset.Create to
create a dataset from a reference:
if err := myDataset.Create(ctx); err != nil {
// TODO: Handle error.
}
You can refer to tables with Dataset.Table. Like bigquery.Dataset, bigquery.Table is a reference
to an object in BigQuery that may or may not exist.
table := myDataset.Table("my_table")
You can create, delete and update the metadata of tables with methods on Table.
Table.Create supports a few options. For instance, you could create a temporary table with:
err = myDataset.Table("temp").Create(ctx, bigquery.TableExpiration(time.Now().Add(1*time.Hour)))
if err != nil {
// TODO: Handle error.
}
We'll see how to create a table with a schema in the next section.
Schemas
There are two ways to construct schemas with this package.
You can build a schema by hand, like so:
schema1 := bigquery.Schema{
&bigquery.FieldSchema{Name: "Name", Required: true, Type: bigquery.StringFieldType},
&bigquery.FieldSchema{Name: "Grades", Repeated: true, Type: bigquery.IntegerFieldType},
}
Or you can infer the schema from a struct:
type student struct {
Name string
Grades []int
}
schema2, err := bigquery.InferSchema(student{})
if err != nil {
// TODO: Handle error.
}
// schema1 and schema2 are identical.
Struct inference supports tags like those of the encoding/json package,
so you can change names or ignore fields:
type student2 struct {
Name string `bigquery:"full_name"`
Grades []int
Secret string `bigquery:"-"`
}
schema3, err := bigquery.InferSchema(student2{})
if err != nil {
// TODO: Handle error.
}
// schema3 has fields "full_name" and "Grades".
Having constructed a schema, you can pass it to Table.Create as an option:
if err := table.Create(ctx, schema1); err != nil {
// TODO: Handle error.
}
Copying
You can copy one or more tables to another table. Begin by constructing a Copier
describing the copy. Then set any desired copy options, and finally call Run to get a Job:
copier := myDataset.Table("dest").CopierFrom(myDataset.Table("src"))
copier.WriteDisposition = bigquery.WriteTruncate
job, err = copier.Run(ctx)
if err != nil {
// TODO: Handle error.
}
You can chain the call to Run if you don't want to set options:
job, err = myDataset.Table("dest").CopierFrom(myDataset.Table("src")).Run(ctx)
if err != nil {
// TODO: Handle error.
}
You can wait for your job to complete:
status, err := job.Wait(ctx)
if err != nil {
// TODO: Handle error.
}
Job.Wait polls with exponential backoff. You can also poll yourself, if you
wish:
for {
status, err := job.Status(ctx)
if err != nil {
// TODO: Handle error.
}
if status.Done() {
if status.Err() != nil {
log.Fatalf("Job failed with error %v", status.Err())
}
break
}
time.Sleep(pollInterval)
}
Loading and Uploading
There are two ways to populate a table with this package: load the data from a Google Cloud Storage
object, or upload rows directly from your program.
For loading, first create a GCSReference, configuring it if desired. Then make a Loader, optionally configure
it as well, and call its Run method.
gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
gcsRef.AllowJaggedRows = true
loader := myDataset.Table("dest").LoaderFrom(gcsRef)
loader.CreateDisposition = bigquery.CreateNever
job, err = loader.Run(ctx)
// Poll the job for completion if desired, as above.
To upload, first define a type that implements the ValueSaver interface, which has a single method named Save.
Then create an Uploader, and call its Put method with a slice of values.
u := table.Uploader()
// Item implements the ValueSaver interface.
items := []*Item{
{Name: "n1", Size: 32.6, Count: 7},
{Name: "n2", Size: 4, Count: 2},
{Name: "n3", Size: 101.5, Count: 1},
}
if err := u.Put(ctx, items); err != nil {
// TODO: Handle error.
}
You can also upload a struct that doesn't implement ValueSaver. Use the StructSaver type
to specify the schema and insert ID by hand, or just supply the struct or struct pointer
directly and the schema will be inferred:
type Item2 struct {
Name string
Size float64
Count int
}
// Item2 does not implement ValueSaver; its schema will be inferred.
items2 := []*Item2{
{Name: "n1", Size: 32.6, Count: 7},
{Name: "n2", Size: 4, Count: 2},
{Name: "n3", Size: 101.5, Count: 1},
}
if err := u.Put(ctx, items2); err != nil {
// TODO: Handle error.
}
Extracting
If you've been following so far, extracting data from a BigQuery table
into a Google Cloud Storage object will feel familiar. First create an
Extractor, then optionally configure it, and lastly call its Run method.
extractor := table.ExtractorTo(gcsRef)
extractor.DisableHeader = true
job, err = extractor.Run(ctx)
// Poll the job for completion if desired, as above.
Authentication
See examples of authorization and authentication at
https://godoc.org/cloud.google.com/go#pkg-examples.
*/
package bigquery // import "cloud.google.com/go/bigquery"
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"fmt"
bq "google.golang.org/api/bigquery/v2"
)
// An Error contains detailed information about a failed bigquery operation.
type Error struct {
// Mirrors bq.ErrorProto, but drops DebugInfo
Location, Message, Reason string
}
func (e Error) Error() string {
return fmt.Sprintf("{Location: %q; Message: %q; Reason: %q}", e.Location, e.Message, e.Reason)
}
func errorFromErrorProto(ep *bq.ErrorProto) *Error {
if ep == nil {
return nil
}
return &Error{
Location: ep.Location,
Message: ep.Message,
Reason: ep.Reason,
}
}
// A MultiError contains multiple related errors.
type MultiError []error
func (m MultiError) Error() string {
switch len(m) {
case 0:
return "(0 errors)"
case 1:
return m[0].Error()
case 2:
return m[0].Error() + " (and 1 other error)"
}
return fmt.Sprintf("%s (and %d other errors)", m[0].Error(), len(m)-1)
}
// RowInsertionError contains all errors that occurred when attempting to insert a row.
type RowInsertionError struct {
InsertID string // The InsertID associated with the affected row.
RowIndex int // The 0-based index of the affected row in the batch of rows being inserted.
Errors MultiError
}
func (e *RowInsertionError) Error() string {
errFmt := "insertion of row [insertID: %q; insertIndex: %v] failed with error: %s"
return fmt.Sprintf(errFmt, e.InsertID, e.RowIndex, e.Errors.Error())
}
// PutMultiError contains an error for each row which was not successfully inserted
// into a BigQuery table.
type PutMultiError []RowInsertionError
func (pme PutMultiError) Error() string {
plural := "s"
if len(pme) == 1 {
plural = ""
}
return fmt.Sprintf("%v row insertion%s failed", len(pme), plural)
}
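A hedged sketch of how a caller might unpack a PutMultiError; Uploader.Put, which returns it, is referenced in the package docs but not shown in this diff.
``` go
package main

import (
	"log"

	"cloud.google.com/go/bigquery"
)

// logPutErrors reports each failed row when an insert returns a
// PutMultiError, and falls back to plain logging otherwise.
func logPutErrors(err error) {
	if multi, ok := err.(bigquery.PutMultiError); ok {
		for _, rowErr := range multi {
			log.Printf("row %d (insert ID %q): %v",
				rowErr.RowIndex, rowErr.InsertID, rowErr.Errors)
		}
		return
	}
	if err != nil {
		log.Printf("put failed: %v", err)
	}
}
```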
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"errors"
"reflect"
"strings"
"testing"
bq "google.golang.org/api/bigquery/v2"
)
func rowInsertionError(msg string) RowInsertionError {
return RowInsertionError{Errors: []error{errors.New(msg)}}
}
func TestPutMultiErrorString(t *testing.T) {
testCases := []struct {
errs PutMultiError
want string
}{
{
errs: PutMultiError{},
want: "0 row insertions failed",
},
{
errs: PutMultiError{rowInsertionError("a")},
want: "1 row insertion failed",
},
{
errs: PutMultiError{rowInsertionError("a"), rowInsertionError("b")},
want: "2 row insertions failed",
},
}
for _, tc := range testCases {
if tc.errs.Error() != tc.want {
t.Errorf("PutMultiError string: got:\n%v\nwant:\n%v", tc.errs.Error(), tc.want)
}
}
}
func TestMultiErrorString(t *testing.T) {
testCases := []struct {
errs MultiError
want string
}{
{
errs: MultiError{},
want: "(0 errors)",
},
{
errs: MultiError{errors.New("a")},
want: "a",
},
{
errs: MultiError{errors.New("a"), errors.New("b")},
want: "a (and 1 other error)",
},
{
errs: MultiError{errors.New("a"), errors.New("b"), errors.New("c")},
want: "a (and 2 other errors)",
},
}
for _, tc := range testCases {
if tc.errs.Error() != tc.want {
t.Errorf("PutMultiError string: got:\n%v\nwant:\n%v", tc.errs.Error(), tc.want)
}
}
}
func TestErrorFromErrorProto(t *testing.T) {
for _, test := range []struct {
in *bq.ErrorProto
want *Error
}{
{nil, nil},
{
in: &bq.ErrorProto{Location: "L", Message: "M", Reason: "R"},
want: &Error{Location: "L", Message: "M", Reason: "R"},
},
} {
if got := errorFromErrorProto(test.in); !reflect.DeepEqual(got, test.want) {
t.Errorf("%v: got %v, want %v", test.in, got, test.want)
}
}
}
func TestErrorString(t *testing.T) {
e := &Error{Location: "<L>", Message: "<M>", Reason: "<R>"}
got := e.Error()
if !strings.Contains(got, "<L>") || !strings.Contains(got, "<M>") || !strings.Contains(got, "<R>") {
t.Errorf(`got %q, expected to see "<L>", "<M>" and "<R>"`, got)
}
}
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
)
// ExtractConfig holds the configuration for an extract job.
type ExtractConfig struct {
// JobID is the ID to use for the extract job. If empty, a job ID will be automatically created.
JobID string
// Src is the table from which data will be extracted.
Src *Table
// Dst is the destination into which the data will be extracted.
Dst *GCSReference
// DisableHeader disables the printing of a header row in exported data.
DisableHeader bool
}
// An Extractor extracts data from a BigQuery table into Google Cloud Storage.
type Extractor struct {
ExtractConfig
c *Client
}
// ExtractorTo returns an Extractor which can be used to extract data from a
// BigQuery table into Google Cloud Storage.
// The returned Extractor may optionally be further configured before its Run method is called.
func (t *Table) ExtractorTo(dst *GCSReference) *Extractor {
return &Extractor{
c: t.c,
ExtractConfig: ExtractConfig{
Src: t,
Dst: dst,
},
}
}
// Run initiates an extract job.
func (e *Extractor) Run(ctx context.Context) (*Job, error) {
conf := &bq.JobConfigurationExtract{}
job := &bq.Job{Configuration: &bq.JobConfiguration{Extract: conf}}
setJobRef(job, e.JobID, e.c.projectID)
conf.DestinationUris = append([]string{}, e.Dst.uris...)
conf.Compression = string(e.Dst.Compression)
conf.DestinationFormat = string(e.Dst.DestinationFormat)
conf.FieldDelimiter = e.Dst.FieldDelimiter
conf.SourceTable = e.Src.tableRefProto()
if e.DisableHeader {
f := false
conf.PrintHeader = &f
}
return e.c.service.insertJob(ctx, e.c.projectID, &insertJobConf{job: job})
}
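A usage sketch of the Extractor flow above; the GCS URI is a placeholder.
``` go
package main

import (
	"golang.org/x/net/context"

	"cloud.google.com/go/bigquery"
)

// extractTable follows the Extractor flow above; DisableHeader sets
// PrintHeader to false in the generated job configuration.
func extractTable(ctx context.Context, t *bigquery.Table) (*bigquery.Job, error) {
	gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
	extractor := t.ExtractorTo(gcsRef)
	extractor.DisableHeader = true
	return extractor.Run(ctx)
}
```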
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"reflect"
"testing"
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
)
func defaultExtractJob() *bq.Job {
return &bq.Job{
Configuration: &bq.JobConfiguration{
Extract: &bq.JobConfigurationExtract{
SourceTable: &bq.TableReference{
ProjectId: "project-id",
DatasetId: "dataset-id",
TableId: "table-id",
},
DestinationUris: []string{"uri"},
},
},
}
}
func TestExtract(t *testing.T) {
s := &testService{}
c := &Client{
service: s,
projectID: "project-id",
}
testCases := []struct {
dst *GCSReference
src *Table
config ExtractConfig
want *bq.Job
}{
{
dst: defaultGCS(),
src: c.Dataset("dataset-id").Table("table-id"),
want: defaultExtractJob(),
},
{
dst: defaultGCS(),
src: c.Dataset("dataset-id").Table("table-id"),
config: ExtractConfig{DisableHeader: true},
want: func() *bq.Job {
j := defaultExtractJob()
f := false
j.Configuration.Extract.PrintHeader = &f
return j
}(),
},
{
dst: func() *GCSReference {
g := NewGCSReference("uri")
g.Compression = Gzip
g.DestinationFormat = JSON
g.FieldDelimiter = "\t"
return g
}(),
src: c.Dataset("dataset-id").Table("table-id"),
want: func() *bq.Job {
j := defaultExtractJob()
j.Configuration.Extract.Compression = "GZIP"
j.Configuration.Extract.DestinationFormat = "NEWLINE_DELIMITED_JSON"
j.Configuration.Extract.FieldDelimiter = "\t"
return j
}(),
},
}
for _, tc := range testCases {
ext := tc.src.ExtractorTo(tc.dst)
tc.config.Src = ext.Src
tc.config.Dst = ext.Dst
ext.ExtractConfig = tc.config
if _, err := ext.Run(context.Background()); err != nil {
t.Errorf("err calling extract: %v", err)
continue
}
if !reflect.DeepEqual(s.Job, tc.want) {
t.Errorf("extracting: got:\n%v\nwant:\n%v", s.Job, tc.want)
}
}
}
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"io"
bq "google.golang.org/api/bigquery/v2"
)
// A ReaderSource is a source for a load operation that gets
// data from an io.Reader.
type ReaderSource struct {
r io.Reader
FileConfig
}
// NewReaderSource creates a ReaderSource from an io.Reader. You may
// optionally configure properties on the ReaderSource that describe the
// data being read, before passing it to Table.LoaderFrom.
func NewReaderSource(r io.Reader) *ReaderSource {
return &ReaderSource{r: r}
}
func (r *ReaderSource) populateInsertJobConfForLoad(conf *insertJobConf) {
conf.media = r.r
r.FileConfig.populateLoadConfig(conf.job.Configuration.Load)
}
// FileConfig contains configuration options that pertain to files, typically
// text files that require interpretation to be used as a BigQuery table. A
// file may live in Google Cloud Storage (see GCSReference), or it may be
// loaded into a table via Table.LoaderFrom.
type FileConfig struct {
// SourceFormat is the format of the GCS data to be read.
// Allowed values are: CSV, Avro, JSON, DatastoreBackup. The default is CSV.
SourceFormat DataFormat
// FieldDelimiter is the separator for fields in a CSV file, used when
// reading or exporting data. The default is ",".
FieldDelimiter string
// The number of rows at the top of a CSV file that BigQuery will skip when
// reading data.
SkipLeadingRows int64
// AllowJaggedRows causes missing trailing optional columns to be tolerated
// when reading CSV data. Missing values are treated as nulls.
AllowJaggedRows bool
// AllowQuotedNewlines sets whether quoted data sections containing
// newlines are allowed when reading CSV data.
AllowQuotedNewlines bool
// Indicates if we should automatically infer the options and
// schema for CSV and JSON sources.
AutoDetect bool
// Encoding is the character encoding of data to be read.
Encoding Encoding
// MaxBadRecords is the maximum number of bad records that will be ignored
// when reading data.
MaxBadRecords int64
// IgnoreUnknownValues causes values not matching the schema to be
// tolerated. Unknown values are ignored. For CSV this ignores extra values
// at the end of a line. For JSON this ignores named values that do not
// match any column name. If this field is not set, records containing
// unknown values are treated as bad records. The MaxBadRecords field can
// be used to customize how bad records are handled.
IgnoreUnknownValues bool
// Schema describes the data. It is required when reading CSV or JSON data,
// unless the data is being loaded into a table that already exists.
Schema Schema
// Quote is the value used to quote data sections in a CSV file. The
// default quotation character is the double quote ("), which is used if
// both Quote and ForceZeroQuote are unset.
// To specify that no character should be interpreted as a quotation
// character, set ForceZeroQuote to true.
// Only used when reading data.
Quote string
ForceZeroQuote bool
}
// quote returns the CSV quote character, or nil if unset.
func (fc *FileConfig) quote() *string {
if fc.ForceZeroQuote {
quote := ""
return &quote
}
if fc.Quote == "" {
return nil
}
return &fc.Quote
}
func (fc *FileConfig) populateLoadConfig(conf *bq.JobConfigurationLoad) {
conf.SkipLeadingRows = fc.SkipLeadingRows
conf.SourceFormat = string(fc.SourceFormat)
conf.Autodetect = fc.AutoDetect
conf.AllowJaggedRows = fc.AllowJaggedRows
conf.AllowQuotedNewlines = fc.AllowQuotedNewlines
conf.Encoding = string(fc.Encoding)
conf.FieldDelimiter = fc.FieldDelimiter
conf.IgnoreUnknownValues = fc.IgnoreUnknownValues
conf.MaxBadRecords = fc.MaxBadRecords
if fc.Schema != nil {
conf.Schema = fc.Schema.asTableSchema()
}
conf.Quote = fc.quote()
}
func (fc *FileConfig) populateExternalDataConfig(conf *bq.ExternalDataConfiguration) {
format := fc.SourceFormat
if format == "" {
// Format must be explicitly set for external data sources.
format = CSV
}
// TODO(jba): support AutoDetect.
conf.IgnoreUnknownValues = fc.IgnoreUnknownValues
conf.MaxBadRecords = fc.MaxBadRecords
conf.SourceFormat = string(format)
if fc.Schema != nil {
conf.Schema = fc.Schema.asTableSchema()
}
if format == CSV {
conf.CsvOptions = &bq.CsvOptions{
AllowJaggedRows: fc.AllowJaggedRows,
AllowQuotedNewlines: fc.AllowQuotedNewlines,
Encoding: string(fc.Encoding),
FieldDelimiter: fc.FieldDelimiter,
SkipLeadingRows: fc.SkipLeadingRows,
Quote: fc.quote(),
}
}
}
// DataFormat describes the format of BigQuery table data.
type DataFormat string
// Constants describing the format of BigQuery table data.
const (
CSV DataFormat = "CSV"
Avro DataFormat = "AVRO"
JSON DataFormat = "NEWLINE_DELIMITED_JSON"
DatastoreBackup DataFormat = "DATASTORE_BACKUP"
)
// Encoding specifies the character encoding of data to be loaded into BigQuery.
// See https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.load.encoding
// for more details about how this is used.
type Encoding string
const (
UTF_8 Encoding = "UTF-8"
ISO_8859_1 Encoding = "ISO-8859-1"
)
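A sketch of loading CSV data from an io.Reader using ReaderSource and the FileConfig fields above; Table.LoaderFrom and Loader.Run are assumed from the comments here and the package docs, as their definitions fall outside this diff.
``` go
package main

import (
	"io"

	"golang.org/x/net/context"

	"cloud.google.com/go/bigquery"
)

// loadCSV wires a ReaderSource, with FileConfig options set, into a load
// job. Table.LoaderFrom and Loader are assumed APIs, not shown in this diff.
func loadCSV(ctx context.Context, t *bigquery.Table, r io.Reader) (*bigquery.Job, error) {
	rs := bigquery.NewReaderSource(r)
	rs.SourceFormat = bigquery.CSV
	rs.SkipLeadingRows = 1 // skip a header row
	rs.AllowJaggedRows = true
	loader := t.LoaderFrom(rs)
	return loader.Run(ctx)
}
```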
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"reflect"
"testing"
"cloud.google.com/go/internal/pretty"
bq "google.golang.org/api/bigquery/v2"
)
func TestQuote(t *testing.T) {
ptr := func(s string) *string { return &s }
for _, test := range []struct {
quote string
force bool
want *string
}{
{"", false, nil},
{"", true, ptr("")},
{"-", false, ptr("-")},
{"-", true, ptr("")},
} {
fc := FileConfig{
Quote: test.quote,
ForceZeroQuote: test.force,
}
got := fc.quote()
if (got == nil) != (test.want == nil) {
t.Errorf("%+v\ngot %v\nwant %v", test, pretty.Value(got), pretty.Value(test.want))
}
if got != nil && test.want != nil && *got != *test.want {
t.Errorf("%+v: got %q, want %q", test, *got, *test.want)
}
}
}
func TestPopulateLoadConfig(t *testing.T) {
hyphen := "-"
fc := FileConfig{
SourceFormat: CSV,
FieldDelimiter: "\t",
SkipLeadingRows: 8,
AllowJaggedRows: true,
AllowQuotedNewlines: true,
Encoding: UTF_8,
MaxBadRecords: 7,
IgnoreUnknownValues: true,
Schema: Schema{
stringFieldSchema(),
nestedFieldSchema(),
},
Quote: hyphen,
}
want := &bq.JobConfigurationLoad{
SourceFormat: "CSV",
FieldDelimiter: "\t",
SkipLeadingRows: 8,
AllowJaggedRows: true,
AllowQuotedNewlines: true,
Encoding: "UTF-8",
MaxBadRecords: 7,
IgnoreUnknownValues: true,
Schema: &bq.TableSchema{
Fields: []*bq.TableFieldSchema{
bqStringFieldSchema(),
bqNestedFieldSchema(),
}},
Quote: &hyphen,
}
got := &bq.JobConfigurationLoad{}
fc.populateLoadConfig(got)
if !reflect.DeepEqual(got, want) {
t.Errorf("got:\n%v\nwant:\n%v", pretty.Value(got), pretty.Value(want))
}
}
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import bq "google.golang.org/api/bigquery/v2"
// GCSReference is a reference to one or more Google Cloud Storage objects, which together constitute
// an input or output to a BigQuery operation.
type GCSReference struct {
// TODO(jba): Export so that GCSReference can be used to hold data from a Job.get api call and expose it to the user.
uris []string
FileConfig
// DestinationFormat is the format to use when writing exported files.
// Allowed values are: CSV, Avro, JSON. The default is CSV.
// CSV is not supported for tables with nested or repeated fields.
DestinationFormat DataFormat
// Compression specifies the type of compression to apply when writing data
// to Google Cloud Storage, or using this GCSReference as an ExternalData
// source with CSV or JSON SourceFormat. Default is None.
Compression Compression
}
// NewGCSReference constructs a reference to one or more Google Cloud Storage objects, which together constitute a data source or destination.
// In the simple case, a single URI in the form gs://bucket/object may refer to a single GCS object.
// Data may also be split into multiple files if multiple URIs or URIs containing wildcards are provided.
// Each URI may contain one '*' wildcard character, which (if present) must come after the bucket name.
// For more information about the treatment of wildcards and multiple URIs,
// see https://cloud.google.com/bigquery/exporting-data-from-bigquery#exportingmultiple
func NewGCSReference(uri ...string) *GCSReference {
return &GCSReference{uris: uri}
}
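For instance, a single reference can cover many objects at once; this sketch uses hypothetical bucket and object names.

```go
package example

import "cloud.google.com/go/bigquery"

// shardedSource builds one reference from a wildcard URI plus an explicit
// object. Each URI may contain at most one '*', after the bucket name.
func shardedSource() *bigquery.GCSReference {
	gcs := bigquery.NewGCSReference(
		"gs://my-bucket/shard-*.csv",      // hypothetical sharded objects
		"gs://my-bucket/extra/object.csv", // hypothetical single object
	)
	gcs.Compression = bigquery.Gzip // see Compression, defined just below
	return gcs
}
```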
// Compression is the type of compression to apply when writing data to Google Cloud Storage.
type Compression string
const (
None Compression = "NONE"
Gzip Compression = "GZIP"
)
func (gcs *GCSReference) populateInsertJobConfForLoad(conf *insertJobConf) {
conf.job.Configuration.Load.SourceUris = gcs.uris
gcs.FileConfig.populateLoadConfig(conf.job.Configuration.Load)
}
func (gcs *GCSReference) externalDataConfig() bq.ExternalDataConfiguration {
conf := bq.ExternalDataConfiguration{
Compression: string(gcs.Compression),
SourceUris: append([]string{}, gcs.uris...),
}
gcs.FileConfig.populateExternalDataConfig(&conf)
return conf
}
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"fmt"
"reflect"
"golang.org/x/net/context"
"google.golang.org/api/iterator"
)
// A pageFetcher returns a page of rows, starting from the row specified by token.
type pageFetcher interface {
fetch(ctx context.Context, s service, token string) (*readDataResult, error)
setPaging(*pagingConf)
}
func newRowIterator(ctx context.Context, s service, pf pageFetcher) *RowIterator {
it := &RowIterator{
ctx: ctx,
service: s,
pf: pf,
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(
it.fetch,
func() int { return len(it.rows) },
func() interface{} { r := it.rows; it.rows = nil; return r })
return it
}
// A RowIterator provides access to the result of a BigQuery lookup.
type RowIterator struct {
ctx context.Context
service service
pf pageFetcher
pageInfo *iterator.PageInfo
nextFunc func() error
// StartIndex can be set before the first call to Next. If PageInfo().Token
// is also set, StartIndex is ignored.
StartIndex uint64
rows [][]Value
schema Schema // populated on first call to fetch
structLoader structLoader // used to populate a pointer to a struct
}
// Next loads the next row into dst. Its return value is iterator.Done if there
// are no more results. Once Next returns iterator.Done, all subsequent calls
// will return iterator.Done.
//
// dst may implement ValueLoader, or may be a *[]Value, *map[string]Value, or struct pointer.
//
// If dst is a *[]Value, it will be set to a new []Value whose i'th element
// will be populated with the i'th column of the row.
//
// If dst is a *map[string]Value, a new map will be created if dst is nil. Then
// for each schema column name, the map key of that name will be set to the column's
// value.
//
// If dst is a pointer to a struct, each column in the schema will be matched
// with an exported field of the struct that has the same name, ignoring case.
// Unmatched schema columns and struct fields will be ignored.
//
// Each BigQuery column type corresponds to one or more Go types; a matching struct
// field must be of the correct type. The correspondences are:
//
// STRING string
// BOOL bool
// INTEGER int, int8, int16, int32, int64, uint8, uint16, uint32
// FLOAT float32, float64
// BYTES []byte
// TIMESTAMP time.Time
// DATE civil.Date
// TIME civil.Time
// DATETIME civil.DateTime
//
// A repeated field corresponds to a slice or array of the element type.
// A RECORD type (nested schema) corresponds to a nested struct or struct pointer.
// All calls to Next on the same iterator must use the same struct type.
func (it *RowIterator) Next(dst interface{}) error {
var vl ValueLoader
switch dst := dst.(type) {
case ValueLoader:
vl = dst
case *[]Value:
vl = (*valueList)(dst)
case *map[string]Value:
vl = (*valueMap)(dst)
default:
if !isStructPtr(dst) {
return fmt.Errorf("bigquery: cannot convert %T to ValueLoader (need pointer to []Value, map[string]Value, or struct)", dst)
}
}
if err := it.nextFunc(); err != nil {
return err
}
row := it.rows[0]
it.rows = it.rows[1:]
if vl == nil {
// This can only happen if dst is a pointer to a struct. We couldn't
// set vl above because we need the schema.
if err := it.structLoader.set(dst, it.schema); err != nil {
return err
}
vl = &it.structLoader
}
return vl.Load(row, it.schema)
}
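As a rough usage sketch, the canonical loop pairs Next with iterator.Done. The iterator here is assumed to come from Query.Read (defined elsewhere in this package), and the project, query, and struct type are all hypothetical.

```go
package main

import (
	"fmt"
	"log"

	"cloud.google.com/go/bigquery"
	"golang.org/x/net/context"
	"google.golang.org/api/iterator"
)

// score matches the query's columns by name, ignoring case.
type score struct {
	Name string
	Num  int
}

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "my-project-id") // hypothetical project
	if err != nil {
		log.Fatal(err)
	}
	q := client.Query("SELECT name, num FROM my_dataset.my_table") // hypothetical query
	it, err := q.Read(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for {
		var s score
		err := it.Next(&s)
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(s.Name, s.Num)
	}
}
```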
func isStructPtr(x interface{}) bool {
t := reflect.TypeOf(x)
return t.Kind() == reflect.Ptr && t.Elem().Kind() == reflect.Struct
}
// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
func (it *RowIterator) PageInfo() *iterator.PageInfo { return it.pageInfo }
func (it *RowIterator) fetch(pageSize int, pageToken string) (string, error) {
pc := &pagingConf{}
if pageSize > 0 {
pc.recordsPerRequest = int64(pageSize)
pc.setRecordsPerRequest = true
}
if pageToken == "" {
pc.startIndex = it.StartIndex
}
it.pf.setPaging(pc)
var res *readDataResult
var err error
for {
res, err = it.pf.fetch(it.ctx, it.service, pageToken)
if err != errIncompleteJob {
break
}
}
if err != nil {
return "", err
}
it.rows = append(it.rows, res.rows...)
it.schema = res.schema
return res.pageToken, nil
}
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"cloud.google.com/go/internal"
gax "github.com/googleapis/gax-go"
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
)
// A Job represents an operation which has been submitted to BigQuery for processing.
type Job struct {
service service
projectID string
jobID string
isQuery bool
}
// JobFromID creates a Job which refers to an existing BigQuery job. The job
// need not have been created by this package. For example, the job may have
// been created in the BigQuery console.
func (c *Client) JobFromID(ctx context.Context, id string) (*Job, error) {
jobType, err := c.service.getJobType(ctx, c.projectID, id)
if err != nil {
return nil, err
}
return &Job{
service: c.service,
projectID: c.projectID,
jobID: id,
isQuery: jobType == queryJobType,
}, nil
}
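A short sketch of picking up an existing job, with hypothetical project and job IDs:

```go
package main

import (
	"fmt"
	"log"

	"cloud.google.com/go/bigquery"
	"golang.org/x/net/context"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "my-project-id") // hypothetical project
	if err != nil {
		log.Fatal(err)
	}
	// The ID might have been copied from the BigQuery console.
	job, err := client.JobFromID(ctx, "job_0123456789") // hypothetical job ID
	if err != nil {
		log.Fatal(err)
	}
	status, err := job.Status(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(job.ID(), status.Done())
}
```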
func (j *Job) ID() string {
return j.jobID
}
// State is one of a sequence of states that a Job progresses through as it is processed.
type State int
const (
Pending State = iota
Running
Done
)
// JobStatus contains the current State of a job, and errors encountered while processing that job.
type JobStatus struct {
State State
err error
// All errors encountered during the running of the job.
// Not all Errors are fatal, so errors here do not necessarily mean that the job has completed or was unsuccessful.
Errors []*Error
}
// setJobRef initializes job's JobReference if given a non-empty jobID.
// projectID must be non-empty.
func setJobRef(job *bq.Job, jobID, projectID string) {
if jobID == "" {
return
}
// We don't check whether projectID is empty; the server will return an
// error when it encounters the resulting JobReference.
job.JobReference = &bq.JobReference{
JobId: jobID,
ProjectId: projectID,
}
}
// Done reports whether the job has completed.
// After Done returns true, the Err method will return an error if the job completed unsuccessfully.
func (s *JobStatus) Done() bool {
return s.State == Done
}
// Err returns the error that caused the job to complete unsuccessfully (if any).
func (s *JobStatus) Err() error {
return s.err
}
// Status returns the current status of the job. It fails if the status could not be determined.
func (j *Job) Status(ctx context.Context) (*JobStatus, error) {
return j.service.jobStatus(ctx, j.projectID, j.jobID)
}
// Cancel requests that a job be cancelled. This method returns without waiting for
// cancellation to take effect. To check whether the job has terminated, use Job.Status.
// Cancelled jobs may still incur costs.
func (j *Job) Cancel(ctx context.Context) error {
return j.service.jobCancel(ctx, j.projectID, j.jobID)
}
// Wait blocks until the job or the context is done. It returns the final status
// of the job.
// If an error occurs while retrieving the status, Wait returns that error. But
// Wait returns nil if the status was retrieved successfully, even if
// status.Err() != nil. So callers must check both errors. See the example.
func (j *Job) Wait(ctx context.Context) (*JobStatus, error) {
var js *JobStatus
err := internal.Retry(ctx, gax.Backoff{}, func() (stop bool, err error) {
js, err = j.Status(ctx)
if err != nil {
return true, err
}
if js.Done() {
return true, nil
}
return false, nil
})
if err != nil {
return nil, err
}
return js, nil
}
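The example referred to above amounts to two separate error checks, roughly as in this sketch; "job" is assumed to come from an earlier Run or JobFromID call.

```go
package example

import (
	"cloud.google.com/go/bigquery"
	"golang.org/x/net/context"
)

// waitForJob distinguishes Wait's own error from the job's final status error.
func waitForJob(ctx context.Context, job *bigquery.Job) error {
	status, err := job.Wait(ctx)
	if err != nil {
		return err // the status could not be retrieved
	}
	if err := status.Err(); err != nil {
		return err // the job completed, but unsuccessfully
	}
	return nil
}
```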
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
)
// LoadConfig holds the configuration for a load job.
type LoadConfig struct {
// JobID is the ID to use for the load job. If unset, a job ID will be automatically created.
JobID string
// Src is the source from which data will be loaded.
Src LoadSource
// Dst is the table into which the data will be loaded.
Dst *Table
// CreateDisposition specifies the circumstances under which the destination table will be created.
// The default is CreateIfNeeded.
CreateDisposition TableCreateDisposition
// WriteDisposition specifies how existing data in the destination table is treated.
// The default is WriteAppend.
WriteDisposition TableWriteDisposition
}
// A Loader loads data from Google Cloud Storage into a BigQuery table.
type Loader struct {
LoadConfig
c *Client
}
// A LoadSource represents a source of data that can be loaded into
// a BigQuery table.
//
// This package defines two LoadSources: GCSReference, for Google Cloud Storage
// objects, and ReaderSource, for data read from an io.Reader.
type LoadSource interface {
populateInsertJobConfForLoad(conf *insertJobConf)
}
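As a hedged sketch of the second kind of source, loading a small in-memory CSV through ReaderSource (defined elsewhere in this package) might look like this, using Table.LoaderFrom from just below:

```go
package example

import (
	"strings"

	"cloud.google.com/go/bigquery"
	"golang.org/x/net/context"
)

// loadFromReader wraps any io.Reader as a load source; here, a CSV string.
func loadFromReader(ctx context.Context, table *bigquery.Table) (*bigquery.Job, error) {
	rs := bigquery.NewReaderSource(strings.NewReader("a,1\nb,2\n"))
	rs.SourceFormat = bigquery.CSV
	return table.LoaderFrom(rs).Run(ctx)
}
```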
// LoaderFrom returns a Loader which can be used to load data into a BigQuery table.
// The returned Loader may optionally be further configured before its Run method is called.
func (t *Table) LoaderFrom(src LoadSource) *Loader {
return &Loader{
c: t.c,
LoadConfig: LoadConfig{
Src: src,
Dst: t,
},
}
}
// Run initiates a load job.
func (l *Loader) Run(ctx context.Context) (*Job, error) {
job := &bq.Job{
Configuration: &bq.JobConfiguration{
Load: &bq.JobConfigurationLoad{
CreateDisposition: string(l.CreateDisposition),
WriteDisposition: string(l.WriteDisposition),
},
},
}
conf := &insertJobConf{job: job}
l.Src.populateInsertJobConfForLoad(conf)
setJobRef(job, l.JobID, l.c.projectID)
job.Configuration.Load.DestinationTable = l.Dst.tableRefProto()
return l.c.service.insertJob(ctx, l.c.projectID, conf)
}
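Putting the pieces together, an end-to-end sketch (all identifiers hypothetical) sets the dispositions, runs the load, and waits for the result, checking both errors as described above:

```go
package main

import (
	"log"

	"cloud.google.com/go/bigquery"
	"golang.org/x/net/context"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "my-project-id") // hypothetical project
	if err != nil {
		log.Fatal(err)
	}
	gcs := bigquery.NewGCSReference("gs://my-bucket/data.csv") // hypothetical object
	loader := client.Dataset("my-dataset").Table("my-table").LoaderFrom(gcs)
	loader.CreateDisposition = bigquery.CreateNever  // fail if the table is missing
	loader.WriteDisposition = bigquery.WriteTruncate // replace any existing rows
	job, err := loader.Run(ctx)
	if err != nil {
		log.Fatal(err)
	}
	status, err := job.Wait(ctx)
	if err != nil {
		log.Fatal(err) // could not retrieve status
	}
	if status.Err() != nil {
		log.Fatal(status.Err()) // load completed unsuccessfully
	}
}
```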
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"reflect"
"strings"
"testing"
"golang.org/x/net/context"
"cloud.google.com/go/internal/pretty"
bq "google.golang.org/api/bigquery/v2"
)
func defaultLoadJob() *bq.Job {
return &bq.Job{
Configuration: &bq.JobConfiguration{
Load: &bq.JobConfigurationLoad{
DestinationTable: &bq.TableReference{
ProjectId: "project-id",
DatasetId: "dataset-id",
TableId: "table-id",
},
SourceUris: []string{"uri"},
},
},
}
}
func stringFieldSchema() *FieldSchema {
return &FieldSchema{Name: "fieldname", Type: StringFieldType}
}
func nestedFieldSchema() *FieldSchema {
return &FieldSchema{
Name: "nested",
Type: RecordFieldType,
Schema: Schema{stringFieldSchema()},
}
}
func bqStringFieldSchema() *bq.TableFieldSchema {
return &bq.TableFieldSchema{
Name: "fieldname",
Type: "STRING",
}
}
func bqNestedFieldSchema() *bq.TableFieldSchema {
return &bq.TableFieldSchema{
Name: "nested",
Type: "RECORD",
Fields: []*bq.TableFieldSchema{bqStringFieldSchema()},
}
}
func TestLoad(t *testing.T) {
c := &Client{projectID: "project-id"}
testCases := []struct {
dst *Table
src LoadSource
config LoadConfig
want *bq.Job
}{
{
dst: c.Dataset("dataset-id").Table("table-id"),
src: NewGCSReference("uri"),
want: defaultLoadJob(),
},
{
dst: c.Dataset("dataset-id").Table("table-id"),
config: LoadConfig{
CreateDisposition: CreateNever,
WriteDisposition: WriteTruncate,
JobID: "ajob",
},
src: NewGCSReference("uri"),
want: func() *bq.Job {
j := defaultLoadJob()
j.Configuration.Load.CreateDisposition = "CREATE_NEVER"
j.Configuration.Load.WriteDisposition = "WRITE_TRUNCATE"
j.JobReference = &bq.JobReference{
JobId: "ajob",
ProjectId: "project-id",
}
return j
}(),
},
{
dst: c.Dataset("dataset-id").Table("table-id"),
src: func() *GCSReference {
g := NewGCSReference("uri")
g.MaxBadRecords = 1
g.AllowJaggedRows = true
g.AllowQuotedNewlines = true
g.IgnoreUnknownValues = true
return g
}(),
want: func() *bq.Job {
j := defaultLoadJob()
j.Configuration.Load.MaxBadRecords = 1
j.Configuration.Load.AllowJaggedRows = true
j.Configuration.Load.AllowQuotedNewlines = true
j.Configuration.Load.IgnoreUnknownValues = true
return j
}(),
},
{
dst: c.Dataset("dataset-id").Table("table-id"),
src: func() *GCSReference {
g := NewGCSReference("uri")
g.Schema = Schema{
stringFieldSchema(),
nestedFieldSchema(),
}
return g
}(),
want: func() *bq.Job {
j := defaultLoadJob()
j.Configuration.Load.Schema = &bq.TableSchema{
Fields: []*bq.TableFieldSchema{
bqStringFieldSchema(),
bqNestedFieldSchema(),
}}
return j
}(),
},
{
dst: c.Dataset("dataset-id").Table("table-id"),
src: func() *GCSReference {
g := NewGCSReference("uri")
g.SkipLeadingRows = 1
g.SourceFormat = JSON
g.Encoding = UTF_8
g.FieldDelimiter = "\t"
g.Quote = "-"
return g
}(),
want: func() *bq.Job {
j := defaultLoadJob()
j.Configuration.Load.SkipLeadingRows = 1
j.Configuration.Load.SourceFormat = "NEWLINE_DELIMITED_JSON"
j.Configuration.Load.Encoding = "UTF-8"
j.Configuration.Load.FieldDelimiter = "\t"
hyphen := "-"
j.Configuration.Load.Quote = &hyphen
return j
}(),
},
{
dst: c.Dataset("dataset-id").Table("table-id"),
src: NewGCSReference("uri"),
want: func() *bq.Job {
j := defaultLoadJob()
// Quote is left unset in GCSReference, so should be nil here.
j.Configuration.Load.Quote = nil
return j
}(),
},
{
dst: c.Dataset("dataset-id").Table("table-id"),
src: func() *GCSReference {
g := NewGCSReference("uri")
g.ForceZeroQuote = true
return g
}(),
want: func() *bq.Job {
j := defaultLoadJob()
empty := ""
j.Configuration.Load.Quote = &empty
return j
}(),
},
{
dst: c.Dataset("dataset-id").Table("table-id"),
src: func() *ReaderSource {
r := NewReaderSource(strings.NewReader("foo"))
r.SkipLeadingRows = 1
r.SourceFormat = JSON
r.Encoding = UTF_8
r.FieldDelimiter = "\t"
r.Quote = "-"
return r
}(),
want: func() *bq.Job {
j := defaultLoadJob()
j.Configuration.Load.SourceUris = nil
j.Configuration.Load.SkipLeadingRows = 1
j.Configuration.Load.SourceFormat = "NEWLINE_DELIMITED_JSON"
j.Configuration.Load.Encoding = "UTF-8"
j.Configuration.Load.FieldDelimiter = "\t"
hyphen := "-"
j.Configuration.Load.Quote = &hyphen
return j
}(),
},
}
for i, tc := range testCases {
s := &testService{}
c.service = s
loader := tc.dst.LoaderFrom(tc.src)
tc.config.Src = tc.src
tc.config.Dst = tc.dst
loader.LoadConfig = tc.config
if _, err := loader.Run(context.Background()); err != nil {
t.Errorf("%d: err calling Loader.Run: %v", i, err)
continue
}
if !reflect.DeepEqual(s.Job, tc.want) {
t.Errorf("loading %d: got:\n%v\nwant:\n%v",
i, pretty.Value(s.Job), pretty.Value(tc.want))
}
}
}
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package cloud is the root of the packages used to access Google Cloud
// Services. See https://godoc.org/cloud.google.com/go for a full list
// of sub-packages.
//
// This package documents how to authorize and authenticate the sub packages.
package cloud // import "cloud.google.com/go"