Title: | An Interface to Google's 'BigQuery' 'API' |
---|---|
Description: | Easily talk to Google's 'BigQuery' database from R. |
Authors: | Hadley Wickham [aut, cre] , Jennifer Bryan [aut] , Posit Software, PBC [cph, fnd] |
Maintainer: | Hadley Wickham <[email protected]> |
License: | MIT + file LICENSE |
Version: | 1.5.1.9000 |
Built: | 2024-12-16 03:43:48 UTC |
Source: | https://github.com/r-dbi/bigrquery |
Basic create-read-update-delete verbs for datasets.
bq_dataset_create(x, location = "US", ...)
bq_dataset_meta(x, fields = NULL)
bq_dataset_exists(x)
bq_dataset_update(x, ...)
bq_dataset_delete(x, delete_contents = FALSE)
bq_dataset_tables(x, page_size = 50, max_pages = Inf, warn = TRUE, ...)
x |
A bq_dataset |
location |
Dataset location |
... |
Additional arguments passed on to the underlying API call. snake_case names are automatically converted to camelCase. |
fields |
An optional field specification for partial response |
delete_contents |
If TRUE, deletes all the tables in the dataset. Set to FALSE by default for safety. |
page_size |
Number of items per page. |
max_pages |
Maximum number of pages to retrieve. Use Inf to retrieve all pages. |
warn |
If TRUE, warns when there are unretrieved pages. |
ds <- bq_dataset(bq_test_project(), "dataset_api")
bq_dataset_exists(ds)

bq_dataset_create(ds)
bq_dataset_exists(ds)
str(bq_dataset_meta(ds))

bq_dataset_delete(ds)
bq_dataset_exists(ds)

# Use bq_test_dataset() to create a temporary dataset that will
# be automatically deleted
ds <- bq_test_dataset()
bq_table_create(bq_table(ds, "x1"))
bq_table_create(bq_table(ds, "x2"))
bq_table_create(bq_table(ds, "x3"))
bq_dataset_tables(ds)
To perform a job, see api-perform. These functions all retrieve metadata (in various forms) about an existing job.
bq_job_meta(x, fields = NULL)
bq_job_status(x)
bq_job_show_statistics(x)
bq_job_wait(
  x,
  quiet = getOption("bigrquery.quiet"),
  pause = 0.5,
  call = caller_env()
)
x |
A bq_job |
fields |
An optional field specification for partial response |
quiet |
If FALSE, displays progress bar; if TRUE is silent; if NA displays progress bar if query takes more than a few seconds. |
pause |
Amount of time (in seconds) to wait between status requests. |
call |
The execution environment of a currently
running function, e.g. caller_env(). The function will be mentioned
in error messages as the source of the error. |
jobs <- bq_project_jobs(bq_test_project())
jobs[[1]]

# Show statistics about job
bq_job_show_statistics(jobs[[1]])

# Wait for job to complete
bq_job_wait(jobs[[1]])
Projects have two primary components: datasets and jobs. Unlike other
BigQuery objects, there is no accompanying bq_project S3 class because
a project is identified by a simple string.
bq_project_datasets(x, page_size = 100, max_pages = 1, warn = TRUE)
bq_project_jobs(x, page_size = 100, max_pages = 1, warn = TRUE)
x |
A string giving a project name. |
page_size |
Number of items per page. |
max_pages |
Maximum number of pages to retrieve. Use Inf to retrieve all pages. |
warn |
If TRUE, warns when there are unretrieved pages. |
bq_project_datasets(): a list of bq_datasets.
bq_project_jobs(): a list of bq_jobs.
One day we might also expose the general project metadata.
bq_project_datasets("bigquery-public-data")
bq_project_datasets("githubarchive")

bq_project_jobs(bq_test_project(), page_size = 10)
Basic create-read-update-delete verbs for tables, as well as functions
for uploading data (bq_table_upload()), saving to/loading from Google
Cloud Storage (bq_table_save(), bq_table_load()), and getting
various values from the metadata.
bq_table_create(x, fields = NULL, ...)
bq_table_meta(x, fields = NULL)
bq_table_fields(x)
bq_table_size(x)
bq_table_nrow(x)
bq_table_exists(x)
bq_table_delete(x)
bq_table_copy(x, dest, ..., quiet = NA)
bq_table_upload(x, values, ..., quiet = NA)
bq_table_save(x, destination_uris, ..., quiet = NA)
bq_table_load(x, source_uris, ..., quiet = NA)
bq_table_patch(x, fields)
x |
A bq_table, or an object coercible to a bq_table. |
fields |
A bq_fields specification, or something coercible to it (like a data frame). |
... |
Additional arguments passed on to the underlying API call. snake_case names are automatically converted to camelCase. |
dest |
Source and destination bq_tables. |
quiet |
If FALSE, displays progress bar; if TRUE is silent; if NA displays progress bar if job takes more than a few seconds. |
values |
Data frame of values to insert. |
destination_uris |
A character vector of fully-qualified Google Cloud
Storage URIs where the extracted table should be written. Can export
up to 1 GB of data per file. Use a wildcard URI (e.g.
gs://[YOUR_BUCKET]/file-name-*.json) to automatically create any
number of files. |
source_uris |
The fully-qualified URIs that point to your data in Google Cloud.

For Google Cloud Storage URIs: each URI can contain one '*' wildcard character and it must come after the 'bucket' name.

For Google Cloud Bigtable URIs: exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table.

For Google Cloud Datastore backups: exactly one URI can be specified. Also, the '*' wildcard character is not allowed. |
bq_table_copy(), bq_table_create(), bq_table_delete(), bq_table_upload(): an invisible bq_table.
bq_table_exists(): either TRUE or FALSE.
bq_table_size(): the size of the table in bytes.
bq_table_fields(): a bq_fields.
ds <- bq_test_dataset()

bq_mtcars <- bq_table(ds, "mtcars")
bq_table_exists(bq_mtcars)

bq_table_create(
  bq_mtcars,
  fields = mtcars,
  friendly_name = "Motor Trend Car Road Tests",
  description = "The data was extracted from the 1974 Motor Trend US magazine",
  labels = list(category = "example")
)
bq_table_exists(bq_mtcars)

bq_table_upload(bq_mtcars, mtcars)
bq_table_fields(bq_mtcars)
bq_table_size(bq_mtcars)
str(bq_table_meta(bq_mtcars))

bq_table_delete(bq_mtcars)
bq_table_exists(bq_mtcars)

my_natality <- bq_table(ds, "mynatality")
bq_table_copy("publicdata.samples.natality", my_natality)
Creates a BigQuery DBI driver for use in DBI::dbConnect().
## S4 method for signature 'BigQueryDriver'
dbConnect(
  drv,
  project,
  dataset = NULL,
  billing = project,
  page_size = 10000,
  quiet = NA,
  use_legacy_sql = FALSE,
  bigint = c("integer", "integer64", "numeric", "character"),
  ...
)
drv |
an object that inherits from DBIDriver, or an existing DBIConnection object (in order to clone an existing connection). |
project, dataset |
Project and dataset identifiers. |
billing |
Identifier of project to bill. |
page_size |
Number of items per page. |
quiet |
If FALSE, displays progress bar; if TRUE is silent; if NA displays progress bar if query takes more than a few seconds. |
use_legacy_sql |
If TRUE will use BigQuery's legacy SQL format. |
bigint |
The R type that BigQuery's 64-bit integer types should be mapped to.
The default is "integer", which returns R's integer type, but results in
NA for values above/below +/- 2147483647. "integer64" returns a
bit64::integer64, which allows the full range of 64-bit integers. |
... |
Other arguments for compatibility with generic; currently ignored. |
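If you expect INT64 values outside R's 32-bit integer range, the bigint argument matters at connection time. The sketch below is illustrative rather than taken from the package docs; "my-billing-project" is a hypothetical project id, and the code needs live BigQuery credentials to actually run.

# Hedged sketch: map BigQuery INT64 columns to bit64::integer64 so
# values beyond +/- 2147483647 survive the round trip.
# "my-billing-project" is a placeholder billing project.
library(bigrquery)

con <- DBI::dbConnect(
  bigquery(),
  project = "publicdata",
  dataset = "samples",
  billing = "my-billing-project",  # hypothetical; substitute your own
  bigint  = "integer64"
)

With this connection, DBI::dbReadTable() and DBI::dbGetQuery() return bit64::integer64 vectors for 64-bit integer columns instead of lossy integers.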
con <- DBI::dbConnect(
  bigquery(),
  project = "publicdata",
  dataset = "samples",
  billing = bq_test_project()
)
con
DBI::dbListTables(con)
DBI::dbReadTable(con, "natality", n_max = 10)

# Create a temporary dataset to explore
ds <- bq_test_dataset()
con <- DBI::dbConnect(
  bigquery(),
  project = ds$project,
  dataset = ds$dataset
)
DBI::dbWriteTable(con, "mtcars", mtcars)
DBI::dbReadTable(con, "mtcars")[1:6, ]

DBI::dbGetQuery(con, "SELECT count(*) FROM mtcars")

res <- DBI::dbSendQuery(con, "SELECT cyl, mpg FROM mtcars")
dbColumnInfo(res)
dbFetch(res, 10)
dbFetch(res, -1)
DBI::dbHasCompleted(res)
Authorize bigrquery to view and manage your BigQuery projects. This
function is a wrapper around gargle::token_fetch().
By default, you are directed to a web browser, asked to sign in to your Google account, and to grant bigrquery permission to operate on your behalf with Google BigQuery. By default, with your permission, these user credentials are cached in a folder below your home directory, from where they can be automatically refreshed, as necessary. Storage at the user level means the same token can be used across multiple projects and tokens are less likely to be synced to the cloud by accident.
bq_auth(
  email = gargle::gargle_oauth_email(),
  path = NULL,
  scopes = c(
    "https://www.googleapis.com/auth/bigquery",
    "https://www.googleapis.com/auth/cloud-platform"
  ),
  cache = gargle::gargle_oauth_cache(),
  use_oob = gargle::gargle_oob_default(),
  token = NULL
)
email |
Optional. If specified, used to target a specific Google identity and,
possibly, to select a matching token from the cache without a browser
dance. Defaults to the option named "gargle_oauth_email", retrieved by
gargle::gargle_oauth_email(). |
path |
JSON identifying the service account, in one of the forms
supported for the txt argument of jsonlite::fromJSON() (typically, a
file path or JSON string). |
scopes |
A character vector of scopes to request. Pick from those listed at https://developers.google.com/identity/protocols/oauth2/scopes. |
cache |
Specifies the OAuth token cache. Defaults to the option named
"gargle_oauth_cache", retrieved via gargle::gargle_oauth_cache(). |
use_oob |
Whether to use out-of-band authentication (or, perhaps, a
variant implemented by gargle and known as "pseudo-OOB") when first
acquiring the token. Defaults to the value returned by
gargle::gargle_oob_default(). If the OAuth client is provided implicitly
by a wrapper package, its type probably defaults to the value returned
by gargle::gargle_oauth_client_type(). |
token |
A token with class Token2.0 or an object of httr's class request,
i.e. a token prepared with httr::config(). |
Most users, most of the time, do not need to call bq_auth() explicitly;
it is triggered by the first action that requires authorization. Even
when called, the default arguments often suffice. However, when
necessary, bq_auth() allows the user to explicitly:

- Declare which Google identity to use, via an email specification.
- Use a service account token or workload identity federation via path.
- Bring your own token.
- Customize scopes.
- Use a non-default cache folder or turn caching off.
- Explicitly request out-of-band (OOB) auth via use_oob.
If you are interacting with R within a browser (applies to RStudio
Server, Posit Workbench, Posit Cloud, and Google Colaboratory), you need
OOB auth or the pseudo-OOB variant. If this does not happen
automatically, you can request it explicitly with use_oob = TRUE or,
more persistently, by setting an option via
options(gargle_oob_default = TRUE).

The choice between conventional OOB or pseudo-OOB auth is determined
by the type of OAuth client. If the client is of the "installed" type,
use_oob = TRUE results in conventional OOB auth. If the client is of
the "web" type, use_oob = TRUE results in pseudo-OOB auth. Packages
that provide a built-in OAuth client can usually detect which type of
client to use. But if you need to set this explicitly, use the
"gargle_oauth_client_type" option:
options(gargle_oauth_client_type = "web")       # pseudo-OOB
# or, alternatively
options(gargle_oauth_client_type = "installed") # conventional OOB
For details on the many ways to find a token, see gargle::token_fetch().
For deeper control over auth, use bq_auth_configure() to bring your own
OAuth client or API key. To learn more about gargle options, see
gargle::gargle_options.
Other auth functions: bq_auth_configure(), bq_deauth()
## Not run:
## load/refresh existing credentials, if available
## otherwise, go to browser for authentication and authorization
bq_auth()

## force use of a token associated with a specific email
bq_auth(email = "[email protected]")

## force a menu where you can choose from existing tokens or
## choose to get a new one
bq_auth(email = NA)

## use a 'read only' scope, so it's impossible to change data
bq_auth(
  scopes = "https://www.googleapis.com/auth/devstorage.read_only"
)

## use a service account token
bq_auth(path = "foofy-83ee9e7c9c48.json")
## End(Not run)
These functions give more control over and visibility into the auth
configuration than bq_auth() does. bq_auth_configure() lets the user
specify their own OAuth client, which is used when obtaining a user
token. See the vignette("get-api-credentials", package = "gargle")
for more. If the user does not configure these settings, internal
defaults are used.

bq_oauth_client() retrieves the currently configured OAuth client.
bq_auth_configure(client, path, app = deprecated())
bq_oauth_client()
client |
A Google OAuth client, presumably constructed via
gargle::gargle_oauth_client_from_json(). |
path |
JSON downloaded from Google Cloud Console, containing a client id and
secret, in one of the forms supported for the txt argument of
jsonlite::fromJSON() (typically, a file path or JSON string). |
app |
Deprecated; use the client argument instead. |
bq_auth_configure(): An object of R6 class gargle::AuthState, invisibly.
bq_oauth_client(): the current user-configured OAuth client.
Other auth functions: bq_auth(), bq_deauth()
# see and store the current user-configured OAuth client (probably `NULL`)
(original_client <- bq_oauth_client())

# the preferred way to configure your own client is via a JSON file
# downloaded from Google Developers Console
# this example JSON is indicative, but fake
path_to_json <- system.file(
  "extdata", "data", "client_secret_123.googleusercontent.com.json",
  package = "bigrquery"
)
bq_auth_configure(path = path_to_json)

# confirm the changes
bq_oauth_client()

# restore original auth config
bq_auth_configure(client = original_client)
Clears any currently stored token. The next time bigrquery needs a
token, the token acquisition process starts over, with a fresh call to
bq_auth() and, therefore, internally, a call to gargle::token_fetch().
Unlike some other packages that use gargle, bigrquery is not usable in a
de-authorized state. Therefore, calling bq_deauth() only clears the
token, i.e. it does NOT imply that subsequent requests are made with an
API key in lieu of a token.
bq_deauth()
bq_deauth()
Other auth functions: bq_auth(), bq_auth_configure()
## Not run:
bq_deauth()
## End(Not run)
bq_field() and bq_fields() create; as_bq_field() and as_bq_fields()
coerce from lists.
bq_field(name, type, mode = "NULLABLE", fields = list(), description = NULL)
bq_fields(x)
as_bq_field(x)
as_bq_fields(x)
name |
The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters. |
type |
The field data type. Possible values include: STRING, BYTES, INTEGER,
FLOAT, BOOLEAN, TIMESTAMP, RECORD. |
mode |
The field mode. Possible values include: NULLABLE, REQUIRED, and REPEATED. |
fields |
For a field of type "record", a list of sub-fields. |
description |
The field description. The maximum length is 1,024 characters. |
x |
A list of bq_fields or, for the coercion functions, something coercible to one (such as a data frame). |
bq_field() corresponds to a TableFieldSchema; see
https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#TableFieldSchema
for more details.
bq_field("name", "string")

as_bq_fields(list(
  list(name = "name", type = "string"),
  bq_field("age", "integer")
))

# as_bq_fields() can also take a data frame
as_bq_fields(mtcars)
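Nested schemas follow the same pattern. The sketch below is not taken from the package docs; the field names are hypothetical, but it shows how the fields argument builds a REPEATED RECORD column:

library(bigrquery)

# A REPEATED RECORD field: each row may carry zero or more addresses,
# each with its own sub-fields.
address <- bq_field(
  "address", "RECORD",
  mode = "REPEATED",
  fields = list(
    bq_field("street", "STRING"),
    bq_field("city", "STRING")
  )
)

schema <- bq_fields(list(
  bq_field("name", "STRING"),
  address
))

A schema like this can then be passed as the fields argument of bq_table_create().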
Reports whether bigrquery has stored a token, ready for use in downstream requests.
bq_has_token()
Logical.
Other low-level API functions: bq_token()
bq_has_token()
List all projects that you have access to. You can also work with
public datasets, but you will need to provide a billing project
whenever you perform any non-free operation.
bq_projects(page_size = 100, max_pages = 1, warn = TRUE)
page_size |
Number of items per page. |
max_pages |
Maximum number of pages to retrieve. Use Inf to retrieve all pages. |
warn |
If TRUE, warns when there are unretrieved pages. |
A character vector.
bq_projects()
These functions submit a query (using bq_perform_query()) and then wait
for it to complete (with bq_job_wait()). All BigQuery queries save their
results into a table (temporary or otherwise), so these functions return
a bq_table which you can then query for more information.
bq_project_query(x, query, destination_table = NULL, ..., quiet = NA)
bq_dataset_query(
  x,
  query,
  destination_table = NULL,
  ...,
  billing = NULL,
  quiet = NA
)
x |
Either a project (a string) or a bq_dataset. |
query |
SQL query string. |
destination_table |
A bq_table where results should be stored. If not supplied, results will be saved to a temporary table that lives in a special dataset. You must supply this parameter for large queries (> 128 MB compressed). |
... |
Passed on to bq_perform_query(). |
quiet |
If FALSE, displays progress bar; if TRUE is silent; if NA displays progress bar if query takes more than a few seconds. |
billing |
If you query a dataset that you only have read access
for, such as a public dataset, you must also submit a billing project. |
A bq_table
# Querying a project requires full name in query
tb <- bq_project_query(
  bq_test_project(),
  "SELECT count(*) FROM publicdata.samples.natality"
)
bq_table_fields(tb)
bq_table_download(tb)

# Querying a dataset sets default dataset so you can use bare table name,
# but for public data, you'll need to set a project to bill.
ds <- bq_dataset("publicdata", "samples")
tb <- bq_dataset_query(ds,
  query = "SELECT count(*) FROM natality",
  billing = bq_test_project()
)
bq_table_download(tb)

tb <- bq_dataset_query(ds,
  query = "SELECT count(*) FROM natality WHERE state = @state",
  parameters = list(state = "KS"),
  billing = bq_test_project()
)
bq_table_download(tb)
Create references to BigQuery datasets, jobs, and tables. Each class
has a constructor function (bq_dataset(), bq_table(), bq_job()) and a
coercion function (as_bq_dataset(), as_bq_table(), as_bq_job()).
The coercion functions come with methods for strings (which find
components by splitting on .) and lists (which look for named components
like projectId or project_id).

All bq_table_, bq_dataset_, and bq_job_ functions call the appropriate
coercion functions on their first argument, allowing you to flexibly
specify their inputs.
bq_dataset(project, dataset)
as_bq_dataset(x, ..., error_arg = caller_arg(x), error_call = caller_env())

bq_table(project, dataset, table = NULL, type = "TABLE")
as_bq_table(x, ..., error_arg = caller_arg(x), error_call = caller_env())

bq_job(project, job, location = "US")
as_bq_job(x, ..., error_arg = caller_arg(x), error_call = caller_env())
project, dataset, table, job, type |
Individual project, dataset, table, and job identifiers and table type (strings). |
x |
An object to coerce to a bq_dataset, bq_table, or bq_job. |
... |
Other arguments passed on to methods. |
error_arg |
An argument name as a string. This argument will be mentioned in error messages as the input that is at the origin of a problem. |
error_call |
The execution environment of a currently
running function, e.g. caller_env(). The function will be mentioned
in error messages as the source of the error. |
location |
Job location |
api-job, api-perform, api-dataset, and api-table for functions that work with these objects.
# Creation ------------------------------------------------
samples <- bq_dataset("publicdata", "samples")
natality <- bq_table("publicdata", "samples", "natality")
natality

# Or
bq_table(samples, "natality")

bq_job("bigrquery-examples", "m0SgFu2ycbbge6jgcvzvflBJ_Wft")

# Coercion ------------------------------------------------
as_bq_dataset("publicdata.shakespeare")
as_bq_table("publicdata.samples.natality")

as_bq_table(list(
  project_id = "publicdata",
  dataset_id = "samples",
  table_id = "natality"
))

as_bq_job(list(
  projectId = "bigrquery-examples",
  jobId = "job_m0SgFu2ycbbge6jgcvzvflBJ_Wft",
  location = "US"
))
This function provides two ways to download data from BigQuery,
transferring data using either JSON or arrow, depending on the api
argument. If bigrquerystorage is installed, api = "arrow" will be used
(because it's so much faster, but see the limitations below); otherwise
you can select deliberately by using api = "json" or api = "arrow".
The arrow API is much faster, but has heavier dependencies: bigrquerystorage requires the arrow package, which can be tricky to compile on Linux (but you usually should be able to get a binary from Posit Public Package Manager).
There's one known limitation of api = "arrow": when querying public
data, you'll now need to provide a billing project.
The JSON API retrieves rows in chunks of page_size. It is most suitable
for results of smaller queries (<100 MB, say). Unfortunately, due to
limitations in the BigQuery API, you may need to vary this parameter
depending on the complexity of the underlying data.

The JSON API will convert nested and repeated columns into list-columns
as follows:
Repeated values (arrays) will become a list-column of vectors.
Records will become list-columns of named lists.
Repeated records will become list-columns of data frames.
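The rules above can be sketched as follows. This example is illustrative rather than from the package docs; it requires live BigQuery credentials and the bq_test_project() helper, so treat it as a non-runnable sketch.

# Hedged sketch (needs credentials): an ARRAY (repeated) column arrives
# as a list-column, one vector per row.
library(bigrquery)

tb <- bq_project_query(
  bq_test_project(),
  "SELECT 'a' AS id, [1, 2, 3] AS vals"
)
df <- bq_table_download(tb)
df$vals[[1]]  # a vector holding the three array elements

A RECORD column would instead come back as a list-column of named lists, and a repeated RECORD as a list-column of data frames.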
bq_table_download(
  x,
  n_max = Inf,
  page_size = NULL,
  start_index = 0L,
  max_connections = 6L,
  quiet = NA,
  bigint = c("integer", "integer64", "numeric", "character"),
  api = c("json", "arrow"),
  billing = x$project,
  max_results = deprecated()
)
x |
A bq_table |
n_max |
Maximum number of results to retrieve. Use Inf to retrieve all rows. |
page_size |
(JSON only) The number of rows requested per chunk. It is
recommended to leave this unspecified until you have evidence that the
page size selected automatically is problematic. When page_size = NULL,
bigrquery determines a conservative, natural chunk size empirically. If
you specify page_size, it is important that each chunk fits on one page,
i.e. that the requested row limit is low enough to prevent the API from
paginating based on response size. |
start_index |
(JSON only) Starting row index (zero-based). |
max_connections |
(JSON only) Number of maximum simultaneous connections to BigQuery servers. |
quiet |
If FALSE, displays progress bar; if TRUE is silent; if NA displays progress bar if download takes more than a few seconds. |
bigint |
The R type that BigQuery's 64-bit integer types should be
mapped to. The default is "integer", which returns R's integer type, but
results in NA for values above/below +/- 2147483647. "integer64" returns
a bit64::integer64, which allows the full range of 64-bit integers. |
api |
Which API to use? The "json" API uses the standard BigQuery REST API and
has no additional dependencies; the "arrow" API uses the BigQuery
Storage Read API via the bigrquerystorage package. Because the "arrow"
API is so much faster, it is used automatically whenever
bigrquerystorage is installed. |
billing |
(Arrow only) Project to bill; defaults to the project of x. |
max_results |
Deprecated; please use n_max instead. |
Because data retrieval may generate list-columns and the data.frame
print method can have problems with list-columns, this method returns
a tibble. If you need a data.frame, coerce the results with
as.data.frame().
df <- bq_table_download(
  "publicdata.samples.natality",
  n_max = 35000,
  billing = bq_test_project()
)
For internal use or for those programming around the BigQuery API.
Returns a token pre-processed with httr::config(). Most users do not
need to handle tokens "by hand"; even when some control is needed,
bq_auth() is usually enough. If there is no current token, bq_auth() is
called to either load one from cache or initiate the OAuth 2.0 flow.

If auth has been deactivated via bq_deauth(), bq_token() returns NULL.
bq_token()
A request object (an S3 class provided by httr).
Other low-level API functions: bq_has_token()
## Not run:
bq_token()
## End(Not run)
Reveals the email address of the user associated with the current token. If no token has been loaded yet, this function does not initiate auth.
bq_user()
An email address or, if no token has been loaded, NULL.
gargle::token_userinfo(), gargle::token_email(), gargle::token_tokeninfo()
## Not run:
bq_user()
## End(Not run)
This collect method is specialised for BigQuery tables, generating the
SQL from your dplyr commands, then calling bq_project_query() or
bq_dataset_query() to run the query, then bq_table_download() to
download the results. Thus the arguments are a combination of the
arguments to dplyr::collect(), bq_project_query()/bq_dataset_query(),
and bq_table_download().
collect.tbl_BigQueryConnection(
  x,
  ...,
  n = Inf,
  api = c("json", "arrow"),
  page_size = NULL,
  max_connections = 6L
)
x |
A data frame, data frame extension (e.g. a tibble), or a lazy data frame (e.g. from dbplyr or dtplyr). See Methods, below, for more details. |
... |
Other arguments passed on to
bq_project_query()/bq_dataset_query(). |
n |
Maximum number of results to retrieve.
The default, Inf, retrieves all rows. |
api |
Which API to use? The "json" API uses the standard BigQuery REST API and
has no additional dependencies; the "arrow" API uses the BigQuery
Storage Read API via the bigrquerystorage package. Because the "arrow"
API is so much faster, it is used automatically whenever
bigrquerystorage is installed. |
page_size |
(JSON only) The number of rows requested per chunk. It is
recommended to leave this unspecified until you have evidence that the
page size selected automatically is problematic. When page_size = NULL,
bigrquery determines a conservative, natural chunk size empirically. If
you specify page_size, it is important that each chunk fits on one page,
i.e. that the requested row limit is low enough to prevent the API from
paginating based on response size. |
max_connections |
(JSON only) Number of maximum simultaneous connections to BigQuery servers. |
Create the connection to the database with DBI::dbConnect(), then use
dplyr::tbl() to connect to tables within that database. Generally, it's
best to provide the fully qualified name of the table (i.e.
project.dataset.table), but if you supply a default dataset in the
connection, you can use just the table name. (This, however, will
prevent you from making joins across datasets.)
src_bigquery(project, dataset, billing = project, max_pages = 10)
src_bigquery(project, dataset, billing = project, max_pages = 10)
project |
project id or name |
dataset |
dataset name |
billing |
billing project, if different to project |
max_pages |
(IGNORED) maximum pages returned by a query |
## Not run:
library(dplyr)

# To run this example, replace billing with the id of one of your projects
# set up for billing
con <- DBI::dbConnect(bigquery(), project = bq_test_project())

shakespeare <- con %>% tbl(I("publicdata.samples.shakespeare"))
shakespeare

shakespeare %>%
  group_by(word) %>%
  summarise(n = sum(word_count, na.rm = TRUE)) %>%
  arrange(desc(n))
## End(Not run)