- Add missing numeric type for BigQuery schema parsing (#65)
- Align Google APIs URLs to Google Cloud Discovery docs. This enables support for Private and Restricted Google APIs configurations. (@husseyd, #81)
  - Substitute `https://bigquery.googleapis.com` for `https://www.googleapis.com`
- Support listing more than 50 datasets in `bqr_list_dataset`
- Change `bqr_list_tables` to list all tables in a dataset by default
- Add `bqr_copy_dataset` (see the sketch below)
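
  A minimal sketch of a call, assuming `bqr_copy_dataset()` takes source and destination dataset IDs (the argument names and values here are assumptions, not confirmed by this changelog):

  ```r
  library(bigQueryR)

  # Copy the tables of one dataset into another (hypothetical IDs)
  bqr_copy_dataset(
    source_datasetid      = "my_dataset",      # assumed argument name
    destination_datasetid = "my_dataset_copy"  # assumed argument name
  )
  ```
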
- Add `Table` and `bqr_update_table`
- Support uploading nested lists via `toJSON` (see the sketch below)
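
  A minimal sketch, assuming `bqr_upload_data()` serialises list-columns with `jsonlite::toJSON()` when the source format is newline-delimited JSON (the `sourceFormat` argument and the IDs are assumptions):

  ```r
  library(bigQueryR)

  df <- data.frame(id = 1:2)
  df$tags <- list(c("a", "b"), "c")  # a nested list-column

  bqr_upload_data(
    projectId    = "my-project",  # placeholder
    datasetId    = "my_dataset",  # placeholder
    tableId      = "my_table",
    upload_data  = df,
    sourceFormat = "NEWLINE_DELIMITED_JSON"  # rows serialised via toJSON
  )
  ```
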
- Add `writeDisposition` to table loads
- Allow creation of empty tables
- Support supplying SQL via a file ending with `.sql` for `bqr_query()` (see the sketch below)
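
  A minimal sketch, assuming the file path is passed where the query string would normally go (the project and dataset IDs are placeholders):

  ```r
  library(bigQueryR)

  # my_query.sql contains e.g. "SELECT * FROM my_table LIMIT 10"
  results <- bqr_query(
    projectId = "my-project",
    datasetId = "my_dataset",
    query     = "my_query.sql"  # read from file because it ends in .sql
  )
  ```
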
- Update to new `googleAuthR > 1.1.1`
- Support `nullMarker`, `maxBadRecords`, `fieldDelimiter` in upload jobs
- Support BigQuery type `DATE` for R class `Date` data.frame columns (BigQuery type `TIMESTAMP` still default for `POSIXct` columns) (#48)
- Allow custom user schema for uploads of data.frames (#48) (see the sketch below)
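
  A minimal sketch combining a custom schema with the new upload options (the schema format and the `schema` argument name are assumptions based on the entries above):

  ```r
  library(bigQueryR)

  my_data <- data.frame(
    id         = 1:2,
    created_on = as.Date(c("2024-01-01", "2024-01-02")),
    note       = c("first", "second")
  )

  # Hypothetical schema: a list of BigQuery field definitions
  my_schema <- list(
    list(name = "id",         type = "INTEGER"),
    list(name = "created_on", type = "DATE"),
    list(name = "note",       type = "STRING")
  )

  bqr_upload_data(
    projectId      = "my-project",  # placeholder
    datasetId      = "my_dataset",  # placeholder
    tableId        = "my_table",
    upload_data    = my_data,
    schema         = my_schema,     # assumed argument name
    nullMarker     = "NA",          # treat "NA" strings as NULL
    maxBadRecords  = 10,            # tolerate up to 10 bad rows
    fieldDelimiter = "|"            # non-default delimiter
  )
  ```
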
- Rename misnamed global functions from `bq_` prefix to `bqr_` prefix
- Add `allowJaggedRows` and `allowQuotedNewlines` options to upload via `bqr_upload_data()`
- `bqr_get_job` now accepts a job object as well as the jobId
- Fix bug with `bqr_upload_data` where `autodetect=TRUE` didn't work with `gcs://` loads from Cloud Storage (see the sketch below)
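
  A minimal sketch of a Cloud Storage load with schema auto-detection (passing a Cloud Storage URI as `upload_data` is an assumption based on this entry; the URI and IDs are placeholders):

  ```r
  library(bigQueryR)

  bqr_upload_data(
    projectId    = "my-project",
    datasetId    = "my_dataset",
    tableId      = "my_table",
    upload_data  = "gs://my-bucket/my-file.json",  # load from Cloud Storage
    sourceFormat = "NEWLINE_DELIMITED_JSON",
    autodetect   = TRUE                            # infer the schema
  )
  ```
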
- Fix bug with `bqr_query()` that sometimes caused a 404 error
- Move to new batch endpoint (#41)
- Fix asynch job failure if user previously called `set.seed()` (#37)
- Skip tests on CRAN that were causing errors
- Fix warning in scope check (#40)
- Add support for realtime queries via `useQueryCache = FALSE`
- Add support for standard SQL (#21) (see the sketch below)
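
  A minimal sketch combining both query options (`useQueryCache` is named above; the `useLegacySql` argument name is an assumption):

  ```r
  library(bigQueryR)

  results <- bqr_query(
    projectId     = "my-project",  # placeholder
    datasetId     = "my_dataset",  # placeholder
    query         = "SELECT name FROM `my-project.my_dataset.my_table`",
    useLegacySql  = FALSE,         # standard SQL (assumed argument name)
    useQueryCache = FALSE          # realtime, uncached results
  )
  ```
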
- Add support for hms/timestamp class uploads (#27)
- Add support for partitioned tables (#28)
- Fix bug that only returned one row for single column queries (#31 - thanks Rob)
- Allow loading of data from Google Cloud Storage to BigQuery for large files
- No error if deleting a non-existent table (#26)
- Add auto authentication if environment var `BQ_AUTH_FILE` is set to the location of an auth file
- Add default project if environment var `BQ_DEFAULT_PROJECT_ID` is set to a project-id
- Add default dataset if environment var `BQ_DEFAULT_DATASET` is set to a dataset-id (see the sketch below)
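
  A minimal sketch using the three environment variables named above (the values are placeholders); set them before the package loads, e.g. in an `.Renviron` file:

  ```r
  # In practice these would live in .Renviron rather than a script
  Sys.setenv(
    BQ_AUTH_FILE          = "/path/to/auth.json",
    BQ_DEFAULT_PROJECT_ID = "my-project",
    BQ_DEFAULT_DATASET    = "my_dataset"
  )
  library(bigQueryR)  # picks up auth, project and dataset automatically
  ```
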
- Add auto paging through table lists in `bqr_list_tables()` (#29)
- Make it clearer when jobs resulted in errors in the job print methods
- Migrate to using `googleCloudStorageR` for Cloud Storage operations
- Set default authentication scope to `https://www.googleapis.com/auth/cloud-platform`
- Add unit tests
- Table uploads will now correctly report errors
- More user feedback on BigQuery jobs when running
- Allow upload of data.frames asynchronously
- Allow auto-detection of schema for uploads
- Download asynch queries straight to disk via `googleCloudStorageR` (see the sketch below)
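
  A minimal sketch of the async query-extract-download flow; the changelog only states that async queries can be downloaded straight to disk via `googleCloudStorageR`, so the function arguments and IDs below are assumptions:

  ```r
  library(bigQueryR)

  # Run the query asynchronously into a destination table
  job <- bqr_query_asynch(
    projectId          = "my-project",  # placeholder
    datasetId          = "my_dataset",  # placeholder
    query              = "SELECT * FROM my_table",
    destinationTableId = "my_results"
  )
  bqr_wait_for_job(job)  # block until the query job finishes

  # Extract the results to Cloud Storage, then download to disk
  extract_job <- bqr_extract_data(
    projectId          = "my-project",
    datasetId          = "my_dataset",
    tableId            = "my_results",
    cloudStorageBucket = "my-bucket"    # placeholder bucket
  )
  bqr_wait_for_job(extract_job)
  bqr_download_extract(extract_job, filename = "results.csv")
  ```
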
- Added a `NEWS.md` file to track changes to the package.
- Initial release