Releases: dolthub/dolt
0.22.2
Merged PRs
- 1059: pass in-memory gc gen when we conjoin
On the conjoin path, we're not passing the "garbage collection generation" when we update the manifest. NomsBlockStore interprets this as the conjoin having been preempted by an out-of-process write and blocks the write.
- 1052: Vinai/dolt commit author no config
Add a bats test that models the following behavior:
- Unsets the user name and email config
- Makes a SQL change
- Adds a commit with --author
0.22.1
Merged PRs
- 1045: Consolidated benchmark directory
- 1043: Vinai/1034 add author info
This adds the --author flag to dolt commit.
- 1042: Vinai/clean up tags
Cleans up some of the comments left on #1041.
- 1041: Vinai/1023 remove tag info
Fixes #1022 and #1023.
- 1040: go/libraries/doltcore/remotestorage: Add hedged download requests to remotestorage/chunk_store.
- 1039: go/libraries/doltcore/remotestorage: Refactor urlAndRanges a bit.
- 1038: go/libraries/doltcore/remotestorage: Simplify concurrentExec implementation with errgroup.
- 1037: proto: Add StreamDownloadLocations to ChunkStoreService.
- 1036: go/cmd/dolt/commands: added write queries and ancestor commit to filter-branch
- 1033: Temporary parallelism implementation on indexes
- 1031: reset --hard
- 1029: Added dolt_version()
As version() is used to emulate the target MySQL version, I've added dolt_version() so that one may specifically query the Dolt version.
- 1026: Increased the default sql server timeout to 8 hours
- 1025: go/libraries/doltcore/remotestorage: chunk_store.go: Small improvements to GetDownloadLocations batch size and HTTP GET error logging.
- 1024: dolt filter-branch
- 1022: Skipping tests broken by recent changes to info schema (EXTRA)
- 1021: != operator now uses indexes
Closed Issues
0.22.0
We are excited to announce the minor version release of Dolt 0.22.0.
SQL Tables
We continue to expand the SQL tables that surface information about the commit graph. In this release we added:
dolt_commits
dolt_commit_ancestors
dolt_commit_diffs_<table>
SQL
We added support for prepared statements to our SQL server.
Merged PRs
- 1019: Fix dolt ls --all
- 1018: mysql-client-tests: Add some simple client connector tests for prepared statements.
- 1016: Rewrote the README
- 1015: go/go.mod: Bump go-mysql-server; support prepared statements.
- 1014: Added bats test for index merging from branch without index
- 1013: dolt_commits and dolt_commit_ancestors tables
- 1012: added reset_hard() sql function
- 1011: Bh/commit diff
- 1009: Richer commit message for Dolt Homebrew bump
- 1008: Mergeable Indexes Pt. 2
Tests for mergeable indexes.
- 1002: s/liquidata-inc/dolthub/ for ishell and mmap-go
- 1001: NewCreatingWriter breaks dolthubapi with recent changes
There might be a better fix for this, but dolthubapi uses NewCreatingWriter, which breaks with Andy's recent changes (it's being used in dolthubapi here).
- 233: Reorder Master
- 232: Indexes search for exact match rather than submatches
- 231: added 'auto_increment' to EXTRA field for DESCRIBE table;
- 229: Add support for prepared statements.
Closed Issues
0.21.4
0.21.2
Merged PRs
- 995: support for ALTER TABLE AUTO_INCREMENT
- 994: Updated namespace for sqllogictest
- 993: Added WSL notice to README
- 990: mysql auto increment semantics
- 989: Fix a few docs typos
- 988: {bats, go}: Some fixes to InferSchema and add bats test
- 987: Turbine Import Fix
- 985: go/**/*.go: Update copyright headers for company name change.
- 982: go/libraries/utils/async: Have ActionExecutor use sync.WaitGroup.
- 981: Attempt to clean up error signaling in diff summary.
- 980: In prettyPrint, defer closing the iterator before doing anything else
We were missing close() when an UPDATE or INSERT etc. had an error during cursor iteration, leaving a server process running. Also saves the SQL history file before executing the query, so it gets saved even if the user interrupts execution.
- 976: /.github/workflows/ci-go-tests.yaml: run go tests only when go/ changes
I think this might be a good addition... it will only run go tests when there are go changes.
- 975: Extract some import logic to be used in dolthubapi
In reference to this comment https://github.com/dolthub/ld/pull/5262#discussion_r514465176
I had some duplicate logic in dolthubapi for the import table API. I extracted some logic so that I can use InferSchema and MoveDataToRoot to reduce some of the duplication.
- 974: Skipped two newly added test queries that don't work in dolt yet
- 973: Support for COM_LIST_FIELDS, fixed SHOW INDEXES
- 972: Update README.md
Removed errant Liquidata reference.
- 971: Added GitHub workflow tests for race conditions
Will fail until #967 is merged into master; however, the workflow only works when the PR is based against master. Therefore this PR does not target the aforementioned PR's branch.
- 970: Memory fix for CREATE INDEX
Used a pre-existing 16 million row repo to test CREATE INDEX memory usage.
Before:
72.47GB RAM Usage
18min 48sec
After:
1.88GB RAM Usage
2min 2sec
Copied the same strategy as used in table_editor.go to periodically flush the contents once some arbitrary number of operations have been performed.
- 967: go: Make all tests pass under -race.
- 966: go/store/types/edits: Rework AsyncSortedEdits to use errgroup, and a transient goroutine for each work item.
- 965: dolt merge --no-ff
- 225: Andy/mysql auto increment
- 224: Zachmu/xx
Use xxhash everywhere, and standardize the construction of hash keys.
- 223: Zachmu/in subquery
Implemented hashed lookups for IN (SELECT ...) expressions. This is about 5x faster than using indexed lookups into the subquery table in tests.
In a followup I'm going to replace the existing CRC64 hashing with xxhash everywhere it's used, so we're back to a single hash function.
- 221: Fixed bug in delete and update caused by indexes being pushed down to tables
- 220: Support for COM_LIST_FIELDS, fixed SHOW INDEXES
- 219: Zachmu/turbine perf
- Do pushdown analysis within subqueries
- Push index lookups down to tables in subqueries
- 218: Fix unit tests to run with -race.
- 217: validate auto_increment on in-line and out-of-line PR defs
Closed Issues
0.21.1
We are excited to announce the release of Dolt 0.21.1, a patch release with functionality and performance improvements.
Benchmarks
A significant new aspect of the Dolt release process will be providing SQL benchmarks. You can read a blog about our approach to benchmarking using sysbench here, and you can find the benchmarking tools here. By way of example, the benchmarks for this release were created with the following command:
./run_benchmarks.sh bulk_insert oscarbatori v0.21.0 v0.22.1
This produced a result that we host on DoltHub.
Merged PRs
- 957: go/store/{datas,util/tempfiles}: Fix some races in map writes. One affects clones, one affects only tests.
- 953: create auto_increment tables with out-of-line PR defs
- 952: go/libraries/doltcore/sqle: Add support for UPDATE and DELETE using table indexes.
- 949: auto increment
- 947: don't drop column values on column drop
- 946: go/cmd/dolt: commands/sql: Small improvement to only call rowIter.Close() once on sql results iterators.
- 945: Use docker-compose for orchestrating benchmarking
- 944: go/store/types: value_store: Optimize GC to work in parallel and use less memory.
- 942: feature gating
- 941: Upgraded to latest go-mysql-server and re-enabled query plan tests
- 939: Added new indexes overwriting auto-generated indexes
- 938: go/store/{nbs,chunks}: Convert some core methods to provide results in callbacks. Convert some functions to use errgroup.
- 937: Update README.md with the latest dolt commands
- 934: Add go routine to clone
I parallelized the table file writing process by using go routines. Specifically, I made use of the "golang.org/x/sync/errgroup" package which allows for convenient error management across a waitgroup.
A couple of benchmarks I tested this on were:
- Dolt-benchmarks-test: No real difference in speed
- Coronavirus: Original: ~30sec. Current: 15sec
- Tatoeba Sentence Translation: Original: ~17mins. Current: 10mins
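The fan-out-with-error-handling pattern PR 934 describes (errgroup over concurrent table-file writes) can be sketched roughly like this. The real code uses golang.org/x/sync/errgroup; this stdlib-only sketch mimics its "first error wins" behavior, and writeTableFile is a hypothetical stand-in for the actual write:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// writeTableFile stands in for writing one table file during clone.
func writeTableFile(name string) error {
	if name == "" {
		return errors.New("empty table file name")
	}
	return nil
}

// writeAll writes all table files concurrently and returns the first
// error encountered, mimicking errgroup.Group with stdlib primitives.
func writeAll(names []string) error {
	var wg sync.WaitGroup
	var once sync.Once
	var firstErr error
	for _, n := range names {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			if err := writeTableFile(n); err != nil {
				once.Do(func() { firstErr = err }) // record only the first error
			}
		}(n)
	}
	wg.Wait()
	return firstErr
}

func main() {
	err := writeAll([]string{"tf1", "tf2", "tf3"})
	fmt.Println(err == nil) // true
}
```

errgroup.Group collapses the WaitGroup, Once, and error variable into Group.Go and Group.Wait, which is why it is convenient for this kind of parallel clone work.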
- 933: /go/libraries/doltcore/diff: Ignore NULLs in cell-wise diff
Fix for #899.
The from-root in this repo has NULL values written to the map, which causes erroneous diffs:
https://www.dolthub.com/repositories/dolthub/us-supreme-court-cases/compare/master/hb502v6tf3uj43ijfhot6dopmgdm1muk
- 932: /go/cmd/dolt/commands: Help Text Fix
- 216: Updated sql.MergeableIndexLookup interface
- 215: memory: *_index.go: Construct sql equality evaluations with accurate types in the literals.
- 214: auto increment
- 213: triggers bugfix
Fixed bug in insert triggers, which couldn't handle out-of-order column insertions.
Fixes #950.
- 212: sql/analyzer: pushdown.go: Allow pushdown on Update, RowUpdateAccumulator and DeleteFrom plan nodes.
- 211: join bugs
- 210: sql/plan: {update,insert,update,process}.go: Fix some potential issues with context lifecycle and reuse.
- insert, update, delete: Only call underlying table editors with our captured context once when we are Close(). Return a nil error after that.
- process: Change to only call onDone when the rowTrackingIter is Closed.
- process: Change to call childIter.Close() before onDone is called. Child iterators have a right to Close() before the context in which they are running is canceled.
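The "Close once, then return nil" behavior described for PR 210 can be illustrated with a minimal sketch. The type name tableEditorIter is hypothetical, not Dolt's actual iterator:

```go
package main

import "fmt"

// tableEditorIter illustrates the Close-once pattern: the underlying
// editor is touched exactly once, and any subsequent Close() call
// returns nil rather than flushing again with a possibly-dead context.
type tableEditorIter struct {
	closed  bool
	flushes int
}

func (it *tableEditorIter) Close() error {
	if it.closed {
		return nil // already closed: do not call the editor again
	}
	it.closed = true
	it.flushes++ // stands in for flushing edits with the captured context
	return nil
}

func main() {
	it := &tableEditorIter{}
	_ = it.Close()
	_ = it.Close() // second Close is a no-op
	fmt.Println(it.flushes) // 1
}
```

Guarding the close this way keeps double-Close from re-running side effects after the surrounding context has been canceled.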
- 208: Create UNIQUE index if present in column definition
- 207: Pushdown and plan decoration
Two major changes:
- Changes to pushdown operation, to push table predicates below join nodes and to fix many bugs and deficiencies. Also large refactoring.
- Added DecoratedNodes to query plans to illustrate when indexes are being used to access tables outside the context of a join.
0.21.0
We are excited to announce the release of Dolt 0.21.0. This release contains a host of exciting features, as well as our usual blend of bug fixes and performance improvements.
Squash merge
As a result of our own internal data collaboration projects, we realized that a squash command for condensing change sets was an essential tool for collaborators. This is now in Dolt.
NFS Mounted Drives
A user highlighted that Dolt didn't work with NFS mounted drives due to the way it was interacting with the filesystem. We have now fixed this.
Garbage Collection
We now have a dolt gc command for cleaning up unreferenced data. This was requested by several users as a space-saving mechanism in production settings.
Performance Improvements
We continue to aggressively pursue performance improvements, most notably a huge improvement in full table scans.
sysbench tooling
As we detailed in a blog post yesterday, we have created tooling to provide our development team and contributors with a simple way to measure SQL performance. For example, to compare an arbitrary commit to the current working set (to test whether changes introduce expected performance benefits):
$ ./run_benchmarks.sh bulk_insert <username> 19f9e571d033374ceae2ad5c277b9cfe905cdd66
This will build Dolt at the appropriate commits, spin up containers with sysbench, and execute the benchmarks.
Documentation Fixes
An open source contributor provided several fixes to our CLI documentation, which we have gratefully merged.
GCP Remotes
We have fixed Google Cloud Platform remotes motivated by a bug report from a user experimenting with Dolt.
Merged PRs
- 930: Bump go-mysql-server
- 929: store/types: value_store.go: GC implementation uses errgroup instead of atomicerr.
- 928: gc chunks
Implements garbage collection by traversing a Database from its root chunk and copying all reachable chunks to a new set of NBS tables.
While "garbage collection generation" will protect the NBS from corruption by out-of-process writers, GC is not currently thread-safe for concurrent use in-process. Getting to online GC will require work around protecting in-progress writes that are not yet reachable from the root chunk.
- 927: /.github/workflows/ci-bats-tests.yaml: skip aws tests if no secrets found
- 925: benchmark tools
- 923: doc corrections
Fixed some typos (I think 😊).
- 922: go/util/sremotesrv: grpc.go: Echo the client's NbsVersion in GetRepoMetadata.
- 921: fix gcp remotes
- 920: go/go.mod: Adopt dolthub/fslock fork. Forked version uses Open(RDRW) for lock file on *nix, which works on NFS.
- 918: /.github/workflows/ci-bats-tests.yaml: remove deprecated syntax
- 917: Increase maximum SQL statement length to 100MB (initially 512K)
Signed-off-by: Zach Musgrave [email protected]
- 915: Daylon's suggestions for bheni perf PR Pt. 2
- 914: Fix for reading old dolt_schemas
- 913: squash merge
- 912: go/store/{datas,nbs}: Use application-level temp dir for byte sink chunk files with datas.Puller.
- 911: Daylon's suggestions for bheni perf PR
- 910: Adding "Garbage Collection Generation" to manifest file
This new manifest field will support NomsBlockStore garbage collection and protect against NBS corruption. Storing gcGen in the manifest will support deleting chunks from an NBS in a safe way. NBS instances that see a different gcGen than they saw when they last read the manifest will error and require clients to re-attempt their write.
NBS will now have three forms of write errors (not including IO errors or other kinds of unexpected errors):
- nbs.errOptimisticLockFailedTables: Another writer landed a manifest update since the last time we read the manifest. The root chunk is unchanged and the set of chunks referenced in the manifest is either the same or has strictly grown. Therefore the NBS can handle this by rebasing on the new set of tables in the manifest and re-attempting to add the same set of novel tables.
- nbs.errOptimisticLockFailedRoot: Another writer landed a manifest update that includes a new root chunk. The set of chunks referenced in the manifest is either the same or has strictly grown, but it is not known which chunks are reachable from the new root chunk. The NBS has to pass this value to the client and let them decide. If the client is a datas.database it will attempt to rebase, read the head of the dataset it is committing to, and execute its mergePolicy (Dolt passes a noop mergePolicy).
- chunks.ErrGCGenerationExpired: This is similar to a moved root chunk, but with no guarantees about what chunks remain in the ChunkStore. Any information from CS.Has(ctx, chunk) is now stale. Writers must rewrite all data to the chunkstore.
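The gcGen check described in PR 910 can be illustrated with a minimal sketch. The manifest struct and checkUpdate function here are simplified stand-ins for the nbs internals the PR names, not the real implementation:

```go
package main

import (
	"errors"
	"fmt"
)

var errGCGenerationExpired = errors.New("gc generation expired")

// manifest is a simplified stand-in for the NBS manifest contents.
type manifest struct {
	root  string // current root chunk hash
	gcGen string // garbage collection generation
}

// checkUpdate mimics the write-time manifest check: if gcGen changed
// since we last read the manifest, any cached Has() information is
// stale and the writer must rewrite its data to the chunk store.
func checkUpdate(last, current manifest) error {
	if last.gcGen != current.gcGen {
		return errGCGenerationExpired
	}
	// A changed root alone is an optimistic-lock failure that can be
	// handled by rebasing; with the same gcGen, the set of referenced
	// chunks can only have stayed the same or grown.
	return nil
}

func main() {
	last := manifest{root: "r1", gcGen: "g1"}
	fmt.Println(checkUpdate(last, manifest{root: "r2", gcGen: "g1"})) // <nil>
	fmt.Println(checkUpdate(last, manifest{root: "r2", gcGen: "g2"})) // gc generation expired
}
```

The key distinction is that a root move preserves the "chunks only grow" invariant and is retryable in place, while a gcGen change invalidates every assumption about which chunks still exist.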
- 909: use tr to lowercase output instead of {output,,}
Lowercasing via parameter expansion ${output,,} is only supported on Bash 4+. I switched to using tr so I could run the tests locally.
- 205: Implemented drop trigger
As discussed, we disallow dropping any triggers that are referenced in other triggers.
- 204: Added trigger statements
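The portability issue PR 909 above works around can be shown directly: ${output,,} requires Bash 4+, while piping through tr works in any POSIX shell (macOS still ships Bash 3.2 by default):

```shell
#!/bin/sh
# Lowercase a string portably with tr instead of Bash 4's ${var,,}.
output="Query OK, 2 rows affected"
lowered=$(printf '%s' "$output" | tr '[:upper:]' '[:lower:]')
echo "$lowered"   # query ok, 2 rows affected
```

Using the [:upper:] and [:lower:] character classes (rather than A-Z/a-z ranges) also keeps the translation correct under non-C locales.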
0.20.2
We are excited to announce the release of Dolt 0.20.2, including a minor version bump as we introduce a new feature: SQL triggers.
SQL Triggers
SQL triggers are SQL snippets that can be executed automatically every time a row is inserted, updated, or deleted. Here is a simple example taken from the blog post announcing the feature:
$ dolt sql
> create table a (x int primary key);
> create table b (y int primary key);
> create trigger adds_one before insert on a for each row set new.x = new.x + 1;
> insert into a values (1), (3);
Query OK, 2 rows affected
> select * from a;
+---+
| x |
+---+
| 2 |
| 4 |
+---+
Any legal SQL statement can be executed as a trigger; here we just defined a simple increment.
Merged PRs
- 908: Added comments for clarity
- 907: Release
- 906: Fixed conflict resolution and additional trigger tests
- 905: Updated to latest go-mysql-server
- 904: Added trigger functionality to Dolt
- 900: Reference new org name
- 897: Fixed CREATE LIKE multi-db
Fixes #654.
- 896: Moved everything over to SHOW CREATE TABLE and fixed diff panic
- 894: Fixed UNIQUE NULL handling to match MySQL
- 892: Andy/gc table files
- 890: Working Ruby ruby/mysql test
Not to be confused with mysql/ruby, which uses the MySQL C API.
- 889: Release
- 202: Zachmu/triggers 5
Added additional validation for trigger creation and execution:
- Use of NEW / OLD row references
- Circular trigger chains
- 200: Zachmu/triggers 4
Support for DELETE and UPDATE triggers.
- 199: Reference new org name
- 198: Added proper support for SET NAMES, and also turned off strict checking for setting unknown system variables.
- 197: Zachmu/user vars
User vars now working. Can stomp on a system var of the same name, as before my last batch of changes.
- 196: Allow CREATE TABLE LIKE to reference different databases
- 195: Zachmu/triggers 3
This gets SET new.x = foo expressions working for triggers. This required totally rewriting how we were handling setting system variables as well, since these two kinds of statements are equivalent in the parser.
Also deletes the convert_dates analyzer rule, which impacts 0 engine tests.
- 194: No longer return span iters from most nodes by default.
- 193: Implemented CREATE TABLE LIKE and updated information_schema
Tests will come in a separate PR
0.19.2
Merged PRs
- 886: no parallelism if GOMAXPROCS == 1
- 885: cpp mysql client tests
- 883: mysql client tests install golang
- 878: Go MySQL client test
- 877: validate ref strings when resolving ref specs
Fix for #874.
- 875: go: Changes to support some commit walks used in Dolthub diffs when the commits come from different repositories.
- 874: Added skip bats test for ref spec panic on diff
- 873: Fixed ActionExecutor causing duplicate key error loop
- 870: Fixed bug with diffing column defaults
- 869: update vitess
- 868: Added perl mysql client tests
- 867: Added Python SQLAlchemy test to mysql-client-tests
- 190: Harrison pr
https://github.com/liquidata-inc/go-mysql-server/pull/189/files and a couple of fixes.
- 188: triggers 2
Insert triggers working for the following cases:
- Insert some rows
- Delete some rows
- Update some rows
Missing, needs to be added:
- set new.x = blah as part of a BEFORE INSERT trigger. Need to rewrite the SET handling parser logic for that.
- Error testing for bad triggers (like inserting on the same table the trigger is defined on)
As part of this, I rewrote the execution logic for Update, Delete, and Insert entirely.
0.19.1
Merged PRs
- 862: /go/{go.mod, go.sum}: Update go.mod with go-mysql-server@master
- 861: dotnet mysql client test
- 860: Fixed column renaming breaking default values
- 859: mysql client test c
- 187: Additions to utc_timestamp and timediff
- 186: fix collations
- 185: Fixed bug with column renames breaking default values
- 184: /sql/expression/function/{date.go, date_test.go, registery.go}: Add utc_timestamp
- 183: Fixes for bugs found during integration of column defaults
- 180: triggers
- 178: Column Defaults Implementation Part 2
Here is a comprehensive set of tests for default logic. Practically everything that was added in dolthub/go-mysql-server#174 is covered here, including some edge cases. In addition, the memory table implementation was broken/insufficient in a few ways, so I patched those up.
The biggest change besides the tests is the additional pass when projecting expressions. This is required in order for defaults that reference other columns (especially those that come after) to be able to pull the correct value. This was something I noticed only after I wrote a test that wasn't behaving as expected (compared to the MySQL client). In fact, all of the changes outside of enginetests were due to fixing bugs that were found from testing.
- 176: Add import statements to readme example
The example in the readme has no import statements, so it's unclear to someone new to the project. So I added some import statements!
- 174: Column Defaults Implementation Part 1
This is missing basically all of the new tests, which will come in a separate PR. Proper expressions -> string behavior will also come in a separate PR. Besides that, this is pretty much most of it barring additional bug fixes. All existing tests (some of which use defaults already) pass.
For integrators, they'll make use of the new engine.ApplyDefaults method.