feat (client): client-side support for int8 #665
Conversation
VAX-1325 DAL: add support for int8 type
NOTE: it is only possible to implement a mapping between 64-bit integers in SQLite and …
Force-pushed from 9a070cd to df2a1ab
Stellar job! ⭐ 💪
My only piece of feedback is that I'm not seeing any tests that verify the validation works for bigints, i.e. trying to write a value that exceeds the range of what a 64-bit integer can hold.
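For context, such a test could look roughly like the sketch below; the `validateInt8` helper and its error are assumptions made for illustration, not the DAL's actual API.

```ts
// Sketch only: `validateInt8` and its error message are hypothetical,
// not the actual validation code in the client.
const INT8_MIN = -9223372036854775808n
const INT8_MAX = 9223372036854775807n

function validateInt8(value: bigint): void {
  if (value < INT8_MIN || value > INT8_MAX) {
    throw new RangeError(`value ${value} is out of range for INT8`)
  }
}

// A value within range should be accepted.
validateInt8(INT8_MAX)

// A value one past the maximum should be rejected.
try {
  validateInt8(INT8_MAX + 1n)
  console.error('expected a RangeError for an out-of-range value')
} catch (e) {
  console.log('out-of-range value rejected:', (e as Error).message)
}
```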
```ts
// the DAL will convert the string into a BigInt in the `fromSqlite` function from `../conversions/sqlite.ts`.
const pgType = this._tableDescription.fields.get(field)
if (pgType === PgBasicType.PG_INT8) {
  return query.returning(`cast(${field} as TEXT) AS ${field}`)
```
This string should be returned from a function if it's used in multiple places like this. If, say, we add quoting for these field identifiers later, it'll be easier to make the change once and not worry about missing some occurrences that will be left unquoted.
You want to introduce a local `getField()` function that returns the field? Something like:
```ts
private returnAllFields<T extends QueryBuilder & ReturningMixin>(
  query: T
): T {
  const getField = (field: string) => field
  return this._fields.reduce((query, field) => {
    // if the field is of type BigInt, cast the result to TEXT
    // because not all adapters deal well with BigInts.
    // The DAL will convert the string into a BigInt in the `fromSqlite` function from `../conversions/sqlite.ts`.
    const pgType = this._tableDescription.fields.get(field)
    if (pgType === PgBasicType.PG_INT8) {
      return query.returning(`cast(${getField(field)} as TEXT) AS ${getField(field)}`)
    }
    return query.returning(getField(field))
  }, query)
}
```
I don't really see the value of adding this now, as we merely use the field without any changes.
I do see the value once we do more than just use the field name, e.g. add quoting, but right now it is literally the identity function.
I was only referring to the act of interpolating a variable into a string which is then passed as SQL for execution by SQLite. Depending on the context, this can lead to SQL injection attacks or merely client-side fatal errors.
What I'm saying is that we should properly quote all identifiers and values when interpolating them into strings of SQL. It doesn't take much effort for a particular field name to break our client and as a consequence the app that depends on it. Here's a quick example:
```
$ sqlite3
SQLite version 3.36.0 2021-06-18 18:36:39
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table foo(cast integer);
sqlite> insert into foo values (1), (2), (3);
sqlite> select * from foo;
1
2
3
sqlite> select cast(cast as text) from foo;
Error: near "as": syntax error
sqlite> select cast("cast" as text) from foo;
1
2
3
```
Perhaps the value in the `field` variable is already quoted? In that case, disregard this comment.
Indeed, the arguments we pass to the function are already quoted, e.g.:
castIfNeeded(col, `"${col}"`)
```ts
const tpes = colTypes[col]
const sqliteType = tpes.sqliteType
const pgType = tpes.pgType
if (sqliteType === 'REAL' || pgType === 'INT8' || pgType === 'BIGINT') {
  return `cast(${targettedCol} as TEXT)`
```
Seeing this for the first time now, it's surprising we're not using a disciplined approach for interpolating fragments into SQL strings. I know it's out of scope for this PR, but I wanted to hear your opinion on it: shouldn't we audit all table column references that are interpolated into strings like this and replace them with a centralized function that can quote all identifiers uniformly?
We could indeed change all the places where we quote explicitly to call a `quote` function that does the quoting. That improves maintainability if at some later point in time we want to change the way we quote identifiers, but I don't think we will ever change the quoting, as it seems the only way to quote identifiers is to use double quotes.
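For reference, the kind of centralized helper being discussed could be as small as the sketch below; the name `quoteIdentifier` is an assumption for illustration, not existing code in the client.

```ts
// Sketch only: a single place that quotes SQL identifiers.
// It wraps the identifier in double quotes and escapes any embedded double
// quotes by doubling them, which is the standard SQL (and SQLite) rule.
function quoteIdentifier(identifier: string): string {
  return `"${identifier.replace(/"/g, '""')}"`
}

// quoteIdentifier('cast')   -> '"cast"'   (reserved words become safe)
// quoteIdentifier('we"ird') -> '"we""ird"'
```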
Force-pushed from df2a1ab to 4813b32
Force-pushed from 4813b32 to 144ee6c
Thanks, I added a unit test for values that are out of range :-)
…s and the merge logic (#699)

In addition to the tests, the following logic is removed, because at the oplog level the Record values cannot be BigInt, as the oplog trigger serializes those values as strings. It is the `deserialize` function that checks the Postgres type and turns the value into the correct JS object, so leaving the type check here might cause confusion in the future.

```js
if (typeof value === 'bigint') {
  return value.toString()
}
```
This PR adds client-side support for int8 aka bigint.
Supporting BigInts is tricky because of inconsistencies in the SQLite drivers.
For example, better-sqlite3 requires BigInts to be explicitly enabled, but then all integers are returned as BigInts because SQLite does not distinguish between them (they are all just 64-bit integers). On the other hand, wa-sqlite returns a regular JS number if the integer fits, otherwise it returns a BigInt.
To avoid these problems, we always cast BigInt to text when reading from the DB and then let the DAL turn it into a BigInt.
I.e., when the Postgres type of a column "foo" is bigint, we read as follows: `SELECT cast(foo as TEXT) AS foo FROM table`. Similarly, we cast returned values: `INSERT INTO table VALUES (9223372036854775807) RETURNING cast(foo as TEXT) AS foo`.
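The client-side half of this can be sketched as follows; the function name `fromSqliteValue` is an assumption for illustration, not the actual `fromSqlite` implementation.

```ts
// Sketch only: once INT8/BIGINT columns are cast to TEXT in the query,
// the DAL can convert the returned string back into a BigInt with no
// precision loss, guided by the column's Postgres type.
function fromSqliteValue(pgType: string, value: unknown): unknown {
  if ((pgType === 'INT8' || pgType === 'BIGINT') && typeof value === 'string') {
    return BigInt(value) // exact: '9223372036854775807' -> 9223372036854775807n
  }
  return value
}
```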
A similar problem occurs in the oplog triggers. These triggers insert an oplog entry for every insert/update/delete. This oplog entry contains the old and new rows as JSON objects. However, JSON does not support BigInts.
We also can't write it as a regular number, because when the JSON is parsed the value is read back as a number and rounding errors occur.
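Both failure modes are easy to reproduce in Node (illustration only, not output from this PR):

```ts
// JSON.stringify cannot serialize BigInt values at all
// (the error message below is V8's wording):
try {
  JSON.stringify({ foo: 9223372036854775807n })
} catch (e) {
  console.log((e as Error).message) // "Do not know how to serialize a BigInt"
}

// Writing the value as a plain JSON number loses precision when parsed back,
// because JS numbers are 64-bit floats:
console.log(JSON.parse('{"foo": 9223372036854775807}').foo)
// -> 9223372036854776000
```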
Hence, the triggers also need to cast BigInts to text, and the deserialisation function for oplog entries (`deserialiseRow`) needs to correctly deserialise them as BigInts (when the column type is int8/bigint).