SQL Tagged Templates (sqltt) lets you easily manage SQL queries from JavaScript (or even non-JavaScript) projects, taking advantage of the ES6+ Tagged Templates feature.
It is not, and is not meant to be, an ORM: it's just a template system intended to keep queries readable and easy to maintain while enforcing reusability and helping us keep them organized.
It also lets us handle syntax mismatches between database engines so that we only need to keep and maintain a single version of each query, even if it needs to be executed on different RDBMSs.
(More…)
The following are a few examples to understand, at a glance, what SQLTT does and how powerful it is.
📌 Unless stated otherwise, all the following examples are for PostgreSQL engines. In either case, if no engine is specified, a "generic" one is used (which nowadays is exactly the same as the PostgreSQL one).
The following example shows what an SQLTT template looks like and how we can use that template to generate actual SQL suitable for multiple database engines, or even execute it directly through a CLI interpreter.
$ cat personnel.sql.js
const sqltt = require("sqltt");
const commonFields = ["dptId", "name", "sex", "birth"];
const tpl = {};
tpl.list = new sqltt(`
select id, dptName, name
from personnel
join depts using(dptId)
`);
tpl.listByDept = new sqltt($=>$`
${tpl.list} ${$.REM("Same as ${$.include(tpl.list)}")}
where dptId = ${"dptId"} ${$.REM("Same as ${$.arg('dptId')}")}
`);
tpl.show = new sqltt($=>$`
select id, dptName, name, birth, ctime
from personnel
join depts using(dptId)
where id = ${"id"}
`);
tpl.insert = new sqltt($=>$`
insert into personnel (${$.keys(commonFields)})
values (${$.values(commonFields)})
`);
tpl.update = new sqltt($=>$`
update personnel set ${$.entries(commonFields)}
where id = ${"id"}
`);
sqltt.publish(module, tpl); // Export and make available from CLI
Since SQLTT template files are JavaScript files, you will probably get JavaScript syntax highlighting in your preferred editor by default. But you would probably prefer SQL highlighting instead or, even better, both wherever they apply.
The most obvious solution is to use the .sql extension instead (or to use a modeline specifying a different file type in editors that support it, such as vim):
$ mv personnel.sql.js personnel.js.sql
But this way you lose JavaScript syntax highlighting altogether.
A better solution is to keep the default JavaScript syntax highlighting and change it to SQL only for the sections where it's needed.
This is why this documentation uses the '.sql.js' extension for template files instead of '.js.sql'.
This can easily be done in vim, and probably in many other editors (if you know how to do it in any other one, please send me a reference to include in this section).
To enable block-specific syntax highlighting in vim see Different syntax highlighting within regions of a file.
tpl.someQuery = new sqltt( /* @@sql@@ */ $=>$`
-- Your query here
` /* @@/sql@@ */);
📌 You can find more complete examples in the Examples directory of this Git repository.
This specific template example provides multiple queries in a single file so, when invoked from the command line without arguments, it asks us to select a query:
$ node personnel.sql.js
Available queries:
- list: (Undocumented)
- listByDept: (Undocumented)
- show: (Undocumented)
- insert: (Undocumented)
- update: (Undocumented)
📌 If a single SQLTT template were published in that file, we would have gotten its SQL directly, just like we are about to obtain now by specifying it.
Now we can obtain the desired query just by specifying it as a parameter:
$ node personnel.sql.js list
select id, dptId, dptName, name, sex
from personnel
join depts using(dptId)
...and, of course, we can pipe it to our preferred database engine too:
$ node personnel.sql.js list | psql tiaDB
id | dptid | dptname | name | sex
----+------------+----------------+-----------+-----
1 | management | Management | Super | m
3 | oper | Operations | Filemon | m
2 | oper | Operations | Mortadelo | m
4 | adm | Administration | Ofelia | f
5 | i+d | I+D | Bacterio | m
(5 rows)
Other queries may require arguments, so we just provide them as additional command line arguments:
$ node personnel.sql.js listByDept oper
\set dptId '''oper'''
select id, dptId, dptName, name, sex
from personnel
join depts using(dptId)
where dptId = :dptId
...and, again, we can execute the obtained SQL too:
$ node personnel.sql.js listByDept oper | psql tiaDB
id | dptid | dptname | name | sex
----+-------+------------+-----------+-----
2 | oper | Operations | Mortadelo | m
3 | oper | Operations | Filemon | m
(2 rows)
In cli-mode we have no way to precisely determine the type of each argument and hence the correct quoting to use.
To overcome this, the following convention is used:
- When it is enclosed in single or double quotes, it is a character-based type (string).
  - Note that those quotes need to survive shell escaping, i.e.: "'This is a string'".
- When unquoted:
  - 'null' (case insensitive) stands for NULL.
  - 'true' (case insensitive) stands for boolean true.
  - 'false' (case insensitive) stands for boolean false.
  - A valid numeric value stands for a Number.
  - Otherwise, it is assumed to be a string, even unquoted.
Since we export our templates as SQLTT instances we just need to require and start using them.
📌 In the previous example we published our templates through the
sqltt.publish(module, tpl);
statement which, from this point of view, is exactly the same as exporting them through module.exports = tpl;
except that the latter would not have made them usable from the CLI as well.
Whether we exported/published a single SQLTT template or multiple ones, we can now use them through several methods which we call the Template API.
Example:
const personnelSQL = require('path/to/personnel.sql.js');
// Show rendered SQL of each query:
console.log (personnelSQL.list.sql());
console.log (personnelSQL.listByDept.sql());
console.log (personnelSQL.show.sql());
console.log (personnelSQL.insert.sql());
console.log (personnelSQL.update.sql());
📌 The .sql() method also accepts an optional parameter to specify the desired engine flavour. If not specified, the default one is used.
Another commonly used Template API method is .args(), which lets us convert a {key: value} pairs object into a properly sorted arguments array to feed our database query method.
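For instance, with the insert template from the first example (whose arguments come from commonFields), a call like the following would be expected to behave as sketched in the comments (illustrative output, not taken from a real run):
const personnelSQL = require('path/to/personnel.sql.js');
// Keys are matched by name against the template's arguments (dptId, name, sex, birth):
personnelSQL.insert.args({name: "Chorizez", dptId: "robbers", sex: "m", ignored: true});
// → ["robbers", "Chorizez", "m", null]
//   (the missing "birth" defaults to null; unknown keys are silently ignored)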
Example using ppooled-pg:
// Insert new item
const personnelSQL = require('path/to/personnel.sql.js');
const db = require('ppooled-pg')(/*connection data*/);
const newPerson = {
name: "Chorizez",
dptId: "robbers",
sex: "m",
// birth: "", // Unknown data will default to null
connection: { // Unused data will be ignored
ip: "10.0.1.25",
port: 80,
trusted: false,
}
};
db.queryRows(
personnelSQL.insert.sql("postgresql")
, personnelSQL.insert.args(newPerson)
).then(rows=>console.log(rows));
📌 From version 0.3.0, ppooled-pg natively supports SQL Tagged Templates, so we could simply have written:
db.queryRows(personnelSQL.insert, newPerson)
But this example is intended to work with most database libraries with minimal changes.
See the Template API section for more details on the available methods and their options.
Although they're JavaScript, we can take advantage of SQLTT templates even from programs written in other languages.
Even though those programs won't be able to directly use Template API methods such as args(), we can still use SQLTT templates to keep our queries readable and easy to maintain.
To use them from our non-JavaScript application, all we need to do is compile them using SQLTT's CLI capabilities with the _nocli engines:
$ node --engine=postgresql_nocli personnel.sql.js listByDept oper
select id, dptId, dptName, name, sex
from personnel
join depts using(dptId)
where dptId = $1
This way we can build simple compilation scripts such as the following example:
#!/bin/env sh
export SQL_ENGINE=postgresql_nocli
node sqlsrc/personnel.sql.js list > sql/personnel.list.sql
node sqlsrc/personnel.sql.js listByDept > sql/personnel.listByDept.sql
node sqlsrc/personnel.sql.js show > sql/personnel.show.sql
node sqlsrc/personnel.sql.js insert > sql/personnel.insert.sql
node sqlsrc/personnel.sql.js update > sql/personnel.update.sql
node sqlsrc/articles.sql.js list > sql/articles.list.sql
node sqlsrc/articles.sql.js find > sql/articles.find.sql
node sqlsrc/articles.sql.js show > sql/articles.show.sql
node sqlsrc/articles.sql.js insert > sql/articles.insert.sql
node sqlsrc/articles.sql.js update > sql/articles.update.sql
# ....
- ABOUT SQLTT
- FEATURES
- BASIC CONCEPTS
- SETUP AND USAGE
- TEMPLATE FORMAT
- API REFERENCE
- Advanced Features
- TODO
- Contributing
SQL is a powerful language, but most databases come with their own variations and nuances.
Even for the same database engine, the syntax used by application libraries and CLI interpreters usually differs, at least for parameterized queries.
This often forces developers to modify their queries back and forth to test them in the database CLI or, even worse, when they need to support different database engines, in which case they are usually forced to maintain completely different versions of the same query for each supported database.
ORM solutions solve that problem at the cost of generating suboptimal queries and blocking the most powerful SQL and/or database-specific features.
SQLTT lets us maintain a single version of each query while preserving the whole power of actual SQL, while also providing many advanced features such as reusing snippets or whole queries and much more, fully embracing the DRY principle.
💡 Original idea: The original idea comes from this StackOverflow answer, which I have been using and progressively evolving until it became too big to agglutinate all the features I had been adding over time.
I started publishing prereleases because I decided that the next SQLTT version should be 1.0.0, since it has breaking changes that require increasing the major version number.
I also set the goal of publishing it as a mature library with complete documentation and a test suite.
(Not really un)fortunately, during that process many exciting ideas (such as .keys(), .values() and .entries(), "nocli", enhanced operations, etc.) came to my head, so I continuously postponed the 1.0.0 release.
Meanwhile I started to need the already implemented features in a real project (which is in fact where SQLTT comes from), so I started to publish prereleases on NPM.
I'm working on finishing the tests and documentation whenever I can, but there are still new ideas that I consider a "must" because of the power they will confer to SQLTT as a tool.
For this reason, I added this section and the following Release TODO to track the things left to be implemented before the final SQLTT 1.0.0.
I hope it won't keep growing much more and I can deliver the final SQLTT 1.0.0 soon.
- .concat(), .limit() and .wrap() all wrap the output of the original .sql() method call.
  - This is wrong when engines, such as Oracle, provide their own wrapping (resulting in an inverse wrapping order).
  - I.e. (from CLI) $ node myTpl.sql.js someQuery --oracle --limit 10 would now output select foo from bar; LIMIT 10 instead of select foo from bar LIMIT 10;
- (DONE) Implement --engine=<eng_name> to override the SQL_ENGINE env. var.
  - "--<eng_name>" shorthands implemented too (i.e.: "--postgresql").
- Implement --all to render all defined templates.
  - Precede each one with a comment showing its name.
  - Implement it by multiple inclusion, so that arguments are declared globally on top.
It will give access to data declared in the data key of the template source and will be usable from other methods, such as .entries(), to access data defined inside the template.
I.e.:
tpl.getUserData = new sqltt({
data: {
columns: ["id", "name", "sex", "birth", "ctime"],
filters: ["name"],
},
sql: $=>$`
select ${$.arg($.data("columns"))}
from users
${$.entries($.data("filters"), "and", "where %")}
// For the third .entries() parameter see next TODO
`,
});
(DONE) Additionally, methods that currently don't accept a string as their first argument would be modified to automatically call .data() with it when a string is provided. This way, the previous call to .entries() could be simplified as $.entries("filters", "and", "where %").
Implement a wrapStr additional argument, at least for .keys(), .values() and .entries().
It will provide a simple wrapping string (see the previous example) that will be applied only if that method renders something.
This way, if in the previous example filters had been an empty array, not even the where clause (provided through this wrapping argument) would have been rendered, so executing that query would return all rows.
(DONE) After the previous step, we can implement a new Template API method with the same name (.data(patch)).
This method will return a new instance of the template with its template data property patched.
SYNTAX: myTpl.data(dataPatch)
Every entry of the patch will replace the existing one.
To remove existing entries, they can be set to null or undefined.
This third piece will allow us to "mutate" queries by specifying a different column list to show or filters to apply (the required arguments would change in this case).
(DONE) Modify the .data() Template API method so that, if the provided dataPatch is an array or a string of comma-separated keys instead of an object, it will look up every specified key in the "presets" property of the template source.
Then, those objects will all be applied as data patches.
(DONE) Extend the CLI controller so that, in the multi-template case, when addressing a template (such as 'list') we can add a literal data specification in parentheses and, in that case, the template gets mutated accordingly.
I.e., following our initial example, the 'list' and 'listByDept' queries would be merged into a single one:
tpl.list = new sqltt({
data: {
columns: ["id", "dptName", "name"],
// filters: [], // Non declared data sets defaults to empty array.
},
sql: $=>$`
select ${$.arg($.data("columns"))}
from personnel
join depts using(dptId)
${$.entries($.data("filters"), "and", "where %")}
`,
});
Now, to get the previously named listByDept query from the CLI, we just need to run:
$ node personnel.sql.js 'list({filters: ["dptId"]})' | psql tiaDB
id | dptid | dptname | name | sex
----+-------+------------+-----------+-----
2 | oper | Operations | Mortadelo | m
3 | oper | Operations | Filemon | m
(2 rows)
Now consider this slight modification of the previous example.
tpl.list = new sqltt({
data: {
columns: ["id", "dptid", "dptname", "name", "sex"],
},
presets: {
detailed: {columns: ["id", "dptId", "dptName", "name", "sex", "ctime"]},
bySection: {filters: ["dptId"]},
},
sql: $=>$`
select ${$.arg($.data("columns"))}
from personnel
join depts using(dptId)
${$.entries($.data("filters"), "and", "where %")}
`,
});
Presets should be allowed from cli-mode too, just by invoking them by name, so now we could simply have done:
$ node personnel.sql.js 'list(bySection)' | psql tiaDB
id | dptid | dptname | name | sex
----+-------+------------+-----------+-----
2 | oper | Operations | Mortadelo | m
3 | oper | Operations | Filemon | m
(2 rows)
📌 Since .data() accepts multiple presets, we could also pick more than one preset:
$ node personnel.sql.js 'list(bySection,detailed)' | psql tiaDB
 id |   name    | sex |  dptName   |   birth    |           ctime
----+-----------+-----+------------+------------+----------------------------
  1 | Mortadelo | m   | Operations | 1969-03-10 | 2019-05-31 10:58:09.346467
  2 | Filemon   | m   | Operations | 1965-08-15 | 2019-05-31 10:58:46.291629
- Modify the .data() Template API method so that it accepts any number of arguments.
  - Those arguments can be either preset names or data patches to apply.
  - They will be applied in order.
  - ...and every key will be MERGED (concatenated in the case of arrays) instead of replaced.
  - Keys with null values will be reset:
    - The current effect of .data({foo: {bar: "baz"}})...
    - ...will become that of .data({foo: null}, {foo: {bar: "baz"}})...
    - ...otherwise previous and non-overwritten foo data would be preserved.
  - A global null value (.data(null [, ...])) will fully reset the whole data object.
- Modify the cli-mode mutation specification to allow combining preset names with data patches in the same way.
7.1 Rename the .data() method to .mutation().
7.2 Implement a new (and simpler) .data() method which just replaces the whole data property (semi-backward compatibility).
7.3 Enhance the .mutation() method to allow presets to be callbacks returning the actual mutation.
  - Those callbacks would accept parameters too, which could be passed using function-like syntax.
  - They will also be bound to the tag function so that Tag API methods will be available through this.
Example:
tpl.list = new sqltt({
presets: {
page: function(num, itemsPerPage = 10) {
const $ = this; // Tag API
if (num === undefined) return { // Render them as parameters.
offset: $.arg("offset"),
limit: $.arg("limit"),
};
if (num === null) return {}; // No offset or limit rendered at all.
// Use specified literals otherwise:
return {
offset: num * itemsPerPage,
limit: itemsPerPage,
};
},
},
sql: $=>$`
select * from personnel
${$.keys($.data("offset"), $.default, "offset %")}
${$.keys($.data("limit"), $.default, "limit %")}
`,
});
myQuery.sql();
// select * from personnel;
myQuery.mutation('page(null)').sql();
// select * from personnel;
myQuery.mutation('page(20)').sql();
// select * from personnel offset 200 limit 10;
myQuery.mutation('page(3, 50)').sql();
// select * from personnel offset 150 limit 50;
myQuery.mutation('page(null)').sql();
// select * from personnel;
myQuery.mutation('page()').sql();
// select * from personnel offset $1 limit $2;
Update the documentation with these functionalities.
Remember to consider examples for GraphQL API implementations.
- (ALMOST DONE) Added a new optional attribute 'with' to the template source specification for Common Table Expressions ("with" clause).
  - It accepts a {key: value} object taking the same values that the $.include() method accepts.
  - They are considered CTEs to be prepended to the actual query.
    - In case of a {key: value} object, each key will be used as the CTE alias.
    - Otherwise, the alias attribute will be mandatory for the resolved SQLTT instance.
  - They're processed recursively. That is: in case a CTE has its own CTEs, they will be flattened into the parent query's CTE block.
    - In case a CTE has a CTE with the same alias as an already added one, it is checked to be the same object and added a single time.
    - ...otherwise (same alias but not the same object) an error is thrown.
  - It also accepts any of the following (the template 'name' attribute would be mandatory too in this case):
    - An array of the same kind of values.
    - A single item of the same type.
  - CTEs from an $.include()'d query won't be flattened (except internally into their own sub-CTEs), so aliases won't collide either.
    - Nowadays, $.include()ing queries with CTEs is not supported and throws an error.
- Not enumerable.
  - Only if no "options" query is exported.
  - It will allow globally altering default options. (Not strictly necessary for 1.0.0, but maybe...)
Modify the publish() static method so that unnamed templates get automatically named by their key in the object being published.
- Very simple, readable and non-intrusive template format.
- Don't Repeat Yourself (DRY):
  - Render SQL for your application properly formatted for one or more database engines. Ex.:
    - PostgreSQL: myTpl.sql('postgresql') // select [...] where baz = $1
    - Oracle: myTpl.sql('oracle') // select [...] where baz = :1
    - ...
  - Generate database-specific CLI versions too.
    - PostgreSQL: myTpl.sql('postgresql_cli') // select [...] where baz = :baz
    - Oracle: myTpl.sql('oracle_cli') // select [...] where BAZ = '&baz'
    - Auto: myTpl.sql('cli') // Use 'default_cli' unless the SQL_ENGINE env var is defined or the --engine=<eng_name> modifier is used
    - Or even simpler: direct call from the command line if the '.publish()' method is used.
  - ...all with a single SQL template source.
- Easy placeholders handling:
  - Readable strings as argument placeholders instead of $1, $2, etc. (PostgreSQL) or :1, :2, etc. (Oracle), for example.
  - Numeration can be (even partially) explicitly specified by enumerating the arguments in the args template property or inferred from their order of appearance. Non-specified arguments are automatically numbered in the order they appear in the query.
- Arguments generator helper:
  - Generates a properly sorted arguments array from a {key: value, ...} object.
  - Missing keys default to null and unused ones are silently ignored.
  - Ex.: myTpl.args({foo: "fooVal", bar: "barVal"}).
- Direct execution: standard output can obviously be redirected to any database SQL interpreter.
  - Ex.: node myTplLib someQuery value1 "second value" | psql myDb
- Query nesting: if, instead of a regular string, another sqltt instance is interpolated, it will be correctly rendered in place and even its argument declarations will be conveniently assumed, in the right order and without duplications.
  - Ex.: $=>$`${listQuery} and typeid = ${"tid"}`
- Advanced Interpolation API: when the $=>$`...` form is used, the tag function (the "$" argument) comes with a bunch of methods providing more advanced functionalities.
  - In fact, ${"someArg"} is just a shorthand for ${$.arg("someArg")}.
  - ...and ${someSubtemplate} is the same as ${$.include(someSubtemplate)}.
  - But they can take more arguments. Ex.: ${$.include(someSubtemplate, {arg1: "fixedValue"})}
  - And there are a few more: $.literal(), $.keys() and $.values().
    - Ex.: insert into foo (${$.keys(someObj)}) values (${$.values(someObj)})
  - $.entries().
    - Ex.: update foo set ${$.entries(someObj)} where ...
  - And more coming ($.if(cnd, thenCase, elseCase), ...).
- Debugging capabilities: a debug option can be enabled for a single query or a whole published repository, making those queries log the supplied arguments every time the .args() method is called.
  - This allows cherry-picking debug information only for specific queries.
  - Special "verbose" and "cli" values can be specified to show the whole provided (unparsed) arguments object or to get a CLI-suitable string.
- Query cache:
  - The SQL for every database is generated and cached the first time it's required and then always consumed from the cache.
- SQL Syntax Highlighting.
SQLTT is database agnostic in the sense that it only provides a templating layer and you are responsible for writing SQL suitable for your specific database engine(s).
But, at the same time, it provides you the tools to support multiple SQL syntax variations with a single codebase.
So, depending on the targeted database, or even on whether we are generating SQL for an application library or to be executed in a CLI interpreter, there can be subtle syntax differences SQLTT must take care of.
The most obvious one is the way positional parameters must be specified in an SQL string. Say:
- $1, $2, $3, ... (PostgreSQL)
- :1, :2, :3, ... (Oracle)
- Etc...
Or even by variable name, at least in many database CLIs:
- :var1, :var2, :var3, ... (PostgreSQL)
- '&var1', '&var2', '&var3', ... (Oracle)
- Etc...
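For illustration, this is roughly how a single template renders through the different engines; the SQL shapes in the comments follow the Features section above, so take them as approximate output rather than verbatim:
const sqltt = require("sqltt");
const myTpl = new sqltt($=>$`select foo from bar where baz = ${"baz"}`);
myTpl.sql('postgresql');     // select foo from bar where baz = $1
myTpl.sql('oracle');         // select foo from bar where baz = :1
myTpl.sql('postgresql_cli'); // select foo from bar where baz = :baz
myTpl.sql('oracle_cli');     // select foo from bar where BAZ = '&baz'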
To handle these specific differences between targeted databases, SQLTT uses small specialized libraries called engines.
The engines currently supported by SQLTT are:
| Name | Description |
|---|---|
| default | Generic standard (TODO: ANSI-compliant) SQL. |
| default_cli | Generic standard SQL suitable for CLI interpreters. |
| postgresql | PostgreSQL-specific SQL. |
| postgresql_cli | PostgreSQL-specific SQL for its CLI (psql) client. |
| oracle | Oracle-specific SQL. |
| oracle_cli | Oracle-specific SQL for its CLI (sqlplus) client. |
If you are interested in adding more engines or improving existing ones, check the lib/engines.js file (they're really easy to implement) and please feel free to send me patches to include your improvements.
As you can see from the previous table, we have two engines for each specific database flavour (PostgreSQL, Oracle, etc.): one targeting SQL for that database's application libraries and the other, suffixed with _cli, for its CLI interpreter.
Sometimes we will only be interested in addressing the desired flavour or the desired target.
For example, when we call the .sql() method, we are expected to want SQL for a database library, not for CLI usage. So specifying 'postgresql' stands for the 'postgresql' engine, not 'postgresql_cli'.
On the other hand, when we want to use it from the CLI, we may be interested in selecting only the cli-specific target while still allowing the actual database flavour to change.
To do so we can simply specify 'cli'. This way 'default_cli' will be addressed by default, but it can be overridden by the 'SQL_ENGINE' environment variable (only if exactly 'cli' is specified).
Conversely, even when 'cli' or someFlavour_cli is specified, we can set SQL_ENGINE to 'nocli' or someFlavour_nocli in order to override the CLI engine of the selected database flavour.
This can be useful if we only want to visually inspect how our query will be served to our application through the .sql() (or .sql(flavour)) method.
Additionally, in cli-mode, --engine=<engine_name> or simply its shorthand --<engine_name> can be used to override the SQL_ENGINE environment variable or simply as a handier way to select it.
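As a rough illustration of these rules, reusing the myTpl sketch from the placeholders section above (the comments describe the behaviour explained here; treat them as a sketch, not exact output):
myTpl.sql("postgresql");  // always the 'postgresql' library engine, never the CLI one
myTpl.sql("cli");         // 'default_cli', unless SQL_ENGINE selects another flavour
// With SQL_ENGINE=oracle, 'cli' would resolve to 'oracle_cli';
// with SQL_ENGINE=oracle_nocli, even a CLI request falls back to the 'oracle' library engine.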
Install sqltt by executing npm install --save sqltt within your project directory.
To load SQLTT:
const sqltt = require("sqltt");
To create single SQLTT template:
const q = new sqltt(source, options);
Where:
- source: Defines the template and possibly other parameters (see template parts below).
- options: An optional options object to specify a few behavioural modifiers:
  - default_engine: (Optional) Changes the default rendering engine for that template, that is, the engine that will be used to render SQL when it is not explicitly specified in the .sql() method call.
  - check_arguments (default: true): Allows skipping the template's args validation checks (they will be auto-corrected instead of throwing an error).
  - debug (default: false): When truthy, every call to the .args() method of this query will be logged. Valid values:
    - verbose: Logs the original arguments object.
    - cli: Generates output suitable for CLI usage.
    - true (any other truthy value): Logs an array of the actually provided arguments.
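For example, a minimal sketch of passing these options (the option names are the ones listed above; the values are merely illustrative):
const sqltt = require("sqltt");
const q = new sqltt(
  $=>$`select id, name from personnel where id = ${"id"}`,
  {
    default_engine: "postgresql", // engine used when .sql() is called with no argument
    check_arguments: true,        // keep args validation (the default)
    debug: "cli",                 // log CLI-suitable arguments on every .args() call
  }
);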
Every template file may contain a single SQLTT template or multiple ones, and may export them either in their source form or as already constructed SQLTT instances.
But the preferred way is as follows:
For single template:
const sqltt = require("sqltt");
const tpl = new sqltt(
/* Template source in any valid format */
);
sqltt.publish(module, tpl);
For multiple templates:
const sqltt = require("sqltt");
const tpl = {
aQuery: new sqltt( /* Template source... */),
anotherQuery: new sqltt( /* Template source... */),
/* ... */
};
sqltt.publish(module, tpl);
See Template Format below to learn about the syntax of template sources.
The final sqltt.publish(module, tpl) statement in the previous examples replaces the classic module.exports = tpl and is almost equivalent to:
module.exports = tpl; // Exports template.
module.parent || console.log( // Allow CLI usage.
tpl.sql('cli')
);
...except that it also renders properly formatted "set/define" (depending on the database engine) argument assignments from the command line arguments.
📌 In fact it's slightly more complicated in order to properly handle multiple-template files as well as other slight nuances.
This allows us to use our constructed sqltt instances:
- From the application: as a module of a NodeJS application.
- From the CLI: as a command line tool to get rendered SQL suitable for whichever database CLI we use.
Single Template Example:
const myQuery = require('path/to/myTemplate.sql.js');
const sql = myQuery.sql('postgresql');
const args = myQuery.args({
arg1: "val1",
arg2: "val2",
/* ... */
});
// myDb.query(sql, args);
📌 The 'postgresql' argument in the myQuery.sql('postgresql') statement tells SQLTT to render PostgreSQL-flavoured SQL syntax.
Nowadays the 'postgresql' and 'default' engine flavours work just the same, but other databases may have some nuances, such as Oracle's way of specifying arguments as :1, :2... instead of $1, $2...
Additionally, templates can specify a default_engine property to override the default one.
Multiple Template Example:
const myQueries = require('path/to/myTemplate.sql.js');
const userList_sql = myQueries.userList.sql('postgresql');
const userProfile_sql = myQueries.userProfile.sql('postgresql');
const userProfile_args = myQueries.userProfile.args({
userId: "someId",
/* ... */
});
// myDb.query(userList_sql, []);
// myDb.query(userProfile_sql, userProfile_args);
From command line we just need to execute our template through node:
node path/to/myTemplate.sql.js
If it is a single template file it will directly render its SQL.
Otherwise, if no arguments are provided, it will output a list of the available queries.
Example:
user@host:~/examples$ node personnel.sql.js
Available queries: list, listByDept, show, insert, update
...then we just need to pick the desired query to render:
user@host:~/examples$ node personnel.sql.js list
select * from personnel;
If our query requires arguments, we can feed it by simply adding them to the command line:
Example:
user@host:~/examples$ node personnel.sql.js show 23
\set id '''23'''
select *
from personnel
where id = :id
📌 From the command line, when an argument is numeric, we can't tell whether it is intended to be an actual number or a string.
For this reason all arguments are quoted unconditionally, given that most database engines will automatically cast them to numbers when needed.
If we want to directly execute the query instead, we just need to pipe it to our preferred database CLI interpreter.
Example:
user@host:~/examples$ node personnel.sql.js list | psql tiaDB
id | name | sex | dptName | birth | ctime
----+-----------+-----+----------------+------------+----------------------------
1 | Mortadelo | m | Operations | 1969-03-10 | 2019-05-31 10:58:09.346467
2 | Filemon | m | Operations | 1965-08-15 | 2019-05-31 10:58:46.291629
3 | Ofelia | f | Administration | 1972-08-29 | 2019-05-31 11:05:16.594719
4 | Bacterio | m | I+D | 1965-08-15 | 2019-05-31 11:05:35.807663
(...)
To render SQL from the CLI, the default_cli engine is selected by default unless the default_engine option is set. For example, for default_engine: "postgresql", postgresql_cli will be picked instead.
On the other hand, in case we want to specifically pick a given database engine flavour when generating SQL from the CLI, we can set the SQL_ENGINE environment variable in our shell either by:
a) Exporting it (Ex.: export SQL_ENGINE=postgresql).
b) Setting it just for a single execution (Ex.: SQL_ENGINE=oracle node myTpl.sql.js ...).
Lastly, SQL_ENGINE can be overridden by the --engine=<eng_name> modifier or any of its per-engine shorthands (--<eng_name>) such as --postgresql, --oracle_nocli, etc...
TODO: (nocli, *_nocli)...
SQLTT templates consist of a plain JavaScript object with one or more of the following keys:
- name: (Optional)
- description: (Optional)
- sql: (Mandatory) An SQL string or an SQL callback. Using a simple string provides a leaner way to define the SQL, but no interpolated arguments are possible in this case.
- args: (Optional) An array of strings declaring argument names and the order in which they must be numbered. If omitted or incomplete, the rest of the arguments will be appended in order of appearance.
- alias: (Optional) Provides an alias name to be used in case the whole query is included beside others through $.include([subq1, ...]).
- altsql: (Optional) One of the main goals of SQLTT is not having to maintain multiple versions of the same query for different databases. But when there is no other option, altsql lets us provide alternatives for specific database engines. Ex.: altsql: { oracle: /* Oracle-specific sql string or cbk */ }.
- data: (UNIMPLEMENTED)
- with: (UNIMPLEMENTED)
Examples:
- Arguments in a given order: {args: ["baz"], sql: $=>$`select foo from bar where baz = ${"baz"}`}.
- Arguments in appearance order: $=>$`select foo from bar where baz = ${"baz"}`.
- Simple string: "select foo from bar" (no arguments in this case).
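As an illustration, here is a sketch combining several of the optional keys above into one source (loosely based on the 'show' query from the introduction; the altsql entry is left commented out since none is needed here):
const sqltt = require("sqltt");
const show = new sqltt({
  name: "personnelShow",
  description: "Fetch a single personnel row by id",
  args: ["id"],              // explicit argument ordering
  alias: "pshow",            // alias used if this query gets $.include()'d beside others
  sql: $=>$`
    select id, dptName, name, birth, ctime
    from personnel
    join depts using(dptId)
    where id = ${"id"}
  `,
  // altsql: { oracle: /* Oracle-specific sql string or cbk, only if really needed */ },
});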
An optional name to identify the query.
It is used as the default alias when the query is included or used as a CTE (with) by another one, and also for documentation and debug-logging purposes.
An optional description.
The SQL callback receives a single parameter (named $, though we can name it whatever we like).
This parameter is expected to receive a tag function and the whole callback is expected to return an ES6+ Tagged Template Literal generating an SQL statement.
Example:
$=>$`
select foo
from bar
where baz = ${"value"}
`
NOTES:
- Arrow function syntax is used in order to minimize verbosity.
- In ${"value"}, the $ sign is part of the ES6+ Tagged Templates interpolation syntax (${...}), not the tag function.
  - That is: X=>X`select ... = ${"value"}` could have been used instead.
- The $ argument, despite being the tag function, also has various methods (which we call the Tag API) providing several extra functionalities.
- In fact, ${"value"} is just a shorthand for ${$.arg("value")} (or ${X.arg("value")} if X is used instead of $).
(Not yet implemented...)
(Not yet implemented...)
SQLTT involves two API interfaces:
- Template API: the methods available on any instantiated SQLTT template.
- Tag API: various methods attached to the tag function that our template receives during its compilation.
After instantiating our template as SQLTT (const myQuery = new sqltt(my_template)), we can use the methods below:
Arguments:
- engName: (Optional)
Arguments:
- argData: Can be a simple array
Lets us override the initially specified options.
It returns a new sqltt instance identical to the original except for the modified options.
Options not specified in the provided optsObject will remain the same as specified in the original instance.
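A hedged sketch of the idea; the method name .options() used below is an assumption made for this illustration (the original heading is not shown here), while the behaviour in the comments is the one described above:
const personnelSQL = require('path/to/personnel.sql.js');
// Assumed method name: .options(optsObject) — verify against the actual API.
const verboseInsert = personnelSQL.insert.options({debug: "verbose"});
verboseInsert.sql();  // same SQL as personnelSQL.insert.sql(), but with debug logging enabled
// personnelSQL.insert itself keeps its original options untouched.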
Tag API methods output rendered SQL substrings in the proper syntax for the targeted database engine.
📌 Further examples will follow PostgreSQL syntax unless otherwise stated.
Provides the ability to interpolate an argument by its name.
Argument names can be repeated. They will be rendered in order of appearance when the args() Template API method is called, unless a different order is specified through the args property in the template source.
SYNTAX:
arg(argName[, alias])
PARAMETERS:
- argName: Argument name.
- alias: (Optional) Alias.
SHORTHAND:
In its simplest form (when argName is a string and alias is not provided), a simple string can be used as a shorthand.
Examples:
- Explicit: ${$.arg("argName")} ≡ $argName.
- Using shorthand: ${"argName"} ≡ $argName.
ENHANCED BEHAVIOUR:
- If alias is provided, it is added after the argument interpolation.
  - Example: ${$.arg("foo", "afoo")} ≡ $foo as afoo.
📌 Arguments aren't usually aliased in a query, but they can be placed in the projection too in order to complete it with constant data.
- Passing an object as argName will interpolate all its keys as comma-separated SQL arguments using their values as aliases (the alias argument, if given, will be ignored).
  - A boolean false value disables the alias for the given key.
  - A boolean true value makes the key itself be used as the alias instead.
  - Example: ${$.arg({foo: "afoo", bar: false, baz: true})} ≡ $foo as afoo, $bar, $baz as baz.
- Using an array instead will produce the same effect without aliases (if alias is not given or evaluates to false) or using the same name as the alias (otherwise).
  - Example: ${$.arg(["foo", "bar"])} ≡ $foo, $bar; ${$.arg(["foo", "bar"], true)} ≡ $foo as foo, $bar as bar.
📌 The "as" keyword in the previous examples has been written for the sake of clarity only.
No "as" keyword is currently rendered as it is invalid in many database engines and, for those which accept it, it is optional anyway. Future SQLTT versions may render it for engines that support it.
Provides the ability to nest other templates.
SYNTAX:
include(src [, bindings])
PARAMETERS:
- src: An SQLTT instance or any valid SQLTT source (including a raw string).
- bindings: (Optional) Argument bindings.
SHORTHAND:
If src is an already instantiated SQLTT template and no bindings are needed, you don't need to use .include() at all.
Examples:
Considering this simple snippet:
const src0 = $=>$`select foo from bar where baz = ${"baz"}`;
const q0 = new sqltt(src0);
- Explicit: insert into sometable ${$.include(q0)} ≡ insert into sometable select foo from bar where baz = $baz.
- Using shorthand: insert into sometable ${q0} ≡ insert into sometable select foo from bar where baz = $baz.
- From source (using .include() is required): insert into sometable ${$.include(src0)} ≡ insert into sometable select foo from bar where baz = $baz.
ENHANCED BEHAVIOUR:
EXAMPLES:
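A hedged sketch of include() with the optional bindings argument, assuming (as the {arg1: "fixedValue"} example in the Features section suggests) that a binding fixes the given argument so the outer query no longer needs to receive it:
const sqltt = require("sqltt");
const byDept = new sqltt($=>$`select id, name from personnel where dptId = ${"dptId"}`);
// Bind "dptId" so it is no longer exposed as an argument of the outer query:
const operPersonnel = new sqltt($=>$`
  ${$.include(byDept, {dptId: "oper"})}
  order by name
`);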
SYNTAX:
keys(argSpec [, sep [, wrapStr]])
values(argSpec [, sep [, wrapStr]])
entries(argSpec [, sep [, wrapStr]])
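Since no examples are given here yet, here is a minimal sketch based on the insert/update templates from the introduction and the Features section (the rendered SQL in the comments is approximate, PostgreSQL flavour):
const sqltt = require("sqltt");
const fields = ["dptId", "name"];
const insert = new sqltt($=>$`
  insert into personnel (${$.keys(fields)})   -- renders roughly: dptId, name
  values (${$.values(fields)})                -- renders roughly: $1, $2
`);
const update = new sqltt($=>$`
  update personnel set ${$.entries(fields)}   -- renders roughly: dptId = $1, name = $2
  where id = ${"id"}
`);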
Since regular strings are normally interpreted as shorthands for simple argument interpolations, .literal() provides a way to inject a raw string.
Given that select ${$.literal("foo")} from bar is exactly the same as select foo from bar (with no interpolation at all), .literal() is mostly used internally by other Tag API functions.
But it can also be useful in case we need to insert some calculated substring.
Example:
tpl.getUserData = new sqltt($ => ({
sql: $`
select *
from ${$.literal(get_table_name("users"))}
where user_id = ${"user_id"}
`,
}));
TODO: Rewrite in more detail...
- Publishing helper: sqltt.publish(module, myTpl);
  - Assigns myTpl to module.exports (so it exports it).
  - If the template file is directly invoked, it outputs CLI SQL to stdout.
    - node myTpl.sql.js outputs generic CLI output.
    - SQL_ENGINE=postgresql node myTpl.sql.js outputs PostgreSQL-flavoured CLI output.
    - ...or simply node --postgresql myTpl.sql.js.
  - If myTpl is a key: value object instead, the first argument is expected to select which query is required.
    - Ex.: node myTplLib.sql.js listQuery
    - If no argument is provided in this case, a list of the available keys is shown instead.
  - Arguments are wrapped in "set" commands.
    - Ex.: node myTpl.sql parameter1 parameter2 "third parameter"
    - Ex.: node myTplLib.sql listBySection sectionId
Hooks let us wrap arguments differently depending on the actual engine.
They consist of a function that takes the original string and the engine name. If this function returns a non-falsy value, the argument is replaced by it in the query. Otherwise it remains untouched.
📌 Hooks can also be applied to non-argument keywords. To do so we need to interpolate them using the literal() tag method, to escape them as if they were real arguments, and then wrap them as an array; this avoids their interpolation as actual arguments.
Example:
tpl.getUserData = new sqltt({
hooks: {
// Prettier formatting on CLI output:
user_profile: (arg, eng) => eng.match(/(^|_)cli$/) && "jsonb_pretty("+arg+") as "+arg,
// Rename the "bigint" cast to "int" for Oracle:
bigint: (arg, eng) => eng.match(/^oracle/) && "int",
},
sql: $=>$`
select id, name, ${$.literal("user_profile")}
from users
where cast(strCtime as ${$.literal("bigint")}) > ${"fromTimestamp"}
`,
});
There's a shorthand consisting of simply specifying an alternative string. In this case the replacement is done unconditionally, but this can be helpful when we want to manually enable/disable some tweaks without editing the actual SQL (just by commenting that hook in and out).
Example:
// If we wanted to apply this hook to all engines:
json_data: (arg, eng) => eng.match(/(^|_)cli$/) && "jsonb_pretty("+arg+") as "+arg,
// We could have written it as:
json_data: "jsonb_pretty(%) as %",
If it is impossible or unreasonable to use the same SQL structure for some database engine, sqltt allows specifying a completely different SQL source for a given database through the altsql property.
Example:
tpl.someQuery = new sqltt({
sql: $=>$`
/* Regular SQL */
`,
altsql: {
oracle: $=>$`
/* Oracle specific SQL */
`
}
});
📌 Argument names and order are checked to be the same in all query alternatives to ensure consistency, so using the args property to fix their order is strongly encouraged.
sqltt template instances provide a .concat(<string>) method returning a new instance whose sql(<whatever>) method will return the provided string concatenated at the end.
This is useful to add simple clauses such as limit, order by or group by from our application logic.
Example:
const myQuery = require('path/to/myQuery.sql.js')
.concat("limit 100")
;
db.queryRows(
myQuery.sql("postgresql")
, myQuery.args(inputData) // or simply "inputData" if db is sqltt aware*
// (*) Such as ppooled-pg
).then(rows=>console.log(rows));
- Implement custom engines:
  - Engines are really simple libraries, very easy to implement, but nowadays they must be included in the SQLTT package.
  - Custom engines will allow users to implement their own engines, extending the default ones, just by passing them as a specific option.
- Implement customisation options:
  - Smart indentation.
  - Comments removal.
- Incorporate Oracle compatibility shims.
If you are interested in contributing to this project, you can do it in many ways:
- Creating and/or maintaining documentation.
- Implementing new features or improving the code.
- Reporting and/or fixing bugs.
- Sending me any other feedback.
- Whatever you like...
Please contact me, open issues or send pull requests through this project's Git repository.