Revisit & improve the `knexify` concept #5
Comments
I'm not working at Ghost, so this is just the opinion of a random person who worked on a similar problem. I wrote a boolean-logic (plus some basic math operators) querying language that translated to Elasticsearch aggregations. Rather than using a parser generator, I preferred to write my own tokenizer and lexer, for debugging and error-messaging purposes. The way it worked was that the query was split into tokens, then the lexer would create lexemes and build the AST. Once that was done, an Elasticsearch-specific module would traverse the AST and build the individual subqueries by recursively switching on the names of the operators in the AST. For the Ghost use case I think a similar structure could be interesting. Something like:
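A rough illustration of that shape, with invented names rather than anything from GQL or the commenter's project: an AST of operator nodes, and a backend-specific walker that recursively switches on the operator name to build Elasticsearch subqueries.

```js
// Illustrative only - names and AST shape are made up for this sketch.
// A query such as "featured AND likes > 10" parsed into an AST of operator nodes:
var ast = {
    op: 'and',
    left: {op: 'eq', prop: 'featured', value: true},
    right: {op: 'gt', prop: 'likes', value: 10}
};

// A backend-specific module walks the AST recursively, switching on the
// operator name at each node to build the Elasticsearch subqueries.
function toElasticsearch(node) {
    switch (node.op) {
        case 'and':
            return {bool: {must: [toElasticsearch(node.left), toElasticsearch(node.right)]}};
        case 'or':
            return {bool: {should: [toElasticsearch(node.left), toElasticsearch(node.right)]}};
        case 'eq':
            var term = {};
            term[node.prop] = node.value;
            return {term: term};
        case 'gt':
            var range = {};
            range[node.prop] = {gt: node.value};
            return {range: range};
        default:
            throw new Error('Unknown operator: ' + node.op);
    }
}

console.log(JSON.stringify(toElasticsearch(ast), null, 2));
```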
I might work on this if there is any interest in it.
Hey @Mickael-van-der-Beek, sorry for the slow reply. There are some similar conversations around the future direction for GQL happening on TryGhost/Ghost#5604 and #17.
This is a brain dump of thoughts on the currently very temporary `knexify` module that exists in #2.

Currently, the 'knexifier' is just a mish-mash of functions which provide glue between the JSON format output by the GQL parser and a knex query builder instance. The key part is the `buildWhere` function, which organises the filter JSON into a set of query builder calls. I envisage we need a similar `buildJoin` function to do the same for joins, but I'm not sure whether that truly should live in GQL. (Currently it's a dirty hack in Ghost's `core/server/models/base/utils.js`, similar to the existing 'query' behaviour that handles joins for the existing implementation of tag/author filtering.)

For now, `knexify` contains a bunch of contextual information that it uses to do its thing, and it needs a whole load more - like which attributes are permitted on a model, what their valid values are, and so on. All of this is information that is already encoded inside Ghost's model layer, so having it splatted in here is just a duplication for convenience's sake whilst we're building all this fancy stuff out and trying to connect the dots between GQL and Ghost.
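For context, a minimal sketch of the sort of glue `buildWhere` provides - not the actual GQL code, and the statement format here is invented purely for illustration:

```js
// Illustrative sketch only: walk parsed filter statements and translate
// each one into a knex query builder call. The statement shape shown here
// is invented; the real filter JSON produced by GQL differs.
function buildWhere(qb, statements) {
    statements.forEach(function (statement) {
        var whereFn = statement.func || 'where'; // e.g. 'where', 'orWhere'

        if (statement.group) {
            // A bracketed group becomes a nested builder callback.
            qb[whereFn](function () {
                buildWhere(this, statement.group);
            });
        } else {
            // A simple clause maps straight onto where(column, operator, value).
            qb[whereFn](statement.prop, statement.op, statement.value);
        }
    });

    return qb;
}

// e.g. buildWhere(knex('posts'), [
//     {prop: 'status', op: '=', value: 'published'},
//     {func: 'orWhere', prop: 'featured', op: '=', value: true}
// ]);
```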
Long term, we should be grabbing all of the contextual information we need from Ghost's models, in a uniform and predictable way. This suggests that what we want to hook into with GQL is bookshelf, rather than knex.
I believe that the best approach for doing this might be to provide a bookshelf plugin as part of GQL: one that can hook into any bookshelf models and produce the same effect, provided those models expose the same functions or properties.
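Loosely, such a plugin might look like the sketch below. The `filter` method, the `permittedAttributes` contract and the `parse`/`buildWhere` calls are assumptions about how the pieces could fit together, not existing APIs:

```js
// Hypothetical plugin shape: GQL exports a function that bookshelf can
// register via bookshelf.plugin(), extending every model with a `filter`
// method that pulls its contextual information from the model itself.
var gql = require('./gql'); // wherever the parser and buildWhere util live

module.exports = function gqlPlugin(bookshelf) {
    bookshelf.Model = bookshelf.Model.extend({
        filter: function (filterString) {
            var statements = gql.parse(filterString);
            var permitted = this.permittedAttributes(); // exposed by each model

            statements.forEach(function (statement) {
                if (permitted.indexOf(statement.prop) === -1) {
                    throw new Error('Cannot filter on attribute: ' + statement.prop);
                }
            });

            return this.query(function (qb) {
                gql.buildWhere(qb, statements);
            });
        }
    });
};
```

A consumer would then register it once with `bookshelf.plugin(...)` and get `Post.filter('...')` on every model, provided the models expose `permittedAttributes`.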
Short term, this might be too time-consuming, and it may be best to continue duplicating the information inside GQL in the most useful format, whilst the API in Ghost is also extended out, and revisit doing this in a better way in the future.
A question for right now is: is there some sort of middle ground? E.g. restructuring `knexify` as a bookshelf plugin but still using hard-coded contextual information for now, or perhaps providing both the utils like `buildWhere` and `buildJoin` in a 'knexify' module, and having a second bookshelf plugin module to hook them in?

I'm loosely aiming to make GQL useful to other people, although it's not a priority; for now the main priority is getting it both working AND highly testable.
Some interesting things, mostly for later, that I'm not sure how to do as a bookshelf plugin are:

- the `permittedAttributes` function - can we include that in the plugin in some way?
- a `permittedJoins` type function as well
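For the sake of illustration, the model-side contract the plugin would rely on might look something like this - a hypothetical shape mirroring the function names above, not anything that exists in Ghost today:

```js
// Hypothetical model-side contract: each model declares which attributes
// can be filtered on and which relations a filter is allowed to join
// through, so the plugin never needs hard-coded knowledge of the schema.
var ghostBookshelf = require('./base'); // Ghost's configured bookshelf instance

var Post = ghostBookshelf.Model.extend({
    tableName: 'posts',

    permittedAttributes: function () {
        return ['id', 'title', 'slug', 'status', 'featured', 'published_at'];
    },

    permittedJoins: function () {
        return ['tags', 'author'];
    }
});
```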