Generate Elm modules from a schema #17
I'd love to get started on this. Currently still researching how other projects handle this. The Elm community has a package to parse Elm files to an AST or construct an AST by hand (http://package.elm-lang.org/packages/Bogdanp/elm-ast/latest), but there is not
@patrickdet I was looking at that package too, it looks like the right thing to use. Two nice things about this:
If you want to get started on this, please do! And please do share your work-in-progress along the way. There is already code in elm-graphql to decode a schema introspection response (whose representation, by the way, we could modify if needed because it isn't exposed publicly). It should be pretty straightforward to get started on some function that takes that schema representation and returns a list of modules.

Happy to chat on elm-slack either synchronously or asynchronously if we aren't online at the same time!
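For illustration, here is a minimal sketch of the shape such a function could take, assuming a simplified schema representation (`Schema`, `NamedType`, and `GeneratedModule` are placeholder names, not the library's actual types):

```elm
module Codegen exposing (GeneratedModule, generateModules)

-- Placeholder stand-ins for whatever the introspection decoder produces.
type alias NamedType =
    { name : String }


type alias Schema =
    { types : List NamedType }


-- One generated Elm module per named type in the schema.
type alias GeneratedModule =
    { moduleName : String
    , source : String
    }


generateModules : Schema -> List GeneratedModule
generateModules schema =
    List.map
        (\namedType ->
            { moduleName = "Schema." ++ namedType.name
            , source = "module Schema." ++ namedType.name ++ " exposing (..)\n"
            }
        )
        schema.types
```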
Thinking about it a bit more,
Yeah, I will get on that then. Probably Monday.
I've had a chat with @jamesmacaulay regarding this issue. We've broken it down into a set of things that need to get done and some high-level ideas.
Benefits:
@jamesmacaulay has done some experimentation into this and put it into a branch: https://github.com/jamesmacaulay/elm-graphql/compare/code-generation-experimental
Hi, I don't quite understand how this will work. You talk about generating types from a schema. For example, my schema may say that

Correct? But what if I'm not interested in the

Then this type alias is invalid as it cannot be fulfilled. Seems to me that any code generation needs to:

Care to explain more? Thanks
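For a concrete, hypothetical illustration of this concern (the field names below are made up):

```elm
-- Hypothetical alias generated straight from the schema: it demands every
-- field of the User type.
type alias User =
    { id : String
    , name : String
    , email : String
    }

-- A query that selects only `id` and `name` can never produce an `email`
-- value, so its decoder cannot target this alias.
```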
@sporto I haven't figured out the types in detail yet, but essentially GraphQL's type system is weaker than Elm's, so we have to map between the two somewhere. In the existing library, we do it when defining the extract method. This is what defines our encoders/decoders and therefore how we go from Elm records to GraphQL queries and vice versa. Each one of these queries is given a name as it stands, so we will just continue that approach. As far as generating the types from our GraphQL queries, I think a reasonable way of doing it would be to use the file name as the name for the query in Elm. I'll give an example:

`userPermissions.graphql`

`userInfo.graphql`

These two queries cannot map to the same Elm record (unless we jammed a million `Maybe`s into it).

Of course, when you get the response, you can map these structures into your model however you'd like, for example into a single User type or whatever.
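As a rough sketch of the result (the `UserPermissions` fields match the named query shown in the next comment; the `UserInfo` fields are made-up placeholders):

```elm
-- Hypothetical output for the two query files above: one record type per
-- query file, named after the file, each containing only the selected fields.
type alias UserPermissions =
    { isAdmin : Bool
    , canPost : Bool
    }


type alias UserInfo =
    { name : String
    , email : String
    }
```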
@Vynlar that makes sense, thanks for the explanation. Alternatively these could generate different Elm modules (e.g. one module per query).
@Vynlar Another way to get names for the types is to actually use named queries. Other libraries, such as Apollo, require this.

```graphql
query UserPermissions {
  user {
    isAdmin
    canPost
  }
}
```

On the topic of validating the query against the schema, in my setup I simply use a few tools combined as an NPM script to download the remote schema, convert it to a JSON AST, and use a few editor plugins for auto-complete and real-time validation with ESLint. Maybe not ideal, but it guarantees safety for now.
As @Vynlar clarified (thanks!), my intention is to have these three different ways of using the library: (1) no code-gen, (2) code-gen of helper modules from a GraphQL schema, and (3) code-gen of modules from GraphQL queries. This issue and the little sketch of code in its description are really about option 2.

When generating helper modules from a schema, we don't know anything about the queries that are going to be used, so we can't generate any record type aliases that could be used for decoding objects in query responses. For option 3, generating modules from GraphQL queries, my intention is to have each query result in a single independent Elm module. Keeping everything in separate modules makes naming things a whole lot easier, since each query has its own namespace.

I think what I need to do next for this issue is come up with a small example schema and query and fully write out the Elm modules that would result from generating code via options 2 and 3 respectively.
@jamesmacaulay I've taken a first stab at creating some examples of what I hope we can eventually generate. I'm happy for people to modify them, as there are still some potential concerns. Let me know what you all think.
@Vynlar thanks for this :) This is mostly what I had in mind, and provides a great starting point for more detailed discussion. Here are some notes:

The "simplest thing" would be to represent each field's arguments as its own record type alias. However, this would mean that:

```elm
type ArgValue a vars
    = Fixed a
    | Variable (GraphQL.Request.Builder.Variable.Variable vars)
```

Also note that the
Also about field arguments: I'm currently playing around with an alternative to records that would allow you to only specify the arguments you actually want to pass in. The tradeoff is that it makes the type signatures of the API more complicated and more difficult to document clearly. It could look something like this:

```elm
queryRoot =
    QueryRoot.extract <|
        QueryRoot.allUsers
            [ .first => 10
            , .after |> fromVariable cursorVar
            ]
            (User.extract User.name)
```

...where the

Here's a basic proof of concept for implementing this kind of DSL: https://ellie-app.com/3N8yyQsKkW6a1/5
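Here's a rough, self-contained sketch of the same "pass only what you want" idea, using plain named functions instead of the `=>` operator (all of the names below are assumptions, not the proof-of-concept's actual API; `Arg.int`, `Arg.string`, and `Arg.Value` are the argument helpers used elsewhere in this thread):

```elm
module AllUsersArgsSketch exposing (Argument, after, first, toPairs)

import GraphQL.Request.Builder.Arg as Arg


-- One constructor per argument, so callers list only the arguments they
-- actually want to pass, each with the right Elm type.
type Argument vars
    = Argument String (Arg.Value vars)


first : Int -> Argument vars
first n =
    Argument "first" (Arg.int n)


after : String -> Argument vars
after cursor =
    Argument "after" (Arg.string cursor)


-- Convert the chosen arguments into the (name, value) pairs the builder expects.
toPairs : List (Argument vars) -> List ( String, Arg.Value vars )
toPairs args =
    List.map (\(Argument name value) -> ( name, value )) args
```

A generated `allUsers` helper could then take a `List (Argument vars)` and call `toPairs` internally, so passing `[ first 10, after someCursor ]` supplies just those two arguments.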
For comparison, here's what the same query might look like with an interface that uses record types for arguments:

```elm
queryRoot =
    QueryRoot.extract <|
        QueryRoot.allUsers
            { first = Optional 10
            , after = OptionalVar cursorVar
            , last = Absent
            , before = Absent
            }
            (User.extract User.name)
```

All of these args are optional, which means all of them need to be specified, and the type for the args record is then something like this (assuming variables remain non-type-safe, which we wouldn't want but is easier to show for now):

```elm
type OptionalArgValue a vars
    = Optional a
    | OptionalVar (GraphQL.Request.Builder.Variable.Variable vars)
    | Null
    | Absent


type RequiredArgValue a vars
    = Required a
    | RequiredVar (GraphQL.Request.Builder.Variable.Variable vars)


type alias AllUsersArgs vars =
    { first : OptionalArgValue Int vars
    , after : OptionalArgValue String vars
    , last : OptionalArgValue Int vars
    , before : OptionalArgValue String vars
    }
```

This could be modeled in other ways of course, such as using
@jamesmacaulay I don't have experience using GraphQL in production -- what is a reasonable upper bound on the number of arguments that a given field has? Because if it's common to have more than, say, 5(?) nullable arguments, I would say your first solution would be the best choice; however, if it's just a couple of arguments, then having to include them every time is not a huge deal. I value the developer experience quite highly, and having to have a bunch of
We use GraphQL here; in our case we rarely have optional arguments, maybe one or two. The most we have here are Relay connection arguments (`first`, `after`, `last`, `before`).
@sporto That's good to hear. I've been working on trying to come up with examples for both of the options that @jamesmacaulay has presented here. I realize the second one is in fact much simpler, so if it's practical, that seems like the right choice.

@jamesmacaulay One issue I'm having is trying to convert one of these argument type aliases into the older arg format. Here's an example:

Convert this:

To this:

Is it not possible to do this conversion? Should I approach this in a different way?
I think the most annoying situations are going to be mutations where you're passing in the attributes of some big, possibly-nested entity through arguments. In those cases it really all depends on the application to say how many fields are involved and how many of them are optional. Even so, I'm still finding myself leaning heavily toward some kind of record-based option. One other thing to consider is what the application code would look like to take your app's internal representation of some complex entity and convert it to this weird data structure that's specific to this library.

@Vynlar there's no way to do the conversion you're talking about generically, i.e. without knowing about the structure of the record. Luckily the generated code can have that knowledge, and there are lots of ways it might be implemented. One way might involve making a list of tuples of field name strings and associated functions that each extract a `Maybe (Arg.Value vars)` from the args record:

```elm
allUsersArgExtractors : List ( String, AllUsersArgsRecord vars -> Maybe (Arg.Value vars) )
allUsersArgExtractors =
    [ ( "count", .count >> requiredToMaybeArgValue Arg.int )
    , ( "order", .order >> optionalToMaybeArgValue Arg.string )
    ]


requiredToMaybeArgValue : (a -> Arg.Value vars) -> RequiredArgValue a vars -> Maybe (Arg.Value vars)
requiredToMaybeArgValue convert required =
    case required of
        Required x ->
            Just (convert x)

        RequiredVar v ->
            Just (Arg.variable v)


optionalToMaybeArgValue : (a -> Arg.Value vars) -> OptionalArgValue a vars -> Maybe (Arg.Value vars)
optionalToMaybeArgValue convert optional =
    case optional of
        Optional x ->
            Just (convert x)

        OptionalVar v ->
            Just (Arg.variable v)

        Null ->
            Just Arg.null

        Absent ->
            Nothing
```

Then the generated implementation of the field helper would use these extractors to build the final argument list for the field. (And again, all of the variable stuff should really be a bit different because we need to make variables type-safe.)
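As a minimal sketch of that argument-building step, building on the extractor list above (the `allUsersArgList` name is an assumption):

```elm
-- Apply each extractor to the args record and keep only the arguments that
-- are actually present, producing the (name, value) pairs used for field
-- arguments in the builder.
allUsersArgList : AllUsersArgsRecord vars -> List ( String, Arg.Value vars )
allUsersArgList args =
    List.filterMap
        (\( name, extractArg ) ->
            Maybe.map (\value -> ( name, value )) (extractArg args)
        )
        allUsersArgExtractors
```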
@jamesmacaulay Thanks for the clarification and examples! Additionally, for my (and possibly others') benefit, would you explain what makes the current variables not type-safe? It appears that they each define a type in the form of a
@Vynlar the type of

Let's say you end up using two variables in your query, one defined with

This is all great and provided the mechanism for a decent DSL in 1.x, but it involves discarding the type information from the original

In the case of generating these modules, I'm not even sure if the original type parameter of the appropriate

Either way, I think it should be possible somehow to give
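As a generic illustration of the type-safety point (a toy sketch, not the library's `Variable` API; all names are made up): if each variable carries a typed accessor into the caller's variables record, then using several variables together forces their record constraints to unify.

```elm
-- Toy Variable type: the GraphQL variable name plus a typed accessor into
-- the caller's `vars` record.
type Variable vars
    = Variable String (vars -> String)


cursorVar : Variable { v | cursor : String }
cursorVar =
    Variable "cursor" .cursor


countVar : Variable { v | count : String }
countVar =
    Variable "count" .count


-- Combining both variables means the compiler demands a record that has
-- both a `cursor` field and a `count` field, with the right types.
variableValues : { v | cursor : String, count : String } -> List ( String, String )
variableValues vars =
    List.map (\(Variable name read) -> ( name, read vars )) [ cursorVar, countVar ]
```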
Back on the topic of argument records vs argument lists: this comment over in #24 reminded me that graph.cool, which has grown into a very popular GraphQL-backend-as-a-service, generates absolutely monstrous numbers of optional argument fields for the purpose of filtering result sets in just about any way you can imagine.

I feel like this is such a stunning counterexample to our previous discussion that it is a dealbreaker for the straight records-based approach – there's just no way I would want to force anyone to write out that many `Absent` fields.

The only way I can think of that a records-based approach could work in the face of that would be if we leveraged a defaults record as well, and either provided it for the user to use when they see fit, or asked for a function instead that transforms the defaults:

```elm
QueryRoot.allUsers
    (\args -> { args | first = Optional 10, after = OptionalVar cursorVar })
    (extract User.id)
```

This is also pretty flawed though, because while the defaults for all optional args would be `Absent`, there's no sensible default for required args, so those would still have to be provided separately:

```elm
QueryRoot.findUserById
    { id = Required "user123" }
    (\args -> { args | foo = Optional True })
    (extract User.name)
```

I think this is confusing and would get much worse with nested objects. And unfortunately the list-of-args approach (

I feel like every option is bad in its own terrible way 😕
@Vynlar I think you're right about ditching this. Most people just want to write the queries in GraphQL anyway. The schema-generated middle-ground API is a "nice idea" but it appears too problematic to be worth it.

The only thing is: even with query code gen, if someone hands us a GraphQL query that declares a variable that is a complex input object like the graph.cool filters I linked to, we are back in the same position of wanting a nice type-safe way of asking the user for it in the generated query code. At least there it's a smaller problem, because the complexity only appears when the user chooses to expose whole complex objects as variables for their queries.
@jamesmacaulay To get around that issue, someone could write a query like this:
This way of writing a query would avoid even exposing the optional arguments to Elm at all, right? Of course you could shoot yourself in the foot and define the query like this:
We could issue warnings if there are lots of optional variables, and also clearly document the better approach.
Yes, having options like that (at least most of the time) definitely makes the situation better.
Maybe this idea will be revived at a later time, but for now I will close this issue. Meanwhile I've created a new issue for discussion of query code generation.
Hello @jamesmacaulay, I started working on something like this a couple of weeks ago. I just stumbled on this issue and it looks like we actually reached a lot of the same conclusions. The main place we seem to diverge is that I think it would be really awesome to have a tool that does codegen like this! I've got an MVP here that handles the vast majority of cases. I would love feedback if anyone here has any impressions. Here's an ellie example using some

I built this package because I wanted to have something that

I wanted to get the same feeling of "if it compiles, it works" that I get from Elm, style-elements, remote-data, etc. I think our design goals are a little different, so I think they can coexist pretty well. I hope you feel the same way.

Cheers, Dillon
@dillonkearns cool! |
Develop concurrently with #18.
The package should be able to power a command-line tool to generate Elm modules from a schema.json file. Each module should provide helper functions for a single type in the schema.
Here's a sketch of the kind of thing you should be able to do with them (generated modules are Schema.User and Schema.Project below).
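As a rough, hypothetical illustration of what such a generated module might expose (the field names are assumptions, in the spirit of the `User.name` helpers discussed above):

```elm
module Schema.User exposing (email, name)

-- Hypothetical generated module for the schema's User type. Type annotations
-- are omitted because the exact builder types depend on the library version;
-- `field` and `string` come from GraphQL.Request.Builder.

import GraphQL.Request.Builder as Builder


name =
    Builder.field "name" [] Builder.string


email =
    Builder.field "email" [] Builder.string
```

A query builder could then select these fields (e.g. via the builder's `with`/`extract` helpers) without re-typing field names or GraphQL types by hand.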