-
Not sure what others would recommend, but I would certainly go with the second approach. It is a nice separation of concerns, it keeps your parsing grammar simpler, and it will allow you finer-grained control over the error messages you return in case validation fails.
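To illustrate the "finer-grained error messages" point, here is a minimal sketch of a dedicated error type for the validation pass. All names here are invented for illustration; a single grammar-level parse error could not easily distinguish these cases:

```rust
use std::fmt;

// Hypothetical error type for the second (validation) pass.
#[derive(Debug, PartialEq)]
enum ValidationError {
    UnknownField { type_name: String, field: String },
    MissingField { type_name: String, field: String },
}

impl fmt::Display for ValidationError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ValidationError::UnknownField { type_name, field } => {
                write!(f, "`{type_name}` does not allow a field named `{field}`")
            }
            ValidationError::MissingField { type_name, field } => {
                write!(f, "`{type_name}` requires the field `{field}`")
            }
        }
    }
}

fn main() {
    let err = ValidationError::UnknownField {
        type_name: "Endpoint".to_string(),
        field: "colour".to_string(),
    };
    // Prints a schema-aware message the grammar alone could not produce.
    println!("{err}");
}
```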
-
I am building an interface definition language. It supports generic data types like Structs, but it also has many "special" types that only allow users to define fields from an explicit schema, e.g.
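(The concrete example from the original post was not preserved in this thread; the following is an invented illustration of such a "special" type, where only fields from a fixed schema are allowed:)

```
service Greeter {
  name: "greeter"        // allowed by the schema
  timeout_ms: 3000       // allowed by the schema
  colour: "red"          // NOT in the schema — should be rejected
}
```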
Now I have set up a fairly abstract AST, so one way of parsing this would be via a generic AST `Statement` with only `id` and `properties` in it. But this would allow users to define fields that we do not allow.
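A minimal sketch of that generic node (field names taken from the post; the shape of `properties` is an assumption):

```rust
// Generic AST node: the parser accepts any id/properties combination.
#[derive(Debug)]
struct Statement {
    id: String,
    // Free-form key/value pairs: nothing stops a user from writing a
    // field the schema does not allow, hence the need for validation.
    properties: Vec<(String, String)>,
}

fn main() {
    let stmt = Statement {
        id: "Greeter".to_string(),
        properties: vec![("not_in_schema".to_string(), "oops".to_string())],
    };
    // The parser happily produces this; only a later pass can reject it.
    println!("{stmt:?}");
}
```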
How should we handle a language with an explicit schema, e.g. HashiCorp's configuration language (HCL) or similar?
1. My first attempt was to parse the schema in lalrpop itself, but I get an error. I believe I would have to add all these schema properties as lexer tokens, which feels a bit wrong: they are not exactly keywords, just data properties, and there may be quite a lot of them.
2. My second attempt was to handle this schema validation in a second pass, after lalrpop has parsed the service. This feels like the most sensible approach: first I define an `IntermediateRepresentation` enum for each of the explicit schema parts, then I handle all the conversions and validation in a second pass using a `TryFrom` impl (this is just a quick draft; I think I can probably use a validation crate to make this conversion less verbose).
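The second pass described above could be sketched roughly as follows. The enum variants and property names here are assumptions, since the draft from the original post is not shown:

```rust
// Generic statement as produced by the first (lalrpop) pass.
struct Statement {
    id: String,
    properties: Vec<(String, String)>,
}

// One variant per explicit-schema construct (names assumed for illustration).
#[derive(Debug, PartialEq)]
enum IntermediateRepresentation {
    Service { name: String },
}

impl TryFrom<Statement> for IntermediateRepresentation {
    type Error = String;

    fn try_from(stmt: Statement) -> Result<Self, Self::Error> {
        let mut name = None;
        for (key, value) in stmt.properties {
            match key.as_str() {
                "name" => name = Some(value),
                // Anything outside the explicit schema is rejected here,
                // with a message the grammar alone could not produce.
                other => {
                    return Err(format!(
                        "`{}` is not a valid property of `{}`",
                        other, stmt.id
                    ))
                }
            }
        }
        Ok(IntermediateRepresentation::Service {
            name: name
                .ok_or_else(|| format!("`{}` is missing required property `name`", stmt.id))?,
        })
    }
}

fn main() {
    let good = Statement {
        id: "service".to_string(),
        properties: vec![("name".to_string(), "greeter".to_string())],
    };
    println!("{:?}", IntermediateRepresentation::try_from(good));
}
```

A validation crate could indeed shrink the per-field matching, but even hand-written, the conversion keeps the grammar small and the schema rules in ordinary Rust.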
What is the recommended way to handle something like this?