A simpler DSL? (Pass code literal to _avx_!) #74
I agree that the variable name mangling is a problem. It forces me to do a lot of untangling to figure out what's going on. I hate bugs that show up with […]. I think two reasonable solutions are: […]
The reason I favored the "condense -> reconstruct" model was to support differentiating loops in a (Bayesian) modeling DSL, and have it be fast.* I'd call this a work in progress, but it's been about a month since I was last able to spend any time on it. I hope to be able to focus on this again soon and actually make "progress".

*Because MLIR was originally created as part of TensorFlow, they must have this motivation too. I really should find the time to play around with TensorFlow Probability; maybe it already is what I've been dreaming of.

An additional note on the original implementation of the […]
This doesn't sound like too much to re-implement. I do think "reconstruct_loopset.jl" is simpler to work on and extend than […]. Some code is also needed to figure out what the call to […]
I'm 100% on board with the idea of pushing forward with the current design. Getting something that you can use to persuade others of its value will provide the stability and permission you need to keep working on this. In the longer run (when your deadlines are not so pressing) I'd argue we should think about a refactor. The crux of the issue is that currently […]
In my opinion, a lot of what makes some things difficult here is that you have to do quite a lot of analysis with only half the information available. It seems likely to me that a fair bit of the complexity (e.g., all the CartesianIndex offsets due to needing to expand symbol names into tuples) will simply disappear if we can get both kinds of information in the same place. This would be true if we wired this in as a compiler pass in Julia itself, but in the shorter term I think encoding the Expr as a type is by far the easiest way to make these gymnastics prettier. I don't think compile time will be that big a deal if we can prevent recompilation of the internal methods of LoopVectorization (#76, likely to require improvements to Julia itself), other than for the compilation of the functions this produces.
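To illustrate the "half the information" point: a macro sees only syntax, while a `@generated` function sees only the types of its arguments. The sketch below (the name `unroll_factor` and the size-based heuristic are purely illustrative, not LoopVectorization's API) shows the type-side half — a code-generation decision that depends on `eltype`, which no macro alone could make:

```julia
# Illustrative only: a generated function's body runs at compile time,
# once per concrete argument type, with `T` bound to the element type.
# Decisions like a (fake) unroll factor can therefore depend on type
# information that a macro never sees.
@generated function unroll_factor(::AbstractArray{T}) where {T}
    factor = sizeof(T) <= 4 ? 8 : 4  # smaller elements -> wider unrolling
    return :($factor)                # splice the decision in as a literal
end

unroll_factor(zeros(Float32, 3))  # specializes on Float32
unroll_factor(zeros(Float64, 3))  # specializes on Float64
```

Encoding the Expr as a type would deliver the other half — the original syntax — into this same compile-time context, so both kinds of information are finally available in one place.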
It occurred to me that one thing making it a bit hard to debug problems is that in the final generated code, the names of the variables are mostly unrelated to the original code. Have you considered passing the raw expression as a type parameter to `_avx_!`? I'm envisioning something like translating […] into […].

This is basically just an `Expr`, but one that could be passed as a type parameter. If you like this idea I can work on it.