[macros] Consider not overloading metadata syntax with macro metadata. #3604
Agreed. There's also the added fact that some macro parameters would be allowed that are not possible with current annotations:

```dart
@Macro(
  (arg) => 'Hello $arg',
)
```

but that's not legal with annotations. I'd personally suggest using

```dart
#data
class Foo {}
```

We could also tweak the formatting for macros specifically. I think folks would prefer `#data class Foo {}`.
This has certainly come up in the past; I am not necessarily opposed to using a different syntax, but I do think the motivation needs to be strong, and I do think there are advantages to re-using the metadata syntax:
Does that really work in practice? When considering:

```dart
@Macro(
  (arg) => 'Hello $arg',
)
```

older language versions would fail to compile.
For this particular feature, since it does require new parsing features, maybe not. Although technically it probably could, it would be a bit weird. It would not be possible to use a macro unless your SDK supported macros, so it would for sure have the ability to parse that code. Whether or not we language-version the support is a different question. But for instance the Record analogy I made is actually quite similar to this: you can use records without being opted in to the feature, but you still can't explicitly write out a record type or literal.
If we allow function literals inside macro annotations, we should just allow them in any constant expression. There's a request for that, constant function literals, and it would be useful in other places.
We shouldn't have special rules for macro annotations.
These aren't actually const functions. What happens when you do this is an implicit conversion to a Code object, resolved within the same scope as the annotation itself. That code object can then be used by the macro as a part of the code that they generate.

This is somewhat problematic already though, and we probably need to revisit it. Ultimately, we need to be able to emit valid Dart code, and the scoping rules as defined would not only be potentially challenging to implement, they may not even always be possible to implement using regular Dart code (because of shadowing). There has also been some general discomfort with the idea of any form of implicit conversion like this. Macros do this quite a lot for their constructor parameters though; it isn't isolated to Code parameters.
If |
Yeah, using functions for this is strange. If we are considering adding that, we should instead just add quasi-quoting, because it looks quite arbitrary that you can use QQ in a macro application but can't use QQ to actually implement macros. But that's the topic for #1989.
Another reason in favor of using metadata annotations for macro applications is that macros will often need metadata annotations applied to declarations anyway, to configure how the macro behaves. For example, consider a serialization macro that allows the user to declare that some fields shouldn't be serialized:

```dart
@serialized
class SomeClass {
  final int x;
  final String y;

  @noSerialize
  final String z;
}
```

Using metadata annotation syntax for macro applications makes the syntax uniform between applying a macro and configuring a macro. In some cases, it may be an implementation detail as to which annotation is actually the macro and which is the configuration. For example, consider a macro that, I don't know, establishes some kind of connection between a Flutter widget class and its corresponding state:

```dart
@widget
class SomeWidget extends StatefulWidget {
  ...
}

@widgetState
class SomeWidgetState extends State<SomeWidget> {
  ...
}
```

From the user's perspective, they don't care which annotation is actually the macro and which is the configuration.

Another way to think about it is that in a world where Dart has macros, it's likely that almost all uses of metadata annotations will be for use with macros. And at that point, it probably makes sense to retcon the syntax to mean "static metaprogramming stuff here".
I don't like that argument. For example, I'd like to be able to do:

```dart
Widget build(context) {
  final snapshot = @streamBuilder(stream: stream);
  return Text(snapshot.data);
}
```

But using the annotation syntax for a macro here feels confusing.
Even with macros, I expect most annotations to be I'm personally worried that newcomers would view a
IMO that's an issue that has more to do with I'm not sure I agree with the general argument being made. In cases like
One thing that does worry me about using the same syntax is that macros must run before identifiers can be resolved. We cannot conclusively say which names are exported by a library before we have run macros. But that means we must traverse all transitive dependencies of a library and look for them first. Then we can look for applications of what looks like macros. (It's only "looks like"; we don't yet know if a macro will generate a member with the same name in this library, which would shadow the original annotation. We also don't know that the current library, or non-macro libraries it depends on, are valid yet, and we don't know what they export.)

I'd suggest instead marking the import:

```dart
@macro import "package:mymacro/mymacro.dart";
```

This import is special: it's annotated by a constant (assumed, since we don't know before we can resolve identifiers). Then the current library can check for annotations of that form. Then the current library is complete, and we can start checking what it means. (We can make it a compile-time error otherwise.) It does mean that we are optimistically resolving. We could do that:

```dart
@import "...."
@@MyMacro(args)
```

This syntax makes the
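The key property of a marked import is that it can be detected without any name resolution at all. As an editorial sketch (Python; the `@macro import` form and the sample source are taken from the proposal above, but a real tool would use an actual Dart parser rather than a regex):

```python
import re

# A crude scanner: find imports annotated with `@macro` by looking at
# raw source text only. No name resolution or macro expansion is needed,
# which is the point of marking the import syntactically.
MACRO_IMPORT = re.compile(r'^\s*@macro\s+import\s+"([^"]+)"\s*;', re.MULTILINE)

def find_macro_imports(source: str) -> list[str]:
    """Return the URIs of all macro imports in a Dart source string."""
    return MACRO_IMPORT.findall(source)

dart_source = '''
@macro import "package:mymacro/mymacro.dart";
import "package:ordinary/ordinary.dart";
'''

print(find_macro_imports(dart_source))  # → ['package:mymacro/mymacro.dart']
```

The ordinary import is deliberately ignored: only the syntactically marked one is treated as a macro dependency.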
cc @davidmorgan

This is related to recent discussions about build-time vs run-time deps etc. as well. If we mark macro imports specially, we could also potentially say that the things from those imports are only available at compile time. And also possibly add some extra rules to make the assumptions about these identifiers safe (such as saying you can't shadow declarations from macro imports). And if we get reflected imports (which I think we do need, or at least an equivalent of those), then we could also specialize those and say they are added to the runtime dependencies of any library that imports the macro.

That is all a bit hand-wavy to be sure, but I think it might be an interesting idea to flesh out?
Thanks Lasse, thanks Jake. I am starting to think we will need to pull considerable metadata related to macros out of the Dart files and into pubspecs. If Dart only lets you discover what needs doing during the compile, then that severely limits the possible parallelization across processes/machines.

Pretty much what I would like is that you can turn the set of pubspecs (--> package configs, really) for your program into a set of build steps--command lines--to run that have all macro compiles and macro invocations already accounted for. If we can't do that fully, we need to understand the difference and how expensive it could be for build performance.
I would say that the current spec is actually very highly parallelizable, without any configuration. You can discover macros via parsing only, which is a key part of this. Parsing itself is highly parallelizable, so macro discovery is as well. Parsing also gives you a dependency graph, etc., so you can still order your macro compilation and application steps early.

For me, what pubspec configuration would primarily provide is the ability to share compilation of macros across tools, which would have nice benefits, but I don't believe it is a necessity; we already duplicate all other compilation work across all tools today.
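The ordering claim here can be made concrete: parsing alone yields an import graph, and any topological order of that graph is a valid schedule in which every macro is compiled before anything that applies it. A minimal model (Python; the library names and edges are invented for illustration):

```python
from graphlib import TopologicalSorter

# Import graph discovered purely by parsing: each library maps to the
# libraries it imports. No semantic analysis has happened yet.
imports = {
    "app.dart": {"macro_a.dart", "util.dart"},
    "macro_a.dart": {"macro_b.dart"},
    "macro_b.dart": set(),
    "util.dart": set(),
}

# A topological order gives a schedule in which every library (and thus
# every macro) is compiled before anything that imports it.
order = list(TopologicalSorter(imports).static_order())
print(order)  # dependencies always precede their importers
```

Libraries with no path between them (here `util.dart` and the macro libraries) have no ordering constraint, which is exactly where the parallelism comes from.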
I'd really prefer if you don't need to do anything special in the The dev-dependencies are only needed to run things outside of What's special about macros may be that they are needed during compilation, but their code is not linked into the resulting program. And that's where we need to be clear about separation. A macro import+application happens before other imports are considered, and is not included in the resulting program. As Jake says, if we can detect by parsing that something is a macro import, we know that it needs to be treated as such.

If a macro library also needs some names to be available at runtime, it needs to inject an import into the library. Or have a way for an augmentation file to (What if an augmentation file had two kinds of imports: local imports which are not shared outside of the file, and global imports which are library-wide? We can make either the default, as long as we can write the other.)
Thanks. Yes, "macro import" would achieve the same separation of deps, and if it's the only way to get the separation, I'll take it ;)

But then there is extra work around the UX for a new type of import: a user has to know that they import macros in a special way, but not annotations or other const expressions--why? They are unlikely to want to care. The macro declaration is a natural place to put utility methods intended to be used at runtime with the macro: so now the user has to know to "import macro" and "import" the same file...?

I'm pretty convinced macro implementations don't belong in the same file as the user-facing API. Separating them out is a better user experience: it hides the internals, and the user-facing import can add those runtime utils in a non-confusing way. It fixes the imports being mixed up, and it turns the implementation into normal Dart code. The closer macros are to normal Dart code, the more standard tools, e.g. test libraries, can just be directly used.

If not pubspec.yaml (actually: the package config generated from pubspec.yaml), the metadata could go in annotations. But what we should plan to have either way is a tool that processes a Dart codebase quickly and outputs the metadata you need if you're going to build (or analyze!) with macros in a modular way. As long as that processing (not: "analysis") is fast and absolutely never launches macros, I'm happy ;)

Incidentally, I thought of another type of metadata we could want: restrictions on what the macro can do. E.g. if a macro declares "I will not introspect outside the library I'm applied to", that's good to know ahead of time. We could determine that during compilation, but that's too late; we want to know ahead of time so we can simplify the build graph (caching+invalidation). Then we enforce the restriction during the build.
Re: pulling in additional imports, pulling in imports dynamically as a result of macro execution is a problem: bazel, for example, can't support that at all, by design; the build graph has to be knowable before any build actions. So we need a way to compute the maximum possible import set ahead of time. It's okay if the macro can then decide not to use it and we e.g. tree shake later.
I do think the current design is a pretty optimal user experience. You don't have to think about macros differently from normal code; you only need the one import, which can provide both the macro and its runtime deps, and clicking on a macro navigates you directly to its implementation, which is IMO a benefit, not a downside. It does have some implications for builds, and the potential of compile-time deps "leaking" into the runtime deps of a program.
This is only relevant, I think, for something like bazel, where the deps all have to be known and provided ahead of time. So I think it is also appropriate for this to be a bazel-specific optimization, not something that leaks into external Dart apps.
Many modular build systems probably do want static configuration, at least as an optimization. But this can (and likely should) be bespoke (i.e. a dart_macro_library for internal bazel rules). For build_runner, we already have the infrastructure to parse the world and collect a bunch of information quickly in the way you describe (not actually in parallel today, though it could be).
Thanks Jake. Re: "only relevant for something like bazel"--I can imagine macros declaring they will play nice being useful for the analyzer+CFE, too. But all that we can build as needed / later. Yes, bespoke modular builds will need bespoke work--I think all we should provide from our side is a nice tool / library to get the information; it'd be nice if folks don't have to reinvent the parse+extract part :) ... and even nicer if they can just directly pass whatever the tool spits out to our other tools.
That's the part I cannot believe. The entire import containing the But if that library is also imported normally into the macro application library, then every dependency used by the macro implementation, including

I think a stronger separation of phases would be better. Maybe that can be in the macro implementation library instead of the macro application library, but just putting
Fwiw, I have agreed from the start that I don't like the mixing of runtime/compile-time deps here. I just couldn't get alignment on a fix that keeps the user experience I want, and in particular that makes testing reasonable. Macros are not fundamentally different from any other annotation today in terms of this, though; those have exactly the same issue. It should be trivial to tree shake classes which are only referenced via annotations, and their associated deps. But it does make me uncomfortable to rely on that, and it also implies more compile-time work to actually achieve that.
Would a general purpose The analyzer team is also shipping a separate mechanism for importing things specifically to reference them in docs. Could a language level

cc @bwilkerson
Why would we not have any desire to force it for consts? Because suddenly you would have to both And this same problem would apply to libraries declaring macros. It will be very common to provide both a macro and some code and/or deps that are additional to the macro, to help the user use it and for use in the implementation. So you would have to both

The important distinction for macros is not "used only at compile time"; it's "goes in the macro program" as distinguished from "goes in the user program". I remain convinced ;) that the right way out of this is to physically split the macro program and user program by putting the macro implementation elsewhere.

For dartdoc, the import is not an import; it's only for the dartdoc tool. I think it belongs in metadata such as an annotation or comment. Thanks :)
Fwiw, annotations today do have the exact same issue as macros in terms of mixing compile-time and runtime dependencies (modulo dart:mirrors, but the VM could choose to just have compile-time deps available, I suppose, to support reflecting on annotations). It is possibly easier to not do the wrong thing with an annotation, but nothing prevents them from having tons of deps, just due to a pathological package structure. So, I do think that an
A compile-time-only import, which can only be used for annotations and dartdoc, could be useful. That would still make a distinction between annotations that are compile-time only and those that can survive until runtime. Even if an import is compile-time only, the names it introduces should probably still be counted as being there, and can conflict with "normal" imports; you're just not allowed to refer to them. Maybe such an import should always be prefixed, so it's:

```dart
import "helper.dart" as static $;

@$.Annotation(42)
```
Unfortunately, conflating imports for dartdoc with imports needed for the compile would severely threaten the performance, and even the feasibility, of the Dart compile. By their very nature, dartdoc "imports" are likely to introduce dependencies that you would never allow in the build: it's useful to mention symbols from a wider scope, introducing all kinds of circular dependencies and layering violations. They would be bad for performance and would break any build system that prohibits circular dependencies, such as bazel. Modular builds would be forced to keep track of which static imports are just for dartdoc in order to ignore them.

Build graphs of large projects already require a lot of maintenance work / manual pruning just to keep the build viable. Any feature that causes over-globbing risks making those builds not viable, and would certainly land back on us as additional issues filed like "this workflow / whole-program compile is too slow, please optimize". What we need is that, with as little fuss as possible, all deps end up in buckets that say precisely when they are needed :)
Like, a macro library? Fwiw, this proposal more directly and more efficiently solves the problem of mixed runtime/compile-time deps. It is not possible to write a macro that is still there at runtime with this feature, unlike all the other proposals. The user experience is also better IMO, for both the consumer and the producer--modulo testing getting a bit more difficult.
Thanks Jake ... taking a closer look at "macro library" is instructive; I now think it is unfortunately simpler than what we need. To see this, consider two nested macros: macro A, which in its implementation code applies macro B as a fully hidden implementation detail. Macro A and macro B are both "macro libraries", but from the point of view of macro A, macro B's implementation code is different to code that it uses directly. They are both "compile time" dependencies, yes, but it's a different compile, a different program.

There are not just two programs, the user program and the macro program: there are an unlimited number of programs, one per macro, with an arbitrary dependency graph between them. And you want to be able to "unwrap" macro code, for example for testing, and run it as "user" code while keeping more-nested macro code as "macro" code.

I suspect that the simplest concept that is sufficiently powerful here gives us, without too much fuss, a build graph in which "applies as a macro" is a different kind of dependency. One particular file might be used only in the user program, or in the user program + the macro A program, or in just the macro A program, or in the macro A program and the macro B program. Any build system that is modular and/or incremental can reasonably want to treat these cases differently, and wants all of this information to be readily available. Once the build graph is available to tools, "compile time only" could maybe be implemented as a lint that you would suppress to run the macro code as user code for testing :)
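The multi-program picture sketched in this comment can be modeled with labeled edges: a plain import keeps a file inside the current program, while an "applies as a macro" edge roots a separate program at the macro's entrypoint. A toy model (Python; the file names and edges are hypothetical, chosen to mirror the macro A / macro B example above):

```python
# Each edge is (source, target, kind); kind is "import" or "macro".
# A "macro" edge means the target is applied as a macro: its code belongs
# to a separate macro program, not to the program of the source file.
edges = [
    ("main.dart", "macro_a.dart", "macro"),
    ("main.dart", "util.dart", "import"),
    ("macro_a.dart", "macro_b.dart", "macro"),
    ("macro_a.dart", "codegen_helpers.dart", "import"),
]

def program_sources(entrypoint: str) -> set[str]:
    """Source set of one program: follow only plain imports transitively."""
    sources, stack = set(), [entrypoint]
    while stack:
        f = stack.pop()
        if f in sources:
            continue
        sources.add(f)
        stack.extend(t for s, t, kind in edges if s == f and kind == "import")
    return sources

# Three distinct programs, each with its own source set:
print(program_sources("main.dart"))     # user program
print(program_sources("macro_a.dart"))  # macro A's program
print(program_sources("macro_b.dart"))  # macro B's program
```

Note how `codegen_helpers.dart` appears only in macro A's program, never in the user program: that is exactly the compile-time/runtime separation the thread is after, and a build system can read it straight off the edge kinds.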
Fwiw, I would like this to be supported via reflected imports (this would replace the resolveIdentifier API). So if a macro is only emitting another macro annotation into code, we would have a separation there, if we also had that feature. If a macro wants to directly invoke another macro, I think we would say that "macro libraries" are allowed to do that, unlike regular libraries (I can't recall if the proposal covers this already). Maybe then macro unit tests become "macro libraries" too, but I haven't had a chance to fully explore this.
I think reflected imports are different to what I meant--but they're another good example of a case where a new type of dependency is being introduced that build systems might want to handle differently for performance. I think we will want to surface this at the package metadata level, too. (Similarly: bazel will need them.)

Back to what I was rambling about ;) ... really the key difference here is that we now have multiple entrypoints in the same program. We have the main entrypoint, and zero or more macro entrypoints, each of which corresponds to a separately-compiled program; all the programs are different and they almost always have different source sets. Then there is a dep graph between the entrypoints: to compile this entrypoint you must first compile this one, and for this one you need this other one, and so on.

Currently the source sets are usually nested: if a program uses a macro, all the sources in the macro program are in the user program, whether they are needed or not; unless conditional imports are used--then they can be different, and you can have macro source that is not user source.
Closing in favour of #3728 which looks like the route we'll take here. |
@davidmorgan, as per #3728 (comment), that issue focuses on a different aspect of the macro proposal. This issue is about introducing a unique syntax for macro metadata to be used by macro users (e.g., `@!foo`).

Could you reopen this issue, or was there a definitive decision made not to consider it? Unfortunately, I don't have the time currently to contribute extensively to the discussion, but I still believe this is an approach that would lead to fewer issues long term. It seems to me that being explicit about macro applications has been very successful in the Rust ecosystem, and not doing so is a common annoyance in the Java ecosystem.
Thanks @modulovalue. We are not currently considering any approaches that involve new syntax or anything else new; rather, we are working towards #3728, which reduces the difference between macros and annotations, making macros just annotations with special properties. We would have to change direction from #3728 for there to be any possibility of movement on this issue, which is why I closed this in favour of that. [edit: fix the link to be to the correct issue]
Currently, the macro proposal reuses metadata syntax (e.g., `@foo`) for marking declarations as being eligible for macro expansion. This invalidates an important invariant that users of Dart are used to: metadata can only affect declarations in ways that any other declaration outside of a library can. That is, if we see some metadata on a declaration, we can assume that there's nothing "funky" going on. Third-party tooling might generate some code, but we can assume that there is no surprising behavior that changes the behavior of the annotated declaration.

I think that this is valuable, so I'd like to propose that the macro proposal introduce a new type of metadata: "macro metadata" with custom syntax (e.g., `@!foo`; that is, a `!` needs to follow the `@` for metadata to become macro metadata). Only declarations marked with macro metadata would be eligible for macro expansion. This allows tooling to skip macro expansion processing on declarations that have no macro metadata, and users can immediately see which declarations can be expected to be affected by macro expansion.