Non-nullable type operator #199
Conversation
I think we'd need to use something that we could call "legacification" in order to obtain the desired subtyping properties for legacy types. This would happen mostly outside this document, because we would produce expression types that are fully "legacified", but it does affect the subtyping relationship in a few cases (such as interface types; e.g., with `class IntList implements List<int>` I think `IntList*` should have `List<int*>*` as superinterface, not `List<int>*`).
- if `T0` is `Null`, `dynamic`, `void`, or `S?` for any `S`, then the
  subtyping does not hold (per above, the result of the subtyping query is
  false).
- Otherwise `T0 <: T1` is true.
I can understand why it may seem attractive to use the "when we have chosen a rule, we know we won't need to try any other rule" property, because we may then have negative rules ("does not hold").

However, we could stick to a positive model and I believe it wouldn't have worse performance: if `T0` is `Function`, a function type, a parameterized type, `T*`, or `T!`, then `T0 <: Object`. I'd expect the representation of types to allow for an O(1) test to recognize that a given type has one of these shapes.

Maybe it's just me, but I suspect that the ruleset as a whole will be a bit more readable if we try to make it order-independent, especially if it doesn't cost anything.
> However, we could stick to a positive model and I believe it wouldn't have worse performance:

By definition it has the same performance: we're specifying the same relation either way, so it admits the same implementations. The only difference is that the implementation has to rediscover that there is a total order on the checks.

> Maybe it's just me, but I suspect that the ruleset as a whole will be a bit more readable if we try to make it order-independent, especially if it doesn't cost anything.

Well... I rather suspect the opposite. But I'm happy to review an attempt to rewrite this in an order-independent way.
Drive-by comment: speaking as one of the people who is going to have to write the implementation, I prefer the order-dependent formulation, because it admits a straightforward direct translation to code. So I'll have more confidence that the implementation and the spec match.
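To make the "straightforward direct translation" concrete, here is a minimal sketch (with a hypothetical type representation, not the actual implementation) showing how the order-dependent rules become a sequence of early returns tried top to bottom:

```typescript
// Hypothetical representation of a few of the type shapes from the rules above.
type Ty =
  | { kind: "dynamic" }
  | { kind: "void" }
  | { kind: "null" }
  | { kind: "object" }
  | { kind: "nullable"; inner: Ty }  // S?
  | { kind: "legacy"; inner: Ty };   // S*

// Each spec rule is one early return; once a rule matches, no later
// rule is consulted, mirroring the order-dependent formulation.
function isSubtype(t0: Ty, t1: Ty): boolean {
  // Right Top: T0 <: dynamic and T0 <: void always hold.
  if (t1.kind === "dynamic" || t1.kind === "void") return true;
  // Right Object: fail for the shapes excluded above, succeed otherwise.
  if (t1.kind === "object") {
    return !(t0.kind === "null" || t0.kind === "dynamic" ||
             t0.kind === "void" || t0.kind === "nullable");
  }
  // Left Null: Null is a subtype of Null, S?, and S*.
  if (t0.kind === "null") {
    return t1.kind === "null" || t1.kind === "nullable" || t1.kind === "legacy";
  }
  // ... remaining rules elided.
  return false;
}
```

Because each rule is tried in spec order, checking the implementation against the spec is a line-by-line comparison.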
- If `T1` is `FutureOr<S>` for some `S`, then the query is true iff `Null <: S`.
- If `T1` is `Null`, `S?` or `S*` for some `S`, then the query is true.
- Otherwise, the query is false.
This time the positive form is actually simpler:

- if `T1` is `Null`, `T?`, or `T*`, the query is true.
- if `T1` is `FutureOr<S>` for some `S` and `Null <: S`, the query is true.

We may then state that 'otherwise the query is false', but that's understood in several other rules so we could instead rely on the general promise that "no other rule needs to be considered".
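The positive form can likewise be rendered directly (a sketch over a hypothetical representation); note that `FutureOr` makes the check recursive:

```typescript
// Hypothetical representation of the type shapes relevant to Null <: T1.
type Ty =
  | { kind: "null" }
  | { kind: "nullable"; inner: Ty }  // T?
  | { kind: "legacy"; inner: Ty }    // T*
  | { kind: "futureOr"; arg: Ty }    // FutureOr<S>
  | { kind: "other" };               // any shape not matched by a rule

// Null <: T1, written so that each positive rule stands alone and the
// order in which the rules are tried does not matter.
function nullIsSubtypeOf(t1: Ty): boolean {
  switch (t1.kind) {
    case "null":
    case "nullable":
    case "legacy":
      return true;                    // first rule of the positive form
    case "futureOr":
      return nullIsSubtypeOf(t1.arg); // Null <: FutureOr<S> iff Null <: S
    default:
      return false;                   // the implicit "otherwise false"
  }
}
```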
Ack. I don't think this holds for some other rules though.
- **Left and Right Legacy** `T0` is `S0*` and `T1` is `S1*` and `S0 <: S1`.
- **Left Legacy** if `T0` is `S0*` then:
  - `T0 <: T1` iff `S0! <: T1`.
We discussed examples like this one:

```dart
// Opt-in library 'optin.dart'.
List<int?> foo() => [];
void bar(List<int> l) {}
```

```dart
// Legacy
import 'optin.dart';

main() {
  var xs = foo();
  bar(xs);
}
```
Presumably, the inferred type of `xs` could (naively) be `List<int?>*`, or we could apply a deep "legacification" and make it `List<int*>*`.

If we use the former then we'd later ask `int? <: int` and get no, and I guess `bar(xs)` should not be rejected.

So do we assume such a legacification when we consider these rules?
With contravariance and function types we get a few more pieces of the required properties of legacification:

```dart
// Opt-in library 'optin.dart'.
void Function(int?) foo() => (int? i) {};
void bar(void Function(int) f) {}
```

```dart
// Legacy
import 'optin.dart';

main() {
  var f = foo();
  bar(f);
}
```
The inferred type of `f` would have to be `void* Function(int*)*` rather than `void Function(int?)*` in order to allow `bar(f)`.

Of course, we'd have to deeply legacify every expression type in legacy code in order to get the required permissiveness.
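The deep legacification being discussed could be sketched as a recursive rewrite (hypothetical representation and names, not the proposal's actual machinery): replace every nullability suffix with `*`, including inside type arguments and function types, so the `List<int?>` from the example becomes `List<int*>*`.

```typescript
// Hypothetical type representation: named/interface types, function
// types, and the ? / * nullability suffixes.
type Ty =
  | { kind: "name"; id: string; args: Ty[] }  // e.g. List<int>
  | { kind: "fn"; ret: Ty; params: Ty[] }     // e.g. void Function(int)
  | { kind: "nullable"; inner: Ty }           // S?
  | { kind: "legacy"; inner: Ty };            // S*

// Deep legacification: wrap every position in exactly one * suffix,
// discarding any ? or * already present.
function legacify(t: Ty): Ty {
  return { kind: "legacy", inner: stripAndRecurse(t) };
}

function stripAndRecurse(t: Ty): Ty {
  switch (t.kind) {
    case "nullable":
    case "legacy":
      return stripAndRecurse(t.inner);  // drop the old suffix
    case "name":
      return { kind: "name", id: t.id, args: t.args.map(legacify) };
    case "fn":
      return { kind: "fn", ret: legacify(t.ret), params: t.params.map(legacify) };
  }
}
```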
Right. Examples like this don't get handled by the type system, they get handled by the demotion from errors to warnings in weak null checking.
  `S2*` for any `S2`)
- and `S0 <: S1`.
- **Right Legacy** if `T1` is `S1*` then:
  - `T0 <: T1` iff `T0 <: S1?`.
This would also be a case where legacification is needed in order to obtain the required permissiveness.
- `T0 <: T1` iff any of the following hold:
  - either `T0 <: Future<S1>`
  - or `T0 <: S1`
  - or `T0` is `S0!` and `Object <: S1` or `S0 <: T1`
OK, in this case the implicit "otherwise false" clause is actually significant: we could prove the `S0 <: T1` case using `Left non-nullable`, but we still have to have it here in order to know for sure that we're done inside 'Right nullable'.

We might want to flag `S0 <: T1` as "near-admissible" in order to keep track of the fact that this particular case of redundancy is intentional.
Axioms for `Null` and `Object`:

Left non-nullable `Null`:
- `Null! <: T`
This would allow us to prove `Never <: Null!` and `Null! <: Never`, but not `Null! == Never`. I guess that's because you think it would actually be more trouble and not more useful to introduce some sort of normalization?
I don't know what you mean by `Null! == Never` other than that they are mutual subtypes, which this allows us to prove. I'm open to trying to specify a normalization process that reduces mutual subtyping to normalization + syntactic equality, but such a normalization process, if it exists, is an implementation strategy. Unless you're proposing to define an equality that is more restrictive than mutual subtyping? Note that if not, then your normalization strategy needs to deal with things like `X extends Never` and `Y extends Never`, which are mutual subtypes. Probably that means that you have to normalize them to `Never`? In any case, lots of bits to work out.
The declarative legacy subtyping rules are derived from the possible completions of the legacy type to a non-legacy type.
I would expect that we would have a situation where legacy types should not arise, except as "consistently legacy"; so we wouldn't encounter `List<int>*`, only `List<int*>*`, and we shouldn't encounter `Function(int)*`, only `dynamic* Function(int*)*`.

This would presumably also have implications for subtyping: `class IntList extends List<int>` would mean that `IntList*` has `List<int*>*` as a supertype, and similarly for other shapes.
Closing this out, since we're not doing this. |
This fleshes out the equational theory of a `!` operator which cancels out with `?`. So the `T!` type is interpreted as `T & Object` for the non-nullable `Object` type, and the obvious equations hold: `T?! == T!! == T!` and `T!? == T?? == T?`.

Not for landing yet, since it is under discussion as to whether we want this feature.
cc @lrhn @eernstg @munificent @stereotype441 @kmillikin