C# Language Design Review, Apr 22, 2015 #3910
Comments
Agreed: #1648 (comment). Perhaps in that case IDE support for attributes should also be improved; it could be part of the #711 story. At the moment all assembly-level attributes are effectively hidden.
> Expression Trees

I agree that …

> Nullable reference types

An extreme effort is being made not to change the CLR, but I started thinking like this: in this case, reference types are the dual of value types, so non-null references could be expressed through a wrapper, much as `Nullable<T>` wraps value types. That wrapper could be something like this: …
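A hedged guess at the kind of wrapper being described, dual to `Nullable<T>` (the names and shape are assumptions for illustration, not the commenter's actual code):

```csharp
using System;

// Hypothetical sketch only: a struct wrapper that is the dual of Nullable<T>,
// guaranteeing a non-null reference at construction time.
public struct NonNull<T> where T : class
{
    private readonly T value;

    public NonNull(T value)
    {
        if (value == null) throw new ArgumentNullException(nameof(value));
        this.value = value;
    }

    public T Value => value;

    // Implicit unwrap back to the plain reference type.
    public static implicit operator T(NonNull<T> source) => source.value;
}
```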
Unless something has changed recently, all I've heard about non-nullability is that the types would remain exactly as they are and that attributes will be used to notate whether a parameter (or whatever) may be null:

```csharp
public char FirstCharacter(string! value) {
    return value[0];
}
```

would be:

```csharp
public char FirstCharacter([NotNull] string value) {
    if (value == null) throw new ArgumentNullException(nameof(value)); // potentially emitted by the compiler
    return value[0];
}
```
@HaloFour, the real problem here is that there is no way (so far) to have a non-null default value for reference types. What you are showing in your very small example can be achieved by code contracts. More complex examples would require null checks on all assignments of …
We still need: …
@paulomorgado Indeed, but an intermediate …

I think I'm in the two-type camp, but I'd prefer …

@whoisj Does the RAII syntax really buy anything beyond what …?
@HaloFour The RAII style could imply a local …
@whoisj …
@vladd I do not think RAII is really possible in C#, at least not without a lot of heavy lifting that makes it hardly worth talking about (at least for now). No, my point was that RAII semantics can work here, using something like … It's just semantics.
+1
Not if they opt out. This could be suggested by the IDE or compiler.
Exactly. I have tried using JetBrains' … Why would you want to use …?

PS: JavaScript managed to cope: http://www.w3schools.com/js/js_strict.asp
For the null checking feature the default would be not-null. You would use a `?` on your type to indicate that it can have null values. There would be no `!` syntax.
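A sketch of those semantics (illustrative only; this reflects the proposal as described here, not shipped C# syntax at the time):

```csharp
// Illustrative sketch of the proposed defaults; the warning in the comment is
// the described behavior, not actual compiler output.
class Declarations
{
    string name = "x";        // reference types are not-null by default under the feature
    string? maybeName = null; // '?' opts in to null; assigning null here is fine

    // string other = null;   // would produce a warning: null assigned to a non-null reference
}
```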
-1
Thus creating a dialect. Issue #3330 was explicitly closed by @gafter because it proposed adding such options to make …
Because that is the language semantics over six versions; it's what people will expect and what the vast majority of shared knowledge on the subject specifies. I agree that in the majority of cases the developer probably does not expect the value to be null.
JavaScript has done a massive number of really stupid things over the years. But it also started out in an exceptionally bad place. Despite all of this, the latest browsers will still happily execute the most asinine JavaScript 1.0.
I noticed that from the proposal. I voiced my preference for the opposite two-type system, although as stated I think both have their problems, and I'm less strongly entrenched in my position than I probably come across. How does the team intend to reconcile a semantic change of that nature given the other language decisions that have been made?
@HaloFour This is not directed at you, but I would argue that effectively killing Silverlight (which was used by enterprise, among others) was an actual, real breaking change, as were many other Microsoft decisions in the dark 2009-2014 period. Being brave enough to fix an early design mistake while still giving older projects a way to work is a different proposition.

I don't propose that pre-C# 7 versions stop working.
Yeah, but unlike Silverlight, it appears that MS wants to keep C# around for a while longer. This isn't a design mistake; it's a paradigm, and a very common one. Either way, this doesn't align with what has been stated by members of the team.

No, you propose that pre-C# 7 code stop compiling, or at least require changing dialects and fixing a lot of new errors.
Not really. The moment you open a legacy solution in Visual Studio 2015 / MonoDevelop, it will ask whether you want to upgrade to C# 7. If you pick 'Yes' it will then ask whether you want to opt in or out. If you pick 'Opt-out' it will then automatically modify your …
I cannot see someone going to the effort of downloading a new version of the compiler, skipping the mainstream IDEs, and then ignoring compiler output and failing to find out which compilation flag to use. Not sure about class-level, but they seem to have a perfect keyword already: …
I can easily see someone clicking through whatever they're asked, accidentally converting their project, and then wondering what the heck is happening when anything they try to write with the knowledge of the language that they have no longer compiles, or produces a plethora of new warnings. That person will find 13 years of existing C# knowledge on the Internet which will be counter to this behavior. C# 7.0 will simply be broken. The …
We are a bit behind on posting language design notes, and the following is not an excerpt from the notes, just me trying to explain from what I recall. The current thinking is that there will be an opt-in mode for compilation (the null checking feature won't be on by default). When on, potential null assignments to variables that cannot be null, or dereferences of variables that can be null, will generate warnings, unless the operations can be determined null-safe via flow analysis. If you upgrade your project to C# 7.0, this feature won't be on until you enable it manually. There will be separate controls to ignore null annotations in libraries, in case the libraries you use convert to exposing null annotations before you are ready. If libraries don't have null annotations (or you disable them), then the reference types from those assemblies will be treated as old-style references (we cannot draw inferences from them, and will not generate warnings). The null checking will only generate warnings, and explicitly nullable reference types will be encoded as custom attributes so signatures stay the same. The purpose of these annotations (and syntax) is to help you declare your APIs' intent to pass null values (or not) and to find bugs in your code. The compiler is not expected to prove that nulls cannot exist in non-null reference types (there are many edge cases with static constructors, array allocation and concurrency), and it will not optimize IL based on this.
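A sketch of the warning behavior described above (illustrative; the syntax follows the described design, and the diagnostics in comments are assumptions, not actual compiler output):

```csharp
// Illustrative sketch of the described opt-in checking with flow analysis.
class Sketch
{
    int Measure(string? text)
    {
        // return text.Length;  // would warn: dereference of a possibly-null value

        if (text != null)
        {
            return text.Length; // no warning: flow analysis proves text is non-null here
        }
        return 0;
    }
}
```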
@mattwar As a glorified analyzer, I guess that makes sense. It is certainly less disruptive than reversing the notation. I'd probably be a little less apprehensive if we didn't have six versions of C# semantics that will be somewhat altered by this change. Hopefully decent flow analysis will reduce the potential for unexpected warnings. Would that aspect of it be treated as a standard analyzer, complete with code fixes?
@mattwar It sounds like Code Contracts 2.0 with one attribute that has a one-character language-integrated syntax. Could you consider providing a way of declaring a language-integrated synonym for any attribute? The low guarantee of …
It almost seems appropriate to make it just an analyzer with an attribute, but we need the nullness to flow through the type system in order to do the null checks properly, so we've been thinking about it as part of the language.
It would be nice if one could hook into the type system via analyzers, in a manner similar to what these null checks require. I can imagine other useful checks that could be done with such analyzer capability: …
@dsaf, not yet, but it does make sense that we should consider compatibility with a possible contracts feature.
Regarding nullability, https://github.com/bkoelman/ResharperCodeContractNullability can be used today and may be of interest for this discussion. It empowers ReSharper's nullability analysis engine by reporting a diagnostic/fix on all applicable members that would benefit from annotation. This way, the user is guided in annotating his/her codebase (and keeping it annotated).
@bkoelman I tried those and they result in noisy signatures (so-called "ceremony") without giving any strong contract guarantees. From my personal perspective I would almost never use nullable arguments; that is why I am always arguing for opt-in nullability.
As a Swift user (which uses the "three-type" approach), I disagree. Granted, Swift is a bit different: it's a new language and it has the annotations swapped. Also, Swift has explicit syntax for dealing with nullable values, which is nice:

```swift
let optionalPerson: Person? = maybeGetPerson()
if let person = optionalPerson {
    // `person` is always set in this if
}
```

In my Swift app, I use variables with unspecified nullability for dealing with old (UI) frameworks. Those frameworks have weird initialisation where (non-null) fields aren't set in the constructor but later on. In practice, it is great to have the implicitly unwrapped optional (the so-called "third type"). Since these older frameworks have been in use for years and there's never any null, it's just not encoded in the types. As of Swift 2, Apple has gone through most of the existing libraries and has annotated all types with either …

Once all the code I'm using has been annotated, the experience using my existing code would be:

```csharp
string foo = SomeAnnotatedBclMethod();
// ^ Warning: Throwing away nullability information, use `string?` or `string!`
```

In my opinion, nullability should be a fundamental part of the language. That includes proper interop with existing libraries with unspecified nullability (requiring a "third type"). But that also means you might need to make some difficult decisions, e.g. adding something like Swift's …

Also, I made a picture! https://twitter.com/tomlokhorst/status/633282420259880960

EDIT: In principle, I dislike the idea of having a switch for two different dialects of the language, but it's arguable that it might be needed in this case. This switch could include a code converter, where this existing C# 6 code:

```csharp
string foo = GetFoo();
int x = foo.Length; // Might crash at runtime
```

gets translated to this C# 7 new-style syntax:

```csharp
string! foo = GetFoo();
int x = foo.Length; // Still might crash at runtime, but now the user can clean this up
```

Incidentally, Swift also includes a (mandatory) code converter. Between Swift 1 and Swift 2, the …
Well, as the person you quoted said: …

Things rarely need to be nullable, so …
My preference would be a switch that puts the language into null-safe mode, i.e. the two-type option where nullable reference types are indicated as `T?`.

Legacy assembly references consumed in null-safe mode would have a nullness-info wrapper generated automatically (either once-off and published, or generated at compile time) that does static analysis on the compiled IL to wrap the assembly usage in null-safe behaviour where possible. I would guess that for a useful proportion of methods, the correct nullness types can be derived. The aim of the nullness-info wrapper is to guarantee (within the type system) correctness for as large a portion of code as possible. So a legacy method compiled as returning …

Maybe additional annotation (either via attributes in the legacy assembly, or externally defined) could inform how the wrapper is generated. So if the ReSharper [NotNull] annotation is present, then the wrapper includes a runtime check if static analysis fails. The compatibility wrappers might work a bit like TypeScript type annotation files.

Code compiled in null-safe mode might target both a null-safe version of the runtime, with proper non-null reference types, and older runtimes through a wrapper assembly that includes runtime null checks. Nullness type safety must be baked into the runtime, with backward compatibility possibly involving a runtime performance cost.

I would prioritize some path that in the long term takes both the runtime and C# to the right place (which to me means the two-type approach, without explicit `!` annotations), even if that takes 10 years. The road there would be filled in with tooling, generated wrappers etc. to bridge existing compiled code both ways.
I'm OK with just code contracts and declaration expressions.
We do not really want a new kind of type. We want to be sure that a parameter, field, local variable, return value and so on will never be null when we use it. Therefore, in …

If a null-checked value is used, we know that we can use it safely without checking for null first. A warning should be issued if we use the null-conditional operator (`?.`) …
@MadsTorgersen I didn't know how this would be received, so I'm just writing it here as a comment on wire formats. As you mentioned, JSON is not only a wire format; it's now a successor to XML for configuration and similar use cases. My idea is a JSON-compatible syntax for lists and dictionaries:

```csharp
var dict = new {
    "list": [ 1, 2, 3 ],
    "key1": "value",
    "key2": {
        "key1": "value",
        "key2": 123
    }
};
```

The whole expression returns a …
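For comparison, the closest equivalent with today's C# 6 collection and index initializers (a sketch assuming the literal desugars to nested dictionaries and lists; the target types are my assumption, not part of the proposal):

```csharp
using System.Collections.Generic;

class WireSketch
{
    // Sketch of what the proposed literal might desugar to with existing syntax.
    static Dictionary<string, object> Build()
    {
        return new Dictionary<string, object>
        {
            ["list"] = new List<object> { 1, 2, 3 },
            ["key1"] = "value",
            ["key2"] = new Dictionary<string, object>
            {
                ["key1"] = "value",
                ["key2"] = 123
            }
        };
    }
}
```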
How will static types materialize? There's no standard format for JSON typing, so even if .NET/C# were to adopt one, it wouldn't do much good since most JSON is provided without a schema. So I assume the plan is to infer the type from sample JSON like everyone else, right? That being the case, how will C# bootstrap this process? Will you rely on tooling, such as VS, to generate types based on a JSON URI? If C# were to provide structural typing a la Gosu, you could generate a nesting of structural interfaces mirroring a JSON object and back the interfaces with a late-bound dictionary object, probably via the dynamic typing already built into the language. It seems to me a nesting is necessary given the anonymous nature of nodes in a JSON object tree. Otherwise, how will you distinguish between identically named fields having different inferred type structures? Just thinking out loud here as I also tackle this subject independently. Cheers.
Design notes have been archived at https://github.com/dotnet/roslyn/blob/future/docs/designNotes/2015-04-22%20C%23%20Design%20Review.md but discussion can continue here.
C# Language Design Review, Apr 22, 2015
Agenda
See #1921 for an explanation of design reviews and how they differ from design meetings.
Expression Trees
Expression trees are currently lagging behind the languages in terms of expressiveness. A full-scale upgrade seems like an incredibly big investment, and doesn't seem worth the effort. For instance, implementing `dynamic` and `async` faithfully in expression trees would be daunting.

However, supporting `?.` and string interpolation seems doable even without introducing new kinds of nodes in the expression tree library. We should consider making this work.

Nullable reference types
A big question facing us is the "two-type" versus the "three-type" approach: We want you to guard member access etc. behind null checks when values are meant to be null, and to prevent you from sticking or leaving null in variables that are not meant to be null. In the "three-type" approach, both "meant to be null" and "not meant to be null" are expressed as new type annotations (`T?` and `T!` respectively) and the existing syntax (`T`) takes on a legacy "unsafe" status. This is great for compatibility, but means that the existing syntax is unhelpful, and you'd only get full benefit of the nullability checking by completely rooting out its use and putting annotations everywhere.

The "two-type" approach still adds "meant to be null" annotations (`T?`), but holds that since you can now express when things are meant to be null, you should only use the existing syntax (`T`) when things are not meant to be null. This certainly leads to a simpler end result, and also means that you get full benefit of the feature immediately in the form of warnings on all existing unsafe null behavior! Therein of course also lies the problem with the "two-type" approach: in its simplest form it changes the meaning of unannotated `T` in a massively breaking way.

We think that the "three-type" approach is not very helpful, leads to massively rewritten over-adorned code, and is essentially not viable. The "two-type" approach seems desirable if there is an explicit step to opt in to the enforcement of "not meant to be null" on ordinary reference types. You can continue to use C# as it is, and you can even start to add `?` to types to force null checks. Then when you feel ready you can switch on the additional checks to prevent null from making it into reference types without `?`. This may lead to warnings that you can then either fix by adding further `?`s or by putting non-null values into the given variable, depending on your intent.
s or by putting non-null values into the given variable, depending on your intent.There are additional compatibility questions around evolution of libraries, but those are somewhat orthogonal: Maybe a library carries an assembly-level attribute saying it has "opted in", and that its unannotated types should be considered non-null.
There are still open design questions around generics and library compat.
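The assembly-level "opted in" attribute floated above could be sketched like this (the attribute name is hypothetical; no such attribute existed at the time of these notes):

```csharp
using System;

// Hypothetical opt-in marker: a library carrying this attribute tells consumers
// to treat its unannotated reference types as non-null.
[AttributeUsage(AttributeTargets.Assembly)]
public sealed class NonNullTypesByDefaultAttribute : Attribute { }

// A library that has opted in would declare, e.g. in AssemblyInfo.cs:
// [assembly: NonNullTypesByDefault]
```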
Wire formats
We should focus attention on making it easier to work with wire formats such as JSON, and in particular on how to support strongly typed logic over them without forcing them to be deserialized to strongly typed objects at runtime. Such deserialization is brittle, lossy and clunky as formats evolve out of sync; extra members, for example, aren't kept and reserialized on the other end.
There's a range of directions we could take here. Assuming there are dictionary-style objects representing the JSON (or other wire data) in a weakly typed way, options include:
We'd need to think about construction, not just consumption.
Maybe `Thing` is an interface with an attribute on it: … Or maybe it is something else. This warrants further exploration; the right feature design here could be an extremely valuable tool for developers talking to wire formats - and who isn't?
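One hedged guess at the shape being discussed, with a hypothetical attribute marking an interface as a typed view over the weakly typed dictionary (all names here are assumptions for illustration):

```csharp
using System;

// Hypothetical attribute; the binding mechanism is assumed, not specified in the notes.
[AttributeUsage(AttributeTargets.Interface)]
public sealed class WireFormatAttribute : Attribute { }

// An interface describing the expected shape of a JSON object, while the data
// itself stays in the underlying dictionary (so unknown members survive reserialization).
[WireFormat]
public interface Thing
{
    string Name { get; }
    int Count { get; }
}
```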
Bucketing
We affirmed that the bucketing in issue #2136 reflects our priorities.