
Please Consider Improving Project Format and Structure (Serialized POCO/Format Agnostic) #613

Closed
Mike-E-angelo opened this issue May 12, 2016 · 195 comments

@Mike-E-angelo

Going to try this again, but hopefully with a better defined ask. The request is not to support a particular format, but to improve the MSBuild system overall so that it can support any format, while also making it more accessible/discoverable by tooling and (possible) designers.

Based on the current climate of .NET Core having its project.json merged back into MSBuild, this topic might get a little more interest and conversation. Also, the Roslyn team is sending developers your way, so, there's that. :)

Use Case: Developers REALLY like their file formats!

Problem(s):

  1. MSBuild format is rooted in an arbitrary, not well-defined/known schema (difficult to explore and discover).
  2. MSBuild project structure can only be defined/edited in XML.
  3. MSFT development ecosystem (for better or for worse) is now comprised of two camps: web application developers and native application developers; each camp has its own way of doing things and neither really likes the approach of the other.

Suggestions:

  1. Create a well-defined, well-known .NET object model that defines the Project Model (Project API). Visual Studio then loads (and saves) these as POCOs on disk. In this design, there are no arbitrary schemas or data files, but 100% serialized POCOs which are read and saved to disk. Another issue on Roslyn's board goes into more detail around this.
  2. Adopt a "Bring Your Own Serializer" strategy. Adding a new format/type is as simple as installing a Visual Studio extension (or even auto-detected and installed for you upon detection/initial file load). To start with, JSON5 and XML should be supported out of the box, but developers should be able to bring on any format they wish, such as Xaml, Yaml, etc.
  3. Allow (optional) naming extensions for projects to help identify stored format. Examples:
  • MyProject.csproj.json5 <-- C# Project serialized as JSON5
  • MyProject.vbproj.xml <-- VB.NET Project serialized as XML
  • MyProject.fsproj.xaml <-- F# Project serialized as XAML (OH YES I WENT THERE!!!)

Those are off the top of my head to get the conversation started. All of these are open to feedback of course and I would love to hear the thoughts of developers around this. Thank you for any consideration and/or support!

@SolalPirelli

I think you're vastly over-estimating the demand for different file formats. Who really cares about XML vs JSON? And more importantly, are there enough people who care enough to justify adding complexity to the entire build system?
The current problems with MSBuild files are historical: they're meant to be read/written by VS tooling exclusively (and not humans), they don't support packages as first-class citizens, etc. XML is not a problem.

@Mike-E-angelo

Thanks for your input @SolalPirelli. I am not sure what you mean by adding complexity. The goal of these solutions is to reduce complexity while also satisfying developer (and organizational) preferences. Even if the tooling is meant to be used by VS/IDE exclusively, that does not mean that humans don't get their proverbial hands dirty with it, and they do all the time. The process for doing so is very clumsy and awkward (which I believe you allude to via "historical").

I would also challenge you on demand for JSON vs. XML. What forums/tweets have you been reading? 😄

@SolalPirelli

I am not sure what you mean by adding complexity.

Adding support for multiple file formats necessarily increases complexity. It means adding a public API that third-party providers will use, which will definitely cause headaches when new items are added and it turns out providers were doing crazy things to enable DSLs.

I would also challenge you on demand for JSON vs. XML. What forums/tweets have you been reading?

If this is to be resolved via forums and tweets, we'll end up writing our config file in Go, JavaScript or Rust.
The exact markup format does not matter, what matters is that the resulting format is editable by both humans and programs, adapted to current needs, and prepared for future evolution. There's nothing in JSON that makes it clearly better than XML in that regard. If anything, JSON is worse because it lacks comments.

@aolszowka

I would also challenge you on demand for JSON vs. XML. What forums/tweets have you been reading?

I'll challenge you right back; never in a professional environment have I heard a cry for a switch to a JSON-based MSBuild from someone I would consider a professional developer (or Build Master).

I will agree with regard to documentation and ask that the current XML format be better documented; but as far as a switch to JSON goes, I see little to no technical gain. Why not take those resources that would be wasted on such a system and instead put them towards improving the MSDN docs?

As per Raymond Chen every feature starts out at -100; what are the gains that get us to Positive 100?

@Mike-E-angelo

If this is to be resolved via forums and tweets, we'll end up writing our config file in Go, JavaScript or Rust.

LOL!!!

I hear you. As for the API: yes, that is the goal here. To (ultimately) have a well-defined/documented/accessible project API that we all know and have access to, in case we want to know the file (project) we're actually describing. :)

When you open a .csproj file now... can you in all honesty say you know each and every element within it? The schema is difficult at best to discover and browse. Whereas if we were using a well-known API (.NET POCOs) then this becomes a snap.

If anything, JSON is worse because it lacks comments.

Agreed. But not everyone is in agreement with this. And also, I am suggesting JSON5, which allows comments. Finally, the serialization is intended to be an implementation detail, and not supposed to be something that is part of MSBuild, per se. It just has to support it.

@Mike-E-angelo

I'll challenge you right back; never in a professional environment have I heard a cry for a switch to a JSON-based MSBuild from someone I would consider a professional developer (or Build Master).

Well dudes... WHERE HAVE YOU BEEN IN MY LIFE FOR THE PAST YEAR?!?! LOL. I guess I have some baaaaaad luck then, because I have been harping against the project.json movement for over a year now and it has been an uphill battle, to say the least!

I personally am very much more in the XML camp (I would actually prefer to see Xaml), but I want to consider and be mindful of the developers who have been enjoying the shiny new toy of project.json for the past year.

@Mike-E-angelo

And also, I do not want to "switch to JSON" ... but to simply support it as a serialization mechanism. If we're working with well-defined/known POCOs from an API, what does it matter which format it is serialized/deserialized in?

@aolszowka commented May 12, 2016

When you open a .csproj file now... can you in all honesty say you know each and every element within it?

Yes; that is part of what being a Build Master and putting on your resume that you speak MSBUILD means. I know the community has had a recent influx of posers from the "DevOps" movement; but there are still the few who actually know what they're doing beyond a drag n' derp interface.

The schema is difficult at best to discover and browse.

This is the primer that anyone coming into MSBuild needs to start with: https://msdn.microsoft.com/en-us/library/0k6kkbsd.aspx. Reading Sayed Ibrahim Hashimi's "Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build" should also be required for anyone claiming intimate knowledge.

WHERE HAVE YOU BEEN IN MY LIFE FOR THE PAST YEAR?!?!

Monitoring the progress of MSBuild's open sourcing; trying to assist on StackOverflow under the MSBuild Tag (when Sayed isn't beating me to the punch). #JustBuildMasterThings

I want to consider and be mindful of the developers who have been enjoying the shiny new toy of project.json for the past year.

I see no reason why both projects cannot coexist; each project should be allowed to fail or succeed based on its technical merits. Instead of chasing tail lights, I personally feel that resources are better spent on improving what we have if there is a clear technical gain. That being said, I'm not the Product Owner, nor even a Developer, just a well-informed end user.

but to simply support it as a serialization mechanism

Again what is accomplished from this?

@shmuelie

I personally think what we need is a good way for the two build systems to share data, so that MSBuild can import data from a JSON project file.

@Mike-E-angelo

Yes; that is part of what being a Build Master and putting on your resume that you speak MSBUILD means

Wellllll that's all nice and good, and it is great to see all your investments/commitments to becoming/being a Build Master, but for a typical developer who simply wants to add a new task to their project to do x, y, z, their objective is to get those tasks in and get back to developing code, not spending a week in required reading to figure out the simplest of features.

Again what is accomplished from this?

You're asking me what is accomplished by allowing developers to use the format they most prefer to work in? 😄 Web developers REALLY like their JSON and native developers really like their XML/Xaml. Soooo...

And of course, there are other formats altogether that developers enjoy. By embracing and enabling support for them (which can be done on a 3rd party basis, not from MSBuild itself) you encourage adoption.

@SolalPirelli

Everybody agrees that we need a less arcane format for build files. I can't speak for Microsoft, but it does seem like they want to move in this direction as well.

What you don't seem to understand is that one can't just "enable support for 3rd-party providers" by flipping a switch somewhere. Creating a flexible enough API, documenting it and versioning it is far from trivial, MSBuild or otherwise.

@aolszowka

typical developer who simply wants to add a new task to their project to do x, y, z

It would be helpful to understand what you're asking the project to do. Looking at project.json (I had never heard of this project until today) I only see limitations on what I'm able to do. For one, there is no readily apparent ability for me to extend their project system with custom tasks (something anyone who utilizes MSBUILD will eventually do).

The concept of native NuGet package management seems nice, but is hardly novel.

not spending a week in required reading to figure out the simplest of features.

Again, could you tell us which features you feel are hard to discover and would take a week of reading?

In reality, a significant portion of developers will never find themselves editing the csproj or any other MSBuild-based file by hand. The vast majority of them are interacting with the system via Visual Studio. Those that are editing these files generally are reading the documentation or are using wrappers around the system such as TeamCity, Jenkins, or Team Foundation Server (TFS/Visual Studio Online, whatever you wanna call it these days).

If you're attempting to target an audience that is not using Visual Studio then they should be encouraged to seek out the recommended project system of choice for their environment.

most prefer to work in?

I'm asking what these developers are doing and why they're not using the tooling provided to them?

By embracing and enabling support for them (which can be done on a 3rd party basis, not from MSBuild itself) you encourage adoption.

This is great in theory; however, in practice it results in a contract being formed between Microsoft and these 3rd parties. Raymond Chen speaks at length about such subjects, as they are a recurring theme in the history of Microsoft-developed products. Offering such a system results only in additional technical debt, and unless there is an extremely compelling reason, most teams are wise to not take on such debt.

But back to your original post, as I feel we're far off topic:

MSBuild format is rooted in an arbitrary, not well-defined/known schema (difficult to explore and discover).

You need to qualify what you mean by this; the schema provides an XSD by which the project file can be validated (http://schemas.microsoft.com/developer/msbuild/2003). This doesn't help you if you're using custom tasks, for which you should have provided the XSD when you wrote the tasks. Many community-based projects such as the MSBuild Extension Pack and the MSBuild Community Tasks do this.
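
For illustration, here is a minimal sketch of mine (not from the comment above) of a project file carrying that schema namespace in its root element, which is what XML-aware editors key off for validation and completion:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- The xmlns above ties the file to the published XSD. -->
  <PropertyGroup>
    <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="Program.cs" />
  </ItemGroup>
  <Target Name="Build">
    <Message Text="Building in $(Configuration)..." Importance="high" />
  </Target>
</Project>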

The format is well documented on MSDN (https://msdn.microsoft.com/en-us/library/0k6kkbsd.aspx); an exploration of the project syntax is relatively straightforward to me personally.

If you could give a specific example of a common task developers are engaging in that is hindered by the current format it'd help to understand your request.

Create a well-defined, well-known .NET object model that defines the Project Model (Project API)

Believe it or not, what you ask for already exists for MSBuild; however, officially it is not a sanctioned binary for third-party use and the APIs are not guaranteed to remain stable. However, based on the widespread usage it has become de facto (probably much to the team's displeasure). The API in question lives in Microsoft.Build; the part most familiar to me is Microsoft.Build.Evaluation (https://msdn.microsoft.com/en-us/library/microsoft.build.evaluation.aspx).

Anyone who has done extensive deep dives into creating custom tasks or extending Visual Studio will be familiar with these APIs.

Perhaps a better ask of the team is to make these APIs sanctioned such that the third parties you mention can more reliably write to the specification.

@Mike-E-angelo

this doesn't help you if you're using custom tasks, for which you should have provided the XSD when you wrote the tasks

That sounds like -- and is -- a lot of work. 😛 The goal/ask here is to make this more of a POCO-based model where we are working with well-known/well-defined POCOs, and the objects that are serialized/deserialized are those same objects. Otherwise, we are asking developers to add "yet another artifact" with an .xsd file (are these even used anymore these days?) to use as a schema when they have already defined the schema with the Task object they have created.

If you could give a specific example of a common task developers are engaging in that is hindered by the current format it'd help to understand your request.

I am open to admitting that I might (most probably) be using the wrong words in describing my problem. I am fundamentally lazy and don't like thinking about things until it really matters, like you are pushing me to do! Essentially, there are two scenarios here to consider:

  1. Build master (experts)
  2. Developers (script kiddies/interested in becoming a build master -- hey, that's me!)

I will speak from my perspective (developer -- but I do have a lot of experience with TFS build servers). To start with, I will provide context around what happens when I go to open a Xaml file. When I open a Xaml file, every symbol in that file -- regardless of the type -- is easily accessible to me. Using ReSharper, I can CTRL-B any symbol in that file, and I will be taken to its (decompiled) definition. This happens every time, no question.

Now, for MSBuild, I have to open up the project to edit it. First, it produces a very disruptive dialog asking if I want to close all files in the project to view this file. Secondly, once the file opens, discoverability is next to zero. Not to mention the conceptual paradigm in play is very awkward. If I want to get a list of files, I have to work with these PropertyGroup and ItemGroup tags and a strange syntax to collect my files.

Whereas in Xaml I could see something more well-formed such as:

<msbuild:Project>
    <msbuild:Project.Resources>
        <msbuild:FileList x:Name="FilesToDelete" Files="SomeDirectory/**.*" />
    </msbuild:Project.Resources>
    <msbuild:Project.Tasks>
       <msbuild:DeleteFilesTask Files="{x:Reference FilesToDelete}" />
    </msbuild:Project.Tasks>
</msbuild:Project>

(Note that is a REALLY ROUGH sketch of what I would possibly like to see in an API model. Please don't make too much fun of it. 😛) But the point here is that as I am typing, tooling kicks in and I am able to reference tasks as I write them, as they are POCOs resolved to assemblies I have referenced.

I'm asking what these developers are doing and why they're not using the tooling provided to them?

That's just the problem. There is no tooling provided for XML editing of MSBuild files. Well, there is, but it is very prohibitive when compared to, say, the Xaml editing experience.

Perhaps a better ask of the team is to make these API's sanctioned such that the third parties you mention can more reliably write to the specification.

Like I said this is just to get the conversation going. Looks like I posted in the right place! 😄 Thank you for providing your input/perspective. I can tell you know what you're talking about! And also, give me your Twitter handle so I can tag you when I get pwned by the JSON crowd. (kidding... sorta 😄 )

@Mike-E-angelo

What you don't seem to understand is that one can't just "enable support for 3rd-party providers" by flipping a switch somewhere

Isn't that exactly what the ASP.NET Core team did with its configuration provider API? That's pretty much the same idea here.

@SolalPirelli

Isn't that exactly what the ASP.NET Core team did with its configuration provider API? That's pretty much the same idea here.

The ASP.NET team made the choice of accepting third-party configuration models, yes, and I'm sure they had good reasons to make that tradeoff; the fact that they needed to implement multiple configuration providers anyway for things like environment vars and config files probably factored into that discussion. In exchange for that flexibility, they get more complexity.

However, you have not given a good argument as to why MSBuild should become more complex. "It's the current fad in web development" is not a good argument.

This entire thread looks like the XY problem to me: you want to have a better MSBuild format, which is great, but you think it can only be achieved via your idea - to let everybody provide their own format - when there are plenty of other solutions out there.

@shmuelie

@Mike-EEE The issue is more that you know the XAML APIs and (admittedly) the editor experience is less friendly for MSBuild. But if you really want to learn, see http://www.amazon.com/Inside-Microsoft-Build-Engine-Foundation/dp/0735645248?ie=UTF8&psc=1&redirect=true&ref_=oh_aui_detailpage_o00_s00

@colin-young

You can count me in the project.json camp, although what I loved about it was the simplicity:

  • here's my project name and description
  • here are the target frameworks I want supported and here are the dependencies for each
  • here are my framework-independent dependencies (more often than not, none)
  • done. Here's your NuGet package.

I don't care what format you use to instruct the toolchain to build my project as long as it is documented, I can edit it by hand (because sometimes NuGet just gets confused), and I can use the exact same thing on a dev box as on a build server with both working exactly the same way.

Oh, and I don't need to maintain multiple files, one for VS to build my project and another one for NuGet to package it. Hey, VS, if you can figure out how to build it, I'm sure you can handle collecting everything and making a nice package for me.

@shmuelie

Oh, and I don't need to maintain multiple files, one for VS to build my project and another one for NuGet to package it. Hey, VS, if you can figure out how to build it, I'm sure you can handle collecting everything and making a nice package for me.

See https://github.com/nuproj/nuproj by @terrajobst

@Mike-E-angelo

However, you have not given a good argument as to why MSBuild should become more complex. "It's the current fad in web development" is not a good argument

Again, not making it more complex, but reducing its complexity. It's not just a fad in web development but a viable, popular pattern that has been used for quite some time in .NET. The ask would be to utilize this pattern for serializing objects that are used to describe MSBuild projects in a way that allows developers/organizations/teams to use the format they prefer.

I am starting to get the feeling that we should wait until more developers from the pro-JSON camp find their way onto this thread before attempting to provide a better argument. ;)

This entire thread looks like the XY problem to me: you want to have a better MSBuild format, which is great, but you think it can only be achieved via your idea - to let everybody provide their own format - when there are plenty of other solutions out there.

Haha... that's cool. I learned something new today. Thank you for the link 😄 My idea is to provide a better model (which it sounds like everyone agrees with!) which can then be serialized/deserialized in any format, if that helps clarify my position.

@aolszowka

with an .xsd file (are these even used anymore these days?) to use as a schema when they have already defined the schema with the Task object they have created.

Yes; they're used all the time. In your Xaml example, it's how Intellisense (and other such tools) knows what to present to you and how XML files are validated as "well formed". The JSON Kids haven't grown up enough yet to understand why such systems are required; it looks like they're starting to come around though, based on a quick Google search.

When I open a Xaml file, every symbol in that file -- regardless of the type -- is easily accessible to me.

You're asking for Intellisense; again, provided by a valid XSD which is automatically loaded per the directive at the top of every well-formed MSBuild project file. Out of the box this is only provided for the included "base" MSBuild tasks.

Does the one provided within Visual Studio not meet your needs? Below is a screenshot from one of our build scripts showing this in action:

[Screenshot: MSBuild IntelliSense in Visual Studio]

A reasonable ask, I think, is for more contextual documentation here to improve discoverability; however, that is another subject, one I'd gladly up-vote, as I know when I was starting out it was frustrating to continually reference back to the documentation.

First, it produces a very disruptive dialog asking if I want to close all files in the project to view this file.

This is a limitation of Visual Studio, not of the chosen file format.

no tooling provided for XML editing of MSBuild files

Again, any XML-capable editor can do this; I personally recommend Visual Studio simply because it will parse the XSD and any other included namespace to give you contextual Intellisense/code completion.

And also, give me your Twitter handle

I've been told I need one; but honestly have never bothered to get on there. Feel free to @mention me anywhere on GitHub though.

@Mike-E-angelo commented May 12, 2016

@SamuelEnglard yeah, I hate to mention Xaml here, because the Xaml that is already in use and associated (very negatively, I might add) with MSBuild is actually Windows Workflow, and it is really a bear to use (really, EVERYONE hates it and does not want "Xaml" because of it). I personally would like to see Xaml used to describe MSBuild files, so it would be more like the current XML approach, but much more well-defined and discoverable.

@colin-young

See https://github.com/nuproj/nuproj by @terrajobst

Yeah, no. That's still requiring me to duplicate information. The only reason I should ever need to specify some bit of info again is because I want it to be different from somewhere else. E.g., if I have AssemblyInfo.cs in my project and I want the assembly versions to be different than the version of the NuGet package, then I would specify each. Otherwise, setting one should flow into the other.

I should be able to describe everything about my project in a single location. I should also not need to tell NuGet that I want it to take the output of my project and use that in the package. Why wouldn't I want to include that? And if I've already specified which frameworks to generate assemblies for, why do I need to explain which targets are being packaged, again? Pick all of them, unless I tell you otherwise.

Sensible defaults and a mechanism to override them...

@Mike-E-angelo commented May 12, 2016

You're asking for Intellisense

Actually, no, I am asking for more than that (from what I understand). Intellisense completes the symbols and provides tooltips, but to actually dive into the symbol to take you directly to the class file where it exists, that is an IDE (or tooling, such as ReSharper) implementation detail.

Does the one provided within Visual Studio not meet your needs?

It's OK. But I find the Xaml experience much more expressive and discoverable. And intuitive as well. And I am not entirely sure that Xaml is using .xsd files, unless they are automatically created from the class definitions? That seems inefficient, as the class definitions are already in memory and available to the tools. It doesn't make sense to create a whole new file and then use that for navigation/information.

Also, another aspect we're overlooking here is .SLN files, which are their own animal (beast, more like it!) altogether and should be tamed/consolidated/considered into this new model as well.

@shmuelie

@colin-young Because it's all MSBuild, you can "embed" it into the existing project and have it pull the information from there. I think I'll fork it to add an example of doing that...

I've added #614 to discuss a better experience editing the XML, since that's really off topic for this issue.

@aolszowka

@colin-young

If I understand what you're asking for, you want a workflow in which NuGet package creation is more tightly coupled with the build; this is already possible in MSBuild. It would require that you add a new target to your existing MSBuild file and then call the target at the appropriate time; if you were using MSBuild Community Tasks, you'd call the NuGetPack task as appropriate (here's a snippet from one of our projects):

<NuGetPack File="%(NSPSTransformed.Identity)" OutputDirectory="$(BuildOutputNuGet)" Properties="Configuration=Release" IncludeReferencedProjects="true" ToolPath="$(nugetToolPath)" />
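
To show how that call might be wired in end to end, here is a rough sketch (mine, not from the original comment), assuming the MSBuild Community Tasks targets are imported and that the NSPSTransformed items and the properties referenced above are defined elsewhere in the build:

<!-- Hypothetical packaging target; names mirror the snippet above. -->
<Target Name="PackageNuGet" AfterTargets="Build" Condition="'$(Configuration)' == 'Release'">
  <NuGetPack File="%(NSPSTransformed.Identity)"
             OutputDirectory="$(BuildOutputNuGet)"
             Properties="Configuration=Release"
             IncludeReferencedProjects="true"
             ToolPath="$(nugetToolPath)" />
</Target>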

Reading between the lines, you want a system that does this for you automagically; I'm not sure that specific business needs should be covered by the tool by default. At some point you will need to customize and modify the tools to fulfill your needs.

@aolszowka

@Mike-EEE

but to actually dive into the symbol to take you directly to the class file where it exists

It's not clear what you would gain from being shown the source for a task such as "Move" or "Copy"; 99% of the time, unless you're debugging a bug within those tasks, you're more interested in what the attributes (arguments) to the task are and what its behavior is. All of this can be embedded in the XSD; the version that they have provided and maintained is very simplistic, covering only the built-in tasks and the various attributes (i.e. "arguments") to be passed into the task.

And intuitive as well. And I am not entirely sure that Xaml is using .xsd files

It's slightly more complex than that: Intellisense will utilize methods such as comment scraping of XML docs to generate this information on the fly, but the end results are the same.

It doesn't make sense to create a whole new file and then use that for navigation/information.

Why not? The file is created in memory if anything.

Also, another aspect we're overlooking here is .SLN files, which are their own animal (beast, more like it!) altogether and should be tamed/consolidated/considered into this new model as well.

If you look at how MSBuild handles SLN files, they are actually transformed by MSBuild into pseudo-MSBuild files prior to execution to avoid the nastiness incurred within them. That being said, I found the format straightforward; if you created another issue page to air your complaints with them, I'm sure we can show you how they operate.

They are also editable via the above linked API.

@Mike-E-angelo

It's not clear what you would gain from being shown the source for a task such as "Move" or "Copy"; 99% of the time, unless you're debugging a bug within those tasks, you're more interested in what the attributes (arguments) to the task are and what its behavior is

What you gain is a sense of discoverability and access -- not just for the default items described by the XSD but for any object defined in the file. You get a clear connection between the data you are describing and the object that ends up using those values. If you have not spent a lot of time in Xaml then it might not make sense to you, but when you have access to your code and can easily navigate through its properties and definitions, you not only get a better understanding of the elements at play, but also of the system as a whole. This is what is so awesome about .NET in general: being able to explore elements and see how they all connect and how they can be utilized.

Why not? The file is created in memory if anything.

Again, I am not sure if this takes place. Can you provide a resource showing that XSDs are used for intellisense? This is the first I have heard of this. And if a process is creating "yet another file" -- even in memory -- when the data it seeks is already in memory by way of symbols culled from a class definition, then obviously that is a very inefficient approach!

if you created another issue page to air your complaints with them

Truth be told, I have already done that here:
https://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/9347001-improve-reboot-visual-studio-project-system

:)

@colin-young

@SamuelEnglard @aolszowka This thread started from dotnet/roslyn#11235, which was about improving the project definition format. To your point, what I'd like is a declarative project format rather than prescriptive. MSBuild is, by necessity, prescriptive. 90% of the time, I don't care how it accomplishes that because I just want to say, "Hey .Net, see all these files? Can you compile them all into an assembly and then package it for all of these targets? Here's all the details of what to name it and the version."

To me the question is, should the tool that takes a declarative description of a project and produces the requested output be part of MSBuild, or part of something else? But I do feel very strongly that it needs to be standard across all of .Net (i.e. one file format on Windows, Linux, OS X whether you are using the command line or Visual Studio).

@livarcocc livarcocc added this to the Discussion milestone Oct 15, 2019
@StingyJack

@livarcocc - fine, like the nuget team, you aren't interested in maintaining feature compat or even parity. That seems like a move that only serves to punish the userbase that's actually interested in things like proj formats. I really think that feedback for this feature was not segregated between "OMGPREVIEW fanboys", the project file casuals that make up most of the .net developer populace, to whom this sounds like a good idea from the surface view (simpler always sounds better, until you need a feature that got chopped), and power users for whom an implicit-content and still-undocumented project file is an echo of web site projects and a looming compilation and automation disaster.

Can you at least guarantee or promise we won't be forced to use these problematic and inferior SDK style project formats?

@terrajobst (comment minimized)

@StingyJack

Please point out anything non-factual. This is what it looks like from the outside.

  • They require package reference style nuget. This means losing useful package things like adding files to a project as part of a (private) package, or providing simple xdt config transforms or running a powershell script to cover all the other possible gaps. It also means not knowing what dependencies are actually going to be included until the project is built and the output is inspected. I can't make a private package that is as useful to my team with package ref as is possible with package config.

  • there isn't an explicit list of things to be included in the compilation.

    • Stray files were a headache with web site projects; web app projects fixed that, but SDK brings back that indeterminism for all types of projects. Tools that create extra files or folders in the project filesystem have to be cleaned up after, sloppy development/filesystem habits create more problems than before, and one of my personal favorites, the vestigial .vssscc and .vspscc Visual SourceSafe files, gets even more of a foothold in version control.

    • inspecting projects across codebases can't be done by looking at the proj file, now the constituent files to include (which ones?) need to be downloaded. Hopefully VCS or FS or the user doesn't have a whoops and miss a file, because the project won't know unless it compiles.

    • automating some changes to projects is significantly more difficult for the same reasons as inspection, and the sheer number of defaults, so few of which are documented, makes reworking any existing tooling a time-sink swag session.

The history of this format reads like it's the recovery from the A-ha moment when you all (mostly) realized that json is great for data exchange but horrible for configuration. Instead of taking that lesson you forged ahead when it wasn't necessary in the first place. It didn't need to be done, there are other things needing modernization (sln files, easier wcf rest support, etc.)

Hopefully the impetus was not inspired by statements like "my company will not adopt/will abandon netcore hinges on a project format" because they are empty and dumb statements. Anyone holding true to that deserves what they get from that and other dumb choices.

This is what I can muster before breakfast on my phone, and may get a few typographical or formatting edits when I can see more than a paragraph at a time.

@jnm2 commented Oct 17, 2019

@StingyJack Parts of that don't ring true to my experience. I know that packages adding files to projects can be done using a .props file. It never seemed useful to me to keep the dependency graph in source control, but you can do that with a lock file: https://docs.microsoft.com/en-us/nuget/consume-packages/package-references-in-project-files#enabling-lock-file
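
For reference, a minimal sketch (mine, not from the comment) of opting in to the lock file described at that link; the property and file names are the ones documented there:

<!-- In the .csproj (or Directory.Build.props): opt in to a repeatable restore. -->
<PropertyGroup>
  <RestorePackagesWithLockFile>true</RestorePackagesWithLockFile>
</PropertyGroup>
<!-- NuGet then writes packages.lock.json next to the project; commit it to pin the dependency graph. -->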

there isn't an explicit list of things to be included in the compilation.

From the production or consumption side? When creating packages, it's explicit except for the single built assembly.

inspecting projects across codebases can't be done by looking at the proj file, now the constituent files to include (which ones?) need to be downloaded

This is a relief to those with frequent merge conflicts who can't wait to get the rest of their projects on the newer csproj format. Having to be explicit always felt like duplication because of how basic the includes are.

automating some changes to projects is significantly more difficult for the same reasons as inspection, and the sheer number of defaults, so few of which are documented, makes reworking any existing tooling a time-sink swag session.

Many things were always implicit and idiosyncratic. The cleaner SDK pushes that further along the same path. I think this is the thing https://github.com/daveaglick/Buildalyzer solves.

@aolszowka commented Oct 17, 2019

Just to jump in and echo some of the comments:

there isn't an explicit list of things to be included in the compilation.

From the production or consumption side? When creating packages, it's explicit except for the single built assembly.

It's been a while since I've looked at the new SDK format, but I believe it's from the consumption side; consider a project layout similar to the following:

└───Project
       A.cs
       B.cs
       Project.csproj

Based on my recollection, Project.csproj will helpfully attempt to include everything under itself as part of the project format. This is bad for the reasons that @StingyJack mentions above.

The explicit nature ensures that the build system is doing what the developer asked for, not what was "helpfully found". It is very possible that a developer has forgotten to check in a source file, and unless you are explicit, this could very well result in a run-time failure as opposed to a build-time one. Consider a class library that utilizes Dependency Injection/IoC in which the classes contained within B.cs are consumed. If a developer forgets to commit B.cs under an implicit model, this is not discovered until runtime (hopefully in internal testing, but as per Murphy, at a customer site for sure).

There are other reasons to have extraneous files in a sub-folder that you explicitly do not want included. I know for a fact that in our large code base there are places where this is actually by design. Consider this pattern:

└───MyClassLibrary
        A.cs
        ATests.cs
        Implementation.csproj
        UnitTests.csproj

In this case you want A.cs to be included ONLY in Implementation.csproj, whereas you want ATests.cs to be included ONLY in UnitTests.csproj. You can argue the merits of shoving this all into a single source folder (I know I have tried), but this is the reality for a lot of large development shops. It is difficult to get buy-in from stakeholders to refactor projects which previously "worked".
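
As an aside (my sketch, not from the comment): under the SDK-style defaults both projects would glob both files, so each project has to carve up the shared folder explicitly, for example:

<!-- Implementation.csproj: keep the default globs, but remove the test file. -->
<ItemGroup>
  <Compile Remove="ATests.cs" />
</ItemGroup>

<!-- UnitTests.csproj: keep the default globs, but remove the implementation file. -->
<ItemGroup>
  <Compile Remove="A.cs" />
</ItemGroup>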

inspecting projects across codebases can't be done by looking at the proj file, now the constituent files to include (which ones?) need to be downloaded

This is a relief to those with frequent merge conflicts who can't wait to get the rest of their projects on the newer csproj format. Having to be explicit always felt like duplication because of how basic the includes are.

I would take the opposite side of this argument. We maintain 16 branches of our code base (yeah, 16...). While on the surface I agree with you due to the number of merge conflicts, there are things that can and should be done to try and minimize this. For starters, ensuring that the project format is sorted in a deterministic manner (we settled on quasi-alphabetization) is helpful. I agree that having to write tooling to support this is not fun, and you can see that my GitHub is littered with tooling to do so, but the reality for us (and other Build Masters I've talked to in similar industries) is that it's just par for the course.

The ability to quickly diff project files between branches is critical in understanding what is and what is not making it into the final binaries. Anything that can mux the output (by adding "indeterminism") is considered a defect from the DevOps world.

@jnm2 commented Oct 17, 2019

If you do like the manual maintenance of all .cs files, you can put <EnableDefaultCompileItems>false</EnableDefaultCompileItems> in Directory.Build.props or a csproj. <EnableDefaultItems>false</EnableDefaultItems> might also be interesting to you. See https://docs.microsoft.com/en-us/dotnet/core/tools/csproj#default-compilation-includes-in-net-core-projects. The defaults are right for the majority of projects and you can override them if they aren't right for your project.
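
A minimal sketch of that opt-out (mine; the property name is the one quoted above, and the file placement follows the standard Directory.Build.props convention):

<!-- Directory.Build.props: applies to every project below this folder. -->
<Project>
  <PropertyGroup>
    <!-- Turn off the implicit **/*.cs glob; Compile items must now be listed explicitly. -->
    <EnableDefaultCompileItems>false</EnableDefaultCompileItems>
  </PropertyGroup>
</Project>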

How common is the problem of forgetting to add a source file to source control (and without failing CI)? New files show up prominently using Team Explorer (Git or TFVC) or VS Code or git from the CLI (posh-git).

The ability to quickly diff folders between branches using source control is powerful. It's more complete than diffing csproj files in my experience, and the diff UI lets you drop right down to the changes within any of the files. Files are the starting point, the ultimate source of truth.

@aolszowka
Copy link

How common is the problem of forgetting to add a source file to source control (and without failing CI)? New files show up prominently using Team Explorer (Git or TFVC) or VS Code or git from the CLI (posh-git).

At least once a week internally. We have ~70 developers averaging 200 commits a week; the average commit size is 3 CSPROJ files +/- 20 CS files, and most are changes to existing files with a few additions sprinkled in. FWIW we're using Subversion, but I doubt changing VCS would really help (we already use Ankh, which provides the overlay). Even your absolute best developers (your 10x devs) will occasionally make a mistake. It's inevitable at this scale and rate of change.

YMMV; apparently it doesn't happen for you. I wish I were there (more than you can know).

The ability to quickly diff folders between branches using source control is powerful, more complete than diffing csproj files in my experience. Files are the starting point, the ultimate source of truth.

We don't disagree; however, the scale at which this occurs is much larger than most diffing tools can respond to reasonably quickly: our branches are ~670,000 files across 60,400 folders.


We have ~5,800 CSPROJ files; quickly performing diffs at the CSPROJ file level is much more efficient. We also have massive amounts of automated tooling (both commit hooks and a tool similar to SapFix, which we call Tattler), and having the structured, deterministic format of the CSPROJ files is a huge boon, such that we capture most of these failures very early in the process.

As @StingyJack mentions, there are other places where the functionality would be very useful; for example, this hits right in the feels for me:

It didn't need to be done, there are other things needing modernization (sln files, easier wcf rest support, etc.)

And FWIW, you might want to take a look at this closed idea: https://github.com/dotnet/cli/issues/12858. I am working on open-sourcing our tooling that does something similar; @StingyJack, it sounds like you would benefit from it (same boat, you and I, on the River Styx).

If you do like the manual maintenance of all .cs files, you can put <EnableDefaultCompileItems>false</EnableDefaultCompileItems> in Directory.Build.props or a csproj.

Thank you; we will be enabling this shortly. Today we do not use the new-style project format (mostly because we do not have much .NET Core), but if all goes as planned I am sure we will need it sooner rather than later.

@petertiedemann

@aolszowka We have ~60 devs plus a lot of consultants, and I have never seen a case where a missing source file made it through CI (not to mention compilation). The setup you describe sounds absolutely horrifying :) How in the world can a missing source file pass CI? I understand that theoretically in some cases reflection over types in an assembly could compile without error (though even that should be extremely rare), but what does it say about the CI setup if it cannot catch basic things like missing source code?

For us (having almost 200 repos in our main GitHub org), the new project format, and especially the implicit includes and package references (death to packages.config! :), has been a huge advantage. Previously we actually developed our own hacky little tool to deal with merge conflicts because we wasted so much time on doing it manually, but since we started using the new project format we almost never need it.

@aolszowka commented Oct 17, 2019

How in the world can a missing source file pass CI?

It doesn't today, because you have to explicitly include the file (that is the point of the entire conversation and the example above); therefore it kicks out in CI.

The proposal is to remove this functionality.

CI setup if it cannot catch basic things like missing source code

We completely agree, which is why we will utilize the explicit <EnableDefaultCompileItems>false</EnableDefaultCompileItems> mentioned above. 👍

It was probably added to support a similar scenario.

Previously we actually developed our own hacky little tool to deal with merge conflicts because we wasted so much time on doing it manually, but since we started using the new project format we almost never need it.

This looks like it was obviated by the use of PackageReference; we previously had a similar tool to yours to sort packages.config in a deterministic manner as you did. That being said, you still encounter the same issue today, which is why our new tooling simply enforces that PackageReferences are sorted in a deterministic manner (alphabetized by the Include attribute).

The same issue exists today for just about any attribute. For our devs it's most painful when dealing with <Compile>, <ProjectReference>, and <PackageReference> tags, as these have high rates of change for us; hence we enforce deterministic ordering to give the merging tools a fighting chance at the correct solution.
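
As an illustration of the convention (the package names here are hypothetical examples, not from the comment), a deterministically ordered ItemGroup looks like this, so that three-way merges touch stable neighborhoods of the file:

<!-- PackageReferences alphabetized by Include, one per line. -->
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
  <PackageReference Include="NUnit" Version="3.12.0" />
  <PackageReference Include="Serilog" Version="2.9.0" />
</ItemGroup>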

I assume this is what you mean when you say:

we almost never need it.

I would be interested to hear what other corner cases you encounter; it seems like we're not alone.

@StingyJack

@petertiedemann - keep in mind it's not just missing files, it's extra files.

This is a relief to those with frequent merge conflicts who can't wait to get the rest of their projects on the newer csproj format. Having to be explicit always felt like duplication because of how basic the includes are.

@jnm2 - Perhaps the solution to this would have been to reliably order most of the contents of the project file, as should be done with every other computer-generated non-data file, so changes were not appearing haphazardly in the file. That would probably eliminate much of the angst by allowing most merge tools to handle it automatically, and it would have been a simpler fix. However, it alone would not have helped the merge and compare tools built into the world's premier IDE, which have reached a new all-time low in quality.

Every day, I need to be able to do more with less. Finding and fixing a bad pattern that's been repeated in hundreds of projects is harder without the file manifest and with so many still undocumented defaults.

Re: packages, I mean that now I can make a dll-only package, but I can no longer make a package that co-workers can add that will also take care of configuration of the consuming project, or add CS or TT files to the project. Those all now have to be done manually.

@petertiedemann

How in the world can a missing source file pass CI?

It doesn't today, because you have to explicitly include the file (that is the point of the entire conversation and the example above); therefore it kicks out in CI.

The proposal is to remove this functionality.

CI setup if it cannot catch basic things like missing source code

We completely agree, which is why we will utilize the explicit <EnableDefaultCompileItems>false</EnableDefaultCompileItems> mentioned above. 👍

@aolszowka Sorry if I wasn't clear. We are already using the implicit includes, and have never encountered similar problems (as I mentioned, we see the implicit includes as a huge productivity improvement over explicit ones). My point was that, given implicit includes, how can you have your CI miss something like this except in some very exotic scenarios? It would seem that in the 90+% case you will simply have a missing type error somewhere and fail to compile. If you are using reflection / automated DI then I guess it could compile, but then I would be very worried if a test did not catch the problem.

This looks like it was deprecated by the use of PackageReference; we previously had a similar tool to yours to sort package.config in a deterministic manner as you did. That being said you still encounter the same issue today though, which is why our new tooling simply enforces that PackageReference are sorted in a deterministic manner (alphabetized by the Include attribute).

Actually, we rarely have the issue today, because you now only have to deal with the package reference, and not the duplicated entries in the csproj file. I do agree on maintaining it sorted though.

Our project files are almost empty, actually. We have a shared project file that we import that has things like copyrights and namespace setup, and the rest we just leave to the implicit includes / default behavior. A typical project file is ~20-30 lines.

@jnm2 commented Oct 17, 2019

Perhaps the solution to this would have been to reliably order most of the contents of the project file, as should be done with every other computer-generated non-data file

For most projects, folks want to leave behind the idea that a csproj is a computer-generated file.

and with so many still undocumented defaults.

Are there any undocumented defaults? The link I gave earlier seemed pretty thorough in the area I read.

@owenneil

Doesn't having explicit includes just move the problem? Back before SDK projects, I often saw commits that included a new source file but forgot to add the project file change. Either way, these were usually caught by CI because the files were referenced by other files, but it was pretty easy to make this mistake.

@aolszowka

how can you have your CI miss something like this except in some very exotic scenarios? It would seem that in the 90+% case you will simply have a missing type error somewhere and fail to compile.

If you are using reflection / automated DI then I guess it could compile

It seems like you answered your own question? I am not sure why you feel the need to debate this point when you know the answer?

As @StingyJack mentions you don't account for the extra files (the Unit Test Scenario I listed above for you).

Regardless we have a documented work around; I am not sure why there is need to continue any discussion on it?

because you now only have to deal with the package reference, and not the duplicated entries in the csproj file. I do agree on maintaining it sorted though.

And this new package format does nothing to solve these problems (again, what @StingyJack is pointing out here). For a trivial number of PackageReferences/ProjectReferences this is manageable, but when you get upwards of 50 of them it gets out of hand without any type of sorting enforced.

@owenneil

Doesn't having explicit includes just move the problem?

Yes; but it moves it back to the developer (earliest in the process), and even better, it gives your gated check-in a fighting chance to interrogate the CSPROJ files on add; we have a commit hook that does exactly this (if a CS file is added or removed, ensure that the appropriate changes were made to the CSPROJ file).

@petertiedemann

@petertiedemann - keep in mind it not just missing files, it's extra files.

But in a "normal" setup this really wouldn't be a problem, would it? In all of our ~200 repos (everything from web services, constraint engines to domain specific language parsens and compilers) we do not have source files in the project folder that we do not want to include (in either the ones using the new and the old project format). And even if you do have some exotic projects with this problem, you simply disable the implicit includes or explicitly exclude them.

@petertiedemann

It seems like you answered your own question? I am not sure why you feel the need to debate this point when you know the answer?
...
Regardless we have a documented work around; I am not sure why there is need to continue any discussion on it?

I guess because the opinion voiced by you and @StingyJack seemed to be that implicit includes as default was poor design, but the cases brought up were pretty exotic, meaning that implicit includes indeed does seem to be an excellent design choice. The unit test scenario that was described basically goes against common practice for how to structure projects, and it is not one I have ever seen before. The runtime reflection CI/CD scenario is just scary design by itself, and it seems almost unthinkable that no tests would exist to fail in that case (you have defined a type that is never used in any test or other project?).

And this new package format does nothing to solve these problems (again, what @StingyJack is pointing out here). For a trivial number of PackageReferences/ProjectReferences this is manageable, but when you get upwards of 50 of them it gets out of hand without any type of sorting enforced.

Well, PackageReference certainly helps a lot. Before, we had people giving up on rebasing branches because of conflicts in dll references; now we rarely need to spend time on it.

But your projects have ~50 references? That's quite a bit I must say, but I am still surprised that they would change often enough to cause significant problems. We have considered expanding our tool to deal with PackageReferences as well, but it would take far more time to implement than we spend on manually dealing with it.

@aolszowka commented Oct 17, 2019

@jnm2

Are there any undocumented defaults? The link I gave earlier seemed pretty thorough in the area I read.

Depends on the context; I read that to indicate that the project file is still a black box in some respects. While Microsoft has made HUGE strides in documenting the behavior and intent of the project format in the last few years with Microsoft Docs (and we thank them for this!), some behaviors remain undocumented.

The first one that comes to my mind is the behavior of References in the newish system; there are some very subtle bugs around the restore logic. See this NuGet issue: NuGet/Home#8272. Many of these are not noticed until things are attempted at scale, and only after careful evaluation.

I am sure we could continue to cherry-pick behaviors, but it's most likely irrelevant to the discussion at hand.

@petertiedemann

I guess because the opinion voiced by you and @StingyJack seemed to be that implicit includes as default was poor design,

I don't believe I ever used the words "poor design". I noted that:

Anything that can mux the output (by adding "indeterminism") is considered a defect from the DevOps world.

Which, again, can be worked around simply by using the posted workaround: <EnableDefaultCompileItems>false</EnableDefaultCompileItems>

but the cases brought up were pretty exotic, meaning that implicit includes indeed does seem to be an excellent design choice.

As to your point of exotic or not: one man's exotic is another man's common. It's the world we live in, and the tools were/are written such that you are free to suit them to your needs. I think StingyJack's frustration (along with mine) is that these scenarios are written off as corner cases (as you do below).

The unit test scenario that was described basically goes against common practice for how to structure projects, and it is not one I have ever seen before.

The runtime reflection CI/CD scenario is just scary design by itself, and it seems almost unthinkable that no tests would exist to fail in that case (you have defined a type that is never used in any test or other project?).

Hey, neither had I; but we're here today, aren't we? (See the above.) I think your inability to accept that the system is being used in perverse ways is causing you a lot of frustration, and I apologize. I am not here to advocate that this is by any means good or acceptable, nor to question the design per se. I am simply trying to be inclusive of current use cases that are affecting consumers of the product.

But your projects have ~50 references? That's quite a bit I must say, but I am still surprised that they would change often enough to cause significant problems.

Oh yes! Easily. The problem is so bad I wrote a tool to help visualize it! (Shameless plug: https://github.com/aolszowka/MsBuildProjectReferenceDependencyGraph) In fact, there is a CI process around automatically generating these graphs on every commit, just so we can keep developers apprised of the next low-level change that is bound to burn you on your next svn update! Those graphs are committed in their own repository so people can keep updating to get the latest.

Here's an anonymized version of a solution file most of our developers commonly use. Throw it into your favorite program that supports rendering dot graphs (GraphViz, for example; it's so complex that WebGraphViz will not work): https://gist.github.com/aolszowka/a93ea5545d54344c61f66830fae90c4e. It takes about 15-20 minutes to render on my work station. Oh, by the way, that is just ProjectReferences; the tool makes no attempt to graph PackageReferences (yet; I accept pull requests!).

It's been a real joy to watch the Roslyn team experience large solutions in Visual Studio (Roslyn.sln would be considered on the small side internally). On the plus side, they have made HUGE strides towards fixing it (because it burns them day to day), so at least it's now possible to load these in VS 2017+. I am still hopeful they reach the end of that road soon and realize they need to make Visual Studio 64-bit, so we aren't crashing 3-5 times a day due to OOM errors.

but it would take far more time to implement than we have spent manually dealing with it.

Oh, how I envy you, but remember, much like puppies and kittens: they start out small, but they grow up! At one time these branches were maintainable (which is probably why the attitude was "well, what's one more?").

@AlgorithmsAreCool
Copy link

At least once a week internally. We have ~70 developers averaging 200 commits a week; the average commit size is 3 CSPROJ files +/- 20 CS files, most of which are changes to existing files with a few additions sprinkled in. FWIW, we're using Subversion, but I doubt changing VCS would really help (we already use Ankh, which provides the overlay). Even your absolute best developers (your 10x devs) will occasionally make a mistake. It's inevitable at this scale and rate of change.

@aolszowka
FWIW, my team had this problem all the time years ago when we were on SVN. After the migration to Git, it never happened anymore, especially since we started gating check-ins on CI passing.

@aolszowka
Copy link

@AlgorithmsAreCool

Especially since we started gating check-ins on CI passing.

For sure; I think there is some confusion here. We gate these check-ins today via a Subversion commit hook (which we keep metrics on, which is why I can tell you how often it happens); however, the commit hook needs to utilize the CSPROJ to determine what is "correct". See this comment:

Yes; but it moves it back to the developer (earliest in the process), and even better, it gives your gated check-in a fighting chance to interrogate the CSPROJ files on add; we have a commit hook that does exactly this (if a CS file is added or removed, ensure that the appropriate changes were made to the CSPROJ file).

@StingyJack
Copy link

I guess because the opinion voiced by you and @StingyJack seemed to be that implicit includes as default was poor design, but the cases brought up were pretty exotic, meaning that implicit includes indeed does seem to be an excellent design choice

If I didn't say that, I mean it, and I don't say it just to be combative or flippant. There is nothing exotic about having a manifest of what is expected to be included in the final result. Lack of that manifest opens the door for unexpected ("exotic?") things to be included in the final result. Every other profession that creates something has an explicit list like this. To put it differently...

Would you eat a meal when you knew the chef was not in control of the ingredients and preparation of that meal?

Would you permit a renovation of your home when you knew the general contractor was not checking the gauge of electrical wiring used or if it was copper or aluminum before installing it?

Would you take a medication if you knew the producer was not in total control of the manufacturing and packaging of that medicine?

These other "creation" industries may be regulated and thus required to use an explicit list, but the ones that aren't will use one because its necessary for planning and because it is an easy way to have a reasonable expectation of similar quality of output between different efforts. The second part is the key for us; we cant incrementally improve upon something if that something can change in quality without us even knowing it. I recently discovered something like this had been happening in a python project I work on. Some builds would just be weird, some unusable. I found that early on, someone had added a bunch of packages into the setup.py file and missed the comma between an upper and lower version range. It was an easy thing for at least 5 skilled developers to miss for almost a year. Pip (python's nuget.exe) read the whole thing as the lower version with some prerelease tag and started always including the latest version even if it was an alpha package.

Implicit code inclusions, wildcard/floating version dependencies, and automatically included transitive references are a convenience, but this convenience comes with risks to your project's success that are very real, with the latter putting the project at the mercy of the release schedule and release quality of any package author in the dependency chain (or un-release schedule; see "left-pad"). Assuming this risk should be an opt-in, not the default. It's not setting programmers up for the "pit of success" by any stretch.
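To make the floating-version risk concrete, here is a minimal sketch (package name and versions invented) of a pinned versus a floating PackageReference. The floating form resolves to the highest matching version available at restore time, so two builds of the same commit can produce different outputs if a new package version ships in between.

```xml
<ItemGroup>
  <!-- Pinned: every restore on every machine resolves the same version. -->
  <PackageReference Include="Contoso.Widgets" Version="2.1.4" />

  <!-- Floating: resolves to the newest available 2.x at restore time,
       putting the build at the mercy of the package author's schedule. -->
  <PackageReference Include="Contoso.Widgets" Version="2.*" />
</ItemGroup>
```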

Also, we aren't usually involved in making software that has immediate life-or-death consequences, but we are all involved in making software that has quality-of-life consequences for users, ourselves, and our fellow programmers. The .NET community tried this implicit-file experiment in the .NET v1-2 era with Web Site Projects, and while it was great for the one exotic use case where you needed to update compilable files on the fly, it sucked to have to manage everything in the project based on the presence or absence of files (or phantom files: .dll.refresh) or specific file extensions (.excluded). But what really sucked was when some kind of pollution (like an unintended file, or the wrong version of a file) crept in and you had to troubleshoot the result. Anyone wanting to try that out is still able to do so, but most of us use Web Application Projects instead, where we can control the inputs and outputs to achieve a predictable result.

@asbjornu
Copy link
Member

asbjornu commented Nov 3, 2019

@StingyJack:

There is nothing exotic about having a manifest of what is expected to be included into the final result. Lack of that manifest opens the door for unexpected ("exotic?") things to be included into the final result. Every other profession that creates something has an explicit list like this.

Why isn't Git enough of a "manifest" for you? If you are adding heaploads of files to Git that aren't supposed to be there or be part of the final product, perhaps you should revise your development process?

@StingyJack
Copy link

@asbjornu - I think you mean "version control system" and not specifically git, correct? This isn't about version control in the first place, and AFAIK msbuild still requires the files to be present on a filesystem, not in the VCS of our choosing. The abridged comment I made above regarding project files with all these implicit inclusions and defaults is about the project file. It was never a heap of files that caused problems with web site projects; it was always that one file that resulted in several hours of troubleshooting.

I'm curious to know how you would answer those three questions I posed about other professions, because the answer from me is going to be No, No, and No. If you would also answer "No" to those three, then consider the point of view of a business in our profession. Would they trust a programmer who does not control what goes into the program to make something important for their business?

If you want to take a shortcut with implicit includes, and you can manage the associated risk, I've got no complaints. But making all of these implicit includes the default behavior, so that I now have to do work to mitigate a risk that I did not have to mitigate with the prior format, is something I am going to complain about.

@asbjornu
Copy link
Member

asbjornu commented Nov 3, 2019

@StingyJack:

I think you mean "version control system" and not specifically git, correct?

Nope, I mean Git, since most other version control systems don't have a cryptographically verifiable history, signed commits and signed tags. With these in place, you have a pretty strong source of truth for what should and should not be considered a part of the resulting application.

If you currently allow developers to add any strange file to your VCS with no code review, no verification, no sign-off, or any other editorial process in place, I'd say your development process is to blame here, not the project system. Most development platforms take the same route of implicit inclusion: Node.js, Ruby, Python, PHP, Go, Rust, Docker, etc.

Would they trust a programmer who does not control what goes into the program to make something important for their business?

Why do you claim the programmer doesn't have control over their VCS (preferably Git)?

@aolszowka
Copy link

Nope, I mean Git, since most other version control systems don't have a cryptographically verifiable history, signed commits and signed tags. With these in place, you have a pretty strong source of truth for what should and should not be considered a part of the resulting application.

A "cryptographically verifiable history, signed commits and signed tags" does not describe intent of the developer. You can cryptographically sign anything you want; that property does not mean that its contents should be trusted (and is the jist of @StingyJack 's argument).

Why do you claim the programmer doesn't have control over their VCS (preferably Git)?

This assumes that the developers in the system are competent or at least not malicious. Ask event-stream how that worked out for them (and for everyone upstream that got burned; "I don't know what to say" was, for me, the quote of 2018).

@dsplaisted
Copy link
Member

Hi folks,

We think for most people, automatically including all files with the right extension makes sense. For those who don't want that behavior, you can turn it off by setting the EnableDefaultItems property to false.

This is in line with the philosophy we had when designing the updated project files, which was to have sensible defaults that could be overridden when necessary.
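For anyone looking for the switches, the opt-out is granular; a short sketch of the .NET SDK properties involved, showing both the wholesale switch and the per-item-type ones (you would normally pick one level, not both):

```xml
<PropertyGroup>
  <!-- Wholesale: turn off all implicit item globs at once. -->
  <EnableDefaultItems>false</EnableDefaultItems>

  <!-- Or, more surgically, per item type: -->
  <EnableDefaultCompileItems>false</EnableDefaultCompileItems>
  <EnableDefaultEmbeddedResourceItems>false</EnableDefaultEmbeddedResourceItems>
  <EnableDefaultNoneItems>false</EnableDefaultNoneItems>
</PropertyGroup>
```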

There's been a ton of discussion on this issue and it seems like it's been a catch-all for any comments about the project file format. That makes it less likely that we will pick up on and address feedback. If you have concrete things causing you trouble, I recommend creating new issues for those.

Thanks!
Daniel

@asbjornu
Copy link
Member

@aolszowka:

A "cryptographically verifiable history, signed commits and signed tags" does not describe intent of the developer.

The commit log should reflect the developer's intent. Every single commit can be signed to identify the developer of said code. A code review signed off by someone else can double-verify the intent. A signed and tagged merge-commit of the reviewed commits can triple-verify the intent. How many verifications do you need?

You can cryptographically sign anything you want; that property does not mean that its contents should be trusted (and that is the gist of @StingyJack's argument).

Can no one on your team be trusted, not even a chain of command able to sign off on code reviews or tagged merge-commits?

Why do you claim the programmer doesn't have control over their VCS (preferably Git)?

This assumes that the developers in the system are competent or at least not malicious.

If all your developers are incompetent and/or malicious, having explicit includes in project files won't make a difference. If having three different people at different levels in the chain of command cryptographically sign and thumbs-up a range of commits isn't sufficient for you (it is for code under submission to a PCI-DSS review process), then nothing ever will be.

Ask event-stream how that worked out for them (and for everyone upstream that got burned; "I don't know what to say" was, for me, the quote of 2018).

Irrelevant and incomparable to the process I'm describing. If a hostile takeover of a Git repository is possible in your process and VCS, then your process and VCS are broken, not your project system.
