Common MSBuild properties and items with Directory.Build.props

To be honest, I never really liked MSBuild until recently. The project files generated by Visual Studio were a mess, most of their content was redundant, you had to unload the projects to edit them, it was poorly documented… But with the advent of .NET Core and the new "SDK-style" projects, it’s become much, much better.

MSBuild 15 introduced a pretty cool feature: implicit imports (I don’t know if it’s the official name, but I’ll use it anyway). Basically, you can create a file named Directory.Build.props anywhere in your repo, and it will be automatically imported by any project under the directory containing this file. This makes it very easy to share common properties and items across projects. This feature is described in detail in this documentation page.

For instance, if you want to share some metadata across multiple projects, just write a Directory.Build.props file in the parent directory of your projects:

<Project>

  <PropertyGroup>
    <Version>1.2.3</Version>
    <Authors>John Doe</Authors>
  </PropertyGroup>

</Project>

You can also do more interesting things like enabling and configuring StyleCop for all your projects:

<Project>

  <PropertyGroup>
    <!-- Common ruleset shared by all projects -->
    <CodeAnalysisRuleset>$(MSBuildThisFileDirectory)MyRules.ruleset</CodeAnalysisRuleset>
  </PropertyGroup>

  <ItemGroup>
    <!-- Add reference to StyleCop analyzers to all projects  -->
    <PackageReference Include="StyleCop.Analyzers" Version="1.0.2" />
    
    <!-- Common StyleCop configuration -->
    <AdditionalFiles Include="$(MSBuildThisFileDirectory)stylecop.json" />
  </ItemGroup>

</Project>

Note that the $(MSBuildThisFileDirectory) variable refers to the directory containing the current MSBuild file. Another useful variable is $(MSBuildProjectDirectory), which refers to the directory containing the project being built.
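
For example, in a Directory.Build.props you could define properties like these (the property names here are just illustrative):

<PropertyGroup>
  <!-- Resolves to the folder containing this Directory.Build.props -->
  <RepoRoot>$(MSBuildThisFileDirectory)</RepoRoot>
  <!-- Resolves to the folder of the project currently being built -->
  <ProjectArtifactsPath>$(MSBuildProjectDirectory)\artifacts\</ProjectArtifactsPath>
</PropertyGroup>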

MSBuild looks for the Directory.Build.props file starting from the project directory and going up until it finds a matching file, then it stops looking. In some cases you might want to define some properties for all projects in your repo, and add some more properties in a subdirectory. To do this, the "inner" Directory.Build.props file will need to explicitly import the "outer" one:

  • (rootDir)/Directory.Build.props:
<Project>

  <!-- Properties common to all projects -->
  <!-- ... -->
  
</Project>
  • (rootDir)/tests/Directory.Build.props:
<Project>

  <!-- Import the parent Directory.Build.props -->
  <Import Project="../Directory.Build.props" />

  <!-- Properties common to all test projects -->
  <!-- ... -->
  
</Project>

The documentation mentions another approach, using the GetPathOfFileAbove function, but it didn’t seem to work when I tried… Anyway, I think using a relative path is easier to get right.
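
For the record, the form shown in the documentation looks something like this (since I couldn’t get it to work, treat it with caution):

<Import Project="$([MSBuild]::GetPathOfFileAbove('Directory.Build.props', '$(MSBuildThisFileDirectory)../'))" />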

Using implicit imports brings the following benefits:

  • smaller project files, since common properties and items can be factored out into shared .props files.
  • a single source of truth: if all projects reference the same package, the version to use is defined in a single place; no more inconsistencies!

It also has a drawback: Visual Studio doesn’t keep track of where a property or item is defined, so if you change a property or a package reference from the IDE (using the project property pages or the NuGet Package Manager), the change will be made in the project file itself, rather than in the Directory.Build.props file. The way I see it, it’s not a major issue, because I got into the habit of editing my project files manually rather than using the IDE features, but it might be annoying for some people.

If you want a real-world example of this technique in action, have a look at the FakeItEasy repository, where we use multiple Directory.Build.props files to keep the project files nice and clean.

Note that you can also create a Directory.Build.targets file, following the same principles, to define common build targets.
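
For instance, a Directory.Build.targets could define a target shared by all projects (a made-up example):

<Project>

  <!-- Runs after Build for every project under this directory -->
  <Target Name="ShowOutputPath" AfterTargets="Build">
    <Message Importance="high" Text="Built $(MSBuildProjectName) into $(OutputPath)" />
  </Target>

</Project>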

Linq performance improvements in .NET Core

By now, you’re probably aware that Microsoft released an open-source and cross-platform version of the .NET platform: .NET Core. This means you can now build and run .NET apps on Linux or macOS. This is pretty cool in itself, but it doesn’t end there: .NET Core also brings a lot of improvements to the Base Class Library.

For instance, Linq has been made faster in .NET Core. I made a little benchmark to compare the performance of some common Linq methods, and the results are quite impressive:

[Benchmark results table: timings for Select, SelectAndToArray, Where, WhereAndToArray, OrderBy and OrderByAndToArray on .NET 4.6.2 vs .NET Core 1.1]

The full code for the benchmark can be found here. As with all microbenchmarks, it has to be taken with a grain of salt, but it gives an idea of the improvements.

Some lines in this table are quite surprising. How can Select run 5000 times almost instantly? First, we have to keep in mind that most Linq operators are lazy: they don’t actually do anything until you enumerate the result, so doing something like array.Select(i => i * i) executes in constant time (it just returns a lazy sequence, without consuming the items in array). This is why I included a call to Count() in my benchmark, to make sure the result is enumerated.
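
To make this more concrete, here’s a tiny standalone example (not taken from the benchmark code); nothing is computed until the result is actually consumed:

using System;
using System.Linq;

class LazyLinqDemo
{
    static void Main()
    {
        var array = Enumerable.Range(0, 5000).ToArray();

        // No work is done here: Select just returns a lazy sequence object.
        var squares = array.Select(i => i * i);

        // Only this call consumes the result; without it, the Select above
        // is never evaluated.
        Console.WriteLine(squares.Count());
    }
}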

Despite this, it runs 5000 times in 413µs… This is possible due to an optimization in the .NET Core implementation of Select and Count. A useful property of Select is that it produces a sequence with the same number of items as the source sequence. In .NET Core, Select takes advantage of this. If the source is an ICollection<T> or an array, it returns a custom enumerable object that keeps track of the number of items. Count can then just retrieve this value and return it, which produces a result in constant time. The full .NET Framework implementation, on the other hand, naively enumerates the sequence produced by Select, which takes much longer.
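
Here’s a very simplified sketch of the idea (the interface and type names are stand-ins I made up to illustrate the mechanism, not the actual internal types of the BCL):

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

// Stand-in for the internal interface that lets a Linq operator report
// how many items it will yield without enumerating them.
interface ICountProvider<T> : IEnumerable<T>
{
    int GetCount();
}

// Stand-in for the iterator Select returns when the source is an array.
class SelectArrayIterator<TSource, TResult> : ICountProvider<TResult>
{
    private readonly TSource[] _source;
    private readonly Func<TSource, TResult> _selector;

    public SelectArrayIterator(TSource[] source, Func<TSource, TResult> selector)
    {
        _source = source;
        _selector = selector;
    }

    // The count is just the length of the source array;
    // the selector doesn't need to run at all.
    public int GetCount() => _source.Length;

    public IEnumerator<TResult> GetEnumerator()
    {
        foreach (var item in _source)
            yield return _selector(item);
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

static class CountShortcut
{
    // A Count that takes the shortcut when the sequence knows its own length,
    // and falls back to plain enumeration otherwise.
    public static int FastCount<T>(this IEnumerable<T> source) =>
        source is ICountProvider<T> provider ? provider.GetCount() : source.Count();
}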

It’s interesting to note that in this situation, .NET Core will not execute the projection specified in Select, so it’s a breaking change compared to the desktop framework for code that was relying on side effects of this projection. This has been identified as an issue which has already been fixed on the master branch, so the next release of .NET Core will execute the projection on each item.
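
For instance, with a contrived snippet like this one, the two runtimes give different results:

using System;
using System.Linq;

class SideEffectDemo
{
    static void Main()
    {
        int calls = 0;
        var numbers = new[] { 1, 2, 3 };

        // Count() can be answered from the array length alone, so on
        // .NET Core 1.x the selector never runs and calls stays at 0;
        // on the full .NET Framework it runs 3 times.
        int count = numbers.Select(i => { calls++; return i * 2; }).Count();

        Console.WriteLine($"count = {count}, selector calls = {calls}");
    }
}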

OrderBy followed by Count() also runs almost instantly… did Microsoft invent an O(1) sorting algorithm? Unfortunately, no… The explanation is the same as for Select: since OrderBy preserves the item count, the information is recorded so that it can be used by Count, and there is no need to actually sort the input sequence.

OK, so these cases were pretty obvious improvements (which will be rolled back anyway, as mentioned above). What about the SelectAndToArray case? In this test, I call ToArray() on the result of Select, to make sure that the projection is actually performed on each item of the source sequence: no cheating this time. Still, the .NET Core version is 68% faster than the full .NET Framework version. The reason has to do with allocations: since the .NET Core implementation knows how many items are in the result of Select, it can directly allocate an array of the correct size. In the .NET Framework, this information is not available, so it starts with a small array, copies items into it until it’s full, then allocates a larger array, copies the previous array into it, copies the next items from the sequence until the array is full, and so on. This causes a lot of allocations and copies, hence the degraded performance. A few years ago, I suggested an optimized version of ToList and ToArray, where you had to specify the size. The .NET Core implementation basically does the same thing, except that you don’t have to pass the size manually, since it’s passed along the Linq method chain.
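
Here’s a rough sketch of the two strategies (heavily simplified, not the actual BCL code):

using System;
using System.Collections.Generic;

static class ToArrayStrategies
{
    // Roughly what happens when the length is unknown: grow a buffer by
    // doubling, copying everything on each resize, then trim at the end.
    public static T[] ToArrayByGrowing<T>(IEnumerable<T> source)
    {
        var buffer = new T[4];
        int count = 0;
        foreach (var item in source)
        {
            if (count == buffer.Length)
                Array.Resize(ref buffer, count * 2); // new allocation + copy
            buffer[count++] = item;
        }
        Array.Resize(ref buffer, count); // trim to the actual count
        return buffer;
    }

    // What .NET Core can do when Select knows its source is an array:
    // allocate the result once, with exactly the right size.
    public static TResult[] ToArrayWithKnownLength<TSource, TResult>(
        TSource[] source, Func<TSource, TResult> selector)
    {
        var result = new TResult[source.Length];
        for (int i = 0; i < source.Length; i++)
            result[i] = selector(source[i]);
        return result;
    }
}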

Where and WhereAndToArray are both about 8% faster on .NET Core 1.1. Looking at the code (.NET 4.6.2, .NET Core), I can’t see any obvious difference that could explain the better performance, so I suspect it’s mostly due to improvements in the runtime. In this case, ToArray doesn’t know the length of the input sequence, since there is no way to predict how many items Where will yield, so it can’t use the same optimization as with Select and has to build the array the slow way.

We already discussed OrderBy + Count(), which wasn’t a fair comparison since the .NET Core implementation didn’t actually sort the sequence. The OrderByAndToArray case is more interesting, because the sort can’t be skipped. And in this case, the .NET Core implementation is slightly slower than the .NET 4.6.2 one. I’m not sure why this is; again, the implementation is very similar, although there has been a bit of refactoring in .NET Core.

So, on the whole, Linq seems generally faster in .NET Core than in .NET 4.6.2, which is very good news. Of course, I only benchmarked a limited number of scenarios, but it shows that the .NET Core team is working hard to optimize everything they can.