Leveraging ReSharper annotations


I don’t think it’s really necessary to present ReSharper (often abbreviated R#), but in case you don’t know about it, it’s a tool made by JetBrains that performs real-time analysis of your C# or VB.NET code to warn you about possible bugs, bad practices, convention violations, etc. It also provides many useful refactorings and code generators. I’ve been using it for a few years now, and it has tremendously improved both my productivity and my coding style.

Among other things, R# warns you about incorrect usage of .NET Framework methods. For instance, if you’re calling Path.GetFullPath with a path that might be null, it gives you a warning:

image

How does R# know that Path.GetFullPath doesn’t accept a null argument? And how does it know that Console.ReadLine can return null? I guess it could have been hard-coded, but that wouldn’t be a very elegant approach, and it wouldn’t allow easy extensibility… Instead, ReSharper uses external annotations. These are XML files that are shipped with R# and contain a lot of metadata about .NET Framework classes and methods. This data is then used by the analyzer to detect possible issues with your code.

OK, but what about third-party libraries? Obviously, JetBrains can’t create annotations for all of them; there are far too many. Well, the good news is that you can write your own external annotations for libraries that you use, or for your own code. However, for your own code, there is a much more convenient alternative: you can apply the annotations directly in your code as attributes. There are two ways to get those attributes:

  • Reference the assembly where they are defined (JetBrains.Annotations.dll in the R# installation directory). This is fine if you don’t mind having a reference to something that has nothing to do with your application. That’s probably not a good idea for libraries.
  • Declare the attributes in your own code. You don’t actually have to write them yourself, because R# has an option to copy their implementation to the clipboard, as shown below. You just paste it to a code file in your project.

image

Now that you have the attributes, how do you use them? I’ll show a few examples for the most common annotations.

NotNull

This annotation indicates that the element to which it is applied cannot be null (in the case of a return value) or must not be null (in the case of a parameter).

If you apply it to a method or property, it means that the method or property will never return null:

        [NotNull]
        public string GetString()
        {
            return "Hello world!";
        }

When a method has this attribute and you test whether its return value is null (or not null), R# will warn you that the condition is always false (or true):

image

 

If you apply it to a method parameter, it means that null is not a valid argument value:

        public string Repeat([NotNull] string s)
        {
            if (s == null) throw new ArgumentNullException("s");
            return s + s;
        }

If R# determines that the value passed for s can be null, it warns you about it, as shown in the first example.

This annotation can be added automatically using ReSharper’s quick-fix menu. The “Not null” option will just add the annotation; the “Check parameter for null” option will add a check and the annotation:

image

image

 

CanBeNull

This is the opposite of NotNull. Applied to a method or property, it means that the method or property can return a null value. Applied to a method parameter, it means that the argument value is allowed to be null.
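
For example (a hypothetical snippet, just to illustrate both usages):

        private readonly Dictionary<string, string> _aliases = new Dictionary<string, string>();

        [CanBeNull]
        public string FindAlias(string name)
        {
            // Returning null is a normal, expected outcome here.
            string alias;
            return _aliases.TryGetValue(name, out alias) ? alias : null;
        }

        public void Greet([CanBeNull] string name)
        {
            // R# knows that name may be null, and will warn about an unchecked dereference.
            Console.WriteLine("Hello, " + (name ?? "stranger"));
        }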

Pure

This one is very useful. Applied to a method, it means that the method is pure. A pure method has no observable side effect, so if you don’t use its return value, the call is useless and probably a mistake. A typical example is String.Replace:

image
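
You can also apply it to your own methods; here’s a hypothetical example:

        [Pure]
        public static string Capitalize(string s)
        {
            // No observable side effect: the method only computes and returns a value.
            if (string.IsNullOrEmpty(s))
                return s;
            return char.ToUpper(s[0]) + s.Substring(1);
        }

If you call Capitalize(name); without using the result, R# flags the call as suspicious, just as it does for String.Replace above.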

 

StringFormatMethod

This annotation indicates that a method works like the String.Format method, i.e. it takes a composite format string followed by arguments that will replace the placeholders in the format string:

        [StringFormatMethod("format")]
        public static void Log(string format, params object[] args)
        {
            ...
        }

It lets R# warn you if the placeholders and arguments don’t match:

image

UsedImplicitly

This one tells ReSharper that a code element is used, even though R# cannot statically detect it. It has the effect of suppressing the “(…) is never used” warning. It’s useful, for instance, when a type or member is used only via reflection.
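
For example, here’s a hypothetical plugin class that is only ever instantiated through reflection:

        // Discovered and instantiated via reflection by a (hypothetical) plugin loader,
        // so R# can't see any direct usage of this class.
        [UsedImplicitly]
        public class CsvExportPlugin
        {
            [UsedImplicitly] // invoked by the host through reflection
            public void Execute()
            {
                // ...
            }
        }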

NoEnumeration

This annotation is applied to an IEnumerable parameter, and means that the method will not enumerate the sequence. R# warns you when you enumerate an IEnumerable multiple times, so using this attribute prevents false positives for this warning:

        public static IEnumerable<T> EmptyIfNull<T>([NoEnumeration] this IEnumerable<T> source)
        {
            return source ?? Enumerable.Empty<T>();
        }

 

InstantHandle

This one is applied to a delegate parameter, and means that the delegate will be executed before the method returns, rather than stored and invoked later. It prevents the “Access to modified closure” warning that occurs when a lambda captures a variable that is later modified.
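
For example, a hypothetical helper that invokes its delegate immediately:

        public static void Measure(string label, [InstantHandle] Action action)
        {
            var stopwatch = Stopwatch.StartNew();
            // The delegate runs here, before Measure returns, so a lambda that captures
            // a variable modified later by the caller is not a problem.
            action();
            stopwatch.Stop();
            Console.WriteLine("{0}: {1} ms", label, stopwatch.ElapsedMilliseconds);
        }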

ContractAnnotation

This annotation is a powerful way to describe how the output of a method depends on its inputs. It lets R# predict how the method will behave. For instance, this method returns null if its argument is null, and a non-null value otherwise:

        [ContractAnnotation("null => null; notnull => notnull")]
        public object Transform(object data)
        {
            ...
        }

Thanks to the annotation, ReSharper will know that if the argument was not null, the result will not be null either.

This method doesn’t return normally (it throws an exception) if its argument is null:

        [ContractAnnotation("value:null => halt")]
        public static void CheckArgumentNull<T>(
            [NoEnumeration] this T value,
            [InvokerParameterName] string paramName)
            where T : class
        {
            if (value == null)
                throw new ArgumentNullException(paramName);
        }

This lets R# know that if you pass a null to this method, the code following the call will never be reached; if it is reached, the value can be assumed to be not null.

LocalizationRequired

This annotation means that a property or method parameter should be localized; if you pass a hard-coded string, R# will warn you and suggest extracting it to a resource file.

image
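
For example (a hypothetical class, just to show where the attribute goes):

        public class Widget
        {
            // Values assigned to this property are expected to come from a resource file.
            [LocalizationRequired]
            public string Title { get; set; }
        }

Assigning a hard-coded literal, such as widget.Title = "Settings";, triggers the warning shown above.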

Conclusion

Now, you might be wondering, “why should I go through the trouble of adding all those annotations to my code?”. The reason is simple: it helps ReSharper help you! By giving it more information about your code, you allow R# to give you better advice and produce fewer false positives. Also, if you’re a library author, it makes your library more comfortable to use for ReSharper users. I use R# annotations extensively in my Linq.Extras library, so it’s a good place to find more examples.

Note that I only described a small part of the available annotations. There are many more, mostly related to ASP.NET-specific scenarios. You can see them all in the annotations file generated by ReSharper, or in the documentation (which isn’t quite complete, but is still useful).

C# Puzzle 1


I love to solve C# puzzles; I think it’s a great way to gain a deep understanding of the language. And besides, it’s fun!

I just came up with this one:

static void Test(out int x, out int y)
{
    x = 42;
    y = 123;
    Console.WriteLine (x == y);
}

What do you think this code prints? Can you be sure? Post your answer in the comments!

I’ll try to post more puzzles in the future if I can come up with others.


Customizing string interpolation in C# 6


One of the major new features in C# 6 is string interpolation, which allows you to write things like this:

string text = $"{p.Name} was born on {p.DateOfBirth:D}";

A lesser-known aspect of this feature is that an interpolated string can be treated either as a String or as an IFormattable, depending on the context. When it is converted to an IFormattable, the compiler creates a FormattableString object that implements the interface and exposes:

  • the format string with the placeholders (“holes”) replaced by numbers (compatible with String.Format)
  • the values for the placeholders

The ToString() method of this object just calls String.Format(format, values). But there is also an overload that accepts an IFormatProvider, and this is where things get interesting, because it makes it possible to customize how the values are formatted. It might not be immediately obvious why this is useful, so let me give you a few examples…
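
For example, assigning an interpolated string to a FormattableString variable (still assuming, as above, a p with Name and DateOfBirth properties) lets you inspect what the compiler generated:

FormattableString fs = $"{p.Name} was born on {p.DateOfBirth:D}";
Console.WriteLine(fs.Format);         // "{0} was born on {1:D}"
Console.WriteLine(fs.ArgumentCount);  // 2
Console.WriteLine(fs.ToString(CultureInfo.InvariantCulture));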

Specifying the culture

During the design of the string interpolation feature, there was a lot of debate on whether to use the current culture or the invariant culture to format the values; there were good arguments on both sides, but eventually it was decided to use the current culture, for consistency with String.Format and similar APIs that use composite formatting. Using the current culture makes sense when you’re using string interpolation to build strings to be displayed in the user interface; but there are also scenarios where you want to build strings that will be consumed by an API or protocol (URLs, SQL queries…), and in those cases you usually want to use the invariant culture.

C# 6 provides an easy way to do that, by taking advantage of the conversion to IFormattable. You just need to create a method like this:

static string Invariant(FormattableString formattable)
{
    return formattable.ToString(CultureInfo.InvariantCulture);
}

And you can then use it as follows:

string text = Invariant($"{p.Name} was born on {p.DateOfBirth:D}");

The values in the interpolated string will now be formatted with the invariant culture, rather than the current culture.

Building URLs

Here’s a more advanced example. String interpolation is a convenient way to build URLs, but if you include arbitrary strings in a URL, you need to be careful to URL-encode them. A custom string interpolator can do that for you; you just need to create a custom IFormatProvider that will take care of encoding the values. The implementation was not obvious at first, but after some trial and error I came up with this:

class UrlFormatProvider : IFormatProvider
{
    private readonly UrlFormatter _formatter = new UrlFormatter();

    public object GetFormat(Type formatType)
    {
        if (formatType == typeof(ICustomFormatter))
            return _formatter;
        return null;
    }

    class UrlFormatter : ICustomFormatter
    {
        public string Format(string format, object arg, IFormatProvider formatProvider)
        {
            if (arg == null)
                return string.Empty;
            if (format == "r")
                return arg.ToString();
            return Uri.EscapeDataString(arg.ToString());
        }
    }
}

You can use the formatter like this:

static string Url(FormattableString formattable)
{
    return formattable.ToString(new UrlFormatProvider());
}

...

string url = Url($"http://foobar/item/{id}/{name}");

It will correctly encode the values of id and name so that the resulting URL only contains valid characters.

Aside: Did you notice the if (format == "r")? It’s a custom format specifier to indicate that the value should not be encoded (“r” stands for “raw”). To use it you just include it in the format string like this: {id:r}. This will prevent the encoding of id.
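
For example:

// id is inserted as-is thanks to the "r" specifier; name is still URL-encoded
string url = Url($"http://foobar/item/{id:r}/{name}");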

Building SQL queries

You can do something similar for SQL queries. Of course, it’s a known bad practice to embed values directly in the query, for security and performance reasons (you should use parameterized queries instead); but for “quick and dirty” development it can still be useful. And anyway, it’s a good illustration of the feature. When embedding values in a SQL query, you should:

  • enclose strings in single quotes, and escape single quotes inside the strings by doubling them
  • format dates according to what the DBMS expects (typically MM/dd/yyyy)
  • format numbers using the invariant culture
  • replace null values with the NULL literal

(there are probably other things to take care of, but these are the most obvious).

We can use the same approach as for URLs and create a SqlFormatProvider:

class SqlFormatProvider : IFormatProvider
{
    private readonly SqlFormatter _formatter = new SqlFormatter();

    public object GetFormat(Type formatType)
    {
        if (formatType == typeof(ICustomFormatter))
            return _formatter;
        return null;
    }

    class SqlFormatter : ICustomFormatter
    {
        public string Format(string format, object arg, IFormatProvider formatProvider)
        {
            if (arg == null)
                return "NULL";
            if (arg is string)
                return "'" + ((string)arg).Replace("'", "''") + "'";
            if (arg is DateTime)
                return "'" + ((DateTime)arg).ToString("MM/dd/yyyy") + "'";
            if (arg is IFormattable)
                return ((IFormattable)arg).ToString(format, CultureInfo.InvariantCulture);
            return arg.ToString();
        }
    }
}

You can then use the formatter like this:

static string Sql(FormattableString formattable)
{
    return formattable.ToString(new SqlFormatProvider());
}

...

string sql = Sql($"insert into items(id, name, creationDate) values({id}, {name}, {DateTime.Now})");

This will take care of properly formatting the values to produce a valid SQL query.

Using string interpolation when targeting older versions of .NET

As is often the case for language features that leverage .NET framework types, you can use this feature with older versions of the framework that don’t have the FormattableString class; you just have to create the class yourself in the appropriate namespace. Actually, there are two classes to implement: FormattableString and FormattableStringFactory. Jon Skeet was apparently in a hurry to try this, and he has already provided an example with the code for these classes:

using System;

namespace System.Runtime.CompilerServices
{
    public class FormattableStringFactory
    {
        public static FormattableString Create(string messageFormat, params object[] args)
        {
            return new FormattableString(messageFormat, args);
        }

        public static FormattableString Create(string messageFormat, DateTime bad, params object[] args)
        {
            var realArgs = new object[args.Length + 1];
            realArgs[0] = "Please don't use DateTime";
            Array.Copy(args, 0, realArgs, 1, args.Length);
            return new FormattableString(messageFormat, realArgs);
        }
    }
}

namespace System
{
    public class FormattableString
    {
        private readonly string messageFormat;
        private readonly object[] args;

        public FormattableString(string messageFormat, object[] args)
        {
            this.messageFormat = messageFormat;
            this.args = args;
        }
        public override string ToString()
        {
            return string.Format(messageFormat, args);
        }
    }
}

This is the same approach that made it possible to use Linq when targeting .NET 2 (LinqBridge) or caller info attributes when targeting .NET 4 or earlier. Of course, it still requires the C# 6 compiler to work…

Conclusion

The conversion of interpolated strings to IFormattable had been mentioned previously, but it wasn’t implemented until recently; the just released CTP 6 of Visual Studio 2015 ships with a new version of the compiler that includes this feature, so you can now go ahead and use it. This feature makes string interpolation very flexible, and I’m sure people will come up with many other use cases that I didn’t think of.

You can find the code for the examples above on GitHub.

Async unit tests with NUnit


Recently, my team and I started writing unit tests on an application that uses a lot of async code. We used NUnit (2.6) because we were already familiar with it, but we had never tried it on async code before.

Let’s assume the system under test is this very interesting Calculator class:

    public class Calculator
    {
        public async Task<int> AddAsync(int x, int y)
        {
            // simulate long calculation
            await Task.Delay(100).ConfigureAwait(false);
            // the answer to life, the universe and everything.
            return 42;
        }
    }

(Hint: this code has a bug… 42 isn’t always the answer. This came to me as a shock!)

And here’s a unit test for the AddAsync method:

        [Test]
        public async void AddAsync_Returns_The_Sum_Of_X_And_Y()
        {
            var calculator = new Calculator();
            int result = await calculator.AddAsync(1, 1);
            Assert.AreEqual(2, result);
        }

async void vs. async Task

Even before trying to run this test, I thought to myself: This isn’t gonna work! An async void method will return immediately on the first await, so NUnit will think the test is complete before the assertion is executed, and the test will always pass even if the assertion fails. So I changed the method signature to async Task instead, thinking myself very clever for having avoided this trap…

        [Test]
        public async Task AddAsync_Returns_The_Sum_Of_X_And_Y()

As expected, the test failed, confirming that NUnit knew how to handle async tests. I fixed the Calculator class, and stopped thinking about it. Until one day, I noticed that my colleague was writing test methods with async void. So I started to explain to him why it couldn’t work, and tried to demonstrate it by introducing an assertion that would fail… and to my surprise, the test failed, proving that I was wrong. Mind blown!

Having an inquisitive mind, I immediately started to investigate… My first idea was to check the current SynchronizationContext, and indeed I saw that NUnit had changed it to an instance of NUnit.Framework.AsyncSynchronizationContext. This class maintains a queue of all the continuations that are posted to it. After the async void test method has returned (i.e., the first time a not-yet-completed task is awaited), NUnit calls the WaitForPendingOperationsToComplete method, which executes all the continuations in the queue, until the queue is empty. Only then is the test considered complete.
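
Conceptually, such a synchronization context could look something like this (a simplified sketch to illustrate the mechanism, not NUnit’s actual code):

using System;
using System.Collections.Generic;
using System.Threading;

class PendingOperationsSynchronizationContext : SynchronizationContext
{
    private readonly Queue<Action> _pending = new Queue<Action>();

    public override void Post(SendOrPostCallback d, object state)
    {
        // Continuations of async void methods are posted here instead of running inline.
        lock (_pending)
            _pending.Enqueue(() => d(state));
    }

    public void WaitForPendingOperationsToComplete()
    {
        while (true)
        {
            Action continuation;
            lock (_pending)
            {
                if (_pending.Count == 0)
                    return;
                continuation = _pending.Dequeue();
            }
            // Running a continuation may post further continuations to the queue.
            continuation();
        }
    }
}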

So, the moral of the story is: you can write async void unit tests in NUnit 2.6. It also works for delegates passed to Assert.Throws, which can have the async modifier. Now, just because you can doesn’t mean you should. Not all test frameworks seem to have the same support for this; the next version of NUnit (3.0, still in alpha) will not support async void tests.

So, unless you plan on staying with NUnit 2.6.4 forever, it’s probably better to always use async Task in your unit tests.

A new library to display animated GIFs in XAML apps


A few years ago, I wrote an article that showed how to display an animated GIF in WPF. The article included the full code, and was quite successful, since WPF had no built-in support for animated GIFs. Based on the issues reported in the comments, I made many edits to the code in the article. At some point I realized it was very impractical, so I published the code on CodePlex (it has now moved to GitHub) under the name WpfAnimatedGif, and started maintaining it there. It was my first serious open-source project, and it was quite popular.

As bug reports started coming in, a serious issue was quickly identified: the library was using a huge amount of memory. There were a few leaks that I fixed, but ultimately the problem was inherent to the way the library worked: it prepared all frames in advance, kept them in memory, and displayed them in turn using a WPF animation. Having all the frames pre-rendered in memory was reasonable for small images with few frames, but totally impractical for large GIF animations with many frames.

Changing the core of the library to use another approach might have been possible, but there were other issues I wanted to address. For instance, it relied heavily on WPF imaging features, which made it impossible to port it to Windows Phone or Windows Store apps. Also, some parts of the code were quite complex and inefficient, partly because of my initial choice to specify the image as an ImageSource, and changing that would have broken compatibility with previous versions.

WpfAnimatedGif is dead, long live XamlAnimatedGif!

So I decided to restart from scratch to address these issues, and created a new project: XamlAnimatedGif (as you can see, I’m not very imaginative when it comes to names).

On the surface, it seems very similar to WpfAnimatedGif, but at its core it uses a completely different approach. Instead of preparing the frames in advance, they are rendered on the fly using a WriteableBitmap. This approach uses more CPU, but much less RAM. Also, in order to be portable, I couldn’t rely on WPF’s built-in image decoding, so I had to implement a full GIF decoder, including LZW decompression of the pixel data. Matthew Flickinger’s article “What’s In A GIF” was a big help.

The basic usage is roughly the same: just set an attached property on an Image control to specify the GIF animation source.

<Image gif:AnimationBehavior.SourceUri="/images/working.gif" />

Here’s the result in the Windows Phone emulator (yes, it’s an animated GIF representing an animated GIF… I guess it could be called a meta-GIF 😉):

XamlAnimatedGif-WP

Unlike WpfAnimatedGif, the source is specified as a URI or as a stream, rather than an ImageSource. This makes the internal implementation much simpler and more robust.

XamlAnimatedGif currently works on WPF 4.5, Windows 8.1 store apps, and Windows Phone 8.1. It could be extended to support other platforms (WPF 4.0, Windows 8.0, Windows Phone 8.0, Windows Phone Silverlight 8.1, perhaps Silverlight 5), but so far I just focused on making it work on the most recent XAML platforms. I’m not sure if it’s possible to support iOS and Android as well, as I haven’t looked into Xamarin yet. If you want to give it a try, I’ll be glad to accept contributions.

The library is still labeled alpha because it’s new, but it seems reasonably stable so far. You can install it from NuGet:

PM> Install-Package XamlAnimatedGif -Pre 

Optimize ToArray and ToList by providing the number of elements


The ToArray and ToList extension methods are convenient ways to eagerly materialize an enumerable sequence (e.g. a Linq query) into an array or a list. However, there’s something that bothers me: both of these methods are very inefficient if they don’t know the number of elements in the sequence (which is almost always the case when you use them on a Linq query). Let’s focus on ToArray for now (ToList has a few differences, but the principle is mostly the same).

Basically, ToArray takes a sequence, and returns an array that contains all the elements from the sequence. If the sequence implements ICollection<T>, ToArray uses the Count property to allocate an array of the right size and copies the elements into it; here’s an example:

List<User> users = GetUsers();
User[] array = users.ToArray();

In this scenario, ToArray is fairly efficient. Now, let’s change that code to extract just the names from the users:

List<User> users = GetUsers();
string[] array = users.Select(u => u.Name).ToArray();

Now, the argument of ToArray is an IEnumerable<string> returned by Select. It doesn’t implement ICollection<string>, so ToArray doesn’t know the number of elements and cannot allocate an array of the appropriate size. So here’s what it does:

  1. start by allocating a small array (4 elements in the current implementation)
  2. copy elements from the source into the array until the array is full
  3. if there are no more elements in the source, go to 7
  4. otherwise, allocate a new array, twice as large as the previous one
  5. copy the items from the old array to the new array
  6. repeat from step 2
  7. if the array is longer than the number of elements, trim it: allocate a new array with exactly the right size, and copy the elements from the previous array
  8. return the array

If there are few elements, this is quite painless; but for a very long sequence, it’s very inefficient, because of the many allocations and copies.
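
To make the algorithm above concrete, here’s a rough sketch of that growth strategy (simplified, not the actual framework code):

static TSource[] ToArrayNaive<TSource>(IEnumerable<TSource> source)
{
    var buffer = new TSource[4];
    int count = 0;
    foreach (var item in source)
    {
        if (count == buffer.Length)
        {
            // Buffer is full: allocate a new one twice as large and copy everything.
            var newBuffer = new TSource[buffer.Length * 2];
            Array.Copy(buffer, newBuffer, count);
            buffer = newBuffer;
        }
        buffer[count++] = item;
    }
    if (count == buffer.Length)
        return buffer;
    // Trim: one last allocation and copy to get an exactly-sized array.
    var result = new TSource[count];
    Array.Copy(buffer, result, count);
    return result;
}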

What is annoying is that, in many cases, we know the number of elements in the source! In the example above, we only use Select, which doesn’t change the number of elements, so we know that it’s the same as in the original list; but ToArray doesn’t know, because the information was lost along the way. If only we had a way to help it by providing this information ourselves…

Well, it’s actually very easy to do: all we have to do is create a new extension method that accepts the count as a parameter. Here’s what it might look like:

public static TSource[] ToArray<TSource>(this IEnumerable<TSource> source, int count)
{
    if (source == null) throw new ArgumentNullException("source");
    if (count < 0) throw new ArgumentOutOfRangeException("count");
    var array = new TSource[count];
    int i = 0;
    foreach (var item in source)
    {
        array[i++] = item;
    }
    return array;
}

Now we can optimize our previous example like this:

List<User> users = GetUsers();
string[] array = users.Select(u => u.Name).ToArray(users.Count);

Note that if you specify a count that is less than the actual number of elements in the sequence, you will get an IndexOutOfRangeException; it’s your responsibility to provide the correct count to the method.

So, what do we actually gain by doing that? From my benchmarks, this improved ToArray is about twice as fast as the built-in one, for a long sequence (tested with 1,000,000 elements). This is pretty good!

Note that we can improve ToList in the same way, by using the List<T> constructor that lets us specify the initial capacity:

public static List<TSource> ToList<TSource>(this IEnumerable<TSource> source, int count)
{
    if (source == null) throw new ArgumentNullException("source");
    if (count < 0) throw new ArgumentOutOfRangeException("count");
    var list = new List<TSource>(count);
    foreach (var item in source)
    {
        list.Add(item);
    }
    return list;
}

In this case, the performance gain is not as big as for ToArray (about 25% instead of 50%), probably because the list doesn’t need to be trimmed, but it’s not negligible.

Obviously, a similar optimization could be made to ToDictionary as well, since the Dictionary<TKey, TValue> class also has a constructor that lets us specify the initial capacity.
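
Here’s what such an overload might look like, following the same pattern (a sketch along the same lines, not necessarily what ends up in the library):

public static Dictionary<TKey, TSource> ToDictionary<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    int count)
{
    if (source == null) throw new ArgumentNullException("source");
    if (keySelector == null) throw new ArgumentNullException("keySelector");
    if (count < 0) throw new ArgumentOutOfRangeException("count");
    var dictionary = new Dictionary<TKey, TSource>(count);
    foreach (var item in source)
    {
        dictionary.Add(keySelector(item), item);
    }
    return dictionary;
}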

The improved ToArray and ToList methods are available in my Linq.Extras library, which also provides many useful extension methods for working on sequences and collections.

Easily convert file sizes to human-readable form


If you write an application that has anything to do with file management, you will probably need to display the size of the files. But if a file has a size of 123456789 bytes, it doesn’t mean that you should just display this value to the user, because it’s hard to read, and the user usually doesn’t need 1-byte precision. Instead, you will write something like 118 MB.

This should be a no-brainer, but there are actually a number of different ways to display byte sizes… For instance, there are several co-existing conventions for units and prefixes:

  • The SI (International System of Units) convention uses decimal multiples, based on powers of 10: 1 kilobyte is 1000 bytes, 1 megabyte is 1000 kilobytes, etc. The prefixes are the ones from the metric system (k, M, G, etc.).
  • The IEC convention uses binary multiples, based on powers of 2: 1 kibibyte is 1024 bytes, 1 mebibyte is 1024 kibibytes, etc. The prefixes are Ki, Mi, Gi etc., to avoid confusion with the metric system.
  • But neither of these conventions is commonly used: the customary convention is to use binary multiples (1024), but decimal prefixes (K, M, G, etc.).

Depending on the context, you might want to use either of these conventions. I’ve never seen the SI convention used anywhere; some apps (I’ve seen it in VirtualBox for instance) use the IEC convention; most apps and operating systems use the customary convention. You can read this Wikipedia article if you want more details: Binary prefix.

OK, so let’s choose the customary convention for now. Now you have to decide which scale to use: do you want to write 0.11 GB, 118 MB, 120564 KB, or 123456789 B? Typically, the scale is chosen so that the displayed value is between 1 and 1024.
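
For illustration, here’s a minimal sketch of that approach using the customary convention (this is not the HumanBytes implementation):

static string ToHumanReadable(long bytes)
{
    // Customary convention: multiples of 1024, metric-style prefixes.
    string[] units = { "B", "KB", "MB", "GB", "TB", "PB" };
    double value = bytes;
    int unit = 0;
    while (value >= 1024 && unit < units.Length - 1)
    {
        value /= 1024;
        unit++;
    }
    return string.Format("{0:0.##} {1}", value, units[unit]);
}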

A few more things you might have to consider:

  • Do you want to display integer values, or include a few decimal places?
  • Is there a minimum unit to use (for instance, Windows never uses bytes: a 1 byte file is displayed as 1 KB)?
  • How should the value be rounded?
  • How do you want to format the value?
  • For values less than 1 KB, do you want to use the word “bytes”, or just the symbol “B”?

OK, enough of this! What’s your point?

So as you can see, displaying a byte size in human-readable form isn’t as straightforward as you might have expected… I’ve had to write code to do it in a number of apps, and I eventually got tired of doing it over and over, so I wrote a library that attempts to cover all use cases. I called it HumanBytes, for reasons that should be obvious… It is also available as a NuGet package.

Its usage is quite simple. It’s based on a class named ByteSizeFormatter, which has a few properties to control how the value is rendered:

var formatter = new ByteSizeFormatter
{
    Convention = ByteSizeConvention.Binary,
    DecimalPlaces = 1,
    NumberFormat = "#,##0.###",
    MinUnit = ByteSizeUnit.Kilobyte,
    MaxUnit = ByteSizeUnit.Gigabyte,
    RoundingRule = ByteSizeRounding.Closest,
    UseFullWordForBytes = true,
};

var f = new FileInfo("TheFile.jpg");
Console.WriteLine("The size of '{0}' is {1}", f, formatter.Format(f.Length));

In most cases, though, you will just want to use the default settings. You can do that easily with the Bytes extension method:

var f = new FileInfo("TheFile.jpg");
Console.WriteLine("The size of '{0}' is {1}", f, f.Length.Bytes());

This method returns an instance of the ByteSize structure, whose ToString method formats the value using the default formatter. You can change the default formatter settings globally through the ByteSizeFormatter.Default static property.
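
For example, reusing only the properties shown above:

// Adjust the global defaults once, e.g. at application startup
ByteSizeFormatter.Default.Convention = ByteSizeConvention.Binary;
ByteSizeFormatter.Default.DecimalPlaces = 2;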

A note on localization

Not all languages use the same symbol for “byte”, and obviously the word “byte” itself is different across languages. Currently the library only supports English and French; if you want your language to be supported as well, please fork, add your translation, and make a pull request. There are only 3 terms to translate, so it shouldn’t take long 😉.


StringTemplate: another approach to string interpolation


With the upcoming version 6 of C#, there’s a lot of talk on CodePlex and elsewhere about string interpolation. Not very surprising, since it’s one of the major features of that release… In case you were living under a rock during the last few months and you haven’t heard about it, string interpolation is a way to insert C# expressions inside a string, so that they’re evaluated at runtime and replaced with their values. Basically, you write something like this:

string text = $"{p.Name} was born on {p.DateOfBirth:D}";

And the compiler transforms it to this:

string text = String.Format("{0} was born on {1:D}", p.Name, p.DateOfBirth);

Note: the syntax shown above is the one from the latest design notes about this feature. It might still change before the final release, and the current preview build of VS2015 uses a different syntax: “\{p.Name} was born on \{p.DateOfBirth:D}”.

I really love this feature. It’s going to be extremely convenient for things like logging, generating URLs or queries, etc. I will probably use it a lot, especially since Microsoft has listened to community feedback and included a way to customize how the embedded expressions are evaluated (see the part about IFormattable in the design notes).

But there’s one thing that bothers me: since interpolated strings are interpreted by the compiler, they have to be hard-coded; you can’t extract them to resources, which means this feature can’t be used for localization, and we’re stuck with old-fashioned numeric placeholders in localized strings.

Or are we really?

For a few years now, I’ve been using a custom string interpolation engine that can be used like String.Format, but with named placeholders instead of numeric ones. It takes a format string, and an object with properties that match the placeholder names:

string text = StringTemplate.Format("{Name} was born on {DateOfBirth:D}", new { p.Name, p.DateOfBirth });

Obviously, if you already have an object with the properties you want to include in the string, you can just pass that object directly:

string text = StringTemplate.Format("{Name} was born on {DateOfBirth:D}", p);

The result is exactly what you would expect: the placeholders are replaced with the values of the corresponding properties.

In which ways is it better than String.Format?

  • It’s much more readable: a named placeholder tells you immediately which value will go there
  • It’s less error-prone: you don’t need to pay attention to the order of the values to be formatted
  • When you extract the format strings to resources for localization, the translator sees a name in the placeholder, not a number. This gives more context to the string, and makes it easier to understand what the final string will look like.

Note that you can use the same format specifiers as in String.Format. The StringTemplate class parses your format string into one compatible with String.Format, extracts the property values into an array, and calls String.Format.
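
To give an idea of what happens under the hood, here’s a very rough illustration of that transformation (not the actual NString code, which also handles brace escaping and the caching described below):

using System;
using System.Collections.Generic;
using System.Globalization;
using System.Text.RegularExpressions;

static class NaiveTemplate
{
    // Turns "{Name} was born on {DateOfBirth:D}" into "{0} was born on {1:D}",
    // collects the matching property values, and delegates to String.Format.
    public static string Format(string template, object values, IFormatProvider provider = null)
    {
        var args = new List<object>();
        string format = Regex.Replace(template, @"\{(\w+)(,[^}:]*)?(:[^}]*)?\}", match =>
        {
            string propertyName = match.Groups[1].Value;
            object value = values.GetType().GetProperty(propertyName).GetValue(values, null);
            args.Add(value);
            // Re-emit any alignment/format specifier unchanged, with a numeric index.
            return "{" + (args.Count - 1) + match.Groups[2].Value + match.Groups[3].Value + "}";
        });
        return string.Format(provider ?? CultureInfo.CurrentCulture, format, args.ToArray());
    }
}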

Of course, parsing the string and extracting the property values with reflection every time would be very inefficient, so there are some optimizations:

  • each distinct format string is only parsed once, and the result of the parsing is added to a cache, to be reused every time.
  • for each property used in a format string, a getter delegate is generated and cached, to avoid using reflection every time.

This means that the first time you use a given format string, there will be the overhead of parsing and generating the delegates, but subsequent usages of the same format string will be much faster.
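
For the second point, one common way to build such a getter delegate is with expression trees; here’s a sketch of the technique (my illustration, not necessarily the exact code used in the library):

using System;
using System.Linq.Expressions;
using System.Reflection;

static class GetterCache
{
    // Builds a delegate equivalent to: obj => (object)((TDeclaring)obj).Property
    // Compiling it once and caching it avoids paying the cost of reflection on every call.
    public static Func<object, object> CreateGetter(PropertyInfo property)
    {
        var instance = Expression.Parameter(typeof(object), "instance");
        var body = Expression.Convert(
            Expression.Property(
                Expression.Convert(instance, property.DeclaringType),
                property),
            typeof(object));
        return Expression.Lambda<Func<object, object>>(body, instance).Compile();
    }
}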

The StringTemplate class is part of a library called NString, which also contains a few extension methods to make string manipulations easier. The library is a PCL that can be used with all .NET flavors except Silverlight 5. A NuGet package is available here.

Passing parameters by reference to an asynchronous method


Asynchrony in C# 5 is awesome, and I’ve been using it a lot since it was introduced. But there are a few annoying limitations; for instance, you cannot pass parameters by reference (ref or out) to an asynchronous method. There are good reasons for that; the most obvious is that if you pass a local variable by reference, it is stored on the stack, but the current stack frame won’t remain available during the whole execution of the async method (only until the first await), so the location of the variable won’t exist anymore.

However, it’s pretty easy to work around that limitation: you only need to create a Ref<T> class to hold the value, and pass an instance of this class by value to the async method:

async void btnFilesStats_Click(object sender, EventArgs e)
{
    var count = new Ref<int>();
    var size = new Ref<ulong>();
    await GetFileStats(tbPath.Text, count, size);
    txtFileStats.Text = string.Format("{0} files ({1} bytes)", count, size);
}

async Task GetFileStats(string path, Ref<int> totalCount, Ref<ulong> totalSize)
{
    var folder = await StorageFolder.GetFolderFromPathAsync(path);
    foreach (var f in await folder.GetFilesAsync())
    {
        totalCount.Value += 1;
        var props = await f.GetBasicPropertiesAsync();
        totalSize.Value += props.Size;
    }
    foreach (var f in await folder.GetFoldersAsync())
    {
        await GetFileStats(f.Path, totalCount, totalSize);
    }
}

The Ref<T> class looks like this:

public class Ref<T>
{
    public Ref() { }
    public Ref(T value) { Value = value; }
    public T Value { get; set; }
    public override string ToString()
    {
        T value = Value;
        return value == null ? "" : value.ToString();
    }
    public static implicit operator T(Ref<T> r) { return r.Value; }
    public static implicit operator Ref<T>(T value) { return new Ref<T>(value); }
}

As you can see, it’s pretty straightforward. This approach can also be used in iterator blocks (i.e. yield return), which also don’t allow ref and out parameters. It also has an advantage over standard ref and out parameters: you can make the parameter optional if, for instance, you’re not interested in the result (obviously, the callee must handle that case appropriately), as shown in the sketch below.
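
For instance, here’s a hypothetical variant of the method above where the size parameter is optional:

async Task GetFileCount(string path, Ref<int> totalCount, Ref<ulong> totalSize = null)
{
    var folder = await StorageFolder.GetFolderFromPathAsync(path);
    foreach (var f in await folder.GetFilesAsync())
    {
        totalCount.Value += 1;
        // totalSize is optional: only fetch the size if the caller asked for it.
        if (totalSize != null)
        {
            var props = await f.GetBasicPropertiesAsync();
            totalSize.Value += props.Size;
        }
    }
}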

Easy unit testing of null argument validation


When unit testing a method, one of the things to test is argument validation: for instance, ensuring that the method throws an ArgumentNullException when a null argument is passed for a parameter that isn’t allowed to be null. Writing this kind of test is very easy, but it’s also a tedious and repetitive task, especially if the method has many parameters… So I wrote a method that automates part of this task: it tries to pass null for each of the specified arguments, and asserts that the method throws an ArgumentNullException. Here’s an example that tests a FullOuterJoin extension method:

[Test]
public void FullOuterJoin_Throws_If_Argument_Null()
{
    var left = Enumerable.Empty<int>();
    var right = Enumerable.Empty<int>();
    TestHelper.AssertThrowsWhenArgumentNull(
        () => left.FullOuterJoin(right, x => x, y => y, (k, x, y) => 0, 0, 0, null),
        "left", "right", "leftKeySelector", "rightKeySelector", "resultSelector");
}

The first parameter is a lambda expression that represents how to call the method. In this lambda, you should only pass valid arguments. The following parameters are the names of the parameters that are not allowed to be null. For each of the specified names, AssertThrowsWhenArgumentNull will replace the corresponding argument with null in the provided lambda, compile and invoke the lambda, and assert that the method throws an ArgumentNullException.

Using this method, instead of writing a test for each of the arguments that are not allowed to be null, you only need one test.

Here’s the code for the TestHelper.AssertThrowsWhenArgumentNull method (you can also find it on Gist):

using System;
using System.Linq;
using System.Linq.Expressions;
using NUnit.Framework;

namespace MyLibrary.Tests
{
    static class TestHelper
    {
        public static void AssertThrowsWhenArgumentNull(Expression<TestDelegate> expr, params string[] paramNames)
        {
            var realCall = expr.Body as MethodCallExpression;
            if (realCall == null)
                throw new ArgumentException("Expression body is not a method call", "expr");

            var realArgs = realCall.Arguments;
            var paramIndexes = realCall.Method.GetParameters()
                .Select((p, i) => new { p, i })
                .ToDictionary(x => x.p.Name, x => x.i);
            var paramTypes = realCall.Method.GetParameters()
                .ToDictionary(p => p.Name, p => p.ParameterType);

            foreach (var paramName in paramNames)
            {
                var args = realArgs.ToArray();
                args[paramIndexes[paramName]] = Expression.Constant(null, paramTypes[paramName]);
                var call = Expression.Call(realCall.Method, args);
                var lambda = Expression.Lambda<TestDelegate>(call);
                var action = lambda.Compile();
                var ex = Assert.Throws<ArgumentNullException>(action, "Expected ArgumentNullException for parameter '{0}', but none was thrown.", paramName);
                Assert.AreEqual(paramName, ex.ParamName);
            }
        }

    }
}

Note that it is written for NUnit, but can easily be adapted to other unit test frameworks.

I used this method in my Linq.Extras library, which provides many additional extension methods for working with sequences and collections (including the FullOuterJoin method mentioned above).
