Update the updater to update

update-windows-update

I formatted one of my laptops and reinstalled Windows the other day. Whilst reinstalling software and bringing it up to date, Windows Update gave me this ridiculous message:

“To check for updates, you must first install an update for Windows Update.”

I guess software has become so complex that even updates require various levels of abstraction. 🙂

C# 6 Preview: Changes in VS2015 CTP 5

C# 6 and Visual Studio 2015 are both prerelease software at the moment, and thus they are subject to change at any time.

There have indeed been some changes to C# 6 that shipped in CTP 5 but don’t seem to have been mentioned anywhere.

String interpolation

If you’ve read my original article on string interpolation, you’ll know that the syntax was expected to change. Well, that has happened, and the expected syntax is now in place. So now, you can use string interpolation to write special formatted strings as follows:

            var name = "Chuck";
            var surname = "Norris";

            string message = $"The man is {name} {surname}";

The placeholders in the curly brackets no longer require a prefixing backslash, but a dollar sign is necessary at the start of the string to allow them to be interpreted properly.

Just like before, the placeholders are not restricted to simple variables. They may contain arbitrary expressions including properties and methods:

            var numbers = new int[] { 1, 2, 3, 4, 5 };

            string message = $"There are {numbers.Length} numbers, and their average is {numbers.Average()}";

You can use format strings in placeholders. Unlike in the original implementation, they don’t need to be enclosed in quotes, even if they contain operators (e.g. dashes, which would normally be interpreted as minus signs). However, you do need to be a little careful with them, as any extra space after the colon (:) ends up in the output:

            var dateOfBirth = DateTime.Now;
            string message = $"It's {dateOfBirth:yyyy-MM-dd}!";

using static

I’ve also written about the using static feature before, which lets you declare a static class among the using statements and then omit the class prefix when using static methods.

The syntax has changed, and you will need to prefix the static class with the keyword “static” in the declaration. This is a good thing, because it eliminates any confusion over which using statements refer to namespaces and which refer to static classes.

    using static System.Console;

    class Program
    {
        static void Main(string[] args)
        {   
            WriteLine("Hello Lilly!");

            ReadLine();
        }
    }

VS2015 Preview: Layout Management

In earlier versions of Visual Studio, if you happened to mess up your window layout, you could reset it back to the default layout by using the appropriate item in the Window menu:

vs2015-windowlayouts-duringreset

This rearranges the docked windows to whatever you originally had when Visual Studio was installed:

vs2015-windowlayouts-afterreset

That’s nice and all. But you may have noticed some new items above “Reset Window Layout” in the Window menu which are pretty handy when it comes to managing your window layouts.

For instance, when developing a new WPF application, the Toolbox can come in handy for those without much experience with XAML. So, after bringing up the Toolbox window, you can save the current layout:

vs2015-windowlayouts-duringsave

To save a layout, you need to give it a name:

vs2015-windowlayouts-promptsave

…and if that name happens to already exist, you’re asked whether you want to replace it (this is how you update saved layouts):

vs2015-windowlayouts-promptsaveexisting

You can then load (apply) these layouts by selecting them from the list in the Window menu, or by using the appropriate shortcut key (available for the first nine layouts in order). For instance, I saved this alternate layout suitable for unit tests, and I can apply it like this:

vs2015-windowlayouts-duringapply

…and then, like magic, the selected layout is applied:

vs2015-windowlayouts-afterapply

There’s also a menu item for management of these layouts:

vs2015-windowlayouts-managemenuitem

This opens a dialog which allows you to rename, delete, or reorder layouts.

vs2015-windowlayouts-managedialog

Reordering layouts has the effect of reassigning their keyboard shortcuts, since they are assigned in order to the first nine (from Ctrl+Alt+1 to Ctrl+Alt+9).

So, there you have it! Window layout management is yet another new feature in Visual Studio 2015 to improve your productivity.

VS2015 Preview: Live Static Code Analysis

In Visual Studio 2015, the new C# and VB .NET compilers are based on the .NET Compiler Platform known as Roslyn. This allows third-party tools to use the compiler itself to query the structure of code, rather than having to redo the compiler’s job. It also provides the ability to fix the code by modifying its structure.
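
As a rough illustration of what that means, here’s a minimal sketch (not taken from any analyzer, and assuming the Microsoft.CodeAnalysis.CSharp NuGet package is referenced) that uses the Roslyn API to parse some code and query its structure:

    using System;
    using System.Linq;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    class RoslynQueryDemo
    {
        static void Main()
        {
            // Parse a snippet of C# into a syntax tree using the compiler itself
            var tree = CSharpSyntaxTree.ParseText(
                "class C { void Empty() { } int Add(int a, int b) { return a + b; } }");

            // Query the tree for every method declaration it contains
            var methods = tree.GetRoot()
                              .DescendantNodes()
                              .OfType<MethodDeclarationSyntax>();

            foreach (var method in methods)
                Console.WriteLine(method.Identifier.Text); // Empty, Add
        }
    }

Analyzers build on exactly this kind of access, packaged so that they can run inside the IDE and as part of the build.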

You can use the NuGet Package Manager in VS2015 to install analyzers. At the moment there are very few available, and they’re still prerelease software. For this demonstration we’ll use the CodeCracker C# analyzer:

vs2015-analyzers-nuget

Once you install the package, you’ll see that the analyzer has been added under a special Analyzers node under the References in Solution Explorer. If you expand the analyzer, you can see all the rules that it enforces:

vs2015-analyzers-solution-explorer

As you can see, the code analysis rules may be enforced at various levels. Those marked as errors will result in actual compiler errors, and obviously cause the build to fail.

Any rules broken by the code will then show up in the Error List, which in VS2015 has been enhanced so that you can see more information about the error/rule and click on the error code link to go to its online documentation (at the time of writing this article, CodeCracker’s error code links are broken):

vs2015-analyzers-error-list

The best thing about live static code analysis is that it’s live: the rules are checked as you write your code, and you don’t even need to build for them to be evaluated.

Warnings can be dealt with by moving the caret within the relevant code and pressing Ctrl+. (Control Dot), after which the Quick Actions become available. Code analyzers don’t need to limit themselves to complaining: if a fix is available, you’ll have the option to apply it, and you can preview the changes as with any other refactoring:

vs2015-analyzers-fix

If you know better than the code analyzer, you can opt to suppress the warning instead:

vs2015-analyzers-suppress

Code analysis allows rules to be enforced on your team’s code. Roslyn facilitates this by making it easy to plug in different code analyzers (and hopefully in future there will be a wider range to choose from) and by running code analysis live, without the need to build the project.

Note: this demonstration used the CodeCracker C# analyzer in order to show the different levels of rules (e.g. warnings, errors, etc), since the StyleCop analyzer’s rules are all warnings. The CodeCracker C# analyzer is in quite a mess at the time of writing, with missing fixes, broken rule documentation links, and countless grammatical errors. But it’s prerelease, so we’ll pretend that’s a good excuse and forgive it for its sins.

Deserializing Derived Types with JSON.NET

If you’re using Json.NET to serialize objects which involve inheritance hierarchies, there’s a little issue you might run into.

Let’s say we have a class Person:

    public class Person
    {
        public string Name { get; set; }
    }

…and we also have another class Employee which derives from Person:

    public class Employee : Person
    {
        public decimal Salary { get; set; }
    }

Then I can write the following little Console Application (in which Json.NET is installed via NuGet):

            var employee = new Employee() { Name = "John", Salary = 1000m };

            var json = JsonConvert.SerializeObject(employee);

            var deserialized = JsonConvert.DeserializeObject<Person>(json);

We’re serializing an Employee (which derives from Person), and then deserializing into a Person (the base class). Due to polymorphism, we’d expect the result to hold a reference to an Employee instance complete with Salary. Alas, this is not the case:

json-derivedclasses-problem

As you can see, the result is actually of type Person, and the Salary goes missing in the deserialization.

Apparently the solution is to turn on Type Name Handling. The code is thus transformed:

            var settings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.All };

            var employee = new Employee() { Name = "John", Salary = 1000m };

            var json = JsonConvert.SerializeObject(employee, settings);

            var deserialized = JsonConvert.DeserializeObject<Person>(json, settings);

Once we create an instance of JsonSerializerSettings with TypeNameHandling turned on, we can pass that object as a parameter to the serialization/deserialization methods to obtain the correct result:

json-derivedclasses-solution

As you can see, the deserialized result is now an Employee, complete with Salary. You’ll also notice that the actual serialized JSON has changed: there is now type information in it, and that’s what allows the correct type to be deserialized.
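
For reference, the serialized JSON with TypeNameHandling.All looks something along these lines (the exact type and assembly names depend on your project, so treat this as illustrative):

    {
      "$type": "JsonDemo.Employee, JsonDemo",
      "Salary": 1000.0,
      "Name": "John"
    }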

So why is this functionality not enabled by default? Part of it is probably the overhead: the type metadata bloats the JSON and ties it to .NET type names. It’s also worth noting that Json.NET’s documentation advises caution with TypeNameHandling when deserializing JSON from untrusted sources, so leaving it off is the safer default.

What’s in a Job Title?

The companies I’ve worked for so far have always had some kind of hierarchical organisation. For instance, you start off as a Software Developer, then you are promoted to Senior Software Developer, and so on. However, I’m aware that there are other companies which prefer to have a flat hierarchy, and keep job titles to a minimum.

Well, what’s in a job title, anyway? Does it really matter what your job title is?

Motivation

There’s this scene from the film “Kingdom of Heaven“, where the main character (Balian) selects a peasant and knights him on the spot. The bishop is horrified.

Bishop: “Who do you think you are? Will you alter the world? Does making a man a knight make him a better fighter?”

Balian: “Yes.”

You see, making a man a knight doesn’t give him any special power. But he knows that he is now a knight, and this means that he is responsible for upholding the duties that come with it.

On the job, it’s pretty much the same. Becoming a Senior Software Developer does not make you any more capable at your work than turning 18 makes you a competent driver. But it does place you among a small group of accomplished and trusted developers, and as such you will work a lot harder to show that you deserve the title. It also means that you will most likely take more initiative in your work, and go beyond the scope of your individual duties by guiding others in performing theirs.

A little recognition goes a long way in motivating individuals.

Categorisation

Some companies justify giving all their employees the same job titles or salaries by appealing to fairness, so that everyone is treated equally.

Such companies should wake up and realise that people aren’t all equal. Some work harder than others, and those people do not deserve to be lumped together with those who do a miserable job.

The CV Factor

When applying for a job, having “Senior Software Developer” on your CV looks a lot better than “Software Developer”.

It is true that job titles mean different things from one company to another. In some companies, a Software Developer is merely responsible for coding; while in others, he might actually be managing a whole project. Hiring companies should ideally look beyond the job title and ask about the roles that the candidate played in his employment.

However, it is also true that companies and recruitment agencies receiving a lot of job applications often resort to simple face-value filtering in order to reduce the number of applications.

Consider this: individual A has been a loyal and hard-working Software Developer for 20 years, and his company never gave him a promotion. His friend, individual B, has been promoted to Senior Software Developer and then Lead Developer, even though his skills and responsibilities are less than those of individual A. When recruiters look at their CVs at face value, who will they prefer? What will they think about individual A when they see that he’s had the same role for 20 years?

VS2015 Preview: NuGet 3 Preview

Well, what do you know. It seems like NuGet’s been to the hairdresser recently.  In fact, in Visual Studio 2015 you’ll find the preview of NuGet 3.0, which looks like this:

vs2015-nuget3-layout

It’s obviously changed quite a bit from the version we use today, which for the sake of comparison is this:

vs2015-old-nuget

The most obvious thing to notice is that NuGet is now a first-class citizen with its own page in Visual Studio, rather than being a little modal window. There are other layout improvements you’ll notice, such as the fact that they’ve done away with the separation between installed, installable and updatable packages. In fact, the new NuGet experience is a unified one in which you can filter the packages you want to see: All, Installed, or Update Available.

Usability is not all that is being improved in NuGet 3.0. In fact, there are a bunch of new features that weren’t in NuGet before. One that I am very happy to see is the ability to select the package version to install, rather than having to resort to the Package Manager Console to install specific versions.

Another really cool feature is package consolidation. Have you ever had something like 4 different versions of JSON.NET across your solution, and then had to painfully converge them into a single version by hand? NuGet 3 lets you consolidate different versions of the same package pretty easily. Just pick the version you want, select the “Consolidate” action, and click the “Consolidate” button:

vs2015-nuget3-consolidate

There are also a few other features, such as a Preview button showing what actions will be taken when you execute a change, and a couple of advanced options. For full details on what’s new, check out the official announcement.

Oh, and just in case you’re curious… that funny thingy next to the Search box that looks like a ship’s wheel is actually supposed to be a gear icon, because it takes you into the NuGet settings.

Simple Validation with Data Annotations

Data Annotations are attributes which you can apply to properties in order to specify validity constraints, such as required fields, string lengths, or numeric ranges. They are quite useful as part of bigger frameworks such as ASP .NET MVC or WPF. This article shows very simple examples of their usage in a console application.

Adding Data Annotations

In order to add data annotations, you’ll first need to add a reference to System.ComponentModel.DataAnnotations.dll. Once that is done, add a simple class and decorate the properties with attributes from that namespace:

    public class Person
    {
        [Required]
        public string Name { get; set; }

        [Range(18, 60)]
        public int Age { get; set; }
    }

There are many predefined attributes you can use, and it is also possible to create your own by creating a class that derives from ValidationAttribute.
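
As a rough sketch of the latter (a hypothetical attribute, not used elsewhere in this article), a custom attribute typically just overrides IsValid():

    using System.ComponentModel.DataAnnotations;
    using System.Linq;

    // Hypothetical example: fails validation if the string contains any digits.
    // Null is considered valid here; leave null-checking to [Required].
    public class NoDigitsAttribute : ValidationAttribute
    {
        public override bool IsValid(object value)
        {
            var text = value as string;
            return text == null || !text.Any(char.IsDigit);
        }
    }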

In the Person class above, we’re using the RequiredAttribute, which causes validation to fail if the string is null, empty, or whitespace; and the RangeAttribute, which requires a number to be within a specified range.

Property Validation

We can validate a single property on our Person object by using the Validator.TryValidateProperty() method:

        static void RunValidateProperty(string value)
        {
            var person = new Person();

            var context = new ValidationContext(person) { MemberName = "Name" };
            var results = new List<ValidationResult>();
            var valid = Validator.TryValidateProperty(value, context, results);
        }

In order to do this, we need to supply three things:

  • A ValidationContext which specifies the property to validate and the object it belongs to;
  • A collection of ValidationResult, which is a glorified error message; and
  • A value for the property that will be checked for validity.

The fact that any value can be checked for a given property makes TryValidateProperty() particularly useful in property setters.
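
Here’s a rough sketch of what that might look like (a hypothetical class, assuming you’re happy with an exception being thrown on invalid input):

    using System.ComponentModel.DataAnnotations;

    public class ValidatedPerson
    {
        private string name;

        [Required]
        public string Name
        {
            get { return name; }
            set
            {
                // Validate the incoming value before assigning it;
                // Validator.ValidateProperty() throws a ValidationException if it is invalid
                var context = new ValidationContext(this) { MemberName = "Name" };
                Validator.ValidateProperty(value, context);
                name = value;
            }
        }
    }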

Let’s see what happens when we try validating the Name property (remember, it’s marked as Required) with a value of null:

dataannotationsintro-required-null

In this case TryValidateProperty() returned false, and a ValidationResult was added to the results collection with the message “The Name field is required.”.

Now if we give it a valid string, it behaves quite differently:

dataannotationsintro-required-valid

TryValidateProperty() returned true, and there are no ValidationResults to report.

Object Validation

While validating a single property is quite useful (e.g. while a particular field is being edited), it is often useful to validate every property in a class (e.g. when submitting data in a form).

This functionality is provided thanks to Validator.TryValidateObject():

        static void RunValidateObject()
        {
            var person = new Person();

            var context = new ValidationContext(person);
            var results = new List<ValidationResult>();
            var valid = Validator.TryValidateObject(person, context, results);
        }

Let’s try it out and see what happens:

dataannotationsintro-validateobject-requiredonly

You’ll notice that validation failed and we got a ValidationResult for the Name field. But we also have an Age property, which is supposed to be between 18 and 60, and yet it still has its default value of zero. Why didn’t that property fail validation too?

This happens due to an awkward default behaviour of TryValidateObject(): out of the box, it only validates properties that are marked as Required. In order to factor other attribute types into the validation, you need to use a different overload of TryValidateObject() which takes a boolean at the end, and set it to true:

        static void RunValidateObject()
        {
            var person = new Person();

            var context = new ValidationContext(person);
            var results = new List<ValidationResult>();
            var valid = Validator.TryValidateObject(person, context, results, true);
        }

And now, the result is much more reasonable:

dataannotationsintro-validateobject-allfields

Limitations

Using attributes for validation is a very useful concept. It allows you to simply attach metadata to properties, and let the validation logic consume those attributes when necessary.

However, data annotations also carry with them the limitations of attributes. Among these is the fact that attributes can only have static values, and so it is not possible to incorporate logic into them (e.g. to have them depend on the value of another property).

Source Code

Check out the source code for this article at the Gigi Labs BitBucket repository.

Streaming Data with ASP .NET Web API and PushStreamContent

This article explains how you can subscribe to a data stream and receive data pushed spontaneously by the server, using the ASP .NET Web API. It is intended as nothing more than a learning exercise. Technologies such as WebSockets, SignalR, WCF or even plain sockets may be more suitable for this kind of thing.

Update 2015-03-14: Full source code for server and client applications is now available.

The Server

Our Web API is going to allow clients to subscribe to a service which will send the price of… something every two seconds. To start off, you will need to create a new Web project with a Web API component, and create a new controller (which I called PriceController).

Then, it is simply a matter of sending back a response whose content is a PushStreamContent:

        [HttpGet]
        public HttpResponseMessage Subscribe(HttpRequestMessage request)
        {
            var response = request.CreateResponse();
            response.Content = new PushStreamContent((a, b, c) =>
                { OnStreamAvailable(a, b, c); }, "text/event-stream");
            return response;
        }

The odd arguments in the lambda are a workaround for an unfortunate ambiguity between PushStreamContent constructors in Web API 2.

The implementation for OnStreamAvailable() is pretty simple:

        private void OnStreamAvailable(Stream stream, HttpContent content,
            TransportContext context)
        {
            var client = new StreamWriter(stream);
            clients.Add(client);
        }

We’re simply wrapping a StreamWriter around the stream, and then keeping it in a ConcurrentBag called “clients”.

The last thing we need is a timer to periodically send the price data. This is what the timer’s Elapsed event looks like:

        private async static void timer_Elapsed(object sender, ElapsedEventArgs e)
        {
            var price = 1.0 + random.NextDouble(); // random number between 1 and 2

            foreach (var client in clients)
            {
                try
                {
                    var data = string.Format("data: {0}\n\n", price);
                    await client.WriteAsync(data);
                    await client.FlushAsync();
                }
                catch(Exception)
                {
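                    // Note: ConcurrentBag.TryTake removes an arbitrary writer, not
                    // necessarily the one that just failed; good enough for this demo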
                    StreamWriter ignore;
                    clients.TryTake(out ignore);
                }
            }
        }

Every 2 seconds, a random number between 1 and 2 is generated and sent to all subscribed clients, using the message format for Server-Sent Events.
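
On the wire, each message therefore looks something like this (illustrative values):

    data: 1.52436

    data: 1.08714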

If an exception occurs, then the client is effectively unsubscribed by removing it from the ConcurrentBag. This is necessary because, as Henrik Nielsen states in this discussion:

Detecting that the TCP connection has been reset is something that the Host (ASP, WCF, etc.) monitors but in .NET 4 neither ASP nor WCF tells us (the Web API layer) about it. This means that the only reliable manner to detect a broken connection is to actually write data to it. This is why we have the try/catch around the write operation in the sample. That is, responses will get cleaned up when they fail and not before.

PriceController – Full Code

All you need to get the PriceController working are the member variable declarations and the static constructor which initialises them. I’m providing the entire class below so that you can just copy it and get up and running.

    // Namespaces needed for this class to compile:
    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Net;
    using System.Net.Http;
    using System.Timers;
    using System.Web.Http;

    public class PriceController : ApiController
    {
        private static ConcurrentBag<StreamWriter> clients;
        private static Random random;
        private static Timer timer;

        static PriceController()
        {
            clients = new ConcurrentBag<StreamWriter>();

            timer = new Timer();
            timer.Interval = 2000;
            timer.AutoReset = true;
            timer.Elapsed += timer_Elapsed;
            timer.Start();

            random = new Random();
        }

        private async static void timer_Elapsed(object sender, ElapsedEventArgs e)
        {
            var price = 1.0 + random.NextDouble(); // random number between 1 and 2

            foreach (var client in clients)
            {
                try
                {
                    var data = string.Format("data: {0}\n\n", price);
                    await client.WriteAsync(data);
                    await client.FlushAsync();
                }
                catch(Exception)
                {
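                    // Note: ConcurrentBag.TryTake removes an arbitrary writer, not
                    // necessarily the one that just failed; good enough for this demo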
                    StreamWriter ignore;
                    clients.TryTake(out ignore);
                }
            }
        }

        [HttpGet]
        public HttpResponseMessage Subscribe(HttpRequestMessage request)
        {
            var response = request.CreateResponse();
            response.Content = new PushStreamContent((a, b, c) =>
                { OnStreamAvailable(a, b, c); }, "text/event-stream");
            return response;
        }

        private void OnStreamAvailable(Stream stream, HttpContent content,
            TransportContext context)
        {
            var client = new StreamWriter(stream);
            clients.Add(client);
        }
    }

Browser Support

To test the PriceController, you could consider firing up a browser and have it do the subscription. However, I’ve found that browser support for this is pretty crap. For instance, Firefox thinks the event stream is something you can download:

pushstreamcontent-firefox

So does IE:

pushstreamcontent-ie

Chrome actually manages to display data; however it sometimes only displays part of a message (the rest is buffered and displayed when the next message is received), and has a habit of giving up:

pushstreamcontent-chrome

When I originally saw these results, I thought I had done something seriously wrong. However, I realised this was not the case when I wrote a client in C# and found that it worked correctly. In fact, let’s do that now.

The Client

This client requires the latest Web API Client Libraries from NuGet. Since writing async Console applications can be a bit messy (see a workaround if you still want to take the Console direction), I wrote a simple WPF application. This just has a “Subscribe” button, and a TextBox which I named “OutputField”.

This is the code that does the subscription and streams the prices from the PriceController:

        private async void SubscribeButton_Click(object sender, RoutedEventArgs e)
        {
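            // Assumes the window declares a field: private HttpClient client;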
            if (this.client == null)
            {
                this.client = new HttpClient();
                var stream = await client.GetStreamAsync("http://localhost:1870/api/Price/Subscribe");

                try
                {
                    using (var reader = new StreamReader(stream))
                    {
                        while (true)
                        {
                            var line = await reader.ReadLineAsync() + Environment.NewLine;

                            this.OutputField.Text += line;
                            this.OutputField.ScrollToEnd();

                        }
                    }
                }
                catch (Exception)
                {
                    this.OutputField.Text += "Stream ended";
                }
            }
        }

Since we’re dealing with a stream, we’re using GetStreamAsync() to talk to the Web API rather than the usual GetAsync(). Then, we can read it as we would any other stream.

And in fact, it works pretty nicely:

pushstreamcontent-wpfclient

Roslyn moves to GitHub

For the past several weeks I’ve been writing about the new features in C# 6 and Visual Studio 2015, which are facilitated by the .NET Compiler Platform, known as Roslyn.

Roslyn’s CodePlex homepage has hosted the project’s source code and related information for a while.

However, over the past few days, the following announcement was posted there:

This coming week (Wednesday is the target, Thursday at the latest) we will be moving the Roslyn Project to live under GitHub, joining the rest of the .NET team over there.

Details:

  • This will be a simple switch – turn off CodePlex, turn on GitHub. You’ll be able to see our checkins on GitHub that same day, for example.
  • Any of your pull requests to our project in GitHub will pile up for a couple of weeks, because we are going to take the opportunity to also streamline our (currently very complex) pull request process – yeah! We’ll reopen in a couple weeks with a much easier process. That, combined with us switching to use Git internally as well at the same time (!), means many fewer moving parts and gets us much closer to the same environment you’ll be using on Roslyn code. It will be so worth it.
    • At this point, I’d advise holding off on any requests sent to CodePlex, and save them for GitHub instead.
  • We’ll be using GitHub Issues for both discussions and bugs after the switch.
  • We will try to move over outstanding bugs from CodePlex, but this is the trickier part of the plan. Stay tuned.
  • We will also do our best to preserve check-in history.
  • We will be under the .NET Foundation over there, as the “Compilers” project.

We will update this page with the forwarding information when the switch is complete mid-week. I’ll also be blogging about some of the additional OSS work we’re about to embark on in a week or two.

–Matt Gertz–
Group Software Engineering Manager, “Roslyn”

It appears that the move is now complete, and you can find all the latest on Roslyn at its new home on GitHub.