All posts by Gigi

The King who Forsook his own Virtues

Bias

As I write this, I can’t help but be conscious of my own bias. I’ve been a fan of the Ultima series of games since childhood. In 2001, I joined the Ultima Dragons Internet Chapter (UDIC) – an online fanclub dedicated to the series – and I’ve now been a part of this community longer than I haven’t. In July 2002, I launched my first website, Dino’s Ultima Page, which was a leading site in the Ultima community for about a decade, and which will turn 16 years old in less than two weeks.

Left to right: Dr. Cat, Starr Long, Denis Loubet facepalming, and Richard Garriott at the Ultima Dragons Internet Chapter 25th Anniversary Bash

Last year, that same UDIC fanclub turned 25 years old, and a big party took place in Disneyland, Anaheim. I travelled all the way to California to be part of it, and like the rest of the people there, I was thrilled that several of the people who worked on the games – essentially our childhood heroes – were present to hang out with their fans.

The Kickstarter

There was similar enthusiasm a few years before that party, in March 2013, when Richard Garriott’s latest company, Portalarium, set up a Kickstarter campaign to fund a spiritual successor to Ultima called Shroud of the Avatar: Forsaken Virtues. The fans – starved for years of creativity and entertainment by Electronic Arts (which currently owns the rights to the Ultima intellectual property), and sick of the failures it produced while trying to make money off its existing fanbase – readily poured their coin into a new game that would be made by some of the same people behind Ultima. The Kickstarter alone raised $1.9m, with additional funding secured after that.

Starr Long and Richard Garriott, speaking at the Ultima Dragons Internet Chapter 25th Anniversary Bash

Faced with this exciting prospect, what do you think a long-standing fan such as myself did?

I simply ignored it.

One reason was that I seldom had time to play games any more. But more importantly, it felt like madness to put money into a game even before it had started development, no matter who was involved. Coming from a country where customer service is abysmal, the last thing I’m going to do is give people my money to do whatever they want with it, without even being able to check some reviews first.

The trainwreck

In hindsight, I’m glad I did. A recent lengthy review by taxalot at RPG Codex (with additional post-mortem insight from the author in the article’s comments) exposes the game as unfinished, buggy, and underwhelming in just about every aspect.

Most notable is that Portalarium tried to appeal both to the existing Ultima fanbase, by promising a single player experience, and to those who wanted an MMORPG.

“And sold they did. The first consequence of this was that if you backed the game for the single player experience… well, you probably gave up hope the moment your bank account was debited. To someone who was looking for a great single player adventure, the monthly emails focused solely on player housing were utterly depressing, an obvious sign that Portalarium had taken your money and were doing whatever the hell they wanted with it. Month after month, the studio unveiled new kinds of houses that you could buy with real money. But why stop at a house? Why not buy a castle? Or a whole town? You could do that too, as a solo player or as a guild to have your own place to regroup. The emphasis on this aspect of the game was truly puzzling. Between that and the monthly dance parties thrown by “DJ Darkstarr” (executive producer Starr Long’s alter ego), one might wonder whether the point was to have exciting adventures or just to create some sort of virtual renaissance fair for everyone to LARP in. In many ways, it felt like Portalarium were increasingly less interested in selling a game than a medieval Second Life service.” — RPG Codex Review: Shroud of the Avatar

Even more maddening is the concept of buying virtual houses with real money, and then having to pay regular taxes on them. As if real-life housing weren’t bad enough – all we needed was to have the same problems in our games.

As you can imagine, this enraged several fans who backed the game based on the promise of Richard Garriott going back to his roots. One of them, who pledged $1500 for the game, was permanently banned from the Shroud of the Avatar forums for questioning the direction of the project in this regard. He recently published the comments he was banned for, along with all the email correspondence that ensued, exposing what seems to be blatant abuse of power and excessive censorship.

The Future of Portalarium

While this whole mess is still unfolding, Portalarium laid off half their team just a few weeks ago, mostly people in their art and design department. This is ironic because, judging by that RPG Codex review, those are the areas where help is most needed.

Meanwhile, in reaction to the same review, Ultima Dragons have been discussing whether the resulting game is the fault of incompetent developers or incompetent management. While this is difficult to ascertain without inside information, one may take a hint from the single Glassdoor review about the company (to be sure, a single review isn’t a very good sample, but it gives an idea):

The email correspondence about the aforementioned banning incident also rings alarm bells.

“It can also be hard to be confronted with your own misbehavior. In fact it can be so hard that many people, like yourself, cannot even face it and instead choose to focus on everything but your own actions.” — Starr Long, email correspondence

Given that this whole incident was a result of trying to stifle criticism, let’s just say I wouldn’t have been too happy to get this kind of response myself, especially from an Executive Producer.

History Repeats Itself

Shroud of the Avatar: Forsaken Virtues was fully released in March 2018 (even if in the pitiful state that the aforementioned review shows). That means it took five years of development, and a whole lot of money. If you’ve been following the history of Ultima, you’ll find this strangely reminiscent of Ultima 9, the last Ultima game, released in 1999. Ultima fans generally consider that game to be a disaster, and often blame EA for how it turned out.

Another thing EA is blamed for is the general fate of the Ultima intellectual property. After Ultima 9, there was pretty much no activity whatsoever for years. More recently, EA decided to reuse the Ultima intellectual property, resulting in a series of failures that were cancelled either before they even launched, or shortly afterwards.

Ultima fans, for instance, generally agree that Lords of Ultima had nothing to do with Ultima other than the name. Ultima Forever: Quest for the Avatar similarly has a few names that fans will remember (including “Lady British”), but little else that feels familiar in terms of story or gameplay. This practice is called name-dropping, and guess what other game does this? That’s right. Shroud of the Avatar: Forsaken Virtues.

One would think that veteran game developers would learn from past blunders (theirs or otherwise), but after all this, the advice to management from that earlier Glassdoor review seems to hit the nail on the head.

Forsaken Virtues Indeed

Ultima 4 received critical acclaim because it brought ethics into an RPG genre that was principally dominated by “kill the bad villain” storylines. The virtues, conceived by Richard Garriott, would be central to all the mainstream Ultima games after that, except for a couple set on different worlds. Ultima 5, for instance, showed what happens when virtues are taken to the extreme.

“Thou shalt not lie, or thou shalt lose thy tongue.” — Ultima 5

If Shroud of the Avatar got nothing else right, it at least has a great name. Forsaken Virtues very much reflects its overall direction. Honesty, for instance, was thrown out the window along with the Kickstarter promises. Compassion is shot down once you read the aforementioned email correspondence. Sacrifice is practised by Portalarium only insofar as other people’s money and their own staff are involved.

As for humility, there are multiple aspects to this. One is that the game tried to be everything (scope creep, anyone?), and thus failed to stand out (or even be decent) in any one department. Another is that the top people behind the game need to get off their pedestal and start listening to their fans.

AWS Lambda .NET Core 2.1 Support Released

Amazon Web Services (AWS) has just announced that its serverless function offering, AWS Lambda, now supports the .NET Core 2.1 runtime, which was released towards the end of May 2018.

Quoting the official announcement:

“Today we released support for the new .NET Core 2.1.0 runtime in AWS Lambda. You can now take advantage of this version’s more performant HTTP client. This is particularly important when integrating with other AWS services from your AWS Lambda function. You can also start using highly anticipated new language features such as Span<T> and Memory<T>.

“We encourage you to update your .NET Core 2.0 AWS Lambda functions to use .NET Core 2.1 as soon as possible. Microsoft is expected to provide long-term support (LTS) for .NET Core 2.1 starting later this summer, and will continue that support for three years. Microsoft will end its support for .NET Core 2.0 at the beginning of October, 2018[2]. At that time, .NET Core 2.0 AWS Lambda functions will be subject to deprecation per the AWS Lambda Runtime Support Policy. After three months, you will no longer be able to create AWS Lambda functions using .NET Core 2.0, although you will be able to update existing functions. After six months, update functionality will also be disabled.

“[1] See Microsoft Support for .NET Core for the latest details on Microsoft’s .NET Core support.
“[2] See this blog post from Microsoft about .NET Core 2.0’s end of life.”

The choice here seems obvious: upgrade and get a faster HttpClient, new language features, and long-term support; or lose support for your functions targeting .NET Core 2.0 (whatever that actually means).

In order to migrate to .NET Core 2.1, you’ll need the latest tooling – either version 1.14.4.0 of the AWS Toolkit for Visual Studio, or version 2.2.0 of the Amazon.Lambda.Tools NuGet package.
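Once the tooling is updated, migrating an existing function is mostly a matter of retargeting the project to .NET Core 2.1. As a rough sketch (assuming the project uses the Amazon.Lambda.Tools CLI tooling as a DotNetCliToolReference – a real project file will typically also reference additional Lambda packages), the relevant parts of the .csproj might look like this:

    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <TargetFramework>netcoreapp2.1</TargetFramework>
      </PropertyGroup>
      <ItemGroup>
        <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="2.2.0" />
      </ItemGroup>
    </Project>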

Check out the official announcement at the AWS blog for more information, including additional tips on upgrading.

Orleans 2.0 Stateless Worker Grains

In this article, we’ll see how to create grains that automatically scale up and down depending on load, in Microsoft Orleans 2.0.

The source code for this article is very similar to that in “Getting Started with Microsoft Orleans 2.0 in .NET Core”, with a few key differences:

  • It has been modified to gracefully stop the silo and gracefully close the client (a rough sketch follows this list).
  • It uses the latest packages at the time of writing this article – Orleans 2.0.3 and OrleansDashboard 2.0.7.
  • It uses a slightly different example, and the load generation has been adapted accordingly.
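Gracefully stopping the silo and closing the client can be sketched roughly as follows. Note that BuildSilo() and BuildClient() are hypothetical placeholders standing in for the silo and client setup from the getting-started article, not methods from the actual source code:

    // Hypothetical placeholders for the silo/client setup from the getting-started article.
    var silo = BuildSilo();          // ISiloHost built with SiloHostBuilder
    await silo.StartAsync();

    var client = BuildClient();      // IClusterClient built with ClientBuilder
    await client.Connect();

    // ... generate load against the grains ...

    await client.Close();            // gracefully close the client connection
    await silo.StopAsync();          // gracefully stop the silo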

Since there’s nothing really new in the client and silo setup, we’ll be focusing mainly on the grain and load generation parts. However, you may find the full source code for this article in the Orleans2StatelessWorkers folder in the Gigi Labs BitBucket repository.

Example Grain

For the sake of example, we’ll imagine that the job of our Orleans cluster is to provide hashing as a service. A client provides an input string, and we’ll have a grain that computes a hash of the string (it doesn’t really matter what hash function it is – we’ll use MD5 in the example) and returns it.

Based on this requirement, we can easily write a grain and its corresponding interface to perform the hash calculation:

    using System;
    using System.Security.Cryptography;
    using System.Text;
    using System.Threading.Tasks;
    using Orleans;

    public interface IHashGeneratorGrain : IGrainWithIntegerKey
    {
        Task<string> GenerateHashAsync(string input);
    }

    public class HashGeneratorGrain : Grain, IHashGeneratorGrain
    {
        private HashAlgorithm hashAlgorithm;

        public HashGeneratorGrain()
        {
            this.hashAlgorithm = MD5.Create();
        }

        public Task<string> GenerateHashAsync(string input)
        {
            // Hash the UTF-8 bytes of the input and return the hash as a base64 string.
            var inputBytes = Encoding.UTF8.GetBytes(input);
            var hashBytes = hashAlgorithm.ComputeHash(inputBytes);
            var hashBase64Str = Convert.ToBase64String(hashBytes);

            return Task.FromResult(hashBase64Str);
        }
    }

Load Generation

Typically, when we talk about actor models, the whole point is to have an instance of an actor (grain in Orleans) per entity ID. For instance, you’d have a grain instance for each Device, Vehicle, BlogPost, Game, User, or whatever other domain object you’re dealing with. In this case, however, our grain is completely stateless, and there is no difference in behaviour between one activation and another. In fact, since the grain ID doesn’t matter, we can just pass in 0 as a sort of convention when requesting a grain of this kind:

var hashGenerator = client.GetGrain<IHashGeneratorGrain>(0);

Once we have an instance of the grain, we can generate some load by creating random strings and invoking the relevant method on the grain repeatedly:

            while (true)
            {
                var randomString = GenerateRandomString();
                var hash = await hashGenerator.GenerateHashAsync(randomString);
                Console.WriteLine(hash);
            }
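The GenerateRandomString() helper isn’t shown in the excerpt above. A minimal sketch of what such a method could look like (an illustrative assumption – the version in the repository may differ):

    // Hypothetical helper: builds a short random alphanumeric string to feed to the grain.
    private static readonly Random random = new Random();

    private static string GenerateRandomString(int length = 10)
    {
        const string chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
        var buffer = new char[length];

        for (int i = 0; i < length; i++)
            buffer[i] = chars[random.Next(chars.Length)];

        return new string(buffer);
    }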

You can monitor the grain’s activity from the Orleans Dashboard (localhost:8080 by default), and as you’d expect, there is only one activation of the grain:

Stateless Worker Grains

This situation is a very good fit for Stateless Worker Grains.

Normally, when you request a grain with a particular ID, you get a single activation – and it is a singleton throughout the cluster, so you would never (bar edge cases involving failover scenarios) get more than one instance of that grain in the cluster. However, if you just add a [StatelessWorker] attribute on the grain…

    [StatelessWorker]
    public class HashGeneratorGrain : Grain, IHashGeneratorGrain

…you’ll see very different behaviour:

Notice how there are now two activations of the HashGeneratorGrain, even though we’re still requesting an instance with ID 0.

When Orleans sees the [StatelessWorker] attribute, it will create a pool of grain activations behind the ID you specify, similar to a load balancer. Those grains are hidden behind that same ID, so you can’t access individual grains in the pool directly (it wouldn’t make any sense to do that). The pool will grow up to the number of CPU cores available on the machine, unless you pass an argument to the attribute specifying otherwise.
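For instance, the attribute’s constructor accepts a maximum number of local workers, so something like the following caps the pool at two activations per silo (the value 2 here is just an arbitrary illustration):

    [StatelessWorker(2)]
    public class HashGeneratorGrain : Grain, IHashGeneratorGrain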

Aside from autoscaling, another important benefit of stateless worker grains is that they are always local. Orleans will always execute a request to a stateless worker on the same silo where the request was generated, spawning a new activation if necessary. This saves the overhead of potentially passing the request to an instance in a different silo (i.e. a remote call), which makes a lot of sense for stateless workers: they are pure logic, so there is no difference between activations running in different places.

Although stateless worker grains are best used for stateless logic (as one would expect), there is nothing preventing their use with state. However, coordinating state between multiple grain activations with the same ID can be complicated. The Stateless Worker Grains documentation describes some patterns where stateless worker grains with state make sense (even though the name then becomes a bit of a misnomer).

Summary

  • Use the [StatelessWorker] attribute to treat a grain as a stateless worker grain.
  • This creates a load-balanced autoscaling pool of grains with the same ID.
  • Requests to stateless worker grains are always local and never incur a remote call.
  • Stateless worker grains may have state, although this is unusual.

Accessing an ASP .NET Core Web Application Remotely

After setting up an empty ASP .NET Core Web Application, it’s easy to quickly run it and see something working, in the form of the usual “Hello World”:

When trying to deploy this somewhere though, you might be disappointed to notice that you can’t access the web application from another machine:

In fact, you’ll notice that you can’t even access it from the same machine if you use the actual hostname rather than localhost.

This is because, by default, Kestrel will listen only on localhost. In order for another machine to access the web application using the server’s hostname, the web application must specify the endpoints on which Kestrel will listen, using code or command-line arguments.

Note: you may also need to open a port in your firewall.

In code, this can be done by invoking UseUrls() in the webhost builder as follows:

        public static IWebHost BuildWebHost(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>()
                .UseUrls("http://myhostname:54691")
                .Build();

Replace “myhostname” with the hostname of the server, and note that the localhost endpoint will still work even though it’s not specified explicitly here.
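If you’d rather not hard-code the hostname, Kestrel also accepts wildcard bindings; for example, the following listens on all network interfaces on the same port (shown here with the same port as above):

    .UseUrls("http://*:54691")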

If you want to pass the endpoint(s) via command line parameters instead, you can do so via the --urls argument. First, you need to change the BuildWebHost() method generated by the project template as per this GitHub comment, to allow command line parameters to be passed to the WebHostBuilder via configuration:

public static IWebHost BuildWebHost(string[] args)
{
    var configuration = new ConfigurationBuilder().AddCommandLine(args).Build();

    return WebHost.CreateDefaultBuilder(args)
        .UseConfiguration(configuration)
        .UseStartup<Startup>()
        .Build();
}

Then, use the --urls argument when invoking dotnet run:

dotnet run --urls http://banshee:54691/

Either of these methods is fine to allow remote machines to access your ASP .NET Core web application.

.NET Core 3 to Support Desktop Applications… Kind of

A few days ago, Microsoft published a blog post titled “.NET Core 3 and Support for Windows Desktop Applications”. Just by reading the title, I’m pretty sure many of us jumped in our seats as thoughts like “WPF on Linux” became a source of excitement.

Image source: .NET Core 3 and Support for Windows Desktop Applications

Unfortunately, however, the excitement turns into a disappointed “Oh. [Awkward silence] OK.” when you read that although .NET Core 3 is planned to support desktop applications built on technologies like Windows Forms and WPF, this support is for Windows only:

“Support for Windows desktop will be added as a set of “Windows Desktop Packs”, which will only work on Windows. .NET Core isn’t changing architecturally with this new version. We’ll continue to offer a great cross-platform product, focused on the cloud. We have lots of improvements planned for those scenarios that we’ll share later.”

The article does mention that this will bring several benefits ranging from performance improvements to deployment options, but this pales in comparison to the prospect of going cross-platform.

But given that they “are planning on releasing a first preview of .NET Core 3 later this year and the final version in 2019”, I can only wonder why they would spend a year doing a huge amount of work that most people won’t even care about, and pass it off as a major release of .NET Core.

Time will tell, but one can get an idea of how people feel about this from the comments in the blog post.

My guess is that this could be part of a long-term strategy to retire the full .NET Framework, rather than bringing any real value to .NET Core or desktop applications.