Kotan Code 枯淡コード

In search of simple, elegant code


Adding a Configurable Global Route Prefix in ASP.NET Core

This morning I was asked if it was possible to set up a configurable global route prefix in ASP.NET Core applications. I’ve done this in the past using the old (now legacy) Web API as well as with older versions of ASP.NET MVC, but I hadn’t yet tried it with ASP.NET Core.

Before I get into the how, I think it’s worth mentioning a little of the why. In this case, I have an application (and, like all applications, it’s just a microservice) that I run locally but also deploy to a number of environments (clouds or cloud installations). In some of those environments, the application runs as a sub-domain, e.g. myapp.mycompany.com. In others, it runs with a path prefix like mycompany.com/myapp. I don’t want to hard-code this capability into my application; I want the environment to tell the application which mode it’s operating in, without me having to rebuild the app.

To do this, we need to create a new convention. Route conventions are just code that executes an Apply() method against an application model. This Apply() method executes the first time Kestrel receives an inbound request, which is when MVC evaluates the entire application model to build the route table.

Filip W (@filip_woj on Twitter) has put together a sample that shows how to do this. The following snippet is from his code, which can be found on GitHub here.

public class RouteConvention : IApplicationModelConvention
{
    private readonly AttributeRouteModel _centralPrefix;

    public RouteConvention(IRouteTemplateProvider routeTemplateProvider)
    {
        _centralPrefix = new AttributeRouteModel(routeTemplateProvider);
    }

    public void Apply(ApplicationModel application)
    {
        foreach (var controller in application.Controllers)
        {
            var matchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel != null).ToList();
            if (matchedSelectors.Any())
            {
                foreach (var selectorModel in matchedSelectors)
                {
                    selectorModel.AttributeRouteModel = AttributeRouteModel.CombineAttributeRouteModel(_centralPrefix,
                        selectorModel.AttributeRouteModel);
                }
            }

            var unmatchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel == null).ToList();
            if (unmatchedSelectors.Any())
            {
                foreach (var selectorModel in unmatchedSelectors)
                {
                    selectorModel.AttributeRouteModel = _centralPrefix;
                }
            }
        }
    }
}

The cliff notes version of what’s happening here: when the Apply() method is called, we iterate through all of the controllers that the application model knows about. For each of those controllers, we query the selectors that already have an attribute route model. If a controller has at least one matching selector (i.e. it has a Route() attribute on it), we combine that controller’s route attribute with the prefix. If the controller has no matching selectors (no route attributes), we set the prefix as its route.

This has the net effect of prepending the prefix we’ve defined to every single one of the RESTful routes defined in our application. To use this new convention, we can add it while we’re configuring services during startup, as shown in the GitHub demo (note that UseCentralRoutePrefix is an extension method; a sketch of it follows the snippet below):

public class Startup
{
    public Startup(IHostingEnvironment env)
    {}

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(opt =>
        {
            opt.UseCentralRoutePrefix(new RouteAttribute("api/v{version}"));
        });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddDebug();
        app.UseMvc();
    }
}
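
For completeness, the UseCentralRoutePrefix extension method lives in the linked repo; it looks roughly like this (a sketch adapted from that sample):

public static class MvcOptionsExtensions
{
    public static void UseCentralRoutePrefix(this MvcOptions opts, IRouteTemplateProvider routeAttribute)
    {
        // Insert at the front so the prefix convention runs before any other conventions.
        opts.Conventions.Insert(0, new RouteConvention(routeAttribute));
    }
}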

Again, this isn’t my code; these are just extremely useful nuggets that I found on the Strathweb blog. Now that we’ve got a convention and we know how to add it, we can modify the above code slightly so that we can accept an override from the environment, allowing us to deploy to multiple cloud environments.

We don’t really want to read directly from the environment. Instead, we want to use AddEnvironmentVariables when setting up our configuration. This allows us to set the app’s global prefix in a configuration file, and then override it with an environment variable, giving us the best flexibility while developing locally and pushing to a cloud via CI/CD pipelines.

So, our new Startup method looks like this:

public Startup(IHostingEnvironment env)
{
  var builder = new ConfigurationBuilder()
      .SetBasePath(env.ContentRootPath)
      .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
      .AddEnvironmentVariables();
  Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }

And we can modify the ConfigureServices method so that we read the prefix from configuration (which will have already done the overriding logic during the Startup method):

public void ConfigureServices(IServiceCollection services)
{
  // GetSection never returns null; coalesce the section's value instead.
  var prefix = Configuration.GetSection("globalprefix").Value ?? String.Empty;
  services.AddMvc(opt =>
  {
      opt.UseCentralRoutePrefix(new RouteAttribute(prefix));
  });
}

And that’s it. Using the RouteConvention class we found on Filip W’s blog and combining that with environment-overridable configuration settings, we now have an ASP.NET Core microservice that can run as the root of a sub-domain (myapp.foo.com) or at any level of nesting in context paths (foo.com/this/is/myapp) all without having to hard-code anything or recompile the app.

Finally, we can put something like this in our appsettings.json file:

{
  "globalprefix" : "deep/root"
}

And optionally override that with an environment variable of the same name.
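
For example, if the app is running in Cloud Foundry, the override can be applied with the cf CLI (the app name here is hypothetical) and picked up on restage:

cf set-env myapp globalprefix "different/root"
cf restage myapp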

Migrating an ASP.NET Core RC1 Microservice to RC2

In a recent blog post, I talked about how to deploy an ASP.NET Core RC1 service to the cloud. With yesterday’s release of ASP.NET Core RC2, I had to make a number of minor changes to my code in order to get it working both locally and in the cloud.

The biggest change between RC1 and RC2 is architectural. In RC1, the code responsible for prepping the application lived inside the dnx tool, and it was tightly coupled to the notion of building web applications. In RC2, we return to a more explicit model with a Program class and a Main() method, where we write our own code to construct the WebHostBuilder instead of having it done implicitly on our behalf.

First, there are changes to the structure of the project.json file. Rather than go through them all, I’m just going to point you to the modified project.json for the Zombies service. The easiest thing to do is to take a project.json file created by new RC2 tooling and use that as a template to adapt your existing file.
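
For orientation, a minimal RC2-era project.json produced by the new tooling looks roughly like this (the exact version numbers are an assumption from the RC2 timeframe):

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0-rc2-3002702"
    },
    "Microsoft.AspNetCore.Mvc": "1.0.0-rc2-final",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.0-rc2-final"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": [ "dnxcore50", "portable-net45+win8" ]
    }
  }
}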

The next thing I did was add a Program class. You’ll notice that I use the UseStartup<Startup>() method to invoke all of the old startup code I had from RC1:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;

namespace SampleMicroservice
{
  public class Program
  {
    public static void Main(string[] args)
    {
      var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

      var host = new WebHostBuilder()
        .UseConfiguration(config)
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .Build();

      host.Run();
    }
  }
}

The next change is that I had to use the AddCommandLine() method. Without this, our application won’t respond to or care about the --server.urls parameter that gets passed to it by the buildpack when we push to Cloud Foundry.

I had to make a few tiny little changes to the Startup class, but it remains essentially unchanged:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

namespace SampleMicroservice
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
                .AddJsonFile("ZombieConfig.json", optional: true, reloadOnChange: false)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; set; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
            services.AddOptions();
            services.Configure<ZombieOptions>(Configuration);

            services.AddScoped<IZombieRepository, ZombieRepository>();
            services.AddSingleton<IGlobalCounter, GlobalCounter>();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();
            app.UseMvc();
        }
    }
}

The Configure<ZombieOptions>() method is supported by the same extension method as in RC1, except this method is in a new package, so we had to add a dependency on Microsoft.Extensions.Options.ConfigurationExtensions to our project.json.

With all this in place, we can simply run dotnet restore then dotnet build and finally cf push and that’s that… our app is now running on RC2 in Cloud Foundry.

All of the code for the RC2 version of this app can be found in my GitHub repository.

Using Cloud Foundry Bound Services with SteelToe on ASP.NET Core

Before I begin, if you’re not familiar with the concept of backing services or bound services in Cloud Foundry (any of the implementations, including PCF Dev, support service bindings in some form), check out the Pivotal documentation. The short of it is that rather than your application hard-coding information about the backing services with which it communicates, it should instead externalize that configuration. One way of doing this is through Cloud Foundry’s services mechanism.

When you bind services, either user-provided or brokered, they appear in an environment variable given to your application called VCAP_SERVICES. While this is just a simple JSON value, and you could pretty easily parse it on your own, you shouldn’t have to. This is a solved problem and there’s a library available for .NET that includes as part of its foundation some DI-friendly classes that talk to Cloud Foundry’s VCAP_SERVICES and VCAP_APPLICATION environment variables.

This library is called Steel Toe. I will save the history and origin of the name Steel Toe for a future blog post. For now, you can think of Steel Toe as a set of libraries that enable ASP.NET applications to participate as clients in the Netflix OSS microservice ecosystem by talking to Configuration Server and the Discovery Server (Eureka).

So, the first thing you need to do is add a NuGet reference to the assembly SteelToe.Extensions.Configuration.CloudFoundry. To do this, you’ll have to use the feed SteelToe is on, since you won’t find it in the regular Microsoft feed. SteelToe packages can currently be found here:

  • Master – https://www.myget.org/F/steeltoemaster/api/v3/index.json
  • Dev – https://www.myget.org/F/steeltoedev/api/v3/index.json

Once you’ve told either your command-line tools or Visual Studio 2015 where to look for Steel Toe, you can add a reference to the assembly mentioned above (there are more than this one, but for this blog post we’re only covering Cloud Foundry configuration).
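
One way to point the tooling at those feeds is a NuGet.config next to the project; a sketch (the source key names are arbitrary):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="SteelToeMaster" value="https://www.myget.org/F/steeltoemaster/api/v3/index.json" />
    <add key="NuGet" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>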

To add Cloud Foundry’s environment variables as a configuration provider, we just use the AddCloudFoundry() method in our Startup constructor:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .AddEnvironmentVariables()
    .AddCloudFoundry();
Configuration = builder.Build();
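
Once built, the bound services show up as ordinary configuration keys. For instance, reading the URI of a hypothetical bound p-mysql instance directly:

var mysqlUri = Configuration["vcap:services:p-mysql:0:uri"];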

That raw key syntax works, but I like injecting service binding information using the IOptions feature, which converts the service bindings into a collection of POCOs. To use it, our ConfigureServices method might look something like this:

  services.AddMvc();

  services.AddOptions();
  services.Configure<CloudFoundryApplicationOptions>(Configuration);
  services.Configure<CloudFoundryServicesOptions>(Configuration);

This is going to make both our application metadata and our service bindings available as injectable options. We can make use of these easily in a controller. The following is a modified “values” controller borrowed right from the Visual Studio “new project” template for ASP.NET Web API. I’ve added a constructor that allows these options to be injected:

[Route("api/[controller]")]
public class ValuesController : Controller
{
    public ValuesController(IOptions<CloudFoundryApplicationOptions> appOptions,
        IOptions<CloudFoundryServicesOptions> serviceOptions)
    {
        AppOptions = appOptions.Value;
        ServiceOptions = serviceOptions.Value;
    }

    CloudFoundryApplicationOptions AppOptions { get; set; }
    CloudFoundryServicesOptions ServiceOptions { get; set; }

    [HttpGet]
    public Object Get()
    {
        return new
        {
            AppName = AppOptions.ApplicationName,
            AppId = AppOptions.ApplicationId,
            Service = new
            {
                Label = ServiceOptions.Services[0].Label,
                Url = ServiceOptions.Services[0].Credentials["url"].Value,
                Password = ServiceOptions.Services[0].Credentials["password"].Value
            }
        };           
    }
}

Now if I bind a service to my application that has a url credential property and a password credential property, I’ll be able to see them when I issue a GET to /api/values. Here’s sample output from the REST API for my service:

{"AppName":"bindings-example","AppId":"a605f066-8c2c-4ae3-8dbf-b2e7931fc39d","Service":{"Label":"user-provided","Url":"http://foo.bar","Password":"l33t"}}

You should try this out: create an application, copy some of the code from this blog post, push it to Cloud Foundry, bind a couple of user-provided services to it, and see how incredibly easy Steel Toe makes externalizing your backing services configuration.

Deploying ASP.NET 5 (Core) Apps to Windows in Cloud Foundry

Today I found myself sequestered in a room with a pile of super smart people. This is my favorite kind of day. We have already experimented with pushing an ASP.NET Core application to Cloud Foundry on linux, but we haven’t tried out whether it would work on Windows.

There are a bunch of wrong ways to do this, and we discovered a number of them. First, it’s possible (and actually a default in RC1) to use dnu publish to produce an application that needs to be compiled and doesn’t have its dependencies fully vendored. This is not only a bad idea, but it violates a number of the 12 factors, so we can’t have that.

In Release Candidate 1, you need to explicitly specify which runtime you want bundled with the app, as well as indicate that you want your application compiled via the --no-source option:

dnu publish --no-source --runtime active

Assuming we’ve already got a functioning, buildable application and we run this from the application’s project root directory, this will then run through a ton of compile spam and finally publish the app to a directory like bin/output/approot/, but you can override this location as well.

When RC2 comes around, we are likely to see the publish command show up as a parameter to the dotnet command-line tool. Unfortunately, even though I’ve got an early RC2 build, there’s no way of knowing whether that’s going to change, so I don’t want to confuse anyone by speculating as to the final syntax.

Next we go into this directory and modify the web.cmd file. At the very end of this file you’ll see a line that looks like this:

@"%DNX_PATH%" --project "%~dp0packages\SampleMicroservice\1.0.0\root" --configuration Debug web %*

Make note of the %~dp0packages section of this. This substitution is looking at the packages sub-directory within the published approot directory. If we didn’t publish with the --no-source option, then our compiled service would not appear as a package here.

While this will work just fine on a plain vanilla Windows Server box, it’s not going to work in the cloud because cloud applications need to adhere to the port binding factor. This means that the PaaS needs to be able to tell the application on which port to listen, because it is taking care of mapping the inside-container ports to the outside-container ports (e.g. port 80).

So we modify this last line of web.cmd to look like this:

@"%DNX_PATH%" --project "%~dp0packages\SampleMicroservice\1.0.0\root" --configuration Debug --server.urls=%PORT% web %*

We’ve added the %PORT% environment variable, which we know is set by Cloud Foundry for all applications, Windows or Linux. Now our start command will use the application-bundled runtime to launch the application, which will have all of its dependencies vendored locally. The only thing we need to do before we push to Cloud Foundry is ensure that our Windows cell has the right frameworks installed on it. For our test, we made sure that .NET Framework 4.6 was installed.

Now we can push the application, assuming we’re using the default name for the Windows stack:

cf push sample-service -b binary_buildpack -s windows2012R2 -c "approot\web.cmd"

If your application does not expose a legitimate response to a GET of the root (GET /), then you’ll want to push it with --health-check-type none so PCF doesn’t think your app is crashing. Note that you have to specify the location of the web.cmd file as your start command; your app isn’t going to start properly without that.

It would be nice if Microsoft let us supply parameters to alter the generated start command inside web.cmd, but we’ll take what we can get.

While it’s a little bit inconvenient to have to modify the web.cmd generated by the publish step, it isn’t all that difficult to push your ASP.NET 5/Core RC1 application to Pivotal Cloud Foundry on Windows cells.

Debugging Node.js Applications in Cloud Foundry

I just added a post over on Medium describing some steps we took to do remote debugging of a Node.js application running in Cloud Foundry.

Check out the post here: https://medium.com/@KevinHoffman/debugging-node-js-applications-in-cloud-foundry-b8fee5178a09#.dblflhdxm
Migrating to ASP.NET Core in the Cloud: What you will and won’t miss from ASP.NET 5

The folks over at InfoQ have graciously provided a nice list of technology that has been discontinued in .NET Core. You can find their original list here. They present the list without much opinion or color. In this blog post, I’d like to take a look at that list from the point of view of someone considering using ASP.NET Core for cloud native application development.

Reflection

Reflection isn’t gone, it’s just changed a little bit. When you make things designed to work on multiple operating systems without having to change your code, some of your original assumptions about the code no longer apply. Reflection is one of those cases. Throughout my career, the majority of the times I’ve used Reflection in a running, production application have either been for enhanced debug information dumps in logs, or, to be honest, I was working around a complex problem in entirely the wrong way.

If you find yourself thinking that you need reflection for your .NET microservice, ask yourself if you really, really need it. Are you using it because you’re adopting some one-off, barely supported serialization scheme? Are you using it because you’re afraid your regular diagnostics aren’t good enough? I would be shocked if I ever had to use Reflection in 2016 directly when .NET Core already provides me with JSON and XML serializers.

App Domains

Manipulating application domains is something that I would consider a cloud native anti-pattern. Historically, we’ve used AppDomains for code and data isolation and as a logical unit of segmentation for CAS (Code Access Security). If your deployment target is the cloud (or basically any PaaS that uses container isolation), then I contend you should have nothing in your app that does any kind of AppDomain shenanigans. In fact, from the original article, here’s a quote from Microsoft:

For code isolation, we recommend processes and/or containers

Remoting

Ah, Remoting, how do I hate thee? Let me count the nearly infinite ways… The following statement from InfoQ’s article makes me feel old. VERY old.

These days few developers even remember the Remoting library exists, let alone how to use it.

Not only do I remember how to use it, but I wrote entire chapters for popular books on using Remoting. I’ve written custom communication channels for Remoting, implemented distributed trace collection systems that predate Splunk, and other things I shall not admit publicly… all with Remoting.

The bottom line is there is no place for a technology like Remoting in an ecosystem of services all scaling elastically and moving dynamically in the cloud. It’s dinosaur tech and those of us who used to write code against it for a living are likely also dinosaurs.

You’re not going to need this in the cloud, so its absence from .NET Core is a blessing.

Serialization

Most of the tools you need for serialization are still there. Binary serialization is one of those ugly things that reached deep into the bowels of your objects, even private members, to convert POCOs into serialized state.

If you need high-performance binary/byte array serialization of your .NET classes, use protocol buffers. You can use them from .NET Core.
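
As a sketch, assuming the community protobuf-net package is added as a NuGet dependency (the attribute and Serializer types below come from that library):

using System.IO;
using ProtoBuf;

[ProtoContract]
public class Zombie
{
    [ProtoMember(1)]
    public string Name { get; set; }

    [ProtoMember(2)]
    public int Age { get; set; }
}

public static class ZombieSerializer
{
    // Serialize a POCO to a compact byte array.
    public static byte[] ToBytes(Zombie zombie)
    {
        using (var stream = new MemoryStream())
        {
            Serializer.Serialize(stream, zombie);
            return stream.ToArray();
        }
    }

    // Deserialize it back.
    public static Zombie FromBytes(byte[] bytes)
    {
        using (var stream = new MemoryStream(bytes))
        {
            return Serializer.Deserialize<Zombie>(stream);
        }
    }
}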

If you’re doing more traditional service-to-service communication, then you’ll be just fine with JSON and XML. As far as building services for the cloud, you should be blissfully unaware of most serialization gaps between ASP.NET 4.x and .NET Core.

Sandboxing

Sandboxing is something that has traditionally had strong, low-level OS ties. This makes its implementation in a cross-platform framework a little more complicated and subject to least common denominator culling. If you’re just building services, you should be fine, but there’s a sentence in the InfoQ article that is decidedly not cloud-native:

The recommended alternative is to spawn separate processes that run under a user account with restricted permissions

This is not something you should be doing in the cloud. As a cloud native developer, you should be unconcerned with the identity with which your application process runs – this is a detail that is abstracted away from you by your PaaS. If you need credentials to access a backing service, you can deal with that using bound resources or bound services, which is all externalized configuration.

Sandboxing as the article describes isn’t something you should be doing when developing services for the cloud, so you should hopefully be well insulated from any of the changes or removals in .NET Core related to this.

Miscellaneous

There are a handful of miscellaneous changes also mentioned by the InfoQ article:

  • DataTable/DataSet – I feel so. very. old. Many of you will likely have no idea what these things are. This is how your grandparents communicated with SQL data sources prior to the invention of Entity Framework, nHibernate, or LINQ to SQL. That’s right, we wrote code uphill both ways in the snow. Get off my lawn. You will not need these in the cloud.
  • System.DirectoryServices – For some pretty obvious reasons, this doesn’t exist. You shouldn’t need to use this at all. If you need to talk to LDAP, you can do so using simpler interfaces or, better yet, through a token system like OAuth2.
  • System.Transactions – Distributed transactions, at least the traditional kind supported by libraries like this, are a cloud-native anti-pattern. Good riddance.
  • XSL and XmlSchema – Ok, so these might still be handy and I can see a couple types of services that might actually suffer a bit from their absence in the framework. Good news is that .NET Core is open source, so if enough people need this, either Microsoft or someone else will put it in.
  • System.Net.Mail – If you need to send mail from your cloud native service, you should consider using a brokered backing service for e-mailing. Just about every PaaS with a service marketplace has some kind of cloud-based mailing system with a simple REST API.
  • System.IO.Ports – I’m fairly certain you should not be writing code that communicates with physical serial ports in your cloud native service.
  • Workflow – Windows Workflow Foundation is also gone from .NET Core. Good riddance. I did a tremendous amount of work with that beast, and I tried harder than most mere mortals to like it, to embrace it, and to use it for greatness. I was never pleased with anything I produced in WF. The amount of hacky statefulness required to get it working would have immediately put this tech onto the cloud native naughty list anyway.
  • XAML – You do not need XAML to build a cloud service, so this is also no big loss.

Conclusion

The bottom line is that, aside from a few high-friction experiences in the current release candidate, the feature set of ASP.NET Core contains everything you need right now to build microservices for the cloud. The biggest concern isn’t what isn’t in .NET Core, it’s what third party libraries for accessing backing services are either missing or not ready for prime time. That’s where I see the risk in early adoption, not in the core feature set of the framework.

Configuration Options and DI with ASP.NET 5

In ASP.NET 5, we now have access to a robust, unified configuration model that finally gives us the configuration flexibility that we’ve spent the past several years hoping and begging for. Gone are the days of the hideous web.config XML files and, if we’ve done our jobs right, so too are the days of web-dev.config, web-prod.config, and other such abominations that turn out to be cloud native anti-patterns.

The new configuration system explicitly decouples the concept of a configuration source from the actual values accessible through the Configuration object (yes, IConfiguration is always available via DI). When you use yeoman to create an ASP.NET application from a template, you’ll get some boilerplate code in your Startup.cs file that looks something like this:

public Startup(IHostingEnvironment env)
{
    // Set up configuration sources.
    var builder = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

What you see in this code is the application setting up two configuration sources: A JSON file called “appsettings.json” and the collection of key-value pairs pulled from environment variables. You can access these configuration settings using the colon (:) separator for arbitrary depth accessors. For example, the default “appsettings.json” file looks like this:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Verbose",
      "System": "Information",
      "Microsoft": "Information"
    }
  }
}

If you wanted to know the default log level, you could access it using the configuration key string Logging:LogLevel:Default. While this is all well and good, ideally we want to segment our configuration into as many small chunks as possible, so that modules of our app can be configured separately (reducing the surface area of change per configuration modification). One thing that helps us with this is the fact that ASP.NET 5’s configuration system lets us define Options classes (POCOs) that essentially de-serialize arbitrary key-value structures from any configuration source into a C# object.
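
For example, reading that default log level back out of the merged configuration:

// Colon-separated keys walk the JSON hierarchy.
var defaultLogLevel = Configuration["Logging:LogLevel:Default"]; // "Verbose"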

To continue adding to my previous samples, let’s say that we’d like to provide some zombie configuration. This configuration will, in theory, be used to configure the zombie controller’s behavior. First, let’s create a JSON file (this is just for convenience; we could use any number of built-in or third-party config sources):

{
  "ZombiesDescription": "zombies",
  "MaxZombieCount": 22,
  "EnableAdvancedZombieTracking": true
}

And now we need a POCO to contain this grouping of options:

using System;

namespace SampleMicroservice
{
	public class ZombieOptions
	{
		public String ZombiesDescription { get; set; }
		public int MaxZombieCount { get; set; }
		public bool EnableAdvancedZombieTracking { get; set; }
	}
}

We’re almost ready to roll. Next, we need to modify our ZombieController class to use these options. We’re going to continue to follow the inversion of control pattern and simply declare that we now depend on these options being injected from an external source:

namespace SampleMicroservice.Controllers
{
    [Route("api/zombies")]
    public class ZombieController
    {
        private IZombieRepository repository;
        private IGlobalCounter counter;
        private ZombieOptions options;

        public ZombieController(IZombieRepository zombieRepository,
                                IGlobalCounter globalCounter,
                                IOptions<ZombieOptions> zombieOptionsAccessor)
        {
            repository = zombieRepository;
            counter = globalCounter;
            options = zombieOptionsAccessor.Value;
        }

        [HttpGet]
        public IEnumerable<Zombie> Get()
        {
            counter.Increment();
            Console.WriteLine("count: " + counter.Count +
                ", advanced zombie tracking: " + options.EnableAdvancedZombieTracking +
                ", zombie name: '" + options.ZombiesDescription + "'");
            return repository.ListAll();
        }
    }
}

NOTE that there’s a bug in the current RC1 documentation. In my code I use zombieOptionsAccessor.Value, whereas the docs say this property is named Options; Value is the name that compiles on RC1, so YMMV. Also note that we’re not injecting a direct instance of ZombieOptions; we’re injecting an IOptions<T> wrapper around it.

Finally we can modify the Startup.cs to add options support and to configure the ZombieOptions class as an options target:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();
    services.AddOptions();
    services.Configure<ZombieOptions>(Configuration);

    services.AddScoped<IZombieRepository, ZombieRepository>();
    services.AddSingleton<IGlobalCounter, GlobalCounter>();
}

The important parts here are the call to AddOptions(), which enables the options service, and then the call to Configure<ZombieOptions>(Configuration), which uses the options service to pull values from a source (in this case, Configuration) and stuff them into a DI-injected instance of ZombieOptions.

There is one thing that I find a little awkward with the ASP.NET 5 configuration system: all of your sources and values are merged into a single top-level namespace. After you add all your sources, all of the top-level property names become peers of each other in the master IConfiguration instance.

This means that while all the properties of ZombieOptions have meaning together, they are floating around loose at the top level with all other values. If you name one of your properties something like Enabled and someone else has a third party options integration that also has a property called Enabled, those will overlap, and the last-loaded source will set the value for both options classes. If you really want to make sure that your options do not interfere with anyone else’s, I suggest using nested properties. So our previous JSON file would be modified to look like this:

{
  "ZombieConfiguration": {
    "ZombiesDescription": "zombies",
    "MaxZombieCount": 22,
    "EnableAdvancedZombieTracking": true
  }
}

Then the three ZombieOptions properties would move down one level, beneath a new parent key called ZombieConfiguration. At that point, you would only have to worry about name conflicts with other config sources that also contain a property called ZombieConfiguration. One way to wire this up is sketched below.
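
Here’s that sketch, binding the nested section rather than the configuration root (the ZombieOptions POCO itself is unchanged):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddOptions();
    // Bind only the "ZombieConfiguration" subtree to ZombieOptions.
    services.Configure<ZombieOptions>(Configuration.GetSection("ZombieConfiguration"));

    services.AddScoped<IZombieRepository, ZombieRepository>();
    services.AddSingleton<IGlobalCounter, GlobalCounter>();
}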

In conclusion, after this relatively long-winded explanation of ASP.NET 5 configuration options, it finally feels like we have the flexibility we’ve all been craving when it comes to configuring our applications. As I’ll show in a future blog post, this flexibility sets us up quite nicely to work locally and deploy to the cloud while still having rich application configuration, without returning to the dark times of web.config.

Dependency Injection and Services in ASP.NET 5

In a previous blog post, I talked about how to build a microservice using ASP.NET 5. While it was a trivial, contrived sample, it did show how easy it is to get started with ASP.NET 5 (did I mention this was on a Mac? That’s never going to get old).

Dependency Injection works on the Inversion of Control principle. Rather than using what has colloquially been referred to as “new glue”, where every class creates instances of the classes on which it depends, you avoid this tight coupling by being explicit about your class’s dependencies, listing them in its constructor. This gives you a ton more flexibility and, most importantly, makes it much easier to test your classes in isolation without resorting to terrible, space-time-altering hacks.

The old (“new glue”) way:

public MyConstructor() {
  subordinateClass1 = new SubordinateClass1();
  subordinateClass2 = new SubordinateClass2();
}

The new (DI) way:

public MyConstructor(ISubordinate1 subordinate1, ISubordinate2 subordinate2) 
{
  this.subordinate1 = subordinate1;
  this.subordinate2 = subordinate2;
}

Among the many advantages of the second way is that the class is usable both with and without a formal dependency injection framework, and is far more testable than the first option.
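
To make the testability claim concrete, here’s a minimal sketch (every name in it is hypothetical): because dependencies arrive through the constructor, hand-rolled fakes can stand in for the real implementations, with no container or mocking framework required.

public interface ISubordinate1 { int Value { get; } }
public interface ISubordinate2 { void Record(int value); }

// A fake that returns a canned response.
public class FakeSubordinate1 : ISubordinate1
{
    public int Value { get { return 42; } }
}

// A fake that captures what was passed to it, so a test can assert on it.
public class FakeSubordinate2 : ISubordinate2
{
    public int? Recorded { get; private set; }
    public void Record(int value) { Recorded = value; }
}

// In a test: var target = new MyClass(new FakeSubordinate1(), new FakeSubordinate2());
// Exercise target, then assert on the value of FakeSubordinate2.Recorded.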

So, now that we’ve got a “why do we care about DI?” primer out of the way, let’s take a look at how this might be used to apply to an ASP.NET 5 application.

In the ConfigureServices method of our application startup, we can choose to add built-in services (like ASP.NET MVC, identity, security, Entity Framework, etc.) or services that we have created ourselves, with one of three different lifetime configurations (a registration example follows the list):

  • Transient – Every time an instance of this interface is requested, the receiving object gets a new instance.
  • Scoped – This is request scope, so the instance of the injected object will remain intact throughout the lifetime of a web request. This is a particularly useful lifetime.
  • Singleton – A single instance of this object will be dispensed to all requesting injection points.
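
Registration maps one-to-one onto these lifetimes. For example (IEmailFormatter is hypothetical; the other two registrations foreshadow the example below):

services.AddTransient<IEmailFormatter, EmailFormatter>(); // new instance for every injection
services.AddScoped<IZombieRepository, ZombieRepository>(); // one instance per HTTP request
services.AddSingleton<IGlobalCounter, GlobalCounter>();    // one instance for the process lifetime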

Let’s say that we want to replace the hacky controller-embedded code we put in the previous sample for the Get() method of our zombie controller with an actual zombie repository. If we did this the old way, we would new up an instance of the repository inside our controller, incur a tight coupling penalty, and then make it horribly difficult to test said controller. Using DI, we can modify our controller so it looks like this:

[Route("api/zombies")]
public class ZombieController
{
    private IZombieRepository repository;
    private IGlobalCounter counter;

    public ZombieController(IZombieRepository zombieRepository, IGlobalCounter globalCounter)
    {
        repository = zombieRepository;
        counter = globalCounter;
    }

    [HttpGet]
    public IEnumerable<Zombie> Get()
    {
        counter.Increment();
        Console.WriteLine("count: " + counter.Count);
        return repository.ListAll();
    }
}

Now our controller’s dependencies are explicit in the constructor, and are satisfied by some external source, such as our testing framework or a DI framework. Nowhere in this controller will you find “new glue” or tight coupling to a particular implementation of these classes.

To register specific implementations and lifetimes of these objects, we can modify the ConfigureServices method of our Startup class as shown here:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    services.AddScoped<IZombieRepository, ZombieRepository>();
    services.AddSingleton<IGlobalCounter, GlobalCounter>();
}

When we run this application now, a new instance of ZombieRepository will be created for each HTTP request and, because it’s a hacked implementation, it just returns the same array as the previous blog post. In the console we will see a global counter continue to increment beyond the span of individual HTTP requests, and this value will continue to increment until the host process (e.g. Kestrel) is shut down.

While there are too many possibilities to list here, as you go forward building your ASP.NET 5 applications, you should always be asking yourself, “Could this be a service?” Any time you find yourself isolating some piece of functionality into a module, check to see whether it fits nicely into the services framework in ASP.NET. This isn’t just a third-party extension point; remember that ASP.NET 5 is now a decoupled, modular thing, and nearly all of ASP.NET’s own functionality is added to the core through this same services framework.

Pushing an ASP.NET 5 Microservice to Cloud Foundry

In my previous blog post, I walked through the process of creating a lightweight microservice in ASP.NET 5 that builds in a cross-platform fashion, so it can run on Windows, Linux, and OS X. I’m going to gloss over how completely amazing it is that we now live in a time where I can write cross-platform microservices in C# using emacs.

If you’re building microservices, then there’s a good chance you’re thinking about deploying that microservice out into some Platform as a Service provider. For obvious reasons, my favorite PaaS target is Cloud Foundry (you can use any of the commercial providers that rest atop the CF Foundation like Pivotal Web Services, IBM Bluemix, HP, etc).

Thankfully, there is a community supported buildpack for ASP.NET 5. I believe it’s up to date with beta 8 of ASP.NET 5, so there is a minor modification you’ll have to make in order to get your application to launch in Cloud Foundry.

First, go into the project.json file and make sure your commands section looks like this:

"commands": {
    "web": "Microsoft.AspNet.Server.Kestrel",
    "kestrel": "Microsoft.AspNet.Server.Kestrel"
  },

The reason for this is that the buildpack expects to use the default launch command from pre-RC1, kestrel. If you’ve used the RC1 project generator, then the default kestrel launch command is just called web. The easy fix for this, as shown above, is just to create a duplicate command called kestrel. Now you can launch your app with dnx web or dnx kestrel, and that’ll make the buildpack happy.

With that one tiny little change, you can now push your application to the cloud:

cf push zombieservice -b https://github.com/cloudfoundry-community/asp.net5-buildpack.git

If you’re not familiar with the cf command line or Cloud Foundry, go check out Pivotal Web Services, a public commercial implementation of Cloud Foundry.

It will take a little while to deal with uploading the app, gathering the dependencies, and then starting. But, it works! To recap, just so this sinks in: I can write an ASP.NET microservice, in C#, using my favorite IDE (vi!), on my Mac, and deploy it to a container in the cloud, which is running Linux. Oh yeah, and if I really feel like it I can run this on Windows, too.

A few years ago, I would never have believed that something like this would be possible.

Creating a Microservice in ASP.NET 5 RC1

In this blog post, I am going to walk through my experience using ASP.NET 5 (on my Mac!) to create a very simple microservice. I thought I would embrace the new decoupled, componentized nature of .NET 5 (the framework formerly known as vNext), so I tried to use the Routing middleware directly.

I don’t know whether the original authors of the routing middleware intended it to be used directly, but it is an unwieldy beast that absolutely begs to have another layer of abstraction placed on top of it. In fact, it’s so difficult to use that I got about 30 lines into a static extension when I realized I could just use the ASP.NET Web API, which is actually just ASP.NET MVC 6. Confused yet? In classic Microsoft fashion, none of this stuff makes sense until you sit through a codename and versioning seminar.

Cliff notes: ASP.NET Web API 2 (which was in ASP.NET 4.5) is now rolled into ASP.NET MVC 6, which is a part of .NET 5. Clear as mud? Ok, let’s move on.

After installing ASP.NET 5 by going to http://get.asp.net, I made sure that everything worked by running dnvm list to make sure that I was pointing at the coreclr version of .NET. I installed the Yeoman tool to aid in scaffolding, but to be honest, I wanted to avoid generators as much as possible. If this is the new, open, unfettered Microsoft, then I should be able to create my project files with vi and use nothing but command-line tools, right?

When you create a new project (either by hand, if you’re a glutton for punishment, or using a yo generator), you get a stock project.json file. Here’s what mine looks like after running yo aspnet to create an empty ASP.NET Web API project (I’m importing the ASP.NET MVC libraries because, remember, using the Routing middleware directly is an experience chock full o’ suck):

{
  "version": "1.0.0-*",
  "compilationOptions": {
    "emitEntryPoint": true
  },
  "tooling": {
    "defaultNamespace": "SampleMicroservice"
  },

  "dependencies": {
    "Microsoft.AspNet.IISPlatformHandler": "1.0.0-rc1-final",
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final",
    "Microsoft.AspNet.StaticFiles": "1.0.0-rc1-final",
    "Microsoft.Extensions.Configuration.FileProviderExtensions" : "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging.Console": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging.Debug" : "1.0.0-rc1-final"
  },

  "commands": {
    "web": "Microsoft.AspNet.Server.Kestrel"
  },

  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  },

  "exclude": [
    "wwwroot",
    "node_modules",
    "bower_components"
  ],
  "publishExclude": [
    "**.user",
    "**.vspscc"
  ]
}

This is a far cry from the good old days of ridiculous .csproj files and .sln files. Also, I should re-iterate that I’m doing this on my Mac. For those of us who have been using C# since before the 1.0 days, the concept of non-mono, Mac-compiled C# feels a little bit like washing ashore on paradise after being adrift for 15 years (I feel so old… I can remember writing C# in notepad and compiling with csc.exe before Visual Studio even existed).

If you’ve been following along with ASP.NET 5, you know that you now have a Startup.cs file that rigs everything up, and configures your application:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

namespace SampleMicroservice
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            // Set up configuration sources.
            var builder = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json")
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; set; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            // Add framework services.
            services.AddMvc();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            app.UseIISPlatformHandler();

            app.UseStaticFiles();

            app.UseMvc();
        }

        // Entry point for the application.
        public static void Main(string[] args) => Microsoft.AspNet.Hosting.WebApplication.Run<Startup>(args);
    }
}

This is also mostly boilerplate. The highlight here is services.AddMvc(), which, using dependency injection and a pretty robust module architecture, sets your application up for ASP.NET MVC routing and template rendering.

Now that we have that ceremony and configuration out of the way, let’s create a controller (there’s no longer a distinction between an MVC controller and a Web API controller, and that’s a good thing) that serves up the usual array of Zombies:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.Mvc;

namespace SampleMicroservice.Controllers
{
    public class Zombie
    {
        public String Name { get; set; }
        public int Age { get; set; }
    }

    [Route("api/zombies")]
    public class ZombieController : Controller
    {
        [HttpGet]
        public IEnumerable<Zombie> Get()
        {
            return new Zombie[] {
                new Zombie() { Name = "Bob", Age = 21 },
                new Zombie() { Name = "Alfred", Age = 52 }
            };
        }
    }
}

With all this in place, we can run dnu restore (you might need to do this under sudo; RC1 creates files with silly permissions, at least on my Mac). This will vendor our dependencies based on the project.json file. Next, we can run dnx web (which might also need sudo).

If everything worked properly, you can hit http://localhost:5000/api/zombies and get back an array of properly JSON-serialized objects. I won’t go into the details of creating the POST/PUT/DELETE methods on the controller because that hasn’t changed much.

In conclusion, I am truly excited about the prospect of being able to build microservices and web applications in ASP.NET 5. I am even more excited about the fact that I might be able to do this using nothing but my Mac and my favorite non-Microsoft text editor (vi FTW baby!). I was a little disappointed that tapping directly into the Routing middleware was cumbersome and ugly, but slotting in ASP.NET MVC 6 as a service worked like a champ and gave me an annotation-based routing configuration that works just like the old web API.

I am looking forward to exploring ASP.NET 5 further!