Kotan Code 枯淡コード

In search of simple, elegant code


Adding a Configurable Global Route Prefix in ASP.NET Core

This morning I was asked if it was possible to set up a configurable global route prefix in ASP.NET Core applications. I’ve done this in the past using the old (now legacy) Web API as well as with older versions of ASP.NET MVC, but I hadn’t yet tried it with ASP.NET Core.

Before I get into the how, I think it’s worth mentioning a little of the why. In this case, I have an application (and, like all applications, it’s just a microservice) that I run locally, but that I also deploy to a number of environments (clouds or cloud installations). In some of those environments, the application runs as a sub-domain, e.g. myapp.mycompany.com. In others, it runs with a path prefix like mycompany.com/myapp. I don’t want to hard-code this capability into my application; I want the environment to be able to tell the application which mode it’s operating in without me having to rebuild the app.

To do this, we need to create a new convention. Route conventions are just code that executes an Apply() method against an application model. This Apply() method is executed the first time Kestrel receives an inbound request, which is when it evaluates the entire system to prepare all the routes.

Filip W (@filip_woj on Twitter) has put together a sample that shows how to do this. The following snippet is from his code, which can be found on GitHub here.

using System.Linq;
using Microsoft.AspNetCore.Mvc.ApplicationModels;
using Microsoft.AspNetCore.Mvc.Routing;

public class RouteConvention : IApplicationModelConvention
{
    private readonly AttributeRouteModel _centralPrefix;

    public RouteConvention(IRouteTemplateProvider routeTemplateProvider)
    {
        _centralPrefix = new AttributeRouteModel(routeTemplateProvider);
    }

    public void Apply(ApplicationModel application)
    {
        foreach (var controller in application.Controllers)
        {
            // Controllers that already have a Route() attribute get the prefix
            // combined with their existing route template.
            var matchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel != null).ToList();
            if (matchedSelectors.Any())
            {
                foreach (var selectorModel in matchedSelectors)
                {
                    selectorModel.AttributeRouteModel = AttributeRouteModel.CombineAttributeRouteModel(_centralPrefix,
                        selectorModel.AttributeRouteModel);
                }
            }

            // Controllers with no route attribute at all simply get the prefix.
            var unmatchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel == null).ToList();
            if (unmatchedSelectors.Any())
            {
                foreach (var selectorModel in unmatchedSelectors)
                {
                    selectorModel.AttributeRouteModel = _centralPrefix;
                }
            }
        }
    }
}

The cliff notes version of what’s happening here: when the Apply() method is called, we iterate through all of the controllers that the application model knows about. For each of those controllers, we query the selectors that have an attribute route model. If a controller has at least one matching selector (it has a Route() attribute on it), then we combine that controller’s route attribute with the prefix. If the controller doesn’t have any matching selectors (there are no route attributes), then we simply set the prefix as its route.

This has the net effect of adding the prefix we’ve defined to every single one of the RESTful routes defined in our application. To use this new convention, we add it while configuring services during startup (also shown in the GitHub demo). Note that UseCentralRoutePrefix is an extension method, sketched after the next snippet, that you can also see in the linked GitHub repo:

public class Startup
{
    public Startup(IHostingEnvironment env)
    {}

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(opt =>
        {
            opt.UseCentralRoutePrefix(new RouteAttribute("api/v{version}"));
        });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddDebug();
        app.UseMvc();
    }
}
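
For reference, the extension method itself is tiny. Here’s a sketch of what it might look like, based on the convention class above; all it does is insert our RouteConvention into MVC’s convention list:

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Routing;

public static class MvcOptionsExtensions
{
    public static void UseCentralRoutePrefix(this MvcOptions opts, IRouteTemplateProvider routeAttribute)
    {
        // Insert at position 0 so the prefix is combined before other conventions run
        opts.Conventions.Insert(0, new RouteConvention(routeAttribute));
    }
}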

Again, this isn’t my code; these are just extremely useful nuggets that I found on the Strathweb blog. Now that we’ve got a convention and we know how to add it, we can modify the above code slightly so that we accept an override from the environment, allowing us to deploy to multiple cloud environments.

We don’t really want to read directly from the environment. Instead, we want to use AddEnvironmentVariables when setting up our configuration. This allows us to set the app’s global prefix in a configuration file, and then override it with an environment variable, giving us the best flexibility while developing locally and pushing to a cloud via CI/CD pipelines.

So, our new Startup method looks like this:

public Startup(IHostingEnvironment env)
{
  var builder = new ConfigurationBuilder()
      .SetBasePath(env.ContentRootPath)
      .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
      .AddEnvironmentVariables();
  Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }

And we can modify the ConfigureServices method so that we read the prefix from configuration (which will have already done the overriding logic during the Startup method):

public void ConfigureServices(IServiceCollection services)
{
  // GetSection never returns null, so read the value and fall back to an empty prefix
  var prefix = Configuration.GetSection("globalprefix").Value ?? String.Empty;
  services.AddMvc(opt =>
  {
      opt.UseCentralRoutePrefix(new RouteAttribute(prefix));
  });
}

And that’s it. Using the RouteConvention class we found on Filip W’s blog and combining that with environment-overridable configuration settings, we now have an ASP.NET Core microservice that can run as the root of a sub-domain (myapp.foo.com) or at any level of nesting in context paths (foo.com/this/is/myapp) all without having to hard-code anything or recompile the app.

Finally, we can put something like this in our appsettings.json file:

{
  "globalprefix" : "deep/root"
}

And optionally override that with an environment variable of the same name.
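
On Cloud Foundry, for instance, that override is a single command away (assuming an app named myapp; the restage makes the new value visible to the running app):

cf set-env myapp globalprefix "api/v2"
cf restage myapp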

Migrating an ASP.NET Core RC1 Microservice to RC2

In a recent blog post, I talked about how to deploy an ASP.NET Core RC1 service to the cloud. With yesterday’s release of ASP.NET Core RC2, I had to make a number of minor changes to my code in order to get it working both locally and in the cloud.

The biggest change between RC1 and RC2 is architectural. In RC1, the thing responsible for prepping the application lived inside the dnx tool, and it was tightly coupled to the notion of building web applications. In RC2, we return to a more explicit model with a Program class and a Main() method, where we write our own code to construct the WebHostBuilder instead of having it done implicitly on our behalf.

First, there are changes to the structure of the project.json file. Rather than go through them all, I’m just going to point you to the modified project.json for the Zombies service. The easiest thing to do is to take a project.json file created by new RC2 tooling and use that as a template to adapt your existing file.
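
To give you a sense of the new shape, here’s an abbreviated sketch of an RC2-style project.json (the package names and versions reflect the RC2 tooling of the day and are illustrative, not copied from the repo):

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0-rc2-3002702"
    },
    "Microsoft.AspNetCore.Mvc": "1.0.0-rc2-final",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.0-rc2-final",
    "Microsoft.Extensions.Configuration.CommandLine": "1.0.0-rc2-final"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": [ "dnxcore50", "portable-net45+win8" ]
    }
  }
}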

The next thing I did was add a Program class. You’ll notice that I use the UseStartup<Startup>() method to invoke all of the old startup code I had from RC1:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;

namespace SampleMicroservice
{
  public class Program
  {
    public static void Main(string[] args)
    {
      var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

      var host = new WebHostBuilder()
        .UseConfiguration(config)
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .Build();

      host.Run();
    }
  }
}

The next change is that I had to use the AddCommandLine() method. Without this, our application isn’t going to respond to or care about the --server.urls parameter that gets passed to it by the buildpack when we push to Cloud Foundry.

I had to make a few tiny little changes to the Startup class, but it remains essentially unchanged:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

namespace SampleMicroservice
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
                .AddJsonFile("ZombieConfig.json", optional: true, reloadOnChange: false)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; set; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
            services.AddOptions();
            services.Configure<ZombieOptions>(Configuration);

            services.AddScoped<IZombieRepository, ZombieRepository>();
            services.AddSingleton<IGlobalCounter, GlobalCounter>();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();
            app.UseMvc();
        }
    }
}

The Configure<ZombieOptions>() method is supported by the same extension method as in RC1, except that it now lives in a new package, so we had to add a dependency on Microsoft.Extensions.Options.ConfigurationExtensions to our project.json.
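
ZombieOptions itself is nothing exotic: it’s a plain POCO whose public properties get bound from matching configuration keys. A hypothetical sketch (the property names here are illustrative, not from the actual repo):

public class ZombieOptions
{
    // Bound from a "ZombieName" key in ZombieConfig.json, appsettings.json,
    // or an environment variable
    public string ZombieName { get; set; }

    // Bound from a "MaxZombies" key, if one exists
    public int MaxZombies { get; set; }
}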

With all this in place, we can simply run dotnet restore then dotnet build and finally cf push and that’s that… our app is now running on RC2 in Cloud Foundry.

All of the code for the RC2 version of this app can be found in my GitHub repository.

Using Cloud Foundry Bound Services with SteelToe on ASP.NET Core

Before I begin, if you’re not familiar with the concept of backing services or bound services in Cloud Foundry (any of the implementations, including PCF Dev, support service bindings in some form), check out the Pivotal documentation. The short of it is that rather than your application hard-coding information about the backing services with which it communicates, it should instead externalize that configuration. One way of doing this is through Cloud Foundry’s services mechanism.

When you bind services, either user-provided or brokered, they appear in an environment variable given to your application called VCAP_SERVICES. While this is just a simple JSON value, and you could pretty easily parse it on your own, you shouldn’t have to. This is a solved problem and there’s a library available for .NET that includes as part of its foundation some DI-friendly classes that talk to Cloud Foundry’s VCAP_SERVICES and VCAP_APPLICATION environment variables.

This library is called Steel Toe. I will save the history and origin of the name Steel Toe for a future blog post. For now, you can think of Steel Toe as a set of libraries that enable ASP.NET applications to participate as clients in the Netflix OSS microservice ecosystem by talking to Configuration Server and the Discovery Server (Eureka).

So, the first thing you need to do is add a NuGet reference to the assembly SteelToe.Extensions.Configuration.CloudFoundry. To do this, you’ll have to use the feed SteelToe is on, since you won’t find it in the regular Microsoft feed. SteelToe packages can currently be found here:

  • Master – https://www.myget.org/F/steeltoemaster/api/v3/index.json
  • Dev – https://www.myget.org/F/steeltoedev/api/v3/index.json

Once you’ve told either your command line tools or Visual Studio 2015 where to go look for Steel Toe, you can add a reference to the Assembly mentioned above (there are more than this one, but for this blog post we’re only covering Cloud Foundry configuration).
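
One way to point your tooling at those feeds is a NuGet.config next to your solution; here’s a sketch (the feed key names are arbitrary):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="SteelToeDev" value="https://www.myget.org/F/steeltoedev/api/v3/index.json" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>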

To add Cloud Foundry’s environment variables as a configuration provider, we just use the AddCloudFoundry method in our Startup method:

 var builder = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json")
                .AddEnvironmentVariables()
                .AddCloudFoundry();
 Configuration = builder.Build();

At this point we could continue and just use the syntax vcap:services:p-mysql:0:uri for raw configuration values, but I like injecting service binding information using the IOptions feature, which converts the service bindings into a collection of POCOs. To use it, our ConfigureServices method might look something like this:

  services.AddMvc();

  services.AddOptions();
  services.Configure<CloudFoundryApplicationOptions>(Configuration);
  services.Configure<CloudFoundryServicesOptions>(Configuration);

This is going to make both our application metadata and our service bindings available as injectable options. We can make use of these easily in a controller. The following is a modified “values” controller borrowed right from the Visual Studio “new project” template for ASP.NET Web API. I’ve added a constructor that allows these options to be injected:

[Route("api/[controller]")]
public class ValuesController : Controller
{
    public ValuesController(IOptions<CloudFoundryApplicationOptions> appOptions,
        IOptions<CloudFoundryServicesOptions> serviceOptions)
    {
        AppOptions = appOptions.Value;
        ServiceOptions = serviceOptions.Value;
    }

    CloudFoundryApplicationOptions AppOptions { get; set; }
    CloudFoundryServicesOptions ServiceOptions { get; set; }

    [HttpGet]
    public Object Get()
    {
        return new
        {
            AppName = AppOptions.ApplicationName,
            AppId = AppOptions.ApplicationId,
            Service = new
            {
                Label = ServiceOptions.Services[0].Label,
                Url = ServiceOptions.Services[0].Credentials["url"].Value,
                Password = ServiceOptions.Services[0].Credentials["password"].Value
            }
        };           
    }
}

Now if I bind a service to my application that has a url credential property and a password credential property, I’ll be able to see that when I issue a GET to /api/values. Here’s a sample of the output I get from issuing a GET against my service:

{"AppName":"bindings-example","AppId":"a605f066-8c2c-4ae3-8dbf-b2e7931fc39d","Service":{"Label":"user-provided","Url":"http://foo.bar","Password":"l33t"}}

You should try this out – create an application, copy some of the code from this blog post, push it to Cloud Foundry, and bind a couple of user-provided services to it and see how incredibly easy Steel Toe makes externalizing your backing services configuration.

Deploying ASP.NET 5 (Core) Apps to Windows in Cloud Foundry

Today I found myself sequestered in a room with a pile of super smart people. This is my favorite kind of day. We had already experimented with pushing an ASP.NET Core application to Cloud Foundry on Linux, but we hadn’t yet tried it on Windows.

There are a bunch of wrong ways to do this, and we discovered a number of them. First, it’s possible (and actually a default in RC1) to use dnu publish to produce an application that needs to be compiled and doesn’t have its dependencies fully vendored. This is not only a bad idea, but it violates a number of the 12 factors, so we can’t have that.

In Release Candidate 1, you need to explicitly specify which runtime you want bundled with the app as well as indicate that you want your application compiled via the no-source option:

dnu publish --no-source --runtime active

Assuming we’ve already got a functioning, buildable application and we run this from the application’s project root directory, this will then run through a ton of compile spam and finally publish the app to a directory like bin/output/approot/, but you can override this location as well.

When RC2 comes around, we are likely to see the publish command show up as a parameter to the dotnet command line tool. Unfortunately, even though I’ve got an early RC2 build, there’s no way of knowing if that’s going to change so I don’t want to confuse anyone by speculating as to the final syntax.

Next we go into this directory and modify the web.cmd file. At the very end of this file you’ll see a line that looks like this:

@"%DNX_PATH%" --project "%~dp0packages\SampleMicroservice\1.0.0\root" --configuration Debug web %*

Make note of the %~dp0packages section of this. This substitution is looking at the packages sub-directory within the published approot directory. If we didn’t publish with the --no-source option, then our compiled service would not appear as a package here.

While this will work just fine on a plain vanilla Windows Server box, it’s not going to work in the cloud because cloud applications need to adhere to the port binding factor. This means that the PaaS needs to be able to tell the application on which port to listen, because it is taking care of mapping the inside-container ports to the outside-container ports (e.g. port 80).

So we modify this last line of web.cmd to look like this:

@"%DNX_PATH%" --project "%~dp0packages\SampleMicroservice\1.0.0\root" --configuration Debug --server.urls=%PORT% web %*

We’ve added the %PORT% environment variable, which we know is set by Cloud Foundry for all applications, Windows or Linux. Now our start command will use the application-bundled runtime to launch the application, which will have all of its dependencies vendored locally. The only thing we need to do before we push to Cloud Foundry is ensure that our Windows cell has the right frameworks installed on it. For our test, we made sure that .NET Framework 4.6 was installed.

Now we can push the application, assuming we’re using the default name for the Windows stack:

cf push sample-service -b binary_buildpack -s windows2012R2 -c "approot\web.cmd"

If your application does not expose a legitimate response to a GET of the root (GET /), then you’ll want to push it with --health-check-type none so PCF doesn’t think your app is crashing. Note that you have to specify the location of the web.cmd file as your start command; your app isn’t going to start properly without that.

It would be nice if Microsoft let us supply parameters to alter the generated start command inside web.cmd, but we’ll take what we can get.

While it’s a little bit inconvenient to have to modify the web.cmd generated by the publish step, it isn’t all that difficult to push your ASP.NET 5/Core RC1 application to Pivotal Cloud Foundry on Windows cells.

Migrating to ASP.NET Core in the Cloud: What you will and won’t miss from ASP.NET 5

The folks over at InfoQ have graciously provided a nice list of technology that has been discontinued in .NET Core. You can find their original list here. They present the list without much opinion or color. In this blog post, I’d like to take a look at that list from the point of view of someone considering using ASP.NET Core for cloud native application development.

Reflection

Reflection isn’t gone, it’s just changed a little bit. When you make things designed to work on multiple operating systems without having to change your code, some of your original assumptions about the code no longer apply. Reflection is one of those cases. Throughout my career, the majority of the times I’ve used Reflection in a running, production application have either been for enhanced debug information dumps in logs, or, to be honest, I was working around a complex problem in entirely the wrong way.

If you find yourself thinking that you need reflection for your .NET microservice, ask yourself if you really, really need it. Are you using it because you’re adopting some one-off, barely supported serialization scheme? Are you using it because you’re afraid your regular diagnostics aren’t good enough? I would be shocked if, in 2016, I ever had to use Reflection directly when .NET Core already provides me with JSON and XML serializers.

App Domains

Manipulating application domains is something that I would consider a cloud native anti-pattern. Historically, we’ve used AppDomains for code and data isolation and as a logical unit of segmentation for CAS (Code Access Security). If your deployment target is the cloud (or basically any PaaS that uses container isolation), then I contend you should have nothing in your app that does any kind of AppDomain shenanigans. In fact, from the original article, here’s a quote from Microsoft:

For code isolation, we recommend processes and/or containers

Remoting

Ah, Remoting, how do I hate thee? Let me count the nearly infinite ways… The following statement from InfoQ’s article makes me feel old. VERY old.

These days few developers even remember the Remoting library exists, let alone how to use it.

Not only do I remember how to use it, but I wrote entire chapters for popular books on using Remoting. I’ve written custom communication channels for Remoting, implemented distributed trace collection systems that predate Splunk, and other things I shall not admit publicly… all with Remoting.

The bottom line is there is no place for a technology like Remoting in an ecosystem of services all scaling elastically and moving dynamically in the cloud. It’s dinosaur tech and those of us who used to write code against it for a living are likely also dinosaurs.

You’re not going to need this in the cloud, so its absence from .NET Core is a blessing.

Serialization

Most of the tools you need for serialization are still there. Binary serialization is one of those ugly things that used to reach deep into the bowels of your objects, even private members, to convert POCOs into state.

If you need high-performance binary/byte array serialization of your .NET classes, use protocol buffers. You can use them from .NET Core.
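
As a hypothetical sketch, here’s what that could look like with the protobuf-net library (the Zombie type and member numbers are illustrative, not from any of my repos):

using System.IO;
using ProtoBuf;

[ProtoContract]
public class Zombie
{
    [ProtoMember(1)]
    public string Name { get; set; }

    [ProtoMember(2)]
    public int Age { get; set; }
}

public static class ProtoExample
{
    public static byte[] RoundTrip()
    {
        // Serialize the object to a compact binary payload...
        byte[] bytes;
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, new Zombie { Name = "Bob", Age = 21 });
            bytes = ms.ToArray();
        }

        // ...and rehydrate it on the other side.
        using (var ms = new MemoryStream(bytes))
        {
            Zombie zombie = Serializer.Deserialize<Zombie>(ms);
        }
        return bytes;
    }
}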

If you’re doing more traditional service-to-service communication, then you’ll be just fine with JSON and XML. As far as building services for the cloud, you should be blissfully unaware of most serialization gaps between ASP.NET 4.x and .NET Core.

Sandboxing

Sandboxing is something that has traditionally had strong, low-level OS ties. This makes its implementation in a cross-platform framework a little more complicated and subject to least common denominator culling. If you’re just building services, you should be fine, but there’s a sentence in the InfoQ article that is decidedly not cloud-native:

The recommended alternative is to spawn separate processes that run under a user account with restricted permissions

This is not something you should be doing in the cloud. As a cloud native developer, you should be unconcerned with the identity with which your application process runs – this is a detail that is abstracted away from you by your PaaS. If you need credentials to access a backing service, you can deal with that using bound resources or bound services, which is all externalized configuration.

Sandboxing as the article describes isn’t something you should be doing when developing services for the cloud, so you should hopefully be well insulated from any of the changes or removals in .NET Core related to this.

Miscellaneous

There are a handful of miscellaneous changes also mentioned by the InfoQ article:

  • DataTable/DataSet – I feel so. very. old. Many of you will likely have no idea what these things are. This is how your grandparents communicated with SQL data sources prior to the invention of Entity Framework, nHibernate, or LINQ to SQL. That’s right, we wrote code uphill both ways in the snow. Get off my lawn. You will not need these in the cloud.
  • System.DirectoryServices – For some pretty obvious reasons, this doesn’t exist. You shouldn’t need to use this at all. If you need to talk to LDAP, you can do so using simpler interfaces or, better yet, through a token system like OAuth2.
  • System.Transactions – Distributed transactions, at least the traditional kind supported by libraries like this, are a cloud-native anti-pattern. Good riddance.
  • XSL and XmlSchema – Ok, so these might still be handy and I can see a couple types of services that might actually suffer a bit from their absence in the framework. Good news is that .NET Core is open source, so if enough people need this, either Microsoft or someone else will put it in.
  • System.Net.Mail – If you need to send mail from your cloud native service, you should consider using a brokered backing service for e-mailing. Just about every PaaS with a service marketplace has some kind of cloud-based mailing system with a simple REST API.
  • System.IO.Ports – I’m fairly certain you should not be writing code that communicates with physical serial ports in your cloud native service.
  • Workflow – Windows Workflow Foundation is also gone from .NET Core. Good riddance. I did a tremendous amount of work with that beast, and I tried harder than most mere mortals to like it, to embrace it, and to use it for greatness. I was never pleased with anything I produced in WF. The amount of hacky statefulness required to get it working would have immediately put this tech onto the cloud native naughty list anyway.
  • XAML – You do not need XAML to build a cloud service, so this is also no big loss.

Conclusion

The bottom line is that, aside from a few high-friction experiences in the current release candidate, the feature set of ASP.NET Core contains everything you need right now to build microservices for the cloud. The biggest concern isn’t what’s missing from .NET Core itself; it’s which third-party libraries for accessing backing services are either missing or not ready for prime time. That’s where I see the risk in early adoption, not in the core feature set of the framework.

Creating a Microservice in ASP.NET 5 RC1

In this blog post, I am going to walk through my experience using ASP.NET 5 (on my Mac!) to create a very simple microservice. I thought I would embrace the new separated, component nature of .NET 5 (the framework formerly known as vNext) and so I tried to use the Routing middleware directly.

I don’t know whether the original authors of the routing middleware intended it to be used directly, but it is an unwieldy beast that absolutely begs to have another layer of abstraction placed on top of it. In fact, it’s so difficult to use that I got about 30 lines into a static extension when I realized I could just use the ASP.NET Web API, which is actually just ASP.NET MVC 6. Confused yet? In classic Microsoft fashion, none of this stuff makes sense until you sit through a codename and versioning seminar.

Cliff notes: ASP.NET Web API 2 (which was part of ASP.NET 4.5) is now rolled into ASP.NET MVC 6, which is a part of ASP.NET 5. Clear as mud? Ok, let’s move on.

After installing ASP.NET 5 by going to http://get.asp.net, I made sure that everything worked by running dnvm list to make sure that I was pointing at the coreclr version of .NET. I installed the Yeoman tool to aid in scaffolding, but to be honest, I wanted to avoid generators as much as possible. If this is the new, open, unfettered Microsoft, then I should be able to create my project files with vi and use nothing but command-line tools, right?

When you create a new project (either by hand, if you’re a glutton for punishment, or using a yo generator), you get a stock project.json file. Here’s what mine looks like after running yo aspnet to create an empty ASP.NET Web API project (I’m importing the ASP.NET MVC libraries because, remember, using the Routing middleware directly is an experience chock full o’ suck):

{
  "version": "1.0.0-*",
  "compilationOptions": {
    "emitEntryPoint": true
  },
  "tooling": {
    "defaultNamespace": "SampleMicroservice"
  },

  "dependencies": {
    "Microsoft.AspNet.IISPlatformHandler": "1.0.0-rc1-final",
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final",
    "Microsoft.AspNet.StaticFiles": "1.0.0-rc1-final",
    "Microsoft.Extensions.Configuration.FileProviderExtensions" : "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging.Console": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging.Debug" : "1.0.0-rc1-final"
  },

  "commands": {
    "web": "Microsoft.AspNet.Server.Kestrel"
  },

  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  },

  "exclude": [
    "wwwroot",
    "node_modules",
    "bower_components"
  ],
  "publishExclude": [
    "**.user",
    "**.vspscc"
  ]
}

This is a far cry from the good old days of ridiculous .csproj files and .sln files. Also, I should re-iterate that I’m doing this on my Mac. For those of us who have been using C# since before the 1.0 days, the concept of non-mono, Mac-compiled C# feels a little bit like washing ashore on paradise after being adrift for 15 years (I feel so old… I can remember writing C# in notepad and compiling with csc.exe before Visual Studio even existed).

If you’ve been following along with ASP.NET 5, you know that you now have a Startup.cs file that rigs everything up, and configures your application:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

namespace SampleMicroservice
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            // Set up configuration sources.
            var builder = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json")
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; set; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            // Add framework services.
            services.AddMvc();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            app.UseIISPlatformHandler();

            app.UseStaticFiles();

            app.UseMvc();
        }

        // Entry point for the application.
        public static void Main(string[] args) => Microsoft.AspNet.Hosting.WebApplication.Run<Startup>(args);
    }
}

This is also mostly boilerplate. The highlight here is services.AddMvc(), which, using dependency injection and a pretty robust module architecture, sets your application up for ASP.NET MVC routing and template rendering.

Now that we have that ceremony and configuration out of the way, let’s create a controller (there’s no longer a distinction between an MVC controller and a Web API controller, and that’s a good thing) that serves up the usual array of Zombies:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.Mvc;

namespace SampleMicroservice.Controllers
{
    public class Zombie
    {
        public String Name { get; set; }
        public int Age { get; set; }
    }

    [Route("api/zombies")]
    public class ZombieController : Controller
    {
        [HttpGet]
        public IEnumerable<Zombie> Get()
        {
            return new Zombie[] {
                new Zombie() { Name = "Bob", Age = 21 },
                new Zombie() { Name = "Alfred", Age = 52 }
            };
        }
    }
}

With all this in place, we can do dnu restore (you might need to run this under sudo; RC1 creates files with silly permissions, at least on my Mac). This will vendor our dependencies based on the project.json file. Next, we can do dnx web (which you might also need to sudo).

If everything worked properly, I can hit http://localhost:5000/api/zombies and get an array of properly JSON-serialized objects. I won’t go into the details of creating the POST/PUT/DELETE methods on the controller because that hasn’t changed much.
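
With RC1’s default JSON serialization (Pascal-cased property names), the response should look something like this:

[{"Name":"Bob","Age":21},{"Name":"Alfred","Age":52}]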

In conclusion, I am truly excited about the prospect of being able to build microservices and web applications in ASP.NET 5. I am even more excited about the fact that I might be able to do this using nothing but my Mac and my favorite non-Microsoft text editor (vi FTW baby!). I was a little disappointed that tapping directly into the Routing middleware was cumbersome and ugly, but slotting in ASP.NET MVC 6 as a service worked like a champ and gave me an annotation-based routing configuration that works just like the old web API.

I am looking forward to exploring ASP.NET 5 further!

Creating a Microservice with the ASP.NET Web API

I’ve been blogging about microservices for quite some time, and have included examples in Scala/Akka, Go, and even Java. Today I thought I would include a sample of how to build a microservice using the Microsoft ASP.NET Web API. As I’ve mentioned a few times before, the term microservice is a bit overloaded. Really, it just refers to a service that adheres to the Single Responsibility Principle (SRP), and does one thing. Sometimes people also include the fact that a service is bootstrapped (the container/HTTP server is included in the service code rather than relied upon as an external dependency) but that detail is unimportant for this discussion.

The first thing we need to do is create an empty ASP.NET application, which should be straightforward for any Microsoft developer. Next, we can create routes for our RESTful resources. There are a number of ways to do this, but I like providing a universal default resource pattern and then overriding it at the individual resource level as exceptions occur.

I can create a starter route pattern in the global.asax.cs file like so:

  GlobalConfiguration.Configure(config =>
            {
                config.MapHttpAttributeRoutes();

                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/v3/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );               
            });

If I then create a Web API controller called something like ZombiesController, and put a method like Get on it, this will automatically be the target of the route api/v3/zombies. Further, a Get(id) method will then automatically get invoked by api/v3/zombies/12.
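
In other words, given the default route above, a hypothetical controller like this needs no routing attributes at all to pick up those URLs:

using System.Collections.Generic;
using System.Web.Http;

public class ZombiesController : ApiController
{
    // Matched by convention: GET api/v3/zombies
    public IEnumerable<string> Get()
    {
        return new[] { "Bob", "Alfred" };
    }

    // Matched by convention: GET api/v3/zombies/12
    public string Get(int id)
    {
        return "Zombie #" + id;
    }
}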

The next step is to create a model object that we want to return from our resource URLs. Since we’ve been using zombies and zombie sightings, let’s continue with that domain and create the following C# class:

using Newtonsoft.Json;
using System;

namespace KotanCode.Microservices.Models
{    
    public class ZombieSighting
    {
        [JsonProperty(PropertyName = "zombieId")]
        public Guid ZombieId { get; set; }

        [JsonProperty(PropertyName = "name)]
        public String Name { get; set; }

        [JsonProperty(PropertyName = "lat")]
        public Double Latitude { get; set; }

        [JsonProperty(PropertyName = "long")]
        public Double Longitude { get; set; }
    }
}

Now we can create a resource that exposes a collection of zombie sightings by adding a new ASP.NET Web API controller to our project:

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;
using KotanCode.Microservices.Models;

namespace KotanCode.Microservices.Controllers
{
    public class ZombieController : ApiController
    {
        [Route("api/v7/zombies")] // override the standard route pattern
        [HttpGet]
        public IEnumerable<ZombieSighting> GetAllSightings()
        {
            // Query the Entity Framework context and project the entities
            // into our JSON-decorated model via LINQ.
            ZombieDataContext ctx = new ZombieDataContext();
            var zombies = from zomb in ctx.ZombieSightings
                          select new ZombieSighting()
                          {
                              ZombieId = zomb.ID,
                              Name = zomb.ZombieName,
                              Longitude = zomb.Long,
                              Latitude = zomb.Lat
                          };
            return zombies;
        }
    }
}

In the preceding sample, I’ve overridden the default routing pattern set up in the application’s global object so that this method will be invoked whenever someone issues a GET on /api/v7/zombies. Just for giggles, I illustrate querying from an Entity Framework context and using a LINQ projection to convert the data entities into the JSON-decorated object that is part of my public API. In a real-world example, you might have additional layers here (e.g. an anti-corruption layer), but for this sample we’re okay because all we care about here is the REST API.

That’s it. If you add the one model object and an ApiController class to your ASP.NET application and run it, you’re good to go and you now have the basic building blocks for a microservice that you can deploy locally, or out in a cloud like Cloud Foundry or Azure.

Building a RESTful Service with Grizzly, Jersey, and Glassfish

I realize that my typical blog post of late usually revolves around creating something with iOS, building an application for Mac OS X, or wondering what the hell is up with the Windows 8 identity crisis. If you’ve been following my blog posts for a while, you’ll know that there was a very long period of time where I was convinced that there was no easier way to create a RESTful service than with the WCF Web API, which is now a basic, included part of the latest version of the .NET Framework.

I have seen other ways to create RESTful services and one of my favorites was doing so using the Play! Framework, Scala, and Akka. That was a crapload of fun and anytime you can have fun being productive you know it’s a good thing.

Recently I had to wrap a RESTful service around some pre-existing Java libraries, and I was shocked to find out how easily I could create a self-launching JAR that embeds its own web server and all the plumbing necessary to host the service, e.g. java -jar myservice.jar. All I needed was Maven and a couple of dependency declarations for Grizzly and Jersey.

To start with, you just create a driver class, e.g. something that hosts your main() method that will do the work required to launch the web server and discover your RESTful resources.


import com.sun.jersey.api.container.grizzly2.GrizzlyServerFactory;
import com.sun.jersey.api.core.PackagesResourceConfig;
import com.sun.jersey.api.core.ResourceConfig;
import org.glassfish.grizzly.http.server.HttpServer;

// Create and fire up an HTTP server; Jersey scans the given package for resources
ResourceConfig rc = new PackagesResourceConfig("com.kotancode.samples");
HttpServer server = GrizzlyServerFactory.createHttpServer("http://localhost:9999", rc);

Once you have created your HTTP server, all you need to do is create classes in the com.kotancode.samples package (the same package name we specified in the ResourceConfig) that will serve as your RESTful resources. So, to create a RESTful resource that returns a list of all of the zombies within a given zip code, you can just create a resource class (imagine that, calling a resource a resource… which hardly any RESTful frameworks actually do!):


// Zombies Resource
// imports removed for brevity

@Path("/zombies")
public class ZombieResource {

    @GET
    @Produces("application/json")
    @Path("nearby/{zipcode}")
    public String getZombiesNearby(
        @PathParam("zipcode") String zipCode
    ) {

       // Do calculations based on the zip code, fetch the nearby zombies,
       // and convert the result object to a JSON string.
       String zombiesNearbyJsonString = "[]"; // placeholder for the real result
       return zombiesNearbyJsonString;
    }
}

In this simple class, we’re saying that the root of the zombie resource is the path /zombies. The method getZombiesNearby() will be triggered whenever someone issues a GET at the template nearby/{zipcode}. The cool part of this is that the paths are inherited, so even though this method’s path is nearby/{zipcode}, it gets the root path from the resource, e.g. http://(server root)/(resource root)/(method path) or /zombies/nearby/{zipcode}.

So the moral of this story, kids, is that there is no one best technology for everything and any developer who introduces themselves with an adjective or a qualifier, e.g. “a .NET developer” or “an iPhone developer” may be suffering from technology myopia. If you keep a closed mind, then you’ll never see all the awesome things the (code) world has to offer. If I limited my thinking to being “a .NET developer” or “an iOS developer” or “a Ruby Developer”, I would miss a metric crap-ton of really good stuff.

p.s. If you want to get this working with Maven, then just add the following dependencies to your POM file and then if you want, drop in a <build> section to uberjar it so everything you need is contained in a single JAR file – no WAR files, no XML configuration files, no Spring, no clutter.

<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-server</artifactId>
  <version>1.16</version>
</dependency>
<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-grizzly2</artifactId>
  <version>1.16</version>
</dependency>

Java Initializers vs C# Constructors

Today I had yet another opportunity to discover an interesting (and unexpected) difference between C# and Java. I was looking through some code and noticed a bizarre construct – some code sitting within a pair of curly braces that had no method name.

I’ve seen this kind of thing before: C# has support for anonymous methods and even anonymous types, and I knew that Java supported similar constructs. I thought maybe it was a shortcut anonymous method (which it is, in a way), but as it turned out it was something more interesting – a static initializer.

Java has two different paradigms that allow developers to perform initialization code, whereas C# only supports one. Both languages support the notion of constructors, but Java also supports something a little different called an initializer. An instance initializer is invoked before the constructor body, and a static initializer is invoked at the time the class is loaded. This can become a really important thing to be aware of, especially if you’re dynamically loading the class or referencing it via something like MyClass.class, a pattern that should be familiar to Objective-C developers used to doing things like [MyClass class] (in C# we might do something like typeof(MyClass)).

Anyway, Java doesn’t support the notion of a static constructor like C# does, so the only way to initialize static fields or perform other initialization at class load (as opposed to instance creation) time is with a static initializer. Java also supports an instance initializer, which fires before the class’s constructor. This makes for really elegant code when some initialization needs to happen before any of a number of chained constructors is invoked.

To demonstrate what these initializers look like (and the order in which they are executed), consider the following class:


public class Monster {

	private int localInt = 21;
	private static int staticInt = 225;

	// this will look crazy to a C# dev...
	// Instance Initializer
	{
		System.out.printf("I'm in an initializer: %d, %d%n",
				localInt,
				staticInt);
		localInt = 23;
		staticInt = 80;
	}

	// STATIC initializer
	static {
		System.out.printf("It's statically delicious: %d%n",
				staticInt);
		staticInt = 500;
	}

	public Monster() {
		System.out.printf("Monster: Constructed %d, %d%n",
				localInt,
				staticInt);
	}
}

When this code executes (constructing a single Monster from a main program that then prints “Built a monster.”), we see the following output:

It's statically delicious: 225
I'm in an initializer: 21, 500
Monster: Constructed 23, 80
Built a monster.
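
Seeing the order laid out makes the contrast with C# clearer. The closest C# equivalent is field initializers plus a static constructor; there is no instance-initializer block at all, so that work has to go into the constructor itself. A rough sketch of the analogous C# class:

using System;

public class Monster
{
    private int localInt = 21;          // field initializer; runs before the constructor body
    private static int staticInt = 225; // static field initializer; runs before the static constructor body

    // C#'s analog to Java's static initializer block
    static Monster()
    {
        Console.WriteLine("It's statically delicious: {0}", staticInt);
        staticInt = 500;
    }

    // No instance initializer exists in C#, so per-instance setup lives here
    public Monster()
    {
        Console.WriteLine("Monster: Constructed {0}, {1}", localInt, staticInt);
    }
}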

As I said in the previous post about Java, I’m sure this is old hat to veteran Java developers, but for me it’s something else about Java that I simply never knew because I hadn’t spent the time examining the language to find out.

Is there finally a Heroku for .NET developers? AppHarbor thinks so.

If you’ve read many of my posts, especially those from my previous blog (The .NET Addict), then you’re well aware of how much I like Heroku. These guys are geniuses and the service they offer is absolutely top notch. And, by top notch, I mean, I haven’t found anyone who can offer better. People use Heroku as their go-to source for web apps and web services when they don’t want to front the infrastructure in-house. Their tiered, free-to-start pricing is absolutely ideal for independent developers who have an idea they want to build but it requires some cloud presence.

For example, think of all those iOS developers building apps that need some kind of server presence or back-end to make their app work. Unless you’re a company, you probably can’t afford to stick a box in a datacenter, fire it up, and do the maintenance yourself. You might go the traditional web hosting route, but you’d probably run into all kinds of configuration quirks and limitations. Enter Heroku. Build a Ruby on Rails site on your local machine that provides exactly the kind of back-end you need for your app, then git push your work up to Heroku. Once there, your stuff’s in the cloud.

Ok, enough about Heroku. The biggest drawback of Heroku (at least as far as many C# developers are concerned) is that it works only with Ruby on Rails and is generally easier to use from Macs and Linux boxes than from Windows. Until recently, if you were a .NET developer, your only options were to go the traditional web hosting route, fire up an Amazon EC2 VM running Windows, or use Windows Azure and learn an entirely new SDK and make absolutely sure your app works nowhere but in Microsoft’s proprietary cloud.

You know what would kick ass? Heroku for .NET. It would be great if I could create a Visual Studio solution that runs and works locally, talks to local SQL databases, has unit tests that work just like my regular enterprise solutions… but then I could git push that VS solution into the cloud and then it just works.

Enter AppHarbor. These folks claim to be just that – Heroku for .NET. You can use git to push your Visual Studio 2010 solution right up to their servers. Once there, they actually will seek out and run all of the unit tests in your solution and show you the results. So now you’ve got Heroku for .NET with a little sprinkle of TeamCity and Continuous Integration thrown in for good measure.

One of the things that I love about Heroku is that I can just do a rake db:migrate and push data from my local box out to the cloud version of my application. I can do the same with an AppHarbor database – their SQL server databases are accessible directly from SQL Management Studio with SQL Server authentication. My first question is – why the hell isn’t it this easy to talk to Azure databases?? My second thought is – awesome.

If AppHarbor wants to compete with Heroku for developer mindshare, they’re going to need to provide a truckload of add-ons and plugins. If you’ve seen Heroku’s add-on page lately, you know that they have an insane amount of additional platform features that you can turn on with the flip of a switch. AppHarbor and Heroku both truly embrace the PaaS (Platform as a Service) model. The add-ons that are available at launch (which is today, actually) from AppHarbor are: MongoHQ, Cloudant, Redis, Memcacher, Mailgun, and New Relic (available next week).

AppHarbor’s infrastructure sits on top of Amazon’s infrastructure. This means that they’re basically doing all the hard work of utilizing Amazon’s virtual infrastructure, configuring the machines, partitioning things, dealing with disk space issues, clustering, load balancing – all that stuff you’d have to do if you talked directly to Amazon yourself. Coincidentally, Heroku does the same thing. Your application is really backed by Amazon’s cloud.

So, a cloud platform that allows me to create an application and git deploy to it in less than 1 minute (yes, I timed it – 42 seconds from “File -> New Project” to running on the web), gives me access to databases in the cloud (though I think their current 10MB -or- 10GB choice is a little myopic), gives me plug-ins and add-ons to add features to my virtual platform in the cloud? To quote my favorite TV personality, the Russian DirecTV guy with the miniature lap giraffe – I JUMP IN IT.

But seriously, time will tell if companies like AppHarbor are successful. One huge risk that people take when jumping into the cloud is lock-in. Nobody wants to write code for one specific cloud vendor if that vendor might change their platform in the future or, worse, shut down entirely. The beauty of companies like Heroku and AppHarbor is that there is no lock-in. The Ruby on Rails app that works on your Mac will work on Heroku, and if Heroku burns to the ground tomorrow, heaven forbid, all the code you wrote is still viable elsewhere. The same goes for AppHarbor. You’re not writing code for Microsoft’s proprietary Azure Table or Queue storage – you’re hitting a SQL Server or MySQL database. You’re not writing code that only runs on your machine in a simulated environment that isn’t even all that good a simulator (Azure); you’re writing pure .NET code that you can re-use on some other platform provider if you choose to move it, or pull in-house if your needs change.