Kotan Code 枯淡コード

In search of simple, elegant code


Adding a Configurable Global Route Prefix in ASP.NET Core

This morning I was asked if it was possible to set up a configurable global route prefix in ASP.NET Core applications. I’ve done this in the past using the old (now legacy) Web API as well as with older versions of ASP.NET MVC, but I hadn’t yet tried it with ASP.NET Core.

Before I get into the how, I think it's worth mentioning a little of the why. In this case, I have an application (and, like all applications, it's just a microservice) that I run locally but also deploy to a number of environments (clouds or cloud installations). In some of those environments, the application runs as a sub-domain, e.g. myapp.mycompany.com. In others, it runs with a path prefix like mycompany.com/myapp. I don't want to hard-code this capability into my application; I want the environment to be able to tell my application which mode it is operating in, without me having to rebuild the app.

To do this, we need to create a new convention. Route conventions are just code that executes an Apply() method against an application model. This Apply() method runs the first time the application receives an inbound request, which is when MVC evaluates the entire application model to prepare all of the routes.

Filip W (@filip_woj on Twitter) has put together a sample that shows how to do this. The following snippet is from his code, which can be found on GitHub here.

using System.Linq;
using Microsoft.AspNetCore.Mvc.ApplicationModels;
using Microsoft.AspNetCore.Mvc.Routing;

public class RouteConvention : IApplicationModelConvention
{
    private readonly AttributeRouteModel _centralPrefix;

    public RouteConvention(IRouteTemplateProvider routeTemplateProvider)
    {
        _centralPrefix = new AttributeRouteModel(routeTemplateProvider);
    }

    public void Apply(ApplicationModel application)
    {
        foreach (var controller in application.Controllers)
        {
            // Controllers that already have an attribute route: combine the
            // prefix with the existing route template.
            var matchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel != null).ToList();
            if (matchedSelectors.Any())
            {
                foreach (var selectorModel in matchedSelectors)
                {
                    selectorModel.AttributeRouteModel = AttributeRouteModel.CombineAttributeRouteModel(_centralPrefix,
                        selectorModel.AttributeRouteModel);
                }
            }

            // Controllers with no attribute route at all: the prefix becomes
            // their route.
            var unmatchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel == null).ToList();
            if (unmatchedSelectors.Any())
            {
                foreach (var selectorModel in unmatchedSelectors)
                {
                    selectorModel.AttributeRouteModel = _centralPrefix;
                }
            }
        }
    }
}

The CliffsNotes version of what's happening here: when the Apply() method is called, we iterate through all of the controllers that the application model knows about. For each of those controllers, we look at the selectors that already have an attribute route model. If a controller has at least one matching selector (it has a Route() attribute on it), we combine that controller's route attribute with the prefix. If the controller has no matching selectors (there are no route attributes), then we simply set the prefix as its route.

This has the net effect of prepending the prefix we've defined to every single one of the RESTful routes defined in our application. To use this new convention, we can add it while we're configuring services during startup (also shown in the GitHub demo). Note that UseCentralRoutePrefix is an extension method, sketched after the next listing and available in full in the linked GitHub repo:

public class Startup
{
    public Startup(IHostingEnvironment env)
    {}

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(opt =>
        {
            opt.UseCentralRoutePrefix(new RouteAttribute("api/v{version}"));
        });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddDebug();
        app.UseMvc();
    }
}
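For reference, the UseCentralRoutePrefix method is a tiny extension method on MvcOptions. A minimal sketch of what it looks like (the canonical version lives in the linked repo):

public static class MvcOptionsExtensions
{
    public static void UseCentralRoutePrefix(this MvcOptions opts, IRouteTemplateProvider routeAttribute)
    {
        // Insert at position 0 so the prefix convention runs before other conventions.
        opts.Conventions.Insert(0, new RouteConvention(routeAttribute));
    }
}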

Again, this isn't my code; these are just extremely useful nuggets that I found on the Strathweb blog. Now that we've got a convention and we know how to add that convention, we can modify the above code slightly so that we can accept an override from the environment, allowing us to deploy to multiple cloud environments.

We don’t really want to read directly from the environment. Instead, we want to use AddEnvironmentVariables when setting up our configuration. This allows us to set the app’s global prefix in a configuration file, and then override it with an environment variable, giving us the best flexibility while developing locally and pushing to a cloud via CI/CD pipelines.

So, our new Startup method looks like this:

public Startup(IHostingEnvironment env)
{
  var builder = new ConfigurationBuilder()
      .SetBasePath(env.ContentRootPath)
      .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
      .AddEnvironmentVariables();
  Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }

And we can modify the ConfigureServices method so that we read the prefix from configuration (the environment override has already been applied when the configuration was built in the constructor):

public void ConfigureServices(IServiceCollection services)
{
  // GetSection never returns null; a missing key just yields a null Value.
  var prefix = Configuration.GetSection("globalprefix").Value ?? String.Empty;
  services.AddMvc(opt =>
  {
      opt.UseCentralRoutePrefix(new RouteAttribute(prefix));
  });
}

And that’s it. Using the RouteConvention class we found on Filip W’s blog and combining that with environment-overridable configuration settings, we now have an ASP.NET Core microservice that can run as the root of a sub-domain (myapp.foo.com) or at any level of nesting in context paths (foo.com/this/is/myapp) all without having to hard-code anything or recompile the app.

Finally, we can put something like this in our appsettings.json file:

{
  "globalprefix" : "deep/root"
}

And optionally override that with an environment variable of the same name.
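For example, because AddEnvironmentVariables() was added after the JSON file in the builder, a variable set in the shell before the app launches wins (shell syntax illustrative):

# Overrides the "globalprefix" value from appsettings.json for this run
export globalprefix=api/v2
dotnet run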

Migrating an ASP.NET Core RC1 Microservice to RC2

In a recent blog post, I talked about how to deploy an ASP.NET Core RC1 service to the cloud. With yesterday’s release of ASP.NET Core RC2, I had to make a number of minor changes to my code in order to get it working both locally and in the cloud.

The biggest change between RC1 and RC2 is an architectural one. In RC1, the thing responsible for prepping the application lived inside the dnx tool, and it was tightly coupled to the notion of building web applications. In RC2, we return to a more explicit model with a Program class and a Main() method, where we write our own code to construct the WebHostBuilder instead of it being done implicitly on our behalf.

First, there are changes to the structure of the project.json file. Rather than go through them all, I’m just going to point you to the modified project.json for the Zombies service. The easiest thing to do is to take a project.json file created by new RC2 tooling and use that as a template to adapt your existing file.

The next thing I did was add a Program class. You'll notice that I use the UseStartup<Startup>() method to invoke all of the old startup code I had from RC1:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;

namespace SampleMicroservice
{
  public class Program
  {
    public static void Main(string[] args)
    {
      var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

      var host = new WebHostBuilder()
        .UseConfiguration(config)
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .Build();

      host.Run();
    }
  }
}

The next change is that I had to use the AddCommandLine() method. Without this, our application isn't going to respond to or care about the --server.urls parameter that gets passed to it by the buildpack when we push to Cloud Foundry.
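Conceptually, the start command ends up looking something like this (illustrative only; the buildpack generates the real command and Cloud Foundry supplies the container port):

dotnet run --server.urls=http://0.0.0.0:${PORT}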

I had to make a few tiny little changes to the Startup class, but it remains essentially unchanged:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

namespace SampleMicroservice
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
                .AddJsonFile("ZombieConfig.json", optional: true, reloadOnChange: false)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; set; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
            services.AddOptions();
            services.Configure<ZombieOptions>(Configuration);

            services.AddScoped<IZombieRepository, ZombieRepository>();
            services.AddSingleton<IGlobalCounter, GlobalCounter>();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();
            app.UseMvc();
        }
    }
}

The Configure<ZombieOptions>() method is supported by the same extension method as in RC1, except this method is in a new package, so we had to add a dependency on Microsoft.Extensions.Options.ConfigurationExtensions to our project.json.
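For context, ZombieOptions is just a plain options POCO whose properties are bound from configuration. A hypothetical sketch (the property names here are illustrative):

// Hypothetical options POCO; properties are bound from matching
// configuration keys (e.g. values in ZombieConfig.json).
public class ZombieOptions
{
    public string DefaultOutpost { get; set; }
    public int MaxSightings { get; set; }
}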

With all this in place, we can simply run dotnet restore, then dotnet build, and finally cf push, and that's that… our app is now running on RC2 in Cloud Foundry.

All of the code for the RC2 version of this app can be found in my GitHub repository.

Using Cloud Foundry Bound Services with SteelToe on ASP.NET Core

Before I begin, if you’re not familiar with the concept of backing services or bound services in Cloud Foundry (any of the implementations, including PCF Dev, support service bindings in some form), check out the Pivotal documentation. The short of it is that rather than your application hard-coding information about the backing services with which it communicates, it should instead externalize that configuration. One way of doing this is through Cloud Foundry’s services mechanism.

When you bind services, either user-provided or brokered, they appear in an environment variable given to your application called VCAP_SERVICES. While this is just a simple JSON value, and you could pretty easily parse it on your own, you shouldn’t have to. This is a solved problem and there’s a library available for .NET that includes as part of its foundation some DI-friendly classes that talk to Cloud Foundry’s VCAP_SERVICES and VCAP_APPLICATION environment variables.
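To give you an idea of the shape of the thing, an abbreviated VCAP_SERVICES value for a single user-provided service might look like this (field values illustrative):

{
  "user-provided": [
    {
      "name": "my-binding-demo",
      "label": "user-provided",
      "credentials": {
        "url": "http://foo.bar",
        "password": "l33t"
      }
    }
  ]
}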

That library is called Steel Toe. I will save the history and origin of the name Steel Toe for a future blog post. For now, you can think of Steel Toe as a set of libraries that enable ASP.NET applications to participate as clients in the Netflix OSS microservice ecosystem by talking to the Configuration Server and the Discovery Server (Eureka).

So, the first thing you need to do is add a NuGet reference to the assembly SteelToe.Extensions.Configuration.CloudFoundry. To do this, you’ll have to use the feed SteelToe is on, since you won’t find it in the regular Microsoft feed. SteelToe packages can currently be found here:

  • Master – https://www.myget.org/F/steeltoemaster/api/v3/index.json
  • Dev – https://www.myget.org/F/steeltoedev/api/v3/index.json

Once you've told either your command-line tools or Visual Studio 2015 where to look for Steel Toe, you can add a reference to the assembly mentioned above (there are more assemblies than this one, but for this blog post we're only covering Cloud Foundry configuration).

To add Cloud Foundry’s environment variables as a configuration provider, we just use the AddCloudFoundry method in our Startup method:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .AddEnvironmentVariables()
    .AddCloudFoundry();
Configuration = builder.Build();

At this point we could continue and just use the syntax vcap:services:p-mysql:0:uri for raw configuration values, but I like injecting service binding information using the IOptions feature, which converts the service bindings into a collection of POCOs. To use it, our ConfigureServices method might look something like this:

  services.AddMvc();

  services.AddOptions();
  services.Configure<CloudFoundryApplicationOptions>(Configuration);
  services.Configure<CloudFoundryServicesOptions>(Configuration);

This is going to make both our application metadata and our service bindings available as injectable options. We can make use of these easily in a controller. The following is a modified “values” controller borrowed right from the Visual Studio “new project” template for ASP.NET Web API. I’ve added a constructor that allows these options to be injected:

[Route("api/[controller]")]
public class ValuesController : Controller
{
    public ValuesController(IOptions<CloudFoundryApplicationOptions> appOptions,
        IOptions<CloudFoundryServicesOptions> serviceOptions)
    {
        AppOptions = appOptions.Value;
        ServiceOptions = serviceOptions.Value;
    }

    CloudFoundryApplicationOptions AppOptions { get; set; }
    CloudFoundryServicesOptions ServiceOptions { get; set; }

    [HttpGet]
    public Object Get()
    {
        return new
        {
            AppName = AppOptions.ApplicationName,
            AppId = AppOptions.ApplicationId,
            Service = new
            {
                Label = ServiceOptions.Services[0].Label,
                Url = ServiceOptions.Services[0].Credentials["url"].Value,
                Password = ServiceOptions.Services[0].Credentials["password"].Value
            }
        };           
    }
}

Now if I bind a service to my application that has a url credential property and a password credential property, I’ll be able to see that when I issue a GET to /api/values. Here’s a sample output that I get from using the REST api for my service:

{"AppName":"bindings-example","AppId":"a605f066-8c2c-4ae3-8dbf-b2e7931fc39d","Service":{"Label":"user-provided","Url":"http://foo.bar","Password":"l33t"}}

You should try this out: create an application, copy some of the code from this blog post, push it to Cloud Foundry, bind a couple of user-provided services to it, and see how incredibly easy Steel Toe makes externalizing your backing services configuration.

Deploying ASP.NET 5 (Core) Apps to Windows in Cloud Foundry

Today I found myself sequestered in a room with a pile of super smart people. This is my favorite kind of day. We had already experimented with pushing an ASP.NET Core application to Cloud Foundry on Linux, but we hadn't yet tried it on Windows.

There are a bunch of wrong ways to do this, and we discovered a number of them. First, it's possible (and actually the default in RC1) to use dnu publish to produce an application that still needs to be compiled and doesn't have its dependencies fully vendored. This is not only a bad idea, it violates a number of the 12 factors, so we can't have that.

In Release Candidate 1, you need to explicitly specify which runtime you want bundled with the app as well as indicate that you want your application compiled via the no-source option:

dnu publish --no-source --runtime active

Assuming we’ve already got a functioning, buildable application and we run this from the application’s project root directory, this will then run through a ton of compile spam and finally publish the app to a directory like bin/output/approot/, but you can override this location as well.

When RC2 comes around, we are likely to see the publish command show up as a parameter to the dotnet command line tool. Unfortunately, even though I’ve got an early RC2 build, there’s no way of knowing if that’s going to change so I don’t want to confuse anyone by speculating as to the final syntax.

Next we go into this directory and modify the web.cmd file. At the very end of this file you’ll see a line that looks like this:

@"%DNX_PATH%" --project "%~dp0packages\SampleMicroservice\1.0.0\root" --configuration Debug web %*

Make note of the %~dp0packages section of this. This substitution points at the packages sub-directory within the published approot directory. If we didn't publish with the --no-source option, then our compiled service would not appear as a package here.

While this will work just fine on a plain vanilla Windows Server box, it’s not going to work in the cloud because cloud applications need to adhere to the port binding factor. This means that the PaaS needs to be able to tell the application on which port to listen, because it is taking care of mapping the inside-container ports to the outside-container ports (e.g. port 80).

So we modify this last line of web.cmd to look like this:

@"%DNX_PATH%" --project "%~dp0packages\SampleMicroservice\1.0.0\root" --configuration Debug --server.urls=%PORT% web %*

We’ve added the %PORT% environment variable, which we know is set by Cloud Foundry for all applications, Windows or Linux. Now our start command will use the application-bundled runtime to launch the application, which will have all of its dependencies vendored locally. The only thing we need to do before we push to Cloud Foundry is ensure that our Windows Cell has the right frameworks installed on it. For our test, we made sure that ASP.NET 4.6 was installed.

Now we can push the application, assuming we’re using the default name for the Windows stack:

cf push sample-service -b binary_buildpack -s windows2012R2 -c "approot\web.cmd"

If your application does not expose a legitimate response to a GET of the root (GET /), then you'll want to push it with --health-check-type none so PCF doesn't think your app is crashing. Note that you have to specify the start command pointing to the web.cmd file; your app isn't going to start properly without that.
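If you'd rather not type those flags every time, the same settings can live in a manifest.yml. A sketch (manifest support for health-check-type depends on your CLI version):

---
applications:
- name: sample-service
  buildpack: binary_buildpack
  stack: windows2012R2
  command: approot\web.cmd
  health-check-type: none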

It would be nice if Microsoft let us supply parameters to alter the generated start command inside web.cmd, but we’ll take what we can get.

While it’s a little bit inconvenient to have to modify the web.cmd generated by the publish step, it isn’t all that difficult to push your ASP.NET 5/Core RC1 application to Pivotal Cloud Foundry on Windows cells.

Debugging Node.js Applications in Cloud Foundry

I just added a post over on Medium describing some steps we took to do remote debugging of a Node.js application running in Cloud Foundry.

Check out the post here: https://medium.com/@KevinHoffman/debugging-node-js-applications-in-cloud-foundry-b8fee5178a09#.dblflhdxm
Migrating to ASP.NET Core in the Cloud: What you will and won’t miss from ASP.NET 5

The folks over at InfoQ have graciously provided a nice list of technology that has been discontinued in .NET Core. You can find their original list here. They present the list without much opinion or color. In this blog post, I’d like to take a look at that list from the point of view of someone considering using ASP.NET Core for cloud native application development.

Reflection

Reflection isn't gone; it's just changed a little bit. When you make things designed to work on multiple operating systems without having to change your code, some of your original assumptions about the code no longer apply. Reflection is one of those cases. Throughout my career, the majority of the times I've used Reflection in a running, production application have been either to produce enhanced debug information dumps in logs or, to be honest, to work around a complex problem in entirely the wrong way.

If you find yourself thinking that you need reflection for your .NET microservice, ask yourself if you really, really need it. Are you using it because you’re adopting some one-off, barely supported serialization scheme? Are you using it because you’re afraid your regular diagnostics aren’t good enough? I would be shocked if I ever had to use Reflection in 2016 directly when .NET Core already provides me with JSON and XML serializers.

App Domains

Manipulating application domains is something that I would consider a cloud native anti-pattern. Historically, we’ve used AppDomains for code and data isolation and as a logical unit of segmentation for CAS (Code Access Security). If your deployment target is the cloud (or basically any PaaS that uses container isolation), then I contend you should have nothing in your app that does any kind of AppDomain shenanigans. In fact, from the original article, here’s a quote from Microsoft:

For code isolation, we recommend processes and/or containers

Remoting

Ah, Remoting, how do I hate thee? Let me count the nearly infinite ways… The following statement from InfoQ’s article makes me feel old. VERY old.

These days few developers even remember the Remoting library exists, let alone how to use it.

Not only do I remember how to use it, but I wrote entire chapters for popular books on using Remoting. I’ve written custom communication channels for Remoting, implemented distributed trace collection systems that predate Splunk, and other things I shall not admit publicly… all with Remoting.

The bottom line is there is no place for a technology like Remoting in an ecosystem of services all scaling elastically and moving dynamically in the cloud. It’s dinosaur tech and those of us who used to write code against it for a living are likely also dinosaurs.

You're not going to need this in the cloud, so its absence from .NET Core is a blessing.

Serialization

Most of the tools you need for serialization are still there. Binary serialization is one of those ugly things that used to reach deep into the bowels of your objects, even private members, to convert POCOs into serialized state.

If you need high-performance binary/byte array serialization of your .NET classes, use protocol buffers. You can use them from .NET Core.
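As a sketch of what that might look like using the protobuf-net package (one popular protocol buffers implementation for .NET; the class and member names here are illustrative):

using System.IO;
using ProtoBuf;

// The attribute numbers define the wire format, much like field numbers
// in a .proto file.
[ProtoContract]
public class Outpost
{
    [ProtoMember(1)] public string Name { get; set; }
    [ProtoMember(2)] public double Latitude { get; set; }
    [ProtoMember(3)] public double Longitude { get; set; }
}

public static class OutpostCodec
{
    // Serialize an outpost to a compact byte array.
    public static byte[] ToBytes(Outpost outpost)
    {
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, outpost);
            return ms.ToArray();
        }
    }

    // Deserialize an outpost from a byte array.
    public static Outpost FromBytes(byte[] bytes)
    {
        using (var ms = new MemoryStream(bytes))
        {
            return Serializer.Deserialize<Outpost>(ms);
        }
    }
}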

If you’re doing more traditional service-to-service communication, then you’ll be just fine with JSON and XML. As far as building services for the cloud, you should be blissfully unaware of most serialization gaps between ASP.NET 4.x and .NET Core.

Sandboxing

Sandboxing is something that has traditionally had strong, low-level OS ties. This makes its implementation in a cross-platform framework a little more complicated and subject to least common denominator culling. If you’re just building services, you should be fine, but there’s a sentence in the InfoQ article that is decidedly not cloud-native:

The recommended alternative is to spawn separate processes that run under a user account with restricted permissions

This is not something you should be doing in the cloud. As a cloud native developer, you should be unconcerned with the identity with which your application process runs – this is a detail that is abstracted away from you by your PaaS. If you need credentials to access a backing service, you can deal with that using bound resources or bound services, which is all externalized configuration.

Sandboxing as the article describes isn’t something you should be doing when developing services for the cloud, so you should hopefully be well insulated from any of the changes or removals in .NET Core related to this.

Miscellaneous

There are a handful of miscellaneous changes also mentioned by the InfoQ article:

  • DataTable/DataSet – I feel so. very. old. Many of you will likely have no idea what these things are. This is how your grandparents communicated with SQL data sources prior to the invention of Entity Framework, nHibernate, or LINQ to SQL. That’s right, we wrote code uphill both ways in the snow. Get off my lawn. You will not need these in the cloud.
  • System.DirectoryServices – For some pretty obvious reasons, this doesn’t exist. You shouldn’t need to use this at all. If you need to talk to LDAP, you can do so using simpler interfaces or, better yet, through a token system like OAuth2.
  • System.Transactions – Distributed transactions, at least the traditional kind supported by libraries like this, are a cloud-native anti-pattern. Good riddance.
  • XSL and XmlSchema – Ok, so these might still be handy and I can see a couple types of services that might actually suffer a bit from their absence in the framework. Good news is that .NET Core is open source, so if enough people need this, either Microsoft or someone else will put it in.
  • System.Net.Mail – If you need to send mail from your cloud native service, you should consider using a brokered backing service for e-mailing. Just about every PaaS with a service marketplace has some kind of cloud-based mailing system with a simple REST API.
  • System.IO.Ports – I’m fairly certain you should not be writing code that communicates with physical serial ports in your cloud native service.
  • Workflow – Windows Workflow Foundation is also gone from .NET Core. Good riddance. I did a tremendous amount of work with that beast, and I tried harder than most mere mortals to like it, to embrace it, and to use it for greatness. I was never pleased with anything I produced in WF. The amount of hacky statefulness required to get it working would have immediately put this tech onto the cloud native naughty list anyway.
  • XAML – You do not need XAML to build a cloud service, so this is also no big loss.

Conclusion

The bottom line is that, aside from a few high-friction experiences in the current release candidate, the feature set of ASP.NET Core contains everything you need right now to build microservices for the cloud. The biggest concern isn't what's missing from .NET Core; it's which third-party libraries for accessing backing services are missing or not ready for prime time. That's where I see the risk in early adoption, not in the core feature set of the framework.

Pushing an ASP.NET 5 Microservice to Cloud Foundry

In my previous blog post, I walked through the process of creating a lightweight microservice in ASP.NET 5 that builds in a cross-platform fashion, so it can run on Windows, Linux, and OS X. I'm going to gloss over how completely amazing it is that we now live in a time when I can write cross-platform microservices in C# using emacs.

If you’re building microservices, then there’s a good chance you’re thinking about deploying that microservice out into some Platform as a Service provider. For obvious reasons, my favorite PaaS target is Cloud Foundry (you can use any of the commercial providers that rest atop the CF Foundation like Pivotal Web Services, IBM Bluemix, HP, etc).

Thankfully, there is a community supported buildpack for ASP.NET 5. I believe it’s up to date with beta 8 of ASP.NET 5, so there is a minor modification you’ll have to make in order to get your application to launch in Cloud Foundry.

First, go into the project.json file and make sure your commands section looks like this:

"commands": {
    "web": "Microsoft.AspNet.Server.Kestrel",
    "kestrel": "Microsoft.AspNet.Server.Kestrel"
  },

The reason for this is that the buildpack expects to use the default launch command from pre-RC1, kestrel. If you’ve used the RC1 project generator, then the default kestrel launch command is just called web. The easy fix for this, as shown above, is just to create a duplicate command called kestrel. Now you can launch your app with dnx web or dnx kestrel, and that’ll make the buildpack happy.

With that one tiny little change, you can now push your application to the cloud:

cf push zombieservice -b https://github.com/cloudfoundry-community/asp.net5-buildpack.git

If you’re not familiar with the cf command line or Cloud Foundry, go check out Pivotal Web Services, a public commercial implementation of Cloud Foundry.

It will take a little while to deal with uploading the app, gathering the dependencies, and then starting. But, it works! To recap, just so this sinks in: I can write an ASP.NET microservice, in C#, using my favorite IDE (vi!), on my Mac, and deploy it to a container in the cloud, which is running Linux. Oh yeah, and if I really feel like it I can run this on Windows, too.

A few years ago, I would never have believed that something like this would be possible.

Complex JSON Handling in Go

In a recent blog post, I showed a fairly naive and contrived microservice example – the equivalent of a RESTful “hello world”. While this example may be helpful in getting some people started, it doesn’t really represent the real world. Yesterday, while spilling my Go language newbsauce all over my colleagues, I ran into an interesting problem that happens more often than you might think.

In most samples involving de-serializing a JSON response from some server, you usually see a fixed schema. In other words, you know ahead of time the expected shape of the reply, even if that means you expect some of the fields to be missing or nullable. A common pattern, however, is for an API to return some kind of generic response object as a wrapper around an optional payload.

Let's assume you've got an API that exposes multiple URLs that allow you to provision different kinds of resources. Every time you POST to a resource collection, you get back some generic JSON wrapper that includes two fields: status and data. The status field indicates whether your operation succeeded, and, if it did succeed, the data field will contain information about the thing you provisioned. Since the API is responsible for different kinds of resources, the schema of the data field may vary.

While this approach makes the job of the server/API developer easier and provides a very consistent way of getting at the "outer" information, it generally requires clients to make two passes at de-serialization or unmarshaling.
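Concretely, a successful provisioning response in this style might look like the following (the payload fields are illustrative):

{
  "status": "success",
  "data": {
    "name": "Outpost Alpha",
    "latitude": 37.323011,
    "longitude": -122.032252,
    "status": "Safe"
  }
}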

First, let’s take a look at what a wrapper pattern might look like as JSON-annotated Go structs:

// GenericServerResponse - wrapper for all responses from the apocalypse server.
type GenericServerResponse struct {
    // Status indicates success or failure
    Status string `json:"status"`
    // Data holds the payload of the response
    Data interface{} `json:"data,omitempty"`
}

// ZombieApocalypseOutpost - one possibility for the contents of the Data field
// in a generic server response.
type ZombieApocalypseOutpost struct {
    Name      string  `json:"name"`
    Latitude  float32 `json:"latitude"`
    Longitude float32 `json:"longitude"`
    Status    string  `json:"status"`
}

While this example shows one possibility for the contents of the Data field, this pattern usually makes sense when there are multiple possibilities. For example, we could have one resource that provisions zombie apocalypse outposts, and another resource that provisions field artillery, where the struct inside the Data field would be something like a FieldArtillery struct, but the outer wrapper stays the same regardless.

First, let’s take a look at how the server might create a payload like this in a method rigged up to an HTTP handler function (I show how to set up HTTP handlers in my previous blog post):

func outpostProvisioner(w http.ResponseWriter, r *http.Request) {
    // check r.Method for POST
    outpost := ZombieApocalypseOutpost{Name:"Outpost Alpha",Status:"Safe",
      Latitude:37.323011, Longitude:-122.032252}
    response := GenericServerResponse{Status:"success", Data: outpost}
    b2, err := json.Marshal(response)
    if err != nil {
      panic(err)
    }
    w.Write(b2)
}

Note that on the server side we can usually get away with a single marshaling pass because we’re the ones deciding the contents of the Data field, so we know the structure we want. The client, on the other hand, likely needs to perform some conditional logic to figure out what schema applies to the wrapped payload.

You can't use runtime typecasting tricks here the way you might in Java or C#. When we define something as interface{}, as we did with the Data field, that looks like calling it Object on the surface, but we can't just cast from interface{} to a struct: a Go type assertion only succeeds when the interface's dynamic type matches, and after unmarshaling, Data actually holds a generic map produced by the JSON decoder, not a ZombieApocalypseOutpost.

What I can do is take advantage of the fact that we already have marshal/unmarshal capabilities in Go. What follows may not be the best, most idiomatic way to do it, but it seems to work out nicely.

  URL := "http://localhost:8080/outposts/"

  r, _ := http.Get(URL)
  response, _ := ioutil.ReadAll(r.Body)
  r.Body.Close()

  var responseMessage GenericServerResponse
  err := json.Unmarshal(response, &responseMessage)
  if err != nil {
    panic(err)
  }
  var outpost ZombieApocalypseOutpost
  b, err := json.Marshal(responseMessage.Data)
  err = json.Unmarshal(b, &outpost)

  fmt.Println(responseMessage)
  fmt.Println(outpost)

First, we grab the raw payload from the server (which is just hosting the method I listed above). Once we’ve got the raw payload, we can unmarshal the outer or wrapper message (the GenericServerResponse JSON-annotated struct). At this point, the Data field is of type interface{} which really means we know nothing about it. Internally one can assume that the JSON unmarshalling code probably produced some kind of map, but I don’t want to deal with the low-level stuff. So, I marshal the Data field which gives me a byte array. Conveniently, a byte array is exactly what the unmarshal method expects. So I produce a byte array and then decode that byte array into the ZombieApocalypseOutpost struct.

This may seem like a little bit of added complexity on the client side, but there is actually a valuable tradeoff here. We gain the advantage of dealing with virtually all of an API's resources using the same wrapper message, which greatly simplifies our client code. Only when we actually need the data inside the wrapper do we need to crack it open and perform the second pass, which can also be hidden behind a simple library.

I used to cringe every time I saw this pattern because I’m not a fan of black box data. If the inside of the Data field can vary in schema, then you are at the mercy of the API provider to insulate you from changes to the inner data. However, you’re no more a potential victim of breaking changes using this pattern than you would be without the wrapper.

So, in short, if you trust the API provider using this wrapper pattern, it can actually be a very convenient way to standardize on a microservice communication protocol.

Creating a Microservice with the ASP.NET Web API

I've been blogging about microservices for quite some time, and I have included examples in Scala/Akka, Go, and even Java. Today I thought I would include a sample of how to build a microservice using the Microsoft ASP.NET Web API. As I've mentioned a few times before, the term microservice is a bit overloaded. Really, it just refers to a service that adheres to the Single Responsibility Principle (SRP) and does one thing. Sometimes people also include the fact that a service is bootstrapped (the container/HTTP server is included in the service code rather than relied upon as an external dependency), but that detail is unimportant for this discussion.

The first thing we need to do is create an empty ASP.NET application, which should be straightforward for any Microsoft developer. Next, we can create routes for our RESTful resources. There are a number of ways to do this, but I like providing a universal default resource pattern and then overriding it at the individual resource level as exceptions occur.

I can create a starter route pattern in the global.asax.cs file like so:

GlobalConfiguration.Configure(config =>
{
    config.MapHttpAttributeRoutes();

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/v3/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );
});

If I then create a Web API controller called something like ZombiesController, and put a method like Get on it, this will automatically be the target of the route api/v3/zombies. Further, a Get(id) method will then automatically get invoked by api/v3/zombies/12.
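As a quick illustration of that default convention (the method bodies here are placeholders):

public class ZombiesController : ApiController
{
    // GET api/v3/zombies
    public IEnumerable<string> Get()
    {
        return new[] { "zombie-1", "zombie-2" };
    }

    // GET api/v3/zombies/12
    public string Get(int id)
    {
        return "zombie-" + id;
    }
}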

The next step is to create a model object that we want to return from our resource URLs. Since we’ve been using zombies and zombie sightings, let’s continue with that domain and create the following C# class:

using Newtonsoft.Json;
using System;

namespace KotanCode.Microservices.Models
{
    public class ZombieSighting
    {
        [JsonProperty(PropertyName = "zombieId")]
        public Guid ZombieId { get; set; }

        [JsonProperty(PropertyName = "name")]
        public String Name { get; set; }

        [JsonProperty(PropertyName = "lat")]
        public Double Latitude { get; set; }

        [JsonProperty(PropertyName = "long")]
        public Double Longitude { get; set; }
    }
}

Now we can create a resource that exposes a collection of zombie sightings by adding a new ASP.NET Web API controller to our project:

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;
using KotanCode.Microservices.Models;

namespace KotanCode.Microservices.Controllers
{
    public class ZombieController : ApiController
    {
        [Route("api/v7/zombies")] // override the standard route pattern
        [HttpGet]
        public IEnumerable<ZombieSighting> GetAllSightings()
        {
            ZombieDataContext ctx = new ZombieDataContext();
            var zombies = from zomb in ctx.ZombieSightings
                          select new ZombieSighting()
                          {
                              ZombieId = zomb.ID,
                              Name = zomb.ZombieName,
                              Longitude = zomb.Long,
                              Latitude = zomb.Lat
                          };
            return zombies;
        }
    }
}

In the preceding sample, I’ve overridden the default routing pattern set up in the application’s global object so that this method will be invoked whenever someone issues a GET on /api/v7/zombies. Just for giggles, I illustrate querying from an Entity Framework context and using a LINQ projection to convert the data entities into the JSON-decorated object that is part of my public API. In a real-world example, you might have additional layers here (e.g. an anti-corruption layer), but for this sample we’re okay because all we care about here is the REST API.

That’s it. If you add the one model object and an ApiController class to your ASP.NET application and run it, you’re good to go and you now have the basic building blocks for a microservice that you can deploy locally, or out in a cloud like Cloud Foundry or Azure.

A gearhead test drives a Tesla

Last week I was in Palo Alto, CA for work. During some downtime, I went for a walk. First, I passed a McLaren dealership and remarked on the fact that they had $250,000 cars just sitting in a parking lot. Truly, I was not in Kansas anymore. A few blocks down from the McLaren dealership was a Tesla showroom. Just for giggles, I scheduled a test drive of a Tesla AWD Model S 70D (all-wheel drive, 70 kWh battery, dual motors).

Before I get to the details, I think it's worth mentioning that I'm a car guy. I have owned two BMWs, and I owned them solely for the speed with which I could take corners. Neither was particularly comfortable to sit in, but they gave me an ear-to-ear grin to drive. Now that I am no longer a BMW owner, I have an armored Jeep with 37″ tires, custom suspension, and other modifications from front to back designed to make it absolutely destroy rock-crawling obstacles. It is a beast into which my friends and I have poured hours of work (and blood!). It's the most fun I've ever had driving under 5 miles per hour. I love the sound of engines (especially big engines). I love the roar of a supercar, and I absolutely adore the feeling of shifting a well-made manual transmission. I love my clutch, and you can pry it from my cold, dead left foot.

So, with this background in mind, you can imagine how I might have been skeptical of the all-electric sedan that looks like a land yacht. No stick shift? No exhaust note? No engine noise? And it looked big enough to fit one of my previous BMWs in the trunk. I looked at it and thought there is no way this car will be fun to drive. I had preconceived notions of driving a low clearance, $80,000 golf cart.

I couldn’t possibly have been more wrong.

Comfort

When I got into the vehicle, I was overcome by the quality of the interior. I set my standard for interior quality by comparison with BMW, so the bar is pretty high. The seats were both comfortable and supportive (you almost never get both), and there was room to spare. With no transmission hump in the middle of the vehicle, it gave the impression of having even more room. I'd say the inside of the Tesla fits somewhere between the BMW 5 and 7 series, and it's bigger than the interior of the pre-2014 Chrysler 300s.

The interior absolutely screams elegant simplicity. There's a single, massive screen in the middle of the dash, but that's it. There are no buttons, no knobs, no distractions anywhere. The steering wheel has a couple of multi-function knobs, but other than that, everything is out of your way. Even the massive 17″ touch screen doesn't intrude nearly as much into my peripheral vision as I thought it would.

This is easily the single most comfortable vehicle I have ever had the privilege to sit in. That said, as indicated by my background context from above – comfort is not the most important thing to me in a vehicle. If it was, I sure as hell wouldn’t drive a Jeep 😉

Handling

Moments after pulling out of the dealership, I had that "oh no, this is a giant golf cart" feeling. That feeling vanished as soon as I took my first turn. You can control how stiff or soft the steering is, ranging from what I'd call "Cadillac" cushy to BMW stiff. I cranked the steering stiffness all the way up, and the sales rep took me out on the freeway.

On the freeway, this thing is amazing. I’ll cover the technology in a minute, but the Tesla glides along effortlessly, giving me the impression that I could easily drive cross-country in this. Once we got off the freeway, we started charging up and down hilly back roads.

It’s been years since my face was covered in the grin of a car enthusiast, but it came back during this test drive. There’s no understeer, and the ridiculously low center of gravity makes this car tear through curves better than my 335is.

It’s the most comfortable vehicle I’ve ever driven, and it handles like a sports car. In fact, it handles so much like a sports car that I would rather take the Tesla on twisty back roads than my old BMW, and demolishing back roads was my favorite thing to do in that car.

Performance

The 70D is an all-wheel drive model, and it hits 60 mph in 5 seconds. That's the base model; move further up into the performance models and you can get to 60 mph in under 3 seconds, which is legit supercar acceleration, the kind that could cost you ten times the price of a Tesla in an actual supercar.

There are almost no moving parts. There’s no turbo lag. There’s no “drive by wire” lag where the ECM decodes the pressure on my gas pedal into an appropriate response to the fuel injector. There are no gear changes, no clutch gasp. It’s just press and gofast. It doesn’t matter whether you’re doing 5mph or 50mph, if you push down on the accelerator (can’t call it a gas pedal) you’ll get a metric shit-ton of torque, even if you’re going uphill.

This is no golf cart.

Technology

I don't even know where to start with the technology. The car comes standard with all the bells and whistles that are extras on mainstream sedans or SUVs, like lane-keeping warnings, blind-spot warnings, etc. It also comes with a massive 17″ screen in the dash that can be configured however you like, and it can feed you real-time traffic from the (free for 4 years) 4G LTE connection built into the vehicle.

One of the upgrades is auto pilot. This is not just adaptive cruise, this is take-your-hands-off-the-wheel auto pilot where you can let the vehicle drive for you on the highway or locate and automatically park in parking spots. It’s got a camera that reads speed limit signs on the side of the road.

There’s an app that lets you remotely monitor the charge status and location of the car, and you can remotely control the climate, and not just that half-assed vent starter you can do with GM and Chrysler vehicles.

The door handles detect your presence and extend from the vehicle as you approach. The nav system is aware of Tesla's network of superchargers and places with destination chargers, and it automatically routes you through them on long-haul drives.

There is no car on the market that has more advanced, more insanely cool technology than the Tesla.

Price

The price is brutal. Honestly, most people gasp when they hear how much these things cost. Fully decked out, the base 70D can run you in the mid $80,000 range. But, if you think long term about the cost, it gets a little (tiny) bit easier to swallow. You qualify for a $7,500 federal tax credit for buying an EV, there might be a state tax credit, and you need to do the math and determine how much money you’ll save in gas every year and subtract that from the monthly payment on the car. For some people, that price difference can be huge.

Still, the sticker price is a whopper, and this car is definitely not within the price range of the average household. That said, I suspect the Model 3 ($35,000) will destroy the marketplace in 2017 when it ships.

Conclusion

The bottom line is the Tesla is a big family sedan that you can stuff all the kids and their stuff into, and you get to drive it like a sports car with roller-coaster grade acceleration. The fact that you get to do all that without putting a drop of gasoline in it is just icing on the cake.

In my opinion, this is the best of all worlds. You get to have your cake and eat it too (for a price). If you can afford a car like this, you’d be doing yourself a disservice not to get behind the wheel of one for a test drive. It is a life-altering experience.