Kotan Code 枯淡コード

In search of simple, elegant code


Migrating an ASP.NET Core RC1 Microservice to RC2

In a recent blog post, I talked about how to deploy an ASP.NET Core RC1 service to the cloud. With yesterday’s release of ASP.NET Core RC2, I had to make a number of minor changes to my code in order to get it working both locally and in the cloud.

The biggest change between RC1 and RC2 is architectural. In RC1, the code responsible for prepping the application lived inside the dnx tool, and it was tightly coupled to the notion of building web applications. In RC2, we return to a more explicit model with a Program class and a Main() method, where we write our own code to construct the WebHostBuilder instead of having it constructed implicitly on our behalf.

First, there are changes to the structure of the project.json file. Rather than go through them all, I’m just going to point you to the modified project.json for the Zombies service. The easiest thing to do is to take a project.json file created by new RC2 tooling and use that as a template to adapt your existing file.
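For orientation, here's a minimal sketch of the RC2-era project.json shape; the package names below are from the RC2 release, but treat the exact versions as illustrative rather than as a definitive manifest:

{
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0-rc2-3002702"
    },
    "Microsoft.AspNetCore.Mvc": "1.0.0-rc2-final",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.0-rc2-final",
    "Microsoft.Extensions.Configuration.CommandLine": "1.0.0-rc2-final"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dotnet5.6"
    }
  }
}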

The next thing I did was add a Program class. You'll notice that I use the UseStartup&lt;Startup&gt;() method to invoke all of the old startup code I had from RC1:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;

namespace SampleMicroservice
{
  public class Program
  {
    public static void Main(string[] args)
    {
      var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

      var host = new WebHostBuilder()
        .UseConfiguration(config)
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .Build();

      host.Run();
    }
  }
}

The next change is that I had to use the AddCommandLine() method. Without this, our application isn't going to respond to or care about the --server.urls parameter that gets passed to it by the buildpack when we push to Cloud Foundry.
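Just to illustrate the mechanics (this exact invocation is my own illustration; the buildpack does the equivalent on push), the URL binding can be handed to the app on the command line:

dotnet run --server.urls=http://0.0.0.0:5000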

I had to make a few tiny little changes to the Startup class, but it remains essentially unchanged:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

namespace SampleMicroservice
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
                .AddJsonFile("ZombieConfig.json", optional: true, reloadOnChange: false)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; set; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
            services.AddOptions();
            services.Configure<ZombieOptions>(Configuration);

            services.AddScoped<IZombieRepository, ZombieRepository>();
            services.AddSingleton<IGlobalCounter, GlobalCounter>();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();
            app.UseMvc();
        }
    }
}

The Configure&lt;ZombieOptions&gt;() method is supported by the same extension method as in RC1, except that method now lives in a new package, so we had to add a dependency on Microsoft.Extensions.Options.ConfigurationExtensions to our project.json.
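For reference, the added dependency entry looks something like this (treat the version as illustrative):

"Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0-rc2-final"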

With all this in place, we can simply run dotnet restore, then dotnet build, and finally cf push, and that's that… our app is now running on RC2 in Cloud Foundry.

All of the code for the RC2 version of this app can be found in my GitHub repository.

Configuration Options and DI with ASP.NET 5

In ASP.NET 5, we now have access to a robust, unified configuration model that finally gives us the configuration flexibility we've spent the past several years hoping and begging for. Gone are the days of the hideous web.config XML files and, if we've done our jobs right, so too are the days of web-dev.config, web-prod.config, and other such abominations that turn out to be cloud-native anti-patterns.

The new configuration system explicitly decouples the concept of a configuration source from the actual values accessible through the Configuration object (yes, IConfiguration is always available via DI). When you use yeoman to create an ASP.NET application from a template, you'll get some boilerplate code in your Startup.cs file that looks something like this:

public Startup(IHostingEnvironment env)
{
    // Set up configuration sources.
    var builder = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

What you see in this code is the application setting up two configuration sources: A JSON file called “appsettings.json” and the collection of key-value pairs pulled from environment variables. You can access these configuration settings using the colon (:) separator for arbitrary depth accessors. For example, the default “appsettings.json” file looks like this:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Verbose",
      "System": "Information",
      "Microsoft": "Information"
    }
  }
}

If you wanted to know the default log level, you could access it using the configuration key string Logging:LogLevel:Default. While this is all well and good, ideally we want to segment our configuration into as many small chunks as possible, so that modules of our app can be configured separately (reducing the surface area of change per configuration modification). One thing that helps us with this is the fact that ASP.NET 5’s configuration system lets us define Options classes (POCOs) that essentially de-serialize arbitrary key-value structures from any configuration source into a C# object.
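In code, that key lookup is just an indexer access on the configuration object (assuming Configuration is the IConfigurationRoot built in the constructor above):

// Colon-separated keys walk the configuration hierarchy:
var defaultLevel = Configuration["Logging:LogLevel:Default"]; // "Verbose"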

To continue adding to my previous samples, let's say that we'd like to provide some zombie configuration. This configuration will, in theory, be used to drive the zombie controller's behavior. First, let's create a JSON file (this is just for convenience; we could use any number of built-in or third-party config sources):

{
  "ZombiesDescription": "zombies",
  "MaxZombieCount": 22,
  "EnableAdvancedZombieTracking": true
}

And now we need a POCO to contain this grouping of options:

using System;

namespace SampleMicroservice
{
	public class ZombieOptions
	{
		public String ZombiesDescription { get; set; }
		public int MaxZombieCount { get; set; }
		public bool EnableAdvancedZombieTracking { get; set; }
	}
}

We’re almost ready to roll. Next, we need to modify our ZombieController class to use these options. We’re going to continue to follow the inversion of control pattern and simply declare that we now depend on these options being injected from an external source:

namespace SampleMicroservice.Controllers
{
    [Route("api/zombies")]
    public class ZombieController
    {
        private IZombieRepository repository;
        private IGlobalCounter counter;
        private ZombieOptions options;

        public ZombieController(IZombieRepository zombieRepository,
                                IGlobalCounter globalCounter,
                                IOptions<ZombieOptions> zombieOptionsAccessor)
        {
            repository = zombieRepository;
            counter = globalCounter;
            options = zombieOptionsAccessor.Value;
        }

        [HttpGet]
        public IEnumerable<Zombie> Get()
        {
            counter.Increment();
            Console.WriteLine("count: " + counter.Count +
                ", advanced zombie tracking: " + options.EnableAdvancedZombieTracking +
                ", zombie name: '" + options.ZombiesDescription + "'");
            return repository.ListAll();
        }
    }
}

NOTE that there's a bug in the current RC1 documentation. In my code, I use zombieOptionsAccessor.Value, whereas the docs say this property is named Options; Value is the one that compiles on RC1, so YMMV. Also note that we're not injecting a direct instance of ZombieOptions; we're injecting an IOptions&lt;T&gt; wrapper around it.

Finally we can modify the Startup.cs to add options support and to configure the ZombieOptions class as an options target:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();
    services.AddOptions();
    services.Configure<ZombieOptions>(Configuration);

    services.AddScoped<IZombieRepository, ZombieRepository>();
    services.AddSingleton<IGlobalCounter, GlobalCounter>();
}

The important parts here are the call to AddOptions(), which enables the options service, and then the call to Configure<ZombieOptions>(Configuration), which uses the options service to pull values from a source (in this case, Configuration) and stuff them into a DI-injected instance of ZombieOptions.

There is one thing that I find a little awkward about the ASP.NET 5 configuration system: all of your sources and values are merged into a single top-level namespace. After you add all your sources, all of the top-level property names become peers of each other in the master IConfiguration instance.

This means that while all the properties of ZombieOptions have meaning together, they are floating around loose at the top level with all other values. If you name one of your properties something like Enabled and someone else has a third party options integration that also has a property called Enabled, those will overlap, and the last-loaded source will set the value for both options classes. If you really want to make sure that your options do not interfere with anyone else’s, I suggest using nested properties. So our previous JSON file would be modified to look like this:

{
  "ZombieConfiguration": {
    "ZombiesDescription": "zombies",
    "MaxZombieCount": 22,
    "EnableAdvancedZombieTracking": true
  }
}

Then the ZombieOptions class would move the three properties down one level below a new parent property called ZombieConfiguration. At this point, you would only have to worry about name conflicts with other config sources that contain properties called ZombieConfiguration.
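With the nested layout, you'd bind ZombieOptions from the section rather than from the configuration root. A minimal sketch of the adjusted registration:

services.Configure<ZombieOptions>(Configuration.GetSection("ZombieConfiguration"));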

In conclusion, after this relatively long-winded explanation of ASP.NET 5 configuration options, it finally feels like we have the flexibility we've all been craving when it comes to configuring our applications. As I'll show in a future blog post, this flexibility sets us up quite nicely to work locally and to deploy our applications to the cloud, all while retaining rich application configuration without returning to the dark times of web.config.

Dependency Injection and Services in ASP.NET 5

In a recent blog post, I talked about how to build a microservice using ASP.NET 5. While it was a trivial, contrived sample, it did show how easy it is to get started with ASP.NET 5 (did I mention this was on a Mac? That's never going to get old).

Dependency Injection works on the Inversion of Control principle. Rather than using what has colloquially been referred to as "new glue", where every class creates instances of the classes on which it depends, you avoid this tight coupling by being explicit about the dependencies your class requires and listing them in its constructor. This gives you a ton more flexibility but, most importantly, it also makes it much easier to test your classes in isolation without resorting to terrible, space-time-altering hacks.

The old (“new glue”) way:

public MyConstructor() {
  subordinateClass1 = new SubordinateClass1();
  subordinateClass2 = new SubordinateClass2();
}

The new (DI) way:

public MyConstructor(ISubordinate1 subordinate1, ISubordinate2 subordinate2) 
{
  this.subordinate1 = subordinate1;
  this.subordinate2 = subordinate2;
}

Among the many advantages of the second way is that the class is usable both with and without a formal dependency injection framework, and is far more testable than the first option.
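To make that testability claim concrete, here's a minimal sketch of exercising the class with hand-rolled fakes (the Fake* classes are hypothetical stand-ins, not part of the sample):

// Hypothetical fakes -- no DI container or mocking framework required:
public class FakeSubordinate1 : ISubordinate1 { /* canned test behavior */ }
public class FakeSubordinate2 : ISubordinate2 { /* canned test behavior */ }

// The class under test receives its dependencies explicitly and never
// knows it's being fed fakes:
var subject = new MyConstructor(new FakeSubordinate1(), new FakeSubordinate2());
// ...assert against subject without touching any real implementation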

So, now that we’ve got a “why do we care about DI?” primer out of the way, let’s take a look at how this might be used to apply to an ASP.NET 5 application.

In the ConfigureServices method of our application startup, we can choose to add built-in services (like ASP.NET MVC, identity, security, Entity Framework, etc.) or we can add services that we have created, using one of three different lifetimes (registration examples for each appear after this list):

  • Transient – Every time an instance of this interface is requested, the receiving object gets a new instance.
  • Scoped – This is request scope, so the instance of the injected object will remain intact throughout the lifetime of a web request. This is a particularly useful lifetime.
  • Singleton – A single instance of this object will be dispensed to all requesting injection points.
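Here's what registering one service per lifetime might look like (IEmailFormatter/EmailFormatter are hypothetical names; the zombie types appear later in this post):

public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IEmailFormatter, EmailFormatter>(); // new instance per injection
    services.AddScoped<IZombieRepository, ZombieRepository>(); // one instance per HTTP request
    services.AddSingleton<IGlobalCounter, GlobalCounter>();    // one instance for the process lifetime
}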

Let’s say that we want to replace the hacky controller-embedded code we put in the previous sample for the Get() method of our zombie controller with an actual zombie repository. If we did this the old way, we would new up an instance of the repository inside our controller, incur a tight coupling penalty, and then make it horribly difficult to test said controller. Using DI, we can modify our controller so it looks like this:

[Route("api/zombies")]
	public class ZombieController 
	{
		private IZombieRepository repository;
		private IGlobalCounter counter;
		
		public ZombieController(IZombieRepository zombieRepository, IGlobalCounter globalCounter) {
			repository = zombieRepository;	
			counter = globalCounter;
		}
		
		[HttpGet]
        public IEnumerable<Zombie> Get()
        {
			counter.Increment();
			Console.WriteLine("count: " + counter.Count);			
            return repository.ListAll();
        }
	}

Now our controller’s dependencies are explicit in the constructor, and are satisfied by some external source, such as our testing framework or a DI framework. Nowhere in this controller will you find “new glue” or tight coupling to a particular implementation of these classes.

To register specific implementations and lifetimes of these objects, we can modify the ConfigureServices method of our Startup class as shown here:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    services.AddScoped<IZombieRepository, ZombieRepository>();
    services.AddSingleton<IGlobalCounter, GlobalCounter>();
}

When we run this application now, a new instance of ZombieRepository will be created for each HTTP request and, because it's a hacked implementation, it just returns the same array as in the previous blog post. In the console, we'll see the global counter continue to increment across individual HTTP requests, and it will keep incrementing until the host process (e.g. Kestrel) shuts down.

While there are too many possibilities to list here, as you go forward building your ASP.NET 5 applications you should always be asking yourself, "Could this be a service?" Any time you find yourself isolating some piece of functionality into a module, check to see whether it fits nicely into the services framework in ASP.NET. This isn't just a third-party extension point; remember that ASP.NET 5 is now a decoupled, modular framework, and nearly all of ASP.NET's own functionality is added to the core through this same services mechanism.

Complex JSON Handling in Go

In a recent blog post, I showed a fairly naive and contrived microservice example – the equivalent of a RESTful “hello world”. While this example may be helpful in getting some people started, it doesn’t really represent the real world. Yesterday, while spilling my Go language newbsauce all over my colleagues, I ran into an interesting problem that happens more often than you might think.

In most samples involving de-serializing a JSON response from some server, you usually see a fixed schema. In other words, you know ahead of time the expected shape of the reply, even if that means you expect some of the fields to be missing or nullable. A common pattern, however, is for an API to return some kind of generic response object as a wrapper around an optional payload.

Let’s assume you’ve got an API that exposes multiple URLs that allow you to provision different kinds of resources. Every time you POST to a resource collection, you get back some generic JSON wrapper that includes two fields: status  and data. The status field indicates whether your operation succeeded, and, if it did succeed, the data field will contain information about the thing you provisioned. Since the API is responsible for different kinds of resources, the schema of the data field may vary.

While this approach makes the job of the server/API developer easier, and provides a very consistent way of getting at the "outer" information, it generally requires clients to make two passes at de-serialization or un-marshaling.

First, let’s take a look at what a wrapper pattern might look like as JSON-annotated Go structs:

// GenericServerResponse - wrapper for all responses from the apocalypse server.
type GenericServerResponse struct {
    // Status is a string indicating success or failure
    Status string `json:"status"`
    // Data holds the payload of the response
    Data interface{} `json:"data,omitempty"`
}

// ZombieApocalypseOutpost - one possibility for the contents of the Data field
// in a generic server response.
type ZombieApocalypseOutpost struct {
    Name      string  `json:"name"`
    Latitude  float32 `json:"latitude"`
    Longitude float32 `json:"longitude"`
    Status    string  `json:"status"`
}

While this example shows one possibility for the contents of the Data field, this pattern usually makes sense when there are multiple possibilities. For example, we could have one resource that provisions zombie apocalypse outposts, and another resource that provisions field artillery, where the struct inside the Data field would be something like a FieldArtillery struct, but the outer wrapper stays the same regardless.

First, let’s take a look at how the server might create a payload like this in a method rigged up to an HTTP handler function (I show how to set up HTTP handlers in my previous blog post):

func outpostProvisioner(w http.ResponseWriter, r *http.Request) {
    // check r.Method for POST
    outpost := ZombieApocalypseOutpost{Name:"Outpost Alpha",Status:"Safe",
      Latitude:37.323011, Longitude:-122.032252}
    response := GenericServerResponse{Status:"success", Data: outpost}
    b2, err := json.Marshal(response)
    if err != nil {
      panic(err)
    }
    w.Write(b2)
}

Note that on the server side we can usually get away with a single marshaling pass because we’re the ones deciding the contents of the Data field, so we know the structure we want. The client, on the other hand, likely needs to perform some conditional logic to figure out what schema applies to the wrapped payload.

You can’t use runtime typecasting tricks in Go because it doesn’t do runtime typecasting the way other languages like Java or C# might do. When we define something as interface{} like we did with the Data field, that looks like calling it Object on the surface, but as I said – we can’t typecast from interface{} to a struct.

What I can do is take advantage of the fact that we already have marshal/unmarshal capabilities in Go. What follows may not be the best, most idiomatic way to do it, but it seems to work out nicely.

  URL := "http://localhost:8080/outposts/"

  r, err := http.Get(URL)
  if err != nil {
    panic(err)
  }
  response, err := ioutil.ReadAll(r.Body)
  r.Body.Close()
  if err != nil {
    panic(err)
  }

  // First pass: unmarshal the outer wrapper message.
  var responseMessage GenericServerResponse
  if err := json.Unmarshal(response, &responseMessage); err != nil {
    panic(err)
  }

  // Second pass: re-marshal the loosely typed Data field back to bytes,
  // then decode those bytes into the concrete struct we expect.
  var outpost ZombieApocalypseOutpost
  b, err := json.Marshal(responseMessage.Data)
  if err != nil {
    panic(err)
  }
  if err := json.Unmarshal(b, &outpost); err != nil {
    panic(err)
  }

  fmt.Println(responseMessage)
  fmt.Println(outpost)

First, we grab the raw payload from the server (which is just hosting the method I listed above). Once we’ve got the raw payload, we can unmarshal the outer or wrapper message (the GenericServerResponse JSON-annotated struct). At this point, the Data field is of type interface{} which really means we know nothing about it. Internally one can assume that the JSON unmarshalling code probably produced some kind of map, but I don’t want to deal with the low-level stuff. So, I marshal the Data field which gives me a byte array. Conveniently, a byte array is exactly what the unmarshal method expects. So I produce a byte array and then decode that byte array into the ZombieApocalypseOutpost struct.

This may seem like a little bit of added complexity on the client side, but there is actually a valuable tradeoff here. We gain the advantage of dealing with virtually all of an API's resources using the same wrapper message, which greatly simplifies our client code. Only when we actually need the data inside the wrapper do we need to crack it open and perform the second pass, which can also be hidden behind a simple library.

I used to cringe every time I saw this pattern because I’m not a fan of black box data. If the inside of the Data field can vary in schema, then you are at the mercy of the API provider to insulate you from changes to the inner data. However, you’re no more a potential victim of breaking changes using this pattern than you would be without the wrapper.

So, in short, if you trust the API provider using this wrapper pattern, it can actually be a very convenient way to standardize on a microservice communication protocol.

Creating a Microservice with the ASP.NET Web API

I’ve been blogging about microservices for quite some time, and have included examples in Scala/Akka, Go, and even Java. Today I thought I would include a sample of how to build a microservice using the Microsoft ASP.NET Web API. As I’ve mentioned a few times before, the term microservice is a bit overloaded. Really, it just refers to a service that adheres to the Single Responsibility Principle (SRP), and does one thing. Sometimes people also include the fact that a service is bootstrapped (the container/HTTP server is included in the service code rather than relied upon as an external dependency) but that detail is unimportant for this discussion.

The first thing we need to do is create an empty ASP.NET application, which should be straightforward for any Microsoft developer. Next, we can create routes for our RESTful resources. There are a number of ways to do this, but I like providing a universal default resource pattern and then overriding it at the individual resource level as exceptions occur.

I can create a starter route pattern in the global.asax.cs file like so:

GlobalConfiguration.Configure(config =>
{
    config.MapHttpAttributeRoutes();

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/v3/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );
});

If I then create a Web API controller called something like ZombiesController, and put a method like Get on it, this will automatically be the target of the route api/v3/zombies. Further, a Get(id) method will then automatically get invoked by api/v3/zombies/12.

The next step is to create a model object that we want to return from our resource URLs. Since we’ve been using zombies and zombie sightings, let’s continue with that domain and create the following C# class:

using Newtonsoft.Json;
using System;

namespace KotanCode.Microservices.Models
{    
    public class ZombieSighting
    {
        [JsonProperty(PropertyName = "zombieId")]
        public Guid ZombieId { get; set; }

        [JsonProperty(PropertyName = "name")]
        public String Name { get; set; }

        [JsonProperty(PropertyName = "lat")]
        public Double Latitude { get; set; }

        [JsonProperty(PropertyName = "long")]
        public Double Longitude { get; set; }
    }
}

Now we can create a resource that exposes a collection of zombie sightings by adding a new ASP.NET Web API controller to our project:

namespace KotanCode.Microservices.Controllers
{
    public class ZombieController : ApiController
    {
        [Route("api/v7/zombies")] // override the standard route pattern
        [HttpGet]
        public IEnumerable<ZombieSighting> GetAllSightings()
        {
            ZombieDataContext ctx = new ZombieDataContext();
            var zombies = from zomb in ctx.ZombieSightings
                          select new ZombieSighting()
                          {
                              ZombieId = zomb.ID,
                              Name = zomb.ZombieName,
                              Longitude = zomb.Long,
                              Latitude = zomb.Lat
                          };
            return zombies;
        }
    }
}

In the preceding sample, I’ve overridden the default routing pattern set up in the application’s global object so that this method will be invoked whenever someone issues a GET on /api/v7/zombies. Just for giggles, I illustrate querying from an Entity Framework context and using a LINQ projection to convert the data entities into the JSON-decorated object that is part of my public API. In a real-world example, you might have additional layers here (e.g. an anti-corruption layer), but for this sample we’re okay because all we care about here is the REST API.

That’s it. If you add the one model object and an ApiController class to your ASP.NET application and run it, you’re good to go and you now have the basic building blocks for a microservice that you can deploy locally, or out in a cloud like Cloud Foundry or Azure.

Creating a Microservice in Go

A while back (like, quite a while) I started playing with Go and then unfortunate circumstances kept me from continuing my exploration. My recent exposure to super brilliant people has rekindled some of my desire to explore new languages. Since I’ve been knee-deep in microservices recently, I decided to try and build a microservice in Go.

There are a number of libraries available to Go developers that help with this kind of thing (including Martini). But, I want to see what it looks like to build the most minimal microservice possible. Turns out, that’s pretty damn minimal:

package main

import (
    "encoding/json"
    "net/http"
)

func main() {
    http.HandleFunc("/zombie/", zombie)
    http.ListenAndServe(":8080", nil)
}

type ZombieThing struct {
    Text string
    Name string
    Age  int
}

func zombie(w http.ResponseWriter, r *http.Request) {
    zomb := ZombieThing{"Watch out for this guy!", "Bob Zombie", 12}
    b, err := json.Marshal(zomb)
    if err != nil {
        panic(err)
    }
    w.Write(b)
}

So now I just type go run service.go and the service is running. Now I can just curl the service:

$ curl http://localhost:8080/zombie/
 {"Text":"Watch out for this guy!","Name":"Bob Zombie","Age":12}

If you’re wondering why I used upper-case member names for the JSON object being returned, it’s because I’m lazy. Go considers a variable name with lower case to be private or non-exported, so if I were to make the ZombieThing struct have lowercase names, nothing would be exported and the service would return {}.

So, basically, we have yet another example in a long list of examples proving that microservices are a language-agnostic architecture: you can build them in pretty much whatever language and framework suit your needs.

Securing a Spring Boot Microservice

In my most recent blog post, I decided to try and explore microservices with a popular framework that I had never used before – Spring Boot. ‘Hello world’ samples are all well and good, but they rarely ever give you a good idea of what it’s going to be like to use that framework or language in production.

One production concern that just about every microservice has is security. It was because of this that I decided to try and secure the zombie service I made in the last blog post. The first step was pretty easy, adding the following line to my build.gradle file:

compile("org.springframework.boot:spring-boot-starter-security")

If you’ve ever implemented security with other Java frameworks, you probably expect another hour or so of rigging up things, configuration, and defining custom filter classes. Like any good opinionated framework, Spring Boot takes the most accepted patterns and turns them into reasonable defaults. As a result, my application is now already secured using basic HTTP authentication.

To prove it, I try and hit the previous zombie resource:

$ curl http://localhost:8080/zombies/12
{"timestamp":1440592533449,"status":401,"error":"Unauthorized","message":"Full authentication is required to access this resource","path":"/zombies/12"}

When I look at the new startup log after adding the security starter dependency, I notice a number of new things, like default filters being added. I also see the following line of trace:

Using default security password: c611a795-ce2a-4f24-97e3-a886b31586e7

I happened to read somewhere in the documentation that the default security username is user. So, I can now use basic auth to hit the same zombie URL, and this time I will get results:

$ curl -u user:c611a795-ce2a-4f24-97e3-a886b31586e7 http://localhost:8080/zombies/12
{"name":"Bob","age":12}

Let’s assume for a moment that I don’t want a GUID as a password, nor do I want to have to read the application logs to find the password. There is a way to override the default username and randomly generated password using an application.properties file. However, properties files are a big no-no if you’re planning on deploying to the cloud, so a better way to do it would be environment variables:

export SECURITY_USER_NAME=kevin
export SECURITY_USER_PASSWORD=root

Now when I run the application, the default credentials for basic auth will be pulled from the environment variables.

Finally, let’s say I want to have more than one user, and I might want to have security roles, but I’m not quite ready to make the commitment to having a fully persistent user backing store. I can create a security configurer like the one below and the magic happens (code adapted from public Spring docs and Greg Turnquist’s “Learning Spring Boot” book):

package demo;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableGlobalMethodSecurity(securedEnabled = true)
public class DemoSecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Autowired
    public void configureAuth(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
            .withUser("kevin").password("zombies").roles("USER").and()
            .withUser("root").password("root").roles("USER", "ADMIN");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .antMatchers(HttpMethod.GET, "/zombies").permitAll()
                .anyRequest().hasRole("USER")
            .and()
                .httpBasic();
    }
}

With this configuration, I’ve made it so the /zombies URL is publicly accessible, but /zombies/(id) is secured and requires the basic credentials to belong to either the kevin user or the root user.

If you’ve read this blog, then you know I’ve dabbled with just about every framework and language around, including many ways of securing applications and services. So far, Spring Boot seems super easy and like a breath of fresh air compared to the tedious, complicated ways of securing services I’ve played with before.

Some of you may shoot me for this, but I think it might even be easier than creating auth filters for Play Framework; only time will tell if that assertion holds true.

Creating a Microservice with Spring Boot

It’s no secret that I’m a big fan of microservices. I have blogged about creating a microservice with Akka, and I’m an avid follower of all things service-oriented. This weekend I decided that I would try and see why people are so excited about Spring Boot, and, as a foot in the door to Spring Boot, I would build a microservice.

The first issue I encountered was a lot of conflicting advice on where to get started. For an opinionated framework, it felt awkward that so many people had so many recommendations just to get into the Hello World phase. You can download the Spring CLI, or you can use the Spring Boot starter service online to create a starter project. You can also choose to have your project built by Gradle or Maven.

Since I’m on a Mac, I made sure my homebrew installation was up to date and just fired off:

brew install gvm

I did this so I could have gvm manage my springboot installations. I used gvm to install spring boot as follows:

gvm install springboot

If you want you can have homebrew install springboot directly.

The next step is to create a new, empty Spring Boot project. You can do this by hitting up the Spring Initializr (http://start.spring.io) or you can use the Spring Boot CLI to create your stub (this still uses the Spring Initializr service under the covers).

$ spring init --build=gradle HelloService
Using service at https://start.spring.io
Project extracted to '/Users/khoffman/Code/SpringBoot/HelloService'

This creates a new application in a directory called HelloService. There is a DemoApplication class in the demo package that is decorated with the @SpringBootApplication annotation. Without going into too much detail (mostly because I don’t know much detail), this annotation tells Spring to enable automatic configuration based on discovered dependencies and tells it to automatically scan for components to satisfy DI requirements.

Next, I need to make sure that the project has access to the right annotations and components to let me rig up a basic REST controller, so I’ll add the following dependency to my build.gradle file in the dependencies section:

compile("org.springframework.boot:spring-boot-starter-web")

Now I can create a new file called ZombieController.java in src/main/java/demo/controller:

package demo.controller;

import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMapping;

@RestController
public class ZombieController {
  @RequestMapping("/zombies")
  public String getZombies() {
    return "Goodbye, cruel world";
  }
}

With no additional work or wiring up, I can now do a gradle build in the root of my application directory and then I can execute the application (the web server comes embedded, which is one of the reasons why it’s on my list of good candidates for microservice building):

java -jar build/libs/demo-0.0.1-SNAPSHOT.jar

Now hitting http://localhost:8080/zombies will return the string “Goodbye, cruel world”. This is all well and good, but I don’t think it goes far enough for a sample. Nobody builds microservices that return raw strings, they build microservices that return actual data, usually in the form of JSON.

Fist, let’s build a Zombie model object using some Jackson JSON annotations:

package demo;

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import org.hibernate.validator.constraints.NotEmpty;

@JsonAutoDetect(getterVisibility = JsonAutoDetect.Visibility.NONE)
@JsonIgnoreProperties(ignoreUnknown = true)
public class Zombie {

    @NotEmpty
    @JsonSerialize
    @JsonProperty("name")
    private String name;

    @JsonSerialize
    @JsonProperty("age")
    private int age;

    public Zombie(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() {
        return name;
    }

    public int getAge() {
        return age;
    }
}

And now I can add a new method to my controller that returns an individual Zombie, and takes care of JSON serialization for me based on my preferences defined on the class:

 @RequestMapping("/zombies/{id}")
 public @ResponseBody Zombie getZombie(@PathVariable("id") int id) {
   return new Zombie("Bob", id);
 }

Now I can rebuild my application with gradle build (or I can install a gradle wrapper via gradle wrapper and then invoke ./gradlew build) and then run it again. Once it has compiled and it’s running again, I can hit the following URL with curl: http://localhost:8080/zombies/12

And I will get the following JSON reply:

{"name":"Bob","age":12}

And that’s basically it, at least for the simplest hello world sample of a Spring Boot service. Ordinarily, you wouldn’t return values directly from inside the controller, instead the controllers usually delegate to “auto wired” services, which perform the real work. But, for the purposes of my sample, I decided it was okay to leave the code in the controller.

So, what’s my conclusion? Well, writing a single REST method that returns fake data is by no means a way to judge an entire framework. However, if you’ve been doing RESTful services in Java and have not been using Spring Boot, then this is likely a super refreshing change of pace. I’ll likely keep poking around with it so I can get a better idea for how it behaves in a real production environment.

Distributed Transactions in a Cloud-Native, Microservice World

I have been going back and forth between whether I want to do my blogging from WordPress or from Medium. My latest blog post, Distributed Transactions in a Cloud-Native, Microservice World, is over on Medium.

See the blog post on Medium here.

If you have thoughts on whether you'd like me to continue using Medium, or this blog (WordPress), or both, please let me know in the comments section below.

Someone Needs to Teach McDonalds about Queueing Theory

I handed over my $1 bill to pay for my remarkably cheap fast food coffee (I had too much blood in my caffeine stream, judge me later). In response, I got a receipt and an hourglass timer. My first reaction was, “WTF is this?” My second reaction was to ask aloud, “What is this?” (thankfully I had enough caffeine to filter my language that early in the morning).

It turns out that, at least in my neighborhood, McDonald’s has instituted a 60 seconds or it’s free policy. If the hourglass empties between the time the first cashier took my money and the time I receive my coffee (or whatever I ordered), then apparently I get a free sandwich.

This seems like a good idea at first glance, but not to me. I've had far too much experience with queues (both of the human and computer variety) to think this is going to be anything other than a total train wreck. I ease up to the food window and, lo and behold, my gimmicky little plastic hourglass is empty.

I exchange the empty hourglass for my coffee, and then the cashier goes about her business. I explain that my hourglass was empty, and it takes them 30 seconds to decide what to do with me. After figuring out that my presence has basically caused an exception to normal processing, they tell me to pull forward. I then wait four full minutes to receive my little paper business card telling me that I am owed a free sandwich.

Those of you who have dealt with optimizing back-end systems for throughput, especially systems with message passing and queueing, will immediately recognize huge problems with the design of this promotion. First and foremost, they have enforced a maximum latency requirement of 60 seconds (plus the amount of time it took me to get to the first cashier) on my order. If the customer-perceived latency (remember, the internal clock started ticking before the customer's clock) exceeds 60 seconds, then McDonald's loses between 40 cents and a buck or so, depending on the sandwich the customer claims on their next visit.

Unfortunately, that’s not the real problem, and that’s where most people start to horribly misunderstand queueing theory both in the real world and in pipeline processing for computers. The real problem is that exceptions in the processing pipeline have a direct impact on the processing of items further in the backlog. In other words, the fact that it took them 30 seconds to figure out what to do with me likely emptied the hourglass of the customer behind me, and the customer behind them is in a good position to get free goodies as well.

Next, they push the queue exceptions to an overflow queue (the "please pull ahead and wait for us at the curb" routine). This is another spot where lots of people screw up queue implementations. In the case of McDonald's, the size of that overflow queue is 1, possibly 2 if you're at a big location. Otherwise, you get dumped into the parking lot, which also has a finite number of spaces.

In server software, if the overflow queue overflows, the best case scenario is that everything grinds to a halt while the main feeder queue builds and builds and builds, all waiting for a single item to be processed at the head of the queue. In the real world, you end up with a pile of angry customers waiting in line.

Now, someone at McDonald's was thinking, because the "free stuff" timer starts at window 1 and ends at window 2, when you get your food. This means that at any time, regardless of the size of the queue, there are at most 2 people (given the average distance between drive-thru windows) with running hourglass timers who could potentially take advantage of the free sandwich offer. This basically means that no matter how crappy that line gets, McDonald's financial exposure is limited to a small subset of the people in the queue. Even more interesting: statistically, every time someone uses a "free thing" coupon, they spend more than they otherwise would have. So even if McDonald's gives out a metric crap-ton of free sandwich coupons, they will likely come out ahead anyway, dollar-wise.

But, that doesn’t say anything for the customers stuck in the queue, where most of them are stuck in line because this giveaway is actually increasing queue processing time, and their delay will not be rewarded because the food prep actually gets ahead when the drive-thru line queues up and slows down.

So, now that you’ve actually made it through the entire blog post, reading the verbal equivalent of me bloviating about something that seems to have no relevance to computer programming… what was the point?

For developers building backend processing pipelines:

  • Optimize your exception processing plan. When everything is working properly, you might be able to process a hojillion messages per second, but when a single item clogs up your queue and tanks your throughput, that’s when managers come storming into the office with flamethrowers.
  • Spend time planning for poison pills and queue cloggers. Sometimes, processing a particular item can crash the processor. You should be able to quickly recover from this, identify the poison pill, and isolate it so you don't process it again (at least not in your main pipeline). Similarly, if throughput is crucial (and it usually is) and an item is going to take an inordinately long time to process, put it in another queue to allow the main one to drain (see the sketch after this list) … and remember that if your overflow queue overflows, you're just as screwed as you were when you started.
  • Perceived performance is just as important as clocked performance. McDonald's is doing a little sleight of hand with an hourglass gimmick to get you to focus on the smallest time interval in the entire process, while the food processing started much earlier. If you can't get your single pipeline to go any faster, maybe splitting into multiple parallel pipelines will offer the customer (or client, client app, etc.) better perceived performance?
  • Identify and mitigate choke points. If you’re doing throughput and latency analysis on your processing pipeline (you are, aren’t you?) then you should be able to easily identify the point at which the pipeline takes the longest. This is an ideal spot to fan out and perform that task in parallel, use map/reduce, or potentially decide to fork the pipeline. For McDonalds, the slowest activity is actually taking your order, so many places actually fork that pipeline and can take 2 orders in parallel.
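To make the poison pill and overflow advice above concrete, here's a minimal C# sketch (all names and the "orders" are hypothetical, not a production design) of a consumer that shunts slow items to an overflow queue and quarantines poison pills instead of letting them clog the main line:

using System;
using System.Collections.Concurrent;

class PipelineSketch
{
    static void Main()
    {
        var mainQueue = new BlockingCollection<string>();
        var overflow = new BlockingCollection<string>();   // "pull forward and wait"
        var quarantine = new BlockingCollection<string>(); // poison pills never re-enter

        foreach (var order in new[] { "coffee", "poison", "slow-custom-order", "burger" })
            mainQueue.Add(order);
        mainQueue.CompleteAdding();

        foreach (var order in mainQueue.GetConsumingEnumerable())
        {
            try
            {
                if (order.StartsWith("slow"))
                {
                    overflow.Add(order); // don't let one slow item stall the whole line
                    continue;
                }
                if (order == "poison")
                    throw new InvalidOperationException("item crashed the processor");
                Console.WriteLine("served: " + order);
            }
            catch (InvalidOperationException)
            {
                quarantine.Add(order); // isolate it; inspect or replay out of band
            }
        }
    }
}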

Congratulations if you made it all the way to the end. Now you can see how even stupid things like waiting for my damn coffee can get me fired up about queueing theory and optimizing back-end server processing pipelines.