Kotan Code 枯淡コード

In search of simple, elegant code

Complex JSON Handling in Go

In a recent blog post, I showed a fairly naive and contrived microservice example – the equivalent of a RESTful “hello world”. While this example may be helpful in getting some people started, it doesn’t really represent the real world. Yesterday, while spilling my Go language newbsauce all over my colleagues, I ran into an interesting problem that happens more often than you might think.

In most samples involving de-serializing a JSON response from some server, you usually see a fixed schema. In other words, you know ahead of time the expected shape of the reply, even if that means you expect some of the fields to be missing or nullable. A common pattern, however, is for an API to return some kind of generic response object as a wrapper around an optional payload.

Let’s assume you’ve got an API that exposes multiple URLs that allow you to provision different kinds of resources. Every time you POST to a resource collection, you get back some generic JSON wrapper that includes two fields: status and data. The status field indicates whether your operation succeeded, and, if it did succeed, the data field will contain information about the thing you provisioned. Since the API is responsible for different kinds of resources, the schema of the data field may vary.

While this approach makes the job of the server/API developer easier, and provides a very consistent way of getting at the “outer” information, this generally requires clients to make 2 passes at de-serialization or un-marshaling.

First, let’s take a look at what a wrapper pattern might look like as JSON-annotated Go structs:

// GenericServerResponse - wrapper for all responses from apocalypse server.
type GenericServerResponse struct {
      // Status returns a string indicating success or failure
      Status string `json:"status"`
      // Data holds the payload of the response
      Data interface{} `json:"data,omitempty"`
}

// ZombieApocalypseOutpost - one possibility for contents of the Data field
// in a generic server response.
type ZombieApocalypseOutpost struct {
    Name      string  `json:"name"`
    Latitude  float32 `json:"latitude"`
    Longitude float32 `json:"longitude"`
    Status    string  `json:"status"`
}

While this example shows one possibility for the contents of the Data field, this pattern usually makes sense when there are multiple possibilities. For example, we could have one resource that provisions zombie apocalypse outposts, and another resource that provisions field artillery, where the struct inside the Data field would be something like a FieldArtillery struct, but the outer wrapper stays the same regardless.

First, let’s take a look at how the server might create a payload like this in a method rigged up to an HTTP handler function (I show how to set up HTTP handlers in my previous blog post):

func outpostProvisioner(w http.ResponseWriter, r *http.Request) {
    // check r.Method for POST
    outpost := ZombieApocalypseOutpost{Name: "Outpost Alpha", Status: "Safe",
        Latitude: 37.323011, Longitude: -122.032252}
    response := GenericServerResponse{Status: "success", Data: outpost}
    b2, err := json.Marshal(response)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.Write(b2)
}

Note that on the server side we can usually get away with a single marshaling pass because we’re the ones deciding the contents of the Data field, so we know the structure we want. The client, on the other hand, likely needs to perform some conditional logic to figure out what schema applies to the wrapped payload.

You can’t rely on runtime typecasting tricks in Go the way you might in languages like Java or C#. Declaring the Data field as interface{} looks a lot like declaring it as Object on the surface, but a type assertion from interface{} to our struct will fail here, because the JSON decoder never stored a struct in that field in the first place.
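To make that concrete, here is a small sketch (mine, not from the original post) showing that a direct type assertion on the decoded Data field fails, because encoding/json stores a JSON object held in an interface{} as a map[string]interface{}:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Outpost is a stand-in for the concrete payload type.
type Outpost struct {
	Name string `json:"name"`
}

// dataAsOutpost decodes a wrapper, then attempts a direct type assertion
// on the Data field. The assertion always fails: the decoder stored a
// map[string]interface{} there, not an Outpost.
func dataAsOutpost(raw []byte) (Outpost, bool) {
	var wrapper struct {
		Data interface{} `json:"data"`
	}
	if err := json.Unmarshal(raw, &wrapper); err != nil {
		return Outpost{}, false
	}
	o, ok := wrapper.Data.(Outpost)
	return o, ok
}

func main() {
	_, ok := dataAsOutpost([]byte(`{"data":{"name":"Alpha"}}`))
	fmt.Println("direct assertion succeeded?", ok) // always false
}
```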

What I can do is take advantage of the fact that we already have marshal/unmarshal capabilities in Go. What follows may not be the best, most idiomatic way to do it, but it seems to work out nicely.

  URL := "http://localhost:8080/outposts/"

  r, _ := http.Get(URL)
  response, _ := ioutil.ReadAll(r.Body)
  defer r.Body.Close()

  var responseMessage GenericServerResponse
  err := json.Unmarshal(response, &responseMessage)
  if err != nil {
    log.Fatal(err)
  }

  var outpost ZombieApocalypseOutpost
  b, _ := json.Marshal(responseMessage.Data)
  err = json.Unmarshal(b, &outpost)
  if err != nil {
    log.Fatal(err)
  }


First, we grab the raw payload from the server (which is just hosting the method I listed above). Once we’ve got the raw payload, we can unmarshal the outer or wrapper message (the GenericServerResponse JSON-annotated struct). At this point, the Data field is of type interface{}, which really means we know nothing about it. Under the hood, the JSON decoder stores an object in an interface{} as a map[string]interface{}, but I don’t want to deal with that low-level representation. So, I marshal the Data field, which gives me a byte array. Conveniently, a byte array is exactly what the unmarshal function expects, so I can then decode that byte array into the ZombieApocalypseOutpost struct.

This may seem like a little bit of added complexity on the client side, but there is actually a valuable tradeoff here. We gain the advantage of dealing with virtually all of an API’s resources using the same wrapper message, which greatly simplifies our client code. Only when we actually need the data inside the wrapper do we need to crack it open and perform the second pass, which can also be hidden behind a simple library.
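That “simple library” could be little more than a helper that performs the second pass. Here’s a minimal sketch (the decodeData helper name and the FieldArtillery fields are my own inventions for illustration, not part of the original API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeData re-marshals a wrapper's already-unmarshaled Data field and
// decodes the resulting bytes into whatever concrete type the caller
// expects. (decodeData is a hypothetical helper name.)
func decodeData(data interface{}, out interface{}) error {
	b, err := json.Marshal(data)
	if err != nil {
		return err
	}
	return json.Unmarshal(b, out)
}

// FieldArtillery is an illustrative second payload type; its fields are
// made up for this example.
type FieldArtillery struct {
	Name    string `json:"name"`
	Caliber int    `json:"caliber"`
}

func main() {
	var wrapper struct {
		Status string      `json:"status"`
		Data   interface{} `json:"data"`
	}
	raw := []byte(`{"status":"success","data":{"name":"Big Bertha","caliber":420}}`)
	if err := json.Unmarshal(raw, &wrapper); err != nil {
		panic(err)
	}

	// Second pass: crack open the wrapper into the concrete type.
	var gun FieldArtillery
	if err := decodeData(wrapper.Data, &gun); err != nil {
		panic(err)
	}
	fmt.Println(gun.Name, gun.Caliber) // Big Bertha 420
}
```

The outer wrapper handling stays identical for every resource; only the type passed to the helper changes.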

I used to cringe every time I saw this pattern because I’m not a fan of black box data. If the inside of the Data field can vary in schema, then you are at the mercy of the API provider to insulate you from changes to the inner data. However, you’re no more a potential victim of breaking changes using this pattern than you would be without the wrapper.

So, in short, if you trust the API provider using this wrapper pattern, it can actually be a very convenient way to standardize on a microservice communication protocol.

Creating a Microservice with the ASP.NET Web API

I’ve been blogging about microservices for quite some time, and have included examples in Scala/Akka, Go, and even Java. Today I thought I would include a sample of how to build a microservice using the Microsoft ASP.NET Web API. As I’ve mentioned a few times before, the term microservice is a bit overloaded. Really, it just refers to a service that adheres to the Single Responsibility Principle (SRP), and does one thing. Sometimes people also include the fact that a service is bootstrapped (the container/HTTP server is included in the service code rather than relied upon as an external dependency) but that detail is unimportant for this discussion.

The first thing we need to do is create an empty ASP.NET application, which should be straightforward for any Microsoft developer. Next, we can create routes for our RESTful resources. There are a number of ways to do this, but I like providing a universal default resource pattern and then overriding it at the individual resource level as exceptions occur.

I can create a starter route pattern in the global.asax.cs file like so:

  GlobalConfiguration.Configure(config =>
  {
      config.Routes.MapHttpRoute(
          name: "DefaultApi",
          routeTemplate: "api/v3/{controller}/{id}",
          defaults: new { id = RouteParameter.Optional }
      );
  });

If I then create a Web API controller called something like ZombiesController, and put a method like Get on it, this will automatically be the target of the route api/v3/zombies. Further, a Get(id) method will then automatically get invoked by api/v3/zombies/12.

The next step is to create a model object that we want to return from our resource URLs. Since we’ve been using zombies and zombie sightings, let’s continue with that domain and create the following C# class:

using Newtonsoft.Json;
using System;

namespace KotanCode.Microservices.Models
{
    public class ZombieSighting
    {
        [JsonProperty(PropertyName = "zombieId")]
        public Guid ZombieId { get; set; }

        [JsonProperty(PropertyName = "name")]
        public String Name { get; set; }

        [JsonProperty(PropertyName = "lat")]
        public Double Latitude { get; set; }

        [JsonProperty(PropertyName = "long")]
        public Double Longitude { get; set; }
    }
}

Now we can create a resource that exposes a collection of zombie sightings by adding a new ASP.NET Web API controller to our project:

namespace KotanCode.Microservices.Controllers
{
    public class ZombieController : ApiController
    {
        [Route("api/v7/zombies")] // override the standard route pattern
        public IEnumerable<ZombieSighting> GetAllSightings()
        {
            ZombieDataContext ctx = new ZombieDataContext();
            var zombies = from zomb in ctx.ZombieSightings
                          select new ZombieSighting()
                          {
                              ZombieId = zomb.ID,
                              Name = zomb.ZombieName,
                              Longitude = zomb.Long,
                              Latitude = zomb.Lat
                          };
            return zombies;
        }
    }
}

In the preceding sample, I’ve overridden the default routing pattern set up in the application’s global object so that this method will be invoked whenever someone issues a GET on /api/v7/zombies. Just for giggles, I illustrate querying from an Entity Framework context and using a LINQ projection to convert the data entities into the JSON-decorated object that is part of my public API. In a real-world example, you might have additional layers here (e.g. an anti-corruption layer), but for this sample we’re okay because all we care about here is the REST API.

That’s it. If you add the one model object and an ApiController class to your ASP.NET application and run it, you’re good to go and you now have the basic building blocks for a microservice that you can deploy locally, or out in a cloud like Cloud Foundry or Azure.

A gearhead test drives a Tesla

Last week I was in Palo Alto, CA for work. During some downtime, I went for a walk. First, I passed a McLaren dealership and remarked on the fact that they had $250,000 cars just sitting in a parking lot. Truly, I was not in Kansas anymore. A few blocks down from the McLaren dealership was a Tesla showroom. Just for giggles, I scheduled a test drive for a Tesla AWD Model S 70D (all-wheel drive, 70kWH, dual motors).

Before I get to the details, I think it’s worth mentioning that I’m a car guy. I have owned two BMWs and I owned them solely for the speed with which I could take corners. Neither were particularly comfortable to sit in, but they gave me an ear-to-ear grin to drive. Now that I am no longer a BMW owner, I have an armored Jeep with 37″ tires, custom suspension, and other modifications from front to back designed to make it absolutely destroy rock crawling obstacles. It is a beast into which my friends and I have poured hours of work (and blood!). It’s the most fun I’ve ever had driving under 5 miles per hour. I love the sound of engines (especially big engines). I love the roar of a super car, and I absolutely adore the feeling of shifting a well made manual transmission. I love my clutch, and you can pry it from my cold, dead left foot.

So, with this background in mind, you can imagine how I might have been skeptical of the all-electric sedan that looks like a land yacht. No stick shift? No exhaust note? No engine noise? And it looked big enough to fit one of my previous BMWs in the trunk. I looked at it and thought there is no way this car will be fun to drive. I had preconceived notions of driving a low clearance, $80,000 golf cart.

I couldn’t possibly have been more wrong.


When I got into the vehicle, I was overcome by the quality of the interior. I set my standard for interior quality by BMW comparison, so the bar is pretty high. The seats were both comfortable and supportive (you almost never get both), and there was room to spare. With no transmission hump in the middle of the vehicle, the cabin gives the impression of having even more room. I’d say the inside of the Tesla fits somewhere between the BMW 5 and 7 series, and it’s bigger than the interior of the pre-2014 Chrysler 300s.

The interior absolutely screams elegant simplicity. There’s a single, massive screen in the middle of the dash but that’s it. There are no buttons, no knobs, no distractions anywhere. The steering wheel has a couple multi-function knobs but other than that, everything is out of your way. Even the massive 17″ touch screen doesn’t intrude nearly as much into my peripheral vision as I thought it would.

This is easily the single most comfortable vehicle I have ever had the privilege to sit in. That said, as indicated by my background context from above – comfort is not the most important thing to me in a vehicle. If it was, I sure as hell wouldn’t drive a Jeep 😉


Moments after pulling out of the dealership, I had that “oh no, this is a giant golf cart” feeling. That feeling vanished as soon as I took my first turn. You can control how stiff or soft the steering is, ranging from what I’d call “Cadillac” softness to BMW stiffness. I cranked the steering stiffness all the way up and the sales rep took me out on the freeway.

On the freeway, this thing is amazing. I’ll cover the technology in a minute, but the Tesla glides along effortlessly, giving me the impression that I could easily drive cross-country in this. Once we got off the freeway, we started charging up and down hilly back roads.

It’s been years since my face was covered in the grin of a car enthusiast, but it came back during this test drive. There’s no understeer, and the ridiculously low center of gravity makes this car tear through curves better than my 335is.

It’s the most comfortable vehicle I’ve ever driven, and it handles like a sports car. In fact, it handles so much like a sports car that I would rather take the Tesla on twisty back roads than my old BMW, and demolishing back roads was my favorite thing to do in that car.


The 70D is the base all-wheel drive model, and it still hits 60mph in 5 seconds. If you go further up into the performance models, you can get to 60mph in under 3 seconds, which is legit supercar acceleration; in an actual supercar, that kind of speed could cost you ten times the price of a Tesla.

There are almost no moving parts. There’s no turbo lag. There’s no “drive by wire” lag where the ECM decodes the pressure on my gas pedal into an appropriate response to the fuel injector. There are no gear changes, no clutch gasp. It’s just press and gofast. It doesn’t matter whether you’re doing 5mph or 50mph, if you push down on the accelerator (can’t call it a gas pedal) you’ll get a metric shit-ton of torque, even if you’re going uphill.

This is no golf cart.


I don’t even know where to start with the technology. The car comes standard with all the bells and whistles that are extras for mainstream sedans or SUVs like lane keeping warnings, blind spot warnings, etc. It also comes with a massive 17″ screen in the dash that can be configured however you like, and can feed you real-time traffic from the (free for 4 years) 4G LTE connection built into the vehicle.

One of the upgrades is auto pilot. This is not just adaptive cruise, this is take-your-hands-off-the-wheel auto pilot where you can let the vehicle drive for you on the highway or locate and automatically park in parking spots. It’s got a camera that reads speed limit signs on the side of the road.

There’s an app that lets you remotely monitor the charge status and location of the car, and you can remotely control the climate, and not just that half-assed vent starter you can do with GM and Chrysler vehicles.

The door handles detect your presence and push out of the vehicle when you’re nearby. The nav system is aware of Tesla’s network of superchargers and places with destination chargers and automatically routes you through them on long haul drives.

There is no car on the market that has more advanced, more insanely cool technology than the Tesla.


The price is brutal. Honestly, most people gasp when they hear how much these things cost. Fully decked out, the base 70D can run you in the mid $80,000 range. But, if you think long term about the cost, it gets a little (tiny) bit easier to swallow. You qualify for a $7,500 federal tax credit for buying an EV, there might be a state tax credit, and you need to do the math and determine how much money you’ll save in gas every year and subtract that from the monthly payment on the car. For some people, that price difference can be huge.

Still, the sticker price is a whopper, and this car is definitely not within the price range of the average household. That said, I suspect the Model 3 ($35,000) will destroy the marketplace in 2017 when it ships.


The bottom line is the Tesla is a big family sedan that you can stuff all the kids and their stuff into, and you get to drive it like a sports car with roller-coaster grade acceleration. The fact that you get to do all that without putting a drop of gasoline in it is just icing on the cake.

In my opinion, this is the best of all worlds. You get to have your cake and eat it too (for a price). If you can afford a car like this, you’d be doing yourself a disservice not to get behind the wheel of one for a test drive. It is a life-altering experience.

Creating a Microservice in Go

A while back (like, quite a while) I started playing with Go and then unfortunate circumstances kept me from continuing my exploration. My recent exposure to super brilliant people has rekindled some of my desire to explore new languages. Since I’ve been knee-deep in microservices recently, I decided to try and build a microservice in Go.

There are a number of libraries available to Go developers that help with this kind of thing (including Martini). But, I want to see what it looks like to build the most minimal microservice possible. Turns out, that’s pretty damn minimal:

package main

import (
	"encoding/json"
	"net/http"
)

func main() {
	http.HandleFunc("/zombie/", zombie)
	http.ListenAndServe(":8080", nil)
}

type ZombieThing struct {
	Text string
	Name string
	Age  int
}

func zombie(w http.ResponseWriter, r *http.Request) {
	zomb := ZombieThing{"Watch out for this guy!", "Bob Zombie", 12}
	b, err := json.Marshal(zomb)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Write(b)
}

So now I just type go run service.go and the service is running. Now I can just curl the service:

$ curl http://localhost:8080/zombie/
 {"Text":"Watch out for this guy!","Name":"Bob Zombie","Age":12}

If you’re wondering why I used upper-case member names for the JSON object being returned, it’s because I’m lazy. Go treats any identifier that starts with a lowercase letter as private, or unexported, so if the ZombieThing struct had lowercase field names, nothing would be visible to the JSON encoder and the service would return {}.

So, basically we have yet another example in a long list of examples proving that microservices are a language-agnostic architecture and you can build them in pretty much whatever language and framework that suits your needs.

Securing a Spring Boot Microservice

In my most recent blog post, I decided to try and explore microservices with a popular framework that I had never used before – Spring Boot. ‘Hello world’ samples are all well and good, but they rarely ever give you a good idea of what it’s going to be like to use that framework or language in production.

One production concern that just about every microservice has is security. It was because of this that I decided to try and secure the zombie service I made in the last blog post. The first step was pretty easy: adding the Spring Boot security starter to my build.gradle file:

compile("org.springframework.boot:spring-boot-starter-security")
If you’ve ever implemented security with other Java frameworks, you probably expect another hour or so of rigging up things, configuration, and defining custom filter classes. Like any good opinionated framework, Spring Boot takes the most accepted patterns and turns them into reasonable defaults. As a result, my application is now already secured using basic HTTP authentication.

To prove it, I try and hit the previous zombie resource:

$ curl http://localhost:8080/zombies/12
{"timestamp":1440592533449,"status":401,"error":"Unauthorized","message":"Full authentication is required to access this resource","path":"/zombies/12"}

When I look at the new startup log after adding the security starter dependency, I notice a number of new things, like default filters being added. I also see the following line of trace:

Using default security password: c611a795-ce2a-4f24-97e3-a886b31586e7

I happened to read somewhere in the documentation that the default security username is user. So, I can now use basic auth to hit the same zombie URL, and this time I will get results:

$ curl -u user:c611a795-ce2a-4f24-97e3-a886b31586e7 http://localhost:8080/zombies/12

Let’s assume for a moment that I don’t want a GUID as a password, nor do I want to have to read the application logs to find the password. There is a way to override the default username and randomly generated password using an application.properties file. However, properties files are a big no-no if you’re planning on deploying to the cloud, so a better way to do it is environment variables, which map onto Spring Boot’s security.user.name and security.user.password properties (the values below are just examples):

export SECURITY_USER_NAME=kevin
export SECURITY_USER_PASSWORD=zombies
Now when I run the application, the default credentials for basic auth will be pulled from the environment variables.

Finally, let’s say I want to have more than one user, and I might want to have security roles, but I’m not quite ready to make the commitment to having a fully persistent user backing store. I can create a security configurer like the one below and the magic happens (code adapted from public Spring docs and Greg Turnquist’s “Learning Spring Boot” book):

package demo;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableGlobalMethodSecurity(securedEnabled = true)
public class DemoSecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Autowired
    public void configureAuth(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
            .withUser("kevin").password("zombies").roles("USER").and()
            .withUser("root").password("root").roles("USER", "ADMIN");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .antMatchers(HttpMethod.GET, "/zombies").permitAll()
            .anyRequest().authenticated()
            .and()
            .httpBasic();
    }
}

With this configuration, I’ve made it so the /zombies URL is publicly accessible, but /zombies/(id) is secured and requires the basic credentials to belong to either the kevin user or the root user.

If you’ve read this blog, then you know I’ve dabbled with just about every framework and language around, including many ways of securing applications and services. So far, Spring Boot seems super easy and like a breath of fresh air compared to the tedious, complicated ways of securing services I’ve played with before.

Some of you may shoot me for this, but I think it might even be easier than creating auth filters for the Play Framework. Only time will tell if that assertion holds true.

Creating a Microservice with Spring Boot

It’s no secret that I’m a big fan of microservices. I have blogged about creating a microservice with Akka, and I’m an avid follower of all things service-oriented. This weekend I decided that I would try and see why people are so excited about Spring Boot, and, as a foot in the door to Spring Boot, I would build a microservice.

The first issue I encountered was a lot of conflicting advice on where to get started. For an opinionated framework, it felt awkward that so many people had so many recommendations just to get into the Hello World phase. You can download the Spring CLI, or you can use the Spring Boot starter service online to create a starter project. You can also choose to have your project built by Gradle or Maven.

Since I’m on a Mac, I made sure my homebrew installation was up to date and just fired off:

brew install gvm

I did this so I could have gvm manage my springboot installations. I used gvm to install spring boot as follows:

gvm install springboot

If you want you can have homebrew install springboot directly.

The next step is to create a new, empty Spring Boot project. You can do this by hitting up the Spring Initializr (http://start.spring.io) or you can use the Spring Boot CLI to create your stub (this still uses the Spring Initializr service under the covers).

$ spring init --build=gradle HelloService
Using service at https://start.spring.io
Project extracted to '/Users/khoffman/Code/SpringBoot/HelloService'

This creates a new application in a directory called HelloService. There is a DemoApplication class in the demo package that is decorated with the @SpringBootApplication annotation. Without going into too much detail (mostly because I don’t know much detail), this annotation tells Spring to enable automatic configuration based on discovered dependencies and tells it to automatically scan for components to satisfy DI requirements.

Next, I need to make sure that the project has access to the right annotations and components to let me rig up a basic REST controller, so I’ll add the web starter dependency to my build.gradle file in the dependencies section:

compile("org.springframework.boot:spring-boot-starter-web")
Now I can create a new file called ZombieController.java in src/main/java/demo/controller:

package demo.controller;

import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMapping;

@RestController
public class ZombieController {
  @RequestMapping("/zombies")
  public String getZombies() {
    return "Goodbye, cruel world";
  }
}
With no additional work or wiring up, I can now do a gradle build in the root of my application directory and then I can execute the application (the web server comes embedded, which is one of the reasons why it’s on my list of good candidates for microservice building):

java -jar build/libs/demo-0.0.1-SNAPSHOT.jar

Now hitting http://localhost:8080/zombies will return the string “Goodbye, cruel world”. This is all well and good, but I don’t think it goes far enough for a sample. Nobody builds microservices that return raw strings, they build microservices that return actual data, usually in the form of JSON.

First, let’s build a Zombie model object using some Jackson JSON annotations:

@JsonAutoDetect(getterVisibility = JsonAutoDetect.Visibility.NONE)
@JsonIgnoreProperties(ignoreUnknown = true)
public class Zombie {

  @JsonProperty
  private String name;

  @JsonProperty
  private int age;

  public Zombie(String name, int age) {
    this.name = name;
    this.age = age;
  }

  public String getName() {
    return name;
  }

  public int getAge() {
    return age;
  }
}

And now I can add a new method to my controller that returns an individual Zombie, and takes care of JSON serialization for me based on my preferences defined on the class:

  @RequestMapping("/zombies/{id}")
  public @ResponseBody Zombie getZombie(@PathVariable("id") int id) {
    return new Zombie("Bob", id);
  }

Now I can rebuild my application with gradle build (or I can install a gradle wrapper via gradle wrapper and then invoke ./gradlew build) and then run it again. Once it has compiled and it’s running again, I can hit the following URL with curl: http://localhost:8080/zombies/12

And I will get the following JSON reply:

{"name":"Bob","age":12}
And that’s basically it, at least for the simplest hello world sample of a Spring Boot service. Ordinarily, you wouldn’t return values directly from inside the controller, instead the controllers usually delegate to “auto wired” services, which perform the real work. But, for the purposes of my sample, I decided it was okay to leave the code in the controller.

So, what’s my conclusion? Well, writing a single REST method that returns fake data is by no means a way to judge an entire framework. However, if you’ve been doing RESTful services in Java and have not been using Spring Boot, then this is likely a super refreshing change of pace. I’ll likely keep poking around with it so I can get a better idea for how it behaves in a real production environment.

Distributed Transactions in a Cloud-Native, Microservice World

I have been going back and forth between whether I want to do my blogging from WordPress or from Medium. My latest blog post, Distributed Transactions in a Cloud-Native, Microservice World, is over on Medium.

See the blog post on Medium here.

If you have thoughts on whether you’d like me to continue using Medium, or using this blog (Word Press), or both, please let me know in the comments section below.

Real-Time Selfie Stick Destruction Monitoring Console

I recently stumbled upon a library that made me happy in so many ways. First, you can use this library to quickly create all kinds of dashboards that can be used for a number of purposes, not the least of which is real-time monitoring of systems or system data. This library is blessed-contrib, and since it uses ASCII/ANSI art to make these dashboards available in the terminal window, it brings back so many great memories of the “good old days”, I had to try it out.

Every time I play with some new library or technology, I need to come up with some hypothetical sample use case. In this case, I wondered what it would look like if I could monitor the real-time per-capita density of people wielding selfie-sticks. This would guide my decision as to where to deploy a selfie-stick-interfering EMP device.

This is what such a dashboard would look like:

Selfie Stick Density Monitoring

I would be lying if I said that looking at this didn’t make me want to fire up my copy of War Games and play a game of Global Thermonuclear War with Joshua.

And here’s the ridiculously simple JavaScript code, executed by Node.js:

var blessed = require('blessed')
  , contrib = require('blessed-contrib') // the original example used require('../') from inside the repo
  , screen = blessed.screen()
  , map = contrib.map({label: 'Selfie Stick Destruction Console'})

map.addMarker({"lon" : "-79.0000", "lat" : "37.5000", color: "red", char: "X" })
map.addMarker({"lon" : "13.25", "lat": "52.30", color: "red", char: "X" })
map.addMarker({"lon" : "72.82", "lat": "18.96", color: "yellow", char: "X" })
map.addMarker({"lon" : "103.85", "lat": "1.3", color: "white", char: "X" })
map.addMarker({"lon" : "116.4", "lat" : "39.93", color: "red", char: "X" })
map.addMarker({"lon" : "-0.1", "lat" : "51.52", color: "red", char: "X" })
map.addMarker({"lon" : "-118.41", "lat" : "34.11", color: "red", char: "X" })

screen.append(map)
screen.render()
This just scratches the surface, but the big lesson learned here is this: I love the internet, because it brings me things like this. I will now absolutely use this to create a terminal-based monitoring tool for the next application I build, regardless of domain or complexity 🙂

Tracking the Zombie Apocalypse with the MEAN Stack – Pt I

When some hardware folks find a new device, they ask, “Will it blend?” When I find a new programming language, stack, or platform, I ask, “Can it help me survive the impending zombie apocalypse?”

Today I started working with the MEAN (MongoDB, Express, AngularJS, Node.js) stack. Much like LAMP (Linux, Apache, MySQL, PHP) was “back in the day”, developers using one or more of these technologies naturally gravitate to this particular combination. This combination happens so often that several folks have already done some work toward providing a useful abstraction layer on top of this stack.

In my case, I started working with mean.io's stack, which aims to provide a rails-y command-line based interface to building MEAN web applications in full-stack JavaScript.

I won’t go through the details of installing and configuring the Mean IO tools – their website does a pretty good job of getting you set up. Before I get to my code sample, here’s a quick recap of the components of the MEAN stack:

  • MongoDB – This is a document-oriented database that works really well as an all-purpose NoSQL store, but it also has some really powerful capabilities that make it a good choice for even the strictest performance requirements.
  • Express – This is a very fast, lightweight web application framework that sits on top of Node.js
  • AngularJS – This is an extremely powerful, very popular framework for building “application in a page” type client-side code for web applications.
  • Node.js – This is a low-level framework that allows you to run JavaScript on the server. On its own, Node.js is not a web application server, it is just the framework that enables such servers.

Now on to my sample. I want to build a web application that will help me monitor the zombie outbreaks that will inevitably start happening all over the world. This sounds like an ideal, simple CRUD example I can use to test out the MEAN stack.

First, I created a new application using the mean command line, then I created a new package (reusable sub-module of a web application, mean.io actually has a store-like interface for sharing these packages) called apocalypse with the command line “mean package apocalypse”.

The package has two really important directories: public and server. The public directory is the client-side directory, where all your AngularJS code and HTML templates will go. The server directory is the server-side directory which is processed by Node and Express.

This blog post will go through the server side and then in the next post, I’ll go through the client side.

In the server directory, there are a number of other directories, including controllers, models, routes, and tests. I’m going to leave tests out and start with a model. Since this is a server-side model, we’re actually talking about an object that interacts with MongoDB via mongoose.

Let’s take a look at my OutbreakReport model, which represents an instance of a report of a zombie outbreak – a description, a geocoordinate, and a count of the number of observed zombies.

'use strict';

/**
 * Module dependencies.
 */
var mongoose = require('mongoose'),
  Schema = mongoose.Schema;

/**
 * Outbreak Report Schema
 */
var OutbreakReportSchema = new Schema({
  created: {
    type: Date,
    default: Date.now
  },
  title: {
    type: String,
    required: true,
    trim: true
  },
  zombieCount: {
    type: Number,
    default: 0
  },
  latitude: Number,
  longitude: Number,
  user: {
    type: Schema.ObjectId,
    ref: 'User'
  },
  updated: {
    type: Array
  }
});

/**
 * Validations
 */
OutbreakReportSchema.path('title').validate(function(title) {
  return !!title;
}, 'Title cannot be blank');

/**
 * Statics
 */
OutbreakReportSchema.statics.load = function(id, cb) {
  this.findOne({
    _id: id
  }).populate('user', 'name username').exec(cb);
};

mongoose.model('OutbreakReport', OutbreakReportSchema);

The schema is a JavaScript object that represents a mongoose schema type, which will be used to constrain the objects that go into and come out of MongoDB. If this code looks like the articles sample that comes with the mean.io scaffolding, that’s not a coincidence. I’m diving into a new technology, so I’m reusing as much sample code as I can to avoid making stupid mistakes.
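The title validation in that model relies on JavaScript truthiness: mongoose applies `trim: true` first, then the custom validator’s `!!title` check, so an empty or all-whitespace title fails. A quick standalone sketch of that logic (plain JS, no mongoose required; `isValidTitle` is my own illustrative name):

```javascript
// Mirrors the schema's behavior: trim first (what `trim: true` does),
// then reject falsy results (what the `!!title` validator does).
function isValidTitle(title) {
  if (typeof title !== 'string') return false;
  var trimmed = title.trim();
  return !!trimmed; // '' is falsy, so blank titles are rejected
}
```

So `isValidTitle('random mob')` passes while `isValidTitle('   ')` does not.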

Now that I have a model, I need a controller to sit on top of it. This controller is run on the server, and serves as the plumbing that supports the RESTful API for dealing with outbreak reports programmatically. It can either supply a client application with a zombie outbreak backend, or (as you’ll see in the next post), it can also supply an AngularJS front-end with all the data interaction necessary to manipulate this domain.

'use strict';

/**
 * Module dependencies.
 */
var mongoose = require('mongoose'),
  OutbreakReport = mongoose.model('OutbreakReport'),
  _ = require('lodash');

module.exports = function(OutbreakReports) {
  return {
    /**
     * Find outbreak report by id
     */
    outbreakreport: function(req, res, next, id) {
      OutbreakReport.load(id, function(err, report) {
        if (err) return next(err);
        if (!report) return next(new Error('Failed to load outbreak report ' + id));
        req.outbreakreport = report;
        next();
      });
    },
    /**
     * Create an outbreak report
     */
    create: function(req, res) {
      var report = new OutbreakReport(req.body);
      report.user = req.user;
      report.save(function(err) {
        if (err) {
          return res.status(500).json({
            error: 'Cannot save the outbreak report'
          });
        }
        OutbreakReports.events.publish('create', {
          description: req.user.name + ' created ' + req.body.title + ' outbreak report.'
        });
        res.json(report);
      });
    },
    /**
     * Update an outbreak report
     */
    update: function(req, res) {
      var report = req.outbreakreport;
      report = _.extend(report, req.body);
      report.save(function(err) {
        if (err) {
          return res.status(500).json({
            error: 'Cannot update the outbreak report'
          });
        }
        OutbreakReports.events.publish('update', {
          description: req.user.name + ' updated ' + req.body.title + ' outbreak report.'
        });
        res.json(report);
      });
    },
    /**
     * Delete an outbreak report
     */
    destroy: function(req, res) {
      var report = req.outbreakreport;
      report.remove(function(err) {
        if (err) {
          return res.status(500).json({
            error: 'Cannot delete the report'
          });
        }
        OutbreakReports.events.publish('remove', {
          description: req.user.name + ' deleted ' + report.title + ' outbreak report.'
        });
        res.json(report);
      });
    },
    /**
     * Show an outbreak report
     */
    show: function(req, res) {
      OutbreakReports.events.publish('view', {
        description: req.user.name + ' read ' + req.outbreakreport.title + ' outbreak report.'
      });
      res.json(req.outbreakreport);
    },
    /**
     * List of Reports
     */
    all: function(req, res) {
      OutbreakReport.find().sort('-created').populate('user', 'name username').exec(function(err, reports) {
        if (err) {
          return res.status(500).json({
            error: 'Cannot list the outbreak reports'
          });
        }
        res.json(reports);
      });
    }
  };
};

Those of you who have spent any amount of time building web applications, regardless of platform, should start to recognize what’s happening here. We’ve got a controller to invoke the right code to load or manipulate domain objects (in this case, OutbreakReports), and we’ve got a domain object that is linked to a MongoDB backing store. Now we need to define the routes, which define the RESTful API used to interact with OutbreakReports:

'use strict';

/* jshint -W098 */
// The Package is passed automatically as first parameter
module.exports = function(OutbreakReports, app, auth, database) {

  var reports = require('../controllers/outbreakreports')(OutbreakReports);

  app.route('/api/outbreakreports')
    .get(reports.all)
    .post(auth.requiresLogin, reports.create);

  app.route('/api/outbreakreports/:reportId')
    .get(auth.isMongoId, reports.show)
    .put(auth.isMongoId, auth.requiresLogin, reports.update)
    .delete(auth.isMongoId, auth.requiresLogin, reports.destroy);

  // Finish with setting up the reportId param
  app.param('reportId', reports.outbreakreport);
};

Now, if I’ve done everything right, I should be able to issue a curl against http://localhost:3000/api/outbreakreports and get some data (I went into my MongoDB instance and manually created the outbreak reports collection and an initial document since I don’t yet have a GUI).

vertex5:zombies kevin$ curl http://localhost:3000/api/outbreakreports
[{"_id":"5573098f83b289c40d15c229","title":"random mob","latitude":41.8596,"longitude":72.6791,"user":{"name":"Kevin Hoffman","username":"kotancode","_id":"5572cdae31b741c910b11c15"},"updated":[],"zombieCount":3,"created":"2015-06-06T10:56:03.803Z"}]

From start to finish, from empty directory to working prototype, I think it took me 20 minutes, and most of that time was just the effort of typing things in. This is, of course, the value add of scaffolding to begin with. I’m quite certain it would have taken me a lot longer to do anything more complicated, but I am really pleased with how smoothly everything ran the first time.

In the next post, I’ll cover how to create the AngularJS GUI to consume the outbreak reports model, which will include creating a service, an Angular controller, HTML, etc. Stay tuned!

Someone Needs to Teach McDonalds about Queueing Theory

I handed over my $1 bill to pay for my remarkably cheap fast food coffee (I had too much blood in my caffeine stream, judge me later). In response, I got a receipt and an hourglass timer. My first reaction was, “WTF is this?” My second reaction was to ask aloud, “What is this?” (thankfully I had enough caffeine to filter my language that early in the morning).

It turns out that, at least in my neighborhood, McDonald’s has instituted a “60 seconds or it’s free” policy. If the hourglass empties between the time the first cashier took my money and the time I receive my coffee (or whatever I ordered), then apparently I get a free sandwich.

At first glance this seems like a good idea, but not to me. I’ve had far too much experience with queues (both the human and the computer kind) to think this is going to be anything other than a total train wreck. I ease up to the food window and, lo and behold, my gimmicky little plastic hourglass is empty.

I exchange the empty hourglass for my coffee, and then the cashier goes about her business. I explain that my hourglass was empty, and it takes them 30 seconds to decide what to do with me. After figuring out that my presence has basically caused an exception to normal processing, they tell me to pull forward. I then wait four full minutes to receive my little paper business card telling me that I am owed a free sandwich.

Those of you who have dealt with optimizing back-end systems for throughput, especially systems with message passing and queueing, will immediately recognize huge problems with the design of this promotion. First and foremost is that they have enforced a maximum latency requirement of 60 seconds (+ the amount of time it took me to get to the first cashier) on my order. If the customer perceived latency (remember the internal clock started ticking before the customer clock) exceeds 60 seconds, then McDonalds loses between 40 cents and a buck or so, depending on the sandwich the customer claims upon the next visit.

Unfortunately, that’s not the real problem, and this is where most people horribly misunderstand queueing theory, both in the real world and in pipeline processing for computers. The real problem is that exceptions in the processing pipeline have a direct impact on the processing of items further back in the backlog. In other words, the fact that it took them 30 seconds to figure out what to do with me likely emptied the hourglass of the customer behind me, and the customer behind them is in a good position to get free goodies as well.
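To see how badly a single exception propagates, here’s a toy single-server FIFO simulation (the service times are my own made-up numbers, not McDonald’s data): five customers who each need 20 seconds, except one 50-second exception in the middle.

```javascript
// Toy FIFO queue with one server: each customer's latency is the sum of
// all service times ahead of them plus their own (head-of-line blocking).
function simulateQueue(serviceTimes) {
  var clock = 0;
  return serviceTimes.map(function(t) {
    clock += t;   // the server can't start you until everyone ahead is done
    return clock; // total time in system for this customer
  });
}

// Customers at 20s each, with a 50s "what do we do with this guy" exception.
var latencies = simulateQueue([20, 20, 50, 20, 20]);
// Everyone behind the exception now blows past the 60-second promise,
// even though their own orders were perfectly normal.
```

One slow item in position three pushes three customers past the threshold, which is exactly the cascade described above.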

Next, they decide to push the queue exceptions to an overflow queue (the “please pull ahead and wait for us at the curb” routine). This is another spot where lots of people screw up queue implementations. In the case of McDonalds, the size of that overflow queue is 1, possibly 2 if you’re at a big place. Otherwise, you get dumped into the parking lot, which also has a finite number of spaces.

In server software, if the overflow queue itself overflows, the best-case scenario is that all work grinds to a halt and the main feeder queue builds and builds and builds, all waiting for a single item to be processed at the head of the queue. In the real world, you end up with a pile of angry customers waiting in line.
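The failure mode is easy to sketch. In this toy dispatcher (function name, threshold, and item times are all my own invention), slow items get shunted to a bounded overflow queue; once that fills, slow items block the main line again, exactly like the one-car curb spot at the drive-thru.

```javascript
// Route items whose processing time exceeds a threshold to a bounded
// overflow queue; when the overflow is full, the slow item stalls the main line.
function dispatch(items, threshold, overflowCapacity) {
  var main = [], overflow = [], blocked = [];
  items.forEach(function(item) {
    if (item.time <= threshold) {
      main.push(item);              // normal processing
    } else if (overflow.length < overflowCapacity) {
      overflow.push(item);          // "pull ahead and wait at the curb"
    } else {
      blocked.push(item);           // overflow full: main queue stalls anyway
    }
  });
  return { main: main, overflow: overflow, blocked: blocked };
}

// A 90s and a 120s item, but only one overflow slot.
var result = dispatch([{time: 20}, {time: 90}, {time: 20}, {time: 120}], 60, 1);
```

The second slow item has nowhere to go, which is the "just as screwed as you were when you started" case.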

Now, someone at McDonald’s was thinking, because the “free stuff” timer starts at window 1 and ends at window 2 when you get your food. This means that, at any time, regardless of the size of the queue, there are a maximum of 2 people (given the average distance between drive-thru windows) with running hourglass timers and potentially able to take advantage of the free sandwich offer. This basically means that, no matter how crappy that line gets, McDonald’s financial exposure will be limited to a small subset of the people in the queue. Even more interesting is that statistically, every time someone uses a “free thing” coupon, they spend more than they otherwise would have. So, even if McDonalds gives out a metric crap-ton of free sandwich coupons, they will likely come out ahead anyway, dollar-wise.

But that does nothing for the customers stuck in the queue, most of whom are stuck precisely because this giveaway is increasing queue processing time, and their delay will not be rewarded, because the food prep actually gets ahead when the drive-thru line backs up and slows down.

So, now that you’ve actually made it through the entire blog post, reading the verbal equivalent of me bloviating about something that seems to have no relevance to computer programming… what was the point?

For developers building backend processing pipelines:

  • Optimize your exception processing plan. When everything is working properly, you might be able to process a hojillion messages per second, but when a single item clogs up your queue and tanks your throughput, that’s when managers come storming into the office with flamethrowers.
  • Spend time planning for poison pills and queue cloggers. Sometimes, processing a particular item can crash the processor. You should be able to quickly recover from this, identify the poison pill, and isolate it so you don’t process it again (at least not in your main pipeline). Similarly, if throughput is crucial (and it usually is), then if an item is going to take an inordinately long time to process, put it in another queue to allow the main one to drain … and remember that if your overflow queue overflows, you’re just as screwed as you were when you started.
  • Perceived performance is just as important as clocked performance. McDonald’s is doing a little sleight of hand with an hourglass gimmick to get you to focus on the smallest time interval in the entire process, while the food processing started much earlier. If you can’t get your single pipeline to go any faster, maybe splitting it into multiple parallel pipelines will offer the customer (or client, client app, etc.) better perceived performance.
  • Identify and mitigate choke points. If you’re doing throughput and latency analysis on your processing pipeline (you are, aren’t you?) then you should be able to easily identify the point at which the pipeline takes the longest. This is an ideal spot to fan out and perform that task in parallel, use map/reduce, or potentially decide to fork the pipeline. For McDonalds, the slowest activity is actually taking your order, so many places actually fork that pipeline and can take 2 orders in parallel.
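The last two bullets reduce to a little arithmetic: a serial pipeline’s throughput is governed by its slowest stage, and fanning that stage out over N workers divides its effective time by N, up to the next bottleneck. A sketch with made-up drive-thru stage times (order 40s, pay 15s, hand over food 20s):

```javascript
// Throughput of a serial pipeline is 1 / (slowest stage time).
function bottleneck(stageTimes) {
  return Math.max.apply(null, stageTimes);
}

// Fan the slowest stage out across n parallel workers and
// return the pipeline's new throughput (items per second).
function throughputAfterFanOut(stageTimes, workers) {
  var slow = bottleneck(stageTimes);
  var fanned = stageTimes.map(function(t) {
    return t === slow ? t / workers : t;
  });
  return 1 / bottleneck(fanned);
}

// Order-taking (40s) is the choke point; two order-takers cut it to 20s,
// at which point handing over the food (20s) becomes the new bottleneck.
var tp = throughputAfterFanOut([40, 15, 20], 2);
```

This is why the two-lane ordering fork helps right up until some other stage becomes the limit.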

Congratulations if you made it all the way to the end. Now you can see how even stupid things like waiting for my damn coffee can get me fired up about queueing theory and optimizing back-end server processing pipelines.