Kotan Code 枯淡コード

In search of simple, elegant code


Tag: soa

Complex JSON Handling in Go

In a recent blog post, I showed a fairly naive and contrived microservice example – the equivalent of a RESTful “hello world”. While this example may be helpful in getting some people started, it doesn’t really represent the real world. Yesterday, while spilling my Go language newbsauce all over my colleagues, I ran into an interesting problem that happens more often than you might think.

In most samples involving de-serializing a JSON response from some server, you usually see a fixed schema. In other words, you know ahead of time the expected shape of the reply, even if that means you expect some of the fields to be missing or nullable. A common pattern, however, is for an API to return some kind of generic response object as a wrapper around an optional payload.

Let’s assume you’ve got an API that exposes multiple URLs that allow you to provision different kinds of resources. Every time you POST to a resource collection, you get back some generic JSON wrapper that includes two fields: status and data. The status field indicates whether your operation succeeded, and, if it did succeed, the data field will contain information about the thing you provisioned. Since the API is responsible for different kinds of resources, the schema of the data field may vary.

While this approach makes the job of the server/API developer easier and provides a very consistent way of getting at the “outer” information, it generally requires clients to make two passes at de-serialization, or unmarshaling.

First, let’s take a look at what a wrapper pattern might look like as JSON-annotated Go structs:

// GenericServerResponse - wrapper for all responses from apocalypse server.
type GenericServerResponse struct {
      //Status returns a string indicating success or failure
      Status string `json:"status"`
      //Data holds the payload of the response
      Data interface{} `json:"data,omitempty"`
}

// ZombieApocalypseOutpost - one possibility for contents of the Data field
// in a generic server response.
type ZombieApocalypseOutpost struct {
    Name        string    `json:"name"`
    Latitude    float32   `json:"latitude"`
    Longitude   float32   `json:"longitude"`
    Status      string    `json:"status"`
}

While this example shows one possibility for the contents of the Data field, this pattern usually makes sense when there are multiple possibilities. For example, we could have one resource that provisions zombie apocalypse outposts, and another resource that provisions field artillery, where the struct inside the Data field would be something like a FieldArtillery struct, but the outer wrapper stays the same regardless.

First, let’s take a look at how the server might create a payload like this in a method rigged up to an HTTP handler function (I show how to set up HTTP handlers in my previous blog post):

func outpostProvisioner(w http.ResponseWriter, r *http.Request) {
    // check r.Method for POST
    outpost := ZombieApocalypseOutpost{Name: "Outpost Alpha", Status: "Safe",
      Latitude: 37.323011, Longitude: -122.032252}
    response := GenericServerResponse{Status: "success", Data: outpost}
    b2, err := json.Marshal(response)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    w.Write(b2)
}

Note that on the server side we can usually get away with a single marshaling pass because we’re the ones deciding the contents of the Data field, so we know the structure we want. The client, on the other hand, likely needs to perform some conditional logic to figure out what schema applies to the wrapped payload.

You can’t use runtime typecasting tricks in Go because it doesn’t do runtime typecasting the way languages like Java or C# do. Defining the Data field as interface{} looks like declaring it as Object on the surface, but a type assertion from interface{} straight to our struct won’t work here: json.Unmarshal stores a JSON object inside an interface{} as a map[string]interface{}, not as our struct type.

What I can do is take advantage of the fact that we already have marshal/unmarshal capabilities in Go. What follows may not be the best, most idiomatic way to do it, but it seems to work out nicely.

  URL := "http://localhost:8080/outposts/"

  r, _ := http.Get(URL)
  defer r.Body.Close()
  response, _ := ioutil.ReadAll(r.Body)

  var responseMessage GenericServerResponse
  err := json.Unmarshal(response, &responseMessage)
  if err != nil {
      log.Fatal(err)
  }

  // Second pass: marshal the generic Data field back to bytes, then
  // unmarshal those bytes into the concrete struct we expect.
  var outpost ZombieApocalypseOutpost
  b, _ := json.Marshal(responseMessage.Data)
  err = json.Unmarshal(b, &outpost)


First, we grab the raw payload from the server (which is just hosting the method I listed above). Once we’ve got the raw payload, we can unmarshal the outer or wrapper message (the GenericServerResponse JSON-annotated struct). At this point, the Data field is of type interface{} which really means we know nothing about it. Internally one can assume that the JSON unmarshalling code probably produced some kind of map, but I don’t want to deal with the low-level stuff. So, I marshal the Data field which gives me a byte array. Conveniently, a byte array is exactly what the unmarshal method expects. So I produce a byte array and then decode that byte array into the ZombieApocalypseOutpost struct.

This may seem like a little bit of added complexity on the client side, but there is actually a valuable tradeoff here. We gain the advantage of dealing with virtually all of an API’s resources using the same wrapper message, which greatly simplifies our client code. Only when we actually need the data inside the wrapper do we need to crack it open and perform the second pass, which can also be hidden behind a simple library.
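Here is a sketch of what that “simple library” might look like. DecodeData is a helper name I made up, and the struct definitions are trimmed versions of the ones above; it just packages the two-pass trick into one call:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type GenericServerResponse struct {
	Status string      `json:"status"`
	Data   interface{} `json:"data,omitempty"`
}

type ZombieApocalypseOutpost struct {
	Name   string `json:"name"`
	Status string `json:"status"`
}

// DecodeData performs the second unmarshaling pass: marshal the generic
// Data field back to bytes, then unmarshal those bytes into out, which
// should be a pointer to the concrete struct the caller expects.
func DecodeData(resp GenericServerResponse, out interface{}) error {
	b, err := json.Marshal(resp.Data)
	if err != nil {
		return err
	}
	return json.Unmarshal(b, out)
}

func main() {
	// Simulate what the first unmarshal pass leaves in Data: a map.
	resp := GenericServerResponse{
		Status: "success",
		Data:   map[string]interface{}{"name": "Outpost Alpha", "status": "Safe"},
	}
	var outpost ZombieApocalypseOutpost
	if err := DecodeData(resp, &outpost); err != nil {
		panic(err)
	}
	fmt.Println(outpost.Name) // Outpost Alpha
}
```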

I used to cringe every time I saw this pattern because I’m not a fan of black box data. If the inside of the Data field can vary in schema, then you are at the mercy of the API provider to insulate you from changes to the inner data. However, you’re no more a potential victim of breaking changes using this pattern than you would be without the wrapper.

So, in short, if you trust the API provider using this wrapper pattern, it can actually be a very convenient way to standardize on a microservice communication protocol.

Creating a Microservice in Go

A while back (like, quite a while) I started playing with Go and then unfortunate circumstances kept me from continuing my exploration. My recent exposure to super brilliant people has rekindled some of my desire to explore new languages. Since I’ve been knee-deep in microservices recently, I decided to try and build a microservice in Go.

There are a number of libraries available to Go developers that help with this kind of thing (including Martini). But, I want to see what it looks like to build the most minimal microservice possible. Turns out, that’s pretty damn minimal:

package main

import (
  "encoding/json"
  "net/http"
)

func main() {
  http.HandleFunc("/zombie/", zombie)
  http.ListenAndServe(":8080", nil)
}

type ZombieThing struct {
  Text string
  Name string
  Age  int
}

func zombie(w http.ResponseWriter, r *http.Request) {
  zomb := ZombieThing{"Watch out for this guy!", "Bob Zombie", 12}
  b, err := json.Marshal(zomb)
  if err != nil {
    http.Error(w, err.Error(), http.StatusInternalServerError)
    return
  }
  w.Write(b)
}

So now I just type go run service.go and the service is running. Now I can just curl the service:

$ curl http://localhost:8080/zombie/
 {"Text":"Watch out for this guy!","Name":"Bob Zombie","Age":12}

If you’re wondering why I used upper-case member names for the JSON object being returned, it’s because I’m lazy. Go considers an identifier that starts with a lowercase letter to be private, or non-exported, so if the ZombieThing struct had lowercase field names, nothing would be exported and the service would return {}.

So, basically we have yet another example in a long list of examples proving that microservices are a language-agnostic architecture and you can build them in pretty much whatever language and framework that suits your needs.

Securing a Spring Boot Microservice

In my most recent blog post, I decided to try and explore microservices with a popular framework that I had never used before – Spring Boot. ‘Hello world’ samples are all well and good, but they rarely ever give you a good idea of what it’s going to be like to use that framework or language in production.

One production concern that just about every microservice has is security. It was because of this that I decided to try and secure the zombie service I made in the last blog post. The first step was pretty easy, adding the following line to the dependencies section of my build.gradle file:

compile("org.springframework.boot:spring-boot-starter-security")
If you’ve ever implemented security with other Java frameworks, you probably expect another hour or so of rigging up things, configuration, and defining custom filter classes. Like any good opinionated framework, Spring Boot takes the most accepted patterns and turns them into reasonable defaults. As a result, my application is now already secured using basic HTTP authentication.

To prove it, I try and hit the previous zombie resource:

$ curl http://localhost:8080/zombies/12
{"timestamp":1440592533449,"status":401,"error":"Unauthorized","message":"Full authentication is required to access this resource","path":"/zombies/12"}

When I look at the new startup log after adding the security starter dependency, I notice a number of new things, like default filters being added. I also see the following line of trace:

Using default security password: c611a795-ce2a-4f24-97e3-a886b31586e7

I happened to read somewhere in the documentation that the default security username is user. So, I can now use basic auth to hit the same zombie URL, and this time I will get results:

$ curl -u user:c611a795-ce2a-4f24-97e3-a886b31586e7 http://localhost:8080/zombies/12

Let’s assume for a moment that I don’t want a GUID as a password, nor do I want to have to read the application logs to find the password. There is a way to override the default username and randomly generated password using an application.properties file. However, properties files are a big no-no if you’re planning on deploying to the cloud, so a better way to do it would be environment variables (these map to the security.user.name and security.user.password properties):

export SECURITY_USER_NAME=kevin
export SECURITY_USER_PASSWORD=zombies
Now when I run the application, the default credentials for basic auth will be pulled from the environment variables.

Finally, let’s say I want to have more than one user, and I might want to have security roles, but I’m not quite ready to make the commitment to having a fully persistent user backing store. I can create a security configurer like the one below and the magic happens (code adapted from public Spring docs and Greg Turnquist’s “Learning Spring Boot” book):

package demo;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableGlobalMethodSecurity(securedEnabled = true)
public class DemoSecurityConfiguration extends WebSecurityConfigurerAdapter {
  @Autowired
  public void configureAuth(AuthenticationManagerBuilder auth) throws Exception {
    auth.inMemoryAuthentication()
        .withUser("kevin").password("zombies").roles("USER").and()
        .withUser("root").password("root").roles("USER", "ADMIN");
  }

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
        .antMatchers(HttpMethod.GET, "/zombies").permitAll()
        .anyRequest().authenticated()
        .and().httpBasic();
  }
}
With this configuration, I’ve made it so the /zombies URL is publicly accessible, but /zombies/(id) is secured and requires the basic credentials to belong to either the kevin user or the root user.

If you’ve read this blog, then you know I’ve dabbled with just about every framework and language around, including many ways of securing applications and services. So far, Spring Boot seems super easy and like a breath of fresh air compared to the tedious, complicated ways of securing services I’ve played with before.

Some of you may shoot me for this, but I think it might even be easier than creating auth filters for Play Framework. Only time will tell if that assertion holds true.

Creating a Microservice with Spring Boot

It’s no secret that I’m a big fan of microservices. I have blogged about creating a microservice with Akka, and I’m an avid follower of all things service-oriented. This weekend I decided that I would try and see why people are so excited about Spring Boot, and, as a foot in the door to Spring Boot, I would build a microservice.

The first issue I encountered was a lot of conflicting advice on where to get started. For an opinionated framework, it felt awkward that so many people had so many recommendations just to get into the Hello World phase. You can download the Spring CLI, or you can use the Spring Boot starter service online to create a starter project. You can also choose to have your project built by Gradle or Maven.

Since I’m on a Mac, I made sure my homebrew installation was up to date and just fired off:

brew install gvm

I did this so I could have gvm manage my Spring Boot installations. I used gvm to install Spring Boot as follows:

gvm install springboot

If you want you can have homebrew install springboot directly.

The next step is to create a new, empty Spring Boot project. You can do this by hitting up the Spring Initializr  (http://start.spring.io) or you can use the spring boot CLI to create your stub (this still uses the Spring Initializr service under the covers).

$ spring init --build=gradle HelloService
Using service at https://start.spring.io
Project extracted to '/Users/khoffman/Code/SpringBoot/HelloService'

This creates a new application in a directory called HelloService. There is a DemoApplication class in the demo package that is decorated with the @SpringBootApplication annotation. Without going into too much detail (mostly because I don’t know much detail), this annotation tells Spring to enable automatic configuration based on discovered dependencies and tells it to automatically scan for components to satisfy DI requirements.

Next, I need to make sure that the project has access to the right annotations and components to let me rig up a basic REST controller, so I’ll add the following dependency to my build.gradle file in the dependencies section:

compile("org.springframework.boot:spring-boot-starter-web")
Now I can create a new file called ZombieController.java in src/main/java/demo/controller:

package demo.controller;

import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMapping;

@RestController
public class ZombieController {
  @RequestMapping("/zombies")
  public String getZombies() {
    return "Goodbye, cruel world";
  }
}

With no additional work or wiring up, I can now do a gradle build in the root of my application directory and then I can execute the application (the web server comes embedded, which is one of the reasons why it’s on my list of good candidates for microservice building):

java -jar build/libs/demo-0.0.1-SNAPSHOT.jar

Now hitting http://localhost:8080/zombies will return the string “Goodbye, cruel world”. This is all well and good, but I don’t think it goes far enough for a sample. Nobody builds microservices that return raw strings; they build microservices that return actual data, usually in the form of JSON.

First, let’s build a Zombie model object using some Jackson JSON annotations:

@JsonAutoDetect(getterVisibility = JsonAutoDetect.Visibility.NONE)
@JsonIgnoreProperties(ignoreUnknown = true)
public class Zombie {

 @JsonProperty
 private String name;

 @JsonProperty
 private int age;

 public Zombie(String name, int age) {
   this.name = name;
   this.age = age;
 }

 public String getName() {
   return name;
 }

 public int getAge() {
   return age;
 }
}

And now I can add a new method to my controller that returns an individual Zombie, and takes care of JSON serialization for me based on my preferences defined on the class:

 @RequestMapping("/zombies/{id}")
 public @ResponseBody Zombie getZombie(@PathVariable("id") int id) {
   return new Zombie("Bob", id);
 }

Now I can rebuild my application with gradle build (or I can install a gradle wrapper via gradle wrapper and then invoke ./gradlew build) and then run it again. Once it has compiled and it’s running again, I can hit the following URL with curl: http://localhost:8080/zombies/12

And I will get the following JSON reply:

{"name":"Bob","age":12}
And that’s basically it, at least for the simplest hello world sample of a Spring Boot service. Ordinarily, you wouldn’t return values directly from inside the controller; instead, controllers usually delegate to “auto wired” services, which perform the real work. But, for the purposes of my sample, I decided it was okay to leave the code in the controller.

So, what’s my conclusion? Well, writing a single REST method that returns fake data is by no means a way to judge an entire framework. However, if you’ve been doing RESTful services in Java and have not been using Spring Boot, then this is likely a super refreshing change of pace. I’ll likely keep poking around with it so I can get a better idea for how it behaves in a real production environment.

Distributed Transactions in a Cloud-Native, Microservice World

I have been going back and forth between whether I want to do my blogging from WordPress or from Medium. My latest blog post, Distributed Transactions in a Cloud-Native, Microservice World, is over on Medium.

See the blog post on Medium here.

If you have thoughts on whether you’d like me to continue using Medium, or this blog (WordPress), or both, please let me know in the comments section below.

Someone Needs to Teach McDonalds about Queueing Theory

I handed over my $1 bill to pay for my remarkably cheap fast food coffee (I had too much blood in my caffeine stream, judge me later). In response, I got a receipt and an hourglass timer. My first reaction was, “WTF is this?” My second reaction was to ask aloud, “What is this?” (thankfully I had enough caffeine to filter my language that early in the morning).

It turns out that, at least in my neighborhood, McDonald’s has instituted a 60 seconds or it’s free policy. If the hourglass empties between the time the first cashier took my money and the time I receive my coffee (or whatever I ordered), then apparently I get a free sandwich.

This seems like a good idea at first, but not to me. I’ve had far too much experience with queues (both of the human and computer type) to think this is going to be anything other than a total train wreck. I ease up to the food window and, lo and behold, my gimmicky little plastic hourglass is empty.

I exchange my coffee for the empty hourglass, and then the cashier goes about her business. I explain that my hourglass was empty, and it takes them 30 seconds to decide what to do with me. After figuring out that my presence has basically caused an exception to normal processing, they tell me to pull forward. I then wait four full minutes to receive my little paper business card telling me that I am owed a free sandwich.

Those of you who have dealt with optimizing back-end systems for throughput, especially systems with message passing and queueing, will immediately recognize huge problems with the design of this promotion. First and foremost is that they have enforced a maximum latency requirement of 60 seconds (+ the amount of time it took me to get to the first cashier) on my order. If the customer perceived latency (remember the internal clock started ticking before the customer clock) exceeds 60 seconds, then McDonalds loses between 40 cents and a buck or so, depending on the sandwich the customer claims upon the next visit.

Unfortunately, that’s not the real problem, and that’s where most people start to horribly misunderstand queueing theory both in the real world and in pipeline processing for computers. The real problem is that exceptions in the processing pipeline have a direct impact on the processing of items further in the backlog. In other words, the fact that it took them 30 seconds to figure out what to do with me likely emptied the hourglass of the customer behind me, and the customer behind them is in a good position to get free goodies as well.

Next, they decide to push the queue exceptions to an overflow queue (the “please pull ahead and wait for us at the curb” routine). This is another spot where lots of people screw up queue implementations. In the case of McDonalds, the size of that overflow queue is 1, possibly 2 if you’re at a big place. Otherwise, you get dumped into the parking lot, which also has a finite number of spaces.

In server software, if the overflow queue overflows, the best case scenario is that all work grinds to a halt and the main feeder queue builds and builds and builds, all waiting for a single item to be processed at the head of the queue. In the real world, you end up with a pile of angry customers waiting in line.

Now, someone at McDonald’s was thinking: the “free stuff” timer starts at window 1 and ends at window 2, when you get your food. This means that, at any time, regardless of the size of the queue, there are at most 2 people (given the average distance between drive-thru windows) with running hourglass timers who could potentially take advantage of the free sandwich offer. So no matter how crappy that line gets, McDonald’s financial exposure is limited to a small subset of the people in the queue. Even more interesting is that, statistically, every time someone redeems a “free thing” coupon, they spend more than they otherwise would have. So, even if McDonald’s gives out a metric crap-ton of free sandwich coupons, they will likely come out ahead anyway, dollar-wise.

But, that doesn’t say anything for the customers stuck in the queue, where most of them are stuck in line because this giveaway is actually increasing queue processing time, and their delay will not be rewarded because the food prep actually gets ahead when the drive-thru line queues up and slows down.

So, now that you’ve actually made it through the entire blog post, reading the verbal equivalent of me bloviating about something that seems to have no relevance to computer programming… what was the point?

For developers building backend processing pipelines:

  • Optimize your exception processing plan. When everything is working properly, you might be able to process a hojillion messages per second, but when a single item clogs up your queue and tanks your throughput, that’s when managers come storming into the office with flamethrowers.
  • Spend time planning for poison pills and queue cloggers. Sometimes, processing a particular item can crash the processor. You should be able to quickly recover from this, identify the poison pill, and isolate it so you don’t process it again (at least not in your main pipeline). Similarly, if throughput is crucial (and it usually is), then if an item is going to take an inordinately long time to process, put it in another queue to allow the main one to drain … and remember that if your overflow queue overflows, you’re just as screwed as you were when you started.
  • Perceived performance is just as important as clocked performance. McDonald’s is doing a little sleight of hand with an hourglass gimmick to get you to focus on the smallest time interval in the entire process, while the food processing started much earlier. If you can’t get your single pipeline to go any faster, maybe splitting into multiple parallel pipelines will offer the customer (or client, client app, etc.) better perceived performance?
  • Identify and mitigate choke points. If you’re doing throughput and latency analysis on your processing pipeline (you are, aren’t you?) then you should be able to easily identify the point at which the pipeline takes the longest. This is an ideal spot to fan out and perform that task in parallel, use map/reduce, or potentially decide to fork the pipeline. For McDonalds, the slowest activity is actually taking your order, so many places actually fork that pipeline and can take 2 orders in parallel.
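The main-queue/overflow-queue idea from the bullets above can be sketched with buffered channels. This is purely illustrative (the function name, channel sizes, and labels are all invented): divert to the overflow instead of blocking the pipeline, and when even the overflow is full, you have no choice but to reject or drop work.

```go
package main

import "fmt"

// dispatch tries the main queue first, then the small overflow queue,
// and reports what happened to the item. Non-blocking sends (select with
// default) keep a full queue from stalling the caller.
func dispatch(main chan string, overflow chan string, item string) string {
	select {
	case main <- item:
		return "queued"
	default:
		select {
		case overflow <- item:
			return "overflowed"
		default:
			return "rejected" // both queues full: the parking lot is full too
		}
	}
}

func main() {
	mainQ := make(chan string, 2)     // the drive-thru line
	overflowQ := make(chan string, 1) // the "pull ahead and wait" spot

	// No consumer is draining the queues, so capacity runs out quickly.
	for _, order := range []string{"coffee", "burger", "fries", "shake"} {
		fmt.Println(order, "->", dispatch(mainQ, overflowQ, order))
	}
}
```

Running this prints queued, queued, overflowed, rejected: once the tiny overflow buffer fills, the only options left are the ugly ones, which is exactly the failure mode described above.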

Congratulations if you made it all the way to the end. Now you can see how even stupid things like waiting for my damn coffee can get me fired up about queueing theory and optimizing back-end server processing pipelines.


Consuming a REST (micro) Service with Akka HTTP

In a recent blog post, I talked about how we could quickly and easily pull in all the bootstrapping necessary to fire up an HTTP server and create an Akka HTTP micro service. In this blog post, I’m going to walk you through using the same Akka HTTP library to consume a service.

First, let’s set up a flow that starts with an HttpRequest and finishes with an HttpResponse:

lazy val zombieConnectionFlow: Flow[HttpRequest, HttpResponse, Any] =
    Http().outgoingConnection("localhost", 9001)

This sets up a connection flow from the client (which can be a full app, or in many cases, another micro service) pointing at http://localhost:9001. Note that you can add many more options to the Http() builder syntax to set up things like authentication, SSL, etc.

Once we have a flow, we need a way to convert requests into responses:

def zombieRequest(request: HttpRequest): Future[HttpResponse] =
  Source.single(request).via(zombieConnectionFlow).runWith(Sink.head)

This function takes an HttpRequest and uses the Akka HTTP Scala DSL to instruct Akka HTTP how to complete that request. In our case, we create a single-element source from the request and run it through our zombieConnectionFlow.

Now that we’ve got the plumbing (a flow and a request handler) set up, we can create a simple function that will consume our zombie micro service, fetching a single zombie by ID:

def fetchZombieInfo(id: String) : Future[Either[String, Zombie]] = {
 zombieRequest(RequestBuilding.Get(s"/zombies/$id")).flatMap { response =>
   response.status match {
     case OK => Unmarshal(response.entity).to[Zombie].map(Right(_))
     case BadRequest => Future.successful(Left(s"bad request"))
     case _ => Unmarshal(response.entity).to[String].flatMap { entity =>
       val error = s"FAIL - ${response.status}"
       Future.failed(new IOException(error))
     }
   }
 }
}

There are a couple of really important things to note here. The first is that when invoking my zombieRequest method, I am using just the REST API spec – there’s no URL used here – that was abstracted earlier as part of the flow.

The potential here is enormous. With Akka HTTP, we no longer have to string together a bunch of repetitive, imperative-looking statements to manifest a client that consumes another service. Instead, we can declare our intended flow, define a bunch of requests that execute over that flow, and then pattern match the results of invoking those requests.

Finally, with the fetchZombieInfo method created, we can expose that in our own micro service route (assuming, for the sake of example, that we had augmented the other micro service and we weren’t just proxying here):

pathPrefix("zombieproxy") {
  (get & path(Segment)) { zombieId =>
     complete {
       fetchZombieInfo(zombieId).map[ToResponseMarshallable] {
         case Right(zombie) => zombie
         case Left(errorMessage) => BadRequest -> errorMessage
       }
     }
  }
}

While I personally feel that the convention of using the left side of Scala’s Either type is prejudiced against those of us who are left-handed, I can understand where the convention started.

So now if we issue the following curl:

curl http://localhost:9001/zombieproxy/spongebob

It will go through our proxy and consume the micro service we wrote the other day, and return the single zombie we’re looking for:

{
 "name": "spongebob",
 "speed": 12
}

I am a huge fan of a number of HTTP client libraries. Lately, my favorite has been OK HTTP, for a number of reasons. It provides a very clean, simple, easy-to-use syntax for performing relatively complex HTTP operations. Also, it’s available automatically for Android, which allows me to write Java code that consumes services that is identical on my Android and server platforms.

As much as I love that library, Akka HTTP is my new favorite HTTP client. There are a pile of reasons, but I think the biggest is that Akka HTTP provides the smallest impedance mismatch between what I want to do over HTTP and how I write code to accomplish that.

In every project I have created in the past that had a component that consumed other services, the service client code rapidly became bloated, confusing, and covered in scar tissue. I actually feel like putting a little thought into creating a layer atop Akka HTTP to consume services could prevent that and still allow us to use futures and remain within the confines of the “reactive manifesto”.

Creating a Microservice with Akka HTTP and Scala

Lately there’s been a lot of hype and buzz over micro services. The funny thing is, there’s absolutely nothing new about micro services. In fact, back in the good old days when I was trying to convince people that SOA was the way to go, if you designed a service bus the way you were supposed to, you ended up with micro services.

What is new, however, is the technology for creating, deploying, and managing micro services. Today we’ve got Docker, we have mesosphere, marathon, AWS, countless cloud providers, and Typesafe has even given us ConductR.

A microservice is, as its name implies, a very small service. This doesn’t mean that a micro service will have very few lines of code – it means that it should be singular in purpose. Think of a micro service as a service endpoint embodying the Single Responsibility Principle (SRP) that we all love from standard Object-Oriented Design.

To create a micro service you basically need three things:

  • HTTP Server (typically bootstrapped by the application)
  • Route Management (e.g. the REST resource definitions)
  • JSON serialization and de-serialization (although there’s nothing preventing you from using XML if you want)

For the first, we get a truckload of functionality by using Akka HTTP. I haven’t really started delving too much into this (for many reasons, not the least of which is that the documentation is literally littered with fragments that say “todo”), but it looks really powerful. The whole concept of flows and flow management of HTTP interaction sitting atop Akka streams seems like it could dramatically simplify the lives of developers who write services or HTTP clients (or in many cases, services that are also HTTP clients of other services).

For the second, Akka HTTP also provides built-in functionality that lets you bind routing definitions to an HTTP server. In classic Scala fashion, they’ve decided to use pattern matching and “combinator” syntax to let you define your routes.

Here’s a sample snippet where I’ve defined routes that expose GET resources for querying a single zombie or getting the complete list of all zombies in the zombie micro service:

val routes = {
 logRequestResult("zombies-microservice") {
   pathPrefix("zombies") {
     (get & path(Segment)) { zombieId =>
       complete {
         Seq(Zombie(zombieId, 12))
       }
     } ~
     (get) {
       complete {
         Seq(Zombie("bob", 1), Zombie("alfred", 2))
       }
     }
   }
 }
}
You can probably infer from the little sample above how to use the pattern matching, parser combinator syntax to build robust, powerful route definitions. Syntax like “get & path(Segment)” is pretty straightforward – it defines a response to a GET on the resource and extracts out the segment as a lambda parameter, which we then use in the “complete { }” section below (complete indicates a completion of an HTTP server response future).

So now that we have our routes, we can start up an HTTP server using those routes:

object ZombieAkkaHttpMicroservice extends App with Service {
 override implicit val system = ActorSystem()
 override implicit val executor = system.dispatcher
 override implicit val materializer = ActorFlowMaterializer()

 override val config = ConfigFactory.load()
 override val logger = Logging(system, getClass)

 Http().bindAndHandle(routes,
     config.getString("http.interface"), config.getInt("http.port"))
}

The third piece of plumbing that makes this all work is the use of JSON serializers and de-serializers. In my sample, which is extrapolated from an Activator template, I use Spray JSON formatters. I find these a little disappointing, as the “JSON inception” macros you get with Play Framework work almost as if by magic, whereas the Spray formatters require you to tell spray the number of properties that are contained in the case class for which you’re creating a round-trip JSON formatter:

case class Zombie(name: String, speed: Int)

trait Protocols extends DefaultJsonProtocol {
  implicit val zombieFormat = jsonFormat2(Zombie.apply)
}

With those three pieces of the “bootstrap” in place (HTTP server, JSON serialization, REST API/routes definition) you have all the building blocks of a micro service that can be run standalone, or can be bundled and deployed as part of a Docker infrastructure, or deployed and managed using Typesafe’s newest product: ConductR.

With my app running (you can see nearly identical samples in the Activator template library), I can hit the local zombie micro service like so:

curl http://localhost:9001/zombies  (returns all zombies)
curl http://localhost:9001/zombies/spongebob (returns the zombie with ID spongebob)

Regardless of where you sit on the fence of opinion of micro services, it’s good to know that frameworks like Akka and their new (extremely appealing) HTTP libraries are at your disposal and you don’t have to resort to relying on bloated, bulky container models to get the job done.

Building a RESTful Service with Grizzly, Jersey, and Glassfish

I realize that my typical blog post of late usually revolves around creating something with iOS, or building an application for Mac OS X, or wondering what the hell is up with the Windows 8 identity crisis. If you've been following my blog posts for a while, you'll know that there was a very long period of time where I was convinced that there was no easier way to create a RESTful service than with the WCF Web API, which is now an included part of the latest version of the .NET Framework.

I have seen other ways to create RESTful services and one of my favorites was doing so using the Play! Framework, Scala, and Akka. That was a crapload of fun and anytime you can have fun being productive you know it’s a good thing.

Recently I had to wrap a RESTful service around some pre-existing Java libraries and I was shocked to find out how easily I could create a self-launching JAR that embeds its own web server and all the plumbing necessary to host the service, e.g. java -jar myservice.jar. All I needed was Maven and to declare some dependencies on Grizzly and Jersey.

To start with, you just create a driver class, e.g. something that hosts your main() method that will do the work required to launch the web server and discover your RESTful resources.

// Create and fire up an HTTP server
public static void main(String[] args) throws IOException {
    ResourceConfig rc = new PackagesResourceConfig("com.kotancode.samples");
    HttpServer server = GrizzlyServerFactory.createHttpServer("http://localhost:9999", rc);
    System.in.read(); // keep the server alive until a key is pressed
}

Once you have created your HTTP server, all you need to do is create classes in com.kotancode.samples (the same package name we passed to PackagesResourceConfig) that will serve as your RESTful resources. So, to create a RESTful resource that returns a list of all of the zombies within a given zip code, you can just create a resource class (imagine that, calling a resource a resource … which hardly any RESTful frameworks actually do!):

// Zombies Resource
// imports removed for brevity

@Path("/zombies")
public class ZombieResource {

    @GET
    @Path("nearby/{zipcode}")
    public String getZombiesNearby(
        @PathParam("zipcode") String zipCode
    ) {
        // Do calculations
        // Get object, convert to JSON string.
        return zombiesNearbyJsonString;
    }
}
In this simple class, we’re saying that the root of the zombie resource is the path /zombies. The method getZombiesNearby() will be triggered whenever someone issues a GET at the template nearby/{zipcode}. The cool part of this is that the paths are inherited, so even though this method’s path is nearby/{zipcode}, it gets the root path from the resource, e.g. http://(server root)/(resource root)/(method path) or /zombies/nearby/{zipcode}.
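To make that path inheritance concrete, here's a hypothetical standalone sketch (plain Java – this is not Jersey's actual routing code, and the class and method names are my own) showing how a resource root and a method-level template combine, and how a {zipcode} value falls out of a matching request path:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PathTemplateDemo {

    // Combine the resource root with the method-level template,
    // the way the framework inherits paths.
    static String fullTemplate(String resourceRoot, String methodPath) {
        return resourceRoot + "/" + methodPath;
    }

    // Pull the {zipcode} value out of a concrete request path.
    static String extractZipcode(String template, String requestPath) {
        String regex = template.replace("{zipcode}", "([^/]+)");
        Matcher m = Pattern.compile(regex).matcher(requestPath);
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String template = fullTemplate("/zombies", "nearby/{zipcode}");
        System.out.println(template); // /zombies/nearby/{zipcode}
        System.out.println(extractZipcode(template, "/zombies/nearby/90210")); // 90210
    }
}
```

The real framework does considerably more (content negotiation, method dispatch, etc.), but the template-matching idea is the same.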

So the moral of this story, kids, is that there is no one best technology for everything and any developer who introduces themselves with an adjective or a qualifier, e.g. “a .NET developer” or “an iPhone developer” may be suffering from technology myopia. If you keep a closed mind, then you’ll never see all the awesome things the (code) world has to offer. If I limited my thinking to being “a .NET developer” or “an iOS developer” or “a Ruby Developer”, I would miss a metric crap-ton of really good stuff.

p.s. If you want to get this working with Maven, then just add the following dependencies to your POM file and then if you want, drop in a <build> section to uberjar it so everything you need is contained in a single JAR file – no WAR files, no XML configuration files, no Spring, no clutter.
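Something along these lines should do it – treat this as a sketch, since the version numbers here are placeholders (grab the current Jersey 1.x coordinates from Maven Central):

```xml
<dependencies>
    <dependency>
        <groupId>com.sun.jersey</groupId>
        <artifactId>jersey-server</artifactId>
        <version>1.17</version>
    </dependency>
    <dependency>
        <groupId>com.sun.jersey</groupId>
        <artifactId>jersey-grizzly2</artifactId>
        <version>1.17</version>
    </dependency>
</dependencies>
```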


Struggling for Elegant Simplicity in the Enterprise

As you know, the theme of this blog is kotan (枯淡), the Japanese word for the concept of the elegance one finds within simplicity. This is what I strive for every day in my code, my architecture, my designs, and in general, my life.

We often start out with the best of intentions and then, at some point down the road, we take a step back from what we’re doing and we think, “What the hell is going on here?” We started out trying to solve what looked like a relatively simple problem and things have escalated and exploded into a mountain of code, plumbing, infrastructure, and a potential maintenance nightmare.

Take a simple facade problem. Your company has a suite of services that provide functionality within your organization. You’ve actually done a great job at making these services isolated, durable, idempotent, and you’re proud of the work you’ve done. No data access is allowed that doesn’t route through these services. Now your company wants to expose some of this functionality to external customers.

This should be a fairly simple architectural problem to solve, right? You figure you can expose a thin facade layer of services on the outside of your organization which then just make calls against your internal services. Great, fire up the IDE and start coding. The first bump in the road – you can’t expose your internal service contracts to external customers because that forces them to be coupled with your internal versioning scheme. If you add fields that are only useful within your organization, you can’t drop that field as optional into your public schema.

So now you've got public schemas and internal (often referred to as canonical) schemas. Great. No big deal. Let's just create some plumbing that maps the external messaging schema to the internal messaging schema. Now you need to come up with some of the things that most larger SOA implementations need: a service registry, logging, security, and so on. Worse yet, your infrastructure might be doing a bunch of context switching between message-oriented and POxO-oriented (POCO for C#, POJO for Java, etc.) code. Now you might have complicated code-generation processes happening just to make sure that your code objects always line up nicely with your schema-based messages … and so on. Recognize this pattern?

90% of our lines of code are spent not in support of business functionality, but in ceremony. If you were to walk up to this system and look for the code that does actual work that advances company business processes, you'd find relatively few lines of code. The real problem is that when you ask a developer to build a new service in the above scenario, they will spend the same ratio of time as we see in the code – 90% of their time coding ceremony and 10% of their time coding real, valuable logic. This is neither simple nor elegant.

The point of this blog post isn’t to talk about SOA facades and message-passing architectures, that was just an example of how things can rapidly go from simple to incredibly complicated in a matter of hours. The point of this blog post is to shatter the idea that your code has an excuse to be bloated, complicated, difficult to read, and difficult to maintain simply because it’s enterprise code.

The next time you sit down and start hammering out code in advance of your newest, shiniest architecture, ask yourself whether you’re spending the majority of your time writing useful lines of code, or writing ceremony.

Here are some rules that I live by when implementing large-scale systems:

  • If you think you’re wasting your time, you probably are.
  • If you think it looks too complicated, it probably is.
  • If you’re the only one who can work on it because maintenance requires too much context, you’re doing it wrong.
  • If you think there might be an easier way to do it, there probably is.
  • If you're writing code for a feature you don't know anyone is going to use, stop. (You aren't gonna need it.)

There’s no shame in buying a commercial product or utilizing open source libraries so that you can focus on writing code in service of your company’s core competency and leave the ceremony to someone else.

To summarize – enterprise, back-end code has notoriously been big, bloated, complicated, hard to read, and nearly impossible to maintain. Stop the insanity! Enterprise code can be just as beautiful, elegant, easy to read, and easy to maintain as any other type of code – you just need to be vigilant and ruthless. If an architecture doesn’t let you write elegant, simple code (even if you’re the architect!), you need to cut your losses, put it out of its misery, and move on*.

* = We all have to make exceptions to accommodate budget, timelines, and alien-abducted-pod-bosses-from-Mars. Choose your battles wisely.