Kotan Code 枯淡コード

In search of simple, elegant code


Tag: java (page 1 of 3)

Securing a Spring Boot Microservice

In my most recent blog post, I decided to try and explore microservices with a popular framework that I had never used before – Spring Boot. ‘Hello world’ samples are all well and good, but they rarely ever give you a good idea of what it’s going to be like to use that framework or language in production.

One production concern that just about every microservice has is security. It was because of this that I decided to try and secure the zombie service I made in the last blog post. The first step was pretty easy, adding the following line to my build.gradle file:
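Assuming the standard starter coordinates for a Spring Boot 1.x Gradle build (the version is inherited from the Spring Boot Gradle plugin), that line is the security starter:

```gradle
dependencies {
    // pulls in Spring Security and enables HTTP basic auth by default
    compile("org.springframework.boot:spring-boot-starter-security")
}
```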


If you’ve ever implemented security with other Java frameworks, you probably expect another hour or so of rigging up things, configuration, and defining custom filter classes. Like any good opinionated framework, Spring Boot takes the most accepted patterns and turns them into reasonable defaults. As a result, my application is now already secured using basic HTTP authentication.

To prove it, I try and hit the previous zombie resource:

$ curl http://localhost:8080/zombies/12
{"timestamp":1440592533449,"status":401,"error":"Unauthorized","message":"Full authentication is required to access this resource","path":"/zombies/12"}

When I look at the new startup log after adding the security starter dependency, I notice a number of new things, like default filters being added. I also see the following line of trace:

Using default security password: c611a795-ce2a-4f24-97e3-a886b31586e7

I happened to read somewhere in the documentation that the default security username is user. So, I can now use basic auth to hit the same zombie URL, and this time I will get results:

$ curl -u user:c611a795-ce2a-4f24-97e3-a886b31586e7 http://localhost:8080/zombies/12

Let’s assume for a moment that I don’t want a GUID as a password, nor do I want to have to read the application logs to find the password. There is a way to override the default username and randomly generated password using an application.properties file. However, properties files are a big no-no if you’re planning on deploying to the cloud, so a better way to do it would be environment variables:
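Thanks to Spring Boot's relaxed property binding, the `security.user.name` and `security.user.password` properties can be supplied as environment variables; the credential values below are just examples:

```shell
# Relaxed binding maps security.user.name / security.user.password
# to these environment variables (values here are only examples)
export SECURITY_USER_NAME=kevin
export SECURITY_USER_PASSWORD=zombies
```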


Now when I run the application, the default credentials for basic auth will be pulled from the environment variables.

Finally, let’s say I want to have more than one user, and I might want to have security roles, but I’m not quite ready to make the commitment to having a fully persistent user backing store. I can create a security configurer like the one below and the magic happens (code adapted from public Spring docs and Greg Turnquist’s “Learning Spring Boot” book):

package demo;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableGlobalMethodSecurity(securedEnabled = true)
public class DemoSecurityConfiguration extends WebSecurityConfigurerAdapter {

 @Autowired
 public void configureAuth(AuthenticationManagerBuilder auth) throws Exception {
     auth.inMemoryAuthentication()
         .withUser("kevin").password("zombies").roles("USER").and() // kevin's password is illustrative
         .withUser("root").password("root").roles("USER", "ADMIN");
 }

 @Override
 protected void configure(HttpSecurity http) throws Exception {
     http.authorizeRequests()
         .antMatchers(HttpMethod.GET, "/zombies").permitAll()
         .anyRequest().authenticated()
         .and()
         .httpBasic();
 }
}

With this configuration, I’ve made it so the /zombies URL is publicly accessible, but /zombies/(id) is secured and requires the basic credentials to belong to either the kevin user or the root user.

If you’ve read this blog, then you know I’ve dabbled with just about every framework and language around, including many ways of securing applications and services. So far, Spring Boot seems super easy and like a breath of fresh air compared to the tedious, complicated ways of securing services I’ve played with before.

Some of you may shoot me for this, but I think it might even be easier than creating auth filters for the Play Framework. Only time will tell if that assertion holds true.

Creating a Microservice with Spring Boot

It’s no secret that I’m a big fan of microservices. I have blogged about creating a microservice with Akka, and I’m an avid follower of all things service-oriented. This weekend I decided that I would try and see why people are so excited about Spring Boot, and, as a foot in the door to Spring Boot, I would build a microservice.

The first issue I encountered was a lot of conflicting advice on where to get started. For an opinionated framework, it felt awkward that so many people had so many recommendations just to get into the Hello World phase. You can download the Spring CLI, or you can use the Spring Boot starter service online to create a starter project. You can also choose to have your project built by Gradle or Maven.

Since I’m on a Mac, I made sure my homebrew installation was up to date and just fired off:

brew install gvm

I did this so I could have gvm manage my springboot installations. I used gvm to install spring boot as follows:

gvm install springboot

If you want you can have homebrew install springboot directly.

The next step is to create a new, empty Spring Boot project. You can do this by hitting up the Spring Initializr (http://start.spring.io) or you can use the Spring Boot CLI to create your stub (this still uses the Spring Initializr service under the covers).

$ spring init --build=gradle HelloService
Using service at https://start.spring.io
Project extracted to '/Users/khoffman/Code/SpringBoot/HelloService'

This creates a new application in a directory called HelloService. There is a DemoApplication class in the demo package that is decorated with the @SpringBootApplication annotation. Without going into too much detail (mostly because I don’t know much detail), this annotation tells Spring to enable automatic configuration based on discovered dependencies and tells it to automatically scan for components to satisfy DI requirements.
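For reference, the generated class is tiny; assuming the stock Initializr output of that era, it looks something like this:

```java
package demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Generated entry point: @SpringBootApplication combines @Configuration,
// @EnableAutoConfiguration, and @ComponentScan
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```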

Next, I need to make sure that the project has access to the right annotations and components to let me rig up a basic REST controller, so I’ll add the following dependency to my build.gradle file in the dependencies section:
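Assuming the standard starter coordinates, the dependency in question is the web starter, which brings in Spring MVC, Jackson, and the embedded servlet container:

```gradle
dependencies {
    // REST controller annotations, JSON serialization, embedded Tomcat
    compile("org.springframework.boot:spring-boot-starter-web")
}
```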


Now I can create a new file called ZombieController.java in src/main/java/demo/controller:

package demo.controller;

import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMapping;

@RestController
public class ZombieController {

  @RequestMapping("/zombies")
  public String getZombies() {
    return "Goodbye, cruel world";
  }
}

With no additional work or wiring up, I can now do a gradle build in the root of my application directory and then I can execute the application (the web server comes embedded, which is one of the reasons why it’s on my list of good candidates for microservice building):

java -jar build/libs/demo-0.0.1-SNAPSHOT.jar

Now hitting http://localhost:8080/zombies will return the string “Goodbye, cruel world”. This is all well and good, but I don’t think it goes far enough for a sample. Nobody builds microservices that return raw strings; they build microservices that return actual data, usually in the form of JSON.

First, let’s build a Zombie model object using some Jackson JSON annotations:

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

@JsonAutoDetect(getterVisibility = JsonAutoDetect.Visibility.NONE)
@JsonIgnoreProperties(ignoreUnknown = true)
public class Zombie {

 @JsonProperty
 private String name;

 @JsonProperty
 private int age;

 public Zombie(String name, int age) {
   this.name = name;
   this.age = age;
 }

 public String getName() {
   return name;
 }

 public int getAge() {
   return age;
 }
}

And now I can add a new method to my controller that returns an individual Zombie, and takes care of JSON serialization for me based on my preferences defined on the class:

 @RequestMapping("/zombies/{id}")
 public @ResponseBody Zombie getZombie(@PathVariable("id") int id) {
   return new Zombie("Bob", id);
 }

Now I can rebuild my application with gradle build (or I can install a gradle wrapper via gradle wrapper and then invoke ./gradlew build) and then run it again. Once it has compiled and it’s running again, I can hit the following URL with curl: http://localhost:8080/zombies/12

And I will get the following JSON reply:

{"name":"Bob","age":12}
And that’s basically it, at least for the simplest hello world sample of a Spring Boot service. Ordinarily, you wouldn’t return values directly from inside the controller; instead, the controllers usually delegate to “auto wired” services, which perform the real work. But, for the purposes of my sample, I decided it was okay to leave the code in the controller.

So, what’s my conclusion? Well, writing a single REST method that returns fake data is by no means a way to judge an entire framework. However, if you’ve been doing RESTful services in Java and have not been using Spring Boot, then this is likely a super refreshing change of pace. I’ll likely keep poking around with it so I can get a better idea for how it behaves in a real production environment.

Building a RESTful Service with Grizzly, Jersey, and Glassfish

I realize that my typical blog post of late usually revolves around creating something with iOS or building an application for Mac OS X or wondering what the hell is up with the Windows 8 identity crisis. If you’ve been following my blog posts for a while, you’ll know that there was a very long period of time where I was convinced that there was no easier way to create a RESTful service than with the WCF Web Api which is now a basic, included part of the latest version of the .NET Framework.

I have seen other ways to create RESTful services and one of my favorites was doing so using the Play! Framework, Scala, and Akka. That was a crapload of fun and anytime you can have fun being productive you know it’s a good thing.

Recently I had to wrap a RESTful service around some pre-existing Java libraries and I was shocked to find out how easily I could create a self-launching JAR that embeds its own web server and all the plumbing necessary to host the service, e.g. java -jar myservice.jar. All I needed was Maven and to declare some dependencies on Grizzly and Jersey.

To start with, you just create a driver class, e.g. something that hosts your main() method that will do the work required to launch the web server and discover your RESTful resources.

// Create and fire up an HTTP server
ResourceConfig rc = new PackagesResourceConfig("com.kotancode.samples");
HttpServer server = GrizzlyServerFactory.createHttpServer("http://localhost:9999", rc);

Once you have created your HTTP server, all you need to do is create classes in the com.kotancode.samples package (the same package name we specified in the ResourceConfig constructor) that will serve as your RESTful resources. So, to create a RESTful resource that returns a list of all of the zombies within a given area code you can just create a resource class (imagine that, calling a resource a resource … which hardly any RESTful frameworks actually do!):

// Zombies Resource
// imports removed for brevity

@Path("/zombies")
public class ZombieResource {

    @GET
    @Path("nearby/{zipcode}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getZombiesNearby(
        @PathParam("zipcode") String zipCode
    ) {

       // Do calculations
       // Get object, convert to JSON string.
       String zombiesNearbyJsonString = "[]"; // placeholder result
       return zombiesNearbyJsonString;
    }
}

In this simple class, we’re saying that the root of the zombie resource is the path /zombies. The method getZombiesNearby() will be triggered whenever someone issues a GET at the template nearby/{zipcode}. The cool part of this is that the paths are inherited, so even though this method’s path is nearby/{zipcode}, it gets the root path from the resource, e.g. http://(server root)/(resource root)/(method path) or /zombies/nearby/{zipcode}.

So the moral of this story, kids, is that there is no one best technology for everything and any developer who introduces themselves with an adjective or a qualifier, e.g. “a .NET developer” or “an iPhone developer” may be suffering from technology myopia. If you keep a closed mind, then you’ll never see all the awesome things the (code) world has to offer. If I limited my thinking to being “a .NET developer” or “an iOS developer” or “a Ruby Developer”, I would miss a metric crap-ton of really good stuff.

p.s. If you want to get this working with Maven, then just add the following dependencies to your POM file and then if you want, drop in a <build> section to uberjar it so everything you need is contained in a single JAR file – no WAR files, no XML configuration files, no Spring, no clutter.
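Assuming the Jersey 1.x artifacts this post’s code is built against (group IDs and the version number here are assumptions; adjust to whatever is current), the dependencies look something like this:

```xml
<dependencies>
    <!-- Jersey (JAX-RS) server runtime and annotations -->
    <dependency>
        <groupId>com.sun.jersey</groupId>
        <artifactId>jersey-server</artifactId>
        <version>1.12</version>
    </dependency>
    <!-- Grizzly embedded HTTP server glue for Jersey -->
    <dependency>
        <groupId>com.sun.jersey</groupId>
        <artifactId>jersey-grizzly2</artifactId>
        <version>1.12</version>
    </dependency>
</dependencies>
```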


Getting the Android Emulator to Work on the Macbook Pro Retina Display

Recently I posted about my thoughts on the first impression left by the Android SDK using Eclipse and the ADT Plugin for Eclipse (this is the “traditional route” of Android developers). My first impression was less than stellar.

So I wondered if the inherent crappiness in this was due to Windows or just that overall patchwork, cobbled together Rube Goldberg feeling you get when working with certain open source projects above a certain size. So, I pulled out the Macbook Pro (a 2012 retina display model, the retina display will play a key factor in my impression) and went through the process of installing the SDK.

The Android SDK installs just fine on OS X Mountain Lion and Eclipse “Juno” installed just fine as well. I had trouble with the SSL URL for the developer tools that many books and Google themselves give you, and even had trouble just removing the S from HTTP in the URL, having to decipher what the plugin URL was for the non-SSL download. Really, they could make that process a crapload easier. Again, I had this problem both on Windows and Mac, so the suck was cross-platform.

After finally getting all of the various pieces of the puzzle installed on the Mac, I went through the process of creating an Android Virtual Device (AVD). I started the emulator and noticed that it showed up in a big window but was only drawing on a small portion of it. It didn’t seem to respond to mouse-clicks.

I wasted hours of my life “troubleshooting” this issue, checking everything from memory issues to bad APIs to re-installing the SDK. After a while of swearing and pounding the forehead against the desk, I noticed that it actually did respond to mouse-clicks, but in empty areas of the window. A button that was in the bottom of the drawn portion would respond in the bottom of the larger, empty window. A-HA!

I then thought:

I’ll bet the retina display’s annoying scaling crap is actually confusing the emulator.

Turns out I was right. The emulator is using a deprecated, absurdly old piece of code that is unaware of situations when the pixels being drawn are not in the same place as the pixels being displayed. I could either wait for Google to fix this (really Google, this is inexcusable in a non-beta, shipping product. Open source or no, this is bull****) or I could settle for a temporary workaround.

That temporary workaround is a tool I found called SetResX. This tool allows you to set the real resolution, not the annoying “best for retina” and “not so best for retina” user-friendly options you get with the Mountain Lion and Lion settings dialogs. All you need to do is pick any resolution big enough to support a 1:1 pixel-faithful rendering of your emulated screen (e.g. I picked 1680×1050 to support my emulated tablet’s 1080p HD resolution) and you can then run the emulator without the bizarre side-effects.

So, what this highlights in my experience are two things, in order of suckiness:

  1. Shame on Google for releasing non-beta, shipping tools that use deprecated, out-of-date API calls that are incompatible with any Mac retina display.
  2. Shame on Apple for their Macbook Pro Retina display. They release it and without an external tool like SetResX, you can’t actually access all of the pixels the monitor is capable of, you just get scaled and obscured versions. This is a little disappointing, really. It’s my damn computer, don’t pretend to know what’s best for me and hide all of the possible resolutions the card/monitor is capable of from me. Apple used to typically allow developers to get around such “we know what’s best for you” assumptions with standard Unix command-line tools, but there is no such stock tool in this case.

So, if you have a Retina Display Macbook Pro and you want to do some Android development off-device, you’re going to need a tool that futzes with your resolution, or, you can actually hook your laptop up to a non-retina monitor and slide the emulator window onto that monitor and you no longer have an issue (because the non-retina monitor has a 1:1 pixel faithful copy).

In my quest for things that follow the kotan (elegant simplicity) philosophy, my experience with the Android toolset is that it eschews both elegance and simplicity.

Hello Android – Worst First Impression Ever

This evening I thought I would try and install the Android SDK, Eclipse, and the Android Developer Tools (ADT) Plugin for Eclipse. I was feeling adventurous and thought I might see what the other mobile developers do, given that I have written both applications and books for iOS and Windows Phone 7.

First, I installed the SDK which was a fairly painless process. Then I installed Eclipse (Juno), another fairly painless process since installing Eclipse consists entirely of copying the eclipse folder to some location on your hard drive and double-clicking the eclipse.exe file.

Next came the ADT Plugin install, which is the first place where the “first timer” experience started to suck, and it sucked big time. I got the following error message:

Cannot complete the install because one or more required items could not be found.

It showed me that one package was missing, so I de-selected it. I did this until I had no more items left to install. That couldn’t possibly be right, so I had to google to find out that I needed to add the “Juno” release repository to the list of available software update sites before installing the ADT Plugin.

I was really quite upset that the ADT Plugin, as described by the installation instructions on Google’s own web site, doesn’t install as they say it should. I consider that a first impression fail.

Finally the ADT Plugin installed. Now it was time (I had already blown 30 minutes at this point) to create a hello world application.

Step 1 of the New Android Application wizard:

Hello Android - Step 1

Hello Android – Step 1

Next the wizard asked me to supply some basic stuff like the application name and the project name and the package name. Being familiar with Java, I recognized what I should put in the package name, but what’s the difference between project and application name? Is it like a Visual Studio Solution vs. Project? I had no idea.

Hello Android - Step 2

Hello Android – Step 2

It also asked me to choose a Build SDK and a minimum SDK required. This sort of made sense, but only because I had seen similar settings in Xcode in choosing the build SDK versus the minimum required SDK version. There is some context sensitive help here and there are little “info” buttons you can click to get some more help. It feels a bit “closer to the metal” than the iOS new app wizard and waaay lower level than the Windows Phone 7 new app wizard.

Hello Android - Step 3

Hello Android – Step 3

I got a few prompts related to icons, color, foreground and background color, scaling, and a bunch of other stuff where I had no idea what the final impact would be on the end application. This was also fine – I expect to see unfamiliar things in unfamiliar territory. My usual learning process is I’ll get in there, create a hello world app, confuse myself, then work backwards until I’ve figured out what’s really going on under the covers.

Hello Android - Step 4

Hello Android – Step 4

The next step in the wizard asks me about activity name, layout name, navigation type, a hierarchical parent (would it have been so hard to use the word inheritance here??) and the title of the activity. This was also pretty straightforward.

Hello Android - Step 5

Hello Android – Step 5

Next we get to the fun part. The wizard is telling me that I am missing dependencies!!! I hold back my urge to smash a fist into the keyboard and launch into a diatribe on Twitter and/or Facebook about how pissed off I am, and I try to install the dependencies. After five attempts, I still get messages like the one below, indicating it can’t find the dependency file:

Hello Android - Step 6

Hello Android – Step 6

I again have to resort to internet searches (out of spite, I used Bing this time instead of Google) to figure out what the heck was going on. I had to exit Eclipse, then go and launch the Android SDK Manager in Administrator mode, and then download the missing dependencies. This finally worked and the next time I started Eclipse I was able to re-do all of my prior steps and get to the point where it created a new project for me.

By now, my patience is pretty limited so I want instant gratification. I hit the play button up top and start answering questions and then the Android tools bitch that I haven’t created an Android Virtual Device (AVD) yet. I swore a blue streak, the puppy nearby tilting its head and wondering why I was barking so much.

I created an AVD with utterly random values as it asked me for information I certainly did not feel as though I was qualified to give. The AVD failed to start. I created a different one, and this one failed to allocate memory (error code 8 if that means anything to anyone). I then created yet another AVD and it started, but the application wouldn’t launch because … drumroll … it couldn’t allocate enough memory.

After another shut down of the device emulator and a fresh creation of another AVD and a fresh hit of the play button, the application launched (after a nearly four minute wait while the emulator started up and the app installed!!).

Because I’ve been around the block I know that there is more to it than this and I know it gets better from here. However, if I was a newbie to the platform and hadn’t been around as many blocks as I have, I could very easily have been turned off by this first-timer experience and said, “Screw this, I’m gonna go use the iOS SDK.”

Shame on Google for not giving a crap about first impressions. Everyone should.

Using the Visitor Pattern to Compare Object Graphs

Recently I found myself in a position where I needed to write some unit testable code that would perform a ‘diff’ against two fairly complicated object graphs. By object graphs, what I really mean are domain objects (POJOs, POCOs, whatever you want to call them) that have plenty of attributes as well as child objects and collections of child objects.

The goal of the exercise was to traverse the entire object graph of both objects and produce a collection of POJOs that represented the list of all things that changed relative to the source. So, if I changed an attribute, I should get an object that tells me the name of the attribute that changed and the object on which the attribute changed. If I add some object to an object in the destination (or remove it from the source), I should get a result object indicating the name of the new object and the object to which it was added. Conversely, if I add something to the source (or remove it from destination), it should appear in my results as a ‘removed’ object.

There are all kinds of academic papers illustrating the best way to write differencing algorithms using all kinds of math that quite frankly just makes my head hurt. The consumers of this code will be writing HTML that displays differences on a well-known shape. In other words, I’m not doing a blind line-by-line text comparison, I am comparing two things of a known shape, which means my UX can be far more intuitive to the user than something like an SVN compare or a Windiff.

I split this problem up into a couple of chunks. The first chunk was traversal. I wanted a way to guarantee that I would zip through the entire object graph. Again, I know the shape of these objects so I do not need a generic algorithm here. The Visitor pattern seemed ideal for this.

I created a simple interface called ObjectGraphVisitable (not really, I have changed the names to protect the innocent). Each class in my graph will implement this interface which really just defines an accept method. Each object in the hierarchy has an accept method that, in turn, invokes the visit method on the visitor. If you’ve done any Objective-C/Cocoa programming then the Visitor pattern’s form of indirection should already be very familiar to users of the delegate pattern (e.g. the visitor is just a special kind of delegate).

Here’s the ObjectGraphVisitable interface:

public interface ObjectGraphVisitable {
    void accept(GraphVisitor visitor);
}

And here’s the Visitor interface:

public interface GraphVisitor {
    void visitPerson(Person p);
    void visitAddress(Address a);
    void visitRegion(Region r);
    void visitHighScore(HighScore score);
}

At this point we have the interfaces that typically make up the visitor pattern. The idea here is that when the visitor starts at the top level of an object graph, we call accept(this) where this is the visitor instance (you’ll see that next). It is then up to the root-level object in the hierarchy to recursively (or iteratively, whatever you like) invoke accept on the children, which in turn invoke accept on their children, and so on down the line until every object in the graph has called visitXXX on the visitor. We can then create a class that implements the visitor interface called ObjectGraphComparisonVisitor (again, name sanitized, your name should be descriptive and self-documenting) and contains the actual “diff” comparison logic.

Here’s a sample root level domain object that can be visited (accepts visits from a visitor):

public class RootLevelDomainObject implements ObjectGraphVisitable {
// other class stuff here...

    public void accept(GraphVisitor visitor) {
      // announce this node via whichever visitXXX method matches its type...
      visitor.visitPerson(this);
      // ...then recursively hand the visitor to every child
      for (ChildNode child : this.children) {
        child.accept(visitor);
      }
    }
}


The key aspect of this pattern that often takes a little getting used to is that the visitor is passive and not in control of what gets visited. It calls accept on the root node and then sits back while all of its visitXXX() methods are invoked. This feels kind of like SAX XML parsing where instead of actively walking the graph you are passively fed XML nodes as they are parsed.

So now that we have the object graph set up to be traversable, we need to take that traversal and use it for comparison. To do that, we can create a class that implements GraphVisitor. This class, called something like ObjectComparisonVisitor, needs to do three things:

  • Find all objects in the destination graph that do not occur in source
  • Find all attributes that have changed on objects that occur in both source and destination graphs.
  • Find all objects in source graph that do not occur in destination
I won’t show all the code here; I’ll leave that as a fun exercise for the reader. But the basic pseudo-code is as follows:
  1. For each object type in the graph, maintain a hash table that stores the items in source (source is what you visit)
  2. As each object type is visited, throw it into the hash table (note you need some kind of uniqueness test for each of your domain objects… you should have that anyway)
  3. As each object type is visited, check to see if it’s in the destination graph as well. If it is, do an attribute-by-attribute comparison.
  4. When the entire source tree has been visited (your call to accept is single-threaded, so it won’t return until every node has been visited), traverse the “source” hash tables looking for visited items that do not exist in the destination.
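As a concrete (and heavily simplified) sketch of those four steps, here is a self-contained toy version. The Person type, the single-method visitor interface, and the result-string format are all illustrative stand-ins, not the sanitized production types from above:

```java
import java.util.*;

interface Visitor { void visitPerson(Person p); }

class Person {
    final String id;
    String name;
    final List<Person> children = new ArrayList<>();
    Person(String id, String name) { this.id = id; this.name = name; }
    void accept(Visitor v) {
        v.visitPerson(this);                    // visit self first...
        for (Person c : children) c.accept(v);  // ...then recurse into children
    }
}

class ComparisonVisitor implements Visitor {
    private final Map<String, Person> dest = new HashMap<>();
    private final Set<String> seen = new HashSet<>();
    final List<String> results = new ArrayList<>();

    ComparisonVisitor(Person destRoot) {
        // Index the destination graph by id so each visit can look up its twin
        destRoot.accept(p -> dest.put(p.id, p));
    }

    @Override
    public void visitPerson(Person p) {
        seen.add(p.id);                          // steps 1-2: record visited source objects
        Person other = dest.get(p.id);
        if (other == null) {
            results.add("removed: " + p.id);     // in source but not in destination
        } else if (!other.name.equals(p.name)) {
            results.add("changed: " + p.id + ".name");  // step 3: attribute diff
        }
    }

    // Step 4: anything indexed but never visited was added in the destination
    void finish() {
        for (String id : dest.keySet())
            if (!seen.contains(id)) results.add("added: " + id);
    }
}

class VisitorDemo {
    public static void main(String[] args) {
        Person src = new Person("1", "root");
        src.children.add(new Person("2", "Alice"));
        src.children.add(new Person("3", "Bob"));
        Person dst = new Person("1", "root");
        dst.children.add(new Person("2", "Alicia")); // renamed
        dst.children.add(new Person("4", "Carol"));  // added
        ComparisonVisitor cv = new ComparisonVisitor(dst);
        src.accept(cv);   // the visitor passively receives every source node
        cv.finish();
        System.out.println(cv.results); // [changed: 2.name, removed: 3, added: 4]
    }
}
```

Note that the visitor never walks anything itself; it just indexes, listens, and sweeps up the leftovers at the end.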

As each of these tasks is done, you’re progressively building up a collection of ModelComparisonResult objects, which really just contain an enum for the difference type, the name of the object, the name of the attribute, etc.

When all is said and done, the comparison visitor object can provide rich difference results in a format that is both unit testable and amenable to display via command-line or GUI. There are other variations of this that include visiting both the source and the destination trees and then doing a final comparison afterward. This second variation is typically used when you don’t have discrete uniqueness tests on every object in your graph, so you keep track of divergence points in the visitation pattern to detect graph changes while attribute changes are still easy to detect (you could even use Reflection to check for those).

Anyway, this was really the first time I’ve ever found a practical use for the Visitor pattern outside of academics and job interviews, so I thought I would share.

Speaking at ScalaDays 2012 in London

I am honored to have been invited to speak at ScalaDays 2012 in London this year. For a guy who for years was considered a “Microsoft guy” and then for a few years after that considered an “Apple guy”, this is great validation that I’m not a Microsoft guy or an Apple guy or even a Java guy – I am just a guy who spends most of his spare technical resources searching for things that exemplify Kotan – elegant simplicity, and that includes Scala.

The talk I will be giving is about my experience learning Scala and Akka 2.0 through my ScalaMUD project. Here is the description of my talk from the ScalaDays 2012 website:

When learning a new language, one of the first things I do is try and build a game in that language because games are a lot more fun than progressively complicated “hello world” samples. In this experience report, I discuss my learning process and experience, that of a developer who has spent the last 11 years in the Microsoft world. I am exploring Scala by building a simple network text adventure game that uses many language features and Akka 2.0 and encounters surprisingly real-world problems from many other industries, including finance. My focus in learning languages is on creating clean, simple, elegant code and, as always, justifying to others why this language is worth the learning curve.
The code I’ve been writing as part of this experiment is open-sourced and available on GitHub.

Looking forward to meeting a bunch of fellow Scala enthusiasts and being able to present on this compelling language.

ScalaMUD – More Multi-Threaded Command Parsing via Akka

In the last blog post on ScalaMUD, I showed how I was able to take advantage of a third party NLP library so that I could tag player input with the appropriate parts of speech like Verb, Noun, Adjective, etc. My goal here is to eventually use this information to match words in player input sentences with physical objects in the game so that when someone types ‘kill the blue dragon’, I will be able to find the ActorRef for the blue dragon, and find a verb handler for kill, and then initiate combat.

In this blog post, I’ve come one step closer to that goal by enabling the dispatching of commands. As players enter input, that input is sent off to a Commander actor which tags up the input with parts of speech. Once the Commander is finished with it, it wraps the tagged words in an EnrichedCommand message and sends that to the player, which now has the trait Commandable, allowing the player to respond to enriched commands. This has an added benefit of allowing NPCs to be scripted in the future by allowing them to pretend to ‘type’ things like “kill player”, etc.

Eventually rooms will add commands to players when they enter the room to allow them to type the name of the exit (e.g. north, south, up, window) as a verb. Mortals and Wizards alike each have a unique set of commands they need to be able to type and these commands are grouped into “command libraries” (my old alma mater MUD, Genesis, an LPMUD, called them command souls). For example, here’s the first version of the Mortal command lib:

package com.kotancode.scalamud.core.cmd

import com.kotancode.scalamud.core.ServerStats
import com.kotancode.scalamud.core.TextMessage
import com.kotancode.scalamud.Game
import com.kotancode.scalamud.core.Implicits._
import akka.actor._
import akka.routing._

abstract class CommandLibMessage
case object AttachCommandLib extends CommandLibMessage

class MortalCommandLib extends Actor {
	def receive = handleMortalCommands

	def handleMortalCommands: Receive = {
		case cmd: EnrichedCommand if cmd.firstVerb == "who" => handleWho(sender)
		case AttachCommandLib => attachToSender(sender)
	}

	def handleWho(issuer: ActorRef) = {
		println("player " + issuer.name + " typed who.")
		var stringOut = "Players logged in:\n"
		for (p: ActorRef <- Game.server.inventory) {
			stringOut += p.name + "\n"
		}
		issuer ! TextMessage(stringOut)
	}

	def attachToSender(sender: ActorRef) = {
		sender ! AddCommand(Set("who"), self)
	}
}


Those of you who program in iOS or Cocoa might recognize some of the “Delegate Pattern” here… when the command lib attaches itself to something capable of issuing commands (anything that carries the Commandable trait, like a player or NPC), it just passes the implicit sender ActorRef so that actor can now dispatch commands to that lib.

Here’s the code in the Commander object that sends enriched commands to the actor that “typed” (virtually or for real) the command:

package com.kotancode.scalamud.core.cmd

import akka.actor._
import akka.routing._

import com.kotancode.scalamud.core.lang.EnrichedWord
import java.util.ArrayList
import edu.stanford.nlp.ling.Sentence
import edu.stanford.nlp.ling.TaggedWord
import edu.stanford.nlp.ling.HasWord
import edu.stanford.nlp.tagger.maxent.MaxentTagger
import scala.collection.JavaConverters._
import scala.collection.mutable.ListBuffer
import com.kotancode.scalamud.core.cmd._

class Commander extends Actor {
	def receive = {
		case s: String => {
			val words = s.split(" ")
			val wordList = new java.util.ArrayList[String]()
			for (elem <- words) wordList.add(elem)
			val sentence = Sentence.toWordList(wordList)
			val taggedSentence = Commander.tagger.tagSentence(sentence).asScala.toList

			val enrichedWords = new ListBuffer[EnrichedWord]
			for (tw: TaggedWord <- taggedSentence) {
				val ew = EnrichedWord(tw)
				enrichedWords += ew
			}

			sender ! HandleCommand(EnrichedCommand(enrichedWords, sender))
		}
	}
}

object Commander {
	val tagger = new MaxentTagger("models/english-bidirectional-distsim.tagger")
}

The important bit is that the player gets the HandleCommand message, which then goes through a dispatch process in the Commandable trait, and eventually registered verb handlers (like those registered via attached command libraries) get invoked via messages. Here’s the Commandable trait:

package com.kotancode.scalamud.core.cmd

import akka.actor._
import akka.routing._
import scala.collection.mutable.HashMap
import scala.collection.mutable.HashSet

sealed abstract class CommandMessage
case class AddCommand(verbs:Set[String], handlerTarget:ActorRef) extends CommandMessage
case class RemoveCommand(verb:String) extends CommandMessage
case class HandleCommand(command:EnrichedCommand) extends CommandMessage

trait Commandable {

	private val verbHandlers: HashMap[Set[String], ActorRef] = new HashMap[Set[String], ActorRef]

	def handleCommandMessages: akka.actor.Actor.Receive = {
		case AddCommand(verbs, handlerTarget) => {
			verbHandlers.put(verbs, handlerTarget)
		}
		case HandleCommand(cmd) => {
			dispatch(cmd)
		}
	}

	def dispatch(cmd: EnrichedCommand) = {
		println("handled a command " + cmd + ".")
		println("command's first verb: " + cmd.firstVerb)
		val targetHandlers = verbHandlers.filterKeys(key => key.contains(cmd.firstVerb))
		targetHandlers foreach { case (key, value) => value ! cmd }
	}
}

The dispatching actually happens in the targetHandlers foreach … line where the command is sent to every command handler that declared interest in that verb. In our case, we have a mortal command verb who that displays the list of connected users and the wizard command verb uptime that displays the length of time the server app has been running.

The following is sample session output from telnetting to the game:

Welcome to ScalaMUD 1.0

Login: Kevin
Welcome to ScalaMUD, Kevin
Kevin: who
Players logged in:
Kevin
Kevin: uptime
Server has been up for 6 mins 39 secs.

Now that I can log in with multiple players, see who is online, and dispatch commands to handlers as well as differentiate between mortal and wizard abilities, this game is finally starting to feel like the beginnings of a real MUD.

ScalaMUD – Consuming Java from Scala and NLP Tagging

Last night I upgraded ScalaMUD's POM file to point to the recently released Akka 2.0-RC1. I was previously using M3 and was happy to note that all of my Akka 2.0 code continued working just fine without change from M3 to RC1. If the Akka RC is like most other RCs, then there should be no further API changes, only fixes and tightening.

While I had the MUD code open I decided to start working on the problem of accepting player input. Sure, I have a socket reader that accepts text from players but what does one do with this text?

In the old days, I would’ve tokenized the string. By tokenized here I mean just splitting it blindly on spaces. Then I would have considered the first word in the array to be the verb and then dispatched the remaining parameters to some function in the MUD code that knows how to respond to that verb. For example, if I typed kill dragon then I would’ve tagged kill as the verb and dragon as the “rest”. This would have eventually found its way to some kill() method on a player that takes an array of strings as parameters.
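Sketched in Scala (the names here are mine for illustration, not from the MUD code), that old-school approach amounts to something like this:

```scala
object NaiveParser {
	// Old-school parsing: split blindly on whitespace, treat the first
	// token as the verb, and hand the rest off as string parameters.
	def naiveParse(input: String): (String, Array[String]) = {
		val tokens = input.trim.split("\\s+")
		(tokens.head, tokens.tail)
	}

	def main(args: Array[String]): Unit = {
		val (verb, rest) = naiveParse("kill dragon")
		println(verb + " / " + rest.mkString(" "))
	}
}
```

It works for pidgin commands like "kill dragon", but it falls apart the moment a player types a real sentence.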

Thankfully this isn’t the old days.

Instead what I did was declare a Maven dependency on the Stanford NLP (Natural Language Processing) project. To be specific, I wanted to use the Stanford non-linear Parts of Speech tagger. Why should I deal with parsing strings in a dumb way when someone else has spent years creating a powerful, well-trained NLP engine that can tag every sentence my player types with parts of speech?

This way, instead of relying on forcing players to type in pidgin dialects (e.g. kill dragon or cast spell or move north) I can let them type complete English sentences (if they want). I will then tag those sentences with the appropriate parts of speech and infer what they wanted to do from that.

I want this parsing to take place in the background. Once the player’s sentence has been enriched with parts of speech, I want to send the enriched sentence back to whatever typed it so that the command can be dispatched. To do this, I created a new Actor called Commander:

package com.kotancode.scalamud.core

import akka.actor._
import akka.routing._

import com.kotancode.scalamud.core.lang.EnrichedWord
import java.util.ArrayList
import edu.stanford.nlp.ling.Sentence
import edu.stanford.nlp.ling.TaggedWord
import edu.stanford.nlp.ling.HasWord
import edu.stanford.nlp.tagger.maxent.MaxentTagger
import scala.collection.JavaConverters._

class Commander extends Actor {
	def receive = {
		case s: String => {
			val words = s.split(" ")
			val wordList = new java.util.ArrayList[String]()
			for (elem <- words) wordList.add(elem)
			val sentence = Sentence.toWordList(wordList)
			val taggedSentence = Commander.tagger.tagSentence(sentence).asScala.toList

			val enrichedWords = new ArrayList[EnrichedWord]
			for (tw: TaggedWord <- taggedSentence) {
				// println(tw.value + "/" + tw.tag)
				val ew = EnrichedWord(tw)
				enrichedWords.add(ew)
			}
		}
	}
}

object Commander {
	val tagger = new MaxentTagger("models/english-bidirectional-distsim.tagger")
}

At this point I’m just building the array of enriched words and I’m not actually sending the command back to the player (I’ll do that tonight or tomorrow, time permitting .. as always, you can check out the GitHub repo for the latest changes to the MUD). One of the interesting bits here is how I’m using a Java library from Scala. This is usually a pretty painless task but sometimes there are issues. In this case, the Stanford NLP library class Sentence has a bunch of overloads for the toWordList method. Java knows how to pick which overload but Scala doesn’t if I just use type inference and default Scala types. To get it to pick the right toWordList method I had to manually construct an ArrayList[String] because passing an Array[String] doesn’t let Scala know which overload to pick. It’s a little annoying but if I can keep the Scala->Java bridge points like this minimal then it’s not bad.

The flip side of this is that I’m getting back a regular Java array list in response to toWordList, which doesn’t support pretty Scala-native iteration because it doesn’t contain a foreach method, which is the underpinning that supports all the syntactic sugar around iteration. To deal with that, I imported the Java converters package implicits so that I could get the asScala function, which lets me call toList, which gives me a nice Scala list that I can use for easy iteration.
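As a minimal, standalone illustration of that round trip (outside the MUD code), the conversion pattern looks like this:

```scala
import scala.collection.JavaConverters._

object ConversionDemo {
	def main(args: Array[String]): Unit = {
		// A plain Java list, like the ones the Stanford NLP API hands back
		val javaList = new java.util.ArrayList[String]()
		javaList.add("kill")
		javaList.add("dragon")

		// asScala wraps the Java list in a Scala view; toList then yields an
		// immutable Scala List with foreach, map, filter, and friends
		val scalaList = javaList.asScala.toList
		scalaList.foreach(println)
	}
}
```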

Here’s some sample output when I connect to the MUD and enter a sample sentence, which is then tokenized and tagged with parts of speech:

[EnrichedWord: word=attack, tag=VB, pos=Verb]
[EnrichedWord: word=the, tag=DT, pos=DontCare]
[EnrichedWord: word=green, tag=JJ, pos=Adjective]
[EnrichedWord: word=dragon, tag=NN, pos=Noun]
[EnrichedWord: word=with, tag=IN, pos=DontCare]
[EnrichedWord: word=the, tag=DT, pos=DontCare]
[EnrichedWord: word=yellow, tag=JJ, pos=Adjective]
[EnrichedWord: word=sword, tag=NN, pos=Noun]

The real goal here is that I will be taking the tagged nouns in the sentence and scanning through the player’s inventory and the environment in which the player stands for objects which have names that match the nouns and then using adjectives to disambiguate them if collisions occur. That way, when I type “kick blue bottle” I will be able to scan the surroundings for objects called “bottle” and if I find more than one, I’ll only gather up the ones that are blue.
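A rough sketch of how that noun/adjective matching might work (GameObject and everything else here are hypothetical stand-ins, not actual ScalaMUD code):

```scala
// Hypothetical stand-in for an in-game object with a name and descriptors.
case class GameObject(name: String, adjectives: Set[String])

object TargetFinder {
	// Gather everything whose name matches the tagged noun, then use the
	// tagged adjectives (if any were typed) to disambiguate collisions.
	def findTargets(noun: String, adjs: Seq[String], env: Seq[GameObject]): Seq[GameObject] = {
		val byName = env.filter(_.name == noun)
		if (adjs.isEmpty) byName
		else byName.filter(obj => adjs.forall(obj.adjectives.contains))
	}

	def main(args: Array[String]): Unit = {
		val room = Seq(
			GameObject("bottle", Set("blue")),
			GameObject("bottle", Set("green")),
			GameObject("sword", Set("yellow")))
		// "kick blue bottle" should match only the blue bottle
		println(findTargets("bottle", Seq("blue"), room))
	}
}
```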

In case you’re wondering what the EnrichedWord class looks like, which has some helper code that identifies only the parts of speech I care about, here it is:

package com.kotancode.scalamud.core.lang

import edu.stanford.nlp.ling.TaggedWord

sealed abstract class PartOfSpeech
case object Noun extends PartOfSpeech
case object Verb extends PartOfSpeech
case object Adjective extends PartOfSpeech
case object DontCare extends PartOfSpeech

class EnrichedWord(value: String, tag: String, val pos: PartOfSpeech) extends TaggedWord(value, tag) {

	override def toString = "[EnrichedWord: word=" + value + ", tag=" + tag + ", pos=" + pos + "]"
}

object EnrichedWord {
	def apply(hw: TaggedWord) = {
		new EnrichedWord(hw.value, hw.tag, rootTypeOf(hw.tag))
	}

	def rootTypeOf(s: String) = s match {
		case "VB" | "VBD" | "VBG" | "VBN" | "VBP" | "VBZ" => Verb
		case "NN" | "NNS" | "NNP" | "NNPS" => Noun
		case "JJ" | "JJR" | "JJS" => Adjective
		case _ => DontCare
	}
}
Note that the EnrichedWord Scala class inherits from the Stanford TaggedWord Java class.

I really, really love the syntax of the string pattern matcher I use to obtain the POS root (adjective, verb, noun, don't care) from the Penn Treebank tags used by the Stanford NLP POS tagger.

The takeaway I got from this exercise is further reinforcement of my rule to never re-invent the wheel because there are wheel experts out there who have dedicated their lives and careers to building wheels more awesome than I could ever hope to build. Hence I declare a Maven dependency on the NLP library and in a single night, I’ve got a MUD that can intelligently POS-tag player sentences which I can then use to identify potential targets of player commands. In addition, I don’t have to create a pidgin dialect for interacting with the MUD. English works and the MUD should be able to deal with “I kill the dragon with the blue sword because I am the shizzle” with the same ease as “kill dragon with sword”.

Scala Companion Objects

In a single sentence: a Scala companion object is an object declared in the same file and with the same name as a class, with access to all of that class's members, including the private ones.

At first, the power and utility of these companion objects wasn’t really all that clear to me but then I started finding myself using them more and more to the point where I am pretty sure I couldn’t write a Scala app of any decent size without using them.

Let’s take a look at one of the most common uses of a companion object as a place to put factory methods using the apply method:

class BucketOSlime private (val slimeColor: String) {
    // Stuff goes here...
}

object BucketOSlime {
    def apply(color: String): BucketOSlime = new BucketOSlime(color)
}

With this code in place you can now create colorful buckets of slime with a constructor shortcut syntax that lets you leave off the word “new” entirely:

val blueSlime = BucketOSlime("blue")
val whiteSlime = BucketOSlime("white")

The fun (and power) doesn't stop here. You can define multiple apply methods, or a single apply method with conditional logic or pattern matching, to return different concrete subclasses of the same abstract class:

object BucketOSlime {
    def apply(color: String): BucketOSlime = {
        if (color == "blue")
            new BlueBucketOSlime
        else
            new RegularBucketOSlime(color)
    }
}

There are a ton of other ways you can use companion objects. Another common one is to wrap implicits to upgrade or enrich a class (like I’ve done with my MUD recently) so that the implicit doesn’t need to be imported directly.
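Here's a minimal sketch of that last pattern, with made-up names: because Scala searches a type's companion object during implicit resolution, an enrichment defined there works without any import at the call site.

```scala
class Slime(val color: String)

object Slime {
	// Placing the enrichment in the companion object puts it in the
	// implicit scope of Slime, so callers never have to import it.
	implicit class RichSlime(s: Slime) {
		def describe: String = "a puddle of " + s.color + " slime"
	}
}

object EnrichmentDemo {
	def main(args: Array[String]): Unit = {
		val s = new Slime("blue")
		// No import needed; the implicit is found via Slime's companion.
		println(s.describe)
	}
}
```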

The more I use Scala, the more I thoroughly enjoy it. I still utterly despise when I see people overloading symbols everywhere so that Scala’s complexity approaches “write once confuse everywhere” status… but with discipline I still think developers can keep Scala syntax clean, concise, and eminently readable.