Aug 4 14

The Fifth Vertex is Available for Purchase!

by Kevin Hoffman

I realize that this is a blog about computer programming, programming languages, and all things technological. However, I also realize that many of us in IT are also into things like fantasy and science fiction.

I have spent the last several years of my life working on this fantasy book while also attempting to maintain a day job, pay the mortgage, etc. This is not an easy task, and the urge to simply give up the dream of writing fantasy books fought to take hold at every turn. Without the support of my family and friends, I would have given up on this book somewhere around Chapter 1 and never looked back.

So please, if you are at all interested in fantasy (anything from Tolkien to Harry Potter), please check out my book The Fifth Vertex! And tell your friends :)

Aug 2 14

The Fifth Vertex is about to be published!

by Kevin Hoffman
The Fifth Vertex Cover Art

For those of you who have been following the creative half of my schizophrenic personality, I’ve got some big news! I have finished my first fantasy novel and it is about to be published. The cover art is ready, the Kindle file is ready for publishing, I’ve even got some beautiful maps that you can see even before the book is published.

I strongly encourage all of you to take a look at the following links and like the pages so that you get a notification as soon as the book is ready – and so you can all buy 100 copies of it!

My Facebook author page: Kevin Hoffman
The book Facebook page: The Fifth Vertex
The book series Facebook page: The Sigilord Chronicles
My creative-half Twitter account: @kshmusings

Stay tuned for more updates, the book should be published any day now!

Mar 3 14

Creating a Multiplayer Chat System with the Akka Event Bus

by Kevin Hoffman

In my previous blog post, I talked about how to use the subchannel classification system with the Akka Event Bus. In that post, I showed a really simplistic way of determining if a channel is a subclass of another – using startsWith(). That’s a primitive way of doing things and doesn’t really show you the true power of the event bus.

In this blog post, I want to walk you through a slightly more realistic example. Let’s assume that we’re using Akka to build the back-end for a multiplayer game. We know that this game needs to support chatting between players, so we want to create an event bus to handle all of the chat traffic for the game. This should, in theory, give the system some nice flexibility in allowing individual players and player-actor components to subscribe and unsubscribe from different segments of the chat traffic.

We have three different segments that we want to divide traffic up into:

  1. Players – Individual players can receive private chat messages, or tells.
  2. Sectors – The players’ space ships are floating in 3D space in large areas called sectors. It is possible for those players to transmit messages to an entire sector.
  3. Groups – Players can form small groups where from 1 to 8 players band together to share experience as well as their own chat channel.

One of the interesting things to keep in mind is that our chat bus cannot read player state. In order for the chat bus to be really useful, general purpose, and, most importantly, testable, it can’t go reaching into the inside of some player object to figure out which sector object they are in or to which group they belong.

So we’re going to revise our previous sub-classification event bus sample with a new case class called ChatCoordinate that will be used as our classifier (remember in the previous blog post we used simple strings). Here’s the new code for the bus and its messages:

package chatbus

import akka.event.ActorEventBus
import akka.event.LookupClassification
import akka.event.SubchannelClassification
import akka.util.Subclassification
import akka.actor.ActorSystem
import akka.actor.Props
import akka.actor.Actor

/*
 * Chat Bus Coordinates
 * players/XXX - sends a private tell to a player
 * sectors/XXX - sends a tell to an entire sector
 * groups/XXX - sends a tell to members of a player group
 */
object ChatSegments {
	val Player = "players"
	val Sector = "sectors"
	val Group = "groups"
}

case class ChatCoordinate(segment: String, target: Option[String])

sealed trait Chat
case class ChatMessage(source: String, message: String) extends Chat
case class ChatEvent(coord:ChatCoordinate, msg: ChatMessage) extends Chat

class ChatEventBus extends ActorEventBus with SubchannelClassification {
  type Event = ChatEvent
  type Classifier = ChatCoordinate

  protected def classify(event: Event): Classifier = event.coord

  protected def subclassification = new Subclassification[Classifier] {
    def isEqual(x: Classifier, y: Classifier) = (x.segment == y.segment && x.target == y.target)

    /* Should an event classified as x be delivered to a subscriber of y?
       A segment-wide event (x.target == None) reaches every subscriber in
       that segment, so Player/None subsumes Player/Bob – not the reverse. */
    def isSubclass(x: Classifier, y: Classifier) = {
      val res = (x.segment == y.segment && x.target == None)
      //println(s"Subclass check: $x $y: $res")
      res
    }
  }

  protected def publish(event: Event, subscriber: Subscriber): Unit = {
    subscriber ! event.msg
  }
}

This new chat bus gives us some great flexibility. I can send a message to all players by sending a message to the Players segment with a target of None. I can send a message to all groups by sending a message to the Groups segment with a target of None, a specific group by sending to Groups/Some(“GroupId”) and a specific player by sending to Players/Some(“Player Id”).
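The delivery rule itself doesn’t depend on Akka at all. Here is a minimal sketch of the same predicate in Go (my own throwaway types, not part of the bus above – a nil pointer plays the role of Scala’s None):

```go
package main

import "fmt"

// Coordinate mirrors ChatCoordinate: a segment plus an optional target.
// A nil Target plays the role of Scala's None.
type Coordinate struct {
	Segment string
	Target  *string
}

// Delivers reports whether an event published at `event` should reach a
// subscriber listening at `sub`: either the coordinates are equal, or the
// event addresses the whole segment (no target).
func Delivers(event, sub Coordinate) bool {
	if event.Segment != sub.Segment {
		return false
	}
	if event.Target == nil { // broadcast to the entire segment
		return true
	}
	return sub.Target != nil && *event.Target == *sub.Target
}

func main() {
	kevin := "Kevin"
	toKevin := Coordinate{"players", &kevin}
	allPlayers := Coordinate{"players", nil}

	fmt.Println(Delivers(toKevin, toKevin))    // private tell reaches Kevin
	fmt.Println(Delivers(allPlayers, toKevin)) // segment broadcast reaches Kevin too
	fmt.Println(Delivers(toKevin, allPlayers)) // a private tell is not a broadcast
}
```

Note the asymmetry in the last two cases – it is exactly the Player/None-subsumes-Player/Bob behavior the bus relies on for the “no snoopers” guarantee.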

One of the aspects of gameplay that this chat bus supports is a player moving from one sector to another. If a player leaves a sector, then ideally some actor (maybe a Player actor?) would unsubscribe from the chat bus @ Sectors/Some(“Foo”) and then subscribe to the chat bus @ Sectors/Some(“Bar”) if they moved from sector Foo to sector Bar, while always maintaining a persistent subscription to Players/Some(“Player ID”). Likewise, if a player joins a player group, they would subscribe to Groups/Some(“Group ID”) and unsubscribe from that same coordinate when they leave the group.

The system is inherently extensible, so whenever we want to add a new segment (e.g., Race, Alliance, Guild, etc.) we can just drop that in and we’re good to go.

Finally, just so you believe me, here are some ScalaTest specs exercising the chat bus with some fake actors:

package tests

import akka.actor.ActorSystem
import akka.actor.Actor
import akka.actor.Props
import akka.testkit.TestKit
import akka.testkit.TestProbe
import org.scalatest.WordSpecLike
import org.scalatest.Matchers
import org.scalatest.BeforeAndAfterAll
import akka.testkit.ImplicitSender
import scala.concurrent.duration._

import chatbus._

object ChatBusSpec {
	
	val toKevin = ChatEvent(ChatCoordinate(ChatSegments.Player, Some("Kevin")), ChatMessage("system", "This goes just to Kevin"))
	val toBob = ChatEvent(ChatCoordinate(ChatSegments.Player, Some("Bob")), ChatMessage("system", "This goes just to Bob"))
	val toAllPlayers = ChatEvent(ChatCoordinate(ChatSegments.Player,None), ChatMessage("kevin", "This goes to all players"))
	
	val toSectorAlpha = ChatEvent(ChatCoordinate(ChatSegments.Sector,Some("Alpha")), ChatMessage("system", "Sector Alpha is about to explode."))
	val toSectorBeta = ChatEvent(ChatCoordinate(ChatSegments.Sector,Some("Beta")), ChatMessage("system", "Sector Beta is about to explode."))
}

class ChatBusSpec (_system: ActorSystem) extends TestKit(_system) with ImplicitSender
  with WordSpecLike with Matchers with BeforeAndAfterAll {
 
  import ChatBusSpec._

  val theProbe = TestProbe()

  def this() = this(ActorSystem("ChatBusSpec"))

  def getEchoSubscriber() = {
	system.actorOf(Props(new Actor {
	    def receive = {
	      case m:ChatMessage => theProbe.ref ! m
	    }
	}))	
  }
 
  override def afterAll {
    TestKit.shutdownActorSystem(system)
  }

  "A chat event bus" must {
     "send a private message to a player with no snoopers" in {
		val eventBus = new ChatEventBus
		val kevin = getEchoSubscriber()
		val bob = getEchoSubscriber()
		eventBus.subscribe(kevin, ChatCoordinate(ChatSegments.Player, Some("Kevin")))
		eventBus.subscribe(bob, ChatCoordinate(ChatSegments.Player, Some("Bob")))
		eventBus.publish(toKevin)
		theProbe.expectMsg(500 millis, toKevin.msg)
		theProbe.expectNoMsg(500 millis)
     }	

     "send a single message to all players" in {
		val eventBus = new ChatEventBus
		val kevin = getEchoSubscriber()
		val bob = getEchoSubscriber()
		eventBus.subscribe(kevin, ChatCoordinate(ChatSegments.Player, Some("Kevin")))
		eventBus.subscribe(bob, ChatCoordinate(ChatSegments.Player, Some("Bob")))
		eventBus.publish(toAllPlayers)
		// Each player should receive one of these, so the probe should bounce it back twice.
		theProbe.expectMsg(500 millis, toAllPlayers.msg)
		theProbe.expectMsg(500 millis, toAllPlayers.msg)
     }

     "send to all players in a sector should only deliver once per player" in {
		val eventBus = new ChatEventBus
		val kevin = getEchoSubscriber()
		val bob = getEchoSubscriber()
	
		eventBus.subscribe(kevin, ChatCoordinate(ChatSegments.Player, Some("Kevin")))
		eventBus.subscribe(kevin, ChatCoordinate(ChatSegments.Sector, Some("Alpha")))
		eventBus.publish(toSectorAlpha)
		theProbe.expectMsg(500 millis, toSectorAlpha.msg)
		theProbe.expectNoMsg(500 millis)
     }

     "support a player moving from one sector to another" in {
		val eventBus = new ChatEventBus
		val kevin = getEchoSubscriber()
		
		eventBus.subscribe(kevin, ChatCoordinate(ChatSegments.Player, Some("Kevin")))
		eventBus.subscribe(kevin, ChatCoordinate(ChatSegments.Sector, Some("Alpha")))
		eventBus.publish(toKevin)
		theProbe.expectMsg(500 millis, toKevin.msg)
		eventBus.publish(toSectorAlpha)
		theProbe.expectMsg(500 millis, toSectorAlpha.msg)
		eventBus.unsubscribe(kevin, ChatCoordinate(ChatSegments.Sector, Some("Alpha")))
		eventBus.subscribe(kevin, ChatCoordinate(ChatSegments.Sector, Some("Beta")))
		eventBus.publish(toSectorBeta)
		theProbe.expectMsg(500 millis, toSectorBeta.msg)
     }
  }
	
}

Feb 12 14

Segmenting Traffic on the Akka Event Bus with Subchannel Classification

by Kevin Hoffman

In my previous post, I provided a simple “hello world” type application that utilizes the Akka Event Bus. In that sample, I showed how you can use a simple LookupClassification to classify the events being published. I glossed over this detail in the previous post to keep things simple, but the lookup classification does what its name implies – it does a lookup on the classifier (in my case, it was a string) and returns the subscribers for that classifier. There is no hierarchy involved. In this situation, subscriptions are exact-match: if you subscribe to /zombies/Foo you will not see events published to /zombies, and vice versa.

In this blog post, I want to talk about how we can add hierarchical topic support to the event bus by switching from a lookup classification to a subchannel classification. In the subchannel classification system, I can subscribe to /zombies and I will receive events published to subchannels like /zombies/Foo. However, if I publish to /zombies/A and I am subscribed to /zombies/B, I will not receive that event. This type of topic-based subscription should be familiar to those of you who have used service buses, messaging middleware, or the evil creature known as JMS.
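The subchannel rule boils down to a prefix test. Here is a quick sketch of it in Go (my own throwaway function, not Akka); I’ve appended a “/” to the prefix so that /zombiesXYZ doesn’t accidentally match /zombies, a wrinkle that a raw startsWith check has:

```go
package main

import (
	"fmt"
	"strings"
)

// Receives reports whether a subscriber to `sub` sees an event published
// to `topic` under subchannel classification: an exact match, or a topic
// that lives underneath the subscription in the hierarchy.
func Receives(topic, sub string) bool {
	return topic == sub || strings.HasPrefix(topic, sub+"/")
}

func main() {
	fmt.Println(Receives("/zombies/WEST", "/zombies")) // true: subchannel
	fmt.Println(Receives("/zombies", "/zombies/WEST")) // false: parent, not child
	fmt.Println(Receives("/zombies/A", "/zombies/B"))  // false: sibling channels
}
```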

So let’s get to it. First, I want to create my own subchannel classification. The beauty of this is that you can provide as little or as much logic as you like. In my case, I’m just going to do a standard string comparison to classify, and I am going to use “starts with” to identify a subchannel. I could get much more involved, even using regular expressions, if I wanted to.

import akka.event.ActorEventBus
import akka.event.SubchannelClassification
import akka.util.Subclassification

class ZombieSightingSubclassEventBus extends ActorEventBus with SubchannelClassification {
  type Event = ZombieSightingEvent
  type Classifier = String

  protected def classify(event: Event): Classifier = event.topic

  protected def subclassification = new Subclassification[Classifier] {
    def isEqual(x: Classifier, y: Classifier) = x == y
    def isSubclass(x: Classifier, y: Classifier) = x.startsWith(y)
  }

  protected def publish(event: Event, subscriber: Subscriber): Unit = {
    subscriber ! event.sighting
  }
}

Now if I modify my previous blog post’s application object as follows, I’ll get some very robust behavior:

object ZombieTrackerApp extends App {

  val system = ActorSystem()
  //val eventBus = new ZombieSightingLookupEventBus
  val eventBus = new ZombieSightingSubclassEventBus

  val subscriber = system.actorOf(Props(new Actor {
    def receive = {
      case s:ZombieSighting => println(s"Spotted a zombie! ${s}")
    }
  }))

  val westCoastSightingHandler = system.actorOf(Props(new Actor {
  	def receive = {
		case s:ZombieSighting => println(s"West coast zombie ${s}!!")
	}
  }))

  val eastCoastSightingHandler = system.actorOf(Props(new Actor {
	def receive = {
		case s:ZombieSighting => println(s"East coast zombie ${s}!!")
	}
  }))

  eventBus.subscribe(subscriber, "/zombies")
  eventBus.subscribe(westCoastSightingHandler, "/zombies/WEST")
  eventBus.subscribe(eastCoastSightingHandler, "/zombies/EAST")

  eventBus.publish(ZombieSightingEvent("/zombies/WEST", ZombieSighting("FATZOMBIE1", 37.1234, 45.1234, 100.0)))
  eventBus.publish(ZombieSightingEvent("/zombies/EAST", ZombieSighting("SKINNYBOY", 30.1234, 50.1234, 12.0)))

  // And this one will NOT go off into deadletter like before ... this satisfies "startsWith" on the first subscriber
  eventBus.publish(ZombieSightingEvent("/zombies/foo/bar/baz", ZombieSighting("OTHERONE", 35.0, 42.5, 50.0)))
  system.shutdown
}

Here I’ve decided to create one subscriber that listens for west coast zombies and another one that listens for east coast zombies. In the sample above I’ve used two anonymous actor classes, but I could very easily have used instances of the same actor class that were primed with different constructor props (such as the region to which they were bound). As I mentioned above, the old listener subscribed to /zombies receives every event published beneath /zombies, including the regional ones, while the east and west subscribers receive only their own localized events.

Here’s what the program run output looks like:

[info] Running ZombieTrackerApp 
Spotted a zombie! ZombieSighting(FATZOMBIE1,37.1234,45.1234,100.0)
West coast zombie ZombieSighting(FATZOMBIE1,37.1234,45.1234,100.0)!!
East coast zombie ZombieSighting(SKINNYBOY,30.1234,50.1234,12.0)!!
Spotted a zombie! ZombieSighting(SKINNYBOY,30.1234,50.1234,12.0)
Spotted a zombie! ZombieSighting(OTHERONE,35.0,42.5,50.0)
[success] Total time: 6 s, completed Feb 12, 2014 8:39:09 PM

And so this is hopefully enough to whet your appetite on the Akka Event Bus. And by “whet your appetite” I mean that in the most drug-pusher way possible. The first one’s free, now enjoy your new addiction to the event bus :)

Feb 12 14

Using the Akka Event Bus

by Kevin Hoffman

It’s been so long since I’ve written a blog post about Akka, it feels like coming home after a long business trip – the softness that only your own bed in your own home can provide.

Anyway, let’s talk about event buses. An event bus is basically an intermediary between publishers and subscribers. Publishers publish events to the event bus and subscribers listening for certain types of events will receive those events. That’s the generic definition of an event bus. The Akka Event Bus is, as you might have guessed, an Akka-specific implementation of this pattern.

I find the Akka documentation for Event Bus to be inscrutable and downright off-putting for new developers who aren’t already Akka experts. Go ahead, try and read that stuff and work backwards from that into a hello world sample. You’re going to need at least a beer. So let’s start with the basics.

To set up an event bus you need three things: an event bus, subscribers, and publishers. When you create an event bus (a base class that you extend in Scala) you need to declare two types:

  • Event Type
  • Classifier Type

This is where the documentation gets annoying. The Event Type is just the type of objects that you expect to be put on the bus. You can make this type as specific or as generic as you want (Any works just fine as an event type, but I wouldn’t really recommend that as a pattern). The Classifier Type is the data type of the event bus classifier. This sounds more complicated than it needs to be. A classifier is really just some piece of information used to differentiate one event from another, or, in Akka parlance, to classify an event.

You could use an integer as a classifier, and then when you subscribe your actors to the bus, you subscribe them to individual numbers. So, let’s say you have 3 buckets: you could publish into bucket 1 or bucket 2, and only the subscribers registered for that bucket’s classifier would receive the event. If you’re familiar with JMS or other messaging middleware systems then you might be familiar with the “topic” style classifiers, which often look like slash-delimited hierarchies such as /buckets/1 or /buckets/2. There is some additional complexity around classifiers and classification that I will defer until the next blog post so we can get right to the hello world sample and keep things simple.
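To make the lookup idea concrete before we get to the Akka version, here is the whole pattern reduced to a toy in Go (not Akka – just the shape of it): an exact-match map from classifier to subscriber callbacks, with no hierarchy at all.

```go
package main

import "fmt"

// Event carries a topic (the classifier) and a payload.
type Event struct {
	Topic   string
	Payload string
}

// Bus is a lookup-classification bus: an exact-match map from classifier
// to subscriber callbacks. No hierarchy, no prefixes.
type Bus struct {
	subs map[string][]func(Event)
}

func NewBus() *Bus { return &Bus{subs: make(map[string][]func(Event))} }

func (b *Bus) Subscribe(topic string, fn func(Event)) {
	b.subs[topic] = append(b.subs[topic], fn)
}

func (b *Bus) Publish(e Event) {
	for _, fn := range b.subs[e.Topic] { // exact lookup on the classifier
		fn(e)
	}
}

func main() {
	bus := NewBus()
	bus.Subscribe("/buckets/1", func(e Event) { fmt.Println("bucket 1 got:", e.Payload) })
	bus.Publish(Event{"/buckets/1", "hello"}) // delivered
	bus.Publish(Event{"/buckets/2", "lost"})  // nobody listening: silently dropped
}
```

The second publish is the lookup classification’s sharp edge: with no subscriber for that exact classifier, the event simply goes nowhere.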

First, we need a sample domain. As such, we’ll use the old standby domain of zombies. Let’s say we want to set up a bus to which we can publish zombie sighting messages. We then have subscribers who are interested in those zombie sighting messages for whatever reason. In the past, my actor samples all directly sent messages from one actor to another. But, with the Event Bus, the actors sending the message don’t have to care about or know anything about the actors interested in receiving it. This kind of loose coupling pays off in spades in ease of maintenance and troubleshooting.

And without further ado, here’s the full source that includes the zombie sighting message, a zombie sighting event (with a string classifier), the zombie bus, and the code that sets up the subscriptions and publishes to the bus:

import akka.event.ActorEventBus
import akka.event.LookupClassification
import akka.actor.ActorSystem
import akka.actor.Props
import akka.actor.Actor

case class ZombieSighting(zombieTag: String, lat: Double, long: Double, alt: Double)
case class ZombieSightingEvent(topic: String, sighting: ZombieSighting)

class ZombieSightingLookupEventBus extends ActorEventBus with LookupClassification {
  type Event = ZombieSightingEvent
  type Classifier = String

  protected def mapSize(): Int = 10

  protected def classify(event: Event): Classifier = {
    event.topic
  }

  protected def publish(event: Event, subscriber: Subscriber): Unit = {
    subscriber ! event.sighting
  }
}

object ZombieTrackerApp extends App {

  val system = ActorSystem()
  val eventBus = new ZombieSightingLookupEventBus

  val subscriber = system.actorOf(Props(new Actor {
    def receive = {
      case s:ZombieSighting => println(s"Spotted a zombie! ${s}")
    }
  }))

  val indifferentSubscriber = system.actorOf(Props(new Actor {
    def receive = {
	   case s:ZombieSighting => println(s"I saw a zombie, but I don't give a crap.")
    }
  }))

  eventBus.subscribe(subscriber, "/zombies")
  eventBus.subscribe(indifferentSubscriber, "/zombies")

  eventBus.publish(ZombieSightingEvent("/zombies", ZombieSighting("FATZOMBIE1", 37.1234, 45.1234, 100.0)))
  eventBus.publish(ZombieSightingEvent("/zombies", ZombieSighting("SKINNYBOY", 30.1234, 50.1234, 12.0)))

  // And this one will go off into deadletter - nobody is subscribed to this.
  eventBus.publish(ZombieSightingEvent("/zombies/foo/bar/baz", ZombieSighting("OTHERONE", 35.0, 42.5, 50.0)))
  system.shutdown
}

And when we run this application, what we expect to see is a pair of outputs for each sighting: one from the subscriber that doesn’t give a crap and one from the subscriber that prints information about the zombie it received. Here’s the output:

[info] Running ZombieTrackerApp
 I saw a zombie, but I don't give a crap.
 Spotted a zombie! ZombieSighting(FATZOMBIE1,37.1234,45.1234,100.0)
 I saw a zombie, but I don't give a crap.
 Spotted a zombie! ZombieSighting(SKINNYBOY,30.1234,50.1234,12.0)

In the next blog post, I’ll cover how to upgrade this sample so that we can support the hierarchical classifier style (subchannels).

Jan 18 14

Concurrency in Go with Channels and Goroutines

by Kevin Hoffman

So far I’ve been walking myself through learning some of the basic features of the Go language and I’ve been impressed with what I’ve seen. I still have this nagging feeling that Go is a solution looking for a problem and I haven’t decided if that’s because I haven’t yet had that “a-ha!” moment where I figure out what problem Go solves miraculously or if it’s just because I’ve only been poking around with it for a few days.

Concurrency is one of the more heavily lauded features of the language, but the community is quick to remind you that concurrency is not parallelism. I won’t go into detail on that subject because it’s a really meaty one and I just want to provide a quick introduction to concurrency here.

The way to think about goroutines is that they are very small, light-weight chunks of processing that you compose in order to chew through a larger task. There is an analogy to be drawn between goroutines and actors: in the samples I’ve seen, there is a near 1:1 correspondence between what I would normally code into an actor and what gets coded into a goroutine. The way goroutines communicate with each other and with the rest of your code is through channels, which can send and receive messages.

One distinction that I think is unique to Go is that sending and receiving on channels is a synchronizing operation. In other words, if you use the arrow notation (<-) to receive a value from a channel, you will block until you receive that value. Conversely, if you send a value on a channel, the code that sent the value will be blocked until some other goroutine plucks that value off the channel. You can get around this with buffered channels, but it seems to be a fundamental thing that goroutines synchronize via communication rather than shared memory.
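You can see both the synchronizing behavior and the buffered escape hatch in a few lines. This is my own toy, separate from the stock sample below:

```go
package main

import "fmt"

func main() {
	// Unbuffered: a send blocks until someone receives, so the send has
	// to happen in another goroutine or we'd deadlock right here.
	unbuffered := make(chan string)
	go func() { unbuffered <- "hello" }()
	fmt.Println(<-unbuffered) // blocks until the goroutine's send lands

	// Buffered: sends succeed without a waiting receiver until the
	// buffer (capacity 2) is full.
	buffered := make(chan int, 2)
	buffered <- 1
	buffered <- 2 // still fine; a third send here would block
	fmt.Println(<-buffered, <-buffered)
}
```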

In my little sample, I decided to take the synchronization “hello world” of a producer and a consumer and implement it in Go. In the following code, I’ve got a type called StockTick and I’ve created a channel that carries stock ticks (channels are strongly typed). I invoke a producer and a consumer and thus have two separate blocks of code communicating with each other:

package main

import (
	"fmt"
	"math/rand"
)

type StockTick struct {
	Ask		float32
	Bid		float32
	Symbol	string
}

func genTick() (tick StockTick) {
	tick.Ask = 45 * rand.Float32()
	tick.Bid = 42.50 * rand.Float32()
	tick.Symbol = "IBM"
	return tick
}

func produceTicks(stocks chan StockTick) {
	for x := 1; x < 100; x++ {
		stocks <- genTick()
	}
	close(stocks)
}

func consumeTicks(stocks chan StockTick) {
	for stock := range stocks {
		fmt.Printf("Consumed a tick %v\n", stock)
	}
}

func main() {
	fmt.Printf("Stock Channels FTW\n\n")
	stocks := make(chan StockTick)
	go produceTicks(stocks)
	consumeTicks(stocks)
}

There are a couple of things in this code that caught me by surprise. The first is that I had to close the stocks channel when I was done producing. A range over a channel only terminates when that channel is closed; if the producer simply stopped sending, the consumer would block forever waiting for a value that never arrives, and the Go runtime would detect that every goroutine is asleep and panic with a deadlock error. You can get around this in real-world apps where you plan on consuming all day with an infinite producer, but in this case I knew I was done, so calling close(stocks) is what allows the range loop to terminate in a friendly way without a deadlock.
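The close is also observable on the receiving end. A small standalone sketch (mine, stdlib only) showing both range termination and the comma-ok form:

```go
package main

import "fmt"

// drain collects every value until the channel is closed; the range loop
// stops cleanly once the sender calls close.
func drain(ch chan int) []int {
	var got []int
	for v := range ch {
		got = append(got, v)
	}
	return got
}

func main() {
	ch := make(chan int, 3)
	ch <- 1
	ch <- 2
	close(ch)

	fmt.Println(drain(ch)) // [1 2]

	// After close (and once drained), receives don't block: they yield the
	// zero value, and the comma-ok form reports that the channel is exhausted.
	v, ok := <-ch
	fmt.Println(v, ok) // 0 false
}
```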

Next, notice that I used the go keyword to fire off the produceTicks function as an asynchronous goroutine, but I didn’t do that for consumeTicks. This is because if I let them both run asynchronously in the background, the main() function would terminate, which kills the Go application. In a Go app, as soon as main() exits, the entire app quits, even if you have active goroutines.
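If you do want both sides running as goroutines, main has to be told when it is safe to exit; a done channel is the lightest way to do that (sync.WaitGroup works too). A sketch under those assumptions, with my own function names:

```go
package main

import "fmt"

// runPipeline starts a producer and a consumer as goroutines and blocks
// until the consumer signals completion, returning how many values were
// consumed. The done channel is what keeps the caller (e.g. main) alive.
func runPipeline(n int) int {
	values := make(chan int)
	done := make(chan int)

	go func() { // producer
		for x := 1; x <= n; x++ {
			values <- x
		}
		close(values)
	}()

	go func() { // consumer, also a goroutine this time
		count := 0
		for range values {
			count++
		}
		done <- count // signal completion back to the caller
	}()

	return <-done // block here; without this, main could exit first
}

func main() {
	fmt.Println("consumed", runPipeline(3), "values")
}
```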

There are a number of higher-level patterns that you can implement with channels such as sync’ing on acknowledgements sent back on the channel from which a value is received and using a “fan-in” approach where you’re pulling from multiple channels to produce values on a single channel so that you aren’t waiting in lock-step for values to arrive in sequence across channels. As this is still just a series of blog posts covering my introduction to the language, I won’t cover those yet.

As I said, I am still unsure whether I have any problems that Go would be ideal to solve, but I am going to keep plugging away until I decide one way or the other.

Jan 17 14

Implicit Interfaces in Go

by Kevin Hoffman

In my continuing obsession with learning a new language, I’ve gotten to the point in experimenting with Go that I want to know what all this implicit interface stuff is about. In my previous blog post, I talked about how Go doesn’t actually have classes, but rather it has explicit receiver functions that look like they are methods being invoked on an object.

The rumor on the street is that Go has a facility similar to duck typing in that when you declare an interface, anything can be said to implement that interface as long as that thing has all of the method definitions declared in the interface. This isn’t entirely unique – dynamic languages are full of stuff like this, and Scala’s structural types do much the same thing. The curious thing about Go’s implicit interface satisfaction is that it is compile-time checked. That’s right – your code won’t compile if you have code that attempts to convert a value into an interface that it can’t satisfy.

Let’s take a real-world example: wielding a chair (what, that’s not real-world to you?). When building MUDs and even some more modern game types, you often run into this type of thing. If a developer didn’t originally intend for an object to be used as a weapon, it can’t be treated like one. But, what if you want to make it so that the player can wield a chair as easily as a sword? Sure, they will have different behavioral characteristics, but you should still be able to wield either.

In a classic programming language problem, this is easy to solve: Create an interface called Wieldable (or IWieldable if you’re a .NET person) and then two classes that both implement that interface: Chair and Sword. In Go, you still create a Wieldable interface that defines a Wield() method, but the difference is you never create a class nor do you explicitly indicate what interface you’re implementing. If there’s an explicit receiver in scope called Wield() for that object, then your code can treat it as a wieldable.

Let’s take a look at some code:

// interfaces
package main

import (
	"fmt"
)

type Wieldable interface {
	Wield()
}

type Chair struct {
	Legs	int
	Wielded bool
	Name	string
}

type Sword struct {
	DamageRating	int
	Wielded			bool
	Name			string
}

type Test struct {
	Wieldy		Wieldable
}

func (s *Sword) Wield() {
	fmt.Printf("Wielding Sword %s\n", s.Name)
	s.Wielded = true
}

func (c *Chair) Wield() {
	fmt.Printf("Wielding Chair %s\n", c.Name)
	c.Wielded = true
}

func main() {
	fmt.Println("Hello World!")

	s := Sword{
		DamageRating: 12,
		Wielded: false,
		Name: "Sword of Doom",
	}

	c := Chair{
		Legs: 4,
		Wielded: false,
		Name: "Death Seat",
	}

	t := Test {
		Wieldy: &s,
	}

	t2 := Test {
		Wieldy: &c,
	}

	t.Wieldy.Wield()
	t2.Wieldy.Wield()
}

In this code I am creating a Sword struct and a Chair struct, and then I created an interface called Wieldable and another struct called Test whose Wieldy field holds a Wieldable. To prove that I can convert between Chair/Sword and Wieldable, I create two test objects – one holding a Wieldable converted from the sword and one from the chair – and I then call the Wield() method on each.

So here’s something unexpected – the plain conversion is one-way. In a runtime-managed language, when you typecast a reference to an object from one type to another, the actual underlying object remains unchanged and you get a runtime-limited view onto that object. For example, if you’ve created some serializable object called Zombie, you can cast it to a Serializable and then still get back the original zombie. In Go, if you attempt to convert a Test.Wieldy field straight back into a Sword, you will get an error message from the compiler; the supported route back to the concrete type is a type assertion, which is checked at runtime instead.
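To show what that runtime escape hatch looks like, here is a standalone sketch of the comma-ok type assertion form (with its own pared-down types, not the ones from the wield example above):

```go
package main

import "fmt"

type Wieldable interface{ Wield() string }

type Sword struct{ Name string }
type Chair struct{ Name string }

func (s *Sword) Wield() string { return "wielding sword " + s.Name }
func (c *Chair) Wield() string { return "wielding chair " + c.Name }

// concreteName uses comma-ok type assertions to recover the concrete type
// hiding behind the interface, without panicking on a mismatch.
func concreteName(w Wieldable) string {
	if s, ok := w.(*Sword); ok {
		return "sword: " + s.Name
	}
	if c, ok := w.(*Chair); ok {
		return "chair: " + c.Name
	}
	return "unknown"
}

func main() {
	var w Wieldable = &Sword{Name: "Sword of Doom"}
	fmt.Println(concreteName(w))                          // sword: Sword of Doom
	fmt.Println(concreteName(&Chair{Name: "Death Seat"})) // chair: Death Seat
}
```

Without the comma-ok form, a failed assertion like w.(*Chair) would panic at runtime, which is why the two-value variant is the idiomatic choice.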

That said, you can still prove that the wield methods did actually change the value of the things you’re wielding:

fmt.Printf("Wield status: %t, %t\n", s.Wielded, c.Wielded)

The big take-away that I learned from this experiment is that while implicit interfaces are pretty damn awesome, you should not be using them so you can do a pile of random typecasting or to somehow treat your structs like the target for mixins (traits in Scala). They should be used to assert that a particular set of receiver methods exist for a particular data type. And since there are no classes and there is no inheritance, that data type is strictly enforced.

Further, I might have been better off just using embedding to contain a wieldable inside either my chair or sword objects, which would make them both wieldable. Also, interfaces cannot contain fields. So you might be tempted to try and model some statement like “anything wieldable should have a wielded boolean property”. You cannot model this with interfaces.
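Since interfaces can’t carry fields, that embedding route would look something like the following sketch – WieldState is my own hypothetical name for a small struct holding the shared flag, embedded in both types:

```go
package main

import "fmt"

// WieldState carries the field an interface can't: every embedder gets a
// Wielded flag, and the Wield method, for free via promotion.
type WieldState struct {
	Wielded bool
}

func (w *WieldState) Wield() { w.Wielded = true }

type Chair struct {
	WieldState // embedded: Chair now has Wielded and Wield()
	Legs       int
	Name       string
}

type Sword struct {
	WieldState
	DamageRating int
	Name         string
}

func main() {
	c := Chair{Legs: 4, Name: "Death Seat"}
	s := Sword{DamageRating: 12, Name: "Sword of Doom"}

	c.Wield() // promoted from the embedded WieldState
	s.Wield()

	fmt.Printf("chair wielded: %t, sword wielded: %t\n", c.Wielded, s.Wielded)
}
```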

So I’m already starting to run into scenarios where it feels awkward or downright limiting to model things like business logic (or game logic) in Go. I will keep trying, however, to see if there’s a more Go-idiomatic way of doing things that I’m not seeing because I’m still knee-deep in the class-and-inheritance world.

Jan 17 14

Object-Oriented Design in Go

by Kevin Hoffman

One of the first things I tend to look for in exploring a new language is OOD. What does a class look like? How does inheritance work? And, because I’m such a huge fan of Scala, I now ask questions like “Does it support mixins (traits)?”. The trick with Go is that there are no classes. That’s right, it’s a language that can be called object-oriented but you’ll never write a single class.

Let’s take a look at a little bit of sample code, where I’ve created a struct (remember those? Ah, the good old days…) called Mob and I’ve also created one called Coordinate:

type Coordinate struct {
	X	int
	Y	int
}

type Mob struct {
	Hitpoints	int
	Name		string
	AggroRadius	int
	Coordinate
}

Something worth noting here is that it might look like Mob inherits from Coordinate, but that’s not what’s happening. Instead, the Mob struct is embedding Coordinate struct. To really be object-oriented, I need to be able to write methods on my “objects”. Go supports this by allowing you to define functions that have something called an explicit receiver. In other words, when you invoke a method on a struct, that instance of that struct becomes the explicit receiver for that function.

Let’s create a method that moves our mob around a game board. In the process, you’ll see another one of Go’s differences from traditional C in that it can return multiple values. What happens if we invoke a method called MoveTo in a traditional OOP language and it fails? How do we know where the mob stopped en route to its original location? There are other classes we traditionally create like MoveResult or some junk like that, but that’s all ceremony slapped on top of the real problem. Here’s how to do it in Go:

func (mob *Mob) MoveTo(x, y int) (success bool, newLoc Coordinate) {
	success = true
	newLoc = Coordinate{x,y}

	mob.X = x
	mob.Coordinate.Y = y // can choose to be explicit about embed or not

	return success, newLoc
}

In the preceding code, mob *Mob declares an explicit receiver of type *Mob, and it will be named mob (think of it as being able to provide a name for the traditional this in other OOP languages). x and y are integer parameters to the method, and success and newLoc are the named return values from the method.

Now let’s take a look at how to create a Mob and use it:

func main() {
	var brutus = Mob{
		Hitpoints: 500,
		Name: "Brutus",
		AggroRadius: 20,
		Coordinate: Coordinate{X: 100, Y: 200},
	}

	madeIt, newLoc := brutus.MoveTo(60, 50)
	fmt.Printf("Did I make it? %t\n", madeIt)
	fmt.Printf("Brutus is now at %d,%d\n", newLoc.X, newLoc.Y)

	// Can also shortcut through the embed:
	fmt.Printf("Brutus's X location - %d (or %d)\n", brutus.X, brutus.Coordinate.X)
}

I strongly encourage you, if you’re just learning Go, to copy this stuff in (remember each file needs a package) and run it with something like go run mob.go. Also remember that there’s absolutely no restriction on filename and it doesn’t need to correlate to the package it contains or the structs used. One subtle take-away from this is that you can add explicit receiver functions (methods) to any struct from anywhere in your code.

Here’s something subtle but important: did you notice the pointer in the MoveTo method’s receiver? If you’ve got the code running, go ahead and delete the asterisk and run it again. Note that you don’t get any memory violations or null pointer exceptions or any of the usual crap you would expect from swapping a pointer for a value. BUT, what does happen is that the modification of the mob’s current location appears to be ignored. Well, it’s not really ignored; the modification is being made to a copy of the mob, which is different from the Mob value sitting inside the main() function. To ensure that changes made inside the method actually happen to the instance shared between main() and the method, you need a pointer receiver. This is because the method isn’t some vtable-dispatched, reflection-discovered shenanigan; it’s a legitimate C-style function that just happens to have been passed a value called mob. This is one of those dichotomies I mentioned in my previous blog post, about how Go seems to straddle the line between low-level, C-like functionality and modern awesomesauce like goroutines and asynchronous processing via communication over channels.

In my next post, I’ll show you some stuff about type inference and another cool feature: implicit interfaces.

Jan 16 14

First Impressions of the Go Language

by Kevin Hoffman

I recently got a chance to hang out with some very brilliant people, and one of them mentioned how fond he was of Go, a programming language most commonly associated with Google. To be honest, the fact that it was “yet another thing that came out of Google” is exactly why I hadn’t played with it up until this point. I’d seen a few blog posts but skimmed them. This time, however, I decided to give Go a fairer shake, so I downloaded it and started hacking away. What happened next shocked even me.

I’m not a coding newb, but even I know that there’s a learning curve to each language, and recently the learning curves have been getting steeper – Scala and Clojure come to mind, as well as some of the UI frameworks that aren’t languages but have language-sized learning curves (AngularJS is the 800-pound gorilla of learning curves). That’s why, when I had “Hello World” compiled and running about 12 minutes after my first click on golang.org, I was shocked.

If you are unfamiliar with Go, it takes a little getting used to, because at first glance it seems like a hodgepodge of features you would never expect to see in the same language. Here’s a quick list that I found over on this blog post entitled Why Go?:

  • Asynchronous – We’re all about reactive programming these days and all the hip kids are building non-blocking, asynchronous code. In fact, if I hear one more Node.js zealot shout “it’s non-blocking” one more time, I will stab him in the face with a web socket. Go has these things called Goroutines that are lightweight async execution blocks and they have channels that make my inner Erlang and Akka fanboy giggle. Bottom line is you write synchronous-looking code but it’s non-blocking and there are no callbacks. Callback hell needs to be avoided at all costs.
  • Concurrency – Paired with the asynchronous features, the support for multi-core concurrency via channels is very appealing, especially considering some of the other aspects of the language (check out the next bullet, you’re going to crap yourself)
  • Static Binaries – Oh that’s right. There’s no JVM. There’s no VM of any kind. You don’t have to wait for a runtime to go dig through classpath garbage and dynamically load a hojillion JAR files. These are compiled, native binaries. Remember way back in the good old days when you could just pass around an executable and that worked?? Yeah, I had almost forgotten those times as well.
  • Language Features – In addition to looking like C and having asynchronous, actor-like support for concurrency and parallel programming, it also has a lot of the goodies that people think you can only get from some of the newer JVM or functional languages, like type inference and my personal favorite, implicitly satisfied interfaces… wait wait… it gets better… the implicitly satisfied interfaces are checked and satisfied at compile time… I almost shed a tear when I saw that.
  • Privacy indicated via case, not keywords – Capital lettered members are exported, lowercase ones are not.
  • Imperative, object-oriented language – You’d think this is contradictory, but with Go it just seems to work. I was very skeptical until I started playing with it. Go doesn’t have classes, yet it supports message passing via methods, polymorphism, and namespacing. Crazy, right?
  • Multiple return values – The language looks like C, but you can invoke methods that return multiple values and it looks just like a Scala unapply syntax. Me likey.

So, I sat down, downloaded Go, got it working, and followed the advice of setting up the $GOPATH environment variable pointing to a single root under which all of my Go code would reside. It felt a little awkward, but there seemed to be some method to the madness, specifically relating to the fact that Go can fetch its dependencies directly from GitHub and that it can resolve transitive dependencies without the need for a makefile, Maven, or sbt. Holy crap – compiling to a native binary without Maven or sbt? That’s cause for celebration.

So, let’s get to the hello world. (You can get the instructions to download Go and set up “Hello World” at golang.org.)

package main

import "fmt"

func main() {
  fmt.Printf("Hello world.\n")
}

When you issue a go install command at the command line, it compiles your Go code. It doesn’t just create a JAR file or a DLL that will be interpreted (JIT or otherwise) by a runtime like the .NET CLR or the Java Virtual Machine; it creates a static, standalone, self-contained binary. Ah, but you say Java can create executable JARs too, right? This isn’t the same. This is a native executable, whereas an executable JAR is just a regular JAR with a JVM bootstrap launcher baked in.

When I first sat down and started reading the docs on Go, I had mixed feelings and still do. On the one hand, it feels like a step back in that some of it looks like old school C (don’t get me wrong, I love me some old school C) whereas other parts are amazingly terse yet ridiculously powerful (channels, goroutines, slices, transitive no-makefile dependency resolution). I don’t know yet what my final thoughts are, but I know that right now it is appealing to my love of the C language (I think Go is one of the most C-pure looking syntaxes I’ve seen in a long time) with my love of Erlang, the actor pattern, and asynchronous back-end programming.

This is just hello world. If you’ve read this blog before then you know the real test I have to put Go through in order to vet it as a language and I’ll be posting a series of blog posts on that soon. The real question I ask of any new language is this: Can I create a MUD with this language?

I’m about to find out – stay tuned!

Dec 7 13

Asynchronous, Non-Blocking NOM NOMs

by Kevin Hoffman

Most of the time when I encounter fascinating situations in the real world, I am struck by how well that situation might translate into a novel. For example, I constantly see situations and people that inspire character traits or plot points that someday might make their way into one of my books. However, every once in a while, I see a situation in real life that tickles my technical fancy.

There is a “deli” (the quotes are because in New York, “deli” doesn’t mean what most people think it means.. a NY deli is often a giant cafeteria-style buffet place that serves 20 kinds of food, and sells everything from chips to bobble-heads and gummy bears) near my office in Manhattan that is a marvel of modern retail efficiency. On its busiest day, you can make your way through the throng of people, order what you want, and get out in less than 20 minutes. The really remarkable thing is that the checkout line is unbelievably fast and I can be six people back and still get through the line and out the front door in less than 3 minutes.

What makes this an interesting technical blog post is that this place employs a number of techniques that people like me have been using as large-scale application development design patterns for years. My coworkers and I often refer to the lines of people feeding up toward the cash registers as non-blocking, asynchronous processing.

You really need to see these people in action to believe it. There’s one person who is handling nothing but the credit card swiping, another person is handling the cash register, yet another takes your food and puts it in a bag for you, a bag which comes pre-loaded with a fork, a knife, and napkins. There is absolutely no wasted time from the perspective of the customer. All of the things that a customer might be blocked on in a traditional purchase queue have been parallelized. 

There isn’t a per-customer wait for the bag to get the common essentials stuffed into it. When the card-swiper person is swiping your credit card, the cash register person is actually ringing up the order of the person behind you in the queue, and the bag-stuffer is working on the bag of the person behind that.

This is all well and good and getting your food fast is always a good thing, but what does it have to do with building scalable systems? Everything.

Imagine that a web request is a customer waiting in line for food. If, upon each request’s arrival at the web server, the server has to do everything in serialized order for every single request, then the time it takes to handle a single request may not appear to be all that bad, but the more requests you have, the longer the lines get. The more back-up builds in the queue, the worse the perceived performance of your site will be, even if the time it takes to handle a single request is fixed and relatively short. Why is that? Because each person’s request isn’t being handled immediately upon arrival at the site; requests are piling up in a queue, waiting for other requests to be processed.

Typical scaling patterns just increase the number of concurrent threads to deal with incoming requests. That’s fine, and it works for a little while, until you run out of threads. Now, let’s say you have 30 threads. The first 30 people to hit your site have a decent experience and then the 31st and thereafter experience the same delays as before. People see this and the initial knee-jerk reaction is to add more servers. So now let’s say you’ve got 4 servers, each with 30 threads (I’m simplifying to make the numbers easier to picture). The first 120 people to hit your site now have a decent experience and then thereafter, additional customers are subject to delays and waits.

This is a classic “fix the symptom, not the problem” approach. If we apply this to retail, then I’m sure you know what the knee-jerk retail reaction to high load and peak volume is – add more cashiers. So now instead of four queues handling people’s orders, you have eight. Great, but you run into the same problems as above, plus an even worse one – your store runs out of room to accommodate the people waiting in the queues. In some circles, we also refer to this as the latency versus throughput problem. It’s actually more involved than that. If you optimize for latency, then you train your cashiers to be unbelievably fast at processing a single order. This appears to be good because it (ideally) means your queue drains faster, with customers moving through the line faster. To increase your throughput, you just add more highly trained, super-fast cashiers. Simple, right? NOPE.

What has this one deli done that developers building systems for scale failed to do? It’s remarkably simple, when you think about it. Rather than taking a monolithic process and scaling that one process out (completing the entire checkout process from start to finish), they have deconstructed all of the tasks that need to be done in order to get a customer through the line and have applied enterprise software patterns to dealing with those tasks. First, they have done some tasks before the customers even arrive, such as pre-filling the bags with forks, knives, and napkins. This removes several seconds from the per-customer pipeline which, when you multiply that out by the number of customers in this place (it’s huge, trust me) and the number of registers, makes a significant impact.

The next thing they’ve done is identify tasks that can be done in parallel. The same cashier doesn’t need to ring up your order and swipe your credit card. These can be done by two different people. This is where throughput versus latency becomes really important. The per-customer time to finish remains the same because the customer can’t leave until both tasks are complete (ring-up and swipe), however, you’re dramatically increasing your throughput because while the first customer is having their card swiped, the next customer is having their order rung up, which allows the pipeline to absorb more customers.

So what’s the moral of this long-winded story? Simple: Model your request processing pipelines like a super-efficient Manhattan deli :)