As you can imagine, building a Massively Multiplayer Online Game (MMOG, or MMORPG for the “role-playing” variety; you may also have seen MMORTS for online real-time strategy games) is no small task. In fact, it’s pretty daunting. However, the technology we have now, including hardware, software, and virtualization, makes it possible to scale these games to incredible sizes.

To put it in perspective, EverQuest, which I consider the first truly huge, commercially successful MMO (yes, Ultima Online came first, but EQ quickly rose to eclipse UO’s volume), had a per-server capacity of around 1,700-2,000 players during one of the periods when I was playing. Once our buddies noticed the number of players online on a particular server creeping toward 2,000, many of us started running like hell for the nearest safe spot, city, or useful campground.

Today, games like World of Warcraft, while still sharding their content across a ridiculous number of servers, not only have vastly more content and interaction than EQ did, but each shard can hold many more players. Being able to create a server that can handle that kind of scale and respond to messages from player clients in real time is a daunting task. The sheer number of minutiae you need to get absolutely perfect for things to work properly is mind-boggling.

The EverQuest servers, as well as similar servers for games that came shortly thereafter like Anarchy Online, were written in C++. To handle the volume of information from that large a number of simultaneous players, they had to manage their own threads, they had to do absurd low-level optimizations, and they had to make sure that every single line of code on that server was as fast and stable as possible. An entire shard (server process, or, later in EQ’s life, a cluster of server processes) simply could not crash. People paying $10-15/month for the privilege of playing the game (a concept which was brand new and controversial at the time) would not tolerate a server crash.

I am just one person, and giant game companies have entire armies of developers. How can I possibly hope to make an MMO server that players will actually use? Simple: by standing on the shoulders of giants. I don’t need to write all the low-level stuff, and I don’t need to write the distributed, grid-style asynchronous computing fabric that should underpin my game logic. It’s been done for me, and it’s called Akka.

You may have seen some of my previous blog posts where I talk about some of the low-level details, like Google Protocol Buffers and creating a non-blocking socket server using Akka IO. The next step toward being able to build an MMO is the core functionality required for a multiplayer game: asynchronous message routing and transmission.

The data flow for an MMO works something like this: a message comes into your server’s gateway (in my case, a non-blocking socket server). The gateway does whatever it needs to do with the message and dispatches it accordingly. The message, or a version of it, then floats its way through the “business logic” (your game core), and as a result of entering and executing your game logic, 0..n messages will be emitted. Some of those messages will come back to the player (“your spell missed, you blind moron”), some will arrive at another player’s client (“moronA failed to hit you with a magic missile”), and some will never leave the server environment (“persist this bit of information for me please, mr. database” or “hey NPC, a player just missed you with a magic missile, your AI should respond”). Whether you consider your NPC AI message-based or just part of your game logic is a minor semantic detail.
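
To make that 0..n fan-out concrete, here’s a toy sketch of a game-core function that takes one inbound message and emits several outbound ones. None of these case classes are my real message types; they’re stand-ins for illustration only:

// Toy sketch only: these case classes are stand-ins, not my real message set.
case class SpellCastAttempt(caster: String, target: String, spell: String)
case class SpellMissed(caster: String, target: String)          // goes back to the casting player
case class IncomingSpellMissed(caster: String, target: String)  // goes to the target player's client
case class PersistCombatEvent(attempt: SpellCastAttempt)        // never leaves the server environment

// One message in, 0..n messages out; the gateway/dispatcher decides where each one goes.
def resolveSpell(attempt: SpellCastAttempt): Seq[Any] = Seq(
  SpellMissed(attempt.caster, attempt.target),
  IncomingSpellMissed(attempt.caster, attempt.target),
  PersistCombatEvent(attempt)
)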

The point is this: we need to keep track of which players are connected, we need a way to identify those players so that messages can be targeted to them, and we need a way to receive messages, funnel them into our game logic, and emit messages as a result. No matter what kind of game you’re building, no matter what your game logic looks like, this is the essential core nugget of functionality on which all the rest of your MMO code must be built.

If you remember my dispatch map from this blog post, I can now add a new potential target for dispatch: the GameClientProxy (GCP), an actor that serves as a gateway between my game logic and the player. If I send the GCP a native Scala case-class message, I can expect that the GCP will produce a protobuf message and put it on the wire, headed for the target player. In some cases I can send the GCP an already-built protobuf message, as happens when a player sends a DirectMessage, the protobuf message used to send a “chat” from one player to another.

Here’s a look at my current dispatch map:

val dispatchMap: Map[Int, (Mergeable, String)] = Map(
  1 -> (ZombieSighting.defaultInstance.asInstanceOf[Mergeable], "akka://game/user/ServerCore/DispatchTarget"),
  MessageTypes.SignIn -> (SignIn.defaultInstance.asInstanceOf[Mergeable], "akka://game/user/ServerCore/Clients"),
  MessageTypes.DirectMessage -> (DirectMessage.defaultInstance.asInstanceOf[Mergeable], "akka://game/user/ServerCore/Clients")
)
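
The dispatch itself lives in the earlier post; roughly, it looks up the entry for an incoming frame’s message type, merges the payload bytes into the prototype, and fires the result at the configured actor path. Here’s a loose sketch of the routing half only (the protobuf merge is elided, and frameType/decoded are placeholder names, not my real variables):

// Loose sketch, not the exact dispatch code from the earlier post. `frameType` is the
// message type read off the wire; `decoded` is the already-merged protobuf message.
def routeMessage(frameType: Int, decoded: Any)(implicit system: akka.actor.ActorSystem): Unit = {
  dispatchMap.get(frameType) match {
    case Some((_, targetPath)) => system.actorFor(targetPath) ! decoded
    case None                  => system.log.warning(s"No dispatch entry for message type ${frameType}")
  }
}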

I’ve already got the code in place that dispatches these messages and does the protobuf de-serialization, so now I just need the “Clients” actor, a ClientManager that creates and keeps track of a GCP for each connected client:

class ClientManager extends Actor with ActorLogging {

	// Maps a player's username to the UUID of the socket (and GCP child actor) they are currently on
	val clientPlayerMap = scala.collection.mutable.Map.empty[String, String]

	override def preStart() = {
		log.debug("Client manager pre-starting.")
	}

	def playerForUsername(username:String) : ActorRef = {
		// Each GCP child is named after its socket UUID, so the stored UUID doubles as a relative actor path
		context.actorFor(clientPlayerMap(username))
	}

	def removeUUIDFromPlayerMap(uuid:String) : Unit = {
		// Collect the matching usernames first so we don't mutate the map while iterating over it
		val staleUsernames = clientPlayerMap.collect { case (username, mappedUuid) if mappedUuid == uuid => username }
		staleUsernames.foreach(clientPlayerMap.remove)
	}

	def receive = {
		case ClientDisconnected(socketHandle) => {
			removeUUIDFromPlayerMap(socketHandle.uuid.toString())
			// Stop the GCP child actor named after this socket's UUID
			context.stop(context.actorFor(socketHandle.uuid.toString()))
			log.debug(s"Player at socket ${socketHandle.uuid} disconnected.")
		}
		case ClientConnected(socketHandle) => {
			// Create a GCP child named after the socket's UUID so it can be resolved later by that UUID
			context.actorOf(Props(new GameClientProxy(socketHandle)), name = socketHandle.uuid.toString())
		}
		case (uuid:String, signIn:SignIn) => {
			// Remember which socket (UUID) this username is currently signed in on
			clientPlayerMap(signIn.username) = uuid
			log.debug(s"Player ${uuid} signed in ${signIn}")
		}
		case dm:DirectMessage => {
			log.debug(s"Player ${dm.sourcePlayer} sent dm to ${dm.targetPlayer}")
			// Resolve the target player's GCP by username and forward the chat to it
			playerForUsername(dm.targetPlayer) ! dm
		}
	}
}
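
The ClientConnected and ClientDisconnected messages matched above come from the socket gateway. Their real definitions live with the non-blocking socket server from my earlier post; shape-wise, they look roughly like this (inferred from how the ClientManager uses them, so treat it as a sketch):

import akka.actor.IO

// Rough shapes of the gateway's lifecycle notifications; the real definitions live with
// the socket server. This is just what the ClientManager above relies on.
case class ClientConnected(socketHandle: IO.Handle)
case class ClientDisconnected(socketHandle: IO.Handle)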

This class maintains a mapping between player usernames and socket UUIDs (the UUIDs are what I use to identify the game client proxy/GCP actors). This mapping is the essential bit of state that allows messages to be targeted at players rather than UUIDs. Game clients will be unaware of the UUIDs of other players (because a UUID can change if a player disconnects and reconnects while messages are flowing toward them). Also note that the s"foo ${bar}" syntax is Scala 2.10 string interpolation.

class GameClientProxy(socketHandle: IO.Handle) extends Actor with ActorLogging {

	override def preStart() = {
		log.debug(s"GameClientProxy pre-start, socket uuid: ${socketHandle.uuid}")
	}

	override def postStop() = {
		log.debug("GameClientProxy stopped.")
	}

	def writeRawMessage(msg:com.google.protobuf.GeneratedMessageLite, messageType: Int): Unit = {
		// Frame layout on the wire: [4-byte payload length][4-byte message type][protobuf payload]
		val bb = ByteBuffer.allocate(4)
		bb.putInt(msg.getSerializedSize)
		bb.flip
		socketHandle.asSocket write ByteString(bb)

		val bb2 = ByteBuffer.allocate(4)
		bb2.putInt(messageType)
		bb2.flip
		socketHandle.asSocket write ByteString(bb2)
		socketHandle.asSocket write ByteString(msg.toByteArray)
	}

	def receive = {
		case m:DirectMessage => {
			log.debug("I received a direct message.")
			writeRawMessage(m, MessageTypes.DirectMessage)
		}
	}
}
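
For reference, the frame writeRawMessage puts on the wire is a 4-byte big-endian payload length, a 4-byte message type, and then the raw protobuf bytes. A client could pull one frame back apart with something like this (a blocking sketch for illustration only, not code from my actual client):

import java.io.{DataInputStream, InputStream}

// Illustration only: reads one [length][type][payload] frame from a blocking stream.
// DataInputStream.readInt is big-endian, matching ByteBuffer's default byte order above.
def readRawMessage(in: InputStream): (Int, Array[Byte]) = {
  val data = new DataInputStream(in)
  val length = data.readInt()        // serialized size of the protobuf payload
  val messageType = data.readInt()   // key into the dispatch map
  val payload = new Array[Byte](length)
  data.readFully(payload)
  (messageType, payload)
}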

This is also my attempt to delay side effects as long as possible, which is a core functional programming mantra. A message being written on the wire to a particular socket is a side effect of the exchange of messages between Actors in my Actor System (the game grid). Note that I don’t have a “player” class floating around anywhere to represent a single player; that role is filled by the GCP. In this blog post, I talked about the importance of modeling an actor system in a way that gets work done, which may not be the same as creating an actor for every potentially interactive thing visible to a player.

So, with this bit of infrastructure in place, I’m now in a position where I can start building small multiplayer interactions. From there, you scale out, you test, and you build more interaction. The key point, in a situation like mine where I’m just one person and not an army, is that I make small bits work, I test as I go, and, most importantly, the act of creating the game should be just as fun as playing it.

So far, with Akka, it’s been a blast.