Kotan Code 枯淡コード

In search of simple, elegant code



Adding a Configurable Global Route Prefix in ASP.NET Core

This morning I was asked if it was possible to set up a configurable global route prefix in ASP.NET Core applications. I’ve done this in the past using the old (now legacy) Web API as well as with older versions of ASP.NET MVC, but I hadn’t yet tried it with ASP.NET Core.

Before I get into the how, I think it’s worth mentioning a little of the why. In this case, I have an application (and, like all applications, it’s just a microservice) that I run locally, but that I also deploy to a number of environments (clouds or cloud installations). In some of those environments, the application runs as a sub-domain, e.g. myapp.mycompany.com. In other environments, it runs behind a path prefix like mycompany.com/myapp. I don’t want to hard-code this capability into my application; I want the environment to tell my application which mode it is operating in, without my having to rebuild the app.

To do this, we need to create a new convention. Route conventions are just code that executes an Apply() method against the application model. This Apply() method is executed the first time Kestrel receives an inbound request, which is when MVC evaluates the entire application to prepare all of the routes.

Filip W (@filip_woj on Twitter) has put together a sample that shows how to do this. The following snippet is from his code, which can be found on GitHub here.

using System.Linq;
using Microsoft.AspNetCore.Mvc.ApplicationModels;
using Microsoft.AspNetCore.Mvc.Routing;

public class RouteConvention : IApplicationModelConvention
{
    private readonly AttributeRouteModel _centralPrefix;

    public RouteConvention(IRouteTemplateProvider routeTemplateProvider)
    {
        _centralPrefix = new AttributeRouteModel(routeTemplateProvider);
    }

    public void Apply(ApplicationModel application)
    {
        foreach (var controller in application.Controllers)
        {
            // Controllers that already have a Route() attribute: combine the prefix with the existing template.
            var matchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel != null).ToList();
            if (matchedSelectors.Any())
            {
                foreach (var selectorModel in matchedSelectors)
                {
                    selectorModel.AttributeRouteModel = AttributeRouteModel.CombineAttributeRouteModel(_centralPrefix,
                        selectorModel.AttributeRouteModel);
                }
            }

            // Controllers with no route attribute at all: use the prefix as their route.
            var unmatchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel == null).ToList();
            if (unmatchedSelectors.Any())
            {
                foreach (var selectorModel in unmatchedSelectors)
                {
                    selectorModel.AttributeRouteModel = _centralPrefix;
                }
            }
        }
    }
}

The short version of what’s happening here: when the Apply() method is called, we iterate through all of the controllers that the application model knows about. For each of those controllers, we query the selectors that already have an attribute route model. If a controller has at least one matching selector (it has a Route() attribute on it), we combine that controller’s route attribute with the prefix. If the controller doesn’t have a matching selector (there are no route attributes), we simply set the prefix as its route.

This has the net effect of adding the prefix we’ve defined to every single one of the RESTful routes defined in our application. To use this new convention, we add it while we’re configuring services during startup (also shown in the GitHub demo). Note that UseCentralRoutePrefix is an extension method that lives in the linked GitHub repo; a sketch of it appears after the snippet below:

public class Startup
{
    public Startup(IHostingEnvironment env)
    {}

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(opt =>
        {
            opt.UseCentralRoutePrefix(new RouteAttribute("api/v{version}"));
        });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddDebug();
        app.UseMvc();
    }
}
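
The UseCentralRoutePrefix extension method isn’t shown above. Here is a minimal sketch of what it might look like, assuming it does nothing more than register the RouteConvention with MVC’s conventions (the authoritative version is in the linked repo):

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Routing;

public static class MvcOptionsExtensions
{
    // Registers the RouteConvention so the supplied prefix is applied to every controller.
    public static void UseCentralRoutePrefix(this MvcOptions opts, IRouteTemplateProvider routeAttribute)
    {
        opts.Conventions.Insert(0, new RouteConvention(routeAttribute));
    }
}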

Again, this isn’t my code; these are just extremely useful nuggets that I found on the Strathweb blog. Now that we’ve got a convention and we know how to add that convention, we can modify the above code slightly so that we can accept an override from the environment, allowing us to deploy to multiple cloud environments.

We don’t really want to read directly from the environment. Instead, we want to use AddEnvironmentVariables when setting up our configuration. This allows us to set the app’s global prefix in a configuration file, and then override it with an environment variable, giving us the best flexibility while developing locally and pushing to a cloud via CI/CD pipelines.

So, our new Startup method looks like this:

public Startup(IHostingEnvironment env)
{
  var builder = new ConfigurationBuilder()
      .SetBasePath(env.ContentRootPath)
      .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
      .AddEnvironmentVariables();
  Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }

And we can modify the ConfigureServices method so that we read the prefix from configuration (which will have already done the overriding logic during the Startup method):

public void ConfigureServices(IServiceCollection services)
{
  // GetSection never returns null, so read its value and fall back to an empty prefix.
  var prefix = Configuration.GetSection("globalprefix").Value ?? string.Empty;
  services.AddMvc(opt =>
  {
      opt.UseCentralRoutePrefix(new RouteAttribute(prefix));
  });
}

And that’s it. Using the RouteConvention class we found on Filip W’s blog and combining that with environment-overridable configuration settings, we now have an ASP.NET Core microservice that can run as the root of a sub-domain (myapp.foo.com) or at any level of nesting in context paths (foo.com/this/is/myapp) all without having to hard-code anything or recompile the app.

Finally, we can put something like this in our appsettings.json file:

{
  "globalprefix" : "deep/root"
}

And optionally override that with an environment variable of the same name.
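
Because the environment variables provider is added after the JSON file, anything set in the environment wins. On Cloud Foundry, for example, you could change the prefix without touching appsettings.json; a hypothetical sequence (the app name myapp is just a placeholder) might look like this:

cf set-env myapp globalprefix some/other/root
cf restage myapp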

Migrating an ASP.NET Core RC1 Microservice to RC2

In a recent blog post, I talked about how to deploy an ASP.NET Core RC1 service to the cloud. With yesterday’s release of ASP.NET Core RC2, I had to make a number of minor changes to my code in order to get it working both locally and in the cloud.

The biggest change between RC1 and RC2 is an architectural one. In RC1, the thing responsible for prepping the application lived inside the dnx tool, and it was tightly coupled to the notion of building web applications. In RC2, we’re returning to a more explicit model with a Program class and a Main() method, where we write our own code to construct the WebHostBuilder instead of it being done implicitly on our behalf.

First, there are changes to the structure of the project.json file. Rather than go through them all, I’m just going to point you to the modified project.json for the Zombies service. The easiest thing to do is to take a project.json file created by new RC2 tooling and use that as a template to adapt your existing file.
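
For orientation only, an RC2-era project.json for a web app has roughly this shape; this is a sketch, not the actual file from the repo, so take the exact package names and versions from the newly generated project you’re adapting from:

{
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0-rc2-3002702"
    },
    "Microsoft.AspNetCore.Mvc": "1.0.0-rc2-final",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.0-rc2-final",
    "Microsoft.Extensions.Configuration.CommandLine": "1.0.0-rc2-final"
  },
  "frameworks": {
    "netcoreapp1.0": {}
  }
}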

The next thing I did was add a Program class. You’ll notice that it calls UseStartup<Startup>() so that all of the old startup code I had from RC1 still runs:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;

namespace SampleMicroservice
{
  public class Program
  {
    public static void Main(string[] args)
    {
      var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

      var host = new WebHostBuilder()
        .UseConfiguration(config)
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .Build();

      host.Run();
    }
  }
}

The next change is that I had to use the AddCommandLine() method. Without this, our application isn’t going to respond to or care about the --server.urls parameter that is going to get passed to it by the buildpack when we push to Cloud Foundry.

I had to make a few tiny little changes to the Startup class, but it remains essentially unchanged:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

namespace SampleMicroservice
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
                .AddJsonFile("ZombieConfig.json", optional: true, reloadOnChange: false)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; set; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
            services.AddOptions();
            services.Configure<ZombieOptions>(Configuration);

            services.AddScoped<IZombieRepository, ZombieRepository>();
            services.AddSingleton<IGlobalCounter, GlobalCounter>();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();
            app.UseMvc();
        }
    }
}

The Configure<ZombieOptions>() method is supported by the same extension method as in RC1, except this method is in a new package, so we had to add a dependency on Microsoft.Extensions.Options.ConfigurationExtensions to our project.json.
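
For context, ZombieOptions is just a plain options POCO: Configure<ZombieOptions>(Configuration) binds matching configuration keys to its properties, and anything that asks for IOptions<ZombieOptions> gets the bound values injected. The properties below are hypothetical placeholders, not the ones from the repo:

public class ZombieOptions
{
    // Hypothetical settings, bound from ZombieConfig.json, appsettings.json, or
    // environment variables by services.Configure<ZombieOptions>(Configuration).
    public string Greeting { get; set; }
    public int MaxZombieCount { get; set; }
}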

With all this in place, we can simply run dotnet restore, then dotnet build, and finally cf push, and that’s that… our app is now running on RC2 in Cloud Foundry.

All of the code for the RC2 version of this app can be found in my github repository.

Using Cloud Foundry Bound Services with SteelToe on ASP.NET Core

Before I begin, if you’re not familiar with the concept of backing services or bound services in Cloud Foundry (all of the implementations, including PCF Dev, support service bindings in some form), check out the Pivotal documentation. The short of it is that rather than your application hard-coding information about the backing services with which it communicates, it should instead externalize that configuration. One way of doing this is through Cloud Foundry’s services mechanism.

When you bind services, either user-provided or brokered, they appear in an environment variable given to your application called VCAP_SERVICES. While this is just a simple JSON value, and you could pretty easily parse it on your own, you shouldn’t have to. This is a solved problem and there’s a library available for .NET that includes as part of its foundation some DI-friendly classes that talk to Cloud Foundry’s VCAP_SERVICES and VCAP_APPLICATION environment variables.

This library is called Steel Toe. I will save the history and origin of the name Steel Toe for a future blog post. For now, you can think of Steel Toe as a set of libraries that enable ASP.NET applications to participate as clients in the Netflix OSS microservice ecosystem by talking to Configuration Server and the Discovery Server (Eureka).

So, the first thing you need to do is add a NuGet reference to the assembly SteelToe.Extensions.Configuration.CloudFoundry. To do this, you’ll have to use the feed SteelToe is on, since you won’t find it in the regular Microsoft feed. SteelToe packages can currently be found here:

  • Master – https://www.myget.org/F/steeltoemaster/api/v3/index.json
  • Dev – https://www.myget.org/F/steeltoedev/api/v3/index.json

Once you’ve told either your command line tools or Visual Studio 2015 where to go look for Steel Toe, you can add a reference to the Assembly mentioned above (there are more than this one, but for this blog post we’re only covering Cloud Foundry configuration).

To add Cloud Foundry’s environment variables as a configuration provider, we just use the AddCloudFoundry method in our Startup method:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .AddEnvironmentVariables()
    .AddCloudFoundry();
Configuration = builder.Build();

At this point we could continue and just use the syntax vcap:services:p-mysql:0:uri for raw configuration values, but I like injecting service binding information using the IOptions feature, which converts the service bindings into a collection of POCOs. To use it, our ConfigureServices method might look something like this:

  services.AddMvc();

  services.AddOptions();
  services.Configure<CloudFoundryApplicationOptions>(Configuration);
  services.Configure<CloudFoundryServicesOptions>(Configuration);

This is going to make both our application metadata and our service bindings available as injectable options. We can make use of these easily in a controller. The following is a modified “values” controller borrowed right from the Visual Studio “new project” template for ASP.NET Web API. I’ve added a constructor that allows these options to be injected:

[Route("api/[controller]")]
public class ValuesController : Controller
{
    public ValuesController(IOptions<CloudFoundryApplicationOptions> appOptions,
        IOptions<CloudFoundryServicesOptions> serviceOptions)
    {
        AppOptions = appOptions.Value;
        ServiceOptions = serviceOptions.Value;
    }

    CloudFoundryApplicationOptions AppOptions { get; set; }
    CloudFoundryServicesOptions ServiceOptions { get; set; }

    [HttpGet]
    public Object Get()
    {
        return new
        {
            AppName = AppOptions.ApplicationName,
            AppId = AppOptions.ApplicationId,
            Service = new
            {
                Label = ServiceOptions.Services[0].Label,
                Url = ServiceOptions.Services[0].Credentials["url"].Value,
                Password = ServiceOptions.Services[0].Credentials["password"].Value
            }
        };           
    }
}

Now if I bind a service to my application that has a url credential property and a password credential property, I’ll be able to see that when I issue a GET to /api/values. Here’s a sample output that I get from using the REST api for my service:

{"AppName":"bindings-example","AppId":"a605f066-8c2c-4ae3-8dbf-b2e7931fc39d","Service":{"Label":"user-provided","Url":"http://foo.bar","Password":"l33t"}}

You should try this out – create an application, copy some of the code from this blog post, push it to Cloud Foundry, and bind a couple of user-provided services to it and see how incredibly easy Steel Toe makes externalizing your backing services configuration.
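
To try the exact scenario from the sample output above with the cf CLI, the flow is roughly the following (the service name my-ups is a placeholder; the app name matches the sample output):

cf create-user-provided-service my-ups -p '{"url":"http://foo.bar","password":"l33t"}'
cf bind-service bindings-example my-ups
cf restage bindings-example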

Deploying ASP.NET 5 (Core) Apps to Windows in Cloud Foundry

Today I found myself sequestered in a room with a pile of super smart people. This is my favorite kind of day. We had already experimented with pushing an ASP.NET Core application to Cloud Foundry on Linux, but we hadn’t yet tried whether it would work on Windows.

There are a bunch of wrong ways to do this, and we discovered a number of them. First, it’s possible (and actually a default in RC1) to use dnu publish to produce an application that needs to be compiled and doesn’t have its dependencies fully vendored. This is not only a bad idea, but it violates a number of the 12 factors, so we can’t have that.

In Release Candidate 1, you need to explicitly specify which runtime you want bundled with the app as well as indicate that you want your application compiled via the no-source option:

dnu publish --no-source --runtime active

Assuming we’ve already got a functioning, buildable application and we run this from the application’s project root directory, this will then run through a ton of compile spam and finally publish the app to a directory like bin/output/approot/, but you can override this location as well.

When RC2 comes around, we are likely to see the publish command show up as a parameter to the dotnet command line tool. Unfortunately, even though I’ve got an early RC2 build, there’s no way of knowing if that’s going to change so I don’t want to confuse anyone by speculating as to the final syntax.

Next we go into this directory and modify the web.cmd file. At the very end of this file you’ll see a line that looks like this:

@"%DNX_PATH%" --project "%~dp0packages\SampleMicroservice\1.0.0\root" --configuration Debug web %*

Make note of the %~dp0packages section of this. This substitution is looking at the packages sub-directory within the published approot directory. If we didn’t publish with the --no-source option, then our compiled service would not appear as a package here.

While this will work just fine on a plain vanilla Windows Server box, it’s not going to work in the cloud because cloud applications need to adhere to the port binding factor. This means that the PaaS needs to be able to tell the application on which port to listen, because it is taking care of mapping the inside-container ports to the outside-container ports (e.g. port 80).

So we modify this last line of web.cmd to look like this:

@"%DNX_PATH%" --project "%~dp0packages\SampleMicroservice\1.0.0\root" --configuration Debug --server.urls=%PORT% web %*

We’ve added the %PORT% environment variable, which we know is set by Cloud Foundry for all applications, Windows or Linux. Now our start command will use the application-bundled runtime to launch the application, which will have all of its dependencies vendored locally. The only thing we need to do before we push to Cloud Foundry is ensure that our Windows Cell has the right frameworks installed on it. For our test, we made sure that ASP.NET 4.6 was installed.

Now we can push the application, assuming we’re using the default name for the Windows stack:

cf push sample-service -b binary_buildpack -s windows2012R2 -c "approot\web.cmd"

If your application does not expose a legitimate response to a GET of the root (GET /) then you’ll want to push it with --health-check-type none so PCF doesn’t think your app is crashing. Note that you have to specify the location pointing to the web.cmd file to set your start command. Your app isn’t going to start properly without that.

It would be nice if Microsoft let us supply parameters to alter the generated start command inside web.cmd, but we’ll take what we can get.

While it’s a little bit inconvenient to have to modify the web.cmd generated by the publish step, it isn’t all that difficult to push your ASP.NET 5/Core RC1 application to Pivotal Cloud Foundry on Windows cells.

Debugging Node.js Applications in Cloud Foundry

I just added a post over on Medium describing some steps we took to do remote debugging of a Node.js application running in Cloud Foundry.

Check out the post here: https://medium.com/@KevinHoffman/debugging-node-js-applications-in-cloud-foundry-b8fee5178a09#.dblflhdxm

Migrating to ASP.NET Core in the Cloud: What you will and won’t miss from ASP.NET 5

The folks over at InfoQ have graciously provided a nice list of technology that has been discontinued in .NET Core. You can find their original list here. They present the list without much opinion or color. In this blog post, I’d like to take a look at that list from the point of view of someone considering using ASP.NET Core for cloud native application development.

Reflection

Reflection isn’t gone, it’s just changed a little bit. When you make things designed to work on multiple operating systems without having to change your code, some of your original assumptions about the code no longer apply. Reflection is one of those cases. Throughout my career, the majority of the times I’ve used Reflection in a running, production application have either been for enhanced debug information dumps in logs, or, to be honest, I was working around a complex problem in entirely the wrong way.
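
To make that concrete, the most visible difference is that much of the reflection surface moved from Type onto TypeInfo, so .NET Core code typically goes through GetTypeInfo(). A minimal sketch (nothing here is specific to any one project):

using System;
using System.Reflection;

public static class ReflectionExample
{
    public static void Main()
    {
        // GetTypeInfo() bridges Type and TypeInfo; most of the members that used to
        // hang directly off Type live on TypeInfo in .NET Core.
        TypeInfo typeInfo = typeof(Uri).GetTypeInfo();

        Console.WriteLine(typeInfo.Assembly.FullName);

        foreach (Attribute attribute in typeInfo.GetCustomAttributes())
        {
            Console.WriteLine(attribute.GetType().Name);
        }
    }
}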

If you find yourself thinking that you need reflection for your .NET microservice, ask yourself if you really, really need it. Are you using it because you’re adopting some one-off, barely supported serialization scheme? Are you using it because you’re afraid your regular diagnostics aren’t good enough? I would be shocked if I ever had to use Reflection in 2016 directly when .NET Core already provides me with JSON and XML serializers.

App Domains

Manipulating application domains is something that I would consider a cloud native anti-pattern. Historically, we’ve used AppDomains for code and data isolation and as a logical unit of segmentation for CAS (Code Access Security). If your deployment target is the cloud (or basically any PaaS that uses container isolation), then I contend you should have nothing in your app that does any kind of AppDomain shenanigans. In fact, from the original article, here’s a quote from Microsoft:

For code isolation, we recommend processes and/or containers

Remoting

Ah, Remoting, how do I hate thee? Let me count the nearly infinite ways… The following statement from InfoQ’s article makes me feel old. VERY old.

These days few developers even remember the Remoting library exists, let alone how to use it.

Not only do I remember how to use it, but I wrote entire chapters for popular books on using Remoting. I’ve written custom communication channels for Remoting, implemented distributed trace collection systems that predate Splunk, and other things I shall not admit publicly… all with Remoting.

The bottom line is there is no place for a technology like Remoting in an ecosystem of services all scaling elastically and moving dynamically in the cloud. It’s dinosaur tech and those of us who used to write code against it for a living are likely also dinosaurs.

You’re not going to need this in the cloud, so its absence from .NET Core is a blessing.

Serialization

Most of the tools you need for serialization are still there. Binary serialization is one of those ugly things that used to reach deep into the bowels of your objects, even private members, and capture a POCO’s state that way.

If you need high-performance binary/byte array serialization of your .NET classes, use protocol buffers. You can use them from .NET Core.

If you’re doing more traditional service-to-service communication, then you’ll be just fine with JSON and XML. As far as building services for the cloud, you should be blissfully unaware of most serialization gaps between ASP.NET 4.x and .NET Core.

Sandboxing

Sandboxing is something that has traditionally had strong, low-level OS ties. This makes its implementation in a cross-platform framework a little more complicated and subject to least common denominator culling. If you’re just building services, you should be fine, but there’s a sentence in the InfoQ article that is decidedly not cloud-native:

The recommended alternative is to spawn separate processes that run under a user account with restricted permissions

This is not something you should be doing in the cloud. As a cloud native developer, you should be unconcerned with the identity with which your application process runs – this is a detail that is abstracted away from you by your PaaS. If you need credentials to access a backing service, you can deal with that using bound resources or bound services, which is all externalized configuration.

Sandboxing as the article describes isn’t something you should be doing when developing services for the cloud, so you should hopefully be well insulated from any of the changes or removals in .NET Core related to this.

Miscellaneous

There are a handful of miscellaneous changes also mentioned by the InfoQ article:

  • DataTable/DataSet – I feel so. very. old. Many of you will likely have no idea what these things are. This is how your grandparents communicated with SQL data sources prior to the invention of Entity Framework, nHibernate, or LINQ to SQL. That’s right, we wrote code uphill both ways in the snow. Get off my lawn. You will not need these in the cloud.
  • System.DirectoryServices – For some pretty obvious reasons, this doesn’t exist. You shouldn’t need to use this at all. If you need to talk to LDAP, you can do so using simpler interfaces or, better yet, through a token system like OAuth2.
  • System.Transactions – Distributed transactions, at least the traditional kind supported by libraries like this, are a cloud-native anti-pattern. Good riddance.
  • XSL and XmlSchema – Ok, so these might still be handy and I can see a couple types of services that might actually suffer a bit from their absence in the framework. Good news is that .NET Core is open source, so if enough people need this, either Microsoft or someone else will put it in.
  • System.Net.Mail – If you need to send mail from your cloud native service, you should consider using a brokered backing service for e-mailing. Just about every PaaS with a service marketplace has some kind of cloud-based mailing system with a simple REST API.
  • System.IO.Ports – I’m fairly certain you should not be writing code that communicates with physical serial ports in your cloud native service.
  • Workflow – Windows Workflow Foundation is also gone from .NET Core. Good riddance. I did a tremendous amount of work with that beast, and I tried harder than most mere mortals to like it, to embrace it, and to use it for greatness. I was never pleased with anything I produced in WF. The amount of hacky statefulness required to get it working would have immediately put this tech onto the cloud native naughty list anyway.
  • XAML – You do not need XAML to build a cloud service, so this is also no big loss.

Conclusion

The bottom line is that, aside from a few high-friction experiences in the current release candidate, the feature set of ASP.NET Core contains everything you need right now to build microservices for the cloud. The biggest concern isn’t what’s missing from .NET Core; it’s which third-party libraries for accessing backing services are either missing or not ready for prime time. That’s where I see the risk in early adoption, not in the core feature set of the framework.

Pushing an ASP.NET 5 Microservice to Cloud Foundry

In my previous blog post, I walked through the process of creating a lightweight microservice in ASP.NET 5 that builds in a cross-platform fashion, so it can run on Windows, Linux, and OS X. I’m going to gloss over how completely amazing it is that we now live in a time where I can write cross-platform microservices in C# using emacs.

If you’re building microservices, then there’s a good chance you’re thinking about deploying that microservice out into some Platform as a Service provider. For obvious reasons, my favorite PaaS target is Cloud Foundry (you can use any of the commercial providers that rest atop the CF Foundation like Pivotal Web Services, IBM Bluemix, HP, etc).

Thankfully, there is a community supported buildpack for ASP.NET 5. I believe it’s up to date with beta 8 of ASP.NET 5, so there is a minor modification you’ll have to make in order to get your application to launch in Cloud Foundry.

First, go into the project.json file and make sure your commands section looks like this:

"commands": {
    "web": "Microsoft.AspNet.Server.Kestrel",
    "kestrel": "Microsoft.AspNet.Server.Kestrel"
  },

The reason for this is that the buildpack expects to use the default launch command from pre-RC1, kestrel. If you’ve used the RC1 project generator, then the default kestrel launch command is just called web. The easy fix for this, as shown above, is just to create a duplicate command called kestrel. Now you can launch your app with dnx web or dnx kestrel, and that’ll make the buildpack happy.

With that one tiny little change, you can now push your application to the cloud:

cf push zombieservice -b https://github.com/cloudfoundry-community/asp.net5-buildpack.git

If you’re not familiar with the cf command line or Cloud Foundry, go check out Pivotal Web Services, a public commercial implementation of Cloud Foundry.

It will take a little while to deal with uploading the app, gathering the dependencies, and then starting. But, it works! To recap, just so this sinks in: I can write an ASP.NET microservice, in C#, using my favorite IDE (vi!), on my Mac, and deploy it to a container in the cloud, which is running Linux. Oh yeah, and if I really feel like it I can run this on Windows, too.

A few years ago, I would never have believed that something like this would be possible.

Creating a Microservice with Spring Boot

It’s no secret that I’m a big fan of microservices. I have blogged about creating a microservice with Akka, and I’m an avid follower of all things service-oriented. This weekend I decided that I would try and see why people are so excited about Spring Boot, and, as a foot in the door to Spring Boot, I would build a microservice.

The first issue I encountered was a lot of conflicting advice on where to get started. For an opinionated framework, it felt awkward that so many people had so many recommendations just to get into the Hello World phase. You can download the Spring CLI, or you can use the Spring Boot starter service online to create a starter project. You can also choose to have your project built by Gradle or Maven.

Since I’m on a Mac, I made sure my homebrew installation was up to date and just fired off:

brew install gvm

I did this so I could have gvm manage my springboot installations. I used gvm to install spring boot as follows:

gvm install springboot

If you want you can have homebrew install springboot directly.

The next step is to create a new, empty Spring Boot project. You can do this by hitting up the Spring Initializr (http://start.spring.io) or you can use the Spring Boot CLI to create your stub (this still uses the Spring Initializr service under the covers).

$ spring init --build=gradle HelloService
Using service at https://start.spring.io
Project extracted to '/Users/khoffman/Code/SpringBoot/HelloService'

This creates a new application in a directory called HelloService. There is a DemoApplication class in the demo package that is decorated with the @SpringBootApplication annotation. Without going into too much detail (mostly because I don’t know much detail), this annotation tells Spring to enable automatic configuration based on discovered dependencies and tells it to automatically scan for components to satisfy DI requirements.
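
For reference, the generated entry point is a minimal class along these lines (your package and class names may differ depending on how you filled out the Initializr form):

package demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// @SpringBootApplication combines @Configuration, @EnableAutoConfiguration, and @ComponentScan.
@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}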

Next, I need to make sure that the project has access to the right annotations and components to let me rig up a basic REST controller, so I’ll add the following dependency to my build.gradle file in the dependencies section:

compile("org.springframework.boot:spring-boot-starter-web")

Now I can create a new file called ZombieController.java in src/main/java/demo/controller:

package demo.controller;

import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMapping;

@RestController
public class ZombieController {
  @RequestMapping("/zombies")
  public String getZombies() {
    return "Goodbye, cruel world";
  }
}

With no additional work or wiring up, I can now do a gradle build in the root of my application directory and then I can execute the application (the web server comes embedded, which is one of the reasons why it’s on my list of good candidates for microservice building):

java -jar build/libs/demo-0.0.1-SNAPSHOT.jar

Now hitting http://localhost:8080/zombies will return the string “Goodbye, cruel world”. This is all well and good, but I don’t think it goes far enough for a sample. Nobody builds microservices that return raw strings; they build microservices that return actual data, usually in the form of JSON.

First, let’s build a Zombie model object using some Jackson JSON annotations:

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import org.hibernate.validator.constraints.NotEmpty;

@JsonAutoDetect(getterVisibility = JsonAutoDetect.Visibility.NONE)
@JsonIgnoreProperties(ignoreUnknown = true)
public class Zombie {

  @NotEmpty
  @JsonSerialize
  @JsonProperty("name")
  private String name;

  @NotEmpty
  @JsonSerialize
  @JsonProperty("age")
  private int age;

  public Zombie(String name, int age) {
    this.name = name;
    this.age = age;
  }
  public String getName() {
    return name;
  }
  public int getAge() {
    return age;
  }
}

And now I can add a new method to my controller that returns an individual Zombie, and takes care of JSON serialization for me based on my preferences defined on the class:

 @RequestMapping("/zombies/{id}")
 public @ResponseBody Zombie getZombie(@PathVariable("id") int id) {
   return new Zombie("Bob", id);
 }

Now I can rebuild my application with gradle build (or I can install a gradle wrapper via gradle wrapper and then invoke ./gradlew build) and then run it again. Once it has compiled and it’s running again, I can hit the following URL with curl: http://localhost:8080/zombies/12

And I will get the following JSON reply:

{"name":"Bob","age":12}

And that’s basically it, at least for the simplest hello world sample of a Spring Boot service. Ordinarily, you wouldn’t return values directly from inside the controller; instead, controllers usually delegate to “auto wired” services, which perform the real work. But, for the purposes of my sample, I decided it was okay to leave the code in the controller.

So, what’s my conclusion? Well, writing a single REST method that returns fake data is by no means a way to judge an entire framework. However, if you’ve been doing RESTful services in Java and have not been using Spring Boot, then this is likely a super refreshing change of pace. I’ll likely keep poking around with it so I can get a better idea for how it behaves in a real production environment.

Google Protocol Buffers on iOS and Mac – Redux

In a previous blog post, I talked about how I used the stock C++ Google Protocol Buffers generated code and the native C++ library in my iOS application. It involved some trickery: I had to create a wrapper Objective-C++ class to keep the C++ from bleeding into my Cocoa project, I had to link a static library, and it actually bloated my application quite a bit.

Since then, I have found a great native Objective-C protobuf implementation that not only plugs into the regular protoc compiler, but also generates ARC-friendly code. Check out this library on GitHub for the Objective-C protobuf code.

Now, rather than fussing with creating a C++ object (which requires me to manually malloc and free!), I can just create a new protobuf object that feels natural and native to my iOS and Mac code:

ZombieSighting *sighting =
        [[[[[[[ZombieSighting builder] setDescription:@"This is a zombie"]
                setLatitude:21.007]
                setLongitude:18.214]
                setZombieType:ZombieTypeFast]
                setName:@"Lord Kevin of the Undead"] build];

With this implementation of protobufs, I can just do [sighting data] to get the NSData for the object, and I can de-serialize a protobuf object that I plucked off the wire far more easily than before:

DirectMessage *dm = [DirectMessage parseFromData:data];

See how much easier and cleaner everything is? The moral of the story here is good enough isn’t good enough. Just because you find one solution doesn’t mean it’s the best one. I am constantly in search of simpler, easier, more elegant ways of doing things and I think I’ve finally found a decent way of dealing with protobufs in Objective-C.

Designing Akka Actor Hierarchies for Online Games – Commerce

Lately I’ve been thinking a lot about the design of Actor systems and actor hierarchies, specifically as implemented by Akka. In a previous series of blog posts, I used Akka to build a telnet-gateway that went through to an Akka Actor System to support a MUD (Multi-User Dungeon/Dimension). Having learned a thing or two about high performance systems since then, I have changed my perspective on how some of these things should be designed.

User Space Modeling vs. Task Modeling

Let’s assume for a minute that we are building an MMO server that supports commerce. Players can walk into a shop and perform a number of tasks. Within the shop, players can sell goods from their inventory, buy goods from the shop keeper, or use the shop keeper as a means to engage in secure trade between players (assume for the sake of this example that players can’t exchange goods unless they’re standing in a shop – it helps illustrate my point later).

When modeling an Actor, especially in my MUD, it can be very easy to think of Actors with a clear correlation to objects within your virtual world. For example, we might create an Actor for the shop keeper and we might send him messages like:

shopKeep = context.ActorOf .... (find the shopkeeper standing in the same room as the player)
shopKeep ! SellStuff(player, 21, ItemPrototypes.JunkSword)
shopKeep ! SellStuff(player, 57, ItemPrototypes.Seashell)

In this scenario, every shop keeper in the game is a unique instance of the ShopKeeper actor. Each one can maintain its own state (its inventory, or wares). Sounds good, right? My inner MUDder likes this because it maps pretty much to what the virtual world looks like on my game client. So let’s scale this out a bit. Let’s say the game has 5,000 shop keepers total. That’s 5,000 instances of a shop keeper, each consuming roughly 300 bytes (Akka actor overhead is about 300 bytes) plus whatever it takes for their state, which works out to roughly 1.4MB of overhead to manage my shop keepers. Meh, 1.4MB is nothing in relative terms, so that’s no big deal. So far so good, right?

Maybe, but my new perspective says no. This particular type of scale-out is only useful if there is an even distribution of players compared to shop keepers. This is hugely important to remember. What if my game currently has 10,000 concurrent users, but instead of being spread out evenly across my virtual universe, someone posted a “FREE BEER AT SHOPKEEPER2921’s HOUSE! PAR-TAY!” sign at the Orc’s Crossing? Now I’ve got 7,500 players crowded into Shopkeeper2921‘s shop. Graphical client problems aside, this means this one actor is now trying to handle messages from 7,500 players who are buying, selling, and trading with each other. In short, that actor is going to get backed up, the game will appear slow to 7,500 of its players, and the proverbial shit has hit the fan.

Creating an actor for every shop keeper, actors for the shop keeper’s shop, actors for each person’s weapon, etc. is what I am going to call user space modeling. I have no idea if this is a real term, but it is now. It means that the architect of the actor system has decided to map user-perceived actors to real actors. This is what I was doing with my MUD. What I should be doing for better performance, and certainly better (cloud-sized) scalability, is task-based modeling.

Let’s take a look at this problem from a slightly different point of view, task-based modeling. Instead of creating actors to back virtual constructs like shop keepers, swords, and shops, we create actors that perform tasks. If we take a look at our game client, we see that we might be sending packet-messages like “sell stuff”, “buy stuff”, “trade offer”, “trade counter”, and “trade confirm”. Instead of sending these messages to whichever shop keeper happens to be standing near the player (user space modeling), we’ll simply send the messages to whichever Commerce Actor happens to be available based on Akka routing.

Let’s say now that we create an actor, CommerceActor, which accepts messages in case class form: Sell, Buy, TradeOffer, TradeCounter, and TradeConfirm. Instead of sending those messages to a nearby virtual shopkeeper, let’s send them to the “least busy” commerce actor among the N instances of CommerceActor that we’ve created with the Router mix-in trait.

Now our commerce logic might look something like this (did a little DSL-style stuff with the items just for giggles):


val shopKeeper = system.actorOf(Props[Commerce].withRouter(
    RoundRobinRouter(nrOfInstances = 10)))
shopKeeper ! Buy(shopId, 21 VorpalSwordOfDooom)
shopKeeper ! Trade(playerOne, playerTwo, 15 Gold)

Here the code is using a “round robin” router, which means each instance will be given a message in round robin fashion. You can also use a “least full mailbox” router, which essentially load balances it so that the shop keeper currently having the smallest backlog of transactions to process will be given the message. This feels right, and will scale. Even better, the routing configuration for a particular type of actor can be configured in the application.conf file. This means that you can dynamically change the number of instances of certain types of actors. This way, if players are suddenly doing a crapload of commerce and bogging your system down, you can beef up the number of instances of your commerce actor.

In task-based modeling for an actor system, actors are, as the Akka documentation recommends as a best practice, encapsulations of behavior and state that perform discrete tasks and either complete the task or delegate sub-tasks to other actors. The supervisor/actor hierarchy is created based on the work that needs to be done, which doesn’t necessarily have a 1:1 mapping with the virtual world the players inhabit.