Kotan Code 枯淡コード

In search of simple, elegant code


Tag: cloud (page 1 of 2)

Migrating an ASP.NET Core RC1 Microservice to RC2

In a recent blog post, I talked about how to deploy an ASP.NET Core RC1 service to the cloud. With yesterday’s release of ASP.NET Core RC2, I had to make a number of minor changes to my code in order to get it working both locally and in the cloud.

The biggest change between RC1 and RC2 is architectural. In RC1, the code responsible for bootstrapping the application lived inside the dnx tool, and it was tightly coupled to the notion of building web applications. In RC2, we return to a more explicit model with a Program class and a Main() method, where we write our own code to construct the WebHostBuilder instead of having it done implicitly on our behalf.

First, there are changes to the structure of the project.json file. Rather than go through them all, I’m just going to point you to the modified project.json for the Zombies service. The easiest thing to do is to take a project.json file created by new RC2 tooling and use that as a template to adapt your existing file.

The next thing I did was add a Program class. You’ll notice that I use the UseStartup&lt;Startup&gt;() method to invoke all of the old startup code I had from RC1:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;

namespace SampleMicroservice
{
  public class Program
  {
    public static void Main(string[] args)
    {
      var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

      var host = new WebHostBuilder()
        .UseConfiguration(config)
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .Build();

      host.Run();
    }
  }
}

The next change is that I had to use the AddCommandLine() method. Without it, our application isn’t going to respond to or care about the --server.urls parameter that the buildpack passes to it when we push to Cloud Foundry.

I had to make a few tiny little changes to the Startup class, but it remains essentially unchanged:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

namespace SampleMicroservice
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
                .AddJsonFile("ZombieConfig.json", optional: true, reloadOnChange: false)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; set; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
            services.AddOptions();
            services.Configure<ZombieOptions>(Configuration);

            services.AddScoped<IZombieRepository, ZombieRepository>();
            services.AddSingleton<IGlobalCounter, GlobalCounter>();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();
            app.UseMvc();
        }
    }
}

The Configure&lt;ZombieOptions&gt;() call is supported by the same extension method as in RC1, but that method now lives in a new package, so we had to add a dependency on Microsoft.Extensions.Options.ConfigurationExtensions to our project.json.
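For context, here’s a minimal sketch of how that options pattern plays out end to end. The property names on ZombieOptions are hypothetical (the real ones aren’t shown in this post); the Configure&lt;T&gt;/IOptions&lt;T&gt; wiring is the same as what the Startup class above does:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

namespace SampleMicroservice
{
  // Hypothetical options POCO; properties are bound from configuration keys with matching names.
  public class ZombieOptions
  {
    public string Greeting { get; set; }
    public int MaxZombies { get; set; }
  }

  // Any DI-managed class (a controller, for example) can then ask for IOptions<ZombieOptions>.
  public class ZombieConfigController : Controller
  {
    private readonly ZombieOptions _options;

    public ZombieConfigController(IOptions<ZombieOptions> options)
    {
      _options = options.Value; // .Value materializes the bound POCO
    }
  }
}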

With all this in place, we can simply run dotnet restore, then dotnet build, and finally cf push, and that’s that: our app is now running on RC2 in Cloud Foundry.

All of the code for the RC2 version of this app can be found in my GitHub repository.

Using Cloud Foundry Bound Services with SteelToe on ASP.NET Core

Before I begin, if you’re not familiar with the concept of backing services or bound services in Cloud Foundry (any of the implementations, including PCF Dev, support service bindings in some form), check out the Pivotal documentation. The short of it is that rather than your application hard-coding information about the backing services with which it communicates, it should instead externalize that configuration. One way of doing this is through Cloud Foundry’s services mechanism.

When you bind services, either user-provided or brokered, they appear in an environment variable given to your application called VCAP_SERVICES. While this is just a simple JSON value, and you could pretty easily parse it on your own, you shouldn’t have to. This is a solved problem and there’s a library available for .NET that includes as part of its foundation some DI-friendly classes that talk to Cloud Foundry’s VCAP_SERVICES and VCAP_APPLICATION environment variables.
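For reference, a user-provided service shows up in VCAP_SERVICES looking roughly like this (abbreviated, with illustrative values; brokered services include additional fields such as a plan and tags):

{
  "user-provided": [
    {
      "name": "my-backing-service",
      "label": "user-provided",
      "credentials": {
        "url": "http://foo.bar",
        "password": "l33t"
      }
    }
  ]
}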

The library in question is called Steel Toe. I will save the history and origin of the name Steel Toe for a future blog post. For now, you can think of Steel Toe as a set of libraries that enable ASP.NET applications to participate as clients in the Netflix OSS microservice ecosystem by talking to the Configuration Server and the Discovery Server (Eureka).

So, the first thing you need to do is add a NuGet reference to the assembly SteelToe.Extensions.Configuration.CloudFoundry. To do this, you’ll have to use the feed SteelToe is on, since you won’t find it in the regular Microsoft feed. SteelToe packages can currently be found here:

  • Master – https://www.myget.org/F/steeltoemaster/api/v3/index.json
  • Dev – https://www.myget.org/F/steeltoedev/api/v3/index.json

Once you’ve told either your command line tools or Visual Studio 2015 where to look for Steel Toe, you can add a reference to the assembly mentioned above (there are more packages than this one, but for this blog post we’re only covering Cloud Foundry configuration).

To add Cloud Foundry’s environment variables as a configuration provider, we just use the AddCloudFoundry method in our Startup method:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .AddEnvironmentVariables()
    .AddCloudFoundry();
Configuration = builder.Build();

At this point we could continue and just use the syntax vcap:services:p-mysql:0:uri for raw configuration values, but I like injecting service binding information using the IOptions feature, which converts the service bindings into a collection of POCOs. To use it, our ConfigureServices method might look something like this:

  services.AddMvc();

  services.AddOptions();
  services.Configure<CloudFoundryApplicationOptions>(Configuration);
  services.Configure<CloudFoundryServicesOptions>(Configuration);

This is going to make both our application metadata and our service bindings available as injectable options. We can make use of these easily in a controller. The following is a modified “values” controller borrowed right from the Visual Studio “new project” template for ASP.NET Web API. I’ve added a constructor that allows these options to be injected:

[Route("api/[controller]")]
public class ValuesController : Controller
{
    public ValuesController(IOptions<CloudFoundryApplicationOptions> appOptions,
        IOptions<CloudFoundryServicesOptions> serviceOptions)
    {
        AppOptions = appOptions.Value;
        ServiceOptions = serviceOptions.Value;
    }

    CloudFoundryApplicationOptions AppOptions { get; set; }
    CloudFoundryServicesOptions ServiceOptions { get; set; }

    [HttpGet]
    public Object Get()
    {
        return new
        {
            AppName = AppOptions.ApplicationName,
            AppId = AppOptions.ApplicationId,
            Service = new
            {
                Label = ServiceOptions.Services[0].Label,
                Url = ServiceOptions.Services[0].Credentials["url"].Value,
                Password = ServiceOptions.Services[0].Credentials["password"].Value
            }
        };           
    }
}

Now if I bind a service to my application that has a url credential property and a password credential property, I’ll be able to see that when I issue a GET to /api/values. Here’s a sample output that I get from using the REST api for my service:

{"AppName":"bindings-example","AppId":"a605f066-8c2c-4ae3-8dbf-b2e7931fc39d","Service":{"Label":"user-provided","Url":"http://foo.bar","Password":"l33t"}}

You should try this out: create an application, copy some of the code from this blog post, push it to Cloud Foundry, bind a couple of user-provided services to it, and see how incredibly easy Steel Toe makes externalizing your backing services configuration.

Deploying ASP.NET 5 (Core) Apps to Windows in Cloud Foundry

Today I found myself sequestered in a room with a pile of super smart people. This is my favorite kind of day. We have already experimented with pushing an ASP.NET Core application to Cloud Foundry on linux, but we haven’t tried out whether it would work on Windows.

There are a bunch of wrong ways to do this, and we discovered a number of them. First, it’s possible (and actually a default in RC1) to use dnu publish to produce an application that needs to be compiled and doesn’t have its dependencies fully vendored. This is not only a bad idea, but it violates a number of the 12 factors, so we can’t have that.

In Release Candidate 1, you need to explicitly specify which runtime you want bundled with the app as well as indicate that you want your application compiled via the no-source option:

dnu publish --no-source --runtime active

Assuming we’ve already got a functioning, buildable application and we run this from the application’s project root directory, this will then run through a ton of compile spam and finally publish the app to a directory like bin/output/approot/, but you can override this location as well.

When RC2 comes around, we are likely to see the publish command show up as a parameter to the dotnet command line tool. Unfortunately, even though I’ve got an early RC2 build, there’s no way of knowing if that’s going to change so I don’t want to confuse anyone by speculating as to the final syntax.

Next we go into this directory and modify the web.cmd file. At the very end of this file you’ll see a line that looks like this:

@"%DNX_PATH%" --project "%~dp0packages\SampleMicroservice\1.0.0\root" --configuration Debug web %*

Make note of the %~dp0packages section of this. This substitution is looking at the packages sub-directory within the published approot directory. If we didn’t publish with the --no-source option, then our compiled service would not appear as a package here.

While this will work just fine on a plain vanilla Windows Server box, it’s not going to work in the cloud because cloud applications need to adhere to the port binding factor. This means that the PaaS needs to be able to tell the application on which port to listen, because it is taking care of mapping the inside-container ports to the outside-container ports (e.g. port 80).

So we modify this last line of web.cmd to look like this:

@"%DNX_PATH%" --project "%~dp0packages\SampleMicroservice\1.0.0\root" --configuration Debug --server.urls=%PORT% web %*

We’ve added the %PORT% environment variable, which we know is set by Cloud Foundry for all applications, Windows or Linux. Now our start command will use the application-bundled runtime to launch the application, which will have all of its dependencies vendored locally. The only thing we need to do before we push to Cloud Foundry is ensure that our Windows Cell has the right frameworks installed on it. For our test, we made sure that ASP.NET 4.6 was installed.

Now we can push the application, assuming we’re using the default name for the Windows stack:

cf push sample-service -b binary_buildpack -s windows2012R2 -c "approot\web.cmd"

If your application does not expose a legitimate response to a GET of the root (GET /) then you’ll want to push it with --health-check-type none so PCF doesn’t think your app is crashing. Note that you have to specify the location pointing to the web.cmd file to set your start command. Your app isn’t going to start properly without that.

It would be nice if Microsoft let us supply parameters to alter the generated start command inside web.cmd, but we’ll take what we can get.

While it’s a little bit inconvenient to have to modify the web.cmd generated by the publish step, it isn’t all that difficult to push your ASP.NET 5/Core RC1 application to Pivotal Cloud Foundry on Windows cells.

Migrating to ASP.NET Core in the Cloud: What you will and won’t miss from ASP.NET 5

The folks over at InfoQ have graciously provided a nice list of technology that has been discontinued in .NET Core. You can find their original list here. They present the list without much opinion or color. In this blog post, I’d like to take a look at that list from the point of view of someone considering using ASP.NET Core for cloud native application development.

Reflection

Reflection isn’t gone, it’s just changed a little bit. When you make things designed to work on multiple operating systems without having to change your code, some of your original assumptions about the code no longer apply. Reflection is one of those cases. Throughout my career, the majority of the times I’ve used Reflection in a running, production application have either been for enhanced debug information dumps in logs or, to be honest, for working around a complex problem in entirely the wrong way.

If you find yourself thinking that you need reflection for your .NET microservice, ask yourself if you really, really need it. Are you using it because you’re adopting some one-off, barely supported serialization scheme? Are you using it because you’re afraid your regular diagnostics aren’t good enough? I would be shocked if I ever had to use Reflection directly in 2016, when .NET Core already provides me with JSON and XML serializers.

App Domains

Manipulating application domains is something that I would consider a cloud native anti-pattern. Historically, we’ve used AppDomains for code and data isolation and as a logical unit of segmentation for CAS (Code Access Security). If your deployment target is the cloud (or basically any PaaS that uses container isolation), then I contend you should have nothing in your app that does any kind of AppDomain shenanigans. In fact, from the original article, here’s a quote from Microsoft:

For code isolation, we recommend processes and/or containers

Remoting

Ah, Remoting, how do I hate thee? Let me count the nearly infinite ways… The following statement from InfoQ’s article makes me feel old. VERY old.

These days few developers even remember the Remoting library exists, let alone how to use it.

Not only do I remember how to use it, but I wrote entire chapters for popular books on using Remoting. I’ve written custom communication channels for Remoting, implemented distributed trace collection systems that predate Splunk, and other things I shall not admit publicly… all with Remoting.

The bottom line is there is no place for a technology like Remoting in an ecosystem of services all scaling elastically and moving dynamically in the cloud. It’s dinosaur tech and those of us who used to write code against it for a living are likely also dinosaurs.

You’re not going to need this in the cloud, so its absence from .NET Core is a blessing.

Serialization

Most of the tools you need for serialization are still there. Binary serialization is one of those ugly things that used to reach deep into the bowels of your objects, private members included, to convert POCOs into raw state.

If you need high-performance binary/byte array serialization of your .NET classes, use protocol buffers. You can use them from .NET Core.
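A rough sketch of what that might look like with the protobuf-net package (one of several protocol buffer implementations for .NET; the message type and field numbers here are purely illustrative):

using System.IO;
using ProtoBuf;

[ProtoContract]
public class ZombieMessage
{
    [ProtoMember(1)]
    public string Name { get; set; }

    [ProtoMember(2)]
    public int Age { get; set; }
}

public static class ZombieCodec
{
    // Serialize to a compact byte array without BinaryFormatter-style private-member spelunking
    public static byte[] ToBytes(ZombieMessage zombie)
    {
        using (var stream = new MemoryStream())
        {
            Serializer.Serialize(stream, zombie);
            return stream.ToArray();
        }
    }

    public static ZombieMessage FromBytes(byte[] payload)
    {
        using (var stream = new MemoryStream(payload))
        {
            return Serializer.Deserialize<ZombieMessage>(stream);
        }
    }
}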

If you’re doing more traditional service-to-service communication, then you’ll be just fine with JSON and XML. As far as building services for the cloud, you should be blissfully unaware of most serialization gaps between ASP.NET 4.x and .NET Core.

Sandboxing

Sandboxing is something that has traditionally had strong, low-level OS ties. This makes its implementation in a cross-platform framework a little more complicated and subject to least common denominator culling. If you’re just building services, you should be fine, but there’s a sentence in the InfoQ article that is decidedly not cloud-native:

The recommended alternative is to spawn separate processes that run under a user account with restricted permissions

This is not something you should be doing in the cloud. As a cloud native developer, you should be unconcerned with the identity with which your application process runs – this is a detail that is abstracted away from you by your PaaS. If you need credentials to access a backing service, you can deal with that using bound resources or bound services, which is all externalized configuration.

Sandboxing as the article describes isn’t something you should be doing when developing services for the cloud, so you should hopefully be well insulated from any of the changes or removals in .NET Core related to this.

Miscellaneous

There are a handful of miscellaneous changes also mentioned by the InfoQ article:

  • DataTable/DataSet – I feel so. very. old. Many of you will likely have no idea what these things are. This is how your grandparents communicated with SQL data sources prior to the invention of Entity Framework, NHibernate, or LINQ to SQL. That’s right, we wrote code uphill both ways in the snow. Get off my lawn. You will not need these in the cloud.
  • System.DirectoryServices – For some pretty obvious reasons, this doesn’t exist. You shouldn’t need to use this at all. If you need to talk to LDAP, you can do so using simpler interfaces or, better yet, through a token system like OAuth2.
  • System.Transactions – Distributed transactions, at least the traditional kind supported by libraries like this, are a cloud-native anti-pattern. Good riddance.
  • XSL and XmlSchema – Ok, so these might still be handy and I can see a couple types of services that might actually suffer a bit from their absence in the framework. Good news is that .NET Core is open source, so if enough people need this, either Microsoft or someone else will put it in.
  • System.Net.Mail – If you need to send mail from your cloud native service, you should consider using a brokered backing service for e-mailing. Just about every PaaS with a service marketplace has some kind of cloud-based mailing system with a simple REST API.
  • System.IO.Ports – I’m fairly certain you should not be writing code that communicates with physical serial ports in your cloud native service.
  • Workflow – Windows Workflow Foundation is also gone from .NET Core. Good riddance. I did a tremendous amount of work with that beast, and I tried harder than most mere mortals to like it, to embrace it, and to use it for greatness. I was never pleased with anything I produced in WF. The amount of hacky statefulness required to get it working would have immediately put this tech onto the cloud native naughty list anyway.
  • XAML – You do not need XAML to build a cloud service, so this is also no big loss.

Conclusion

The bottom line is that, aside from a few high-friction experiences in the current release candidate, the feature set of ASP.NET Core contains everything you need right now to build microservices for the cloud. The biggest concern isn’t what isn’t in .NET Core, it’s what third party libraries for accessing backing services are either missing or not ready for prime time. That’s where I see the risk in early adoption, not in the core feature set of the framework.

Pushing an ASP.NET 5 Microservice to Cloud Foundry

In my previous blog post, I walked through the process of creating a lightweight microservice in ASP.NET 5 that builds in a cross-platform fashion, so it can run on Windows, Linux, and OS X. I’m going to gloss over how completely amazing it is that we now live in a time where I can write cross-platform microservices in C# using emacs.

If you’re building microservices, then there’s a good chance you’re thinking about deploying that microservice out into some Platform as a Service provider. For obvious reasons, my favorite PaaS target is Cloud Foundry (you can use any of the commercial providers that rest atop the CF Foundation like Pivotal Web Services, IBM Bluemix, HP, etc).

Thankfully, there is a community supported buildpack for ASP.NET 5. I believe it’s up to date with beta 8 of ASP.NET 5, so there is a minor modification you’ll have to make in order to get your application to launch in Cloud Foundry.

First, go into the project.json file and make sure your commands section looks like this:

"commands": {
    "web": "Microsoft.AspNet.Server.Kestrel",
    "kestrel": "Microsoft.AspNet.Server.Kestrel"
  },

The reason for this is that the buildpack expects the pre-RC1 default launch command, kestrel. If you’ve used the RC1 project generator, the Kestrel launch command is just called web. The easy fix, as shown above, is to create a duplicate command called kestrel. Now you can launch your app with either dnx web or dnx kestrel, and that’ll make the buildpack happy.

With that one tiny little change, you can now push your application to the cloud:

cf push zombieservice -b https://github.com/cloudfoundry-community/asp.net5-buildpack.git

If you’re not familiar with the cf command line or Cloud Foundry, go check out Pivotal Web Services, a public commercial implementation of Cloud Foundry.

It will take a little while to upload the app, gather the dependencies, and start. But it works! To recap, just so this sinks in: I can write an ASP.NET microservice, in C#, using my favorite IDE (vi!), on my Mac, and deploy it to a Linux container in the cloud. Oh yeah, and if I really feel like it, I can run this on Windows, too.

A few years ago, I would never have believed that something like this would be possible.

Creating a Microservice with Spring Boot

It’s no secret that I’m a big fan of microservices. I have blogged about creating a microservice with Akka, and I’m an avid follower of all things service-oriented. This weekend I decided that I would try and see why people are so excited about Spring Boot, and, as a foot in the door to Spring Boot, I would build a microservice.

The first issue I encountered was a lot of conflicting advice on where to get started. For an opinionated framework, it felt awkward that so many people had so many recommendations just to get into the Hello World phase. You can download the Spring CLI, or you can use the Spring Boot starter service online to create a starter project. You can also choose to have your project built by Gradle or Maven.

Since I’m on a Mac, I made sure my homebrew installation was up to date and just fired off:

brew install gvm

I did this so I could have gvm manage my Spring Boot installations. I used gvm to install Spring Boot as follows:

gvm install springboot

If you want, you can have Homebrew install Spring Boot directly.

The next step is to create a new, empty Spring Boot project. You can do this by hitting the Spring Initializr (http://start.spring.io) or you can use the Spring Boot CLI to create your stub (this still uses the Spring Initializr service under the covers).

$ spring init --build=gradle HelloService
Using service at https://start.spring.io
Project extracted to '/Users/khoffman/Code/SpringBoot/HelloService'

This creates a new application in a directory called HelloService. There is a DemoApplication class in the demo package that is decorated with the @SpringBootApplication annotation. Without going into too much detail (mostly because I don’t know much detail), this annotation tells Spring to enable automatic configuration based on discovered dependencies and tells it to automatically scan for components to satisfy DI requirements.

Next, I need to make sure that the project has access to the right annotations and components to let me rig up a basic REST controller, so I’ll add the following dependency to my build.gradle file in the dependencies section:

compile("org.springframework.boot:spring-boot-starter-web")

Now I can create a new file called ZombieController.java in src/main/java/demo/controller:

package demo.controller;

import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMapping;

@RestController
public class ZombieController {
  @RequestMapping("/zombies")
  public String getZombies() {
    return "Goodbye, cruel world";
  }
}

With no additional work or wiring, I can now run gradle build in the root of my application directory and then execute the application (the web server comes embedded, which is one of the reasons Spring Boot is on my list of good candidates for building microservices):

java -jar build/libs/demo-0.0.1-SNAPSHOT.jar

Now hitting http://localhost:8080/zombies will return the string “Goodbye, cruel world”. This is all well and good, but I don’t think it goes far enough for a sample. Nobody builds microservices that return raw strings; they build microservices that return actual data, usually in the form of JSON.

First, let’s build a Zombie model object using some Jackson JSON annotations:

// Imports for the annotations used below (Jackson 2.x; @NotEmpty assumed to come from Hibernate Validator)
import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import org.hibernate.validator.constraints.NotEmpty;

@JsonAutoDetect(getterVisibility = JsonAutoDetect.Visibility.NONE)
@JsonIgnoreProperties(ignoreUnknown = true)
public class Zombie {

  @NotEmpty
  @JsonSerialize
  @JsonProperty("name")
  private String name;

  @NotEmpty
  @JsonSerialize
  @JsonProperty("age")
  private int age;

  public Zombie(String name, int age) {
    this.name = name;
    this.age = age;
  }

  public String getName() {
    return name;
  }

  public int getAge() {
    return age;
  }
}

And now I can add a new method to my controller that returns an individual Zombie, and takes care of JSON serialization for me based on my preferences defined on the class:

 @RequestMapping("/zombies/{id}")
 public @ResponseBody Zombie getZombie(@PathVariable("id") int id) {
   return new Zombie("Bob", id);
 }

Now I can rebuild my application with gradle build (or I can install a gradle wrapper via gradle wrapper and then invoke ./gradlew build) and then run it again. Once it has compiled and it’s running again, I can hit the following URL with curl: http://localhost:8080/zombies/12
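That curl invocation is simply:

curl http://localhost:8080/zombies/12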

And I will get the following JSON reply:

{"name":"Bob","age":12}

And that’s basically it, at least for the simplest hello-world sample of a Spring Boot service. Ordinarily, you wouldn’t return values directly from inside the controller; instead, controllers usually delegate to “auto-wired” services, which perform the real work. But for the purposes of my sample, I decided it was okay to leave the code in the controller.

So, what’s my conclusion? Well, writing a single REST method that returns fake data is by no means a way to judge an entire framework. However, if you’ve been doing RESTful services in Java and have not been using Spring Boot, then this is likely a super refreshing change of pace. I’ll likely keep poking around with it so I can get a better idea of how it behaves in a real production environment.

Internet of Things: Spark Core Day 1 – Controlling LEDs via the Cloud

In my last post I went through my initial experience unboxing, configuring, wiring, and coding for the Raspberry Pi 2. Overall that experience was positive, though I still think that companies like Adafruit need to be aware of the fact that when you sell something labeled a “Starter Kit” and market it to educators and kids, that shit should work out of the box. There’s absolutely no excuse for shipping SD cards whose image is two months out of date when the latest version of Raspbian is required for GPIO to work.

Anyway, next up on my list of Internet of Things experiments was the Spark Core. This thing is mind-blowingly remarkable: It’s a tiny little microcontroller that has a built-in WiFi antenna, but more importantly, it is backed by the Spark Cloud, which allows the device to phone home, be remotely addressable, remotely flashable, and expose its own firmware as a cloud-based API. It makes every single one of my geeky senses tingle.

My unboxing experience was pretty straightforward – I opened the box, saw the tiny microcontroller and its accompanying breadboard, and I squealed with joy. I immediately ripped the LEDs and resistors out of the previous Raspberry Pi 2 cobbler prototype and stuck them into the Core’s breadboard.

I then opened up a web browser to start coding the firmware for the Core. That’s right, the Core can be flashed remotely over the web from anywhere, using a browser. You really need to sit back and take a moment to let that concept marinate. This ain’t your old-fashioned microcontroller that you have to plug into a special EPROM burner to flash. This thing is absolute genius. It is a thing of beauty, and technologically badass.

Using the Wiring programming language (should be familiar to anyone who has written code for an Arduino before), I then wrote a bunch of code to expose functions to the cloud that turn on and off a red LED and a green LED. The picture below shows the Spark Core and its breadboard with my LEDs off:

Spark Core with LEDs Off

And this shows the same thing after I have used my iPhone to send the secure HTTP messages that manipulate my core via the cloud:

Spark Core with LEDs On

To recap: from unboxing to the point where I could control my Spark Core’s LEDs over the web, from anywhere, via the cloud, took me about an hour and a half. There are no words to describe how freaking amazing that is.

I can’t wait to start messing with sensors and inputs and branch out from just LED control!

Slide Deck from my NYC Code Camp Presentation

For those of you who attended my Code Camp NYC presentation on casual and indie gaming with Windows Phone 7 and “the cloud”, I have finally managed to find and upload the PPT slide deck. Ironically enough, the reason it took me so long is that I had trouble with the cloud drive on which the deck was stored 😉

I will be posting code samples that go along with this presentation in one form or another soon… so stay tuned!

Indie and Casual Game Development with Windows Phone 7

Struggling with SSL and Cloud-Backed Mobile Applications

While it is still certainly very common for mobile developers to build applications that stand on their own and do not communicate with the Internet, every day more and more mobile applications are released that are what we used to call “Smart Clients”. These apps sit on your mobile device (and are not browser apps or browser-shell apps). They might store data locally as a cache or back-up mechanism but the bulk of the application’s information comes from the Internet or, if you’re really upping the buzzword quota, the Cloud.

Let’s say you’re building a recipe application for mobile devices. This app is designed to let people access their recipes from any location so long as they have network data access. When you start the app, it queries a cloud-hosted server for your recipes and probably does so using some kind of secure authentication mechanism. Based on how a lot of cloud service providers are working lately, it’s very realistic to assume we’re doing Basic HTTP authentication over SSL.
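As a rough sketch of what that looks like on the wire (full desktop .NET shown for brevity; the host name and credentials are placeholders), Basic authentication over SSL is just an Authorization header sent over an https connection:

using System;
using System.IO;
using System.Net;
using System.Text;

class RecipeClientSketch
{
    static void Main()
    {
        // Placeholder host and credentials -- not a real service
        var request = (HttpWebRequest)WebRequest.Create("https://recipes.example.com/api/recipes");
        var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("username:password"));
        request.Headers["Authorization"] = "Basic " + token;

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}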

Now we get into the real benefit of putting the data in the cloud – cross-platform access. I decide to write a Windows Phone 7 app and an iOS app. They both talk to the same recipe database and I should be able to share data among them seamlessly. Why would I do this? Because many people own a non-Apple phone and an iPad. Additionally, families often share service accounts where different family members have different mobile devices. Think Evernote here – they have apps for Mac, Windows, iPhone, iPad, Android – all access your shared, cloud-hosted store of notes.

Here’s the rub, the sticky bits, the source of the last 3 nights of frustration for me: not all services provided in the cloud surface clean, valid SSL certificates.

What does that mean? It means that when you make HTTP requests over SSL from an SDK, you are at the mercy of the limitations of that SDK. For example, if I am writing a desktop or server application in .NET and I know that a cloud database I’m talking to has SSL certs that don’t match their host names, I just write code that looks like this to manually override the cert validation:

ServicePointManager.ServerCertificateValidationCallback =
    ((sender, certificate, chain, sslPolicyErrors) => true);

I can put conditional logic in there to only let certain certificates from certain hosts go through, but you get the idea. Bottom line is I have some back-up plan where I can write code to counteract what most people consider “bad net citizenship” (SSL certs that trigger warnings or worse, fail outright).
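For example, a slightly more careful version of that callback might look like the following sketch (the host name and thumbprint are placeholders; only bypass validation for hosts you explicitly trust):

using System.Net;
using System.Net.Security;

// Somewhere during application startup:
ServicePointManager.ServerCertificateValidationCallback =
    (sender, certificate, chain, sslPolicyErrors) =>
    {
        // Anything that actually validates is fine
        if (sslPolicyErrors == SslPolicyErrors.None)
            return true;

        // Otherwise, only tolerate a known certificate from a host we explicitly trust
        var request = sender as HttpWebRequest;
        return request != null
            && request.RequestUri.Host == "mydb.example.com"
            && certificate.GetCertHashString() == "EXPECTED-CERT-THUMBPRINT";
    };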

What about iOS? What happens if my iPhone application runs into a bad SSL certificate? Again, I should not have to write this kind of code, but it is possible. When you are picking your cloud service providers, I would give a lot of extra weight to providers whose data comes through on clean SSL certs.

Here’s the juicy part of the iOS code (part of an NSURLConnection delegate class) that manually overrides the SSL cert by responding to an authorization challenge:

- (BOOL)connection:(NSURLConnection *)connection
   canAuthenticateAgainstProtectionSpace:(NSURLProtectionSpace *)protectionSpace
{
    // Only step in for server trust (SSL certificate) challenges
    return [[protectionSpace authenticationMethod]
            isEqualToString:NSURLAuthenticationMethodServerTrust];
}

- (void)connection:(NSURLConnection *)connection
    didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
{
    NSURLCredential *credential = nil;
    NSURLProtectionSpace *protectionSpace;
    SecTrustRef trust;

    protectionSpace = [challenge protectionSpace];
    trust = [protectionSpace serverTrust];
    credential = [NSURLCredential credentialForTrust:trust];

    // say "yes" to everybody... Note, this is BAD
    [[challenge sender] useCredential:credential
          forAuthenticationChallenge:challenge];
}

Why do I say this is bad? Well, the whole point of an SSL certificate is to establish a secure connection between you and the remote server. If someone gets between you and the remote server (called a man-in-the-middle attack) they can do all kinds of nasty crap to your data and, because you’ve written the above code, they might be able to get away with it.

But let’s say that you’re willing to accept that risk and you plow on. So now you decide to write a Windows Phone 7 app that reads from the same cloud database. Your code fails because the SSL cert doesn’t match the host name (again, this is a really common problem when cloud hosts do things to support massive numbers of clients). Well, you just use the ServicePointManager class like you did with the desktop .NET app, right? Oops. No, actually, you’re utterly screwed. No such class exists in the WP7 SDK. In fact, there is absolutely no way in WP7 to override the validation of SSL certificates. In short, if an SSL cert chain fails and your WP7 app is on the client side of it, your app will not be able to talk to the remote server.

Case in point: I’ve been doing some experimenting with hosting an instance of CouchDB on Cloudant as an add-on to an application hosted in AppHarbor. This is great and they give you a nice SSL URL with a username and password embedded so that you can do awesome JSON goodness with CouchDB’s HTTP API. However. The certs from Cloudant fail validation on both iOS and WP7. About the only place it doesn’t fail validation is when using curl from the command line. If you’re using iOS you’ll be forced to hack around the SSL cert validation. If you’re using WP7, you’ll be totally boned.

So, what are the key points I want you to take away from this blog?

  1. If you’re using WP7, complain and never stop complaining until the folks at Microsoft catch up to all the other mobile operating systems and actually give us the ability to intercept auth challenges in the cert chain. Not having the ability to override SSL validation in a WP7 app is asinine. I realize that MS wants us all to be secure, but for the love of all that is good, even Apple allowed us to have this ability, and they are notoriously paranoid about mobile security.
  2. E-mail, phone, or smoke-signal the provider of your cloud service and complain that their SSL certs aren’t good enough. Many of them will have a facility that will let you use your own certificate purchased from one of the authorities on both Microsoft and Apple’s root authority whitelist, but that won’t fix the host name problem. If you are comparing two providers (e.g. two hosts of CouchDB) and one has certs that fail and the other doesn’t, definitely give more weight to the one with good certs.

Again, this is not a pandemic or a problem that affects the entire universe. This problem only occurs when you’re trying to access resources over SSL from mobile devices and the SSL certificate fails validation. This validation failure happens a lot when hosts do creative things with their certificates and their host names. If this applies to your host, give them a call or an e-mail and see if they have workarounds available. The last thing you want to do is write your own proxy to compensate for bad certs, because that’s just going to make the gap between your proxy and the cloud host as vulnerable to MITM attacks as your mobile device would be without the proxy.

So, go forth and consume cloud resources over SSL – just make sure your mobile SDK can override bad certs or your cloud provider has ways of making the certs look valid.

Interview with Michael Friis, Co-Founder of AppHarbor

Today I had the pleasure of being able to speak with Michael Friis, one of the co-founders of AppHarbor. As you know from my previous post about AppHarbor, I am excited about this company and it certainly sounds promising. The founders of AppHarbor are all expert .NET developers who just got sick of seeing their colleagues leave the .NET community because there weren’t any simple, easy-to-use solutions like Heroku available. Rather than give up on .NET, they chose to stick it out and build something that filled the gap.

During this conversation I got to ask a couple of questions that I think developers want to know of cloud/PaaS providers before they sink their time and effort into building an application or deploying an existing one.

One of the biggest questions everyone asks of cloud service providers is about security. Friis appropriately commented that they have “double duty” when it comes to re-assuring their customers. They need to convince them that Amazon AWS (the foundational platform on which both AppHarbor and Heroku run) is secure. Once that’s done, they need to convince developers that their applications and application code are secure. While they have process isolation and standard file system protections in place on their multi-tenant servers, they don’t currently have any advanced protocols in place. Friis indicated that they are dedicated to ensuring that such security is provided and that their customers can feel safe deploying their apps, mentioning possibly doing things like automatically obfuscating Assemblies on behalf of their customers, etc.

The other problem a lot of developers have with cloud providers is vendor lock-in. As I said in my last post, nobody wants to be stuck with code that will only work in one particular company’s data center (e.g. Windows Azure). Friis addressed this by saying that they are dedicated to making sure that everything about their platform is as flexible as possible, going so far as to say that even their dependency on Amazon isn’t hard-coded into the platform. In short, customers should feel comfortable building solutions that work on their desktops and on AppHarbor, and that will work in other environments as well. Friis also mentioned that in the future, they might even consider deploying from AppHarbor to Windows Azure for those customers deploying solutions that rely on Azure services.

Perhaps my favorite feature of AppHarbor (other than it being “Heroku for .NET”) is its integration into the developer’s daily workflow. Friis touched on this during the conversation, talking about how using proprietary deployment mechanisms like Windows Azure can easily get your application out of sync where the version associated with a particular source control/CI build isn’t necessarily the one currently in the cloud. With AppHarbor’s deployments being triggered from git pushes, you have that tight coupling with a particular source code revision and the version of the software sitting in the cloud. To me, this kind of feature really enables larger companies and enterprises to consider facilities like AppHarbor as viable deployment targets for their builds.

They have a lot of plans for the future, including growing their already impressive Add-Ons program and adding features like being able to dynamically scale application instances (the plumbing for this exists, but they don’t have it enabled for customers yet), geographic affinity of application instances, and much more.

As I’ve said before – time will tell if a company like AppHarbor can pull off what Heroku has, but they seem to be off to a fantastic start. For now at least, the next time someone asks me about a cloud back-end for their application I can send them to AppHarbor instead of telling them “quit .NET, learn Ruby, then use Heroku”. The next time I need to house a bit of code “in the cloud”, I’m going to give AppHarbor a try rather than using Heroku as my default playground.