Kotan Code 枯淡コード

In search of simple, elegant code


Using Cloud Foundry Bound Services with SteelToe on ASP.NET Core

Before I begin, if you’re not familiar with the concept of backing services or bound services in Cloud Foundry (any of the implementations, including PCF Dev, support service bindings in some form), check out the Pivotal documentation. The short of it is that rather than your application hard-coding information about the backing services with which it communicates, it should instead externalize that configuration. One way of doing this is through Cloud Foundry’s services mechanism.

When you bind services, either user-provided or brokered, they appear in an environment variable given to your application called VCAP_SERVICES. While this is just a simple JSON value, and you could pretty easily parse it on your own, you shouldn’t have to. This is a solved problem and there’s a library available for .NET that includes as part of its foundation some DI-friendly classes that talk to Cloud Foundry’s VCAP_SERVICES and VCAP_APPLICATION environment variables.
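For context, a bound user-provided service shows up inside VCAP_SERVICES looking roughly like this (the exact shape varies by service type, and these names and credentials are purely illustrative):

{
  "user-provided": [
    {
      "name": "my-service",
      "label": "user-provided",
      "tags": [],
      "credentials": {
        "url": "http://foo.bar",
        "password": "l33t"
      }
    }
  ]
}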

This library is called Steel Toe. I will save the history and origin of the name Steel Toe for a future blog post. For now, you can think of Steel Toe as a set of libraries that enable ASP.NET applications to participate as clients in the Netflix OSS microservice ecosystem by talking to Configuration Server and the Discovery Server (Eureka).

So, the first thing you need to do is add a NuGet reference to the assembly SteelToe.Extensions.Configuration.CloudFoundry. To do this, you’ll have to use the feed SteelToe is on, since you won’t find it in the regular Microsoft feed. SteelToe packages can currently be found here:

  • Master – https://www.myget.org/F/steeltoemaster/api/v3/index.json
  • Dev – https://www.myget.org/F/steeltoedev/api/v3/index.json

Once you’ve told either your command line tools or Visual Studio 2015 where to look for Steel Toe, you can add a reference to the assembly mentioned above (there are more assemblies than this one, but for this blog post we’re only covering Cloud Foundry configuration).

To add Cloud Foundry’s environment variables as a configuration provider, we just chain the AddCloudFoundry method onto the ConfigurationBuilder in our Startup class:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .AddEnvironmentVariables()
    .AddCloudFoundry();
Configuration = builder.Build();

At this point we could continue and just use the syntax vcap:services:p-mysql:0:uri for raw configuration values, but I like injecting service binding information using the IOptions feature, which converts the service bindings into a collection of POCOs. To use it, our ConfigureServices method might look something like this:

  services.AddMvc();

  services.AddOptions();
  services.Configure<CloudFoundryApplicationOptions>(Configuration);
  services.Configure<CloudFoundryServicesOptions>(Configuration);

This is going to make both our application metadata and our service bindings available as injectable options. We can make use of these easily in a controller. The following is a modified “values” controller borrowed right from the Visual Studio “new project” template for ASP.NET Web API. I’ve added a constructor that allows these options to be injected:

[Route("api/[controller]")]
public class ValuesController : Controller
{
    public ValuesController(IOptions<CloudFoundryApplicationOptions> appOptions,
        IOptions<CloudFoundryServicesOptions> serviceOptions)
    {
        AppOptions = appOptions.Value;
        ServiceOptions = serviceOptions.Value;
    }

    CloudFoundryApplicationOptions AppOptions { get; set; }
    CloudFoundryServicesOptions ServiceOptions { get; set; }

    [HttpGet]
    public Object Get()
    {
        return new
        {
            AppName = AppOptions.ApplicationName,
            AppId = AppOptions.ApplicationId,
            Service = new
            {
                Label = ServiceOptions.Services[0].Label,
                Url = ServiceOptions.Services[0].Credentials["url"].Value,
                Password = ServiceOptions.Services[0].Credentials["password"].Value
            }
        };           
    }
}

Now if I bind a service to my application that has a url credential property and a password credential property, I’ll be able to see that when I issue a GET to /api/values. Here’s a sample output that I get from using the REST api for my service:

{"AppName":"bindings-example","AppId":"a605f066-8c2c-4ae3-8dbf-b2e7931fc39d","Service":{"Label":"user-provided","Url":"http://foo.bar","Password":"l33t"}}

You should try this out: create an application, copy some of the code from this blog post, push it to Cloud Foundry, bind a couple of user-provided services to it, and see how incredibly easy Steel Toe makes externalizing your backing services configuration.
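If you’ve never created a user-provided service, the cf CLI makes this a quick exercise. Something like the following (the service name and credentials are just examples) is all it takes:

cf create-user-provided-service my-service -p '{"url":"http://foo.bar","password":"l33t"}'
cf bind-service bindings-example my-service
cf restage bindings-example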

Configuration Options and DI with ASP.NET 5

In ASP.NET 5, we now have access to a robust, unified configuration model that finally gives us the configuration flexibility we’ve spent the past several years hoping and begging for. Gone are the days of the hideous web.config XML files and, if we’ve done our jobs right, so too are the days of web-dev.config, web-prod.config, and other such abominations that turn out to be cloud native anti-patterns.

The new configuration system explicitly decouples the concept of a configuration source from the actual values accessible through the Configuration object (yes, IConfiguration is always available via DI). When you use yeoman to create an ASP.NET application from a template, you’ll get some boilerplate code in your Startup.cs file that looks something like this:

public Startup(IHostingEnvironment env)
{
    // Set up configuration sources.
    var builder = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

What you see in this code is the application setting up two configuration sources: a JSON file called “appsettings.json” and the collection of key-value pairs pulled from environment variables. You can access these configuration settings using the colon (:) separator for arbitrary-depth accessors. For example, the default “appsettings.json” file looks like this:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Verbose",
      "System": "Information",
      "Microsoft": "Information"
    }
  }
}

If you wanted to know the default log level, you could access it using the configuration key string Logging:LogLevel:Default. While this is all well and good, ideally we want to segment our configuration into as many small chunks as possible, so that modules of our app can be configured separately (reducing the surface area of change per configuration modification). One thing that helps us with this is the fact that ASP.NET 5’s configuration system lets us define Options classes (POCOs) that essentially de-serialize arbitrary key-value structures from any configuration source into a C# object.
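For instance, reading that value straight off the Configuration object built in Startup should be a one-liner (using the configuration indexer, which I believe works as of RC1):

// Returns "Verbose" from the appsettings.json shown above
var defaultLogLevel = Configuration["Logging:LogLevel:Default"];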

To continue adding to my previous samples, let’s say that we’d like to provide some zombie configuration. This configuration will, in theory, be used to configure the zombie controller’s behavior. First, let’s create a JSON file (this is just for convenience; we could use any number of built-in or third-party config sources):

{
  "ZombiesDescription": "zombies",
  "MaxZombieCount": 22,
  "EnableAdvancedZombieTracking": true
}

And now we need a POCO to contain this grouping of options:

using System;

namespace SampleMicroservice
{
	public class ZombieOptions
	{
		public String ZombiesDescription { get; set; }
		public int MaxZombieCount { get; set; }
		public bool EnableAdvancedZombieTracking { get; set; }
	}
}

We’re almost ready to roll. Next, we need to modify our ZombieController class to use these options. We’re going to continue to follow the inversion of control pattern and simply declare that we now depend on these options being injected from an external source:

namespace SampleMicroservice.Controllers
{
    [Route("api/zombies")]
    public class ZombieController
    {
        private IZombieRepository repository;
        private IGlobalCounter counter;
        private ZombieOptions options;

        public ZombieController(IZombieRepository zombieRepository,
                                IGlobalCounter globalCounter,
                                IOptions<ZombieOptions> zombieOptionsAccessor)
        {
            repository = zombieRepository;
            counter = globalCounter;
            options = zombieOptionsAccessor.Value;
        }

        [HttpGet]
        public IEnumerable<Zombie> Get()
        {
            counter.Increment();
            Console.WriteLine("count: " + counter.Count +
                ", advanced zombie tracking: " + options.EnableAdvancedZombieTracking +
                ", zombie name: '" + options.ZombiesDescription + "'");
            return repository.ListAll();
        }
    }
}

NOTE: there’s a bug in the current RC1 documentation. In my code, I use zombieOptionsAccessor.Value, whereas the docs say this property is called Options. Value is the one that compiles on RC1, so YMMV. Also note that we’re not injecting a direct instance of ZombieOptions; we’re injecting an IOptions<T> wrapper around it.

Finally we can modify the Startup.cs to add options support and to configure the ZombieOptions class as an options target:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();
    services.AddOptions();
    services.Configure<ZombieOptions>(Configuration);

    services.AddScoped<IZombieRepository, ZombieRepository>();
    services.AddSingleton<IGlobalCounter, GlobalCounter>();
}

The important parts here are the call to AddOptions(), which enables the options service, and then the call to Configure<ZombieOptions>(Configuration), which uses the options service to pull values from a source (in this case, Configuration) and stuff them into a DI-injected instance of ZombieOptions.

There is one thing that I find a little awkward with the ASP.NET 5 configuration system: all of your sources and values are merged into a single top-level namespace. After you add all your sources, all of the top-level property names become peers of each other in the master IConfiguration instance.

This means that while all the properties of ZombieOptions have meaning together, they are floating around loose at the top level with all other values. If you name one of your properties something like Enabled and someone else has a third party options integration that also has a property called Enabled, those will overlap, and the last-loaded source will set the value for both options classes. If you really want to make sure that your options do not interfere with anyone else’s, I suggest using nested properties. So our previous JSON file would be modified to look like this:

{
  "ZombieConfiguration": {
    "ZombiesDescription": "zombies",
    "MaxZombieCount": 22,
    "EnableAdvancedZombieTracking": true
  }
}

Then the ZombieOptions class would move the three properties down one level below a new parent property called ZombieConfiguration. At this point, you would only have to worry about name conflicts with other config sources that contain properties called ZombieConfiguration.
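Alternatively, and this is my assumption about the RC1 API rather than something from the docs, you should be able to leave ZombieOptions flat and bind it to just that subsection:

// Hypothetical alternative: bind only the ZombieConfiguration subtree
services.Configure<ZombieOptions>(Configuration.GetSection("ZombieConfiguration"));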

In conclusion, after this relatively long-winded explanation of ASP.NET 5 configuration options, it finally feels like we have the flexibility we have all been craving when it comes to configuring our applications. As I’ll show in a future blog post, this amount of flexibility sets us up quite nicely to work locally and to deploy our applications to the cloud, all while keeping rich application configuration without returning to the dark times of web.config.

Dependency Injection and Services in ASP.NET 5

In a previous blog post, I talked about how to build a microservice using ASP.NET 5. While it was a trivial, contrived sample, it did show how easy it is to get started with ASP.NET 5 (did I mention this was on a Mac? That’s never going to get old).

Dependency Injection works on the Inversion of Control principle. Rather than using what has colloquially been referred to as “new glue”, where every class creates instances of the classes on which it depends, you avoid this tight coupling by being explicit about your class’s dependencies and listing them in its constructor. This gives you a ton more flexibility but, most importantly, it also makes it much easier to test your classes in isolation without resorting to terrible, space-time-altering hacks.

The old (“new glue”) way:

public MyConstructor() {
  subordinateClass1 = new SubordinateClass1();
  subordinateClass2 = new SubordinateClass2();
}

The new (DI) way:

public MyConstructor(ISubordinate1 subordinate1, ISubordinate2 subordinate2) 
{
  this.subordinate1 = subordinate1;
  this.subordinate2 = subordinate2;
}

Among the many advantages of the second way is that the class is usable both with and without a formal dependency injection framework, and is far more testable than the first option.
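To make the testability claim concrete, here’s a minimal sketch: in a plain unit test you can satisfy the constructor with hand-rolled fakes, no DI container required. FakeSubordinate1 and FakeSubordinate2 are hypothetical test doubles implementing ISubordinate1 and ISubordinate2, and MyClass stands in for whatever class owns the constructor above.

// No framework involved: just construct the class under test with fakes
var target = new MyClass(new FakeSubordinate1(), new FakeSubordinate2());
// ...exercise target and assert, without touching any real dependencies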

So, now that we’ve got a “why do we care about DI?” primer out of the way, let’s take a look at how this might be used to apply to an ASP.NET 5 application.

In the ConfigureServices method of our application startup, we can choose to add built-in services (like ASP.NET MVC, identity, security, Entity Framework, etc.) or we can add services that we have created, with one of three different lifetime configurations (a registration sketch follows the list):

  • Transient – Every time an instance of this interface is requested, the receiving object gets a new instance.
  • Scoped – This is request scope, so the instance of the injected object will remain intact throughout the lifetime of a web request. This is a particularly useful lifetime.
  • Singleton – A single instance of this object will be dispensed to all requesting injection points.
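Registering one service under each lifetime looks something like this (IEmailFormatter/EmailFormatter are made-up placeholders; the other two pairs appear in the real sample later in this post):

services.AddTransient<IEmailFormatter, EmailFormatter>();  // hypothetical service: fresh instance per injection
services.AddScoped<IZombieRepository, ZombieRepository>(); // one instance per HTTP request
services.AddSingleton<IGlobalCounter, GlobalCounter>();    // one instance for the life of the process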

Let’s say that we want to replace the hacky controller-embedded code we put in the previous sample for the Get() method of our zombie controller with an actual zombie repository. If we did this the old way, we would new up an instance of the repository inside our controller, incur a tight coupling penalty, and then make it horribly difficult to test said controller. Using DI, we can modify our controller so it looks like this:

[Route("api/zombies")]
	public class ZombieController 
	{
		private IZombieRepository repository;
		private IGlobalCounter counter;
		
		public ZombieController(IZombieRepository zombieRepository, IGlobalCounter globalCounter) {
			repository = zombieRepository;	
			counter = globalCounter;
		}
		
		[HttpGet]
        public IEnumerable<Zombie> Get()
        {
			counter.Increment();
			Console.WriteLine("count: " + counter.Count);			
            return repository.ListAll();
        }
	}

Now our controller’s dependencies are explicit in the constructor, and are satisfied by some external source, such as our testing framework or a DI framework. Nowhere in this controller will you find “new glue” or tight coupling to a particular implementation of these classes.

To register specific implementations and lifetimes of these objects, we can modify the ConfigureServices method of our Startup class as shown here:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    services.AddScoped<IZombieRepository, ZombieRepository>();
    services.AddSingleton<IGlobalCounter, GlobalCounter>();
}

When we run this application now, a new instance of ZombieRepository will be created for each HTTP request and, because it’s a hacked implementation, it just returns the same array as in the previous blog post. In the console, we will see the global counter increment across individual HTTP requests, and it will keep counting until the host process (e.g. Kestrel) shuts down.

While there are too many possibilities to list here, as you go forward building your ASP.NET 5 applications you should always be asking yourself, “Could this be a service?” Any time you find yourself isolating some piece of functionality into a module, check whether it fits nicely into the services framework in ASP.NET. This isn’t just a third-party extension point; remember that ASP.NET 5 is now a decoupled, modular thing, and nearly all of ASP.NET’s own functionality is added to the core through this same services framework.

Pushing an ASP.NET 5 Microservice to Cloud Foundry

In my previous blog post, I walked through the process of creating a lightweight microservice in ASP.NET 5 that builds in a cross-platform fashion, so it can run on Windows, Linux, and OS X. I’m going to gloss over how completely amazing it is that we now live in a time where I can write cross-platform microservices in C# using emacs.

If you’re building microservices, then there’s a good chance you’re thinking about deploying that microservice out into some Platform as a Service provider. For obvious reasons, my favorite PaaS target is Cloud Foundry (you can use any of the commercial providers that rest atop the CF Foundation like Pivotal Web Services, IBM Bluemix, HP, etc).

Thankfully, there is a community supported buildpack for ASP.NET 5. I believe it’s up to date with beta 8 of ASP.NET 5, so there is a minor modification you’ll have to make in order to get your application to launch in Cloud Foundry.

First, go into the project.json file and make sure your commands section looks like this:

"commands": {
    "web": "Microsoft.AspNet.Server.Kestrel",
    "kestrel": "Microsoft.AspNet.Server.Kestrel"
  },

The reason for this is that the buildpack expects to use the default launch command from pre-RC1, kestrel. If you’ve used the RC1 project generator, then the default kestrel launch command is just called web. The easy fix for this, as shown above, is just to create a duplicate command called kestrel. Now you can launch your app with dnx web or dnx kestrel, and that’ll make the buildpack happy.

With that one tiny little change, you can now push your application to the cloud:

cf push zombieservice -b https://github.com/cloudfoundry-community/asp.net5-buildpack.git

If you’re not familiar with the cf command line or Cloud Foundry, go check out Pivotal Web Services, a public commercial implementation of Cloud Foundry.

It will take a little while to upload the app, gather the dependencies, and start. But it works! To recap, just so this sinks in: I can write an ASP.NET microservice in C#, using my favorite IDE (vi!), on my Mac, and deploy it to a container in the cloud, which is running Linux. Oh yeah, and if I really feel like it, I can run this on Windows, too.

A few years ago, I would never have believed that something like this would be possible.

Creating a Microservice with the ASP.NET Web API

I’ve been blogging about microservices for quite some time, and have included examples in Scala/Akka, Go, and even Java. Today I thought I would include a sample of how to build a microservice using the Microsoft ASP.NET Web API. As I’ve mentioned a few times before, the term microservice is a bit overloaded. Really, it just refers to a service that adheres to the Single Responsibility Principle (SRP), and does one thing. Sometimes people also include the fact that a service is bootstrapped (the container/HTTP server is included in the service code rather than relied upon as an external dependency) but that detail is unimportant for this discussion.

The first thing we need to do is create an empty ASP.NET application, which should be straightforward for any Microsoft developer. Next, we can create routes for our RESTful resources. There are a number of ways to do this, but I like providing a universal default resource pattern and then overriding it at the individual resource level as exceptions occur.

I can create a starter route pattern in the global.asax.cs file like so:

GlobalConfiguration.Configure(config =>
{
    config.MapHttpAttributeRoutes();

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/v3/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );
});

If I then create a Web API controller called something like ZombiesController, and put a method like Get on it, this will automatically be the target of the route api/v3/zombies. Further, a Get(id) method will then automatically get invoked by api/v3/zombies/12.
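To illustrate, a bare-bones controller like this hypothetical one picks up both routes from the default pattern with no extra configuration:

public class ZombiesController : ApiController
{
    // GET api/v3/zombies
    public IEnumerable<string> Get()
    {
        return new[] { "zombie1", "zombie2" };
    }

    // GET api/v3/zombies/12
    public string Get(int id)
    {
        return "zombie #" + id;
    }
}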

The next step is to create a model object that we want to return from our resource URLs. Since we’ve been using zombies and zombie sightings, let’s continue with that domain and create the following C# class:

using Newtonsoft.Json;
using System;

namespace KotanCode.Microservices.Models
{    
    public class ZombieSighting
    {
        [JsonProperty(PropertyName = "zombieId")]
        public Guid ZombieId { get; set; }

        [JsonProperty(PropertyName = "name)]
        public String Name { get; set; }

        [JsonProperty(PropertyName = "lat")]
        public Double Latitude { get; set; }

        [JsonProperty(PropertyName = "long")]
        public Double Longitude { get; set; }
    }
}

Now we can create a resource that exposes a collection of zombie sightings by adding a new ASP.NET Web API controller to our project:

namespace KotanCode.Microservices.Controllers
{
    public class ZombieController : ApiController
    {
        [Route("api/v7/zombies")] // override the standard route pattern
        [HttpGet]
        public IEnumerable<ZombieSighting> GetAllSightings()
        {
            ZombieDataContext ctx = new ZombieDataContext();
            var zombies = from zomb in ctx.ZombieSightings
                          select new ZombieSighting()
                          {
                              ZombieId = zomb.ID,
                              Name = zomb.ZombieName,
                              Longitude = zomb.Long,
                              Latitude = zomb.Lat
                          };
            return zombies;
        }
    }
}

In the preceding sample, I’ve overridden the default routing pattern set up in the application’s global object so that this method will be invoked whenever someone issues a GET on /api/v7/zombies. Just for giggles, I illustrate querying from an Entity Framework context and using a LINQ projection to convert the data entities into the JSON-decorated object that is part of my public API. In a real-world example, you might have additional layers here (e.g. an anti-corruption layer), but for this sample we’re okay because all we care about here is the REST API.

That’s it. If you add the one model object and an ApiController class to your ASP.NET application and run it, you’re good to go: you now have the basic building blocks for a microservice that you can deploy locally, or out in a cloud like Cloud Foundry or Azure.

Finally Got To Touch a Microsoft Surface Pro – Review

The other day I was in the mall with my wife, giving in to the gravitational pull coming from the Starbucks black hole at the center of the building. As we were floating helplessly toward overpriced coffee, we noticed a Microsoft stand that was fully equipped with Surface RT and Surface Pro devices. Well, this was a little too much for me – coffee I can resist sometimes but shiny new tech that you can touch, poke, and prod? Nope, can’t resist that.

The first thing I noticed when I picked it up was that the thing was hot. Not just warm, but really, uncomfortably hot. I imagine a portion of this heat comes from the fact that it’s tethered to a plug and has been in use all day by people just like me wanting to poke and prod the shiny new device. All of the tablets I own get warm(ish) when charging, so I’m not going to count that in the cons column just yet.

With the kickstand extended in the back and the touch cover keyboard laid out on the table in front of it, it felt kind of like an iPad slightly wider than mine, paired with a Bluetooth keyboard. I have a lot of experience using an iPad + BT keyboard combo – it’s what I prefer to write my fiction on while I am on the go and don’t expect to have a flat surface (see what I did there?) on which to put my full-size laptop.

The touch experience was as expected – absolutely, completely top notch. You really must find yourself a place (Best Buy, I believe, has these as well) where you can touch a Surface Pro (or RT) in person. It doesn’t quite have the same loose, floppy feel of the iOS touch response; it actually feels a little stickier, as though the glass itself is a little less glossy, and that’s a good thing.

Then came the stylus. I played with it inside OneNote RT and thought it was pretty darn impressive, but then again, I remember many, many years ago (5+) I had a flip-around tablet/laptop combination that used a stylus and I thought that was impressive too… but I never used it. I can imagine using the stylus if I am taking notes in a meeting or at a conference, or drawing out some architectures, designs, flow charts, etc. People who draw anything and want to keep a record of it will benefit from this, but that’s a benefit of OneNote and not an advantage exclusive to the Surface Pro.

The Surface Pro is the only tablet-sized device capable of running “old” Windows stuff, i.e. stuff not written using the new RT layout and sandbox. I love the WinRT look, feel, and design aesthetic. I loved it when I saw its predecessor on Windows Phone and I still love it now. As an operating environment it actually gives me much better access to what I want, and how I want it, than either iOS or Android.

The real question, however, is will I buy one? Nope. Until the Surface Pro comes with the same Apps that I use on my iPad right now on a daily basis, there’s absolutely no way I can justify spending $900 on a tablet, even if it has laptop-spec’d power and a stylus and is a really freaking cool device. If I did not currently own a single tablet, I would instantly snatch one of these bad boys up. However, since I currently own an iPad that does what I need, I can’t justify the cost of switching.

When my iPad breaks, I will then seriously consider a Surface Pro or whatever MS calls the newest version at that time.

Ever since I did get a chance to fondle it, I have found that my hands crave having the device in their grasp. I actually want to go back to this kiosk and play with the Surface Pro some more. I had that experience with the Windows Phone and I have it tenfold with the Surface Pro. I just wish I could justify spending the cash on it. Who knows, maybe when I get around to making another game client, I’ll buy a Surface Pro as a tax deduction for a tester device.

Details Released on Surface for Windows 8 Pro

According to Microsoft, here are some of the details that we’ve been waiting to find out regarding the Windows Surface Pro (Surface running Windows 8 Pro):

  • It looks, from a hardware and form factor point of view, just like a regular Surface
  • It comes with a pen (stylus)
  • It will be running Windows 8 Pro, which means you are no longer limited to the Windows RT sandbox.
  • BitLocker for drive/data security (does anybody actually use this on their current Windows machine?)
  • Mini DisplayPort HD connection for extra displays
  • Full size USB 3.0 port
  • Up to 128GB of storage
  • Intel Core i5 Processor
  • Price point starting at $899
  • Due out in early 2013

So that’s what I know about the “Surface Pro” and its impending release early next year. They won’t be making the holiday season for 2012, but the “regular” (aka hamstrung, limited, WinRT version) Surface is all over the commercials and seems to have basically sponsored all of the NFL up through the Super Bowl, so no doubt many, many people will be getting a Surface for Christmas.

You can’t actually play with a Surface hands-on in any regular big-box electronics store like Best Buy. Instead, you have to go to an actual Microsoft store. This is unfortunate, because I feel that the more people actually get to touch, feel, and experience the Surface, the more people are going to be impressed by it. The commercials don’t really do the device justice.

Time will tell if the Surface Pro lives up to expectations. Unfortunately, I’m still not going to buy one until I get a chance to play with it. $899 might not seem like much, but it’s a big chunk of change for a tablet-sized computer and I want to make sure I’m getting my money’s worth.

Microsoft Surface Available for Pre-Order

Today, the Microsoft Surface tablets became available for pre-order. Or at least, the WinRT ones did. The pricing for the accessories is a little confusing, but it looks like you can get a 32GB model without the black touch cover for $499.00, a 32GB with the touch cover for $599, and a 64GB with a touch cover for $699. You can also drop another hundred or so and get a type cover, which looks more like the hard plastic Bluetooth keyboard accessories you can get in iPad cases like the ones you find at Brookstone and other shops like that.

So, do you want one? Should you buy one? I can’t answer that, but I can give you a bit of a warning that people familiar with Windows might not be used to hearing: this tablet will only run a very small subset of applications. Remember that WinRT is actually different from Windows 8.

The WinRT tablet mentioned above is only capable of running applications available through the Microsoft application store, and only those that were written using the WinRT SDK. This means you’re not going to get backwards compatibility with desktop Windows applications. Microsoft is giving you a free copy of Office “RT” because, well, without that free copy you wouldn’t be able to read or write Word documents, PowerPoints, or anything else, because WinRT won’t run the traditional version of Office. In fact, WinRT won’t run the traditional version of anything.

You should think of WinRT’s view of the world the same way iOS sees the world: if it wasn’t written for WinRT, it won’t run, the same way the iPad and iPhone don’t run Mac applications, no matter how much it looks like they should.

If you like the WinRT (formerly called “Metro”) design look and you want a tablet for Internet browsing, e-mail, and office docs, and you aren’t picky about which games you want to play on your tablet, then you’ll probably be fine with a WinRT tablet. However, if you want something more substantial and you need the ability to run Windows “desktop” applications, then you’re going to want to go with alternatives like those you can get from Asus, or you’re going to want to wait for the Surface Pro models – Surface devices running the full version of Windows 8, which can run traditional desktop applications.

The real question for developers like me is: will the Surface Pro be better than competing models from Asus?

Windows Phone 7 Marketplace Continues to Disappoint

When Windows Phone 7 first came out, I remember all of the skeptics and the members of the “cult of Apple”, and even the Android fans, all telling me that WP7 was a passing fad and that Microsoft had wasted its chance to enter the mobile phone market with its failed mobile operating systems, “PocketPC” and “Windows Mobile”. I told them all they were insane, because Windows Phone was awesome.

I still firmly believe this. From an overall experience point of view, Windows Phone allows me smoother, faster, more efficient access to everything I need than iOS. My common everyday workflows on a Windows Phone are faster and less intrusive than they are on an iOS device (I won’t even talk about Android… I can barely stomach 30 seconds of interaction with most droids).

So, if I still think the core of Windows Phone 7 is so awesome, why is the word disappoint in the title of this blog post? The answer is simple: the marketplace, or the “app store”. Today, long after the “it’s just a v1 product” excuse has expired, I can go to the Windows Phone marketplace and become nauseated by what I see there.

Firstly, if you look at the list of top games in the Marketplace, they are either first-party (Microsoft made them or paid someone else to make them under their iron fist) or they are made by giant third parties like Electronic Arts or any of its 80 bajillion subsidiaries. You still see a few good examples of non-MS games, but the point remains: Microsoft and the other gaming giants pretty much own that segment of the Marketplace. This is completely explainable. Making a game for a mobile platform is a pure and simple ROI (Return on Investment) calculation: (potential buyers × price per game) − production cost = potential profit. The reason we don’t see as many ports of ridiculously popular iOS games is simple: the ports would probably cost more to make than they would earn. Note the probably there, because that notion is very subjective and easily influenced by the paranoia of any given decision maker. Bottom line? Developers are still skeptical.
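To put hypothetical numbers on that calculation: if a port could reach 100,000 potential buyers at $2.99 a copy but would cost $400,000 to produce, the best case is a loss of roughly $101,000, and nobody green-lights that.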

Secondly, there is the experience of the marketplace itself. When I flip through Apple’s App Store, I see a wide variety of great stuff – so much stuff, in fact, that I can’t possibly sift through it all. However, I feel somewhat confident that Apple’s “featured”, “popular”, and other filtering and sorting systems will allow me to find relatively good examples of the type of application I am looking for.

Today, when I go sifting through the Windows Phone marketplace, I feel like I’m walking through the “old Times Square”. You know, the one that used to be peppered on all sides by strip clubs, peep shows, and adult video stores, with a drug dealer on every corner. I’m not exaggerating – in the “most popular” filter for most of the app categories in the marketplace, I regularly skim past 4 or 5 apps with half-naked women on their icons. I understand that Microsoft may not have full control over this because, after all, if boobs are popular, then boobs are popular and what can they do about it?

The problem is that I can’t see the forest for the trees in their marketplace. I cannot find anything with ease, and often some of the most amazing Windows Phone apps don’t show up anywhere – I have to search for them by obscure keywords passed along by the friends who told me about these great applications. Not only is the marketplace diluted with immature fart apps, “boobs” apps, and other garbage for which I have no use, but the worst offense of all is that the marketplace is sinking under a deluge of irrelevant stuff that they could easily filter out. It would literally take them a few hours. But they don’t do it.

Every single time I go to the marketplace to check out what’s new, I see countless apps written entirely in foreign languages that are obviously designed for foreign audiences. If I have to scroll past 150 apps (this is also not an exaggeration, this happened to me recently) just to see something that actually pertains to my search, I am going to give up looking.

And that’s exactly what I have done. I no longer use my Windows Phone for apps. It is a sad, sad commentary because I love Metro, I love Windows Phone, and I absolutely adore the integration, the “hub” concept, and everything else that operating system does so well. But even for a “low app” consumer like me who only regularly uses 3 or 4 apps, the current state of the marketplace turned me off.

If the marketplace’s current state can turn me off, someone with an absolutely insane tolerance for shitty experiences, then I can’t even imagine how quickly standard consumers are being turned off by that same marketplace.

Having fun with the Play! Framework

Those of you who have read this blog for a while know that I’ve spent a considerable amount of time with ASP.NET. In fact, I’ve been using ASP.NET since version 1.0, written several books that involve ASP.NET (including ASP.NET 4.0 Unleashed with Nate Dudek), and have done a ton of work with ASP.NET MVC. I’ve also written a pile of applications in Ruby on Rails. I’ve even written an application with Groovy on Grails.

To say that I am a fan of MVC-based web application frameworks would be like saying that a Bugatti Veyron is a “kinda fast” car.

So when I saw Play!, a Java-based (and Scala, but we’ll cover that in another post) MVC web application framework, I figured I’d give it a shot. I have to admit that I was a bit skeptical at first. The concept of quick, fast development in a fluid, agile style doesn’t exactly scream “Java”, but I was open to having my mind changed.

The first thing I noticed was that there are no class files. The “play” interactive shell (which I believe can be run as a service for production deployments) takes care of live compilation for you whenever anything changes. Sweet! That is about as un-Java as you can get… I fully expected to have to run some obtuse Maven build every time I changed the color of a span on a page.

The structure smells very much like an ASP.NET MVC application. There’s an app folder and beneath that you have controllers, models, and views. Each controller class is just a Java class that inherits from a base controller and is responsible for rendering a page.

There’s a routing table that works very much like ASP.NET MVC’s internal routing tables, but it doesn’t require me to write code to generate routes the way MVC does; the routes live in a text file, as they do in Rails and Grails:

# Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~

# Home page
GET     /                                       Application.index

# Ignore favicon requests
GET     /favicon.ico                            404

# Map static resources from the /app/public folder to the /public path
GET     /public/                                staticDir:public

# Catch all
*       /{controller}/{action}                  {controller}.{action}

There’s a great deal of flexibility in this routes file that I haven’t covered. If you’re interested, head over to the Play! website and check out the documentation and tutorials, which are actually pretty good.

The Play! template language looks very similar to ASP.NET MVC as well, allowing you to blend HTML elements, function calls, variables, and more all in one fairly seamless HTML file. It’s not quite as concise as the ASP.NET MVC Razor syntax, but not as ugly as the old non-Razor syntax:

#{extends 'main.html' /}
#{set title:'Hello World' /}

Greetings, ${userName}<br/>

A view like this is made possible by calling render(userName) inside a controller. Note that unlike a property bag style usage from ASP.NET, I don’t have to give the userName variable a key – the view template knows that variable name implicitly. If I passed in a complex object such as user, then I could do things like ${user.name} in my template view.

The #{extends 'main.html' /} tag works very much like master pages or content placeholders in the ASP.NET world. Main.html has some wrapper markup and then identifies where extensions can take place. The hello.html content will appear wherever that content extension is indicated. You can get fairly advanced and have multiple content placeholders in a template, and you can chain extensions like master page inheritance.

Finally, the other thing I liked about Play! is that it uses a built-in dependency management system for the framework itself but then resorts to Maven for resolving external dependencies. So, if my application depends on some other artifact that is floating around in a public nexus, I can just add a line like this to my dependencies.yml file:

require:
    - play
    - org.ektorp -> org.ektorp 1.2.1

When I run play dependencies from the command line, the dependencies are resolved, downloaded, and stored in my lib directory, along with all transitive dependencies. To be honest, if I had to manage my own dependencies for an MVC framework, I would never have even made it to “hello world”.

Overall, I’m fairly impressed with Play! and will keep playing around with it to see what it can do and where it might be limited (or superior) compared to the other MVC frameworks I’ve used in the past.