Kotan Code 枯淡コード

In search of simple, elegant code



Adding a Configurable Global Route Prefix in ASP.NET Core

This morning I was asked if it was possible to set up a configurable global route prefix in ASP.NET Core applications. I’ve done this in the past using the old (now legacy) Web API as well as with older versions of ASP.NET MVC, but I hadn’t yet tried it with ASP.NET Core.

Before I get into the how, I think it’s worth mentioning a little of the why. In this case, I have an application (and, like all applications, it’s just a microservice) that I run locally but also deploy in a number of environments (clouds or cloud installations). In some of those environments, the application runs as a sub-domain, e.g. myapp.mycompany.com. In others, it runs with a path prefix like mycompany.com/myapp. I don’t want to hard-code this capability into my application; I want the environment to tell my application which mode it is operating in, without my having to rebuild the app.

To do this, we need to create a new convention. Route conventions are just classes that execute an Apply() method against an application model. This Apply() method is executed the first time Kestrel receives an inbound request, which is when it evaluates the entire application to prepare all the routes.

Filip W (@filip_woj on Twitter) has put together a sample that shows how to do this. The following snippet is from his code, which can be found on GitHub here.

using System.Linq;
using Microsoft.AspNetCore.Mvc.ApplicationModels;
using Microsoft.AspNetCore.Mvc.Routing;

public class RouteConvention : IApplicationModelConvention
{
    private readonly AttributeRouteModel _centralPrefix;

    public RouteConvention(IRouteTemplateProvider routeTemplateProvider)
    {
        _centralPrefix = new AttributeRouteModel(routeTemplateProvider);
    }

    public void Apply(ApplicationModel application)
    {
        foreach (var controller in application.Controllers)
        {
            // Controllers that already have an attribute route: combine it with the central prefix
            var matchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel != null).ToList();
            if (matchedSelectors.Any())
            {
                foreach (var selectorModel in matchedSelectors)
                {
                    selectorModel.AttributeRouteModel = AttributeRouteModel.CombineAttributeRouteModel(
                        _centralPrefix, selectorModel.AttributeRouteModel);
                }
            }

            // Controllers with no attribute route at all: the central prefix becomes their route
            var unmatchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel == null).ToList();
            if (unmatchedSelectors.Any())
            {
                foreach (var selectorModel in unmatchedSelectors)
                {
                    selectorModel.AttributeRouteModel = _centralPrefix;
                }
            }
        }
    }
}

The short version of what’s happening here: when the Apply() method is called, we iterate through all of the controllers that the application model knows about. For each controller, we query the selectors that have an attribute route model. If a controller has at least one matching selector (it has a Route() attribute on it), we combine that controller’s route attribute with the prefix. If the controller has no matching selectors (there are no route attributes), we set the prefix as its route.

This has the net effect of adding the prefix we’ve defined to every single one of the RESTful routes defined in our application. To use this new convention, we can add it while we’re configuring services during startup (also shown in the GitHub demo). Note that UseCentralRoutePrefix is an extension method you can see in the linked GitHub repo; a sketch of it appears after the following snippet:

public class Startup
{
    public Startup(IHostingEnvironment env)
    {}

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(opt =>
        {
            opt.UseCentralRoutePrefix(new RouteAttribute("api/v{version}"));
        });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddDebug();
        app.UseMvc();
    }
}
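
For reference, here’s a minimal sketch of what that UseCentralRoutePrefix extension method looks like, adapted from the Strathweb sample (see the linked repo for the authoritative version):

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Routing;

public static class MvcOptionsExtensions
{
    // Inserts the RouteConvention at the front of the conventions list so the
    // prefix gets applied to every controller in the application model.
    public static void UseCentralRoutePrefix(this MvcOptions opts, IRouteTemplateProvider routeAttribute)
    {
        opts.Conventions.Insert(0, new RouteConvention(routeAttribute));
    }
}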

Again, this isn’t my code, these are just extremely useful nuggets that I found on the Strathweb blog. Now that we’ve got a convention and we know how to add that convention, we can modify the above code slightly so that we can accept an override from the environment, allowing us to deploy to multiple cloud environments.

We don’t really want to read directly from the environment. Instead, we want to use AddEnvironmentVariables when setting up our configuration. This allows us to set the app’s global prefix in a configuration file, and then override it with an environment variable, giving us the best flexibility while developing locally and pushing to a cloud via CI/CD pipelines.

So, our new Startup method looks like this:

public Startup(IHostingEnvironment env)
{
  var builder = new ConfigurationBuilder()
      .SetBasePath(env.ContentRootPath)
      .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
      .AddEnvironmentVariables(); // environment variables override the JSON file
  Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }

And we can modify the ConfigureServices method so that we read the prefix from configuration (which will have already done the overriding logic during the Startup method):

public void ConfigureServices(IServiceCollection services)
{
  // GetSection never returns null; use the value if present, otherwise fall back to an empty prefix
  var prefix = Configuration.GetSection("globalprefix").Value ?? string.Empty;
  services.AddMvc(opt =>
  {
      opt.UseCentralRoutePrefix(new RouteAttribute(prefix));
  });
}

And that’s it. Using the RouteConvention class we found on Filip W’s blog and combining that with environment-overridable configuration settings, we now have an ASP.NET Core microservice that can run as the root of a sub-domain (myapp.foo.com) or at any level of nesting in context paths (foo.com/this/is/myapp) all without having to hard-code anything or recompile the app.

Finally, we can put something like this in our appsettings.json file:

{
  "globalprefix" : "deep/root"
}

And optionally override that with an environment variable of the same name (on Cloud Foundry, for example, something like cf set-env myapp globalprefix deep/root).

Migrating an ASP.NET Core RC1 Microservice to RC2

In a recent blog post, I talked about how to deploy an ASP.NET Core RC1 service to the cloud. With yesterday’s release of ASP.NET Core RC2, I had to make a number of minor changes to my code in order to get it working both locally and in the cloud.

The biggest change between RC1 and RC2 is an architectural one. In RC1, the code responsible for prepping the application lived inside the dnx tool, and it was tightly coupled to the notion of building web applications. In RC2, we return to a more explicit model with a Program class and a Main() method, where we write our own code to construct the WebHostBuilder instead of having it done implicitly on our behalf.

First, there are changes to the structure of the project.json file. Rather than go through them all, I’m just going to point you to the modified project.json for the Zombies service. The easiest thing to do is to take a project.json file created by new RC2 tooling and use that as a template to adapt your existing file.

The next thing I did was add a Program class. You’ll notice that I use the UseStartup<Startup>() method to invoke all of the old startup code I had from RC1:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;

namespace SampleMicroservice
{
  public class Program
  {
    public static void Main(string[] args)
    {
      var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

      var host = new WebHostBuilder()
        .UseConfiguration(config)
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .Build();

      host.Run();
    }
  }
}

The next change is that I had to use the AddCommandLine() method. Without it, our application won’t respond to the --server.urls parameter that gets passed to it by the buildpack when we push to Cloud Foundry.

I had to make a few tiny little changes to the Startup class, but it remains essentially unchanged:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

namespace SampleMicroservice
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
                .AddJsonFile("ZombieConfig.json", optional: true, reloadOnChange: false)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; set; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
            services.AddOptions();
            services.Configure<ZombieOptions>(Configuration);

            services.AddScoped<IZombieRepository, ZombieRepository>();
            services.AddSingleton<IGlobalCounter, GlobalCounter>();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();
            app.UseMvc();
        }
    }
}

The Configure<ZombieOptions>() method is supported by the same extension method as in RC1, except this method is in a new package, so we had to add a dependency on Microsoft.Extensions.Options.ConfigurationExtensions to our project.json.
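
To illustrate, ZombieOptions itself is just a plain options POCO; a hypothetical sketch (the property names here are mine, they simply need to match keys in ZombieConfig.json or the environment) might look like this:

namespace SampleMicroservice
{
    public class ZombieOptions
    {
        // Hypothetical settings, bound from configuration by services.Configure<ZombieOptions>(Configuration)
        public string OutbreakName { get; set; }
        public int MaxSightings { get; set; }
    }
}

A controller or repository can then take an IOptions<ZombieOptions> constructor parameter and read the bound values.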

With all this in place, we can simply run dotnet restore then dotnet build and finally cf push and that’s that… our app is now running on RC2 in Cloud Foundry.

All of the code for the RC2 version of this app can be found in my GitHub repository.

Migrating to ASP.NET Core in the Cloud: What you will and won’t miss from ASP.NET 5

The folks over at InfoQ have graciously provided a nice list of technology that has been discontinued in .NET Core. You can find their original list here. They present the list without much opinion or color. In this blog post, I’d like to take a look at that list from the point of view of someone considering using ASP.NET Core for cloud native application development.

Reflection

Reflection isn’t gone; it’s just changed a little bit. When you make things designed to work on multiple operating systems without having to change your code, some of your original assumptions about the code no longer apply. Reflection is one of those cases. Throughout my career, most of the times I’ve used Reflection in a running production application, it was either for enhanced debug information dumps in logs or, to be honest, because I was working around a complex problem in entirely the wrong way.

If you find yourself thinking that you need reflection for your .NET microservice, ask yourself if you really, really need it. Are you using it because you’re adopting some one-off, barely supported serialization scheme? Are you using it because you’re afraid your regular diagnostics aren’t good enough? I would be shocked if I ever had to use Reflection in 2016 directly when .NET Core already provides me with JSON and XML serializers.

App Domains

Manipulating application domains is something that I would consider a cloud native anti-pattern. Historically, we’ve used AppDomains for code and data isolation and as a logical unit of segmentation for CAS (Code Access Security). If your deployment target is the cloud (or basically any PaaS that uses container isolation), then I contend you should have nothing in your app that does any kind of AppDomain shenanigans. In fact, from the original article, here’s a quote from Microsoft:

For code isolation, we recommend processes and/or containers

Remoting

Ah, Remoting, how do I hate thee? Let me count the nearly infinite ways… The following statement from InfoQ’s article makes me feel old. VERY old.

These days few developers even remember the Remoting library exists, let alone how to use it.

Not only do I remember how to use it, but I wrote entire chapters for popular books on using Remoting. I’ve written custom communication channels for Remoting, implemented distributed trace collection systems that predate Splunk, and other things I shall not admit publicly… all with Remoting.

The bottom line is there is no place for a technology like Remoting in an ecosystem of services all scaling elastically and moving dynamically in the cloud. It’s dinosaur tech and those of us who used to write code against it for a living are likely also dinosaurs.

You’re not going to need this in the cloud, so its absence from .NET Core is a blessing.

Serialization

Most of the tools you need for serialization are still there. Binary serialization was one of those ugly things that reached deep into the bowels of your objects, even private members, to convert POCOs into serialized state.

If you need high-performance binary/byte array serialization of your .NET classes, use protocol buffers. You can use them from .NET Core.
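
As a quick illustration, here’s a minimal sketch using the protobuf-net package (the type, property, and value names are all hypothetical):

using System.IO;
using ProtoBuf; // protobuf-net NuGet package

[ProtoContract]
public class SightingRecord
{
    [ProtoMember(1)] public string Name { get; set; }
    [ProtoMember(2)] public double Latitude { get; set; }
    [ProtoMember(3)] public double Longitude { get; set; }
}

public static class ProtoDemo
{
    // Round-trips a record through a compact binary payload
    public static SightingRecord RoundTrip(SightingRecord record)
    {
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, record);
            ms.Position = 0;
            return Serializer.Deserialize<SightingRecord>(ms);
        }
    }
}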

If you’re doing more traditional service-to-service communication, then you’ll be just fine with JSON and XML. As far as building services for the cloud, you should be blissfully unaware of most serialization gaps between ASP.NET 4.x and .NET Core.

Sandboxing

Sandboxing is something that has traditionally had strong, low-level OS ties. This makes its implementation in a cross-platform framework a little more complicated and subject to least common denominator culling. If you’re just building services, you should be fine, but there’s a sentence in the InfoQ article that is decidedly not cloud-native:

The recommended alternative is to spawn separate processes that run under a user account with restricted permissions

This is not something you should be doing in the cloud. As a cloud native developer, you should be unconcerned with the identity with which your application process runs – this is a detail that is abstracted away from you by your PaaS. If you need credentials to access a backing service, you can deal with that using bound resources or bound services, which is all externalized configuration.

Sandboxing as the article describes isn’t something you should be doing when developing services for the cloud, so you should hopefully be well insulated from any of the changes or removals in .NET Core related to this.

Miscellaneous

There are a handful of miscellaneous changes also mentioned by the InfoQ article:

  • DataTable/DataSet – I feel so. very. old. Many of you will likely have no idea what these things are. This is how your grandparents communicated with SQL data sources prior to the invention of Entity Framework, NHibernate, or LINQ to SQL. That’s right, we wrote code uphill both ways in the snow. Get off my lawn. You will not need these in the cloud.
  • System.DirectoryServices – For some pretty obvious reasons, this doesn’t exist. You shouldn’t need to use this at all. If you need to talk to LDAP, you can do so using simpler interfaces or, better yet, through a token system like OAuth2.
  • System.Transactions – Distributed transactions, at least the traditional kind supported by libraries like this, are a cloud-native anti-pattern. Good riddance.
  • XSL and XmlSchema – Ok, so these might still be handy and I can see a couple types of services that might actually suffer a bit from their absence in the framework. Good news is that .NET Core is open source, so if enough people need this, either Microsoft or someone else will put it in.
  • System.Net.Mail – If you need to send mail from your cloud native service, you should consider using a brokered backing service for e-mailing. Just about every PaaS with a service marketplace has some kind of cloud-based mailing system with a simple REST API.
  • System.IO.Ports – I’m fairly certain you should not be writing code that communicates with physical serial ports in your cloud native service.
  • Workflow – Windows Workflow Foundation is also gone from .NET Core. Good riddance. I did a tremendous amount of work with that beast, and I tried harder than most mere mortals to like it, to embrace it, and to use it for greatness. I was never pleased with anything I produced in WF. The amount of hacky statefulness required to get it working would have immediately put this tech onto the cloud native naughty list anyway.
  • XAML – You do not need XAML to build a cloud service, so this is also no big loss.

Conclusion

The bottom line is that, aside from a few high-friction experiences in the current release candidate, the feature set of ASP.NET Core contains everything you need right now to build microservices for the cloud. The biggest concern isn’t what isn’t in .NET Core, it’s what third party libraries for accessing backing services are either missing or not ready for prime time. That’s where I see the risk in early adoption, not in the core feature set of the framework.

Creating a Microservice with the ASP.NET Web API

I’ve been blogging about microservices for quite some time, and have included examples in Scala/Akka, Go, and even Java. Today I thought I would include a sample of how to build a microservice using the Microsoft ASP.NET Web API. As I’ve mentioned a few times before, the term microservice is a bit overloaded. Really, it just refers to a service that adheres to the Single Responsibility Principle (SRP), and does one thing. Sometimes people also include the fact that a service is bootstrapped (the container/HTTP server is included in the service code rather than relied upon as an external dependency) but that detail is unimportant for this discussion.

The first thing we need to do is create an empty ASP.NET application, which should be straightforward for any Microsoft developer. Next, we can create routes for our RESTful resources. There are a number of ways to do this, but I like providing a universal default resource pattern and then overriding it at the individual resource level as exceptions occur.

I can create a starter route pattern in the global.asax.cs file like so:

  GlobalConfiguration.Configure(config =>
            {
                config.MapHttpAttributeRoutes();

                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/v3/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );               
            });

If I then create a Web API controller called something like ZombiesController, and put a method like Get on it, this will automatically be the target of the route api/v3/zombies. Further, a Get(id) method will then automatically get invoked by api/v3/zombies/12.
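
To make that mapping concrete, a minimal sketch (the controller name comes from the example above; the method bodies are placeholders) might look like this:

using System.Collections.Generic;
using System.Web.Http;

public class ZombiesController : ApiController
{
    // GET api/v3/zombies
    public IEnumerable<string> Get()
    {
        return new[] { "Bob", "Alice" };
    }

    // GET api/v3/zombies/12
    public string Get(int id)
    {
        return "Zombie #" + id;
    }
}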

The next step is to create a model object that we want to return from our resource URLs. Since we’ve been using zombies and zombie sightings, let’s continue with that domain and create the following C# class:

using Newtonsoft.Json;
using System;

namespace KotanCode.Microservices.Models
{    
    public class ZombieSighting
    {
        [JsonProperty(PropertyName = "zombieId")]
        public Guid ZombieId { get; set; }

        [JsonProperty(PropertyName = "name)]
        public String Name { get; set; }

        [JsonProperty(PropertyName = "lat")]
        public Double Latitude { get; set; }

        [JsonProperty(PropertyName = "long")]
        public Double Longitude { get; set; }
    }
}

Now we can create a resource that exposes a collection of zombie sightings by adding a new ASP.NET Web API controller to our project:

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;
using KotanCode.Microservices.Models;

namespace KotanCode.Microservices.Controllers
{
    public class ZombieController : ApiController
    {
        [Route("api/v7/zombies")] // override the standard route pattern
        [HttpGet]
        public IEnumerable<ZombieSighting> GetAllSightings()
        {
            var ctx = new ZombieDataContext();
            var zombies = from zomb in ctx.ZombieSightings
                          select new ZombieSighting()
                          {
                              ZombieId = zomb.ID,
                              Name = zomb.ZombieName,
                              Longitude = zomb.Long,
                              Latitude = zomb.Lat
                          };
            return zombies;
        }
    }
}

In the preceding sample, I’ve overridden the default routing pattern set up in the application’s global object so that this method will be invoked whenever someone issues a GET on /api/v7/zombies. Just for giggles, I illustrate querying from an Entity Framework context and using a LINQ projection to convert the data entities into the JSON-decorated object that is part of my public API. In a real-world example, you might have additional layers here (e.g. an anti-corruption layer), but for this sample we’re okay because all we care about here is the REST API.

That’s it. If you add the one model object and an ApiController class to your ASP.NET application and run it, you’re good to go and you now have the basic building blocks for a microservice that you can deploy locally, or out in a cloud like Cloud Foundry or Azure.

Having fun with the Play! Framework

Those of you who have read this blog for a while know that I’ve spent a considerable amount of time with ASP.NET. In fact, I’ve been using ASP.NET since version 1.0, written several books that involve ASP.NET (including ASP.NET 4.0 Unleashed with Nate Dudek), and have done a ton of work with ASP.NET MVC. I’ve also written a pile of applications in Ruby on Rails. I’ve even written an application with Groovy on Grails.

To say that I am a fan of MVC-based web application frameworks would be like saying that a Bugatti Veyron is a “kinda fast” car.

So when I saw Play!, a Java-based (and Scala, but we’ll cover that in another post) MVC web application framework, I figured I’d give it a shot. I have to admit that I was a bit skeptical at first. The concept of quick, fast development in a fluid, agile style doesn’t exactly scream “Java”, but I was open to having my mind changed.

The first thing I noticed was that there are no class files. The “play” interactive shell (which I believe can be run as a service for production deployments) takes care of live compilation for you whenever anything changes. Sweet! That is about as un-Java as you can get… I fully expected to have to run some obtuse Maven build every time I changed the color of a span on a page.

The structure smells very much like an ASP.NET MVC application. There’s an app folder and beneath that you have controllers, models, and views. Each controller class is just a Java class that inherits from a base controller and is responsible for rendering a page.

There’s a routing table that works very much like ASP.NET MVC’s internal routing tables, but it doesn’t require me to write code to generate routes the way MVC does; the routes live in a text file, as with Ruby and Grails:

# Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~

# Home page
GET     /                                       Application.index

# Ignore favicon requests
GET     /favicon.ico                            404

# Map static resources from the /app/public folder to the /public path
GET     /public/                                staticDir:public

# Catch all
*       /{controller}/{action}                  {controller}.{action}

There’s a great deal of flexibility in this routes file that I haven’t covered. If you’re interested, head over to the Play! website and check the documentation and tutorials which are actually pretty good.

The Play! template language looks very similar to ASP.NET MVC as well, allowing you to blend HTML elements, function calls, variables, and more all in one fairly seamless HTML file. It’s not quite as concise as the ASP.NET MVC Razor syntax, but not as ugly as the old non-Razor syntax:

#{extends 'main.html' /}
#{set title:'Hello World' /}

Greetings, ${userName}<br/>

A view like this is made possible by calling render(userName) inside a controller. Note that unlike a property bag style usage from ASP.NET, I don’t have to give the userName variable a key – the view template knows that variable name implicitly. If I passed in a complex object such as user, then I could do things like ${user.name} in my template view.

The #{extends 'main.html' /} tag works very much like master pages or content placeholders in the ASP.NET world. Main.html has some wrapper markup and then identifies where extensions can take place. The hello.html content will appear wherever that content extension is indicated. You can get fairly advanced: a template can have multiple content placeholders, and you can chain extensions like master page inheritance.

Finally, the other thing I liked about Play! is that it uses a built-in dependency management system for the framework itself but then resorts to Maven for resolving external dependencies. So, if my application depends on some other artifact that is floating around in a public nexus, I can just add a line like this to my dependencies.yml file:

require:
    - play
    - org.ektorp -> org.ektorp 1.2.1

When I run play dependencies from the command line, the dependencies are resolved, downloaded, and stored in my lib directory, along with all transitive dependencies. To be honest, if I had to manage my own dependencies for an MVC framework, I would never have even made it to “hello world”.

Overall I’m fairly impressed with Play! and will be continuing to play around with it and see what it can do and how it might be limited (or superior) to other MVC frameworks I’ve used in the past.

Microsoft and the Movement of Cheese

As I prepare myself to step into a new “day job” role, one that has influence over, and is influenced by, many non-Microsoft languages and platforms but also includes MS, I can feel my perspective shifting toward a wider, more open world of development tools, languages, frameworks, and platforms.

This perspective shift isn’t just because of the new position. In fact, I have been feeling the effects of this shift for the last few years but lately the shift has been happening faster and its effects are more noticeable. I saw a link to the following blog post on Twitter this afternoon and felt compelled to add my own feelings to it:

http://realworldsa.blogspot.com/2011/11/microsoft-moved-my-cheese-again-and-i.html

The title of the blog post is “Microsoft moved my cheese again and I don’t care to find it”. I can’t describe my feelings any better than the author of that blog post did, so I highly recommend you read it.

As I look back at my career, I am struck by the fact that in 2000, I grabbed hold of the first pre-alpha bits of C# and the .NET Framework. I absorbed it, was obsessed by it, and wrote code in Notepad before Visual Studio .NET 1.0 was released in beta. Since then, I have spent an absolutely insane amount of energy devoted to remaining on the bleeding edge of Microsoft’s technology.

I have soaked up like a sponge creations such as ADO.NET, all versions of WCF, all versions of Workflow Foundation, WPF (something I still love), XAML for Windows 8, and many other things. I spent countless nights writing code for “Cicero” which would eventually become ASP.NET MVC 1.0. I spent months writing DSLs and using the tools that were part of this nebulous thing called Oslo … only to have Oslo retired, dismantled, and sent out to pasture. All of that work evaporated.

I was there when OData wasn’t called OData. I was there when Azure was a horrible pain in the ass to use (as opposed to the moderate pain in the ass it is now). I was there when Silverlight 1.0 came out and you couldn’t write C# code with it, I was there (and rejoiced) when Silverlight 2.0 had C# support, and I was there when the web collectively referred to Silverlight as “dead” technology. I was there for SharePoint 2007 (wrote a book on that, too). I was there for ASP.NET 4 (and wrote a book). Most recently I have been there, lauding, praising, lecturing, and blogging about Windows Phone 7. I’ve been there, through thick and thin, for all of it for the past 11 years.

I have lost “my religion” as it were. This doesn’t mean that I’m a Microsoft-basher by any stretch. Far from it. I still think that for desktop LOB applications with rich UX, WPF is hands down the best option available. But it has earned that status as a result of objective analysis. Microsoft doesn’t get the benefit of my doubt anymore and they haven’t for a long time.

What this means is that I don’t want to work with people who think that Microsoft is the only producer of development frameworks. Conversely, I don’t want to work with people who think that Java is the one true language, the one language to rule them all. To survive, thrive, and produce compelling software today you need to be aware that there isn’t one tech, there are tons of techs. There are boutique languages, script languages, specialty languages, universal frameworks and specialty frameworks.

In the past, if Microsoft moved my cheese, I would simply dump what I was doing, go grab the new bits, and re-do it all over again with the new cheese. Now, the only reason I will stay on the bleeding edge of a Microsoft (or any other) technology is because I can see clear business value in the benefits this new cheese has to offer. Gone are the days when Microsoft releases “new technology A” and I drop everything to learn it simply because Microsoft released it.

I have neither the time nor the energy to waste on efforts like that. I’m too busy building real-world, tangible, production-ready, reliable systems with leading edge technology. I’m not going to get cut by Microsoft’s bleeding edge again and, as evidenced by the link to the blog post above, I’m not alone.

Going forward, Microsoft is going to have to do a much better job of convincing people why they should jump on the bandwagon. The old “jump on the bandwagon… because we’re Microsoft” bit doesn’t work anymore. To be fair, that same pitch doesn’t work for Apple either. I don’t mess with their alphas and betas anymore unless there’s a feature I need for something that I am building or am planning to build shortly.

Life is too short to spend it following runaway cheese.

Another Good Use for ASP.NET 4 Unleashed

I’ve been handing out free copies of my latest work, ASP.NET 4 Unleashed (you have purchased this fine piece of writing, haven’t you?) and due to its relative size, people generally start joking about the various possible uses for a tome of this magnitude when they see it, hold it, and appreciate its girth.

A few that I’ve heard include:

  • Doorstop
  • A deadly weapon in case of attempted mugging
  • Used to press flowers (this isn’t just a suggestion, my girlfriend has actually done this with her copy)
  • Use to block bullets
  • A boat anchor
  • Throw in the trunk or the back of a rear-wheel drive pickup truck for added traction in the winter
  • Replacement for concrete blocks in martial arts exhibitions
  • Booster for movie and/or desk chairs
  • With 5 or 6 of them, you can make a nice fort for a cat (yes, I’ve done this, and my cat greatly appreciated it)

But without a doubt, the best use I’ve seen for this book so far has been: baby food.

ASP.NET 4 Unleashed is Yummy!

Using the ASP.NET Membership Provider and Authentication Service from Windows Phone 7

Those of you who have been building ASP.NET applications for a while now are no doubt familiar with the provider model, which includes provider abstractions for membership (authentication), roles (authorization), profiles (user data), and session state. These providers make it incredibly easy to provide a secure framework for your application. In fact, with an ASP.NET application template right out of the box, you can have a fully functioning authenticated, secure website in minutes.

What a lot of people have less familiarity with is the notion of provider services. You can actually create a WCF service head that sits on top of the ASP.NET membership system. This allows client applications to authenticate against your ASP.NET website using exactly the same authentication scheme that your users use. This means that a user who has access to your website should also be able to have access to the client application seamlessly.

To see how this works, create yourself a new ASP.NET MVC app (you can do this with a regular ASP.NET app, but I just happened to use MVC). Before doing anything, run the app, go to the “Log On” area, register yourself as a user and then quit the app. If you’ve got SQL Express installed properly and everything else is in order, your app now knows who you are.

Next, add a new WCF service to this application (call it Authentication.svc). Delete the files Authentication.svc.cs and IAuthentication.cs. Open up the Authentication.svc file and replace its contents entirely with the following:

<%@ ServiceHost Language="C#"
    Service="System.Web.ApplicationServices.AuthenticationService" %>

Note here that there’s no code-behind. All we’re doing is exposing a service that is already part of the ASP.NET framework at a specific endpoint. We’re not quite ready yet, though: we have to make this service ASP.NET-compatible, so open up your Web.config file and make sure that your system.serviceModel section looks like this:

<system.serviceModel>
  <services>
    <service name="System.Web.ApplicationServices.AuthenticationService"
             behaviorConfiguration="AuthenticationServiceBehaviors">
      <endpoint contract="System.Web.ApplicationServices.AuthenticationService"
                binding="basicHttpBinding" />
    </service>
    <service name="Wp7AspNetMembership.HelloService"
             behaviorConfiguration="AuthenticationServiceBehaviors">
      <endpoint contract="Wp7AspNetMembership.IHelloService"
                binding="basicHttpBinding" />
    </service>
  </services>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true"
                             multipleSiteBindingsEnabled="true" />
  <behaviors>
    <serviceBehaviors>
      <behavior name="AuthenticationServiceBehaviors">
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
      <behavior name="">
        <serviceMetadata httpGetEnabled="true" />
        <serviceDebug includeExceptionDetailInFaults="false" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

<system.web.extensions>
  <scripting>
    <webServices>
      <authenticationService enabled="true" requireSSL="false" />
    </webServices>
  </scripting>
</system.web.extensions>

What’s going on in this Web.config file is pretty interesting. First, we’re exposing Authentication.svc and HelloService.svc (not yet created) over HTTP, with metadata allowed, with ASP.NET compatibility enabled. We’ve also used the system.web.extensions element to indicate that the authentication service is being enabled. In a production environment, you would set requireSSL to true.

At this point, you should be able to hit the URL /Authentication.svc and see the standard metadata page. Now let’s create HelloService.svc as a standard WCF service. Change the interface so it has only a single method, HelloWorld(), that returns a string (a sketch of the trimmed-down interface appears after the implementation). The following is the implementation in HelloService.svc.cs:

using System.ServiceModel.Activation;
using System.Web;

namespace MvcApplication1
{
    [AspNetCompatibilityRequirements(
        RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
    public class HelloService : IHelloService
    {
        public string HelloWorld()
        {
            // Echo back the caller's identity, or a placeholder if the auth cookie didn't flow
            if (HttpContext.Current.User.Identity.IsAuthenticated)
                return HttpContext.Current.User.Identity.Name;
            else
                return "Unauthenticated Person";
        }
    }
}
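
For completeness, the trimmed-down IHelloService interface is just a standard WCF contract with that single method; a minimal sketch:

using System.ServiceModel;

namespace MvcApplication1
{
    [ServiceContract]
    public interface IHelloService
    {
        [OperationContract]
        string HelloWorld();
    }
}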

This is a pretty simple service. Just pretend that instead of returning the name of the authenticated user, we’re actually performing some vital business function that requires a valid user context.

Now we can get to the good stuff: the WP7 application 🙂 In this WP7 application we’re going to create a login form, submit user credentials over the wire, get validation results and, if the user was authenticated, call the HelloWorld() method with a validated, secure context – all of this with an incredibly small amount of work on the part of the WP7 client.

Add a new WP7 application to the solution (just the stock template, not the list view). I called mine TestMembershipClient, but you can pick whatever you like. Add service references to Authentication.svc and to HelloService.svc (I named the reference namespaces MvcWebAppReference for authentication and AspNetMvcRealReference for the service that simulates doing real work).

On your main page, drop a couple of text block labels, two text boxes, and a submit button. When you run the app at this point, it should look like this screenshot:

WP7 Login Form

Now rig up a click handler for the submit button with code that looks like this:

// System.Net.CookieContainer field shared by both proxies so the auth cookie
// from the login call flows to subsequent service calls
private CookieContainer cc;

private void LoginButton_Click(object sender, RoutedEventArgs e)
{
    var authService = new MvcWebAppReference.AuthenticationServiceClient();
    cc = new CookieContainer();
    authService.CookieContainer = cc;
    authService.LoginCompleted +=
        new EventHandler<MvcWebAppReference.LoginCompletedEventArgs>(
            authService_LoginCompleted);
    authService.LoginAsync(UsernameBox.Text, PasswordBox.Text, "", true);
}

void authService_LoginCompleted(object sender,
    MvcWebAppReference.LoginCompletedEventArgs e)
{
    if (e.Error != null)
    {
        MessageBox.Show("Login failed, you Jackwagon.");
    }
    else
    {
        var helloService = new AspNetMvcRealReference.HelloServiceClient();
        helloService.CookieContainer = cc; // reuse the cookie container holding the auth cookie
        helloService.HelloWorldCompleted +=
            new EventHandler<AspNetMvcRealReference.HelloWorldCompletedEventArgs>(
                helloService_HelloWorldCompleted);
        helloService.HelloWorldAsync();
    }
}

void helloService_HelloWorldCompleted(object sender,
    AspNetMvcRealReference.HelloWorldCompletedEventArgs e)
{
    MessageBox.Show("You're logged in, results from svc: " + e.Result);
}

Most of this is pretty straightforward. When the user clicks the submit button, it calls the LoginAsync method on the membership service (remember that all service calls in Silverlight are asynchronous). When we get the results of that call, we either tell the user that their login has failed, or we invoke the HelloWorld method (also asynchronously). When the hello world method comes back from the server, we display the results of the execution in a message box that looks like the one in the screenshot below:

Results of Calling a Secured Service from WP7

One thing to take very careful note of is the CookieContainer. Because we’re using two different proxies (one to talk to the authentication service and one to talk to the hello world service), we have to ensure that both proxies use the same cookie container so that the auth cookie can accompany subsequent method calls on other services. You can do this for an unlimited number of services on the same DNS domain, so long as they all share the same cookie container. To enable the use of cookie containers in WCF services in Silverlight (it’s disabled by default), you have to set the enableHttpCookieContainer property to true on the binding element in the ServiceReferences.ClientConfig file.
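
In ServiceReferences.ClientConfig, that looks something like the following (the binding name is whatever the service reference tooling generated for you; the one below is hypothetical):

<basicHttpBinding>
    <binding name="BasicHttpBinding_AuthenticationService"
             enableHttpCookieContainer="true" />
</basicHttpBinding>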

On the surface you might not think this is all that big of a deal. You might also think it’s difficult, but keep in mind that I provided a pretty detailed walkthrough. This whole thing only took me about 15 minutes to set up from start to finish once I’d figured out how the CookieContainer worked. So why bother with this?

Consider this: if you already have an ASP.NET application that uses the membership, role, and profile providers, you can quickly, easily, and securely expose services to a mobile (WP7) client, giving it secured, remote access to services exposed by that site. In short, any user of your existing web application can use their existing credentials to log in from their WP7 device and access any services you decide to make available.

ASP.NET provider services, coupled with WP7 and the fact that Silverlight has access to WCF client proxy generation, means you can very easily prep your site for a rich WP7 experience.

ASP.NET 4 Unleashed

Some of you may have heard of this little book called ASP.NET 3.5 Unleashed, written by Stephen Walther. The new version of this book, updated for ASP.NET 4.0, is now in the final stages of editing and review. Oddly enough, the new title of the book is ASP.NET 4 Unleashed.

My friend and partner in crime, Nate Dudek, and I have been working on adding new chapters to this tome (it’s something like 1800-ish pages!) as well as updating the existing chapters to show off the great new features of the ASP.NET 4.0 development platform.

ASP.NET 4.0 Unleashed Cover Art

The book is, as far as I know, scheduled to come out in October. But that certainly doesn’t prevent any of you from rushing out to pre-order your copy now!

Multi-Domain Federation with ADFS v2.0

I realize that the title of this post might not be the most fascinating, but I try to make it a point to post about things that I think will inform other developers, and there are a lot of developers working on federated security right now, especially those attempting to bridge the gap to the cloud, either by moving to Azure or by creating Azure-enterprise hybrids.

Before I get to the how, let’s talk about the why. Why is this elegant or simple? Under the hood it is anything but simple, yet the developers of individual applications within your enterprise, as well as developers of partner applications, will thank you for it: it makes their lives far simpler.

There are a couple of scenarios here, but I’ll talk about the two major ones (other scenarios are usually just variants/combinations of the below):

  1. Federation with Business Partners. In this scenario there are external business partners to whom you wish to grant access to your web app.
  2. Federation of related applications within the same enterprise, also enabling SSO. In this scenario your enterprise has many web applications and multiple active directory domains and you wish to allow anyone from either domain to access any of the web applications.

In my case I had to create something like #2, although the fact that #1 becomes possible “for free” didn’t hurt. In my scenario I had a full-blown enterprise domain that was managed by the IT people, and a separate Active Directory in which customer accounts were stored and against which customers needed to authenticate. The goal was a unified experience, so that internal employees and external customers, each authenticating against their own domain, would see the same SSO login screen. ADFS let me do that, and it actually wasn’t all that difficult once I got advice from the right people and got the hang of the terminology.

So, before I get into how I got ADFS to federate authentication of all my enterprise web applications across two domains, let’s run through some terms. If you know this already, feel free to skip ahead to the good stuff:

  • Relying Party Trust – A relying party is an application that relies on the federated security system for authentication and authorization. The application agrees to give up isolated management of user information and permissions and trusts that the federated system will take care of those details. It becomes passive and expects that information about the user’s identity and permissions will come across as claims.
  • Claims Provider Trust – Claims Provider trusts are essential for multi-domain federation. This is basically a trust relationship where you are telling your ADFS configuration that you trust that a specific set of claims can come from an external ADFS server (or other standards-compliant token server). Here we might create a claims provider trust from the employee ADFS to the customer ADFS, but I’ll explain more on that later.
  • Attribute Store – Attributes are bits of information that can be returned to relying parties in the form of claims. ADFS comes pre-configured to use Active Directory as an attribute store. This lets you return claims for first name, given name, e-mail address, etc. You can also create custom attribute stores or use SQL server.

When you’re looking at the ADFS configuration, you’ll see a tree menu on the left that you can use to build these trust relationships and configure these stores:

ADFS Configuration Tree

When you configure a relying party trust, and then add claims rules (a topic for yet another post), you will be able to authenticate that website using the ADFS server instead of a mechanism built into that website.

When you have a Claims Provider Trust pointing from one ADFS server to the other, ADFS will actually bring up a screen that allows the user to choose which ADFS server they want to use for authentication. If you give them friendly descriptions, you can have the ADFS server prompt users to choose between things like Internal Employee and Customer.

The tricky part here is you have to pick the one ADFS server to serve as the one that all the applications use in their Web.config files. This server will then examine its configuration and if it is possible to get claims from another provider for a particular relying party, it will prompt the user with a dropdown. The other tricky part is that on the second ADFS server (in my case, the Customer ADFS server), you need to have a relying party configuration that points back to the Employee (or ‘master’) ADFS server.
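
To give a flavor of what “pointing to” the master ADFS server looks like from an application’s perspective, here is a hypothetical WIF-era Web.config fragment (the server and application URLs are made up):

<microsoft.identityModel>
  <service>
    <federatedAuthentication>
      <wsFederation passiveRedirectEnabled="true"
                    issuer="https://adfs.mycompany.com/adfs/ls/"
                    realm="https://myapp.mycompany.com/" />
      <cookieHandler requireSsl="true" />
    </federatedAuthentication>
  </service>
</microsoft.identityModel>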

So, to recap: you have web applications (ASP.NET/ASP.NET MVC in my case, but they can be built on other frameworks) that give up the right to manage their own authorization and authentication. In doing so, they point to a central or master ADFS server within their Web.config files. This central ADFS server has a list of Claims Provider Trusts, which is basically a list of all other external (either in-building or out on the internet) ADFS servers that can provide claims. The end user, when prompted to authenticate, chooses which of these ADFS servers they want to authenticate against (they see the friendly display name, e.g. ‘Customer’ or ‘Employee’ or ‘Vendor’ or ‘Supplier’). The authentication takes place on the appropriate ADFS server, and then tokens get bounced around over HTTP and as cookies until the user is finally taken back to the secure website.

The end result is that you can have a bunch of web applications within your enterprise, out in the cloud, or within partner enterprises that can all share a pool of user credentials for authentication and authorization without any information ever leaving its own privacy scope. Business A can use an application hosted by Business B and that application can ask the web user whether they are a Business A or Business B user and authenticate them properly.

I realize that it took an awful lot of words to explain this and walk through how this kind of configuration works, but it’s for a good cause. What I’m trying to get at is that the days of having to build a plethora of applications, each with its own private silo of user information, are gone. Tools like ADFS give us the ability to create suites of interconnected apps within our enterprise, as well as to allow users from outside the enterprise to use those applications, without incurring huge maintenance overhead like complicated nightly user-FTP jobs to synchronize accounts.