Welcome to William's ekasi

2011/10/31
Tags: WP7

UPDATE: SignalR now has a full WP7 client thanks to David Fowler, so most of this post isn’t valid anymore.

I’ve taken quite a keen interest in SignalR lately.  No, I don’t have a specific business need for it, but I suspect the more I get used to it the more places I’ll think of using it.  If you don’t know what SignalR is, according to Scott Hanselman it’s “an asynchronous signaling library for ASP.NET” …which doesn’t exactly tell you much.  So my ghetto interpretation is that SignalR is a library that makes it easy for developers to leverage scalable long-running connections between clients and servers.  There are also some super cool higher-level abstractions (called Hubs) which allow you to call JavaScript methods from the server side.

Note: I’m not going to look at Hubs in this post; there’s plenty of information and samples on using them available online.

A quick look

[Image: the SignalR sample project]

SignalR comes with a sample project which shows how to work with the Hubs and a JavaScript client – super slick!  To create a SignalR application you need a client and a server.  We’ll start with the server, which we’ll host inside an MVC app.

The server

Use NuGet to add a reference to SignalR.Server.

Create a new class that represents your server logic: extend PersistentConnection and override the OnReceivedAsync method.  This method will be called whenever a client sends data to the server.

[Image: the DemoConnection class]

In the example I’m receiving a JSON payload representing a PriceUpdateData object.  Once I’ve deserialized the JSON I can do whatever I want with the data.
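A minimal sketch of what that class looks like, assuming Json.NET for the deserialization (the OnReceivedAsync signature shown matches the SignalR builds of the time and may differ in later versions):

public class DemoConnection : PersistentConnection
{
    protected override Task OnReceivedAsync(string clientId, string data)
    {
        // The client sends JSON representing a PriceUpdateData object
        var update = JsonConvert.DeserializeObject<PriceUpdateData>(data);

        // ...do whatever you want with the data, e.g. broadcast it to all connected clients
        return Connection.Broadcast(data);
    }
}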

The next step is to tell SignalR to expose this connection for clients to consume.  We do this using routes.

Add a MapConnection route in your global.asax RegisterRoutes method (make sure to include SignalR.Routing in your using statements for the MapConnection extension method).

[Image: the MapConnection route registration]

This will add a route to the routing engine so that whenever a request comes in for Server/pricing your DemoConnection class will handle it.
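The registration itself is a one-liner.  Something along these lines, with the route name and URL taken from the example above:

public static void RegisterRoutes(RouteCollection routes)
{
    // Requests to Server/pricing are handled by DemoConnection
    routes.MapConnection<DemoConnection>("pricing", "Server/pricing");

    // ...the usual MVC routes follow
}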

The Client

The client in this case is a WP7 device.  As it stands now, there is no native WP7 client available for SignalR.  Thankfully it’s not a big deal to make one work… unless you want to work with Hubs – which is why I’m not touching them in this post.  The SignalR.Client project can’t be added as a reference to a WP7 app since it’s not a Silverlight assembly, which means creating one.

But let’s first take a look at the original SignalR.Client project.

[Image: the SignalR.Client project]

Ignoring the Hubs, there’s not too much work to do in order to get this running on a WP7 device.  My first thought was just to new up a new WP7 assembly project and copy-pasta everything in, which works pretty well except for a few gotchas:

  • The TPL isn’t available by default in WP7 (there’s a NuGet package available to fill that gap though)
  • WP7 doesn’t support Tracing, so all tracing commands need to be removed
  • HttpWebRequest on WP7 doesn’t support the ContentLength property
  • The request stream needs to be closed before you can get access to the HttpWebResponse
  • WP7 doesn’t support System.Dynamic

None of those are dealbreakers except for System.Dynamic, but only in the case of Hubs.  Hubs currently make use of DynamicObject which means the copy-pasta route won’t work at all there… which is why I’m leaving it out for now :)

File –> New Project

I created a new SignalR.Client.WP7 project, and copied the required files from the original project.  Next we need to fix the WP7-specific issues:

Add the System.Threading.Tasks NuGet package for TPL support.

Wrap the Trace calls inside compiler directives (inside TaskAsyncHelper.cs):

[Image: Trace calls wrapped in compiler directives]
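Something along these lines (the actual Trace calls in TaskAsyncHelper.cs vary; WINDOWS_PHONE is the conditional compilation symbol defined in WP7 projects):

#if !WINDOWS_PHONE
    Trace.TraceError("SignalR exception thrown by Task: {0}", ex);
#endif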

Wrap the request.ContentLength = buffer.LongLength inside compiler directives (HttpHelper.cs).  While here, also close the request stream before trying to access the response stream:

[Image: the ContentLength and request stream changes]
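Roughly like this; a sketch of the two HttpHelper.cs tweaks rather than the verbatim source:

#if !WINDOWS_PHONE
    request.ContentLength = buffer.LongLength;
#endif

using (Stream requestStream = request.EndGetRequestStream(asyncResult))
{
    requestStream.Write(buffer, 0, buffer.Length);
    // WP7: the request stream must be closed before the response is requested
}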

Using the client

Now that we have a WP7 assembly we can use it in our apps.

[Image: the WP7 client code]
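A minimal sketch of the phone-side code (the Connection API matches the SignalR.Client source of the time; the URL, UI control and JSON shape are made up):

var connection = new Connection("http://myserver/Server/pricing");

connection.Received += data =>
{
    // Raw string from the server; marshal to the UI thread before touching controls
    Dispatcher.BeginInvoke(() => StatusTextBlock.Text = data);
};

connection.Start();

// Send JSON that the server deserializes into a PriceUpdateData
connection.Send("{\"Symbol\":\"MSFT\",\"Price\":26.5}");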

And that’s it. We now have communication between the phone and a server using SignalR.

You can download my SignalR.Client.WP7 project here.  I’ve also submitted a pull-request to get it added to the main project so it might be appearing as part of a NuGet package soon.


2011/10/21
Tags:

I’m sitting on a very full Boeing on the way back to Johannesburg.  I’m tired, I’m happy, I’m excited and I’m a little smarter than I was at the start of the week.  I’m also bent over my keyboard with little T-Rex arms – no working room in economy.

TechEd 2011 was a great conference.  Not only was it the first TechEd that I presented at, but we also had some fantastic international speakers over.  There’s a whole bunch of stats and things I found interesting below; just keep in mind this is my own interpretation of the sessions, nothing official!  It also doesn’t include any whiteboard sessions.

Session Breakdowns

[Chart: session breakdown, IT Pro vs Developer]

This year there were 85 IT Pro sessions and 74 Developer sessions.  A pretty even breakdown.  The Both category (only 4 sessions) is for sessions that I think both professions would have gotten benefit from.  I’ve excluded the partner summit sessions from these stats.

I distinctly remember going through the session catalogue thinking that there weren’t that many developer sessions so I’m a little surprised that the split was actually so even.

Looking a little deeper into the dev sessions though, the topics covered are a little more telling:

[Chart: developer sessions by topic]

The first thing that grabbed my attention is that there was only 1 security session for developers.  That probably resonated with me because I was the presenter of that session.  I wonder if there isn’t more of a need for developer-focused security sessions?

The next interesting bit is that SQL and BI make up more than a quarter of the developer sessions.  This is probably due to the Denali CTPs.  A lot of BI guys tend to specialize in that space, so how much other value did they get from TechEd?

There were only 8 Windows Phone sessions.  With Nokia World coming up next week I would have expected a lot more phone sessions – especially since they flew 2 international speakers out to talk about Windows Phone.

Cloud computing also seemed a little low to me given its importance.  This might have to do with the state of flux of cloud computing in South Africa though.

Other than the very low and very high sessions, it felt relatively balanced to me.  Comparing this to the IT Pro sessions shows a completely different focus:

[Chart: IT Pro sessions by topic]

Only 2 SQL sessions, compared to 13 for the developer track.  9 security sessions, compared to 1.  1 phone session, compared to 8.  There’s also a bigger variance than in the developer track.

Interesting, isn’t it?

2011/10/16

I presented two sessions at TechEd Africa this year:

  1. Hack-Proofing your Microsoft ASP.NET Web Forms and MVC Applications
  2. What code-database Gap? Introducing Project Juneau

You can download the slide decks and code samples for each of the sessions below.

Hack-Proofing

Slide Deck

Code Samples

Additional Info:

  • I cover SQL Injection in more detail in a post here.
  • CSRF is covered in more detail here.

Juneau

Slide Deck

Sample Database

Feel free to download and use as you wish!

2011/10/16
Tags: Security

Here’s a scary stat for web developers:

19.9% of all web hacks are performed using SQL Injection
Source: http://tinyurl.com/WebHackDB

[Chart: web hack statistics by attack type]

That’s huge! 1 in every 5 hacks is performed using SQL Injection, which worries me.  I really thought SQL Injection was a solved problem.

How does it work?


SQL Injection works because of the options we have for passing parameters to SQL. The simplest option is where the SQL statement that gets sent to the server actually includes the text of the parameters. An example of this is shown below:

[Image: building the SQL statement by concatenating searchCriteria]
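A sketch of the vulnerable pattern (the Products table, Name column and connection are hypothetical):

public IDataReader Search(IDbConnection connection, string searchCriteria)
{
    IDbCommand command = connection.CreateCommand();

    // The parameter text is concatenated straight into the query - this is the vulnerability
    command.CommandText =
        "SELECT * FROM Products WHERE Name LIKE '%" + searchCriteria + "%'";

    return command.ExecuteReader();
}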

In this example, the actual SQL statement is being created based on the searchCriteria string parameter. In a normal usage scenario, this would result in a SQL statement being executed by the database that looks something like:

[Image: the resulting SQL statement]
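With a harmless search term like “Beer”, and using the hypothetical query above, the database sees:

SELECT * FROM Products WHERE Name LIKE '%Beer%'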

Which is fine for the happy path, but what happens if someone comes along and uses the parameter "Beer' UNION ALL SELECT * FROM sys.tables;--"? The resulting SQL statement looks drastically different:

[Image: the injected SQL statement]
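Continuing the hypothetical query, the concatenation now produces:

SELECT * FROM Products WHERE Name LIKE '%Beer' UNION ALL SELECT * FROM sys.tables;--%'

The UNION ALL bolts a second query onto ours, and the trailing -- comments out the leftover %' so the statement still parses.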

This is still perfectly valid SQL and the database engine will execute it exactly as it should (note: this will only work if the columns match up). The database doesn’t know that it’s not supposed to execute the second part of the query. The problem is we haven’t separated the SQL query from the parameter data; we’re treating it all as one. Thankfully, this is an easy fix.

Preventing SQL Injection

The best way to prevent SQL injection is to parameterize your queries. Doing this separates your query from your parameters, meaning that SQL will know that whatever is passed in as a parameter must be treated as a value and never as part of the query. This also has the added benefit of allowing SQL to reuse execution plans (each new parameter value in the example above would otherwise result in a new query plan). Parameterizing your queries is a simple 2-step procedure.

Step 1. Replace the parameter parts of your query with parameter tokens

Our example uses the searchCriteria parameter to build up a string, so instead of changing the string we’re going to replace that section with a token:

[Image: the query using the @SearchCriteria token]
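Continuing the earlier sketch, only the CommandText changes:

// @SearchCriteria is a token, not a value; the %'s are now concatenated by SQL itself
command.CommandText =
    "SELECT * FROM Products WHERE Name LIKE '%' + @SearchCriteria + '%'";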

Notice how we aren’t passing the parameter in through the string anymore, but are instead using @SearchCriteria as part of our string? That’s the token. For each of your parameters, include a uniquely named token.

Step 2. Include the parameter’s value in the IDbCommand’s Parameters list

[Image: adding the parameter value to the command]
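Again sketched against the hypothetical command from step 1:

IDbDataParameter parameter = command.CreateParameter();
parameter.ParameterName = "@SearchCriteria";
parameter.Value = searchCriteria;   // treated strictly as a value, never as SQL
command.Parameters.Add(parameter);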

By putting the value in the command’s parameter list we have told SQL what the query is as well as what the value of the parameter is, so the value can never be interpreted as part of the query. This means that our query is no longer vulnerable to SQL Injection.

Dynamic SQL

There are times when you need to work with dynamic SQL, and it’s an often-neglected area that isn’t protected from SQL Injection. To protect dynamic SQL the same solution works: parameterizing your queries.

Yes, dynamic SQL can also be parameterized – this time, by SQL:

[Image: parameterized dynamic SQL]
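In T-SQL that means sp_executesql, along these lines (table name hypothetical):

DECLARE @sql NVARCHAR(MAX) =
    N'SELECT * FROM Products WHERE Name LIKE ''%'' + @SearchCriteria + ''%''';

EXEC sp_executesql @sql,
     N'@SearchCriteria NVARCHAR(50)',
     @SearchCriteria = N'Beer';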

Stored Procedures?

I don’t know where the rumour started about stored procedures being safe from SQL injection, but they aren’t. Stored procedures offer zero protection against SQL Injection!

Here’s an example.

[Image: executing a stored procedure with a concatenated parameter]
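A sketch of the vulnerable call (AddComment is the procedure discussed below; comment is the user’s input):

// Still plain SQL as far as the database is concerned
command.CommandText = "EXEC AddComment '" + comment + "'";
command.ExecuteNonQuery();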

We’re executing a stored procedure here, but we’re still vulnerable to SQL Injection because we’re not separating the parameters from the EXEC statement. Exactly the same as using normal SQL. Granted, in this case it is more difficult to perform SQL injection because of syntax restrictions, but this is by no means a solution. The only solution is, again, to parameterize it!

In ADO.NET there is a handy property on the IDbCommand object, CommandType, which specifies the query type. If we set it to CommandType.StoredProcedure then ADO.NET forces us to parameterize the call. It’s at that point that we get the security we’re looking for.

[Image: executing the stored procedure with CommandType.StoredProcedure]
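The safe version, sketched with the same AddComment procedure (the @Comment parameter name is assumed):

command.CommandText = "AddComment";
command.CommandType = CommandType.StoredProcedure;

IDbDataParameter parameter = command.CreateParameter();
parameter.ParameterName = "@Comment";
parameter.Value = comment;
command.Parameters.Add(parameter);

command.ExecuteNonQuery();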

Notice how here we don’t even have to specify the parameter tokens in the stored procedure name (AddComment)? That’s because ADO.NET tells SQL that we’re executing a particular stored procedure so the parameters have to match up. But remember, the security isn’t coming from the stored procedure – it’s coming from the parameterization!

2010/01/01

ASP.NET MVC includes an Html helper method called AntiForgeryToken.

The AntiForgeryToken is intended to prevent cross-site request forgery (CSRF – pronounced Sea-Surf) attacks.  CSRF attacks rely on the fact that the user’s credentials are stored in cookies and are still valid.

While the user’s credentials are still valid at site X, the CSRF attacker will trick the browser into performing actions at site X while the user is browsing a different site.

Example:

William is logged into CodePlex.  The authentication cookie for CodePlex is kept for a really long time and whenever the browser performs requests for CodePlex, the cookie is sent down to the site and CodePlex trusts that William is trying to do something on the forums.

[Image: the CodePlex authentication cookie in the browser]

So now our requirements are met:

  1. The authentication token is stored in the browser’s cookie
  2. The authentication token is still valid

So how did William get exploited?  Well, it didn’t happen on CodePlex.  William also frequents another forum: the Justin Bieber fan club forum.  One of the threads William was visiting had a lot of <img /> tags in it.  The sources of these tags weren’t all images though.  One of the tags looked like this:

[Image: the malicious <img /> tag]
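Reconstructed from the story below (the DeleteEverything action is made up, as the note later admits), it would have been something like:

<img src="http://www.codeplex.com/OpenPOS/DeleteEverything" />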

And this is where it happened.

The browser saw a request to CodePlex and passed William’s credentials along to CodePlex.  CodePlex said “Ah, you’re William.  He’s cool, we trust him.  Let’s delete everything from OpenPOS like he asked”.

And without even knowing it, William had deleted OpenPOS from CodePlex.

Note: Yes, I know there is no “DeleteEverything” on CodePlex projects.

Prevention:

Preventing this type of attack is incredibly simple in ASP.NET MVC.  There’s an Html helper method called AntiForgeryToken() which generates a token.

[Image: AntiForgeryToken usage in the view]
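In an ASPX view it sits inside the form, roughly like this (the controller and action names are placeholders):

<% using (Html.BeginForm("Delete", "Projects")) { %>
    <%= Html.AntiForgeryToken() %>
    <input type="submit" value="Delete" />
<% } %>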

The action method on the controller is then decorated with the ValidateAntiForgeryToken attribute.

[Image: the ValidateAntiForgeryToken attribute on the action]
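Something like this (the action name and body are placeholders):

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Delete(int id)
{
    // If the tokens don't match, the attribute rejects the request before we ever get here
    projectService.Delete(id);
    return RedirectToAction("Index");
}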

And that’s it. We’re secured.  But how does this work?

The token that is generated is stored in a hidden form element which is sent down to the browser as part of the markup (similar to the ViewState).  Then, the token is encrypted and that encrypted value is stored in a cookie sent to the user.

[Image: the anti-forgery token in the hidden form field and cookie]

With this set up, each time the form is submitted the cookie’s value and the hidden form element’s value are compared.  If they don’t match, we can assume the request isn’t coming from a valid source and we get a nice error page.

[Image: the anti-forgery validation error page]

Should this type of error be handled gracefully by your application? I don’t think so. This type of error results from the values not matching, which can only come from a CSRF attack or from the server’s machine key changing during an IIS reset (you do have a machine key defined in your web.config, don’t you?).

2010/01/01
Tags:

I’m using FeedBurner as my RSS feed provider, and one of the things that bothered me a little was that the title of my feed coming through from the FeedBurner RSS was set to “Development”.  This meant that subscribers had no way of telling my feed apart from any others.  It’s an easy fix though, and one that is right on the FeedBurner FAQ page.

Step 1: Log into feedburner and select your feed

Step 2: Go to Optimize

Step 3: Add a title and description under Title/Description Burner

Step 4: Done!

Simple, no?

2010/01/01
Tags:

What?


This came directly from the dictator Product Owner of a project I have just started working on.  Well, if we were doing Scrum, he would have been the ideal candidate for the Product Owner role – he met all the requirements.

His theory was that the project was already in crisis stage (hence bringing extra resources onboard) and there wasn’t time for the overhead that Scrum brings.  Which makes me wonder if he had ever actually been involved in an effective Scrum team before.  I mean sure, he knew the buzzwords and threw them around in the right context, but what experience has he been through that makes him think Scrum would slow the process down?  Let’s take a look at the Scrum rituals:

Planning

Scrum usually involves some form of group planning (e.g. Planning Poker) which, admittedly, does take some time.  The time involved should be time-boxed, but it is still time.  This could be one of the Scrum aspects he feared would take up too much time, and it’s a valid concern.  For a manager who isn’t actually a developer, it can be difficult to see any value in your developers sitting around not developing.

Planning, like everything in Scrum, should be time-boxed though.  And even if it weren’t, the project has brought on numerous new resources, and a detailed planning session involving the whole team would make knowledge transfer between the new team and the existing team much faster.

Daily Standups

15 minutes a day isn’t a big chunk of time – even if you’re in crisis mode.  It is, however, an important part of the focus and streamlining that Scrum promises.  This little daily ritual is probably one of the biggest contributing factors to Scrum’s success.

Reviews

Nothing brings clarity to the “are we there yet” question like a full review and demo to all the stakeholders. Nothing.  If the team hasn’t done anything worth demoing, the public display of nothingness will help motivate the team not to let it happen again.  Reviews take time, but add far more value in terms of transparency than they consume.

Retrospectives

A team under crisis is going to be under more pressure and a retrospective is the ideal outlet for frustrations and difficulties.  It’s also a place for the product owner to shower the team with praise and thank them for all the effort.

But this is a crisis!

A crisis is possibly one of the best fits for Scrum.  It adds lightweight structure to what would otherwise be chaos.  It gives the team defined milestones (sprints) that they can use to measure progress against deadlines.

It also has the side effect of almost always being story-based already.  Most crisis projects I’ve been involved with in the past have had their project plans converted into action lists, which with some minor tweaking can be made into good user stories.

2010/01/01
Tags:

It can sometimes be tricky to work out what to mock when testing some of the PRISM components, and the IEventAggregator interface is one of the tricky ones.  Luckily for us, most mocking tools provide some helper methods which make it easy to test that your subscriptions and publications are working correctly.

I’m going to use Rhino Mocks for this post.  Rhino Mocks has an extension method placed on all mocked objects called VerifyAllExpectations.  With Rhino Mocks you create mock objects and then proceed to tell them how to behave using the Expect() extension method.  Once you have told the object how to behave, you can use the VerifyAllExpectations() extension method to make sure that the methods you expected to be called actually were called.

An example:

[Image: the class under test]

Here I want to test that the Subscribe method is called ONCE (and only once) from the constructor.
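For context, a sketch of the kind of class under test (the EventAggregatorDemo name comes from part 2 of this series, and LoginSuccessful is assumed to be a CompositePresentationEvent<bool>):

public class LoginSuccessful : CompositePresentationEvent<bool> { }

public class EventAggregatorDemo
{
    public EventAggregatorDemo(IEventAggregator eventAggregator)
    {
        // The behaviour we want to verify: exactly one Subscribe from the constructor
        eventAggregator.GetEvent<LoginSuccessful>().Subscribe(OnLoginSuccessful);
    }

    private void OnLoginSuccessful(bool success)
    {
        // react to the login result
    }
}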

To do this I need to mock a couple of things out:

  1. I need to mock an IEventAggregator which should expect a call to GetEvent<LoginSuccessful>()
  2. I need to mock out a LoginSuccessful event which should expect ONLY ONE call to Subscribe()

Once we’ve mocked those two and set up the expectations correctly, we can call VerifyAllExpectations to ensure that they were called as we expected them to be.

[Image: the subscription test]
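A sketch of that test in Rhino Mocks (depending on your Prism version you may need to expect a different, virtual Subscribe overload):

[TestMethod]
public void Constructor_SubscribesToLoginSuccessful_ExactlyOnce()
{
    // Mock the event and expect exactly one Subscribe call
    var loginEvent = MockRepository.GenerateMock<LoginSuccessful>();
    loginEvent.Expect(e => e.Subscribe(Arg<Action<bool>>.Is.Anything))
              .Return(null)
              .Repeat.Once();

    // Mock the aggregator and have it hand out the mocked event
    var eventAggregator = MockRepository.GenerateMock<IEventAggregator>();
    eventAggregator.Expect(e => e.GetEvent<LoginSuccessful>()).Return(loginEvent);

    var demo = new EventAggregatorDemo(eventAggregator);

    eventAggregator.VerifyAllExpectations();
    loginEvent.VerifyAllExpectations();
}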

And that’s it.  If we change our original code to not call the Subscribe method, the test fails because we told our mock object to expect a call to it.  If we change the code to make two calls to the Subscribe method, the test fails again because we told our mock to expect exactly one call.

Next time: How to check that Publishing works properly

2010/01/01
Tags:

Part 1, on testing the subscriptions, is here.

Now that we’ve tested that our event aggregator is subscribed correctly, all that remains is to test that the code behaves as expected when a LoginSuccessful event is published.  In order to check whether the event was successfully handled, I’ve changed our demo code to set a boolean to true once a LoginSuccessful event is published.

[Image: the updated demo code]
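The change is small.  A sketch of the updated class (the property name is assumed):

public class EventAggregatorDemo
{
    public bool LoginWasSuccessful { get; private set; }

    public EventAggregatorDemo(IEventAggregator eventAggregator)
    {
        eventAggregator.GetEvent<LoginSuccessful>()
                       .Subscribe(success => LoginWasSuccessful = success);
    }
}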

Now to test this, all we have to do is publish a LoginSuccessful event through an event aggregator and then assert that our boolean value is set to true.  In our test we create a REAL event, a REAL demo class (the one we’re testing) and a fake event aggregator.

We tell the aggregator to return our real event when asked for one.  Then we force our event to publish a success case and assert against the results.

[Image: the publish test]
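A sketch of the test (Stub is Rhino Mocks; the names match the sketches above):

[TestMethod]
public void Publish_LoginSucceeded_SetsBooleanToTrue()
{
    // A REAL event, and a fake aggregator that returns it
    var loginEvent = new LoginSuccessful();
    var eventAggregator = MockRepository.GenerateMock<IEventAggregator>();
    eventAggregator.Stub(e => e.GetEvent<LoginSuccessful>()).Return(loginEvent);

    var demo = new EventAggregatorDemo(eventAggregator);

    // Force the real event to publish a success case
    loginEvent.Publish(true);

    Assert.IsTrue(demo.LoginWasSuccessful);
}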

For completeness we need to test the opposite: a login that is not a success results in a false boolean.

[Image: the negative test]

And that’s our event aggregator logic in the EventAggregatorDemo class tested.  Obviously this example was simple, but the concepts follow through to real-world scenarios as well.

2010/01/01
Tags:

I’m working on a new project which has a few unique security requirements for membership providers:

  • A password can’t be reused until at least 4 other passwords have been used
  • One-time PINs must be used for authentication
  • User accounts must have validity periods that administrators can set (e.g. this account is valid between 1-Jun-2010 and 14-Jun-2010)
  • Accounts must be auto-disabled after 90 days of inactivity
  • Passwords have to be changed every 90 days

Rather than hardcode the values (4, 90, 90) I decided it would be best to make them parameters which the membership provider could simply read from the config.

While this is such a simple thing to do it did throw me a curve ball before I realized what was going on.
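For context, the value comes straight off the provider registration in web.config.  Something like this, where every name other than PasswordHistoryNumber is a placeholder:

<membership defaultProvider="CustomSqlMembershipProvider">
  <providers>
    <add name="CustomSqlMembershipProvider"
         type="MyApp.Security.CustomSqlMembershipProvider"
         connectionStringName="ApplicationServices"
         PasswordHistoryNumber="4" />
  </providers>
</membership>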

The setup:
/// <summary>
/// Initializes the config values
/// </summary>
/// <param name="name">The name that the provider has been given</param>
/// <param name="config">The NameValueCollection from the config settings</param>
public override void Initialize(string name, System.Collections.Specialized.NameValueCollection config)
{
	string passwordHistoryNumberConfig = config["PasswordHistoryNumber"];

	if (!string.IsNullOrEmpty(passwordHistoryNumberConfig) && int.TryParse(passwordHistoryNumberConfig, out _passwordHistoryNumber))
	{
		// No need to do anything here
	}
	else
	{
		// Either the config was empty or it wasn't a valid int.  Either way, default to 4
		_passwordHistoryNumber = 4;
	}

	base.Initialize(name, config);
}

Simple enough: read the value from the NameValueCollection and you’re done, right?  Unfortunately not.

The Problem

If you run this and step through the method with a debugger everything seems to be working fine, but as soon as your page renders you’ll get a horrible “Server error in application” window.

[Image: the “Server error in application” page]

A quick look inside the web.config shows that there is no custom configuration section defined specifically for membership providers, so where does the membership provider get its list of valid attributes?  Firing up Reflector on SqlMembershipProvider.Initialize gives us a method (an untidy one) that looks similar to:

public override void Initialize(string name, NameValueCollection config)
{
    HttpRuntime.CheckAspNetHostingPermission(AspNetHostingPermissionLevel.Low, "Feature_not_supported_at_this_level");
    if (config == null)
    {
        throw new ArgumentNullException("config");
    }
    if (string.IsNullOrEmpty(name))
    {
        name = "SqlMembershipProvider";
    }
    if (string.IsNullOrEmpty(config["description"]))
    {
        config.Remove("description");
        config.Add("description", SR.GetString("MembershipSqlProvider_description"));
    }
    base.Initialize(name, config);
    this._SchemaVersionCheck = 0;
    this._EnablePasswordRetrieval = SecUtility.GetBooleanValue(config, "enablePasswordRetrieval", false);
    this._EnablePasswordReset = SecUtility.GetBooleanV.....

And a little lower down the method we also see:

config.Remove("connectionStringName");
config.Remove("connectionString");
config.Remove("enablePasswordRetrieval");
config.Remove("enablePasswordReset");
config.Remove("requiresQuestionAndAnswer");
config.Remove("applicationName");
config.Remove("requiresUniqueEmail");
config.Remove("maxInvalidPasswordAttempts");
config.Remove("passwordAttemptWindow");
config.Remove("commandTimeout");
config.Remove("passwordFormat");
config.Remove("name");
config.Remove("minRequiredPasswordLength");
config.Remove("minRequiredNonalphanumericCharacters");
config.Remove("passwordStrengthRegularExpression");
config.Remove("passwordCompatMode");

if (config.Count > 0)
{
	string key = config.GetKey(0);
	if (!string.IsNullOrEmpty(key))
	{
		throw new ProviderException(SR.GetString("Provider_unrecognized_attribute", new object[] { key }));
	}
}

So that's it! The SqlMembershipProvider expects that all config values will be removed from the in-memory config object, and if there are any left it wasn't a valid config element. I can see the logic there... wanting to enforce that only valid attributes are placed on the config element.

That also means it’s an easy fix to change our custom provider:

/// <summary>
/// Initializes the config values
/// </summary>
/// <param name="name">The name that the provider has been given</param>
/// <param name="config">The NameValueCollection from the config settings</param>
public override void Initialize(string name, System.Collections.Specialized.NameValueCollection config)
{
	string passwordHistoryNumberConfig = config["PasswordHistoryNumber"];

	if (!string.IsNullOrEmpty(passwordHistoryNumberConfig) && int.TryParse(passwordHistoryNumberConfig, out _passwordHistoryNumber))
	{
		// Value parsed successfully
	}
	else
	{
		// Either the config was empty or it wasn't a valid int.  Either way, default to 4
		_passwordHistoryNumber = 4;
	}

	// Remove the PasswordHistoryNumber attribute so the base Initialize doesn't treat it as unrecognized
	config.Remove("PasswordHistoryNumber");

	base.Initialize(name, config);
}

And that’s it!

2010/01/01
Tags:

I can’t be the only person who thinks that views in MVC get way too messy way too easily – especially when grids start taking over the world.  Take this as an example:

<%= Html.Grid(Model.Merchants)
	.Columns(column => {
		column.For(x => Html.ActionLink("Select", "Details", new { id = x.MerchantId })).Named(string.Empty);
		column.For(x => x.Name).Named("Name");
	}) %>

This is such a simple grid, but the view looks messy already.

MvcContrib exposes the Grid helper method from the MvcContrib.UI.Grid namespace.  The grid builds columns using the lovely syntax shown above… but there is an alternative:

Custom GridModel classes

The Grid helper exposes another extension method called WithModel which accepts a GridModel class that we can use in our grids.  If we create one for the grid above it looks like this:

public class MerchantIndexGridModel : GridModel<Merchant>
{
    public MerchantIndexGridModel(HtmlHelper html)
    {
        Column.For(x => html.ActionLink("Select", "Details", new { id = x.MerchantId })).Named(string.Empty);
        Column.For(x => x.Name);
    }
}

Which we then use in our view like this:

<%= Html.Grid(Model.Merchants).WithModel(new MerchantIndexGridModel(Html)) %>

Much cleaner. I like it.

2010/01/01
Tags:

What a pain!

If a controller’s action method returns RedirectToAction(“NewView”) the result is a RedirectToRouteResult.  This is good.

Unless the routing engine is actually in play (which it isn’t while we’re unit testing) the actual NewView action method won’t be called.  Which is kind of good, because it forces you to test one action method at a time.  But what if your action method can return multiple different RedirectToRouteResults?

public ActionResult Index()
{
    var viewModel = GetViewModel();
    
    if (SomethingNotRight(viewModel))
    {
        // redirect to error page
        return RedirectToAction("Error");
    }
    else if (SomethingDifferent(viewModel))
    {
        // redirect to details page
        return RedirectToAction("Details");
    }

    return View("Index", viewModel);
}

At first glance our test would look something like this:

[TestMethod]
public void Index_WithZeroItems_ReturnsErrorView()
{
    // Arrange
    ItemIndexViewModel mockViewModel = MockRepository.GenerateMock<ItemIndexViewModel>();
    mockViewModel.Expect(m => m.GetItems()).Return(new List<Item>()).Repeat.Any(); // should redirect to Error

    var controller = new ItemsController();
    controller.ItemIndexViewModel = mockViewModel;

    // Act
    var result = controller.Index();

    // Assert
    Assert.IsInstanceOfType(result, typeof(RedirectToRouteResult));
}

So we have asserted that a RedirectToRouteResult has been returned, but which one?

The RedirectToRouteResult exposes a RouteName property and a RouteValues property.  The RouteName doesn’t help us here, but the RouteValues property is a dictionary populated with the name of the action we redirected to.  So to check which action we’re redirecting to, we simply have to check the action name within the dictionary.  Our test method becomes:

[TestMethod]
public void Index_WithZeroItems_ReturnsErrorView()
{
    // Arrange
    ItemIndexViewModel mockViewModel = MockRepository.GenerateMock<ItemIndexViewModel>();
    mockViewModel.Expect(m => m.GetItems()).Return(new List<Item>()).Repeat.Any(); // should redirect to Error

    var controller = new ItemsController();
    controller.ItemIndexViewModel = mockViewModel;

    // Act
    var result = controller.Index();

    // Assert
    Assert.IsInstanceOfType(result, typeof(RedirectToRouteResult));
    Assert.AreEqual("Error", (result as RedirectToRouteResult).RouteValues["action"]);
}

So we tested that the CORRECT RedirectToRouteResult is returned, just as expected.

2010/01/01
Tags:

Oh StyleCop, how I loathe thee…

StyleCop is generally ok except when it isn't.

My BIGGEST pet peeve with StyleCop is its stupid requirements for property documentation.  If it’s a read-only property it HAS to start with “Gets “, and if it’s not read-only it HAS to start with “Gets or sets “.  How irritating.  Most of the time my properties are well named (should be?) and aren’t used for anything complex.  As soon as a property starts bloating up with logic I begin to think perhaps it should be a method.
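For illustration, this is the shape StyleCop insists on (the properties are hypothetical):

/// <summary>
/// Gets or sets the name of the customer.
/// </summary>
public string CustomerName { get; set; }

/// <summary>
/// Gets the date the customer was created.
/// </summary>
public DateTime CreatedOn { get; private set; }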

Enter: GhostDoc

I was introduced to GhostDoc by Rob MacLean and Rudi Grobler, who overheard me moaning about StyleCop’s annoying requirements.  What an amazing tool; this definitely goes into the list of must-haves in my dev environment!

With a simple Ctrl + Shift + D, GhostDoc automagically spits out the proper comments required by StyleCop and, best of all, the comments make sense!  I’m in love!

All this documentation was generated by GhostDoc, not a word typed by me.

[Image: GhostDoc-generated documentation]

2010/01/01
Tags:

FxCop wants all assemblies in your project to be signed.  Sometimes though you have to work with 3rd party assemblies that aren’t signed.  If you can contact the developer of the component, ask them to sign it.  Most of the time they won’t mind.

If they’re not willing or able to, and you have access to the source code, you can sign it yourself and recompile.

If you don’t have access to the source code follow these steps:

  1. Disassemble the assembly using: ildasm /all /out=TheAssembly.il TheAssembly.dll
  2. Reassemble the assembly with a signed key: ilasm /dll /key=TheAssembly.snk TheAssembly.il

That should work fine for most scenarios.
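If you don’t already have a key pair, sn.exe will generate one.  The full sequence looks like this (file names carried over from the steps above):

sn -k TheAssembly.snk
ildasm /all /out=TheAssembly.il TheAssembly.dll
ilasm /dll /key=TheAssembly.snk TheAssembly.il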

See here for more info.

2010/01/01
Tags:

<head runat="server"> doesn’t go through normal ASP.NET processing. So when you use something like

<link href='<%= Url.Content("~/Content/Site.css") %>' rel='stylesheet' type='text/css' />

You’ll end up with something like

<link rel="stylesheet" type="text/css" href="../Views/Shared/%3C%25=%20Url.Content(%22~/Content/Site.css%22)%25%3E" />

which clearly isn’t what we wanted.  The reason for this is that <link /> elements get special processing inside <head runat="server">, so the <% %> code nuggets don’t work.  Thankfully it’s a quick fix:

Take advantage of the special processing and simply set the href to '~/Content/Site.css'.  The processing engine will resolve the URL correctly.

<link href='~/Content/Site.css' rel='stylesheet' type='text/css' />

That will resolve to what we expect.

2010/01/01
Tags:

With MVC 3 being released last week (13 Jan 2011) we were given the option of using the new Razor view engine.  How exactly does the Razor view engine work?  I’m not talking about the syntax; I want to know how we go from the Razor syntax to the angle brackets of HTML.

[Image: a Razor view mixing Razor syntax and HTML]

To start with, we have our Razor syntax mixed up with HTML markup in a very neat mashup.  It looks easy enough to convert, but let’s take a look at the next level down the ASP.NET pipeline to see what happens.

Take a look at the call stack when we put a breakpoint in a Razor view:

[Image: the call stack while rendering a Razor view]

Start from System.Web.Mvc.BuildManagerCompiledView.Render(…) and work up.  The call stack goes from BuildManagerCompiledView.Render to RazorView.RenderView to a WebPageBase class.  The interesting part of this whole call stack is that the view being rendered subclasses WebPageBase.  And it turns out we can change the base class that our views subclass from.  There’s some more detail here from K. Scott Allen.

So we know now that our views become classes which subclass either WebViewPage or whatever base class we tell MVC to use.  That still doesn’t explain exactly how the view becomes one of these classes.

Enter three classes:

  • RazorEngineHost
  • RazorTemplateEngine
  • CSharpCodeProvider

Starting from the bottom: the CSharpCodeProvider allows us to take a System.CodeDom.CodeCompileUnit and generate both the C# code and a .NET assembly from it.  To get the CodeCompileUnit that it needs, we use the RazorTemplateEngine.

The RazorTemplateEngine allows us to convert the Razor syntax into a CodeCompileUnit by calling its GenerateCode method.  This method takes a TextReader (there are 8 overloads) and converts the content into a CodeCompileUnit.

The RazorEngineHost is used to configure the output of the RazorTemplateEngine.  Using this, the namespaces, class names, imports etc. are defined.
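Putting the three together, here’s a sketch of hosting the compilation yourself (API names are from System.Web.Razor and CodeDom; details vary between Razor versions):

var host = new RazorEngineHost(new CSharpRazorCodeLanguage())
{
    DefaultBaseClass = "System.Web.Mvc.WebViewPage",
    DefaultNamespace = "CompiledViews",
    DefaultClassName = "HelloView"
};

var engine = new RazorTemplateEngine(host);
GeneratorResults results;
using (var reader = new StreamReader("Hello.cshtml"))
{
    // Razor text -> CodeCompileUnit
    results = engine.GenerateCode(reader);
}

// CodeCompileUnit -> C# -> .NET assembly
var compilerParameters = new CompilerParameters { GenerateInMemory = true };
var compiled = new CSharpCodeProvider()
    .CompileAssemblyFromDom(compilerParameters, results.GeneratedCode);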

So the final process of going from a Razor file to a .NET CLR assembly goes like this:

[Image: the Razor file to CLR assembly pipeline]

And now that we have a CLR assembly of our view we can do some REALLY cool things with it.  The few that really excite me are:

  • Unit test our views (the actual HTML output) WITHOUT needing to instantiate the ASP.NET runtime
  • Put views in a separate assembly and reference them in our web project
  • Use dependency injection to inject controls (or complete views) into views (security personalisation!!!)

If you’re interested in doing a little more of this yourself, Andrew Nurse did a very cool post (and a couple of presentations) on how to host Razor outside of ASP.NET, which explains a few details on how to perform the compilation of these views yourself.  I highly recommend going through it.