Given that I haven’t posted for nearly a week and a half, my regular readers (live the dream Mitch) would be forgiven for thinking that I have fallen off the face of the Earth. I am happy to report that this is not the case and that I have simply been extremely busy.

At the moment I am actively engaged on three separate projects for three separate government departments. One of the projects sees me doing a fair bit of Team Foundation Server work, ranging from installation to customisation to developer education.

Another project has me doing a lot of BizTalk work. To date it has been mostly infrastructure-focused work, but that should hopefully be finalised now. We ran some load tests and were able to meet our required sustained load level without too much trouble.

Actually, I ended up with two neat tools: one drives load into a WebSphere MQ message queue at a sustained rate, which can be ramped up interactively while the program is running. The other tool watches another queue and calculates how fast messages are arriving. With these two tools I can determine fairly accurately whether BizTalk can output messages as fast as it sucks them in.
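The arrival-rate calculation behind the second tool is simple enough to sketch. Below is an illustrative sliding-window rate monitor (written in Java here, not the actual tool's code; the class name and window size are my own invention) that shows the idea of measuring sustained throughput, leaving out the WebSphere MQ plumbing:

```java
import java.util.ArrayDeque;

// Illustrative sketch only (not the real MQ watcher): tracks message
// arrival timestamps inside a sliding window and reports the rate.
public class RateMonitor {
    private final ArrayDeque<Long> arrivals = new ArrayDeque<>();
    private final long windowMillis;

    public RateMonitor(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Call once for each message taken off the queue.
    public void recordArrival(long nowMillis) {
        arrivals.addLast(nowMillis);
        // Discard timestamps that have fallen outside the window.
        while (!arrivals.isEmpty()
                && nowMillis - arrivals.peekFirst() > windowMillis) {
            arrivals.removeFirst();
        }
    }

    // Messages per second over the current window.
    public double ratePerSecond() {
        return arrivals.size() * 1000.0 / windowMillis;
    }
}
```

Comparing this figure on the outbound queue against the rate being driven into the inbound queue tells you whether the middleware is keeping up.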

The third project is a short, sharp review project where I am analysing a body of code to reveal potential problems. Rather than try and organise all the source on my laptop (it's a VS2003 project and I am now exclusively using VS2005), I just got the customer to send me their compiled assemblies. I then used .NET Reflector and the Code Metrics plug-in to focus my investigation activities.

The Code Metrics plug-in will report the cyclomatic complexity and code size of the methods within an assembly (amongst other things), which is useful for finding bugs. Code with a large number of decision points often hides logic bugs or sloppy coding practices, and when you balance that against the code size of a method you can fairly quickly take a tour of the worst neighbourhoods in your code base.
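For those who haven't met the metric: each decision point in a method adds one to its cyclomatic complexity. The made-up method below (sketched in Java purely for illustration) has three decision points, giving a complexity of four; real offenders tend to have counts in the dozens:

```java
public class Complexity {
    // Three decision points below, so cyclomatic complexity = 3 + 1 = 4.
    static String classify(int n) {
        if (n < 0) return "negative";   // decision point 1
        if (n == 0) return "zero";      // decision point 2
        if (n % 2 == 0) return "even";  // decision point 3
        return "odd";                   // the remaining path
    }
}
```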

One interesting side effect of this process was that I found a method that builds up a list of parameters on a SqlCommand object. It showed up in the cyclomatic complexity analysis because it used a lot of conditional logic to decide which of the parameters to populate, but down the bottom of the method I spotted a parameter called @OrderByClause.

This one rang a few alarm bells: it potentially meant that some user input was being passed unmolested all the way down to the stored procedure, and the variable name suggested it might be used as part of a dynamic SQL execution routine.

I spent some time with the customer going over the other issues in the code, but when we got around to this one we decided to jump into the code base and find out whether we had any protection in the system at all. Working back from the stored procedure, we got all the way up to user interface code in an ASP.NET project. Fortunately we found one little routine that, by virtue of the way it had been implemented, defends against a SQL injection attack (phew).

Of course, you can’t always rely on that web layer being there, especially if some of the functionality is later exposed via web services, so the mitigation strategy is to apply some defence-in-depth coding techniques and clean the string before it reaches the database.
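For something like a dynamic ORDER BY, the usual defence-in-depth technique is to whitelist the input against known column names rather than try to scrub hostile characters out of it. A minimal sketch of the idea (in Java; the class, the column names, and the fallback default are all hypothetical, not from the customer's code):

```java
import java.util.Set;

// Hypothetical defence-in-depth guard: only a known-good column name
// ever reaches the dynamic ORDER BY; anything else gets a safe default.
public class OrderByGuard {
    // Hypothetical set of sortable columns.
    private static final Set<String> ALLOWED =
        Set.of("CustomerName", "OrderDate", "TotalAmount");

    public static String safeOrderBy(String requested) {
        for (String column : ALLOWED) {
            if (column.equalsIgnoreCase(requested)) {
                return column; // return the canonical name, never the raw input
            }
        }
        return "OrderDate"; // unknown or hostile input falls back to a default
    }
}
```

Because the routine returns one of its own canonical strings rather than echoing the caller's input, there is nothing an attacker can smuggle through even if every layer above it is bypassed.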

I think I have been working with Rocky too much.