Friday, December 28, 2007

"Fixing" the Enter Key in CreateUserWizard

Problem: You find that when you press the "Enter" or "Return" key on a page that contains the CreateUserStep of your CreateUserWizard (or any other step of any other wizard), your form does not get submitted.  Put another way, something else on the page gets triggered instead of your Create User button.

Solution: In ASP.NET 2.0 the Page.Form has a new property: DefaultButton.  This effectively traps the Enter key and causes the specified button to execute its onclick event handler.  We would like to set this Page.Form.DefaultButton property to the UniqueID property of the CreateUserStep's Create User button.  Unfortunately, we do not have a reliable way of accessing this button in our code-behind.  The solution is to replace it with our own button and wire up the DefaultButton property in the button's pre-render event handler, since it is in a template and we cannot access it directly.

In your .aspx add the following CustomNavigationTemplate to your CreateUserStep:

<CustomNavigationTemplate>
     <asp:Button ID="CreateUserButton" OnPreRender="CreateUserButtonRender" runat="server" Text="Register User" CommandName="MoveNext" ValidationGroup="CreateUserWizard"/>
</CustomNavigationTemplate>


In your .aspx.cs add the following:

protected void CreateUserButtonRender(object sender, EventArgs e)
{
    this.Page.Form.DefaultButton = (sender as Button).UniqueID;
}

Wednesday, December 26, 2007

Choosing a Web Application Server Stack

There's this great book entitled, "The Paradox of Choice: Why More Is Less," that really opened my eyes to a rather non-intuitive way to improve my experience in life.  I won't go into it more than to say the author posits that there is an inflection point where "happiness" does not increase with additional accretion of choices.  This is non-intuitive, but he does a good job of explaining the many factors underlying this phenomenon.  For one, consider the opportunity cost of making a decision; you are buying what you consider the best at the time, but you are also carrying the burden of not having chosen from many other options.  Very often, buyer's remorse sets in after a purchase, and we have no one to blame but ourselves.

So, as I tore through blogs, email archives, tutorials, and documentation today, looking for the "best" platform for my personal pet project, I became acutely aware of just how much choice there is available to build web applications these days.  The result is that it's very late in the day, my weekend spent, and I am writing this blog post to reason aloud, as it were, and to force myself to pick one.

By way of introduction, let me say that my skill set falls squarely in the tried-and-true .NET 2.0 ASP.NET developer model.  I don't do MVC, MVP, Presenter First (PF), O/RM, TDD, DDD, BDD, or any other TLAs, except maybe DRY, SRP, and all those other solid OOP principles.  Also, I'm building this application for myself, by myself, and I can't seem to get a free copy of Visual Studio 2008, so .NET 3.5 and LINQ are out of reach.  I know I need forums, but that's about my only "requirement".  Sure, I've got lots of ideas of what I'm going to build, but I am staring at the blank canvas right now.

Here's the thing: I'm productive.  I build good applications.  I suspect that probably has more to do with my empathy and diligence rather than some prodigious development or architecture skills.  I'm certain I can get better at the latter, but my competitive advantage, if you will, comes from designing good interactions.  I'm good at UI design and ideation.  And, frankly, all those TLAs are a bit intimidating, as is the prospect of writing so much more code.

It may sound like I'm leaning toward an ASP.NET 2.0 application, so let me reflect on that.  I am definitely not going that direction.  The major strengths of that platform are:

  • Tooling support---Visual Studio beats vim and SciTE hands down
  • 3rd-Party Controls---Telerik, Infragistics, etc.
  • Familiarity---myself and thousands like me use it every day

Those advantages sound great if you are an IT manager building line-of-business applications.  That's not me.  Among the disadvantages for me are:

  • Mundane---I use it for a living and wouldn't be learning anything new
  • Visual Studio 2005 is not free and the Express editions are...Express editions
  • I won't be purchasing any 3rd-party controls
  • The Page-based application model seems outmoded

So, what are my options?  Well, I would like to try building an application using the MVC/MVP/PF paradigm.  I've invested many hours learning about it, and I want to take a stab at it.  This means, almost certainly, using an IoC container--but which one?  Also, MVC differs significantly from MVP as does PF; which shall I use?  I have to select an environment to do this all in as well.

Supervising Presenter First?

I have settled on Presenter First (PF).  This doesn't have the widespread community support that MVC/MVP have today, which means less tooling and "free" functionality, but that's okay. PF takes SRP and the Humble Dialog to the extreme, forcing you to develop a testable presenter that reads like user stories and easily mocked views and models that can also be tested.  Because PF dictates a stateless and ignorant view, it should be easy to replace and change the UI.  Now, I can definitely say I won't be doing "pure" PF; I plan to allow tuples/ActiveRecord objects into my UI, because I want to use databinding and all the built-in goodness of ASP.NET when it is efficacious to do so.  In this sense, I want PF with Supervising Controller leanings.
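To make that concrete, here is a rough sketch of the PF shape I have in mind; the interface, member, and event names are invented for illustration, not taken from any framework:

```csharp
using System;

// The view is stateless and ignorant: it only raises events and exposes data.
public interface IRegistrationView
{
    event EventHandler SubmitClicked;
    string UserName { get; }
    void ShowWelcome(string message);
}

public interface IRegistrationModel
{
    void Register(string userName);
}

// The presenter is created first and wires view to model;
// each handler reads like a user story and can be tested with mocked
// IRegistrationView/IRegistrationModel implementations.
public class RegistrationPresenter
{
    public RegistrationPresenter(IRegistrationView view, IRegistrationModel model)
    {
        view.SubmitClicked += delegate
        {
            model.Register(view.UserName);
            view.ShowWelcome("Welcome, " + view.UserName + "!");
        };
    }
}
```

An ASP.NET Page would implement IRegistrationView, keeping the code-behind humble while the presenter carries the behavior.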

Views: Plain-old Pages

I believe ASP.NET Pages are a very strong candidate for views, despite what I have heard from the ALT.NET crowd.  As a template engine, they are very mature; you can even nest master pages in 2008!  The ASP.NET Membership provider (Authentication, Authorization, Personalization, etc.), declarative security, and databinding are a few great things you get out-of-the-box.  There are lots of controls out there that work in ASP.NET Pages, including all the ASP.NET AJAX stuff.  This absolutely does force you to think about your views in terms of pages at some level, but I believe partial page updates allow user controls to be views as well.

Choosing a Framework

There are lots of choices out there for doing MVC/MVP-style ASP.NET applications, each with their own peculiar twists.  I have mentioned a couple of these before, but here are the ones I've looked at:

  • MonoRail
  • Cuyahoga
  • Rhino Igloo
  • WCSF (Web Client Software Factory)
  • ASP.NET MVC

The main problem with each of these is that they are not PF-pattern friendly.  That isn't to say that they are antagonistic, not at all.  I'm guessing, relatively blindly, that each of these would be equally difficult to implement PF in.  So, what other criteria can I use to cull the herd?  Well, MonoRail doesn't play nice with ASP.NET Pages, so it's out.  Cuyahoga is a real pain in the butt to configure, quite possibly the longest time-to-hello-world [Note: ~400MB QuickTime movie about alternative web frameworks, including plone] of all these frameworks--gone!  Rhino Igloo has some very, very interesting ideas reminiscent of Guice and Spring javaconfig in its use of an InjectAttribute.  WCSF does this, too.  Both of these require that the request is made to and handled by the view (the Page), and they use DI tricks to get the Controller involved.  ASP.NET MVC uses an extensible URL rewrite feature to put the controller first.  This lets us be RESTful in addition to being PF-friendly, e.g. we can handle requests with our controller.


My major complaint with all of these frameworks is that they don't let me write applications in a natural way.  I don't write my application in terms of logical processes; instead, I implement page flows and deal with all the problems of stateless web applications.  In the nine years that I have been building web applications, my view of what a web application is and should be has changed a lot.  Now, I believe I have seen the promised land, as it were.  Web application servers and frameworks should allow me to develop my applications with logical continuations and take care of the plumbing for me.

REST + Continuations + Presenter First = ?

So, REST is good because it provides clean, bookmark-able URLs, a sort of coherent web-based command-line.  Continuations are good because they let me write applications that simply do not care that the next program event is dependent on transient sessions over a stateless protocol.  The Presenter First pattern is good because its raison d'être is making MVP more amenable to test-driven development.  Unfortunately, there are no ASP.NET web frameworks out there that take these three to be their cardinal virtues.  So, we'll just have to go invent our own.

In a follow-up post to this one, I plan to introduce my prototype for just such a framework.  Utilizing some of the functionality of ASP.NET MVC and Windows Workflow Foundation, along with a lot of new code and concepts, I am building a prototype that I hope proves to be a huge evolution in web programming on the ASP.NET platform.

Friday, December 21, 2007

Liking LINQ: A Question of Efficacy

Consider these two equivalent blocks of code.  First, the LINQ way:

foreach (var assignable in (from Assembly a in BuildManager.GetReferencedAssemblies()
                            select a.GetTypes() into types
                            from t1 in types
                            where typeof(I).IsAssignableFrom(t1)
                            select t1))
{
    Cache.AddType(assignable.Name, Cache.ContainsType(assignable.Name) ? DUPLICATE_TYPE : assignable);
}

And, the "normal way":

foreach (Assembly assembly in BuildManager.GetReferencedAssemblies())
    foreach (Type type in assembly.GetTypes())
        if (typeof(I).IsAssignableFrom(type))
        {
            if (!Cache.ContainsType(type.Name))
                Cache.AddType(type.Name, type);
            else
                Cache.AddType(type.Name, DUPLICATE_TYPE);
        }

I cannot rightly make a judgement call on which way is better.  They have the same result.  Though, it is safe to say that the normal way should perform better.  It is also clear, oddly, that the normal way has fewer lines of meaningful code; thus, it is easier to grok.  So, what do you think?  If you had to maintain the code base, which one would you prefer?  FWIW, I'm going to go with the normal way.

This leaves me with the question of efficacy.  Certainly there are niches where LINQ is superior, or the only option, but for general object collection work, should we just ignore this language feature?  Perhaps when PLINQ becomes available we'll have a good reason to use it.  Time will tell.

SQL Optimization: Substring Joins

The DBA at my current client has some mad T-SQL skills.  He took a naïve join implementation I had written and improved its performance by three orders of magnitude.

My initial query looked something like this:

SELECT A.*, B.* FROM A JOIN B ON A.Message LIKE '%' + B.Identifier + '%'

Obviously this isn't the best situation in the first place. We don't have a clean relation between these two tables, instead we've got to pick out an identifier (GUID/uniqueidentifier) from a larger text field. Notwithstanding some obvious ETL or trigger possibilities, the statement above seems the simplest solution. The only problem is it is slooow.

The DBA had a great optimization which was essentially to tease out the identifier from the field as you would if you were writing a trigger to create a join column. Putting this in your query allows you to join on this derived column in your code. The performance implications are well above what I would have guessed them to be, operating–as I am–from naïveté. Here's the generalized solution based on our example above:

SELECT J.*, B.*
FROM (
  SELECT A2.*,
    CAST(SUBSTRING(A2.Message, A2.GuidStart, A2.GuidEnd - A2.GuidStart) AS uniqueidentifier) AS JoinGuid
  FROM (
    SELECT A1.*,
      CHARINDEX('TokenAfterGuid', A1.Message, A1.GuidStart) AS GuidEnd
    FROM (
      SELECT A.*,
        CHARINDEX('TokenBeforeGuid:', A.Message) + LEN('TokenBeforeGuid:') + 1 AS GuidStart
      FROM A
    ) AS A1
  ) AS A2
) AS J
JOIN B ON B.Guid = J.JoinGuid

Of course, you would expand the wildcard (*) references. This is really a great technique considering the performance ramifications. Certainly in our case, where the query was for a view, this was a wonderful improvement.  Obviously, the best option from a performance standpoint would be to tease out the uniqueidentifier in an INSERT/UPDATE trigger, create an index on the new column, and join the tables on that; however, in situations where you don't have the option of doing ETL or triggers, this can be useful.
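For completeness, the trigger-plus-indexed-column option mentioned above might look roughly like this; the Id key column, the token names, and the fixed 36-character GUID length are all assumptions for the sketch:

```sql
-- Sketch: persist the parsed GUID in its own column and index it.
ALTER TABLE A ADD MessageGuid uniqueidentifier NULL;
CREATE INDEX IX_A_MessageGuid ON A (MessageGuid);
GO

CREATE TRIGGER trg_A_ExtractGuid ON A AFTER INSERT, UPDATE
AS
BEGIN
    UPDATE A SET MessageGuid = CAST(SUBSTRING(A.Message,
            CHARINDEX('TokenBeforeGuid:', A.Message) + LEN('TokenBeforeGuid:') + 1,
            36) AS uniqueidentifier)
    FROM A JOIN inserted i ON A.Id = i.Id;
END
GO

-- The view's join then becomes a plain indexed equijoin:
-- SELECT A.*, B.* FROM A JOIN B ON B.Guid = A.MessageGuid
```

The trigger pays the parsing cost once per write instead of once per read, which is why the indexed join wins so decisively.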

Sunday, December 16, 2007

Liking LINQ: The Learning Curve

This is the second post in a series on Language Integrated Query.  I'm pushing LINQ's buttons and bumping into some of its boundaries. 

Every abstraction leaks at some point, and LINQ to SQL is no exception.  Consider the following code:

NorthwindDataContext d = new NorthwindDataContext(); 
int? quantityThreshold = null;
var sales = from p in d.Products
join od in d.Order_Details on p.ProductID equals od.ProductID
where !p.Discontinued && (quantityThreshold.HasValue ? od.Quantity >= quantityThreshold.Value : true)
select p;

So, when you begin fetching data out of "sales" you'll see the problem.  A run-time error is thrown because the expression tree visitor attempts to greedily evaluate quantityThreshold.Value.  Let's try to move the evaluation out of the LINQ expression.

Predicate<Order_Detail> hasSufficientQuantity = o => quantityThreshold.HasValue ? o.Quantity >= quantityThreshold : true;
var sales = from p in d.Products
join od in d.Order_Details on p.ProductID equals od.ProductID
where !p.Discontinued && hasSufficientQuantity(od)
select p;

Well, that doesn't work either.  "The method or operation is not implemented." The expression tree visitor has no idea what this hasSufficientQuantity method is... changing it to hasSufficientQuantity.Invoke(od) reveals that we are barking up the wrong tree, no pun intended.  The error given then is that our Predicate function cannot be translated.  Okay... let's look at why.

This fun LINQ expression syntax in C# is just syntactic sugar for a bunch of extension methods with signatures so jam-packed with Generics, you'd think it was Wal-Mart.  So, we are grateful to our C# language team for the sugar.  But, it does tend to hide what is really going on, making it difficult to figure out why the syntax seems so finicky.  Our LINQ expression above would translate into imperative code similar to the following:

var sales = d.Products
    .Join(d.Order_Details, p => p.ProductID, o => o.ProductID, (p, o) => new { Product = p, Order_Detail = o })
    .Where(p => !p.Product.Discontinued && hasSufficientQuantity.Invoke(p.Order_Detail))
    .Select(p => p.Product);

This isn't exactly pretty, and it doesn't really help us to understand why our function can't be translated, or does it?  Consider what these function calls are doing.  They are taking arguments, primarily Func<...> objects, and storing them internally in an expression tree.  We know from stepping through the code that the execution of our supplied Func<...> objects (the lambda expressions above) is deferred until we start accessing values from "sales".  So, there must be some internal storage of our intent.  Further, the code above must be translated to SQL by the System.Data.Linq libraries, and we can gather from the call stack on our exception that they are using the Visitor pattern to translate the nodes of the expression tree into SQL statements.

What happens when they visit the node that invokes the hasSufficientQuantity Predicate?  Well, that code--the Predicate object instance itself--is not available in SQL, so the translation fails.  This seems obvious, but consider that if we were using LINQ to Objects here, any of these approaches would work fine, as the predicate would be available in the execution environment of the translated expression tree, which it is not for SQL.
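To see the contrast, here is a sketch (with the Northwind types replaced by a trivial stand-in class) showing that LINQ to Objects happily executes the very same kind of predicate, because the delegate simply runs in-process instead of being translated:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class OrderDetail { public int Quantity; }

static class Demo
{
    static void Main()
    {
        int? quantityThreshold = 5;
        Predicate<OrderDetail> hasSufficientQuantity =
            o => quantityThreshold.HasValue ? o.Quantity >= quantityThreshold.Value : true;

        var details = new List<OrderDetail> {
            new OrderDetail { Quantity = 3 },
            new OrderDetail { Quantity = 10 }
        };

        // LINQ to Objects: the delegate is invoked directly, never visited as an expression tree.
        var big = details.Where(d => hasSufficientQuantity(d)).ToList();
        Console.WriteLine(big.Count); // 1
    }
}
```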

This is a contrived example, of course, and we could "code around" this in any number of ways, e.g.

where !p.Discontinued && od.Quantity >= (quantityThreshold ?? 0)

However, we are still seeing the LINQ to SQL abstraction leak pretty severely.

There are some gotchas out there as well, of course.  Consider the following SQL statement that answers the question, "How many orders have my customers had for each of my products?"

SELECT o.CustomerID, od.ProductID, COUNT(*) as [Number of Orders] 
FROM dbo.Orders o JOIN dbo.[Order Details] od
ON o.OrderID = od.OrderID
GROUP BY od.ProductID, o.CustomerID

How might we attempt to answer the same question with LINQ to SQL?  Notice that we are specifying two columns to group by in our query.  Here's what we might like to write in LINQ:

NorthwindDataContext d = new NorthwindDataContext();
var results = from o in d.Orders
join od in d.Order_Details on o.OrderID equals od.OrderID
group by o.CustomerID, od.ProductID into g
select new {g.CustomerID, g.ProductID, g.Count()};

Of course, this doesn't even come close to compiling.  Here's the right way to use multiple columns in a group by: use a tuple!

var results = from od in d.Order_Details
group od by new {od.Order.CustomerID, od.ProductID} into orders
select new { orders.Key.CustomerID, orders.Key.ProductID, NumberOfOrders = orders.Count() };

Once you start getting the gestalt of LINQ, you'll find yourself creating tuples all over the place.  Consider this query expression to retrieve the total sales of each product in each territory:

var territorySales = from p in d.Products
join od in d.Order_Details on p.ProductID equals od.ProductID
join o in d.Orders on od.OrderID equals o.OrderID
join e in d.Employees on o.EmployeeID equals e.EmployeeID
join et in d.EmployeeTerritories on e.EmployeeID equals et.EmployeeID
join t in d.Territories on et.TerritoryID equals t.TerritoryID
where !p.Discontinued
group new { od.ProductID, p.ProductName, t.TerritoryID, t.TerritoryDescription, od.Quantity }
by new { od.ProductID, t.TerritoryID, p.ProductName, t.TerritoryDescription } into sales
orderby sales.Key.TerritoryDescription descending, sales.Key.ProductName descending
select new
{ Product = sales.Key.ProductName.Trim(), Territory = sales.Key.TerritoryDescription.Trim(), TotalSold = sales.Sum(s => s.Quantity) };

The interesting part of that expression is that I created a tuple in my group clause to "select" the data to pass on to the next expression.

What if what we really wanted were the top ten best-selling products in each territory?  Well, there's no "top" LINQ query expression keyword.  The standard query operators include a couple of methods that look interesting: Take(int) and TakeWhile(predicate).  Unfortunately, TakeWhile is among the standard query operators that are not supported in LINQ to SQL.  Why?  Well, it's because you couldn't write equivalent SQL, I imagine.  And, while Take(int) is supported, it's not immediately useful in a situation like this where you want to apply it to subsets of your results.  Therefore, a more procedural result seems warranted.  I'll investigate this further in my next post on the topic.
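One procedural escape hatch, sketched here under the assumption that pulling the aggregated rows into memory is acceptable, is to let SQL do the grouping and summing and then apply Take(10) per territory with LINQ to Objects.  The Sale class below is a stand-in for the anonymous type produced by the territorySales query above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Sale { public string Territory; public string Product; public int TotalSold; }

static class TopTenDemo
{
    static void Main()
    {
        // Stand-in for the materialized territorySales results.
        var territorySales = new List<Sale> {
            new Sale { Territory = "North", Product = "Chai", TotalSold = 40 },
            new Sale { Territory = "North", Product = "Tofu", TotalSold = 90 },
            new Sale { Territory = "South", Product = "Chai", TotalSold = 15 }
        };

        // Group in memory by territory, then take the top N of each group.
        var topTen = territorySales
            .GroupBy(s => s.Territory)
            .SelectMany(g => g.OrderByDescending(s => s.TotalSold).Take(10));

        foreach (var s in topTen)
            Console.WriteLine(s.Territory + ": " + s.Product);
    }
}
```

Against a LINQ to SQL query, an AsEnumerable() call before the GroupBy would mark the same hand-off from SQL translation to in-memory execution.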

It is interesting to note the situation that arises with certain standard LINQ query operators not being supported by various flavors of LINQ.  Because the standard query operators are implemented using extension methods, every LINQ provider must handle them all, including those it cannot support.  This means throwing the NotSupportedException from the implementation of those methods.  The System.Linq.Queryable static class is where the standard query operators are implemented, defining the operators on IQueryable<T>.  LINQ to SQL classes like Table implement this interface, as do all classes that participate in LINQ expressions.

Despite using the same syntax, the LINQ providers will each have their own significant learning curve due to variations in the operators they support and their own quirks. Next time we'll try to implement a top(x) query in LINQ to SQL.

Thursday, December 13, 2007

More Round-ups

Everybody knows about Project & Bugzilla, or Enterprise-y and OSS-y.  But, recently, on an ALT.NET mailing list thread, I heard about some other options out there.  My current project is missing something like this, so I am hoping to look into these in the near future. Standing on the shoulders of giants, I present YARU (yet-another-round-up):

And, as long as I have you here, I'll list some Continuous Integration options that I've found.

  • CI Factory (uses CruiseControl.NET, afaik)
  • TeamCity from JetBrains
  • Visual Studio Team Foundation Server 2008
  • Of course, you can do what I've been doing and fiddle with a morass of free tools to get CI working with CruiseControl.NET

Wednesday, December 12, 2007

Getting MbUnit working with CruiseControl.NET

This is not exactly a straightforward process.  If you are having problems, here are a few hints about the bumps in the road.

  • If you are having trouble managing the dependencies of MbUnit on your build server, then just include the "bin" folder of your test project in source control.  That way, all of those locally copied assemblies will get pulled out of source control along with the rest of your project.
  • In addition to this information, there are a couple more steps to ensure your MbUnit output makes it into the web dashboard.  Make sure you put your file merge task into the publishers section of your project configuration in the ccnet.config file.  This will ensure the output of your MbUnit tests is always merged into the project log.  However, if you specify a publishers section, you must also specify a project xmlLogger.  Order matters here! The xmlLogger should be after your merge task to get your file merged in before the project log gets published to the dashboard.  So, your publishers section might look something like this:
      <publishers>
        <merge>
          <files>
            <file>path\to\mbunit-results.xml</file>
          </files>
        </merge>
        <xmlLogger />
      </publishers>
  • Speaking of your results file, you can use command-line switches of MbUnit.Cons.exe to specify that the file name is the same every time you run it.
  • MbUnit.Cons.exe can load an .exe assembly as easily as a .dll, so keep your tests project a console application so developers can step through their tests.

Tuesday, December 11, 2007

Dependency Injection Round-up

Here are a few dependency injection frameworks for .NET.

  • Castle Windsor
    • 1.0 RC3 released September 2007
    • Bundles with other Rails-inspired Castle projects
  • StructureMap
    • Last release in April 2007 (v2.0)
    • Developed and maintained by two developers
  • Spring.NET
    • v1.1 released December 2007
    • Active community, Java following
    • Bundles with the Spring web framework
  • ObjectBuilder
    • Comes with Enterprise Library
    • Not a lot of community activity
    • Microsoft's Patterns & Practices group hasn't abandoned it (MVP)

Monday, December 10, 2007

Automated Testing: CruiseControl.NET, MbUnit, and WATiN

I have been using WATiR off and on, mostly off, to do automated web application testing since I first heard about it on Hanselminutes about two years ago.  Really, I just wanted an excuse to learn Ruby and to dip my feet into the Continuous Integration stream.

Well, that stream has become a river, and the next version--or should I say current version--of Team Foundation Server will have (has) baked in support for Continuous Integration, for real this time.

Those of us who do not have such fancy-dancy tools may be wondering what is left for us.  WATiR is great, but it's a tough sell to a lot of rank-and-file developers, since you have to learn a new language.  Not only that, the most common CI tool for .NET solutions, CruiseControl.NET, doesn't play very nicely with WATiR output.  To make it work nicely, you end up with a software stack that reads like an equipment list in an adventure game: WATiR, CI Reporter, Nant, Test::Unit, Rspec, Rake, gaaaaaah!  Run away!

Seriously, this is bad news.  I just want one tool to write tests in that can output to a format that will show up in CruiseControl.NET's web dashboard.  Is that too much to ask?  Well, apparently, yes, it is.  However, it can be a lot simpler, and we won't have to learn a new language.

Enter WATiN: this is the WATiR inspired, .NET-based, IE automation tool we will use.  Of course, we still need to write our tests, so we'll use MbUnit because it easily integrates with Cruise and complies with the WATiN STA requirement very easily.  There's a pretty good guide on how to integrate Simian, and MbUnit works pretty much the same way.

Now, you'll just need to configure a test runner.  Since we are using MbUnit, we'll just use MbUnit.Cons.exe and output to Xml.  Once you have that command-line ready, just configure your project in Cruise to execute it after your build task.

Best of all, we can use WatiN Test Recorder to record our tests for us.
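For a taste of what such a test looks like, here is a minimal sketch; the page URL and element names are invented, and the STA requirement still needs to be configured for your test runner:

```csharp
using MbUnit.Framework;
using WatiN.Core;

[TestFixture]
public class SearchPageTests
{
    [Test]
    public void Searching_returns_results()
    {
        // Hypothetical page and control names, purely illustrative.
        using (IE ie = new IE("http://localhost/MyApp/Search.aspx"))
        {
            ie.TextField(Find.ByName("SearchBox")).TypeText("watin");
            ie.Button(Find.ByName("SearchButton")).Click();

            Assert.IsTrue(ie.ContainsText("Results"));
        }
    }
}
```

Because the fixture lives in a plain console-friendly test assembly, MbUnit.Cons.exe can run it on the build server and emit the XML that Cruise merges into the dashboard.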

Fonts: Because your Eyes are Worth It

If you use Visual Studio 2005, you can download the Consolas font pack to make your eyes a little happier.  If you're feeling really frisky, you can even use it as your console font.  If you want to have a little fun, instead of firing up regedit to edit the registry, use this:

Set-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont" -Name "00" -Value "Consolas"

Convert SID to NTAccount with Powershell

Here's a quick script to convert a given SID (e.g. S-1-0-234...) to an NT Account (e.g. DOMAIN\user):

$sid = new-object System.Security.Principal.SecurityIdentifier "S-1-0-234..."

$ntacct = $sid.Translate([System.Security.Principal.NTAccount])

Friday, December 7, 2007

Volta & Other ASP.NET Miscellany

Erik Meijer posted about Volta on Lambda the Ultimate.  Last I heard from him, he was working in the VB.NET team as a language designer, pushing to make VB.NET a more dynamic language a la Ruby (kind of like it was before it came to .NET).  Now, he’s working under Ray Ozzie in Live Labs, an incubator division started to get things done and shipped without having to bear the crushing weight of being the progeny of all things Microsoft.  For us .NET developers, this is an exciting time, because it means faster innovation from Microsoft on the web.  Erik is a fan of dynamic languages and aspect-oriented programming, and Volta reflects that.  Interestingly, I believe Volta is the first Microsoft product outside of Microsoft Research to utilize bytecode-level AOP injection.

There are some other interesting things happening with .NET 3.5 in the web space besides Volta.  First, Scott Guthrie and some new folks on the ASP.NET team are working hard on ASP.NET MVC.  Yup, now Microsoft is throwing their hat into the MVC arena.  Here’s a nice round-up of links about the new framework.  How this will compare with the Spring.NET web framework or the Castle Project’s MonoRail remains to be seen.  From what I’ve seen so far, ASP.NET MVC is motivated by a desire to support TDD for ASP.NET web applications and to provide a proper MVC.  I think the major difference among these implementations will be how they implement IoC.  Spring loves XML configuration; MonoRail is about convention over configuration; and ASP.NET MVC seems to behave more like Guice in that the configuration is done in code.  Personally, I think Guice is really awesome, so I have high hopes for ASP.NET MVC, and I hope it facilitates Presenter First.

On top of all this, of course, there is Silverlight, and its more familiar desktop-style development model.  The web becomes just an application delivery platform for Silverlight applications.

And if all these different ways of building web applications aren’t variety enough for you, there are the new platform innovations of .NET 3.5.  Language Integrated Query (LINQ) abounds, allowing you to LINQ to SQL, LINQ to Objects, LINQ to DataSets, and LINQ to XML.  I expect you’ll see LINQ to NHibernate, as I know LINQ to LLBLGenPro is in development.  LINQ to SQL comes with a proper ORM tool built into Visual Studio 2008, so bringing ORM to clients has never been easier.  To make LINQ work, the platform needed some enhancements.  Besides the nullable types, static classes, and anonymous delegates that came in .NET 2.0, we get anonymous types, extension methods, and lambda expressions in .NET 3.5.  Oh momma!

Yes, the .NET development tool stack is getting just as confusing as Java’s, if not more so since you could write your applications against these frameworks in IronRuby, IronPython, Boo, VB, C#, and many others.

Wednesday, December 5, 2007

Crockford's Gambit

Doug Crockford has been blogging about making the traditional WWW software stack sufficient to face the forthcoming challenges from Silverlight and Flex. I won't link to his blog here, because Yahoo! 360 is annoying in the extreme.

I think Doug just has to understand this quote to know that Google and Yahoo are sufficiently bland to ensure the long-term viability of plain-old Ajax.

My works are like water. The works of the great masters are like fine wine. But, everybody drinks water.
Mark Twain

That being said, and presuming (safely) that HTML, JavaScript, and browsers aren't going to be fixed, what does that mean for those of us looking down the road?

I think the following points are particularly germane to this line of thinking:

  1. The web isn't going anywhere

  2. The web (HTML + Javascript + CSS) is already insufficient to support modern applications, notwithstanding the heroism of GWT, YUI, Dojo, et alia.

  3. The web is fantastic as an application delivery platform

  4. A modern application must compete with the iPhone experience, specifically:

    • Scalable vector presentation

    • Scales from phone to desktop

    • Quality video

  5. My data is NOT your data...proprietary vendor lock-in of data created via software is unacceptable. Someone will figure out how to scrape it out. This is Web 2.0

So what might we deduce from these points? Well, we can certainly apply these as heuristics when choosing a next-generation platform for developing applications.

Using arrays in SQL?

This is an excellent method of using arrays of integers in SQL queries. The basic idea is to convert your integer array to a byte array (the semantic equivalent of varbinary(MAX)), then to write a CLR table-valued function to convert the varbinary(MAX) to a table of integer values.

That's just cool.
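A sketch of such a CLR table-valued function follows; the class, function, and column names are my own invention, and it assumes the byte array packs little-endian 32-bit integers end to end:

```csharp
using System;
using System.Collections;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class IntArrayFunctions
{
    // Exposes varbinary(MAX) -> table of ints to T-SQL:
    //   SELECT Value FROM dbo.ToIntTable(@blob)
    [SqlFunction(FillRowMethodName = "FillRow", TableDefinition = "Value int")]
    public static IEnumerable ToIntTable(SqlBytes data)
    {
        byte[] buffer = data.Value;
        // Walk the blob four bytes at a time, yielding one int per row.
        for (int i = 0; i + 4 <= buffer.Length; i += 4)
            yield return BitConverter.ToInt32(buffer, i);
    }

    public static void FillRow(object row, out SqlInt32 value)
    {
        value = new SqlInt32((int)row);
    }
}
```

On the client side, you would serialize your int[] to a byte[] and pass it as a single varbinary parameter, then join against the function's result set.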

I picked this up in a comment to Ayende Rahien's post about performance issues with his technique for working around the 2100 item limit imposed by SQL Server for an IN clause in T-SQL. This could show up in a number of situations where a linked server query wouldn't work. Say, for example, you were calling an API for the most recent searches on some site and wanted to match those keywords against recent searches on your site. You'd have, basically, an array of search terms that you wanted to find in your database. There are a number of solutions to this, but from what I've seen the best performance dynamics (assuming a somewhat large dataset) are given by creating some sort of temporary table and running your query against that.

The method of using integer arrays above is the most imaginative I've seen. Though, other alternatives exist, such as BULK INSERTing data, running your query, and then rolling back the transaction, as suggested in another of the comments.

Powershell Script to Add an Event Log

When setting up logging for your ASP.NET application, you may want to write to a custom Event Log. If you do not install your web application, however, you may be dismayed that the service account for your web application does not have sufficient permissions to create an Event Log when you write to it.

Here is a PowerShell script to help you out:

$creationData = new-object System.Diagnostics.EventSourceCreationData "AppName", "LogName"
$creationData.MachineName = "TargetServerName"

[System.Diagnostics.EventLog]::CreateEventSource($creationData)

Tuesday, December 4, 2007

WebResource.axd error: Padding is invalid and cannot be removed.

If you are experiencing the error detailed here, but you are running a web garden, not a web farm, the query string parameters of your WebResource.axd requests are not being properly decrypted. The root cause is that a different decryption key is being created for each of the processes in your web garden.

To fix this problem, you have to explicitly set a machineKey element in your web.config, for example:

<machineKey validationKey="[128 hex characters]" decryptionKey="[64 hex characters]" validation="SHA1" decryption="AES" />


See this article on web deployment considerations for more information.

You may also have seen this error manifest as weird Javascript errors when you increased the number of worker processes.

You will need to restart IIS for this to take effect, it seems.

Here's a Powershell script to generate a key. Place this in a .ps1 file. Keep in mind, the validation is done with a hash, and so can use SHA1 with a 128-bit key, but decryption is done with a reversible encryption algorithm so the key should probably be 64-bit with AES (Rijndael).

$len = 128

if ($args[0] -gt 0)
{
    $len = $args[0]
}

[byte[]] $buff = new-object byte[] ($len/2)

$rng = new-object System.Security.Cryptography.RNGCryptoServiceProvider
$rng.GetBytes($buff)

$sb = new-object System.Text.StringBuilder $len

for ($i = 0; $i -lt $buff.Length; $i += 1)
{
    $junk = $sb.Append([System.String]::Format("{0:X2}", $buff[$i]))
}

$sb.ToString()