Friday, December 28, 2007

"Fixing" the Enter Key in CreateUserWizard

Problem: You find that when you press the "Enter" or "Return" key on a page that contains the CreateUserStep of your CreateUserWizard (or any other step of any other wizard), your form does not get submitted.  Put another way, something else on the page gets triggered instead of your Create User button.

Solution: In ASP.NET 2.0 the Page.Form has a new property: DefaultButton.  This effectively traps the Enter key and causes the specified button to execute its onclick event handler.  We would like to set this Page.Form.DefaultButton property to the UniqueID property of the CreateUserStep's Create User button.  Unfortunately, because that button lives inside a template, we have no reliable way of accessing it from our code-behind.  The solution is to replace it with our own button and set the DefaultButton property in that button's PreRender event handler.

In your .aspx add the following CustomNavigationTemplate to your CreateUserStep:

<CustomNavigationTemplate>
     <asp:Button ID="CreateUserButton" OnPreRender="CreateUserButtonRender" runat="server" Text="Register User" CommandName="MoveNext" ValidationGroup="CreateUserWizard"/>
</CustomNavigationTemplate>


In your .aspx.cs add the following:

protected void CreateUserButtonRender(object sender, EventArgs e)
{
    this.Page.Form.DefaultButton = (sender as Button).UniqueID;
}

Wednesday, December 26, 2007

Choosing a Web Application Server Stack

There's this great book entitled, "The Paradox of Choice: Why More Is Less," that really opened my eyes to a rather non-intuitive way to improve my experience in life.  I won't go into it more than to say the author posits that there is an inflection point where "happiness" does not increase with additional accretion of choices.  This is non-intuitive, but he does a good job of explaining the many factors underlying this phenomenon.  For one, consider the opportunity cost of making a decision; you are buying what you consider the best at the time, but you are also carrying the burden of not having chosen from many other options.  Very often, buyer's remorse sets in after a purchase, and we have no one to blame but ourselves.

So, as I tore through blogs, email archives, tutorials, and documentation today, looking for the "best" platform for my personal pet project, I became acutely aware of just how much choice there is available to build web applications these days.  The result is that it's very late in the day, my weekend spent, and I am writing this blog post to reason aloud, as it were, and to force myself to pick one.

By way of introduction, let me say that my skill set falls squarely in the tried-and-true .NET 2.0 ASP.NET developer model.  I don't do MVC, MVP, Presenter First (PF), O/RM, TDD, DDD, BDD, or any other TLAs, except maybe DRY, SRP, and all those other solid OOP principles.  Also, I'm building this application for myself, by myself, and I can't seem to get a free copy of Visual Studio 2008, so .NET 3.5 and LINQ are out of reach.  I know I need forums, but that's about my only "requirement".  Sure, I've got lots of ideas of what I'm going to build, but I am staring at the blank canvas right now.

Here's the thing: I'm productive.  I build good applications.  I suspect that probably has more to do with my empathy and diligence than with some prodigious development or architecture skills.  I'm certain I can get better at the latter, but my competitive advantage, if you will, comes from designing good interactions.  I'm good at UI design and ideation.  And, frankly, all those TLAs are a bit intimidating, as is the prospect of writing so much more code.

It may sound like I'm leaning toward an ASP.NET 2.0 application, so let me reflect on that.  I am definitely not going that direction.  The major strengths of that platform are:

  • Tooling support---Visual Studio beats vim and SciTE hands down
  • 3rd-Party Controls---Telerik, Infragistics, etc.
  • Familiarity---myself and thousands like me use it every day

Those advantages sound great if you are an IT manager building line-of-business applications.  That's not me.  Among the disadvantages for me are:

  • Mundane---I use it for a living and wouldn't be learning anything new
  • Visual Studio 2005 is not free and the Express editions are...Express editions
  • I won't be purchasing any 3rd-party controls
  • The Page-based application model seems outmoded

So, what are my options?  Well, I would like to try building an application using the MVC/MVP/PF paradigm.  I've invested many hours learning about it, and I want to take a stab at it.  This means, almost certainly, using an IoC container--but which one?  Also, MVC differs significantly from MVP as does PF; which shall I use?  I have to select an environment to do this all in as well.

Supervising Presenter First?

I have settled on Presenter First (PF).  This doesn't have the widespread community support that MVC/MVP have today, which means less tooling and "free" functionality, but that's okay. PF takes SRP and the Humble Dialog to the extreme, forcing you to develop a testable presenter that reads like user stories, along with easily mocked views and models that can also be tested.  Because PF dictates a stateless and ignorant view, it should be easy to replace and change the UI.  Now, I can definitely say I won't be doing "pure" PF; I plan to allow tuples/ActiveRecord objects into my UI, because I want to use databinding and all the built-in goodness of ASP.NET when it is efficacious to do so.  In this sense, I want PF with Supervising Controller leanings.
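To make that concrete, here is a minimal sketch of the shape PF takes (all the interface and class names here are invented for illustration): the presenter is handed its view and model contracts and does nothing but wire events, which is what makes it trivially testable.

```csharp
// Hypothetical PF sketch -- ILoginView/ILoginModel are invented contracts.
public interface ILoginView
{
    event EventHandler LoginRequested;
    string UserName { get; }
    void ShowWelcome(string message);
}

public interface ILoginModel
{
    bool Authenticate(string userName);
}

public class LoginPresenter
{
    public LoginPresenter(ILoginView view, ILoginModel model)
    {
        // Reads like the user story: "when the user asks to log in,
        // authenticate him and greet him."
        view.LoginRequested += delegate
        {
            if (model.Authenticate(view.UserName))
                view.ShowWelcome("Hello, " + view.UserName);
        };
    }
}
```

A test constructs the presenter with mocked view and model, raises LoginRequested, and asserts on the resulting calls; the real Page just implements ILoginView and stays ignorant.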

Views: Plain-old Pages

I believe ASP.NET Pages are a very strong candidate for views, despite what I have heard from the ALT.NET crowd.  As a template engine, they are very mature; you can even nest master pages in 2008!  The ASP.NET Membership provider (Authentication, Authorization, Personalization, etc.), declarative security, and databinding are a few great things you get out-of-the-box.  There are lots of controls out there built to work with ASP.NET Pages, including all the ASP.NET AJAX stuff.  This absolutely does force you to think about your views in terms of pages at some level, but I believe partial page updates allow user controls to be views as well.

Choosing a Framework

There are lots of choices out there for doing MVC/MVP-style ASP.NET applications, each with its own peculiar twists.  I have mentioned a couple of these before, but here are the ones I've looked at:

  • Castle MonoRail
  • Cuyahoga
  • Rhino Igloo
  • Web Client Software Factory (WCSF)
  • ASP.NET MVC

The main problem with each of these is that they are not PF-pattern friendly.  That isn't to say that they are antagonistic, not at all.  I'm guessing, relatively blindly, that each of these would make it equally difficult to implement PF.  So, what other criteria can I use to cull the herd?  Well, MonoRail doesn't play nice with ASP.NET Pages, so it's out.  Cuyahoga is a real pain in the butt to configure, quite possibly the longest time-to-hello-world [Note: ~400MB QuickTime movie about alternative web frameworks, including plone] of all these frameworks--gone!  Rhino Igloo has some very, very interesting ideas reminiscent of Guice and Spring JavaConfig in its use of an InjectAttribute.  WCSF does this, too.  Both of these require that the request be made to and handled by the view (the Page), and they use DI tricks to get the Controller involved.  ASP.NET MVC uses an extensible URL-rewrite feature to put the controller first.  This lets us be RESTful in addition to being PF-friendly, e.g. we can handle requests with our controller.


My major complaint with all of these frameworks is that they don't let me write applications in a natural way.  I don't write my application in terms of logical processes; instead, I implement page flows and deal with all the problems of stateless web applications.  In the nine years that I have been building web applications, my view of what a web application is and should be has changed a lot.  Now, I believe I have seen the promised land, as it were.  Web application servers and frameworks should allow me to develop my applications with logical continuations and take care of the plumbing for me.

REST + Continuations + Presenter First = ?

So, REST is good because it provides clean, bookmark-able URLs, a sort of coherent web-based command-line.  Continuations are good because they let me write applications that simply do not care that the next program event is dependent on transient sessions over a stateless protocol.  The Presenter First pattern is good because its raison d'être is making MVP more amenable to test-driven development.  Unfortunately, there are no ASP.NET web frameworks out there that take these three to be their cardinal virtues.  So, we'll just have to go invent our own.

In a follow-up post to this one, I plan to introduce my prototype for just such a framework.  Utilizing some of the functionality of ASP.NET MVC and Windows Workflow Foundation, along with a lot of new code and concepts, I am building a prototype that I hope proves to be a huge evolution in web programming on the ASP.NET platform.

Friday, December 21, 2007

Liking LINQ: A Question of Efficacy

Consider these two equivalent blocks of code.  First, the LINQ way:

foreach (var assignable in (from Assembly a in BuildManager.GetReferencedAssemblies()
                            select a.GetTypes() into types
                            from t1 in types
                            where typeof(I).IsAssignableFrom(t1)
                            select t1))
{
    Cache.AddType(assignable.Name, Cache.ContainsType(assignable.Name) ? DUPLICATE_TYPE : assignable);
}

And, the "normal way":

foreach (Assembly assembly in BuildManager.GetReferencedAssemblies())
    foreach (Type type in assembly.GetTypes())
        if (typeof(I).IsAssignableFrom(type))
        {
            if (!Cache.ContainsType(type.Name))
                Cache.AddType(type.Name, type);
            else
                Cache.AddType(type.Name, DUPLICATE_TYPE);
        }

I cannot rightly make a judgement call on which way is better.  They have the same result.  Though, it is safe to say that the normal way should perform better.  It is also clear, oddly, that the normal way has fewer lines of meaningful code; thus, it is easier to grok.  So, what do you think?  If you had to maintain the code base, which one would you prefer?  FWIW, I'm going to go with the normal way.

This leaves me with the question of efficacy.  Certainly there are niches where LINQ is superior, or the only option, but for general object collection work, should we just ignore this language feature?  Perhaps when PLINQ comes available we'll have a good reason to use it.  Time will tell.

SQL Optimization: Substring Joins

The DBA at my current client has some mad T-SQL skills.  He took a naïve join implementation I had written and improved its performance by three orders of magnitude.

My initial query looked something like this:

SELECT A.*, B.* FROM A JOIN B ON A.Message LIKE '%' + B.Identifier + '%'

Obviously this isn't the best situation in the first place. We don't have a clean relation between these two tables, instead we've got to pick out an identifier (GUID/uniqueidentifier) from a larger text field. Notwithstanding some obvious ETL or trigger possibilities, the statement above seems the simplest solution. The only problem is it is slooow.

The DBA had a great optimization which was essentially to tease out the identifier from the field as you would if you were writing a trigger to create a join column. Putting this in your query allows you to join on this derived column in your code. The performance implications are well above what I would have guessed them to be, operating–as I am–from naïveté. Here's the generalized solution based on our example above:

SELECT J.*, B.*
FROM (
  SELECT A2.*,
    CAST(SUBSTRING(A2.Message, A2.GuidStart, A2.GuidEnd - A2.GuidStart) AS uniqueidentifier) AS JoinGuid
  FROM (
    SELECT A1.*,
      CHARINDEX('TokenAfterGuid', A1.Message, A1.GuidStart) AS GuidEnd
    FROM (
      SELECT A.*,
        CHARINDEX('TokenBeforeGuid:', A.Message) + LEN('TokenBeforeGuid:') + 1 AS GuidStart
      FROM A
    ) AS A1
  ) AS A2
) AS J
JOIN B ON B.Guid = J.JoinGuid

Of course, you would expand the wildcard (*) references. This is really a great technique considering the performance ramifications. Certainly in our case, where the query was for a view, this was a wonderful improvement.  Obviously, the best option from a performance stand point would be to tease out the uniqueidentifier in an INSERT/UPDATE trigger, create an index on the new column, and join the tables on that; however, in situations where you don't have the option of doing ETL or triggers this can be useful.

Sunday, December 16, 2007

Liking LINQ: The Learning Curve

This is the second post in a series on Language Integrated Query.  I'm pushing LINQ's buttons and bumping into some of its boundaries. 

Every abstraction leaks at some point, and LINQ to SQL is no exception.  Consider the following code:

NorthwindDataContext d = new NorthwindDataContext();
int? quantityThreshold = null;
var sales = from p in d.Products
            join od in d.Order_Details on p.ProductID equals od.ProductID
            where !p.Discontinued && (quantityThreshold.HasValue ? od.Quantity >= quantityThreshold.Value : true)
            select p;

So, when you begin fetching data out of "sales" you'll see the problem.  A run-time error is thrown because the expression tree visitor attempts to greedily evaluate quantityThreshold.Value.  Let's try to move the evaluation out of the LINQ expression.

Predicate<Order_Detail> hasSufficientQuantity = o => quantityThreshold.HasValue ? o.Quantity >= quantityThreshold : true;
var sales = from p in d.Products
            join od in d.Order_Details on p.ProductID equals od.ProductID
            where !p.Discontinued && hasSufficientQuantity(od)
            select p;

Well, that doesn't work either.  "The method or operation is not implemented." The expression tree visitor has no idea what this hasSufficientQuantity method is... changing it to hasSufficientQuantity.Invoke(od) reveals that we are barking up the wrong tree, no pun intended.  The error given then is that our Predicate function cannot be translated.  Okay... let's look at why.

This fun LINQ expression syntax in C# is just syntactic sugar for a bunch of extension methods with signatures so jam-packed with Generics, you'd think it was Wal-Mart.  So, we are grateful to our C# language team for the sugar.  But, it does tend to hide what is really going on, making it difficult to figure out why the syntax seems so finicky.  Our LINQ expression above would translate into imperative code similar to the following:

var sales = d.Products
             .Join(d.Order_Details,
                   p => p.ProductID,
                   o => o.ProductID,
                   (p, o) => new { Product = p, Order_Detail = o })
             .Where(p => !p.Product.Discontinued && hasSufficientQuantity.Invoke(p.Order_Detail))
             .Select(p => p.Product);

This isn't exactly pretty, and it doesn't really help us to understand why our function can't be translated--or does it?  Consider what these function calls are doing.  They are taking arguments, primarily Func<...> objects, and storing them internally in an expression tree.  We know from stepping through the code that the execution of our supplied Func<...> objects (the lambda expressions above) is deferred until we start accessing values from "sales".  So, there must be some internal storage of our intent.  Further, the code above must be translated to SQL by the System.Data.Linq libraries, and we can gather from the call stack on our exception that they are using the Visitor pattern to translate the nodes of the expression tree into SQL statements.

What happens when they visit the node that invokes the hasSufficientQuantity Predicate?  Well, that code--the Predicate object instance itself--is not available in SQL, so the translation fails.  This seems obvious, but consider that if we were using LINQ to Objects here, any of these approaches would work fine, because the predicate would be available in the execution environment of the translated expression tree, as it is not for SQL.
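A contrived LINQ to Objects counterpart (the in-memory data here is my own invention) shows the same delegate working fine, because nothing ever has to be translated out of the CLR:

```csharp
int? quantityThreshold = 5;
Predicate<int> hasSufficientQuantity =
    q => quantityThreshold.HasValue ? q >= quantityThreshold.Value : true;

int[] quantities = new int[] { 2, 5, 9 };

// LINQ to Objects happily invokes the delegate during enumeration.
var big = (from q in quantities
           where hasSufficientQuantity(q)
           select q).ToList();
// big now holds 5 and 9
```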

This is a contrived example, of course, and we could "code around" this in any number of ways, e.g.

where !p.Discontinued && od.Quantity >= (quantityThreshold ?? 0)

However, we are still seeing the LINQ to SQL abstraction leak pretty severely.

There are some gotchas out there as well, of course.  Consider the following SQL statement that answers the question, "How many orders have my customers had for each of my products?"

SELECT o.CustomerID, od.ProductID, COUNT(*) as [Number of Orders] 
FROM dbo.Orders o JOIN dbo.[Order Details] od
ON o.OrderID = od.OrderID
GROUP BY od.ProductID, o.CustomerID

How might we attempt to answer the same question with LINQ to SQL?  Notice that we are specifying two columns to group by in our query.  Here's what we might like to write in LINQ:

NorthwindDataContext d = new NorthwindDataContext();
var results = from o in d.Orders
join od in d.Order_Details on o.OrderID equals od.OrderID
group by o.CustomerID, od.ProductID into g
select new {g.CustomerID, g.ProductID, g.Count()};

Of course, this doesn't even come close to compiling.  Here's the right way to use multiple columns in a group by: use a tuple!

var results = from od in d.Order_Details
group od by new {od.Order.CustomerID, od.ProductID} into orders
select new { orders.Key.CustomerID, orders.Key.ProductID, NumberOfOrders = orders.Count() };

Once you start getting the gestalt of LINQ, you'll find yourself creating tuples all over the place.  Consider this query expression to retrieve the total sales of each product in each territory:

var territorySales = from p in d.Products
join od in d.Order_Details on p.ProductID equals od.ProductID
join o in d.Orders on od.OrderID equals o.OrderID
join e in d.Employees on o.EmployeeID equals e.EmployeeID
join et in d.EmployeeTerritories on e.EmployeeID equals et.EmployeeID
join t in d.Territories on et.TerritoryID equals t.TerritoryID
where !p.Discontinued
group new { od.ProductID, p.ProductName, t.TerritoryID, t.TerritoryDescription, od.Quantity }
by new { od.ProductID, t.TerritoryID, p.ProductName, t.TerritoryDescription } into sales
orderby sales.Key.TerritoryDescription descending, sales.Key.ProductName descending
select new
{ Product = sales.Key.ProductName.Trim(), Territory = sales.Key.TerritoryDescription.Trim(), TotalSold = sales.Sum(s => s.Quantity) };

The interesting part of that expression is that I created a tuple in my group by clause to "select" the data to pass on to the next expression.

What if what we really wanted were the top ten best-selling products in each territory?  Well, there's no "top" LINQ query expression keyword.  The standard query operators include a couple of methods that look interesting: Take(int) and TakeWhile(predicate).  Unfortunately, TakeWhile is among the standard query operators that are not supported in LINQ to SQL.  Why?  Well, because you couldn't write equivalent SQL, I imagine.  And, while Take(int) is supported, it's not immediately useful in a situation like this where you want to apply it to subsets of your results.  Therefore, a more procedural solution seems warranted.  I'll investigate this further in my next post on the topic.
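Just to sketch the idea in LINQ to Objects terms (with invented in-memory data), a per-group top(x) falls out of applying Take inside the projection; the trouble is precisely that LINQ to SQL cannot translate this shape:

```csharp
// Invented sample: one row per (territory, product) with units sold.
var sales = new[]
{
    new { Territory = "East", Product = "Chai",  TotalSold = 30 },
    new { Territory = "East", Product = "Tofu",  TotalSold = 80 },
    new { Territory = "East", Product = "Ikura", TotalSold = 50 },
    new { Territory = "West", Product = "Chai",  TotalSold = 10 }
};

// Group by territory, then take the top 2 sellers within each group.
var topSellers = from s in sales
                 group s by s.Territory into g
                 select new
                 {
                     Territory = g.Key,
                     Top = g.OrderByDescending(x => x.TotalSold).Take(2).ToArray()
                 };
```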

It is interesting to note the situation that arises with certain standard LINQ query operators not being supported by the various flavors of LINQ.  Because the standard query operators are implemented using extension methods, every LINQ provider must handle them all, including those it cannot support.  This means throwing NotSupportedException from the implementation of those methods.  The System.Linq.Queryable static class is where the standard query operators are implemented, defining the operators on IQueryable<T>.  LINQ to SQL classes like Table<T> implement this interface, as do all classes that participate in LINQ expressions.

Despite using the same syntax, the LINQ providers will each have their own significant learning curve due to variations in the operators they support and their own quirks. Next time we'll try to implement a top(x) query in LINQ to SQL.

Thursday, December 13, 2007

More Round-ups

Everybody knows about Project & Bugzilla, or Enterprise-y and OSS-y.  But, recently, on an ALT.NET mailing list thread, I heard about some other options out there.  My current project is missing something like this, so I am hoping to look into these in the near future. Standing on the shoulders of giants, I present YARU (yet-another-round-up):

And, as long as I have you here, I'll list some Continuous Integration options that I've found.

  • CI Factory (uses afaik)
  • TeamCity from JetBrains
  • Visual Studio Team Foundation Server 2008
  • Of course, you can do what I've been doing and fiddle with a morass of free tools to get CI working with CruiseControl.NET

Wednesday, December 12, 2007

Getting MbUnit working with CruiseControl.NET

This is not exactly a straightforward process.  If you are having problems, here are a few hints about the bumps in the road.

  • If you are having trouble managing the dependencies of MbUnit on your build server, then just include the "bin" folder of your test project in source control.  That way, all of those locally copied assemblies will get pulled out of source control along with the rest of your project.
  • In addition to this information, there are a couple more steps to ensure your MbUnit output makes it into the web dashboard.  Make sure you put your file merge task into the publishers section of your project configuration in the ccnet.config file.  This will ensure the output of your MbUnit tests are always merged with the project configuration.  However, if you specify a publishers section, you must also specify a project xmlLogger.  Order matters here! The xmlLogger should be after your merge task to get your file merged in before the project log gets published to the dashboard.  So, your publisher section might look something like this:
      <publishers>
          <merge>
              <files>
                  <file>mbunit-results.xml</file>
              </files>
          </merge>
          <xmlLogger />
      </publishers>
  • Speaking of your results file, you can use command-line switches of MbUnit.Cons.exe to specify that the file name is the same every time you run it.
  • MbUnit.Cons.exe can load an .exe assembly as easily as a .dll, so keep your tests project a console application so developers can step through their tests.

Tuesday, December 11, 2007

Dependency Injection Round-up

Here are a few dependency injection frameworks for .NET.

  • Castle Windsor
    • 1.0 RC3 release September 2007
    • Bundles with other Rails inspired Castle projects
  • StructureMap
    • Last release in April 2007 (v2.0)
    • Developed and maintained by two developers
  • Spring.NET
    • v1.1 released December 2007
    • Active community, Java following
    • Bundles with Spring web framework
  • ObjectBuilder
    • Comes with Enterprise Library
    • Not a lot of community activity
    • Microsoft's Patterns & Practices group hasn't abandoned it (MVP)

Monday, December 10, 2007

Automated Testing: CruiseControl.NET, MbUnit, and WATiN

I have been using WATiR off and on, mostly off, to do automated web application testing since I first heard about it on Hanselminutes about two years ago.  Really, I just wanted an excuse to learn Ruby and to dip my feet into the Continuous Integration stream.

Well, that stream has become a river, and the next version--or should I say current version--of Team Foundation Server will have (has) baked in support for Continuous Integration, for real this time.

Those of us who do not have such fancy-dancy tools may be wondering what is left for us.  WATiR is great, but it's a tough sell to a lot of rank-and-file developers, since you have to learn a new language.  Not only that, the most common CI tool for .NET solutions, CruiseControl.NET, doesn't play very nicely with WATiR output.  To make it work nicely, you end up with a software stack that reads like an equipment list in an adventure game: WATiR, CI Reporter, Nant, Test::Unit, Rspec, Rake, gaaaaaah!  Run away!

Seriously, this is bad news.  I just want one tool to write tests in that can output to a format that will show up in CruiseControl.NET's web dashboard.  Is that too much to ask?  Well, apparently, yes, it is.  However, it can be a lot simpler, and we won't have to learn a new language.

Enter WATiN: this is the WATiR-inspired, .NET-based, IE automation tool we will use.  Of course, we still need to write our tests, so we'll use MbUnit, because it integrates easily with Cruise and complies with the WATiN STA requirement.  There's a pretty good guide on how to integrate Simian, and MbUnit works pretty much the same way.

Now, you'll just need to configure a test runner.  Since we are using MbUnit, we'll just use MbUnit.Cons.exe and output to Xml.  Once you have that command-line ready, just configure your project in Cruise to execute it after your build task.

Best of all, we can use WatiN Test Recorder to record our tests for us.

Fonts: Because your Eyes are Worth It

If you use Visual Studio 2005, you can download the Consolas font pack to make your eyes a little happier.  If you're feeling really frisky, you can even use it as your console font.  If you want to have a little fun, instead of firing up regedit to edit the registry, use this:

Set-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont" -Name "00" -Value "Consolas"

Convert SID to NTAccount with Powershell

Here's a quick script to convert a given SID (e.g. S-1-0-234...) to a NT Account (e.g. DOMAIN\user):

$sid = new-object System.Security.Principal.SecurityIdentifier "S-1-0-234..."

$ntacct = $sid.Translate([System.Security.Principal.NTAccount])

Friday, December 7, 2007

Volta & Other ASP.NET Miscellany

Erik Meijer posted about Volta on Lambda the Ultimate.  Last I heard from him, he was working in the VB.NET team as a language designer, pushing to make VB.NET a more dynamic language a la Ruby (kind of like it was before it came to .NET).  Now, he’s working under Ray Ozzie in Live Labs, an incubator division started to get things done and shipped without having to bear the crushing weight of being the progeny of all things Microsoft.  For us .NET developers, this is an exciting time, because it means faster innovation from Microsoft on the web.  Erik is a fan of dynamic languages and aspect-oriented programming, and Volta reflects that.  Interestingly, I believe Volta is the first Microsoft product outside of Microsoft Research to utilize bytecode-level AOP injection.

There are some other interesting things happening with .NET 3.5 in the web space besides Volta.  First, Scott Guthrie and some new folks on the ASP.NET team are working hard on ASP.NET MVC.  Yup, now Microsoft is throwing their hat into the MVC arena.  Here’s a nice round-up of links about the new framework.  How this will compare with the Spring.NET web framework or the Castle Project’s MonoRail remains to be seen.  From what I’ve seen so far, ASP.NET MVC is motivated by a desire to support TDD for ASP.NET web applications and to provide a proper MVC.  I think the major difference among these implementations will be how they implement IoC.  Spring loves XML configuration; MonoRail is about convention over configuration; and ASP.NET MVC seems to behave more like Guice in that the configuration is done in code.  Personally, I think Guice is really awesome, so I have high hopes for ASP.NET MVC, and I hope it facilitates Presenter First.

On top of all this, of course, there is Silverlight, and its more familiar desktop-style development model.  The web becomes just an application delivery platform for Silverlight applications.

And if all these different ways of building web applications aren’t variety enough for you, there are the new platform innovations of .NET 3.5.  Language Integrated Query (LINQ) abounds, allowing you to LINQ to SQL, LINQ to Objects, LINQ to DataSets, and LINQ to XML.  I expect you’ll see LINQ to NHibernate, as I know LINQ to LLBLGenPro is in development.  LINQ to SQL comes with a proper ORM tool built into Visual Studio 2008, so bringing ORM to clients has never been easier.  To make LINQ work, the platform needed some enhancements.  Besides the nullable types, static classes, and anonymous delegates that came in .NET 2.0, we get anonymous types, extension methods, and lambda expressions in .NET 3.5.  Oh momma!
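A throwaway snippet of my own to put three of those together: Where and Select are extension methods, their arguments are lambda expressions, and the projection is an anonymous type.

```csharp
string[] langs = { "C#", "VB", "IronRuby", "Boo" };

// Where/Select are extension methods on IEnumerable<string>;
// the l => ... arguments are lambdas; new { ... } is an anonymous type.
var shortNames = langs.Where(l => l.Length <= 3)
                      .Select(l => new { Name = l, Length = l.Length })
                      .ToList();
// shortNames: { C#, 2 }, { VB, 2 }, { Boo, 3 }
```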

Yes, the .NET development tool stack is getting just as confusing as Java’s, if not more so since you could write your applications against these frameworks in IronRuby, IronPython, Boo, VB, C#, and many others.

Wednesday, December 5, 2007

Crockford's Gambit

Doug Crockford has been blogging about making the traditional WWW software stack sufficient to face the forthcoming challenges from Silverlight and Flex. I won't link to his blog here, because Yahoo! 360 is annoying in the extreme.

I think Doug just has to understand this quote to know that Google and Yahoo are sufficiently bland to ensure the long-term viability of plain-old Ajax.

My works are like water. The works of the great masters are like fine wine. But, everybody drinks water.
Mark Twain

That being said, and presuming (safely) that HTML, JavaScript, and browsers aren't going to be fixed, what does that mean for those of us looking down the road?

I think the following points are particularly germane to this line of thinking:

  1. The web isn't going anywhere

  2. The web (HTML + Javascript + CSS) is already insufficient to support modern applications, notwithstanding the heroism of GWT, YUI, Dojo, et alia.

  3. The web is fantastic as an application delivery platform

  4. A modern application must compete with the iPhone experience, specifically:

    • Scalable vector presentation

    • Scales from phone to desktop

    • Quality video

  5. My data is NOT your data...proprietary vendor lock-in of data created via software is unacceptable. Someone will figure out how to scrape it out. This is Web 2.0

So what might we deduce from these points? Well, we can certainly apply these as heuristics when choosing a next-generation platform for developing applications.

Using arrays in SQL?

This is an excellent method of using arrays of integers in SQL queries. The basic idea is to convert your integer array to a byte array (the semantic equivalent of varbinary(MAX)), then to write a CLR table-valued function to convert the varbinary(MAX) to a table of integer values.

That's just cool.

I picked this up in a comment to Ayende Rahien's post about performance issues with his technique for working around the 2100 item limit imposed by SQL Server for an IN clause in T-SQL. This could show up in a number of situations where a linked server query wouldn't work. Say, for example, you were calling an API for the most recent searches on and wanted to match those keywords against recent searches on your site. You'd have, basically, an array of search terms that you wanted to find in your database. There are a number of solutions to this, but from what I've seen the best performance dynamics (assuming a somewhat large dataset) are given by creating some sort of temporary table and running your query against that.

The method of using integer arrays above is the most imaginative I've seen. Though, other alternatives exist, such as BULK INSERTing data, running your query, and then rolling back the transaction, as suggested in another of the comments.
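For the curious, the client half of that integer-array trick is just a block copy; here's a rough sketch (the method names are mine, and the SQL CLR table-valued function would perform the Unpack half on the server):

```csharp
using System;

public static class IntArrayPacking
{
    // Pack an int[] into the byte[] you send as the varbinary(MAX) parameter.
    public static byte[] Pack(int[] values)
    {
        byte[] buff = new byte[values.Length * sizeof(int)];
        Buffer.BlockCopy(values, 0, buff, 0, buff.Length);
        return buff;
    }

    // Roughly what the CLR table-valued function does to yield integer rows.
    public static int[] Unpack(byte[] buff)
    {
        int[] values = new int[buff.Length / sizeof(int)];
        Buffer.BlockCopy(buff, 0, values, 0, buff.Length);
        return values;
    }
}
```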

Powershell Script to Add an Event Log

When setting up logging for your ASP.NET application, you may want to write to a custom Event Log. If you do not install your web application, however, you may be dismayed that the service account for your web application does not have sufficient permissions to create an Event Log when you write to it.

Here is a PowerShell script to help you out:

$creationData = new-object System.Diagnostics.EventSourceCreationData "AppName", "LogName"
$creationData.MachineName = "TargetServerName"
[System.Diagnostics.EventLog]::CreateEventSource($creationData)

Tuesday, December 4, 2007

WebResource.axd error: Padding is invalid and cannot be removed.

If you are experiencing the error detailed here, but you are running a web garden, not a web farm, the query string parameters of your WebResource.axd requests are not being properly decrypted. The root cause is that a different decryption key is being created for each of the processes in your web garden.

To fix this problem, you have to explicitly set a machineKey element in your web.config:
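For example (the key values below are placeholders; generate your own, and use the same values for every process that must share them):

```xml
<system.web>
  <machineKey
      validationKey="...128 hex characters..."
      decryptionKey="...64 hex characters..."
      validation="SHA1"
      decryption="AES" />
</system.web>
```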


See this article on web deployment considerations for more information.

You may also have seen this error manifest as weird JavaScript errors when you increased the number of worker processes.

You will need to restart IIS for this to take effect, it seems.

Here's a PowerShell script to generate a key. Place this in a .ps1 file. Keep in mind, validation is done with a hash, so the validationKey can be a 128-character key for SHA1, but decryption is done with a reversible encryption algorithm, so the decryptionKey should probably be 64 characters for AES (Rijndael).

# length of the key in hex characters; defaults to 128, or pass a length as the first argument
$len = 128
if ($args[0] -gt 0) {
    $len = $args[0]
}

# two hex characters per byte
[byte[]] $buff = new-object byte[] ($len / 2)

# fill the buffer with cryptographically strong random bytes
$rng = new-object System.Security.Cryptography.RNGCryptoServiceProvider
$rng.GetBytes($buff)

# format each byte as two uppercase hex digits
$sb = new-object System.Text.StringBuilder $len
for ($i = 0; $i -lt $buff.Length; $i += 1) {
    $junk = $sb.Append([System.String]::Format("{0:X2}", $buff[$i]))
}
$sb.ToString()


Wednesday, November 28, 2007

Constraints & Good Design

The best of artists hath no thought to show, which the rough stone in its superfluous shell, doth not include; to break the marble spell, is all the hand that serves the brain can do. —Michelangelo

A well-designed application does not include artificial constructs to model business processes; it simply models what is there.

If we use solid development practices, this tends to happen naturally. If, instead, we immediately adopt crutches in our design process, we begin to heavily rely on artificial constructs and our users may even become convinced that what we have given them is "the way it is".

Consider the case of a project management web application. A given user with a couple of projects going in the application calls for support. What information does the support person ask for in order to identify the customer? Let's imagine two scenarios. The first is an application where "the DBA" uses auto-incrementing integral surrogate primary keys for everything. The second is a scenario where natural keys are used as much as possible.

Both Scenarios
Tech Support: Hello, thanks for calling XYZ Support, can I help you?
Customer: Hi there, yeah, I am having problems accessing one of my projects.
Tech Support: I can help with that. I just need to get some information first.

Scenario #1: Surrogate Keys
Tech Support: What's your customer ID?
Customer: Eh, wha-? I don't know, where do I get that?
Tech Support: Don't worry, I can look it up for you. What's your name?

Scenario #2: Natural Keys
Tech Support: May I have your name please?

Customer: John Public
Tech Support: Thank you Mr. Public, could you confirm your billing address?
Customer: 100 W Main St., Smithville, Oregon
Tech Support: Thank you sir. For security purposes, please confirm the last four digits of your social security number.
Customer: 9999
Tech Support: Thank you Mr. Public. Now, which project are you having trouble accessing?
Customer: My "foo for bar" project. I can't even get into it.

Scenario #1: Surrogate Key
Tech Support: Mr. Public, I see you have several "foo for bar" projects here.
Customer: yeah, I was going to try and start over, I guess, but I don't want to do all that work again. Can you help me get into my old one?
Tech Support: Yes, sir, what is the project ID?
Customer: I'm sorry, the project ID? It's "foo for bar".
Tech Support: No, sir, I mean what is the numeric ID of the project?
Customer: I-uh-where do I get that?
Tech Support: It's okay sir, let me see if I can figure out which one you want.

Scenario #2: Natural Keys
Tech Support: Mr. Public, I see you have a few "foo for bar" projects here. There's "foo for bar", "foo for bar 2", and "foo for bar 3".
Customer: Yeah, I was going to start over, I guess. I want my original one, just "foo for bar".
Tech Support: Okay, sir, I can help you with that.

What happened here? In scenario #1, we lazily used numeric IDs all over the place. Our tech support staff thinks in those terms now, and our customer feels less than valued and perhaps a little stupid. In scenario #2, we used the natural key of customer identifier + project title as the primary key, and our application reflects that. Customers cannot create duplicate projects with the same name; tech support staff cannot reduce people and their work to numbers; and the application accurately models the real-world processes it is designed to support without artificially introducing IDs.

Make your software support your users, not vice versa.

Saturday, November 17, 2007

Front-to-Back Security Framework

I've had a bit of wine this evening, so if this post seems to ramble, you know why. Also, please permit me the novelty of thinking aloud in this blogpost. Generally, I reserve this blog as a sort of lab notes area, but this time my discussion is not a direct product of my work. Instead, I'm trying to connect some things in my head that may not have a natural connection, but I think they do.

Most web frameworks will list security among the tenets of their design. Lift will talk about precluding XSS vulnerabilities, ASP.NET will mention its MAC validation of viewstate, and, honestly, I don't have any other examples at hand. But, I think it should be self-evident that web frameworks think about security.

...their security, the security of the platform, the bumpers in the gutter of web security bowling lanes are what they tout. But, dear reader, what have they offered in the way of a framework for application security? Most of us, rightly, assume that our platform is secure against exploits out of our control. Further, we appreciate those features that force us to fall into the pit of success as concerns security or naively rail against the nominal complications imposed by those features. What I have never heard anyone talk about is application security.

In line-of-business applications, the security conversation isn't about XSS or buffer overruns or SQL Injection or phishing. For those of us that build custom LOB applications, the conversation is about:

  • sales people walking away with lists of customers,

  • call center employees stealing credit card information,

  • preventing users from making expensive mistakes,

  • protecting the identity of patients, customers, or abuse victims.

In short, the LOB application developer, be she working for a government human welfare agency or a gypsum supplier, is less concerned about external hackers and more concerned about internal security. This is as it should be, as it must be. Most LOB developers don't have the capacity to evaluate the security of a web framework as a platform, and the threats they are being paid to guard against aren't from the outside.

Where is the web framework that makes securing the individual fields that compose a record easy? Where is the framework that makes it transparent for a certain user to be able to view columns A, B, and C without being able to edit them, while being able to view and edit column D, and providing a full audit record of all of these actions? Where is the framework that makes it easy to prevent users from expensing dinners over $80 without management approval? Why doesn't this framework then apply the same rules at the domain model level as well as giving us full support at the UI level?

Obviously, this is a rhetorical question, because someone has most certainly dealt with this in some non-mainstream framework. The answer, I believe, lies partially in the impedance mismatch found between the tiers of a traditional web application. It also has something to do with a lack of security primitives in most platforms. Let's first look at the impedance mismatch problem, and ask if there are new technologies that could help us close this gap.

Many of us have developed some sort of custom framework for dealing with security. However, as with most attempts at creating "security" out of whole cloth, it invariably has serious flaws. The problem lies in the seemingly simple question: where do we put the security rules? If we put the rules in the database, we will create triggers, constraints, stored procedures, schemas, etc. in order to enforce access control. If we choose the middle tier, we may create policies, use code access security, or write security-aware factory methods. If we choose the UI, we should be fired, but the reader should not be at all surprised that this is probably the most common way security is enforced in LOB applications: when the page or screen is loaded, a bunch of imperative code is executed that sets visible and disabled properties on controls.

There are problems with each of these approaches. And, like most problems, they can be solved by approximating a correct solution through a lot of code. The UI approach doesn't scale as the number of screens and fields on those screens increase. The middle-tier, or business object, approach pulls too much data from the persistent store and doesn't easily tell the UI how it should behave, forcing a duplication of security logic. The database approach leaves the other tiers to fend for themselves and makes it hard to incorporate business logic into security decisions.

A thoroughgoing exposition of these shortcomings is beyond the scope of this post. No, seriously, it is, but anyone who has spent any time in the trenches and has seriously thought about this issue will have nodded their head through the last paragraph, I think. I did, however, intimate that there might be something coming out for .NET that could help us with this problem. And, I believe there is such a beast: expression trees. The expression trees that made LINQ to SQL (aka DLinq) possible are exactly what you need in a generalized security framework.

DLinq expression trees are a representation of the current execution environment's intent to access data. From a data-reading perspective, they are declarative intent. Presuming this model is extended to the other CRUD operations (create, update, and delete), the various expression trees could be pruned. Or, perhaps, the leaf nodes could be filtered. The point is that expression trees and the lambda expressions that create them are data, and security policies can be applied to data. If we used declarative code for our UI bound to declarative data access intent, shouldn't we be able to defer access evaluation to runtime using declarative security policies?
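To make that concrete, here is a toy sketch (in JavaScript, with entirely invented names) of data-access intent represented as data that a policy prunes before it ever reaches the store:

```javascript
// A query is plain data: which fields the caller intends to read.
const intent = { entity: "Employee", fields: ["name", "salary", "ssn"] };

// A security policy is also data: which fields a role may see.
const policy = {
  clerk: { Employee: ["name"] },
  manager: { Employee: ["name", "salary"] },
};

// Prune the intent against the policy before executing it.
function applyPolicy(intent, policy, role) {
  const allowed = (policy[role] && policy[role][intent.entity]) || [];
  return { ...intent, fields: intent.fields.filter((f) => allowed.includes(f)) };
}
```

A real implementation would prune actual expression trees rather than a flat field list, of course, but the principle is the same: the intent remains data, and therefore remains securable, right up until you execute it.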

Tuples are the key. Can we formulate a way to secure tuples through the application tiers? Tune in next time when I present my thoughts on how this might be possible in ASP.NET.

Wednesday, November 14, 2007

Why I Love PowerShell

There are a lot of tools out there to help you get your job done, and your effective use of them is the single most important factor in your productivity as a developer. One of my favorite questions to ask a candidate in an interview is, "Please name your top five favorite tools besides your IDE." This should be an easy one for a productive developer.

PowerShell is one of my favorite tools, and here's a short list of reasons. PowerShell, I love you because:

  • You make it easy to validate my regex is working

  • You give me an easy way to test my ExpressionBuilder

  • You make it easy to translate a SID to a username

  • You make it easy to find the biggest files in a folder recursively

  • You make it easy to copy only the largest files that don't have a certain extension

  • You're an object-oriented, dynamic language driven SHELL!!!

Monday, November 12, 2007

On Liking LINQ

This past Sunday marked Veterans Day on the calendar. I observed the holiday in true patriotic fashion: by working.

Truthfully, I spent the day working through the LINQ hands-on lab. You can find all the links you'd need to get started with LINQ and the VS2008 Beta2 release at Charlie Calvert's blog.

First Impressions

It didn't take me long to get used to the new syntax, though I think my experience with JavaScript and Erlang might account for that. I absolutely love the inclusion of lambda expressions; they are a huge syntax improvement over writing inline delegates today. The extension methods provide a uniformity across the various "LINQ to" APIs that makes moving from one type of data source to another very easy. I was pleased that it was so easy to see the SQL output of LINQ to SQL, and I was pleasantly surprised that the code generated by the designer was so readable and well organized.

I will update this post with more comments later.

Wednesday, November 7, 2007

Tips on Optimizing VirtualPC 2007 Performance

Here are a few helpful tips for getting your Virtual PC 2007 image to run blazing fast.

  1. Purchase VMWare Workstation 6

  2. Install VMWare Workstation

  3. Import your slow, crufty Virtual PC 2007 image into VMWare

  4. Start up your new VMWare virtual machine and enjoy the speed

  5. Profit

Tuesday, October 30, 2007

OutOfMemoryException: Diagnosis and Indications

If you are seeing System.OutOfMemoryException in your ASP.NET application logs, you are probably looking for a way to determine what is chewing up all of your RAM.

Static code analysis such as that performed by FXCop and the Developer SKU of Visual Studio Team System is only useful if you know what to look for and that what you are looking for is in your code. It is possible that you are accreting too much into the Session, but it is also possible that ASP.NET 2.0 has a memory leak or that the Telerik controls are doing something funky. Static code analysis cannot tell you that.

Performance monitoring tools cannot, generally, peek into your process space and say, "this session is using this much memory" or "this HTTP handler is not releasing resources". These tools (e.g. perfmon) can only track the information supplied by the application's stack, i.e. IIS, ASP.NET, our code, etc. So, information like cache misses or worker process age is available, but the internals of the application are not. One way to work around this is to host the session in the state server, as the ASP.NET Session State Server exposes a lot of visibility in the way of performance counters. This may or may not help us, depending on whether the problem is indeed session state memory pressure.

Another class of tool that is not often mentioned, and is egregiously under-utilized by development and operations teams, is the profiler. If you are familiar with SQL Server Profiler, then you already have some idea of how profilers work. The .NET platform itself comes with a free profiler from Microsoft referred to as the CLR Profiler. This can be used to gain complete visibility into any process that runs on the .NET Framework. Whereas static code analysis forces you to pontificate on how the application might behave, and the performance monitoring tools have limited visibility, profilers give you a complete picture of how the process actually behaves--if you know how to use them.

K. Scott Allen—whose blog you most certainly have seen if you've been doing serious ASP.NET development—has an excellent article on getting CLR Profiler to work with Cassini (Visual Studio's built-in web server).

If you are not familiar with CLR Profiler, keep it in mind if you see this error in your own applications. It has some limitations. For example, it cannot be used to profile a running process, and it will decrease the performance of your application an awful lot. So it is used as a modeling tool rather than a forensic tool.

However, other approaches may be of more immediate use. The approach we chose to implement was to run the application in a web garden configuration. In a web garden configuration, you generally have the application running in its own application pool. The Session state must be kept out-of-process, i.e. using the ASP.NET Session State Server or SQL Server, as requests have no affinity to the worker process that created their Session. This approach may work especially well on a multi-core machine where additional processes can take advantage of the parallelism. We then proactively recycle worker processes in our application pool to reset memory usage.

In the end, given time and budget, identifying the culprit in your application with a profiler is the best solution. But, IIS and ASP.NET provide great tools to deal with this kind of intermittent problem.

Tuesday, October 23, 2007

Power Collections

So I have been reading code to become a better developer, specifically the PowerShell Community Extensions source code. In so doing I ran across the Wintellect "sponsored" Power Collections library, which would appear to have been specified in 2004.

All I can say is "bravo" to Wintellect. Jeffrey Richter is the bee's knees. In this library you'll find the collections that bridge the gap between what you would find in Java and the C++ STL: Deque, Set, OrderedSet, etc. Shiny.

Tuesday, September 18, 2007

Replacing Enum Constructs with Classes

In my previous post on this topic, I promised an exposition of the design pattern that I proposed. To briefly restate when this pattern is indicated: when you have a set of relatively static values, typically something that might become an enumeration in your application, consider using this pattern. It frees you from having to map values to enumeration members when persisting, serializing, integrating, etc. Instead, the class can be implicitly cast to the appropriate value. Please see the previous post for a clearer explanation.

The basic mechanism to implement our pattern is to create static accessors (public static fields) on our "category" class for each "category". Imagine we have three types of parts:

  • Widgets
  • Whoozits
  • Whatzits

Each of which corresponds to a code in our enterprise systems:
  • G
  • O
  • A

Some of our enterprise web services take this code as an argument, and our data access layer certainly will require specifying the part type. Our design pattern is indicated here. Here's the code.
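The shape of it, sketched in JavaScript for illustration (the C# version would use static readonly properties and an implicit conversion operator returning the code; the names here are assumed from the prose):

```javascript
// Each part type carries its enterprise code.
class PartType {
  constructor(code) {
    this.code = code; // e.g. "G" for Widget
  }
  toString() {
    // stands in for C#'s implicit conversion to the code value
    return this.code;
  }
}

// Known part types exposed as static members, so consumers write
// PartTypes.Widget much as they would use an enum member.
class PartTypes {
  static Widget = new PartType("G");
  static Whoozit = new PartType("O");
  static Whatzit = new PartType("A");
}
```

A consumer passes PartTypes.Widget wherever the code is expected, and the conversion to "G" happens without any mapping table.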

As you can see, we expose all of the currently known part types as static properties of the PartTypes class. In practice, this looks much like using an enum to consumers of our PartTypes class without the attendant complications we reviewed in my previous post on the topic.

This is an easy pattern to apply and can greatly simplify using these categories in code. A common extension to the pattern that I use in my applications is to expose a Predicate<T> for each of the categories. I will leave that as an exercise for the reader.

Monday, July 30, 2007

Finding an Active Directory User by Their Security Identifier (SID)

Problem: You need to store a unique identifier for a Windows user in your application database that is not subject to change. Their NT Login Name is potentially volatile, so you choose to store the NT Security Identifier, the SID. You need to look up certain details per user in the Directory when the user logs into your application, rather than storing duplicate data in your membership table(s). Since you are only storing the SID, you must perform this lookup by obtaining the SID of the authenticated principal and looking it up in your membership table(s).

Sounds fairly straightforward, until you discover that security identifiers for Active Directory user objects are not stored using the Security Descriptor Definition Language format (i.e. SecurityIdentifier.ToString(), e.g. "S-1-5-9-..."). The SID of a Windows principal is stored as a binary value in the directory. Constructing an LDAP filter to find the appropriate user's directory entry means understanding:

  • how the LDAP specification represents binary values as strings,

  • which property of a user directory entry contains the Windows SID

  • how to obtain the binary representation of SID from a SecurityIdentifier object

  • how to apply string formatting codes to a byte value in order to obtain a hex string

Fortunately, dear reader, I have done all the leg work on this one. This article is a continuation of a loosely-coupled series of articles giving you the know-how required, without specifying exactly how, to build a dual-mode authentication (Windows/Forms Auth) membership architecture for your ASP.NET applications.

First off, the RFC says that LDAP represents binary values in filter strings by writing each byte as a backslash ("\") followed by the hexadecimal value of the byte. So, we need to:

  1. obtain the binary value of the SID

  2. convert each byte to this string format

  3. append it to our LDAP filter

//(1) assuming a SecurityIdentifier sid;
byte[] sidBytes = new byte[sid.BinaryLength];
sid.GetBinaryForm(sidBytes, 0);

//(2) assuming a StringBuilder sb;
foreach (byte b in sidBytes)
    //formats a byte as a backslash plus a two-digit hex value
    sb.AppendFormat("\\{0:X2}", b);

//(3) to be used in an LDAP query
String.Format("(&(objectClass=user)(objectSid={0}))", sb.ToString());
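If you want to sanity-check the escaping outside of .NET, the same transformation is a few lines of JavaScript (the byte values in the comment are made up for illustration):

```javascript
// Format each byte as "\XX" (two-digit uppercase hex), per the LDAP
// escaping rules for binary values in filter strings.
function toLdapBinary(bytes) {
  return bytes
    .map((b) => "\\" + b.toString(16).toUpperCase().padStart(2, "0"))
    .join("");
}

// e.g. toLdapBinary([1, 5, 0, 255]) yields "\01\05\00\FF"
```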

I hope this helps!

Wednesday, July 25, 2007

A PowerShell Script to Determine Directory Size

Here's a script I wrote to determine the size (in megabytes) of all the files in a directory and its subdirectories. This should be a bit more accurate, if less precise, than what you get in Windows Explorer.

function GetSize {
    # total the files found in every subdirectory
    $size = gci -Recurse -Force |
        ? { $_ -is [System.IO.DirectoryInfo] } |
        % { $_.GetFiles() } |
        Measure-Object -Property Length -Sum

    # plus the files in the current directory itself
    $size2 = (Get-Item .).GetFiles() |
        Measure-Object -Property Length -Sum

    [System.Math]::Round(($size.Sum + $size2.Sum) / (1024 * 1024)).ToString() + "MB"
}

Friday, July 20, 2007

Aspect-Oriented... JavaScript?

Or, how to create an event handler where there wasn't one.

Some background: I wanted to open a window (a RadWindow, actually, from the Telerik RadAjaxManager) and refresh a portion of the screen (a RadAjaxPanel) when the window was closed. Unfortunately, the developer of the window library (Telerik) didn't anticipate this by creating an OnClose property of the window class. In fact, they had no event handlers at all in the client-side JavaScript object model for the window.

So, what to do? How can I know when the window is closed? Well, a procedural programmer might simply poll the window handle periodically until the window is closed. An object-oriented programmer would decry the lack of an observer pattern in the 3rd-party's object model, and fall back to wrapping the procedural approach in a bunch of classes. What about the functional programmer? The answer looks something like this:

whenReturning(getWindow(),'Close', refreshScreen);

In the code above, getWindow returns a window object reference, 'Close' is the method called to close a window, and refreshScreen is a function object that we want to call when the close method returns. Take a moment to drink that in... What this code describes is a standard way to create your own event handler. In plain English, what the code above does (I'll show you how in a moment) can be stated as, "whenReturning from the getWindow().Close method, refreshScreen". That's powerful.

The semantics are different from wiring up event handlers; we're not making an assignment to an "OnXXXX" property of the object. In the .NET parlance, we're not making a "type safe assignment of a delegate instance to a multicast delegate". To a functional programmer, this semantic is more natural. To folks acquainted with aspect-oriented programming, this may also seem familiar. The function "whenReturning" is providing advice ("refreshScreen") at a join-point ("getWindow().Close"). In fact, this is the perspective of the article where I first picked up this technique.

In a blog entry "Aspects in JavaScript", William Taysom describes how to implement AOP advice using some of the features of JavaScript, including closures, type mutability, and dynamic scoping. (He's a fan of dynamic languages.) The article assumes a lot of context and understanding on the part of the reader regarding these language features.
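For the curious, a minimal whenReturning can be written in a few lines of plain JavaScript. This is my own sketch, not Taysom's implementation:

```javascript
// Replace obj[methodName] with a wrapper that calls the original method
// and then invokes the advice with the return value.
function whenReturning(obj, methodName, advice) {
  const original = obj[methodName];
  obj[methodName] = function (...args) {
    const result = original.apply(this, args);
    advice.call(this, result);
    return result;
  };
}
```

With this in place, whenReturning(getWindow(), 'Close', refreshScreen) reads exactly as described above: after Close returns, refreshScreen runs.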

If you're looking for a quick solution, you might have better luck using dojo.event.connect. Here's a snippet from their site: "[...] let's say that I want exampleObj.bar to get called whenever exampleObj.foo is called. We can set this up the same way that we do with DOM events:"

dojo.event.connect(exampleObj, "foo", exampleObj, "bar");

So, if you need a quick win here, go grab dojo and use their event connector. If you're new to JavaScript as a dynamic, functional language, you would do well to learn how functions are first-class objects, how an object's properties are entries in a dynamic associative array, how inheritance works through prototypes, and how scoping works with "this" and closures. At the very least, to understand the code in the "Aspects in JavaScript" article, you need to understand function call dispatching in JavaScript.

Building off of William Taysom's work, we could extend this advice to all instances of the window (RadWindow) object by modifying the prototype. That may be the subject of a future post. Until then, dear reader, best wishes.

Befuddling Error: "The control collection cannot be modified during DataBind, Init, Load, PreRender or Unload phases."

If you ever modify the controls collection (e.g. myControl.Controls.Add()) of a parent control, such as the Page, inside a UserControl, you may have seen the following error.

"The control collection cannot be modified during DataBind, Init, Load, PreRender or Unload phases."

Unfortunately, this error is a bit obtuse, since we can most certainly modify the controls collection in event handlers that take place during those phases. So, what should we divine from this error message? It helps to remember this error is being thrown by the control collection that the current control is contained in, or another level up in the controls collections hierarchy.

My understanding is that you can modify down, not up, in the controls hierarchy. In other words, if you want to add a control dynamically to your parent, the parent needs to have a place to put it. The solution is to add a PlaceHolder control to the parent, then add your controls dynamically to that PlaceHolder.

Wednesday, July 11, 2007

An IE OnChange Textbox AutoPostback Twice RadAjaxManager Workaround!

I cannot personally justify the difficulty of finding this workaround, but I know someone will find it valuable. An ASP.NET textbox with AutoPostback="true" on my page was causing an ajax postback twice through the RadAjaxManager, and it took me a long time to figure out why. So, I post my findings here in the hope that Google will help another frustrated developer find the solution they need.

First, a description of the difficulty I was having in my ASP.NET application. I'd created several textbox controls that I wanted to have AutoPostback. Then, using the RadAjaxManager, I wanted to have the textbox controls perform their postback using an ajax request and update another portion of the screen where the results of the request would be displayed. When the request completed, I wanted the textbox control that caused the postback to have an empty value.

So, I set the AutoPostback property of the textboxes to True, created AjaxSettings to have the results panel and the search panel (where the textboxes live) update when one of the textboxes caused a postback. I entered some text into one of the ajaxified textboxes, hit ENTER, then, voilà, it performed an ajax postback updating the panels I specified...

Then, it did it again.

The ajax postback was performed twice. Every time. Great... So, I set about to determine why this was occurring. Along the way I discovered a few things about Internet Explorer, the ASP.NET JavaScript postback model, and how the Telerik RadAjaxManager works with them.

If you create a little test page that fires an alert in a textbox's onchange event, you'll notice a difference in behavior between IE and Firefox. For example, put this element into an otherwise blank page.

<input type="text" onchange="alert('fired');" />

In Firefox, entering some text and pressing the ENTER key will result in the alert being shown; in IE, however, this is not the case. IE doesn't fire the onchange event until the textbox loses focus.

This creates a browser incompatibility issue for libraries. Pertinently, consider setting the AutoPostback property of an ASP.NET textbox to true; when the textbox is changed in the browser, the client should perform a postback automatically. Obviously, the ASP.NET developers must have worked around this behavioral discrepancy. Indeed they did:

<input type="text"
onkeypress="if (WebForm_TextBoxKeyHandler(event) == false) return false;"
onchange="javascript:setTimeout('__doPostBack(\'textbox1\',\'\')', 0)"
... />
Please note that some rendered properties of the textbox have been elided for clarity.

What is this WebForm_TextBoxKeyHandler? Some clever use of our Javascript debugging tools reveals the answer: it fires the onchange event for the textbox when the ENTER key is pressed and returns false.

That the WebForm_TextBoxKeyHandler returns false when the ENTER key is pressed (after firing onchange event handler) is significant. It means that the browser will cancel the keypress event. Thus, Firefox will not trigger the change event for the textbox.
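In effect, the handler behaves something like this sketch (my own approximation for illustration, not the actual ASP.NET source):

```javascript
// On ENTER, run the textbox's onchange handler directly and cancel the
// keypress; any other key falls through untouched.
function textBoxKeyHandler(event) {
  if (event.keyCode === 13) {
    var target = event.srcElement || event.target;
    if (target && typeof target.onchange === "function") {
      target.onchange();
    }
    return false; // cancel, so the browser's own change handling never runs
  }
  return true;
}
```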

We now have a few clues to the mystery of why our page performs the postback twice when we change our textbox:

  1. When we hit the ENTER key in both browsers, the onkeypress event handler executes the onchange handler. We changed this behavior.

  2. When we navigate out of the textbox, the change event is fired. This behavior remains unchanged.

Please note the added emphasis in the above two items. The ENTER key causes the onchange handler to be executed, while navigating out of the textbox causes the change event to be fired which executes the onchange event handler. I am not making a semantic distinction here; these are two entirely different behaviors.

Notice there isn't a problem so far, however. No matter how the onchange event handler gets called, it calls __doPostBack, which is going to cause our page to go away. The problem only presents itself when the page doesn't go away: when we perform AJAX postbacks.

With an AJAX postback (i.e. after we wire up the controls using the RadAjaxManager), the textbox that caused the postback stays in the page, and when we navigate out of it the change event is going to be fired. Here's what's happening:

  1. Navigate into textbox--current value is stored as original value.

  2. Enter text and press ENTER.

  3. onchange event handler is executed (via the onkeypress event handler)

  4. AJAX postback occurs

  5. navigate out textbox (e.g. loading screen displayed or user-initiated)

  6. change event is triggered

  7. onchange event handler is executed by change event

  8. AJAX postback occurs

The two postbacks can seem to occur concurrently in Firefox if you cause the textbox to lose focus when performing the postback, e.g. if you display the loading panel over it. The postbacks always occur serially in IE in the same situation, because, although the requests are both queued up simultaneously by the scripting engine just as in Firefox, AJAX in IE relies on a single-threaded COM component (XMLHttpRequest) to communicate with the server. So, IE can only execute one AJAX request at a time in this situation.

We have a problem here, gentle reader. We do not own any of the code in the scenario enumerated. How should we go about creating a solution, then? One's first instinct might be to clear the textbox in the onblur event handler. Unfortunately, browser divergences strike again. Whereas Firefox fires the user-supplied onblur event handler before checking if the textbox has changed, IE checks the change first.

What about clearing the textbox in the onchange event handler? This would only work if:

  1. we could ensure that the textbox would not lose focus before our code was run, and

  2. we clear the textbox only after all other onchange code has run to ensure that we actually send a value!

Thankfully, there is a solution.

The RadAjaxManager supplies a set of ClientEvents that we can use to get some of our code executed in the midst of its AJAX postback. We will utilize the OnRequestSent client event of the manager to clear the value of the textbox. In the function assigned to the OnRequestSent event, we place the following code:

args.EventTargetElement.value = "";

Keep in mind that this event gets fired for every ajaxified control associated with your RadAjaxManager, so you may find it necessary to qualify the aforementioned line of code to only clear the appropriate textbox controls.
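One way to apply that qualification is a small helper like the following. The shape of args.EventTargetElement follows the one-liner above; the filtering predicate (only clear text inputs) is my own assumption, not Telerik's code:

```javascript
// Sketch: clear the posted value only for text inputs, leaving other
// ajaxified controls associated with the RadAjaxManager untouched.
function clearPostedTextbox(sender, args) {
  const element = args.EventTargetElement;
  if (element && element.type === "text") {
    element.value = "";
  }
}
```

You could also filter on the element's id if only specific textboxes exhibit the double-postback behavior.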

I sincerely hope this helps others.

Friday, July 6, 2007

Some Cool Web 2.0 Applications

Our first is called WebSnapr, and I'm going to try it here on the blog for a while. Preview thumbnails of pages that I link to will show up inside of this preview bubble:

Second, we have the Y-Combinator funded Snipshot. This is a primitive image editing program that runs completely in the web browser.

Lastly, and perhaps the most impressive to readers of this blog, I want to mention Yahoo Pipes.

Thursday, July 5, 2007

Do You Hear a Bell Tolling?

“No man is an island, entire of itself ... Any man's death diminishes me, because I am involved in mankind. And therefore, never send to know for whom the bell tolls; it tolls for thee.”

—John Donne, Meditation XVII of Devotions upon Emergent Occasions

Has Microsoft died? Should I be concerned that your career might go with it? Its savior is long quiet; Ozzie last posted about Live Clipboard in April of 2006. I first expressed my concern publicly four months later, but I recently heard new development is going away from .NET and toward dynamic languages, at least for ThoughtWorks.

Microsoft made huge gains in the server market in the first part of the 2000s. .NET was to be their coup de grâce; and it worked for a while.

That is until the innovation in the web-space picked up again. It no longer pays to be average. So, besides serving as a brain-backup device for various solutions I've hammered out, this blog will also be the place where I try to incorporate what I learn from these high-falutin' technologies into my work as a C# programmer.

And, who knows? Maybe I'll learn a new language and escape the cave. If so, despite Plato's warnings, I will return to share what I've seen. Until then, however, I will continue to be impressed by Windows Workflow Foundation.

Wednesday, July 4, 2007

More on Closures

In yet another simple and highly illustrative example of why closures matter, here's a blog entry representing asynchronous (non-blocking) network I/O with closures. Note the significant reduction in code needed and elimination of explicit state management.

I've reproduced the author's closure-enabled Python code in C#:

The above code doesn't exactly mirror the original author's intent; we could easily utilize an AsyncCallback to do that. What it does do is clearly illustrate how anonymous methods are used to create lexical closures. In "DoOperationB", we see that after it returns, its local "stage2" keeps a reference to another local, "stage3".

The order of execution will be what we intended: DoOperationB, stage2, stage3; however, all of this happens in a non-blocking manner. We could call DoOperationB several times, and upon each execution a new lexical scope would be created that the respective "stage2" and "stage3" delegates would act upon. Without real-world code it may be difficult to see, but the benefits of this approach include:

  • not having to utilize callbacks, and

  • not having to bookmark/track execution state, and (perhaps most subtly)

  • the entire "operation" is modeled in exactly one named method (DoOperationB).
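The same structure can be sketched in JavaScript, whose closures behave like C# anonymous methods. The names doOperationB, stage2, and stage3 mirror the discussion above, and real non-blocking I/O is modeled here by returning the deferred work as a function:

```javascript
// Each call to doOperationB creates a fresh lexical scope; stage2 and stage3
// close over that scope, so no explicit state object or callback registry is
// needed, and the whole operation lives in one named function.
function doOperationB(input, log) {
  const stage3 = (result) => { log.push(`stage3: ${result}`); };
  const stage2 = (result) => {
    log.push(`stage2: ${result}`);
    stage3(result + 1);            // stage2 keeps a reference to stage3
  };
  log.push(`doOperationB: ${input}`);
  // Return the continuation; after doOperationB returns, stage2 still holds
  // input and stage3 through its lexical scope.
  return () => stage2(input * 2);
}

const log = [];
const continuation = doOperationB(5, log);
continuation(); // runs later, e.g. when the "I/O" completes
// log: ["doOperationB: 5", "stage2: 10", "stage3: 11"]
```

Calling doOperationB again would create an entirely independent scope, which is exactly the property that lets several overlapping operations proceed without tracking each other's state.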


Can you hear me now? Has anyone ever been able to actually have a live chat using one of these sites?

When did customer service become so hard?

Monday, July 2, 2007

Replace enum Constructs with Classes

Anyone who knows me knows I'm perfectly happy writing C# code. But I think it's important to keep tabs on what the rest of the industry is doing, especially things that come out of the Java space. Witness the omniscient debugger, Guice, and db4o. Along those lines, I've read a bit from a book entitled Effective Java, written by Joshua Bloch, and I'm happy to report that it has made me a better programmer. My favorite gem, so far, is something that I attempted to do about five years ago--replace enums with classes.

First, some background on my initial motivation. What I kept running into were string values that corresponded to various codes that would be kept in the database. You know what I mean, you've got addresses and address types (e.g. home & work, or mailing & billing, etc.). So, you store a string that indicates what type of address it is and expose that as a property of your address object. The problem is a distinct code smell that something is wrong:

if (addr.AddressType == "M")

Of course, this looks like a magic number, so to speak... and comes with all the attendant difficulties in maintainability of the code: what is "M", what does "M" mean, what happens when we phase out "M"? We could probably make an accurate guess given something as prosaic as addresses, but what about exception policies or authentication modes?

What I wanted at the time was a "typesafe string enum". Put in more concrete terms, I wanted to do the following:

if(addr.AddressType == AddressTypes.Billing)

Now, Java doesn't have built-in enum support, whereas .NET does support named integral enumerations as a first-class type. So, we could implement the above with the following declaration:

public enum AddressTypes
{
    Home,
    Work,
    Mailing,
    Billing
}

This would work fine, except that we need some fixed mechanism for identifying address types outside of our code (i.e. in the database or in an XML wire-serialization). So, we simply identify these constants explicitly (the value for Billing matches the examples below; the rest are arbitrary):

public enum AddressTypes
{
    Home = 1,
    Billing = 2,
    Mailing = 3,
    Work = 4
}

Okay, we're golden! Or, are we? Let's look at this design choice in practice. Consider what a customer address record might look like:

CustomerID AddressType AddressLine1 AddressLine2 ...
243843 2 1032 North St. NULL

Hmmm, that's not so bad. We could use a lookup table and a view to make reporting and querying easier. What about XML serialization of a customer object? It would look something like this:

<Customer>
  <CustomerID>243843</CustomerID>
  <AddressType>2</AddressType>
  ...
</Customer>

Still, that's not terrible, but it could be better. There's a very good reason why XML is human-readable. It makes consuming data easier for systems outside of the system where the data was originated. That is, when the marketing department hires someone to come in and integrate their CRM system with your customer database, they have to figure out that "2" means "Billing".

There's that magic number again! What we have implicitly done with the enum serialization is leaked an implementation detail, e.g. how we represent Billing addresses in our system. We have violated the encapsulation principle. Cue the warning music (dum-dum DUM)!

It's not just other systems that will have some difficulty with these magic numbers. Consider how you would store a default address type in a configuration file, for example defaultAddressType=2.

There are other difficulties with enums. Probably the most pertinent is related to how enums are implemented. Specifically, any integral value can be cast to an enum instance, whether or not it corresponds to a named constant. In other words:

if(addr.AddressType == (AddressTypes)2) //perfectly valid

Well, that's fine, right? I mean, we're storing 2 in the database as well, so we can't go changing the enum values. The problem is we've effectively lost the raison d'être of our enumeration, i.e. type safety. Imagine data coming into the system from that marketing consultant. How would you perform a validity check on AddressType?

There are lots of problems with this situation, and they all revolve around how enums are implemented. In this example, there is nothing wrong with casting 6 to an AddressTypes instance... until you try to persist that value to your database and you get a referential integrity violation.

Obviously, we need a better solution. And that is where the concept of a strongly-typed string enum comes into play. Bloch's basic concept of the "typesafe enum pattern" is outlined in Item 21 of his aforementioned book. More important is his discussion there about the different ways the pattern can be used, e.g. extensible implementations (inheritance) and comparable implementations (sorting). Most of what he says applies to what we are going to build as well, despite being focused on Java.
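The shape of the pattern is language-agnostic, so here is a rough sketch of it in JavaScript. This is only an illustration of the idea, not the .NET implementation (that comes next time); the class name, codes, and constants are all made up for the example:

```javascript
// Typesafe enum pattern: a fixed set of named instances, each carrying the
// explicit code that gets persisted externally, plus a checked lookup that
// replaces the unchecked cast a raw enum would allow.
class AddressType {
  static byCode = new Map();
  constructor(name, code) {
    this.name = name;
    this.code = code;                // the value stored in the database/XML
    AddressType.byCode.set(code, this);
    Object.freeze(this);             // instances are immutable constants
  }
  toString() { return this.name; }   // serializes readably, not as a magic number
  static fromCode(code) {            // validity check: unknown codes fail fast
    const type = AddressType.byCode.get(code);
    if (type === undefined) {
      throw new RangeError(`unknown address type code: ${code}`);
    }
    return type;
  }
  static Home = new AddressType("Home", "H");
  static Billing = new AddressType("Billing", "B");
}
```

With this in place, `addr.addressType === AddressType.Billing` reads just like the enum comparison we wanted, but AddressType.fromCode throws on a bogus code instead of silently admitting the equivalent of casting 6 to AddressTypes.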

We are going to implement what I would call a typical usage of the pattern, based on my experience. Along the way we will leverage some of .NET's strengths and find ourselves with a much more expressive and extensible way to represent short sets of values in our systems. For now, I hope I've made the case for why we need the typesafe enum pattern and that you'll join me next time for an exposition of my .NET implementation of the pattern.