Wednesday, December 17, 2008

A Corollary to Ockham's Razor

The popular phrasing of the maxim known as Ockham's Razor is, "the simplest solution is usually the right one."  This is not the original statement of William of Ockham.  He might better be paraphrased: "a solution with fewer assumptions is to be preferred."  That restatement, though still not precisely correct, captures the spirit but leaves out the concept of simplicity.  Karl Popper, the eminent epistemologist, would have us define simplicity as the "degree of falsifiability" of our solution; i.e., how easy is it to prove the solution incorrect?  Popper goes further:

Simple statements, if knowledge is our object, are to be prized more highly than less simple ones because they tell us more; because their empirical content is greater; and because they are better testable.

He could well have been talking about programming!  It would follow that declarative programming, currying, constraint programming, and domain-specific languages are to be preferred over alternate paradigms, but so too would an entry in the obfuscated C contest or your average unreadable Perl program.  To rectify this seeming contradiction, we need to understand that a "simple statement", according to Popper, is one with fewer parameters--precisely because it is more testable.

So more testable code is better code, and we might induce that a test driven approach is to be preferred, insofar as the practice encourages the writing of simpler, more testable statements.  But, Ockham's razor is about solutions, not approaches.  So, what can we say about deciding between solutions?

I offer the Programmer's Corollary to Ockham's Razor:

To pick the best solution architecture, choose the one that consistently requires the fewest parameters.

Tuesday, December 16, 2008

HOWTO: SQL Server Return Value with Enterprise Library

// Create the Database and the command for the stored procedure.
Database db = DatabaseFactory.CreateDatabase();
DbCommand dbCommand = db.GetStoredProcCommand("<your procedure>");

// Add the return-value parameter before executing the command.
db.AddParameter(dbCommand, "@RETURN_VALUE", DbType.Int32, ParameterDirection.ReturnValue, String.Empty, DataRowVersion.Default, null);
db.ExecuteNonQuery(dbCommand);

// Read the return value back; default to -1 if it isn't an integer.
int returncode;
if (!Int32.TryParse(Convert.ToString(dbCommand.Parameters["@RETURN_VALUE"].Value), out returncode))
    returncode = -1;

The TryParse guard around returncode is arguably more than you strictly need, but it's good to state your assumptions about the behavior of external code explicitly.
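If several procedures follow this convention, you might pull the pattern into a small helper so that assumption lives in one place.  This is just a sketch using the same Enterprise Library calls as above; the idea that -1 means "no usable return value" is my own convention, not the library's.

// Sketch: encapsulate the @RETURN_VALUE convention shown above.
// Assumes the same namespaces as the snippet (System.Data, System.Data.Common,
// Microsoft.Practices.EnterpriseLibrary.Data).
private static int ExecuteWithReturnValue(Database db, string procedureName)
{
    DbCommand cmd = db.GetStoredProcCommand(procedureName);
    db.AddParameter(cmd, "@RETURN_VALUE", DbType.Int32, ParameterDirection.ReturnValue,
                    String.Empty, DataRowVersion.Default, null);
    db.ExecuteNonQuery(cmd);

    int returnCode;
    if (!Int32.TryParse(Convert.ToString(cmd.Parameters["@RETURN_VALUE"].Value), out returnCode))
        returnCode = -1; // my convention: -1 means the return value wasn't an integer
    return returnCode;
}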

Wednesday, November 12, 2008

Everything Crashes, But Programmers Are Still Jocular

This is completely off-topic, but I couldn't resist sharing this with anyone and everyone.  Whilst researching the frequent crashes on my iPhone, I ran across this post, and the first comment made me chuckle.

Posted by Kee Hinckley
24 July 2007 @ 4am

It appears that if the OS was unable to get a traceback for the crash (which seems to be the case in a lot of my crashes), the exception code given is: 0x8badf00d.

Could be worse. I remember one company getting a confused call from a customer about a 0xdeadbeef crash.

Tuesday, November 4, 2008

Dealing with Globalization Differences in Windows Versions

Recently I was charged with leading an effort to localize an ASP.NET application to a few different locales, including The Netherlands and India.  I was reminded again how difficult it is to get this right.  There are those who argue that using the same view across all the cultures supported in your application is a bad idea.  For example, an RTL culture, e.g. Hebrew, when done right, requires a mirroring of the entire view layout: not an easy thing to accomplish without changing the view.

Regardless, the .NET Framework and ASP.NET provide many facilities for localizing your application around a single view: App_LocalResources, App_GlobalResources, CultureInfo, etc.  ASP.NET has a particularly helpful feature; you can set the CultureInfo used by the application in the web.config using the globalization element.

There are lots of different ways of representing a culture in Windows: culture name, culture identifier, locale identifier (LCID), and others.  Generally, though, the culture name is used, e.g. en-US, en-GB, and es-MX for English (US), English (Great Britain), and Spanish (Mexico), respectively.  As the world's second most populous country with many ancient and highly varied cultures, India has no fewer than nine culture names defined in the .NET Framework.  This is, however, a narrow view of India's diversity with its 28 states, 7 union territories, and literally hundreds of spoken languages.  Obviously, there isn't 100% coverage of this diversity in the cultures supported in .NET.

One particularly useful culture name in use in Windows is en-IN.  In India, the de facto language of the law and by extension government and business is English, due to both the historical influence of British colonization and the need for a lingua franca in such an amazingly diverse country.  The en-IN culture code codifies this, making it possible for an application developer to effectively gloss over this diversity in the Indian market.  Unfortunately, en-IN is not available in the core .NET Framework release.  As you can see from this list, Windows Vista supports this culture, as does Windows 2008.

Obviously, this is problematic for developers doing development on Vista but deploying to Windows 2003 R2.  Fortunately, Microsoft developers faced with the incredible diversity of the world's cultures provided a way to customize the available cultures on any Windows installation: the CultureAndRegionInfoBuilder (CARIB) class.  There is a standard how-to on creating custom cultures with this class, but we will take a slightly different approach in this entry.  We will export the en-IN culture from our Windows Vista development machine, and import it on our Windows 2003 R2 server.

We'll use PowerShell here, but the examples should be clear enough that you can write your own console or Windows application to execute these steps.  Let's begin with exporting the en-IN culture.  The CARIB class is in sysglobl.dll, so we need a "reference" to that assembly. We can load the assembly from the GAC in PowerShell.  To do so we use its strong name, obtained by using "gacutil /l sysglobl" from a Visual Studio command prompt. In PowerShell on the Vista machine,

PS> [System.Reflection.Assembly]::Load("sysglobl, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL")

Now that we can reference the class, let's export the culture into an industry standard format called Locale Data Markup Language (LDML) version 1.1 using the Save method.  We can import this later with the CreateFromLdml method.

PS> $enIN = New-Object System.Globalization.CultureAndRegionInfoBuilder("en-IN", "Replacement")
PS> $enIN.LoadDataFromCultureInfo((New-Object System.Globalization.CultureInfo("en-IN")))
PS> $enIN.LoadDataFromRegionInfo((New-Object System.Globalization.RegionInfo("IN")))
PS> $enIN.Save("enIN.ldml")

Now we can copy enIN.ldml to our server and import it.  There are three very important modifications we must make to enIN.ldml, however.  The en-IN culture is defined in terms of some other Windows locale information that won't be found on Windows 2003 R2, specifically "text info", "sort", and "fallback".  If you open enIN.ldml, you'll find the following three elements.

<msLocale:textInfoName type="en-IN" />
<msLocale:sortName type="en-IN" />
...
<msLocale:consoleFallbackName type="en-IN" />

If we try to load a CultureAndRegionInfoBuilder from this file as-is, we'll receive the following error.  (For more information, see the section "Exporting Operating System-Specific Cultures" in this CodeProject on-line book.)

Culture name 'en-in' is not supported.

We've got to change these to a sensible alternative that is supported on Windows 2003 R2.  I chose "en-US", though "en-GB" would've also been appropriate.  For other cultures, this may prove more difficult.  My changed LDML file contains these lines.

<msLocale:textInfoName type="en-US" />
<msLocale:sortName type="en-US" />
...
<msLocale:consoleFallbackName type="en-US" />

With that done, we can load and register the culture on the Windows 2003 server.

PS> [System.Reflection.Assembly]::Load("sysglobl, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL")
PS> $enIN = [System.Globalization.CultureAndRegionInfoBuilder]::CreateFromLdml("enIN.ldml")
PS> $enIN.Register()

The culture en-IN is now available to all applications running on your server.  To see this in action, you could now augment the web.config of your web application with a globalization element:

...
<system.web>
  <globalization culture="en-IN" uiCulture="en-IN" />
...


Create a file called i18n.aspx and add the following code:

<%@ Page Language="C#"%>
<%= (10.5).ToString("c") %>

Navigate to i18n.aspx and you should see the following:

Rs. 10.50
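If you'd like to verify the registration outside of ASP.NET, a quick console check does the trick.  This is just a sketch; it assumes the culture was registered as shown above.

using System;
using System.Globalization;

class CultureCheck
{
    static void Main()
    {
        // Throws (an ArgumentException on .NET 2.0/3.5) if en-IN is not available on this machine.
        CultureInfo enIN = new CultureInfo("en-IN");
        Console.WriteLine(enIN.EnglishName);
        Console.WriteLine((10.5m).ToString("c", enIN)); // the same rupee-formatted value as the page above
    }
}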

I hope this helps!

Sunday, October 19, 2008

Using ASP.NET to Write Files to a Remote UNC Share

Scenario: Let's say you have an ASP.NET site on a public web server.  The server itself is in an un-trusted domain, e.g. a workgroup server, and the site needs to write files to a UNC share that is in the domain.  There are many reasons you may need to do this, but the difficulty is that you cannot specify NetworkCredentials for the DirectoryInfo class.  You cannot, therefore, authenticate against the UNC share programmatically, and since your application is running in an un-trusted domain, the ASP.NET process identity is of no help.  If possible, the easiest thing to do is write the file locally and have the system that needs the file pull from there via a UNC.  Of course, if you're attempting to share the file across web applications, you'd likely be in the same scenario.

Solution: The solution I have developed for a slightly more complex scenario utilizes a few relatively little-known facilities.  Basically, we'll use an ASP.NET FileUpload to receive a file, write it to a temporary location, perform any needed processing, and then use the WebClient class to perform an HTTP PUT to a local virtual directory that points to a UNC share.  Let's start with the page that receives the file.

Using the FileUpload control couldn't be easier.  Add it to your page and call its SaveAs(string) method to save the file locally.  Of course, that doesn't help us get that file to a remote UNC share, does it?  What we'll do is save that file to a temporary location on the web server, do any necessary processing, and then upload that file.  Here's the code to do that.

// Save the uploaded file to a temporary location on the web server.
WebClient wc = new WebClient();
string tempFileName = Path.GetTempFileName();
fileUpload1.SaveAs(tempFileName);

// PUT the file to the virtual directory that points at the UNC share.
wc.UploadFile(uploadUrl, "PUT", tempFileName);


Let's talk about that uploadUrl variable.  You are probably already familiar with GET and POST, but you may not have had occasion to use PUT and DELETE.  When specifying PUT in an HTTP request, the request URL is the location where the request's file payload should be stored by the web server.  IIS 6 supports HTTP PUT requests via its WebDAV module, but the functionality is disabled by default.  Turn it on the same way you turn on ASP.NET, by clicking Allow after selecting WebDAV in the Web Extensions in IIS manager.  Once it's turned on, the IIS server hosting your application will accept HTTP PUT requests.  If you do not turn on WebDAV, PUT requests will result in a 405 Method Not Allowed response from IIS.



Having enabled PUT, you now need a place to put the files.  Let's create a virtual directory in our site that points at the UNC path that is our final destination.  There are three key points here.  First, we need to ensure that the new virtual directory has Write access.  Second, we need to tell IIS to connect as an account with the same username and password as an account on the machine where the UNC share exists.  Finally, we have to allow Anonymous access to the virtual directory.  If the UploadFile call returns a 401 Unauthorized, one of these three things is wrong.  The account that authenticates to the UNC share should, of course, have the necessary permissions to the share, including write access.
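When the PUT fails, the HTTP status code tells you which of these settings to revisit.  Here's a small, hedged variation on the upload call from above that surfaces the status; it assumes the same wc, uploadUrl, and tempFileName variables.

// Distinguish a 401 (virtual directory auth/permissions) from a 405 (WebDAV not enabled).
// Requires using System.Net; and using System.Diagnostics;
try
{
    wc.UploadFile(uploadUrl, "PUT", tempFileName);
}
catch (WebException ex)
{
    HttpWebResponse response = ex.Response as HttpWebResponse;
    if (response != null)
        Trace.WriteLine(String.Format("PUT failed: {0} {1}",
            (int)response.StatusCode, response.StatusDescription));
    throw;
}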



There you have it: a few lines of code and a virtual directory.  You're not likely to find a simpler solution to this problem.

Thursday, July 17, 2008

BizTalk Mapper: Mapping the Value of a Node when Present, Otherwise Nil

If you have an optional node in your source schema (e.g. minOccurs="0") whose value must map to a nillable node in your destination schema, this tidy little mapper construct is for you.

BizTalk Conditional Value or Nil mapping

It occurs to me that a cookbook containing many of these patterns would be useful to folks just getting started with the BizTalk mapper.

Thursday, July 10, 2008

Calling a WCF Service from a SQL Server 2005 CLR User-Defined Function

On my current project I developed a Membership Service that consolidates all user identities into a single authority, allowing for things like enterprise single sign-on, impersonation, and centralized auditing.  Now that we're ready to convert our users over from the previous system, we need to register them all through the service interface.  Since the existing user data will be staged to a SQL Server, it was appropriate to give that import access to the service.

Enter a user-defined function.  Adding a SQL Server Project to my Membership Service solution, I quickly wrapped the call to the RegisterUser web method in a SqlFunction, i.e. a method tagged with the Microsoft.SqlServer.Server.SqlFunction attribute.  David Hayden has a good post on the steps necessary to get this right.  Here's an even more detailed discussion to get you going.   In addition to these two posts, this article covers how to impersonate the principal in calling your function; it also discusses how you can modify the proxy class to not require the UNSAFE permission set.  The SQLCLR team also has a great post on doing this.
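To give a feel for the shape of such a wrapper, here is a rough sketch.  The proxy class, namespace, endpoint, and parameters are all hypothetical; the posts linked above cover the real details (permission sets, the serialization assembly, and so on).

using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    // Hypothetical wrapper around a wsdl.exe/Web Reference-generated proxy.
    [SqlFunction(DataAccess = DataAccessKind.None)]
    public static SqlInt32 RegisterUser(SqlString userName, SqlString email)
    {
        using (MembershipProxy.MembershipService svc = new MembershipProxy.MembershipService())
        {
            svc.Url = "http://your-service-host/MembershipService.asmx"; // hypothetical endpoint
            return new SqlInt32(svc.RegisterUser(userName.Value, email.Value));
        }
    }
}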

All of these authors used wsdl.exe to generate the proxy class for the web service.  This is a fine approach, but I prefer the ease of adding a Web Service to my project.

Both approaches also used the sgen utility to generate an XML Serialization Assembly for the classes in the web service proxy that was created with wsdl.exe or by adding a Web Reference.  This knowledge base article explains why this is absolutely necessary when hosting your proxy in SQLCLR.

There is an additional consideration when using wsdl.exe and/or Web Reference generated proxies against a WCF service that uses the wsHttpBinding, highlighted here.  These proxies only support the basicHttpBinding; that is, they don't support WS-Addressing as required by the wsHttpBinding.  So, if you've got your SQLCLR function or stored procedure calling your WCF web service but you are getting a web service timeout, check to make sure it is using a compatible binding.  You may also notice a System.ServiceModel.ActionMismatchAddressingException in the service log.

Tuesday, June 10, 2008

XML Schema dateTime Primer for C# Developers

The XML Schema dateTime primitive data type represents a date and time (June 10th, 2008 at 9:30am Eastern Daylight Time at the time of this writing) in the following format:

2008-06-10T13:30:00Z

The above format represents the UTC date/time.  UTC, or Coordinated Universal Time, is essentially Greenwich Mean Time (or Zulu time, hence the "Z"); it is the standard name for this zero time offset.  To get the UTC date/time from your local time, subtract your time zone offset.  In Indiana, where I'm from, our offset is currently -04:00 (colloquially, "four hours behind"), so we add four hours to local time to get UTC.

Here's a C# one-liner to format the current time in the XML Schema dateTime format.

string xsDateTime = System.DateTime.Now.ToUniversalTime().ToString("o");
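If you'd rather not think about format strings, XmlConvert produces the xs:dateTime lexical form directly.  A minimal sketch:

using System;
using System.Xml;

class XsDateTimeExample
{
    static void Main()
    {
        // Serializes the current time as xs:dateTime in UTC, e.g. 2008-06-10T13:30:00Z
        string xsDateTime = XmlConvert.ToString(DateTime.Now, XmlDateTimeSerializationMode.Utc);
        Console.WriteLine(xsDateTime);
    }
}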

Thursday, June 5, 2008

PowerShell: Convert Unix line-ending to Windows CrLf

function GetFiles
{
  # Pass through only FileInfo objects (skip directories).
  foreach ($i in $input) {
    ? -inputObject $i -filterScript { $_.GetType().Name -eq "FileInfo" }
  }
}

function ConvertToCrLf
{
  param([string] $path)
  gci $path |
  GetFiles |
  % {
    $file = $_.FullName
    $c = gc $file
    $c | % {
             # Only rewrite lines that don't already end in CrLf.
             if (([regex] "`r`n$").IsMatch($_) -eq $false) {
               $_ -replace "`n", "`r`n"
             } else {
               $_
             }
           } |
        set-content $file   # write back to the file, not to $path (the directory)
  }
}

Thursday, May 29, 2008

SafeExecute: A Functional Pattern for Cross-cutting Exception Handling

In some cases, you may find yourself implementing the exact same exception handling logic in several methods within a class, and your brain screams DRY!  Here's a way to use C# 3.0 to put that exception handling logic in one place:

// Each public method simply wraps its body in SafeExecute.
public string Foo(string name, string greeting)
{
    return SafeExecute<string>(() =>
    {
        return greeting + " " + name;
    });
}

public int Bar(int i)
{
    return SafeExecute<int>(() =>
    {
        return i + 1;
    });
}

// The cross-cutting exception handling lives in one place.
protected T SafeExecute<T>(Func<T> funcToExecute)
{
    try
    {
        return funcToExecute.Invoke();
    }
    catch (Exception)
    {
        // Proper handling (logging, wrapping, etc.) is elided here.
        throw;
    }
    //if your error handling does not throw an exception
    //return default(T);
}



The details of proper exception handling are elided above to allow the reader to focus on the functional aspects of the pattern.  This pattern is really only useful when you are using exactly the same exception handling logic in each method, which is often the case within a class, given the common implications of the Single Responsibility Principle.



Actually, this pattern is just a concrete instance of the more general functional pattern of wrapping the execution of another function, but it provides a useful, practical example.
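For methods that return nothing, a small Action overload rounds out the pattern; a sketch building on the class above:

// Overload for void methods: reuse the same cross-cutting handler.
protected void SafeExecute(Action actionToExecute)
{
    SafeExecute<object>(() =>
    {
        actionToExecute.Invoke();
        return null;
    });
}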

Wednesday, May 28, 2008

Learning New Tools

I sometimes wonder which is more complex, in the information theory sense: the functioning of a given vital organ or the functioning of a modern IDE.  In the case of Emacs, I would tend to favor the IDE.

I learned vi in college because, truthfully, it was the default editor for elm, and I wanted to send email.  Most other folks used pine, but I didn't hear about that for a while.  Anyway, that was how it began, and I came to love vi and later vim.  This love flowered even after I found out Bill Joy wrote it, probably because he hated Emacs.

Now, there are a few flawed attempts to bring syntax highlighting and indenting for Erlang to my favorite little editor, but they're far from perfect.  There's even an aborted attempt to bring Erlang to Eclipse.  Alas, it too is seriously flawed.  It would seem that most of the Erlang programming universe writes code in Emacs.

I've never been a joiner, and I wore my iconoclastic editor as a badge of honor.  Now, however, I've resolved to follow the pack and learn Emacs.  As a side benefit, it would appear that I'll be learning a little Lisp along the way, well elisp at least, more out of necessity than desire.

What bugs me is that I don't really want to learn Emacs, I'd rather spend that time exploring OTP. But, c'est la vie!  It will just take some time to become proficient.

At work these days I'm learning the ins-and-outs of BizTalk.  This, too, is less than ideal.  BizTalk is really powerful, very enterprise-y message-oriented middleware.  Truthfully, I wouldn't mind it so much if its development tools had any notion of refactoring.  The whole thing just seems too brittle, and could there be a more miserable way to spend one's time than mapping message types?

Getting Erlang Going on a Vanilla Ubuntu Install

Before you are able to configure and make Erlang OTP from source, you're going to need to be sure your have the right dependencies installed.  For a vanilla Ubuntu installation, this means:
sudo apt-get install libc6-dev
sudo apt-get install libncurses5-dev
sudo apt-get install m4
sudo apt-get install libssl-dev
sudo apt-get install sun-java6-jdk

The last two aren't strictly necessary, but libssl-dev provides the headers the OTP system needs in order to use OpenSSL.  Now, that last dependency on the JDK may seem a bit odd, but the standard makefile for the OTP system tries to build jinterface, an OTP application that enables running an Erlang node in Java.  (As an aside, there is a similar package called twotp that does the same thing for Python.)

Assuming all goes well, you should be able to cd into the source directory (wherever it is you extracted it to when you downloaded it) and execute the following commands in order:
./configure
make
sudo make install

Depending on the permissions to /usr/local/* you may not need elevated privileges, but again this is for a vanilla install.  Now, you should be able to run Erlang.  Next time, I'll have some notes on setting up vim and Emacs for Erlang.

Saturday, May 24, 2008

The ALT.NET Times for May 24th, 2008

A bit of a quiet week on the list this week.  In the U.S. this coming Monday is Memorial Day, and we get the day off--a fact that compels many to plan their vacations around it.  Whatever the reason, there wasn't much traffic, and so there's not much to report.

Probably the most interesting messages of the week revolved around ALT programming languages.  The Boo programming language--which is a full-on CLR language in the style of Python--seemed to be a favorite of many on the list.  Oren Eini is a primary contributor and uses Boo for his Windsor configuration tool, Binsor.  There's a healthy community around the language, one which created VS2008 integration for it.

There was also some discussion of F#, a language that is something like a CLR implementation of OCaml, with some changes required to behave well with the runtime and other CLR languages.  Additionally, it is missing one quintessential OCaml feature: functors.  Nevertheless, F# delivers many of the familiar mainstays of a functional language--like currying, first-class functions, pattern matching, and tuples/records--while integrating with all the .NET libraries and providing strong typing through type inference.  It also works seamlessly when installed over Visual Studio 2008.  To learn more, a good place to start would be to RTM, or Ted Neward's F# Primer.  The place where the F# community gathers is called hubFS.

So, that's the short version (or at least the thematically relevant version) of what I learned from the alt.netters this week.  Have a great one!

Wednesday, May 21, 2008

VMWare Blues

[Image: Windows Vista BSOD screenshot]

I suspect it's my fault.  I let Vista update to SP1 while I had the VM paused then tried resuming.  I got this BSOD while VMWare was restoring the virtual machine's state.  Since Vista now shows 4GB of RAM instead of 3.5GB after the update, I'm guessing the memory manager changed, and that caused VMWare to choke on its old memory map.

Saturday, May 17, 2008

The ALT.NET Times for May 17th, 2008

They say if you do something three times, it becomes a habit.  This is the second post in a series highlighting what I gleaned from the sharp minds in the ALT.NET scene over the last week.

Early in the week, Harry McIntyre from London shared his process and custom code for managing change to DBML files, the XML that defines your classes in Linq to SQL.  Folks using Linq to SQL know the pain of managing change to these files.  You cannot regenerate them from your database without wiping out any customizations that you made.  The Linq to SQL designer is pretty good about capturing most of the idiosyncrasies of your tables, but you inevitably have to customize the generated DBML to cover the corner cases, such as having a default value on a non-nullable CreatedOn column.

Harry's project is called Linq to DBML Runner, and he has a post about the process he uses.  It's dependent on a couple of other projects, including Linq to XSD and, in yet another case of trickle-down technology from Ruby, Migrator.NET.  If you're committed to effectively managing change to your DBML files--and you will be if you use Linq to SQL on greenfield projects involving more than a couple of developers--check out Linq to DBML Runner.

There was a lot of talk about MVC and MVP, along with specializations formalized by Fowler, including Passive View.  A few astute alt.netters shared some exposition on the venerable MVC pattern as it was first conceived and developed in Smalltalk.  Among these posts, Derek Greer's extensive exposition of these patterns was linked up.  It is a fantastic, thorough, and cogent examination of the patterns and their origins; highly recommended.

You can also add another implementation to your MVP list: MVC#.  This looks very interesting, but I haven't had a chance to delve into it.

Also in the rich-client/WPF world, I heard the first rumblings about Prism this week.  Glenn Block has been a frequent poster to the mailing list in the past.  He's the Technical Product Planner for the Client User Experience (UX) program in Microsoft's Patterns & Practices (PnP) team.  Glenn posted about a code drop of a Prism reference implementation.  This is essentially an application that will bake-in all the concepts that will eventually be extracted, formalized, and generalized into the Prism framework for composite applications.  This is cool stuff, and it will be good to see a lot of the disparate PnP frameworks rolled up.

One ActionScript developer (now that is ALT) asked about dealing with nested asynchronous web service calls.  Two suggestions were given: Flow-based programming and continuation passing style.  The latter is familiar, I would imagine, to most developers, at least those with some exposure to functional programming, and it is now very easy to accomplish with C# and lambda expressions.  The former, Flow-based programming, is an unfamiliar programming style to me, but at first blush it appears to fit in very well with the current interest in SOA.
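To illustrate the continuation-passing idea in C# terms, here's a tiny sketch in which each asynchronous step receives "what to do next" as a lambda; the service methods are hypothetical stand-ins for real async web service calls.

using System;

class CpsSketch
{
    // Hypothetical async operations that hand their results to a continuation.
    static void GetCustomerIdAsync(string name, Action<int> continuation) { continuation(42); }
    static void GetOrderCountAsync(int customerId, Action<int> continuation) { continuation(7); }

    static void Main()
    {
        // The nesting mirrors the call sequence without blocking on any call.
        GetCustomerIdAsync("Contoso", customerId =>
            GetOrderCountAsync(customerId, orderCount =>
                Console.WriteLine("Customer {0} has {1} orders", customerId, orderCount)));
    }
}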

Our final topic is an appropriate one for this column, namely, how do we process all the technology out there?  The question was asked by J.P. Hamilton on the list, and the most interesting response was from Oren:

I don't.  I read some interesting blogs, but I don't go into technology unless I have a need for it.  I try to keep an eye for new ideas, but technology itself is not something that I even try to keep up.  I find that knowing the fundamentals helps immensely when you learn new stuff, so focusing on that is never a bad idea.

Have a great week!

Monday, May 12, 2008

Hpricot Ruby Script to Digest ISO Currency Codes

UPDATE: I fixed a couple of bugs in this and changed the XML Schema DataType namespace alias to “xs”.  You will likely want to remove some enumeration items because they aren’t particularly useful for e-commerce applications, e.g. palladium troy ounce.

require 'hpricot'
require 'open-uri'

doc = Hpricot(open("http://en.wikipedia.org/wiki/ISO_4217"))
codetable = doc.search("//table[@class='wikitable sortable']")[0]
rows = codetable.search("//tr")
# Skip the header row (index 0) and stop before running past the last row.
for i in 1...rows.length
    next if rows[i] == nil
    tds = rows[i].search("//td")
    # Column 3 holds the currency name (as a link); column 0 holds the alphabetic code.
    name = tds[3].search("//a[@title]").inner_html.gsub(/\s/, '_')
    puts '<xs:enumeration id="' + name + '"  value="' + tds[0].inner_html + '" />'
end

Also, here's a Powershell script to process the ISO 3166 country code list (semi-colon delimited):

gc countrycodes.txt | ? {$_ -match ';'} | % { $s0 = $_.split(';')[0]; $s1 = $_.split(';')[1]; "<xsd:enumeration id=`"$s0`" value=`"$s1`" />" }  | out-file codes.txt

Saturday, May 10, 2008

The ALT.NET Times for May 10th, 2008

This is the inaugural post of a series I propose to do over Saturday morning coffee.  I hope to cherry-pick the posts on the ALT.NET mailing list, and record here for posterity (and my feeble, fallible brain) the best information of the week.  Think of it as an ALT.NET Lazy Web distilled.

So, what did we learn this week?  Well, if you're writing, thinking about writing, or desperately in need of an HTML parser, you should check out Hpricot.  Incidentally, if you're not a fan of Ruby, beware!  The usage syntax of this library might make you a convert.  If you're looking for a .NET library, SgmlReader is a great place to start.

Speaking of Ruby, Dave Newman let us know about his VS plug-in for syntax highlighting of Haml.  Haml is a templating language for creating views in RoR.

A topic that frequently comes up made another appearance this week: "TDD: Where to Start?"  Ah, a fine question and one the ALT.NET crowd answers rather well.  A resounding chorus of alt.netters always seem to respond with one book title in particular: Working Effectively with Legacy Code by Michael Feathers.  Starting out with TDD is hard work, says the crowd, and starting out with TDD on a brownfield project that doesn't utilize inversion of control is considerably harder.  Another popular suggestion is to see TDD in action.

The Managed Extensibility Framework got some pixels towards the beginning of the week.  To quote Krzysztof Cwalina, contributing author of ".NET Framework Design Guidelines" and PM of the Application Framework Core team at Microsoft:

MEF is a set of features referred in the academic community and in the industry as a Naming and Activation Service (returns an object given a “name”), Dependency Injection (DI) framework, and a Structural Type System (duck typing). These technologies (and other like System.AddIn) together are intended to enable the world of what we call Open and Dynamic Applications, i.e. make it easier and cheaper to build extensible applications and extensions.

This is exciting news for .NET developers.  Incorporating these features into the BCL means we don't have to get permission from our clients and project managers to use them.  We won't have to take dependencies on Unity, Windsor, Structure Map, AutoFac, or Spring.NET to get inversion of control goodness into our applications.

One classically "ALT" exchange about distributing configuration changes mentioned some technologies with which I was unfamiliar.  ActiveMQ is a huge, mature Apache project for Message Brokering and Enterprise Integration Patterns.  Under its umbrella are projects such as the .NET Messaging API which provides a common API for interacting with messaging providers (ActiveMQ, STOMP, MSMQ, and EMS).  IKVM is, to put it succinctly, Java running on Mono.  The OP was keen on using custom WCF extensions to make distributed configuration caching with Memcached transparent.  Oren Eini posted some time ago about using Memcached with .NET, and it is definitely something you should check out, should the need for a distributed object cache arise.

That's all I have to report this week.  Have a great weekend and to quote Joe Krutulis of Appirio, "Think & Enjoy!"

Monday, May 5, 2008

Using LINQ from .NET 2.0

A few months back when I was migrating a project to VS 2008, the PM was very concerned that .NET 3.5 would cause problems on our servers, interfering with existing applications or creating new bugs in the existing codebase.  I did my best to educate my PM on the nature of .NET 3.5: that it still ran on the .NET 2.0 CLR, that it was a bunch of new libraries, that it included SDK tools that compiled the new C# 3.0 language, and that it wouldn't affect existing applications.  Ever the pragmatic fellow, he insisted on late night installs and system testing nevertheless.

Now, I don't mind pragmatism, in fact, I applaud it; however, a different situation can arise than the one I just described.  A non-technical PM or a draconian operations manager might insist that you don't "use .NET 3.5" and stick with ".NET 2.0".  If you take this at face value, it means you can't use C# 3.0, LINQ, etc.  Well, first you must get them into a Slammer worm recovery group.  Next, you can still use some .NET 3.5 goodness without having to install it on the production server.

Here's a minimalist approach: include System.Core.dll in your .NET 2.0 project.  You may already know that you can build strictly for .NET 2.0 from VS 2008, so just set the reference, set it to copy local, and roll. You'll have to convince them to let you install .NET 2.0 SP1, though.  Another thing you could try is LINQBridge, a .NET 2.0 re-implementation of LINQ to Objects along with Action<T> and Func<T>.
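Either way, once the reference is in place, ordinary LINQ to Objects compiles down to plain delegates and extension-method calls that run happily on the 2.0 CLR.  A tiny sketch:

using System;
using System.Linq; // provided by System.Core.dll or LINQBridge

class LinqOnTwoPointOh
{
    static void Main()
    {
        int[] numbers = { 5, 3, 9, 1 };

        // Query runs against LINQ to Objects only; no new runtime required.
        var smallOnes = numbers.Where(n => n < 5).OrderBy(n => n);
        foreach (int n in smallOnes)
            Console.WriteLine(n);
    }
}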

At the end of the day, you should probably just stay within the environment's strictures.  If you can make the case that .NET 3.5 will save time and money, you'll be much more successful in changing minds than if you just think it is cool.

Monday, April 28, 2008

Take Ownership and Full Control in Vista

Sometimes you move a bunch of files from an old machine, in a different domain, and you want to modify those files in the new environment.  Generally, the process is to take ownership of the files, then grant yourself full control.  This can be a tedious process in Windows Vista, especially in light of LUA.  So, I present here some scripting you can use from PowerShell to perform this burdensome task.

First, taking ownership is still best accomplished using the "takeown" command.  You can look the command syntax up, but if you wanted to take ownership of all the files in the current directory, you can do this from your PowerShell prompt:

gci | % { takeown /f $_}

Next, you want to grant yourself full control.  This is normally done with the cacls command, but in Windows Vista this has been deprecated and you should use the icacls command.  Here's how to grant yourself full control on all the files in the current directory, replacing your current permissions:

gci | % { icacls $_.FullName /grant:r your_domain\your_username:F }

Replace your_domain and your_username above with the information appropriate to your environment.

Sunday, April 20, 2008

.NET 3.5 TDD Frameworks from the ALT.NET Scene

From the ALT.NET mailing list this week, I've come across two very capable frameworks for enabling true test-driven development.  Both seem to be born out of a dissatisfaction with current implementations.

First up, I'll mention MoQ; pronounced "Maw-kyoo" or just "mawk", it is written alternatively as both moq and MoQ.  The tagline from the MoQ Google Code site says it is "The simplest mocking library for .NET 3.5 with deep C# 3.0 integration."  From this alone, we can already say that this library isn't for everyone.  Folks working on greenfield projects or converting existing projects may be able to choose .NET 3.5, but many are working in situations where lambdas and LINQ are out-of-bounds.

Those of us lucky enough to be using Func and Action in our code will find moq to be an elegant approach to unit testing interfaces.  But, other Mocking frameworks support some of the new C# syntax, so why moq? According to Daniel Cazzulino (kzu), a developer on moq and a Microsoft XML MVP,

The value we think Moq brings to the community is simplicity through a more natural programming model.

So, what does kzu mean by "natural"?  Well, traditional mocking frameworks use a record/playback model for setting expectations, and, due to a legacy that often extends back to .NET 1.0, they have "more than one way to do it", to use an oft-invoked Perl-ism and a big reason why Perl is frequently referred to as "write once, read never".  Certainly, simpler APIs are to be preferred as long as they can get the job done, and APIs tend to become simpler and more natural over time as long as they aren't constrained by the need to avoid breaking changes.  So, moq was meant to be a simpler mocking framework that leverages C# 3.0 language features.  And, so it does.

If you are just getting into TDD and are a little overwhelmed by the myriad mocking frameworks out there and their unfamiliar semantics, you should definitely look into MoQ.
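To give a flavor of that lambda-based style, here's a sketch against a made-up IPriceService interface.  Method names have shifted between Moq releases (early versions used Expect where later ones use Setup), so treat this as illustrative rather than a version-exact API reference.

using Moq;

public interface IPriceService
{
    decimal GetPrice(string sku);
}

public class PricingTests
{
    public void UsesThePriceService()
    {
        // Arrange: expectations are plain lambdas against the interface.
        var priceService = new Mock<IPriceService>();
        priceService.Setup(s => s.GetPrice("ABC-123")).Returns(9.99m);

        // Act: the code under test consumes the mocked interface.
        decimal price = priceService.Object.GetPrice("ABC-123");

        // Assert: all configured expectations were exercised.
        priceService.VerifyAll();
    }
}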

Now, the next item on our agenda is Total Recall.  Okay, not Total Recall, but another story by Philip K. Dick: Autofac; in any event, we're going to take a look at the eponymous Inversion of Control container.  Autofac is an MIT-licensed project that has gotten some pixel time on the ALT.NET mailing list as of late.  The community around Autofac writes on Google Code that:

Autofac was designed with modern .NET features and obsessive object-orientation in mind. It will change the way you approach dependency injection in .NET.

Well, if you aren't doing DI just yet, it will certainly change the way you do it.  If you're using many of the other .NET IoC containers out there, you're probably not leveraging lambda expressions and LINQ either, so Autofac would be a change.  Like MoQ, Autofac sheds some of the legacy cruft and fully embraces .NET 3.5 as a platform.  Also note that Autofac leverages LinqBridge to remain compatible with .NET 2.0 applications.

So, will I be doing my next greenfield project using DDD with moq, Autofac, and db4o?  Well, I don't think my clients are ready for that yet.

Sunday, April 13, 2008

Advanced Web Programming Techniques: Dynamic Script Tags

I was researching some techniques for doing Comet programming, and I ran across a PowerPoint presentation on Dynamic Script Tags.  If you speak GWT, you'll find they outline this technique in their FAQ.  Here's the basic idea.

First, a little background about the Javascript Same-Origin Policy, first introduced in Netscape 2.  From The Art of Software Security Assessment blog, "same-origin prevents a document or script loaded from one site of origin from manipulating properties of or communicating with a document loaded from another site of origin. In this case the term origin refers to the domain name, port, and protocol of the site hosting the document."  These restrictions extend to XmlHttpRequests (XHR); i.e., a script cannot make an XHR to a domain other than the one from whence the script originated.

The Same-Origin Policy poses a bit of a problem for two kinds of AJAX applications.  First, the developer attempting to make XHR calls to sub-domains (see my previous post on web performance optimization) will have problems.  Generally speaking (IE & FF only?), resources are not of the same origin if they have different URLs.  Scripts from the same top-level domain can, however, set the document.domain property to the TLD to allow them to interact; this doesn't solve the XHR problem.  The second type of AJAX application that can be fettered by the Same-Origin Policy is the mashup.  Most mashups proxy the browser request to get the data from the other sites.  In other words, xyzmaps.com would have all of its scripts/XHR point to xyzmaps.com, which would in turn make an HTTP request to e.g. maps.google.com, essentially proxying the browser request.  This presents a huge difficulty for implementing mashups, as all requests have to be proxied through the origin server.  A more thorough understanding of the Same-Origin Policy leads us to a better solution.

If you've ever used the Google maps API or the Virtual Earth api, you'll note that you include the api in your page via script tags.  Of course, this works perfectly for many sites, but doesn't it violate the Same-Origin Policy?  It does not, in fact, because the browser rightly assumes that scripts included via tags in the document are "safe" insofar as they are not alien.  So, we can get around the Same-Origin Policy by adding script tags from other domains to our document.  This works statically in our pages, but we can also add script tags dynamically via script DOM manipulation!  Thus, a better solution than proxying requests for our mashups is found.

So, SiteA dynamically adds a script tag whose src attribute is set to a url on SiteB that will generate JavaScript, embedding the name of SiteA's callback method in the request.
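The SiteB endpoint, in other words, just has to emit JavaScript that calls the named callback with the data.  Here's a rough ASP.NET sketch of such an endpoint; the handler name, query string parameter, and payload are all made up.

using System.Web;

// Hypothetical SiteB endpoint, requested as e.g. /data.ashx?callback=handleData
public class ScriptTagHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string callback = context.Request.QueryString["callback"];
        string json = "{ \"answer\": 42 }"; // made-up payload

        // The response is script, so the dynamically added <script> tag executes it
        // and hands the data to SiteA's callback function.
        context.Response.ContentType = "text/javascript";
        context.Response.Write(callback + "(" + json + ");");
    }

    public bool IsReusable { get { return true; } }
}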

There are several other rather unsatisfactory approaches outlined at Abe Fettig's Weblog.

Saturday, March 29, 2008

Website Performance Talk

This is a really enlightening talk from Steve Souders, author of High Performance Websites, and Chief Performance Yahoo!.  Below is a summary of his talk, but they also have a page detailing their rules.

  1. Make fewer HTTP requests: combine CSS and Javascript files, use CSS sprites*, use image maps instead of multiple images where possible, and inline images*
  2. Use a Content Distribution Network.  Akamai, Panther Express, Limelight, Mirror Image, SAVVIS.  Distribute your static content before creating a distributed architecture for your dynamic content.
  3. Add an Expires header.  If you aren't already, you should be changing the URL of your resources when you version them, to ensure that all users will get updates and fixes.  Since you then won't be updating a given resource, you should give it a far future expires header.
  4. Gzip all content: html, css, and javascript.
  5. Put stylesheets at the top.  IE will not begin rendering until all CSS files are downloaded.  Use the link HTML tag to pull in stylesheets, not the @import CSS directive, as IE will defer the download of the stylesheet.
  6. Put javascripts at the bottom.  HTTP 1.1 allows for two parallel server connections per hostname, but all downloads are blocked until the script is downloaded.  The defer attribute of the script block is not supported in Firefox and doesn't really defer in IE.
  7. Avoid CSS expressions.  The speaker doesn't cover this, but I gather from my own experience that this can seriously delay rendering as the expression will not be evaluated until the page is fully loaded.
  8. Make CSS and JS external.  This rule can be bent in situations where page views are low and users don't revisit often.  In this situation in-lining is suggested.
  9. Reduce DNS lookups.  He didn't cover this either, but it follows from rule #1 that additional network requests negatively impact performance.  He does mention later in the talk that splitting your requests across domains can drastically improve response time.  This is a consequence of the two connections per domain limit in the HTTP 1.1 specification.  It is important to remember that JavaScripts from one domain cannot affect JavaScripts or pages from another domain, due to browser security restrictions.  Also, this is a case of diminishing returns: going beyond about four domains causes negative returns, presumably from DNS lookups and resource contention in the browser itself.
  10. "Minify" Javascript.  JSMin written by Doug Crockford is the most popular tool.  This is just removing whitespace, generally.  YUI compressor may be preferred and also does CSS.
  11. Avoid redirects.
  12. Remove duplicate scripts.
  13. Configure ETags.  These are used to uniquely identify a resource in space and time, such that if the ETag header received is identical to the cached result from a previous request for that resource, the browser can choose to use the local cache without need to download the remainder of the response stream.  The speaker doesn't go into this subject, so questions about how this improves caching performance over caching headers remain unanswered until you read his book.
  14. Make AJAX cacheable.  If you embed a last modified token into the URL of your AJAX request, you can cause the browser to pull from cache when possible.

A great tool to analyze your site's conformance to these rules is YSlow.  It's an add-on for Firebug.  During development your YSlow grade can give you a very good indication of what is happening to the response time of your application.  This metric, the YSlow grade, seems to me to be a much better quality gate for iterative development than something as variable and volatile as measured response time.

Some additional rules from the next version of the book:

  • As mentioned in my commentary in rule #9, split dominant content domains.
  • Reduce cookie weight.
  • Make static content cookie free.
  • Minify CSS (see rule #10 comments above).
  • Use iframes wisely.  They are a very expensive DOM operation.  Think about it--it's an entirely new page, but linked to another.
  • Optimize images.  Should your GIFs be PNGs?  Should your PNGs be JPGs?  Can you shrink your color palette?

Wednesday, March 19, 2008

Erlang News

Robert Virding's First Rule: "Any sufficiently complicated concurrent program in another language contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang."

"I now know based on hard-won experience that you could replace "concurrent program" with "distributed system" in Robert's rule and it would still be just as accurate." --Steve Vinosky

Yariv wrote a cogent response to "What Sucks About Erlang".

Erlang Flavored Lisp anyone?

Monday, March 10, 2008

Programming Erlang: Chapter 8 Problem 2

'Tis an odd title, I know.  My puppy is prancing around the kitchen trying to get me interested in his chew toy, dropping it near me so I can see that it bounces and wiggles.

Do you want to see Erlang bounce and wiggle? The code below creates a circular linked list of N Erlang processes, then sends a Message around it M times. Pretty cool, huh? Well, it was meant to be used by readers of "Programming Erlang" to compare their favorite language to Erlang. I didn't bother doing this with C# threads, though it would be interesting to try it out with the Concurrency and Coordination Runtime.

Here's the output of the code (time is in milliseconds):

Sent 30000000 messages in 27453 / 29186


-module(problem2).
-compile(export_all).

start(N, M, Message) when is_integer(N), is_integer(M) ->
    statistics(runtime),
    statistics(wall_clock),
    FirstPid = spawn(?MODULE, loop, [[]]),
    LastPid = create(1, N, FirstPid),
    FirstPid ! LastPid, %% the ring is closed
    send(0, M, FirstPid, Message),
    {_, Time1} = statistics(runtime),
    {_, Time2} = statistics(wall_clock),
    io:format("Sent ~p messages in ~p / ~p~n", [N * M, Time1, Time2]).

send(C, C, _Pid, _Message) -> void;
send(I, C, Pid, Message) ->
    Pid ! {self(), Message},
    receive
        finished -> void
    end,
    send(I+1, C, Pid, Message).

create(C, C, Pid) -> Pid;
create(I, C, Pid) ->
    create(I+1, C, spawn(?MODULE, loop, [Pid])).

loop(Next) ->
    receive
        {Caller, FirstPid, Message} ->
            if
                FirstPid =:= self() -> Caller ! finished;
                true -> Next ! {Caller, FirstPid, Message}
            end,
            loop(Next);
        {Caller, Message} ->
            Next ! {Caller, self(), Message},
            loop(Next);
        NewNext when Next =:= [] ->
            loop(NewNext)
    end.
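As mentioned above, it would be interesting to compare this with C#.  Here is a rough, non-equivalent sketch of the same ring idea using one thread and one queue per node; it leans on BlockingCollection (so .NET 4 or later) and is meant only to suggest how much heavier OS threads are than Erlang processes.

using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;

class Ring
{
    static void Main()
    {
        const int N = 1000;  // ring size (Erlang comfortably handles far more processes)
        const int M = 1000;  // laps around the ring

        var queues = new BlockingCollection<int>[N];
        for (int i = 0; i < N; i++) queues[i] = new BlockingCollection<int>();

        // Workers 1..N-1 forward whatever they receive to the next queue;
        // a negative value is the shutdown signal.
        for (int i = 1; i < N; i++)
        {
            int self = i;
            new Thread(() =>
            {
                while (true)
                {
                    int msg = queues[self].Take();
                    queues[(self + 1) % N].Add(msg);
                    if (msg < 0) return;
                }
            }) { IsBackground = true }.Start();
        }

        var sw = Stopwatch.StartNew();
        for (int lap = 0; lap < M; lap++)
        {
            queues[1].Add(lap);  // the main thread plays the role of node 0
            queues[0].Take();    // wait for the message to complete the lap
        }
        sw.Stop();
        queues[1].Add(-1);       // shut the workers down

        Console.WriteLine("Sent {0} messages in {1} ms", N * M, sw.ElapsedMilliseconds);
    }
}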

Friday, February 15, 2008

Presenter First with POA

Plain-old ASP.NET (POA) has a lot of advantages: rapid UI prototyping, incomparable support from 3rd-party vendors, and a simplified programming model that abstracts away the problems of a stateless application, to name a few.  But there remains the problem of testing.  ASP.NET applications--indeed, any web-based applications--are difficult to test.  The traditional architecture of having your code-behind orchestrate your business logic components in response to events generated through the postback mechanism requires that you instigate those postbacks in order to test the orchestration.  In other words, you have to drive the UI to test your application.

UI testing is expensive.  If you've done it, you already know that.  If you've not done it, then trust us on this one; the cost of tools and the complexity and brittleness of web tests drive the cost up.  Furthermore, if an application isn't designed to be UI tested, it becomes a nearly intractable problem to do so.  The folks at Atomic Object know this from experience.  As proponents of test-driven development, they have looked down the barrel of that gun and found a better way.  They call it Presenter First.

The basic idea behind Presenter First (PF) is to obviate the need for UI testing.  It is a Model-View-Presenter (MVP) variant that is closely related to Martin Fowler's Passive View.  The application architecture is such that the UI is so humble that there is no need to test it.  This architecture requires that the Presenter be responsible for orchestrating the Model (business components) in response to events generated by the View, and updating the View appropriately.  A View in PF only implements events and property set methods.  This View can be defined by contract, e.g. IView, and mocked to allow for test-driven development.

A hallmark of PF is that Presenters are constructed with an IView and an IModel and do not implement any other public interface members.  Thus, the encapsulated workflow is driven completely through IView and (possibly) IModel events.  Concordantly, it becomes evident that PF architectures are implicitly stateful, i.e. the Presenter is extant between events.  In POA, the HttpContext only exists for the duration of the processing of a request.  While the HttpSession remains, there is a non-trivial amount of complexity involved with it; ultimately, this too should be considered transient in the general case.  From these facts one might conclude that PF is not applicable to POA.

The purpose of this writing is to propose a method of implementing Presenter First in ASP.NET.  The general approach is outlined below:

  1. All pages implementing a PF IView (e.g. ICustomerEditView) must inherit from PFPage.
  2. PFPage implements a Dictionary<typeof(IView),PresenterInstance> lookup.
  3. Before Page processing, the PresenterInstance is given a reference to the View, i.e. the current page.
  4. The IView events are fired (from postback) and the Presenter handles them appropriately, updating the View via setters.
  5. The Presenter handles the Page.Unload event and drops its View reference.
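To make the shape of this concrete, here is a minimal sketch of the contracts and the Presenter; all names (ICustomerEditView, CustomerEditPresenter, and so on) are hypothetical, and the PFPage plumbing that wires the current page to the Presenter is omitted.

using System;

// The humble view: only events and property setters.
public interface ICustomerEditView
{
    event EventHandler SaveClicked;
    void SetCustomerName(string name);
    void SetStatusMessage(string message);
}

public interface ICustomerEditModel
{
    string LoadCustomerName();
    void SaveCustomerName(string name);
}

// Constructed with IView and IModel; no other public interface members.
public class CustomerEditPresenter
{
    private ICustomerEditView view;
    private readonly ICustomerEditModel model;

    public CustomerEditPresenter(ICustomerEditView view, ICustomerEditModel model)
    {
        this.model = model;
        AttachView(view);
    }

    // In the POA variation, the concrete view is transient, so the Presenter
    // exposes a mutator for its view reference (step 3 above).
    public void AttachView(ICustomerEditView newView)
    {
        view = newView;
        view.SaveClicked += OnSaveClicked;
        view.SetCustomerName(model.LoadCustomerName());
    }

    private void OnSaveClicked(object sender, EventArgs e)
    {
        // In a real view, the event args would carry the edited values.
        model.SaveCustomerName("new name");
        view.SetStatusMessage("Saved.");
    }
}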

This approach diverges from "classical" PF in a few ways.  Most importantly, the actual concrete View acted upon by the Presenter instance is transient.  This should not pose any particular difficulty as the ASP.NET postback model ameliorates this stateless condition of web applications.  Further, while this requires the Presenter interface to implement a mutator for its View reference, the Presenter implementation need not diverge from the "classical" PF approach, save in one important way.  IModel events would need to be queued.[1]

Furthermore, the Presenter instance and the dictionary must be serializable if the pattern is to be used in an ASP.NET deployment where the HttpSession is out-of-process.  This might pose problems with Presenter references to cached objects that can't be extended to support serialization, though this is considered a minor flaw in the general case and an impediment that can be ameliorated by recreating objects from metadata.

To compose views it would only be necessary to create a PFControl base class that implemented nearly identical functionality to the PFPage.  Also, though declarative databinding is no longer possible, DTOs could be used in View mutators for databinding.

Through the use of mock objects, this approach to ASP.NET development using Presenter First enables Test-Driven Development of the business objects and their orchestration in response to UI events, without the need to involve ASP.NET pages.[2]  Additionally, this allows the tandem of presentation (.aspx) and interaction (.aspx.cs) to be isolated from business logic (Model) and orchestration (Presenter).

Please let me know what you think.  Is your organization using a very similar approach?  How have you fared?

[1] Because the Presenter MUST NOT call mutator methods on the View between postbacks, IModel events must be queued until a new View instance is received.

[2] While issues such as declarative security and AJAX can make UI testing a necessity, we are assuming that these are not direct considerations for vetting the business logic and orchestration.  Further this kind of UI testing will generally be done as a part of UAT and thus does not "cost" the development team.

Tuesday, February 12, 2008

Upgrading to Windows Vista

There's a certain voodoo to searching for problems. I don't expect that anyone is actually going to read this before upgrading to Vista... but here are the three problems I had.

First, make sure if you are running DaemonTools that you uninstall it first.  The Vista upgrade will force you to do this before proceeding, but sptd.sys will not be removed.  So... remove it.  It's in your Windows directory.

Next, make sure you download Vista updates for all your drivers.  I couldn't get the external monitor for my laptop to use its native resolution.  The generic monitor driver in Vista didn't allow it.  I went to the manufacturer's website, NEC's, and downloaded Vista drivers for the monitor.

Lastly (I hope), you might have trouble connecting to a cheaper broadband router.  Well, you'll be able to connect, but you won't get a DHCP address.  This knowledge base article should do the trick.

So, what do I think of Vista...?

I guess I don't know what all the hullabaloo has been about in the blogs and press.  Surely no one was expecting Mac OSX?  I run Ubuntu Linux too, and I don't expect it to be OSX.  In fact, I think Ubuntu's desktop experience sucks compared to Vista, but I love it as a Linux distro.  I ran Slackware in '98, and I can tell you things have REALLY changed in ten years.  And, I can tell you Vista is better than XP.  You have to take that with a grain of salt; I relish change.

Sure, you will have more trouble pirating music, but it's more secure, prettier, and cooler.

I guess that's why Vista is making Microsoft money.  Go figure.

Sunday, February 3, 2008

Weekend Update with Christopher Atkins

Dennis Miller was my favorite Weekend Update anchorman.  His wry, sardonic--if verbose--commentaries always made me laugh.  As I think about it, that sketch seems a direct ancestor of the news shows on Comedy Central so popular with the college crowd.

Anyway, my weekend has been interesting.  I found out that despite what the documentation says, ECDiffieHellmanCng is not supported on Windows XP.  In fact, its constructors check for the presence of NCrypt.dll: a library only available on Windows Vista and later.  So, despite the fact that it ships with .NET 3.5, you can't use it on XP.  Joy.  I hope this is not a trend.  You'll also find that you cannot use a BigInt, because it is marked internal.  If you're trying to do modern cryptography on Windows XP, in other words, you are out of luck.

So, I spent the remainder of my weekend working through Programming Erlang, and I have to say I am even more excited about it.  I'm only on page 51, but I've already learned about list comprehensions and custom flow control abstractions using higher-order functions.  The cool thing is I can apply some of this stuff to C#, now that we have lambda expressions.
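As a small example of carrying that idea over, here is a sketch of a custom flow-control abstraction in C# 3.0, in the spirit of the higher-order functions the book builds in Erlang; the TimesDo helper is my own invention, not a BCL method.

using System;

static class FlowControl
{
    // A made-up control abstraction: run an action N times.
    public static void TimesDo(this int count, Action<int> body)
    {
        for (int i = 0; i < count; i++)
            body(i);
    }
}

class Program
{
    static void Main()
    {
        // Reads almost like a built-in looping construct.
        3.TimesDo(i => Console.WriteLine("iteration {0}", i));
    }
}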

I'm excited about the year ahead.  There are a lot of changes on the horizon.  Imagine, Windows Server 2008, SQL Server 2008, a Yahoo! and Microsoft merger, widespread adoption of OpenID, and a new presidential election are all on their way.  To quote the folk poet Bob Dylan:

Come writers and critics
Who prophesize with your pen
And keep your eyes wide
The chance won't come again
And don't speak too soon
For the wheel's still in spin
And there's no tellin' who
That it's namin'.
For the loser now
Will be later to win
For the times they are a-changin'

Wednesday, January 30, 2008

Competence != Compensation

In America we are conditioned to believe that working hard, being diligent, and producing great work is the road to fiscal success.  Anyone who has really worked here will tell you that is simply not the case.  Prima facie, you must recognize that you are not paid what you are "worth"; that is, your personal worth has nothing to do with your income.  One must never forget this simple fact.  Despite the fact that you may do three times the work of your peers, you will not be compensated even twice as much.

Witness the Home Depot CEO who purposefully runs his stock into the ground, knowing well that a golden parachute awaits him once the board of directors requests that he "step down."  Negative output results in a cash payout worth over $200 million.  In America, you are worth what someone will pay you.  There is no such thing as intrinsic value, only assessed value.  In software development there are two ways of having your value assessed: by managers and by investors.

Make sure you don't hang your hopes on managers.

Continuous Integration with TFS: Pre-compiling a Web Application Project

If you are looking to pre-compile your web application projects with TFS 2008 Team Build, here are some tips.

First, you'll want to add the following to your TFSBuild.proj:

<Target Name="AfterDropBuild">
  <AspNetCompiler
    PhysicalPath="$(DropLocation)\$(BuildNumber)\Release\_PublishedWebsites\web_project_name"
    Debug="false"
    Force="true"
    TargetPath="C:\path_to_copy_precompiled_web_to"
    VirtualPath="/Path_to_your_web" />
</Target>


There are a few things to note here.  First, the Target Name is crucial; this Target will get executed after the drop is made, i.e. after the build successfully runs and your files are copied to the drop folder you specified when you created the build.  Secondly, remember that this is running on the machine where your build agent runs.  The AspNetCompiler MSBuild task shown here will launch aspnet_compiler.exe on that build agent machine, and so "/Path_to_your_web" points to the virtual directory on the default web site of your build agent machine.  Please note that "/Path_to_your_web" would be "/" in the case of your application being in the root directory.  Finally, the Force parameter is required since we will be writing over the existing pre-compiled web on subsequent builds.



If you want to set up a build agent on another machine, perhaps a development web server where you would prefer to deploy the pre-compiled web application, you'll need to configure that agent according to these instructions.  This is a good idea when running verification tests that drive the web UI.



Happy Coding!

Saturday, January 26, 2008

On a Positive Note

Jean-Paul Boodhoo is a developer and consultant who used to work at ThoughtWorks and has been featured on the Polymorphic Podcast and DNRtv.  He recently wrote a very encouraging post about dreams, risk, and reward.

Here's an excerpt.

I was speaking with a friend yesterday who made an interesting comment:

“You seem to regularly take on more stress than most other people would ever think to take on”

I corrected him and made this statement. I don’t feel like I am stressed out that much. In all honesty, the times that I feel stressed out almost always constitute a failure to plan on my part. I did say that what I do take on regularly is: Challenge and Risk. I am not afraid of the opportunity to fall flat on my face taking a risk, because I know that it is in the times of struggle/pain that growth happens.

Bravo.  That's well said.  Paul Graham (all your Lisp are belong to us) wrote a year ago explaining: How to Do What You Love.  Here's a quote, emphasis added:

Once, when I was about 9 or 10, my father told me I could be whatever I wanted when I grew up, so long as I enjoyed it. I remember that precisely because it seemed so anomalous. It was like being told to use dry water. Whatever I thought he meant, I didn't think he meant work could literally be fun—fun like playing. It took me years to grasp that.

Sure, sure, I hear you saying... do what you love, right?  Well, I love vacations, the ocean, playing guitar, and chasing my puppy--how do I get paid to do that?  The truth is you don't; you won't; and you shouldn't.  Again, from Mr. Graham:

Unproductive pleasures pall eventually. After a while you get tired of lying on the beach. If you want to stay happy, you have to do something.

I can't tell you how invigorating working in the yard can be or how exciting solving some mildly difficult programming problem can be.  It's enough to make people blog about their experiences.  But even these are just short-term examples; what is it that you would love to do over the long haul?  Is there some sort of heuristic that we can apply to help us decide?  Here's Graham's idea:

I think the best test is one Gino Lee taught me: to try to do things that would make your friends say wow.

If you hate what you do, I cannot convince you that trying to impress your friends with it is going to make it fun.  So, think of the things you already do to try to impress your friends.  Do you love to cook?  Master the art, learn the language, hone your technique, and blog about the experience.  Start an in-home cooking class consultancy.  Do you love to take photographs?  Practice, learn, take your camera everywhere, make it your first thought.  Do you love it enough to do it for free?

The test of whether people love what they do is whether they'd do it even if they weren't paid for it—even if they had to work at another job to make a living. How many corporate lawyers would do their current work if they had to do it for free, in their spare time, and take day jobs as waiters to support themselves?

If you don't have enough passion to work hard at something and get better, then you either don't love it or you don't have an audience.  You might just need new friends.  Seek them out in classes or online discussion groups.  But seek them out.

If you are worried about money, if you're worried about being responsible for the well-being of your children or spouse, remember that you have a responsibility to them.  You are responsible for loving them, for inspiring them, for trying to be the best person you can for them.  Those are your true responsibilities; you can find a way to pay the really important bills.  Don't let fear admit apathy.  Don't use your family as an excuse.

If you don't already know what you love to do--and so few of us really do--Graham has further advice:

It's hard to find work you love; it must be, if so few do. So don't underestimate this task. And don't feel bad if you haven't succeeded yet. In fact, if you admit to yourself that you're discontented, you're a step ahead of most people, who are still in denial. [..] Finding great work you love does usually require discipline.  Is there some test you can use to keep yourself honest? One is to try to do a good job at whatever you're doing, even if you don't like it. [...] Another test you can use is: always produce.

If you are telling yourself that you hate your day job and that one day you will be set free to do what you love, then you had better be doing what you love, and doing it consistently.  Otherwise, you're lying to yourself and everyone around you.  The hardest thing to admit at age 20, 30, 40, or 50 is that you don't know what you love; you don't know what you want to do when you grow up.  My point here is that you must do something; you must produce.  Even if you prove to be rather amateurish in everything you attempt, as long as you really gave it your best, you will be happier--and much more interesting.  The worst thing you can be in your work life, truly, is a dilettante.

I will close by leaving you with these further words from Graham's essay (please read it):

"Always produce" is also a heuristic for finding the work you love. If you subject yourself to that constraint, it will automatically push you away from things you think you're supposed to work on, toward things you actually like. "Always produce" will discover your life's work the way water, with the aid of gravity, finds the hole in your roof.

One last note before parting: seek out good teachers.  My wife recently told me what the tuition for a four-year degree at a local technical college is: nearly $84,000.  Most of their students do not finish, and most of them are there because they want to make more money.  And, sadly, most of them don't learn a thing--because they're not there to learn; they're there to make more money (via a diploma).  This is not the road to happiness.  It would be much better for those people to find work they love and to seek out the best teachers.  I guarantee you that those technical colleges do not have the best teachers.

Thursday, January 24, 2008

IndyTFS January Meeting: These Guys Rawk!

Jamie Kurtz of Open Solutions presented on TFS Work Items.  Now, I really hate going to user group meetings in the midwest, because most of the time the speakers are not really all that knowledgeable about their topic.  This session, however, could have been entitled, "TFS: All Questions Answered."  Between Paul Hacker and Jamie Kurtz, not one question was a stumper.  I learned more about TFS in that two hours than I could have in two days on my own.  It was truly fantastic; they had a great attitude!

Here are some of the things we learned.

  • Customizing Work Item types: workflow, states, custom form fields, custom field controls
  • Details and gotchas on how to get business users set up to edit work items from Excel
  • Details about securing fields and work items
  • Editing MSBuild files to work appropriately with WorkItems

Kudos to both gentlemen.  I encourage anyone using or considering using TFS to member up and start attending meetings.

Saturday, January 12, 2008

List of Links for Learning LINQ

Since I often use this blog as a brain back-up device, I will add another list of links, this time for learning LINQ.  (There's also a tiny refresher example after the links.)

Building a LINQ Provider

Links to LINQ (another link-post)

How LINQ Works: Creating Queries (part of a series of great posts on LINQ internals)

The .NET Standard Query Operators (slightly out-of-date, but very relevant)

Standard Query Operator Translation (LINQ to SQL)
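
As promised above, here's a tiny refresher showing the same query written both ways: query syntax and the underlying standard query operators.  The names array is just made-up sample data.

using System;
using System.Linq;

public class LinqRefresher
{
    public static void Main()
    {
        string[] names = { "Anders", "Don", "Erik", "Wes", "Luca" };

        //query syntax: the compiler translates this into calls
        //to the standard query operators below
        var shortNames = from n in names
                         where n.Length <= 4
                         orderby n
                         select n.ToUpper();

        //method syntax: the same thing, calling Where/OrderBy/Select directly
        var sameThing = names.Where(n => n.Length <= 4)
                             .OrderBy(n => n)
                             .Select(n => n.ToUpper());

        Console.WriteLine(string.Join(", ", shortNames.ToArray()));
        Console.WriteLine(string.Join(", ", sameThing.ToArray()));
    }
}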

Friday, January 11, 2008

Happy 70th Birthday to Don Knuth

I still haven't read The Art of Computer Programming, Volume 1, but I'm definitely an admirer of this man.  Here's to many, many more years where we are blessed to have him among us.

Don Knuth is my homeboy

Thursday, January 10, 2008

A Twitter-style Blog Post from a User Group

Larry Clarkin is a developer evangelist for Microsoft based out of Milwaukee.  He gave a talk this evening to the IndyNDA group at the Junior Achievement center on Keystone.  He spent the first twenty minutes of his talk showing us Microsoft's Virtual Earth maps site.  To use the Virtual Earth client libraries you need to include their script library and create a DIV to contain the map renderings.  A very informative site for learning Virtual Earth was demonstrated; it's called "The Virtual Earth Interactive SDK".  Very cool.

He showed us a pretty tame example of overlaying a Silverlight control on a Virtual Earth map.  The Silverlight control references the map via JavaScript.  Pretty simple, but I believe the concept is solid.  You could really have a lot of fun with this approach.

7:09pm and we've been enlightened that SOAP has lost the battle to REST on the public web.  Now we're getting some of the Twitter hype...  boring.  More Twitter, more boring.  I guess I just don't get Twitter. It seems like a way to fill the gaps when you are alone with yourself.  What's next--a microphone for your thoughts?  I will never waste my time reading that crap.

7:23pm and we're learning that RSS is gee-wowie!  "We need more things like RSS."  Good to know.  Larry likes Flickr.

The guy sitting behind me has returned to his seat and reeks of cigarette smoke.  Joy!

Oooh, now I'm learning how to get an ugly slideshow in Blend from a Flickr RSS feed.  It's 7:37pm and he's done--felt longer but under an hour.

The highlight was hearing guys get excited about being able to update Twitter via SMS.  They thought, "Wow, I can point my applications at that feed and then send control codes via SMS."  It feels like you should probably skip the middle-man.  Though, a free SMS to RSS gateway is a fun idea for homebrew mash-ups.
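
If you wanted to prototype that feed-driven idea without the SMS part, the polling side is trivial in .NET 3.5.  Here's a rough, entirely hypothetical sketch--the URL, the "cmd:" prefix, and the notion that titles carry the control codes are all made up--using SyndicationFeed from System.ServiceModel.Web:

using System;
using System.Linq;
using System.ServiceModel.Syndication;
using System.Xml;

public class FeedCommandPoller
{
    public static void Main()
    {
        //hypothetical feed; in the mash-up idea this would be the Twitter
        //(or SMS-to-RSS gateway) feed your application watches
        string feedUrl = "http://example.com/status.rss";

        using (XmlReader reader = XmlReader.Create(feedUrl))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);

            //treat entries whose titles start with a made-up "cmd:" prefix
            //as control codes for the home-brew application
            var commands = feed.Items
                .Select(item => item.Title.Text)
                .Where(title => title.StartsWith("cmd:", StringComparison.OrdinalIgnoreCase))
                .Select(title => title.Substring(4).Trim());

            foreach (string command in commands)
                Console.WriteLine("Would execute: {0}", command);
        }
    }
}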

Well, the pizza was better this time.

Wednesday, January 9, 2008

TFS Build Failure: MSBuild Tool missing sgen.exe

We're running TFS 2008 now at my client.  I installed the trial edition just to expedite things, as I'm not the one who works with the licenses.  (You can do an in-place upgrade of the trial to a live license.)  As I was getting the continuous integration build set up for our pilot project, I kept getting a build error similar to the following.

error MSB3091: Task failed because "sgen.exe" was not found, or the correct Microsoft Windows SDK is not installed. The task is looking for "sgen.exe" in the "bin" subdirectory beneath the location specified in the InstallationFolder value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v6.0A. You may be able to solve the problem by doing one of the following:  1) Install the Microsoft Windows SDK for Windows Server 2008 and .NET Framework 3.5.  2) Install Visual Studio 2008.  3) Manually set the above registry key to the correct location.  4) Pass the correct location into the "ToolPath" parameter of the task.


Some extensive searching only yielded this post, but I ignored the suggestion that followed the original posting... at first.  Turns out, it works just fine.  Basically, you should have two registry keys under HKLM\SOFTWARE\Microsoft\Microsoft SDKs:



[Screenshot: the two registry subkeys under HKLM\SOFTWARE\Microsoft\Microsoft SDKs]



The InstallationFolder value in the .NETFramework\v2.0 subkey is the thing we're after in this case.  Make an exact copy of that value under the Windows\v6.0A key.  I did this, as the previously mentioned post suggested, and things worked fine!
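
If you'd rather script that registry copy than click around in regedit, here's a rough sketch.  The key and value names come straight from the error message above; treat it as an illustration only (back up the key first and run it elevated):

using System;
using Microsoft.Win32;

public class SdkKeyFixup
{
    public static void Main()
    {
        const string sdkRoot = @"SOFTWARE\Microsoft\Microsoft SDKs";

        using (RegistryKey source = Registry.LocalMachine.OpenSubKey(sdkRoot + @"\.NETFramework\v2.0"))
        using (RegistryKey target = Registry.LocalMachine.CreateSubKey(sdkRoot + @"\Windows\v6.0A"))
        {
            object installationFolder = (source == null) ? null : source.GetValue("InstallationFolder");

            if (installationFolder == null)
            {
                //no .NET 2.0 SDK key to copy from; install the SDK first
                Console.WriteLine("InstallationFolder not found under .NETFramework\\v2.0.");
                return;
            }

            //copy the value to the key the sgen task actually looks in
            target.SetValue("InstallationFolder", installationFolder);
            Console.WriteLine("Copied InstallationFolder: {0}", installationFolder);
        }
    }
}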



Most important!  You may need to install the .NET Framework 2.0 SDK onto your TFS server/build agent machine in order to do the above.

Saturday, January 5, 2008

Lambdas & Closures in C# 3.0

Here's some code that exhibits the creation of a closure in C#.  This was possible with the advent of anonymous delegates in C# 2.0, but it looks a lot cleaner in C# 3.0.

using System;
using System.Linq;
using System.Collections.Generic;

public class Program
{
    public static void Main(string[] args)
    {
        ClosuresWithLambdas a = new ClosuresWithLambdas();

        List<Func<int>> fs = a.GetNumberFunctions();

        Func<int> increments_i_func = fs[0];
        Func<int> just_returns_i_func = fs[1];

        //prints 1
        a.PrintReturnValue(increments_i_func);
        //prints 2
        a.PrintReturnValue(just_returns_i_func);
    }

    public class ClosuresWithLambdas
    {
        public List<Func<int>> GetNumberFunctions()
        {
            //this variable gets included in the closure
            int j = 1;

            //we use this to return two functions
            List<Func<int>> fs = new List<Func<int>>(2);

            //don't get confused by the postfix operator:
            //the f lambda returns j BEFORE incrementing it
            Func<int> f = () => j++;

            //g just returns the variable j
            Func<int> g = () => j;

            //add the two functions to the list and return it
            fs.Add(f);
            fs.Add(g);
            return fs;
        }

        public void PrintReturnValue(Func<int> f)
        {
            Console.WriteLine(f());
        }
    }
}


This closure of which I speak is not immediately evident to the untrained eye.  The basic concept in this example is that the variable "j" has to survive past the end of its normal scope, i.e. the execution of GetNumberFunctions.  It has to survive because the two functions passed out of GetNumberFunctions--the lambdas "f" and "g"--refer to "j" even after the function in which they were defined has returned.  The compiler detects this (perfectly legal) reference and implements a lexical closure: it generates a class to contain these functions and the variable "j".  Here is that generated helper class, viewed in Lutz Roeder's Reflector.



[CompilerGenerated]
private sealed class <>c__DisplayClass2
{
    // Fields
    public int j;

    // Methods
    public int <GetNumberFunctions>b__0()
    {
        return this.j++;
    }

    public int <GetNumberFunctions>b__1()
    {
        return this.j;
    }
}


As you can probably gather, b__0 is "f" and b__1 is "g" (or "increments_i_func" and "just_returns_i_func", respectively).  They share "j" in the scope of an instance of this compiler-generated sealed class.  There are other ways for a language and compiler to implement closures, but this generated inner class works just fine.  For a practical example of using closures, please see my previous article on doing asynchronous network I/O with closures.