Tuesday, June 10, 2008

XML Schema dateTime Primer for C# Developers

The XML Schema dateTime primitive data type represents the current date and time (June 10th, 2008 at 9:30am Eastern Daylight Time at the time of this writing) in the following format:

2008-06-10T13:30:00Z

The above format represents the UTC date/time.  UTC, or Coordinated Universal Time, is essentially Greenwich Mean Time (or Zulu time, hence the "Z"), and it is the international standard name for this zero time offset.  To get the UTC date/time from your local time, subtract your time zone offset.  In Indiana, where I'm from, our offset is currently -04:00 (colloquially, "four hours behind"), so subtracting it means adding four hours to local time.
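
To make the arithmetic concrete, here's a small sketch using .NET 3.5's DateTimeOffset type, which carries the offset along with the date/time (the class name is my own):

```csharp
using System;

class UtcOffsetDemo
{
    static void Main()
    {
        // 9:30am local time with an Eastern Daylight Time offset of -04:00
        var local = new DateTimeOffset(2008, 6, 10, 9, 30, 0, TimeSpan.FromHours(-4));

        // UtcDateTime subtracts the offset: 9:30 - (-4:00) = 13:30 UTC
        Console.WriteLine(local.UtcDateTime.ToString("yyyy-MM-dd'T'HH:mm:ss'Z'"));
        // prints 2008-06-10T13:30:00Z
    }
}
```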

Here's a C# one-liner to format the current time in the XML Schema dateTime format.

string xsDateTime = System.DateTime.Now.ToUniversalTime().ToString("o");
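
Note that the round-trip ("o") specifier emits seven digits of fractional seconds, which xs:dateTime permits but which differs from the sample above.  If you'd rather let the BCL handle the formatting rules, XmlConvert (in System.Xml) can do it:

```csharp
using System;
using System.Xml;

class XsDateTimeDemo
{
    static void Main()
    {
        // XmlDateTimeSerializationMode.Utc converts to UTC and appends the "Z"
        string xsDateTime = XmlConvert.ToString(DateTime.Now, XmlDateTimeSerializationMode.Utc);
        Console.WriteLine(xsDateTime);
    }
}
```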

Thursday, June 5, 2008

PowerShell: Convert Unix line-ending to Windows CrLf

function GetFiles
{
  foreach ($i in $input) {
   ? -inputObject $i -filterScript { $_.GetType().Name -eq "FileInfo" }
  }
}

function ConvertToCrLf
{
  param([string] $path)
  gci $path |
  GetFiles |
  % {
    $c = gc $_.FullName
    $c | % {
             if (([regex] "`r`n$").IsMatch($_) -eq $false) {
               $_ -replace "`n", "`r`n"
             } else {
               $_
             }
           } |
        set-content $_.FullName  # rewrite each file in place
  }
}

Thursday, May 29, 2008

SafeExecute: A Functional Pattern for Cross-cutting Exception Handling

In some cases, you may find yourself implementing the exact same exception handling logic in several methods within a class, and your brain screams DRY!  Here's a way to use C# 3.0 to put that exception handling logic in one place:

public string Foo(string name, string greeting)
{
    return SafeExecute<string>(() =>
    {
        return greeting + " " + name;
    });
}

public int Bar(int i)
{
    return SafeExecute<int>(() =>
    {
        return i + 1;
    });
}

protected T SafeExecute<T>(Func<T> funcToExecute)
{
    try
    {
        return funcToExecute.Invoke();
    }
    catch (Exception)
    {
        throw;
    }
    //if your error handling does not throw an exception
    //return default(T);
}



The details of proper exception handling are elided above to allow the reader to focus on the functional aspects of the pattern.  This pattern is really only useful when you are using exactly the same exception handling logic.  This is often the case in classes due to common implications of SRP.



Actually, this pattern is more generalized by the common functional pattern of wrapping the execution of another function, but this provides a useful concrete example.
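
To make that generalization concrete, here's a sketch of the same shape wrapping a different cross-cutting concern--timing with Stopwatch--where the class and method names are my own:

```csharp
using System;
using System.Diagnostics;

public static class Wrappers
{
    // Same shape as SafeExecute<T>, but the cross-cutting concern is timing.
    public static T TimedExecute<T>(Func<T> funcToExecute)
    {
        Stopwatch sw = Stopwatch.StartNew();
        try
        {
            return funcToExecute.Invoke();
        }
        finally
        {
            Console.WriteLine("Executed in {0} ms", sw.ElapsedMilliseconds);
        }
    }
}
```

A call like `Wrappers.TimedExecute<int>(() => i + 1)` behaves just like the `Bar` method above, but every call site gets timing for free.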

Wednesday, May 28, 2008

Learning New Tools

I sometimes wonder which is more complex, in the information theory sense: the functioning of a given vital organ or the functioning of a modern IDE.  In the case of Emacs, I would tend to favor the IDE.

I learned vi in college; because, truthfully, it was the default editor for elm, and I wanted to send email.  Most other folks used pine, but I didn't hear about that for a while.  Anyway, that was how it began, and I came to love vi and later vim.  This love flowered even after I found out Bill Joy wrote it, probably because he hated Emacs.

Now, there are a few flawed attempts to bring syntax highlighting and indenting for Erlang to my favorite little editor, but they're far from perfect.  There's even an aborted attempt to bring Erlang to Eclipse.  Alas, it too is seriously flawed.  It would seem that most of the Erlang programming universe writes code in Emacs.

I've never been a joiner, and I wore my iconoclastic editor as a badge of honor.  Now, however, I've resolved to follow the pack and learn Emacs.  As a side benefit, it would appear that I'll be learning a little Lisp along the way, well elisp at least, more out of necessity than desire.

What bugs me is that I don't really want to learn Emacs, I'd rather spend that time exploring OTP. But, c'est la vie!  It will just take some time to become proficient.

At work these days I'm learning the ins-and-outs of BizTalk.  This, too, is less than ideal.  BizTalk is really powerful, very enterprise-y message-oriented middleware.  Truthfully, I wouldn't mind it so much if its development tools had any notion of refactoring.  The whole thing just seems too brittle, and could there be a more miserable way to spend one's time than mapping message types?

Getting Erlang Going on a Vanilla Ubuntu Install

Before you are able to configure and make Erlang OTP from source, you're going to need to be sure you have the right dependencies installed.  For a vanilla Ubuntu installation, this means:
sudo apt-get install libc6-dev
sudo apt-get install libncurses5-dev
sudo apt-get install m4
sudo apt-get install libssl-dev
sudo apt-get install sun-java6-jdk

The last two aren't strictly necessary, but libssl-dev provides the header and development files the OTP system needs to utilize OpenSSL.  Now, that last dependency on the JDK may seem a bit odd, but the standard makefile for the OTP system tries to build jinterface, an OTP application that enables running an Erlang node in Java.  (As an aside, there is a similar package called twotp that does the same thing for Python.)

Assuming all goes well, you should be able to cd into the source directory (wherever it is you extracted it to when you downloaded it) and execute the following commands in order:
./configure
make
sudo make install

Depending on the permissions to /usr/local/* you may not need elevated privileges, but again this is for a vanilla install.  Now, you should be able to run Erlang.  Next time, I'll have some notes on setting up vim and Emacs for Erlang.

Saturday, May 24, 2008

The ALT.NET Times for May 24th, 2008

A bit of a quiet week on the list this week.  In the U.S. this coming Monday is Memorial Day, and we get the day off--a fact that compels many to plan their vacations around it.  Whatever the reason, there wasn't much traffic, and so there's not much to report.

Probably the most interesting messages of the week revolved around ALT programming languages.  The Boo programming language--which is a full-on CLR language in the style of Python--seemed to be a favorite of many on the list.  Oren Eini is a primary contributor and uses Boo for his Windsor configuration tool, Binsor.  There's a healthy community around the language, one which created VS2008 integration for it.

There was also some discussion of F#, a language that is something like a CLR implementation of OCaml, with some changes required to behave well with the runtime and other CLR languages.  Additionally, it is missing one quintessential OCaml feature: functors.  Nevertheless, F# delivers many of the familiar mainstays of a functional language--like currying, first-class functions, pattern matching, and tuples/records--while integrating with all the .NET libraries and providing strong typing through type inference.  It also works seamlessly when installed over Visual Studio 2008.  To learn more, a good place to start would be to RTM, or read Ted Neward's F# Primer.  The place where the F# community gathers is called hubFS.

So, that's the short version (or at least the thematically relevant version) of what I learned from the alt.netters this week.  Have a great one!

Wednesday, May 21, 2008

VMWare Blues

image

I suspect it's my fault.  I let Vista update to SP1 while I had the VM paused then tried resuming.  I got this BSOD while VMWare was restoring the virtual machine's state.  Since Vista now shows 4GB of RAM instead of 3.5GB after the update, I'm guessing the memory manager changed, and that caused VMWare to choke on its old memory map.

Saturday, May 17, 2008

The ALT.NET Times for May 17th, 2008

They say if you do something three times, it becomes a habit.  This is the second post in a series highlighting what I gleaned from the sharp minds in the ALT.NET scene over the last week.

Early in the week, Harry McIntyre from London shared his process and custom code for managing change to DBML files, the XML that defines your classes in Linq to SQL.  Folks using Linq to SQL know the pain of managing change to these files.  You cannot regenerate them from your database without wiping out any customizations that you made.  The Linq to SQL designer is pretty good about capturing most of the idiosyncrasies of your tables, but you inevitably have to customize the generated DBML to cover the corner cases, such as having a default value on a non-nullable CreatedOn column.

Harry's project is called Linq to DBML Runner, and he has a post about the process he uses.  It's dependent on a couple of other projects, including Linq to XSD and in yet another case of trickle-down technology from Ruby: Migrator.NET.  If you're committed to effectively managing change to your DBML files, and you will be if you use it on greenfield projects involving more than a couple of developers, check out Linq to DBML Runner.

There was a lot of talk about MVC and MVP, along with specializations formalized by Fowler including Passive View.  A few astute alt.netters shared some exposition on the venerable MVC pattern as it was first conceived and developed in Smalltalk.  Among these posts, Derek Greer's extensive exposition of these patterns was linked up.  It is a fantastic, thorough, and cogent examination of the patterns and their origins--highly recommended.

You can also add another implementation to your MVP list: MVC#.  This looks very interesting, but I haven't had a chance to delve into it.

Also in the rich-client/WPF world, I heard the first rumblings about Prism this week.  Glenn Block has been a frequent poster to the mailing list in the past.  He's the Technical Product Planner for the Client User Experience (UX) program in Microsoft's Patterns & Practices (PnP) team.  Glenn posted about a code drop of a Prism reference implementation.  This is essentially an application that will bake-in all the concepts that will eventually be extracted, formalized, and generalized into the Prism framework for composite applications.  This is cool stuff, and it will be good to see a lot of the disparate PnP frameworks rolled up.

One ActionScript developer (now that is ALT) asked about dealing with nested asynchronous web service calls.  Two suggestions were given: Flow-based programming and continuation passing style.  The latter is familiar, I would imagine, to most developers, at least those with some exposure to functional programming, and it is now very easy to accomplish with C# and lambda expressions.  The former, Flow-based programming, is an unfamiliar programming style to me, but at first blush it appears to fit in very well with the current interest in SOA.
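
As a hedged illustration of that last point, here's a sketch of continuation passing style in C# 3.0.  The service calls and names are hypothetical stand-ins for asynchronous web service calls, but the shape--each call accepting a continuation rather than returning a value--is the technique:

```csharp
using System;

class CpsSketch
{
    // Hypothetical async-style call: accepts a continuation instead of returning a value.
    public static void GetCustomerName(int id, Action<string> continueWith)
    {
        continueWith("Customer" + id);
    }

    // Another hypothetical call; the length stands in for a real service result.
    public static void GetOrderCount(string customerName, Action<int> continueWith)
    {
        continueWith(customerName.Length);
    }

    static void Main()
    {
        // The nested lambdas are the continuations: each step runs when its "call" completes.
        GetCustomerName(7, name =>
            GetOrderCount(name, count =>
                Console.WriteLine("{0} has {1} orders", name, count)));
        // prints Customer7 has 9 orders
    }
}
```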

Our final topic is an appropriate one to this column, namely, how do we process all the technology out there? Asked by J.P. Hamilton on the list, the most interesting response was from Oren:

I don't.  I read some interesting blogs, but I don't go into technology unless I have a need for it.  I try to keep an eye for new ideas, but technology itself is not something that I even try to keep up with.  I find that knowing the fundamentals helps immensely when you learn new stuff, so focusing on that is never a bad idea.

Have a great week!

Monday, May 12, 2008

Hpricot Ruby Script to Digest ISO Currency Codes

UPDATE: I fixed a couple of bugs in this and changed the XML Schema DataType namespace alias to “xs”.  You will likely want to remove some enumeration items because they aren’t particularly useful for e-commerce applications, e.g. palladium troy ounce.

require 'hpricot'
require 'open-uri'
doc = Hpricot(open("http://en.wikipedia.org/wiki/ISO_4217"))
codetable = doc.search("//table[@class='wikitable sortable']")[0]
rows = codetable.search("//tr")
for i in 1...rows.length  # start at 1 to skip the header row
    next if rows[i].nil?
    tds = rows[i].search("//td")
    puts '<xs:enumeration id="' + tds[3].search("//a[@title]").inner_html.gsub(/\s/, '_') + '"  value="' + tds[0].inner_html + '" />'
end

Also, here's a Powershell script to process the ISO 3166 country code list (semi-colon delimited):

gc countrycodes.txt | ? {$_ -match ';'} | % { $s0 = $_.split(';')[0]; $s1 = $_.split(';')[1]; "<xsd:enumeration id=`"$s0`" value=`"$s1`" />" }  | out-file codes.txt

Saturday, May 10, 2008

The ALT.NET Times for May 10th, 2008

This is the inaugural post of a series I propose to do over Saturday morning coffee.  I hope to cherry-pick the posts on the ALT.NET mailing list, and record here for posterity (and my feeble, fallible brain) the best information of the week.  Think of it as an ALT.NET Lazy Web distilled.

So, what did we learn this week?  Well, if you're writing, thinking about writing, or desperately in need of an HTML parser, you should check out Hpricot.  Incidentally, if you're not a fan of Ruby, beware!  The usage syntax of this library might make you a convert.  If you're looking for a .NET library, SgmlReader is a great place to start.

Speaking of Ruby, Dave Newman let us know about his VS plug-in for syntax highlighting of Haml.  Haml is a templating language for creating views in RoR.

A topic that frequently comes up made another appearance this week: "TDD: Where to Start?"  Ah, a fine question and one the ALT.NET crowd answers rather well.  A resounding chorus of alt.netters always seem to respond with one book title in particular: Working Effectively with Legacy Code by Michael Feathers.  Starting out with TDD is hard work, says the crowd, and starting out with TDD on a brownfield project that doesn't utilize inversion of control is considerably harder.  Another popular suggestion is to see TDD in action.

The Managed Extensibility Framework got some pixels towards the beginning of the week.  To quote Krzysztof Cwalina, contributing author of ".NET Framework Design Guidelines" and PM of the Application Framework Core team at Microsoft:

MEF is a set of features referred to in the academic community and in the industry as a Naming and Activation Service (returns an object given a “name”), Dependency Injection (DI) framework, and a Structural Type System (duck typing). These technologies (and others like System.AddIn) together are intended to enable the world of what we call Open and Dynamic Applications, i.e. make it easier and cheaper to build extensible applications and extensions.

This is exciting news for .NET developers.  Incorporating these features into the BCL means we don't have to get permission from our clients and project managers to use them.  We won't have to take dependencies on Unity, Windsor, Structure Map, AutoFac, or Spring.NET to get inversion of control goodness into our applications.

One classically "ALT" exchange about distributing configuration changes mentioned some technologies with which I was unfamiliar.  ActiveMQ is a huge, mature Apache project for Message Brokering and Enterprise Integration Patterns.  Under its umbrella are projects such as the .NET Messaging API which provides a common API for interacting with messaging providers (ActiveMQ, STOMP, MSMQ, and EMS).  IKVM is, to put it succinctly, Java running on Mono.  The OP was keen on using custom WCF extensions to make distributed configuration caching with Memcached transparent.  Oren Eini posted some time ago about using Memcached with .NET, and it is definitely something you should check out, should the need for a distributed object cache arise.

That's all I have to report this week.  Have a great weekend and to quote Joe Krutulis of Appirio, "Think & Enjoy!"

Monday, May 5, 2008

Using LINQ from .NET 2.0

A few months back when I was migrating a project to VS 2008, the PM was very concerned that .NET 3.5 would cause problems on our servers, interfering with existing applications or creating new bugs in the existing codebase.  I did my best to educate my PM on the nature of .NET 3.5: that it still ran on the .NET 2.0 CLR, that it was a bunch of new libraries, that it included SDK tools that compiled the new C# 3.0 language, that it wouldn't affect existing applications.  Ever the pragmatic fellow, he insisted on late night installs and system testing, nevertheless.

Now, I don't mind pragmatism, in fact, I applaud it; however, a different situation can arise than the one I just described.  A non-technical PM or a draconian operations manager might insist that you don't "use .NET 3.5" and stick with ".NET 2.0".  If you take this at face value, it means you can't use C# 3.0, LINQ, etc.  Well, first you must get them into a Slammer worm recovery group.  Next, you can still use some .NET 3.5 goodness without having to install it on the production server.

Here's a minimalist approach: include System.Core.dll in your .NET 2.0 project.  You may already know that you can build strictly for .NET 2.0 from VS 2008, so just set the reference, set it to copy local, and roll. You'll have to convince them to let you install .NET 2.0 SP1, though.  Another thing you could try is LINQBridge, a .NET 2.0 re-implementation of LINQ to Objects along with Action<T> and Func<T>.
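
For example, with System.Core or LINQBridge referenced (and copied local), an ordinary LINQ to Objects query compiles with the C# 3.0 compiler but runs against the 2.0 CLR--assuming, of course, your project targets .NET 2.0 as described:

```csharp
using System;
using System.Linq; // resolved by System.Core or LINQBridge

class LinqOn20
{
    static void Main()
    {
        int[] primes = { 11, 2, 7, 3, 5 };

        // Standard query operators: filter, sort, materialize
        int[] small = primes.Where(p => p < 10)
                            .OrderBy(p => p)
                            .ToArray();

        Console.WriteLine(string.Join(",", Array.ConvertAll(small, i => i.ToString())));
        // prints 2,3,5,7
    }
}
```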

At the end of the day, you should probably just stay within the environment's strictures.  If you can make the case that .NET 3.5 will save time and money, you'll be much more successful in changing minds than if you just think it is cool.

Monday, April 28, 2008

Take Ownership and Full Control in Vista

Sometimes you move a bunch of files from an old machine, in a different domain, and you want to modify those files in the new environment.  Generally, the process is to take ownership of the files, then grant yourself full control.  This can be a tedious process in Windows Vista, especially in light of LUA.  So, I present here some scripting you can use from PowerShell to perform this burdensome task.

First, taking ownership is still best accomplished using the "takeown" command.  You can look the command syntax up, but if you wanted to take ownership of all the files in the current directory, you can do this from your PowerShell prompt:

gci | % { takeown /f $_}

Next, you want to grant yourself full control.  This is normally done with the cacls command, but in Windows Vista this has been deprecated and you should use the icacls command.  Here's how to grant yourself full control on all the files in the current directory, replacing your current permissions:

gci | % { icacls $_.FullName /grant:r your_domain\your_username:F }

Replace your_domain and your_username above with the information appropriate to your environment.

Sunday, April 20, 2008

.NET 3.5 TDD Frameworks from the ALT.NET Scene

From the ALTNET mailing list this week, I've come across two very capable frameworks for enabling true test-driven development.  Both seem to be born out of a dissatisfaction with current implementations.

First up, I'll mention MoQ; pronounced "Maw-kyoo" or just "mawk", it is written alternatively as both moq and MoQ.  The tagline from the MoQ Google code site says it is "The simplest mocking library for .NET 3.5 with deep C# 3.0 integration."  From here, we can already say that this library isn't for everyone.  Folks working on greenfield projects or converting existing projects may be able to choose .NET 3.5, but many are working in situations where lambdas and LINQ are out-of-bounds.

Those of us lucky enough to be using Func and Action in our code will find moq to be an elegant approach to unit testing interfaces.  But, other Mocking frameworks support some of the new C# syntax, so why moq? According to Daniel Cazzulino (kzu), a developer on moq and a Microsoft XML MVP,

The value we think Moq brings to the community is simplicity through a more natural programming model.

So, what does kzu mean by "natural"?  Well, traditional mocking frameworks use a record/playback model for setting expectations in TDD, and, due to a legacy that often extends back to .NET 1.0, they offer "more than one way to do it"--an oft-invoked Perl-ism, and a big reason why Perl is frequently referred to as "write once, read never".  Certainly, simpler APIs are to be preferred as long as they can get the job done, and APIs tend to become simpler and more natural over time as long as they aren't required to prevent breaking changes.  So, moq was meant to be a simpler mocking framework that leverages C# 3.0 language features.  And, so it does.

If you are just getting into TDD and are a little overwhelmed by the myriad mocking frameworks out there and their unfamiliar semantics, you should definitely look into MoQ.

Now, the next item on our agenda is Total Recall.  Okay, not Total Recall, but another story by Philip K. Dick: Autofac; in any event, we're going to take a look at the eponymous Inversion of Control container.  Autofac is an MIT licensed project that has gotten some pixel time on the ALT.NET mailing list as of late.  The community around Autofac writes on Google code that:

Autofac was designed with modern .NET features and obsessive object-orientation in mind. It will change the way you approach dependency injection in .NET.

Well, if you aren't doing DI just yet, it will certainly change the way you do it.  If you're using many of the other .NET IoC containers out there, you're probably not leveraging lambda expressions and LINQ either, so Autofac would be a change.  Like MoQ, Autofac sheds some of the legacy cruft and fully embraces .NET 3.5 as a platform.  Also note that Autofac leverages LinqBridge to remain compatible with .NET 2.0 applications.

So, will I be doing my next greenfield project using DDD with moq, Autofac, and db4o?  Well, I don't think my clients are ready for that yet.

Sunday, April 13, 2008

Advanced Web Programming Techniques: Dynamic Script Tags

I was researching some techniques for doing Comet programming, and I ran across a PowerPoint presentation on Dynamic Script Tags.  If you speak GWT, you'll find they outline this technique in their FAQ.  Here's the basic idea.

First, a little background about the Javascript Same-Origin Policy, first introduced in Netscape 2.  From The Art of Software Security Assessment blog, "same-origin prevents a document or script loaded from one site of origin from manipulating properties of or communicating with a document loaded from another site of origin. In this case the term origin refers to the domain name, port, and protocol of the site hosting the document."  These restrictions extended to XmlHttpRequests (xhr), i.e. a script cannot make an xhr to a domain other than the one from whence the script originated.

The Same-Origin Policy poses a bit of a problem for two kinds of AJAX applications.  First, the developer attempting to make xhr calls to sub-domains (see my previous post on web performance optimization) will have problems.  Generally (in IE & FF, at least), resources are not of the same origin if they have different URLs.  Scripts from the same top-level domain, however, can set the document.domain property to the TLD to allow them to interact.  This doesn't solve the xhr problem.  The second type of AJAX application that can be fettered by the Same-Origin Policy is the mashup.  Most mashups proxy the browser request to get the data from the other sites.  In other words, xyzmaps.com would have all of its scripts/xhr point to xyzmaps.com, which would in turn make an HTTP request to e.g. maps.google.com, essentially proxying the browser request.  This presents a huge difficulty for implementing mashups, as all requests have to be funneled through the origin server.  A more thorough understanding of the Same-Origin Policy leads us to a better solution.

If you've ever used the Google maps API or the Virtual Earth api, you'll note that you include the api in your page via script tags.  Of course, this works perfectly for many sites, but doesn't it violate the Same-Origin Policy?  It does not, in fact, because the browser rightly assumes that scripts included via tags in the document are "safe" insofar as they are not alien.  So, we can get around the Same-Origin Policy by adding script tags from other domains in our document.  This works statically in our pages, but we can also add script tags dynamically via DOM manipulation!  Thus, a better solution than proxying requests for our mashups is found.

So, SiteA dynamically adds a script tag whose src attribute is set to a url on SiteB that will generate JavaScript, embedding the name of SiteA's callback method in the request.
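
On the server side, SiteB only needs to wrap its JSON payload in a call to the requested callback.  Here's a minimal sketch of that half of the exchange--the method, the callback parameter name, and the payload are all assumptions, not part of any particular API:

```csharp
using System;

class JsonpSketch
{
    // Build the JavaScript that SiteB returns in response to a dynamic script tag request.
    public static string BuildJsonpPayload(string callbackName, string json)
    {
        return callbackName + "(" + json + ");";
    }

    static void Main()
    {
        // Suppose SiteA requested http://siteb.example/data?callback=handleMapData (hypothetical URL)
        Console.WriteLine(BuildJsonpPayload("handleMapData", "{\"lat\":40.0}"));
        // prints handleMapData({"lat":40.0});
    }
}
```

When the browser evaluates the returned script, SiteA's callback runs with the data in hand--no proxying required.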

There are several other rather unsatisfactory approaches outlined at Abe Fettig's Weblog.

Saturday, March 29, 2008

Website Performance Talk

This is a really enlightening talk from Steve Souders, author of High Performance Websites, and Chief Performance Yahoo!.  Below is a summary of his talk, but they also have a page detailing their rules.

  1. Make fewer HTTP requests: combine CSS and Javascript files, use CSS sprites*, use image maps instead of multiple images where possible, inline images*
  2. Use a Content Distribution Network.  Akamai, Panther Express, Limelight, Mirror Image, SAVVIS.  Distribute your static content before creating a distributed architecture for your dynamic content.
  3. Add an Expires header.  If you aren't already, you should be changing the URL of your resources when you version them, to ensure that all users will get updates and fixes.  Since you then won't be updating a given resource, you should give it a far future expires header.
  4. Gzip all content: html, css, and javascript.
  5. Put stylesheets at the top.  IE will not begin rendering until all CSS files are downloaded.  Use the link HTML tag to pull in stylesheets, not the @import CSS directive, as IE will defer the download of the stylesheet.
  6. Put javascripts at the bottom.  HTTP 1.1 allows for two parallel server connections per hostname, but all downloads are blocked until the script is downloaded.  The defer attribute of the script block is not supported in Firefox and doesn't really defer in IE.
  7. Avoid CSS expressions.  The speaker doesn't cover this, but I gather from my own experience that this can seriously delay rendering as the expression will not be evaluated until the page is fully loaded.
  8. Make CSS and JS external.  This rule can be bent in situations where page views are low and users don't revisit often.  In this situation in-lining is suggested.
  9. Reduce DNS lookups.  He didn't cover this either, but it follows from rule #1 that additional network requests negatively impact performance.  He does mention later in the talk that splitting your requests across domains can drastically increase response time.  This is a consequence of the two connections per domain limit in the HTTP 1.1 specification.  It is important to remember that JavaScripts from one domain cannot affect JavaScripts or pages from another domain, due to browser security restrictions.  Also, this is a case of diminishing returns, as beyond four domains causes negative returns from, presumably DNS lookups and resource contention in the browser itself.
  10. "Minify" Javascript.  JSMin written by Doug Crockford is the most popular tool.  This is just removing whitespace, generally.  YUI compressor may be preferred and also does CSS.
  11. Avoid redirects.
  12. Remove duplicate scripts.
  13. Configure ETags.  These are used to uniquely identify a resource in space and time, such that if the ETag header received is identical to the cached result from a previous request for that resource, the browser can choose to use the local cache without need to download the remainder of the response stream.  The speaker doesn't go into this subject, so questions about how this improves caching performance over caching headers remain unanswered until you read his book.
  14. Make AJAX cacheable.  If you embed a last modified token into the URL of your AJAX request, you can cause the browser to pull from cache when possible.
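
Rule 14 can be sketched in a few lines of C#--the token format below is an assumption; any value that changes only when the data changes will do:

```csharp
using System;

class CacheableAjax
{
    // Embed a last-modified token in the URL; a changed token yields a new URL,
    // so the browser caches each version of the response independently.
    public static string CacheableUrl(string baseUrl, DateTime lastModified)
    {
        return baseUrl + "?v=" + lastModified.ToString("yyyyMMddHHmmss");
    }

    static void Main()
    {
        var modified = new DateTime(2008, 3, 29, 10, 0, 0);
        Console.WriteLine(CacheableUrl("/inbox/messages", modified));
        // prints /inbox/messages?v=20080329100000
    }
}
```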

A great tool to analyze your site's conformance to these rules is YSlow.  It's an add-on for Firebug.  During development your YSlow grade can give you a very good indication of what is happening to the response time of your application.  This metric, the YSlow grade, seems to me to be a much better quality gate for iterative development than something as variable and volatile as measured response time.

Some additional rules from the next version of the book:

  • As mentioned in my commentary in rule #9, split dominant content domains.
  • Reduce cookie weight.
  • Make static content cookie free.
  • Minify CSS (see rule #10 comments above).
  • Use iframes wisely.  They are a very expensive DOM operation.  Think about it--it's an entirely new page, but linked to another.
  • Optimize images.  Should your GIFs be PNGs?  Should your PNGs be JPGs?  Can you shrink your color palette?

Wednesday, March 19, 2008

Erlang News

Robert Virding's First Rule: "Any sufficiently complicated concurrent program in another language contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang."

"I now know based on hard-won experience that you could replace "concurrent program" with "distributed system" in Robert's rule and it would still be just as accurate." --Steve Vinoski

Yariv wrote a cogent response to "What Sucks About Erlang".

Erlang Flavored Lisp anyone?

Monday, March 10, 2008

Programming Erlang: Chapter 8 Problem 2

'Tis an odd title, I know.  My puppy is prancing around the kitchen trying to get me interested in his chew toy, dropping it near me so I can see that it bounces and wiggles.

Do you want to see Erlang bounce and wiggle? The code below creates a circular linked list of N Erlang processes, then sends a Message around it M times. Pretty cool, huh? Well, it was meant to be used by readers of "Programming Erlang" to compare their favorite language to Erlang. I didn't bother doing this with C# threads, though it would be interesting to try it out with the Concurrency and Coordination Runtime.

Here's the output of the code (time is in milliseconds):

Sent 30000000 messages in 27453 / 29186


-module(problem2).
-compile(export_all).

start(N, M, Message) when is_integer(N), is_integer(M) ->
    statistics(runtime),
    statistics(wall_clock),
    FirstPid = spawn(?MODULE, loop, [[]]),
    LastPid = create(1, N, FirstPid),
    FirstPid ! LastPid, %% the ring is closed
    send(0, M, FirstPid, Message),
    {_, Time1} = statistics(runtime),
    {_, Time2} = statistics(wall_clock),
    io:format("Sent ~p messages in ~p / ~p~n", [N * M, Time1, Time2]).

send(C, C, _Pid, _Message) -> void;
send(I, C, Pid, Message) ->
    Pid ! {self(), Message},
    receive
        finished -> void
    end,
    send(I+1, C, Pid, Message).

create(C, C, Pid) -> Pid;
create(I, C, Pid) ->
    create(I+1, C, spawn(?MODULE, loop, [Pid])).

loop(Next) ->
    receive
        {Caller, FirstPid, Message} ->
            if
                FirstPid =:= self() -> Caller ! finished;
                true -> Next ! {Caller, FirstPid, Message}
            end,
            loop(Next);
        {Caller, Message} ->
            Next ! {Caller, self(), Message},
            loop(Next);
        NewNext when Next =:= [] ->
            loop(NewNext)
    end.

Friday, February 15, 2008

Presenter First with POA

Plain-old ASP.NET (POA) has a lot of advantages: rapid UI prototyping, incomparable support from 3rd-party vendors, and a simplified programming model that abstracts away the problems of a stateless application, to name a few.  But there remains the problem of testing.  ASP.NET applications--indeed, any web-based application--are difficult to test.  The traditional architecture of having your code-behind orchestrate your business logic components in response to events generated through the postback mechanism requires that you instigate those postbacks in order to test the orchestration.  In other words, you have to drive the UI to test your application.

UI testing is expensive.  If you've done it, you already know that.  If you've not done it, then trust us on this one; the cost of tools, the complexity, and brittleness of web tests drive the cost up.  Furthermore, if an application isn't designed to be UI tested, it becomes a nearly intractable problem to do so.  The folks at Atomic Object know this from experience.  As proponents of test-driven development, they have looked down the barrel of that gun and found a better way.  They call it Presenter First.

The basic idea behind Presenter First (PF) is to obviate the need for UI testing.  It is a Model-View-Presenter (MVP) variant that is closely related to Martin Fowler's Passive View.  The application architecture is such that the UI is so humble that there is no need to test it.  This architecture requires that the Presenter be responsible for orchestrating the Model (business components) in response to events generated by the View, and for updating the View appropriately.  A View in PF only implements events and property set methods.  This View can be defined by contract, e.g. IView, and mocked to allow for test-driven development.

A hallmark of PF is that Presenters are constructed with an IView and an IModel and do not implement any other public interface members.  Thus, the encapsulated workflow is driven completely through IView and (possibly) IModel events.  Concordantly, it becomes evident that PF architectures are implicitly stateful, i.e. the Presenter is extant between events.  In POA, the HttpContext only exists for the duration of the processing of a request.  While the HttpSession persists between requests, there is a non-trivial amount of complexity involved in relying on it; ultimately, it too should be considered transient in the general case.  From these facts one might conclude that PF is not applicable to POA.
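To make the shape of that triad concrete, here is a minimal C# sketch of a PF Presenter constructed with an IView and an IModel.  All of the type names are hypothetical, and the fakes are hand-rolled stand-ins for what a mocking framework would generate in a real test suite:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical view contract: only events and property set methods.
public interface ICustomerEditView
{
    event Action<string> SaveClicked;     // the view only raises events...
    void SetCustomerName(string name);    // ...and exposes setters (no getters)
}

// Hypothetical model contract for the business component.
public interface ICustomerModel
{
    event Action CustomerChanged;
    string CurrentName { get; }
    void Save(string name);
}

// The presenter exposes nothing public beyond its constructor; the
// encapsulated workflow is driven entirely by IView and IModel events.
public class CustomerEditPresenter
{
    public CustomerEditPresenter(ICustomerEditView view, ICustomerModel model)
    {
        view.SaveClicked += name => model.Save(name);
        model.CustomerChanged += () => view.SetCustomerName(model.CurrentName);
    }
}

// Hand-rolled fakes standing in for mocks in a test.
public class FakeView : ICustomerEditView
{
    public event Action<string> SaveClicked;
    public string LastShownName;
    public void SetCustomerName(string name) { LastShownName = name; }
    public void ClickSave(string name)
    {
        if (SaveClicked != null) SaveClicked(name);
    }
}

public class FakeModel : ICustomerModel
{
    public event Action CustomerChanged;
    public string CurrentName { get; private set; }
    public void Save(string name)
    {
        CurrentName = name;
        if (CustomerChanged != null) CustomerChanged();
    }
}
```

Because the presenter touches only the two contracts, a test can drive the whole workflow--raise FakeView.SaveClicked, assert on SetCustomerName--without ASP.NET anywhere in sight.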

The purpose of this writing is to propose a method of implementing Presenter First in ASP.NET.  The general approach is outlined below:

  1. All pages implementing a PF IView (e.g. ICustomerEditView) must inherit from PFPage.
  2. PFPage implements a Dictionary<typeof(IView),PresenterInstance> lookup.
  3. Before Page processing, the PresenterInstance is given a reference to the View, i.e. the current page.
  4. The IView events are fired (from postback) and the Presenter handles them appropriately, updating the View via setters.
  5. The Presenter handles the Page.Unload event and drops its View reference.
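The plumbing behind steps 1 through 3 (and 5) might look something like the following sketch.  Everything here is hypothetical--a real PFPage would derive from System.Web.UI.Page and call these methods from OnInit and the Unload handler:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical presenter contract for attaching/detaching transient views.
public interface IPresenter
{
    void AttachView(object view);   // step 3: hand the presenter the live page
    void DetachView();              // step 5: drop the reference on Page.Unload
}

// Stand-in for a base class that would really derive from System.Web.UI.Page.
public class PFPage
{
    // step 2: one long-lived presenter instance per view contract type
    private static readonly Dictionary<Type, IPresenter> Presenters =
        new Dictionary<Type, IPresenter>();

    public static void Register(Type viewContract, IPresenter presenter)
    {
        Presenters[viewContract] = presenter;
    }

    // would be called before page processing begins (e.g. from OnInit)
    public void Attach(Type viewContract)
    {
        Presenters[viewContract].AttachView(this);
    }

    // would be called from the Page.Unload event handler
    public void Detach(Type viewContract)
    {
        Presenters[viewContract].DetachView();
    }
}

// Tiny fake presenter used to illustrate the wiring.
public class RecordingPresenter : IPresenter
{
    public object AttachedView;
    public void AttachView(object view) { AttachedView = view; }
    public void DetachView() { AttachedView = null; }
}
```

In a real deployment the dictionary's lifetime (application vs. session scope) is the interesting design decision, for exactly the serialization reasons discussed below.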

This approach diverges from "classical" PF in a few ways.  Most importantly, the actual concrete View acted upon by the Presenter instance is transient.  This should not pose any particular difficulty as the ASP.NET postback model ameliorates this stateless condition of web applications.  Further, while this requires the Presenter interface to implement a mutator for its View reference, the Presenter implementation need not diverge from the "classical" PF approach, save in one important way: IModel events would need to be queued.[1]
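The queuing rule from note [1] can be sketched as follows.  The names are again hypothetical; the point is only that model updates arriving while no View is attached are deferred and replayed against the next View instance:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical minimal view contract.
public interface IValueView { void SetValue(string value); }

// Between postbacks the presenter has no view, so model updates are
// queued and replayed when the next request attaches a fresh view.
public class QueuingPresenter
{
    private readonly Queue<string> pending = new Queue<string>();
    private IValueView view; // null between postbacks

    public void AttachView(IValueView newView)
    {
        view = newView;
        while (pending.Count > 0)        // replay updates deferred while detached
            view.SetValue(pending.Dequeue());
    }

    public void DetachView() { view = null; }

    // would be wired to an IModel event
    public void OnModelChanged(string newValue)
    {
        if (view == null) pending.Enqueue(newValue);
        else view.SetValue(newValue);
    }
}

// Fake view that records every value pushed to it.
public class ListView : IValueView
{
    public readonly List<string> Values = new List<string>();
    public void SetValue(string value) { Values.Add(value); }
}
```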

Furthermore, the Presenter instance and the dictionary must be serializable if this pattern is to be used in an ASP.NET deployment where the HttpSession is out-of-process.  This might pose problems with Presenter references to cached objects that can't be extended to support serialization, though this is considered a minor flaw in the general case and an impediment that can be mitigated by recreating objects from metadata.

To compose views, it would only be necessary to create a PFControl base class implementing nearly identical functionality to PFPage.  Also, though declarative databinding is no longer possible, DTOs could be used in View mutators for databinding.

Through the use of mock objects, this approach to ASP.NET development using Presenter First enables Test-Driven Development of the business objects and their orchestration in response to UI events without the need to involve ASP.NET pages.[2]  Additionally, this allows the tandem of presentation (.aspx) and interaction (.aspx.cs) to be isolated from business logic (Model) and orchestration (Presenter).

Please let me know what you think.  Is your organization using a very similar approach?  How have you fared?

[1] Because the Presenter MUST NOT call mutator methods on the View between postbacks, IModel events must be queued until a new View instance is received.

[2] While issues such as declarative security and AJAX can make UI testing a necessity, we are assuming that these are not direct considerations for vetting the business logic and orchestration.  Further, this kind of UI testing will generally be done as a part of UAT and thus does not "cost" the development team.

Tuesday, February 12, 2008

Upgrading to Windows Vista

There's a certain voodoo to searching for problems. I don't expect that anyone is actually going to read this before upgrading to Vista... but here are the three problems I had.

First, if you are running DaemonTools, make sure you uninstall it before upgrading.  The Vista upgrade will force you to do this before proceeding, but sptd.sys will not be removed.  So... remove it.  It's in your Windows directory.

Next, make sure you download Vista updates for all your drivers.  I couldn't get the external monitor for my laptop to use its native resolution; the generic monitor driver in Vista didn't allow it.  I went to the manufacturer's website, NEC's, and downloaded Vista drivers for the monitor.

Lastly (I hope), you might have trouble connecting to a cheaper broadband router.  Well, you'll be able to connect, but you won't get a DHCP address.  This knowledge base article should do the trick.

So, what do I think of Vista...?

I guess I don't know what all the hullabaloo has been about in the blogs and press.  Surely no one was expecting Mac OSX?  I run Ubuntu Linux too, and I don't expect it to be OSX.  In fact, I think Ubuntu's desktop experience sucks compared to Vista, but I love it as a Linux distro.  I ran Slackware in '98, and I can tell you things have REALLY changed in ten years.  And, I can tell you Vista is better than XP.  You have to take that with a grain of salt; I relish change.

Sure, you will have more trouble pirating music, but it's more secure, prettier, and cooler.

I guess that's why Vista is making Microsoft money.  Go figure.

Sunday, February 3, 2008

Weekend Update with Christopher Atkins

Dennis Miller was my favorite Weekend Update anchorman.  His wry, sardonic--if verbose--commentaries always made me laugh.  As I think about it, that sketch seems a direct ancestor of the news shows on Comedy Central so popular with the college crowd.

Anyway, my weekend has been interesting.  I found out that despite what the documentation says, ECDiffieHellmanCng is not supported on Windows XP.  In fact, its constructors check for the presence of NCrypt.dll, a library only available on Windows Vista and Windows Server 2008 or later.  So, despite the fact that it ships with .NET 3.5, you can't use it on XP.  Joy.  I hope this is not a trend.  You'll also find that you cannot use a BigInt, because it is marked internal.  If you're trying to do modern cryptography on Windows XP, in other words, you are out of luck.

So, I spent the remainder of my weekend working through Programming Erlang, and I have to say I am even more excited about it.  I'm only on page 51, but I've already learned about list comprehensions and custom flow control abstractions using higher-order functions.  The cool thing is I can apply some of this stuff to C#, now that we have lambda expressions.
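For instance, here's a toy custom flow-control abstraction in C# 3.0--my own example, not one from the book--in the spirit of Erlang's higher-order functions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Flow
{
    // A homemade control structure: run the body n times, passing the
    // index--a loop you defined yourself rather than borrowed from the
    // language, just like Erlang encourages.
    public static void Times(int n, Action<int> body)
    {
        for (int i = 0; i < n; i++) body(i);
    }
}
```

And LINQ gives you a rough analog of a list comprehension: `Enumerable.Range(1, 5).Select(x => x * x)` plays the role of Erlang's `[X*X || X <- lists:seq(1,5)]`.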

I'm excited about the year ahead.  There are a lot of changes on the horizon.  Imagine, Windows Server 2008, SQL Server 2008, a Yahoo! and Microsoft merger, widespread adoption of OpenID, and a new presidential election are all on their way.  To quote the folk poet Bob Dylan:

Come writers and critics
Who prophesize with your pen
And keep your eyes wide
The chance won't come again
And don't speak too soon
For the wheel's still in spin
And there's no tellin' who
That it's namin'.
For the loser now
Will be later to win
For the times they are a-changin'