Sunday, July 25, 2010

Dynamic Dispatch (Multimethods) in C#?

I’ve recently become enamored with the multimethods system of Clojure, as well as its approach to polymorphism and “type” hierarchies in general.  Having never heard the term, I consulted Wikipedia about multimethods, hoping to have its origins elucidated, though it appears it is simply a synonym of multiple dispatch.

Polymorphism, “the ability of one type to appear as and be used like another type”, is limited to inheritance with simple overloading semantics (and generics) in C#.  The ability to be “used as” another type is implemented by allowing subclasses to override any virtual methods defined on their respective superclasses.  Then, at run time, the method to be called is determined based on the actual type of the object upon which the method is invoked; we call this run-time binding or late binding.

Let’s look at it from a more mechanical perspective.  In most OO languages, when we write obj.Foo() we are implicitly writing (Foo obj).  In other words, obj is the first argument in the invocation of Foo; an argument called this in many languages.  (See JavaScript’s call/apply.) So, in our inheritance example, the runtime looks at the actual type of this first argument obj (the target of the invocation) when determining the Foo to call. This is called single dispatch.
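Single dispatch in C# looks like this (a minimal sketch; the types here are my own illustration, not from the article’s example):

```csharp
using System;

class Animal
{
    public virtual string Speak() { return "..."; }
}

class Dog : Animal
{
    public override string Speak() { return "Woof"; }
}

class SingleDispatchDemo
{
    static void Main()
    {
        Animal a = new Dog();         // static type Animal, runtime type Dog
        Console.WriteLine(a.Speak()); // prints "Woof": dispatch follows the runtime type of the target
    }
}
```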

What about the other arguments to the function?  Can we vary which function is called based on the other arguments to a method besides the target (i.e. the first, implicit argument)?  Well, of course, we can have method overloads that take a different number and/or type of arguments.  However, unlike the first argument, there’s no mechanism in C# to evaluate the derived runtime types of these other arguments to make a dispatch decision; there is no multiple dispatch in C#.
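A short sketch of the limitation (types and names are my own): even when the runtime type of an argument is more derived, overload resolution uses its static type.

```csharp
using System;

class Shape { }
class Circle : Shape { }

class Collider
{
    public string Test(Shape s) { return "shape"; }
    public string Test(Circle c) { return "circle"; }
}

class OverloadDemo
{
    static void Main()
    {
        Shape s = new Circle(); // runtime type Circle, static type Shape
        var collider = new Collider();
        // The overload is chosen at compile time from the static type of s,
        // so Test(Shape) is called even though s is really a Circle.
        Console.WriteLine(collider.Test(s)); // prints "shape"
    }
}
```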

This blog article, “The Visitor Pattern and Multiple Dispatch”, usefully explains the problem in terms of the Visitor pattern.  As a means of implementing double dispatch, the Visitor pattern has some shortcomings.  First, the targets must be aware of and receive the visitor, violating the single responsibility principle.  Second, the visitor itself must evaluate the type of the target and invoke the correct method. Though some suggest using reflection to invoke the correct method, that approach (reflecting on the runtime type of the target and dynamically invoking the matching method through its type) can be expensive.

We can simplify that approach using the dynamic keyword to effect dynamic dispatch.  By using the dynamic keyword to “box” the arguments to the target method (Foo above), we can let the runtime do the reflection for us. In the code below, I build on the Visitor example from “The Visitor Pattern and Multiple Dispatch”; see line 28.

Code Snippet
  1. class Program
  2. {
  3.     abstract class Expression { }
  4.  
  5.     class ConstantExpression : Expression
  6.     {
  7.         public int constant;
  8.     }
  9.  
  10.     class SumExpression : Expression
  11.     {
  12.         public Expression left, right;
  13.     }
  14.  
  15.     class EvaluateVisitor
  16.     {
  17.         public int Visit(Expression e)
  18.         {
  19.             throw new Exception("Unsupported type of expression"); // or whatever
  20.         }
  21.  
  22.         public int Visit(ConstantExpression e)
  23.         {
  24.             return e.constant;
  25.         }
  26.         public int Visit(SumExpression e)
  27.         {
  28.             return Visit(e.left as dynamic) + Visit(e.right as dynamic);
  29.         }
  30.     }
  31.     
  32.     static void Main(string[] args)
  33.     {
  34.         var one = new ConstantExpression { constant = 1 };
  35.         var two = new ConstantExpression { constant = 2 };
  36.         var sum = new SumExpression { left = one, right = two };
  37.         var visitor = new EvaluateVisitor();
  38.         Console.WriteLine("Visit result {0}", visitor.Visit(sum));
  39.         Console.ReadKey();
  40.     }
  41. }

This technique has some utility but should be used wisely.  Obviously, there is some cost to “dynamic” dispatch.  It’s important to note that this isn’t a generalized system for multiple dispatch, just a handy spot-welding technique to make the Visitor pattern more palatable.

In contrast, Clojure’s multimethods allow you to define a dispatch function over the arguments; its result is evaluated to select a method, and each “overload” declares the dispatch value to which it corresponds.  In this way, dispatch on multimethods in Clojure can consider not only the “types” and values of all the arguments, but really any sort of inspection or evaluation you choose.

Tuesday, July 20, 2010

Using the Web to Create the Web

Wikis do this, as do blogs.

Fast JavaScript in browsers is enabling a new generation of programmers to develop applications completely in their browser.  While the most obvious commercial example is Force.com, there are many other ideas out there.  In no particular order:

The common connection between these frameworks is the notion of bootstrapping the web; that is, using the web to create the web.

If you’ll forgive the inchoate thoughts, let me attempt to connect some mental dots.

Dr. Alan Kay has of late been discussing the Smalltalk architecture of real objects (computers) all the way down and how this might improve the nature of software on the Internet.

In September 2009 in an interview, Dr. Kay said,

The ARPA/PARC research community tried to do as many things ‘no center’ as possible and this included Internet […] and the Smalltalk system which was ‘objects all the way down’ and used no OS at all. This could be done much better these days, but very few people are interested in it (we are). We’ve got some nice things to show not quite half way through our project. Lots more can be said on this subject.

This month in an interview with ComputerWorld Australia, Dr. Kay expounded,

To me, one of the nice things about the semantics of real objects is that they are “real computers all the way down (RCATWD)” – this always retains the full ability to represent anything. The old way quickly gets to two things that aren’t computers – data and procedures – and all of a sudden the ability to defer optimizations and particular decisions in favour of behaviours has been lost.

In other words, always having real objects always retains the ability to simulate anything you want, and to send it around the planet. If you send data 1000 miles you have to send a manual and/or a programmer to make use of it. If you send the needed programs that can deal with the data, then you are sending an object (even if the design is poor).

And RCATWD also provides perfect protection in both directions. We can see this in the hardware model of the Internet (possibly the only real object-oriented system in working order).

You get language extensibility almost for free by simply agreeing on conventions for the message forms.

My thought in the 70s was that the Internet we were all working on alongside personal computing was a really good scalable design, and that we should make a virtual internet of virtual machines that could be cached by the hardware machines. It’s really too bad that this didn’t happen.

Is OOP the wrong path? What is this RCATWD concept really about?  Doesn’t the stateless communication constraint of REST force us to think of web applications in the browser as true peers of server applications?  Should we store our stateful browser-based JavaScript applications in a cloud object-database, in keeping with the Code-On-Demand constraint of REST?  Can we make them “real objects” per Dr. Kay?  Are RESTful server applications just functional programs?  If so, shouldn’t we be writing them in functional languages?

I definitely believe we can gain many benefits from adopting a more message-passing oriented programming style.  I would go so far as to say that OO classes should only export functions, never methods.  (They can use methods privately of course, to keep things DRY.)

I’ve written extensively in a never published paper about related topics: single-page applications, not writing new applications to build and deliver applications for every web site, intent-driven design, event sourcing, and others.  Hopefully I’ll find the time to return to that effort and incorporate some of this thinking.

RavenDB: In the Code, Part 1—MEF

If you’ve not heard of RavenDB, it’s essentially a .NET-from-the-ground-up document database taking its design cues from CouchDB (and MongoDB to a lesser degree). Rather than go into the details about its design and motivations, I’ll let Ayende speak for himself.

Instead, I would like to document some of the great things I’ve found in the codebase of RavenDB, as I read to become a better developer.  This series of articles discusses RavenDB’s use of the following .NET 4 features.

  • Managed Extensibility Framework (MEF)
  • New Concurrency Primitives in .NET 4.0
  • The new dynamic keyword in C# 4

While discussing RavenDB’s use of these features, I hope to provide a gentle introduction to these technologies.  In this, the first post of the series, we discuss MEF.  For a very brief introduction to MEF and its core concepts, see the Overview in the wiki.

Managed Extensibility Framework

MEF originated with the Patterns & Practices team and has since moved into the BCL as the System.ComponentModel.Composition namespace.  Glenn Block has described it as a plug-in framework and an application-partitioning framework, and has given many reasons why you may not want to attempt to use it as your inversion-of-control container (especially if you listen to Uncle Bob’s advice). RavenDB uses MEF to handle extensibility for its RequestResponder classes.
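Before looking at RavenDB’s wiring, here is a minimal, self-contained sketch of MEF’s import/export handshake (the Greeter types are my own; the attributes, catalog, and container are the real System.ComponentModel.Composition API):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public abstract class Greeter
{
    public abstract string Greet();
}

[Export(typeof(Greeter))] // declares this part as an export of contract type Greeter
public class HelloGreeter : Greeter
{
    public override string Greet() { return "Hello from MEF"; }
}

public class Host
{
    [ImportMany] // ask MEF to populate this with every exported Greeter it can find
    public IEnumerable<Greeter> Greeters { get; set; }

    public static void Main()
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);
        var host = new Host();
        container.SatisfyImportsOnce(host); // fills Greeters from the catalog
        foreach (var greeter in host.Greeters)
            Console.WriteLine(greeter.Greet());
    }
}
```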

RavenDB’s communication architecture is essentially an HTTP server with a number of registered handlers of requests, not unlike the front-controller model of ASP.NET MVC.  Akin to MVC’s Routes, each RequestResponder provides a UrlPattern and SupportedVerbs to identify the requests it will handle. A given RequestResponder will vary its work depending on the HTTP verb, headers, and body of the request.  It is in this sense that RavenDB can be considered RESTful (even if it isn’t, see street REST).

Code Snippet
  1. public class HttpServer : IDisposable
  2.     {
  3.         [ImportMany]
  4.         public IEnumerable<RequestResponder> RequestResponders { get; set; }

This HttpServer class dispatches requests to one of the items in RequestResponders. That property is populated by MEF because of the ImportManyAttribute. MEF looks in its catalogs and finds that the RequestResponder class is exported, as are all of its subclasses; see below.

Code Snippet
  1. [InheritedExport]
  2. public abstract class RequestResponder

The InheritedExportAttribute ensures that MEF considers all subclasses of the attributed class to be exports themselves.  So, if your class inherits from RequestResponder and MEF can see your class, it will automatically be considered for each incoming request.
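The same behavior in miniature (the Plugin types are my own sketch): mark the abstract base with [InheritedExport], and concrete subclasses are discovered without declaring any export of their own.

```csharp
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

[InheritedExport]
public abstract class Plugin
{
    public abstract string Name { get; }
}

// No attribute needed here; the base class's [InheritedExport] covers it.
public class AuditPlugin : Plugin
{
    public override string Name { get { return "audit"; } }
}

class InheritedExportDemo
{
    static void Main()
    {
        var container = new CompositionContainer(
            new AssemblyCatalog(Assembly.GetExecutingAssembly()));
        // AuditPlugin is found even though it carries no export attribute itself.
        foreach (var plugin in container.GetExportedValues<Plugin>())
            Console.WriteLine(plugin.Name); // prints "audit"
    }
}
```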

How does MEF “see your class”? Out-of-the-box, MEF provides for the definition of what is discoverable in a number of useful ways. RavenDB makes use of these by providing its own MEF CompositionContainer.

Code Snippet
  1. public HttpServer(RavenConfiguration configuration, DocumentDatabase database)
  2. {
  3.     Configuration = configuration;
  4.  
  5.     configuration.Container.SatisfyImportsOnce(this);

Above, in the constructor of the HttpServer class, we see the characteristic call to SatisfyImportsOnce on the CompositionContainer. This instructs the container to satisfy all the imports for the HttpServer, namely the RequestResponders.  The configuration.Container property is below:

Code Snippet
  1. public CompositionContainer Container
  2. {
  3.     get { return container ?? (container = new CompositionContainer(Catalog)); }

And the Catalog property is initialized in the configuration class’ constructor like this:

Code Snippet
  1. Catalog = new AggregateCatalog(
  2.     new AssemblyCatalog(typeof (DocumentDatabase).Assembly)
  3.     );

So the container is created with a single AggregateCatalog that can contain multiple catalogs.  That AggregateCatalog is initialized with an AssemblyCatalog which pulls in all the MEF parts (classes with Import and Export attributes) in the assembly containing the DocumentDatabase class (more on that later).

That takes care of the built-in RequestResponders, because those are in the same assembly as the DocumentDatabase class.  If that smells like it violates orthogonality, you are not alone. But, I digress; what about extensibility? How does Raven get MEF to see RequestResponder plugins?

The configuration class also has a PluginsDirectory property; its setter contains the following code.

Code Snippet
  1. if(Directory.Exists(pluginsDirectory))
  2. {
  3.     Catalog.Catalogs.Add(new DirectoryCatalog(pluginsDirectory));
  4. }

So, in Raven’s configuration you can specify a directory where MEF will look for parts.  That’s the raison d'être of MEF’s DirectoryCatalog, since a plugins folder is such a common deployment/extensibility pattern.  You can learn more about the various MEF catalogs in the CodePlex wiki.
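The whole catalog setup described above can be sketched as follows (the pluginsDirectory path is a placeholder and the method is my own; the catalog and container types are the real MEF API):

```csharp
using System.ComponentModel.Composition.Hosting;
using System.IO;
using System.Reflection;

public static class CatalogSetup
{
    public static CompositionContainer BuildContainer(string pluginsDirectory)
    {
        // Start with the parts compiled into the host assembly...
        var catalog = new AggregateCatalog(
            new AssemblyCatalog(Assembly.GetExecutingAssembly()));

        // ...and, if a plugins folder exists, add every part MEF finds there.
        if (Directory.Exists(pluginsDirectory))
            catalog.Catalogs.Add(new DirectoryCatalog(pluginsDirectory));

        return new CompositionContainer(catalog);
    }
}
```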

Now, the real extensibility story for RavenDB is its triggers.

RavenDB Triggers

The previously mentioned DocumentDatabase class is responsible for the high-level orchestration of the actual database work.  It maintains four groups of triggers.

Code Snippet
  1. [ImportMany]
  2. public IEnumerable<AbstractPutTrigger> PutTriggers { get; set; }
  3.  
  4. [ImportMany]
  5. public IEnumerable<AbstractDeleteTrigger> DeleteTriggers { get; set; }
  6.  
  7. [ImportMany]
  8. public IEnumerable<AbstractIndexUpdateTrigger> IndexUpdateTriggers { get; set; }
  9.  
  10. [ImportMany]
  11. public IEnumerable<AbstractReadTrigger> ReadTriggers { get; set; }

Following the same pattern as RequestResponders, the DocumentDatabase calls configuration.Container.SatisfyImportsOnce(this). So, the imports are satisfied in the same way, i.e. from DocumentDatabase’s assembly and from a configured plug-ins directory.

In RavenDB, triggers are the way to perform a custom action when documents are “put” (i.e. upserted), read, or deleted.  RavenDB triggers also provide a way to block any of these actions from happening.

Raven also allows custom actions to be performed when the database spins up, via the IStartupTask interface.

Startup Tasks

When the DocumentDatabase class is constructed, it executes the following method after initializing itself.

Code Snippet
  1. private void ExecuteStartupTasks()
  2. {
  3.     foreach (var task in Configuration.Container.GetExportedValues<IStartupTask>())
  4.     {
  5.         task.Execute(this);
  6.     }
  7. }

This method highlights the use of CompositionContainer’s GetExportedValues&lt;T&gt; method, which returns all of the IStartupTask exports found in the catalogs created in the configuration object.
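From the Execute(this) call above, we can infer the shape of a startup task. The sketch below is hypothetical: the class name, its body, and the Export attribute are my assumptions, not RavenDB code; only the Execute(DocumentDatabase) signature is implied by the snippet above.

```csharp
using System.ComponentModel.Composition;

// Hypothetical startup task: compile it into an assembly, drop that assembly
// into the configured plugins directory, and the DirectoryCatalog will find it.
[Export(typeof(IStartupTask))]
public class LogStartupTask : IStartupTask
{
    public void Execute(DocumentDatabase database)
    {
        // One-time initialization goes here: warm caches, verify
        // configuration, write a startup log entry, and so on.
    }
}
```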

Conclusion

We’ve seen three important extensibility points in RavenDB supported by MEF: RequestResponders, triggers, and startup tasks.  Next time, we’ll look at two more—view generators and dynamic compilation extensions—while learning more about RavenDB indices.