Using the Web to Create the Web
Wikis do this, as do blogs.
Fast JavaScript in browsers is enabling a new generation of programmers to develop applications completely in their browser. While the most obvious commercial example is Force.com, there are many other ideas out there. In no particular order:
The common connection between these frameworks is the notion of bootstrapping the web; that is, using the web to create the web.
If you’ll forgive the inchoate thoughts, let me attempt to connect some mental dots.
Dr. Alan Kay has of late been discussing the Smalltalk architecture of real objects (computers) all the way down and how this might improve the nature of software on the Internet.
In a September 2009 interview, Dr. Kay said,
The ARPA/PARC research community tried to do as many things ‘no center’ as possible and this included Internet […] and the Smalltalk system which was ‘objects all the way down’ and used no OS at all. This could be done much better these days, but very few people are interested in it (we are). We’ve got some nice things to show not quite half way through our project. Lots more can be said on this subject.
This month in an interview with ComputerWorld Australia, Dr. Kay expounded,
To me, one of the nice things about the semantics of real objects is that they are “real computers all the way down (RCATWD)” – this always retains the full ability to represent anything. The old way quickly gets to two things that aren’t computers – data and procedures – and all of a sudden the ability to defer optimizations and particular decisions in favour of behaviours has been lost.
In other words, always having real objects always retains the ability to simulate anything you want, and to send it around the planet. If you send data 1000 miles you have to send a manual and/or a programmer to make use of it. If you send the needed programs that can deal with the data, then you are sending an object (even if the design is poor).
And RCATWD also provides perfect protection in both directions. We can see this in the hardware model of the Internet (possibly the only real object-oriented system in working order).
You get language extensibility almost for free by simply agreeing on conventions for the message forms.
My thought in the 70s was that the Internet we were all working on alongside personal computing was a really good scalable design, and that we should make a virtual internet of virtual machines that could be cached by the hardware machines. It’s really too bad that this didn’t happen.
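To make the RCATWD idea concrete for myself, here is a toy sketch in plain JavaScript. This is my own illustration, not anything from Dr. Kay, and every name in it is invented: each object exposes a single receive surface, and the only thing shared between sender and receiver is an agreed message form.

    // A toy illustration of "real objects": each object exposes exactly one
    // surface -- receive(message) -- and every message follows an agreed
    // convention: { selector: ..., args: ... }. Nothing else is shared.
    // (All names here are invented for this sketch.)

    function makeCounter() {
      var count = 0;                          // private state, invisible from outside
      return {
        receive: function (message) {
          switch (message.selector) {
            case 'increment': count += 1; return count;
            case 'value':     return count;
            default:          return { error: 'message not understood' };
          }
        }
      };
    }

    // "Sending the program with the data": the object travels as behaviour,
    // not as a bare record that needs a manual to interpret.
    var counter = makeCounter();
    counter.receive({ selector: 'increment' });
    counter.receive({ selector: 'increment' });
    console.log(counter.receive({ selector: 'value' })); // 2

    // Extensibility almost for free: a new kind of object only has to honour
    // the same message form, and existing senders need not change.
    function makeLoggingCounter(inner) {
      return {
        receive: function (message) {
          console.log('saw message: ' + message.selector);
          return inner.receive(message);
        }
      };
    }

The logging wrapper at the end is the point of the exercise: because senders agree only on the message form, new behaviours can be layered in without touching anything else.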
Is OOP the wrong path? What is this RCATWD concept really about? Doesn’t the stateless communication constraint of REST force us to think of web applications in the browser as true peers of server applications? Should we store our stateful browser-based JavaScript applications in a cloud object-database, in keeping with the Code-On-Demand constraint of REST? Can we make them “real objects” per Dr. Kay? Are RESTful server applications just functional programs? If so, shouldn’t we be writing them in functional languages?
I definitely believe we can gain many benefits from adopting a more message-passing-oriented programming style. I would go so far as to say that OO classes should only export functions, never methods. (They can, of course, use methods privately to keep things DRY.) A rough sketch of what I mean follows.
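Again in plain JavaScript, and again with names invented purely for illustration: the constructor and its methods stay private to a module, and the only public surface is a handful of functions that callers treat as messages.

    // Sketch of "classes export functions, never methods": the constructor and
    // its methods stay private inside a module; the only exported surface is a
    // set of plain functions. (Module pattern; names are illustrative only.)

    var accounts = (function () {
      function Account() {                              // private constructor
        this._balance = 0;
      }
      Account.prototype.deposit = function (amount) {   // private method, keeps things DRY
        this._balance += amount;
        return this._balance;
      };
      Account.prototype.balance = function () {
        return this._balance;
      };

      // Exported functions: callers treat these as messages and never touch
      // the methods directly.
      return {
        open:      function ()                { return new Account(); },
        deposit:   function (account, amount) { return account.deposit(amount); },
        balanceOf: function (account)         { return account.balance(); }
      };
    })();

    // Usage: everything goes through the exported functions.
    var a = accounts.open();
    accounts.deposit(a, 100);
    console.log(accounts.balanceOf(a)); // 100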
I’ve written extensively in a never-published paper about related topics: single-page applications, not writing new applications to build and deliver applications for every web site, intent-driven design, event sourcing, and others. Hopefully I’ll find the time to return to that effort and incorporate some of this thinking.