Archive for November, 2009

Monday, November 30th, 2009

The best thing you can do…

The best thing a user can do to advance the Web is to help move people off IE 6

— Ryan Servatius, senior product manager for Internet Explorer.

Source. -m

Sunday, November 29th, 2009

The Model Endpoint Template (MET) organizational pattern for XRX apps

One of the lead bullets describing why XForms is cool always mentions that it is based on a Model-View-Controller (MVC) framework. When building a full XRX app, though, MVC might not be the best choice for organizing things overall. Why not?

Consider a typical XRX app, like MarkLogic Application Builder. (You can download your copy of MarkLogic, including Application Builder, under the community license at the developer site.) For each page, the cycle goes like this (with a sketch of the markup after the list):

  1. The browser requests a particular page, say the one that lets you configure sorting options in the app you’re building
  2. The page loads, including client-side XForms via JavaScript
  3. XForms requests the project state as XML from a designated endpoint; this becomes the XForms Instance Data
  4. Stuff happens on the page that changes the client-side state
  5. Just before leaving the page, XML representing the updated state is HTTP PUT back to the endpoint
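
To make steps 3 and 5 concrete, here is a minimal sketch of how they might look in the XForms model. This is not Application Builder’s actual code; the endpoint URL and ids are hypothetical:

    <xf:model xmlns:xf="http://www.w3.org/2002/xforms">
      <!-- step 3: the endpoint's GET response becomes the instance data -->
      <xf:instance id="project" src="/endpoints/project.xqy"/>
      <!-- step 5: PUT the updated instance back to the same endpoint -->
      <xf:submission id="save" resource="/endpoints/project.xqy"
                     method="put" replace="none"/>
    </xf:model>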

The benefit of this approach is that you are dealing with XML all the way through, with none of the impedance mismatches you might find in an app that awkwardly transitions from (say) relational data to Java objects to urlencoded name/value pairs embedded in HTML syntax.

So why not do this in straight MVC? Honestly, MVC isn’t a bad choice, but it can get unwieldy. If an endpoint consists of separate model+view+controller files, and each individual page likewise consists of separate model+view+controller files, it adds up to a lot of stuff to keep track of. In truly huge apps, this much attention to organization might be worth it, but most apps aren’t that big. Thus the MET pattern.

Model: It still makes sense to keep the code that deals with particular models (closely aligned with Schemas) as a separate thing. All of Application Builder, for example, has only one model.

Endpoint: The job of an endpoint is to GET and PUT (and possibly POST and DELETE) XML, or other equivalent resource bundles depending on how many media types you want to deal with. It combines an aspect of controllers (being activated by a particular URL) with an aspect of views (providing the data in a consistent format).
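
A minimal sketch of that idea, assuming a MarkLogic environment; the document URI is hypothetical, and error handling is omitted:

    xquery version "1.0-ml";
    let $uri := "/state/project.xml"
    return
      if (xdmp:get-request-method() eq "PUT") then (
        (: store the XML the client PUT back :)
        xdmp:document-insert($uri, xdmp:get-request-body("xml")),
        xdmp:set-response-code(204, "Saved")
      )
      else
        (: GET: serve the current state as XML :)
        doc($uri)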

Template: Since XForms documents already contain MVC mechanics, it is not a high-payoff proposition to further use MVC to construct the XForms and XHTML wrapper themselves. The important stuff happens within XForms; beyond that, you need a way to provide consistent headers, footers, and other pieces across multiple pages. For this, an ordinary templating mechanism suffices. I can imagine dynamic assembly scenarios where this wouldn’t be the case, but again, many apps don’t need that kind of flexibility, or the complexity that comes along with it.
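
As a sketch of how little machinery this takes, a plain XQuery function can serve as the template; every name here is hypothetical, not Application Builder’s actual code:

    xquery version "1.0-ml";
    (: wrap a page's XForms content in shared chrome :)
    declare function local:page($title as xs:string, $body as node()*)
      as element()
    {
      <html xmlns="http://www.w3.org/1999/xhtml">
        <head><title>{ $title }</title></head>
        <body>
          <div class="header">shared header</div>
          { $body }
          <div class="footer">shared footer</div>
        </body>
      </html>
    };

    local:page("Sort options", <div>XForms markup for the page goes here</div>)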

What about separation of concerns? Oh yeah, what about it? :-) Technically, both Endpoints and Templates violate classical SOC. In an XRX app, this typically doesn’t lead to the kinds of spaghetti situations that it might otherwise. Endpoints are self-contained and can focus on doing just one thing well; with limited scope comes limited ability to get into trouble. For those times when you need to dig into the XQuery code of an endpoint, it’s actually helpful to see both the controller and view pieces laid out in one file.

As for Templates, simplicity wins. With the specifics of models and endpoints peeled away, the remaining challenge in developing individual pages is getting the XForms right, and again, it’s helpful to minimize the number of files a single XForms page is split across. YAGNI applies to what’s left, at least in the stuff I’ve built.

So, I’ve been careful in the title to call this an “organizational pattern”, not a “design pattern” or an (ugh) “architectural pattern”. Nothing too profound here. I’d be happy to start seeing XRX apps laid out with directory names like “models”, “endpoints”, and “templates”.
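
Concretely, something like this layout (names merely illustrative):

    my-xrx-app/
      models/       schemas and model-level XQuery
      endpoints/    one .xqy file per GET/PUT resource
      templates/    shared page chrome for the XForms pages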

What do you think? Comments welcome.

-m

Friday, November 27th, 2009

Gandhi

Richard Attenborough’s epic biopic is available to watch instantly on Netflix, but only until November 30. Recommended viewing for the weekend. -m

Sunday, November 22nd, 2009

How Xanadu Works: technical overview

One particular conversation I’ve overheard several times, often in the context of web and standards development, has always intrigued me. It goes something like this:

You know, Ted Nelson’s hypertext system from the 60’s had unbreakable, two-way links. It was elegant. But then along came Tim Berners-Lee and HTML, with its crappy, one-way, breakable links, and it took over the world.

The general moral of the story is usually about avoiding over-thinking problems and striving for simplicity. This has been rolling around in the back of my mind ever since the first time I heard the story. Is it an accurate assessment of reality? And how exactly did Nelson’s system, called Xanadu (R), manage the trick of unbreakable super-links? Even if the web ended up going in a different direction, there still might be lessons to learn for the current generation of people building things that run (and run on) the web.

Nelson’s book Literary Machines describes the system in some detail, but it’s hard to come by in the usual channels like Amazon, or even local bookstores. One place does have it, and for a reasonable price too: Eastgate Systems. [Disclosure: I bought mine from there for full price. I’m not getting anything for writing this post on my blog.] The book has a versioning notation, with 93.1 being the most recent, describing the “1993 design” of the software.

Pause for a moment and think about the history here. 1993 is 16 years ago as I write this, about the same span of time between Vannevar Bush’s groundbreaking 1945 article As We May Think (reprinted in full in Literary Machines) and Nelson’s initial work in 1960 on what would become the Xanadu project. As far as software projects go, this one has some serious history.

So how does it work? The basic concepts, in no particular order, are:

  • A heavier-weight publishing process: Setting aside inaccessible “privashed” (as opposed to “pub”lished) documents, publication is forever; documents can’t be deleted except in extraordinary circumstances, and even then only after some kind of waiting period.
  • All documents have a specific owner, are royalty-bearing, and work through a micropayment system. Anyone can quote, transclude, or modify any amount of anything, with the payments sorting themselves out accordingly.
  • Software called a “front end” (today we’d call it a “browser”) works on behalf of the user to navigate the network and render documents.
  • Published documents can be updated at will, in which case unchanged pieces can remain unchanged, with inserted and deleted sections in between. Thus, across the history of a document, there are implicit links forward and backward in time through all the various editions and alternatives.
  • In general, links can jump to a new location in the docuverse, transclude part of a remote document into another, or take on many other configurations, including multi-ended links; they are granular to the character level, attaching to particular characters.
  • Document and network addressing are accomplished through a clever numbering system (somewhat reminiscent of organic versioning, but in a way infinitely extensible on multiple axes). These addresses, called tumblers, represent a Node+User+Document+Subdocument, and a minor variant of the syntax can express ranges between two such points.
  • The system uses its own protocol called FEBE (Front End Back End), which defines several verbs, including (on page 4/61) RETRIEVEV (like HTTP GET), DELETEVSPAN, MAKELINK, FINDNUMOFLINKSTOTHREE, FINDLINKSFROMTOTHREE, and FINDDOCSCONTAINING. (Note that “three” in this context is an unusual notation for a link type.) Perhaps ten more verbs are defined in total.

A few common themes emerge. One is the grandiose scope: this really is intended as a system to encompass all of literature past, present, and future, and to thereby create a culture of intellect and reshape civilization. “We think that anyone who actually understands the problems will recognize our approach as the unique solution.” (italics from original, 1993 preface)

Another theme is simple solutions to incredibly difficult problems. So the basic solution to unbreakable links is to never change documents. Sometimes these solutions work brilliantly, sometimes they fall short, and many times they end up somewhere in between. In terms of sheer vision, nobody else has come close to inspiring as many people working on the web. Descriptions of what today we’d call a browser would sound familiar, if a bit abstract, even to casual users of Firefox or IE.

Nothing like REST seems to have occurred to Nelson or his associates. It’s unclear how widely deployed Xanadu prototypes ever were, or how many nodes were ever online at any point. The set of verbs in the FEBE protocol reads like what a competent engineer would come up with. The benefits of REST, in particular of minimizing verbs and maximizing nouns, are non-obvious without a significant amount of web-scale experience.

Likewise, Creative Commons seems like something the designers never contemplated. “Ancient documents, no longer having a current owner, are considered to be owned by the system–or preferably by some high-minded literary body that oversees their royalties.” (page 2/29) While this sounds eerily like the Google Books settlement, it misses the implications of truly free-as-in-beer content, and equally misses the power of free-as-in-freedom documents. In terms of social impact there’s a huge difference between something that costs $0 and $0.000001.

In this system anyone can include any amount of any published document into their own without special permission. In a world where people writing Harry Potter Lexicons are getting sued by the copyright industry, it’s hard to imagine this coming to pass without kicking and screaming, but it is a nice world to think about. Anyway, in Xanadu per-byte royalties work themselves out according to the proportion of original vs. transcluded bytes.

Where is Google in this picture? “Two system directories, maintained by the system itself, are anticipated: author and title, no more” (page 2/49). For additional directories or search engines, it’s not clear how that would work: is a search-results page a published or privashed document? Does every possible older version of every result page stick around in the system? (If not, links to and from them might break.) It’s part of a bigger question about how to represent and handle dynamic documents in the system.

On privacy: “The network will not, may not monitor what is written in private documents.” (page 2/59) A whole section in chapter 3 deals with these kinds of issues, as does Computer Lib, another of Nelson’s works.

He was early to recognize the framing problem: how, in a tangle of interlinked documents, to make sense of what’s there, and to discern between useful and extraneous chunks. Nelson admits to no general solution, but points at some promising directions, one of which is link typing–the more information there is on individual links, the more handles there are to make sense of the tangle. Some tentative link types include title, author, supersession, correction, comment, counterpart, translation, heading, paragraph, quote, footnote, jump-link, modal jump-link, suggested threading, expansion, citation, alternative version, certification, and mail.

At several points, Nelson mentions algorithmic work that makes the system possible. Page 1/36 states “Our enfilade data structures and methods effectively refute Donald Knuth’s list of desirable features that he says you can’t have all at once (in his book Fundamental Algorithms: Sorting and Searching)”. I’m curious if anyone knows more about this, or whether Knuth ever got enough details to verify that claim, or to revise his list.

So was the opening anecdote a valid description of reality? I have to say no, it’s not that simple. Nelson rightly calls the web a shallow imitation of his grand ideas, but those ideas are–in some ways literally–from a different world. It’s not a question of “if only things had unfolded a bit differently…”. To put it even more strongly, a system with that kind of scope cannot be designed all at once; to be embraced by the real world, it has to be developed with a feedback loop to the real world. This in no way diminishes the value and influence of big ideas, or the place that Roarkian stick-to-your-gunnedness has in our world, industry, and society. We may have gotten ourselves into a mess with the architecture of the present web, but even so, Nelson’s vision will keep us aspiring toward something better.

I intend to return to this posting and update it for accuracy as my understanding improves. Some additional topics to maybe address: a more detailed linking example (page 2/45), comparing XLink to Xanadu, comparing URIs and tumblers, and the bizarre (and yet oddly familiar, if you’ve ever been inside a FedEx Kinko’s) notion of “SilverStands”.

For more on Nelson, there is the epic writeup in Wired. YouTube has some good stuff too.

Comments are welcome. -m

Xanadu is a registered trademark, here used for specific identifying purpose.

Wednesday, November 18th, 2009

MarkLogic and XSLT

MarkLogic fans should check out Norm Walsh’s posting about his talk at the NY User Group. If you follow the right Twitter feeds, this is probably not too much of a surprise, but now the cat is officially disjoint with the volume inside the bag. Disclaimer: be sure to read the disclaimer there. -m

Saturday, November 14th, 2009

Geek Thoughts: if this keeps going

If Moore’s law applies to flash (and flash-like) memory storage, and it certainly seems like it does, in another decade we will all be carrying around a terabyte on our phones. (A doubling every 18 months for ten years works out to roughly a hundredfold increase; a hundred times the 16 GB in a high-end 2009 phone is over a terabyte.)

What happens then?

More collected Geek Thoughts at http://geekthoughts.info.

Sunday, November 8th, 2009

High Temperature Superconductors

If this site is accurate, it’s now possible to have superconducting material at household freezer temperatures: 254 K, or a tiny bit below 0°F. From power lines to maglevs to supercolliders to energy storage, the potential applications boggle the mind. -m

Note: I’m having trouble finding independent verification of this, other than what appears to be re-hashes of the superconductor.org article. If you have any additional proof or refutation, please post it in the comments.

Thursday, November 5th, 2009

Metadata FTW

Link credit goes to Joho.

This looks pretty significant. The Arizona Supreme Court ruled that document metadata must be disclosed under existing public records law. This may start a chain reaction, with other states following suit. It fits in well with the movement toward open data, including data.gov and the Federal Register. Quite often, metadata such as creation date and author makes for much better searching and faceting. -m

Wednesday, November 4th, 2009

Geek Thoughts: unlikely tail

Tractors are to dogs as rocking chairs are to cats.

More collected Geek Thoughts at http://geekthoughts.info.