Archive for the 'browsers' Category

Tuesday, October 29th, 2013

Skunklink a decade later

Alex Milowski asks on Twitter about my thoughts on Skunklink, now a decade old.

Linking has long been thought one of the cornerstones of the web, and thereby a key part of XML and related syntaxes. It’s also been frustratingly difficult to get right. XLink in particular once showed great promise, but when it came down to concrete syntax, didn’t get very far. My thinking at the time is still well-reflected in what is, to my knowledge, the only fiction ever published on A Hyperlink Offering. That story ends on a hopeful note, and a decade out, I’m still hoping.

For what it purports to do, Skunklink still seems like a good solution to me. It’s easy to explain. The notion of encoding the author’s intent, then letting devices work out the details, possibly with the aid of stylesheets and other such tools, is the right way to tackle this kind of problem. Smaller specifications like Skunklink would be a welcome breath of fresh air.

But a bigger question lurks behind the scenes: that of requirements. Does the world need a vocabulary-independent linking mechanism? The empirical answer is clearly ‘no’ since existing approaches have not gained anything like widespread use, and only a few voices in the wilderness even see this as a problem. In fact, HTML5 has gone in quite the opposite direction, rejecting the notion of even a vocabulary-independent syntax, to say nothing of higher layers like intent. I have to admit this mystifies me.

That said, it seems like the attribute name ‘href’ has done pretty well in representing intended hyperlinks. The name ‘src’ not quite as well. I still consider it best practice to use these names instead of making something else up.

What do you think? -m


Thursday, August 5th, 2010

Balisageurs: XML and JSON

At David Lee’s nocturne about XML and JSON round-tripping, several folks were talking about a site that listed a number of “off-the-shelf” conversion methods, but nobody could remember the site.

Late that night, with 15 minutes of battery remaining, I found it. The operative search term is XSLTJSON. -m

Wednesday, July 7th, 2010

Grokking Selenium

As the world of web apps gets more framework-y, I need to get up to speed on contemporary automation testing tools. One of the most popular ones right now is the open source Selenium project. From the look of it, that project is going through an awkward adolescent phase. For example:

  • Selenium IDE lets you record tests in a number of languages, but only HTML ones can be played back. For someone using only Selenium IDE, it’s a confusing array of choices for no apparent reason.
  • Selenium RC has bindings for lots of different languages but not for the HTML tests that are most useful in Selenium IDE. (Why not include the ability to simply play through an entire recorded script in one call, instead of fine-grained commands like selenium.key_press(input_id, 110), etc.?)
  • The list of projects prominently mentions Selenium Core (a JavaScript implementation), but when you click through to the documentation, it’s not mentioned. Elsewhere on the site it’s spoken of in deprecating terms.
  • If you look at the developer wiki, all the recent attention is on Web Drivers, a new architecture for remote-controlling browsers, but those aren’t mentioned in the docs (yet) either.

So yeah, right now it’s awkward and confusing. The underlying architecture of the project is undergoing a tectonic shift, something that would never see the public light of day in a proprietary project. In the end it will come out leaner and meaner. What the project needs in the short term is more help from fresh outsiders who can visualize the desirable end state and help the already ramped-up and productive developers on the project get there.

By the way, if this kind of problem seems interesting to you, let me know. We’re hiring. If you have any tips for getting up to speed in Selenium, comment below.


Tuesday, February 2nd, 2010

Larry Masinter on overspecification

Some thoughts worth considering on the state of HTML development today. -m

Monday, November 30th, 2009

The best thing you can do…

The best thing a user can do to advance the Web is to help move people off IE 6

— Ryan Servatius, senior product manager for Internet Explorer.

Source. -m

Sunday, November 29th, 2009

The Model Endpoint Template (MET) organizational pattern for XRX apps

One of the lead bullets describing why XForms is cool always mentions that it is based on a Model View Controller framework. When building a full XRX app, though, MVC might not be the best choice to organize things overall. Why not?

Consider a typical XRX app, like MarkLogic Application Builder. (You can download your copy of MarkLogic, including Application Builder, under the community license at the developer site.) For each page, the cycle goes like this:

  1. The browser requests a particular page, say the one that lets you configure sorting options in the app you’re building
  2. The page loads, including client-side XForms via JavaScript
  3. XForms requests the project state as XML from a designated endpoint; this becomes the XForms Instance Data
  4. Stuff happens on the page that changes the client-side state
  5. Just before leaving the page, XML representing the updated state is HTTP PUT back to the endpoint

The benefit of this approach is that you are dealing with XML all the way through, no impedance mismatches like you might find on an app that awkwardly transitions from (say) relational data to Java objects to urlencoded name/value pairs embedded in HTML syntax.

So why not do this in straight MVC? Honestly, MVC isn’t a bad choice, but it can get unwieldy. If an endpoint consists of separate model+view+controller files, and each individual page consists of separate model+view+controller files, it adds up to a lot of stuff to keep track of. In truly huge apps, this much attention to organization might be worth it, but most apps aren’t that big. Thus the MET pattern.

Model: It still makes sense to keep the code that deals with particular models (closely aligned with Schemas) as a separate thing. All of Application Builder, for example, has only one model.

Endpoint: The job of an endpoint is to GET and PUT (and possibly POST and DELETE) XML, or other equivalent resource bundles depending on how many media types you want to deal with. It combines an aspect of controllers by being activated by a particular URL and views by providing the data in a consistent format.
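To make the endpoint role concrete, here is a minimal sketch. It’s in Python purely for illustration (a real XRX endpoint would be XQuery against the XML database), and the URL and XML shapes are made up:

```python
# Illustrative sketch only, not MarkLogic code: an "endpoint" in the MET
# sense. It is activated by a URL (the controller aspect) and always speaks
# one format, XML (the view aspect).
class Endpoint:
    def __init__(self):
        self._store = {}  # stands in for the XML database

    def handle(self, method, url, body=None):
        if method == "GET":
            # Return the stored state, or a default empty document.
            return 200, self._store.get(url, "<state/>")
        if method == "PUT":
            # Replace the state wholesale with what the XForms client sends.
            self._store[url] = body
            return 204, ""
        return 405, ""

ep = Endpoint()
ep.handle("PUT", "/app/1/sort.xml", "<sort><by>title</by></sort>")
status, state = ep.handle("GET", "/app/1/sort.xml")
```

The point is the narrow contract: one URL, one format, GET and PUT, which is why an endpoint stays small enough to keep out of trouble.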

Template: Since XForms documents already contain MVC mechanics, it’s not a high-payoff move to use MVC again to construct the XForms and XHTML wrapper themselves. The important stuff happens within XForms; beyond that you need templating, for example to provide consistent headers, footers, and other pieces across multiple pages. For this, an ordinary templating mechanism suffices. I can imagine dynamic assembly scenarios where this wouldn’t be the case, but again, many apps don’t need this kind of flexibility, or the complexity that comes along with it.

What about separation of concerns? Oh yeah, what about it? :-) Technically both Endpoints and Templates violate classical SOC. In an XRX app, this typically doesn’t lead to the kinds of spaghetti situations that it might otherwise. Endpoints are self-contained and can focus on doing just one thing well; with limited scope comes limited ability to get into trouble. For those times when you need to dig into the XQuery code of an endpoint, it’s actually helpful to see both the controller and view pieces laid out in one file.

As for Templates, simplicity wins. With the specifics of models and endpoints peeled away, the remaining challenge in developing individual pages is getting the XForms right, and again, it’s helpful to minimize the number of files one XForms page is split across. YAGNI applies to what’s left, at least in the stuff I’ve built.

So, I’ve been careful in the title to call this an “organizational pattern”, not a “design pattern” or an (ugh) “architectural pattern”. Nothing too profound here. I’d be happy to start seeing XRX apps laid out with directory names like “models”, “endpoints”, and “templates”.

What do you think? Comments welcome.


Sunday, November 22nd, 2009

How Xanadu Works: technical overview

One particular conversation I’ve overheard several times, often in the context of web and standards development, has always intrigued me. It goes something like this:

You know, Ted Nelson’s hypertext system from the 60’s had unbreakable, two-way links. It was elegant. But then came along Tim Berners-Lee and HTML, with its crappy, one-way, breakable links, and it took over the world.

The general moral of the story is usually about avoiding over-thinking problems and striving for simplicity. This has been rolling around in the back of my mind ever since the first time I heard the story. Is it an accurate assessment of reality? And how exactly did Nelson’s system, called Xanadu (R), manage the trick of unbreakable super-links? Even if the web ended up going in a different direction, there still might be lessons to learn for the current generation of people building things that run (and run on) the web.

Nelson’s book Literary Machines describes the system in some detail, but it’s hard to come by in the usual channels like Amazon, or even local bookstores. One place does have it, and for a reasonable price too: Eastgate Systems. [Disclosure: I bought mine from there for full price. I’m not getting anything for writing this post on my blog.] The book has a versioning notation, with 93.1 being the most recent, describing the “1993 design” of the software.

Pause for a moment and think about the history here. 1993 is 16 years ago as I write this, about the same span of time between Vannevar Bush’s groundbreaking 1945 article As We May Think (reprinted in full in Literary Machines) and Nelson’s initial work in 1960 on what would become the Xanadu project. As far as software projects go, this one has some serious history.

So how does it work? The basic concepts, in no particular order, are:

  • A heavier-weight publishing process: other than inaccessible “privashed” (as opposed to “pub”lished) documents, publication is forever; documents can’t be deleted except in extraordinary circumstances, and then only with some kind of waiting period.
  • All documents have a specific owner, are royalty-bearing, and work through a micropayment system. Anyone can quote, transclude, or modify any amount of anything, with the payments sorting themselves out accordingly.
  • Software called a “front end” (today we’d call it a “browser”) works on behalf of the user to navigate the network and render documents.
  • Published documents can be updated at will, in which case unchanged pieces can remain unchanged, with inserted and deleted sections in between. Thus, across the history of a document, there are implicit links forward and backward in time through all the various editions and alternatives.
  • In general, links can jump to a new location in the docuverse or transclude part of a remote document into another, among many other configurations, including multi-ended links; they are granular to the character level and attached to particular characters.
  • Document and network addressing are accomplished through a clever numbering system (somewhat reminiscent of organic versioning, but infinitely extensible on multiple axes). These addresses, called tumblers, represent a Node+User+Document+Subdocument, and a minor variant of the syntax can express ranges between two points therein.
  • The system uses its own protocol called FEBE (Front End Back End), which contains several verbs, including these from page 4/61: RETRIEVEV (like HTTP GET), DELETEVSPAN, MAKELINK, FINDNUMOFLINKSTOTHREE, FINDLINKSFROMTOTHREE, and FINDDOCSCONTAINING. [Note that “three” in this context is an unusual notation for a link type.] Maybe 10 more verbs are defined in total.
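To get a feel for the addressing scheme, here’s a toy model, emphatically not Xanadu’s actual tumbler arithmetic, and with a made-up field layout: treat an address as a dotted digit sequence, compare it field by field, and extend it by appending fields.

```python
# Toy model only: NOT Xanadu's real tumbler arithmetic, just a sketch of how
# dotted addresses can order and extend. The field layout here is invented.
def tumbler(address):
    return tuple(int(field) for field in address.split("."))

doc = tumbler("1.1.0.2.0.7")        # node.user.document, illustratively
older = tumbler("1.1.0.2.0.6")      # a neighboring document
subpart = tumbler("1.1.0.2.0.7.3")  # extend a new axis below the document

# Tuple comparison gives lexicographic, field-by-field ordering:
print(older < doc < subpart)        # prints "True"
```

Any document can sprout new sub-addresses without disturbing its neighbors, which hints at how a range between two such points could also be expressed.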

A few common themes emerge. One is the grandiose scope: this really is intended as a system to encompass all of literature past, present, and future, and to thereby create a culture of intellect and reshape civilization. “We think that anyone who actually understands the problems will recognize our approach as the unique solution.” (italics from original, 1993 preface)

Another theme is simple solutions to incredibly difficult problems. The basic solution to unbreakable links, for instance, is to never change documents. Sometimes these solutions work brilliantly, sometimes they fall short, and many times they end up somewhere in between. In terms of sheer vision, nobody else has come close to inspiring as many people working on the web. Descriptions of what today we’d call a browser would sound familiar, if a bit abstract, even to casual users of Firefox or IE.

Nothing like REST seems to have occurred to Nelson or his associates. It’s unclear how widely deployed Xanadu prototypes ever were, or how many nodes were ever online at any point. The set of verbs in the FEBE protocol reads like what a competent engineer would come up with. The benefits of REST, in particular of minimizing verbs and maximizing nouns, are non-obvious without a significant amount of web-scale experience.

Likewise, Creative Commons seems like something the designers never contemplated. “Ancient documents, no longer having a current owner, are considered to be owned by the system–or preferably by some high-minded literary body that oversees their royalties.” (page 2/29) While this sounds eerily like the Google Books settlement, it misses the implications of truly free-as-in-beer content, and equally misses the power of free-as-in-freedom documents. In terms of social impact there’s a huge difference between something that costs $0 and something that costs $0.000001.

In this system anyone can include any amount of any published document into their own without special permission. In a world where people writing Harry Potter Lexicons get sued by the copyright industry, it’s hard to imagine this coming to pass without kicking and screaming, but it is a nice world to think about. Anyway, in Xanadu, per-byte royalties work themselves out according to the proportion of original vs. transcluded bytes.

Where is Google in this picture? “Two system directories, maintained by the system itself, are anticipated: author and title, no more” (page 2/49). For additional directories or search engines, it’s not clear how that would work: is a search results page a published or privashed document? Does every possible older version of every result page stick around in the system? (If not, links to and from it might break.) It’s part of a bigger question about how to represent and handle dynamic documents in the system.

On privacy: “The network will not, may not monitor what is written in private documents.” (page 2/59) A whole section in chapter 3 deals with these kinds of issues, as does Computer Lib, another of Nelson’s works.

He was early to recognize the framing problem: how, in a tangle of interlinked documents, to make sense of what’s there and to discern between useful and extraneous chunks. Nelson admits to no general solution, but points at some promising directions, one of which is link typing: the more information there is on individual links, the more handles there are for making sense of the tangle. Some tentative link types include title, author, supersession, correction, comment, counterpart, translation, heading, paragraph, quote, footnote, jump-link, modal jump-link, suggested threading, expansion, citation, alternative version, certification, and mail.

At several points, Nelson mentions algorithmic work that makes the system possible. Page 1/36 states “Our enfilade data structures and methods effectively refute Donald Knuth’s list of desirable features that he says you can’t have all at once (in his book Fundamental Algorithms: Sorting and Searching)”. I’m curious if anyone knows more about this, or whether Knuth ever learned enough of the details to verify that claim, or to revise his own list.

So was the opening anecdote a valid description of reality? I have to say no, it’s not that simple. Nelson rightly calls the web a shallow imitation of his grand ideas, but those ideas are, in some ways literally, from a different world. It’s not a question of “if only things had unfolded a bit differently…”. To put it more strongly, a system with that kind of scope cannot be designed all at once; to be embraced by the real world, it has to be developed with a feedback loop to the real world. This in no way diminishes the value and influence of big ideas, or the place that Roarkian stick-to-your-gunnedness has in our world, industry, and society. We may have gotten ourselves into a mess with the architecture of the present web, but even so, Nelson’s vision will keep us aspiring toward something better.

I intend to return to this posting and update it for accuracy as my understanding improves. Some additional topics to maybe address: a more detailed linking example (page 2/45), comparing XLink to Xanadu, comparing URIs and tumblers, and the bizarre (and yet oddly familiar, if you’ve ever been inside a FedEx Kinkos) notion of “SilverStands”.

For more on Nelson, there is the epic writeup in Wired. YouTube has some good stuff too.

Comments are welcome. -m

Xanadu is a registered trademark, here used for specific identifying purpose.

Wednesday, August 5th, 2009

Misunderstanding Markup

Panel 9 of this comic describes XHTML 1.1 conformance as:

the added unrealistic demand that documents must be served with an XML mime-type

I can understand this viewpoint. XHTML 1.1 is a massively misunderstood spec, particularly around the modularization angle. But because of IE, it’s pretty rare to see the XHTML media-type in use on the open web. Later, panel 23 or thereabouts:

If you want, you can even serve your documents as application/xhtml+xml, instantly transforming them from HTML 5 to XHTML 5.

Why the shift in tone? What makes serving the XML media type more realistic in the HTML5 case? IE? Nope, still doesn’t work. I’ve observed this same shift in perspective from multiple people involved in the HTML5 work, and it baffles me. In XHTML 1.1 it’s a ridiculous demand showing how out of touch the authors were with reality; in HTML5 the exact same requirement is a brilliant solution, wink wink, nudge nudge.

As it stands now, the (X)HTML5 situation demotes XHTML to the backwaters of the web. Which is pretty far from “Long Live XHTML…”, as the comic concludes. Remember when X stood for Extensible?


Friday, July 31st, 2009

Pragmatic Namespaces

In case any of the 7 regular readers here aren’t following xml-dev, check out and add to the discussion about Pragmatic Namespaces, proposed as a solution for the “distributed extensibility” problem in HTML5.

For years people have been pointing to Java as the model for how XML namespaces should work, so this proposal goes that direction. Either it will work, or else it will get people to finally shut up about the whole idea. :)

It’s heavily based on Tom Bradford’s Clean Namespaces proposal, which doesn’t have a living URL anymore but is available on


Wednesday, July 29th, 2009

Object-Oriented CSS

I enjoyed Nicole Sullivan‘s talk at the BayJax Meetup on Object-Oriented CSS, something I hadn’t run into before. Adding predictability to CSS development seems like a huge win. I need to wrap my head around it better. Anyone with experience using this technique care to comment? -m

Thursday, July 2nd, 2009

And then there were one…

On May 8 I wrote:

it’s time for the W3C to show some tough love and force the two (X)HTML Working Groups together.

On July 2, the W3C wrote:

Today the Director announces that when the XHTML 2 Working Group charter expires as scheduled at the end of 2009, the charter will not be renewed. By doing so, and by increasing resources in the Working Group, W3C hopes to accelerate the progress of HTML 5 and clarify W3C’s position regarding the future of HTML.

The real test is whether the single HTML Working Group can be held to the standard of other Working Groups, and be able to recruit some much-needed editorial help from some of the displaced XHTML 2 gang.  -m

Tuesday, June 23rd, 2009

RDFa List Apart

A great introduction article. Maybe it’s just the crowd I hang with, but RDFa looks like it’s moving from trendy to serious tooling. -m

Wednesday, February 4th, 2009

XSLTForms beta

XSLTForms, the cross-browser XForms engine (written about previously) that makes ingenious use of built-in XSLT processing, reached an important milestone today, with a beta release. Tons of bug fixes and additional support for CSS and Schema.

If you’re thinking about getting involved with XForms and are looking for something small and approachable, give it a look. -m

Monday, December 29th, 2008

Opera 9.6.3 includes a fantastic JavaScript debugger

Have you checked out Opera lately? You should. Their brilliant strategy is to include a JavaScript debugger so excellent that you’d be willing to test on that browser just to use the tool.

If you’ve been having the same kinds of troubles that I have with Firebug lately (not to demean the thousands who use that tool daily, but I draw the line when the debugger is the source of bugs) check it out. -m

Friday, December 19th, 2008

XSLTForms looks promising

Implementing client-side forms libraries is, and has been, all the rage. I’ve seen Mozquito Factory do amazing things in Netscape 4, Technical Pursuits TIBET on the perpetual verge of release, UGO, and others. In a more recent time scale, Ubiquity XForms impresses me and many others, and it has the right combination of funding and willing developers.

From a comment on my recent posting about Ubiquity XForms, I was pleased to learn about XSLTForms, a rebirth of AjaxForms, which I thought well of two years ago until its developer mysteriously left the project. But Software Libre lives on, and a new developer has taken over, this time using client-side XSLT instead of server-side Java to do the first pass of processing. Given the strong foundation, the project has come a long way in a short time, and already runs against a wide array of non-trivial examples. Check it out.

I’d like to hear what others think about this project. -m

Monday, December 8th, 2008

XML 2008 liveblog: Ubiquity XForms

I will talk about one or more sessions from XML 2008 here.

Mark Birbeck of Web Backplane talking about Ubiquity XForms.

Browsers are slow to adopt new standards. Ajax libraries have attempted to work around this. Lots of experimentation, which is both good and bad, but at least it has legitimized extensions to browsers. JavaScript is the assembly language of the web.

Ubiquity XForms is part of a library, which will also include RDFa and SMIL. Initially based on YUI, but in theory it should be adaptable to other libraries like jQuery.

Declarative: tools for creation and validation. Easier to read. Ajax libraries are approaching the level of being their own language anyway, so might as well take advantage of a standard.

Example: setting the “inner value” of a span: <span value="now()"></span>.

Script can do this easily: onclick="this.innerHTML = Date().toLocaleString();" But that crosses the line from semantics to specific behavior. The previous one is exactly how xforms:output works.

Another example: tooltips. It breaks down to onmouseover and onmouseout event handlers, plus show and hide. A jQuery-like approach can search the document for all tooltip elements and add the needed handlers, avoiding explicit behavioral code. This is the essence of Ubiquity XForms (and in fact of XForms itself).

Patterns like these compose under XForms. A button (xf:trigger) or any form control can easily have a tooltip (xf:hint). These are all regular elements, stylable with CSS, accessible via DOM, and so forth. Specific events (like xforms-hint) fire at the appropriate moments, and a spreadsheet-like engine can update interdependencies.

Question: Is this client-side? A: Yes, all running within Firefox. The entire presentation is one XForms document.

Demo: a range control with class=”geolocation” that displays as a map w/ Google Maps integration. The Ubiquity XForms library contains many such extensibility points.

Summary: Why? Simple, declarative. Not a programming language. Speeds up development. Validatable. Link:

Q&A: Rich text? Not yet, but not hard (especially with YUI). Formally XForms compliant? Very nearly 1.1 conforming.


Friday, October 10th, 2008

More mobile XForms goodness

I haven’t tried this, but these guys claim to have a solution where

The form definitions are saved and exchanged as XForms, and the data as XForm[s] models. The data can be exchanged over http (if the phone users can afford GPRS and have a data connection) or over compressed SMS messages.

Sounds like they have the right idea… -m

Thursday, October 2nd, 2008

XForms spambots on the loose

A determined spambot has been submitting the XForms contact form on XForms Institute. OK, so it’s probably more Flash-aware than XForms-aware, but still. -m

Saturday, August 23rd, 2008

MarkLogic RDFa parser

This post will be continuously updated to contain the most recent details about an XQuery 1.0 RDFa parser I wrote for Mark Logic. It follows the Functional RDFa pattern.

At present there is little to say, but eventually code and more will be available. Stay tuned.


Friday, August 8th, 2008

It would be awesome if somebody…

It would be awesome if someone made a site that catalogued all the common mis-encodings. Even in 2008, I see these things all over the web: mangled quotation marks, apostrophes, em-dashes. I’d love to see a pictorial guide.

curly apostrophe looks like ?’ – original encoding=_________ mislabeled as __________ .

That sort of thing. Surely somebody has done this already, right? -m
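For the record, the most common entry in such a catalogue is easy to reproduce; a quick Python sketch of the usual suspect, UTF-8 bytes mislabeled as Windows-1252:

```python
# Reproduce the classic mangling: UTF-8 bytes decoded as Windows-1252.
original = "it’s"   # contains U+2019, the curly apostrophe (UTF-8: E2 80 99)
mangled = original.encode("utf-8").decode("cp1252")
print(mangled)      # prints "itâ€™s"
```

So the ?’ in the template above would read: curly apostrophe looks like â€™ when the original encoding is UTF-8 mislabeled as Windows-1252.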

Monday, July 28th, 2008

eRDF 1.1 Proposal Discussion

The W3C RDFa specification is now in Candidate Recommendation phase, with an explicit call for implementations (of which there are several). Momentum for RDFa is steadily building. What about eRDF, which favors the existing HTML syntax over new attributes?

There’s still a place for a simpler syntactic approach to embedding RDF in HTML, as evidenced by projects like Yahoo! SearchMonkey. And eRDF is still the only game in town when it comes to annotating RDF within HTML-without-the-X.

One thing the RDFa folks did was define src as a subject-bearing node, rather than an object. At first I didn’t like this inversion, but the more I worked with it, the more it made sense. When you have an image, which can’t have children in (X)HTML, it’s very often useful to use the src URL as the subject, with a predicate of perhaps cc:license.

So I propose one single change to eRDF 1.1. Well, actually several changes, since one thing leads to another. The first is to specify that you are using a different version of eRDF. A new profile string of:


The next is changing the meaning of a src value to be a subject, not an object, perhaps by swapping the subject and object. Many existing uses of eRDF involving src already involve properties with readily available inverses. For example:

<!-- eRDF 1.0 -->
<img class="foaf.depiction" src="" />

<!-- eRDF 1.1 -->
<img src="" class="foaf.depicts" />

With the inherent limitations of existing syntax, the use case of having a full image URL and a license URL won’t happen. But XHTML2, as well as an HTML5 proposal, suggests that adding href to many more elements might come to pass, in which case this possibility opens up:

<img src="" class="cc.license"
href="" />

Comments? -m

Thursday, July 17th, 2008

Website Optimization is on the shelves

Andy King’s Website Optimization is now in print from O’Reilly. This book covers it all: performance, SEO, conversion rates, analytics, you name it. If you run a web site, you’ll find this useful. I tech edited and contributed a small portion, about the growing trend of metadata as site advantage. Go check it out. -m

Saturday, June 28th, 2008

Firefox 3 CPU issue: extension to blame

Several folks, including me, have experienced increased CPU usage in Firefox 3, especially on OSX. Try disabling the extension and going back to the bookmarklet. -m

Thursday, May 29th, 2008


Bumped into XRX today. XForms + REST + XQuery. I like the sound of this, and XForms on the client just got a whole bunch easier…

I’m seeing multiple signs that the confluence of XForms and XQuery has legs. (And REST just plain makes sense in any situation). -m

Thursday, May 22nd, 2008

XForms Ubiquity

I just found out about a nice little XForms engine called Ubiquity. (Having dinner with Mark Birbeck, TV Raman, and Leigh Klotz certainly helps one find out about such things) :-)

It’s a JavaScript implementation done right. Open source under the Apache 2.0 license. Seems like a nice fit with, oh maybe MarkLogic Server? -m

Wednesday, May 21st, 2008

XQuery Annoyances…

If you are used to XSLT 1.0 and XForms, you see { $book/bk:title } and think nothing of it. XSLT 1.0 calls the curly-brace construct an Attribute Value Template, which is pretty descriptive of where it’s used. Always in an attribute, always converted into a string, even if you are actually pointing to an element.

In XQuery, though, the curly-brace construct can be used in many different places. Depending on the context, the above code might well insert a bk:title element into your output. The proper thing to do, of course, is { $book/bk:title/text() }. Many XSLT and XForms authors would omit the extra text() selector as superfluous, but in XQuery it matters.

What’s worse, depending on your browser, you might not see any output on the page within a <bk:title> element (or a title element of any namespace). Caveat browser! -m


Wednesday, May 14th, 2008

Reminder: SearchMonkey developer launch party Thursday

Reminder: Thursday evening at Yahoo! Sunnyvale headquarters is the launch party for the developer-facing side of SearchMonkey. In case you haven’t been paying attention, SearchMonkey is a new platform that lets developers craft their own awesomized search results. If you’re interested in SEO or general lowercase semantic web tools, you’ll love it. Meet me there. Upcoming link. Party starts at 5:30. -m

Update: The developer tool is live. Rasmus has a nice walkthrough.

Friday, May 2nd, 2008

SearchMonkey dev party

If you have webdev skillz, you might be interested in the SearchMonkey launch party on May 15. Good food, good drink, good coding. Space is limited, but I have a few invites to share. Comment here or contact me offline if interested. -m

Monday, March 10th, 2008

Getting what you asked for

Some time ago, Doug Crockford’s excellent blog pointed me to this page on “excessive DTD traffic” at the W3C. Go ahead and follow that link, I’ll wait…

All the standard templates that show how to construct a basic XHTML page include a public identifier, and often a namespace name, in the form of a w3.org URL. As the blog points out, these are not actually hyperlinks; they only play them on TV. Huge quantities of software request these URLs 24×7, putting a load on the W3C’s servers. Often this results from unfortunate defaults in off-the-shelf XML components such as parsers.

But what did you expect?

This is the web equivalent of having a front-desk receptionist hand out stacks of self-addressed, stamped postcards, then complain about how much mail the company gets from all around the world.

HTTP URLs are great for identifiers on a technical basis: they are based on DNS names and have the important qualities of uniqueness and persistence. But as far as human factors go, they are a terrible choice (though with a great deal of inertia at this point). -m
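As an aside, it’s easy to check whether a given XML stack is one of the offenders: a well-behaved parser treats these URLs as identifiers and never dereferences them. A quick sketch with Python’s standard-library parser, which skips the external DTD subset rather than fetching it (offered as an illustration, not an audit of the components the W3C post blames):

```python
import xml.etree.ElementTree as ET

# A document whose DOCTYPE points at the W3C's servers. A well-behaved
# parser records the identifier without fetching the URL, so this parse
# succeeds with no network access at all.
doc = """<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html><body><p>hello</p></body></html>"""

root = ET.fromstring(doc)
print(root.tag)  # prints "html"
```

Components that do fetch by default generally grew up before anyone imagined billions of copies hitting the same address, which is exactly the blog’s point.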

Monday, January 7th, 2008

Yahoo! introduces mobile XForms

Admittedly, their marketing folks wouldn’t describe it that way, but essentially that’s what was announced today. (documentation in PDF format, closely related to what-used-to-be Konfabulator tech; here’s the interesting part in HTML) The press release talks about reaching “billions” of mobile consumers; even if you don’t put too much emphasis on press releases (you shouldn’t) it’s still talking about serious use of and commitment to XForms technology.

Shameless plug: Isn’t it time to refresh your memory, or even find out for the first time about XForms? There is this excellent book available in printed format from Amazon, as well as online for free under an open content license. If you guys express enough interest, good things might even happen, like a refresh to the content. Let’s make it happen.

From a consumer standpoint, this feels like a welcome play against Android, too. Yahoo! looks like it’s placing a bet on working with more devices while making development easier at the same time. I’ll bet an Android port will be available, at least in beta, before the end of the year.

Disclaimer: I have been out of Yahoo! mobile for several months now, and can’t claim any credit for or inside knowledge of these developments. -m

P. S. Don’t forget the book.