Archive for the 'web20' Category

Sunday, March 31st, 2013

Introducing node-node:node.node

Naming is hard to do well, almost as hard as designing good software in the first place. Take, for instance, the term ‘node’, which depending on the context can mean:

  1. A fundamental unit of the DOM (Document Object Model) used in creating rich HTML5 applications.
  2. A basic unit of the Semantic Web–a thing you can say stuff about. Some nodes are even unlabeled, hence the name ‘blank nodes’.
  3. In operations, a node means, roughly, a machine on the network. E.g. “sixteen-node cluster”
  4. A software library for event-driven, asynchronous development with JavaScript.

I find myself at the forefront of a growing chorus of software architects and API designers who are fed up with this overloading of a perfectly good term. So I’m happy today to announce node-node:node.node.

The system is still in pre-alpha, but it solves all of the most pressing problems that software developers routinely run into. In this framework, every node represents a node, for the ultimate in scalable distributed document storage. In addition, every node additionally serves as a node, which provides just enough context to make open-world assumption metadata assertions at node-node-level granularity. Using the power of Node, every node modeled as a node has instant access to other node-node:nodes. The network really is the computer. You may never write a program the old way again. Follow my progress on Sourceforge, the latest and most cutting-edge social code-sharing site. -m

Wednesday, July 7th, 2010

Grokking Selenium

As the world of web apps gets more framework-y, I need to get up to speed on contemporary automation testing tools. One of the most popular ones right now is the open source Selenium project. From the look of it, that project is going through an awkward adolescent phase. For example:

  • Selenium IDE lets you record tests in a number of languages, but only HTML ones can be played back. For someone using only Selenium IDE, it’s a confusing array of choices for no apparent reason.
  • Selenium RC has bindings for lots of different languages, but not for the HTML tests that are most useful in Selenium IDE. (Why not include the ability to simply play through an entire recorded script in one call, instead of fine-grained commands like selenium.key_press(input_id, 110), etc.? See the sketch just after this list.)
  • The list of projects prominently mentions Selenium Core (a JavaScript implementation), but when you click through to the documentation, it’s not mentioned. Elsewhere on the site it’s spoken of in deprecating terms.
  • If you look at the developer wiki, all the recent attention is on WebDriver, a new architecture for remote-controlling browsers, but that isn’t mentioned in the docs (yet) either.
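To make the fine-grained-commands point concrete, here is a minimal sketch of what driving a test through Selenium RC’s Python client looks like. It assumes a Selenium server running on localhost:4444 and a hypothetical page with a search field and button; the URL and locators are made up for illustration.

# A minimal Selenium RC session via the old Python client (not WebDriver).
# Assumes a Selenium server on localhost:4444; URL and locators are hypothetical.
from selenium import selenium

sel = selenium("localhost", 4444, "*firefox", "http://example.com/")
sel.start()
sel.open("/")                        # load the page under test
sel.type("q", "selenium")            # fill in a search field, one call at a time
sel.click("btnSearch")               # click the submit button
sel.wait_for_page_to_load("30000")   # timeout is a string of milliseconds
assert sel.is_text_present("Results")
sel.stop()

Every step is its own remote call; there’s no single “play this recorded HTML script” command.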

So yeah, right now it’s awkward and confusing. The underlying architecture of the project is undergoing a tectonic shift, something that would never see the public light of day in a proprietary project. In the end it will come out leaner and meaner. What the project needs in the short term is more help from fresh outsiders who can visualize the desirable end state and help the ramped-up and productive developers on the project get there.

By the way, if this kind of problem seems interesting to you, let me know. We’re hiring. If you have any tips for getting up to speed in Selenium, comment below.

-m

Tuesday, September 22nd, 2009

XForms Developer Zone

Another XForms site launched this week. This one seems pretty close to what I would like XForms Institute to become, if I had an extra 10 hours per week. -m

Wednesday, July 29th, 2009

Object-Oriented CSS

I enjoyed Nicole Sullivan’s talk at the BayJax Meetup on Object-Oriented CSS, something I hadn’t run into before. Adding predictability to CSS development seems like a huge win. I need to wrap my head around it better. Anyone with experience using this technique care to comment? -m

Thursday, July 2nd, 2009

And then there were one…

On May 8 I wrote:

it’s time for the W3C to show some tough love and force the two (X)HTML Working Groups together.

On July 2, the W3C wrote:

Today the Director announces that when the XHTML 2 Working Group charter expires as scheduled at the end of 2009, the charter will not be renewed. By doing so, and by increasing resources in the Working Group, W3C hopes to accelerate the progress of HTML 5 and clarify W3C’s position regarding the future of HTML.

The real test is whether the single HTML Working Group can be held to the standard of other Working Groups, and be able to recruit some much-needed editorial help from some of the displaced XHTML 2 gang.  -m

Thursday, August 7th, 2008

Great comment on the eRDF 1.1 discussion

On the eRDF discussion posting, Toby Inkster, an implementer of eRDF, talks about why it’s bad to steal the id attribute, and why RDFa is better suited for general purpose metadata. Worth a read. -m

Monday, July 28th, 2008

eRDF 1.1 Proposal Discussion

The W3C RDFa specification is now in Candidate Recommendation phase, with an explicit call for implementations (of which there are several). Momentum for RDFa is steadily building. What about eRDF, which favors the existing HTML syntax over new attributes?

There’s still a place for a simpler syntactic approach to embedding RDF in HTML, as evidenced by projects like Yahoo! SearchMonkey. And eRDF is still the only game in town when it comes to annotating RDF within HTML-without-the-X.

One thing the RDFa folks did was define src as a subject-bearing node, rather than an object. At first I didn’t like this inversion, but the more I worked with it, the more it made sense. When you have an image, which can’t have children in (X)HTML, it’s very often useful to use the src URL as the subject, with a predicate of perhaps cc:license.

So I propose one single change to eRDF 1.1. Well, actually several changes, since one thing leads to another. The first is to specify that you are using a different version of eRDF. A new profile string of:

"http://purl.org/NET/erdf11/profile"

The next is changing the meaning of a src value to be a subject, not an object; in effect, swapping the subject and object. Many existing uses of eRDF with src already involve properties that have readily available inverses. For example:

<!-- eRDF 1.0 -->
<img class="foaf.depiction" src="http://example.org/picture" />

<!-- eRDF 1.1 -->
<img src="http://example.org/picture" class="foaf.depicts" />
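Spelled out as triples, the swap looks like this. A minimal sketch using Python and rdflib, purely for illustration (rdflib has nothing to do with eRDF itself, and http://example.org/page is a hypothetical URI standing in for the page that embeds the img):

# Sketch of the triples each reading yields; example.org/page is hypothetical.
from rdflib import Graph, Namespace, URIRef

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
page = URIRef("http://example.org/page")
picture = URIRef("http://example.org/picture")

erdf10 = Graph()
erdf10.add((page, FOAF.depiction, picture))    # eRDF 1.0: src is the object

erdf11 = Graph()
erdf11.add((picture, FOAF.depicts, page))      # eRDF 1.1: src is the subject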

With the inherent limitations of the existing syntax, the use case of having both a full image URL and a license URL can’t be expressed. But XHTML2, as well as an HTML5 proposal, suggests that adding href to many more elements might come to pass. In which case this possibility opens up:

<img src="http://example.org/picture" class="cc.license"
href="http://creativecommons.org/licenses/by/2.0/" />

Comments? -m

Monday, July 21st, 2008

Review: Web 2.0: A Strategy Guide

Actually, instead of a review, let me quote the opening testimonial from the inside-front cover.

Competing globally with dynamic capabilities is the top priority of multinational executives and managers everywhere. Rethinking strategy in a highly networked world is the big challenge. How can your company navigate successfully in this turbulent, highly networked and socially connected environment? …

If this does it for you, I couldn’t recommend this book more highly. -m

Thursday, March 13th, 2008

The (lowercase) semantic web goes mainstream

So today Yahoo! announced a major facet of what I’ve been working on lately: making the web more meaningful. Lots of fantastic coverage, including TechCrunch and ReadWriteWeb (and others, please link in the comments), and supportive responses and blog posts across the board. It’s been a while since I’ve felt this good about being a Yahoo.

So what exactly is it?

A few months ago I went through the pages on this very blog and added hAtom markup. As a result of this change…well, nothing happened. I had a good experience learning about exactly what is involved in retrofitting an existing site with microformats, but I didn’t get any tangible benefit. With the “SearchMonkey” platform, any site using microformats, or RDFa or eRDF, is exposed to developers who can enhance search results. An enhanced result won’t directly make my site rank higher in search, but it will most certainly make it prone to more clicks, and ultimately more readership, more inlinks, and better organic ranking.

How about some questions and answers:

Q: Is this Tim Berners-Lee’s vision of the Semantic Web finally getting fulfilled?

A: No.

Q: Does this presuppose everybody rushing to change their sites to include microformats, RDF, etc?

A: No. After all, there is a developer platform. Naturally, developers will have an easier time with sites that use official and community standards for structuring data, but there is no obligation for any site to make changes in order to participate and benefit.

Q: Why would a site want to expose all its precious data in an easily-extractable way?

A: Because within a healthy ecosystem it results in a measurable increase in traffic and customer satisfaction. Data on the public web is already extractable, given enough eyeballs. An openness strategy pays off (of which SearchMonkey is an existence proof).

Q: What about metacrap? We can never trust sites to provide honest metadata.

A: The system does have significant spam deterrents built in, of which I won’t say more. But perhaps more importantly, the plugin nature of the platform uses the power of the community to shape itself. A spammy plugin won’t get installed by users. A site that mixes in fraudulent RDFa metadata with real content will get exposed as fraudulent, and users will abandon ship.

Q: Didn’t ask.com prove that having a better user interface doesn’t help gain search market share?

A: Perhaps. But this isn’t about user interface–it’s about data (which enables a much better interface).

Q: Won’t (Google|Microsoft|some startup) just immediately clone this idea and take advantage of all the new metadata out there?

A: I’m sure these guys will have some kind of response, and it’s true that a rising tide lifts all boats. But I don’t see anyone else cloning this exactly. The way it’s implemented has a distinctly Yahoo! appeal to it. Nobody has cloned Yahoo! Answers yet, either. In some ways, this is a return to roots, since Yahoo! started off as a human-guided directory. SearchMonkey is similar, except a much broader group of people can now participate. And there are some specific human, technical and financial reasons why as well, but I suggest inviting me out for beers if you want specifics. :-)

Disclaimer: as always, I’m not speaking for my employer. See the standard disclaimer. -m

Update: more Q and A

Q: How is SearchMonkey related to the recently announced Yahoo! Microsearch?

A: In brief, Microsearch is a research project (and a very cool one) with far-reaching goals, while SearchMonkey is targeted as imminently shipping software. I frequently talk to and compare notes with Peter Mika, the lead researcher for Microsearch.

Monday, January 7th, 2008

Yahoo! introduces mobile XForms

Admittedly, their marketing folks wouldn’t describe it that way, but essentially that’s what was announced today. (documentation in PDF format, closely related to what-used-to-be Konfabulator tech; here’s the interesting part in HTML) The press release talks about reaching “billions” of mobile consumers; even if you don’t put too much emphasis on press releases (you shouldn’t) it’s still talking about serious use of and commitment to XForms technology.

Shameless plug: Isn’t it time to refresh your memory, or even find out for the first time about XForms? There is this excellent book available in printed format from Amazon, as well as online for free under an open content license. If you guys express enough interest, good things might even happen, like a refresh to the content. Let’s make it happen.

From a consumer standpoint, this feels like a welcome play against Android, too. Yahoo! looks like it’s placing a bet on working with more devices while making development easier at the same time. I’ll bet an Android port will be available, at least in beta, before the end of the year.

Disclaimer: I have been out of Yahoo! mobile for several months now, and can’t claim any credit for or inside knowledge of these developments. -m

P. S. Don’t forget the book.

Monday, October 1st, 2007

simple parsing of space-separated attributes in XPath/XSLT

It’s a common need to parse space-separated attribute values from XPath/XSLT 1.0, usually @class or @rel. One common (but incorrect) technique is a simple equality test, as in {@class="vcard"}. This is wrong, since the attribute can contain the token you want alongside other values, like "foo vcard" or "vcard foo" or " foo vcard bar ", and a simple equality test misses all of those.

The proper way is to look at individual tokens in the attribute value. On first glance, this might require a call to EXSLT or some complex tokenization routine, but there’s a simpler way. I first discovered this on the microformats wiki, and only cleaned up the technique a tiny bit.

The solution involves three XPath 1.0 functions: contains() to test for the token, concat() to join string fragments together, and normalize-space() to strip leading and trailing spaces and collapse any other runs of whitespace into a single space.

In English, you:

  • normalize the class attribute value, then
  • concatenate spaces front and back, then
  • test whether the resulting string contains your searched-for value, also with spaces concatenated front and back (e.g. " vcard ").

Or: {contains(concat(' ', normalize-space(@class), ' '), ' vcard ')}. A moment’s thought shows that this works well on all the different examples shown above, and it is perhaps even less involved than resorting to extension functions that return nodes that require further processing or looping. It would be interesting to compare performance as well…
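If you want to try the expression outside of a stylesheet, here is a minimal sketch using Python and lxml (lxml isn’t part of the technique; it just happens to ship an XPath 1.0 engine):

# Testing the token-matching expression against the examples above.
from lxml import html

doc = html.fromstring(
    '<div>'
    '<span class=" foo vcard bar ">matches</span>'
    '<span class="vcard-like">does not match</span>'
    '</div>')
hits = doc.xpath(
    "//*[contains(concat(' ', normalize-space(@class), ' '), ' vcard ')]")
print([span.text for span in hits])   # ['matches']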

So next time you need to match class or rel values, give it a shot. Let me know how it works for you, or if you have any further improvements. -m

Wednesday, September 26th, 2007

Recruitment picking up?

In the last few weeks, I’ve been getting more recruitment pitches, including from the well known person ________ who is now at _______, for a think-tank position with _______, multiple LinkedIn requests from Web 2.0 company ________ and even ________.

So, is this a sign that the general industry is picking up? -m

P.S. I’m not looking. :)

Monday, April 30th, 2007

Why does ‘rich client’ equal ‘bad separation of presentation from content’?

I started writing this post back when I was tech editing the “Rich Client Alternatives” chapter of Web 2.0, the book. Now, with Apollo getting some attention, it’s worth revisiting.

What do XUL, Yahoo! Widgets, OpenLaszlo, Silverlight, and Apollo have in common? All of them mix content with presentation to some degree. Years of experience on the web have shown that a properly-done CSS layout gives you:

  • smaller, faster pages
  • better accessibility and user control of rendering
  • better adaptation to different screen resolutions
  • easier repurposing of data, including microformats
  • better mobile compatibility

Initial HTML browsers didn’t have these advantages, and gave in to early pressure to implement things like blink and font tags. Today, most webfolks would admit that these presentational tags were a mistake, and contemporary web design avoids them.

So what is it about “rich” clients that’s different? Are developers missing out on the hard lessons learned on the web? Or is there something inherent in the definition of “rich clients” that changes the balance? Your comments are welcome. -m

Sunday, April 1st, 2007

HTTPoetry

I can’t talk on the phone right now. Can you follow up on email?
Consider it placed on my todo list.
Let me give you my new address.
Hmm, I don’t have it.
What are you talking about?

(If you get the pattern, post below…) -m

Tuesday, February 13th, 2007

changes the architecture of the house, not just the color of the paint

ERH’s comments on XForms, as part of his predictions for 2007. Worth a read. -m

Tuesday, January 23rd, 2007

My .02 on Wikipedia and nofollow

The nofollow setting on an outbound link should be a user-editable option, subject to the same community process that all other content on Wikipedia already goes through (site guidelines, dispute resolution, restricted editing on certain articles for unregistered users, etc.). By default, links would get nofollow, but over time they could be ‘blessed’, perhaps after a certain amount of time or human review. Wasn’t this how nofollow was supposed to work in the first place?

The community process works. Why maneuver around it? -m

Wednesday, October 18th, 2006

Experts: Don’t try to copy Web 2.0 on phones

Link. My comment: “duh”. In fact, don’t even try to copy Web 1.0 on phones. Even the concept of what’s usable differs on the small screen. -m

Thursday, September 28th, 2006

11 Best Practices for URLs

Article (with a non-best-practice URL) from seomoz. If you’re into this kind of thing, Web 2.0 The Book has an entire chapter on it. Nitpick: Also note how normal folks say URL, not the even-more-geeky URI. -m

Wednesday, September 6th, 2006

Mark Birbeck: ‘Ajax makes browser choice irrelevant’

A must-read posting from Mark Birbeck, who knows a few things about XForms and Web Forms 2.0.

He talks about the respective approaches embodied in XForms and Web Forms 2.0, and concludes that the primary difference between them has little to do with simplicity. He goes on to analyze differences in how developers and users view browsers. Go read it, it’s worth it. -m

Wednesday, August 16th, 2006

Web 2.0, The Book on Amazon

Listed with a pub date of December 6, 2006. (Eric let me in on this link.) The tech editing is out the door, and things are moving along. -m

Tuesday, August 8th, 2006

Datacratic

Still tech editing the final pieces of the Web 2.0 book. Such a huge part of what people mean when they talk about Web 2.0 is the ugly term “user-generated content”. As many have pointed out, all three words that make up that phrase are inaccurate or obtuse (or both). We need a better term.

How about datacratic, or rule by data?

By itself, “data” isn’t such a descriptive word, but I like how it fits in here. Also, the so-called users aren’t directly mentioned, except by reference to the demos of a similar, if overused, word.

Could you see yourself using the term? -m

Tuesday, July 18th, 2006

XForms 1.1 and mobileOK

Word on the street is that some of the new stuff in XForms 1.1 is fantastic. Also on my to-carefully-read list, the mobileOK Scheme. As always, any thoughts welcome here. -m

Tuesday, July 18th, 2006

The right way to do Ajax is declaratively

Write-up by Duncan Cragg. More and more momentum is building for this meme. -m

Monday, June 12th, 2006

Conversations with people you don’t know

To me, the true power of the web is in mediating conversations between parties that have never met.

I consider it a success when a new name posts a comment–and comments have been picking up here. -m

Wednesday, June 7th, 2006

The devastating two-pronged attack

It’s no secret that Yahoo! has two different photo sites. And two different social bookmarking sites. Until pretty recently I thought this was craziness. But gradually I’ve realized the power of this approach.

You take a smaller, hipper embodiment of an idea and run it alongside a mainstream site. The resulting double threat can’t easily be matched by either fresh startups or entrenched players. The smaller site can keep innovating and experimenting away, while the larger site adopts the best ideas and keeps improving without freaking out its base. It’s effective. Car companies have known this for years.

Now, my second realization: The W3C is doing the exact same thing with XHTML 1.x and XHTML 2.x+XForms. -m

Monday, June 5th, 2006

Still crazy after all these years

Part of tech reviewing means dusting off a Windows machine again. I haven’t done more than check email or run Quickbooks online on a Windows machine since I was writing my book in 2003. Remarkably, Windows XP is still the latest desktop OS available. But it needs updates.

Checking my update history, I had 37 updates installed, with Windows Update insisting on installing three more things, including “Genuine Advantage”. Reboot. Yay, now I’m advantaged. Apparently the main new feature in Windows Update is a five-minute “Checking for the latest updates for your computer…” screen. Next up: Service Pack 2, which has to be installed separately.

This is taking a while, so I have time to re-appreciate the nuances of the Windows UI. In the system tray, I see room for six icons, but only four are present. (Clicking the little arrow, though, causes a wiggle, with six icons showing in the same space; after a second, another wiggle and back to four.) All of the icons are blurry, two of them so badly that I have no idea what they’re supposed to represent.

I couldn’t make stuff like this up, but it blue-screened 73 minutes into the ordeal. Unbelievable. On the bright side, it did recognize that the whole Service Pack didn’t need to be downloaded again.

As an aside, the crash tool suggested that I run the Windows Memory Diagnostic tool, so it’s possible the blue screen was hardware related. Amusingly, the Windows Memory Diagnostic tool is exactly 640kb. If you don’t get the tragic coincidence, post a comment and I’ll tell you. :)

The second run through installing Service Pack 2…blue screens again, this time with some USB error. Upon rebooting, a Windows Setup screen draws little dots for several minutes while “restoring previous configuration”, and the desktop warns me ominously that the system is in an “unstable state”, and that I need to go to Control Panel -> Add/Remove Programs and uninstall SP2. The uninstall program helpfully warns me that lots of programs, including “hearts” and “solitaire” toward the top of the list, might stop working, but I bravely press on.

Reboot again. 640×480 resolution, and all kinds of messages like “found new hardware — disk drive”. On the change-resolution screen in Control Panel, the “OK” and “Cancel” buttons are off the screen. And another reboot to get networking set up again. At this point I’ve wasted three hours and six reboots, and I have nothing to show for it but an even more unstable system and Genuine Advantage. Wheeeee! At what point does Microsoft throw the “rewrite from scratch” switch? The saga continues; check comments on this post. -m

Friday, June 2nd, 2006

Web 2.0, The Book

Still in development, but I have clearance to blog about a forthcoming Web 2.0 book. So far I haven’t seen a good book that covers all the technical angles of Web 2.0, from designing URL spaces to Ajax to proper use of HTTP. I’m tech reviewing this book, so I have high expectations for it.

Far more impressive than my meager contributions, however, is the list of authors (in no particular order):

  • Eric van der Vlist
  • Alessandro Vernet
  • Danny Ayers
  • Joe Fawcett
  • Erik Bruchez

That’s a lot of XForms folks. Hmm. :) As you’d expect from a Web 2.0 book, we have tags.

I’ll have more details later–chapters are already rolling in and I have work to do. Keep your eyes open, it’s coming soon from Wiley. -m