Archive for the 'annoyance' Category

Sunday, March 31st, 2013

Introducing node-node:node.node

Naming is hard to do well, almost as hard as designing good software in the first place. Take for instance the term ‘node’, which depending on the context can mean:

  1. A fundamental unit of the DOM (Document Object Model) used in creating rich HTML5 applications.
  2. A basic unit of the Semantic Web–a thing you can say stuff about. Some nodes are even unlabeled, and hence ‘blank nodes’.
  3. In operations, a node means, roughly, a machine on the network, e.g. a “sixteen-node cluster”.
  4. A software library for event-driven, asynchronous development with JavaScript.

I find myself at the forefront of a growing chorus of software architects and API designers who are fed up with this overloading of a perfectly good term. So I’m happy today to announce node-node:node.node.

The system is still in pre-alpha, but it solves all of the most pressing problems that software developers routinely run into. In this framework, every node represents a node, for the ultimate in scalable distributed document storage. In addition, every node additionally serves as a node, which provides just enough context to make open-world assumption metadata assertions at node-node-level granularity. Using the power of Node, every node modeled as a node has instant access to other node-node:nodes. The network really is the computer. You may never write a program the old way again. Follow my progress on SourceForge, the latest and most cutting-edge social code-sharing site. -m

Friday, March 1st, 2013

WFH

The valley is buzzing about Marissa’s edict putting the kibosh on Yahoos working from home. I don’t have any first-hand information, but apparently this applies somewhat even to one-day-a-week telecommuters. Some are saying Marissa’s making a mistake, but I don’t think so. She’s too smart for that. There’s no better way to get extra hours of work out of a motivated A-lister than letting them skip the commute, and I work regularly with several full-time telecommuters. It works out just fine.

This is a sign that Y is still infested with slackers. From what I’ve seen, a B-or-C-lister will ruthlessly take advantage of a WFH policy. If that dries up, they’ll move on.

If I’m right, the policy will indeed go into effect at Yahoo starting this summer, and after a respectable amount of time has passed (and the slackers leave) it will loosen up again. And Yahoo will be much stronger for it. Agree? -m

Monday, December 31st, 2012

New Year’s Resolution

Holding steady at 1440 x 900.

Relevant. -m

Tuesday, August 7th, 2012

Balisage Bound

I’m en route to Balisage 2012, though beset by multiple delays. The first leg of my flight was more than two hours delayed, which made the 90-minute transfer window…problematic. My rebooked flight, the next day (today, that is), is also delayed. Then through customs. Maybe all I’ll get out of Tuesday is Demo Jam. But I will make it.

I’m speaking on Thursday about exploring large XML datasets. Looking forward to it!

-m

Thursday, September 29th, 2011

facebook Challenge results

Andromeda took the facebook Challenge, and found 52 separate requests in 24 hours that would have gone to the facebook mothership. Watch her blog for more updates. How about you?

If you look through these logs, pay particular attention to the referer field. This tells you which site you were browsing when the data set out on its voyage toward facebook.

Tuesday, September 27th, 2011

Take the facebook Challenge

Worried about how much data facebook is collecting on you, even on 3rd party sites, even if you’re signed out? Try this for 24 hours:

  1. Find a file named ‘hosts’ on your computer. On Mac/Linux systems, it’s under /etc/. On Windows, it’s buried at C:\Windows\System32\drivers\etc\hosts. Stash a backup copy somewhere.
  2. Add the following on a new line:     127.0.0.1 www.facebook.com
  3. Configure a web server running on your local machine.
This will forcibly redirect all requests to facebook back to your own machine. At the end of 24 hours, take a look at your web server’s access log. Every line in there is something that would have gone to facebook. Every ‘like’ button, every little banner, all those things track your movements across the web, whether you are signed in to facebook or not. You’ll marvel at how many blank rectangles appear on sites you visit.
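
If you want numbers rather than blank rectangles, a quick tally of that access log does the trick. A rough sketch in Python, assuming a combined-format log at /var/log/apache2/access.log; both the path and the log format depend on whichever local server you set up in step 3:
from collections import Counter

referers = Counter()
with open("/var/log/apache2/access.log") as log:   # assumed path; adjust for your server
    for line in log:
        parts = line.split('"')
        if len(parts) >= 4:      # in combined log format, the referer is the second quoted field
            referers[parts[3]] += 1

for ref, count in referers.most_common(20):
    print(count, ref or "(no referer)")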

Bonus points: At the end of the 24 hours, don’t restore your hosts file.

Please post your facebook-free experiences here.

-m

Saturday, December 4th, 2010

Yahoo Mail’s inexplicable, inexcusable lack of https support

Dear Yahoo,

What’s the deal? Shortly after Firesheep was announced on Oct 24, 2010, you should have had an emergency security all-hands meeting. You should have had an edict passed down from the “Paranoids” group to get secure or else. Maybe these things happened–I have no way of knowing.

But it is clear that it’s been 6 weeks and security hasn’t changed. It’s simply not possible to read Yahoo mail over https–try it and you get redirected straight back to an insecure channel. As such, anyone accessing Yahoo mail on a public network, say a coffee shop or a workplace, is vulnerable to having their private information read, forwarded, compromised, or deleted.

Wait, did I say 6 weeks?–SSL had apparently been rolled out for mail more than 2 years ago, but pulled back due to problems. Talk about failure to execute.

I feel like I missed an announcement. What’s the deal, Y? Show me that you care about your users. No excuses.

Sincerely,

-m

Friday, May 14th, 2010

Geek Thoughts: verbing facebook

Facebook (v): to deliberately create an impenetrable computer user interface for purposes of manipulating users.

More collected Geek Thoughts at http://geekthoughts.info.

Sunday, April 18th, 2010

The challenge of an XProc GUI

I’ve been thinking lately about what a sleek UI for creating XProc would look like. There’s plenty of big-picture inspiration to go around, from Yahoo Pipes to Mac OS X Automator, but neither of these is as XML-focused as something working with XProc would be.

XML, or to be really specific, XML Namespaces, comes with its own set of challenges. Making an interface that’s usable is no small task, particularly when your target audience includes the 99.9% of people who don’t completely understand namespaces. Take, for example, a simple step like p:delete.

In brief, that step takes an XSLTMatchPattern (following the same rules as @match in XSLT) that selects various nodes from the document, and returns a document without any of those nodes. An XSLTMatchPattern has a few limitations, but it is a very general-purpose selection mechanism. In particular, it can reference an arbitrary number of XML Namespace prefix mappings. Behind a short string like a:b lies a much longer namespace URI bound to each prefix.

What would an intuitive user interface look like to allow entry of these kinds of expressions? How can a user keep track of unbound prefixes and attach them properly? A data-driven approach could help, say offering a menu of existing element, attribute, or namespace names taken from a pool of existing content. But by itself this falls short on 1) richer selectors, like xhtml:p[@class = "invalid"], and 2) the general case, where the nodes you’re manipulating might have come from the pipeline, not your original content. (Imagine one step in the pipeline translates your XML to XHTML, followed by a delete step that cleans out some unwanted nodes.)
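
For concreteness, here is roughly the markup a GUI would have to emit for that first case (a sketch, not output from any real tool). The xmlns:xhtml declaration is exactly the piece a namespace-unaware user will never think to add, and without it the prefix in the match pattern is simply undefined:
<p:pipeline xmlns:p="http://www.w3.org/ns/xproc"
            xmlns:xhtml="http://www.w3.org/1999/xhtml"
            version="1.0">
  <!-- remove every XHTML p element flagged as invalid -->
  <p:delete match="xhtml:p[@class = 'invalid']"/>
</p:pipeline>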

So yeah, this seems like a Really Hard Problem, but one that’s worth taking a crack at. If this sounds like the kind of thing you’d enjoy working on, my team is hiring–drop me a note.

-m

Thursday, March 18th, 2010

Kindle for Mac scores low on usability

Here’s my first experience with Amazon’s new Kindle client for Mac: After digging up my password and logging in, I was presented with a bunch of books. I picked the last one I’d been reading. It downloaded slowly, without a progress bar, then dumped me on some page in the middle. Apparently my farthest-read location, but I honestly don’t remember.

A cute little graphic on the screen said I could use my scroll wheel. I’m on a laptop, so I tried the two-finger drag–the equivalent gesture sans mouse…and flipped dozens of pages in half a second. Now, hopelessly lost, I searched for a ‘back’ button, to no avail. Perversely, there is a prominent ‘back’ button, but it’s disabled. Mocking me.

This feels rushed. I wonder what could be pushing Amazon to release something so unfinished? -m

Friday, March 5th, 2010

A Hyperlink Offering revisited

The xml-dev mailing list has been discussing XLink 1.1, which after a long quiet period popped up as a “Proposed Recommendation”, which means that a largely procedural vote is all that stands between the document and full W3C Recommendation status. (The previous two revisions of the document date to 2008 and 2006, respectively.)

In 2005 I called continued development of XLink a “reanimated spectre”. But even earlier, in 2002, I wrote one of the rare fiction pieces on xml.com, A Hyperlink Offering, which, using the format of a Carrollian dialog between Tortoise and Achilles, explained a few of the problems with the XLink specification. It ended with this:

What if the W3C pushed for Working Groups to use a future XLink, just not XLink 1.0?

Indeed, this version has minor improvements. In particular, “simple” links are simpler now–you can drop an xlink:href attribute where you please and it’s now legit. The spec used to REQUIRE additional xlink:type="simple" attributes all over the place. But it’s still awkward to use for multi-ended links, and now even further away from the mainstream hyperlinking aspects of HTML5, which, for all of its faults, embodies the grossly predominant description of linking on the web.
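
To make the improvement concrete, a sketch (element names invented; the xlink: attributes are the point):
<doc xmlns:xlink="http://www.w3.org/1999/xlink">
  <!-- XLink 1.0: an explicit xlink:type attribute was required to mark the link -->
  <see xlink:type="simple" xlink:href="http://example.org/spec"/>
  <!-- XLink 1.1: xlink:href alone is now legit; the type defaults to simple -->
  <see xlink:href="http://example.org/spec"/>
</doc>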

So in many ways, my longstanding disappointment with XLink is that it only ever became a tiny sliver of what it could have been. Dashed visions of Xanadu dance through my head. -m

Tuesday, February 16th, 2010

There is no honor in namespaces

As heard from my friend and Mark Logic contractor Ryan Grimm. -m

Sunday, January 31st, 2010

Writing tools to avoid: WhiteSmoke

I was lured in by a slick promotion, and decided to give a new writing tool a try. WhiteSmoke seems like it’s primarily aimed at folks for whom English is not a first language, but quotes like “Innovative technology for native and non-native English speakers” make it seem like it could help. When I wrote an article for xml.com that summarized recent mailing list activity, I liked to compile readability statistics on the messages. Maybe this would be similar.

I had some question of whether this would work on a Mac or not, but the FAQ assures one that “Mac users are able to use WhiteSmoke’s online interface (also known as the Online Editor), which contains all grammar, enrichment and spelling features” and (in curiously clumsy language) “Should you be running Safari MacOS x10.3 and encounter any problems, please use FireFox.”

Sounds good.

The spell checker is decent, probably about as good as the one in WordPress.

The thesaurus is pretty good. Clicking on almost any word will prompt a drop-down list of synonyms. This sometimes makes selecting text troublesome. The list itself is often too small, and entries toward the bottom are obscured.

The grammar checker is OK, but I couldn’t point to anything it does that Word couldn’t have handled (though it has been a long time since I have run Word).

The user interface is terrible. Any errors are shown in slightly bolder text, either red or green, with nothing distinguishing in the background. I’m not too good with colors, so it’s hard for me to say. The text is very difficult to scan. It has an AutoCorrect mode, which can fix some mistakes without interaction, but just as often breaks your text. For example, it changed the previous paragraph from “Sounds good.” to “remark: Incomplete Sentence good.” The changed text is bold, but only until the next scan, when it becomes indistinguishable from actual text.

At the XML level, it produces horrible output, with stacks upon stacks of nested spans, with duplicate IDs. Some of this may be from the necessary back-and-forth between the web interface and whatever your actual editor is. View source on this posting to see what I mean.

It gets worse. The online interface is limited to 10,000 characters at a time: not words, characters. To compare, this short posting contains slightly more than 3,000 characters. I did some experimentation and found the actual limit is somewhat less than the stated 10K; somewhere north of 7K characters, it will show a spinner forever and never finish checking. Clicking the browser “back” button from the forever-spinner screen takes you back to a blank page–all your text is gone. For someone working on, say, a 60,000 word (360,000+ character) project, it would have to be diced up into maybe 50 small pieces, each individually checked, each introducing the prospect of adding rather than fixing problems. Making even a single pass through all the text would require a senseless amount of tedious cut-and-paste work. It’s essentially unusable.

To make sure I know what I’m talking about, I composed this posting in WhiteSmoke, which very well may be the last time I use it. -m

Wednesday, January 20th, 2010

XForms: binding to an optional element

I asked this on the XSLTForms list, but it’s worth casting a wider net.

Say I have an XML instance like this:
<root>
<format>xml</format>
</root>

Possible values for the format element are “xml”, “text”, “binary”, or a default setting, indicated by a complete absence of the <format> element. I’d like this to attach to a select1 with four choices:

<xf:select1 ref="format">…

* xml
* text
* binary
* default

Where the first three set the value of the <format> element, and the fourth omits the element altogether. In XForms 1.0, the difficulty comes from binding to a nonexistent element, which causes it to become non-relevant. Are there features from 1.1, 1.2, or extensions that make this case easier? Anyone have an example?
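
For reference, the least-bad shape I’ve sketched so far uses a proxy value in a second instance, plus XForms 1.1 conditional actions that insert or delete the real <format> element when the selection changes. This is untested, namespace declarations are omitted, it assumes the main instance above carries id="data", and it’s exactly the kind of thing I’m hoping someone can improve on:
<xf:instance id="ui">
  <ui xmlns=""><choice>xml</choice></ui>
</xf:instance>
<xf:instance id="template">
  <root xmlns=""><format>xml</format></root>
</xf:instance>

<xf:select1 ref="instance('ui')/choice">
  <xf:item><xf:label>xml</xf:label><xf:value>xml</xf:value></xf:item>
  <xf:item><xf:label>text</xf:label><xf:value>text</xf:value></xf:item>
  <xf:item><xf:label>binary</xf:label><xf:value>binary</xf:value></xf:item>
  <xf:item><xf:label>default</xf:label><xf:value>default</xf:value></xf:item>
  <xf:action ev:event="xforms-value-changed">
    <!-- 'default' means: no <format> element at all -->
    <xf:delete nodeset="instance('data')/format"
               if="instance('ui')/choice = 'default'"/>
    <!-- otherwise make sure <format> exists, then set its value -->
    <xf:insert context="instance('data')" origin="instance('template')/format"
               if="instance('ui')/choice != 'default' and not(instance('data')/format)"/>
    <xf:setvalue ref="instance('data')/format" value="instance('ui')/choice"
                 if="instance('ui')/choice != 'default'"/>
  </xf:action>
</xf:select1>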

-m

Friday, December 25th, 2009

The Physics of Santa

Hands down, the stupidest Science Friday segment evar. I want my 11 minutes back. -m

Saturday, October 24th, 2009

Are Windows 7 reviewers logic challenged?

At the risk of sounding like a fanboy, are Windows 7 reviewers logic challenged? Not to pick on any one in particular, but here’s the most recent one I bumped into–I’ve seen similar qualities in other reviews. Under the reasons to get it:

1. Your computer can probably run it. Unlike Vista, which proved a giant slop-feeding resource hog compared to XP, Windows 7’s system requirements haven’t changed much at all since Vista,

So if Vista was a “giant slop-feeding resource hog”, and the Windows 7 requirements haven’t changed much relative to that…how is this a plus again?

2. It costs less than Vista did. Microsoft really seems to have learned its lesson with Vista pricing, which was way too high at first. Although Windows 7 is hardly cheap…

Similar to #1. The argument amounts to ‘it’s not as ridiculous as Vista’. Yay.

3. You’re not stuck with whatever version you choose first. There are a lot of versions of Windows 7, all with different combinations of features. If you buy Home Premium and decide at some future point that you really need Ultimate—who doesn’t need BitLocker at some point?—you don’t have to drop $319.99 on top of the $199.99 you already spent the first time.

Remember the version chart? If for some reason you choose “Professional” over “Ultimate”, saving a cool $20 at retail price, you can always go back and upgrade for a modest $129.99. Remember, this is from the list of reasons to choose Windows.

5. You don’t have to give up Windows XP. Yes, exiting any long-term relationship can be difficult, but sometimes it has to be done.

A reason to upgrade is that you don’t have to give up the thing you are probably upgrading from?

7. Comedic value. Even if Windows 7 can’t be hailed for anything else, it inspired an enlightening and truly hilarious column from PCMag.com Editor-in-Chief Lance Ulanoff…

Comedic value? Seriously? The comedic value in Windows 7 reviews seems to be entirely unintentional… -m

(Posted from 30k feet. Hooray for Virgin America)

Sunday, August 9th, 2009

Kindle Flaw

Here’s the scenario:

The night before a long flight, I upload my personal files into a freshly charged Kindle 2. To preserve the battery, I switch off wireless and in the bag it goes. The next day, on the plane, I open the Kindle…and it’s showing an entirely depleted battery, exclamation point and all. Can you spot the design flaw?

-m

Wednesday, August 5th, 2009

Misunderstanding Markup

Panel 9 of this comic describes XHTML 1.1 conformance as having:

the added unrealistic demand that documents must be served with an XML mime-type

I can understand this viewpoint. XHTML 1.1 is a massively misunderstood spec, particularly around the modularization angle. But because of IE, it’s pretty rare to see the XHTML media-type in use on the open web. Later, panel 23 or thereabouts:

If you want, you can even serve your documents as application/xhtml+xml, instantly transforming them from HTML 5 to XHTML 5.

Why the shift in tone? What makes serving the XML media type more realistic in the HTML 5 case? IE? Nope, still doesn’t work. I’ve observed this same shift in perspective from multiple people involved in the HTML5 work, and it baffles me. In XHTML 1.1 it’s a ridiculous demand showing how out of touch the authors were with reality. In HTML5 the exact same requirement is a brilliant solution, wink, wink, nudge, nudge.

As it stands now, the (X)HTML5 situation demotes XHTML to the backwaters of the web. Which is pretty far from “Long Live XHTML…”, as the comic concludes. Remember when X stood for Extensible?

-m

Wednesday, July 1st, 2009

Transformers round-up

Roger Ebert review (“a horrible experience of unbearable length”)

Fake Shia LaBeouf blog (“no no no no!”)

Smashing box office records (200 mil in 5 days, “The per theater average, by the way, was astonishing: $26,453.”)

-m

Friday, May 22nd, 2009

More on the GOOG book settlement

From Brewster Kahle. Good read, so to speak. -m

Friday, May 15th, 2009

Geek Thoughts: plan to Save The World

My plan is to start unsubstantiated rumors about the SUV Flu, and by association cause people to distance themselves from said implicated devices. You can help.

Despite denials from the CDC (you know they’re in the pockets of the auto industry), SUV flu is serious and spreading fast. You might already have it and not realize it. The SUV flu spreads primarily through close contact with gas-guzzling vehicles, such as so-called Sport Utility Vehicles. California has been hit the hardest, with sources reporting that in a small hamlet outside of Oxnard over 3000 drivers have been seen staggering away from their parked vehicles, and further reports indicate that up to 80 top epidemiologists nationwide are stranded and unable to commute to work.

Transmission occurs primarily via exhaust emission and requires close contact between source and recipient because contaminants do not remain suspended in the air and generally rise directly to the ozone layer. Contact with contaminated surfaces (including bucket seats and 4-wheel-drive shifters) is another possible source of transmission.

The estimated incubation period is unknown and could range from 1-7 days, but more likely 3 years or 36,000 miles.

Patients with uncomplicated disease due to confirmed (or unconfirmed) SUV flu virus infection have experienced inflated ego, increased road rage, chronic lack of consideration for others, decreased awareness of nearby traffic, fatigue, vomiting, or diarrhea. In West Palm Beach, 95% of patients with SUV flu met the case definition of opprobrism.

Anyone showing signs–however faint–of possible SUV flu should pull over, immediately self-diagnose, and proclaim the results on Twitter, Facebook, MySpace, or a nearby blog. If you are somehow still disease-free, carefully avoid contamination vectors mentioned above. Please help spread the warning about this dangerous disease, using the hashtag #suvflu.

Be careful out there.

More collected Geek Thoughts at http://geekthoughts.info.

Saturday, April 4th, 2009

On YouTube’s bandwidth and Technologizer’s problem with basic estimation

This article states:

The analysts determined YouTube’s bandwidth costs by assuming that 375 million unique visitors would visit the site in 2009, with 20 percent of those users consuming 400 kilobits per second of video at any given time. That works out to 30 million megabits being served up per second. That’s a heck of a lot of bandwidth to devote to videos of sneezing pandas.

Do you honestly believe that YouTube is sending out 30 terabits per second (to put it another way, fully saturating over 200,000 OC3 connections)? That on average, every single user who counts as a unique visitor in 2009 spends 20% of 24hrs = 4.8 hours actually downloading video, every day of every week?
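
Spelled out, since the units are doing a lot of work here (Python-style arithmetic, nothing fancy):
users = 375e6                       # claimed unique visitors for 2009
concurrent = 0.20 * users           # 20% watching "at any given time" = 75 million simultaneous streams
rate_bps = 400e3                    # 400 kilobits per second each
total_bps = concurrent * rate_bps   # 3e13 bits/s = 30 terabits per second
hours_per_day = 0.20 * 24           # sustained average: 4.8 hours of video per visitor, every day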

Gesundheit. -m

Update: the quoted article indeed gets it wrong, though it appears the original Credit Suisse analyst report was estimating peak usage, not a running average. Still doesn’t smell right. Updating the article and title to point the finger at the right people.

Monday, March 30th, 2009

The Geek Thoughts Manifesto

Never trust a document with “Manifesto” in the title, nor that document’s writer.

More collected Geek Thoughts at http://geekthoughts.info.

Thursday, March 26th, 2009

Signs of life in cold fusion research

This article seems encouraging. I’ve never been able to come to grips with the anti-CF bias of the scientific community. Sure, a few researchers made fools of themselves two decades ago, but what has that got to do with falsifiable hypotheses? A small amount of research goes on with minimal funding, under the newer name of Low Energy Nuclear Reactions (LENR), and the signs are encouraging.

From the article, researchers used plastic as a permanent record of neutron movement and found that, indeed, neutrons are being produced, leaving tiny tracks behind.

Another recent article from Jeffrey Kooistra has more details of current research. Good stuff, and important if it works. Heck, it’s important if it doesn’t work, because that still expands what we know. -m

Tuesday, March 24th, 2009

XIN: Implicit namespaces

An interesting proposal from Liam Quin, relating to the need for huge rafts of namespace declarations on mixed namespace documents.

In practice, though, almost all elements [in the given example] are going to be unambiguous if you take their ancestors into account, and attributes too.

Amen. I’ve been saying things like this for five years now. Look at any introductory text on XML, and the example used to show the need for namespaces will be embarrassingly contrived. That’s not a dig against authors, it’s a dig against over-engineered solutions to non-problems.

-m

Friday, March 13th, 2009

Lithium battery breakthrough means your phone will charge in 10 seconds? Not so fast.

[Update: now featuring Actually Correct Math. Somebody stop me before I late-night-blog again...]

Recent news coverage mentions a badly-needed breakthrough at MIT in battery technology. Using a slight variation of existing lithium materials, much faster charge and discharge rates are possible. The money quote is that

[Professor Gerbrand Ceder and graduate student Byoungwoo Kang] went on to make a small battery that could be fully charged or discharged in 10 to 20 seconds.

News outlets seem to have latched on to this part of the announcement and hinted that all kinds of battery-powered devices will soon be chargeable in ten seconds. I don’t think we’re likely to see, say, a cell phone (much less a vehicle, as some stories hint) that can fully charge in 10 seconds. Here’s why: A typical cell phone battery might be rated at 800 mAh. It’s not perfectly linear, but you can think of it as being able to deliver 800 milliamps for an hour, or 10 milliamps for 80 hours, and so on. You could approximate the energy storage of the battery by multiplying volts x amps x hours, giving a figure in watt-hours (in this case, at a nominal 3.6 volts, 3.6 x 0.8 x 1 = 2.88 watt-hr). To charge it in 10 seconds, all that energy would need to be delivered within the 10 seconds, which is a sixth of a minute, or a 360th of an hour. So the charging current would need to be 0.8 x 360 = 288 amps, not counting any efficiency losses in the form of heat.

What’s the big deal about pumping out 288 amps? Ohm’s law gives some idea. To push 288 amps through a complete circuit of one ohm (this includes the internal resistance of the battery), you would need to apply 288 volts, with a resulting power consumption of 288 squared, or just under 83,000 watts. That kind of current is more suitable for an industrial arc welder than a household battery charger. Even if the resistance can be made smaller, the benefit is only linear. To compare, my laptop, which dissipates 65 watts over several square inches, gets uncomfortably hot. Or see how long you can hold your hand on a lit 100 watt incandescent bulb. I can’t imagine packing that much energy into a “small” battery. It’s also hard to imagine a safe charging circuit that uses voltages that much above the nominal voltage of the battery.
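
The same figures in a few lines, for anyone who wants to fiddle with the assumptions (nominal numbers, no losses, Python-style):
capacity_ah = 0.8                         # an 800 mAh battery
voltage = 3.6                             # nominal lithium-ion cell voltage
energy_wh = voltage * capacity_ah         # about 2.88 watt-hours
charge_time_h = 10.0 / 3600.0             # ten seconds, expressed in hours
current_a = capacity_ah / charge_time_h   # 288 amps
power_w = current_a ** 2 * 1.0            # I squared times R, with R = 1 ohm: just under 83,000 watts
print(energy_wh, current_a, power_w)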

So professor Ceder’s breakthrough looks great, and probably will be in iPods in a few years, but take the media coverage with a large grain of salt.

-m

Tuesday, March 3rd, 2009

Geek Thoughts: Amazon backtracks on text to speech

With apologies to a real news site. (02-27) 16:14 PST SEATTLE, (AP)

Amazon.com Inc. changed course Friday and said it would allow copyright holders to decide whether they will permit their works to be read aloud by the latest laryngeal apparatus, a feature that has been under development for several thousand years.

The move comes nearly two weeks after a group representing authors expressed concern that the feature, which was intended to be able to read every book, blog, magazine and newspaper out loud, would undercut separate audiobook sales. The average American can use their larynx to read text in a somewhat stilted voice.

Amazon said in a statement that it, too, has a stake in the success of the audiobook market, and pointed to its Brilliance Audio and Audible subsidiaries, which publish and sell professionally recorded readings.

“Nevertheless, we strongly believe many rights holders will be more comfortable with the text-to-speech feature if they are in the driver’s seat,” the company said.

Amazon is working on the technical changes needed for authors and publishers to turn text-to-speech off for individual titles.

The Web retailer also said the text-to-speech feature is legal — and wouldn’t require Amazon to pay out additional royalties — because a book read aloud doesn’t constitute a copy, a derivative work or a performance.

More collected Geek Thoughts at http://geekthoughts.info.

Tuesday, March 3rd, 2009

Boo to Amazon

Dear Amazon, speaking as an author myself: you not only made a bad choice, you set a precedent in the wrong direction. The Authors Guild doesn’t speak for me, nor do I want them to. TTS is only going to get better. The last thing we need is another backward industry fighting progress. -m

Tuesday, February 10th, 2009

iPhone battery life after carrier update

A few days ago, a carrier update arrived for my iPhone. Since then, my battery life has suffered a significant decline. Anyone else seen this? -m

Wednesday, December 24th, 2008

Semi-spam on the rise

With tough times comes a rise in semi-spam. What’s that? There’s a grey area between solicited and unsolicited email. Take a company you’ve done business with once in the past. These guys are dredging up their old databases and really searching for business. Since these are companies I actually like, I don’t have the heart to click the ‘Spam’ button on their emails… -m