Postel’s Law, AI, and crashing

Some semi-random Sunday thoughts. Why is it that a badly formed web page will probably still work, but a badly formed software program (say, a browser) will, for certain kinds of bugs, crash hard?

I think the answer comes down to intent. Even with a missing quote or closing tag, it’s still mostly obvious what should be done with a web page. Different browsers might make different assumptions, resulting in different render trees, which ain’t good, but neither is it catastrophically bad.
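As a toy illustration of that tolerance, here is Python’s standard-library `html.parser` (which, like a browser, is deliberately forgiving) chewing through a fragment with an unclosed `<b>` and a missing `</p>`. It doesn’t raise; it just reports the tags it could recover:

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    """Record start/end tag events as the parser sees them."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))

    def handle_endtag(self, tag):
        self.events.append(("end", tag))

# Malformed on purpose: <b> is never closed, and </p> is missing entirely.
parser = TagLogger()
parser.feed("<p>Hello <b>world")
print(parser.events)  # best-effort tag stream, no exception raised
```

The parser simply emits what it can make sense of and moves on; it is the consumer’s job (the browser’s) to guess at the author’s intent from that best-effort stream.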

On the other hand, if a software program attempts to, say, write to memory it doesn’t own, a serious error is at hand. Attempting to continue could seriously compound an already-bad situation. Why is the program trying to do this? Here the intentional gap is far wider. For example, trying to save an open document might overwrite the still-good-on-disk version with random garbage. No, in the face of serious bugs, the only reasonable course is to cut the losses and terminate the program on the spot, ideally saving a core dump for later human inspection.

So, what if, someday, the hard-AI problem is solved? (Though I prefer cultured intelligence, or “CI,” to “AI”; consult my audio show for details on that.) Say you have a future version of the Linux kernel, and an intelligent supervisor program. Now, if a memory access error occurs, the CI can take a look, consult the source code which it has handy, and figure out exactly what’s going on. In the case of minor errors, the stack and variables can be patched up, bugs automatically filed, and life (and the misbehaving program) can continue on. In the case of serious errors, at least things could be more gently shut down.

Far-fetched? Perhaps. Things like Amazon Mechanical Turk make me think that the only thing to be gained by solving hard AI would be economic (including turnaround time) efficiencies. Then again, sometimes making something more efficient enables its use in entirely new realms. Imagine taking the same system, and unleashing it on the non-well-formed web… -m
