I want to write a program that uses TurKit to pass the Turing Test. Cheating, sure, but should be doable (other than time lag issues), right? -m
Archive for the 'AI' Category
Sunday, October 24th, 2010
Sunday, March 8th, 2009
The remarkable (and prolific) Stephen Wolfram has an idea called Wolfram Alpha. People used to assume the “Star Trek” model of computers:
that one would be able to ask a computer any factual question, and have it compute the answer.
Which has proved to be quite distant from reality. Instead:
But armed with Mathematica and NKS [A New Kind of Science] I realized there’s another way: explicitly implement methods and models, as algorithms, and explicitly curate all data so that it is immediately computable.
It’s not easy to do this. Every different kind of method and model—and data—has its own special features and character. But with a mixture of Mathematica and NKS automation, and a lot of human experts, I’m happy to say that we’ve gotten a very long way.
I’m still a SearchMonkey guy at heart, so I wonder how familiar Wolfram’s team is with existing Semantic Web research and practice, because at a high level this seems very much like RDF with suitable queries run against it. If that’s a fair characterization, that’s A Good Thing, since practical application has been one of SemWeb’s weak spots.
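To make the comparison concrete, here is a toy sketch of the RDF idea: facts stored as subject-predicate-object triples and answered by pattern matching. Everything here (the facts, the `query` helper) is invented for illustration; real Semantic Web systems run SPARQL over curated triple stores.

```python
# Toy illustration only: facts as (subject, predicate, object) triples,
# queried by pattern matching. All data here is made up for the example.

TRIPLES = [
    ("France", "capital", "Paris"),
    ("France", "population", 68_000_000),
    ("Paris", "population", 2_100_000),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "What is the capital of France?" becomes a triple pattern:
print(query("France", "capital"))  # [('France', 'capital', 'Paris')]
```

The curation work Wolfram describes is exactly the unglamorous part: getting real-world facts into a form this regular so the query step becomes trivial.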
Saturday, December 15th, 2007
Surely somebody has implemented this in at least one tool.
In a text editor, I come across a misspelled close tag like </xsl:stylsheet>. My editor highlights the line as an error, which it is, not matching the start tag and all. Why can’t it go the extra step and give me the same kind of interface as I get for misspelled words, with an easy option to repair the spelling? This seems like a much simpler problem than all the hairy cases around human-language spell check…
So, what tools already do this today? -m
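For what it’s worth, the spell-check-style repair seems like a small amount of code. A minimal sketch (the tag names are just the example from above) using Python’s difflib to match a suspect close tag against the currently open tags:

```python
# Sketch of spell-check-for-tags: compare a misspelled close tag against
# the stack of currently open tags and suggest the closest match.
import difflib

def suggest_close_tag(open_tags, bad_tag):
    """Return the open tag most similar to the misspelled close tag,
    or None if nothing is close enough."""
    matches = difflib.get_close_matches(bad_tag, open_tags, n=1, cutoff=0.6)
    return matches[0] if matches else None

open_tags = ["xsl:stylesheet", "xsl:template"]  # innermost tag last
print(suggest_close_tag(open_tags, "xsl:stylsheet"))  # xsl:stylesheet
```

An editor already tracks the open-tag stack for highlighting, so the candidate list is free; the only new work is the fuzzy match and the quick-fix UI.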
Sunday, April 1st, 2007
I can’t talk on the phone right now. Can you follow up on email?
Consider it placed on my todo list.
Let me give you my new address.
Hmm, I don’t have it.
What are you talking about?
(If you get the pattern, post below…) -m
Friday, September 1st, 2006
Most of the censorship stories you hear on the news involve public libraries, but right now I’m writing this from a hospital, which has free wi-fi. Someone providing a service like this has latitude to do pretty much as they please, including censorship, but is it a good idea?
The system here evidently consists of a monitor observing every HTTP access, either forwarding it on or bouncing it to another server, one that seems to be down. That second server, referred to only by numeric IP, has yet to respond even once, so trying to load any page from a blocked site requires a lengthy timeout of about two minutes before landing on a browser error page with a URL something like this:
Let’s take a look at what kind of sites this inane system prevents hospital visitors from viewing directly:
- flickr.com (“Personal Pages”) — because honestly, who in a maternity ward would ever need to upload pictures of something?
- 360.yahoo.com (“Dating&Personal”) — because who in a maternity ward would consider posting to a blog?
- my.yahoo.com (“Portal Site”) — because who, away from home for a few days, might want to check up on news of the world around them?
- thinkbabynames.com (“Personal Pages”) — thankfully, this dangerous and immoral content too has been shielded from the eyes of maternity ward visitors.
At some point, somebody must have pointed out a flaw in their system: any named site can also be reached through its numeric IP. Instead of actually thinking about the problem, they banned all numeric IPs as well, even for sites that would otherwise work.
The upside to filtering this clumsy is that it’s easy to get around. Techniques that work here include using a search engine’s cached page, Coral Cache (.nyud.net:8080), SSH tunneling, VPN, and adding a new entry to the hosts file to reach the same site under a different name. The access is so slow, however (hmm… in a way another form of censorship), that the strain of the additional measures often leads to timeouts and various other errors.
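The hosts-file trick amounts to one line. A placeholder sketch, with an invented IP and hostname standing in for the real ones:

```shell
# Placeholder values only; neither the IP nor the hostname is real.
# If you already know a blocked site's IP, a hosts entry maps a fresh
# name onto it, sidestepping hostname-based filtering.
ENTRY='203.0.113.10  mirror.example'
echo "$ENTRY"   # to apply for real: echo "$ENTRY" | sudo tee -a /etc/hosts
```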
Fortunately, the filtermasters haven’t caught on to dubinko.info yet, thus allowing this post to appear. I hear that site is pretty subversive.
What’s the net?
- It’s obvious their list of sites to filter is woefully generic, not at all adjusted to the environment in which people will actually be using the system. And still, I’d wager they’re paying someone fistfuls of cash to keep updating the generic list.
- I can imagine there are a few sites on the internets that wouldn’t be appropriate in this environment. The majority of well-adjusted adults are perfectly capable of choosing not to visit those sites.
- In cases where supervision is needed, it is effective on a one-on-one basis, often parent-to-child. Witness how many ways there are to easily bypass the filters: software, particularly bad software, isn’t clever enough to replace human judgement.
- Yay for the mobile web, which allowed me to upload my pictures anyway.
Monday, June 5th, 2006
For better or worse. In no particular order.
- Affordable unlimited data plans
- Google getting into the operator business
- Yahoo! getting into the operator business
- Affordable phones not tied to carriers
- The iPod phone
- Development of strong AI (yes I say this about everything)
- Development of decent agent software
- Affordable unlimited voice plans
- Collapse of network neutrality
- An active mobile WebStandards task force (ok, not overnight, but still important)
Sunday, May 7th, 2006
Some semi-random Sunday thoughts. Why is it that a badly-formed web page will probably still work, but a badly-formed software program (say, a browser) will crash hard on certain kinds of bugs?
I think the answer comes down to intent. Even with a missing quote or closing tag, it’s still mostly obvious what should be done with a web page. Different browsers might make different assumptions resulting in different render trees, which ain’t good, but neither is it catastrophically bad.
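That tolerance is easy to demonstrate: Python’s standard html.parser, like a browser, keeps going and recovers the text even when close tags are missing entirely. A minimal sketch:

```python
# A forgiving parser, like a browser, recovers the obvious intent of
# malformed markup. Python's stdlib html.parser demonstrates the idea.
from html.parser import HTMLParser

class TextCollector(HTMLParser):
    """Collect all text nodes, ignoring (and surviving) broken markup."""
    def __init__(self):
        super().__init__()
        self.text = []

    def handle_data(self, data):
        self.text.append(data)

parser = TextCollector()
parser.feed("<p><b>bold text<p>next paragraph")  # no close tags at all
parser.close()
print("".join(parser.text))  # bold textnext paragraph
```

No exception, no crash: the parser just reports what it finds and moves on, which is exactly the behavior the intent argument predicts.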
On the other hand, if a software program attempts to, say, write to memory it doesn’t own, a serious error is at hand. Attempting to continue could seriously compound an already-bad situation. Why is the program trying to do this? Here the gap between what the code is doing and what was intended is far wider. For example, trying to save an open document might overwrite the still-good-on-disk version with random garbage. No, in the face of serious bugs, the only reasonable course is to cut the losses and terminate the program on the spot, ideally saving a core dump for later human inspection.
So, what if, someday, the hard-AI problem is solved (though I prefer cultured intelligence or “CI” to “AI”–consult my audio show for details on that). Say you have a future version of the Linux kernel, and an intelligent supervisor program. Now, if a memory access error occurs, the CI can take a look, consult the source code which it has handy, and figure out exactly what’s going on. In the case of minor errors, the stack and variables can be patched up, bugs automatically filed, and life (and the misbehaving program) can continue on. In the case of serious errors, at least things could be more gently shut down.
Far-fetched? Perhaps. Things like Amazon Mechanical Turk make me think that the only thing to be gained by solving hard AI would be economic (including turnaround time) efficiencies. Then again, sometimes making something more efficient enables its use in entirely new realms. Imagine taking the same system, and unleashing it on the non-well-formed web… -m