Push Button Paradise
Micah Dubinko
Sun, 19 Mar 2006
Will MTBF be the downfall of Moore's Law?
Or the beginning of the AI revolution? Some semi-random thoughts after reading On Intelligence by Jeff Hawkins and Sandra Blakeslee.
Essentially, Jeff argues that constructing an intelligent machine, with an equivalent of a human-scale cortex, is possible and will be accomplished in the not-too-distant future. Doing so will certainly require massive amounts of fast memory, and he argues that producing much larger memories will become feasible for this application.
To explain: if a gigabit memory chip in my PowerBook has even a single flaw--a stuck bit, for example--the whole chip is essentially useless. Data will get corrupted, and the system might even crash. Because of this, production yields are low, and the physical size of chips has remained fairly small.
Look at it from another viewpoint. Say the Mean Time Between Failures (MTBF) of the circuitry for a single bit of memory is 10^15 hours. For this discussion, let's say that means that if you observed 1,000,000,000,000,000 memory bits for an hour, on average one bit would fail of natural causes. So a gigabit memory chip in my PowerBook should be good for a million hours. Jump ahead 15 years of Moore's Law, though, to a terabit memory chip: now it might only last a thousand hours. The more stuff on a chip, the more likely something will fail. (In fact, I suspect the per-bit MTBF figure will get worse--as transistors approach the size of single atoms, it will be easier for them to individually fail.)
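The arithmetic above can be sketched in a few lines. This assumes the simplest possible model: a fixed failure rate per bit, so a chip's expected time to first failure shrinks in direct proportion to its bit count.

```python
# Back-of-envelope MTBF scaling: with a fixed per-bit failure rate,
# expected chip lifetime shrinks in proportion to bit count.
MTBF_BIT_HOURS = 10**15  # assumption: one failure per 10^15 bit-hours

def chip_mtbf_hours(bits: int) -> float:
    """Expected hours until the first bit on the chip fails."""
    return MTBF_BIT_HOURS / bits

gigabit = 10**9   # ~today's chip (2006)
terabit = 10**12  # ~15 years of Moore's Law later

print(chip_mtbf_hours(gigabit))  # 1,000,000 hours (over a century)
print(chip_mtbf_hours(terabit))  # 1,000 hours (about six weeks)
```

The pessimistic note in the parenthetical would make this worse still: if the per-bit rate itself degrades as transistors shrink, lifetime falls faster than linearly.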
This has bad consequences for the continued exponential growth of conventional computing, but it won't really affect a cortical algorithm. In our own brains, tens of thousands of neurons die off every day, and we don't normally experience a system crash or catastrophic data loss. If it were OK for a memory chip to have a few bad spots on it, even today's processes could deliver much higher yields. Chips could be physically much larger and thus have far more capacity.
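The yield claim can be illustrated with a standard back-of-envelope model (not from the book): if manufacturing defects land randomly on a chip, the defect count per chip is roughly Poisson-distributed, and tolerating even a handful of bad bits turns a poor yield into a near-perfect one. The mean defect count below is a made-up example figure.

```python
import math

def yield_fraction(expected_defects: float, tolerated: int) -> float:
    """Fraction of chips that pass if up to `tolerated` defective bits
    are acceptable, assuming a Poisson-distributed defect count."""
    return sum(
        math.exp(-expected_defects) * expected_defects**k / math.factorial(k)
        for k in range(tolerated + 1)
    )

# Hypothetical process averaging 2 random defects per chip:
print(round(yield_fraction(2.0, 0), 3))   # zero tolerance: ~0.135 of chips usable
print(round(yield_fraction(2.0, 10), 3))  # tolerate 10 bad bits: nearly all usable
```

The same effect is why larger chips hurt so much under zero tolerance: expected defects grow with area, and the zero-defect probability decays exponentially.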
But this leads to another problem, one Jeff doesn't address. If no two memory chips are the same--if each has a unique fingerprint of dead bits--then I doubt it will be possible to simply copy off a cortex. We're talking mathematical chaos to a huge degree. Cortexes aren't just programmed; they need to be trained, much like we do. A small difference, like that stuck bit again, can have a huge effect on the resulting map after training.
So, if Toyota or GM in 2050 develops a truly "smart car", carefully training their prototype "cartex" outfitted with infrared, proximity sensors, cameras, realtime traffic feeds, and so on, they will end up with a single smart car. It probably won't be possible to just "copy off" that cortex to new cars on the assembly line. Each would need to be painstakingly trained. Additionally, natural variations would start to become evident: some cars would be "slow learners", while others might be geniuses or savants.
In case you like this kind of stuff (likely, if you read this far!) let me plug my podcast, Editing Reality. Check it out! -m
posted at: 12:05 | under: 2006-03 | 1 comment(s)
That second factor was the one that stuck out to me, because data-representation fail-safety and scalability are well-known engineering issues (in microchip / hard disk manufacturing). But what I thought he should have spent more time on is applying what he presents as the advantage of the neocortical logical model (with abstract, complex concepts at the top, and specific, distinct concepts and 'data' at the bottom) to data structures and the way we store them.
Semi-structured data models emulate such a logical organization 'inherently.' What is lacking isn't the infrastructure but well-documented best practices for modeling 'human' concepts (as they are represented in our neocortical hierarchies) in semi-structured data models.
Also, I would have enjoyed some material on existing techniques in Artificial Intelligence research on learning and neural nets--how to correct some of their shortcomings (which he spends quite some time pointing out) and apply them alongside more 'traditional' data-mining techniques. Basically, he had plenty of ammunition to take his claims that further step in the context of more recent advancements in data / knowledge representation models (RDF/XML) and more traditional, very well known Artificial Intelligence frameworks (Machine Learning, Description Logics, etc.).
Posted by Chimezie Ogbuji at Sun Mar 19 17:55:07 2006