Human-Like Memory Capabilities by Scott Fahlman, June 17, 2008
My interpretation is that he is looking to build an artificial memory system that can:
- build-up new complex concepts/facts from incoming knowledge/information
- cross-check any given input against known facts
- “route” to the relevant fact(s) in response to any new situation (I’ve always wondered if there is a connection to routing on a graph).
All of this would happen automatically and rapidly in real time, exploiting massive parallelism built from millisecond circuits, just as the human brain does, rather than relying on the GHz circuits of today’s microprocessors.
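To make the “routing on a graph” hunch concrete, here is a minimal sketch of finding a chain of associations between two concepts. The graph, node names, and use of breadth-first search are all my own illustrative assumptions; Fahlman’s actual system (Scone) is far richer, and the brain presumably does something parallel rather than this serial search.

```python
from collections import deque

# Hypothetical toy concept graph: each concept links to related ones.
# Entirely made up for illustration.
concept_graph = {
    "penguin": ["bird", "antarctica"],
    "bird": ["animal", "wing", "fly"],
    "animal": ["living thing"],
    "wing": ["fly"],
    "antarctica": ["cold"],
    "fly": [], "living thing": [], "cold": [],
}

def route_to_fact(graph, start, goal):
    """Breadth-first search for a path of associations between two
    concepts -- a crude serial stand-in for spreading activation."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of associations found

print(route_to_fact(concept_graph, "penguin", "fly"))
# → ['penguin', 'bird', 'fly']
```

The interesting part of Fahlman’s vision is that the brain seems to do the equivalent of this search over millions of facts at once, in a few hundred milliseconds.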
Maybe Google covers a subset of this list. It indexes incoming knowledge (facts) and makes it searchable in response to a human-defined query. Still, I see some differences, which I outline below. See a related blog post, “So What’s the Google End-Game?”, about Google and artificial intelligence, which quotes the Atlantic Monthly article “Is Google Making Us Stupid?”
First, humans can specify a query in real time, in real-life situations; Google and other machines can’t do that yet. Second, search efficiency is low relative to human memory. Although Google may be the most comprehensive and best search engine in the world today, it still requires a lot of human interpretation: we refine queries through multiple searches based on the initial results returned. As an example, picture all the effort needed to search for scientific papers and content. Since we end up running many, many searches, the overall “search” efficiency is not very high compared to human thought, which appears to be near-instantaneous across our store of facts, and which manages this with millisecond circuitry rather than GHz microprocessors.
Google search may be a machine, but at its heart are associations and judgments originally created by humans, in at least two ways. First, PageRank uses the number and prominence of hyperlinks pointing to a page as its metric (a form of collaborative filtering): the more, the better. See “On the Origins of Google”:
… the act of linking one page to another required conscious effort, which in turn was evidence of human judgment about the link’s destination.
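That link-as-vote idea is easy to sketch. Below is a minimal PageRank power iteration over a tiny made-up link graph; the pages and links are invented for illustration, and real PageRank handles scale, dangling pages, and spam in ways this toy does not.

```python
# Hypothetical link graph: each page lists the pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Power iteration: each page repeatedly shares its rank among the
    pages it links to, so every human-authored link acts as a vote."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outs in links.items():
            share = rank[page] / len(outs)
            for dest in outs:
                new[dest] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
# Page "C" scores highest here: three pages link to it.
```

The point of the quote survives in the code: the algorithm is mechanical, but the link structure it ranks is an accumulation of conscious human judgments.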
The second is Bayesian association of “related” keywords (e.g., “nuclear” is related to “radioactive”) based on mining human-generated content. See “Using large data sets”. These associations are created by humans on the web and merely computed and indexed by Google. To some degree, like Google, people may form their own relevance judgments by communicating with each other. But I don’t think that explains 100% of how human memory works.
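The raw signal behind such keyword associations can be sketched with simple co-occurrence counting. The document snippets below are made up, and Google’s actual mining is far more sophisticated (proper Bayesian or pointwise-mutual-information scoring over web-scale corpora), but the human-authored co-occurrences are the input either way.

```python
from collections import Counter
from itertools import combinations

# Made-up snippets standing in for human-generated web text.
docs = [
    "nuclear reactor radioactive fuel",
    "radioactive decay nuclear physics",
    "nuclear power plant safety",
    "solar power plant energy",
]

# Count how often each pair of words appears in the same document.
pair_counts = Counter()
for doc in docs:
    words = set(doc.split())
    for a, b in combinations(sorted(words), 2):
        pair_counts[(a, b)] += 1

def related(word, k=3):
    """Return the k words that most often co-occur with `word`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == word:
            scores[b] += count
        elif b == word:
            scores[a] += count
    return [w for w, _ in scores.most_common(k)]

print(related("nuclear"))  # "radioactive" ranks first (co-occurs twice)
```

Nothing here requires the machine to understand nuclear physics; the relatedness was put there by the people who wrote the documents.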
There must be something else, grounded in personal experience with the world (like the way babies learn by putting everything in their mouths), that bootstraps human memory into what it ultimately becomes. Is it logic, association, or something else? I think that’s what’s missing in today’s machine memories, Google included.
This sums it up. See page 149 of “Advanced Perl Programming” by Simon Cozens:
“Sean Burke, author of Perl and LWP and a professional linguist, once described artificial intelligence as the study of programming situations where you either don’t know what you want or don’t know how to get it.”