Archive for June, 2008

Human-Like Memory Capabilities

June 18, 2008

Human-Like Memory Capabilities by Scott Fahlman, June 17, 2008

My interpretation is that he is looking to build an artificial memory system that can

  1. build-up new complex concepts/facts from incoming knowledge/information
  2. cross-check any given input against known facts
  3. “route” to the relevant fact(s) in response to any new situation (I’ve always wondered if there is a connection to routing on a graph).

All of this happens automatically and rapidly in real time, taking advantage of massive parallelism built up from millisecond circuits, just as the human brain does, without needing the GHz circuits of today’s microprocessors.

A friend of mine asked me: but isn’t this exactly what Google is?

Maybe Google includes a subset of this list. It indexes incoming knowledge (facts) and makes it searchable in response to a human-defined query. Still, I see some differences, which I outline below. See a related blog post, “So What’s the Google End-Game?”, about Google and artificial intelligence that quotes the Atlantic Monthly article “Is Google Making Us Stupid?”

First is the ability to specify the query in real time, in real-life situations. Google, or machines generally, can’t do that yet; only humans can at this point. Second is low search efficiency relative to human memory. Although Google may be the most comprehensive and best search engine in the world today, it still requires a lot of human interpretation to use it and to refine queries through multiple searches based on the initial results returned — as an example, I’m picturing all the effort needed to search for scientific papers and content. Since we end up having to do many, many searches, the “search” efficiency is not very high compared to human thought, which appears to be near-instantaneous across our store of facts — and that with millisecond circuitry compared to GHz microprocessors.

Google search may be a machine, but at the heart of it all are associations and judgments originally created by humans, in at least two ways. PageRank uses the number and prominence of hyperlinks that point to a page as its metric (collaborative filtering) — the more, the better. See “On the Origins of Google”:

… the act of linking one page to another required conscious effort, which in turn was evidence of human judgment about the link’s destination.
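To make the PageRank idea concrete, here is a minimal power-iteration sketch. The toy link graph and the damping factor d = 0.85 are my own illustrative assumptions, not details from the post or from Google’s actual implementation:

```python
# Minimal PageRank sketch via power iteration over a toy link graph.
# The graph, damping factor, and iteration count are illustrative
# assumptions, not Google's real parameters.

def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}       # start with uniform rank
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Each page linking to p passes along an equal share
            # of its own rank -- "a link is a vote."
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - d) / n + d * incoming
        rank = new
    return rank

toy_web = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
ranks = pagerank(toy_web)
# Page C collects links from both A and B, so it ends up ranked highest.
```

The human judgment the quote describes lives entirely in the `links` structure; the algorithm itself only aggregates those votes.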

Another area is Bayesian association of “related” keywords (e.g. “nuclear” is related to “radioactive”) based on mining human-generated content. See “Using large data sets”. These associations are input by humans on the web, and merely computed/indexed by Google. Like Google, people to some degree learn and form their own judgments of relevance by communicating with each other. But I don’t think that explains 100% of how human memory works.
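The flavor of that keyword association can be sketched with simple co-occurrence counts; the tiny “corpus” below is made up for illustration, and real systems mine vastly larger collections with more careful statistics:

```python
# Toy sketch of mining keyword associations from human-generated text.
# Estimates P(b appears | a appears) from document co-occurrence counts.
# The documents here are invented examples, not real data.
from collections import Counter
from itertools import combinations

docs = [
    {"nuclear", "radioactive", "reactor"},
    {"nuclear", "radioactive", "uranium"},
    {"nuclear", "energy"},
    {"solar", "energy"},
]

word_counts = Counter()
pair_counts = Counter()
for doc in docs:
    word_counts.update(doc)
    # Count each unordered pair of words appearing in the same document.
    pair_counts.update(frozenset(p) for p in combinations(sorted(doc), 2))

def cooccurrence(a, b):
    """Fraction of documents containing word a that also contain word b."""
    return pair_counts[frozenset((a, b))] / word_counts[a]

# "radioactive" appears in 2 of the 3 documents containing "nuclear",
# so the two words come out strongly associated.
score = cooccurrence("nuclear", "radioactive")
```

The associations fall out of how humans happened to write the documents; the machine only counts.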

There must be something else, based on personal human experience with the world — like the way babies learn by putting everything in their mouths — that can bootstrap human memory into what it ends up becoming. Is it logic, association, or something else? I think that is what’s missing in today’s machine memories — Google included.

This sums it up… see page 149 of “Advanced Perl Programming,” by Simon Cozens:

“Sean Burke, author of Perl and LWP and a professional linguist, once described artificial intelligence as the study of programming situations where you either don’t know what you want or don’t know how to get it.”


“Secure Lives First” by tapping into larger markets

June 11, 2008

Thanks to Dr. Gavarasana, my 2-page essay got picked up in the May issue of Catalyst magazine alongside several other articles, a few of them listed below…

See page 25 for my article

Secure Lives First
DEVABHAKTUNI SRIKRISHNA
Tapping into larger markets is the need of the hour if rural poverty in India is to be tackled and human security achieved, as the agriculture market is limited to $100–200 billion annually and crop yields are at the mercy of fluctuating weather.

page 20,

Microcredit, NGOs and Poverty Alleviation
MRITIUNJOY MOHANTY
While access to microcredit serves as a useful complement to the survival strategies of poor households, it is not a strategy of poverty alleviation and growth.

page 31,

Is India’s Prosperity Trickling Down?
ABRAHAM M. GEORGE
Despite an increase in employment of 60 million across all sectors in India during the five years ending 2004-05, most of the new jobs have gone to urban areas, and incomes have not risen much for rural workers.

page 38,

Akshaya Patra
The Torch Bearer
At a cost of $28 per child annually, Akshaya Patra is providing underprivileged children in India with a free nutritious meal, often the only meal they receive for the entire day.

A quote from Manmohan Singh:

“Once poverty-stricken, India has been transformed by the past decade’s economic boom into a burgeoning world power whose wealth can be seen everywhere: new cars cruise the streets, high-end apartment blocks are rising on the edges of cities, luxury shops fill the seemingly endless supply of new shopping malls,” he said, and added: “But the inequality in this country of 1.1 billion people is often as conspicuous as the consumption — Indian children are more likely to be malnourished than African ones, and the country is home to about a third of the people in the world living on less than $1 a day.”

Millennium Technology Prize

June 11, 2008

Millennium Technology Prize Awarded to Professor Robert Langer for Intelligent Drug Delivery

The YouTube video on this site interviewing Dr. Langer is cool… he shows how drug delivery via polymers is now leading to precise targeting of drugs down to the single-cell level and enabling release of drugs controlled by microprocessors embedded in the body.

In choosing his career in 1974, he blew off the oil companies, and he talks about how his first boss liked to hire unusual people. For every one or two approaches that succeeded, he found 200 ways that didn’t work.

Fermi’s Nobel lecture (1938)

June 3, 2008

The simplicity of Fermi’s Nobel lecture (1938) is stunning — the implications of this work changed history forever. Other Nobel lectures I’ve read go on and on — this lecture is only 8 pages. Fermi also cites and gives credit to dozens of other researchers upon whose work his discoveries are based. He explains the discovery of radioactivity caused by neutron bombardment and the study of interactions of “thermal” neutrons with all the elements, including uranium and thorium.

p. 415,

The small dimensions, the perfect steadiness and the utmost simplicity are, however, sometimes very useful features of the radon + beryllium sources.


His experiments involve neutron sources, paraffin wax, and spinning wheels, not complicated particle accelerators or machinery. Anyone with freshman-level chemistry/physics knowledge should be able to understand the lecture, and even that is not absolutely needed.