Solomonoff Induction: Keystone

After finishing Latin, I find myself contemplating the month ahead. I will not be taking a class in December; instead, I will be working on my keystone.

The keystone is a project that every Quest student completes in order to graduate. The keystone's topic and format have to be approved by the student's mentor, and can take a wide variety of forms. In past years students have built a tiny house, produced an album, conducted a scientific study, written a novel, and, most commonly, written a paper. The keystone relates to the student's question, but is not (necessarily) meant to be an answer to it. For example, if a student's question were "What is love?", he might do research in a lab examining the electrochemical processes that occur in someone's brain when that person is in love, or he might write a paper exploring how 13th-century troubadour poetry conceives of love. The keystone is meant to help Quest students pull all their studies together, and to give them experience conducting independent, original work.

My question is "How should we create artificial general intelligence?". My question is a starting point, and doesn't need to be answered. If I were majoring in biology at a different university, I would not graduate an expert in biology, but I would know more than the average person and be prepared to further my education; the same is true of my question at Quest. Thus at Quest I take classes like Computer Science, Logic and Metalogic, Modern Philosophy, Algorithm Analysis and Design, et cetera. All of these courses help me understand my question better.

My keystone will be a paper examining how Solomonoff induction can solve an open problem in Bayesian epistemology/induction called the zero prior problem. Solomonoff induction is a way to learn from data (experience) such that, as one has more (relevant) experiences, one's subjective beliefs about the rules that govern one's environment converge (arbitrarily closely) to the true rules of one's environment. Bayesian epistemology/induction tells rational agents how they should update their beliefs in statements about the world based on new information, but does not tell the agents which beliefs they ought to start with. Various strategies for choosing initial beliefs have been proposed, but the standard ones sometimes run into problems (such as the zero prior problem). Solomonoff induction seeks to provide a universally optimal set of starting beliefs that avoids these problems. So, for my keystone, I will be arguing that Solomonoff induction adequately solves (or avoids) the zero prior problem.
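To make the problem concrete, here is a minimal Python sketch of my own (an illustration, not anything from the literature; the two hypotheses and their description lengths are made up). In Bayes' rule, P(H|E) = P(E|H)P(H)/P(E), so a hypothesis assigned prior probability zero stays at zero no matter how much confirming evidence arrives. A Solomonoff-style prior, which gives every hypothesis a weight proportional to 2^-k for its description length k, never makes that mistake.

# Minimal illustration of the zero prior problem (a sketch, not from any source).
# Two hypotheses about a coin's bias, with each one's probability of heads.
hypotheses = {"fair": 0.5, "heads-biased": 0.9}

def update(priors, likelihoods, observed_heads):
    """One step of Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    joint = {h: (likelihoods[h] if observed_heads else 1 - likelihoods[h]) * p
             for h, p in priors.items()}
    evidence = sum(joint.values())
    return {h: j / evidence for h, j in joint.items()}

# Agent A assigns "heads-biased" a prior of exactly zero.
priors_a = {"fair": 1.0, "heads-biased": 0.0}

# Agent B uses a Solomonoff-flavoured prior: weight 2^-k for a hypothesis of
# (hypothetical) description length k, renormalised. Nothing gets weight zero.
weights = {"fair": 2 ** -1, "heads-biased": 2 ** -3}
total = sum(weights.values())
priors_b = {h: w / total for h, w in weights.items()}

# Feed both agents 50 heads in a row -- strong evidence for the biased coin.
for _ in range(50):
    priors_a = update(priors_a, hypotheses, observed_heads=True)
    priors_b = update(priors_b, hypotheses, observed_heads=True)

print(priors_a["heads-biased"])  # still exactly 0.0: a zero prior never recovers
print(priors_b["heads-biased"])  # close to 1.0: the nonzero prior lets evidence win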

[Figure: the AIXI model of artificial general intelligence (http://hutter1.net/ai/aixigentle.pdf)]

My motivation for looking at this problem comes from my question "How should we create artificial general intelligence?". One of the more serious proposed models of AGI is AIXI, which is a combination of Solomonoff induction and sequential decision theory. I find AIXI fascinating, and I want to better understand the frameworks on which it is built. Additionally, sequence prediction and induction in general are rich topics, and working on this project will give me a chance to dive deep into them.
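For the curious, the core equation of AIXI from the paper linked above shows both ingredients at once (this is my transcription, so consult the paper for the authoritative statement). In LaTeX notation:

a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The nested maxima and sums over future actions a and observation/reward pairs (o, r) are the sequential decision theory half; the final sum, which weights every program q that reproduces the agent's history on a universal Turing machine U by 2^{-\ell(q)} (so shorter programs count for more), is the Solomonoff induction half.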

I hope that this post has helped you understand what a keystone at Quest is, and how much thought we put into our keystones. Needless to say, I am looking forward to this next month.

Sapere aude!

~Daniel
