Formal System

A site about formal logic, literature, philosophy and simulations. And formal systems!

Robert Nozick – Fiction: The nature of reality — August 6, 2012


In this brief essay, American philosopher Robert Nozick creates a character named after himself who self-consciously asserts that he is a fictional character, yet wonders about his author: the invisible God who controls the character’s thoughts, motivations, and so on. The character then wonders whether his God-like author might himself be a character in another God-like author’s story, and so on ad infinitum…

Robert Nozick’s Fiction is a great set of thoughts about reality, fiction, and the differences and similarities between the two. It also picks up Descartes’ cogito ergo sum and offers some genuinely interesting insights about it.

Read it for free here.

P.S. Robert Nozick’s Fiction was included in the last chapter of Douglas Hofstadter’s The Mind’s I.

Non Serviam, A.I. and personetics: an insight into simulations — April 17, 2012


Stanislaw Lem was a Polish science-fiction writer and philosopher. Among his brilliant works, Non Serviam stands out as a poetic (yet highly detailed, drawing on computer science, evolution, and artificial intelligence) description of conscious software simulations. These conscious programs reach a level of complexity of thought very similar to ours, to the point of wondering about their hypothetical creators:

“Nothing is due Him. A God who craves such feelings must first assure his feeling subject that He exists beyond all question. Love may be forced to rely on speculations as to the reciprocity it inspires; that is understandable. But a love forced to rely on speculations as to whether or not the beloved exists is nonsense. He who is almighty could have provided certainty. Since He did not provide it, if He exists, He must have deemed it unnecessary. Why unnecessary? One begins to suspect that maybe He is not almighty. A God not almighty would be deserving of feelings akin to pity, and indeed to love as well; but this, I think, none of our theodicies allow. And so we say: We serve ourselves and no one else.”

Unfortunately, their creators are not gods but scientists who know that “the bills for the electricity consumed have to be paid quarterly, and the moment is going to come when my university superiors demand the “wrapping up” of the experiment”.

For information about the meaning of the story’s title, “Non Serviam”, click here.

You can read the story here for free.

About free will, evil and love: Dialogue with an amoralist God — January 27, 2012


Smullyan is quite an off-beat person. Why?

He started as a magician and later went on to become a logician. Most remarkably, he also became a Taoist (for the core ideas of this Eastern philosophy, click here). It is no secret that Taoist views are rooted in profound paradoxes, nor is it a secret that Western logicians have battled paradoxes the way doctors battle a disease. Logic and Taoism would seem to conflict, but apparently this fellow has managed to keep his inner peace.

“Is God a Taoist?” is a dialogue between God and a theist in which the latter asks the former why he bestowed free will on humans. What follows is the explanation of a quite laid-back God, who describes the problem of evil and his love for humans, as well as a surprisingly simple idea showing why humans need free will.

P.S. Taoists do not believe in divine entities, so the title of the dialogue can itself be taken as a sort of paradox, a subtle reference to a core paradox surrounding the idea of free will (a paradox that happens to be mentioned in the dialogue).

You can read it here for free.

Artificial Intelligence: Soul Searching and views of an A.I. advocate. — January 26, 2012


I will simply list common objections to the idea that machines can, someday, think, and reply to them in a systematic manner. If you want your question added (and answered), write it in the comments and I will add it to the list of objections. 😉

First of all, I should make clear that I don’t think our “modern” machines can support full human cognition; they are neither complex nor flexible enough. But I do think the time needed to reach the required complexity, and with it a decent level of flexibility, is finite, even if getting there will take several revolutions in computer science.

1 – Machines can only do what you tell them to do.

The above statement implies predictability of (and thus full knowledge about) the machine’s actions. But as programming languages move farther and farther from the original machine language, this predictability is gradually being lost. This means that eventually our predictions about what a machine will do, based on what we told it to do, will only be approximate: we will know only the “space” into which the machine’s actions will fall. As a simple analogy: when we tell a computer to calculate the first million digits of Pi, we don’t know what those digits will be; we only know that there will be exactly one million of them.
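As a sketch of this point, here is a small Python program (my own illustrative example, not from the post, with hypothetical function names) that computes Pi to a requested number of digits using Machin’s formula: we fix the length of the answer in advance, but not its content.

```python
from decimal import Decimal, getcontext

def arctan_inv(x: int, digits: int) -> Decimal:
    """arctan(1/x) via its Taylor series, to roughly `digits` decimal places."""
    getcontext().prec = digits + 10          # extra guard digits
    total = Decimal(0)
    power = Decimal(1) / x                   # (1/x)^(2k+1), starting at k = 0
    threshold = Decimal(10) ** -(digits + 5)
    k = 0
    while power > threshold:
        term = power / (2 * k + 1)
        total += term if k % 2 == 0 else -term
        power /= x * x
        k += 1
    return total

def pi_digits(digits: int) -> str:
    """First `digits` significant digits of Pi via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    getcontext().prec = digits
    return str(+pi)                          # unary + rounds to current precision

# We know in advance how long the output is, but not which digits appear in it.
print(pi_digits(30))
```

Asking for a million digits instead of 30 changes nothing conceptually: the “space” of the output (its length and format) is known; the digits themselves are only discovered by running the program.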

 

2 – Machines cannot feel, and thus they cannot think in a human (i.e., intelligent) way.

As far as I am concerned (and most psychologists agree), emotions are a by-product of intelligence.

 

3 – Humans are sometimes irrational beings, while machines are always rational beings. Moreover, machines cannot be irrational, because they are mechanical beings.

[Image: The brain is rational, the mind might not be.]

The problem with the above statement (machines are always rational; humans are sometimes irrational) is that it confuses levels of description.

To simplify, there are two levels: a low, rigid level and a high, flexible level.

Now, let us do some analogies:

Brain <=> Hardware of the machine <=> Low level (rigid)

Mind <=> Software of the machine <=> High level (flexible)

Now, are humans sometimes irrational? Yes. Are human brains sometimes irrational? No: neurons either fire or they don’t. There is no paradox here. Are human minds sometimes irrational? Yes. But this flexibility, the ability to switch between rational and irrational, is a consequence of the complexity of a rigid system (the brain). When people say “machines cannot be irrational”, they are talking about a machine’s low level (its hardware), yet when they say “humans can be irrational”, they are referring to humans’ high level. The problem with the objection, then, is a failure to differentiate between levels of description: the hardware of a machine is as rigid as a human brain, and computer software is (potentially) as flexible as the human mind.
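To illustrate how a rigid low level can support an unruly-looking high level, here is a minimal sketch (my own example, not from the post) using Wolfram’s Rule 30 cellular automaton: every cell obeys one fixed boolean rule, just as every neuron either fires or not, yet the pattern that emerges looks anything but orderly.

```python
def rule30_step(cells):
    """One step of Wolfram's Rule 30: each cell's next value is a fixed
    boolean function of its neighbourhood, new = left XOR (centre OR right),
    with wrap-around edges."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

# A perfectly rigid rule, applied to a single seed cell, produces an
# irregular high-level pattern row after row.
row = [0] * 31
row[15] = 1
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

The analogy is loose, of course, but it makes the point concrete: “rigid at the bottom” does not imply “rigid at the top”.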

 

4 – How can you program thinking if you do not know what it is?

See my earlier post about the Turing Test.

 

5 – Okay, let us say that at some point in the future we get machines thinking like humans. Wouldn’t the machine be simulating thinking rather than actually thinking? Simulated thoughts are not actual thoughts. If I simulate milk, no matter how complex the simulation, I will never be able to drink it. Simulations are not real.

First, I recommend that you read this and this, since they explain my view quite nicely.

 

Axiom 1 – Simulation of concrete objects is never complete regardless of the complexity of the simulation.

Axiom 2 – Simulation of abstract objects can be complete.

Regarding Axiom 2: what is the difference between a simulated song and a real song? There is no difference, because:

First: Simulations are data.

Second: Abstract objects are data.

Third: Songs are abstract objects.

And the above holds all the more for an “object” as complex and abstract as the human mind.
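A minimal sketch of the song argument, under the assumption that a digitally rendered tone can stand in for a “song”: rendering the “real” recording and the “simulated” recording yields byte-identical data, so there is nothing left to distinguish them. (The function name and parameters here are my own, purely illustrative.)

```python
import math
import struct

def tone_bytes(freq=440.0, rate=8000, secs=0.25):
    """Render a pure tone as 16-bit little-endian PCM bytes:
    the song reduced entirely to data."""
    n = int(rate * secs)
    return b"".join(
        struct.pack("<h", int(32767 * math.sin(2 * math.pi * freq * i / rate)))
        for i in range(n)
    )

recording = tone_bytes()   # call this one the "real" song
simulation = tone_bytes()  # call this one the "simulated" song
assert recording == simulation  # byte for byte, they are the same data
```

Contrast this with simulated milk: no sequence of bytes is drinkable, because milk is a concrete object (Axiom 1), whereas the song was data all along (Axiom 2).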

 


Artificial Intelligence and the Turing Test: A Coffeehouse Conversation —


What is intelligence? No one knows exactly what it is.

Can machines think? Well, if we have just stated that no one knows exactly what intelligence is, the answer seems to be no.

 

Axiom 1 – Humans are intelligent.

Axiom 2 – Intelligence is nowhere to be seen directly, but one can fairly say that cognition supports intelligence; that is, intelligence stems from (which is not the same as being “caused” by) thinking processes.

Axiom 3 – Whoever (or whatever) thinks like a human is intelligent.

Alan Turing

These are some of the inferences that drove computer scientist and A.I. advocate Alan Turing to devise an experiment such that, if a machine passes it, the machine should be considered intelligent.

 

Summary of the Turing Test

Three rooms. Three participants.

One interrogator and two “players”, A and B, one of whom is a machine. Of course, the interrogator does not know which one is the machine.

The three rooms (one per participant) are isolated from each other except for a text-only communication system through which the interrogator types questions (or natural conversation). If the interrogator fails to discern which player (A or B) is the machine, then the machine is said to be as intelligent as a human.
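The protocol above can be sketched as code. This toy harness is purely illustrative (the player and judge functions are hypothetical stand-ins, not a real chatbot or a real judge); it just wires up the three roles over a text-only “channel” of labelled transcripts.

```python
import random

def imitation_game(human, machine, judge, questions, seed=0):
    """Toy harness for the imitation game: the judge sees only text
    transcripts labelled A and B and must name the machine."""
    labels = ["A", "B"]
    random.Random(seed).shuffle(labels)      # hide which label is which
    players = {labels[0]: human, labels[1]: machine}
    transcript = {lab: [players[lab](q) for q in questions] for lab in labels}
    guess = judge(transcript)                # the judge's pick for "the machine"
    return guess == labels[1]                # True: the machine was unmasked

# Hypothetical players and judge. This "machine" gives itself away
# instantly, so even a naive judge unmasks it.
human = lambda q: "Hmm, I would say " + q.lower()
machine = lambda q: "QUERY RECEIVED: " + q.upper()
judge = lambda t: max(t, key=lambda lab: sum("QUERY" in a for a in t[lab]))

print(imitation_game(human, machine, judge, ["How do you feel today?"]))
```

Passing the test means the opposite outcome: over many rounds of free conversation, the judge’s guesses should do no better than chance.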

From Hofstadter’s The Mind’s I, I take one of the chapters where the Turing Test is discussed quite nicely.

Coffeehouse Conversation