Formal System

A site about formal logic, literature, philosophy and simulations. And formal systems!

The logic of meaning — August 10, 2015

The logic of meaning

What does it mean to mean?

Form and Logic

I have written many posts about formal logic on this blog, so I won’t delve into its mechanics here. Formal logic, as used here, is all about applying classical logic to abstract forms. Anything can be a form.

Semantics: Saussurian linguistics

Ferdinand de Saussure is best known for his concept of the dual ontology of words, whereby a word is made of an arbitrary link between a physical “token” called the signifier and an abstract “token” called the signified. The physical token (a sequence of sounds, graphemes or tactile patterns) is the physical embodiment of an abstract concept.

Semantics: Networking

When the question ‘what does x mean’ is asked, the answer will inevitably be a network of linguistic tokens (standing for concepts) that are related to x. For example, when asked ‘what is an apple’, the answer will yield a particular arrangement of the tokens ‘fruit’, ‘is’ and ‘tree’ among other tokens. And this is essential to what is understood as meaning: it relates known concepts to the concept asked about. In the case of the apple, it is related to ‘fruit’ by stating that it is an instance of ‘fruit’, and it can also be related to ‘tree’ through ‘fruit’. It is worth noting that manipulating concepts without a linked linguistic token tends to be harder than manipulating concepts that have tokens.

It is my view that meaning is a particular type of network. So when we ask for the meaning of x, we are asking for a network of concepts that are related to x.

Semantics: Your Neighbours Make You

It is my view that a signifier is a physical token and a signified is a network of related concepts. With this in mind, I decided to build a semantic network to test my views. Concepts are connected to other concepts by linguistic “nodes” that we call function words. Function words designate the type of relationship between two concepts. For example: ‘x is in y’ and ‘y is in x’ designate different types of relationships between x and y, and the semantic networks displaying the two types of relationships would be different. Now consider ‘x flies to y’ and ‘x travels to y’. The two can be said to be different in the sense that ‘flies’ and ‘travels’ have different semantic networks, but they can be said to be similar in the sense that both include the semantic network of ‘motion’ and thus, when queried, the semantic network will understand* that ‘motion’ can be inferred from ‘flies’ and ‘travels’.
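The kind of typed relationships described above can be sketched as a graph whose edges are labelled by function words. Below is a minimal illustrative sketch in Python; the names (`SemanticNetwork`, `relate`, `implies`, `holds`) are my own inventions for illustration, not the actual network I built:

```python
from collections import defaultdict

class SemanticNetwork:
    """A toy semantic network: concepts linked by typed relations (function words)."""

    def __init__(self):
        self.edges = defaultdict(set)         # relation -> set of (x, y) pairs
        self.implications = defaultdict(set)  # relation -> relations it entails

    def relate(self, x, relation, y):
        """Record that the relationship 'x <relation> y' holds."""
        self.edges[relation].add((x, y))

    def implies(self, specific, general):
        """Declare that the relation 'specific' entails the relation 'general'."""
        self.implications[specific].add(general)

    def holds(self, x, relation, y):
        """True if the relation holds directly or via an entailed relation."""
        if (x, y) in self.edges.get(relation, set()):
            return True
        return any(relation in self.implications.get(r, set()) and (x, y) in pairs
                   for r, pairs in self.edges.items())

net = SemanticNetwork()
net.relate("bird", "flies to", "tree")
net.relate("tourist", "travels to", "city")
net.implies("flies to", "moves to")   # 'flies to' includes the network of motion
net.implies("travels to", "moves to")

print(net.holds("bird", "moves to", "tree"))     # True, inferred from 'flies to'
print(net.holds("tourist", "moves to", "city"))  # True, inferred from 'travels to'
```

The `implies` declarations are what capture the observation above: ‘flies to’ and ‘travels to’ are stored as different relations, yet both entail ‘moves to’, so motion can be inferred from either.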

Semantics: Understanding

Since semantics involves links between abstract concepts, formal logic can be applied to semantic networks. If applied, it gives the semantic network the ability to reason about its concepts. Thus, motion can be understood from both flying and travelling*. The assumption being made here is that the meaning of objective concepts can be distilled to logical relationships. This assumption leads to the question:

is the semantics of objective concepts a type of logical relationship?

To that I have nothing but speculation, though I think that the answer is yes. Given a finite definition of an objective concept, a semantic network could be built that represents the semantics of that concept in a logical way. This means that the concept of motion could be inferred from the concepts of flying, walking and travelling. Note that, due to the limitations of formal logic, any semantics involving the notion of time would be inexpressible in the type of semantic network described here. However, sequential logic could be an answer to implementing the notion of time in a semantic network.

Semnet: Testing the Waters

I recently built Semnet as a tool to test my ideas about semantics. Semnet operates using ‘x is y’ statements; thus, any semantics needs to be transcribed into an ‘x is y’ statement. From the Readme: to say ‘a tree has the property of being green’, we can say ‘a tree is Pgreen’ and define Pgreen as ‘Pgreen is the green property’, further defining the green property with ‘The green property is a property’ and ‘The green property is green’, whereby the green property is defined as that thing which is green and is a property.
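A rough sketch of how chained ‘x is y’ statements can support inference, using the Readme example. This is illustrative only, not Semnet’s actual implementation:

```python
def is_a(facts, x, y, seen=None):
    """Does 'x is y' follow from the facts by chaining 'is' statements?"""
    seen = set() if seen is None else seen
    if x == y or (x, y) in facts:
        return True
    seen.add(x)  # avoid looping over circular chains of statements
    return any(a == x and b not in seen and is_a(facts, b, y, seen)
               for a, b in facts)

# The Readme example transcribed as 'x is y' pairs.
facts = {
    ("a tree", "Pgreen"),
    ("Pgreen", "the green property"),
    ("the green property", "a property"),
    ("the green property", "green"),
}

print(is_a(facts, "a tree", "green"))  # True, via Pgreen and the green property
```

Each pair is one ‘x is y’ statement, and the query simply follows the chain of statements until it reaches the target concept.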

Concluding

It is my view that the semantics of objective concepts, as viewed under Saussurian linguistics, can be formalised and subjected to the rigour (and power) of formal logic. Doing this opens the door to semantic reasoning and all the possibilities that this type of reasoning offers. I have built a semantic network to test this.

There exists a field known as formal semantics that seems to aim in the same direction; however, the examples I have seen are purely abstract, with no actual application whatsoever.

Towards a Language of Thought — June 24, 2015

Towards a Language of Thought

I have previously written about the ideal of using formal logic to express personal views of the real world in a logical way. This idea of formal logic as an aid to clear thinking has echoes across the opinions of many logicians, including the very founder of the field, Aristotle. His treatise laying down the foundations of formal logic was given the title “The Organon” (the organ/instrument), pointing to the role of logic as an instrument for establishing facts about the world. Others like Boole explicitly referred to the notion of “Laws of Thought”, which strongly advocates the view that the principles of the act of thinking are governed by logical “laws”.

Past: Expert Systems, Completeness and Consistency

In the past, I worked on a simplified expert system to establish a consistent system of ethics. The aim was to filter down your ethical views and, by disallowing inconsistencies, distil a consistent version of them. However, the expert system was only fit for absolutist ethics, and after careful consideration, I halted the project on the basis that, in my opinion, relativist ethics is not suitable for the logical distillation that the expert system performed and, in practice, all systems of ethics are relativist. In my view, this is because, in relativist ethics, the truth value of a given ethical statement is contingent on the idiosyncrasies of the individual rather than on general principles. In consequence, given a particular set of ethical statements as axioms, it is not possible to derive from them a set of principles that applies to any other ethical statement without adding further ethical statements as axioms. In addition, inconsistencies among ethical statements are sometimes solvable only by adding further non-general ethical statements as axioms. This means that a system of relativist ethics is not complete in the way that absolutist ethics is. The inability to ensure consistency and completeness in relativist ethics was the reason why logical distillation was not possible there.

Present: Semantic Networks and Consistency

My new proposal is different and it stems from my dislike of circular definitions in dictionaries.

Circular definitions == Circular logic

Circular logic is an unaccepted form of inference where the individual’s aim is to establish the truth value of a factual statement X by including that statement as a part of an axiom Y that has not been agreed upon. Example:

X. Humans have souls

Y. Humans without souls are machines and no human is a machine

A circular definition is an unaccepted(?) form of definition where the individual’s aim is to establish the meaning of a word X by including that word as a part of the meaning of a word Y whose definition has not been agreed upon. Example:

X. Person: human

Y. Human: person
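Treating a lexicon as a directed graph, where each word points to the words used in its definition, circular definitions like the one above can be detected mechanically as cycles. A small illustrative sketch, using a hypothetical toy lexicon:

```python
def find_cycle(definitions, word, path=()):
    """Follow definition links from 'word'; return the first cycle found, or None."""
    if word in path:
        return path[path.index(word):] + (word,)  # close the loop for display
    for used in definitions.get(word, []):
        cycle = find_cycle(definitions, used, path + (word,))
        if cycle:
            return cycle
    return None

# Toy lexicon: word -> words appearing in its definition.
lexicon = {
    "person": ["human"],
    "human": ["person"],
    "tree": ["plant"],
    "plant": [],
}

print(find_cycle(lexicon, "person"))  # ('person', 'human', 'person')
print(find_cycle(lexicon, "tree"))    # None
```

Breaking the circularity then amounts to removing at least one edge from every such cycle, i.e. redefining at least one of the words in terms that do not lead back to it.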

On linguistic descriptivism versus linguistic prescriptivism

Opinions on the acceptance of circular definitions in written human languages can be divided into two schools: the prescriptivist school and the descriptivist school. The prescriptivist school is made of individuals who have preferences regarding certain aspects of human language use. The descriptivist school, on the other hand, is followed by individuals who approach languages the way anthropologists approach tribes: they consider themselves mere observers and do not voice their preferences about aspects of human language use.

For purposes of semantic consistency and general pragmatism, I advocate that languages should have traits that are useful to their speakers and not have traits that are not useful to their speakers. I consider a lack of semantic consistency a useless trait, and therefore I advocate a language that has semantic consistency. I do not advocate forcing everyone to follow the same approach to language. Rather, just as in software development variations of a language can serve different purposes, so variations of a human language can serve different purposes. A semantically consistent language is one such purpose. A single-syllable language might be another. And advocates of language descriptivism can simply use any of the variations. The general idea is that, in my opinion, there is no reason why human languages cannot be modified to suit particular needs.

My goal is three-fold:

a) To bring forth semantic consistency in a variation of a human language by breaking circular definitions

b) To make a semantic network out of the resulting lexicon from a)

c) To introduce facts in this semantic network so that it can employ logical thinking when queried about these facts

Machine learning and human learning — March 14, 2015

Machine learning and human learning

Humans had dreamed of flying like birds for millennia. They had the idea that the ability to fly was dependent on having wings attached to your back. This idea was expressed in poems, tales and paintings for a long time.

By the time humans managed to fly, that idea was long gone, replaced by aerodynamic principles. Our notion of “flying” had gradually changed from the ability to flap wings attached to your back to move through the air to the ability to operate a machine that obeys the principles of aerodynamics to move through the air.

Learning has a similar status, in that we have long thought that only humans can be capable of learning at the speed and complexity that we do.

But perhaps, at some point in the future, the principles of learning will be torn open and exposed under the light of science, as was done with the principles of flying.

Attempts like machine learning, even if they do not offer the ground-breaking principles themselves, might be a part of that process. And by then, our notion of learning will have completely changed.

What will learning look like? If machine flying looks nothing like the way birds fly then we might want to think that machine learning will look nothing like the way humans learn. But the end result will be the same. And that is all that matters.

Epicureanism, Buddhism and the Neuroscience of Desires — March 7, 2015

Epicureanism, Buddhism and the Neuroscience of Desires

Humans are born with (and develop) desires. Some of these desires are detrimental to our short-term or long-term physical/psychological health. We know this yet we nevertheless pursue them.

Epicureanism and Buddhism are two of the few philosophies whose ideal of life is an austere lifestyle based on enjoying simple pleasures that are easy to acquire.

These ideas developed in ages when factual knowledge about the basis of human behaviour was minimal. Humans have incredible powers of behavioural self-regulation compared to other animals but, overall, we are still incapable of ignoring desires that we know are detrimental to our health.

Under our current understanding, human behaviour is, like most phenomena, deterministic. Some of the factors that determine our behaviour, like social factors, we can avoid. Yet, biological factors remain mostly outside of our power to change. And these factors determine our behaviour, including our desires.

So what’s a human to do if he desires something detrimental to his health? Most advice stems from the dubious idea that humans’ powers of self-regulation can override detrimental desires. A look at the obesity rates in certain Western countries would be enough to disprove that idea. Whether our desire is to eat a food we know is unhealthy, drink a drink we know is unhealthy or engage in an activity we know is dangerous, the factors that determine our desire are not often within our power to change. Desires are not rational. One could say that desires are more akin to axioms in a behavioural formal system. They are impervious to the powers of reason. This imperviousness seems to be proportional to the intensity of the desire experienced.

So perhaps, sometime in the future, our knowledge of the neural basis of behaviour will develop to the point that we can pinpoint the neural mechanisms underlying particular behaviours such as desires. If that were the case, then perhaps we could devise a technology that, making use of this neuroscientific knowledge, could allow us to ‘turn on’ or ‘turn off’ desires.

In this hypothetical future, I would see Buddhism approving this technology, which would allow us to literally remove desires that are detrimental to our health. I would also see certain people fearful that this technology would disprove the notion of the free will to sin, as the technology would correctly and successfully operate under the assumption that all desires can be turned on or off regardless of the intentions/morality of the individual.

Interestingly enough, while I would expect this technology to change our understanding of the notion of “free will” and the “drive” of human behaviour, I would not expect it to disprove the notion that there is a sort of metaphysical entity called the “soul” that drives human behaviour and is the basis of free will. The argument would go something along the lines of: “this technology effectively proves that desires are determined by certain characteristics of our brain and that these characteristics can be turned on or off in the same way that any part of a human’s body can be removed. But an individual with the ability to choose whether or not to turn off a particular desire is exercising free will, because the act of choosing is free”.

And a counterpoint to that would go along the lines of: “it might seem like free will, but the ultimate factor in whether this individual will choose to turn off a particular desire D1 is determined by another desire D2, not by a metaphysical entity. D2 would in turn be determined by another desire D3, and so on. The chain would not stop at any particular point. This backwards causality would take us back to the individual’s birth, to his mother’s birth, to his mother’s mother’s birth and so on. Eventually, the backwards chain of causality would lead us to the beginning of the universe. But at no point would there be an opening for the argument for souls to surface.”

Emotions and Rationality: My views — November 22, 2014

Emotions and Rationality: My views

I have started reading a book in the Oxford University Press Very Short Introductions series on Emotion. Like many other people, the author, Dylan Evans, thinks that emotions make us more rational, where I take “more rational” to mean “more able to achieve whatever goal we want”. I have seen similar views elsewhere, perhaps from Blackmore or Hofstadter, so I thought I would provide my own views.

I disagree with the idea that emotions make you more rational. I have seen many examples of how emotions supposedly make you more able to achieve your goals. But they all assume one thing: that being able to recognise emotions in other people is only possible if you have emotions yourself. I think that’s nonsense. It is true that if, colloquially speaking, you are not fluent in the lingua franca par excellence, you are missing out. But all you need is the ability to recognise or infer emotions in other people, not to experience them yourself.

I agree with the idea that emotions make you more irrational. Emotions are, essentially, relatively arbitrary biological reactions/changes that result in a change in your priorities and/or behaviour. The problem with this is that, while emotions can be partially controlled with some training, they cannot be fully controlled, thus making your behaviour partially subject to arbitrary, non-predictable processes. How can you gear your behaviour towards a goal when a portion of your behaviour is affected by processes you cannot control? You might be able to do it, but even in the absence of external obstacles, you cannot ascertain the amount of effort it will take you to achieve your goal, because you might come across a stimulus that triggers an emotion that conflicts with your goal-reaching behaviour. Compare the performance of that agent with the performance of an agent that is fully in control of his internal biology (i.e. no emotions). Unlike our emotional agent, in the absence of external obstacles, the emotionless agent can execute the behaviour needed to achieve a goal without worrying about whether his emotional state will conflict with his goal-reaching behaviour. If the emotionless agent can recognise emotions, no emotional agent could have the upper hand merely by having emotions. If anything, the emotionless agent is more efficient, rationally speaking, because he will achieve his goal in the presence or absence of emotional stimuli, while an emotional agent might struggle or stop himself from achieving his goal because some emotional stimulus triggered certain emotional reactions in him.

  • An emotionally-moving visual stimulus such as a gory or comedic movie could stop an emotional agent from carrying out an action such as reading a book, but it would not stop an emotionless agent from doing so.
  • An emotional agent might experience emotions that change his behaviour in a way that takes him away from his goal. So a student feeling boredom might not study for an exam, while an emotionless student would be able to study.
  • An emotional agent would harm himself (see junk food, lack of exercise and drugs) even if it was against his interests, while an emotionless agent would not do so.

But emotionless agents are not only better when it comes to negative emotions, they also have the upper hand in positive emotions.

  • A relationship between two humans that we might term “loving” could have an equivalent without the arbitrariness of emotion. Care, responsibility and goal-reaching support are easily feasible without arbitrary biological processes. In the absence of any other factor, an emotionless agent would remain in the relationship, while an emotional agent could get his libido running high when he comes across an opportunity to mate with another individual and end up ruining his relationship by cheating.
  • An emotionless agent would have no qualms about ending the relationship if his partner tried to harm him. An emotional agent would be open to the possibility of staying within the abusive relationship if he was experiencing the appropriate emotions.

I don’t see how emotions could make an emotional agent outperform an emotionless rational agent in goal-reaching behaviour. I have seen the evolutionary argument thrown around, but all we know is that emotions most likely emerged before humans, so any possible advantage of them is not necessarily related specifically to the fulfilment of human goals.

However, it is the case that humans have a wider range of emotions than other animals. How do we explain that? Perhaps the wider range of emotions did not worsen the goal-reaching behaviour of humans, so it was not something that made humans less fit, and hence it stuck around. Perhaps emotions were effective because they enhanced the expressiveness of body language. How would an emotionless agent that understood emotions fare against this? Well, surely if our emotionless agent understood emotions, he could use them to enhance the expressiveness of his body language as well. Voice intonation and facial expressions are things that could be learnt by an emotionless agent. So there is no need for the actual biological process.

I am not disputing the idea that experiencing emotions must have been useful for humans and non-human animals at some point in the past. I am just noting that, compared with an emotionless agent with the same intellectual capabilities, the emotional agent performs worse when it comes to goal-reaching behaviour.

Epistemology: in the beginning there were beliefs —

Epistemology: in the beginning there were beliefs

This is just a brief post about making some epistemological matters crystal clear:

1. Human knowledge is a form of belief

2. Human knowledge is axiomatic

Human knowledge is a form of belief

belief: An acceptance that something exists or is true, especially one without proof.

Now, some people might argue that a statement S is not a belief. In order to demonstrate that S is not a belief, they would have to provide a proof that S is true. This proof would be a set of arguments involving logical and/or empirical/inductive reasoning whose only conclusion is that S is true. However, both logical reasoning and empirical/inductive reasoning themselves rely on the acceptance that other statements are true. In other words, logical and empirical reasoning involve holding certain beliefs.

Beliefs: formal logic

So, assume that someone were to prove to me that S is true using a logical argument. This logical argument would only be valid if one accepted the statements that underlie the rules of formal logic. The three statements that underlie formal logic are called the Laws of Thought. The first states that “a thing is the same as itself”, the second that “it will never be the case that A is both true and false”, and the third that “either A is true or A is false”. These statements are believed to be true. And when it comes to the “roots” of logical reasoning, this is as far as you can go. I call these “roots” axioms. An axiom is something that is held to be true without the need to provide a proof.
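The Laws of Thought can be checked exhaustively over the two classical truth values, as in the small sketch below. Of course, the check itself presupposes the laws it checks, which is precisely the point about axioms: they cannot be proved from anything deeper.

```python
# Check the classical Laws of Thought over both truth values.
for A in (True, False):
    assert A == A             # identity: a thing is the same as itself
    assert not (A and not A)  # non-contradiction: never both true and false
    assert A or not A         # excluded middle: either true or false

print("The Laws of Thought hold for both truth values")
```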

Beliefs: physicalist empiricism

Empiricism is far muddier, as it tackles the even muddier area of ontology. Formal logic is something of a discrete world where things are either black or white. But empiricism is more of a grey area whose core idea is that truths about the world can be grasped through the senses, whether natural or augmented by technology, as opposed to rationalism, where truths about the world are grasped through logical reasoning. Empirical reasoning also has its share of underlying statements, which mostly relate to ontology. The first is that whatever we observe can be interpreted or made sense of by humans. The second (for followers of the physicalist school of empiricism) states that all entities/forces/objects that exist are physical or material. The third is that the processes of the universe are measurable at a level of complexity that can be understood by humans.

The first statement, which talks about truth through sensory experience, relies on statements such as “sensory experience either is reliable or can be made reliable” being true. How do you prove the reliability of sensory experience without using sensory experience to establish the proof? It seems that you can’t if you are a human. Solipsism is one of the ontological stances that denies the reliability of sensory experience when it comes to matters of truth about the world.

The second statement relies on statements such as “all phenomena that can be observed by us can be measured” and “for practical purposes, phenomena that cannot be observed or inferred from observations do not exist” being true. And in turn, these statements depend on statements such as “measurability and existence are properties that always go together” being true. Now, as sensible an approach as this is, it inevitably raises the question of how we can establish that measurability is a property of all things that also have the property of existing in this universe. Of course, seeing how measurements are the main way in which we discover knowledge about the universe, it does not seem feasible for us to gain empirical knowledge about the limits of measurability without using measurements. So, we have to take this statement as an axiom.

The third statement relies on statements such as “we are capable of measuring the processes of the universe, or will become capable of doing so in our endeavour of empirical knowledge-seeking” being true. This statement, in turn, relies on statements such as “the processes of the universe have a complexity C, and humans are capable of understanding processes of complexity C” being true. As smart as we are compared to other life forms on this planet, the idea that we are smart enough to understand the processes of the universe just because our brains let us do things that no other life form we have seen can do is not currently provable, and it does not seem to be true. Surely, our capabilities do not change with the absence or presence of life forms less capable than us. Of course, you could come up with other statements to rationalise the above statement about complexity, but it seems to me that it is just another case of megalomania in our species.

So, as seen above, when looked at in detail, the two main avenues of truth are full of beliefs at their lowest level. Beliefs about the Laws of Thought being true, beliefs about the physicalist ontology being true, beliefs about universal measurability: all of these underlie the tools that we use to gain knowledge, and we use those tools on the basis that the set of beliefs we hold is true. I am not discussing here whether the statements underlying empirical and logical reasoning are true or not. The main idea of this post is that they are beliefs; in other words, they are statements for which we have no proof but which we nevertheless accept as true.

Human knowledge is axiomatic

Following the conclusion of the previous point, it seems safe to conclude that human knowledge is axiomatic. In other words, if you were to question every factual piece of empirical knowledge and/or every logical statement, you would arrive at the axioms of empiricism and formal logic, and you would not be able to go any deeper, because axioms form the rock-bottom level of our knowledge: the first stone in our pyramid. This is why I cringe every time I see the words “fact” and “belief” opposed to each other. In particular, I see the word “fact” used in a very dogmatic way, as something whose truth has been discerned beyond all doubt when, as we have seen, nothing can be discerned beyond doubt by us. Not with the tools we currently have. Instead of saying “S is true”, the logically valid sentence would be: “according to the axioms of empirical/logical reasoning, S is true”. This longer sentence highlights the conditional nature of our statement. It is true insofar as the axioms of our reasoning are true. And this means that we are open to the possibility that, were these axioms ever proved false, S would be false, everything else being equal. Uncertainty, like mortality, is something of a constant in our lives; we seek ways to tackle it, but denying its pervasive presence amounts to something akin to denying that A&B is true when A and B are true.

Free Will Series – 5. Simulation in Science and Artificial Intelligence — October 27, 2014

Free Will Series – 5. Simulation in Science and Artificial Intelligence

This is the last article of the Free Will Series and the post that closes a period of regular writing on this blog.

This post focuses on the idea of simulation in sciences and in Artificial Intelligence. Previous posts dealt with the notion of simulation in fiction, how it has been portrayed and the relationship between simulation and recognition.

Simulations in Science

The scientific method: discovery through simulation

Science creates/discovers knowledge through a process called the scientific method. While it might not be obvious, the scientific method is essentially a simulation of a real process under controlled conditions. When you carry out an experimental test, you are trying to model an aspect of our physical world, and the assumption is that, if the simulation is accurate enough, we can use it to make inferences about the way our world works.

Apart from this, the idea of simulation lurks somewhere else: in statistics.

Random sampling: using trees to simulate forests

Scientists often cannot get large numbers of subjects into their studies (for reasons of complexity or feasibility), so they resort to smaller numbers of subjects chosen randomly (in the social sciences, often an opportunity sample for ethical reasons) and rely on the idea of random sampling to keep their models accurate. The idea behind random sampling is that a random sample tends to be more representative of the population than a non-random sample. So a random sample is taken as a sort of micro-model of the population, and by applying a test to the sample you assume that the result is statistically equivalent to running the test on the whole population. In other words, testing a sample simulates testing the population.
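The claim that a random sample acts as a micro-model of the population can itself be checked by simulation. A small sketch, using a synthetic population rather than real study data:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# A synthetic 'population' of 100,000 measurements.
population = [random.gauss(50, 10) for _ in range(100_000)]
pop_mean = sum(population) / len(population)

# A random sample of 500 'subjects' acts as a micro-model of the population.
sample = random.sample(population, 500)
sample_mean = sum(sample) / len(sample)

print(f"population mean: {pop_mean:.2f}")
print(f"sample mean:     {sample_mean:.2f}")  # lands close to the population mean
```

Running this, the sample mean tracks the population mean closely despite the sample being 0.5% of the population, which is the whole point of treating the sample as a simulation of the population.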

Simulation in Artificial Intelligence

A.I. is the field of simulation par excellence, since one of its core aims is the simulation of human intelligence. The field is divided into two sub-fields: one where simulating the modus operandi of human intelligence is the main aim, and another where achieving the results of human intelligence is the ultimate goal. In the former, we find things like cognitive modelling/architectures and the Human Brain Project. In the latter, we find things like machine learning. In both cases, there is a simulation, whether of cognitive mechanisms or of skills performed by humans, like recognising a flower in a picture.

Artificial Intelligence has been placed inside a broader field called Artificial Life.

Simulation in Artificial Life

Just as A.I. simulates human intelligence, Artificial Life (A.L.) simulates the properties of carbon-based life forms. It is also divided into two research paths: life as it is (the simulation of biological mechanisms) and life as it could be (the creation of systems that simulate the general properties commonly associated with carbon-based life forms, like metabolism, evolution, self-reproduction, etc.). Just as in A.I. most of the advances are in the sub-field where systems perform human skills like visual recognition, progress in A.L. mostly revolves around systems that do things we consider life-like. So in the “life as it is” sub-field, we have things like the OpenWorm project and the field of computational biology, while in the “life as it could be” sub-field, we have things like robotics, Tom Ray’s Tierra and other life simulators. In both cases, there is a simulation, whether of the mechanisms of carbon-based living systems or of traits possessed by life forms, like group behaviour and the ability to change one’s behaviour in response to the surrounding environment.

Simulation is a central idea both in science in general and Artificial Life in particular. There is something fascinating about the idea of mimicking something. We find truths through simulation and science suggests that a lot of what we perceive and feel (including our sense of visual perception and free will) is a sort of simulation made by our brains. This seems to tell us that simulation is very important to us. Yet simulation itself is not a very clear notion. What is sameness? What is change?

Free Will Series – 5. The Self: Recognition in Fiction — September 6, 2014

Free Will Series – 5. The Self: Recognition in Fiction

The Walk: Futuristic Personal Development and Former Selves

Personal Development is a topic of interest to many people. Whether by reading books or listening to motivational speeches, people wish to change aspects of themselves: their beliefs, their personality traits and, in some parts of the world, their sexual orientation. In The Walk, we are shown a piece of neuro-technology that allows the user to modify his beliefs, personality and morality. A man walks ahead of his soon-to-be assassin and tries to convince the assassin that he will do anything the assassin wants in exchange for his life. Instead, the assassin offers him a device that will alter the man's view of the self.

This philosophy holds that there is no such thing as a permanent self; instead, a person is made of a collection of "selves", which are patterns of thought, where a particular self is defined as a set of thoughts and personality traits at a particular time. According to this view, since a self is essentially a pattern of information, there exists the possibility that sometime in the future, whether for a lifetime or a second, an individual will exist with the same self as the holder of the philosophy.

The person to be killed argues that his "self" cannot exist in the future because his self is also made of circumstantial factors (the particular story of his life, or life time-line), and thus anyone without his story cannot be him.

The assassin replies that even though he does not have the life time-line, personality traits or beliefs of his 5-year-old self, he still regards the self that existed in his body 20 years ago and the person in his body now as the same. Just as his sense of self transcends all the physical and psychological changes that humans undergo as they mature, and he identifies himself with a 5-year-old that looks and acts nothing like him, he could also identify himself with a future individual who is physically different from him but in some respects similar to him.

There is also a part where they talk about surviving through their sons, the idea being that as long as a living human carries a portion of genetic information similar to one's own (as in the case of a biological son), a person's body dies but his "self" manages to evade death.

The story is interesting because it touches upon the fact that we can identify with humans that are physically and psychologically different from us, yet might or might not identify with humans that are psychologically and/or physically similar to us. Just as, in our particular way, we "recognise" our self and feel no sadness at the fact that our body is constantly replacing old parts of itself with new ones, or "recognise" our self in our biological descendants however physically and psychologically different they are, the technological device provides an alternative view of the self: one in which our self has a non-zero probability of existing (i.e. of being hypothetically recognisable by us) at different points in time (past and future), for different periods of time (a lifetime, years, seconds), at varying degrees of self-similarity (individual thoughts or collections of thoughts, individual traits or collections of traits and other behavioural patterns), in different human bodies of different ages.

A Kidnapping: simulated unhappiness, simulated lover

A Kidnapping tells the story of a world where individuals activate previously prepared simulations of their selves when they die. A man receives a video in which his wife, tied up and scared, asks for his help. The help consists in paying a ransom to a particular bank account. After checking that his wife is not tied up but safe at her workplace, the husband dismisses the video as a prank. Later, however, he considers paying the ransom (likely a relatively large amount of money). He reasons that the video was an extremely accurate simulation, so accurate that he could not tell it was fake: he recognised every single gesture of the woman in the video as a gesture of his wife. Every subtle aspect of his wife's behaviour and physical appearance had been displayed by the woman in the video. Since he loves his wife, he pays the ransom but keeps it from her. When his wife discovers that he is paying the ransom, an argument ensues. She argues that the one in the video is not her, but he thinks she could have been scanned, and thus the woman in the video was indeed his wife. His wife, after seeing the video, disagrees that the simulation is an accurate reproduction of her, but her husband says something very interesting: that no one can reliably judge a simulation of himself. And he has a point.

Sameness and the lack thereof is in the brain of the beholder.

The pattern-recognition powers of the brain are incredibly useful to us. But anything in excess can be bad, and this power is no exception. Apophenia, as was said in a previous post, is the downside of our brain's pattern-recognition powers. It refers to the annoying tendency of human brains to see patterns in random data, such as a face in the moon or an agency behind atmospheric dynamics. Similarly, we can sometimes do the reverse, a sort of negative apophenia, and ignore the patterns, focusing instead on the spaces in the data with no patterns. In the story, the wife seems to do this. She ignores how much the woman resembles her and focuses on extremely subtle differences that might not even exist.

Human bodies are constantly replacing their parts

On another occasion, the protagonist recalls a conversation they had about making a simulation of her brain. She refused on the grounds that a simulation would be an imitation of her "self" by a computer. He replied that life is based on imitation, that every part of her body is constantly being replaced by an "imitation", and thus, it is implied, she is already an imitation.

A meta-simulation: simulating simulations

Later, he figures out that the video is indeed a simulation, but one taken not from a simulation of his wife but from his simulated brain's memories of her. The second implication is that, before the video was sent to him, the simulation of the protagonist was tested on various ransom scenarios to see which one affected him most emotionally. He pays the ransom (which is actually a regular series of payments) because he thinks his simulated wife might be "conscious/self-aware" rather than a sort of tape-like record extracted from his simulated brain.

What we have in our brains

The protagonist considers a hypothetical conversation (a simulated conversation, indeed) following his confession that he is honouring the ransom. His wife considers the simulated wife "just" a meta-simulation, implying a lower ontological status. Her husband considers that all they can ever have of each other are the "portraits of each other inside their heads". She dislikes the idea of him considering her just an "idea". But he corrects her: he does not consider her an idea; rather, an idea (a simulation or model) of her is all he has, and he concludes that that simulation in his head is all he can ever love, because it is all there is of her inside his brain. A simulation.

Change and Identity

Learning to Be Me: Who is Who

“I was six years old when my parents told me that there was a small, dark jewel inside my skull, learning to be me.” The neural implant, called the jewel, acts as a sort of supervised neural network that is fed the data of a self and eventually, through trial and error, learns to imitate that self, so that when the protagonist's "self" dies, the jewel can continue the self's existence. An interesting detail is that neither the jewel nor the brain knows which "one" is the brain and which the jewel. But when asked, the protagonist nevertheless replies, in a rather emotional tone, that "he" is the "real" human.
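The jewel's training regime can be sketched as a toy supervised-learning loop. The code below is my own illustration, not anything from the story: the "brain" is an arbitrary fixed input-to-output mapping, and the "jewel" is a tiny linear model nudged, shared input by shared input, until its responses match the brain's.

```python
import random

def brain(x):
    # The behaviour to be imitated (a hypothetical, simple linear response).
    return 2.0 * x + 1.0

class Jewel:
    def __init__(self):
        self.w = 0.0  # weight: starts knowing nothing about the brain
        self.b = 0.0  # bias

    def respond(self, x):
        return self.w * x + self.b

    def learn(self, x, target, lr=0.01):
        # Nudge parameters towards the brain's response
        # (a gradient step on the squared error).
        error = self.respond(x) - target
        self.w -= lr * error * x
        self.b -= lr * error

random.seed(0)
jewel = Jewel()
for _ in range(5000):
    x = random.uniform(-1, 1)   # the shared sensory input
    jewel.learn(x, brain(x))    # the "teacher" keeps the jewel in step

# After enough training, brain and jewel respond (almost) identically.
print(abs(jewel.respond(0.5) - brain(0.5)) < 1e-3)  # prints True
```

After training, an outside observer comparing only responses could not tell which of the two produced a given output, which is exactly the who-is-who ambiguity the story exploits.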

It becomes quite tricky to think of "real" and "fake" selves when the jewel and the human brain receive the same sensory input and the jewel is constantly trained to act like the brain it simulates. "So as long as the jewel and the human brain shared the same sensory input, and so long as the teacher kept their thoughts in perfect step, there was only one person, one identity, one consciousness." The story engages with the who-is-who question for a while and gives interesting opinions: "This one person merely happened to have the (highly desirable) property that if either the jewel or the human brain were to be destroyed, he or she would survive unimpaired. People had always had two lungs and two kidneys, and for almost a century, many had lived with two hearts. This was the same: a matter of redundancy, a matter of robustness, no more." The idea is that one's self resides in two objects instead of one, so the loss of one does not affect you any more than the loss of a kidney does.

The process by which the brain is discarded and the jewel takes its place is called the "switch". The protagonist's parents reveal to him that they underwent the switch 3 years ago, and since the jewel is a replica of the brain, the protagonist did not notice. "This is why we did not tell you. If you had known we had switched, at the time, you might have imagined that we had changed in some way. By waiting until now to tell you, we have made it easier for you to convince yourself that we are still the same people we have always been." As was mentioned earlier, if the protagonist had known that he should expect some difference, his brain could have performed a sort of negative apophenia and found differences where there are none. The rest of the story continues this theme of recognition of the self in the context of a society with advanced neuro-technology and a different conception of the self.

Closer: the sense of other-selfness

“Nobody wants to spend eternity alone.” The first story mentioned how we change across time and how our sense of identity does not necessarily change because of it. In Closer, a couple literally decides to temporarily merge their selves, but the result is not satisfying and they break up. They got so close that they became one for a brief period of time, and as a consequence, the merge broke the sense of otherness between them. In the story, otherness is defined as the basis of intimacy, and intimacy is seen as the major drive in their relationship. "What Sian had always wanted most in a lover was the alien, the unknowable, the mysterious, the opaque. The whole point, for her, of being with someone else was the sense of confronting otherness. Without it, you might as well be talking to yourself." The idea is that people tend to love other people, where "other" refers to any self that is not one-self.

“We knew each other too well, that’s all. Detail after tiny fucking microscopic detail.” In their desire for intimacy, which can be seen as a desire to get closer and closer to each other, they accidentally destroyed the sense of otherness between them; they got so close that there came a point where there was no other but only one-self. “Together, we might as well have been alone, so we had no choice but to part. Nobody wants to spend eternity alone.”

Perhaps the question Who am I? is no different from Why does the Earth not fall? or What keeps the Earth floating in space? Perhaps we make assumptions about our identity, assumptions that are not true. If we cannot recognise our selves, how can we know that our selves have this property called free will?

Free Will Series – 5. The Self: Simulation in Fiction – A simulated tale — September 5, 2014

Free Will Series – 5. The Self: Simulation in Fiction – A simulated tale

This is the first of a three-part article to close the regular writings of this blog. The three-part article focuses on the idea of the self. The first part talks about the idea of the self in works of fiction.

What we mean by self

It’s one of those old questions with no straight answer. An attempt at straightforwardness: "self" is used to refer to a thing T that refers to the thing T, where (a) thing T is the same as thing T and/or (b) there is only one thing T. It is a tricky concept because it assumes that there are some things T that can refer to themselves. When a person points to his chest, we can say that the person is pointing to him-self. In other words, we mean that the thing that does the pointing and the thing that is being pointed at are the same thing. But if a computer displays a message saying "I cannot access the folder", does that mean that the computer is referring to it-self? Or does it mean that the programmer, as the person who designed the software and the computer's messages, is the thing doing the pointing and the computer is the thing being pointed at?

How is the idea of the self related to simulation?

Simulation refers to the act of imitating the features of something. So when we simulate something, we aim to imitate all the features of that which is being simulated. This implies the idea of sameness. The self, meanwhile, refers to a situation where the thing doing the referring and the thing being referred to are the same. So, if a computer says "Cogito ergo sum", does that mean that the computer (as a human would) is referring to it-self? Or does our interpretation of the meaning change depending on our knowledge about the message? Eliza and Parry were two early AI programs that seemed to be able to hold conversations and, to some extent, to refer to themselves. But Eliza and Parry were just simulations, and their acts of referring to themselves could arguably be said to be simulations too.

Sameness

If one of the features of simulating something is the act of referring to oneself, is that feature simulatable? In other words, when a person says "I" and a computer says "I", is there any difference beyond the fact that one is a human and the other a computer? When a person says "I" and another person says "I", is there any difference beyond the fact that they are two different people? Another way of asking the question is: how much sameness is there in the utterances of "I" by the computer-person and person-person pairs?

Ineffabelle

The Princess Ineffabelle is a tale by Stanislaw Lem about simulation and sameness. In the story, a king contemplates being simulated inside a digital world where he can live alongside his beloved Ineffabelle. But no matter how precise the simulations of the king are, he rejects them on the grounds that they are not accurate enough, because as long as he exists, no simulation can be perfect. And of course it can't be, since, if a simulation is supposed to be a copy with 100% similarity, the simulated and the simulation must have the same physical and non-physical traits. One thing that a simulation cannot simulate is the fact that a simulation is something that comes after something else. All simulations are made from simulated things. And if analogies from chaos theory are allowed, you could say that simulating (or approximating) a system for which you don't have the initial conditions (in our case: you can't go back to the time before the simulated thing existed) makes a perfect simulation practically impossible.

This is what the story seems to hint at. The very fact that a simulation does not come into existence at the same time as the simulated thing is the dividing line that separates a simulation from the simulated. In the story, the proposed solution is to annihilate the king's "original" form. But I think that killing the original would not change the fact that the simulated king appeared after the "original" king, and the word "original" itself suggests that it (the original king) was the thing from which another thing was made. In this case, a simulation.

Simulated songs and original songs

Is there such a thing as a simulated song? Of course not. Copying a song character by character produces two songs which are equally original. Yes, a copy was made, and you could argue that the act of copying a song implies the preexistence of the song to be copied. However, since the two songs are indistinguishable apart from the fact that one was made from the other, it seems reasonable to argue that both songs are original, or rather, that both are equally valid instances of the song.
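The point can be made concrete with a trivial sketch of my own: two character-by-character copies of a song are identical in content and distinguishable only as separate objects, by their history rather than by any inspectable feature.

```python
# A "song" as an abstract sequence of characters.
original = list("do re mi fa sol")

# Copy it character by character: the act of copying presupposes the original,
# but the result carries no trace of that history.
copy = [ch for ch in original]

print(copy == original)  # same content: True
print(copy is original)  # same object: False
```

Nothing in the content of `copy` marks it as "the simulation"; only our knowledge of which list was built from which does, which is the essay's point about abstract things.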

A similar line of reasoning could be applied to any other abstract thing that is simulated. It might be because abstract things, unlike physical things, do not change, and even when they do, they tend to change in rather predictable ways, so that if we simulate an abstract thing, the simulation and the simulated will change in exactly the same ways. Two otherwise identical physical things, by contrast, can change in different ways merely because they do not have the same location in the physical world. And since two things cannot occupy the same location in our physical world, there will always be a feature, apart from the issue of the simulated thing existing before the simulation, that will not be simulatable. Of course, you could always simulate the universe and its physics down to its smallest component.

Quote:

When a computer displays a message along the lines of “I found a virus”, what does the “I” mean? Who is pointer and who the pointed in that “I”? What does it mean when a human says “I”?


Free Will Series – 5. Simulation in Fiction — June 15, 2014

Free Will Series – 5. Simulation in Fiction

This post will be used as an introduction to the idea of simulation for the last two articles in the Free Will Series.

Origin of the word ‘simulation’:

mid 17th century (earlier (Middle English) as simulation): from Latin simulat- ‘copied, represented’, from the verb simulare, from similis ‘like’.

And the definition thereof:

1. To imitate the appearance or character of.

What is imitation?

1. The act of using someone or something as a model.

2. A thing intended to copy or simulate something else. (This definition is rejected because it is circular. As you can see, it defines imitation in terms of simulation and simulation is defined in terms of imitation. Very bad, Oxford Dictionary.)

What is a model?

1. A 3-Dimensional representation of something at a smaller scale than the original.

2. A thing used as an example to follow or imitate. (Another rejected definition. Simulation is defined in terms of imitation and imitation is defined in terms of models and a model is defined in terms of imitation.)

3. A simplified description of a process or system to assist in calculations or predictions.

From sketching the semantic network of "simulation", we see that simulation is about representation and about simplified, predictive and non-predictive descriptions of processes.
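The dictionary-tracing exercise above can itself be sketched as a tiny semantic network. The encoding below is my own toy illustration, not a real lexical database: following the "defined in terms of" links from "simulation" leads back to "simulation", exposing exactly the circularity complained about earlier.

```python
# Each concept maps to the concepts its dictionary definition appeals to
# (a hand-made toy network based on the definitions traced above).
semantic_network = {
    "simulation": ["imitation"],            # to simulate = to imitate
    "imitation":  ["model", "simulation"],  # the circular definition
    "model":      ["representation", "imitation", "description"],
}

def reachable(network, start):
    """All concepts reachable from `start` by following definition links."""
    seen, stack = set(), [start]
    while stack:
        concept = stack.pop()
        for neighbour in network.get(concept, []):
            if neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    return seen

# 'simulation' is reachable from itself: the circularity the dictionary hides.
print("simulation" in reachable(semantic_network, "simulation"))  # prints True
```

The same traversal also surfaces the non-circular endpoints ("representation", "description"), which is where the meaning of "simulation" actually bottoms out in this sketch.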

The stories in the following links directly or indirectly deal with simulation: The Princess Ineffabelle, The Soul of Martha and The Soul of the Mark III Beast. The last two belong to The Soul of Anna Klane, a SF novella by Terrel Miedaner.

Conclusion

Assuming that “free will” exists, can a simulation of it be made? If yes, would it be distinguishable from non-simulated “free will”? In order to simulate something, we most likely need to be able to describe it. Can “free will” be accurately described? If yes, what is that description? If not, what does it mean for the claims of the existence of “free will”?