Formal System

A site about formal logic, literature, philosophy and simulations. And formal systems!

The logic of meaning — August 10, 2015

The logic of meaning

What does it mean to mean?

Form and Logic

I have written many posts about formal logic on this site, so I won’t delve into its mechanics here. Formal logic, as used here, is all about applying classical logic to abstract forms. Anything can be a form.

Semantics: Saussurian linguistics

Ferdinand de Saussure is mostly known for his concept of the dual ontology of words, in which a word consists of an arbitrary link between a physical “token” called the signifier and an abstract “token” called the signified. The physical token (a sequence of sounds, graphemes or tactile patterns) is the physical embodiment of an abstract concept.

Semantics: Networking

When the question ‘what does x mean?’ is asked, the answer will inevitably be a network of linguistic tokens (standing for concepts) that are related to x. For example, when asked ‘what is an apple?’, the answer will be a particular arrangement of the tokens ‘fruit’, ‘is’ and ‘tree’, among other tokens. This is essential to what is understood as meaning: it relates known concepts to the concept asked about. In the case of the apple, it is related to ‘fruit’ by stating that it is an instance of ‘fruit’, and it can also be related to ‘tree’ through ‘fruit’. It is worth noting that manipulating concepts without a linked linguistic token tends to be harder than manipulating concepts that have tokens.

It is my view that meaning is a particular type of network. So when we ask for the meaning of x, we are asking for a network of concepts that are related to x.

Semantics: Your Neighbours Make You

It is my view that a signifier is a physical token and a signified is a network of related concepts. With this in mind, I decided to build a semantic network to test my views. Concepts are connected to other concepts by linguistic “nodes” that we call function words. Function words designate the type of relationship between two concepts. For example, ‘x is in y’ and ‘y is in x’ designate different types of relationships between x and y, and a semantic network displaying both types of relationships would reflect the difference. Now consider ‘x flies to y’ and ‘x travels to y’: they can be said to be different in the sense that ‘flies’ and ‘travels’ have different semantic networks, but they can be said to be similar in the sense that both include the semantic network of ‘motion’. Thus, when queried, the semantic network will understand* that ‘motion’ can be inferred from ‘flies’ and ‘travels’.
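
To make this concrete, here is a minimal sketch of such a network with typed links, where ‘motion’ is reachable from both ‘flies’ and ‘travels’. The relation names and the set of links followed by the query are illustrative assumptions, not a fixed formalism:

```python
# A toy semantic network: concepts linked by typed relations ("function words").
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        self.edges = defaultdict(set)  # (concept, relation) -> set of concepts

    def relate(self, x, relation, y):
        self.edges[(x, relation)].add(y)

    def entails(self, concept, target):
        """Is `target` reachable from `concept` via 'is'/'includes' links?"""
        seen, stack = set(), [concept]
        while stack:
            c = stack.pop()
            if c == target:
                return True
            if c in seen:
                continue
            seen.add(c)
            for rel in ("is", "includes"):
                stack.extend(self.edges[(c, rel)])
        return False

net = SemanticNetwork()
net.relate("flies", "includes", "motion")    # flying involves motion
net.relate("travels", "includes", "motion")  # travelling involves motion
net.relate("apple", "is", "fruit")
net.relate("fruit", "is", "plant part")

print(net.entails("flies", "motion"))      # True
print(net.entails("travels", "motion"))    # True
print(net.entails("apple", "plant part"))  # True: apple -> fruit -> plant part
```

Different relation types could be given different inference rules; here only ‘is’ and ‘includes’ are treated as transitive for the sake of the example.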

Semantics: Understanding

Since semantics involves links between abstract concepts, formal logic can be applied to semantic networks. If applied, it gives the semantic network the ability to reason about its concepts. Thus, motion can be understood from both flying and travelling*. The assumption being made here is that the meaning of objective concepts can be distilled to logical relationships. This assumption leads to the question:

is the semantics of objective concepts a type of logical relationship?

To that I have nothing but speculation, though I think that the answer is yes. Given a finite definition of an objective concept, a semantic network could be built that represents the semantics of that concept in a logical way. This means that the concept of motion could be inferred from the concepts of flying, walking and travelling. Note that, due to the limitations of formal logic, any semantics involving the notion of time would be inexpressible in the type of semantic network described here. However, using sequential logic could be one way of implementing the notion of time in a semantic network.

Semnet: Testing the Waters

I recently built Semnet as a tool to test my ideas about semantics. Semnet operates using ‘x is y’ statements; thus, any semantics needs to be transcribed into an ‘x is y’ statement. From the Readme: So to say ‘a tree has the property of being green’, we can say ‘a tree is Pgreen’ and define Pgreen as ‘Pgreen is the green property’ and further define the green property as ‘The green property is a property’ and ‘The green property is green’ whereby the green property is defined as that thing which is green and is a property.
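
The ‘x is y’ scheme can be sketched in a few lines. The following is a toy illustration of the idea, not Semnet’s actual implementation:

```python
# Every statement is an (x, y) pair read as 'x is y'; queries follow
# chains of 'is' links. The example statements mirror the Readme quote above.
is_statements = [
    ("a tree", "Pgreen"),
    ("Pgreen", "the green property"),
    ("the green property", "a property"),
    ("the green property", "green"),
]

def is_a(x, y, statements):
    """True if x can be linked to y through a chain of 'is' statements."""
    frontier, seen = [x], set()
    while frontier:
        c = frontier.pop()
        if c == y:
            return True
        if c in seen:
            continue
        seen.add(c)
        frontier.extend(b for a, b in statements if a == c)
    return False

# a tree -> Pgreen -> the green property -> green
print(is_a("a tree", "green", is_statements))  # True
```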


It is my view that the semantics of objective concepts, as viewed under Saussurian linguistics, can be formalised and subjected to the rigour (and power) of formal logic. Doing this opens the door to semantic reasoning and all the possibilities that this type of reasoning offers. I have built a semantic network to test this.

There exists a field known as formal semantics that seems to aim in the same direction; however, the examples I have seen are purely abstract, with no actual application whatsoever.

Towards a Language of Thought — June 24, 2015

Towards a Language of Thought

I have previously written about the ideal of using formal logic to express personal views of the real world in a logical way. This idea of formal logic as an aid to thinking clearly has echoes across the opinions of many logicians, including the very founder of the field, Aristotle. His treatise laying down the foundations of formal logic was given the title “The Organon” (the organ/instrument), pointing to the role of logic as an instrument for establishing facts about the world. Others, like Boole, explicitly referred to the notion of “Laws of Thought”, which strongly advocates the idea that the principles of the act of thinking are governed by logical “laws”.

Past: Expert Systems, Completeness and Consistency

In the past, I worked on a simplified expert system to establish a consistent system of ethics. The aim was to filter down your ethical views and, by disallowing inconsistencies, distil a consistent version of them. However, the expert system was only fit for absolutist ethics, and after careful consideration, I halted the project on the basis that, in my opinion, relativist ethics is not suitable for the logical distillation that the expert system performed and, in practice, all systems of ethics are relativist. In my view, this is because, in relativist ethics, the truth value of a given ethical statement is contingent on idiosyncrasies rather than on general principles. In consequence, given a particular set of ethical statements as axioms, it is not possible to derive from them a set of principles that applies to any other ethical statement without adding further ethical statements as axioms. In addition, inconsistencies among ethical statements are sometimes solvable only by adding further non-general ethical statements as axioms. This means that a system of relativist ethics is not complete in the way that absolutist ethics is. The inability to ensure consistency and completeness in relativist ethics was the reason why logical distillation was not possible there.

Present: Semantic Networks and Consistency

My new proposal is different and it stems from my dislike of circular definitions in dictionaries.

Circular definitions == Circular logic

Circular logic is an unaccepted form of inference where the individual’s aim is to establish the truth value of a factual statement X by including that statement as a part of an axiom Y that has not been agreed upon. Example:

X. Humans have souls

Y. Humans without souls are machines and no human is a machine

A circular definition is an unaccepted(?) form of definition where the individual’s aim is to establish the meaning of a word X by including that word as a part of the meaning of a word Y whose definition has not been agreed upon. Example:

X. Person: human

Y. Human: person

On linguistic descriptivism versus linguistic prescriptivism

Opinions on the acceptance of circular definitions in written human languages can be divided into two schools: the prescriptivist school and the descriptivist school. The prescriptivist school is made up of individuals who have a preference regarding certain aspects of human language use. The descriptivist school, on the other hand, is followed by individuals who approach languages the way anthropologists follow tribes: they consider themselves mere observers and do not voice their preferences about aspects of human language use. For purposes of semantic consistency and general pragmatism, I advocate that languages should have traits that are useful to their speakers and lack traits that are not. I consider a lack of semantic consistency a useless trait, and therefore I advocate a language that has semantic consistency. I do not advocate forcing everyone to follow the same approach to language. I do advocate that, just as in software development, where variations of a language can serve different purposes, so variations of a human language can serve different purposes. A semantically consistent language is one such purpose; a single-syllable language might be another. And advocates of language descriptivism can just use any of the variations. The general idea is that, in my opinion, there is no reason why human languages cannot be modified to suit particular needs.

My goal is three-fold:

a) To bring forth semantic consistency in a variation of a human language by breaking circular definitions

b) To make a semantic network out of the resulting lexicon from a)

c) To introduce facts in this semantic network so that it can employ logical thinking when queried about these facts
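
Goal a) amounts to cycle detection: treat each headword as a node with edges to the words used in its definition, then look for cycles. A small sketch, using a made-up toy lexicon:

```python
# Detect circular definitions in a lexicon mapping each headword to the
# list of words used in its definition.
def find_cycle(defs):
    """Return one definitional cycle as a list of words, or None."""
    def dfs(word, path, on_path):
        for used in defs.get(word, ()):
            if used in on_path:
                return path[path.index(used):] + [used]
            cycle = dfs(used, path + [used], on_path | {used})
            if cycle:
                return cycle
        return None

    for word in defs:
        cycle = dfs(word, [word], {word})
        if cycle:
            return cycle
    return None

lexicon = {
    "person": ["human"],
    "human": ["person"],   # circular: person -> human -> person
    "tree": ["plant"],
    "plant": ["organism"],
}
print(find_cycle(lexicon))  # e.g. ['person', 'human', 'person']
```

Breaking the cycle would then mean redefining one word in each detected cycle in terms of words outside it.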

Machine learning and human learning — March 14, 2015

Machine learning and human learning

Humans had dreamed of flying like birds for millennia. They had the idea that the ability to fly depended on having wings attached to your back. This idea was expressed in poems, tales and paintings for a long time.

By the time humans managed to fly, that idea was long gone, replaced by aerodynamic principles. Our notion of “flying” had gradually changed from the ability to move through the air by flapping wings attached to your back to the ability to move through the air by operating a machine that obeys the principles of aerodynamics.

Learning has a similar status, in that we have long thought that only humans are capable of learning at the speed and complexity that we do.

But perhaps, at some point in the future, the principles of learning will be torn open and exposed under the light of science like it was done with the principles of flying.

Attempts like machine learning, even if they do not yet offer ground-breaking principles, might be a part of it. And by then, our notion of learning will have completely changed.

What will learning look like? If machine flying looks nothing like the way birds fly then we might want to think that machine learning will look nothing like the way humans learn. But the end result will be the same. And that is all that matters.

Epicureanism, Buddhism and the Neuroscience of Desires — March 7, 2015

Epicureanism, Buddhism and the Neuroscience of Desires

Humans are born with (and develop) desires. Some of these desires are detrimental to our short-term or long-term physical/psychological health. We know this yet we nevertheless pursue them.

Epicureanism and Buddhism are among the few philosophies whose ideal of life is an austere lifestyle centred on enjoying simple pleasures that are easy to acquire.

These ideas developed in ages when factual knowledge about the basis of human behaviour was minimal. Humans have incredible powers of behavioural self-regulation compared to other animals but, overall, we are still incapable of ignoring desires that we know are detrimental to our health.

Under our current understanding, human behaviour is, like most phenomena, deterministic. Some of the factors that determine our behaviour, like social factors, we can avoid. Yet, biological factors remain mostly outside of our power to change. And these factors determine our behaviour, including our desires.

So what’s a human to do if he desires something detrimental to his health? Most advice stems from the dubious idea that humans’ powers of self-regulation can override detrimental desires. A look at the obesity rates in certain Western countries would be enough to disprove that idea. Whether our desire is to eat a particularly unhealthy food that we know is unhealthy, drink a particularly unhealthy drink that we know is unhealthy or engage in a particularly dangerous activity that we know is dangerous, the factors that determine our desire are often not within our power to change. Desires are not rational. One could say that desires are more akin to axioms in a behavioural formal system: they are impervious to the powers of reason. This imperviousness seems to be proportional to the intensity of the desire experienced.

So perhaps, sometime in the future, our understanding of the neural basis of behaviour will develop to the point that we can pinpoint the neural mechanisms underlying particular behaviours such as desires. If that were the case, then perhaps we could devise a technology that, making use of this neuroscientific knowledge, would allow us to ‘turn on’ or ‘turn off’ desires.

In this hypothetical future, I would see Buddhism approving of a technology that would allow us to literally remove desires that are detrimental to our health. I would also see certain people fearful that this technology would disprove the notion of free will to sin, as the technology would correctly and successfully operate under the assumption that all desires could be turned on or off regardless of the intentions/morality of the individual.

Interestingly enough, while I would expect this technology to change our understanding of the notions of “free will” and the “drive” of human behaviour, I would not expect it to disprove the notion that there is a sort of metaphysical entity called a “soul” that drives human behaviour and is the basis of free will. The argument put forward would be something along the lines of: “this technology effectively proves that desires are determined by certain characteristics of our brain and that these characteristics can be turned on or off, in the same way that any part of a human’s body can be removed. But an individual with the ability to choose whether or not to turn off a particular desire is exercising free will, because the act of choosing is free”.

And a counterpoint would go along the lines of: “it might seem like free will, but the ultimate factor in whether this individual will choose to turn off a particular desire D1 is determined by another desire D2, not by a metaphysical entity. D2 would in turn be determined by another desire D3, and so on; it would not stop at any particular point. This backwards causality would see us go back to the individual’s birth, to his mother’s birth, to his mother’s mother’s birth and so on. Eventually, the backwards chain of causality would lead us to the beginning of the universe. But at no point would the argument for souls surface.”

Emotions and Rationality: My views — November 22, 2014

Emotions and Rationality: My views

I have started reading a book of the Oxford University Press VSI series on Emotion. Like many other people, the author, Dylan Evans, thinks that emotions make us more rational, where I take “more rational” to mean “more able to achieve whatever goal we want”. I have seen similar views elsewhere, perhaps from Blackmore or Hofstadter, so I thought I would provide my own.

I disagree with the idea that emotions make you more rational. I have seen many examples of how emotions can make you more able to achieve your goals. But they all assume one thing: that being able to recognise emotions in other people is only possible if you have emotions yourself. I think that’s nonsense. It is true indeed that if, colloquially speaking, you are not fluent in the lingua franca par excellence, you are missing out. But all you need is the ability to recognise or infer emotions in other people, not to experience them yourself.

I agree with the idea that emotions make you more irrational. Emotions are, essentially, relatively arbitrary biological reactions that result in a change in your priorities and/or behaviour. The problem is that, while emotions can be partially controlled with some training, they cannot be fully controlled, making your behaviour partially subject to arbitrary, unpredictable processes. How can you gear your behaviour towards a goal when a portion of it is affected by processes you cannot control? You might be able to do it, but even in the absence of external obstacles, you cannot ascertain the amount of effort it will take to achieve your goal, because you might come across a stimulus that triggers an emotion conflicting with your goal-reaching behaviour. Compare the performance of that agent with the performance of an agent that is fully in control of his internal biology (i.e. has no emotions). Unlike our emotional agent, in the absence of external obstacles, the emotionless agent can execute the behaviour needed to achieve a goal without worrying that his emotional state will conflict with it. If the emotionless agent can recognise emotions, no emotional agent could have the upper hand merely by having emotions. If anything, the emotionless agent is more efficient, rationally speaking, because he will achieve his goal in the presence or absence of emotional stimuli, while an emotional agent might struggle or stop himself from achieving his goal because some stimulus triggered an emotional reaction in him.

  • An emotionally moving visual stimulus such as a gory or comedy movie could stop an emotional agent from carrying out an action such as reading a book, but it would not stop an emotionless agent from doing so.
  • An emotional agent might experience emotions that change his behaviour in a way that takes him away from his goal. So a student feeling boredom might not study for an exam, while an emotionless student would be able to study.
  • An emotional agent would harm himself (see junk food, lack of exercise and drugs) even if it was against his interests, while an emotionless agent would not do so.

But emotionless agents are not only better when it comes to negative emotions, they also have the upper hand in positive emotions.

  • A relationship between two humans that we might term “loving” could have an equivalent without the arbitrariness of emotion. Care, responsibility and goal-reaching support are easily feasible without arbitrary biological processes. In the absence of any other factor, an emotionless agent would remain in the relationship, while an emotional agent could get his libido running high when he comes across an opportunity to mate with another individual and end up ruining his relationship by cheating.
  • An emotionless agent would have no qualms about breaking the relationship if his partner tried to harm him. An emotional agent would be open to the possibility of staying within the abusive relationship if he was experiencing the appropriate emotions.

I don’t see how emotions could make an agent’s goal-reaching performance better than that of an emotionless agent. I have seen the evolutionary argument thrown around, but all we know is that emotions most likely emerged before humans, so any possible advantage of them is not necessarily specific to humans.

However, it is the case that humans have a wider range of emotions than other animals; how do we explain that? Perhaps the wider range of emotions did not worsen the goal-reaching behaviour of humans, so it was not something that made humans less fit, hence it stuck around. Perhaps emotions were effective because they enhanced the expressiveness of body language. How would an emotionless agent that understood emotions fare against this? Well, surely if our emotionless agent understood emotions, he could use that understanding to enhance the expressiveness of his own body language. Voice intonation and facial expressions are things that could be learned by an emotionless agent. So there is no need for the actual biological process.

I am not disputing the idea that experiencing emotions must have been useful for humans and non-human animals at some point in the past; I am just pointing out that, compared with an emotionless agent with the same intellectual capabilities, the emotional agent performs worse at goal-reaching behaviour.

Epistemology: in the beginning there were beliefs —

Epistemology: in the beginning there were beliefs

This is just a brief post about making some epistemological matters crystal clear:

1. Human knowledge is a form of belief

2. Human knowledge is axiomatic

Human knowledge is a form of belief

belief: An acceptance that something exists or is true, especially one without proof.

Now, some people might argue that a statement S is not a belief. In order to demonstrate that, they would have to provide a proof that S is true. This proof would be a set of arguments involving logical and/or empirical/inductive reasoning whose only conclusion is that S is true. However, both logical reasoning and empirical/inductive reasoning themselves rely on the acceptance that other statements are true. In other words, both involve holding certain beliefs.

Beliefs: formal logic

So, assume that someone were to prove to me that S is true using a logical argument. This logical argument would only be valid if one accepted the statements that underlie the rules of formal logic. The three statements that underlie formal logic are called the Laws of Thought: the law of identity (“a thing is the same as itself”), the law of non-contradiction (“it will never be the case that A is both true and false”) and the law of the excluded middle (“A is either true or false”). These statements are believed to be true. And when it comes to the “roots” of logical reasoning, this is as far as you can go. I call these “roots” axioms. An axiom is something that is held to be true without the need to provide a proof.
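
In classical two-valued logic, the three laws can at least be checked exhaustively over the two truth values. This illustrates them but, of course, does not prove them from anything deeper:

```python
# The three classical Laws of Thought, checked over both truth values.
for p in (True, False):
    assert p == p             # identity: a thing is the same as itself
    assert not (p and not p)  # non-contradiction: never both true and false
    assert p or not p         # excluded middle: always one of the two

print("All three laws hold for both truth values.")
```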

Beliefs: physicalist empiricism

Empiricism is far muddier, as it tackles the even muddier area of ontology. Formal logic is something of a discrete world where things are either black or white, but empiricism is more of a grey area. Its core idea is that truths about the world can be grasped through the senses, whether natural or augmented by technology, as opposed to rationalism, where truths about the world are grasped through logical reasoning. Empirical reasoning also has its share of underlying statements, which mostly relate to ontology. The first is that whatever we perceive can be interpreted or made sense of by humans. The second (for followers of the physicalist school of empiricism) is that all entities/forces/objects are physical or material. The third is that the processes of the universe are measurable at a level of complexity that can be understood by humans.

The first statement, which talks about truth through sensorial experience, relies on statements such as “sensorial experience either is reliable or can be made reliable” being true. How do you prove the reliability of sensorial experience without using sensorial experience to establish the proof? It seems that you can’t, if you are a human. Solipsism is one of the ontological stances that denies the reliability of sensorial experience in matters of truth about the world.

The second statement relies on statements such as “all phenomena that can be observed by us can be measured” and “for practical purposes, phenomena that can’t be observed or inferred from observations do not exist” being true. In turn, these statements depend on statements such as “measurability and existence are properties that always go together” being true. As sensible an approach as this is, it inevitably raises the question of how we can establish that measurability is a property of all things that also have the property of existing in this universe. Of course, seeing how measurements are the main way in which we discover knowledge about the universe, it does not seem feasible for us to gain empirical knowledge about the limits of measurability without using measurements. So we have to take this statement as an axiom.

The third statement relies on statements such as “we are capable of measuring the processes of the universe, or will become capable of doing so in our endeavour of empirical knowledge-seeking” being true. This statement, in turn, relies on statements such as “the processes of the universe have a complexity C, and humans are capable of understanding processes of complexity C” being true. As smart as we are compared to other life forms on this planet, the idea that we are smart enough to understand the processes of the universe just because our brains let us do things that no other life form can do is not currently provable, and it does not seem to be true: surely our capabilities do not change with the absence or presence of life forms less capable than us. Of course, you could come up with another statement to rationalise the above statement about complexity, but it seems to me just another case of megalomania in our species.

So, as seen above, when looked at in detail, the two main avenues to truth are full of beliefs at their lowest level. Beliefs about the Laws of Thought, beliefs about physicalist ontology, beliefs about universal measurability: all of these are tools we use to gain knowledge, and we use them on the basis that they are true. I am not discussing here whether the statements underlying empirical and logical reasoning are true or not. The main idea of this post is that they are beliefs; in other words, statements for which we have no proof but which we nevertheless accept as true.

Human knowledge is axiomatic

Following the conclusion of the previous point, it seems safe to conclude that human knowledge is axiomatic. In other words, if you were to question every piece of empirical knowledge and/or every logical statement, you would arrive at the axioms of empiricism and formal logic, and you would not be able to go any deeper, because axioms form the rock-bottom level of our knowledge: the first stone in our pyramid. This is why I cringe every time I see the words “fact” and “belief” opposed to each other. In particular, I see the word “fact” used in a very dogmatic way, as something whose truth has been discerned beyond all doubt when, as we have seen, nothing can be discerned beyond doubt by us. Not with the tools we currently have. Instead of saying “S is true”, the logically valid sentence would be: “according to the axioms of empirical/logical reasoning, S is true”. This longer sentence highlights the conditional nature of the statement: it is true insofar as the axioms of our reasoning are true. And it means we are open to the possibility that, were these axioms ever proved false, S would be false, everything else being equal. Uncertainty, like mortality, is something of a constant in our lives; we seek ways to tackle it, but denying its pervasive presence amounts to something akin to denying that A&B is true when A and B are true.

Free Will Series – 5. Simulation in Science and Artificial Intelligence — October 27, 2014

Free Will Series – 5. Simulation in Science and Artificial Intelligence

This is the last article of the Free Will Series and the post that closes a period of regular writing on this blog.

This post focuses on the idea of simulation in sciences and in Artificial Intelligence. Previous posts dealt with the notion of simulation in fiction, how it has been portrayed and the relationship between simulation and recognition.

Simulations in Science

The scientific method: discovery through simulation

Science creates/discovers knowledge through a process called the scientific method. While it might not be obvious, an experiment is essentially a simulation of a real process under controlled conditions. When you carry out an experimental test, you are trying to model an aspect of our physical world, and the assumption is that, if the simulation is accurate enough, we can use it to make inferences about the way our world works.

Apart from this, the idea of simulation lurks somewhere else: in statistics.

Random sampling: using trees to simulate forests

Scientists often cannot include large numbers of subjects in their studies (for reasons of complexity or feasibility), so they resort to smaller numbers of subjects chosen randomly (in the social sciences, often an opportunity sample for ethical reasons) and rely on the idea of random sampling to keep their models accurate. The idea behind random sampling is that a random sample tends to be more representative of the population than a non-random sample. So a random sample is taken as a sort of micro-model of the population, and by applying a test to the sample you assume that the result is statistically equivalent to running the test on the whole population. In other words, testing a sample simulates testing the population.
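
The idea can be illustrated with a quick simulation (the numbers are arbitrary, chosen purely for illustration):

```python
# A random sample's mean tends to track the population mean, so testing
# the sample "simulates" testing the population.
import random

random.seed(0)
population = [random.gauss(170, 10) for _ in range(100_000)]  # e.g. heights in cm
pop_mean = sum(population) / len(population)

sample = random.sample(population, 500)
sample_mean = sum(sample) / len(sample)

print(f"population mean: {pop_mean:.2f}")
print(f"sample mean:     {sample_mean:.2f}")  # close to the population mean
```

With 500 randomly chosen subjects out of 100,000, the sample mean typically lands within a fraction of a unit of the population mean, which is the "tree simulating the forest" at work.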

Simulation in Artificial Intelligence

A.I. is the field of simulation par excellence, since one of its core aims is the simulation of human intelligence. The field is divided into two sub-fields: one where the simulation of the modus operandi of human intelligence is the main aim, and another where achieving the results of human intelligence is the ultimate goal. In the former sub-field, we find things like cognitive modelling/architectures and the Human Brain Project. In the latter, we find things like machine learning. In both cases, there is a simulation, whether of cognitive mechanisms or of skills performed by humans, like recognising a flower in a picture.

Artificial Intelligence has been placed inside a broader sub-field called Artificial Life.

Simulation in Artificial Life

Just like A.I. is a field that simulates human intelligence, Artificial Life (A.L.) simulates the properties of carbon-based life forms. It is also divided into two research paths: life as it is (the simulation of biological mechanisms) and life as it could be (the creation of systems that simulate the general properties commonly associated with carbon-based life forms, like metabolism, evolution and self-reproduction). Just as in A.I. most of the advances are in the sub-field where systems perform human skills like visual recognition, progress in A.L. mostly revolves around systems that do things we consider life-like. So in the “life as it is” sub-field we have things like the OpenWorm project and the field of computational biology, while in the “life as it could be” sub-field we have things like robotics, Tom Ray’s Tierra and other life simulators. In both cases, there is a simulation, whether of the mechanisms of carbon-based living systems or of traits possessed by life forms, like group behaviour and the ability to change one’s behaviour in response to the surrounding environment.

Simulation is a central idea both in science in general and Artificial Life in particular. There is something fascinating about the idea of mimicking something. We find truths through simulation and science suggests that a lot of what we perceive and feel (including our sense of visual perception and free will) is a sort of simulation made by our brains. This seems to tell us that simulation is very important to us. Yet simulation itself is not a very clear notion. What is sameness? What is change?