The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right.

I came across the above quote by Leibniz through Bertrand Russell, and it sums up my current thoughts on a similar matter in philosophy. Both Russell and Leibniz believed that logical reasoning boiled down to a set of rules no different from the rules of addition or subtraction. Leibniz and Russell went further: the former dreamed of a precisely defined language of human thought, while the latter dreamed of a precisely defined set of rules that would underpin the foundations of mathematics. Both attempts, as originally conceived, failed. Nevertheless, I think there is potential in the idea of a precisely defined alphabet of thought. The potential lies in its usefulness for ethics.

I am not advocating an objective morality at all; I talked about this in a previous post. What I am advocating is consistency in one’s moral beliefs. The assumption here is that a moral system is like an axiomatic system: it starts from a number of axioms and, following the rules of reasoning, derives statements from those axioms. What does the title of the post have to do with all this?

An expert system emulates the decision-making ability of a human expert. The difference is that an expert system tends to be highly specialised, so you have one expert system for deriving statements in propositional logic, another for assessing the likelihood of sunny weather next week, and so on. An expert system is made of two parts: the inference engine and the knowledge base. The knowledge base is where the knowledge is stored, and the inference engine, as its name suggests, draws inferences from the contents of the knowledge base.
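To make the two-part architecture concrete, here is a minimal sketch in Python. The names and representation are my own illustration, not any existing expert-system library: the knowledge base holds facts and if-then rules, and the inference engine applies the rules by forward chaining.

class KnowledgeBase:
    def __init__(self, facts=None, rules=None):
        self.facts = set(facts or [])   # statements known to be true
        self.rules = list(rules or [])  # (premises, conclusion) pairs


class InferenceEngine:
    def __init__(self, kb):
        self.kb = kb

    def run(self):
        # Forward chaining: fire every rule whose premises are all
        # known facts, until no new fact can be derived.
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.kb.rules:
                if all(p in self.kb.facts for p in premises) \
                        and conclusion not in self.kb.facts:
                    self.kb.facts.add(conclusion)
                    changed = True
        return self.kb.facts


kb = KnowledgeBase(
    facts={"the sky is clear", "atmospheric pressure is high"},
    rules=[(("the sky is clear", "atmospheric pressure is high"),
            "sunny weather is likely")],
)
print(InferenceEngine(kb).run())  # now includes "sunny weather is likely"

The toy weather example at the bottom mirrors the one above: given two facts and one rule, the engine adds the conclusion to the knowledge base.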

A moral expert system would be a type of expert system highly specialised in ethics. The knowledge base would start with a set of axioms capturing fundamental moral beliefs (perhaps the Universal Declaration of Human Rights could serve as moral axioms), and the inference engine would embody what we call “moral reasoning”. But there would be a problem if this system were used like other expert systems. As I said before, there is no universal system of ethics. A universal moral expert system could have some universal axioms agreed on by all of us, but a sizeable chunk of the remaining axioms would differ from person to person. A universal moral expert system would not work for the same reason a universal ethical system does not: a great part of ethics is subjective.

The idea is to devise personal moral expert systems. A personalised moral expert system would embody a person’s morality at a certain period of their life. A person’s moral knowledge is likely to be huge, and since it would be personal, there is no way the process of feeding the knowledge base could be automated: populating it would require individual manual work. It would also take a relatively long time before one could get meaningful answers from the moral expert system; at the beginning, most of the answers would be trivial. Below is a possible example of a session with a moral expert system:

Question: Is murdering humans good?

Answer: No.

Question: Why is murdering humans bad?

Answer: Murder causes death. Death is bad.

The above answers are nothing special. However, it is worth noting the architecture of knowledge in the knowledge base. Knowledge would be stored in sentences made of several components in a standard format (e.g. “Murdering humans is bad”), and the components of each sentence would be connected to the components of other sentences in the knowledge base in a hyperlink fashion. The result would be a network of moral information in which the structure of knowledge is recursive: sentences or components would be based on other sentences, which would be based on yet other sentences, all the way back to the axioms. Once the knowledge base reaches a critical amount of information, some of the links connecting sentences become less and less obvious, and it is at this point that the moral expert system would start being handy. A sketch of this structure follows.
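Here is a minimal sketch of that recursive structure in Python, under a representation I am assuming for the sake of the example: each sentence links to the sentences supporting it, and a “why” query walks the links back to the axioms, reproducing the session above.

support = {
    "Death is bad": [],          # axiom
    "Murder causes death": [],   # axiom
    "Murdering humans is bad": ["Murder causes death", "Death is bad"],
}

def why(sentence, depth=0):
    # Recursively print the justification of a sentence, following the
    # links all the way back to the axioms (sentences with no support).
    reasons = support.get(sentence, [])
    print("  " * depth + sentence + (" (axiom)" if not reasons else ""))
    for reason in reasons:
        why(reason, depth + 1)

why("Murdering humans is bad")
# Murdering humans is bad
#   Murder causes death (axiom)
#   Death is bad (axiom)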

Implications

The idea is that some of the links connecting sentences might never have been thought of by the moral expert system’s owner, yet they are there nevertheless, implicitly. For example, if you enter the following information into your knowledge base:

Statement 1: “Murdering humans is bad”

Statement 2: “Sarah is a human”

The moral expert system would likely create Statement 3:

Statement 3: “Murdering Sarah is bad”

Statement 3 was never input by the person owning the moral expert system, but it was there implicitly. So when Statements 1 and 2 were input, Statement 3 would be created automatically. In this case, deriving Statement 3 from Statements 1 and 2 is trivial, but as the knowledge base grows, the number of logically “implicated” sentences will grow too, and so will the relevance of some of them.
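Here is a minimal sketch of that single derivation step in Python; the encoding of the rule and the fact is my own illustration:

rule = {"category": "human", "pattern": "Murdering {individual} is bad"}  # Statement 1
fact = {"individual": "Sarah", "category": "human"}                       # Statement 2

# Instantiate the general rule with the individual named in the fact.
if fact["category"] == rule["category"]:
    statement_3 = rule["pattern"].format(individual=fact["individual"])
    print(statement_3)  # Murdering Sarah is bad

The longer session below shows how such derivations compound as the knowledge base grows: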

Statement 1: “Discrimination is bad”

Statement 2: “Sexism is discrimination”

Statement 3: “Different treatment on irrelevant grounds or different treatment without a relevant justification is discrimination”

Statement 4: “Animals and humans are treated unequally”

Implicated Statement 5: “Animals and humans are treated unequally on irrelevant grounds or without a relevant justification”

Statement 6: “Statement 5 is speciesism”

Implicated Statement 7: “Sexism and speciesism are bad”

Implicated Statement 8: “A priori, animals and humans’ rights are weighed equally”

In this case, the larger number of statements allows for a larger number of logical connections, and the logical implication of some of the sentences is not as obvious as in the previous session. This is one of the reasons why I think a moral expert system could be useful. A sketch of this longer chain of derivations follows.
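Here is a minimal sketch of that longer chain in Python, reusing the forward-chaining loop from earlier. The encoding is my own illustration and mirrors the session’s own steps (including its leaps) rather than strict first-order logic:

facts = {
    "discrimination is bad",                     # Statement 1
    "sexism is discrimination",                  # Statement 2
    "animals and humans are treated unequally",  # Statement 4
}
rules = [
    # Statements 3 and 4 together yield Implicated Statement 5.
    (("animals and humans are treated unequally",),
     "animals and humans are treated unequally without a relevant justification"),
    # Statement 6: that unjustified unequal treatment is speciesism,
    # and by Statement 3 speciesism is discrimination.
    (("animals and humans are treated unequally without a relevant justification",),
     "speciesism is discrimination"),
    # Statement 1 propagates badness to anything that is discrimination.
    (("sexism is discrimination", "discrimination is bad"), "sexism is bad"),
    (("speciesism is discrimination", "discrimination is bad"), "speciesism is bad"),
    # Implicated Statements 7 and 8.
    (("sexism is bad", "speciesism is bad"), "sexism and speciesism are bad"),
    (("speciesism is bad",),
     "a priori, animals' and humans' rights are weighed equally"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if all(p in facts for p in premises) and conclusion not in facts:
            facts.add(conclusion)
            changed = True

for fact in sorted(facts):
    print(fact)

Running it prints, alongside the input statements, the implicated ones: that the unequal treatment lacks a relevant justification, that sexism and speciesism are bad, and that, a priori, animals’ and humans’ rights are weighed equally.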

Potential uses

  • A moral expert system will not produce an objective morality, but it will produce a logically consistent one. By repeatedly reviewing the knowledge base and adding, removing or modifying sentences and axioms, one’s view of one’s own morality will be greatly enhanced.
  • Inter-moral understanding: with a log of your knowledge base, you can discuss morality more clearly. The reasoning behind all your moral statements is explicit, so discussions of morality can focus mainly on the moral axioms and leave the implications of those axioms to the moral expert system.

I think even a single one of the above potential uses is enough to justify a moral expert system. With one, objective morality is not reached, but understanding others’ moral systems and keeping one’s own morality consistent become goals within relatively easy reach.

The feasibility of a moral expert system is worth mentioning. Such a system would have to deal with uncertainty and ambiguous words, but I think the tools of the field of NLP (Natural Language Processing) are more than enough to handle the lack of precision.
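As a small illustration, a dependency parse already splits a moral sentence into the kind of components the standard sentence format needs. Here is a sketch using the spaCy library; the choice of spaCy is my own assumption (the post names no specific tool), and it requires installing spacy and its small English model first (pip install spacy, then python -m spacy download en_core_web_sm).

import spacy

# Parse a moral sentence into grammatical components that could be
# mapped onto the knowledge base's standard sentence format.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Murdering humans is bad")

for token in doc:
    # word, its grammatical role, and the word it depends on
    print(f"{token.text:10} {token.dep_:10} -> {token.head.text}")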

I started writing the code for a moral expert system but have put it on hold for a while. I hope to have a sketch working in the coming months.
