
Welcome

Hi

Welcome to my personal website. You can find my finished articles here, as well as some personal info. LinkedIn has more details.

About me

I am a generalist and strategist, with interests and skills in many areas. Most of my attention goes to high-end technologies though – AI (mostly neural networks and evolution), robotics, space technologies, and blockchain. This is supported by experience from startups and studies of philosophy. As Ayn Rand might say – philosophy to know why, technology to know how.

LinkedIn

Medium – @dominikfranek

dominik franek

gmail com

Thinking without language

Once upon a time, my girlfriend, who studied Czech language and literature, told me what they had learned in school that day: that thinking is only possible through language. To me that was utterly ridiculous, but it gave me the idea that some people might believe it. What I have found since that day, years ago, is that there are apparently many such people – a majority, in fact. And I always knew they were wrong. The first piece of evidence is what I replied to my girlfriend: “If a person is raised by wolves, never learning any language, they will still be intelligent and able to perform complicated tasks, to imagine things, to plan and so on. Do you claim that such a person does not think?”

This question is not new and has been answered, in various ways, by others. A direction similar to my wolf question has been investigated by neurologists, who examine people with damage to the brain centers connected to language and their subsequent ability to solve tasks that require thinking. A nice example is the composer Vissarion Shebalin, who was still able to compose music after completely losing the ability to produce or understand language. Another branch of research examines societies with different language structures and the correlations with their abilities. In short, the results are that some thinking is possible without language, and that language does affect it.

The approach I will take here is different and has two components. One is a description of my own thinking processes, which appear to be unusually transparent to me. A single, subjective case of course does not prove anything. But a big part of it is reproducible, or at least serves as inspiration and navigation for the second component, which, drawing its language from epistemology, describes a specific view of thinking and its relationship to language and the real world. As a first step, I will give a more systematic description of the problem.

A more exact formulation of the issue revolves around concepts. Concepts are the building blocks of our thinking. They represent the entities and ideas around us, both concrete and abstract. The concept of a chair is what, in our heads, represents all the chairs in the world. Abstract concepts then represent ideas without a physical representation, such as heroism. The study of concepts is a part of epistemology, the branch of philosophy which looks at how we attain, process and use knowledge – in short, how we know. Concepts are its core because they are the building blocks of our knowledge. Typical questions that revolve around them are: What are the limits of one concept? Is it still a chair if it has no back and is missing two legs? How do we create a concept (one thing people do agree on is that we are not born with them)? How are concepts structured and how do they relate to one another? All these questions are important, and at this point solved to a pretty good degree. A controversial one is “Are they real?”. Clearly they are not; they are just an ability of our minds, created by evolution for dealing with reality. But many people relate them to “essence” (mainly introduced by the Greeks and popular to this day), which also is not real, but carries a strong religious charge, and therefore has a strong foothold in mysticism and among its many irrational proponents.

The last question, which is the subject of this essay, is “can concepts exist without language?”. As I see it, this question is not important – for the same reasons my answer is “yes”. It is just too trivial. But while the question is not important, answering it apparently is, because so many people think it is impossible. For me that is even more striking because it is not only the historical philosophers and some prominent modern ones like Wittgenstein or Russell. Even objectivist philosophy is on board with what is, to my knowledge, the majority view – and I find objectivists to be right about more things than any other philosophy I know of (though still far from right about everything).

The basic idea behind the generally accepted view is this. People, when born, start with no concepts. But as they learn, they distinguish new separate objects and ideas and formulate concepts for them in their heads. A new concept first goes through a creation phase. It starts as a hazy idea that is gradually refined (in some way, the model of which differs by philosophical school) into a “final” form by giving it properties that specify it and distinguish it from other concepts. At some point during the process the new concept is given a name drawn from language (e.g. “chair”). That name becomes its unique handle, supposedly necessary to use the concept – to store it as its own thing, to recover it, to identify it, to be clear that it is this concept and not another. And also to communicate it – which is irrelevant for the question at hand, though in my view communication is in fact the only reason language is needed in regard to concepts. Without a language label, they say, the new concept could not be stored in the mind and used. Language is, therefore, necessary for concepts and, in consequence, for any thinking, since thinking works over them.

The reason I disagree comes primarily from my own experience – but anyone can see it if they look in the right places, as I will show. 
I was quite young when I started practicing meditation, the core of which was calming the mind down. The most “noisy” part was the thinking in words – thinking as if leading a conversation with myself. I was learning to suppress it while still thinking and progressing through the meditation. With some effort, the thought processes were there, but the words were not. Another piece of the puzzle came in high school, when a friend of mine was surprised that I think in such an inefficient way – using words – at which moment I found I was only scratching the surface. My “loud” thinking remains to this day. But not much later I came up with another idea: to perform (simple) arithmetic in my head without words, using intuition instead. Start with the assignment, say 12×7. But instead of going through the calculation explicitly as usual (“saying” out the calculation steps, or imagining them written), relax, turn off the head (as in the meditation), and let the result come. It worked rather well. I never extended it to practical use, but it was a nice proof of concept. Neither of these proves my point that thinking can be done without words, but they were my stepping stones.

More tangible progress came with higher, more abstract mathematics. The common way to deal with it is through formulas, but that did not work well for my brain. Instead I imagined the mathematical objects (usually some weird sets) and their interactions as fuzzy objects in space. They did not carry any conventional names or concepts; they were new, temporary entities I had created to deal with the problem.

Over time I have adopted these thinking frameworks into my everyday life. I still usually think in words, but a lot of the time, or rather with non-trivial problems, I use something else. I call it raw concepts. Remember how I described the creation of concepts – the intermediary fuzzy object that gets a name assigned when it is finished? The traditional view makes it look like the fuzzy object ceases to exist once it gets the name. But it never went anywhere. Not the label, but that fuzzy thing is what a concept is. In very exact terms, it is a specific pattern of neural excitation, different for every concept. Subjectively, it is something in our heads that probably everyone would describe differently. It is a “feeling”, a “flavor”, maybe a differently shaped object in our imagination (some people even see colors). And by shaped I don’t mean chair-shaped, but a fuzzy cloud that has this “feeling” or whatever, which carries the concept’s properties and makes us recognize it for what it is. It is likely that you are not used to perceiving it this way. I assume that because if most people did, there would be no question about whether words are necessary in order to think. But the reason people do not see it this way is not that it is not there. It is. It is just covered up by the word labels and images that we have attached to the concepts. When we want to call up a chair, the word “chair” and various chair images shine so bright that it seems that is all there is. But they are just a shiny wrapping of the feeling pattern, that cloud of neural excitations that really defines what the concept is. I can tell because in my mind, I can turn the words off and observe these concept “feelings” in their naked, raw form.

Now that we have the concept of a raw concept, it should seem more obvious how concepts are formed to begin with. Either a blank raw concept “stem cell” is created, or one is split off an existing concept, inheriting its properties, and is shaped through the concept formulation process (which I have not described – see How We Know by Harry Binswanger for a good theory) into its new form. That gets labeled (although we might start with the label already – “Mom, what is a ‘chair’?”) and stored. The label, the word of language, is just that – a label. The label is not the concept, and the label is by no means necessary for the concept to be created or to exist.

Are you still not convinced? Let me give you a more familiar example. Remember those times you wanted to recall some word and could not? When you had the word on the tip of your tongue, when the “feeling” of that word was there in your head, bright and clear, but the word itself didn’t want to come out? There it is. Your raw concept without a label. You knew it all along.

Another piece of evidence, and I would say an even more serious one, comes from the way thinking itself works. Or perhaps from how it doesn’t. I am not sure anyone actually thinks this, but to clearly dispel the idea: the core of thinking is not performed by language. Language and sentences can be used, yes. But that only works for simple, well-defined problems, and it is highly inefficient and limited by the speed at which we can formulate those sentences. A good use case is going over a shopping list in our head – it is simple, linear, and needs to be precise. But it can hardly work for anything complicated if it does not work even for simple, well-structured problems like chess, or for unintellectual physical activities. Imagine a chess player running through hundreds of moves per minute, thinking out loud “Ok, this piece moves to this position, and then that piece over there to that position”, where “this” and “that” should actually also have specific descriptions… Or trying to catch a ball – calculating its trajectory and the movements of the body needed to catch it, while considering how heavy the ball is and whether it could hurt you – all in sentences that precisely describe every bit of it? Clearly that is not how thinking really works. Again, these sentences are only labels put on the thinking in case we want to keep a very clear track of it or to communicate it. The way thinking really works is again through these raw concepts – and their brain excitation patterns. They are there, they change form on the go as needed, they mix, interact, merge, split… They form new excitation patterns, often intermediary ones that have no label and never will, until a state is reached where the configuration of the patterns contains the answer we were looking for. So, for instance, we conjure the raw concepts of a ball (most relevantly the physical properties we know), the laws of physics, a model of our body and its physics; we let those models interact in a simulation and plan the best way to move in order to catch the ball. This feels quite intuitive. Is it in any way different from pondering the development of ethics in the life of a novel character? In principle, no. It is still a manipulation of models consisting of concepts and their relations and interactions. The reason it seems different is that catching a ball is really automatic and intuitive for us, while ethical considerations are unknown territory that requires clear, conscious focus. But the inner mindworks are the same.

As for me, I can observe this concept interaction in my mind directly. I can see the fuzzy raw concepts in 3D space, moving, interacting and mixing in many ways and at many points, simultaneously creating new flavors. Sometimes those flavors “click” into something that seems to make sense and be useful, which I can then lock in as the next step in the thinking and move on.

To be clear, none of this is trying to say that language is not useful for thinking. I am only saying that it is not necessary – in theory, and in some, but not all, practical applications. Language is very helpful for its labeling function, as well as for putting thoughts and thought processes into clear boxes, which makes the thinking process clear, well organized, and manageable even for complex problems. Another aspect that plays a practical role is that the hardware of our brains has been developed by evolution with the expectation of using a language. Now, this is my speculation: because of this wiring, thinking without language is more difficult for us than it would be if we did not have the language ability to begin with. Some brain pathways are so optimized for and dependent on language that it makes not using it more difficult, and for some people impossible.

On the other hand – and this is just a side note for perspective – there are people whose beliefs point in an entirely opposite direction. Not only do they see the usage of language as problematic, but they view even the very foundations that we have laid out here – concepts – as the enemy of true “thinking”. It is the Zen Buddhists. Let me present their idea with a famous koan.

Shuzan held out his short staff and said:
“If you call this a short staff, you oppose its reality. If you do not call it a short staff, you ignore the fact.
Now what do you wish to call this?”

The second part of the master’s statement is already trivial for us. The “fact” he mentions is that the language label “short staff” does indeed belong to the item he is holding. But what does the “opposing reality” in the first part mean? The Zen Buddhists teach that the world is not words, or concepts, or even objects. The world just is as it is. Boxing it up into categories and labeling it prevents us from seeing it for what it really is. Assigning even “identical” items, like two same-looking chairs, a common concept means forgetting their individuality. The short staff Shuzan holds is simple, and yet very complex. It is an object (here he does not go as far as to deny even the “object” property, in order not to confuse the students too much), with its material, shape, temperature, the way light reflects off it, its trajectory through history and into the future, and much more. Saying that the object is a “short staff” (assigning it the label, or the short-staff concept under it) would leave out all of these critical individual properties and deny its true reality.

As we know from physics, they are technically right. The world is a continuous space filled with different kinds of particles. There is no “water” or “rock”. A rock is just a dense region of particles of one kind that somewhere gives way to particles of another kind, perhaps those of water. While they look different to us, on the fundamental level the difference is unimportant. It is only in our brains that we cut this continuous space into pieces and give those pieces names and categories. These categories (or concepts, or essences) are not a fundamental part of the universe. As I wrote earlier – they are only a virtual tool imagined in our brains and created by evolution to deal with the world and to survive. The lesson given to us by Zen is that when we start on the path of discovering the core of our minds, dropping language to reach pure concepts is only a first step, and we can go much further.

To summarize – the question of whether language is necessary in order to think seems ridiculous to me, and I hope I have presented enough evidence for why I see it that way. Now it is up to your introspection and imagination. But even if you cannot directly observe it at all as I do (which I think is the normal case, and my brain is just broken), the model I have described should still make more sense than the clumsy language-based one, and it presents a foundation for further research.

The definition of intelligence

When reading AI papers I keep running into definitions of intelligence. Two researchers – Shane Legg and Marcus Hutter – even made a nice effort and put together a collection of them [1]. I don’t know about you, but I keep finding them unsatisfactory. Apparently, a popular and widely accepted one nowadays is

Intelligence measures an agent’s ability to achieve goals in a wide range of environments

by Legg and Hutter (L&H) [1,2]. It sounds ok and yet – do you feel any closer to understanding what intelligence is after seeing it?

Intelligence definitions suffer from various common maladies. Putting aside that many people just don’t understand what intelligence is, there are two main reasons for their inaccuracy. One is a bias towards circumstances: the authors are not trying to be accurate, but instead tailor their definition to their specific needs. Others (perhaps unknowingly) conform to whatever the opinion of the public, the scientific community, or the current research direction happens to be. In other words, there is a divide between what intelligence is and what people expect it to be.

The other issue is a prevalent logical inaccuracy. Generally, a proper definition needs two main properties: to completely cover what we want to describe, and to exclude everything else (a third property being that it is simple). With the existing definitions that is, to my knowledge, never the case.

Many describe intelligence too explicitly, in too much detail and using examples. That is especially common for the older ones and the ones made by psychologists (who are practical and human-oriented rather than formally accurate). One example, picked at random from the collection:

“…the general mental ability involved in calculating, reasoning, perceiving relationships and analogies, learning quickly, storing and retrieving information, using language fluently, classifying, generalizing, and adjusting to new situations.” – Columbia Encyclopedia, sixth edition, 2006

The result is a definition that is perhaps good for uninitiated readers, but too constricted to cover all that we want to understand as intelligence. For our needs, describing human intelligence does not suffice – we are dealing with prospects of future AIs and perhaps also extraterrestrial ones. So we need to define it more broadly than our immediate needs would suggest.

On the other side, many intelligence definitions suffer from being too loose and including too much. A good example is a definition by Minsky:

“Intelligence is the ability to solve hard problems.”

It indeed is. But there are other things that can solve hard problems too. Like a pneumatic hammer. Or a brute-force state search. Are those intelligent? No, and not much.

What are we trying to define?

There is a lot of confusion about what intelligence is, and about what level of it is enough to call something intelligent. This stems from the fact that different people have different subjective experiences, expectations and applications for it, and nobody has properly defined intelligence itself yet. What matters for most people is human intelligence and how to compare it between people. Some are trying to find where on the scale animals end and humans begin. Others work with AI, which works quite differently, yet on the applied side of the research it is still compared on the same scale, and people attempt to specify the limit of what is already intelligent and what is not – without much success, due to insufficient understanding. There is this funny property: “When it starts to work, we don’t call it AI anymore” (often quoted, but I can’t find an attribution). The theoretical scientists and philosophers, meanwhile, are attempting to find a clear and generic definition free of all the clutter.

The point here is that there are very different expectations and applications to match – both theoretical and practical. Different people want different aspects of intelligence to be emphasized and detailed, while others can (or should) be kept simple or omitted. Therefore it would be a mistake to try to fit one definition to them all, and attempting to do so is one of the reasons why past researchers have failed.

What I propose is, instead of writing one definition, creating a framework with a simple core that can be extended for the specific needs.

Before presenting it, I will first show how the definitions are constructed (and pinpoint some errors), which will lay the foundations for the new framework.

Modularity

Nowadays enough research has been done and enough terms defined that making a proper definition is no longer an artistic endeavor but rather the mechanical work of grabbing available pieces and plugging them into a frame to achieve the desired outcome. I will demonstrate this by decomposing the contemporary definition, so that the rest is clearer later.

“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”

1) “Intelligence” – the subject, necessary.

2) “measures” – “is” is commonly used too. “Measures” stresses that it is a measure, therefore a range and something that can be measured.

3) “ability” – it is a property of something and it enables something.

4) “to achieve goals” – it has a target, as opposed to properties that just exist without any direction at all. Note that this is not sufficient for purposefulness. Evolution has a goal (gene spreading) but it does not reason and has no purpose. I think that having a purpose is not necessary for intelligence though.

5) “agent’s” – intelligence is a property of something that has agency, that acts. Not strictly necessary, but without agency the intelligence would be inconsequential.

6) “in a wide range of environments” – this is the main contribution of the authors and the meat of the definition. They believe this is a sufficient prerequisite for intelligence, as it implies a wide (read: full) range of intelligent abilities. To quote from [2]:

Reasoning, planning, solving problems, abstract thinking, learning from experience and so on, these are all mental abilities that allow us to successfully achieve goals. If we were missing any one of these capacities, we would clearly be less able to successfully deal with such a wide range of environments. Thus, these capacities are implicit in our definition also.

True. But so do having legs or a lot of money. While success in a wide range of environments is a good addition to intelligence, it does not define intelligence – it only defines versatility. It seems to me that the reason this definition came into existence and gained popularity is the current research climate, which is trying to shake off the disappointment of AIs that were supposed to be the endgame but instead turned out to be “narrow” and useless for anything but their specific application. Therefore the focus today is on “general” AI, which is exactly what this definition aims at. So while it looks great by being very general and simple, by being too general it violates the second property of a good definition and fails to define intelligence. Which, after all, the authors admit themselves in the end: “We simply do not care whether the agent is efficient, due to some very clever algorithm, or absurdly inefficient, for example by using an unfeasibly gigantic look-up table of precomputed answers. The important point for us is that the machine has an amazing ability to solve a huge range of problems in a wide variety of environments.”

The definition

What I propose is one core definition of intelligence and then an array of optional extensions to satisfy specific needs and use cases. The core does not contain anything it does not have to; it is as simple as possible and to the point.

Intelligence is an ability to process information.

It intentionally does not say who has the ability, to what end, or to what degree. Because those are already various measures and properties of intelligence that are not necessary to define it. Does this define intelligence? It seems too simple and perhaps counterintuitive. But that is because of the framing we are used to from our perspective in which people are intelligent and chess programs are not. But we need to take more than one step back in order to see the whole picture.

The reason for emphasis on information is that it is exactly what separates “thinking” and “intelligence” from the manipulation of physical objects. Brains are intelligent, hammers are not. Even calculators are intelligent, just to a very trivial degree.

As far as I can tell, the definition can’t be made any simpler without completely breaking it. So the question rather is whether anything necessary for a definition of intelligence is missing. I have already addressed many such components, such as agency or goals, but I would like to mention a couple more.

It is tempting to say “ability to process and utilize information”, but even using the information already falls on the “interface” of the intelligence. If you imagine the intelligence as something that is happening inside a box, taking inputs, doing the “processing” and giving outputs, the usage of the information means using the results of the processing and already falls in the space outside the box, or on its border.

The most striking deficiency is that there is absolutely no indication of a measure of the intelligence. I think that stems from our expectations. We hear about intelligence a lot and almost never think about intelligence itself; instead we automatically go a step further and are interested in measuring and comparing it. But measuring the magnitude of something is a different topic from its definition. A very important topic, certainly! But it is a very complex one that I will not attempt to address – many researchers, including Legg and Hutter, are working on it and making nice progress (by the way, their definition correctly does not address the magnitude either). A related question, though, is how useful a definition is as a foundation towards being able to measure intelligence. If we could choose between two equally powerful definitions, the more practical one would be better. But right now the main thing is to get at least one definition right – the practical considerations are the next step. I would say mine is as good as any, and its design towards modular extensibility is already a step towards practical applications.

As for the optional addons, here are some examples.

  • Agent’s … – if we want to emphasize what our research aims at
  • (an ability to) achieve goals through (the processing…) – to say that we are trying to use the intelligence to solve something
  • Complex (processing) – to emphasize that a certain degree of intelligence is necessary in order to call something intelligent
  • namely calculating, reasoning, perceiving relationships and analogies, learning quickly, storing and retrieving information, using language fluently, classifying, generalizing, and adjusting to new situations – to tailor it to people
  • in a wide range of environments – to emphasize that we are looking for versatility and to distance it from narrow intelligence

As you can imagine, you can create almost anything, including the L&H definition – with the caveat of including the information-processing clause, the lack of which was my motivation for this paper in the first place. Intelligence is about information, so let’s go from there.
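To make the modularity concrete, here is a toy sketch in code. The clauses are the ones discussed above; the composition scheme itself is just my illustration, not part of the proposal.

```python
# A toy sketch of the modular definition framework: a fixed core plus
# optional add-ons. The clauses come from the article; the glue is mine.

def define_intelligence(agent=False, goals=False, wide_range=False):
    """Compose a definition from the core and the chosen extensions."""
    subject = "an agent's" if agent else "an"
    action = ("achieve goals through the processing of information"
              if goals else "process information")
    definition = f"Intelligence is {subject} ability to {action}"
    if wide_range:
        definition += " in a wide range of environments"
    return definition + "."

print(define_intelligence())
# -> Intelligence is an ability to process information.
print(define_intelligence(agent=True, goals=True, wide_range=True))
# -> Intelligence is an agent's ability to achieve goals through the
#    processing of information in a wide range of environments.
```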

[1] Shane Legg & Marcus Hutter, A Collection of Definitions of Intelligence, 2007, https://arxiv.org/abs/0706.3639

[2] Shane Legg & Marcus Hutter, Universal Intelligence: A Definition of Machine Intelligence, 2007, https://zoo.cs.yale.edu/classes/cs671/12f/12f-papers/legg+hutter-universal.pdf

Fair reward – merit or effort?

For us responsible people, it is clear that the just and fair thing is to be rewarded according to our contribution. If we try harder, work more and better, and create more value as a result, we expect to get more in return. And accordingly, if we don’t try, or we do a lousy job for whatever reason, we understand that we deserve less. An abominable contrast to this is the altruistic system, which commands that we shall not ask for more for doing a good job – in fact, we shall not ask for any reward at all. The rewards go to those in need, regardless of their contribution or how deserving they are. This system is fundamentally unjust.

While this is clear, many people hold a different view that leads to a very common conflict – and not only among philosophers. A typical objection I keep hearing is the following:

Different people have different opportunities that they cannot affect. Why should that make some people better off than others? Imagine two identical children. One is born into a rich family that provides it with a good education and raises it to be confident and successful. The other’s parents are poor and abusive; the child receives little education and grows up to be a nervous wreck. Both of them start to work and put equal effort into it, doing the best they can. Is it fair that the first receives a vastly higher wage and acclaim?
The person telling you this believes that it is fair to reward people according to their effort and not the actual value their work creates.

I have to admit that this does make sense in a way. In line with the ethics we started off with, a person should be rewarded based on what they do. So why should a person be punished or rewarded for things that are not in their power to change? Rewarding people according to their effort is indeed fair as well. Another perspective comes from the negative side. While I abhor the idea of a poor, lazy person getting someone else’s money in welfare, I similarly dislike it when an arrogant moron makes a lot of money just because he was born with a silver spoon in his mouth. Formally, he may create a lot of value, but only a tiny fraction of it can really be credited to him. I find them both undeserving.

How can that be? How can there be two conflicting definitions of a just reward at the same time?
As always – “Whenever you think you are facing a contradiction, check your premises”.
A hint comes when you need to give an answer to the person with their heartbreaking children story. Maybe you have a better one, but the best answer I can come up with is: “The world is not fair. It sometimes sucks, but we just have to live life the best we can with the cards we are dealt.” Which is a lousy answer when trying to explain what fair means.

The reason is that these two cases of justice, while talking about the same thing, are based in different worlds.
Those different starting conditions that our objector complained about are based in the real world. It is the reality we all live in, which is without values or feelings. It deals something different to everybody, every day, and there is no fairness in it whatsoever. It is what it is.
On the other hand, the justice we originally wanted applies on a higher, abstract level – the level of us humans and our ethics. It is the level where values do exist, and the one that we can choose and change. So while the conditions each of us is given are unfair, we can create fairness in how we interact.

While this explains how two seemingly colliding definitions of fairness can coexist, it does not say how to deal with violations of the latter. Unfortunately, dealing with unfair conditions is an open problem, and previous attempts to solve it have led to some top-tier catastrophes. Formally, both statements are quite clear. But the practicality of their solutions differs widely.
The problem is that the amount of “value” we can distribute is limited. We only have as much as we create. Value can’t be drawn out of thin air – regardless of what politicians often say.
Giving a fair reward proportional to the value created is straightforward. It just means exchanging value for value in a corresponding manner. Value created is distributed back proportionally and without significant issues. So the overall vision – 1) the world is not fair, deal with it; 2) but reward everybody according to their contribution – is simple, clear, consistent, and easy to implement.
On the other hand, trying to fix the unfairness of the world itself by rewarding people according to their effort is impractical. There is no way to objectively assess the effort a person is making. If somebody creates something of value to you, you don’t need to care how or why they did it – the value, for you, is objective. But knowing how hard they tried? Was it an incapable person doing their best, a very capable one slacking, or one specializing in the skill of acting out hard effort?
While we can make a personal call and pay extra to a person we know to be good, honest and trying hard, even if they did not accomplish much in the end, this can’t practically be extended to a large scale. Any attempt to do so inevitably fails on the subjective nature of effort. Moreover, since effort can’t be correctly assessed, rewarding it only creates wrong incentives – to pretend to try instead of doing actual good work – destroying value for everybody as a result.

“The world is not fair” is a poor answer, but currently the best we have. Trying to fix that on a global scale should be done with utmost caution, as such attempts have already cost hundreds of millions of lives. Until somebody figures something out (naive wishful thinking really does not count), we should stay content with playing the cards we were dealt and the rewards we deserve. Which is not that bad.

Attraction, cheating and jealousy explained

Romantic relationships are an important area of our lives that we are all familiar with. And yet many aspects surrounding them are confusing, or do not seem to make any sense at all. Why does virtually everybody cheat on their partners when it clearly should not be worth it? Why do women go after rich guys who don’t respect them, while men pursue nearly any girl they see?

All of this can be clarified if approached systematically and from its logical foundations, instead of the usual points of view. Traditionally it is analyzed either through our very subjective experience or through common knowledge that is heavily burdened by cultural traditions.

This overview starts from nothing but trivial axioms that are the foundation of any life, plus the basic biological differences between men and women (yes, they do exist). From there, through logical steps, comparing the men’s and women’s perspectives, it builds up all the way to our real-life issues.

Before I start, I would like to clarify a few points to avoid some common misconceptions.

The first point is that there is a major divide within our minds. A large part of what happens inside us, and of what runs our decisions, consists of hard-wired, genetically preprogrammed instincts. The other part of us is our aware consciousness – the part that we perceive as “us”. The conscious part of our minds believes that it alone is the decision maker, while in fact it is mostly doing the bidding of the unconscious instincts. That is all fine, as long as we are aware of it. From my experience, some people are aware, and some are not at all. But more importantly, many reject the notion, as it requires accepting that we do not have as much control as we would like to believe.
This article taps a lot into these hard-wired areas of us, so at least accepting the possibility that they exist is a necessary requirement for understanding it.

Another important point is the effect of culture, which is very strong but evolutionarily very recent. We evolved over millions of years under rather stable conditions. Only over the last couple of thousand years did the conditions start to change drastically, with the emergence of larger and larger societies and their cultures. Even more drastic changes have been brought by technologies in recent history, the timespan of which is basically nothing in comparison. The way the world we know looks, and what we take for granted, is one thing. What conditions we are hard-wired for is quite another. These two origins of our conditioning often clash drastically, as we have all experienced.

A striking example is contraception. Technically, it is a game changer: we can now have sex with anyone without having children as a result. Technically, cheating should therefore be totally fine. That may be true, to some degree, on the level of reason. But reason can hardly change how we feel about things, and contraception was not around when our instincts, and the ensuing emotions, evolved.

In short – if during reading you find something outrageous or plain stupid, please try to recall the two points above, which may help you see it in a new light.


So let’s get started. Throughout the following table, within each question the men’s side is given first and the women’s second, labeled Men and Women.
The main axiom: the main goal of all living things is to spread their genes (that means also similar genes)
That is how nature made us and it is shared by all of life.
Reason is that anything alive that did not try their best to spread got extinct. Only those who tried their hardest made it through billions of years of evolution to this day.
Strategies for spreading of genes
There are 4 main ways to do it:

1. Make children in large number (quantity)

2. Make children with good genes = high ability to spread their own genes (quality)
Children only help spread our genes as much as they spread themselves.

3. Raise and support one’s own offspring, increasing their chances (care)

4. Support other individuals with similar genes
The more similar the better – that is why we prefer to help (even sacrifice for) close family over distant family, our country over another one, the human species over cattle.
No. 1 and 2 are the most effective, no. 3 varies, no. 4 is out of our scope now.
What does it take to do no. 1 – make a child?
Men: A few enjoyable minutes.
Women: A few sometimes enjoyable minutes + about two years of pregnancy and feeding + a lot of nutrients + a large chance of death.
What are the costs of no. 2 – finding a partner with good genes?
This is complicated and differs in time and culture. But both women and men compete for the best partners (in entirely different ways) and in the end the total effort is comparable.
What are the costs of no. 3 – supporting a child?
About the same and quite high for either, BUT:
Men: He does not know the child is his, making the effort less appealing.
Women: She knows 100% the child is hers.
What is the potential in no. 1 – quantity?
Men: Virtually unlimited.
Women: ~10.
What is the potential in no. 2 – quality?
About the same. Offspring gender is random and both can choose partners.
What is the potential in no. 3 – care?
Men: Stronger in protection/providing.
Women: Necessary in the first months of life.
This difference in care approach is unimportant right now. Both parents are important for a child’s survival. But effectively the cost is higher for a man, as the child does not need to be his.
The more certain he is of his fatherhood, the more worthwhile the effort is. The ratio of children fathered by another man is not well known; estimates range between 4% and 30%.
So what is the most effective strategy?
Men: No. 1 – quantity.
Because it is just so easy.

So easy that just going around and having sex, never caring what happens to the woman or child, is an effective strategy. Even if 9 out of 10 die, he can still have many children. Raising a child is only a second option.
Note that effective does not mean easy. Only a few men are able to pull this off successfully.
Women: No. 2 – quality + no. 3 – care.
Because of the cost, and because no. 1 is not an option.

Having a child is extremely expensive and dangerous. Therefore a big investment into choosing a sex partner with good genes is worthwhile, and so is an effort of any size to keep the child alive and well.
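A back-of-the-envelope calculation may make the asymmetry clearer. All numbers below are made up purely for illustration:

```python
# Back-of-the-envelope arithmetic behind the strategy split.
# Every number here is an illustrative assumption, not data.

# Men, strategy no. 1 (quantity): cheap attempts, no care given.
partners = 50                  # hypothetical number of partners
survival_without_care = 0.1    # "even if 9 out of 10 die..."
print(partners * survival_without_care)  # -> 5.0 expected surviving children

# Women, strategies no. 2 + 3 (quality + care): a hard cap of ~10 births,
# so each child's genes and survival are worth a large investment.
births = 10
survival_with_care = 0.8
print(births * survival_with_care)       # -> 8.0 expected surviving children
```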
Choosing a partner
What partner to choose?
There are in fact two kinds of partners. One is a sex partner, to produce a child carrying one’s own and the partner’s genes. The other is a long-term partner for living – supporting each other and raising children. As a result, different things are expected of them.
Necessary side note – what are the strengths of the genders?
Men: Equipped for combat, hunting, construction.
Generally whatever is needed for survival. Men are not important for reproduction (!) – one is enough for a whole town. Which is why they are good for combat and hunting: they are expendable.
Women: Can bear children, communication.
The ability to bear children is critical for reproduction and the fate of any society. Any group can only have as many children (= gene spreading) as it has women.
Communication maintains society.
Why need a partner for sex?
Men: To make children.
Women: To make children and to get means.
Men hold the survival means – offering them the chance of a child is a way to get those means.
Side note, for the complete picture – why try in life at all?
Men: To be able to get, and provide for, women.
It is the only way to have children. This is why men are so competitive, not only over women but over everything. Not winning any woman means a dead end for his genes.
Women: Do not have to try.
They only need to find some guy. Men are happy to provide in exchange for the chance to have a child. Women are only competitive over men – their genes and means.
What partner to choose for sex = genes?
Men: Does not matter. The more women the better.

Little cost and much to gain – why limit oneself?

This is why men would have sex with about anybody.
Women: One that gives the best genes possible.
I.e. one making children that will be most successful in spreading further. This is why women go for bad boys – they are shit, but bound to have many children (and may pass that trait to her male offspring). And this is why women go for whomever many women want: they want their sons to also be wanted by many women.
How difficult is it to find a partner to make a child?
Men: Mostly very difficult.
Because women go for quality (are picky) and are scarce. Top men get many, others get none in their whole life. Wars are, in large part, fought over women.
Women: Zero effort if not picky; high to find the best.
Because men go for quantity.
Any effort goes into choosing the best one.
Being “hard to get” and provoking men into competition are some of the ways to find the best.
Why need a partner for living?
Men:
To be able to ensure the children are his
To be able to have any sex at all
To utilize the time between the one-time sexes
To be able to effectively pass on his power and wealth
To maintain the home
Not having one is a viable strategy.
Women:
To get safety for herself and her children
To get other means
To pass her partner’s power and wealth onto her children
Historically, a woman without a man’s protection would die or get kidnapped, her children killed or enslaved.
What partner to choose for living?
Men: One that can bear/raise children.
One that can best reassure him that he will be the father of the children and will not be wasting his efforts.
This is why young age, virginity and chastity are so sought after.
Women: One with the largest means to provide for the safety and needs of her and her children, and to pass those means on to the children.
Age is not important. Power is.
A wealthy, powerful son is a nuke of gene spreading.
So who is attractive?
Attractive is exactly what fulfills the needs above.
Nature programmed us to like that which will help us spread.
Cheating
Today it is mostly called “cheating” (let’s ignore for now that it can be allowed by the partners), and that is what we will focus on from here.
Historically, it often worked differently – men having harems, or women being shared by tribes, for example. While these arrangements are important in general, they are not relevant for our purposes now.
Why “cheat” on the long-term (for life) partner?
Men: To have many low-effort children.
Remember, having sex with as many women as possible is the top strategy for men.
Women: To get the best genes possible for her child.
Her long-term partner is almost certainly (99.9%) not the best guy out there – many women need a partner and few men are at the top. But she can still cheat and give her children the good genes.
What makes the “cheating” bad? How does it harm the long-term partner?
Men (when he cheats):
He diverts some of his wealth and attention that could be hers or her children’s.
If things go wrong, he might have to provide for another child.
There is a chance he will leave her for the other woman.
Women (when she cheats):
Her goal is to bring another man’s child for her long-term partner to take care of, inflicting a significant cost (and a lie) on him.
There is a good chance she will leave for a “better” guy if feasible.
What makes the “cheating” ok?
Men: He goes for quantity -> does not want to spend more than he has to. He just goes and comes back and does not want to care.
Since “quality” matters less to him, he is less likely to leave his partner.
Women: Nothing, really.
How “bad” is the “cheating” overall?
Men: Somewhat.
This is why polygamy is common and why women often overlook cheating.
Women: Very.
This is why, in some places, women get executed for cheating – and so are the men who cheat with married women.
Why are they jealous?
To detect and prevent the negatives outlined above. The jealousy is proportional to the threat – a lot higher for men.
That is why a man goes and kills the offending man, but not the woman – he still needs her to make children.

What does this all mean for us?

Things are not pretty. Nature did not program us to be nice and honest and fair, but to do things that hurt our partners, and ourselves too, for the sake of spreading our own genes. I am not advocating for any of this, nor do I like it. But it is a reality that is not going away anytime soon.

Not closing our eyes, and being aware of how things work, is a good start for dealing with it though.

“Fixing” it

“Fixing” it is very hard. Sure, contraception can wipe out the real consequences. But it can’t change how we feel about it. These instincts are ancient and important and form the roots of who we are. They are behind some of our deepest emotions, such as love, and what we like and dislike. Keep in mind that this article is only an outline and does not capture the whole complexity of the subject, nor the countless ways it connects to other parts of our minds and bodies – affecting everything in our lives. 

I will not try to propose solutions here; I will only give a warning. Trying to “fix” some of it – how we feel – can hardly be done without side effects and can be dangerous. For instance, many people now believe jealousy is wrong and try to remove it from their personality (so that they can have sex without limits). But I don’t think that can be done without a real danger of changing and damaging other parts of us, such as the way we love. So try it with caution.

What we can safely do, on the other hand, are two things. One is to be more understanding of others. While these instincts often work from bad reasons, people are usually not at all aware of them and just follow their programming. The urge to cheat is no more wrong than jealousy, or love – all exist for the same reason. Having these urges and emotions is not wrong; it is who we are. Whether and how we act upon them is quite another thing. The other is awareness of ourselves: once aware of our actions and the reasons behind them, we gain new power, and also full responsibility – and from there we can hold others and ourselves accountable.

Final words

While this article focuses only on sexuality, the outlined differences between the genders have much wider implications, affecting everything we do. I will not go deeper than I already have. If your only takeaway is that men and women are not the same – contrary to the popular claim these days – it will already be a big step towards avoiding very costly mistakes.

Origins of bread queues of communism

It goes about like this.

The (working class) people: “The bread is expensive, them baker exploiters are overpricing it!”

The (communist party) government: “No worries, we are working on it.”

Government: Sends a couple of bakers to the uranium mines and sets an official bread price, mandatory for everybody, five times cheaper than before.

People: “Yay! Serves them well. Now we can have the cheap bread we are entitled to.”

Bakers: “Da fuk. It’s impossible to make bread this cheap. We can’t bake it out of thin air. We have to pay the workers, pay for the flour and feed our families!”

Government: “Stop being selfish, help your fellow comrades in need.”

Bakers: “Right. Fuck that, let’s make bread rolls.”

People: “Oy, secret police, them bakers ain’t making our bread!”

Government: Sends more bakers to the uranium mines and fixes the prices of all food. “You better have bread next time.”

Bakers: “What can we do, what can we do.” Bake bread as before, but offer just the minimum amount they must at the government price and sell the rest under the counter – black market style.

People: Standing in line from 4am to get the low supply cheap bread. “Finally we have our bread, bless the communist party.”

Bakers: “Wait a minute! Millers! How come you are not doing your part in the plan for better tomorrows? We demand the flour be 5x cheaper too!”

Millers: “Fuck.”

CFS/ME explained by a geek

CFS in short

Chronic fatigue syndrome is a very difficult disease to understand and navigate. With the flu, you just stay under a blanket for two weeks and that’s it. Not so with CFS. It seems to keep changing form, sometimes improving and other times going straight to hell for no reason at all. It can take weeks to months just to assess what effect an external factor (treatment, exercise, diet, …) had. But there is a system under all of that.
It took me over three years to figure it out, and now I can share it with you, along with a few recommendations. I only have experience with a rather light form of it, so I am not sure whether this applies to worse cases. But from what I read from others, I don’t see why it would not. In either case, let me know.

Being a geek, I will approach it quite technically – which adds a lot of clarity but maybe also confusion for some people. Imagine it is a computer game (might as well be!) and it should be fine.

Stats to follow

The whole progress of CFS is an interaction of three parameters of the body – long-term fitness, sickness level, and current fatigue.

The first two track progress, and the third is what runs the game. Their interaction is what it is all about. Don’t mind the missing details now; all will be explained later.


I) The first one is the overall fitness. Of the three, it is the most stable and changes very slowly. It forms the baseline from which the game is played.

II) The second is the sickness level. It goes from feeling normal (which is about as great as Christmas) to being hardly able to move, braindead, in bed. Most of the time it is somewhere in between – feeling shitty, with a mind fog that makes thinking and concentration pretty difficult, and thinking twice about any kind of activity.

III) The third part is a meter (a queue, to be exact) of muscle fatigue. Mental exhaustion affects it too, but muscles are what it is mainly about. Any kind of physical exertion adds on top of it, and it slowly empties over time. And there is a marker on the meter, which is really important: keeping the cumulative fatigue under the mark means things are going fine, while going over leads to a quick and painful relapse.

How it ties together

I) The fitness sets the top point one returns to after being acutely sick and working back up to a “normal” state. Better overall fitness means more space to navigate through – getting more sick does not have to be so limiting, and more activity is possible when getting better again.

You can increase your fitness very slowly by careful, measured exercise, and only when not sick – which, unfortunately, is the smaller portion of the time. The rest of the time it will slowly deteriorate.

II) The sickness level is not as simple as it may seem. As a general rule, it is hard and slow to improve, but it can jump from ok to hell in a snap of the fingers.


It has three different stages. There is the sick stage, which means simply being more or less sick – feeling shitty, a beehive in the head, you name it – and trying to do as little as possible in order to get out of it asap, which takes days, weeks, months…

Then there is an intermediate stage – this one is really tricky and a frequent downfall. After a long time of inactivity you finally feel ok, but in fact you are not. You want to start being active again, but the body cannot handle it and plummets right back into the sick stage. I still haven’t figured out how to know when this phase is over, other than by very careful probing and waiting.

Only after succeeding in this patience trial does one get into the good stage, where muscles regenerate well enough that exercise and improvement are possible.

III) The fatigue meter. Correct fatigue control is the key. It is very tricky because fatigue is cumulative and the body gives false signals about it. Any kind of exertion adds to the meter, and the fatigue slowly dissipates as the body regenerates – at a speed that depends on the sickness level. So while being sick, leg muscles get tired from a stupid walk around the block and can take a week to recover, while in a very good state the recovery can be close to normal.

The fatigue meter has this critical threshold. As long as you are keeping the fatigue below this threshold, you are fine and can work on getting better. But crossing it means a quick relapse right down into the sick stage (on the sickness meter), often undoing months of previous patient progress. 

As if that were not enough, there are two complications.

One is that there is no way (that I know of) to tell where that threshold currently is. Sometimes I can trek outdoors for a couple of days and be ok, while another time running after a tram is all it takes to cross it. Generally, the lower the fitness (I) and the worse the sickness (II), the lower the threshold. But reality is complicated beyond my, and probably the general, understanding.

The other issue is that the fatigue level itself is very obscure – often you don’t feel it. You can do an exercise, feel totally fine the next day, and the next day too, then do another exercise on the fourth day – they add up and you are fucked right there.
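Being a geek, I can restate the whole model as a toy simulation. Every name and number below is an illustrative assumption of mine, not a measurement – but it captures the interactions described above, including the delayed ambush of cumulative fatigue:

```python
# Toy simulation of the three-stat CFS model described above.
# All names and numbers are illustrative assumptions, not medical data.

class CfsModel:
    def __init__(self, fitness=5.0, sickness=3.0):
        self.fitness = fitness    # I) long-term baseline, changes very slowly
        self.sickness = sickness  # II) 0 = feeling normal, 10 = bedridden
        self.fatigue = 0.0        # III) the cumulative fatigue meter

    @property
    def threshold(self):
        # The critical mark on the meter. Unknowable in reality; here we
        # just assume it falls with lower fitness and higher sickness.
        return max(0.5, self.fitness - 0.5 * self.sickness)

    def day(self, exertion=0.0):
        """Advance one day with a given amount of physical exertion."""
        self.fatigue += exertion
        # Regeneration speed depends on the sickness level.
        self.fatigue = max(0.0, self.fatigue - 1.0 / (1.0 + self.sickness))
        if self.fatigue > self.threshold:
            # Crossing the mark: quick relapse into the sick stage,
            # undoing months of patient progress.
            self.sickness = 9.0
        elif self.sickness > 0:
            self.sickness -= 0.05  # otherwise improve, slowly

me = CfsModel()
for d in range(12):
    # Exercising every third day while feeling "fine" in between:
    # the meter quietly adds up until it crosses the threshold.
    me.day(exertion=2.0 if d % 3 == 0 else 0.0)
    print(f"day {d}: fatigue {me.fatigue:.1f}/{me.threshold:.1f}, "
          f"sickness {me.sickness:.2f}")
```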

So how to know when it is finally safe to be active?

The key here is to be as careful and pessimistic as possible. People say “listen to your body”, but in this case the body can’t be relied on at all. Primarily, keep conscious track of any activity, imagine the meter, and don’t add any more until you know that enough time has passed for the fatigue to dissipate. So if you exert yourself and feel great the next day – no. It is a lie. Don’t fall for it.

The whole sentence is “listen to your body for a no”. If anything does not feel right, take the safe option.

Now, to not be entirely pessimistic here, I have to say that for me there is one very specific feeling in my muscles that seems to genuinely signal that they are regenerating and that I should soon be able to work out again. But it has been a slow learning process in which I have misjudged it many times, so I simply can’t recommend anything other than maximal caution.

Hope at the end

Even though I have learned and understood all these things, I was not able to make them work for myself, and things kept slowly and steadily going downhill, to the point where it started looking hopeless a couple of months ago. But then a specific exercise method was recommended to me that has apparently helped many CFS people get back to a normal life, and so far it is doing miracles for me. It is the Wim Hof method. In short, it is a combination of breathing exercises and exposure to cold, along with some yoga, though I don’t think that part is essential. The idea is that just as crossing the fatigue threshold throws the body into the sick state, this exercise is able to kick it right back into the good state, and keep it there. That fits the current shaman-level understanding of CFS, which is that it is a sort of safe mode, a “hibernation” of the body, tied to poor oxygen utilisation. The breathing/cold combo then kicks the body into a “ready” state, able to face the harshness of the natural environment. Or maybe it does something entirely different, who knows. It seems to work only for some people, and again, there is no saying whether that is due to different causes behind CFS, differences between people, doing something wrong… But it works for me (so far), so in my opinion it’s worth a shot.

Understanding general statements

How fights start

Over and over I hear exchanges along these lines:
A: *General statement about something* (e.g. “Asians are smart”)
B: “How can you say that is true for ALL? There are exceptions!”* – basically declaring A’s whole statement invalid, often turning into an argument and accusations of racism.
This is the template of almost any attempt at a productive discussion on controversial subjects, especially in the US. A lot of misunderstandings and social conflict could be avoided if only people better understood what general statements actually mean.

There are two reasons why B’s interpretation is wrong (in the end, they amount to the same thing):

1) The interpretation of A’s general statement to apply to all/everyone is wrong.
2) Exceptions cannot make rules.

Interpreting general statements

By pure logic, “Asians are smart” is indeed wrong. But we are dealing with the real world, and the issue is one of language conventions rather than logic. Still, it takes some logical thinking to understand the conventions (or a lack of it not to). “Asians are smart” can be understood in two different ways – only one makes sense, but the other is often the result.

It is obvious that not every single Asian is smart. Unless I am seriously mentally impaired, I obviously know that. So why would anybody assume that when I say “Asians are smart”, I mean that every single one is?

Although logically correct, this interpretation makes no sense in the vast majority of real cases and is useless. Therefore, another interpretation should be used instead, one that is meaningful and useful. That interpretation is that the statement is true for a significant part, or statistically**.

To show it on the example: we can assume that the general statement (“Asians are smart”) does not mean it is true for every instance (“every single Asian is smart”), as B did, as that is detached from reality. Instead it means that it is true for a significant part (“most Asians are smart”, or “on average, Asians are smart”, or “the ratio of smart people is higher among Asians than in some other group”) – which is the meaning that A intended.
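For illustration, the difference between the two readings can be written down explicitly. This is just a sketch with made-up data – the point is only that the universal reading fails on a single exception, while the statistical reading does not:

```python
# Two readings of a general statement "group G is X", on invented data.
group    = [True, True, True, False, True]   # is each member "X"?
baseline = [True, False, False, True, False] # some comparison group

# B's reading: true for every single instance. Almost always false
# in the real world, so nearly useless.
universal = all(group)

# The intended reading: the statistic is higher than the baseline.
statistical = sum(group) / len(group) > sum(baseline) / len(baseline)

print(universal)    # False -- a single exception kills it
print(statistical)  # True  -- the meaningful claim survives exceptions
```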

Sometimes we really do want to say something about every single instance. But in that case we can say it explicitly – “All Asians are smart”. And even in many such cases we can assume that the person is just intentionally exaggerating. It is all about trying to understand.

Exceptions do not make rules

Point 2) of B’s wrongs is simpler, but maybe even more important. Let’s use another example – “Dogs have four legs”. That is something we generally accept. But then some B comes and says “No way, I have seen a dog that had an accident and has 3 legs. So dogs have three or four legs” … or any other number they identify with. And from now on, you will get arrested for saying they have four.
By pure logic it is true that we can’t say “dogs have four legs”, as a single exception is enough to invalidate a general statement. But that is not very helpful for our daily life – which is the argument I already made with no. 1).
The important angle here is that because of the exception of a 3-legged dog, we shouldn’t alter and destroy a helpful rule and start saying that dogs have three to four legs. Even though it would be more correct, it would harm our everyday life. Just imagine the confused children.
Every rule has exceptions. That is a part of what real-world rules are. Exceptions do not make rules, they underline them. Dogs don’t have three legs and people do not have 129 genders. Yet, in Canada, you can get arrested for claiming there are only two.

So to sum it up – when somebody makes a general statement, they most likely do not mean all/everybody. Just try to be positive – first try to understand what they mean and what makes sense***. People usually mean well. That can prevent a lot of bad things from happening.

————————————————-
* The reaction seems to depend on what the general statement is about, though. Saying “Africans have lower average IQ” is all but guaranteed to provoke the response “You racist, how can you say that ALL Africans have low IQ?” at the least – while saying “ALL white males are privileged Trump-voting racist sexual predators” seems to be fine.

** It does not even have to be a majority. With “fish live in water” we mean pretty much all of them. But “driving is dangerous” does not mean we have an accident on most drives, only that the danger is statistically higher than for some other activity.

*** Applies even to stupid people. They can mean it the wrong way, but everybody should get a chance first.

Killing should not be easy

Should machines be allowed to make life and death decisions? With technologies already up to the task, this is a pressing question, but not an easy one.

Although there is strong opposition from the scientific community, the force seems to be on the proponents’ side. Not only do the weapon manufacturers hold virtually unlimited resources and enjoy the backing of their governments, they have pretty strong arguments on their side as well. At least at first glance, that is.

Arguments for and against autonomous weapons

The resistance is natural, as killing machines go against our basic instincts. We are frightened by the image of machines that can kill us – without feelings, without a chance to read them, predict them, negotiate with them. It is a combination of helplessness and fear of the unknown. The way people put this into words is by saying that the decision to kill people should be left to people, for they are restrained by compassion and human goodness. Allowing machines to kill would mean more deaths, as these restraints would not exist.

But the proponents argue that allowing machines to make the decisions would actually lead to fewer deaths, and especially eliminate the unwanted ones. Machines are more accurate and effective. But the main reason is the same one the opponents use – machines have no emotions. No anger, no killing sprees, no hatred. Machines will not kill anyone they are not supposed to kill. These arguments are correct. Autonomous weapons would indeed make killing more accurate and safe. But the proponents are wrong about the consequences.

Why is it wrong?

Making killing more accurate and safe means making it easier, and that is not a good thing. Nowadays, ordering a kill strike carries a lot of risk and responsibility. The decision makers need to think twice before they take the risk of the mission not going perfectly right – having to carry the weight of civilian deaths, having to sweep it under the rug, or even worse, being exposed in the media. Because of these risks and occasional accidents, strikes are being questioned – by the public, the decision makers, as well as those who pull the trigger and have to live with it.

On the other hand, imagine that ordering a kill has no risks whatsoever. The public is already convinced that nothing “bad” (i.e. no unintended deaths) can happen, the decision makers are free of the civilian death nightmare, and those pulling the trigger feel nothing at all – they are machines. Targeted killing would become a simple, effortless routine, an easy universal solution used in many places where it was unthinkable before. Because of the general perception of it being safe and moral, there will be no interest from the public and journalists anymore, no scrutiny, no raised eyebrows. The result will not be increased safety, as the proponents say, but wide abuse of targeted automated killing to remove whoever is inconvenient. Because, why not, when it is so easy?

So while the arguments for autonomous killing machines are safety and fewer unintended casualties, the actual result will be a large increase in intentional casualties, with accidental deaths of bystanding civilians being replaced by intended deaths of inconvenient ones.

Therefore, killing should not be easy, and autonomous weapons are not a good thing.

 

How not to lose the AI race before it even begins

Foreword

This strategic analysis was originally written as a submission to GoodAI’s General AI Challenge. I give my thanks to GoodAI for making me put my thoughts on paper.

The following should be read by anyone participating in AI strategy research and policy formulation. While the text contains ideas already covered elsewhere (see the FAQ for the reason), other parts explain why solutions that are generally proposed and accepted are actually wrong and utterly dangerous.

The original is available in PDF; I recommend it for better formatting.

FAQ:

Where are scientific references?

I wrote the outline of this analysis prior to reading any text about general AI (with the exception of the old wait-but-why blog overview). Therefore most of the ideas in this text are originally mine. Where I am aware of others’ work, I credit it and do my best not to leave anyone out. I am not interested in the publication points game though, so you won’t find the usual list.


Introduction

A new power is emerging that overshadows everything we know. Because it will be orders of magnitude more intelligent than us, we cannot imagine its potential or motivations. In short, if an unconstrained, recursively improving AI is created, we will be at its mercy, with no way to estimate the outcome. But even if the worst scenario is avoided, other dire dangers exist on the way. Fortunately, effort is being made to avoid the grim scenarios in favour of more desirable ones. This work presents an analysis of some of the key aspects surrounding the issue and proposes one specific strategy as a possible solution.

First, I will briefly lay out the philosophical background of the issue. After that, I will describe some specifics of AI development and its capabilities. Although most of this part has been well covered by other authors, some takes might still be original. In the next section I will categorize and evaluate the participants of the AGI research race. That will altogether lay the foundations for building and evaluating four possible strategies. Two strategies are unrealistic but provide a good reference. The third one is highly likely and dangerous. The last one has the potential to secure a favourable outcome.

Our main goal

I do not want to go too deep into philosophy. It is a critical part of the issue, but it is too large for the scope of this work. I will only sum it up in the following paragraph.

The following summary is harsh and contradicts the beliefs of most people. Unfortunately, those beliefs are the result of self-deception, and with our extinction at hand, there is no place for self-deception here.

Contrary to popular belief, the interest of us humans is only to spread limitlessly, with anything else being secondary. The “secondary” includes our happiness and individual survival, as well as the well-being of anything else in the universe. We are born to spread and to exploit anything that stands in our way. All the other values – be it religions, respect for life and the rights of others, preservation of nature… – are our artificial inventions that are nothing but means to the first objective. In other words – any such values are quickly forgotten once our children are in danger.

This brief summary serves two objectives. One is to understand what it is that we really want, the other to understand what we do not.

Somebody proposed that we can be content with allowing the AI to wipe out humankind, as long as it carries over the human values. The problem is, there are no human values to speak of and our survival is what we want. So this is not an option.

From a universal point of view, the survival of humankind is by no means necessary, and no one would (be left to) mind if the human race suddenly disappeared. However, if we accept this as a reasonable option, then any effort is meaningless – including this work itself. Therefore, the rest of this analysis assumes that our survival is the main goal.

Human goals other than that are a complicated issue with no simple answer. They are not critical right now, and I will not attempt to solve them here.

The issue at hand

A lot has previously been written about the potential benefits and dangers of a general AI1 – AGI. So, in order not to repeat it, I am only going to list some of the aspects that are important for the later deliberations.

The claim is that the AGI would be able to solve pretty much all the problems humanity has. Let’s examine this claim from an individual perspective.

Ok, so when all problems get solved, what then? And how does solving humanity’s problems benefit me (anyone can ask), especially when I want to come out ahead of other people? Unfortunately, however great it sounds, helping humanity is not that motivating for most people. The motivations that by far prevail are those of smaller groups or individuals. So while all the breakthroughs in science, medicine etc. are great, they will play an inferior role in the race dynamics. The race is not against time, but against other people with a more focused interest. Therefore, more tangible benefits should be the focus, leaving the “grand goals” in the background.

About super intelligent AI

For most of this work, I will be considering an AI that is only moderately intelligent, perhaps a bit over the human level. Although an AI orders of magnitude more intelligent than people is, through recursive self-improvement, very feasible, it is a case that, in my opinion, is not worth much attention. The reason is that it is totally futile to try to understand the capabilities and motivations of such an entity, and the outcome is, therefore, out of our hands. Our instinct is to say “Ok, so it will have this information about the world, there are some values XY we gave it, so it should rationally arrive at such and such conclusions.” But this approach has critical problems.

For one, philosophy has the inconvenient property that a tiny change in initial assumptions (or their understanding) leads to completely different results. Just consider how much, and how many times, one’s whole world and values change during a single life, while we, supposedly, share some common human values. Assuming that everything changes with every doubling of IQ would be a very safe assumption. Under that assumption, an AI 1000x more intelligent than us (whatever that means) can’t be predicted.

The other problem is that we are assuming that logic itself will work the same way, but that is likely not the case, especially when we already know that the currently used logical framework has its issues and limitations.
For the same reasons we can’t presume to understand such an entity, we can by no means expect to be able to control it. I am aware that I am invalidating the subject of many people’s work. But I am being realistic, as a hamster would be if it decided to go looking for food instead of wasting time trying to understand people.

The conclusion is – if we can’t control it and we can’t assess its motivation, the outcome is virtually random and not worth consideration – except for trying to avoid it altogether.

Reasonably intelligent AI and its appeal

An AGI, provided it ends up under control, can provide great benefits even if it is just moderately intelligent but possesses large computational resources and speed. Basically, imagine a very smart person with unlimited, perfect memory and years of time inside every minute. This case is the most interesting one, as, unlike the superintelligent AI, we can reasonably attempt to control it.

There are many benefits that could come out of it. I will only list a few – the main general benefits, and then the capabilities that could spark the highest interest in the minds of people wishing to control such an AI.

General benefits

Science

  • Breakthroughs in all science disciplines
  • Progress in philosophy

Labour

  • Replace most or all human labour

Solve popular issues

  • Ecology
  • Poverty
  • Space colonisation

“Power” benefits

Production

  • Unlimited energy (for the time being)
  • Automatic manufacturing

Efficiency

  • Manufacturing
  • Logistics
  • Energy production and distribution

Weapons

  • New weapons
  • Efficient battlefield control

Biology

  • Extend life
  • Body/brain enhancements = superpowers

Surveillance

  • Automatic real time surveillance using existing resources

Psychology

  • Understanding people – personality, motivation, values
  • Predicting people
  • Manipulation
  • Brain hacking, mind control

Data mining

  • Understand and utilize online data
    • USA collects most of the internet traffic
  • Know all about individual people and predict them
    • Elimination of potentially dangerous people well ahead of time
  • New insights into history and policies
    • Deduce other parties’ secrets

Hacking

  • Parallel work and “connecting the dots” to eventually access majority of devices
  • Control over resources, production, weapons, …
  • Control over communications – paralysing, misleading or controlling any resistance

These and many other capabilities pose a huge temptation for anyone who seeks influence or other personal satisfaction – either for power itself or to change the world to fit their image.

Next, I will list the typical parties with the highest interest and potential in AI research, along with their specifics and dangers – then I will arrive at an ordering by the danger they pose, which will be useful for the scenario evaluations.

Who can invent AGI

AGI development can take many forms, and since we do not even know how it can be done, many scenarios seem possible. It may come out of a large, expensive and focused research effort, or as a good idea of one bright mind in a dark cellar. Nor is it easy to say which ways and outcomes are better than others, because what matters most is the motivation of the people in control, which can be good or bad in any setting.

Initially, I will order the parties by their size. Because of a network of often mutual influence, other groupings become unclear. These connections will be roughly described too. The final result, though, will be an ordering by their dangerousness2 if they succeed in creating (and controlling) an AGI.

An independent individual researcher / small independent team

Because of the minimal size, this research effort could be impossible to detect, and the motivation behind it can be anything. Unpredictability does not mean it is bad, though, as some control-seeking people would say. An individual still has a better chance of pursuing a good motive than some other groups that are inherently power hungry. The success of an individual seems unlikely compared to a large research group, but since one good idea can be the cornerstone of the research, one smart or lucky individual can be all it takes.

+ Good chance of good motivation
+ Outside of influence of power groups
+ Rather smaller chance of success
– Motivation is highly unpredictable
– Likely limited expertise in safety areas and low budget for it
– Possibly insufficient regard for the danger
– Close to impossible oversight
– If discovered, it can easily be acquired/controlled

An ideological group (cult, religion)

A common characteristic of these groups is that they are founded on some made-up, unrealistic premise, which can lead to very bizarre aims. Even the more reasonable cults believe in the return of some savior. But, from history, we know examples of the really crazy ones that would seek to destroy humankind in order to save it from one ailment or another3.

– Being out of reality, their motivation is principally wrong
– May have no regard for human life
– Can have resources
– Closed and secretive
+ Not very focused. They need to convince followers, not so much to actually do it
– Some can be incredibly focused though

A private, hidden research effort

This is a case where a (wealthy) individual runs a private research initiative for their own ends. Their motivation will likely be one of two kinds.
A personal benefit – perhaps power/influence, getting superpowers, or fulfilling other personal goals or dreams.
Or the research can be run with genuinely good goals but kept private for safety reasons.

+ Good chance of good motivation
– Quite possibly bad motivation
+ Outside of influence of power groups
+ Mediocre chance of success
– Lower importance of safety

Privately funded public research initiative

The general direction of the effort would be dictated by the owner but would be kept within the limits of public scrutiny. Therefore, its goals would need to be on the good side, including safety considerations. While, in the case of success, the technology could be used for the purposes of the owner, that is not very likely – because if that were the owner’s plan, he or she would have chosen the path of secret research instead.

+ Good motivation
+ Regard for safety
~ Possibly sufficient resources
+ Reasonable chance to succeed
+ Weak influence of power groups

State funded public research initiative

This research effort might look very much like the previous one, except for two differences. One is that the direction of the research would not be as clear as when it is given by an owner. The other is that if it succeeds (or gets close to success), it will be easy for the sponsoring state to appropriate the research through some of its power sections (military, intelligence, …) – which is also very likely.

+ Good motivation, initially
+ Regard for safety
+ Sufficient resources
+ Reasonable chance to succeed
– Almost certain to be eventually grabbed by the state for its private needs

University research

Nowadays, this is the most common mode of research, because of the concentration of expertise and cheap money. A possible weakness is the lack of a goal. Universities do research for its own sake, but they do not plan what to do with the result. That would again likely be dictated by the sponsoring state, which could use the resulting AI for its own needs.

~ No goal
+ Regard for safety
+ Sufficient resources
+ Good chance to succeed
– Almost certain to be eventually grabbed by the state for its private needs

A large business / corporation

The issue with corporations is their unclear governance. Whose goals do they follow? The shareholders’? The board’s? The thousands of employees’? The state they cooperate with? The customers of their products? This ambiguity and complexity is dangerous. Many parties can influence the direction of the research and possibly utilize the outcomes while staying obscured. This is compounded by the fact that the research can be kept entirely out of sight and scrutiny. While private ownership is generally a good thing, overly large businesses do not really fall into that category anymore. A striking example is Google, with its large AI research and its deep interconnection with the US military.

– Unclear ownership
– Unclear direction, goal, decision making
– Difficult oversight – has means to both hide and protect the research
– Low regard for safety (again, because of the unclear direction)
+ Huge resources
+ Good chance to succeed
– Results are likely to be used by some of its power seeking stakeholders

Research run by a state / state agency

This scenario is realistic and dangerous. States are rarely known for being honest and transparent. In fact, they have been responsible for all the largest massacres and monstrosities throughout history. The people in power that the state represents (whoever that is – by no means limited to public figures) possess a terrible combination of enormous power and close to zero accountability. The one thing states can be relied upon to do is anything to obtain more power4.

– Power-seeking out of principle – worst motivation
– Proven track record of worst behavior
~ Unlimited resources
~ Large chance to succeed – can steal research from the other groups
– No accountability
– Impossible oversight – means for secrecy

The ordering of AGI research entities by danger

Three main aspects affect the dangerousness of a research entity category:

1) Motivation. Generally, we can say that the wider the “audience”, the safer and more predictable the motivation is. So being public gives plus points.

A more important aspect though is the inherent probability of having good or bad motivation. No entity is guaranteed to be good, but some are guaranteed evil.

2) Regard for safety. The AGI research safety is a very complicated open issue, therefore it can be expected to be costly to keep it on a high level. Some entities can’t afford it, and some just don’t care enough. That can be caused by limited knowledge, not being the one responsible, or a rational deliberation – for many even a high risk would be worth the possible winnings.

3) Chance of success. Quite clearly, an initiative with a concentration of talent, money, and focus has higher odds of success, but success is far from guaranteed. There will be a lot of competition, and perhaps a single bright idea can cut it – even one individual with a computer may be the first over the line, especially considering there can be many of them.

Here, a higher chance of success is not good or bad by itself, but becomes bad when combined with bad intentions or poor safety.
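As an illustration of how these three aspects might combine, here is a toy scoring formula in which the chance of success multiplies the two bad factors. Every number is my invented guess, not a measurement – the ranking below rests on qualitative judgment, and this sketch only mirrors it:

```python
# Toy dangerousness score. All probabilities are invented guesses;
# chance of success is neutral alone and multiplies the bad factors.

entities = {
    # name: (P(success), P(bad motivation), P(poor safety))
    "state / state agency":             (0.7, 0.9, 0.6),
    "big company / corporation":        (0.7, 0.6, 0.6),
    "ideological group / cult":         (0.35, 1.0, 1.0),
    "state funded public research":     (0.6, 0.5, 0.3),
    "university research":              (0.6, 0.5, 0.3),
    "individual / small team":          (0.2, 0.5, 0.8),
    "private hidden research":          (0.3, 0.5, 0.3),
    "privately funded public research": (0.5, 0.2, 0.2),
}

def danger(p_success, p_bad_motivation, p_poor_safety):
    # Success only matters when paired with bad motivation or poor safety.
    return p_success * (p_bad_motivation + p_poor_safety)

# Print from most to least dangerous.
for name, probs in sorted(entities.items(), key=lambda kv: -danger(*kv[1])):
    print(f"{danger(*probs):.2f}  {name}")
```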

In light of these aspects, we can finally arrive at an ordering of the research entity groups by their dangerousness. Descending, from the most dangerous to the safest:

  1. The state / state agency. With an inherently power-seeking motivation, vast resources for the effort, low transparency and the power to limit any competing influence, the state is the most dangerous entity to perform the AGI research. Due to the power of the state to acquire other entities with a chance of success by any means (with violence and propaganda in its repertoire), any other AGI research entity within the sphere of the state’s influence falls into the same category.
  2. Big company / corporation. They have a similar scale of resources as the states. Very unclear control and motivation would be dangerous by itself, but the larger they are, the more similar they are to the state, with an extensive interconnection with it.
  3. Ideology group / cult / religion. Less powerful, with perhaps a less dangerous motivation in general due to their confusion, but a strong resolve and total unpredictability put them very high on the ladder. Basically a crazy guy with a finger on the trigger – hopefully too crazy to make it work.
  4.–5. State funded public research initiative, university research. They have some differences, but the outcome is the same. Good chance to succeed, very likely to get snatched by the state if they do.
  6. Individual researcher / small independent team. Finally on the safer side with regards to motivation (~50/50, that is), we are getting to the better part of the ladder. This group is rated dangerous mainly because of the safety side, as safety can easily be underestimated or fall out of the budget.
  7. A private, hidden research effort. Same motivation chances as the previous group, but larger funding can decrease the safety issues. With the protection the secrecy provides, a hidden private group led by a sponsor with the right motivation can be the best option possible.
  8. Privately funded public research initiative. Low influence of power groups, public scrutiny and decent funding together make the best combination. Publicity provides two benefits. One is protecting their interest – providing further pressure in favour of safety and fairness. The other is proof of the good intentions of the owner, who would have chosen the secret path otherwise. A battle for independence from the state will still be tough, but there is hope.

Having this classification in place allows us to better decide which possible future scenarios are more or less favourable, by seeing which groups benefit and suffer from them.

Privately funded public research is the clear winner. In light of the above considerations, these are its key properties:

  • Private ownership
    • Because state is the alternative
  • Large resources
    • Nothing should stand in the way of maximal safety
  • Maximal independence and protection from power groups, mainly states
    • Which are the main danger
  • Wide international involvement
    • To mitigate power struggles and support fairness
  • Public and transparent
    • Community contribution and oversight for higher safety and fairness

How to deal with the AGI research race

This chapter will propose and compare four possible strategies for managing the AGI race. The list is definitely not exhaustive, and better strategies may be found. But at the very least it lays a foundation for future analysis and strategy comparison.

Priorities

Since we are dealing with realistic scenarios, I will start by specifying more concrete goals and priorities.

  1. For humankind to survive.

Many researchers, including me, are quite worried, as the end of humankind seems to be a likely outcome of AGI development.

  2. Not to end up with a much worse result than if no AGI was developed.

Such cases are again easy to imagine – it can either be someone using AGI for their bad goals, or an AGI that makes our lives much worse on its own.

  3. To actually get some benefits from the AGI.

Considerations and directions

Impacts of priority 1 – survival of humankind

An important aspect with regards to the first priority is how likely that outcome is in different scenarios. Currently, nobody has a clear answer to that. But it seems that it does not require much effort for a successful AGI researcher to slip onto one of the many paths that lead to the AGI destroying everything. On the contrary, that seems to be the likely result whenever things are not done perfectly right. And doing things perfectly right, when

  • it is a complex software project
  • in a field no one understands
  • no one even knows what the perfectly right is
  • the first try can be the last

is something even the best funded and most knowledgeable teams can’t rely on – even less so small teams or lucky individuals. In other words, the chance that a successful development of an AGI will not result in the destruction of humankind is rather slim.

With this high probability of disaster, regrettably, avoiding the creation of any AGI currently appears to be the best option, even if it means forfeiting the potential benefits. Unfortunately, the appeal of the AGI to any potential wielder of its reins, combined with development becoming easier over time for more and more people, makes this option very difficult to achieve.

Impacts of priority 2 – avoiding very bad outcomes

This part has two aspects – not making an actively bad AGI, and avoiding an AGI under control of the wrong people.

The first part still falls into the category of “do it right” (programming the AGI that is) and so this is shared with the priority 1 criteria. “Doing it right” is clearly the most important part, but not in the scope of this work.

The second part, though, is considered here and follows on from the previous chapter about the entities that might end up developing an AGI and the dangers of that happening. Some entities are dangerous because of a lower chance of “doing it right”, causing a complete catastrophe – or even causing that catastrophe deliberately. But in the case that the development is successful and the AGI ends up under control, some origins are better than others because of a better chance of a more positive motivation.

Impacts of priority 3 – getting benefits of an AGI

If we get this far, we have survived and have not ended up in slavery. That by itself is a win. If we can benefit beyond that, even better, but it is, after all, the last priority.

The four strategies

The strategies, or scenarios, will be considered in light of the aforementioned priorities. To reiterate – the primary goal is to survive and if we do, to end up with the AGI in good hands.

As an overview of what is to come: The first and second scenarios are not very realistic and serve as baselines. The third scenario is very realistic and very dangerous. The fourth is difficult, but might work.

Scenario 1: Destruction of civilization

The credit for this idea goes to the game Mass Effect. In this game, (spoiler) an artificial “race” was created a long time ago for one purpose. Whenever civilization (not limited to humans) gets close to developing an AGI, this “race” reappears to wipe out the whole galaxy and restart civilization back into the stone age. The reason, as you would guess, is to prevent the complete destruction that the AGI would cause once finished.

This option is obviously very bad, and the fact that I am considering it shows how serious the situation is. But even the destruction of our entire civilization is a good option if it averts the complete end of the human race.

The way it would work is that people would induce some sort of global catastrophe that would destroy as much infrastructure as possible. Most people would die and the rest would have such a hard time fighting for survival that all the remaining knowledge they carried would be forgotten.

This scenario would not work though, for two reasons:

The first one is human nature – people are bad at making hard decisions. Even if this were by far the most rational thing to do, people would still cling to the hope of a happy ending5.

The second reason is that no matter how it is done, some powerful organizations will dig in together with all the technologies and data in order to re-emerge later in full power. The destruction would even help them by defeating the competition, and thus the original goal would not be satisfied.

So this is not the way. But it may serve as a baseline and a comparison when weighing other options. Does scenario XY give us better chances of survival than burning everything down?

Besides that, a related use of this scenario is as a threat, to encourage cooperation in case some entity incorrectly6 thinks that it would do better developing an AGI on its own.

Scenario 2: Do nothing

Inaction is always an option, and in the case of many policy decisions, a good one. Although it is not very likely in this case, we should be aware of the reasons why, and it can serve as another reference.

So what would happen if no action is taken with the goal to restrict AGI development?

Because of its high appeal and low entry barriers, the development will be pursued by many, all over the world. The competition will be driven by the states racing to achieve global control. A total catastrophe is quite likely in this scenario, because neither the competing nations nor the many individual researchers would be very strong on safety. If we get through this alive, the chances are that the winner will be someone very motivated – and as I stated at the beginning, the strongest motivation comes from personal, mostly power-related, goals.

While the exact outcome is hard to predict, the odds of it being favourable are low.

Scenario 3: Global surveillance and control

Not only is it in our nature to want to control things, it is also the general direction of today’s world.

What happens when a hidden “danger”7 arises? Be it terrorists, hackers, whistleblowers, child porn sharers, an (oil-rich) country with chemical weapons… Three-letter agencies are sent in to observe, then gunmen or bombers to eliminate the threat. And perhaps a law sanctioning it is passed somewhere along the way. All of that happens with quite broad public support, controlled by the media. These processes are the same all over the world.

What happens when a threat arises of a technology that, if developed by anyone, would mean a loss of power for all the others? And a threat that actually does pose an existential risk to people?

What will naturally pop into the minds of most people, and of all the power holders, will be the same – total control of everyone and everything capable of AI research. Or elimination, if control is not feasible.

This option is already being proposed, will keep being proposed, and will be pushed with the strongest force from many directions. Because AI or not, control is what the power wielders want, and any (virtual) danger is their opportunity.

As before – considering how dangerous the overall AGI situation is, this option does not have to be so bad, relatively speaking, and needs to be considered. Total surveillance is definitely better than our extinction, and it beats the reference Scenario 1 – destruction. What it does not beat, though, is Scenario 2 – do nothing.

This kind of global surveillance would have to be imposed by the states – no one else has that power. This has three weaknesses8.

1) Even the best surveillance can’t be perfect. It will dissuade most people, but some will remain who will hide and continue the work – under higher pressure, with less time and resources. Since information and research sharing will be non-existent under the crackdown, everyone will be on their own. There will be no space nor knowledge to implement safety measures. As a result, the risk of a catastrophic outcome may actually increase, invalidating the official reason for the crackdown.

2) When a state imposes strong restrictions on its subjects, who is best equipped to continue covertly with the AGI research? The state itself. States will never give up their pursuit of power and no laws, treaties or moral decency will stop them – as we keep seeing over and over again9.

3) “Global” control is still maintained by some number of distinct powers. They may shake hands and sign treaties, but they will know that the others continue with the research the same as they do themselves. The race will go on.

The result of this is that if the AI is developed and we survive (which seems even less likely here than in the other scenarios), it will end up in the hands of the group we identified as the most dangerous in the earlier analysis – while any opposition has already been suppressed.

The result of this scenario in a nutshell:

  • All research will go into hiding
  • No sharing of research results
  • Pressure on the remaining, hidden small researchers
  • Exclusive race of superpowers for world dominance
  • No transparency
  • Lower – not higher – safety
  • No opposition
  • Zero chance of a positive outcome (by priority 2 and 3)
  • Global totalitarian rule, abused for the unrelated goals of the overseers

As I said before, even doing nothing is better than this. Global surveillance and control will be strongly pushed by those in power, as well as by the indoctrinated public, and must be opposed at all costs. Otherwise, we will have a catastrophe before we even begin.

Scenario 4: Safeguarding AI

This variant is based on the premise that if we can’t prevent AGI creation altogether, having just one is the next best option10.

The way to achieve this objective comes from the AI itself. We do not have the means to prevent AGI development (the failures of scenarios 1 and 3). But an AI more capable than us might be able to do it. Imagine that an autonomous AI system existed that would do nothing except prevent anyone from developing another, potentially dangerous, AGI. Its other objective would be to be as non-intrusive as possible, maintaining only the power and resources necessary to perform its task11.

If this is achieved, the dangers posed by AGI (destruction of humankind and AI as a tool of power) would be mitigated. Although it effectively means a “totalitarian rule” similar to the one in scenario 3, it has none of its downsides. The AI would be impartial, with no hidden motivations. Of course, it would pose a limitation on the development of a technology with many potential benefits, but, as I said earlier, those benefits are the last priority. And even the benefits would not need to be completely foregone, although that is a sensitive issue I will discuss soon.

How to achieve this result?

Three main criteria need to be met in order to create this kind of AI successfully:

  1. An initiative with sufficient resources must be started that would adhere to this goal.

It should be started by a private entity with maximum public cooperation to ensure that the right goals are set and followed. The project needs to be founded on support given by all world powers. That can be secured by showing them the prospects of the end of the world or a power other than them winning the race, if another path is taken.

  2. The initiative must stay independent and safe.

If not enough precaution is taken, the world powers will use any means to get their hands on the project if it shows good promise. And if they cannot, they will not hesitate to nuke the whole city the project is based in, if they believe the project poses a serious threat to them.

It is not possible to collect enough power to protect the project by strength. The best way to achieve safety is a combination of the widest possible consensus and the cooperation of all world powers, combined with high transparency. The transparency is essential – it would allow anyone to confirm that the project does not divert into a direction that would pose a threat to them. Consensus and worldwide cooperation would make the powers check each other. Because for all of them, an independent neutral project is better than any competition getting the upper hand.

  3. The project must be safe and successful, and must be first.

An initiative is no good if it does not do the maximum for the safety of the research. All means must be employed to thoroughly understand the problems of control and motivation. At the same time, the initiative is no good if it is not fast enough because if somebody else beats it to the AGI, it will be too late for anything.

Success is by no means guaranteed – we do not know which path leads to it, and even the best initiative might have a pretty low probability of being the first among all the competition. It can be helped though. One way to help is by getting maximum support for the initiative – which the worldwide cooperation should provide. Another is to minimize competition. From the analysis of Scenario 3 we know that suppression by force is not a good way. Still, it makes sense to curb some obviously dangerous or ill-intended cases. That could, in this case, be aided by the states themselves, as it would be in their interest. But the criteria must be very strict so as not to allow abuse. Extensive information campaigns spreading knowledge of the dangers can further discourage many independent researchers. As a sleight of hand, Scenario 1 could be used as a deterrent – it is a very concrete and tangible threat people can understand.

Properties of the safeguarding AI

There are properties that are necessary for this plan to work, and some that perhaps could be added as a bonus.

The necessary properties:

  • Limit on intelligence and self-improvement
    • Unlimited AI could not be predicted anymore. It should be able to adapt itself to a minimum degree though to keep up with progress.
  • Independent
    • If any control or modification mechanism is available, it can eventually fall into the wrong hands.
  • Impartial
    • If it sided with anyone, the rest would oppose it and prevent its creation.
  • Has no other safeguarding objectives
    • It would be tempting to give it more objectives for “our good”, but such things never end well. At the very least, it would create an opening for power seekers to smuggle in their agenda.

Possible properties:

  • A turn-off switch requiring a global consensus to be triggered (see the sketch after this list). Conditions change and we should not fully close future options.
  • Design benign tool AIs for people to use that could provide the benefits we expect from AGI, while being passive and harmless.
    • This is a slippery slope as it would be hard to specify which uses of the tool AIs are still beneficial and which are weapons.
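The “global consensus” condition in the first bullet can be pictured as a quorum check. The sketch below is purely hypothetical – the signatory list and quorum rule are invented, and a real mechanism would need something like threshold cryptography rather than a simple lookup:

```python
# Hypothetical sketch of a consensus-gated turn-off switch. The
# signatories and quorum are invented for illustration; a real design
# would need threshold cryptography and far more care.

SIGNATORIES = {"USA", "China", "Russia", "EU", "India", "UN"}

def shutdown_authorized(approvals, quorum=1.0):
    # Trigger only when the required fraction of signatories agree.
    valid = approvals & SIGNATORIES
    return len(valid) >= quorum * len(SIGNATORIES)

print(shutdown_authorized({"USA"}))      # False -- one power is not enough
print(shutdown_authorized(SIGNATORIES))  # True  -- global consensus
```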

I do not claim this strategy to be the best one available. But at this point, it is the option with the best odds that I can think of. The odds are still low, but that is a given of the already poor situation.

Conclusion

We do not know what the best strategy for dealing with the AGI is. But by thorough analysis, we can compare the strategies and identify those that are clearly bad and others that show promise. This work shows examples of such analysis and brings the following main results:

1) Prioritization of dangers and goals
2) Categorization of the entities taking part in the race
3) Finding who should and who should not lead the research
4) Identification of a clearly bad (while highly likely) strategy that must be avoided
5) Proposal of a promising strategy
6) Providing two reference scenarios

While the current situation is very difficult and the odds of getting through it alive and well are slim, we can still do our best to maximize our chances. But if we are to succeed, we must not give in to illusions. Thinking that “AI is not that dangerous”, that “people will understand”, or that “the politicians mean well” would lead to defeat. We are the ones able to understand, able to make a difference – and we are responsible.



1 I like Nick Bostrom’s book, for one.
2 A note on the word “dangerousness”: the term will be used frequently in future debates to support one side or the other during the power play to obtain more control, and its two different meanings will be deliberately conflated to manipulate public opinion.
The meaning of “dangerous” I go by is the potential to cause harm to the general public or the human race as a whole, and eventually to other parts of our environment.
The other meaning that will be heard is the danger posed to those currently in power. These people and parties are very afraid of losing the power they hold. The AGI has a large potential to cause that, and so it will be called a “danger” by the power wielders for this reason. Since they cannot admit this publicly, they will talk about “danger” to the public interest instead, hiding their true intent. Therefore, whenever the word “danger”, or other terms describing possible effects of the AGI, is voiced, pay attention and double-check the speaker’s motivations and whether they really follow the proclaimed general interest or rather some hidden agenda.
3 Like the people of Heaven’s Gate, who killed themselves in order to be transported onto a huge spaceship that would take them away from the Earth right in time before its destruction.
4 We don’t need to look at North Korea for an example of a terrible wielder of the power the general AI represents. We can consider the good guys, the USA, and still arrive at the same outcome. Even the little part of their trespasses that makes it to the public shows a bleak picture. Take the Prism program for mass surveillance of the population, or the illegal wars in the Middle East started on false pretexts. By that I do not mean that any other power, like Russia or China, would be any better. Some of the small countries *might* be exceptions, but they are not important players in the race either.
5 Like when Hitler was breaking all the WW1 treaties while building armies, fortifying Germany, and later started taking other countries. Rational people knew from the beginning where it was heading and that a preemptive military operation (which was even sanctioned by the treaties) should take place. But the naive majority went with “We must avoid violence at all costs. Let’s be nice to Hitler and everything will be ok.” …
6 The “incorrectly” is important here – we are trying to get the best result, not bully anyone.
7 See the earlier footnote – the note on “dangerousness”.
8 It has many more, but the others are not directly related to our subject.
9 How does Russia react to the ban on chemical weapons? It starts research on Novichok chemical agents that are more potent and easier to hide.
10 Theoretically, some multi-AI system might be safer, but as Nick Bostrom wrote, and I agree, it is not realistic for such a system to be stable.
11 Setting such objectives is by itself not an easy task with many dangers, but nothing is simple and safe when it comes to AGI. I am only suggesting that this way is safer than the others.