Category Archives: Philosophy and Economics

Choose the lesser of two evils

You, and a couple of other people, are stranded on a boat, near death by starvation. There is no prospect of being saved in the foreseeable future. Your only chance is to eat someone. Will you draw straws, look for volunteers, pick the oldest, vote, …? What is your pick? This is a classic question from ethics, with many evils to choose from, not just two. If you follow the neat and popular quote attributed to Spurgeon – “Of two evils, choose neither” – you will all die. But hey, at least you will feel good about yourself for not killing that one person, while everybody is dying as a result of your choice.

To start, and to avoid unnecessary complications, let’s not use the word “evil” here. It is relative and depends on the values of specific people. Let’s just talk about options that we don’t like instead. If there were a cute pig and a goat on the boat, most people would no longer consider it a choice between evils, but rather just distasteful. That does not change the merit of the question or the necessity of the choice, though. Unless they were vegan, in which case it would be evil again.

If you find yourself in a bad situation, you will only have bad options to choose from. People have a tendency to automatically reject bad solutions – even if they don’t deal in quotes – simply on the grounds that they don’t like them. Somehow they believe that there will always be an option available that is great and nice, with everyone being happy. The problem is that if a good solution indeed exists, it was not a bad situation to begin with. Perhaps the solution is hard to find. That makes the situation complicated. But not bad. If you are in a bad situation, then, by definition, you only have bad options. Not that the options are bad in every aspect, but they have some harmful components alongside the beneficial ones.

Now, you can either believe the fantasy that every situation has a good, happy solution, or you have to admit that bad situations also exist and that you will have to accept solutions that you do not like.

This has a far-reaching consequence. No course of action – from little daily choices to global policies and philosophies – can be rejected solely on the grounds of being bad in some way. Because if that bad solution is a response to a bad situation, it may still be the best one available.

There is a parallel in predicate logic (the system all our knowledge stands on) which also shows how wrong this wishful thinking is. The parallel is not perfect, but it is quite close.

If a logical system is consistent – that is, no two pieces of information in it (nor any consequence or combination of them) can contradict one another – it is always clear what is true (i.e. what is a part of that system) and what is false, i.e. outside of it. If you take any correct logical step within the system (such as deduction), combining pieces of information and arriving at a new one, you will always get a correct, true outcome. If you want to show that something true is indeed true, you can always do it using those correct, logical steps. This corresponds to a good situation in life, in which we can always find good plans that lead to good outcomes. If we want to reach our goals, we can always do that through good means, with no need to compromise.

But what happens when we introduce a single inconsistency, a single tiny flaw, into the logical system? It is much worse than one would expect. The system does not become a little flawed so that we can just go around the issue and still find good solutions elsewhere, no. Any flaw, however small, opens the floodgates, and everything that was false before becomes provable as true – logicians call this the principle of explosion. Every bad thing and bad step becomes legitimate. If a life situation is bad, suddenly no purely good path exists and no bad move can be a priori ruled out. One bad solution can only be ruled out when we find a better one, and the better one will also be bad.
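To see the mechanism, here is the standard derivation behind the principle of explosion (ex falso quodlibet), in a minimal sketch. Assume the system contains a single contradiction $P \wedge \neg P$, and let $Q$ be any statement whatsoever – even one that used to be false:

1. $P \wedge \neg P$ (the single tiny flaw)
2. $P$ (from 1)
3. $P \vee Q$ (from 2 – an “or” holds if either side does)
4. $\neg P$ (from 1)
5. $Q$ (from 3 and 4 – $P$ is ruled out, so $Q$ must hold)

Since $Q$ was arbitrary, every statement becomes derivable, and the distinction between true and false collapses.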

The refusal to see or understand this is why people so often end up in endless disagreements, unable to reach a solution – or, worse, make terrible decisions. In a bad situation, whenever someone proposes a solution, there are two possibilities. If it is a legitimate solution, it has to be, inevitably, bad in some way. Then the other side points out the flaw and rejects the whole idea. When their turn comes to propose something, it gets rejected by the former party for the same reason. Since neither side is willing to accept that a solution to a bad situation isn’t going to be perfect, they will not get anywhere.

There is a nice line demonstrating this. “The left will waste money to ensure not one person in need is left behind, while the right will not spend anything if there’s a chance one undeserving person might benefit.”

The other possibility is that the proposed solution is just wishful-thinking nonsense that is even worse, but it hides its problems and looks good. It then passes. And that is how politics is done.

The reality is that the world is one big bad situation. There are imperfections and inefficiencies everywhere. There may be specific, local areas where improvement is straightforward and without trade-offs. But in most cases, any attempt to solve anything will have some downsides. If we want to perform any action, we need to accept that it will have some bad consequences, and that alone should not stop us. It is inevitable.

That is even more true on the large scale of policies. Consequently, a policy cannot be rejected solely because there is something bad in it. It can only be rejected by providing a better one, or at least by proving that a better one exists and can reasonably and practically be found, and that therefore we should wait and look for it.

Libertarians and the Austrian school of economics make this mistake on the largest scale – in theory. They literally refuse any actions other than Pareto improvements. Which is a fancy term for actions with no downside whatsoever – at least someone is better off and nobody is worse off. As I have shown, this is reasonable only in a flawless system or in special, limited cases. In the real world, this theory is almost entirely useless.
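For reference, the textbook definition, stated in notation of my own choosing: an action taking the world from state $x$ to state $y$ is a Pareto improvement if

$u_i(y) \ge u_i(x)$ for every person $i$, and $u_j(y) > u_j(x)$ for at least one person $j$,

where $u_i$ measures how person $i$ values a state. In a bad situation, no available action may satisfy this condition – so a theory that permits only such actions permits nothing at all.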

How bad can things be? Similarly to the flawed logical system, where suddenly every falsehood becomes true: if reality is not perfect – which it never is – there is no limit (in theory, not in specific cases) to how bad even the best available actions may be. The worse the situation, the worse the best solution1. Or, stated in reverse – however dreadful a thing you can think of, a situation can (theoretically) exist in which this dreadful thing is still the best thing to do, and doing anything else will be just as bad, or even worse.
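Footnote 1 can be put compactly. A sketch, in my own notation: if $A(S)$ is the set of actions available in situation $S$, and $v(a)$ is how good the outcome of action $a$ is, then the best we can do is

$\mathrm{best}(S) = \max_{a \in A(S)} v(a)$.

Nothing bounds this from below: a bad enough $S$ offers only actions with very low $v$, and the maximum of low numbers is still a low number.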

A further consequence is that in reality, circumstances may arise that will require doing something that really goes against our values. People usually have some limit of what they are willing to do. Many would refuse to kill a person, or hurt an animal. But what if we end up in a bad situation, the best solution of which requires us to do something of this sort? Doing it would be the best thing possible, and rejection would lead to something even worse. If we are in such a situation and uphold our moral limit, something even worse will have to be done, and we become responsible for the difference between the original solution and the worse one.

If you could kill ten-year-old Hitler, would you do it? Upholding the no-murder value is nice, but it would make you responsible for all the terrors of WW2.

Practically, this is a hard question, because there is always some uncertainty about the consequences and a hope that a good solution exists. Moral limits help to prevent a lot of errors that would happen if we went for bad solutions too eagerly. But if the consequences are clear, there is no avoiding them.

So, in summary, I have shown that we need to be able to accept doing harm if we want to act. Now I will add some important points on its limits. And I will make the first point right here – accepting that doing harm may be necessary by no means absolves us of responsibility for it.

A thing to note is that if offered two bad solutions, you should not automatically jump at one because of what I wrote. It is good to look for other options – all the more so because people may present just those two options on purpose, the “false dilemma” fallacy, trying to force you into making a wrong choice. Recently I saw a communist arguing that either you are a fascist, or you oppose fascism, and the only such opposition is communism. Fascism, or communism – make your choice. It never occurred to them that freedom exists. Or rather, they intentionally left that out.

The last point I want to emphasize is responsibility. Yes, doing bad things and harm is inevitable. But we are still responsible for the action. This should not lead to the conclusion that we should not do anything, or that it doesn’t matter what we do. But rather that we need to learn to accept the responsibility and to carry that burden. If we get into a bad situation, it can’t be avoided, and the best way is to deal with it head-on. Learning this is a big part of growing up.

  1. The degree of badness of the situation is the limit of how bad the best solution is. But until we know how bad the issue we are dealing with is, we can’t put any limit on it. ↩︎

What is “Good”?

Most intellectual discussions inevitably, sooner or later, hit a dead end when they arrive at the question of what “good” or “bad”1 means, because nobody knows the answer. They typically contain assessments, evaluations or deliberations about some future direction. “Is it good when…?” or “would it be better if…?” or “is A or B better?”2. Without an answer, any discussion becomes pointless. This extends far beyond intellectual discourse, though. Any decision, from little personal things to wide-scale policies, can only reasonably be made if we are clear about what the goal is3.

Unfortunately, not having a definition is a pervasive issue, because even philosophers have been struggling with it without much success. The good thing is, an answer exists. Not one fully explicit for all situations, but one that is as good as is possible within the reality we live in – good enough that an explicit definition can always be derived from it on a case-by-case basis.

The bad news is that it is not too easy. What we would want is to have one definition of good that is true at all times and places. A definition that is universal. As far as I am concerned, it does not exist. Not in this universe, at least. The only known universal laws are the laws of physics. This is not to say that a universal definition is guaranteed not to exist. There might be something outside this universe, such as an “outside the simulation”, that is pretty clear about what our “good” should be. But since there is zero evidence of it, we might as well consider it non-existent. In contrast to this, the majority of people on earth believe in universal good and bad. They believe there is some god who told them what it is. But, as far as I can tell, all of that was made up by people for their own practical ends, and there is no reason whatsoever to think otherwise. Popularly, there are also traditional or intuitive definitions. But these are not definitions. They are ever-changing outlines or sets of examples, and they do not go to the core.

When asked what “good” is, the automatic human intuition is apparently to look for this universal truth – because it is simple and it is what we were always taught. But when pressed, people find that it is in fact just a phantasm and that they actually don’t know. They then throw their hands in the air and end up as post-modern relativists.

With the easy solution inaccessible, we need to roll up our sleeves and look closer. Since nature does not give an answer valid everywhere and for everyone, we need to ask the next question. “Good” for whom?

The universe is just rocks, atoms and laws of physics. It does not care. Even “nature” does not care. Whatever happens, it will go on, without any consideration. The only part of the universe that does care is people. It is only us who care about good and bad, and only for us is it relevant. Everything else, including other living things without rational capacity, does not consider such things. “Good” is, therefore, only relevant with regard to humans.

More specifically, it derives from human values, and from this the definition already follows – although it is not yet a complete answer. Good is whatever promotes and develops our values (and bad is what goes against and hampers them)4.

So this gives us a clear definition of goodness. It is not as simple as one universal good – it is derived from humans and their values. If the values are well specified5, what is good and what is not then follows objectively and clearly.

It seems that we have not solved anything and instead have only delegated the problem to yet another elusive word. But it is actually useful progress, because unlike the esoteric word “good”, values are something that we can personally relate to. They are something that we all have (with varying degrees of awareness and clarity), and what we do in our lives – the directions we take, the decisions we make, and, yes, what we consider good and bad – depends on what we value.

A complication, as before, is that humans are not one unified entity, and there is no one universal set of values either. Every person has their own values – they are subjective. One person values friendship, another dog welfare, a third the terrified expressions of their enemies. How people “choose” their values is a complicated topic. It is a mix of tendencies and preferences we are born with, all sorts of outside influences throughout our lives, and our own deliberations and rational decisions6. What matters is that people do have values and that the values differ between them. These individual values are the real foundation everything else stands on. We can consider, and often hear about, derived values, such as the values of a company, a country or a time period. These are conglomerates of the values of the people they concern, or just something that someone made up in order to achieve their own ends. Although these “higher order” values feed back and influence what individual people value, it is only the individual values that have practical impact.

That values – and consequently what is good and bad – are subjective makes things chaotic and hard to navigate, especially compared to the one answer valid for everyone that we started with and usually expect. There can be as many different “goods” as there are people, and they can often collide. But while complicated, that is how it really is, and it is the only way. Every person matters and so do their values. Forgetting or omitting the values of individuals and replacing them with some “greater good” and the “right values” is what gets us to another Reich. That is not to say that we need to agree with others’ values. But we should respect their existence. Then, practically: wherever we are in a debate that leads to “what the hell does good mean”, or in making some decision, simply specify which people it concerns, what their values are, and combine them to get the answer. This finally covers both the theoretical question of what good means, and also the practical part of how to find it.

Unfortunately, this does not solve too many problems. It is only a clarification of one big question and a stepping stone on the way. That “little point” at the end – the question of how to combine the conflicting values of different people – is the actual issue and the core of virtually all world conflicts. Solving it is a task for another time.

  1. I will use the terms liberally before properly defining them. For now, intuition about them will suffice. ↩︎
  2. The term “good” covers everything, since “better” only means “more good”, and “bad” and “worse” are its direct reverse. ↩︎
  3. Decisions and policies are being made regardless. In better cases, they are based on right intuition and fall close to the target. But often the results are bad or catastrophic as a result of this omission. ↩︎
  4. This definition comes from objectivism. But as far as I know, objectivism does not go far enough, considering values objective – which is not true. ↩︎
  5. A typical person’s values are inconsistent and conflicting, so this is not at all easy. ↩︎
  6. It really is as complicated as it gets. Let’s leave it at this – it is material not just for an article but for a couple of scientific disciplines put together. ↩︎

Nobody hates capitalism

Many people think they hate capitalism – while, in fact, they hate corporatism. This misunderstanding is why we can’t have nice things. If everyone understood what capitalism actually is, they would endorse it, and we would all be much better off.

Capitalism only means respect for private property and maximal freedom (and minimal state interference), which allows people to fully pursue their goals and to be effective doing it.

Corporatism (which they call capitalism) is very different, and hating it is not the exclusive domain of communists and anarchists. Pretty much everyone, including libertarians, hates it. It is characterized by powerful corporations or other entities having a strong grip on the state, using its power (regulations, wars, etc.) to prevent competition and eventually control everything.

Clearly, it is very different from capitalism. A relevant worry, though, is the claim that capitalism will always converge into it (similar to communism always ending up as an authoritarian totalitarian state). This can certainly happen, although the inevitability is questionable. The transition would require a large consolidation of businesses together with a large growth of state power. Either part can theoretically be prevented – by maintaining a high level of freedom. Competition (which requires freedom) prevents any business from growing too large, and freedom is the antithesis of a large and powerful state. But whether that is possible is unclear – freedom seems to be fighting a losing battle throughout the world.

The most vocal capitalism haters may not be satisfied by these answers. But consider the steps needed for this transition – it is what you get with communism right away. A powerful state, and a few people controlling everything. Only instead of a few CEOs and politicians, it is the top party officials. In other words, if transitioning from capitalism to corporatism is like slowly sinking into a swamp, introducing communism is like jumping feet first into a cesspool.

Why is socialism so popular?

Three words: lack of imagination.

People want to feel that they have control over their lives. Naturally. Decreasing chaos and increasing our power of prediction is a critical heuristic for survival. Feeling like we are in control, that we understand the world and what is coming, is what we are conditioned for. The political system is a wrapper around our living conditions. If we don’t understand the system, if we don’t see how we will be safe and fed tomorrow under it, we become anxious.

This is where imagination comes in. It is very easy to imagine how a political system works if it is centrally managed. One person (or a few) makes the right decisions for our benefit, and things happen that way. Clear, simple, and anyone can follow it.

Imagine, instead, a system that is decentralized. A system where there is no system. People are not told what to do – they make their choices freely, they interact and create freely. Chaos. And out of that chaos an order emerges in which things work just the right way for everyone to have what they need. Can you imagine that? Maybe you can and maybe you can’t. Either way, it is really not easy, and I can’t fully do it myself. But my imagination is sufficient for me to believe that it would work. Unfortunately, I, and other such people, are the minority. The majority lacks the required imagination, which makes them distrustful of such a system – and understandably so. They instead choose a system that they can imagine and understand, and that is socialism.

Thinking without language

Once upon a time, my girlfriend, who studied Czech language and literature, told me what they had learned in school that day. She said that thinking is only possible through language. To me that was utterly ridiculous, but it gave me the idea that some people might believe it. What I have found since that day years ago is that there are apparently many such people – a majority, in fact. And I always knew they were wrong. The first piece of evidence is what I replied to my girlfriend: “If a person is raised by wolves, never learning any language, they will still be intelligent and able to perform complicated tasks, to imagine things, to plan, etc. Do you claim that this person does not think?”

This question is not new and has been, in various ways, answered by others. A direction similar to my wolf question has been investigated by neurologists. They examine people with damage in brain centers connected to language and their subsequent ability to solve tasks that require thinking. A nice example is the composer Vissarion Shebalin, who was still able to compose music after completely losing the ability to produce or understand language. Another branch of research examines societies with different language structures and how these correlate with their abilities. In short, the results are that some thinking is possible, and language does affect it.

The approach I will take here is different and has two components. One part is a description of my own thinking processes, which appear to be unusually transparent. A single, subjective case of course does not prove anything. But a big part of it is reproducible, or at least serves as inspiration and navigation for the second part, which, drawing its language from epistemology, describes a specific view of thinking and its relationship to language and the real world. As a first step, I will start with a more systematic description of the problem.

A more exact formulation of the issue revolves around concepts. Concepts are the building blocks of our thinking. They represent entities and ideas around us, both concrete and abstract. A concept of a chair is what, in our heads, represents all the chairs in the world. Abstract concepts then represent ideas without a physical representation, such as heroism. The study of concepts is part of epistemology, the branch of philosophy which looks at how we attain, process and use knowledge. In short – how we know. Concepts are its core because they are the building blocks of our knowledge. Typical questions that revolve around them are: What are the limits of one concept? Is it still a chair if it has no back and is missing two legs? How do we create a concept (one thing people do agree on is that we are not born with them)? How are concepts structured, and how do they relate to one another? All these questions are important, and at this point solved to a pretty good degree. A controversial one is “Are they real?”. Clearly they are not; they are just an ability of our minds, created by evolution for dealing with reality. But many people relate them to “essence” (mainly introduced by the Greeks and popular to this day), which also is not real, but has a strong religious charge, and therefore a strong foothold in mysticism and among its many irrational proponents.

The last question, which is the subject of this essay, is “can concepts exist without language?”. As I see it, this question is not important – for the same reasons that my answer is “yes”. It is just too trivial. But while the question is not important, answering it apparently is, because so many people think it is impossible. For me that is even more striking because it is not only the historical philosophers and some prominent modern ones like Wittgenstein or Russell. Even the objectivist philosophy is on board with what is, to my knowledge, the majority – and I find objectivists to be right about more things than any other philosophy I know of (though still far from being right about everything).

The basic idea behind the generally accepted view is this. People, when born, start with no concepts. But as they learn, they distinguish new, separate objects and ideas and formulate concepts for them in their heads. A new concept first goes through a creation phase. It starts as some hazy idea that is gradually refined (in some way, the model of which differs by philosophical school) into a “final” form by being given properties that specify it and distinguish it from other concepts. At some point during the process the new concept is given a name drawn from the language (e.g. a “chair”). That name becomes its unique handle, which is necessary in order to use the concept – to store it as its own thing, to recover it, to identify it, to be clear that it is this concept and not another. And also to communicate it – which is irrelevant for the question at hand, but in fact (in my view) is the only reason language is needed in regard to concepts. Without a language label, they say, the new concept could not be stored in the mind and used. Language is, therefore, necessary for concepts and, in consequence, for any thinking, since thinking works over them.

The reason I disagree comes primarily from my own experience – but anyone can see it if they look in the right places, as I will show.
I was quite young when I started practicing meditation, the core of which was calming the mind down. The most “noisy” part was the thinking in words – thinking as if leading a conversation with myself. I was learning to suppress it while still thinking and progressing through the meditation. With some effort, the thought processes were there, but the words were not. Another piece of the puzzle came in high school, when a friend of mine was surprised that I think in such an inefficient way – using words – at which moment I found I was only scratching the surface. My “loud” thinking has remained to this day. But not much later I came up with another idea: to perform (simple) arithmetic in my head without words, using intuition instead. Start with the assignment, say 12×7, but do not go through the calculation explicitly as usual (“saying” out the calculation steps, or imagining them written); instead relax, turn the head off (as in the meditation), and let the result come. It worked rather well. I never extended it to practical use, but it was a nice proof of concept. Neither of these proves my point that thinking can be done without words, but they were my stepping stones.

More tangible progress came with higher and more abstract mathematics. The common way to deal with it is using formulas, but that did not work well for my brain. Instead I imagined the mathematical objects (usually some weird sets) and their interactions as fuzzy shapes in space. They did not possess any conventional names or concepts; they were new, temporary entities I had created to deal with the problem.

Over time I have adopted these thinking frameworks into my everyday life. I still usually think in words, but a lot of the time – or rather, with non-trivial problems – I use something else. I call it raw concepts. Remember how I described the creation of concepts – the intermediary fuzzy object that gets a name assigned when it is finished? The traditional view makes it look like the fuzzy object ceases to exist once the name is given. But it never went anywhere. Not the label, but that fuzzy thing is what a concept is. In very exact terms, it is a specific pattern of neural excitation, which is different for every different concept. Subjectively, for us, it is something in our heads that probably everyone would describe differently. It is a “feeling”, a “flavor”, maybe a differently shaped object in our imagination (some people even see colors). And by shaped I don’t mean chair-shaped, but a fuzzy cloud that has this “feeling” which carries the concept’s properties and makes us recognize it for what it is. It is likely that you are not used to perceiving it this way. I am assuming that because if most people did, there would be no question about whether words are necessary in order to think. But the reason people do not see it this way is not that it is not there. It is. It is just covered up by the word labels and images that we have attached to the concepts. When we want to bring a chair to mind, the word “chair” and various chair images shine so bright that it seems that is all there is. But they are just a shiny wrapping of the feeling pattern, that cloud of neural excitations that really defines what the concept is. I can tell, because in my mind I can turn the words off and observe these concept “feelings” in their naked, raw form.

Now that we have the concept of a raw concept, it should seem more obvious how concepts are formed to begin with. Either a blank raw-concept “stem cell” is created, or one is split off an existing concept, inheriting its properties, and it is shaped through the concept-formation process (which I have not described; see “How We Know” by Harry Binswanger for a good theory) into its new form. It then gets labeled (although we might start with the label already – “Mom, what is a ‘chair’?”) and stored. The label, the word of language, is just that – a label. The label is not the concept, and the label is by no means necessary for the concept to be created or to exist.

Are you still not convinced? Let me give you a more familiar example. Remember those times you wanted to recall some word and could not? When you had the word on the tip of your tongue, when the “feeling” of that word was there in your head, bright and clear, but the word itself didn’t want to come out? There it is. Your raw concept without a label. You knew it all along.

Another piece of evidence, I would say an even more serious one, comes from the way thinking itself works. Or perhaps from how it doesn’t. I am not sure if anyone thinks this, but in order to clearly dispel the idea: the core of thinking is not performed by language. Language and sentences can be used, yes. But that only works for simple, well-defined problems, and it is highly inefficient and limited by the speed at which we can formulate those sentences. A good use case is going over a shopping list in our head – it is simple, linear, and needs to be precise. But it can hardly work for anything complicated if it does not work even for simple, well-structured problems like chess, or for unintelligent physical activities. Imagine a chess player running through hundreds of moves per minute, thinking out loud “Ok, this piece moves to this position, and then that piece over there to that position”, where “this” and “that” should actually also have specific descriptions… Or trying to catch a ball – calculating its trajectory and the movements of the body needed to catch it, while considering how heavy the ball is and whether it could hurt you – and doing all that using sentences that precisely describe every bit of it? Clearly, that is not how thinking really works. Again, these sentences are only labels put on the thinking in case we want to keep very clear track of it, or to communicate it. The way thinking really works is again through these raw concepts – their brain excitation patterns. They are there, they change forms on the go as needed, they mix, interact, merge, split… They form new excitation patterns, often intermediary ones that have no label and never will, until a state is reached where the configuration of the patterns contains the answer we were looking for. So, for instance, we conjure the raw concepts of a ball (most relevantly, the physical properties that we know), the laws of physics and a model of our body and its physics, we let those models interact in a simulation, and we plan the best way to move in order to catch the ball. This comes across as quite intuitive. Is it in any way different from pondering the development of ethics in the life of a novel character? In principle, no. It is still a manipulation of models consisting of concepts and their relations and interactions. The reason it seems different is that catching a ball is really automatic and intuitive for us, while ethical considerations are unknown territory that requires clear, conscious focus. But the inner mindworks are the same.

As for me, I can observe this concept interaction in my mind directly. I can see the fuzzy raw concepts in 3D space, moving, interacting and mixing in many ways and at many points simultaneously, creating new flavors. Sometimes those flavors “click” into something that seems to make sense and be useful, which I can then lock in as the next step in the thinking and move on.

To be clear, this whole thing is not trying to say that language is not useful for thinking. I am only saying that it is not necessary – in theory, and in some, but not all, practical applications. Language is very helpful for its labeling function, as well as for putting thoughts and thought processes into clear boxes, which makes the thinking process clear, well organized, and manageable even for complex problems. Another aspect that plays a practical role is that the hardware of our brains has already been shaped by evolution with the expectation of using a language. Now, this is my speculation: because of this wiring, thinking without language is more difficult for us than it would be if we did not have the language ability to begin with. Some brain pathways are so optimized for and dependent on language that it makes not using it more difficult, and for some people impossible.

On the other hand – and this is just a side note for perspective – there are people whose beliefs point in an entirely opposite direction. Not only do they see the usage of language as problematic, but they view even the very foundations that we have laid out here – concepts – as the enemy of true “thinking”. It is the Zen Buddhists. Let me present their idea with a famous koan.

Shuzan held out his short staff and said:
“If you call this a short staff, you oppose its reality. If you do not call it a short staff, you ignore the fact.
Now what do you wish to call this?”

The second part of the master’s statement is already trivial for us. The “fact” he mentions is that the language label “short staff” does indeed belong to the item he is holding. But what does the “opposing reality” in the first part mean? The Zen Buddhists teach that the world is not words, or concepts, or even objects. The world just is as it is. Boxing it up into categories and labeling it prevents us from seeing it for what it really is. Assigning even “identical” items, like two identical-looking chairs, a common concept means forgetting their individuality. The short staff Shuzan holds is simple, and yet very complex. It is an object (here he does not go as far as to deny even the “object” property, in order not to confuse the students too much), with its material, shape, temperature, the way light reflects on it, its trajectory through history and into the future, and much more. Saying that the object is a “short staff” (assigning it the label or the short-staff concept under it) would leave out all of these critical individual properties, and deny its true reality.

As we know from physics, they are technically right. The world is a continuous space filled with different kinds of particles. There is no “water” or “rock”. A rock is just a dense region of particles of one kind which, at some boundary, happens to give way to particles of another kind – perhaps of water. While they look different to us, on the fundamental level the difference is unimportant. It is only in our brains that we cut this continuous space into pieces and give those pieces names and categories. These categories (or concepts, or essences) are not a fundamental part of the universe. As I wrote earlier – they are only a virtual tool imagined in our brains, created by evolution to deal with the world and survive. The lesson given to us by Zen is that when we start on the path of discovering the core of our minds, dropping language to reach pure concepts is only a first step, and we can go much further.

To summarize my idea – the question of whether language is necessary in order to think seems ridiculous to me, and I hope I have presented enough evidence for why I see it that way. Now it is up to your introspection and imagination. But even if you cannot directly observe it at all as I do (which I think is the normal case, and my brain is just broken), the model I have described should still make more sense than the clumsy language-based one and provide a foundation for further research.

Fair reward – merit or effort?

For us, responsible people, it is clear that what is just and fair is to be rewarded according to our contribution. If we try harder, work more and better, and create more value as a result, we expect to get more in return. And accordingly, if we don’t try, or we do a lousy job for whatever reason, we understand that we deserve less for it. An abominable contrast to this is the altruistic system that commands that we shall not ask for more for doing a good job. In fact, we shall not ask for any rewards at all. The rewards go to those in need, regardless of their contribution or how deserving they are. This system is fundamentally unjust.

While this is clear, many people hold a different view that leads to a very common conflict – and not only among philosophers. A typical objection I keep hearing is the following:

Different people have different opportunities that they cannot affect. Why should that make some people better off than others? Imagine two identical children. One is born into a rich family that provides it with a good education and raises it to be confident and successful. The other’s parents are poor and abusive; the child receives little education and grows up to be a nervous wreck. Both of them start to work and put equal effort into it, doing the best they can. Is it fair that the first receives a vastly higher wage and acclaim?
The person telling you this believes that it is fair to reward people according to their effort and not the actual value their work creates.

I have to admit that this does make sense in a way. In line with the ethics we started off with, a person should be rewarded based on what they do. So why should a person be punished or rewarded based on things that are not in their power to change? Rewarding people based on their effort is indeed fair as well. Another perspective comes from the negative side. While I abhor the idea of a poor, lazy person getting someone else’s money in welfare, I similarly dislike it when an arrogant moron makes a lot of money just because he was born with a silver spoon in his mouth. Formally, he may create a lot of value, but only a tiny fraction of it can really be credited to him. I find them both undeserving.

How can that be? How can there be two conflicting definitions of a just reward at the same time?
As always – “Whenever you think you are facing a contradiction, check your premises”.
The hint comes when you need to give an answer to the person with their heartbreaking children story. Maybe you have a better one, but the best answer I can come up with is: “The world is not fair. It sometimes sucks, but we just have to live life the best we can with the cards we are dealt.” Which is a lousy answer when trying to explain what fair means.

The reason is that these two cases of justice, while talking about the same thing, are based in different worlds.
Those different starting conditions that our objector complained about are based in the real world. It is the reality we all live in, which is without values or feelings. It deals everybody, every day, something different, and there is no fairness in it whatsoever. It is what it is.
On the other hand, the justice we wanted originally applies on a higher, abstract level – on the level of us humans and our ethics. It is the level where values do exist and the one that we can choose and change. So while the conditions each of us is given are unfair, we can create fairness in how we interact.

While this explains how two seemingly colliding definitions of fairness can coexist, it does not say how to deal with violations of the latter – the unfairness of conditions. Unfortunately, dealing with unfair conditions is an open problem, and previous attempts to solve it have led to some top-tier catastrophes. Formally, the statement of both is quite clear. But the practicality of their solutions differs widely.
The problem is that the amount of “value” we can distribute is limited. We only have as much as we create. Value can’t be drawn out of thin air – no matter how often politicians say otherwise.
Giving a fair reward proportionally to the value created is straightforward. It just means exchanging value for value in a corresponding manner. Value created is distributed back proportionally and without significant issues. So the overall vision that 1) the world is not fair, deal with it; 2) but reward everybody according to their contribution – is simple, clear, consistent, and easy to implement.
On the other hand, trying to fix the unfairness of the world itself and reward people according to their effort is impractical. There is no way to objectively assess the effort a person is making. If somebody creates something of value to you, you don’t need to care how and why they did it – the value, for you, is objective. But knowing how hard they tried? Was it an incapable person doing their best, a very capable one slacking, or one specializing in the skill of acting out hard effort?
While we can make a personal call and pay extra to a person we know to be good and honest and trying hard, even though they did not achieve much in the end, this can’t practically be extended to a large scale. Any attempt to do so inevitably fails on the subjective nature of effort. Moreover, since effort can’t be correctly assessed, such a system only creates wrong incentives for people – to pretend to try instead of doing actual good work – destroying value for everybody as a result.

“The world is not fair” is a poor answer, but currently the best we have. Trying to fix that on a global scale should be done with the utmost caution, as such attempts have already cost hundreds of millions of lives. Until somebody figures something out (naive wishful thinking really does not count), we should stay content with playing the cards we were dealt and the rewards we deserve. Which is not that bad.

Origins of the bread queues of communism

It goes roughly like this.

The (working class) people: “The bread is expensive, them baker exploiters are overpricing it!”

The (communist party) government: “No worries, we are working on it.”

Government: Sends a couple of bakers to the uranium mines and sets an official bread price, mandatory for everybody, at 5x cheaper than before.

People: “Yay! Serves them well. Now we can have the cheap bread we are entitled to.”

Bakers: “Da fuk. It’s impossible to make bread this cheap. We can’t bake it out of thin air. We have to pay the workers, pay for the flour and feed our families!”

Government: “Stop being selfish, help your fellow comrades in need.”

Bakers: “Right. Fuck that, let’s make bread rolls.”

People: “Oy, secret police, them bakers ain’t making our bread!”

Government: Sends more bakers to uranium mines and fixes the prices for all food. “You better have bread next time.”

Bakers: “What can we do, what can we do.” Bake bread as before, but offer just the minimum amount required at the government price and sell the rest under the counter – black market style.

People: Standing in line from 4 am to get the low-supply cheap bread. “Finally we have our bread, bless the communist party.”

Bakers: “Wait a minute! Millers! How come you are not doing your part in the plan for better tomorrows? We demand the flour be 5x cheaper too!”

Millers: “Fuck.”
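The mechanism behind the story is a textbook price ceiling: fix the price below the point where supply meets demand, and the quantity bakers supply collapses while the quantity people demand grows – the gap is the queue. A minimal sketch in Python, with made-up linear supply and demand curves (all numbers are illustrative, not data):

    # Toy model of a price ceiling, with made-up linear supply and demand.
    # All numbers are illustrative, not data.

    def demand(price):
        # Loaves people want to buy at a given price.
        return max(0.0, 1000 - 50 * price)

    def supply(price):
        # Loaves bakers are willing and able to bake at that price.
        return max(0.0, 100 * price - 200)

    market_price = 8.0          # where supply meets demand: 1000 - 50p = 100p - 200
    ceiling = market_price / 5  # the decreed "5x cheaper" bread

    for label, p in [("market ", market_price), ("ceiling", ceiling)]:
        wanted, baked = demand(p), supply(p)
        sold = min(wanted, baked)  # only bread both wanted AND baked changes hands
        print(f"{label}: price={p:.2f}  demanded={wanted:4.0f}  "
              f"baked={baked:4.0f}  shortage={wanted - sold:4.0f}")

    # market : price=8.00  demanded= 600  baked= 600  shortage=   0
    # ceiling: price=1.60  demanded= 920  baked=   0  shortage= 920

With this particular (hypothetical) choice of curves, the decreed price drops below the bakers’ cost floor entirely – which is exactly the “we can’t bake it out of thin air” line in the dialogue.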

Understanding general statements

How fights start

Over and over I hear exchanges along these lines:
A: *General statement about something* (e.g. “Asians are smart”)
B: “How can you say that is true for ALL? There are exceptions!”*, basically calling A’s whole statement invalid, often turning the exchange into an argument and accusations of racism.
This is the template for any attempt at a productive discussion of a controversial subject, especially in the US. A lot of misunderstandings and social conflict could be avoided if only people better understood what general statements actually mean.

There are two reasons why B’s interpretation is wrong. (In the end, they are the same thing.)

1) Interpreting A’s general statement as applying to all/everyone is wrong.
2) Exceptions cannot make rules.

Interpreting general statements

By pure logic, “Asians are smart” is indeed wrong. But we are dealing with the real world, and the issue is one of language conventions rather than logic. Still, it takes some logical thinking to understand the conventions (or a lack of it not to). “Asians are smart” can be understood in two different ways – only one makes sense, but the other is often the result.

It is obvious that not every single Asian is smart. If I am not seriously mentally impaired, it is obvious that I know that. So why would anybody assume that when I say “Asians are smart”, I mean that every single one is?

Although logically correct, this interpretation does not make any sense in the vast majority of real cases and is useless. Therefore, another interpretation should be used instead – one that is meaningful and useful. That interpretation is that the statement is true for a significant part, or as a statistic**.

To show it on the example: we can assume that the general statement (“Asians are smart”) does not mean it is true for every instance (“every single Asian is smart”), as B did, because that is detached from reality. Instead, it means that it is true for a significant part (“most Asians are smart”, or “on average, Asians are smart”, or “the ratio of smart people is higher among Asians than in some other group”) – which is the meaning that A intended.
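To make the statistical reading concrete, here is a minimal sketch in Python with entirely made-up numbers (the groups, scores and distributions are hypothetical, chosen only to show the mechanics – a general statement can be true as a statistic while having thousands of exceptions):

    import random

    random.seed(1)

    # Hypothetical "smartness" scores for two made-up groups -- illustration, not data.
    group_a = [random.gauss(105, 15) for _ in range(10_000)]
    group_b = [random.gauss(100, 15) for _ in range(10_000)]

    mean_a = sum(group_a) / len(group_a)
    mean_b = sum(group_b) / len(group_b)

    # Individual exceptions: members of A who score below B's average.
    exceptions = sum(score < mean_b for score in group_a)

    print(f"mean of A = {mean_a:.1f}, mean of B = {mean_b:.1f}")
    print(f"members of A below B's mean: {exceptions} out of {len(group_a)}")

    # The general statement "A scores higher than B" is true as a statistic
    # (the means differ), even though thousands of individuals in A are
    # exceptions. The exceptions do not invalidate the statistical claim.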

Sometimes we really do want to say something about every single instance. But in that case we can say it explicitly – “All Asians are smart”. And even in many such cases we can assume that the person is just exaggerating on purpose. It is all about trying to understand.

Exceptions do not make rules

Point 2) of B’s wrongs is simpler, but maybe even more important. Let’s use another example – “Dogs have four legs”. That is something we generally accept. But then some B comes and says “No way, I have seen a dog that had an accident and has 3 legs. So dogs have three or four legs”… or any other number they identify with. And you will get arrested if you say they have four from now on.
By pure logic, it is true that we can’t say “dogs have four legs”, as a single exception is enough to invalidate a general statement. But that is not very helpful for our daily life – which is the argument I already made in point 1).
The important angle here is that because of the exception of a 3-legged dog, we shouldn’t alter and destroy a helpful rule and start saying that dogs have three to four legs. Even though it would be more correct, it would harm our everyday life. Just imagine the confused children.
Every rule has exceptions. That is a part of what real-world rules are. Exceptions do not make rules, they underline them. Dogs don’t have three legs, and people do not have 129 genders. Yet, in Canada, you can get arrested for claiming there are only two.

So to sum it up – when somebody makes a general statement, they most likely do not mean all/everybody. Just try to be positive – first try to understand what they mean and what makes sense***. People usually mean well. It can prevent a lot of bad things from happening.

————————————————-
* The reaction seems to depend on what the general statement is about, though. Saying “Africans have a lower average IQ” is all but guaranteed to invoke the response “You racist, how can you say that ALL Africans have low IQ?” at the least – while saying “ALL white males are privileged Trump-voting racist sexual predators” seems to be fine.

** It does not even have to be a majority. With “fish live in water” we mean pretty much all of them. But “driving is dangerous” does not mean we have an accident on most drives, only that the danger is statistically higher than in some other activity.

*** Applies even to stupid people. They can mean it the wrong way, but everybody should get a chance first.

Why basic income can’t possibly work

Imagine three people starving in the middle of a desert. They don’t want to starve to death, and so they come up with an idea. Every morning, they will collect all their money and split it evenly. They could then use the money to buy food from each other, and with that, not only would they survive, but they would even have money left for whatever they want. So the next day they do exactly that, and a few days later they die.

I had been aware of the universal basic income (UBI) topic for some time, and while cautious (the “too good to be true” rule), I was not able to put my finger on what exactly was wrong with it. On an intuitive level, anyone must at least feel in the back of their head that giving free money to everyone can’t suddenly make everything better. But let’s be more specific first.

What is Universal Basic Income?

The point of UBI is to take a lot of money from “somewhere” and give everyone some nice, identical amount of it every month. So much money that not only would no one need to worry about survival, but they could have a nice life from it.

The goal is to enable people to do whatever they would like to do, instead of working in order to obtain a salary. Disconnect work and wage. Sounds great.

The way there

What it took for me to understand was going to see a “documentary” called “Free Lunch Society”, which gave me an hour to focus and think, with nice pictures, propaganda and nonsense in the background.

The reason revealed itself to me and was suddenly very clear. Ironically, it is connected to something they said in the “documentary”. You can’t eat money.

One obvious issue is where that money would come from. Even though it’s important, I don’t want to dive into it. For UBI people, the question is very simple – for them, there are always some rich people to pluck.

The only concern voiced in the UBI debate is that when people receive their UBI, they might stop working. That would obviously lead to lower production, resulting in big trouble. UBI proponents counter this by pointing to experiments that have shown that people on UBI work the same or even a higher number of hours than they did at their previous employment. Therefore, the economy will go on working just fine, and can even be better owing to people’s happiness from doing work they like.

So what is wrong?

The critical problem is that it is not the number of work hours that matters. What matters is the real value that people produce. Be it direct production, like mining or growing food, or intermediary work, such as organizing rice shipments in Excel sheets, all this work is needed to provide people with what they require in their lives.

We don’t live in small tribes spread over vast empty areas with infinite resources anymore. Today, we live at high density, especially in cities, where millions live in a small spot. We are almost entirely dependent on modern production and the complex systems that make it all work – systems that get soybeans from a field in Vietnam to a kitchen in Detroit, because at the moment that is the most productive way to do it.

All this production depends on the work of the people. Work that often is no fun, is not pleasant, and is sometimes so far from the end products that the people doing it have no idea what their contribution is. But without these people and their work, it would all collapse. Not only would there no longer be any iPhones, but even food would not get grown or reach its destination – people would starve and die.

What does this have to do with UBI?

When people work for a salary (or profit from their business), it means someone (an employer or a customer) pays them for their work. Why do others pay them? Because the work brings the others value in return. The salary or profit is an explicit confirmation that the work a person has done had value for someone else.

Sometimes it is hard to see how one man’s work connects to real production somewhere, maybe even on the other side of the world. But that is exactly what we have money for – to transfer the value across the chain, from the valuable commodity on one side to the person contributing to it on the other. Unless there is some imperfection in play, whenever we get paid for our work, it means that we have produced value that made someone’s life better.

At this point, it should be easy to see what happens when people get unconditional money instead of a deserved salary. When people do what they enjoy doing, maybe it will be productive, but maybe not. There is no connection.

Basic income will never result in someone taking a more productive job, because if they had wanted to, they would have done so already in order to get a higher salary. So it will only lead to people choosing work that is more pleasant, but less productive. There will be more artists and windsurfers, fewer factory workers and farmers. There will be fewer goods, less food, less organisation. And less food in today’s densely populated society means that people will die. If the UBI proponents’ dreams come true and work is disconnected from wages or profits, millions of people will die as a result.

Basic income is nothing other than another form of communism. “Free” stuff to everyone to create paradise. But giving “free” stuff does not create anything, it only destroys.

How many more millions have to die until the communists stop trying to sneak their agenda in under different names?