Artificial meat and the problems of ‘-isms’

I often say that I am not a fan of ‘-isms’. Not even those supporting the causes I care for, such as transhumanism. I sympathise with some ‘-isms’ (again, such as transhumanism), but I never consider myself a ‘something-ist’. The reason is that, generally, ‘-isms’ have two problems. The first problem is that they almost always support at least some ideas, or make certain claims, that I disagree with or find too fanciful (certain interpretations of ‘mind uploading’ come to mind, but that’s a story for another post). The second problem is that, if you say you’re a something-ist, people will almost surely assume that you endorse, or believe in, some ideas that are really not your thing, merely because such ideas are either an integral part of the relevant something-ism, or are what people think something-ism is about (which may or may not be true). Not to mention the fact that people often regard the dictionary as the ultimate authority on what ‘real’ something-ism is about, cheerfully ignoring all its variants and flavours (which often blur into mainstream something-ism and into each other), whose proponents are usually quite persuaded that their own something-ism is the real thing—others just got it wrong.

There’s actually a third problem too. Namely, that if they’re not careful, something-ists who are a bit too zealous might end up putting their ideology before the reasons they embraced it in the first place. Sometimes, this can undermine the very objective something-ists intended to achieve by embracing the ideology and spreading it left and right.

I had a brilliant example of this phenomenon one time when, while having lunch at a vegetarian restaurant (or vegan, I’m not sure), I mentioned lab-grown meat to my friend. My friend is a vegetarian (or vegan, I’m again not sure), whereas I am not. Vegetarianism and veganism are, obviously, two ‘-isms’; as a corollary of the above, I’m neither a vegetarian, nor a vegan. (Although again according to the above, perhaps I should say ‘vegetarianist’…) I’m not just quibbling about definitions: I do indeed eat animal products of pretty much all kinds. This, however, doesn’t prevent me from sympathising with the cause of vegetarians and vegans who are such because they’re against animal cruelty. Quite frankly, even if animals are raised in the best possible way, and are then killed in the nicest possible way (whatever that means), eating them is still at odds with my own moral compass. Even though I do eat them. The main reason I eat them despite my inner conflict is that I am one of those lucky bastards who wouldn’t put on an ounce even if they ate a mammoth; the downside is that I tend to lose weight very easily, and if I were to banish animal proteins from my diet entirely, I fear I could become even more translucent than I already am. A second reason is that I am not persuaded that a vegetarian/vegan diet is necessarily the healthiest option for everyone (and no offence, but I’m not interested in debating this, as it is beside the point of this post); the last reason, and a hardly important one, is taste. I say it is hardly important because I have tried vegan food more than once and found it delicious. If taste were the only reason, I’d probably be vegetarian/vegan. As a minor side note, I’m a huge fan of veggies, and together with fruits, seeds and nuts, they make up at least 80% of my diet.

By the way, are you starting to understand why this blog is called looking4troubles already?

But let me not digress. Let’s go back to the veggie lunch with my friend, and why it is an example of the third problem of ‘-isms’. Lab-grown meat, I explained, would be a great lateral-thinking solution to the problem of animal cruelty (and to a bunch of others, such as unsustainable farming and how to feed a growing world population). It doesn’t involve raising any animals, let alone killing them. All it takes is cultured cells, a lab, and hard science. We are not yet at the point where lab-grown meat can be produced in industrial quantities and be more cost-effective than traditional meat, but we’re pretty damn close. Not only has the cost of a lab-grown burger plummeted from hundreds of thousands of dollars to around 11 dollars in about four years, but there are already companies and startups (such as SuperMeat and Memphis Meats) that have created chicken and duck meat as well. While some people will surely freak out at the thought of consuming ‘unnatural’ meat and will likely think that it must somehow be bad for you, the truth is that artificial meat could be even better than the real thing: Since we would engineer it, we could make it more nutritious and less unhealthy (for example, we could give it less saturated fat, of which you can get too much if you eat a lot of certain types of meat). Nutritional value aside, producing lab-grown meat would require far less land (mostly the land where the laboratory is located), cause significantly fewer CO2 emissions (cow cells just don’t fart as much as cows do), and could probably be automated much more simply and cheaply than traditional farms—in other words, it could be cheaper to produce meat in a lab than on a farm, which means ‘lab-burgers’ would be cheaper than hamburgers, thus giving people an incentive to prefer artificial meat over the traditional kind. Of course, meat isn’t the only reason we raise and kill animals; there are other animal-derived products—as well as products coming from animals other than cows, chickens, and ducks—that we use or consume, and as long as we do, animal suffering and the rest of the usual suspects will remain a problem to at least some extent. However, we need to keep in mind that meat consumption is the primary reason for raising livestock; remove that reason and you eliminate a big chunk of the problem. Additionally, no one says that cultured beef and poultry are the best we can do: dairy, seafood, egg whites (though these aren’t exactly like the real thing), and leather are possible as well. Admittedly, we’re far from molecular assemblers or the fabled Star Trek replicators (though 3D printers could be considered their great-great-grandfathers, and yes, they can be used to make food) that could make nearly everything, food included, without having to plant a seed or touch an animal, but you know, we’re off to a good start anyway.

These reasons alone would probably be enough to convince many consumers to make the transition; the fact that artificial meat and foods would take animal cruelty out of the equation could easily make them the new ‘organic’ and drive the final nail into the coffin of old-school farming. After tens of thousands of years, it would be about time.

This sounds great. So, what’s the problem, and what does this have to do with the lunch with my friend? My friend’s reaction to artificial meat was: ‘I need to think of a reason to oppose it.’ My understanding was that the matter was one of principle: Eating meat is wrong, in any form, and people should not do it. Period.

Even if my understanding was incorrect in the specific case, I wouldn’t be surprised if some people actually espoused this principle, and that is the third problem of ‘-isms’: Sometimes, ideologies come before objectives.

If people didn’t eat meat in the first place, then we probably wouldn’t have an animal cruelty problem, because we probably would just leave cows & co. alone, and we wouldn’t need artificial meat either. However, a lot of people do eat meat, and some (possibly a great many) of these people may never buy into vegetarianism or veganism, for a number of reasons ranging from taste to laziness to ideology. Anyone who thinks eating meat is wrong because it causes animals a great deal of suffering (not to mention death) needs to decide what is most important to them—convincing everyone to embrace their own moral principles, or putting an end to said animal suffering. I would say the latter is far more important, for whether or not everyone agrees that eating meat is wrong doesn’t really matter so long as no animal is harmed or killed in order to get the meat. Animals certainly wouldn’t give a flying monkey’s arse whether you ate meat or not, so long as the meat wasn’t their own, strictly speaking.

So, if one’s goal is to persuade everyone else that eating meat is Wrong™—by all means, let them go ahead and wage total war on any kind of meat consumption, traditional or artificial. On the other hand, if the goal is to achieve the (desirable) vegan utopia where no human ever harms an animal, one should consider that the war on other people’s dietary habits is unlikely to be won at all, or at least not for a rather long time (during which animals would keep suffering and dying in slaughterhouses). There are a number of reasons why the chances of turning everyone into a vegan are low—for example, for as long as our species has existed, we haven’t managed to convince everyone not to kill other humans for ultimately futile reasons; I fail to see what could possibly persuade everyone not to kill any animal, ever, for food—but artificial meat stands a respectable chance of sidestepping the problem altogether and letting us have our cake and eat it too, within a reasonable timeframe of a couple of decades, give or take. (Some think the first lab-burgers might be served as soon as five years from now.) After all, the best way to resolve a conflict is to make all parties involved equally happy.

I don’t know how many vegetarians/vegans are aware of the opportunities offered by the advent of artificial meat (and, more generally, artificial food), nor how many endorse or oppose the idea. Hopefully, those who oppose it are very few. As I said, I’m neither vegetarian nor vegan, yet I look forward to the day when science and technology will have solved this problem too and ended the age of slaughterhouses; if I were vegetarian/vegan, I’d be shouting about artificial meat from the rooftops all the time—pretty much in the same way I do with rejuvenation biotechnologies—because, a) it’s likely the only way to actually ever close the slaughterhouses for good, and b) the more people support the cause, financially or otherwise, the sooner it will happen.

Sapiens: a discussion

Writing book reviews isn’t really something I do on a regular basis or that I particularly enjoy, but I recently read a book that was good and interesting enough to be worth discussing: Sapiens: A Brief History of Humankind by Yuval Noah Harari. The book is exactly what it says—a (relatively) brief history of our species, starting from before it was even born and ending with speculations about the future. Up to the point where Harari gets into the speculative bits, I enjoyed the book thoroughly; I found it thought-provoking and even eye-opening. The author’s outlook on the future, however, was disappointing. Too much unsubstantiated, subtly implied pessimism, which, together with a few huge blunders here and there, betrays the lack of a solid scientific background—indispensable for any serious discussion about the future, however speculative.

Anyway, let’s start with the good bits.

The cognitive revolution

The first part of the book focuses on the cognitive revolution—the intellectual growth that slowly changed us from ‘animals of no significance’ into rulers of the Earth. This was the time when humans became able to pass on to future generations knowledge and skills that were not encoded in their DNA; it is also the time when we started building more complex social structures, no doubt thanks to our newly acquired ability to communicate, which also enabled us to plan and thus think and act as a group. During this time, we were not alone: Homo sapiens is only one of at least six human species that have inhabited the Earth, and we did share the planet with some of them for some time. In particular, we lived alongside the famous Neanderthals, whom we apparently ‘drove’ to extinction relatively shortly after coming into contact with them, along with a number of ecosystems, such as the ancient Australian one.

This is one of the non-speculative bits that made me turn up my nose. It seems there’s still some controversy over whether Sapiens or other causes (such as climate change) drove the extinction of the Neanderthals and of the ancient Australian ecosystem, but this is not the point. Even if it was because of Sapiens (our species), I find that Harari’s way of explaining the facts is not as neutral as it ought to be. It’s undeniable that, these days, blaming humans for everything is fashionable, whether they’re guilty or not; nodding approvingly at how bad or evil humans are has somehow become a sign of wisdom, and that’s what I think Harari is doing, perhaps unintentionally, in his exposition. It would be easy for a reader to conclude that the extinctions caused by our primitive ancestors are nothing but the umpteenth sign that our nature is intrinsically evil and that we’re a ‘bad’ species. Harari’s exposition doesn’t seem to put sufficient emphasis on the fact that Sapiens back then knew nearly nothing of how the world works, and were reasonably more concerned with their own survival than with that of an ecosystem whose existence they were unaware of. I’d have a hard time believing that they intentionally hunted species to extinction, or that they even knew this was possible. If they did kill off the Neanderthals, it wasn’t out of sheer evil, or corporate greed, or racial hatred. Odds are they were trying to do what all other species try to do: Survive, without worrying too much (or at all) about the other species. In fact, other species put no special effort into maintaining the delicate balance we observe in ecosystems today; they don’t care about it and they don’t even know about it. If wolves were hungry enough to hunt sheep to extinction and had the chance to do it, they would. They don’t have an ecological conscience that might stop them before the irreparable happens. Arguably, humans are the only species on Earth that has ever cared about the ecosystem, even though some of us are indeed part of the reason why the rest of us have to worry about the ecosystem in the first place. Another thing that Harari doesn’t seem to take into account at this point is that the very moral system by which we judge killing other species as ‘bad’ hadn’t been invented yet.

Imagined realities

Moral systems are an outstanding example of a phenomenon Harari describes quite early on in the book: Sapiens’ unique ability to make up imagined realities and live them out as if they were objective truth. As Harari himself says, this might sound shocking, but there’s no such thing as human rights, or companies, or money, or nations, or gods. They’re nothing but figments of our imagination, but we agree they exist, and behave accordingly, because of the benefits we get out of them. Ideals such as human rights can coordinate a huge crowd of complete strangers to pursue a common goal in a very effective way; similarly, our belief in the existence of companies makes us work for them, even though there really are no companies. There’s only a bunch of people doing stuff that eventually ends up in a usable result because they all believe the company exists. If a judge were to dissolve the company for some reason, the same bunch of people would most certainly stop believing in its existence and thus stop working for it. When you think about it, though, how was the company created or dissolved? Mostly, things were written on pieces of paper, and other more or less virtual things (money) were moved from one place to another. Harari calls this a ‘ritual’, comparing it to the religious rituals that lead people to believe that a piece of bread turns into the flesh of their god. Morals are imaginary too: Good or bad, socially acceptable or not, right or wrong, etc., are merely ideas, and as such they exist only in our heads, and they may go as easily as they came. Two gladiators fighting to the death, or a naked and unarmed prisoner devoured by hungry beasts, were perfectly acceptable forms of entertainment back in the days of the ancient Romans; today, not only is this not entertaining, but it is considered barbaric and is illegal in the overwhelming majority of the world—and personally, it makes me feel like throwing up.

The growth trap

Another interesting idea is that the agricultural revolution—i.e., when we stopped wandering around and settled down to cultivate the land—was a fraud, and not just any fraud: It was history’s biggest fraud, Harari says. Before we started cultivating, Harari explains, we had a much more varied diet and we didn’t work as hard, even though we were physically very active nonetheless. It would take a group of hunters much less time to hunt down a huge animal, skin it, etc., than it would take to cultivate a wheat field, protect it from parasites and intruders, harvest it, store it, etc. Floods and dry seasons also increased the chances of losing the yield and all the work that had been put into it. In addition, a permanent settlement was the ideal environment for diseases to flourish (especially considering that hygiene wouldn’t be a thing for quite a few millennia yet), which was not the case with a nomadic lifestyle. Another big change brought about by the new lifestyle was that women could get pregnant every year, which came in handy because extra help in the fields was always needed. However, an extra helper was also an extra mouth to feed, so eventually the plan became to work harder now to have a bigger yield in the future; children would be well fed, and less work would be required for a while. The plan didn’t work out, Harari says, because the population kept increasing (even though living conditions were still quite bad and child mortality quite high), and more work was always required to feed the ever-growing number of mouths. By then it was too late to go back to the old hunter-gatherer lifestyle, as it couldn’t support the current population. The only option was going down the more-people-more-work spiral. The fraud lies in the fact that, while the agricultural revolution is supposed to have made people’s lives better, it did not—at least not for those who lived through it. We inhabitants of the modern world, whose existence is a consequence of the agricultural revolution, are ultimately reaping the fruits of the hard work of the people who lived back then.

However, the fraud is still going on, Harari argues, because the ‘work more today to have more tomorrow’ mentality is still with us, even though it works no better today than it did back then. In theory, the point of all our tools and technology is to either simplify our work or do it for us, so that we can be relieved of the burden of work entirely. In practice, what happens is that we use these tools to ‘increase productivity’, which is just a way of saying ‘work more’. We still work on average 8 hours a day—so much for Keynes’ prediction that we’d be working only 15 hours a week by the end of the 1900s—and instead of taking advantage of ever-growing automation to free ourselves from work and pursue our passions, we worry about robots taking our jobs, failing to realise that both money and the idea that individuals must earn a living are among the aforementioned imaginary things we let dictate our lives. We have washing machines, dishwashers, phones, vacuum cleaners, the Internet itself, and they’re all supposed to make our lives easier and more enjoyable; and yet, we’re more stressed than ever, with much less free time than our ancestors had before the agricultural revolution.

Harari uses email as an illuminating example of how our use of technology may lead to more stress and work rather than less. In the old days, one wouldn’t receive but a handful of letters a month, because it took a long time for a letter to get from the sender to the recipient. When you got a letter, you knew it would take weeks before your reply reached its destination, so you’d hardly feel compelled to sit down and pen a reply right away. You could take your time and think carefully about what to write. Today, emails can reach the other side of the globe nearly instantly, so we all expect swift replies and often feel compelled to reply immediately. And while in the 1800s useless mail wasn’t really a thing, today there’s plenty of essentially junk mail going around in the form of social media notifications, ads, spam, scams, etc. (I might add that lightning-fast communication is the mother of flame wars. When somebody pisses you off on the Internet, you can type out a furious reply right away, in the midst of your rage, and expect an equally unfriendly answer within a few hours at most.) Emails didn’t simply speed up our existing mail traffic; they increased it enormously. That’s pretty much what we have done with all other technologies: Machines make existing jobs simpler and faster, so that we could maintain the same level of productivity with less work; instead, we insist on increasing productivity, never really cutting down our working hours when not making them longer. Maybe this is because our population is still growing, and the same ancient trap that fooled our ancestors is keeping us captive today; maybe, as our numbers start to settle down towards 2100, it will be possible to break free of the growth trap. It is also possible that our technology isn’t yet advanced enough to break us free from the trap, but that it will once we reach a sufficient technological level. However, I fear the productivity mantra might be so ingrained in our brains that we could feel compelled to keep ‘creating new jobs’ to subjugate people’s lives to, even when it is unnecessary.

Three factors of unification

As Harari explains, there are three main factors that have led to the unification of humankind (not necessarily in the sense of one single, big, loving family): money, empires, and religion.

All three of these things are purely psychological constructs. Money and the reasons I dislike it are going to be regulars here on l4t, so I’m not going to talk about them too much in this post. Nonetheless, it is definitely worth spending a few words on the subject.

The fact that money has no intrinsic value will hardly be a revelation. More or less everyone knows it, but we accept the value of money as an axiom because we don’t really have an option. The vast majority of the people in the world believe in money, so if I decided not to believe in it any more and stopped accepting it in exchange for things, all I would likely achieve is screwing myself over. As Harari explains, money is the most successful trust system ever devised, and it is so successful precisely because so many people (for all intents and purposes, everyone) believe in it. I would add another reason for its success: The system is engineered so that your life depends on having money. If that weren’t so, you’d hardly give a flying rat’s arse about money. A lot of people believe in all sorts of crazy things, and yet, as long as your life doesn’t depend on them, you can safely neglect them entirely. These two things together are what makes money so successful. Try to imagine this scenario: The world works just as it does today, except there is no money. You do work because you trust that other people will do the same, for the same reasons as you. You go to your workplace every morning and do your job, then go to the shopping mall and take your share and no more than that, trusting others will do the same. Unfortunately, as things stand, this system doesn’t stand a chance of working, for several reasons—for example, hardly anyone would want to work as a cleaner if they weren’t forced to; another is that, without money, it’s difficult to establish what exactly is the ‘share’ you can rightfully take at the shopping mall. However, these reasons aren’t nearly as important as the fact that this system is too easy to cheat. Everybody works because they trust others will do the same, but what if some—perhaps even many—broke this trust? In this system there’s no way of being reasonably sure someone isn’t breaking your trust, and it doesn’t seem to enforce any punishment for potential offenders, unlike an economic system where money is a thing. In a system that uses money, if you don’t work (i.e., if you break the trust of the moneyless system), you don’t get money, and since your life depends on having money, as a rule of thumb, if you don’t work you’re screwed. This is why in a money-based economic system you can be reasonably sure most people will (at least try to) work, i.e. not break your trust: Their lives depend on it. Don’t get me wrong—I’m certainly not praising this system. I think it causes more problems than it solves, and the way it minimises the risk of freeloaders is essentially by blackmailing everyone. Besides, over the course of millennia money has pretty much taken on a life of its own, and it is not really the people whom we trust any more—we trust their money. If they run out of money, Harari rightly reminds us, we run out of trust.

I’m not going to discuss the other two factors here because I’m not nuts about the topics themselves, but the chapters Harari dedicates to them are definitely interesting and worth your time.

The scientific revolution

For a long time, humans believed that everything necessary or worth knowing was already known: All the answers you needed could be looked up in your holy book. Then, at some point, we started realising that this was not the case: We began admitting our ignorance and started studying the natural world in search of answers. That’s, in a nutshell, how the scientific revolution started—with the admission that we’re far from knowing everything, but also with the realisation that we could learn more and consequently improve our lives. This certainly worked out, in no small part because of all the research and enterprises funded by rich people who were interested not so much in knowledge per se as in using science to become even richer. Harari dedicates the last third or so of the book to explaining in detail how the scientific revolution came about, and finally to speculating on the future—the bit I found downright cringeworthy. As I said, up to this point the book is essentially brilliant, characterised by lucid and careful analysis; after this point, Harari falls more often than not into silly clichés and rhetoric. I don’t think he thought this part through as much as he did the rest. (After all, he’s a historian, not a technologist.)

The bad bits: rejuvenation biotechs

I was positively surprised to see that Harari talked, albeit indirectly, about rejuvenation biotechnologies; unfortunately, my surprise turned into disappointment when I read what he wrote about them. In three paragraphs, he condensed a bunch of the usual, stale objections to rejuvenation and committed several rookie mistakes. I feel compelled to point them out, so that less attentive readers will not fall prey to the same misconceptions.

Suppose science comes up with cures for all diseases, effective anti-ageing therapies and regenerative treatments that keep people indefinitely young. In all likelihood, the immediate result will be an unprecedented epidemic of anger and anxiety.
Those unable to afford the new miracle treatments—the vast majority of people—will be beside themselves with rage. Throughout history, the poor and oppressed comforted themselves with the thought that at least death is even-handed—that the rich and powerful will also die. The poor will not be comfortable with the thought that they have to die, while the rich will remain young and beautiful for ever.

The first, subtle but capital mistake Harari commits here is what I call the magic-pill assumption. He is implicitly assuming that the cures for all diseases etc. will arrive all at the same time and with no warning, so that the world will suddenly be split into those who can afford the treatments and those who can’t. In this scenario, there is no time for our society to adapt to the change, and thus all hell will break loose. However, the chances of this actually happening are exactly zero, for an elementary reason: All these wondrous advancements he talks about will require time and effort to become real. There will not be a single treatment to cure all diseases, let alone ageing, magically popping into existence all of a sudden; rather, several therapies will be necessary to achieve these goals, and expecting them to arrive all at the same time is utterly unrealistic. Some will come sooner, others later. Hardly any of these therapies will work perfectly right off the bat; first-generation therapies will work to some extent, but not very well—a good wake-up call for everyone to realise that fully working treatments are well on their way, although not here yet, and that work needs to be done to demand and ensure widespread access. No one will suddenly find themselves ‘immortal’, and thus there will be no instant immortality for the poor to envy the rich for. Therapies will come gradually (in fact, they’re already coming, slowly and pretty much one at a time), and over these long stretches it’s reasonable to expect prices to go down—not to mention the fact that it would be far more convenient for any given State to pay for people’s rejuvenation than to pay their pensions.

The alleged comfort that the poor find in the mortality of the rich is nothing but a romantic fairytale, no less imaginary than the imagined realities Harari exposed at the beginning of the book. I have a hard time believing that the mother of a child dying of starvation would cheer herself up thinking that some unnamed rich people whose faces she’s never even seen will eventually die too. A poor person dying of ageing in a developing country can draw only extremely cold comfort, if any, from the thought of a rich person dying of ageing in the developed world. Unlike the poor person, the rich one has likely led a life full of comforts; he has no doubt lived through a less grim old age, and will be able to afford a clinic, doctors, and painkillers to make his passing less hard. Additionally, why would all poor people hate all rich people? Why hate philanthropists who spend millions (if not billions) to help the developing world?

It gets worse.

But the tiny minority able to afford the new treatments will not be euphoric either. They will have much to be anxious about. Although the new therapies could extend life and youth, they cannot revive corpses. How dreadful to think that I and my loved ones can live for ever, but only if we don’t get hit by a truck or blown to smithereens by a terrorist! Potentially a-mortal people are likely to grow averse to taking even the slightest risk, and the agony of losing a spouse, child or close friend will be unbearable.

The implicit assumption here is that, presently, the thought of dying in an accident or a terrorist attack isn’t so bad, because you would eventually die of ageing anyway. I would refrain from trying to comfort the relatives of terrorism victims with this argument—odds are they’d punch me in the face. Losing a dear one is always horrible, no matter how long they had left to live when they died. I really don’t think that losing a child is any less painful if you know that they’d have died of ageing anyway. This is honestly an extremely dumb argument, a real shame for a book that had thus far been quite excellent.

The claim that a-mortal people are likely to grow averse to any risk, even the smallest, is completely unsubstantiated and unjustified. I do agree that being a-mortal could make you think twice before abusing alcohol or driving recklessly, for the simple reason that such things are hardly worth risking your potentially endless life over (and IMHO, they’re not worth risking even a presently normal lifespan); but I don’t think you would refuse to get on a plane to see your family just because there’s a one-in-a-million chance that the plane will crash. (You might reconsider as you approach your 1,000,000th flight, but getting there would probably take longer than planes will be around.)
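
Just to put that aside into numbers, here’s a quick back-of-the-envelope sketch of the arithmetic, using the hypothetical one-in-a-million figure from above (not a real aviation statistic):

```python
# Back-of-the-envelope arithmetic for the aside above (the one-in-a-million
# figure is the hypothetical from the text, not a real aviation statistic):
p = 1e-6  # assumed probability of disaster on any single flight

for n in (1_000, 100_000, 1_000_000):
    risk = 1 - (1 - p) ** n  # chance of at least one disaster in n flights
    print(f"after {n:>9,} flights, cumulative risk is about {risk:.1%}")
```

Even a very frequent flyer logging a few thousand flights in a lifetime would face a cumulative risk well under one percent—hardly grounds for total risk aversion.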

Also, Harari fails entirely to take timescales into account—he makes the mistake of projecting present-day problems onto far-future scenarios. If everything goes well, we might have rejuvenation therapies in time for most people alive today to benefit from them, but I think the day when everyone can safely be called ‘a-mortal’ is a long way off. Is he sure that, so far into the future, there will still be terrorist attacks, trucks to be hit by, and economic inequality (or even money)? I am not sure there won’t be any of these things, but he seems pretty damn sure there will be. As a historian, he should be very well aware of how much our world has changed throughout history, and he should be especially aware of the positive trends of the last two centuries (for example, the plummeting of extreme poverty) that are evidence of a future much brighter than he implicitly suggests.

Quite frankly, Harari made rather miserable arguments to back up his case that perpetual health and indefinite lifespans might not be so good.

The bad bits: computer viruses and artificial life

In the section ‘Another life’, Harari brings up computer viruses as an example of ‘completely inorganic beings’. He mentions genetic programming as one of the most interesting areas of computer science at the moment, which is certainly true, and explains that it endeavours to emulate the methods of genetic evolution, which is also true. He says,

Many programmers dream of creating a program that could learn and evolve completely independently of its creator. […] A prototype for such a program already exists—it’s called a computer virus. As it spreads through the Internet, the virus replicates itself millions upon millions of times, all the while being chased by predatory antivirus programs and competing with other viruses for a place in cyberspace. One day when the virus replicates itself a mistake occurs—a computerised mutation. Perhaps the mutation occurs because the human engineer programmed the virus to make occasional random replication mistakes. Perhaps the mutation was due to a random error. If, by chance, the modified virus is better at evading antivirus programs without losing its ability to invade other computers, it will spread through cyberspace. If so, the mutants will survive and reproduce. As time goes by, cyberspace would be full of new viruses that nobody engineered, and that undergo non-organic evolution.

If you have only a vague idea of how computers work and what a computer virus is, this might sound sensible. The truth is, a little research on the Internet is enough to conclude that this is mostly nonsense. In order to explain why, we need to distinguish fact from fiction.

  • Computer viruses are completely inorganic beings – FICTION
    Computer viruses are inorganic alright, but they’re nowhere near being ‘beings’, no more than Microsoft Word or Mozilla Firefox are. Computer viruses are instructions for a computer to run—nothing more, nothing less.
  • Computer viruses replicate themselves millions of times – FACT
    How much they replicate depends a lot on the circumstances, but yes, computer viruses generally are instructed to create copies of themselves. There’d be little point in even programming a virus otherwise.
  • Computer viruses mutate via replication mistakes, either random ones or ones intended by the programmer – MOSTLY FICTION
    Computer viruses can be programmed with various stealth techniques meant to help them avoid detection by antivirus software. A possible way of doing this is using self-modifying code in combination with polymorphic code: In a nutshell, the virus rewrites parts of itself (or even the whole thing) so that its code looks different while doing exactly the same things it did before (see the toy sketch after this list). This is not at all the same thing as a ‘replication mistake’. An actual replication mistake, be it random or intended by the programmer, would overwhelmingly likely result in broken code that can’t even be run. I’m not going to say that it’s entirely impossible for a replication mistake to produce a working copy of the virus, but I am willing to bet that the chances are so ridiculously small that the virus would have to replicate itself until the end of time before such a miraculous occurrence actually took place.

    More importantly, viruses will not ‘evolve’ any new features as a result of this approach, and will not build upon them. Additionally, antivirus software will not ‘mutate’ in response to viral mutation. Antivirus software doesn’t have a life that depends on catching viruses. When a new virus comes around, more often than not human researchers figure out a way for their software to detect it by inspection (the so-called ‘virus signature’) and update it; alternatively, antivirus software can bust unknown viruses through heuristics, or by keeping an eye on programs that exhibit virus-like behaviour (touching stuff they’re not supposed to touch or reproducing like rabbits, for example). As a side note, if they ‘want to live’, computer viruses need to keep a low profile: If a virus ‘evolved’ new, unplanned features, it’d be much more likely to start doing random shit and attract attention, either from the user or from the antivirus, thus undermining its own ‘survival’. Viruses and antiviruses will not cause each other to evolve, as predator-prey pairs do in nature.

  • Computer viruses are chased by predatory antiviruses and compete for a place in cyberspace – FICTION
    Computer viruses don’t compete with each other. You can easily have tons of different viruses running on the same computer. They don’t kill each other, and they don’t steal resources from other viruses, because they don’t actually need any in the way living beings do. Worst-case scenario, a virus might have spread so much on a given computer that all the storage is taken or the computer can’t run any more. Other viruses on that same computer are certainly not going to ‘die’ because of this. Maybe they can’t make more copies of themselves, but that’s not going to trigger a competition with anyone. They aren’t going to fight each other. More importantly, viruses will not try to outsmart each other and will not ‘evolve’ features to do better than other viruses. If you have a computer with virus A and virus B inside, and A does a better job of replicating itself than B, B will not ‘evolve’ any new feature to do better than A.

    Also, viruses are not ‘chased’ by ‘predatory’ antivirus software. An antivirus program is nothing more, nothing less than a set of instructions meant to find and eliminate unwanted instructions (viruses). An antivirus will not ‘run after’ a virus: It will either find it or not, and if it does, the virus is not going to ‘run away’, nor will the antivirus have to ‘chase’ it.
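
To make the polymorphism point concrete, here’s a toy sketch of the idea—my own illustration, not anyone’s actual malware, and the payload string is just a hypothetical stand-in: the same payload is stored under a different random XOR key in each ‘generation’, so the stored bytes look different while the decoded payload (and therefore the behaviour) stays exactly the same.

```python
import random

# Toy illustration of the idea behind polymorphic code (not actual malware):
# the same payload is stored under a different random XOR key in each
# 'generation', so the stored bytes differ while the decoded payload stays
# exactly the same.
PAYLOAD = b"do_the_usual_virus_things()"  # hypothetical stand-in payload

def encode(payload):
    key = random.randrange(1, 256)  # fresh key for each copy
    return key, bytes(b ^ key for b in payload)

def decode(key, blob):
    return bytes(b ^ key for b in blob)

key1, copy1 = encode(PAYLOAD)
key2, copy2 = encode(PAYLOAD)
print(copy1 != copy2)                                 # bytes differ (almost surely)
print(decode(key1, copy1) == decode(key2, copy2) == PAYLOAD)  # same payload
```

Note that there is no mistake anywhere in this process: the rewriting is deliberate and perfectly reversible, which is exactly why behaviour is preserved.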

The bottom line is: Yes, computer viruses do mutate to an extent, and these mutations can help them avoid detection; but no, they aren’t going to ‘evolve’ into anything different, and especially not into something that ‘nobody engineered’. As a side note, while genetic programming can be used to evolve computer viruses, it’s not a very efficient approach for a virus that needs to survive ‘predatory’ antivirus software out in the wild of cyberspace: Running a genetic algorithm to create new variants of the virus would take time and create tons of useless junk in the process—a great way to increase the chances of being busted.
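
To put a rough number on the ‘useless junk’ claim, here’s a crude, purely illustrative experiment of my own (the tiny program and the trial count are arbitrary): flip one random character in a small program’s source, as a stand-in for a ‘replication mistake’, and count how many mutants even parse.

```python
import random

# Crude experiment: treat a one-character change to a tiny program's source
# as a 'replication mistake' and count how many mutants even parse.
SOURCE = "def f(x):\n    return 2 * x + 1\n"
random.seed(0)  # make the run reproducible

trials, parses = 10_000, 0
for _ in range(trials):
    i = random.randrange(len(SOURCE))
    mutant = SOURCE[:i] + chr(random.randrange(32, 127)) + SOURCE[i + 1:]
    try:
        compile(mutant, "<mutant>", "exec")  # does the mutant even parse?
        parses += 1
    except SyntaxError:
        pass

print(f"{parses}/{trials} single-character mutants still parse")
```

In my understanding, the large majority of mutants fail to even parse; and parsing is a much lower bar than running correctly, let alone running correctly and having gained a useful new feature.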

I admit that these two blunders—rejuvenation and computer viruses—might have biased me against the rest of the book, but I do think Harari’s view of the future tastes too much like a dystopian-future movie. It seems based on rather fanciful interpretations of possible developments of current technologies, and polluted by the stereotypically pessimistic idea that history is destined to repeat itself over and over. (Not the whole of history, though. A lot of people seem to think that only negative patterns repeat themselves throughout the centuries, while positive ones do not, for some mysterious reason. The reason might be that bad news sells better than good news.)

However, my overall opinion is that this is a good book, some parts of which need to be taken with a pinch of salt. Even though I was quite disappointed in Harari’s speculations on the future, I’m nonetheless curious to see if he has redeemed himself in the sequel to Sapiens, Homo Deus: A Brief History of Tomorrow. When I read it, I’ll let you know. 🙂

Scattered thoughts on self-awareness and AI

I’ve always been a fan of androids as portrayed in Star Trek. More generally, I think the idea of an artificial intelligence with whom you can talk and to whom you can teach things is really cool. I admit it is just a little bit weird that I find the idea of teaching things to small children absolutely unattractive while finding the idea of doing the same with a machine thrilling, but that’s just the way it is for me. (I suppose the fact that a machine is unlikely to cry during the night and need its diaper changed every few hours might well be a factor at play here.)

Improvements in the field of AI are pretty much commonplace these days, though we’re not yet at the point where we could talk to a machine in natural language and be unable to tell it apart from a human. I used to take for granted that, one day, we would have androids that are self-aware and have emotions, exactly like people, with all the advantages of being a machine—such as mental multitasking, large computational power, and more efficient memory. While I still like the idea, nowadays I wonder whether it is actually a feasible or sensible one.

Don’t worry—I’m not going to give you a sermon on the ‘dangers’ of AI or anything like that. That’s the opposite of my stance on the matter. I’m not making a moral argument either: Assuming you can build an android that has the entire spectrum of human emotions, this is, morally speaking, no different from having a child. You don’t (and can’t) ask the child beforehand if it wants to be born, or if it is ready to go through the emotional rollercoaster that is life; generally, you make a child because you want to, so it is in a way a rather selfish act. (Sorry, I am not of the school of thought according to which you’re ‘giving life to someone else’. Before you make them, there’s no one to give anything to. You’re not doing anyone a favour, certainly not your yet-to-be-conceived potential baby.) Similarly, building a human-like android is something you would do just because you can and because you want to.

I find self-awareness a rather difficult concept to grasp. In principle, it should be really easy: It’s the ability to perceive your own thoughts and tell yourself apart from the rest of the world. I really don’t think this is something we’re born with, but rather something we develop. Right after you’re born, you’re a jumbled mass of feelings and sensations trying to make sense of themselves, and failing. If it were somehow possible to keep an infant alive while depriving them of all external stimuli (vision, hearing, smell, touch, taste), I very much doubt that jumbled mass would ever manage to make sense of itself. In such a hypothetical scenario, I’m not even sure this baby could think in the sense we usually mean: What can you reasonably think about, if you know literally nothing?

Research to better understand how the brain develops self-awareness is going on all the time; recently, studies have pinpointed a part of the brain where self-awareness might be generated. Other studies suggest there’s no single area responsible for it; rather, it might be the work of several brain pathways working together. I really hope they’ll figure it out soon, because this issue drives me bonkers. From my personal experience of self-awareness, it appears as if there is some kind of central entity that is constantly bombarded with external stimuli, memories, feelings, and so on. This central entity (my self-awareness, or me) also appears to be independent of the stimuli it is exposed to: You’re still you whether you’re smelling a cup of coffee or not, or whether you’re thinking about a certain past event or not. This seems to make sense, but I think it is in sharp contrast with the idea I expressed above, i.e. that self-awareness develops thanks to all the internal and external stimuli we’ve been exposed to since day one. Either there’s no central entity, or it works differently from how I think it does.

Some suggest self-awareness is an illusion brought about by the constant, small changes in your perceptions. These changes are so tiny, and their flow so smooth, that we get an illusion of continuity—the article argues—an illusion we call ‘self-awareness’. This is a very interesting theory, and I think it makes sense at least to an extent. The thought that ‘you’ could be the sum total of all the perceptions (internal and external) you experience in a given unit of time has crossed my mind more than once. Over the years, people change their minds, they behave differently, and in some cases they might become the opposite of what they used to be. This might be nothing more, nothing less than the result of the changes in our perceptions and of how they change what’s inside our brains. Still, in order to work, an illusion requires someone (or even something) to be tricked by it. If the self is just an illusion and there really is no ‘you’ there, who or what is being tricked? It’s almost as if this theory says that the illusion is tricking itself, which is quite paradoxical; on the other hand, I think this conclusion shows there’s no such thing as a ‘simulated consciousness’, as in a machine that believes itself to be self-aware when in fact it is not. (This has always been quite clear to me.)

Before I get too philosophical, let’s go back to self-aware machines. I think we’ll have a hard time giving machines self-awareness before we thoroughly understand what it is and how it works. Some suppose it might be a spontaneous consequence of complexity; for example, if you have a neural network learn bazillions of things, self-awareness might ‘magically’ arise once a certain level of interconnectedness between all that information has been reached. That’s an intriguing idea, and it implies we might be able to give self-awareness to a machine without having to understand how the process works first, but I am a bit sceptical. On top of that, how do you prove that the machine you built is self-aware? As a matter of fact, you can’t prove that directly even for humans: There’s no way to be sure that the person you’re talking to is actually perceiving their own thoughts, but we generally accept certain things as signs of self-awareness, such as the ability to talk about yourself, to recall past experiences, to interact with your surroundings, to avoid obstacles, to acquire new knowledge and apply it, etc. If a machine showed these signs, then we might conclude it is self-aware; but what if some of these signs were missing? For example, dogs don’t talk much about themselves. Are they not self-aware? I don’t know for a fact, but I doubt they aren’t. It is possible that there are different levels of self-awareness. Maybe self-awareness is not a binary phenomenon, but a continuous one. If it is true that self-awareness develops thanks to the external stimuli you’re exposed to, then it is reasonable to think that different species may have different levels of self-awareness, because not all species can experience the same stimuli, let alone process them. Furthermore, it seems reasonable to assume that a critical brain mass is necessary to be aware of your own thoughts, so the smaller a creature’s brain, the ‘lower’ (no offence intended) the degree of self-awareness we should probably expect, with no self-awareness at all below the critical mass.

This takes us to another aspect of the issue. If self-awareness is (as it is reasonable to suppose) a product of evolution, then each species’ self-awareness has been fine-tuned to best suit the needs of the species itself. Bacteria, for example, needed no self-awareness to do what they do, and so they haven’t developed any. (To be fair, they probably couldn’t have developed self-awareness even if they had needed to, because it’s hard to have a brain when your body is a handful of cells at best.) A machine, even if it were capable of learning and thus of ‘evolving’ to an extent, wouldn’t be subject to the same evolutionary pressure as biological creatures; its ‘survival’ would not depend on how fit it was for its environment, and it would not ‘reproduce’ with other machines so that any ‘mutations’ could be passed on or selected against. In other words, it wouldn’t have any need pushing it to develop self-awareness or emotions of any kind. (On the plus side, this suggests it would hardly have a reason to go on a human-killing spree.) If a machine ever has self-awareness or displays emotions, I think it’ll be because we made it that way, quite possibly just because it was better for us, in one way or another. A self-aware machine with no objective need for sadness or guilt, for example, might well ask you why the heck you enabled it to feel such unpleasant emotions if it wasn’t strictly necessary. An answer could be that we get along better with an emotional machine than with an unemotional one, but the machine might find this reason insufficient. (Which, on the minus side, might give the machine a reason to go on a human-killing spree, especially if we gave it an instinct for revenge.)

Still, I suppose it wouldn’t be entirely impossible for machine ‘feelings’ to develop in a sentient machine, even without evolution. Maybe they wouldn’t be happiness or sadness, since these have a strong biological basis, but they could be something else, something only a machine can ‘feel’ and that we cannot even imagine. Future developments in any given field have a tendency to surpass our imagination, and whatever future AI is going to be like, I’m looking forward to seeing it.