The Elynx Saga goes permafree

Believe it or not, l4t did not die again. In fact, my to-post list has been growing longer for the past few months, but I never had the time to write anything because I had other priorities. One of these priorities was turning The Elynx Saga into a permafree series.

Oh, right. At this point, you probably have no idea what The Elynx Saga is. It’s my science fiction ebook series. I began writing it around 2002, just as a hobby, thinking that one day I would send it to a publisher to become rich and famous and gift it to the world. Eventually, self-publishing became a thing, and in late 2015, in the middle of an existential crisis kind of thing, I thought I’d drop everything I was doing and become a writer, taking advantage of all the stuff I had already written (pretty much four novels) and publishing it all on Amazon.

My super professional-looking ad.

After tons of work and time spent re-reading, proofreading, changing all the things that inevitably make you cringe 10+ years after you’ve written them, proofreading again, translating everything from Italian into English, and figuring out the world of self-publishing, I eventually realised that, yes, I do like writing and I do want to publish my science fiction series for everyone to read; but writing for a living, especially given that you have to do everything by yourself with near-zero immediate returns? No, thank you. You know, I like eating—and not freaking out. Besides, I came to the conclusion that I really love all the other things I was about to drop for the sake of writing alone, and that, had I abandoned them, I could never have been truly happy.
The Fall of the Gods

That realisation notwithstanding, I had already published my first book, The Fall of the Gods (aka FOG), and I saw no reason why I should not continue writing and publishing the rest of the series, albeit not as a job. I could still sell my books on Amazon and other publishing platforms, having fun in the process and maybe making a pretty penny from it, too. It wasn’t going to be very easy, though, because of the four books I had already written, only the first was actually good enough to be published. The rest was okay in terms of general directions for the series’ timeline and events, but it would basically need to be rewritten nearly from scratch—which is what I am currently doing. Additionally, other commitments (mainly my master’s thesis, Rejuvenaction, and studying web development) were preventing me from dedicating much time to my series; to top it all off, a number of more or less horrible translation blunders had somehow made it into the English version of The Fall of the Gods. After a kind soul helped me root out all these outrages to the English language and put an end to their nefarious existence, I was ready to publish the second edition of The Fall of the Gods. If you think a ‘but’ is coming, you’re wrong. In fact, two ‘buts’ are coming.

The first ‘but’ was that all my manuscripts were written in OpenDocument Text (ODT) format. To get from ODT to Kindle and ePUB formats, a manuscript had to go through all sorts of more-or-less automatic conversions, the obvious result of which was that there would always be at least some things not working as expected; when this happened, I had little to no idea how to fix it. I had this kind of problem back when I published the first edition of FOG, and I was having it again the second time around. To preserve my sanity, I decided to manually convert everything into ePUB format. This took tons of copypasting and messing around with CSS & HTML5 to make sure the end result would be reasonably compatible with a reasonable number of ebook readers, but at least now I have a flexible and reliable framework that does exactly what I want it to do and requires only a little converting, which can be done with ease. This pushed the publication date back by two more months.
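In case you’re curious about the nuts and bolts: an ePUB is ultimately just a ZIP archive with a fixed internal layout, which is what makes the hand-rolled approach viable. The sketch below is not my actual framework, just a minimal illustration of the packaging step (the file names are placeholders for the manuscript’s real XHTML and CSS files):

```python
import zipfile

def build_epub(output_path, content_files):
    """Package hand-written XHTML/CSS into a minimal ePUB archive."""
    with zipfile.ZipFile(output_path, "w", zipfile.ZIP_DEFLATED) as epub:
        # The 'mimetype' entry must come first and must be stored uncompressed.
        epub.writestr("mimetype", "application/epub+zip",
                      compress_type=zipfile.ZIP_STORED)
        # container.xml tells the reader where to find the package document.
        epub.writestr("META-INF/container.xml", """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>""")
        # The chapters, stylesheet, and package metadata, all written by hand.
        for name in content_files:
            epub.write(name, f"OEBPS/{name}")

# Placeholder file names; the real book has one XHTML file per chapter.
build_epub("fog.epub", ["content.opf", "toc.ncx", "style.css", "chapter1.xhtml"])
```

Everything else (chapters, stylesheet, metadata) is ordinary XHTML and XML you can edit by hand, which is exactly why the end result is so much more predictable than what automatic converters produce.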

The second ‘but’ should actually be further divided into two more ‘buts’.

First, I really dislike Amazon. Thanks to their dominant position in the ebook market, they can afford to play the bully. As an example, I was unjustly suspected of manipulating reviews, and consequently threatened with having my book taken down, because some friends of mine had reviewed my book unbeknownst to me. If you’re an author, their policy forbids your friends and relatives from reviewing your books; strict, maybe, but understandable. The problem is, Amazon don’t care whether you asked your friend to review your book or whether your friend did so of their own accord without telling you anything; you, as the author, are going to bear the brunt of it anyway. They won’t take any time to understand what’s going on. They’ll just send their standard threatening email to every author who fails to comply with their regulations to the letter. Speaking of standard emails, don’t get me started on their customer support. No matter what question you ask, the response you get is a standard copypaste, usually only vaguely related to your original enquiry. Initially I thought I had bumped into an especially dumb employee, but when nonsensical replies started piling up (coming from different people, at that), I realised they can’t all be that thick, and their answers must come from a standard script which they’re probably not supposed to deviate from by an iota. I’m glad I don’t work for Amazon.

Second, I am of the ‘information should be free’ school of thought. I could have published my ebooks for a price with publishers other than Amazon, but I’d rather have my own cosy setup, managed by me, on my own website, in multiple formats and for free. So I took down my ebooks from Amazon and wherever else they had been published. Now you can only find them on my website, distributed under a Creative Commons licence. I really couldn’t be bothered uploading them to other publishing platforms, at least for as long as my series remains a marginal phenomenon. Should there ever be any demand for it, I will consider making physical copies available for on-demand printing (always carefully avoiding any direct contact with Amazon). Those probably won’t be free, because, unlike ebooks, paper books have production and shipping costs, which no sane publisher would ever be willing to bear without making some profit off the author’s revenue. However, in that case I’ll try to keep the price as low as I can. For now, you can enjoy my books for free—even if you don’t have an ebook reader: Just download the PDF version.

I am not going to go into the details of the series or of FOG; I’m kinda hoping you’ll go and have a look for yourself on the series’ website. 😉

Sapiens: a discussion

Writing book reviews isn’t really something I do on a regular basis or particularly enjoy, but I recently read a book that was good and interesting enough to be worth discussing: Sapiens: A Brief History of Humankind by Yuval Noah Harari. The book is exactly what it says—a (relatively) brief history of our species, starting from before it was even born and ending with speculations about the future. Right up until Harari got into the speculative bits, I enjoyed the book thoroughly; I found it thought-provoking and even eye-opening. The author’s outlook on the future, however, was disappointing: Too much unsubstantiated, subtly implied pessimism, which, together with a few huge blunders here and there, betrays the lack of a solid scientific background—indispensable for any serious discussion about the future, however speculative.

Anyway, let’s start with the good bits.

The cognitive revolution

The first part of the book focuses on the cognitive revolution—the intellectual growth that slowly changed us from ‘animals of no significance’ into rulers of the Earth. This was the time when humans became able to pass on to future generations knowledge and skills that were not encoded in their DNA; it is also the time when we started building more complex social structures, no doubt thanks to our newly acquired ability to communicate, which also enabled us to plan and thus think and act as a group. During this time, we were not alone: Homo sapiens is only one of at least six human species that have inhabited the Earth, and we shared the planet with some of them for a while. In particular, we lived alongside the famous Neanderthals, whom we apparently ‘drove’ to extinction relatively shortly after coming into contact with them, along with a number of ecosystems, such as the ancient Australian one.

This is one of the non-speculative bits that made me turn up my nose. It seems there’s still some controversy over whether Sapiens or other causes (such as climate change) drove the extinction of the Neanderthals and of the ancient Australian ecosystem, but this is not the point. Even if it was because of Sapiens (our species), I find that Harari’s way of explaining the facts is not as neutral as it ought to be. It’s undeniable that, these days, blaming humans for everything is fashionable, whether they’re guilty or not; nodding approvingly at how bad or evil humans are has somehow become a sign of wisdom, and that’s what I think Harari is doing, perhaps unintentionally, in his exposition. It would be easy for a reader to conclude that the extinctions caused by our primitive ancestors are nothing but the umpteenth sign that our nature is intrinsically evil and that we’re a ‘bad’ species.

Harari’s exposition doesn’t put sufficient emphasis on the fact that Sapiens back then knew nearly nothing of how the world works, and were reasonably more concerned with their own survival than with that of an ecosystem whose existence they were unaware of. I’d have a hard time believing that they intentionally hunted species to extinction, or that they even knew this was possible. If they did kill off the Neanderthals, it wasn’t out of sheer evil, or corporate greed, or racial hatred. Odds are they were trying to do what all other species try to do: Survive, without worrying too much (or at all) about the other species. In fact, other species put no special effort into maintaining the delicate balance we observe in ecosystems today; they don’t care about it, and they don’t even know about it. If wolves were hungry enough to hunt sheep to extinction and had the chance to do it, they would. They don’t have an ecological conscience that might stop them before the irreparable happens. Arguably, humans are the only species on Earth that has ever cared about ecosystems, even though some of us are indeed part of the reason why the rest of us have to worry about them in the first place. Another thing Harari doesn’t seem to take into account at this point is that the very moral system by which we judge killing other species as ‘bad’ hadn’t been invented yet.

Imagined realities

Moral systems are an outstanding example of a phenomenon Harari describes quite early on in the book: Sapiens’ unique ability to make up imagined realities and live them out as if they were objective truth. As Harari himself says, this might sound shocking, but there’s no such thing as human rights, or companies, or money, or nations, or gods. They’re nothing but figments of our imagination, but we agree they exist, and behave accordingly, because of the benefits we get out of them. Ideals such as human rights can coordinate a huge crowd of complete strangers to pursue a common goal in a very effective way; similarly, our belief in the existence of companies makes us do work for them, even though there really are no companies. There’s only a bunch of people doing stuff that eventually ends up in a usable result, because they all believe the company exists. If a judge were to dissolve the company for some reason, the same bunch of people would most certainly stop believing in its existence and thus stop working for it. When you think about it, though, how was the company created or dissolved? Mostly, things were written on pieces of paper, and other more or less virtual things (money) were moved from one place to another. Harari calls this a ‘ritual’, comparing it to the religious rituals that lead people to believe that a piece of bread turns into the flesh of their god.

Morals are imagined too: Good or bad, socially acceptable or not, right or wrong, etc., are merely ideas, and as such they exist only in our heads, and they may come as easily as they may go. Two gladiators fighting to the death, or a naked and unarmed prisoner devoured by hungry beasts, were perfectly acceptable forms of entertainment back in the days of the ancient Romans; today, not only is this not entertaining, but it is considered barbaric and is illegal in the overwhelming majority of the world—and personally, it makes me feel like throwing up.

The growth trap

Another interesting idea is that the agricultural revolution—i.e., when we stopped wandering around and settled down to cultivate the land—was a fraud, and not just any fraud: It was history’s biggest fraud, Harari says. Before we started cultivating, Harari explains, we had a much more varied diet and we didn’t work as hard, even though we were physically very active nonetheless. It would take a group of hunters much less time to hunt down a huge animal, skin it, and so on, than it would take to cultivate a wheat field, protect it from parasites and intruders, harvest it, store it, and so forth. Floods and dry seasons also increased the chances of losing the yield and all the work that had been put into it. In addition, a permanent settlement was the ideal environment for diseases to flourish (especially considering that hygiene wouldn’t be a thing for quite a few millennia yet), which was not the case with a nomadic lifestyle. Another big change brought about by the new lifestyle was that women could get pregnant every year, which came in handy because extra help in the field was always needed. However, an extra helper was also an extra mouth to feed, so eventually the plan became to work harder now to have a bigger yield in the future; children would be well fed, and less work would be required for a while.

The plan didn’t work out, Harari says, because the population kept increasing (even though living conditions were still quite bad and child mortality quite high), and more work was always required to feed the ever-growing number of mouths. By then it was too late to go back to the old hunter-gatherer lifestyle, as it couldn’t support the current population. The only option was going down the more-people-more-work spiral. The fraud lies in the fact that, while the agricultural revolution is supposed to have made people’s lives better, it did not, at least not for those who lived through it. We inhabitants of the modern world, whose existence is a consequence of the agricultural revolution, are ultimately reaping the fruits of the hard work of the people who lived back then.

However, the fraud is still going on, Harari argues, because the ‘work more today to have more tomorrow’ mentality is still with us, even though it doesn’t work today any better than it did back then. In theory, the point of all our tools and technology is to either simplify our work, or do it for us, so that we can be relieved of the burden of work entirely. In practice, what happens is that we use these tools to ‘increase productivity’, which is just a way of saying ‘work more’. We still work on average 8 hours a day—so much for Keynes’ prediction that, within a century of his writing in 1930, we’d be working only 15 hours a week—and instead of taking advantage of ever-growing automation to free ourselves from work and pursue our passions, we worry about robots taking our jobs, failing to realise that both money and the idea that individuals must earn a living are among the aforementioned imagined things we let dictate our lives. We have washing machines, dishwashers, phones, vacuum cleaners, the Internet itself, and they’re all supposed to make our lives easier and more enjoyable; and yet, we’re more stressed than ever and have much less free time than our ancestors had before the agricultural revolution.

Harari uses email as an illuminating example of how our use of technology may lead to more stress and work rather than less. In the old days, one wouldn’t receive but a handful of letters a month, because it would take a long time for a letter to get from the sender to the recipient. When you got a letter, you knew it would take weeks before your reply reached its destination, so you’d hardly feel compelled to sit down and pen a reply right away. You could take your time and think carefully about what to write. Today, emails can reach the other side of the globe nearly instantly, so we all expect swift replies and often feel compelled to reply immediately. And while in the 1800s useless mail wasn’t really a thing, today there’s plenty of essentially junk mail going around in the form of social media notifications, ads, spam, scams, etc. (I might add that lightning-fast communication is the mother of flame wars. When somebody pisses you off on the Internet, you can type out a furious reply right away, in the midst of your rage, and expect an equally unfriendly answer in a matter of a few hours tops.) Emails didn’t simply speed up our existing mail traffic; they increased it exponentially.

That’s pretty much what we have done with all other technologies: Machines make existing jobs simpler and faster, so that we could maintain the same level of productivity with less work; instead, we insist on increasing productivity, never really cutting down our working hours when not making them longer. Maybe this is because our population is still growing, and the same ancient trap that fooled our ancestors is keeping us captive today; maybe, as our numbers start to settle down towards the end of the century, it will become possible to break free of the growth trap. It is also possible that our technology isn’t yet advanced enough to break us free, but that it will be once it reaches a sufficient level. However, I fear the productivity mantra might be so ingrained in our brains that we’d be compelled to keep ‘creating new jobs’ to fill people’s lives with, even if it was unnecessary.

Three factors of unification

As Harari explains, there are three main factors that have led to the unification (not necessarily in the sense of one big, loving family) of humankind: money, empires, and religion.

All three of these things are purely psychological constructs. Money, and the reasons I dislike it, are going to be regulars here on l4t, so I’m not going to talk about them too much in this post. Nonetheless, it is definitely worth spending a few words on the subject.

The fact that money has no intrinsic value will hardly be a revelation. More or less everyone knows it, but we accept the value of money as an axiom because we don’t really have an option. The vast majority of the people in the world believe in money, so if I decided not to believe in it any more and stopped accepting it in exchange for things, all I would likely achieve is screwing myself over. As Harari explains, money is the most successful trust system ever devised, and it is so successful precisely because so many people (for all intents and purposes, everyone) believe in it. I would add another reason for its success: The system is engineered so that your life depends on having money. If it weren’t so, you’d hardly give a flying rat’s arse about money. A lot of people believe in all sorts of crazy things, and yet, as long as your life doesn’t depend on them, you can safely neglect them entirely. These two things together are what makes money so successful.

Try to imagine this scenario: The world works just as it does today, except there is no money. You do work because you trust that other people will do the same, for the same reasons as you. You go to your workplace every morning and do your job, then go to the shopping mall and take your share and no more than that, trusting others will do the same. Unfortunately, as things stand this system doesn’t stand a chance of working, for several reasons—for example, hardly anyone would want to work as a cleaner if they weren’t forced to; another is that, without money, it’s difficult to establish what exactly the ‘share’ you can rightfully take from the shopping mall is. However, these reasons aren’t nearly as important as the fact that this system is too easy to cheat. Everybody works because they trust others will do the same, but what if some—perhaps even many—broke this trust? In this system there’s no way of being reasonably sure someone isn’t breaking your trust, and it doesn’t seem to enforce any punishment for potential offenders, unlike an economic system where money is a thing. In a system that uses money, if you don’t work (i.e., if you break the trust of the moneyless system) you don’t get money, and since your life depends on having money, as a rule of thumb if you don’t work you’re screwed. This is why in a money-based economic system you can be reasonably sure most people will (at least try to) work, i.e. not break your trust: Their lives depend on it.

Don’t get me wrong—I’m certainly not praising this system. I think it causes more trouble than it solves, and the way it minimises the risk of freeloaders is essentially by blackmailing everyone. Besides, over the course of millennia money has pretty much taken on a life of its own, and it is not really the people whom we trust any more—we trust their money. If they run out of money, Harari rightfully reminds us, we run out of trust.
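If you like seeing incentive arguments spelled out, here is the cheating problem above reduced to a toy payoff calculation. The numbers are arbitrary values I made up for illustration, not an economic model of anything:

```python
# Toy payoff comparison between the moneyless and the money-based regime.
EFFORT_COST = 1.0   # the disutility of actually doing your job
GOODS_VALUE = 3.0   # the value of what you take home from the mall

def payoff(works: bool, moneyless: bool) -> float:
    if moneyless:
        # No enforcement: everyone gets the goods; workers also pay the effort.
        return GOODS_VALUE - (EFFORT_COST if works else 0.0)
    # Money-based regime: no work means no money, and no money means no goods.
    return GOODS_VALUE - EFFORT_COST if works else 0.0

for moneyless in (True, False):
    label = "moneyless  " if moneyless else "money-based"
    print(label, "| worker:", payoff(True, moneyless),
          "| freeloader:", payoff(False, moneyless))
```

In the moneyless regime the freeloader comes out ahead (3.0 against the worker’s 2.0), so trust unravels; with money as the enforcement mechanism, freeloading pays nothing at all, which is precisely the blackmail I was complaining about.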

I’m not going to discuss the other two factors here because I’m not nuts about the topics themselves, but the chapters Harari dedicates to them are definitely interesting and worth your time.

The scientific revolution

For a long time, humans believed that everything necessary or worth knowing was already known. All the answers you needed could be looked up in your holy book. Then, at some point, we started realising that this was not the case: We began admitting our ignorance and started studying the natural world looking for answers. That’s, in a nutshell, how the scientific revolution started—with the admission that we’re far from knowing everything, but also with the realisation that we could learn more and consequently improve our lives. This certainly worked out, in no small part because of all the research and enterprises funded by rich people who were interested not so much in knowledge per se as in using science to become even richer. Harari dedicates the last third or so of the book to explaining in detail how the scientific revolution came about, and finally to speculating on the future—the bit that I found downright cringeworthy. As I said, up to this point the book was essentially brilliant, characterised by lucid and careful analysis; after this point, Harari falls more often than not into silly clichés and rhetoric; I don’t think he thought this part through as much as he did the rest. (After all, he’s a historian, not a technologist.)

The bad bits: rejuvenation biotechs

I was positively surprised to see that Harari talked, albeit indirectly, about rejuvenation biotechnologies; unfortunately, my surprise turned into disappointment when I read what he wrote about them. In three paragraphs, he condensed a bunch of the usual, stale objections to rejuvenation and committed several rookie mistakes. I feel compelled to point them out, so that less attentive readers will not fall prey to the same misconceptions.

Suppose science comes up with cures for all diseases, effective anti-ageing therapies and regenerative treatments that keep people indefinitely young. In all likelihood, the immediate result will be an unprecedented epidemic of anger and anxiety.
Those unable to afford the new miracle treatments—the vast majority of people—will be beside themselves with rage. Throughout history, the poor and oppressed comforted themselves with the thought that at least death is even-handed—that the rich and powerful will also die. The poor will not be comfortable with the thought that they have to die, while the rich will remain young and beautiful for ever.

The first, subtle but capital mistake Harari commits here is what I call the magic pill assumption. He is implicitly assuming that the cures for all diseases etc. will arrive all at the same time and with no warning, so that suddenly the world will be split into those who can afford the treatments and those who can’t. In this scenario, there is no time for our society to adapt to the change, and thus all hell will break loose. However, the chances of this actually happening are exactly zero, for a very simple reason: All these wondrous advancements he talks about will require time and effort to become real. There will not be a single treatment curing all diseases, let alone ageing, magically popping into existence all of a sudden; rather, several therapies will be necessary to achieve these goals, and expecting them all to arrive at the same time is utterly unrealistic. Some will come sooner, others later. Hardly any of these therapies will work perfectly right off the bat; first-generation therapies will work to some extent, but not very well—a good wake-up call for everyone to realise that fully working treatments are well on their way, although not here yet, and that work needs to be done to demand and ensure widespread access. No one will suddenly find themselves ‘immortal’, and thus there will be no instant immortality for the poor to envy the rich for. Therapies will come gradually (in fact, they’re already coming, slowly and pretty much one at a time), and over these long stretches it’s reasonable to expect prices to go down—not to mention that it would be far more convenient for any given state to pay for people’s rejuvenation than to pay their pensions.

The alleged comfort that the poor find in the mortality of the rich is nothing but a romantic fairy tale, no less imaginary than the imagined realities he exposed at the beginning of the book. I have a hard time believing that the mother of a child who is dying of starvation would cheer herself up thinking that some unnamed rich people whose faces she’s never even seen will eventually die too. A poor person dying of ageing in a developing country can draw only extremely cold comfort, if any, from the thought of a rich person dying of ageing in the developed world. Unlike the poor person, the rich one has likely led a life full of comforts, has no doubt lived through a less grim old age, and will be able to afford a clinic, doctors, and painkillers to make their passing less hard. Additionally, why would all poor people hate all rich people? Why hate philanthropists who spend millions (if not billions) to help the developing world?

It gets worse.

But the tiny minority able to afford the new treatments will not be euphoric either. They will have much to be anxious about. Although the new therapies could extend life and youth, they cannot revive corpses. How dreadful to think that I and my loved ones can live for ever, but only if we don’t get hit by a truck or blown to smithereens by a terrorist! Potentially a-mortal people are likely to grow averse to taking even the slightest risk, and the agony of losing a spouse, child or close friend will be unbearable.

The implicit assumption here is that, presently, the thought of dying in an accident or in a terrorist attack isn’t so bad, because you would eventually die of ageing anyway. I would refrain from trying to comfort the relatives of terrorism victims with this argument—odds are they’d punch me in the face. Losing a dear one is always horrible, no matter how long they had left to live when they died. I really don’t think that losing a child is any less painful if you know that they’d have died of ageing anyway. This is honestly an extremely dumb argument, a real shame for a book that had thus far been quite excellent.

The claim that a-mortal people are likely to grow averse to any risk, even the smallest, is completely unsubstantiated and unjustified. I do agree that being a-mortal could make you think twice before abusing alcohol or driving recklessly, for the simple reason that such things are hardly worth risking your potentially endless life over (and, IMHO, they’re not worth risking even a presently normal lifespan); but I don’t think you would refuse to get on a plane to see your family just because there’s a one-in-a-million chance that the plane will crash. (You might reconsider as you approach your 1,000,000th flight, but getting there would probably take longer than planes will be around.)

Also, Harari fails to take timescales into account entirely—he commits the mistake of projecting present-day problems onto far-future scenarios. If everything goes well, we might have rejuvenation therapies in time for most people alive today to benefit from them, but I think the day when everyone can safely be called ‘a-mortal’ is a long way off. Is he sure that, so far into the future, there will still be terrorist attacks, trucks to be hit by, and economic inequality (or even money)? I am not sure there won’t be any of these things, but he seems pretty damn sure there will be. As a historian, he should be very well aware of how much our world has changed throughout history, and he should be especially aware of the positive trends of the last two centuries (for example, the plummeting of extreme poverty) that are evidence of a future much brighter than the one he implicitly suggests.

Quite frankly, Harari made rather miserable arguments to back up his case that perpetual health and indefinite lifespans might not be so good.

The bad bits: computer viruses and artificial life

In the section ‘Another life’, Harari brings up computer viruses as examples of ‘completely inorganic beings’. He mentions genetic programming as one of the most interesting areas of computer science at the moment, which is certainly true, and explains that it endeavours to emulate the methods of genetic evolution, which is also true. He says,

Many programmers dream of creating a program that could learn and evolve completely independently of its creator. […] A prototype for such a program already exists—it’s called a computer virus. As it spreads through the Internet, the virus replicates itself millions upon millions of times, all the while being chased by predatory antivirus programs and competing with other viruses for a place in cyberspace. One day when the virus replicates itself a mistake occurs—a computerised mutation. Perhaps the mutation occurs because the human engineer programmed the virus to make occasional random replication mistakes. Perhaps the mutation was due to a random error. If, by chance, the modified virus is better at evading antivirus programs without losing its ability to invade other computers, it will spread through cyberspace. If so, the mutants will survive and reproduce. As time goes by, cyberspace would be full of new viruses that nobody engineered, and that undergo non-organic evolution.

If you have only a vague idea of how computers work and what a computer virus is, this might sound sensible. The truth is, a little research on the Internet is enough to conclude that this is mostly nonsense. In order to explain why, we need to distinguish fact from fiction.

  • Computer viruses are completely inorganic beings – FICTION
    Computer viruses are inorganic alright, but they’re nowhere near being ‘beings’, no more than Microsoft Word or Mozilla Firefox are. Computer viruses are instructions for a computer to run—nothing more, nothing less.
  • Computer viruses replicate themselves millions of times – FACT
    How much they replicate depends a lot on the circumstances, but yes, computer viruses generally are instructed to create copies of themselves. There’d be little point in even programming a virus otherwise.
  • Computer viruses mutate via replication mistakes, either random ones or intentional ones meant by the programmer – MOSTLY FICTION
    Computer viruses can be programmed with different stealth techniques meant to help them avoid detection by antivirus software. A possible way of doing this is using self-modifying, polymorphic code: In a nutshell, the virus rewrites parts of itself (or even the whole thing) to make its code look different while doing exactly the same things it did before. This is not at all the same thing as a ‘replication mistake’. An actual replication mistake, be it random or intended by the programmer, would overwhelmingly likely result in broken code that can’t even run. I’m not going to say that it’s entirely impossible for a replication mistake to produce a working copy of the virus, but I am willing to bet the chances are so ridiculously small that the virus would have to replicate itself until the end of time before such a miraculous occurrence actually took place.

    More importantly, viruses will not ‘evolve’ any new features this way, and will not build upon them. Nor will antivirus software ‘mutate’ in response to viral mutation: Antivirus software doesn’t have a life that depends on catching viruses. When a new virus comes around, more often than not human researchers figure out a way for their software to detect it by inspection (the so-called ‘virus signature’) and ship an update; alternatively, antivirus software can bust unknown viruses through heuristics, or by keeping an eye on programs that exhibit virus-like behaviour (touching stuff they’re not supposed to touch, or reproducing like rabbits, for example). As a side note, if they ‘want to live’, computer viruses need to keep a low profile: A virus that ‘evolved’ new, unplanned features would be much more likely to start doing random shit and attract attention, either from the user or from the antivirus, thus undermining its own ‘survival’. Viruses and antiviruses will not cause each other to evolve, as predator-prey pairs do in nature.

  • Computer viruses are chased by predatory antiviruses and compete for a place in cyberspace – FICTION
    Computer viruses don’t compete with each other. You can easily have tons of different viruses running on the same computer. They don’t kill each other, and they don’t steal from one another resources they don’t even need. Worst-case scenario, a virus might have spread so much on a given computer that all the space is taken or the machine can’t run any more. Other viruses on that same computer are certainly not going to ‘die’ because of this. Maybe they can’t make more copies of themselves, but that’s not going to trigger a competition with anyone; they aren’t going to fight each other. More importantly, viruses will not try to outsmart each other and will not ‘evolve’ any features to do better than other viruses. If you have a computer with virus A and virus B on it, and A does a better job of replicating itself than B, B will not ‘evolve’ any new feature to outdo A.

    Also, viruses are not ‘chased’ by ‘predatory’ antivirus software. Antivirus programs are nothing more, nothing less than a set of instructions meant to find and eliminate unwanted instructions (viruses). An antivirus will not ‘run after’ a virus; it will either find it or not, and if it does, the virus is not going to ‘run away’ and the antivirus will not have to ‘chase’ it. (The sketch just below this list illustrates both this point and the one about polymorphism: detection is plain pattern matching, and a rewritten variant simply isn’t matched.)
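Here is what that looks like, stripped to the bone. This is a toy illustration of my own, not anything resembling real malware or a real antivirus engine: the ‘virus’ is a harmless made-up byte string, and the signature database is invented.

```python
# A made-up signature database: names and byte patterns are purely illustrative.
SIGNATURES = {"HypotheticalVirus.A": b"x = x + 1; x = x + 1"}

def scan(blob: bytes) -> list[str]:
    """A naive 'antivirus': report every known signature found in the blob.
    It either finds the pattern or it doesn't; there is no chasing involved."""
    return [name for name, sig in SIGNATURES.items() if sig in blob]

original  = b"x = x + 1; x = x + 1"   # the body the signature was taken from
rewritten = b"x = x + 2"              # same net effect, different bytes

print(scan(original))    # ['HypotheticalVirus.A'] -- detected
print(scan(rewritten))   # [] -- naive byte matching misses the variant
```

Real scanners are of course far more sophisticated (hence the heuristics and behavioural monitoring mentioned above), but the basic detection step really is matching, not pursuit.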

The bottom line is: Yes, computer viruses do mutate to an extent, and these mutations can help them avoid detection; but no, they aren’t going to ‘evolve’ into anything different, and especially not into something that ‘nobody engineered’. As a side note, while genetic programming can be used to evolve computer viruses, it’s not a very efficient strategy for a virus that needs to survive ‘predatory’ antivirus software out in the wild cyberspace: Running a genetic algorithm to create new variants of the virus would take time and churn out tons of useless junk in the process—a great way to increase the chances of getting busted.
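If you want to see just how useless that junk is, here is a throwaway experiment along the lines of the ‘replication mistake’ argument above: randomly corrupt one character of a trivial program, many times over, and count how often the mutant still runs and behaves identically. (A toy of my own making, obviously; it has nothing to do with actual malware.)

```python
import random

# Mutate-and-test: flip one random character of a tiny program and check
# whether the result still compiles, runs, and computes the same thing.
SOURCE = "def f(x):\n    return x * 2 + 1\n"

def mutate(src: str) -> str:
    """Replace one randomly chosen character with a random printable one."""
    i = random.randrange(len(src))
    return src[:i] + chr(random.randrange(32, 127)) + src[i + 1:]

TRIALS = 10_000
survivors = 0
for _ in range(TRIALS):
    scope = {}
    try:
        exec(mutate(SOURCE), scope)   # does it still compile and define f?
        if scope["f"](10) == 21:      # does it still behave identically?
            survivors += 1
    except Exception:
        pass                          # broken code: by far the usual outcome

print(f"{survivors}/{TRIALS} mutants still ran and behaved identically")
```

The few ‘survivors’ you get are cosmetic accidents, such as a character replaced with itself or some reshuffled whitespace; none of them ever gains a new feature, let alone one that helps it evade detection.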

I admit that these two blunders—rejuvenation and computer viruses—might have biased me against the rest of the book, but I do think Harari’s view of the future smacks too much of a dystopian-future movie. It seems based on rather fanciful interpretations of possible developments of current technologies, and polluted by the stereotypically pessimistic idea that history is destined to repeat itself over and over. (Not the whole of history, though. A lot of people seem to think that only negative patterns repeat themselves throughout the centuries, while positive ones, for some mysterious reason, do not. The reason might be that bad news sells better than good news.)

However, my overall opinion is that this is a good book, parts of which need to be taken with a pinch of salt. Even though I was quite disappointed by Harari’s speculations on the future, I’m nonetheless curious to see whether he has redeemed himself in the sequel to Sapiens, Homo Deus: A Brief History of Tomorrow. When I read it, I’ll let you know. 🙂

Scattered thoughts on self-awareness and AI

I’ve always been a fan of androids as portrayed in Star Trek. More generally, I think the idea of an artificial intelligence with whom you can talk and to whom you can teach things is really cool. I admit it’s a little weird that I find the idea of teaching things to small children absolutely unattractive while finding the idea of doing the same with a machine thrilling, but that’s just the way it is for me. (I suppose the fact that a machine is unlikely to cry during the night and need its diaper changed every few hours might well be a factor at play here.)

Improvements in the field of AI are pretty much commonplace these days, though we’re not yet at the point where we can talk to a machine in natural language and be unable to tell it apart from a human. I used to take for granted that, one day, we would have androids who are self-aware and have emotions, exactly like people, with all the advantages of being a machine on top—such as mental multitasking, large computational power, and more efficient memory. While I still like the idea, nowadays I wonder whether it is actually feasible or sensible.

Don’t worry—I’m not going to give you a sermon on the ‘dangers’ of AI or anything like that; that’s the opposite of my stance on the matter. I’m not making a moral argument either: Assuming you can build an android that has the entire spectrum of human emotions, doing so is, morally speaking, no different from having a child. You don’t (and can’t) ask the child beforehand if it wants to be born, or if it is ready to go through the emotional rollercoaster that is life; generally, you make a child because you want to, so it is in a way a rather selfish act. (Sorry, I am not of the school of thought according to which you’re ‘giving life to someone else’. Before you make them, there’s no one to give anything to. You’re not doing anyone a favour, least of all your yet-to-be-conceived potential baby.) Similarly, building a human-like android is something you would do just because you can and because you want to.

I find self-awareness a rather difficult concept to grasp. In principle, it should be really easy: It’s the ability to perceive your own thoughts and tell yourself apart from the rest of the world. I really don’t think this is something we’re born with, but rather something we develop. Right after you’re born, you’re a jumbled mass of feelings and sensations trying to make sense of themselves, and failing. If it were somehow possible to keep an infant alive while depriving them of all external stimuli (vision, hearing, smell, touch, taste), I very much doubt that jumbled mass would ever manage to make sense of itself. In such a hypothetical scenario, I’m not even sure the baby could think as we understand it: What can you reasonably think about, if you know literally nothing?

Research to better understand how the brain develops self-awareness is going on all the time; recently, studies have pinpointed a part of the brain where self-awareness might be generated, while other studies suggest there’s no single area responsible for it: Rather, it might be the work of several brain pathways working together. I really hope they figure it out soon, because this issue drives me bonkers. From my personal experience of self-awareness, it feels as if there were some kind of central entity constantly bombarded with external stimuli, memories, feelings, and so on. This central entity (my self-awareness, or me) also appears to be independent of the stimuli it is exposed to: You’re still you whether you’re smelling a cup of coffee or not, or whether you’re thinking about a certain past event or not. This seems to make sense, but I think it is in sharp contrast with the idea I expressed above, i.e. that self-awareness is developed thanks to all the internal and external stimuli we’ve been exposed to since day one. Either there’s no central entity, or it works differently from how I think it does.

Some suggest self-awareness is an illusion brought about by the constant, small changes in our perceptions. These changes are so tiny, and their flow so smooth, that we get an illusion of continuity—the argument goes—an illusion which we call ‘self-awareness’. This is a very interesting theory, and I think it makes sense at least to an extent. The thought that ‘you’ could be the sum total of all the perceptions (internal and external) you experience in a given unit of time has crossed my mind more than once. Over the years, people change their minds, they behave differently, and in some cases they become the opposite of what they used to be. This might be nothing more, nothing less than the result of the changes in our perceptions and of how those changes reshape what’s inside our brains. Still, in order to work, an illusion requires someone (or even something) to be tricked by it. If the self is just an illusion and there really is no ‘you’ there, who or what is being tricked? It’s almost as if this theory says that the illusion is tricking itself, which is quite paradoxical; on the other hand, I think this conclusion shows there’s no such thing as a ‘simulated consciousness’, as in a machine that believes itself to be self-aware when in fact it is not. (This has always been quite clear to me.)

Before I get too philosophical, let’s go back to self-aware machines. I think we’ll have a hard time giving machines self-awareness before we thoroughly understand what it is and how it works. Some suppose it might be a spontaneous consequence of complexity; for example, if you have a neural network learn bazillions of things, self-awareness might ‘magically’ arise once a certain level of interconnectedness between all that information has been reached. That’s an intriguing idea, and it implies we might be able to give self-awareness to a machine without having to understand the process first, but I am a bit sceptical. On top of that, how do you prove that the machine you built is self-aware? As a matter of fact, you can’t prove that directly even for humans: There’s no way to be sure that the person you’re talking to is actually perceiving their own thoughts, but we generally accept certain things as signs of self-awareness, such as the ability to talk about yourself, to recall past experiences, to interact with your surroundings, to avoid obstacles, to acquire new knowledge and apply it, etc. If a machine showed these signs, we might conclude it is self-aware; but what if some of these signs were missing? For example, dogs don’t talk much about themselves. Are they not self-aware? I don’t know for a fact, but I doubt they aren’t. It is possible that there are different levels of self-awareness: Maybe self-awareness is not a binary phenomenon, but a continuous one. If it is true that self-awareness develops thanks to the external stimuli you’re exposed to, then it is reasonable to think that different species may have different levels of it, because not all species can experience the same stimuli, let alone process them. Furthermore, it seems reasonable to assume that a critical brain mass is necessary to be aware of your own thoughts, so the smaller a creature’s brain, the ‘lower’ (no offence intended) the degree of self-awareness we should probably expect, with no self-awareness at all below the critical mass.

This takes us to another aspect of the issue. If self-awareness is (as it is reasonable to suppose) a product of evolution, then each species’ self-awareness has been fine-tuned to best suit the needs of the species itself. Bacteria, for example, needed no self-awareness to do what they do, and so they haven’t developed any. (To be fair, they probably couldn’t have developed self-awareness even if they had needed it, because it’s hard to have a brain when your body is a handful of cells at best.) A machine, even if it was capable of learning and thus of ‘evolving’ to an extent, wouldn’t be subject to the same evolutionary pressure as biological creatures; its ‘survival’ would not depend on how fit it was for its environment, and it would not ‘reproduce’ with other machines so that any ‘mutations’ could be passed on or selected against. In other words, it would have no need pushing it to develop self-awareness or emotions of any kind. (On the plus side, this suggests it would hardly have a reason to go on a human-killing spree.) If a machine ever has self-awareness or displays emotions, I think it’ll be because we made it that way, quite possibly just because it was better for us, in one way or another. A self-aware machine with no objective need for sadness or guilt, for example, might well ask you why the heck you enabled it to feel such unpleasant emotions if it wasn’t strictly necessary. An answer could be that we get along better with an emotional machine than with an unemotional one, but the machine might find this reason insufficient. (Which, on the minus side, might give the machine a reason to go on a human-killing spree after all, especially if we gave it an instinct for revenge.)

Still, I suppose it wouldn’t be entirely impossible for machine ‘feelings’ to develop in a sentient machine, even without evolution. Maybe they wouldn’t be happiness or sadness, since these have a strong biological basis, but they could be something else, something only a machine can ‘feel’ and that we cannot even imagine. Future developments in any given field have a tendency to surpass our imagination, and whatever future AI turns out to be like, I’m looking forward to seeing it.

Reboot

Welcome to l4t reborn! Yeah, I know. Odds are you haven’t got the foggiest clue what I’m talking about. Long story short, looking4troubles, or l4t, is a blog I started in mid-2016 and promptly abandoned because it just wasn’t working. Interestingly, my other blog—Rejuvenaction—has a similar story: I started it in mid-2015, wrote a lot of articles, but then abandoned it for several months because I wasn’t very inspired in terms of blog posts. Today, after a graphical and content revamp, Rejuvenaction is much more popular than I thought it would be, at least within its niche. Hopefully, l4t’s destiny will be the same as its twin brother’s. As l4t is still in its infancy, its structure and layout might change, even significantly, though I’m really fond of this theme—the same as Rejuvenaction’s except for the palette. (They’re twin brothers for a reason.) I’m afraid this colour scheme might be a bit difficult to read, so that might change too.

I’m not a fan of long introductory posts, so I’ll leave it at that and move on to the next post. I hope you’ll enjoy your stay!