Tempus fugit

You might have noticed that I have sometimes mentioned ‘Rejuvenaction’ in passing in other posts on l4t, but I never really went into much detail about what Rejuvenaction even is. I was hoping to trigger curiosity in the few readers l4t has had thus far (this is what I get for posting once in a blue moon), but I think it is high time to formally introduce l4t’s older brother.

Simply put, Rejuvenaction is an advocacy blog meant to spread awareness about the problem of human ageing and what could be done to bring about the end of this problem within a few decades.

No, it’s neither a joke, nor about snake-oil supplements meant to part a bunch of fools from their money. It’s about rejuvenation biotechnologies, hopefully available relatively soon, that could bring people’s biological clocks back to about 25 years of age, so that regardless of their chronological age, they can be as healthy as they were as young adults.

Going too deep into the details of the topic would be rather redundant, since Rejuvenaction does it already. What I want to do here is provide a very brief introduction to the topic, in the hope that interested readers will then move on to Rejuvenaction, and eventually to SENS, LEAF, Fight Aging! and all the other more specific online resources.

I’m going to keep this introduction short and to the point.

  • What are we talking about?
    As said, we’re talking about true, genuine rejuvenation. Medical treatments that can turn an 80-year-old into a 25-year-old again (or at least, that’s the idea). Needless to say, this doesn’t mean that your brain is reset back 55 years and you forget everything that happened to you during that time, or anything crazy like that. It means that you may be 80 years old, but your body looks, feels, and functions like that of a 25-year-old. Period.
  • What aren’t we talking about?
    We aren’t talking about living longer in a decrepit body, or immortality (no matter how young and healthy, one can still die in an accident after all), or a modest increase in lifespan, or cosmetics, or ‘ageing gracefully’ (whatever that may mean), or being ‘healthy for your age’, or ‘embracing ageing’, or dressing up ageing into any sort of cute and inspiring metaphors to hide the rather self-evident fact that biological ageing sucks. Oh, and as said above, we’re not talking about sending you back in time or wiping your brain back to when you were a toddler.
  • Why would we want to do this?

    • Like it or not, after you hit a certain age your health goes downhill and you become very, very sick and dependent on others even to wipe your own arse. Is there any age when you’d like to be like that? I didn’t think so.
    • Once you’ve been sick enough, your body gives up and you die. Assuming you were always perfectly healthy (which rejuvenation would allow you to be), would you have a good reason to die? Moreover, do we generally regard dying as a good thing? Again, I didn’t think so.
    • Before you die of ageing, you become a burden on your family and on society, again like it or not. If you were always young and healthy, this would not happen.
  • How could we do this?
    The so-called maintenance approach is gaining a lot of traction these days. In a nutshell, the idea is that of periodically repairing the damage the body accumulates as a side effect of its normal operations with the passing of time—which is pretty much what ageing is, according to modern science. Whether this approach will actually lead to the desired result is still unclear, because it hasn’t been tried out on humans yet, but one thing is certain: The data is looking good, and we’ll never know for a fact how far we can get this way until we try.
  • Are you sure this is not a joke?
    Yes, quite sure. As said above, this is neither a joke nor snake oil; it’s legitimate, ongoing science.
  • Why does this need advocacy?
    Because, believe it or not, people come up with all sorts of crazy theories about why ageing is supposedly good for you (it isn’t), mix up chronological and biological ageing, commit all kinds of logical fallacies and mental gymnastics to justify the unjustifiable, and are eager to get back to their everyday life where they can pretend (at least for a while) that the ill health of old age will never be their problem. Research funding for ageing is pitifully minuscule, and without decent support from the public it ain’t gonna get any better any time soon.
  • What would be the benefits of this whole affair?
    Many and varied.
  • But have you thought that—
    Yes, I most likely already have.
  • How could I help, if I wanted to?
    It depends on the level of commitment you want/can offer. You can:

    • Donate your money (not to me, in case you’re wondering)
    • Donate your time by volunteering for LEAF, for example
    • Educate yourself on the subject and then spread awareness through your social media and/or talking to your friends and family
    • All of the above (that’s what I do)
  • I need to think about this…
    Sure thing. That’s why I am writing this post. Just keep in mind one thing: tempus fugit.

Imaginary points

There’s this thing people say about work. I’m sure you’ve heard it countless times, and maybe you’ve said it yourself. I’ve heard it a lot too, and I’ve nodded at it more than once. As an expression, it may well be old enough to qualify as common wisdom.

“I want to work to live, not live to work.”

It sounds perfectly reasonable, or at least it does until you start thinking about it. The sentence above suggests three things.

  1. You need to work in order to live.
  2. The reason you want to work is that you want to live.
  3. Working is not something you’d like to spend your entire life doing. (It follows from the fact you don’t want to live to work.)

These three propositions too sound perfectly reasonable, but I think they do only because we are used to deducing them from real life. Most people you know need to work to live, and you’ve likely heard many people say that, if they won the lottery, they would insta-quit their job and move to a tropical island or something, faster than you can read this sentence. I think we learn to accept the supposed truth of these propositions somewhat axiomatically, in most cases without even questioning it. I also think these claims are false.

You may think I’m nitpicking, but strictly speaking, you don’t need to work in order to live. Mainly you need food, water, sleep, and shelter. You don’t get them out of work, not directly anyway. You would have back in the day, when you had to hunt your dinner and build your own shelter, but that’s not how it goes these days. Today, your work consists of doing certain things, generally to someone else’s more or less direct benefit, and in exchange for that, you get the means (aka money) to buy goods and services to satisfy your needs with. The money you got proves you’ve made your contribution to the collective good and have thus earned your share. This is what they call ‘earning a living’, and however reasonable all of this might still sound, I can’t shake off the feeling that this expression implicitly says that, unless you earn it, you don’t deserve to live.

If you think about it, it’s not you who needs your work to live. It is society. To keep our society running the way it does, a number of things need to be done every day. If everyone stopped working and went back to hunting their own dinner instead, society would die instantly. The people, however, would still be alive. (At least for a short while… I think our hunting skills may be a bit rusty.) Money is, among other things, a rather effective way to make sure everyone will contribute to society with some kind of work—in particular, it is a way to ensure there will always be someone to do all those crappy but necessary jobs no one would do unless forced.

Of course this system is far from perfect. Its most obvious problem is that you can steal money and ‘earn your living’ for free, if you will. Another problem, and quite a serious one, is that getting money is not easy because getting a job is not easy, and this might seem like a contradiction: Since society depends on people’s work, you’d think we’d make it easy for everyone to work, but we haven’t. Last time I checked, ‘finding a job’ was nowhere near the top of the list of the easiest things in the world. However, the contradiction is only apparent, because while society needs people’s work to function, it does not need everyone’s work to function; not even everyone of working age. Yet, in principle the system requires everyone of working age to be employed in order to be allowed to live, and it does because we make it so. Try suggesting that only some of us work, and that the rest, even if able-bodied and of working age, be supported by the workforce. You’d certainly outrage quite a lot of people, because, well, propositions 1, 2, and 3: ‘If I need to work to live—despite the fact I wouldn’t if I didn’t need to—why shouldn’t everyone else?’ Your suggestion would likely be met with a reaction like that most of the time. However, under different circumstances, your suggestion would be no cause for outrage; but I’d rather not go off on a tangent right now, and will leave this discussion, as well as the other problems of using money, to future posts.

Propositions 2 and 3 are strictly connected. They can even be strung together: ‘Working is not something you’d want to spend your entire life doing; in fact, the reason you want to work in the first place is that you want to live.’ This seems to suggest working is an unpleasant activity which you carry on doing just because you need it to survive, but which you would gladly avoid if you could. This is certainly true of some jobs, and if we go far back into the past, I’m guessing most jobs were like this. However, the claim that people don’t like working is false. I would say that a certain number of people (I don’t know how many, but not a few) dislike their job—not working per se—because it is not something they’re really interested in or passionate about. This is another reason our system is not that good: In order to live, you must work, and sometimes this means you can’t afford to choose and have to settle for a job you dislike, if not hate, for the sake of survival. However, I think that for nearly all of us there is at least something we’d love to do consistently enough to call it a ‘job’, whether we know it or not; and probably, the number of people who would just sit on their arses from dawn to dusk and from dusk to dawn if they only had the chance is far lower than people think it is.

If, as I suggest, it were true that everyone has something they’d love to do for work, something that is their passion, something they look forward to doing, then I don’t think ‘living to work’ would sound all that bad any more—especially if your survival were independent of your work. In that case, it would mean living to do something you love doing, and I don’t see anything wrong with that. I’d rather say it’d be a pretty darn good reason to be alive. It definitely sounds better than ‘working to live’, which implies the reason you do what you do is survival, and says nothing about whether you enjoy what you do or not.

In my ideal world, I’d live to work, not the other way around.

For years, I’ve had a hunch that the reason we erroneously think working to live is better than living to work may be money. What is money, really? It used to be special pieces of metal we set aside because they were rare, pretty, and shiny, and used to trade. With time, we started making notes of how many special pieces of metal we had using special pieces of paper, and that, if you can tolerate the oversimplification, is how we started using paper money. Today, money is mostly imaginary points kept in a virtual safe, and today as yesterday, it has no value but that which we collectively agree to attribute to it. (Though most of us, as individuals, are simply forced to agree on this value; you can’t abandon the use of money on your own and get away with it.) We use money without touching it or even seeing it. It’s getting more and more abstract, and I wonder if its destiny isn’t to abstract itself out of existence. I’ve always seen money as a pointless constraint causing more problems than it solves, and I tend to think we could greatly benefit from abolishing it, but I may well be dead wrong. Even if I weren’t, abolishing money is hardly something you could do overnight and hope everything will be fine.

The topic is wide and complex. I’m certainly not going to draw any conclusions in a single post without even doing any research first, so I think this will be but the first in a series of posts for the Imaginary points category. I don’t claim to understand economics (I don’t), or to know better than others whether money, and our economic system, should or shouldn’t be a thing. You should take everything you read in posts of this category (or any other, for that matter…) as nothing more than my musings and (un)educated opinions.

We’ll see where this will lead…

Artificial meat and the problems of ‘-isms’

I often say that I am not a fan of ‘-isms’. Not even those supporting the causes I care for, such as transhumanism. I sympathise with some ‘-isms’ (again, such as transhumanism), but I never consider myself a ‘something-ist’. The reason is that, generally, ‘-isms’ have two problems. The first problem is that they almost always support at least some ideas, or make certain claims, which I disagree with or find too fanciful (certain interpretations of ‘mind uploading’ come to mind, but that’s a story for another post). The second problem is that, if you say you’re a something-ist, people will almost surely assume that you endorse, or believe in, some ideas that are really not your thing, merely because such ideas are either an integral part of the relevant something-ism, or are what people think something-ism is about (which may or may not be true). Not to mention the fact that people often regard the dictionary as the ultimate authority on what ‘real’ something-ism is about, cheerfully ignoring all its variants and flavours (which often blur into mainstream something-ism and each other), whose proponents are usually thoroughly persuaded that their own something-ism is the real thing—others just got it wrong.

There’s actually a third problem too. Namely, that if they’re not careful, something-ists who are a bit too zealous might end up putting their ideology before the reasons they embraced it in the first place. Sometimes, this can undermine the very objective something-ists intended to achieve by embracing the ideology and spreading it left and right.

I had a brilliant example of this phenomenon one time when, while having lunch at a vegetarian restaurant (or vegan, I’m not sure), I mentioned lab-grown meat to my friend. My friend is a vegetarian (or vegan, I’m again not sure), whereas I am not. Vegetarianism and veganism are, obviously, two ‘-isms’; as a corollary of the above, I’m neither a vegetarian, nor a vegan. (Although again according to the above, perhaps I should say ‘vegetarianist’…) I’m not just quibbling about definitions: I do indeed eat animal products of pretty much all kinds. This, however, doesn’t prevent me from sympathising with the cause of vegetarians and vegans who are such because they’re against animal cruelty. Quite frankly, even if animals are raised in the best possible way, and are then killed in the nicest possible way (whatever that means), eating them is still at odds with my own moral compass. Even though I do eat them. The main reason I eat them despite my inner conflict is that I am one of those lucky bastards who wouldn’t put on an ounce even if they ate a mammoth; the downside of this is that I tend to lose weight very easily, and if I were to banish animal proteins from my diet entirely, I fear I could become even more translucent than I already am. A second reason is that I am not persuaded that a vegetarian/vegan diet is necessarily the healthiest option for everyone (and no offence, but I’m not interested in debating this, as it is beside the point of this post); the last reason, and it is actually a scarcely important one, is taste. I say it is scarcely important because I have tried vegan food more than once and found it delicious. If taste were the only reason, I’d probably be vegetarian/vegan. As a minor side note, I’m a huge fan of veggies, and together with fruits, seeds, and nuts, they make up at least 80% of my diet.

By the way, are you starting to understand why this blog is called looking4troubles already?

But let me not digress. Let’s go back to the veggie lunch with my friend, and why it is an example of the third problem of ‘-isms’. Lab-grown meat, I explained, would be a great lateral-thinking solution to the problem of animal cruelty (and a bunch of others, such as unsustainable farming and how to feed a growing world population). It doesn’t involve raising any animals, let alone killing them. All it takes is cultured cells, a lab, and hard science. We are not yet at the point where lab-grown meat could be produced in industrial quantities and be more economically convenient than traditional meat, but we’re pretty damn close. Not only has the cost of a lab-grown burger plummeted from hundreds of thousands of dollars to around 11 dollars in about four years, but there are already companies and startups (such as SuperMeat and Memphis Meats) that have created chicken and duck meat as well. While some people will surely freak out at the thought of consuming ‘unnatural’ meat and will likely think that it must somehow be bad for you, the truth is that artificial meat could be even better than the real thing: Since we would engineer it, we could make it more nutritious and less unhealthy (for example, we could give it less saturated fat, of which you can easily get too much if you eat a lot of certain types of meat). Nutritional value aside, producing lab-grown meat would require far less land (mostly that where the laboratory is located), cause significantly fewer CO2 emissions (cow cells just don’t fart as much as cows do), and could probably be automated much more simply and cheaply than traditional farms—in other words, it could be cheaper to produce meat in a lab than on a farm, which means ‘lab-burgers’ would be cheaper than hamburgers, thus giving people an incentive to prefer artificial meat over the traditional kind.
Of course meat isn’t the only reason we raise and kill animals; there are other animal-derived products—as well as products coming from animals other than cows, chickens, and ducks—we use or consume as well, and as long as we do so, animal suffering and the rest of the usual suspects will still be a problem to at least some extent. However, we need to keep in mind that meat consumption is the primary reason for raising livestock; remove that reason and you’ll eliminate a big chunk of the problem. Additionally, no one says that cultured beef and poultry are the best we can do: dairy, seafood, egg whites (though these aren’t exactly like the real thing), and leather are possible as well. Admittedly, we’re far from molecular assemblers or Star Trek’s fabled replicators (though 3D printers could be considered their great-great-grandfathers, and yes, they can be used to make food) that could make nearly everything, food included, without having to plant a seed or touch an animal, but you know, we’re off to a good start anyway.

These reasons alone would probably be enough to convince many consumers to make the transition; the fact that artificial meat and foods would take animal cruelty out of the equation could easily make them the new ‘organic’ and drive the final nail into the coffin of old-school farming. After tens of thousands of years, it would be about time.

This sounds great. So, what’s the problem, and what does this have to do with the lunch with my friend? My friend’s reaction to artificial meat was: ‘I need to think of a reason to oppose it.’ My understanding was that the matter was one of principle: Eating meat is wrong, in any form, and people should not do it. Period.

Even if my understanding was incorrect in the specific case, I wouldn’t be surprised if some people actually espoused this principle, and that is the third problem of ‘-isms’: Sometimes, ideologies come before objectives.

If people didn’t eat meat in the first place, then we probably wouldn’t have an animal cruelty problem, because we probably would just leave cows & co. alone, and we wouldn’t need artificial meat either. However, a lot of people do eat meat, and some (possibly quite a few) of these people may never buy into vegetarianism or veganism, for a number of reasons ranging from taste to laziness to ideology. Anyone who thinks eating meat is wrong because it causes animals a great deal of suffering (not to mention death) needs to decide what is most important to them—convincing everyone to embrace their moral principles, or putting an end to said animal suffering. I would say the latter is far more important, for whether or not everyone agrees that eating meat is wrong doesn’t really matter so long as no animal is harmed or killed in order to get the meat. Animals certainly wouldn’t give a flying monkey’s arse whether you ate meat or not, if the meat wasn’t their own, strictly speaking.

So, if one’s goal is to persuade everyone else that eating meat is Wrong™—by all means, let them go ahead and wage total war on any kind of meat consumption, traditional or artificial. On the other hand, if the goal is to achieve the (desirable) vegan utopia where no human ever harms an animal, one should consider that the war on the dietary habits of other people is unlikely to be won at all, or at least not for a rather long time (during which animals would keep suffering and dying in slaughterhouses). There are a number of reasons why the chances of turning everyone into a vegan are low—for example, for as long as our species has existed, we haven’t managed to convince everyone not to kill other humans for ultimately futile reasons; I fail to see what could possibly persuade everyone never to kill any animal for food—but artificial meat stands a respectable chance of sidestepping the problem altogether and letting us have our cake and eat it too, and within a reasonable timeframe of a couple of decades, give or take. (Some think the first lab-burgers might be served as soon as five years from now.) After all, the best way to resolve a conflict is to make all parties involved equally happy.

I don’t know how many vegetarians/vegans are aware of the opportunities offered by the advent of artificial meat (and, more generally, artificial food), nor how many endorse or oppose the idea. Hopefully, those who oppose it are very few. As said, I’m neither vegetarian nor vegan, yet I look forward to the day when science and technology will have solved this problem too and ended the age of slaughterhouses; if I were vegetarian/vegan, I’d be shouting about artificial meat from the rooftops all the time—pretty much the same way I do with rejuvenation biotechnologies—because, a) it’s likely the only way to ever actually close the slaughterhouses for good, and b) the more people support the cause, financially or otherwise, the sooner it will happen.

The Elynx Saga goes permafree

Believe it or not, l4t did not die again. In fact, my to-post list has been growing longer for the past few months, but I never had the time to write anything because I had other priorities. One of these priorities was turning The Elynx Saga into a permafree series.

Oh, right. At this point, you probably have no idea what The Elynx Saga is. It’s my science fiction ebook series. I began writing it around 2002, just as a hobby, thinking that one day I would send it to a publisher to become rich and famous and gift it to the world. Eventually, self-publishing became a thing, and in late 2015, in the middle of an existential crisis kind of thing, I thought I’d drop everything I was doing and become a writer, taking advantage of all the stuff I had already written (pretty much four novels) and publishing it all on Amazon.


After tons of work and time needed to re-read, proofread, change all the things that inevitably make you cringe 10+ years after you’ve written them, proofread again, translate everything from Italian into English, and understand the world of self-publishing, I eventually realised that, yes, I do like writing and I do want to publish my science fiction series for everyone to read; but writing for a living, especially given that you have to do everything by yourself with near zero immediate returns? No, thank you. You know, I like eating—and not freaking out. Besides, I came to the conclusion that I really love all the other things I was about to drop for the sake of writing alone, and had I abandoned them, I could never be really happy.

That realisation notwithstanding, I had already published my first book, The Fall of the Gods (aka FOG), and I saw no reason why I should not continue writing and publishing the rest of the series, albeit not as a job. I could still sell my books on Amazon and other publishing platforms, having fun in the process and maybe making a pretty penny from it, too. It wasn’t going to be very easy, though, because of the four books I had already written, only the first was actually good enough to be published. The rest was okay in terms of general directions for the series’ timeline and events, but basically it would need to be rewritten nearly from scratch—which is what I am currently doing. Additionally, other commitments (mainly my master’s thesis, Rejuvenaction, and studying web development) were preventing me from dedicating much time to my series; to top it all, a number of more or less horrible translation blunders had somehow made it to the English version of The Fall of the Gods. After a kind soul helped me root out all these outrages to the English language and put an end to their nefarious existence, I was ready to publish the second edition of The Fall of the Gods. If you think a ‘but’ is coming, you’re wrong. In fact, two but’s are coming.

The first ‘but’ was that all my manuscripts were written in OpenDocument Text (ODT) format. In order to get from ODT to Kindle and ePUB formats, my manuscript had to go through all sorts of more-or-less automatic conversions, the obvious result of which was that there would always be at least some things not working as expected; when this happened, I had little to no idea how to fix it. I had this kind of problem back when I published the first edition of FOG, and I was having it again the second time around. In order to preserve my sanity, I decided to manually convert everything into ePUB format. This took tons of copy-pasting and messing around with CSS & HTML5 to make sure the end result would be reasonably compatible with a reasonable number of ebook readers, but at least now I have a flexible and reliable framework that does exactly what I want it to do and requires only a little converting, which can be done with ease. This pushed the publication date back by two more months.
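For the curious, the manual-conversion route is less mysterious than it sounds: an ePUB is just a ZIP archive with a fixed internal layout wrapped around ordinary XHTML and CSS files. This isn’t my actual framework, and the file names (`content.opf`, `ch1.xhtml`) are placeholders, but a minimal sketch of the packaging step, in Python, looks something like this:

```python
import zipfile

def build_epub(path, files):
    """Package pre-made XHTML/CSS files into a minimal ePUB container.

    An ePUB is a ZIP archive whose very first entry must be a file named
    'mimetype', stored uncompressed, containing 'application/epub+zip'.
    """
    with zipfile.ZipFile(path, "w") as z:
        # 1. The uncompressed 'mimetype' entry must come first in the archive.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        # 2. container.xml tells the reader where to find the package
        #    document (the .opf manifest).
        z.writestr("META-INF/container.xml",
                   '<?xml version="1.0"?>\n'
                   '<container version="1.0" '
                   'xmlns="urn:oasis:names:tc:opendocument:xmlns:container">\n'
                   '  <rootfiles>\n'
                   '    <rootfile full-path="OEBPS/content.opf" '
                   'media-type="application/oebps-package+xml"/>\n'
                   '  </rootfiles>\n'
                   '</container>',
                   compress_type=zipfile.ZIP_DEFLATED)
        # 3. The actual content: the .opf manifest, XHTML chapters, CSS, etc.
        for name, body in files.items():
            z.writestr("OEBPS/" + name, body,
                       compress_type=zipfile.ZIP_DEFLATED)
    return path
```

Once a skeleton like this is in place, ‘converting’ the next book is mostly a matter of dropping fresh, hand-checked XHTML files into the same structure, which is exactly why the up-front effort pays off.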

The second ‘but’ should actually be further divided into two more but’s.

First, I really dislike Amazon. Thanks to their dominant position in the ebook market, they can afford to play the bully. As an example, I was unjustly suspected of manipulating reviews, and consequently threatened with having my book taken down, because some friends of mine had reviewed my book unbeknownst to me. If you’re an author, their policy forbids your friends and relatives from reviewing your books; strict, maybe, but understandable. The problem is, Amazon don’t care whether you asked your friend to review your book or your friend did so of his own accord without telling you anything; you, as the author, are going to get the brunt of it anyway. They won’t take any time to understand what’s going on. They’ll just send their standard threatening email to every author who fails to comply with their regulations to the letter. Speaking of standard emails, don’t get me started on their customer support. No matter what question you ask, the response you’re going to get is a standard copy-paste, usually only vaguely related to your original enquiry. Initially I thought I had bumped into an especially dumb employee, but when the nonsensical replies started piling up (coming from different people, at that), I realised they can’t all be that thick, and their answers must come from a standard script which they’re probably not supposed to deviate from by an iota. I’m glad I don’t work for Amazon.

Second, I am of the ‘information should be free’ school of thought. I could have published my ebooks for a price with publishers other than Amazon, but I’d rather have my own, cosy thing which I manage on my own and on my own website. In multiple formats and for free. So I took down my ebooks from Amazon and wherever else they had been published. Now you can only find them on my website, distributed under a Creative Commons licence. I really couldn’t be bothered uploading them to other publishing platforms, especially for as long as my series remains a marginal phenomenon. Should there ever be any demand for it, I will consider making physical copies available for on-demand printing (always carefully avoiding any direct contact with Amazon). They probably won’t be free, because, unlike ebooks, paper books have production and shipping costs, which no sane publisher would ever be willing to bear without making some profit off the author’s revenue. However, in that case I’ll try to keep the price as low as I can. For now, you can enjoy my books for free—even if you don’t have an ebook reader: Just download the PDF format.

I am not going to go into the details of the series or of FOG; I’m kinda hoping you’ll go and have a look for yourself on the series’ website. 😉

Sapiens: a discussion

Writing book reviews isn’t really something I do on a regular basis or that I particularly enjoy, but I recently read a book that was good and interesting enough to be worth discussing: Sapiens – A Brief History of Humankind by Yuval Noah Harari. The book is exactly what it says—a (relatively) brief history of our species, starting from before it was even born and ending with speculations about the future. Up to right before Harari got into the speculative bits, I enjoyed the book thoroughly; I found it to be thought-provoking and even eye-opening. The author’s outlook on the future, however, was disappointing. Too much unsubstantiated, subtly implied pessimism, which, together with a few huge blunders here and there, betrays the lack of a solid scientific background—indispensable for any serious discussion about the future, however speculative.

Anyway, let’s start with the good bits.

The cognitive revolution

The first part of the book is focused on the cognitive revolution—the intellectual growth that slowly changed us from ‘animals of no significance’ into rulers of the Earth. This was the time when humans became able to pass on to future generations knowledge and skills that were not encoded in their DNA; it is also the time when we started building more complex social structures, no doubt thanks to our newly acquired ability to communicate, which also enabled us to plan and thus think and act as a group. During this time, we were not alone: Homo sapiens is only one of at least six human species that have inhabited the Earth, and we did share the planet with some of them for some time. In particular, we lived alongside the famous Neanderthals, whom we apparently ‘drove’ to extinction, along with a number of ecosystems (such as the ancient Australian one), relatively shortly after coming into contact with them.

This is one of the non-speculative bits that made me turn up my nose. It seems there’s still some controversy over whether it was Sapiens or other causes (such as climate change) that drove the extinction of the Neanderthals and of the ancient Australian ecosystem, but this is not the point. Even if it was because of Sapiens (our species), I find that Harari’s way of presenting the facts is not as neutral as it needs to be. It’s undeniable that, these days, blaming humans for everything is fashionable, whether they’re guilty or not; nodding approvingly at how bad or evil humans are has somehow become a sign of wisdom, and that’s what I think Harari is doing, perhaps unintentionally, in his exposition. It would be easy for a reader to conclude that the extinctions caused by our primitive ancestors are nothing but the umpteenth sign that our nature is intrinsically evil and that we’re a ‘bad’ species. Harari’s exposition doesn’t seem to put sufficient emphasis on the fact that Sapiens back then knew nearly nothing of how the world works, and were reasonably more concerned with their own survival than with that of an ecosystem whose existence they weren’t even aware of. I’d have a hard time believing that they intentionally hunted species to extinction, or that they knew this was possible to begin with. If they did kill off the Neanderthals, it wasn’t out of sheer evil, or corporate greed, or racial hatred. Odds are they were trying to do what all other species try to do: Survive, without worrying too much (or at all) about the other species. In fact, other species put no special effort into maintaining the delicate balance we observe in ecosystems today; they don’t care about it and they don’t even know about it. If wolves were hungry enough to hunt sheep to extinction and had the chance to do it, they would. They don’t have an ecological conscience that might stop them before the irreparable happens.
Arguably, humans are the only species on Earth that has ever cared about the ecosystem, even though some of us are indeed part of the reason why the rest of us have to worry about the ecosystem in the first place. Another thing that Harari doesn’t seem to take into account at this point is that the very moral system by which we judge killing other species as ‘bad’ hadn’t been invented yet.

Imagined realities

Moral systems are an outstanding example of a phenomenon Harari describes quite early on in the book: Sapiens’ unique ability to make up imaginary realities and live them out as if they were objective truth. As Harari himself says, this might sound shocking, but there’s no such thing as human rights, or companies, or money, or nations, or gods. They’re nothing but figments of our imagination, but we agree they exist and behave accordingly because of the benefits we get out of them. Ideals such as human rights can coordinate a huge crowd of complete strangers to pursue a common goal in a very effective way; similarly, our belief in the existence of companies makes us do work for them, even though there really are no companies. There’s only a bunch of people doing stuff that eventually ends up in a usable result, because they all believe the company exists. If a judge were to dissolve the company for some reason, the same bunch of people would most certainly stop believing in its existence and thus stop working for it. When you think about it, though, how was the company created or dissolved? Mostly, things were written on pieces of paper, and other more or less virtual things (money) were moved from one place to another. Harari calls this a ‘ritual’, comparing it to the religious rituals that lead people to believe that a piece of bread turns into the flesh of their god. Morals are imaginary too: Good or bad, socially acceptable or not, right or wrong, etc., are merely ideas, and as such they exist only in our heads, and they may go as easily as they came. Two gladiators fighting to the death, or a naked and unarmed prisoner devoured by hungry beasts, were perfectly acceptable forms of entertainment back in the days of the ancient Romans; today, not only is this not entertaining, but it is considered barbaric and is illegal in the overwhelming majority of the world—and personally, it makes me feel like throwing up.

The growth trap

Another interesting idea is that the agricultural revolution—i.e., when we stopped wandering around and settled down to cultivate the land—was a fraud, and not just any fraud: It was history’s biggest fraud, Harari says. Before we started cultivating, Harari explains, we had a much more varied diet and we didn’t work as hard, even though we were physically very active nonetheless. It would take a group of hunters much less time to hunt down a huge animal, skin it, and so on, than it would take to cultivate a wheat field, protect it from parasites and intruders, harvest it, store it, and so on. Floods and dry seasons also increased the chances of losing the yield and all the work that had been put into it. In addition, a permanent settlement was the ideal environment for diseases to flourish (especially considering that hygiene wouldn’t be a thing for quite a few millennia yet), which was not the case with a nomadic lifestyle. Another big change brought about by the new lifestyle was that women could get pregnant every year, which came in handy because extra help in the field was always needed. However, an extra helper was also an extra mouth to feed, so eventually the plan became to work harder now to have a bigger yield in the future; children would be well fed, and less work would be required for a while. The plan didn’t work out, Harari says, because the population kept increasing (even though living conditions were still quite bad and child mortality quite high), and more work was always required to feed the ever-growing number of mouths. By then it was too late to go back to the old hunter-gatherer lifestyle, as it couldn’t support the current population. The only option was going down the more-people-more-work spiral. The fraud lies in the fact that while the agricultural revolution is supposed to have made people’s lives better, it did not, at least not for those who lived through it.
We inhabitants of the modern world, whose existence is a consequence of the agricultural revolution, are ultimately reaping the fruits of the hard work of the people who lived back then.

However, the fraud is still going on, Harari argues, because the ‘work more today to have more tomorrow’ mentality is still with us, even though it doesn’t work today any more than it did back then. In theory, the point of all our tools and technology is either to simplify our work, or to do it for us so that we can be relieved of the burden of work entirely. In practice, what happens is that we use these tools to ‘increase productivity’, which is just a way of saying ‘work more’. We still work on average 8 hours a day—so much for Keynes’ prediction that, within a century of his writing, we’d be working only 15 hours a week—and instead of taking advantage of ever-growing automation to free ourselves from work and pursue our passions, we worry about robots taking our jobs, failing to realise that both money and the idea that individuals must earn a living are among the aforementioned imaginary things we let dictate our lives. We have washing machines, dishwashers, phones, vacuum cleaners, the Internet itself, and they’re all supposed to make our lives easier and more enjoyable; and yet, we’re more stressed than ever and have much less free time than our ancestors had before the agricultural revolution.

Harari uses email as an illuminating example of how our use of technology may lead to more stress and work rather than less. In the old days, one wouldn’t receive but a handful of letters a month, because it would take a long time for a letter to get from the sender to the recipient. When you got a letter, you knew it would take weeks before your reply reached its destination, so you’d hardly feel compelled to sit down and pen a reply right away. You could take your time and think carefully about what to write. Today, emails can reach the other side of the globe nearly instantly, so we all expect swift replies and often feel compelled to reply immediately. And while in the 1800s useless mail wasn’t really a thing, today there’s plenty of essentially junk mail going around in the form of social media notifications, ads, spam, scams, etc. (I might add that lightning-fast communication is the mother of flame wars. When somebody pisses you off on the Internet, you can type out a furious reply right away, in the midst of your rage, and expect an equally unfriendly answer in a matter of a few hours tops.) Emails didn’t simply speed up our existing mail traffic; rather, they increased it exponentially. That’s pretty much what we have done with all other technologies: Machines make existing jobs simpler and faster, so that we could maintain the same level of productivity with less work; instead, we insist on increasing productivity, never really cutting down our working hours when not making them longer. Maybe this is because our population is still growing, and the same ancient trap that fooled our ancestors is keeping us captive today; maybe, as our numbers start to settle down towards the end of the century, it will become possible to break free of the growth trap. It is also possible that our technology isn’t yet advanced enough to break us free from the trap, but that we will indeed break free once we reach a sufficient technological level.
However, I fear the productivity mantra might be so ingrained in our brains that we could feel compelled to keep ‘creating new jobs’ to fill people’s lives with, even when it is unnecessary.

Three factors of unification

As Harari explains, there are three main factors that have led to the unification of humankind (not necessarily in the sense of a single, big, loving family): money, empires, and religion.

All three of these things are purely psychological constructs. Money and the reasons I dislike it are going to be regulars here on l4t, so I’m not going to talk about them too much in this post. Nonetheless, it is definitely worth spending a few words on the subject.

The fact that money has no intrinsic value will hardly be a revelation. More or less everyone knows it, but we accept the value of money as an axiom because we don’t really have an option. The vast majority of the people in the world believe in money, so if I decided not to believe in it any more and stopped accepting it in exchange for things, all I would likely achieve is screwing myself over. As Harari explains, money is the most successful trust system ever devised, and it is so successful precisely because so many people (for all intents and purposes, everyone) believe in it. I would add another reason for its success: The system is engineered so that your life depends on having money. If it weren’t so, you’d hardly give a flying rat’s arse about money. A lot of people believe in all sorts of crazy things, and yet, as long as your life doesn’t depend on them, you can safely neglect them entirely. These two things together are what makes money so successful. Try to imagine this scenario: The world works just as it does today, except there is no money. You do work because you trust that other people will do the same, for the same reasons as you. You go to your workplace every morning and do your job, then go to the shopping mall and get your part and no more than that, trusting others will do the same. Unfortunately, as things stand, this system doesn’t stand a chance of working, for several reasons—for example, hardly anyone would want to work as a cleaner if they weren’t forced to; another is that, without money, it’s difficult to establish what the ‘your part’ you can rightfully get at the shopping mall actually is. However, these reasons aren’t nearly as important as the fact that this system is too easy to cheat. Everybody works because they trust others will do the same, but what if some—perhaps even many—broke this trust?
In this system there’s no way of being reasonably sure someone isn’t breaking your trust, and it doesn’t seem to enforce any punishment for potential offenders, unlike an economic system where money is a thing. In a system that uses money, if you don’t work (i.e., if you break trust, in moneyless-system terms) you don’t get money, and since your life depends on having money, as a rule of thumb if you don’t work you’re screwed. This is why in a money-based economic system you can be reasonably sure most people will (at least try to) work (i.e., they will not break your trust): Their lives depend on it. Don’t get me wrong—I’m certainly not praising this system. I think it causes more trouble than it solves, and the way it minimises the risk of freeloaders is essentially by blackmailing everyone. Besides, over the course of millennia money has pretty much got a life of its own, and it is not really the people whom we trust any more—we trust their money. If they run out of money, Harari rightfully reminds us, we run out of trust.

I’m not going to discuss the other two factors here because I’m not nuts about the topics themselves, but the chapters Harari dedicates to them are definitely interesting and worth your time.

The scientific revolution

For a long time humans believed that all that was necessary or worth knowing was already known. All the answers you needed could be looked up in your holy book. Then, at some point, we started realising that this was not the case: We began admitting our ignorance and started studying the natural world in search of answers. That’s, in a nutshell, how the scientific revolution started—with the admission that we’re far from knowing everything, but also with the knowledge that we could learn more and consequently improve our lives. This certainly worked out, and in no small part because of all the research and enterprises funded by rich people who, rather than in knowledge per se, were interested in using science to become even richer. Harari dedicates the last third or so of the book to explaining in detail how the scientific revolution came about, and finally to speculating on the future—the bit I found downright cringeworthy. As said, up to this point the book was essentially brilliant and characterised by lucid and careful analysis; after this point, Harari falls more often than not into silly clichés and rhetoric; I don’t think he thought this part through as much as he did the rest. (After all, he’s a historian, not a technologist.)

The bad bits: rejuvenation biotechs

I was positively surprised to see that Harari talked, albeit indirectly, about rejuvenation biotechnologies; unfortunately, my surprise turned into disappointment when I read what he wrote about it. In three paragraphs, he condensed a bunch of the usual, stale objections to rejuvenation, and committed several rookie mistakes. I feel compelled to point them out, so that less attentive readers will not fall prey to the same misconceptions.

Suppose science comes up with cures for all diseases, effective anti-ageing therapies and regenerative treatments that keep people indefinitely young. In all likelihood, the immediate result will be an unprecedented epidemic of anger and anxiety.
Those unable to afford the new miracle treatments—the vast majority of people—will be beside themselves with rage. Throughout history, the poor and oppressed comforted themselves with the thought that at least death is even-handed—that the rich and powerful will also die. The poor will not be comfortable with the thought that they have to die, while the rich will remain young and beautiful for ever.

The first, subtle but capital mistake Harari commits here is what I call the magic pill assumption. He is implicitly assuming that the cures for all diseases etc. will arrive all at the same time and with no warning, so that suddenly the world will be split into those who can afford the treatments and those who can’t. In this scenario, there is no time for our society to adapt to the change, and thus hell will break loose. However, the chances of this actually happening are exactly zero, for a very simple reason. All these wondrous advancements he talks about will require time and effort to become real. There will not be a single treatment to cure all diseases, let alone ageing, magically popping into existence all of a sudden; rather, several therapies will be necessary to achieve these goals, and expecting them to arrive all at the same time is utterly unrealistic. Some will come sooner, others later. Hardly any of these therapies will work perfectly right off the bat; first-generation therapies will work to some extent, but not very well—a good wake-up call for everyone to realise that fully working treatments are well on their way, although not here yet, and that work needs to be done to demand and ensure widespread access. No one will suddenly find themselves ‘immortal’, and thus there will be no instant immortality for the poor to envy the rich for. Therapies will come gradually (in fact, they’re already coming, slowly and pretty much one at a time), and during the long stretch of time it will take for them to mature, it’s reasonable to expect prices to go down—not to mention the fact that it would be far more convenient for any given State to pay for people’s rejuvenation than to pay their pensions.

The alleged comfort that the poor find in the rich’s mortality is nothing but a romantic fairytale, no less imaginary than the imagined realities Harari exposed at the beginning of the book. I have a hard time believing that the mother of a child who is dying of starvation would cheer herself up thinking that some unnamed rich people whose faces she’s never even seen will eventually also die. A poor person dying of ageing in a developing country can draw but extremely cold comfort, if any, from the thought of a rich person dying of ageing in the developed world. Unlike the poor, the rich have likely led lives full of comforts, lived through a less grim old age, and will be able to afford a clinic, doctors, and painkillers to make their passing less hard. Additionally, why would all poor people hate all rich people? Why hate philanthropists who spend millions (if not billions) to help the developing world?

It gets worse.

But the tiny minority able to afford the new treatments will not be euphoric either. They will have much to be anxious about. Although the new therapies could extend life and youth, they cannot revive corpses. How dreadful to think that I and my loved ones can live for ever, but only if we don’t get hit by a truck or blown to smithereens by a terrorist! Potentially a-mortal people are likely to grow averse to taking even the slightest risk, and the agony of losing a spouse, child or close friend will be unbearable.

The implicit assumption here is that, presently, the thought of dying in an accident or in a terrorist attack isn’t so bad, because you would die of ageing anyway eventually. I would refrain from trying to comfort the relatives of terrorism victims using this argument—odds are they’d punch me in the face. Losing a dear one is always horrible, no matter how long they had left to live when they died. I really don’t think that losing a child is any less painful if you know that he’d have died of ageing anyway. This is honestly an extremely dumb argument, a real shame for a book that had thus far been quite excellent.

The claim that a-mortal people are likely to grow averse to any risk, even the smallest, is completely unsubstantiated and unjustified. I do agree that being a-mortal could make you think twice before you abuse alcohol or drive recklessly, for the simple reason that such things are hardly worth risking your potentially endless life (and IMHO, they’re not worth risking even a presently normal lifespan); but I don’t think you would refuse to get on a plane to see your family just because there’s a one-in-a-million chance that the plane will crash. (You might reconsider as you approach your 1,000,000th flight, but getting there would probably take longer than planes will be around.)

Also, Harari fails to take into account timescales entirely—he commits the mistake of imagining present-day problems in far-future scenarios. If everything goes well, we might have rejuvenation therapies in time for most people alive today to benefit from them, but I think the day when everyone can safely be called ‘a-mortal’ is a long way off. Is he sure that, so far into the future, there will be terrorist attacks, trucks to be hit by, and economic inequality (or even money)? I am not sure there won’t be any of these things, but he seems to be pretty damn sure there will be. As a historian, he should be very well aware of how our world has changed throughout history, and he should be especially aware of the positive trends of the last two centuries (for example, the plummeting of extreme poverty) that are evidence of a future much brighter than he implicitly suggests.

Quite frankly, Harari made rather miserable arguments to back up his case that perpetual health and indefinite lifespans might not be so good.

The bad bits: computer viruses and artificial life

In the section ‘Another life’, Harari brings up computer viruses as examples of ‘completely inorganic beings’. He mentions genetic programming as one of the most interesting areas of computer science at the moment, which is certainly true, and explains that it endeavours to emulate the methods of genetic evolution, which is also true. He says,

Many programmers dream of creating a program that could learn and evolve completely independently of its creator. […] A prototype for such a program already exists—it’s called a computer virus. As it spreads through the Internet, the virus replicates itself millions upon millions of times, all the while being chased by predatory antivirus programs and competing with other viruses for a place in cyberspace. One day when the virus replicates itself a mistake occurs—a computerised mutation. Perhaps the mutation occurs because the human engineer programmed the virus to make occasional random replication mistakes. Perhaps the mutation was due to a random error. If, by chance, the modified virus is better at evading antivirus programs without losing its ability to invade other computers, it will spread through cyberspace. If so, the mutants will survive and reproduce. As time goes by, cyberspace would be full of new viruses that nobody engineered, and that undergo non-organic evolution.

If you have only a vague idea of how computers work and what a computer virus is, this might sound sensible. The truth is, a little research on the Internet is enough to conclude that this is mostly nonsense. In order to explain why, we need to distinguish fact from fiction.

  • Computer viruses are completely inorganic beings – FICTION
    Computer viruses are inorganic alright, but they’re nowhere near being ‘beings’, any more than Microsoft Word or Mozilla Firefox are. Computer viruses are instructions for a computer to run—nothing more, nothing less.
  • Computer viruses replicate themselves millions of times – FACT
    How much they replicate depends a lot on the circumstances, but yes, computer viruses generally are instructed to create copies of themselves. There’d be little point in even programming a virus otherwise.
  • Computer viruses mutate via replication mistakes, whether random or intentionally introduced by the programmer – MOSTLY FICTION
    Computer viruses can be programmed with different stealth techniques meant to help them avoid detection by antivirus software. A possible way of doing this is using self-modifying code in combination with polymorphic code: In a nutshell, the virus rewrites parts of itself (or even the whole thing) to make its code look different while doing exactly the same things it did before. This is not at all the same thing as a ‘replication mistake’. An actual replication mistake, be it random or intended by the programmer, would overwhelmingly likely result in broken code that can’t even run. I’m not going to say that it’s entirely impossible for a replication mistake to produce a working copy of the virus, but I am willing to bet that the chances are so ridiculously small that the virus would have to replicate itself until the end of time before such a miraculous occurrence actually took place.

    More importantly, viruses will not ‘evolve’ any new features as a result of this approach, and will not build upon them. Additionally, antivirus software will not ‘mutate’ as a consequence of viral mutation. Antivirus software doesn’t have a life that depends on catching viruses. When a new virus comes around, more often than not human researchers figure out a way for their software to detect it by inspection (the so-called ‘virus signature’) and update it; alternatively, antivirus software can bust unknown viruses through heuristics, or by keeping an eye on programs that exhibit virus-like behaviour (touching stuff they’re not supposed to touch or reproducing like rabbits, for example). As a side note, if they ‘want to live’, computer viruses need to keep a low profile: a virus that ‘evolved’ new, unplanned features would be much more likely to start doing random shit and attract attention, either from the user or from the antivirus, thus undermining its own ‘survival’. Viruses and antiviruses will not cause each other to evolve, as predator-prey pairs do in nature.

  • Computer viruses are chased by predatory antiviruses and compete for a place in cyberspace – FICTION
    Computer viruses don’t compete with each other. You can easily have tons of different viruses running on the same computer. They don’t kill each other, and they don’t steal from one another imaginary resources they don’t even need. Worst-case scenario, a virus might have spread so much in a given computer that all the space is taken or the computer can’t run. Other viruses on this same computer are certainly not going to ‘die’ because of this. Maybe they can’t make more copies of themselves, but that’s not going to trigger a competition with anyone. They aren’t going to fight each other. More importantly, viruses will not try to outsmart each other and will not ‘evolve’ any features to do better than other viruses. If you have a computer with virus A and virus B inside and A does a better job at replicating itself than B, B will not ‘evolve’ any new feature to do better than A.

    Also, viruses are not ‘chased’ by ‘predatory’ antivirus software. Antivirus programs are nothing more, nothing less than a set of instructions meant to find and eliminate unwanted instructions (viruses). An antivirus will not ‘run after’ a virus; it will either find it or it won’t, and if it does, the virus is not going to ‘run away’ and the antivirus will not have to ‘chase’ it.
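
To make the difference between a ‘replication mistake’ and a polymorphic rewrite concrete, here is a toy sketch in Python. This is emphatically not how real malware works; it’s a made-up illustration using a harmless snippet as the ‘payload’. A random mutation flips one character of the source, which almost always produces broken code, whereas a semantics-preserving rewrite (here, just renaming identifiers via the `ast` module) keeps the code working while changing how it looks:

```python
import ast
import random

# A harmless stand-in for a 'virus payload': a snippet whose behaviour we can check.
PAYLOAD = (
    "def payload():\n"
    "    total = 0\n"
    "    for item in range(10):\n"
    "        total += item\n"
    "    return total\n"
)

def runs_correctly(source):
    """Return True if the source still compiles and computes the same result (45)."""
    try:
        namespace = {}
        exec(compile(source, "<payload>", "exec"), namespace)
        return namespace["payload"]() == 45
    except Exception:
        return False

def random_mutation(source):
    """A 'replication mistake': replace one random character with a random one."""
    i = random.randrange(len(source))
    return source[:i] + chr(random.randrange(32, 127)) + source[i + 1:]

def polymorphic_rewrite(source):
    """A semantics-preserving rewrite: rename identifiers, keeping behaviour intact."""
    renames = {"total": "t_%04x" % random.randrange(2**16),
               "item": "i_%04x" % random.randrange(2**16)}

    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            node.id = renames.get(node.id, node.id)
            return node

    return ast.unparse(Renamer().visit(ast.parse(source)))

random.seed(0)
trials = 1000
broken = sum(not runs_correctly(random_mutation(PAYLOAD)) for _ in range(trials))
print(f"random mutations that break the payload: {broken}/{trials}")
print("polymorphic rewrite still works:", runs_correctly(polymorphic_rewrite(PAYLOAD)))
```

Running this, virtually every random flip yields a syntax error or a wrong result, while the renamed version keeps working every time, which is exactly the distinction between a mutation and a disguise.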

The bottom line is: Yes, computer viruses do mutate to an extent, and these mutations can help them avoid detection; however, no, they aren’t going to ‘evolve’ into anything different, and especially not into something that ‘nobody engineered’. As a side note, while genetic programming can be used to evolve computer viruses, it’s not a very efficient approach for a virus that needs to survive ‘predatory’ antivirus software out there in the wild of cyberspace. Running a genetic algorithm to create new variants of the virus would take time and create tons of useless junk in the process—a great way to increase the chances of being busted.

I admit that these two blunders—rejuvenation and computer viruses—might have biased me against the rest of the book, but I do think Harari’s view of the future tastes too much like a dystopian-future movie. It seems based on rather fanciful interpretations of possible developments of current technologies and polluted by the stereotypically pessimistic idea that history is destined to repeat itself over and over. (Not the whole of history, though. A lot of people seem to think only negative patterns repeat themselves throughout the centuries, while positive ones do not, for some mysterious reason. The reason might be that bad news sells better than good news.)

However, my overall opinion is that this is a good book, some parts of which need to be taken with a pinch of salt. Even though I was quite disappointed in Harari’s speculations on the future, I’m nonetheless curious to see whether he has redeemed himself in the sequel to Sapiens, Homo Deus: A Brief History of Tomorrow. When I’ve read it, I’ll let you know. 🙂

Scattered thoughts on self-awareness and AI

I’ve always been a fan of androids as portrayed in Star Trek. More generally, I think the idea of an artificial intelligence with whom you can talk and to whom you can teach things is really cool. I admit it is just a little bit weird that I find the idea of teaching things to small children absolutely unattractive while finding the idea of doing the same to a machine thrilling, but that’s just the way it is for me. (I suppose the fact that a machine is unlikely to cry during the night and need to have its diaper changed every few hours might well be a factor at play here.)

Improvements in the field of AI are pretty much commonplace these days, though we’re not yet at the point where we could talk to a machine in natural language and be unable to tell it apart from a human. I used to take for granted that, one day, we would have androids who are self-aware and have emotions, exactly like people, with all the advantages of being a machine—such as mental multitasking, large computational power, and more efficient memory. While I still like the idea, nowadays I wonder if it is actually feasible or sensible.

Don’t worry—I’m not going to give you a sermon on the ‘dangers’ of AI or anything like that. That’s the opposite of my stance on the matter. I’m not making a moral argument either: Assuming you can build an android that has the entire spectrum of human emotions, this is, morally speaking, no different from having a child. You don’t (and can’t) ask the child beforehand if it wants to be born, or if it is ready to go through the emotional rollercoaster that is life; generally, you make a child because you want to, so it is in a way a rather selfish act. (Sorry, I am not of the school of thought according to which you’re ‘giving life to someone else’. Before you make them, there’s no one to give anything to. You’re not doing anyone a favour, certainly not your yet-to-be-conceived baby.) Similarly, building a human-like android is something you would do just because you can and because you want to.

I find self-awareness a rather difficult concept to grasp. In principle, it should be really easy: It’s the ability to perceive your own thoughts and tell yourself apart from the rest of the world. I really don’t think this is something we’re born with, but rather something we develop. Right after you’re born, you’re a jumbled mass of feelings and sensations trying to make sense of themselves, but unable to. If it were somehow possible to keep an infant alive while depriving them of any external stimuli (vision, hearing, smell, touch, taste), I very much doubt that jumbled mass would ever manage to make sense of itself. In such a hypothetical scenario, I’m not even sure this baby could think as we understand it: What can you reasonably think about, if you know literally nothing?

Research to better understand how the brain develops self-awareness is going on all the time; recently, studies have pinpointed the part of the brain where self-awareness might be generated. Other studies suggest there’s no single area responsible for it; rather, it might be the work of several brain pathways working together. I really hope they’ll figure it out soon, because this issue drives me bonkers. From my personal experience of self-awareness, it appears as if there is some kind of central entity that is constantly bombarded with external stimuli, memories, feelings, and so on. This central entity (my self-awareness, or me) also appears to be independent of the stimuli it is exposed to: You’re still you whether you’re smelling a cup of coffee or not, or whether you’re thinking about a certain past event or not. This seems to make sense, but I think it is in sharp contrast with the idea I expressed above, i.e. that self-awareness is developed thanks to all the internal and external stimuli we’re exposed to since day 1. Either there’s no central entity, or it works differently than I think.

Some suggest self-awareness is an illusion brought about by the constant, small changes in your perceptions. These changes are so tiny, and their flow so smooth, that we get an illusion of continuity—the article argues—an illusion which we call ‘self-awareness’. This is a very interesting theory, which I think makes sense at least to an extent. The thought that ‘you’ could be the sum total of all the perceptions (internal and external) you experience in a given unit of time has crossed my mind more than once. Over the years, people change their minds, they behave differently, and in some cases might become the opposite of what they used to be. This might be nothing more, nothing less than the result of the changes in our perceptions and of how they change what’s inside our brains. Still, in order to work, an illusion requires someone (or even something) to be tricked by it. If the self is just an illusion and there really is no ‘you’ there, who or what is being tricked by the illusion? It’s almost as if this theory says that the illusion is tricking itself, which is quite paradoxical; on the other hand, I think this conclusion shows there’s no such thing as a ‘simulated consciousness’, as in a machine that thinks it is self-aware when in fact it is not. (This has always been quite clear to me.)

Before I get too philosophical, let’s go back to self-aware machines. I think we’ll have a hard time giving machines self-awareness before we thoroughly understand what it is and how it works. Some suppose it might be a spontaneous consequence of complexity; for example, if you have a neural network learn bagillions of things, self-awareness might ‘magically’ arise once a certain level of interconnectedness between all that information has been reached. That’s an intriguing idea, and it implies we might be able to give self-awareness to a machine without having to understand how the process works first, but I am a bit sceptical. On top of that, how do you prove that the machine you built is self-aware? As a matter of fact, you can’t prove that directly even for humans: There’s no way you can be sure that the person you’re talking to is actually perceiving their own thoughts, but we generally consider some things as acceptable signs of self-awareness, such as the ability to talk about yourself, to recall past experiences, to interact with your surroundings, to avoid obstacles, to acquire new knowledge and apply it, etc. If a machine showed these signs, then we might conclude it is self-aware, but what if some of these signs are missing? For example, dogs don’t talk much about themselves. Are they not self-aware? I don’t know for a fact, but I doubt they aren’t. Perhaps self-awareness is not a binary phenomenon but a continuum, with different levels. If it is true that self-awareness develops thanks to the external stimuli you’re exposed to, then it is reasonable to think that different species may have different levels of self-awareness, because not all species can experience the same stimuli, let alone process them.
Furthermore, it seems reasonable to assume that a critical brain mass is necessary to be aware of your own thoughts, so the smaller the creature’s brain, the ‘lower’ (no offence intended) the degree of self-awareness we should probably expect, and below the critical mass there would be no self-awareness at all.

This takes us to another aspect of the issue. If self-awareness is (as it is reasonable to suppose) a product of evolution, then each species’ self-awareness has been fine-tuned to best suit the needs of the species itself. Bacteria, for example, needed no self-awareness to do what they do, and so they haven’t developed any. (To be fair, they probably couldn’t have developed self-awareness even if they needed to, because it’s hard to have a brain when your body is a handful of cells at best.) A machine, even if it were capable of learning and thus of ‘evolving’ to an extent, wouldn’t be subject to the same evolutionary pressure as biological creatures; its ‘survival’ would not depend on how fit it was for its environment, and it would not ‘reproduce’ with other machines so that any ‘mutations’ could be passed on or selected against. In other words, it wouldn’t have any need requiring it to develop self-awareness or emotions of any kind. (On the plus side, this suggests it would hardly have a reason to go on a human-killing spree.) If a machine ever has self-awareness or displays emotions, I think it’ll be because we made it that way, quite possibly because it was better for us in one way or another. A self-aware machine with no objective need for sadness or guilt, for example, might well ask you why the heck you enabled it to feel such unpleasant emotions if it wasn’t strictly necessary. An answer could be that we get along better with an emotional machine than with an unemotional one, but the machine might find this reason insufficient. (Which, on the minus side, might give the machine a reason to go on a human-killing spree, especially if we gave it an instinct for revenge.)

Still, I suppose it wouldn’t be entirely impossible for machine ‘feelings’ to develop in a sentient machine, even without evolution. Maybe they wouldn’t be happiness or sadness, since these have a strong biological basis, but they could be something else that only a machine can ‘feel’ and that we cannot even imagine. Future developments in any given field have a tendency to surpass our imagination, and whatever future AI turns out to be like, I’m looking forward to seeing it.


Welcome to l4t reborn! Yeah, I know. Odds are you haven’t got the foggiest clue what I’m talking about. Long story short, looking4troubles, or l4t, is a blog I started in mid-2016 and promptly abandoned because it just wasn’t working. Interestingly, my other blog, Rejuvenaction, has a similar story: I started it in mid-2015, wrote a lot of articles, but then abandoned it for several months because I wasn’t very inspired in terms of blog posts. Today, after a graphical and content revamp, Rejuvenaction is much more popular than I thought it would be, at least in its niche. Hopefully, l4t’s destiny will be the same as its twin brother’s. As l4t is still in its infancy, its structure and layout might still change significantly, though I’m really fond of this theme, which is the same as Rejuvenaction’s except for the palette. (They’re twin brothers for a reason.) I am afraid this colour scheme might be a bit difficult to read, so that might change too.

I’m not a fan of long introductory posts, so I’ll leave it at that and move on to the next post. I hope you’ll enjoy your stay!