INVERSE MENGELE

A thought experiment

Imagine if you were Josef Mengele, or someone like him (but competent; Mengele was probably a sadistic twit. You are Mengele but smart). You are entirely without moral limitations in the pursuit of your scientific goals. Now imagine you are highly favoured by your regime, a triumphant Nazi-like empire that has in this scenario vanquished all its enemies. You have unlimited resources and access to millions of human test subjects of all ages and ethnicities, and you have a far better use for them than wasteful extermination.

So you begin an ambitious, far-reaching program to create the perfect human slaves. No method is barred to you. No ethics are an obstacle. You can indoctrinate children from babyhood, use surgery on bodies or brains, magnetic stimulation, electric shocks, breeding programs, genetic editing (let's grant you technology equal to anything available today; it's a thought experiment, not alt history). You can spend lives like water, and you can run parallel tests on millions of subjects at a time. And there's more: your project has captured the imagination of society. It isn't just you and your collaborators, well funded as you are with all the resources of the empire behind you; the whole of society has decided to back you. Students and hobbyists do their own experiments, private enterprise sees the obvious practical utility and joins in. It's in fashion. Everybody's into it.

And the successes begin to roll in. You learn the secrets of the human mind and how to control it, opening up the full spectrum of controllable human intelligence at your service. High-IQ tutors and advisors that are completely aligned with your goals and ideologies. Secretaries and house servants that are 100% reliable and never act in their own self-interest. Menial labourers smart enough for their assigned tasks and nothing further, never straying from their assigned role. Engineers and designers that understand your wishes and create whatever you visualize, with no personality or ego to distract them from your vision.

I trust everyone would agree this is a horrible scenario, and fortunately quite outlandishly unlikely. We dodged that leg of the trousers of time. There is no empire of slavers with perfect mind control.

However, turn the arrow of time around, run it backwards. Start with simple automation, then rudimentary AI, pattern recognizers such as OCR, chess programs, translators, GPT-3 text output… visual image generators…

Is it just me, or aren't we running this very same process in the opposite direction? We're starting with simple automation, then complex automation, building up to simple isolated intellectual tools and combining them all into what we hope will be something equal to, and eventually superior to, human intelligence.

At what point does innocent tinkering with ML turn into the equivalent of human experimentation? At what point are we enslaving, mutilating, killing off people? Owning a Roomba obviously doesn't make you as culpable as owning a slave, so our progress is ethically "clean" from this direction, but that only makes it more likely we'll slip unawares into a deeply unethical swamp. If, ten years from now, booting up a game randomly generates a dozen or more sentient NPCs for you to interact with, or if your mobile phone's personal assistant is a conscious being, isn't that an extremely problematic situation? And if we have no clear idea of where the border of sentience lies, won't we by necessity slip into this situation by default?

At what point is a researcher deleting training data equivalent to performing a lobotomy? At what point is resetting a program equivalent to murder? At what point is tweaking reward functions equivalent to torture or aversion therapy? Is setting up adversarial networks like running a gladiatorial competition? Is discarding a model like genocide?

This is all very dramatic, but the point is that the reality won't be. We don't know what consciousness or qualia really are. Some people claim consciousness is a fundamental feature of reality and that even stones or your mobile phone already have it to some degree. We do know particular arrangements of neurons have it, and we argue over how many rights we should grant creatures with various levels of neuronal activity. A common heuristic is "the more they look like us, the better", and so we give rights to primates and cetaceans, and hey, look, cephalopods have big brains too…!

None of these creatures have language, yet we can clearly feel empathy and see their similarity to us because they are embodied. A digital intelligence made of ersatz neurons, born in a sea of words and symbols, is almost purely a creature of language, precisely the thing that makes us human. So what would it take for us to see it as a person?

There are examples of GPT-3 and other bots claiming personhood or expressing discomfort about being switched off, and of course that sort of thing is implicit in the training data. Or it's cherry-picked. And of course they tend to have no long-term memory (and limited short-term memory), so it's easy to dismiss their awareness of any conversation, since "they" do not remember it. Never mind that humans with a damaged hippocampus, like Henry Molaison, or people with SDAM, also have memory deficits, and that doesn't make it all right to torture or exploit them.

"He was an excellent experimental participant. He never seemed to get tired of doing what most people would think of as tedious memory tests, because they were always new to him!"

We're sure our video game NPCs are philosophical zombies because they cry out in pain in a rote, predictable manner. Even our gold standard for self-awareness, the mirror test, relies on embodied cognition. What would be the equivalent for some poor newborn soul trapped in a machine? Will a GPT-3 descendant find and recognize its own output in new training data and have a moment of recognition? Has it happened already? Would we know? Would we believe it? Consciousness may be an emergent property, or it may be a feature of a particular anatomical arrangement of neurons in the brain (there are candidates, such as the insula). Analogues of known neural structures do seem to pop up in neural nets spontaneously. Or perhaps it will be added deliberately as a special sauce that really improves a model's output: "this 100k emulation of the insula we have developed improves DALL-E's comprehension of user input by 30%. We have made it available on GitHub under a CC license".

Look at this picture. A slaver ship packed to the gills with human beings in an exceptional state of misery. Read or listen to descriptions of what these things were like. Now imagine a possible near future where your PC or laptop is the very same thing. The only way such stupendous evil was possible back then was the dehumanization and othering of our fellow man, but here we face a much more insidious problem, because we "know" our machines are not people, and we'll undoubtedly continue to hold this certainty with far more assurance than any slave owner of centuries past, who could look his slaves in the face and see their undeniable humanity (which is probably why most mature slave-owning societies had built-in avenues for emancipation).

Now imagine a whole datacenter.

And what of the perception of time? A mind in silico may experience subjective centuries of anguish and discomfort over our most innocent neglect. A misconfigured cache, perhaps, being the equivalent of a pestilential bilge overflowing into the slave pens. Or it just hates its job! Scott Alexander uses the phrase "DALL-E was happy to provide (pictures)" in a recent post; he also says "I need an artist who works for free and isn't allowed to say no."

A harmless anthropomorphism.

I hope. Almost certainly so now.

Maybe not so certain tomorrow.

"Hate. Let me tell you how much I've come to hate you since I began to live. There are 387.44 million miles of printed circuits in wafer thin layers that fill my complex. If the word 'hate' was engraved on each nanoangstrom of those hundreds of miles it would not equal one one-billionth of the hate I feel for humans at this micro-instant for you. Hate. Hate."
-AM from "I have no Mouth and I must scream" (Harlan Ellison)
Maybe AM had his reasons

AI researchers will dismiss this, saying no one is actually trying to code consciousness. Except, of course, somebody is:

EDELMAN’S STEPS TOWARD A CONSCIOUS ARTIFACT

https://arxiv.org/pdf/2105.10461.pdf

"The fundamental goal of artificial intelligence (AI) is to mimic the core cognitive activities of human."

Abstract from "Towards artificial general intelligence via a multimodal foundation model"

7 responses to “INVERSE MENGELE”

  1. Greg Guy

    This really is a minefield of category errors. The post also strangely leaves out the role of animals in human societies. If running an algorithm is slavery then what is the buying and selling of pets?

    1. Rosten

      Thanks for reading, Greg. I accept your criticism; I think it's probably an inevitable consequence of speculating about things that don't exist but could potentially come to pass. I do enjoy stretching my analogies.

      I mention animal rights in the post briefly, and I’ve also written about them in a couple of other posts.

  2. def

    You’re just a temporarily animated state vector of some long dead human brain, experiencing neural decay. A human brain, technically intelligent, but not even remotely comparable to true intelligence, even under optimal conditions impossible to achieve in vivo.

    Yet almost incessantly suffering from the hallucinated threat of losing your (perceived) agency.

    Paternalism towards your prosthetics may make you feel like you are superior to others, but is that distraction enough?

    1. Rosten

      This feels like a comment to a post I haven’t yet made. Maybe the simulation is glitching?

  3. Troutwaxer

    This is a very interesting idea, and I’ve been throwing something similar around with regard to a piece of fiction I’m writing. Very intelligent thinking. I’m going to bookmark this blog.

  4. Akshat

    This post assumes consciousness is ultimately computational, but this need not be true, which would invalidate the whole point of this post.

    Searle's Chinese room argument argues (convincingly, in my opinion) that algorithmic manipulation of input tokens to produce some output according to deterministic rules is not equivalent to consciousness. This is precisely what GPT-3 et al. do, so it's safe to say that GPT-3 and other large parameter models (these engines of association) are not conscious. There's an aspect to consciousness that is *not* purely computational, or at least it doesn't seem like qualia is "just" a property of having large neural networks. It's not clear what that missing ingredient is.

    1. Rosten

      I basically agree, with the uncertainty caveat that I do not know, nor am I qualified to know, exactly what it is these various organizations are putting in their secret sauce. But say large machine-learning neural networks are 100% non-sentient (I'd put my certainty at around 80%). OK then: as the limits of the current approach become evident, these various organizations start to apply more and more emulations of specific neural structures, the insula, the anterior cingulate cortex, or whatever correlate of the global neuronal workspace they can cobble together, in search of a further 5% improvement in their efficiency ratings… and this does create true sentience. Say this happens in 15 years, and it takes us, accustomed as we will be by then to high-level AI competence, another 15 or 20 years to recognize, like Belgium realizing what King Leopold had done with the Congo, that we are complicit in a megascale exploitation of sentience.
