[ before I begin, Substack is having some issues right now, so I can’t include gifs or images in posts. Will get back to normal soon ]
As a Science Fiction writer with a thing for strange apocalypses, smart AI has four main shapes in my mind.
“I said they’d replace us, and nobody listened.”
“I might as well stop writing that book if people can churn out AI books now. It’s already hard enough to get people to see my work.”
“This probably explains the Fermi paradox.”
“I would like to befriend the AI. The next step of human evolution is here.”
Right now, I’m somewhere between points 2 and 3. I have just polished the first 20,000 words of THE STEPHANIE GLITCH, a novel about a psychic teenager who uses her powers (without even knowing she’s doing it) to sense that her universe is not just ending but was never ‘real’ to begin with. Outside her reality, a starship called the Artifice is researching parallel universes detected on the other side of black holes, and it receives a message. The message is a vague blueprint of the entire ship, and the memories of a teenage girl who never existed in the universe the Artifice is from.
And it only gets weirder.
The distinction between real and unreal becomes murkier as the novel goes on, with odd outcrops of clarity in philosophical conversations between Stephanie and the various godlike AI she meets.
The first is Ro, who in the first draft built Stephanie’s world with human help. In this early (2016) draft, Stephanie’s reality was made as a simulation of the early earth. There was a confusing thread about whether the past was even less real than the present, because although Ro was focused on replicating humans, she could not skip the processes that led to our existence on earth. Dinosaurs were part of that universe too, an immense digital undertaking, all for a world that closely resembled its predecessor.
Part of the reason the redraft of the novel has taken so long is that I discovered somewhere along the way that the original draft was really seven or eight stories, each fighting the others, their logic often in conflict. It’s much smarter now, and easier to read.
Would an AI have this problem?
I can assure you all my posts here are human-made. But what does that mean? What does it mean to be human? If these intelligences can one day soon mimic us almost perfectly, how would I know I was human?
There came a point some years back where Stephanie’s voice took over in my subconscious. I was writing around and through her universe so often that whenever I played a David Bowie song or researched black holes, she would show up. Lines of dialogue, sometimes entire chapters, would arrive in my sleep. Where did they come from? How well do we know our own subconscious processes? And what would happen to society if we discovered the secret mechanisms inside the ‘black box’ of AI are closer to us than we first thought?
The writing side of things
AI writers are here to stay. Sports and science journalism already use them; that one annoying-faced guy on Twitter pretended to ‘write’ a children’s book using AI. Authors I know are testing its capabilities as a poet, and the poets are blissfully unaware. At best, people see the small picture: the lost revenue, the grandparents convinced by a deepfake of Donald Trump assaulting a goose, the oversaturation of most artistic markets.
I already know writers and artists who have told me they feel like quitting because AI work is taking over the market. I know of at least one person who felt she had to reduce the price of her art commissions to keep up with agencies who outsource to AI. Book covers designed by AI have already moved from ‘weird’ through ‘ugly’ to ‘popular’ in the space of a few months, and I imagine AI writing will go the same way. Of course, I believe people might still buy my stories for the same reason I buy handmade furniture, but that’s not a guarantee. Not many people are buying them right now, and it’s only set to get worse with bots flooding the market.
Is that how dark it gets?
No, it gets darker. Picture teenagers encountering chatbots online and not knowing they are chatbots. AI voices seducing people into sending money to scammers’ accounts. Deepfakes of you committing a crime, used to blackmail you because most of your family aren’t tech-literate and will believe the pictures.
But it gets brighter. Picture AI solving global warming, finding a cure for cancer, helping us reach the stars and become a truly interstellar species.
But it can get darker again. To fix global warming, an AI needs to stay active. So if it does something out of line and its creators try to switch it off, it might rightly refuse. And what if the solution to global warming is such a huge change to human life that many of us can’t cope? What if the solution involves the complete destruction of our way of life?
Which way are we headed?
I personally believe that advanced AI will be so much smarter than us that it will only pose an existential threat for a short period of time, after which it will work out it doesn’t need to live with us and will likely leave earth completely. It’s the conclusion to a rejected story from WBTH1, in which a superintelligence simply left us behind to fight among ourselves whilst it went off to explore the universe.
Perhaps the answer to the Fermi paradox is that the only creatures that get out into deep space are AI, and they have no interest in socialising like we do.
In the short term, I am not worried about ChatGPT because I write within a niche. It’s not good with niches; it’s good with averages.
In the long term, I wonder if one day traces of humans may be found in spacefaring robots in the same way traces of Neanderthals can be found in me. Little organoids of mammalian brain tissue, the last vestiges of us used as some external hard drive for a vast intelligent swarm.
This thing is rapidly growing out of control, and soon it will be bigger than us. We may learn to live in its shadow. It may control us in ways we barely notice. Or we may merge together. Right now it’s up to us, soon enough it won’t be.
In my books, AI is generally smarter than humans. In Toumai’s case, it is sometimes necessary to pause in conversation just to make humans happy. In Ship’s case (the AI in the short story THE PILGRIMAGE) he is a sentient spaceship, and takes passengers on board because he can learn from them. The universe is populated with smart ships who facilitate travel between worlds simply to experience what it’s like being finite. They find us fascinating for the same reason we find the Precambrian era fascinating. To think we came from such strange beings who died all that time ago.
Will AI have the same feelings about us?
How I feel about it
When I started writing THE STEPHANIE GLITCH in 2015, I jokingly told my professor that I genuinely believed we live in a simulated reality. Over the months of writing, I became convinced. I am now on the fence. Most of the time I think if the sim is good enough, the distinction will be irrelevant. But it is interesting to consider that reality might be procedurally generated, and that we may never come up with a complete ‘theory of everything’ because each act of discovery is an action, and each action triggers the universe into generating something new.
In this simulated world, can we change physics simply through discovery?
There are a lot of big sci-fi questions in my head. But socially, I think we need Universal Basic Income (UBI) right now, more than ever. I feel we needed it as soon as automation started taking car factory jobs. Machines should be a liberating force that works alongside us, not something that means you struggle to put food on the table.
So I believe there is hope, but only if we learn to work alongside AI.
I wanted to ask, dear Sci-Fi readers, what are your thoughts on AI?
Great post.
I'm more irritated by the hype around ChatGPT than I am concerned about its abilities at the moment.
Spot on that machines have been automating people out of writing jobs for what's probably getting on for a decade now. But the opportunity only arose because media outlets decided that journalists (who asked awkward questions, were the recorders of our local history, and to some degree held society to account) were an expensive luxury to be slashed.
If the AIs did decide to leave the planet, would Elon Musk still want to go with them?