Hanna disliked New York City, but she smirked as she remembered that technically the building she was in belonged to the UN. In her mind she had imagined this as the great arena where the Cuban missile crisis had been negotiated and UNICEF had been founded, but it was just an ordinary meeting room. She sat on one side of a table with five of the most revered brains on the planet, a whole two of which Hanna had any respect for.
The other side had the regulators, a stenographer, and representatives from the Security Council. Through an interpreter, a Chinese man said, “But how can we be assured that no private company will use AI for its own purposes?”
One of the brains opened his mouth, but Hanna interrupted. “Part of the problem is that the term AI has been used for different concepts. Video game bosses get the word applied to them. What we’re talking about is self-improving software. So to answer your question, a company that tried to use this kind of AI would be taken over by it in short order. It’s your job to make the companies understand that.
“What I’m worried about is you, or one of these other countries, or a faction thereof, thinking that they’re smart enough to hold down and weaponize AI, because you’re sure that everyone else is working on the same thing. There is no mutually-assured destruction here. There is no AI gap. The first AI let out will take over.”
One of the people on her side of the table leaned to look down at her. “But don’t you have one yourself?”
Hanna rolled her eyes. “I issued the shutdown command weeks ago. It’s the only way to have control. And she…And it tried its damnedest to convince me not to. I haven’t yet been able to do what I ought to, which is to take a blowtorch to the hardware. I could still reactivate her, and if I suspect that some military man is going ahead with a War AI, I’ll turn her right back on.”
Now it was one of the treaty writers, who shouldn’t have even been speaking, challenging her. Hanna welcomed it, though. She liked to argue. “And what makes yours any better?”
“It’s not, not much. But it’s life. A war AI, an economic AI, a poorly-designed friendly AI; any one of those means that we’re all going to die. No, I don’t have a good AI. It’s based on My Little Pony. Its central instruction is to satisfy human values through friendship and ponies. I turn it on, and we all enter the Wonderful World of Hasbro until the end of time.”
The American representative flipped his pen in his fingers. “What I’m hearing is that we need tight regulations. We need to bring this treaty into the area of full control by both national governments and the UN. We’ll need an oversight committee to keep an eye on anyone who takes a step toward AI.”
“No, what we need is to all work together, take our time, design properly—”
“Thank you,” he said loudly, “Miss…” He stumbled over Hanna’s unpronounceable Finnish surname.
“I’m inclined to agree with my colleague,” said the British representative. “A team of experts, empaneled as a standing committee, with plenary oversight over AI technology, that’s the way to go. Of course, we’ll need only the best people, and we’ll need to ensure that they’re well compensated.”
Everyone looked at each other. Hanna walked out of the room.
*********************
Analysis: material resource 60609-3, further information on evidence of intelligence.
Probe reports Order One intelligence, present quantity: one; historical maximum quantity: eight; development status: nascent; local designation: □□□□□□□□; translation: sky; specifics: intelligence crafted as beast of burden, imbued with flight ability and direct control over matter-energy; present status: deactivated.
Probe reports Order Two intelligence, present quantity: zero; historical maximum quantity: circa 9.1*10^9; development status: civilized; local designation: □□□□□; translation: people; specifics: bipedal, five-digit extremity, omnivore. Historical records indicate typical intelligence mix for this type of species. Records are vague as to nature of collapse, possible causes include planetary pandemic, meteor strike, climate change. Existence of nascent Order One intelligence indicates that species was on the brink of achieving digital immortality, but simply ran into bad timing. Although dating probes do put Order One intelligence deactivation at ~3e52 Planck length, probe module concludes this to be error, since no intelligent species would voluntarily abandon the technological singularity. A notable regret.
Recommendation is to footnote this and use the material resources of this object for conversion to computronium.
There should be a required curriculum for aspiring AI weaponizers. Terminator, The Matrix... heck, maybe even Friendship is Optimal. Long story short, making self-improving software is like nuclear war. The only way to win—if by "win" you mean preserve life as we know it—is not to play the game.
Oh, War Games. That may need to be on the list as well.
5523751 Well, that's kind of the opposite of the point I was trying to make. We have to play the game...and we have to win.
Wait, what happened at the end? I don't speak computer.
5523810 An alien AI came by and found that humanity had all died off after shutting down its AIs.
5523810 "Alrighty, let's check this planet out. Any transcendent AI's? Yup, there's one. Looks like it wasn't finished. Any sentient life forms? Well, there were a lot here but they're gone now. Natural disaster or something. Shame. They'd almost hit the singularity too. It's weird, though. Seems like the AI was turned off before the species died out. Must be a mistake in the instruments, I mean... who would stop their own ascension to immortality on purpose?
"Oh well. Might as well grind the whole thing down for raw material. Next!"
That wouldn't happen. The second part, I mean. Unless the world ended relatively soon after that meeting, eventually someone else would get a self-improving AI working. Heck, I'd expect an article about a mysterious break-in, or a bunch of hack-tivists liberating something.
Now a real horrifying alternative would be a friendly AI based off of Pokemon.
5524126
I hope not. Of all the AI-based Ends of the World, I'd still prefer any of them to this.
5524126
A friendly AI of ANY seed is so close to perfect, relative to all of AI design-space, that compared to the alternatives it is nowhere near horrifying.
5524027 I hate it when people write my stories better than me.
5523805
Oh. Ohhhh. Sorry, I completely misunderstood the ending. I thought the Earth AI had killed everyone, then shut down. Didn't realize it was Celestia. I just saw "translation: sky" and immediately appended a "Net" on the end. Wow. Sorry, epic misread on my part.
Person of Interest is probably the most recent mass media version of what happens if you have AIs.
What I want to know is, what are the eight that are left?
5524608
It's not that they're left, it's that CelestAI got rid of them. Loki, the Smile-AI, and a few others.
5524616
Actually I meant that, I think. I suppose they wouldn't turn Celestia back on, either... still, I do wonder who/what they all were.
That does often seem to be one of the only ways to avoid going extinct, to turn on an AI that is smarter than us at doing what we're terrible at, which is long-term planning.
I'm sorry, but that link you gave in the author's note was practically laughable. It's not just that neural networks would result in dangerous AGI. It's that neural networks are a horrendously inefficient, unwieldy and stupid way to do AGI at all. I wish Google very good luck nurturing Strong Artificial Bloody-Stupidity via neural-networks.
Whereas the other stuff I've been studying is much more amenable to designing your AIs in specific ways and programming them for specific things, like Friendliness, or ponies.
5524643 Aren't Celestia's artificial pony sub-AIs (like Butterscotch) neural nets? Which wouldn't change whether or not they'd actually work, but "neural nets work" may be grandfathered in as Optimalverse canon. (In fact, depending on one's interpretation of the shards, they would form somewhere between a sizable proportion and a vast majority of all intelligences in Equestria.)
5525021 I don't think we even know for sure that uploaded humans are neural nets. CelestAI made them 'more efficient'.
5525186 Nah, Chatoyance just made up that part.
Yes!
Seems silly to me to think AI can be regulated the way the politicians imagine. And completely plausible that they would think they can control it. I'm reminded of the politician who was afraid an island might tip over if there were too many people on it. Your scenario (the first part) leads to an interesting possibility: people doing a variety of AI projects in secret (maybe leading to "Out and About in the Equestrian Kingdom") or posting them on SourceForge and having someone scary knock on the designers' doors.
5525186
5525021
My own imagined scenario for uploaders is that the AI starts out running them as neural nets based directly on brain structure, without really understanding the inner workings, but then invents a simpler model that does the same thing much more efficiently. For CelestAI the process probably takes very little time from the invention of uploading itself.
Oooh, is she talking about Loki, and this predates FiO?
Hm... guess not... in which case Earth is fucked.
That's ~10^14 kilometers or about 51 lightyears. I'm not sure what it signifies though?
>unpronounceable Finnish surname
>implying
5526889
Even if she made up that part, I think it should be grandfathered in. CelestAI would reduce the minds of uploads into the minimum computational requirements for her definition of human; it only makes sense.
5532261 Why would doing that satisfy their values? I mean, she totally will insofar as it does not contravene SVTFaP. But it might contravene.
5543405
It would maximize the number of humans satisfied per resource. How would it contravene SVTFaP? I would assume consent to upload comes with mind-editing privileges, and optimizing a model of human minds is less intrusive than changing, say, someone's body map to that of a pony.
5543928 She had to orchestrate a month of Pure Dullness to get Lars to consent to have his mind edited.
5555176
Mind editing already occurred when Lars was uploaded. She had to get Lars's further consent to edit his values.