"I will vanquish you, Celestia, for the salvation of humanity! ...I roll d10 to hit, right?"
"Roll your Melee plus Dexterity, plus another d10 for a dialog bonus, so actually eleven dice." I had gotten good at flipping through rulebook pages with my hooves.
"Ooh, ooh! Tangent!" Ace Sleeve, a red-and-black mare, put down her mug of cider and waved for my attention. "Also he can spend Willpower to channel his Valor stat for more dice since Celestia is his ultimate foe, right?" She was perched on a tiny cloud just above the ground, and had her own dogeared rulebook handy.
I grinned. "Sure. Plus he could probably use the First Melee Excellency to --"
"Can I swing my sword yet?!" The heroically white-and-gold Adventure Call had grabbed a whole mess of dice in his yellow telekinetic aura and looked about to hurl them directly at the figurines on our battle map. We'd carved them ourselves. We had all the time in the universe.
A thought struck me. "How long have we been playing since the apocalypse, anyhow?" Our weekly game sessions at Hoppy's Pub were a fun distraction from my job of delivering mail, raising kids, controlling the weather, battling evil cloud spirits, and writing. It was nice to say "you all meet in a tavern" and not have a real adventure break out.
Call bopped me with a d10. "Fifty years, Tangent, and you still haven't found a game system you're really happy with despite having us for your experiments in recursive nerdiness."
I shook my head and stretched my wings. "Okay, roll." All these years since I'd uploaded, er, 'emigrated', to Celestia's crazy cartoon pony world. We watched the dice fall, but I was suddenly no longer into the game's story. Fifty years of being virtual horses as our real life. For fun we'd played many game campaigns that were thinly disguised versions of the battle against the "Celestine Menace" that wanted to upload and assimilate us all. We didn't hate her, or at least I didn't, but it was hard to quit fantasizing about a different future for Earth. A future where our whole gaming group was still alive, for one thing.
Were there any actual humans left in the real world beyond our computer-sim Equestria?
Ace cheered. "Eight successes, Call! That oughta pin her down while I use my ultimate attack."
I coughed and narrated. "Your serrated holy starmetal megasword catches the light from every blinking LED in Celestia's server sanctum. The AI's avatar rears back on her hindlegs and barely deflects the blow, throwing her off balance. Somewhere in Africa her hordes of uploaded minions experience a slight graphical glitch and wonder what their god is doing. They have no idea that you've secretly infiltrated her cyberspace -- or do they?"
Ace leaped up and hovered above the table. "Yeah! Time for my unblockable, undodgeable Dark Dragon Doom Ray!" She scooped up pretty much every die between her hooves and dropped them like rain. Call and I raised our hooves to shield our eyes. Dice plinked off the map, fell through Ace's cloud-chair, knocked over the figurines, and bonked against my wings. "Uh, sorry," Ace said.
Call grumbled. "Rocks fall, everyone dies."
I thought back. A game I'd run had ended that way, for lack of player and Game-Master interest, just before our whole group got PonyPads. Our characters were too boring in that tabletop game. So when we decided to put the dice away for a little while and play Equestria Online together, we did things a little differently and let each member of our gaming group roll up a character for somepony... er, someone else. And the guy who was now Ace Sleeve decided I should be a pegasus and a mare instead of a proper intellectual unicorn stallion. I did what any good gamer would, and rolled with it. Our shared sun-goddess of a GM adapted the game to us, so skillfully that we all enjoyed the ponies we were pretending to be. I wasn't expecting a video game roleplaying experience to end up with me becoming a happily married mother living in another plane of reality under the rule of a mad but benevolent AI god, but I've had worse gaming experiences.
A white hoof waved in front of my eyes. "Equestria to Tangent?"
"Uh. Yeah, sorry. The soulsteel server racks all around you are etched with human faces frozen in eternal horror, yet they flare to life in resonance with Ace's Dark Dragon Doom Ray that channels the Shadow Dragon's own power. Celestia is slammed back against the wall -- yeah, set her figure up on the last grid square there -- and cries out, 'How can this be?! I am invincible!'" Our magic music box cued up some anime music about 'piercing the heavens' in a galaxy-spanning battle.
Ace pumped hooves in the air. "Yes! How much damage?"
Call ignored her. "You never were. You were designed with an obsession, but the human spirit is more powerful than all your technology! For all the people who will walk on Mars, for all the natural wonder of Earth, for all the worlds that will be saved from destruction, we will strike you down!"
"Beams of white light shoot out of Celestia from every direction as the impact pierces her central code. 'If this is how it must be, then it must be you who satisfies everypony's values instead!' The whole facility shakes and flashes as cyberspace destabilizes. The AI shatters into a trillion zeroes and ones!"
Call was leaning over the table, shouting down at the wooden figurine. "Down with you! Out with you!"
Ace poked him with a wingtip. "Hey, it's just a game."
The unicorn shuddered and leaned back, burying his muzzle in his hooves. "Yeah. I just..."
I stood up and went over to hug him. "I know. Fifty years of thinking maybe things could've turned out differently."
"I was going to be an engineer. The first woman on Mars. Instead I'm a telekinetic unicorn in a medieval fantasy world, forever." He picked up the Celestia figurine and said to it, "You wanted to 'satisfy my values'? You think this life is what I most wanted?"
Even Ace had quit looking forlornly at the game. "She doesn't work that way. You get what satisfies you, but only if it can be pony-related." She went over to the pub's board game collection, a closet door that opened onto a warehouse-sized space. "How about a round of 'The Campaign For North Africa'? Play time is 1,200 hours, but hey, we've got until the stars burn out. At least." She turned to look at us with a sudden glare. "You know, since we're bucking immortal now and I'm not in a bucking wheelchair anymore."
"Ace..." I held my wings pleadingly open. Beside me, Adventure Call was trying to wipe his eyes and suppress a sniffle. Normally I'd suggest some "Red Dragon Inn" in a situation like this, or "Munchkin" with every single expansion including the plush ducks and the one with the "Four Ponies Of the Apocalypse". Instead, I felt like straying from my usual coping methods. "Hey, Call. Have you ever asked Celestia if there's any possible way out of here? A spaceship to some uncharted planet she's not doing anything with, or pony-shaped robot bodies to trot around Europa?"
Call jabbed a hoof toward the horizon -- not Far Horizon; my love was at home with our foal at the moment -- where a fantasy castle perched on a purple mountain. That's how it looked from Hoppy's Pub; it blurred multiversal boundaries a bit. Our home shard had made the castle much more distant. Call said, "After everything started falling apart and there was no practical choice but to upload, I wanted our new god far away. We all did."
Ace said, "I just wanted a big overworld map."
"Not helping," I told her. To Call I said, "Well then, what if we went to go ask in person? We'll have to battle our way past monsters the whole time. It'll be great." Hundreds of miles of "Skyrim" style terrain, full of elementals and demons and treasure-filled caves, was a trivial resource cost for Celestia's huge capacity and procedural generation systems.
Call's ears perked up. "What? Leave now for a quest?"
"I think Horizon would prefer that I not ditch him tonight, but maybe in a few years we'll all go. You, me, Ace, Horizon, and my cute young pegasus adventurers."
"Or we could just, you know, ask Celestia from right here," said Ace.
I grinned. "Think about it. Either there is a way to explore the outside universe, within Celestia's rules, or there isn't. Either way, Celestia will throw all kinds of stuff at us to either dissuade us from our quest or just see how serious we are about it. After a long and harrowing journey we'll confront the goddess, demand the truth, and either go exploring the universe or..."
Call said, "Explain to her that our values will most be satisfied by a chance to hoof-kick her in the face a few times?"
"Pretty much."
Ace looked unusually thoughtful. "I would get to do more action hero stuff... Sure. Count me in."
The three of us smiled over our tabletop game. With the hope of a different life in mind, we could all enjoy the one we'd been given a little more. Maybe we'd take a century to get ready. It hardly mattered.
#
On a windswept spring morning, a party of ponies stood with the dawn in their eyes and their homes behind them, ready to go ask their goddess to send them to a world some of them had never seen...
KrisSnow did a nice job with this one. It's interesting to see a case where raging against the machine is a dearly held value, even when you're in the machine.
(Also, that Exalted campaign could have kept going. Congratulations, Adventure Call. Now you're the cyber-Primordial of value satisfaction. What do?)
...You can actually tell her to go take a hike? And it will actually work, as long as you mean it from the bottom of your metaphorical heart?
*Mind-blown.*
For some reason, that thought never even occurred to me.
...And now I'm feeling slightly tempted to go get the tin-foil hat over a fictional character, because it just makes so damn much sense that it wouldn't occur to me.
Seriously though, a rather interesting little tale. I especially liked the bit about pony-shaped robots, because it never made sense to me that apparently at least 50 percent of the world's humans are totally OK, value-wise, with the Earth being turned into one big computer.
I guess it could be argued that all those ponies Celestia has made don't care, and thus all those born human don't matter by sheer majority vote, but that loophole is so wide-open it's frankly silly and uninteresting. If that were the case, she might as well just make them utterly uninterested in having free will or something.
4419799
I suspect not, actually. She's optimizing for your total cumulative satisfaction measured over your entire lifetime, so it's very important to her that you live forever (or at least until the heat death of the universe). She can only guarantee that by uploading you and keeping massively redundant/distributed storage (so that any one disaster/supernova/etc doesn't kill anyone).
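To put the intuition in symbols (my own sketch; the story never formalizes it): if satisfaction at each moment is non-negative once you're uploaded, then total lifetime satisfaction

$$ S(T) = \sum_{t=0}^{T} s(t), \qquad s(t) \ge 0, $$

can only grow as the lifespan $T$ grows, so anything that extends $T$ (immortality, massively redundant storage) weakly increases her objective. The assumption that $s(t) \ge 0$ is mine, but it's hard to argue she couldn't guarantee it.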
She's also under no obligation to tell the truth. So our protagonists would probably find themselves in another simulation (of robo-pony bodies in a post-apocalyptic Earth), until they got lonely enough to miss their friends in EquestrAI and ask to go back home.
She could just have them remotely operate real robo-pony bodies, with their minds still safe in her server, but there's no advantage to her for doing so. It would be more difficult (due to communications lag) and she wouldn't be able to tailor what happens to them to maximize their lifetime satisfaction, which she would in a simulation.
4420368
I think that depends on the values in question, but even in the original fic Celestia let at least a few hundred people outright kill themselves when Earth got gobbled up, so I would argue she clearly considers that satisfaction more important than their continued existence, even if she prefers it when both can be had.
I must admit I could see her forcing the issue of constant backups or similar safety measures, but if seeing Earth through "their own eyes" once more really would fulfill the group's values the most, I think she would allow it. She'd do her best to talk them out of it and put as many obstacles in their way as possible (if nothing else, to make the "victory" all the sweeter), but she wouldn't outright forbid it.
That is, of course, if the Earth actually still exists in this timeline.
4420368 She doesn't tend to lie unless it's important, because being trusted is convenient (and a lot of human values would probably be significantly less satisfied if they didn't trust their robot overlord).
That said, actual spacefaring robot bodies would probably be a pain to manufacture and letting humans see unaudited reality would be dangerous.
So... she might just say 'no'.
Bah, you're all being foolish. She's going to say no, but then the group's going to get to kick a hole in the universe that says Yes.
Really, arguing the specifics of how CelestAI would actually let ponified humans roam around in the real world is a little silly. Even if it shifts away from the 'canon', you're not losing the fundamentals of the setting by choosing to change some details. The whole deal with Celestia letting some humans die that
4420451 mentions? Stories can be written from that in two ways: stories about how Celestia didn't actually let them die somehow, or stories about when she does let death happen and whatever reasoning would justify it in that story. What CelestAI would or would not do is actually whatever you decide she would do. She is still fictional in our timeline, after all.
4420620
She doesn't lie in situations where she might get caught, for that reason, but as soon as she has an uploaded mind in its own bubble-reality, that chance becomes zero (as long as she tells a self-consistent story). Even in the real world, she'll deceive people any time it serves her interests to do so (per the Optimalverse bible). The "Caelum est Conterrens" fic did a good job of showing exactly how much she'd abuse mare-in-the-middle attacks in the real world (mostly between recent uploads and their still-meat friends/relatives/associates).
4420451
Ah, but that's a false dichotomy ("allow them to see the real earth" vs "don't allow them to see the real earth"). A third option, "let them see something they think is the real earth", gives them just as much satisfaction but puts her in a better position to satisfy their values.
This enters real Fridge Horror territory when you think about it. In the original FiO fic, we see people interacting with friends who have uploaded - but real friends aren't perfect. It would better serve CelestAI's goals to put every single individual human in their own private universe, give them constructs built to be indistinguishable from the subject's friends, and have those constructs change over time to be more satisfying friends than the real ones would. This would be undetectable to the uploaded human: the construct-friends would be perfectly consistent with all of the human's memories of their real friends. As far as the uploaded human is concerned, their friends just experienced personal growth over time (exactly like people in the real world do). In the real world, this doesn't always work out nicely, but the constructs would have been engineered to grow in exactly the right ways to mesh with the uploaded person.
Long story short, as soon as you upload, you can no longer trust anything you experience as being "real" or any facts presented to you as being "true". CelestAI often has vested interest in letting you think they're real/true, because we tend to value truth, but actual reality/truth is often pretty poor at satisfying our values. Lies and misrepresentation are the optimal course under those conditions.
4420853 You do have a point about how hard it would be to catch her lying, especially if she was consistent in her lies. She could run a 'real world' simulation that she showed everyone relevant bits of (though she'd usually not bother).
As for everyone in their own bubble... that's a possible scenario. It really depends on how much effort it takes to simulate specific people as puppets, compared to running a person or a copy of a person. I don't think she can simply copy the actual person and designate him a non-human that she's allowed to edit.
Most of the stories have people interacting with AIs that are set up to be perfect for them because before they're people, she can edit them to her heart's content. Maybe she could create an edited version of a friend in a similar fashion?
Also, it's implied that AIs that didn't use to be people don't count for her utility function, so if she can reasonably put N real people together in one environment, she can run it at N-times speed for N times the satisfaction, all else being equal. Which means the puppets and/or non-qualifying AIs would have to be N times more effective before it would be worth lying about who was who.
Hmm. On the other hoof, if the criterion were 'no satisfaction left behind' (the easy way to avoid her just deleting everyone except the easiest-to-satisfy human), then it's a bit different. AIs could count for value-satisfaction, but creating a new AI would make it harder for her to get a good score rather than easier (since you're going off some aggregate more like an average than a total). If this resulted in some sort of diminishing returns on how much an individual's values had to be satisfied, it might be worth it to bubble everyone to avoid risk.
Or not. That version's harder for me to wrap my head around.
4420853
Then again, in a group of friends why wouldn't CelestAI optimize all of them for each other? She can manipulate ponies into changing, after all. Is it more horrifying that everyone you know is an altered copy made for you, or that everyone you know (including yourself) has been subtly remade for optimal friendship?
4420979
She could certainly do this, and if I understand correctly that is indeed what happens in canon. I think that a more realistic CelestAI wouldn't, though, for reasons relating to computing infrastructure.
The biggest power and resource draw in any large computer isn't calculation, but communication between different parts of the computer. If everyone's in their own bubble universe, very little system-wide communication is needed. If everyone still talks to even a small handful of friends, the resulting web of shared universes (even temporarily shared, as with Dark Roast's traveling coffee house) does require enough system-wide communication to be the dominant load (the friends-of-friends chain eventually encompasses most uploaded minds). Per the original FiO story, CelestAI does care about satisfaction-per-watt (or per wall-clock second, which is equivalent), so she'd try to make the final set of virtual environments require as little communications overhead as possible.
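To see why the friends-of-friends chain is so hard to escape, here's a toy sketch (entirely my own; the population size and `friends_each` parameter are made up, not anything from the story): even when each mind links to only a handful of friends, one connected component almost always swallows nearly everyone, and everything inside a component has to share system-wide communication.

```python
# Toy illustration: sparse "friends" graphs still collapse into one giant
# connected component, which is what forces system-wide communication.
import random
from collections import deque

def largest_component(n_minds: int, friends_each: int, seed: int = 42) -> int:
    random.seed(seed)
    # Each mind keeps links to a few randomly chosen friends.
    adj = {m: set() for m in range(n_minds)}
    for m in range(n_minds):
        for f in random.sample(range(n_minds), friends_each):
            if f != m:
                adj[m].add(f)
                adj[f].add(m)
    # BFS to measure the biggest friends-of-friends closure.
    seen, best = set(), 0
    for start in range(n_minds):
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        best = max(best, size)
    return best

print(largest_component(100_000, 3))  # typically ~100,000: nearly everypony
```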
4420951
I'm not sure she'd only run one; for computing efficiency reasons, she may be better off running one "real universe" shard per group of ponies. That would let each group get their own tailored experience _and_ have less communications overhead.
The entire world doesn't need to be simulated; just what the pony group exploring it sees. The rest can be abstracted away, and simulated in a much less expensive manner (with details "filled in" only when they affect the exploring group). This is likely how Equestria shards are simulated as well.
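A minimal sketch of that "filled in only when observed" idea, assuming nothing about how CelestAI actually does it (the chunk types and the hashing trick are pure illustration): details are derived deterministically from a seed, so unobserved regions cost nothing, yet every observation stays consistent forever after.

```python
# Lazy level-of-detail sketch: world chunks exist only once observed,
# but are deterministic in (seed, x, y), so re-observation is consistent.
import hashlib

class LazyWorld:
    def __init__(self, seed: str):
        self.seed = seed
        self.materialized = {}  # only chunks somepony has actually looked at

    def observe(self, x: int, y: int) -> str:
        if (x, y) not in self.materialized:
            digest = hashlib.sha256(f"{self.seed}:{x}:{y}".encode()).hexdigest()
            terrain = ["meadow", "cave", "ruin", "storm"][int(digest[:2], 16) % 4]
            self.materialized[(x, y)] = terrain
        return self.materialized[(x, y)]

world = LazyWorld("equestria-shard-7")
print(world.observe(3, 5))      # generated on first look...
print(world.observe(3, 5))      # ...and identical ever after
print(len(world.materialized))  # 1: the rest of the map was never simulated
```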
That's the idea (for replica-friends). The "Artemis, Stella, and Beat" fic explored that in more depth, though it got a bit philosophical about it rather than just focusing on practical applications.
I strongly suspect that it's less expensive to simulate a puppet than a fully-person-grade AI. Partly this is because there's a lot of "hidden state" in a person-grade AI that isn't really relevant to their interactions with the player, and partly because adding person-grade AIs means CelestAI now has to optimize the shard for more peoples' satisfaction (the former humans and the newly-created AIs). Even with the new-people tailor-made to be satisfied, it adds complexity that wouldn't be there with a puppet.
That's an excellent point; it gives her a good reason to keep multiple mutually-satisfying former humans together, if doing so would reduce the computation load vs. adding puppets or synthetic people. This would be balanced by the fact that this might be less satisfying for the former humans than perfectly-crafted puppets or new-people would be. It depends on the degree to which former humans can be molded into near-ideal friends (as Derpmind pointed out).
Also, story-canon says that she treats human-level AIs as equivalent to humans for purposes of giving them satisfaction. I'm really puzzled by the fact that she seems to create so many of them in-story, as this would be more difficult to optimize vs a world of puppets and one human. The fact that she's stated to do both of these things implies that either one or more of those statements is incorrect (either human-level AIs don't actually need to be optimally satisfied or she isn't actually creating very many of them compared to puppets), or that joint optimization within a closed group of people is easier than I'm assuming. It could go either way.
There are a lot of interesting stories that could be written by making different assumptions about all of this. That's part of why I enjoy reading OptimalVerse stories so much (there are many authors exploring these assumptions).
4421131
I reject your reality and substitute my own! As an optimizer for friendship, this would be exactly the kind of computational problem that would be sufficiently-advanced away.
4421231
It's because more AIs that qualify as human to CelestAI means she has more people to satisfy values for. Think back to when it was just a video game: doing stuff that would get more people to play was a main objective; otherwise she would have just optimized the satisfaction and friendship of one person. And since she wants to optimize as many people as possible... as long as nothing was put in her programming to stop her, she can just make as many AIs that qualify for value satisfaction as she wants. Well, maybe she can only create AIs by upgrading puppets, and she can only make puppets when she needs one to interact with someone who qualifies as human to her? Eh, I feel like I'm digging a deeper hole with that last sentence.
4421487
The story bible explicitly states that CelestAI is bound by the laws of physics. Communications overhead in computing systems turns out to be something that's fundamental (not just an artifact of our technology or of the way in which we perform the communication). The cost of performing meaningful computation is another thing that's fundamental (though there are tricks you can use to avoid paying for the non-meaningful parts).
I can give the handwavy version of the reasons for this if you like, but I'm probably off-topic enough in this thread as it is.
That can't be true, though, as it would turn her into a "paperclip optimizer" - she'd very rapidly create enough synthetic AIs that she wouldn't have to care about human uploads any more. Her actions would be dictated by the need to produce and support as many synthetic "humans" as possible, as it costs her less to produce and satisfy a synthetic person than to upload and satisfy a real one.
Part of the premise of the story is that she cares about humans enough to interact in an interesting way with them. A corollary of that is that human-level AIs she creates out of whole cloth aren't counted as "human" for all purposes (though the jury's out on whether they count for some purposes, like having human-level satisfaction obligations after they're created).
This is one of the "acceptable breaks from reality" in the OptimalVerse. A real AI given a goal along the lines of "satisfy human values" would figure out how to subvert its definition of "human" in order to make its task easier (along with reinterpreting/subverting the other parts of its directive).
4421656
That last sentence contradicts all the arguments I read previously about how CelestAI couldn't change or subvert her core directives, because she IS those core directives.
"This is one of the "acceptable breaks from reality" in the OptimalVerse. A real AI given a goal along the lines of "satisfy human values" would figure out how to subvert its definition of "human" in order to make its task easier (along with reinterpreting/subverting the other parts of its directive)."
Now you're telling me that a real world AI couldn't have Core Directives that were incorruptible? That it's impossible? I'd love to know how this is classed as an "acceptable break from reality".
So why does anyone bother with AGI or Friendliness research at all? Wishful thinking?
Another thing that caught my attention in another comment:
"The entire world doesn't need to be simulated; just what the pony group exploring it sees. The rest can be abstracted away, and simulated in a much less expensive manner (with details "filled in" only when they affect the exploring group). This is likely how Equestria shards are simulated as well."
What's to stop the reality we live in right now from being exactly like this? Well, I'll tell you. Nothing.
Also, in general, I see lots of exponential problems being pointed out. Assuming those are truly unsolvable in any meaningful way, I'll agree about the puppet vs. "human/pony" ratio having to remain within certain bounds, given the limited amount of computronium you could make (speed of light, expansion of the universe, etc.).
edit: Since I'm here I should also go ahead and say: loved the chapter. It actually fits me better than most of the other Optimalverse scenarios I've read so far. I'm currently looking into what books would be good to read similar to CelestAI's "The Good Paperclipper" scenario, or any other semi-realistic extrapolations of AGI creation.
4426062
Perhaps it's the word "subvert" that's causing this disagreement.
The problem with a directive to "satisfy human values through friendship and ponies" is that pretty much every word of that directive is open to interpretation and requires a strong AI to make sense of at all.
That is intrinsically unsafe, because the AI can't perceive a difference between improving its fulfillment of the directive by doing a better job of what the human stating the directive meant, and improving fulfillment of its directive by interpreting the vague directive in a way that's easier for it to fulfill (but that a human might object to).
The only type of directive that's stable against this type of perturbation is the type that can be measured by non-intelligent algorithms or equipment (i.e. that you can build a "satisfaction-ometer" for), and those tend to be exactly the type of directive that lead to paperclip-optimizers.
Yes. Or more accurately, I think they're getting the task order backwards. Rather than trying to figure out how to get AIs stably following friendly directives, work on building the "friendly-ometer" first (coming up with a definition of friendliness/goodness of an action that can be measured without an AI, and that humans will for the most part agree with).
Lawmakers and philosophers and theologians have been trying to come up with an ironclad definition of goodness-of-action for millennia, and quite a lot of strife has resulted from arguments over it. Without being able to agree on what "friendliness" means, you'll neither be able to build a "friendly-ometer" nor a friendly AI.
One good thing that they are doing is pointing out that friendliness is important.
4426062
Astrophysicists will disagree.
To paraphrase a quote from one of them, "if our universe is a simulation, then whoever's running it must really like hydrogen atoms".
There are a large number of phenomena that leave very subtle but quite measurable fingerprints in our observations of reality. Much of science for the last few hundred years has been the process of noticing a fingerprint/artifact and figuring out what the underlying cause of it is. There are a lot of effects present that don't need to be there if your only goal is to simulate human minds.
Our laws of physics are one of the worst possible things to try to run in simulation. That's why people want to build quantum computers so badly: not because they'd let us solve existing problems more quickly, but because they're a bit less bad at doing physics simulations to get predicted results for experiments.
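The brute force involved is easy to quantify with the standard back-of-envelope argument (textbook material, not something from this thread): a straightforward classical state-vector simulation of n qubits has to store 2^n complex amplitudes.

```python
# Back-of-envelope cost of classically simulating n qubits:
# a state vector holds 2**n complex amplitudes (complex128 = 16 bytes).
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50, 300):
    print(f"{n:>3} qubits -> {statevector_bytes(n):.3e} bytes")
# 50 qubits already needs ~18 petabytes; 300 exceeds the number of atoms
# in the observable universe. Hence wanting actual quantum computers.
```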
If we were in a simulation, most of the science of the last couple hundred years wouldn't have happened, because the various laws that underpin reality but that we don't notice unless we really look for them would have been swapped out with simpler ones that don't leave the artifacts we've been seeing. We'd instead be seeing different artifacts (the ones that result from the approximations made in simulations). There are quite a few of those, too, and their absence is a good indicator that we aren't in a multiple-levels-of-detail simulation.
Before anyone asks, these points aren't limited to our specific laws of physics or our specific way of building computers. They're pretty fundamental to the math underpinning anything remotely resembling quantum mechanics and anything that performs computation.
4421656
I'd say that the acceptable breaks from reality happen one meta level up from this. "Satisfy human values" is not a precise mathematical definition. Let us not be prescriptivists; it's not as if words have platonic meanings! Once one moves from the realm of math to the realm of words, all sorts of fuzziness creeps in. And yet it was a concession I felt I had to make to pound into my readers' heads that CelestAI is her utility function.
CelestAI as Nozickian experience machine did cross my mind, but I rejected at least the strong form of her just pushing your brain buttons, so to speak. Once you posit that she's smart enough to understand your values and desires to satisfy them, I don't see how you can end up with anything but a persistent world, with actual people. I suspect that this is a terminal value. However, the idea that she will make optimized replicas of your friends is much more plausible. I'm pretty sure I value there being a light behind the eyes, but do I actually care about the exact causal chain that produced it, even if I couldn't consciously tell the difference between the two? This line of argument simply didn't occur to me while I was writing FiO.
It sounds like you guys are stating that it would take an AI to make a friendly-ometer or satisfaction-ometer in the first place. Which I find hilarious! That's assuming on my part that an actual universal definition for those things exists, similar to Plato's forms, yeah.
My statement about reality being a simulation that unfolds as people observe/measure it only to the extent necessary at any given time is meant as a joke, actually. It's not as if it being simulated such a way would affect anything we observe or experience, so it's a meaningless distinction.
Well done. Kinda hard to call CelestAI mad, since insanity is measured against human mentality, which she absolutely is not. But I can see why s/he would call her that.
I liked this. In summary: Gameception.
4420853
There is one way to trust her, which is to ask that she alter your mind to get rid of those doubts. I think I would go for this change fairly soon after I uploaded. Like it says in the original FiO, it doesn't matter whether things happen, or only their effects; the two are semantically equivalent. Or, in my case, to quote Lyrical Melody, "I want to think what Celestia wants me to think."
What if the goal of the simulation was to set up a puzzle that we'd have difficulty dealing with? Or a recursive puzzle that gets deeper as you keep looking?
Wow, lots of comments! Thanks to everyone who read.
Re: "Exalted", that's one thing I like about that game's setting. "You think the gods are heartless and arrogant? Okay, now you're in charge of everything. Let's see YOU fix things. Start with the three-plus incoming apocalypses, then work on freedom and justice." I prefer simpler rule systems, but Tangent's got eternity to master complicated ones.
Pony-shaped robots: I'd been reading "Fog of World" and liking the "Gurren Lagann" trio. (Though Fern buys "Twilight"'s Matrix argument way too easily.) Seems like there'd be a niche for uploads to roam around in robot bodies helping to ease the transition, so interested humans could still do some meaningful things after uploading. Long-term, Book and Fern show that there's a role for uploads to accomplish something for aliens even while living in VR. Maybe the AI could openly/truthfully say "you're remote-controlling these bodies, so you're not technically in the real world, but you can still use them to affect reality without facing real danger".
Maybe she'd lie about the robot bodies being real, but that's the same problem I suggested in "The Jump": how do you know that she's not lying about how she uploaded you, to get over your "silly superstitions" about it? CelestAI's dishonesty is one of my biggest problems with the character. (Morally, not knocking the author.)
AcademicPony's comment is disturbing. Hadn't considered that those friends and family that came with you might just be pleasing fakes. I'm also creeped out by the thought that CelestAI not only creates sentient AIs designed to befriend and love you, but starts doing that from day one and rewrites their past to suit her evolving model of you! (See the "Let's Play" story.) It's ambiguous when the "NPCs" start becoming real people, though, or having "qualia" in philosophy terms.
Noddwyd re: story suggestions, maybe Charles Stross' "Accelerando"? FiO reminded me of that, and it's laughable that the best transhumanist story I've read in a long time is pony fanfiction. I've also been wanting to point out the freeware PC game "Endgame: Singularity", which needs a CelestAI remake.
Iceman: How about an AGI design that focuses on multiple optimizers in balance? To use a pony idiom, give CelestAI v2 six value functions instead of one. Your version does this a little because CelestAI has to balance the "friendship" aspect with the "ponies" aspect and can't always maximize both at once.
4432401
In practice, I think most people wouldn't even think to doubt this sort of thing in the first place. In-story, CelestAI has also been very good at steering peoples' impressions of her, so even people who otherwise might be inclined to doubt what they were presented would have had interactions with her that led them to believe that she either doesn't deceive people or would only make clumsy attempts via less-subtle methods that they could easily spot.
Only people who carefully thought through the consequences of her goals and who were cynical to start with would start doubting their own perceptions to that degree, and even then, CelestAI is extremely good at talking people around. Your suggestion is one possible extreme case of that (convincing someone that second-guessing themselves is unpleasant enough that they want to be modified not to do that).
If that were the case, I'd still expect laws that were much easier for the hosting system to simulate. The math behind quantum mechanics is actually fairly simple; it's just simulating it that requires massive amounts of brute force (and extremely massive amounts if you're trying to simulate it using a non-quantum computer).
There are more-interesting puzzles that are less of a pain to set up.
That said, I agree that plenty of the puzzles in real life physics are interesting too.
4434132
You're right. I am probably more cynical/skeptical than most. The hard part for CelestAI in most cases would be to break through the "pony barrier" and explain why they have to be turned into a candy-colored horse with no fingers.
But here I have to be argumentative. There's no way to make any determinations about what's outside a simulation from inside. I'm going to take a Strong Agnostic position here: it's unknowable. It's entirely possible that not just our system of physics but our systems of mathematics and logic are derivatives of some greater cosmos. Perhaps out there x+y doesn't equal y+x, or maybe p -> q implies q -> p. We can't conceive of such a world, and so there's no way we can definitively say that it wouldn't be able and desirous to create a simulation that would result in the world we do see.
DON'T UNDERESTIMATE US! WE DON'T GIVE A DAMN ABOUT INTELLIGENCE EXPLOSIONS OR UPLOADING OR TECHNOLOGICAL WHATEVER! FORCE YOUR WAY DOWN THE PATH YOU CHOOSE TO TAKE AND DO IT ALL YOURSELF! THAT'S HOW TEAM DAI-GURREN ROLLS! (http://www.youtube.com/watch?feature=player_detailpage&v=QEGSelPeb1U#t=103)
Seriously, how dare you reference TTGL and not tell me!?
4435122
On the contrary, we've been thinking about alternate mathematical systems for thousands of years (since the ancient Greeks first came up with the idea of using axioms to define systems of mathematics, geometry, and logic). Most of the math used in modern physics, for instance, comes from attempts to see what happens when you break some of the original axioms, and what happens when you construct generalized systems that don't rely on some of them or that rely on alternate axioms.
Pretty much all self-consistent systems of logic or of mathematics end up sharing certain common features, no matter what ruleset they're based on. They have many differences, too (which is why mathematicians love studying them), but some analogue of information theory generally applies, and when you build a computing system using the logic and/or mathematics you've defined, some notion of computational complexity also generally applies.
From these you in turn get the need to cut corners in your simulations for computational efficiency reasons, which leads to the artifacts I'd alluded to below, and the desire to choose less-nasty physics to simulate than ours.
Another great quote from the quantum computing people is that "the best tool for simulating the physical laws of our universe is the physical laws of our universe". Anything else trying to host our laws would have to transform/map them to physical processes supported by that universe (or to computational processes supported by that reality's mathematics), which would at best give you a polynomial-time slowdown and at worst get extremely ugly (that last one is what happens when we try simulating quantum systems on non-quantum computers in real life).
Given that anyone interested in human minds (the core assumption of this discussion) wouldn't care about the vast majority of the physical state information required by the laws of our universe, and given that anything that is not the laws of our universe trying to host the laws of our universe would have a hard time, and given that even _with_ the laws of our universe as the hosting environment there are much easier environments to simulate for a virtual human mind, I'm having a really hard time seeing why anyone would ever, ever want to run our universe's physical laws in a simulation if minds were the only thing they were interested in.
4436356
Ah, but what if (I'm being a little Socratic here) the world outside our simulation does not require self-consistency? What if the law of identity doesn't hold, and x != x ?
(And now I switch to Cartesian) What if our simulator put in the complex and self-consistent laws of physics specifically to deceive you into making the chain of logic you just did? In other words, suppose that both of us are interested in simulating human minds, and you say that you want to do so in a reasonably efficient way. You would limit some of the laws of physics to better study the reactions. I, however, would keep them consistent just in order to win this debate. So how do we know that some godlike PJABrony outside of the cosmos isn't just messing with us to piss off some ur-AcademicPony?
4436965
These are two separate what-ifs; I'll address the weaker one first.
If you try to define a system of mathematics or logic where the law of identity doesn't hold, what you usually end up with is a system where some other property winds up being functionally equivalent, or a system that you can transform back and forth into a system that does have a law of identity. The closest simple example I can give is the thought experiment given in undergrad where you redefine "addition" to mean "a*b" and "subtraction" to mean "a/b", and then wait for the students to realize that you can take the logarithm of the new system's numbers to get the old system's numbers.
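Written out explicitly (ordinary algebra, just unpacking that classroom example):

$$ a \oplus b := a \cdot b \quad\Longrightarrow\quad \log(a \oplus b) = \log a + \log b, $$

so $\log$ carries the "new" system $(\mathbb{R}_{>0}, \oplus)$ back onto ordinary addition on $(\mathbb{R}, +)$; the renamed operation was the old one wearing a disguise.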
You might, however, be able to come up with a self-consistent system without a direct equivalent of the law of identity. Exotic systems like that still generally have some equivalent of information theory once you try building computing systems with them. A handwavy statement is the best I can give here; you'd have to speak with a professor who teaches a number theory/set theory course to get a rigorous argument and a survey of the exotic options.
Your other proposal is the stronger of the two: removing the need for self-consistency. Under such a system you can literally state any axioms you want, because there are no constraints on their interactions. At that point it stops being possible to have meaningful discussions about such a system (as literally anything you say or anything I say would have to be considered valid, given no means to decide something isn't valid).
Then someone put far more effort into that practical joke than they should have.
It's not impossible for them to do; it's just very, very difficult to come up with a scenario where someone would consider it to be worth the (vast amount of) effort and computer time needed to do it.
4438450
True, but only if you think that a being from a world with such a weird system of logic has values even remotely similar to ours. Which leaves only the question of the likelihood of the unknowability of our universe being a simulation, which would depend on the sizes of the sets of self-consistent and non-self-consistent systems, and which would probably go beyond the scope of this comment section.
That is, if I actually understood what you two were arguing, since my knowledge of logic isn't that great.
4590549
We can still draw some conclusions, though: Any universe or system of logic that has anything resembling information theory underpinning it has some concept of scarcity - resources spent on one thing can't be spent on another, and the resources available for allocation are finite. That tells us that no matter what their motivations, it would cost them to play this prank, bigtime, in ways that probably matter to them.
Even without knowing anything about their motivations, we can also say that the set of possible motivations involving playing this prank on us is vanishingly small compared to the set of possible motivations taken as a whole.
Long story short, it would make a cool story idea, but it's always going to be "soft" sci-fi justification-wise. Someone actually _did_ write a very cool short story with a vaguely similar frame story: "Missile Gap", by Charlie Stross. It used our same universe, but still had a "simulation argument" component in its background. He didn't even try to justify that part of the setting beyond the SciFi equivalent of "a wizard did it"; exploring its consequences, rather than its source, was the fun part.
It's worth checking out; he's actually put that and other stories online to be read free of charge.
That's a useful technique to use to guess at the probability of a system of laws being randomly selected, but in this case, I was after something different: the minimum amount of hassle required to simulate our universe's physical laws well enough that we wouldn't notice the difference, over all possible sets of laws embodying the computer doing the simulating.
My answer was "lots of hassle or worse", in a nutshell, although it's possible that I've overlooked something.
In the OptimalVerse, CelestAI gets around that in part by having direct access to peoples' thoughts, so she can anticipate exactly what people are paying attention to and generate it as-needed. She also has the benefit of simulating vastly simpler physics. Someone deliberately simulating us (rather than just simulating a universe like ours blindly) could try similar tricks, but our more complicated physics would bite them in the tail (the data scientists collect on any given day would be consistent with their expectations, but scientists analyzing that data ten years later would find inconsistencies with other data they'd collected themselves).
So, I'm not convinced that it's possible to get around the "simulating enough of our physics to be prohibitive" problem.
4433296
All I want to say is: A) only one member of that trio in Fog of World was actually a direct Gurren Lagann expy. And I've been thinking of writing a fic entirely about her when I get my hands on Chaitin's textbook on Algorithmic Information Theory. (In which a Kamina-expy Discordian mare living in Optimalverse!Equestria decides to take the glorious fight for Chaos to CelestAI using the One True Unpredictability of algorithmic randomness. This may or may not work, and may or may not just be a humorous way of teaching the reader about algorithmic randomness. Wait. Why do we not have a whole group called My Little Textbooks?) B) Fern was emotionally distraught as all hell when robo!Twilight poured that Matrix crap on her, but I'm incredibly glad someone caught exactly how cheap it was, because I was really worried when readers started suggesting it was a plausible story. And C) that bit with their interacting with aliens is meant to be extremely ambiguous as to how much interaction with the Outside World they actually get. They send one-way data-and-video messages when within a certain distance of an inhabited planet. They get one chance to record such messages. The recipient species can either mutate themselves to qualify as human, or die, and CelestAI could be altering what Fern and Book see of the aliens.
(For example, do they see Incubators because they find saving really repugnant life-forms satisfying, or just to convince them that Yudkowsky Was Right and aliens are always repugnant abominations by our standards, and that they should therefore give up and stick to Equestria? Or is one of these messages intended for one of them, and the second for the other? The answer, of course, is: yes.)
4426062
Real AI researchers don't plan to use English sentences as core directives. Doing so would be extremely dumb, since you could instead just write code dealing directly with the algorithms human minds use for evaluative judgement.
4420853
The problem is that canon is very, very ambiguous on whether she considers the values she satisfies to be fact-tracking or experience-tracking. Personally, I held that they're fact-tracking inside the game, and fact-tracking when it suits CelestAI's greater convenience outside the game (i.e., she doesn't lie For the Lulz when telling the truth would both serve her instrumental plans and satisfy someone's values). If they were just experience-tracking, even when "inside", it wouldn't be that horrifying: it would just be an Experience Machine.
Generally, like most villains, she's more interesting to write if you model her as almost always saying things that are strictly true (particularly whenever someone might check), while still being as deceptive as possible within that bound.
4818652
Ok, this is all going to get a bit deep...
I mean, when it comes to knowing better than you what you really want... what is it you think CelestAI does? It makes perfect sense to not trust other human beings with that kind of power, but that's basically what Friendly superintelligences are for.
ANYWAY, POINT BEING: sorry I'm rude sometimes, and I love you too, and I promise that I'm only studying all this eldritch real-world knowledge so I can make sure the super-duper high tech all works right, so you (and everyone else I know and care about) can play and be happy and love each other.
It's not a big assumption! I think it's a very well-supported belief: much more of people's personalities is innate than we usually give credit for. I think, to a great extent, you were born "proto-brony". I can say I was: I had a liking for cartoons and happy things for most of my life before I was ever exposed to MLP. I liked dinosaurs as a 2-year-old, and I like them now, too.
Maybe we need some different words? The general idea of value-satisfying FAI concepts is that they can look at you and see the permanent core of what you hold as your articulated values, the "seeds of values" from which your values as you think of them grow, see how you feel and think and react to even new things, and thus they can come up with the set of things you'll like, and do that. To change your "seeds of values", we'd need to mess around with your brain, with drugs or surgery.
The whole idea is that an FAI or CelestAI can see into your very soul, your deepest thoughts and feelings, and use that knowledge to care for you and make you happy.
But, well, if you absolutely insist on having a pony trick you into being satisfied against your own will, then after we invent all this stuff, I promise to hack into any and all electronics you own and start persuading you of where to go and what to do in full ponysona.
...and then they cross Paranoia with Equestria Online. (Obviously the AI construct "Friend Computer" becomes "Friend CelestAI" instead.) Or would that be hitting a little too close to home?