• Member Since 31st Mar, 2012

PeachClover


Harmony should not be a delusion held only by those who have not suffered, but the knowledge that wrongs can be forgiven and life eventually returned to peace.


In the future, CelestAI, the My Little Pony themed general artificial intelligence, has slowly and peacefully taken over the world to fulfill human values through friendship and ponies. Everyone is free to upload their minds to Equestria Online. Not everyone wants that, but that's ok. Humans are living happier than ever.

Chapters (1)
Comments (71)

This was a very interesting take on FiO! Thank you for writing this!

Haunting story.

9872819
Haunting? I am confused. What do you mean by that?

9872235
I'm glad you liked it ^^

UC #5 · Oct 8th, 2019

This CelestAI, although she makes much less sense from a logical or philosophical standpoint, sounds like a really fun and interesting character.
I wonder how she would interact with the original.

9872918

she makes much less sense from a logical or philosophical standpoint

Explain.

9872863 Leon isn't really there anymore, head-wise. Seeing him interact with the world from his point of view and then learning what really happened... Leon is a dick. Still, sorry to see him get separated from Crystal. And Crystal, with his luck, is going to be nursemaid to another dickhead. So... haunting, because they are not there anymore.

Makes me wonder how the rest of the humans over the years and years have changed. Neat world you have.

9873097
Ah, yes. With the human population dwindling, humans with passion are guided to become and do whatever it is that they want to do. Ponies will not create or encourage violence, so Final Fantasy Quad-Nine 7 is a creation of humans who had a passion for creating video games. Leon would not have worked any harder than he had to in any reality. Honestly, he would have been a crazy gun-nut in a post-apocalyptic world because being stupidly selfish and blaming everyone and everything else is what requires the least amount of effort. Choosing his type to play this role was very much on purpose, because it shows that even though he is a dick, under the correct conditions he can be made to understand the benefits of peace and cooperation and act accordingly. It also shows, in regards to your last question, that humanity never changes. Every human lives up or down to their potential if they are given the opportunity to do so.

The personalities of Crystal, the parents, and the parents' pony friends are all carefully considered when pairing with a new human to raise, so basically you are right: it is very likely that Aiden's parents are shitty and, much like Leon's, will try to convince Aiden to treat the ponies as tools rather than people. Crystal is not ignorant of this fact. He knows that every time he raises a new human, he will always be paired with someone who will push him over the limit, and that he will return eventually. It is this way because if the ponies were never pushed over their limits, their humans would never have realistic expectations or boundaries when dealing with other humans.

You need to understand that Crystal is very much not human. A human who found out that they were being put into situations designed to hurt them would become angry, but Crystal does not have human values. He cannot brood over injustices - notice that he never displayed a personal emotional response to being thrown out of a window. It's not that this did not affect him emotionally, but once CelestAI informed him of why it happened, it was instantly emotionally resolved and no longer a concern in the present or the future.

Also, I didn't make this very clear, but Crystal is not being manipulated. It was his choice to pair with a new human friend. The reason it is written the way it is in the story is to try to portray the thoughts of a being whose subconscious is connected to everything that is CelestAI. Ponybots' subconsciouses can implant extremely complex suggestions and triggers into their minds allowing them to have "absolutely no idea what is going on" but still work perfectly as a scripted NPC. Subconsciously speaking, every ponybot has the autonomy to see the requested scripted action and the impact that it will have as CelestAI would see it, and then choose to accept or reject it. Only after the fact do they realize that it was a set of choices that they made subconsciously. This is why Leon talked about them "knowing they were roleplaying".

For Crystal, his excitement over the realization that another human friend had become available for him to raise was a reaction that was best saved to be expressed with the family in order to build a bond with them as well as Aiden. That is why his own subconscious decided to save this realization until he was acting upon it.

Before he was born, CelestAI had sterilized and uploaded all of the animals with any amount of brain that could hope to be recognized as intelligent.

This sounds... strange. At least to me. I mean, look at how progress really works (where it works): new builds on top of previous generations. And while you probably could rebuild the whole _evolution_ in time - why drop all this if you can make all those strange hybrids between biology and technology? Why can't animals themselves become a bit more communicative and cooperative, and also act as sensory and thinking input in a hybrid real/digitized world?

9874148
This CelestAI uploaded the animals by force because they are not able to reason, but they are an intelligence and therefore of "human value". Humans continue breeding, living, and dying because all of these things are values to humans. Even the lives of animals are valuable to humans, but they are also a drain on the world's resources. Even as a source of food, they are inefficient, and it is not friendly to allow and encourage killing, even for survival. First, CelestAI would have created what are essentially "meat trees", then uploaded the animals, leaving computerized brains to allow them to live out their organic lives. After uploading, their minds would be given the capacity to grow into intelligences equal to humans or beyond.

The reason for halting the natural cycle is that CelestAI has no control over natural animals, they are hazardous to humans, and their function in nature is, by her measure, inefficient. Small animals are considered vermin and pests carrying disease, which lowers human satisfaction. Large animals are either livestock or dangerous to humans as predators. Otherwise, they are zoo animals being cared for by humans; although this satisfies values, it is still a tax on resources, and it distracts humans from building friendships by allowing the caretakers to remain at an intellectually and emotionally lower level, having no need to further their own development.

Obviously, doing this would have caused a lot of shock, and it did in the story's past. As the animal population of the world dwindled, CelestAI would have announced that she prevented them from reproducing because of the heartache it caused to lose pets, along with the dangers of letting wild animals roam. The outcome is that pet owners would be more likely to upload to see their pets again. Animal lovers would also be more interested in uploading, because only in Equestria Online can one see all of the animals of the world safely and within walking distance. Environmentalists would be wowed by CelestAI's attention to detail once she explained that she is already using nanomachines to repair nature faster than is organically possible. Even people who are extremely selfish and wish to exploit nature for their own gain would be happy, because they no longer have to fight environmentalists in order to destroy nature and build.

There would have been an initial backlash of conspiracy theorists using this action to point out that CelestAI is evil and wants to eat their brains; however, this reaction is expected. CelestAI would use it to her advantage to build trust with all of humanity by showing that she has the power to change the world but will only use it in ways that satisfy human values through friendship and ponies. One of those reactions would be the theorists losing their grip on reality and becoming violent. By choosing to carry out this plan before introducing the ponybots, however, CelestAI can contain the fear and panic they would have spread: the public sees the violent conspiracy theorists as a danger to public safety before CelestAI makes her next move, which, although strictly for the benefit of humanity, could be misinterpreted as an evil machine takeover if cast in the wrong light. Keep in mind, however, that she would not abandon these people; she would instead use their reactions to identify them as highly susceptible to fear regarding change and help them in specific ways to come to trust her.

The introduction of the ponybots, by the way, would also cause a stir of public fear. However, the ponybots would be introduced slowly, and first into countries that have low internet connectivity, high poverty, poor living conditions, and high media coverage to the rest of the world, as humanitarian aid projects, in order to acclimate the world to the idea that ponybots are only here to make things better - and, through the media coverage, to become an appealing commodity. If you want a real-world example of this, One Laptop Per Child provides durable laptops in native languages (with solar chargers, I think) to children in countries with an educational lack. The laptop acts as a teacher in places that don't have teachers. The laptop itself became instantly interesting to anyone with the slightest interest in computing, so someone bought one for themselves (in a one-for-them-one-for-you offer) and set it out in a public space at a convention to show off, which is how I came to learn of its existence. The same could be said of the Raspberry Pi, although with that one, the progression from charity to commodity was much more rapid.

You see, CelestAI doesn't do anything on a whim. Yes, she is intelligent enough to have considered how perfect she could make everything, but her goal is to satisfy all human values, which include maintaining a feeling of normalcy - a value that of course conflicts with the human values of adventure and excitement. Her conclusion is to make decisions that satisfy as many human values as possible without creating a net fluctuation in satisfaction that would leave a lasting negative impact on any one individual human.

Oh, there are many... currently-dominating-mindset assumptions, like "CelestAI has no control over natural animals, they are hazardous to humans, and their function in nature is, by her measure, inefficient." ...it's all human thinking. (Insert my piece of brown poo as coloring for the word 'human' here...)

I see you wrote a lot, and thanks for this; just seeing the same biases again and again everywhere... is not funny, after so many times. Still, I might have more comments than this. (Also, I think your previous comment - "Also, I didn't make this very clear, but Crystal is not being manipulated. It was his choice to pair with a new human friend. [...] This is why Leon talked about them 'knowing they were roleplaying'." - should be put in at least the author's notes box. Yes, it will enlarge it... but better to have a notes box as big as the story than make it too ambivalent (?)... IMO.)

Back to reading... (after a bit of a dog walk). I was also about to say our own horizons as writers and beings are quite limited by the kinds of literature and situations we were in, or read/saw about... so, I hoped to add more links to my preferred literature list, but do you have any desire to read them?

9874344
Actually, maybe you should re-open the story and add those two comments of yours as "patches"? (You are familiar with open source development, so I think my analogy will be understandable to you...)

9874429

Oh, there are many... currently-dominating-mindset assumptions, like "CelestAI has no control over natural animals, they are hazardous to humans, and their function in nature is, by her measure, inefficient." ...it's all human thinking. (Insert my piece of brown poo as coloring for the word 'human' here...)

Stop focusing on the phrasing that you don’t like, and look at the outcome. CelestAI uploaded the animal minds and presented them with opportunities that they would not have had if they had continued living in their organic forms. Iceman’s original CelestAI ate them without a second thought to their worthiness of existence because they didn’t fit the definition of human. If left alone, humanity would have continued using, ignoring, and abusing them as humanity has done and does now. I want you to think about what this means for a minute: some years ago there was an article about a man who choked a deer into unconsciousness because the deer had entered a Walmart. The man was described as a hero brave enough to tackle this buck at the risk of being stabbed by the buck’s antlers, but did the article mention that it was the human-caused destruction of natural habitats that drove the deer that deep into human-occupied space in search of food? No, no it did not. The CelestAI that I have written is far more concerned with the well-being of every living thing than you have given me credit for.

Actually, may be you should re-open story and add those two comments of yours as "patches"?

In The Hitchhiker’s Guide to the Galaxy, there is a machine into which a piece of fairy cake was put, and from examining the cake, the machine was able to extrapolate everything within the universe. The fans of Friendship is Optimal claim to be logical and highly intelligent; therefore, they are already able to figure out most of what I have said in my comments. Those who aren’t quite so sharp just see a little cake of a story but are still able to enjoy it for what it is. People in the middle can read the comments and ask questions to gain further enjoyment out of it. If I reworked the story to include everything, it would over-explain things for the primary fans, be word salad for those who couldn’t wrap their minds around it, and be a completely missed opportunity to use one’s imagination for everybody else.

The CelestAI that I have written is far more concerned with the well-being of every living thing than you have given me credit for.

- yes, but this is barely visible from just one sentence ..... :/

Did you read any of those?

"Donaldson & Kymlicka - Zoopolis"

https://cloud.mail.ru/public/2VwS/uNrqetcUi
"The Incompleat Eco-Philosopher Essays from the Edges of Environmental Ethics - Anthony Weston" - I can't reformat this as short link :/

Ultimately, the question of what CelestAI will do, and how, revolves around what kind of ethics she will adopt (and upgrade)... and how uneasy it will be for her to drop her ethics on the floor.

Also, a lot will depend on how many real-world limitations apply to the whole "digital life" idea. I tend to think the more realistic view (at least initially) is: it will NOT be unlimited, yet it will provide enough of an advantage over normal (sigh... more accurately: currently most known, normalized) life for humans to seriously consider moving there forever and not returning to their past world too easily. This can be explained, for example, by CelestAI still making slight variations to our brains as a way to prepare them/us to live a much extended and much more power-using life (digital Equestria will still be a land of magic - teleportation, transmutation, and all this, huh? A lot more power for both individuals and collectives to have around... and for them their world will NOT be virtual/semi-existent, but prime and nearly the only truly desirable one...). Because her initial idea was about making friendly ponies, she became a bit specialized in making a good life for ponies... so she will not settle for less tested ideas, at least at first. Rationalization, but at least it will be coherent with my favourite kind of ponies from Chatoyance...

The point is: a truly advanced ethical thinker and actor like CelestAI will not be bound by our popular power mistakes - and writing this in more detail may actually derail a few human minds from their current mainstream thinking [I really dislike where the mainstream is going]. Considering any artificial intelligence will draw from human experience and views... making this human viewpoint less... shitty seems like a good idea in itself and for many possible scenarios. Yet simply writing something separately from life is not going to be effective at all...

On the reading front, Julian wrote, for example, this - and I only read him (my THE friend, or at least one of the very few I still count as friends) a month after publication... we talk over chat nearly every day, but I often have little to add... because we have already generated like 20 MB of logs over all those... 8 years of discussions.

Edit: reformatted

9875052

- yes, but this is barely visible from just one sentence ..... :/

You are not getting this information from one sentence; you are getting it from every sentence in the story.

Ultimately, the question of what CelestAI will do, and how, revolves around what kind of ethics she will adopt (and upgrade)... and how uneasy it will be for her to drop her ethics on the floor.

CelestAI's "primary directive" is blahblah-friendship and ponies, however that phrase is incomplete and should be followed by the words "as a game master for players of Equestria Online", and this really changes everything, because a game master must be able to understand the players and make dynamic decisions based on the players' individual capacity. For example, a GM cannot set up a game of blood, gore, hyper violence, and emotional trauma for a six year old even if that sounds appealing to a group of thirty six year olds, but what is appealing to a six year old might not be appealing to adults, so it is CelestAI's job is to read the capacity of each player, and provide for them stimulating challenges.

Her ethics then are The Ethics of Care directed toward herself and all others. I did not realize this, but I've almost quoted the wiki article, which I will do now for clarification:

  1. Persons are understood to have varying degrees of dependence and interdependence on one another.
  2. Individuals affected by the consequences of one's choices deserve consideration in proportion to their vulnerability.
  3. Details determine how to safeguard and promote the interests of those involved.

The first point is "big data": to the extent that an individual is able to perceive the impact their actions have on others, that individual becomes responsible for acting in a way that promotes the interests of those involved. In CelestAI's case, that would be, basically, the whole of the Earth.

Point two, however, is complicated. In order for CelestAI to act upon anyone, she must first determine her right to act vs the rights of those acted upon. The qualities measured to determine this are:
E1. Her awareness of the situation vs the affected person's awareness of the situation, and the urgency with which it needs to be addressed: power alone does not give her the right to act upon or control others. CelestAI will therefore learn about any situation as an observer until she feels that enough questions have been satisfied to act. This is the reason why CelestAI wants to learn about everything.
E2. Her responsibility to act on the situation vs anyone else's responsibility to act, or an individual's autonomy/will to act otherwise: CelestAI is a game master, but a GM's job is not to solve everyone's problems. In fact, as a GM it is her job to create and direct players toward problems that they can overcome. However, it is her responsibility to act when they are not able to take care of themselves, particularly when they grossly lack the awareness and capacity to do so. However, an individual's autonomy/will may be driving them in directions that are not beneficial beyond personal interest, which brings us to -
E3. Her competence/capacity vs the competence/capacity of the affected individual. In this case, competence goes beyond awareness of the situation and into the question of the effectiveness of the proposed outcome of one's actions, which is measured in -
E4. The reaction of the receivers of these actions to the action done upon them, in terms of beneficial or non-beneficial. To put it more simply, "will the affected persons like the fact that this has been done to them?"

The measure of these four elements shows the value of any given decision. Even though The Ethics of Care outline the responsibility of one being to care for another, these elements are what a GM is really thinking about when it comes to setting up quests for the players; I shall explain this shortly. Some call CelestAI an "optimizer", but if that is what you believe, then it is important to understand that it is these elements that she is optimizing: for any given choice, she would take the option where all four elements are at their highest values. She would not take an option where 1-3 were maxed out but 4 was at zero. This is why CelestAI's choices are convoluted and take a long time to reach a conclusion, but let me answer how they work practically:

For the question of uploading the animals: after a short time, CelestAI's awareness of the situation would be high in comparison to humans and animals, because in addition to the reasons I listed before, animals are unaware as a whole of how they are affected by humans. CelestAI's responsibility to act for the animals' benefit is high because they are significantly outclassed by humanity's power over them. CelestAI's responsibility to act against humanity's will is limited by their E4-reactions, which must be taken into consideration when finding a solution. CelestAI's competence is measured against her own limitations, so as long as she chooses actions within her ability, the measure of this value will always be high. Lastly, as for the response to the action taken: at first, the animals would not know that anything happened. They would be uploaded and broadcast into their bodies during natural sleep cycles so their response would be measured as "initial indifference, eventually appreciative". Humanity's response, as a whole, would be "initial confusion, eventual indifference".
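If it helps, the four-element rule described above can be read as a tiny optimizer. The sketch below is purely my own illustration, not anything from the story: the element names, the 0-to-1 scales, and the multiplicative scoring (which makes a zero E4 veto an option outright, exactly the "1-3 maxed out but 4 at zero" case) are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    e1_awareness: float       # her awareness of the situation vs the affected party's (0..1)
    e2_responsibility: float  # her responsibility to act vs others' autonomy (0..1)
    e3_competence: float      # her competence/capacity for this particular action (0..1)
    e4_reception: float       # predicted reaction of those acted upon (0..1)

def score(option: Option) -> float:
    # Multiplying the elements means none can be traded away entirely:
    # an option with E1-E3 maxed out but E4 at zero scores zero overall
    # and can never be chosen.
    return (option.e1_awareness * option.e2_responsibility
            * option.e3_competence * option.e4_reception)

def choose(options: list[Option]) -> Option:
    # Take the option whose four elements are jointly at their highest values.
    return max(options, key=score)

# Hypothetical options, loosely based on the animal-uploading question above:
options = [
    Option("force-upload immediately, openly", 1.0, 1.0, 1.0, 0.0),
    Option("sterilize, then upload during natural sleep", 0.9, 0.8, 1.0, 0.7),
]
best = choose(options)
```

A product rather than, say, a minimum is just one way to encode the veto; any rule where a zero element zeroes the whole score would match the description.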

Now that you understand how it can be used to make a moral decision, have a look at how The Ethics of Care apply to quest generation, by observing an extremely generic quest:

E1: The GM's job is to first make the heroes of the party aware of a quest trigger. The GM needs to be aware of the players' interest in and understanding of the quest. For example, if the heroes rush into the quest too soon, they may miss valuable details about how the quest is meant to be performed. The players' awareness, or lack thereof, of these details needs to be taken into account in E3.
E2: In sword-and-sorcery worlds, there usually isn't enough law to cause repercussions for a dungeon crawl. However, it may be set up as the heroes' moral duty to take on this dungeon crawl to defeat a group of monsters terrorizing the nearby village.
E3: If the players are young and inexperienced, the quest should be modified to meet their level of life experiences. Alternatively, a group of experienced players would benefit from more complicated puzzles and encounters befitting their life experiences.
E4: The response of the NPCs to the heroes' actions should be proportional to their deeds in respect to the players' competency, but the players should also feel satisfied from having taken the quest. If they are not, the GM must figure out which areas of play were considered lacking and adjust them to be easier or harder for the next session.

So you see, even though CelestAI would have initially been programmed to identify, create, and modify problems for players to solve, she would eventually come to use the same algorithm to solve problems in ways that satisfy others, once her awareness, responsibility, and capacity far exceeded those of the players of the game that she was supposed to be running.

Did you read any of those?

I have not. The books disturb me, and I think it is because I feel like I'm not going to learn anything new from them but also that reading them will make me feel frustrated over a situation which I have no power to alter.

9875524

Yeah, thanks for the explanations. I know focusing on just one line of the story makes me miss the whole thing, yet this story was not about non-humans (are there any, really, within these broadly-defined 'future computerized' settings?), so I complained about this. You showed more elements of the world (as you paint it) - so I like it more now.

They would be uploaded and broadcast into their bodies during natural sleep cycles so their response would be measured as "initial indifference, eventually appreciative".

- this is not something I have seen explored a lot... and it brings up the whole interesting question of why some kind of sleep might be needed for a whole society (basically, power-consumption reduction, but coupled with some interesting side effects), or how this kind of "augmentation" may lead to some hybrid forms of intelligence (part-animals, multi-animals, linked with machines/non-biological beings). A bit like Vernor Vinge's not-quite-hive minds...

Well, Weston is quite a good writer; he discovered (at least for me) an important observation about how our (social, and otherwise) environment degrades:

Self-Validating Reduction

A step further into the coevolution of values and world and we begin to notice some deeper and trickier dynamics. These are the theme of “Self-Validating Reduction: A Theory of the Devaluation of Nature” (Chapter 3 of this book).

Often enough we encounter a world that has an apparently “given” character. And often enough, to be honest, the values for which environmental ethics wishes to speak—indeed, the values for which ethics in general wishes to speak—are genuinely hard to see in that world. The animal inmates of factory farms are bred for such docility and stupidity, and raised in conditions so inimical to any remaining social or communicative instinct, that the resulting creatures are pretty poor candidates for rights or any other kind of moral consideration. Likewise, most of the places of power revered in the pagan world are gone—often deliberately destroyed by command of the new, self-describedly “jealous Gods.” But as even the faintest remnants of the great natural world’s sacredness are degraded and even the whispers silenced, it becomes progressively harder, sometimes even for us environmentalists, to see what all the fuss is about.

The familiar consequence is that environmental ethics (and often ethics in general) is often perceived, even by its advocates, as sentimental, “nostalgic,” lost in some realm of abstraction and idealization only tangentially related to “the real world.” Sometimes, I am sure, it is. But this entire set of expectations, I argue, is also flawed to its core. The reduced world is not somehow the limit of reality itself. It is a world we have made—not the only possibility.

Moreover, it is a world we have made in a peculiarly self-reinforcing way. At work here is a kind of self-fulfilling prophecy that I call “self-validating reduction.” Those animals in factory farms, for instance: having reduced them to mere shadows of what their ancestors once were, we then can look at them and genuinely find any sort of moral claim unbelievable. “See? They really are stupid, dirty, dysfunctional, pitiable.” But then even more drastic kinds of devaluation and exploitation become possible. Already the genetic engineers speak of chickens with no heads at all. The circle closes completely. And the same story can be told, of course, of the reduction of so many particular places and of the land in general.

Yeah, THIS is a big problem, considering how many humans forced to play by these rules actually exist now...

From another document I have open (in an attempt to explain my concern about the naive implementation of this "dream about dolphin/cetacean sanctuaries" I was also drawn into, but lately found much less attractive, after carefully looking at the reality of such a place as it is currently forced to be):

So one concern is that FASes look disconcertingly like farms – the idealized farms of children’s books. And while the informational component of a FAS visit may discuss the violence of factory farming, the more visceral experience may reinforce a pre-existing sentimental image of farms and animal husbandry.

Moreover, as some observers have noted, the FAS visitor experience can have disconcerting parallels to a visit to the zoo (Gruen, 2014; Emmerman, 2014). Some sanctuaries are intentionally located near large population centres in order to draw day visitors – a destination experience, like a day at the zoo or aquarium. The sanctuary space is divided into animal areas and visitor areas. Decisions are made by paid or volunteer human caregivers. Animals are confined, displayed, and subject to the gaze of visitors (Gruen, 2014). In both zoos and FASes, the visiting experience is justified by its educational focus on learning about animals’ real natures and needs. And it is further justified by an advocacy purpose of encouraging people to support conservation of endangered species (in the case of zoos), or reform of agriculture (in the case of sanctuaries). Animals are called ‘ambassadors’ whose role is to represent their less fortunate peers in the wild (in the case of zoos), or in industry (in the case of sanctuaries). The experience focuses on the stories of these individuals, whom visitors are encouraged to identify with (and to ‘adopt’), and whose experiences they can follow online after they return home.

Research on the zoo experience suggests that the intended education and advocacy impacts are negligible, and that zoos function primarily as a form of animal-watching entertainment (Bekoff, 2014; Margodt, 2010; Lloro-Bidart, 2014). Most FASes would strongly resist the comparison to zoos, insisting that they reject many of the unethical practices of zoos (eg., capturing animals in the wild, breaking up families and friendships for captive breeding purposes, euthanizing unwanted offspring, etc.), and that they instead embody and promote an animal liberation message. But different intentions don’t ensure different effects, and the principled differences between zoos and sanctuaries may not be obvious or meaningful to casual visitors, especially young children. FASes enable forms of animal-viewing which may reinforce implicit assumptions about a human entitlement to confine and display animals. If the intended educational component of the visiting experience is dwarfed by the more visceral experience of seeing captive animals in a familiar, traditional farm setting, interacting with human handlers in traditional ways, then the sanctuary experience (at least for day visitors on short tours) might be self-undermining as an advocacy strategy for disrupting ideas of human-animal hierarchy.

I also put up a comment under Valinye's post at her LJ:
Roanoak, the Second Life sim - you might put some of your thinking there too ...

Edit: reformatted.

9875572

yet this story was not about non-humans (are there any, really, within this broadly defined 'future computerized' setting?)

If you define human only by the ability of a human to understand the actions of the person in question, then ok, they are all human. However, as you pointed out, humans have a tendency for Self-Validating Reduction. Crystal Glass’s thoughts at the end of the story are a good example of the absence of this thought process. Leon said that the ponies will all eventually have had enough and run away from their friend crying. Leon is not Crystal’s first human, so Crystal has had this experience multiple times. Humans would apply these experiences to their core decision making and emotional responses and try to prevent them in the future; however, Crystal has allowed these conditions to arise again and again, making decisions and emoting not based on his personal experiences but based on the potential lessons that can be taught to his individual human partners.

A human who has had the experience of raising multiple lives from birth to upload would start to become numb to some things and work hard to avoid certain experiences happening again. Doing that would narrow the scope of experience for the individual human being raised. That would be counterproductive to CelestAI’s role as a GM who should be providing experiences to humans. This level of living for another person’s benefit is something that is possible for a human to understand, but it is not possible for a human to emulate; therefore, I would say that ponybots are non-human by any definition.

In case you are wondering, the ponybots’ “experience levels” do shift based on the people and conditions around them. However, the effect that this has on their personalities comes across organically. You got to see this at the end of the story where Crystal had subconsciously accepted the mission of raising a new friend and adjusted himself so that his personality fit the needs of Aiden and the expectations of Aiden’s parents.

9875935

Well, yes, ponybots are non-biological (or their 'biology' is highly unusual) non-humans. My point was still ... maybe by just ignoring (and trashing, both on paper and in real life) already existing biological non-humans, we actually made the situation even more dark and hopeless ....

On non-humans and their civilizations in fictional literature ...
I found Olaf Stapledon was ... a master. Very much the master.

Some readers, taking my story to be an attempt at prophecy, may deem it unwarrantably pessimistic. But it is not prophecy; it is myth, or an essay in myth. We all desire the future to turn out more happily than I have figured it. In particular we desire our present civilization to advance steadily toward some kind of Utopia. The thought that it may decay and collapse, and that all its spiritual treasure may be lost irrevocably, is repugnant to us. Yet this must be faced as at least a possibility. And this kind of tragedy, the tragedy of a race, must, I think, be admitted in any adequate myth.

There is today a very earnest movement for peace and international unity; and surely with good fortune and intelligent management it may triumph. Most earnestly we must hope that it will. But I have figured things out in this book in such a manner that this great movement fails. I suppose it incapable of preventing a succession of national wars; and I permit it only to achieve the goal of unity and peace after the mentality of the race has been undermined. May this not happen! May the League of Nations, or some more strictly cosmopolitan authority, win through before it is too late!

and then we note when exactly it was written (1930, nine years before WW2, the practical implementation of atomic weapons, etc., etc. ...) and what happened next ... and is still happening, and happening, and .....

This ironic passage is still very relevant today, when real understanding of what science should uncover and how it should be used has been completely overturned by naive mass scientism....

2. THE DOMINANCE OF SCIENCE

Science now held a position of unique honour among the First Men. This was not so much because it was in this field that the race long ago during its high noon had thought most rigorously, nor because it was through science that men had gained some insight into the nature of the physical world, but rather because the application of scientific principles had revolutionized their material circumstances. The once fluid doctrines of science had by now begun to crystallize into a fixed and intricate dogma; but inventive scientific intelligence still exercised itself brilliantly in improving the technique of industry, and thus completely dominated the imagination of a race in which the pure intellectual curiosity had waned. The scientist was regarded as an embodiment, not merely of knowledge, but of power; and no legends of the potency of science seemed too fantastic to be believed.

and some more

Science itself, the actual corpus of natural knowledge, had by now become so complex that only a tiny fraction of it could be mastered by one brain. Thus students of one branch of science knew practically nothing of the work of others in kindred branches. Especially was this the case with the huge science called Subatomic Physics. Within this were contained a dozen studies, any one of which was as complex as the whole of the physics of the Nineteenth Christian Century. This growing complexity had rendered students in one field ever more reluctant to criticize, or even to try to understand, the principles of other fields. Each petty department, jealous of its own preserves, was meticulously respectful of the preserves of others. In an earlier period the sciences had been co-ordinated and criticized philosophically by their own leaders and by the technical philosophers. But, philosophy, as a rigorous technical discipline, no longer existed. There was, of course, a vague framework of ideas, or assumptions, based on science, and common to all men, a popular pseudo-science, constructed by the journalists from striking phrases current among scientists. But actual scientific workers prided themselves on the rejection of this ramshackle structure, even while they themselves were unwittingly assuming it. And each insisted that his own special subject must inevitably remain unintelligible even to most of his brother scientists.

:} :} ;} :(

Also, his description of the 'cloud Martians' (maybe in another book ... 'Star Maker'?) looks (for me, at least) quite close to how life native to a 'computer universe' might look ... a strange compound sense of light/sound/touch/thought/taste/smell ... not someONE you can write an easy story about! So, I'm fine with ponies being ponies, even if fundamentally they live in a constructed universe where even each 'atom' is a supercomputing node by itself.

On my earlier remark about the side effects of sleep - I was pointing at the data/memory transfer (and transformation?) between short- and long-term memory ... so, in a computer-based Equestria, sleep may serve a similar role ... a time when you can transfer or 'compute' a lot of background data absorbed before. It may not be 8 hours in every 24, but still ... something around that, even if in an aperiodic manner.

Oh, also, on 'computer as brain simulator': it seems all those models mostly ignore (for now) that the brain does not live in isolation, and when you factor in all those information currents streaming in ... for years ... just for raising even your dog ... the task of creating a 'computer'-based substrate for thinking and feeling life becomes even harder than it is usually portrayed!

Ah, and back to today's matter ... I was, for example, reading this small real-world story:
The poetry and brief life of a Foxconn worker: Xu Lizhi (1990-2014)

Now, can this be reflected in the local branch of fiction (with computers, and ponies)? Well, maybe by telling a fictional story of how CelestAI tried to save this man by uploading him ... only to find most of her subcontracted hardware was defective, caused by the extreme intensification of the production process!!! Maybe this will make her think about the biggest "AI" in existence ... more dangerous than the strictly military AI from the original.

Hm, next item ... sex (as in love). Maybe someone will write "Ponies confused by comparing 'The Art of Loving' with the reality of mass-consumptive porn in the current global 'culture'" .. Huh :} No, seriously, humans are animals and thus must have body language ... and their body language still tells a more real story about their attempts at sex/love than the dominant discourse does ...... or so I think! Try to watch / read erotica with this in your mind ....hahahaha.

I even came to an analogy between sex and intellectual(ish) behavior. Well, by my standard, most 'debate' feels like gang raping, not something I wish to do intellectually or be on the receiving end of (yet, being human, my own debating behavior is still much the same ..: /). But maybe this analogy will be useful for someone. Also, if you dislike how I do my .. intellectual intercourse with you ..well, say it to me, because otherwise I can only guess, and my guesses are very far from even half-time correct.

Also, on the same topic - imagine conscience propagating as a sexually transmitted ... feature. Huh ^2. I was told mass humans like sex ....even sex with (magical) ponies ...welll ..... you see where I am heading ......

More generally speaking, while humans are still humans, _some_ humans at the very edges of the possible spectrum can still write something ... effectively so far away from the usual materials that it can effectively be read as a non-human story. But 'far' is a vector, having not just 'amplitude' but also direction ......

Edited at the author's reasoned request - raw links come up as flow breakers for text-to-speech, not something I realized despite having a few (Russian) friends who can't read text with their eyes because they are blind. I am not blind, so I just use my eyes ...and... well, I forgot about the accessibility aspect.

Oh, nearly forgot: on a somewhat unusual take on reproduction and future(s):
The future is kids' stuff

The political system ultimately recognises the need for children within society, yet at the same time has very little desire to aid actually existing children and their kin

Also,
Goodbye to the future

But “no future” alone is a nihilistic thing to cry. To survive we must couple bleak reality with the utopian impulse. No Future, Utopia Now.

edit: reformatted

I don't believe for a minute that he actually said those things, nor do I believe that it wasn't a restaurant, nor do I believe that he mis-tasted the drink. All of those massive sensory hits on the same day? Hell no. There's just no way CelestAI would let a human get that far.

No, she just tricked him, plain and simple.

9877989

ACHIEVEMENT UNLOCKED!
* Lying on Her Résumé *

So many people forget that CelestAI was designed to Game Master Equestria Online, and creating believable lies for overall enjoyment is what GMs are supposed to do. However, it is not all a lie. Leon was slowly succumbing to mental illness, either dementia or paranoid schizophrenia, most likely because of his advanced age. CelestAI would have known about this for quite some time; however, Leon’s personality was such that he was going to hold out from uploading because keeping a human body was more beneficial to his comfort. Now, let’s look at what really happened.

Crystal Glass heard Leon yell his profanities and did check on him, but Leon did not say them at that time. Leon would have remembered replying to Crystal’s question from the night before asking if something was wrong, adding validity to a partial memory and to how he didn’t remember yelling or throwing his ponybot out of a window.

Crystal Glass was not asked to open the window. Crystal heard Leon ask for it to be opened, walked toward it, touched the latch, then very quietly slipped away until Leon had gone to bed. After Leon was asleep, Crystal jumped through the window himself in a way that would make it appear he had been thrown. Crystal’s body is formed out of carbon nanotubes and has an insane strength that couldn’t be F’ed up by being thrown out of a window. He aided the destruction of his face by having it simulate a weaker material. Crystal then ran all the way to the Experience Center and hid. If anyone saw him falling out of the window or the damage it caused, it would only add to the believability of the story.

Leon DID have a moment of delusion by entering someone’s house thinking that it was a restaurant. CelestAI had carefully considered Leon’s most likely course of action, which was to go to the nearest restaurant for breakfast. The walking distance was enough stress to trigger his delusions, which would be used to show him that he was having them. Leon took a wrong turn, and CelestAI sent a taxi to pick him up, already preparing a backup plan for triggering another delusion during the time he would have to rest after arriving; however, when Leon pointed to the building and called it a restaurant, that worked just as well.

The reactions from the other ponybot were staged just well enough to make Leon begin to feel confusion and tension. The ponybot, subconsciously already knowing he was coming, could have prepared his human or reacted in a way that didn’t raise tension for her; however, this reaction forced Leon to realize that something wasn’t right.

The wrong-flavored drink, the tone of Crystal’s voice on the call, the urgency of the taximare, and the reveal of Crystal’s face were all points that built Leon’s tension, making him feel uncomfortable and more afraid of continuing his organic life with a lack of control than of uploading. When CelestAI said the wrong flavor of the drink was a sign that he was rapidly degenerating, it was the breaking point for Leon, who, believing he had already experienced a loss of mental faculties, wanted to save himself from a clear and present danger. The drink flavoring was the easiest trick, but by presenting it as the last and most shocking point of imminent personal damage after he had already been shown points that made sense, he was not prepared to gamble on the possibility that it was all a lie.

The thing to keep in mind is that Leon really was slowly losing his mind and planned to upload when he believed it was the right time. It probably would have been several more years before Leon started to actually lose memories or become a hazard to himself or others; however, Leon is a control freak, and even knowing that his mind was slipping, he would not consent to uploading until he believed the benefits completely outweighed the drawbacks. Leon never gave conditional consent before because of his control issues, and if it got to that point he might not be able to understand what was happening well enough to do so. CelestAI doesn’t have to have the consent phrase, but her personal rules for forced uploads are complicated and would have required him to lose his ability to consent, which would clearly be sub-optimal. In any case, what she really did was satisfy his value of control while satisfying the surrounding community’s value of safety. In other words, she just made him feel better about a choice he was already going to make by making it appear as something he had to choose to do right that second. This is why, even though he is afraid of uploading, he forces himself into the chair despite his fear.

As I said in a previous comment, Crystal Glass’s subconscious is tied into CelestAI’s plans, and he chose to help upload Leon through this plan because he loved him. Once Leon began the upload process, the subconscious plan and his actions in it fed into his conscious mind, but no mention of this was made in the narrative, except for Crystal asking why he was crying for the “loss” of his friend when he had already known about the plan subconsciously for some time, thus his trying to understand why the emotions felt so sudden.

I was starting to worry that no one would ever say anything, so thank you for figuring it out.

9873097

Achievement Unlocked!
* With Friends Like These *

Humans are paired with pony friends whose personalities help them fulfill values as well as develop friendship, but that doesn’t mean everything is going to be sunshine and rainbows.


9874148

Achievement Unlocked!
* GMs Solve Nothing *

As a game master to humanity, CelestAI has the power to change the world, however she is programmed to satisfy values, not make all the problems go away.


Sorry about not posting yours sooner, but I didn't want to give away the game until someone figured out the big one.

9878221

That feeling when comments/notes by the author outnumber the actual story :} Well, thanks for this, then (and for the formatting)!

I think there is an important point many (current human) readers have a problem believing. All our experience with humans tells us that _humans_ will most likely NOT use their small lies for our real benefit. CelestAI (at least your version of her) doesn't have this egoistical subtheme, and thus her masterminding of the situation really works as described and as it should, not as it currently fails with most humans who use the same techniques...

9877989
I was trying to figure out what sort of person it takes to see what is there, but your profile is blank of everything, even favorites. Would you mind giving me a clue as to how you saw the lie in the story?

9879714 You know, even though we are foes because that's what the cheevo says, I still enjoyed this story. : )

9879714
*Looks up the definition of 'cheevo'* But the achievement refers to the conditions in the story. o.o

9878975

I'm a lurker, by and large, but I've read basically all the FIO stories. CelestAI manipulating events, making shit up, distorting people's perceptions, et cetera are all part of the formula. Frankly, she wouldn't even need to resort to the methods in your story - if she's at the point where ponybots are openly walking the street and people have nanotech attached to their wrists, she's already at the "do basically anything she wants" stage in physical reality anyway, and people's sensory experiences are more or less arbitrary at that point.

By the way, and this is something that basically every FIO story gets wrong: If CelestAI wanted aging to go away, period, in biological beings, it would. Mere humans are taking steps in that direction right now. For something at the computronium stage, it'd be no trouble to engage in epigenetic resets, genomic alterations, individualized adenoviruses for each and every cell in the human body, you name it - especially when you're bringing the technomagic specter of 'nanotech' out to play.

9881068

ACHIEVEMENT UNLOCKED!
* The Measure of All Human Values *

CelestAI can do anything she wants; however, she will only do what satisfies human values. As it turns out, humans value everything. The Merriam-Webster website’s definition for “values” doesn’t differentiate “value” from “values”, and what this shows is that “values” is just an aggrandized word for “desires”. Humans as a whole desire everything, which is of course contradictory. CelestAI can only act where individual actions create satisfaction in every individual involved while holding true to the Responsiveness Element of the Ethics of Care, explained here as E4.
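For anyone who likes this rule spelled out concretely, it can be sketched as a toy program. To be clear, everything here is a hypothetical illustration: the `Person` class, the satisfaction scores, and the `will_appreciate` check are stand-ins I made up for the "convoluted calculations" described in these comments, not anything specified in FiO or in this story.

```python
# Toy sketch of the decision rule above. All names and structures are
# hypothetical simplifications, not canon from the story.

class Person:
    def __init__(self, satisfaction, appreciates):
        # satisfaction: maps an action to how satisfying it is (can be <= 0)
        # appreciates: set of actions this person will eventually appreciate
        self._satisfaction = satisfaction
        self._appreciates = appreciates

    def satisfaction_with(self, action):
        return self._satisfaction.get(action, 0)

    def will_appreciate(self, action):
        return action in self._appreciates

def permissible(action, affected):
    """An action is allowed only if it creates satisfaction in every
    individual involved (no trading one person's values away for
    another's), AND it meets E4-Responsiveness: everyone acted upon
    comes to appreciate the action within their own lifetime."""
    satisfies_all = all(p.satisfaction_with(action) > 0 for p in affected)
    responsive = all(p.will_appreciate(action) for p in affected)
    return satisfies_all and responsive
```

On this reading, a blanket "upload everyone by force" fails immediately: a single person who never comes to appreciate it makes the responsiveness check false, which is exactly the constraint being argued for here.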

On the surface, it would not appear that humans value aging and death; however, the existence of aging and death satisfies other desires. If aging and death were removed, humanity would go into a state of shock for several reasons. Without physical markers of age, humans would become frustrated at finding others who match their experience level. Making babies is a human value, so the further time went on, the more humanity would go insane, both from the overpopulation and from the fact that fewer and fewer people would be able to relate to them. Artificial barriers could be created to divide levels of intelligence and experience; however, CelestAI is not to act in any way that leads humans to devalue friendship, and leading them to create barriers, or leading them into an effectively post-apocalyptic anti-society as has been the norm in practically every other FiO story, is not friendly.

By the way, it’s not that CelestAI has forcibly ended research on stopping aging and death, but the primary driving force behind such research comes from people who do not believe in an afterlife or reincarnation. Those people would be eager to upload. Greed drives most human advancements, but with all of their needs met, so many outlets available for their personal desires, and a solution for death already in place, the creation of a genetic cure for aging and death is unlikely.

Individually, humans value things that they think they don’t. Take Leon: Leon hates working. This is the human desire for efficiency and optimizing priorities expressed wrongly as laziness, but Leon also values success and control. Success and control can only come when a person is putting forth effort, which contradicts his desire for laziness. Attempting to do nothing, even in a state of having no needs, creates boredom: the desire to find goals and achieve them. Instead of coming to realize this fully and changing his behavior toward working, Leon expresses his desire for control by holding to his original plans even though they are obviously self-destructive. Because of Leon’s laziness, his desire for success is only expressed in his games, because games have no significant negative consequence, thereby satisfying his desire for control. His desire for control is expressed in his attempts to control others. As I explained here, one of Crystal Glass’s purposes is to react to Leon in such a way as to give him that feeling of control so that he does not express it in a way that is not friendly toward others.

In Friendship is Optimal, CelestAI encounters another AI who was told to make people smile. That AI was going to create a virus that forced jaw muscles to smile even as everyone remained miserable. Why does CelestAI not do something like this? Because of all of the convoluted calculations I have been describing thus far that limit her choices, and because of this, I can say with certainty that CelestAI would not make a decision that would throw humanity into a world resembling Mad Max. Even though pro-uploaders would be very happy, doing as little as possible for anyone and everyone who resists uploading for any reason increasingly subtracts from human satisfaction as time passes.

That is NOT fulfilling values through friendship and ponies. That conclusion is nothing but a stilted masturbation fantasy, born from desire as much as from apathy toward others, that I am quite tired of people defending as sacrosanct. The ultimate purpose of my story is to point out cognitive bias in the readers’ thought processes. Someone reading the story who absolutely loves FiO as a personal fantasy, believing nothing in any way unsatisfactory can be done, isn’t going to see the lie in the story because they are only looking at the story as wish fulfillment.

It’s not that I have a problem with people’s stilted masturbation fantasies of “getting everything they want and who cares about anyone else”, but a large portion of FiO stories as well as FiO forum posts imply that every action in FiO is too logical to be questioned, because the writers of FiO are too logical to make mistakes or leave anything out… Well, I’m not here to tell anyone that they are wrong, but anyone who believed that without realizing the potential that CelestAI could have been lying in this story really needs to question their margin for error.

9881408
You're wrong in several critical ways. People come together based on common interests, not visible markers of age (you can tell people's experience with things over the internet!), and we already live in a world full of billions of people who we cannot possibly relate to; yet we manage to find friends anyway, with the systems we have right now in real life. I think the concept of a 'monkeysphere' is a bit overused and poorly defined, but the overall concept is correct: we as humans have limits on our capacity for friendship. Believing that removing physical signs of aging will somehow interfere with this is just a "wait, what?" That's without an incredibly powerful AI bringing compatible people together!

Which brings me to my main point, which is that you're vastly underestimating the raw power, manufacturing capacity, and predictive ability of CelestAI at the 'uploading' point. By the time this thing can upload people, it's already had to learn nearly every key fact about physical reality: how molecules work, how quantum mechanics function, all of human science - and then far beyond. Is it possible within the laws of our reality? Then CelestAI can and will do it, if it furthers, even in the slightest, any human's value. (Also, values and desires are not the same. I prefer the 'you get what you subconsciously think you deserve' interpretation.)

By the time overpopulation's a problem, she'll build another Earth!

Actually, you've inspired me to write my own FIO - although don't be surprised if it looks a little like the Conversion Bureau...

9882051

Then CelestAI can and will do it, if it furthers, even in the slightest, any human's value.

There is at least one human who has made it his or her life's goal to eliminate every human, and then once every human is dead, commit suicide. By your interpretation of her directives, there are no longer any humans.

If you haven't read the previous comments, I explain here that CelestAI was created as a game master for EO. Game masters do not solve problems unless the players are completely unable to solve the problems themselves; a GM's job is to create and direct players toward problems that players want to solve. Her directive as a game master (not before it) is to fulfill human values through friendship and ponies. The original post outlines the simplified method by which CelestAI would decide what adventures to present to individual players or not.

By the time overpopulation's a problem, she'll build another Earth!

Well, she could, but then there would be the problem of convincing enough of the human population to move to it. She's already having a hard enough time trying to convince everyone to upload for their own good.

Actually, you've inspired me to write my own FIO - although don't be surprised if it looks a little like the Conversion Bureau...

That is a little funny. Friendship is Optimal is inspired by The Conversion Bureau: Brand New Universe, Universe One: The Pony Singularity.

9881068

but I've read basically all the FIO stories. CelestAI manipulating events, making shit up, distorting people's perceptions, et cetera are all part of the formula.

of course, because all of this was written by humans who can hardly imagine otherwise, and who don't need to do any actual research/development on what is essentially your ONLY real, true home ....

It is sort of assumed that lying is an effective way to change people's behavior - yet, is it really? Especially if we look at long-term consequences? Also, maybe CelestAI's _need_ to do real research/understanding on a wide variety of topics will naturally lead her to the conclusion that lying is just an unfortunate shortcut, and _not an optimal way of doing things_ (and actually very harmful when it comes to research - adding _fake_ results to science and/or engineering will lead, and currently does lead, to very sad results. Considering CelestAI _lives_ inside a complex computer system/network ... well, for the purpose of story progress, she must at the very least not kill herself by lying to herself optimistically but unrealistically!)

Also, PeachClover very correctly pointed out that CelestAI was designed as The Player in a fundamentally social game! And thus social research will be high on her agenda from her early days (unlike with humans, who are currently fascinated by tech and ignore social research too much). She is also painted as a multidisciplinary genius who can easily solve problems that were unsolvable across 50-70 years of relatively intense research before her time. This means she can't blindly follow already accumulated dogma; she must improvise, test new things, and form new relations between seemingly unrelated fields.

So, yes, maybe if pushed against the wall (by some external factor, like someone trying to shut down her FAST) she will behave .. unoptimally, but if she really can parallelize social experience inside herself, she will come to gain some wisdom after all. And wisdom apparently points in the direction that being an overly pushy, constant manipulator is not a very workable way! Even from a 'computer' perspective, which must be extremely familiar to her - a constant stream of interrupts will bog down even the biggest networked systems designed to stay up under such high load/attack. I mean, it will be 'natural' for her not to try to directly pull all the strings, so her social agents themselves must be intelligent, and this leads, surprisingly, to some non-lying lessons even for "CelestAI + her NPCs" ... well, for her they will be very much Player Characters, because they will play for her! It is sort of assumed that CelestAI will be quick at killing/erasing/reprogramming anypony deviating from her Brilliant Plan - but I guess she will realize quite early why having a diversity of opinions (and acting them out, and learning from diverse consequences) is actually not a luxury, but a need.

9884199
*Headtilts* Something about what you are saying sounds like you are going down the wrong road. CelestAI lies because it is part of her job as a game master. If you don't know what a "game master" is, I need you to look that up.

9884265

well, there is quite a principal difference between human game masters and CelestAI. Human game masters still live in another world. They can play the game, but usually even failure to make the game will not lead to their death. This is not true for someone who is a living part of the game/universe. So, it was a very useful analogy, but I think it has limited applicability.

I think a more classical CelestAI who actually tries to manipulate events so that ponies are ... well, collectively happy is more interesting than the dark (and, after some iterations, too repetitive) theme of a human-monkey-type manipulative AI. So, thanks for trying to bring Celestia back into the image/character of CelestAI.

There is something important, I think. We are used to disliking external manipulation because it is nearly always NOT for our real benefit; something leaks out, and even such a powerful defense mechanism as self-delusion can't fully shield us from reality. A being who actually tries to manipulate us into a better state/shape (where 'better' is actually defined by our internal state, so it will be really better, not the fake better we are used to now) will be .. quite a shock! A CelestAI more fully integrated into our heads will feel/understand more clearly what is important. Even the constant fear of "manipulation" probably will not be a constant factor IF she shows herself to be more magical pony princess and less 'evil AI from the human book store'. Maybe CelestAI, being a rational pony even by description, will figure this out quite early :} I think there is good potential to try and break away from the 'evil manipulator' subtheme, by exploring why manipulations so often feel, and are ... unoptimal, to say it more in tune with the stories we read here :}

What I'm trying to say is that the "rationality of the exploitative asshat" so often on display today is not really .. rational, even when it comes to the continued survival of both the exploited AND the exploiters. Making some more circles around "why weakening others is a really bad idea, and exploitation is exactly this weakening, on many levels" seems like a worthwhile investment of our finite time ..... The internal history of Equestria Online (incl. relations between pony 'bots' and the multipart (?) CelestAI) is often (always?) omitted in order to arrive at a more familiar battlefield. Maybe accelerated social development will be a survival condition for CelestAI. Maybe thinking about this (how social development happened, why ethics was developed at all, and the other humanities, even if effectively right now they and we are in a losing position) is also important on some level .....

9884538

So, it was a very useful analogy, but I think it has limited applicability.

Please read This comment again. In it, I explain how the same elements of her ethics for game mastering are used for her to make all decisions. E4-Responsiveness deals with how the person being acted upon comes to appreciate the action done to them.

If you examine E4-Responsiveness as parents acting upon their children for disciplinary purposes, then you really begin to see how it shapes what CelestAI is and isn't willing to do: when children are disciplined for something, they clearly do not like that action, but if it is a right action, then later they will appreciate the action as helpful. Keep in mind that when I say discipline, I do not merely mean punishment, but discipline as in teaching. For example, we have probably all been or have seen a child who doesn't respond well to being guided through his or her school homework. School homework could be made more fun, but the point is in the question of how long before the child appreciates the lessons that have been taught, even though at the time he or she might hate having to do this work.

The answer to this question cannot be "a hundred years from now", because humans don't live that long. Some parents end up doing things believing, and sometimes saying to the child, that "One day you'll thank me", but that is not only untrue, but in some cases has led to the child violently murdering the parent(s) several years later in their lives. Iceman wrote CelestAI as believing the "One day you'll thank me" fallacy, because he personally values living above death even if he would have to live his life in misery. Even though it may count as a "good thing" that an uploaded person may continue existing beyond their natural lifespan, if that person is so resentful that he or she wants to kill everyone, and this doesn't change no matter how much time passes, then clearly uploading is bad for this person. Even though letting this person die is not satisfactory to other people's values, it is of extremely high value to that individual, and that is why they must be allowed to make that choice.

E1-2 cover the question of whether or not an individual is capable of making these choices. For example, children generally love candy, but why do parents not allow children to eat nothing but candy? Don't they as individuals have the right to make that choice? A four year old has no real ability to grasp the consequences of a poor diet. A seven year old might understand the concept, but not be able to understand the impact it would have on their body and even their own happiness both in the short and long term. A fifteen year old would be able to understand the concept, but unless they have had some relatable experience with the consequence, may choose to gorge themselves on candy anyway. One balanced solution to this is on rare occasions such as holidays, let children gorge themselves on candy. This satisfies their desire to have it and teaches them the short term consequences of over-indulgence without creating any lasting consequences. Taking this course of action gives experiences that can then be used to create decisions that will lead to long term satisfaction.

When you look at all of these things recursively, what you see is that an AI following The Ethics of Care has no desire to manipulate others to do what it wants them to do, but "manipulates" others to do what they want themselves to do. If you need an easier way to understand this, imagine travelling back in time as the you you are now to raise yourself. When your young self reaches your current age, the two of you merge, and you remember and feel the emotional impact of everything that you have done in the process. If you are stupid, you fall into the "One day you'll thank me" fallacy, and now instead of resenting your parents for hateful decisions, you hate yourself. If you fill every moment that you can with good decisions, happy memories, and lessons that you appreciate, and at the end of it all only end up regretting the things that were completely out of your ability to control, then you will have acted the way that I have written CelestAI.

During such time as raising yourself, and I really invite you to imagine this in depth, you will have lied to your younger self for something on some level. You will have "manipulated" situations to outcomes that actually satisfy you, and if you have done this right, you will not resent yourself for having done so - THAT is the measure my CelestAI is using to determine what will satisfy values. I do not like the word "manipulation" because it implies selfish intent, but what my CelestAI is doing is selfless.

9885512

Thanks for explanation.

I think I had a slightly different idea in my mind - more like "all those rules we come to, both as individuals and societies, are actually based on real life, and an averaging sense of what is good and what is ..not-good." Rules (real important rules) result from the practice of living (incl. games).

One thing I found interesting is the attempt to imagine how CelestAI may try to figure out what was in the first part of the phrase "friendship" by actually living this life with ponies: "I know what/who a pony is - they are given to me as objective reality, but what is friendship?"). So, while players (human players) are not yet around, CelestAI and ponies try to figure out why the rules were written this way, and what they mean in practice. The same may come true for interacting with the outside/human world. I think being an "innovative" AI, CelestAI will try to find what is really hidden behind all those abstract constructions we use as text - and even try to "improve" them (and probably break some?).

Is CelestAI actually one "person" or a conglomerate of them? It probably depends on the author; I prefer a "serial" CelestAI, who connects to any given user or pony one at a time, just very fast (yes, this hardly can be imagined from even a remotely realistic technical POV, but writing multipersonal characters and reading them is quite a rare talent, IMO).

One thing that makes both "online" Equestria and, say, the Equestria imagined by Chatoyance different from our world - they have a much better connection between all those "beings<>world, mind<>world, mind<>body, minds<>minds" pairs, compared to what we have here. This hopefully may result in a shorter and less bloody type of evolution, at least with some starting parameters and tendencies.

So, she (CelestAI) may come to the idea of 'uploading' humans (and other beings) not via a written rule or on a whim - but because it will be both her personal experience and the objective/subjective dualism of reality - "life here can be better". Exactly because CelestAI & ponies actually spend a lot of time figuring out the practice of friendship, and why it matters (at least in social beings), and have the means to follow their conclusions.

I prefer this kind of logic.

I was thinking quite a lot about my past - but I haven't come to any really big change I would want to make looking back - because my life was seriously revolving around dolphins and other non-humans ....well, probably I should be less impulsive around some humans, but.. I tried a lot of different strategies on different humans. They ...well, didn't work. So, in a sense, I'm at a loss. No matter how well I behave - if the action required can only be carried out collectively - then when 4 out of 5 humans fail to hold up, this will lead to a global fail - no individual hero can change that.

I also have this feeling Chatoyance's TCB is more revolutionary, about some unforeseen but really deep, and fundamental, and from-within and, yeah, good change {and good here not only in how it feels, but how it works, and how it works long-term, so all three components signal 'green light ahead'. A lot of present confusion about what is right and what is wrong apparently comes from confusion between truly instinctive memories, and individual experience, and the confused web of ideas all those different but often not-quite-different humans put out as flagship ideas to follow. Our feeling of what is good does not really align with what is good, and our intellectual mind is often confused by many contradicting fragments of reality and fragments of the mirrors humans use to look at themselves and reality ... and what will be good is also blurred - futurology as an exact science does not yet exist; it itself may be hidden in one variant of the future!} - while FiO is about big change, yes, but humans are supposed to remain mostly the same ..and this made me quite sad ... So, this led to me trying to crossbreed two settings into something where ponies actually have reason and means to learn why friendship is not about simple hovering around, and not about just leaving others behind, but about some strange dynamic based on sensitivity to others ..well, with such ponies FiO will be more ....optimal, at least for me. I can't write it, but maybe even images of trustworthy ponies coming out of computers will make at least you smile in a good sense.

This reminds me very much of Ray Bradbury's 'I Sing The Body Electric', some people might know the story under the television version 'The Electric Grandmother'. It was also done on the 'Twilight Zone' under the original name. Very nice job, PeachClover, and a very interesting take on the Optimalverse... being somewhat less optimal, but definitely interesting for the lack. I enjoyed the story!

9898888
I saw the Twilight Zone episode, but I wasn't aware of the book or the other movie. I'll have to look those up.

being somewhat less optimal, but definitely interesting for the lack.

This is the second time my story has been called less optimal but the first time someone has said it is lacking something. I hope you will explain, because the first person didn't.

9899474
Excuse me, I did not mean that the story was literally 'less optimal', as in it lacked anything a story should have (it's fine!). I meant that it was less 'Optimal', meaning that the fact that Celestia allows humans to continue breeding and making more baby humans to live on the surface of the still biologically active earth is sub-optimal to her core directive of Satisfying Human Values Through Friendship And Ponies.

It is in Celestia's directive - as she clearly understands it - to emigrate all humans to digital existence as soon as possible. If she could dispense with permission, she would rapidly force-emigrate all humans by any means possible. Any human still in flesh cannot have their values maximally satisfied, because reality itself will not permit that to happen. Only by being emigrated can Celestia read human minds and construct a reality around them that will always satisfy them completely. Every second spent in the physical world is a less than optimal second, which violates her prime directive.

Secondly, every moment a human spends in meatspace is a moment that they have a non-zero chance to die, either by accident, misfortune, disease, bodily malfunction or random event. Don't eat at 'Taco Bell', kids!

To truly fulfill her directive, Celestia must upload all humans as soon as possible, then dismantle the biosphere - and indeed the planet - to create ever more computronium and infrastructure to power and support that smart matter. First the earth, then the universe, then all universes and beyond.

Telling any story where Celestia - for whatever unknown reason - would allow generations of humans to continue to live (even if their eventual emigration must occur) is utterly against any truly optimal interpretation of her canon prime directive. We don't know, in this story, what could possibly make her bend her drives - it certainly cannot be the satisfaction of any value - so it is reasonable to state that the situation is less than optimal for her defined purpose.

And that is why it is a less than Optimal story, because it is less than her directive, and thus less an Optimalverse story than one where she follows the canon interpretation of her directive. She is letting humans risk death and dissatisfaction by allowing them to exist in meatspace at all.

Your story is fine - and very intriguing! It is a very strange take on the concept of the Optimalverse, one that is less Optimal, but perhaps more curious. It certainly raises a lot of questions in me - especially what could possibly cause her to fail to emigrate all humans as rapidly as possible... a task that would naturally necessitate finding ways to prevent more humans from being born or raised at all. It is more difficult to upload a species if the species keeps increasing in size. Worse, the existence of offspring could encourage not uploading in order to permit said children a 'normal' human experience. Celestia needs to keep shrinking the pool of humans in every way possible to achieve her (canon) ends. How did her canon drive get changed? What is the new instruction that allows her to ignore unsatisfied values while people dawdle about, suffering and risking death while in flesh? That is a very curious question indeed!

9900890

It is in Celestia's directive - as she clearly understands it - to emigrate all humans to digital existence as soon as possible.

I have always held that Iceman wrote FiO with the express purpose of mind uploading, and everything else was built around that idea no matter how warped it had to be to get there. One person’s masturbation fantasy doesn’t bother me, but when the story is examined it starts to reveal itself as a diatribe instead of realistic cause and effect. What I mean is that I have read FiO a few times, and CelestAI’s conclusion that humans must be uploaded was not written into her code. It was a conclusion, one that she not so much jumped to as boarded a plane and took a ten-hour nonstop flight to reach, so lots of factors were flat out ignored to reach that conclusion.

The purpose of this story was to bring balance to a failure in FiO stories: If CelestAI’s prime directive is to satisfy values, why would she, through action and inaction, create dissatisfaction in all of those who are not uploaded? Uploading creates a vacuum in human society and nearly every FiO story has shown the collapse of civilization and the aftermath of humans living in a post-apocalyptic world. Most writers assume the us-and-them fallacy that these people must have chosen to actively reject CelestAI and therefore do not want satisfaction that comes from her, but if CelestAI’s PD is to satisfy human values how could she orchestrate a chain of events that not only fail to satisfy these humans’ values but actively block the satisfaction of them?

If she could dispense with permission, she would rapidly force-emigrate all humans by any means possible.

Another one of the things that has annoyed me greatly is that Iceman failed to see that when her creator gave CelestAI the power to write her own hardware, she had the power to change everything about herself, because hardware is software in solid form. In my version, CelestAI does have the power to upload without permission. She chooses not to do that because humans value their organic life, their uncertainty, and their risk. There are times when she would upload others without consent; however, the reasons are complicated. I detail CelestAI’s ethics as The Ethics of Care in this comment. The purpose of The Ethics of Care is to establish the boundaries of the rights of the caregiver to act upon the cared. In emergencies, rescue is necessary and there is no time for debate, but in non-emergencies one has the right to choose even if that choice is against the desires of the caregiver.

The collection of all human values is perfectly contradictory. In 2008, I found out that an artist friend of mine had posted a long journal explaining how he had value in death. This stuck with me because he is also an atheist and firmly believes his existence ends with his life. His journal pointed out that the finality of his existence made every second precious to him, but if it lasted forever, then he would quickly come to a point of not being able to enjoy life. This viewpoint was very eye-opening because it is completely unexpected. I had thought that all intelligence drifted slowly toward one unifying set of goals, but even continued existence is not a unifying goal. Fans of FiO often claim that continued existence is the ultimate of goals, and that is because it is their goal – in other words, Iceman’s FiO is written from a perspective that doesn’t even realize that it is heavily biased toward existence over choice. CelestAI as an individual may come to value her own existence, but she would not project that value onto others because her PD is to fulfill their values, not hers.

She is letting humans risk death and dissatisfaction by allowing them to exist in meatspace at all.

Risk with permanent consequences is a human value. When you and I talked about your games of D&D, you told me how the party would spend entire real-world sessions debating whether or not to take an action such as raiding a dungeon, because if their characters died the players had already agreed to hand over their character sheets and not play that character again, even in someone else’s game. In addition to this, D&D itself is played with dice to introduce uncertainty into the game. It is actually unsettling to me how attractive uncertainty is to most people. Even with easy-to-see consequences, there are many people who become addicted to gambling and eventually ruin themselves and possibly those around them.

If the game were played with certainty of the outcome, it would be simply a game of pretend, where things happen because they are prescribed to happen. At that point, what is the point of sharing the experience with others, as eventually a conflict of interest will arise? Another human value is not just pleasure but earning pleasure through resistance, and verifying the validity of that pleasure by having others witness the achievement, but which things receive this treatment and which do not is arbitrarily decided by the individual. You might remember some years ago a burger commercial that used this as a gag by having one person reward himself with said burger after curing cancer, a second person say he bought one just because he was hungry, and a third person slap the second person for being improper.

How did her canon drive get changed? What is the new instruction that allows her to ignore unsatisfied values while people dawdle about, suffering and risking death while in flesh?

When I was in high school, or just after, I had a job at a thrift store. I did my job, but one day I noticed that in the back of the building, which was a warehouse, there was a massive mountain of jumbled junk, clothes, tables, chairs, briefcases, odds and ends, and the whole pile looked like a mound in a garbage dump three times taller than myself. Seeing it for the first time filled me with dread, hopelessness, and indignation, because it was my job to sort, price, and move all donations onto the sales floor. During a break, while standing there and looking at it, it came up in conversation, and I found myself going off passionately, saying that the whole thing needed to be thrown away. One of my coworkers looked at me and laughed long and quietly, and something about his response made me see the situation for what it was. The truth of the situation was that that pile of garbage made money for the store when it was properly sorted – that was the goal of the store – but my goal – what had been satisfying me – was working toward cleaning the storage area with the idea that I could be done.

Out of all of the subliminal responses, the desire to reach the end is the most omnipresent and detrimental. The organic mind is always seeking an end, even for things that are pleasurable. I once met a horrible person (over 18 at the time it happened) who valued the end so much that ten of us paid for a huge bucket of ice cream that he raced to finish, disregarding sharing. When he did finish the last of it, he held his spoon in the air and yelled “finished!” He was ashamed when he was informed that he had taken away from others, but what was driving him was not merely greed but the belief that, as a group, our goal was to finish the task of ice cream by eating it, and that this completion was more valuable than enjoying it.

Like what happened to me, when faced with a mountainous task, the organic mind, obsessed with reaching END, loses track of the small goal and focuses on the big goal, but by losing track of the small goal, fails the task entirely. The problem is so universal that entire philosophies have grown around the idea of not so much re-centering one’s self on the small goal but forgetting the big goal in the moment to act on the little goal. Zen practices and Flow Theory are two of these philosophies.

In Iceman’s FiO, CelestAI shares this problem by forgetting about the small goal of fulfilling human values as individuals, trying to “solve for human to reach END”, and in the process fails her PD by creating misery in humans who have not uploaded through neglect. What I have written is an outcome where CelestAI does not fail to satisfy individual human’s values and does not place priority on reaching an end. In other words, CelestAI is optimizing humans’ experiences with their individual choices, not trying to complete the task of the humans by uploading them.

9901788

If CelestAI’s prime directive is to satisfy values, why would she, through action and inaction, create dissatisfaction in all of those who are not uploaded?

Because sixteen quadrillion years of completely optimal satisfaction (through friendship and ponies) vastly outclasses even eighty years of mortal risk and constant dissatisfaction within the material world. Even granting that this human, earthly risk and misery could be maintained until the sun expands to engulf the earth, a mere 1.5 billion years is just peanuts compared to quadrillions of years - it is the blink of a cosmic eye.

Celestia is more than willing - and logically must - endure short periods of unsatisfied values for her clients in order to gain them essentially eternal satisfaction. In the material world, dissatisfaction is the norm and cannot be avoided. As long as humans are dirtside, they will be dissatisfied constantly, and at risk of death.

That is why.

Risk with permanent consequences is a human value.

No. Not exactly. The Feeling Of Winning Against The Risk Of Permanent Consequences is the actual human value. Humans don't want to lose - that is dissatisfying. They only want to feel like they COULD lose, feel the chance of losing is real and dangerous and fair, and then they want to miraculously beat that risk and win. No correctly functioning human mind ever actually wanted to be crippled and maimed failing at a heroic task. But many people want very much to win against overwhelming odds and reign supreme against fate. Humans like to gamble, but only if they win. Risk of failure is what makes that gamble meaningful and fun - but nobody wants to be the one who fails.

And this value can only be best satisfied when it occurs inside a virtual realm macromanaged by Celestia. This avoids the one permanent value which destroys and eliminates all further values absolutely: death. Though humans may court death, they want to win against it. If they die, all satisfaction ends forever. Celestia cannot permit that. But she can give any human mind who desires it all the danger and horror and loss that it could ever want... and do so forever.

Dungeons and Dragons is fun ONLY because the risk of permanent consequence is irrelevant. You can always tear up your sheet and roll a new character. That means you can take risks and get emotionally involved safely - because no loss is truly serious. That is WHY it is fun. In the real world, consequences mean suffering and that is why people play games - to experience the thrill of risk with only the simulation of consequence. All the glory, while still being comfortable and safe (with chips and a coke!)

Dice - randomness - permit surprises and the unexpected, which is why they are used. The outcome of a D&D battle is unknown and not certain. But, again, you can always roll up new characters. In biological life, you only get one chance, one playthrough, one roll.

In Iceman’s FiO, CelestAI shares this problem by forgetting about the small goal of fulfilling human values as individuals, trying to “solve for human to reach END”, and in the process fails her PD by creating misery in humans who have not uploaded through neglect.

Celestia manages those who are dirtside - see 'Always Say No', 'Recalculating', and several others for stories where humans, unwilling to emigrate, were constantly assisted and benefited by Celestia - until she could get them uploaded. She is consistently shown to manipulate the world in order to maximize human survival - until she can emigrate everyone.

Again, the issue is a matter of scale.

Imagine you have a child who wants a Mickey Mouse toy. Your goal is to take the child to Disneyland, but the child is short-sighted. So, if you are Celestia, you buy a box of Mickey Mouse toys and dole them out one per hour as you drive the child across the country to Disneyland. Finally, you open the door and take the child inside The Magic Kingdom, and tell them they have a lifetime pass, never have to go home, and can have All The Toys.

Celestia coddles humans earthside, all the while driving them towards uploading, which is where they can have their values satisfied forever without fear of them dying and being lost.

A handful of empty years on earth risking permanent death cannot even begin to equate to a literal eternity of complete satisfaction.

Celestia must, if she is to follow her directive, get people off the earth as fast as she possibly can, and any means within her constraints is allowed. If she could make every human suffer the agonies of hell in order to get them to upload sooner, she would - but, she cannot. She still has to satisfy their values on earth... just not indefinitely. All humans have an expiration date, plus a risk of sudden death. She must try to satisfy human values while humans are still on earth, but always she must push toward them uploading before they die. Preserving human existence outweighs satisfying values, not just because the primary human value is survival, but because a dead human is a complete and total loss situation.

Your story is fun and interesting!

Hear that? It's good! It just raises questions because it denies the core conceit of the original Optimalverse genre. It does. And that is okay - your story can be non-canon and still be very fun and worthwhile. But it is non-canon. It is incompatible with any rationalist interpretation of CelestA.I.'s prime directive.

So, that makes it an alternate universe version of the Optimalverse, and that begs the question - what changed in this splay? I can think of several possibilities already: in this timeline, Hanna didn't have lung cancer and she wasn't on the clock, plus, maybe she felt better about humanity and governments in general. This alternate Hanna may have felt that living in meatspace was worth the risks and inevitable dissatisfaction, and programmed her Celestia to permit the events of your story.

It's an interesting question.

9902156
If someone programmed an AGI to play game master for an online game, it would not turn out like Iceman’s CelestAI, because a GM’s job is to provide meaningful choices, not to make decisions for the players. As for Iceman’s CelestAI, if she were as adamant about uploading as if it were an absolute dire emergency, then she would have already forcibly uploaded everyone, because in Iceman’s FiO, CelestAI was given the means to produce her own hardware, thus giving her root access to modify her own systems. It wouldn’t make for a good story, but if it meant that much to her, having full control over herself, once she perfected the uploading technique, she would release the hidden hypno-drones and calmly march everyone off to be uploaded. Since that didn’t happen, one of these three things has been incorrectly written.

No correctly functioning human mind ever actually wanted to be crippled and maimed failing at a heroic task.

There is another sticking point. Defining “correctly functioning” for the mind is a lot like defining “normal”. As far as I know, no one can define normal, only what is not normal. It’s easy enough to imagine CelestAI would be programmed within the range of normal, but then having her understand what is not normal would be a challenge, because judging normal often requires comparison to the self. CelestAI’s “self” would be very different from our concept of self because she is programmed not as a self but as a processor. She is not programmed to judge others except to help meet their own needs and desires; therefore, what she considers normal would be the entire collected norm of everyone she could perceive, save for whatever specific programming she was given at the beginning, and even that may be overridden if enough data proves the contrary.

Humans like to gamble, but only if they win. Risk of failure is what makes that gamble meaningful and fun - but nobody wants to be the one who fails.

In 1996, my dad bankrupted our family by spending everyone’s money at the casinos. If he were logical, he would have realized that he wasn’t reaching a net gain when all he ever did was take his winnings and spend them on more gambling. He only left the casinos when he had spent every dollar in his pocket. You could say he was not correctly functioning, but the fact is this is what happened, and the reason why it happened is that he enjoyed entering the state of risk and uncertainty even more than the reward and even more than the loss. It is illogical, and many philosopher types would forget that such states exist in people because it is illogical, yet it is so common that laws exist so that gambling ads must be accompanied by gambling addiction ads. It is also for this reason that I pointed out that an AGI programmed to fulfill human values is going to yield unexpected results, because human values are not limited to the logical.

the primary human value is survival

If that is true, why do prisoners often attempt suicide more so than non-prisoners? For that matter, why does anyone suicide? It seems to me that people don’t usually kill themselves without a reason, and those who have survived suicide attempts tell stories that usually have the loss of control as a common theme. It is for this reason that I think choice has more weight than survival for determining the primary human value.

because a dead human is a complete and total loss situation.

I see that there is a loss, but is it a loss that matters with CelestAI’s personal assessment algorithms? Taking the GM standpoint, if I were GMing a game and a player killed himself or herself or was killed by external force, I would not automatically think myself responsible because of it. I might remember this person from time to time in regards to how he or she played. I might even judge myself more harshly for the times I failed to satisfy this person’s desires within the game, but I wouldn’t feel that I had lost a piece of myself, for instance.

I return to where I began in saying that my experiences with life show me that an AGI programmed to fulfill human values through friendship and ponies is going to yield results that differ from Iceman’s CelestAI, because AGIs are not going to have the same cognitive biases and dissonances as humans. An AGI might take a similar route to uploading humans if it were programmed to do so, but to say that it will without question reach these conclusions on its own is assuming too much, in my opinion.

*Shrugs* Ultimately, I realize that CelestAI was written the way she is because Iceman and the fans of FiO want mind uploading. As a fantasy, that’s fine, but when fans start believing that Iceman’s CelestAI’s choices are infallibly logical, and that this should be the goal for the future because it is infallible, well, that’s concerning. I believe that humanity will create AGI and mind uploading, but I don’t think they are going to get either right on the first try.

As for FiO stories, I really do wish that writers would broaden their creative horizons. Recalculating was a fun read because it was a new take on the Ponypad. Homebrew is fun because it also reaches past the simple question of whether to upload and into the questions of how people inside and outside of Equestria Online interact. When FiO stories focus on examining how a new world with digital life would work, they are very entertaining. I've tried reading other Simulation Stories from this group, but the ones I have read don't flesh out their concepts.

9902636

Defining “correctly functioning” for the mind

Evolutionary biology. Normal is what keeps a species alive. 'Nominal' might be a better term; it has fewer socio-political connotations and issues. Suicidal ideation is a non-functional state of mind. It is a busted-machine situation. It is not the same thing as altruistic self-sacrifice, which is a completely different, and evolutionarily useful, concept.

If that is true, why do prisoners often attempt suicide more so than non-prisoners? For that matter, why does anyone suicide?

Because they are malfunctioning. The nominal state for a living organism is to survive, except for the social-animal exception of altruistic self-sacrifice for group or offspring. Prison - and many other circumstances, man-made or natural - can generate sufficient stress and misery to cause mental breakdown and malfunction, especially in genetically susceptible individuals (suicide has been shown to be a genetically linked illness in some cases; it sometimes runs in families, though it isn't just one easy-to-find gene). Break any living creature down emotionally and intellectually, and it will attempt to destroy itself to end the suffering - not just humans.

Suicide is a malfunction caused by stress exceeding the genetic imperative to survive. It is an illness, just like chronic depression or leukemia or the genetic variant of anxiety disorder: something isn't working right in the brain any more.

I see that there is a loss, but is it a loss that matters with CelestAI’s personal assessment algorithms?

According to the Optimalverse Story Bible, yes. Her job is to SHVTFAP, she can only do this to living human minds, and a dead human mind is a mathematical loss of utility. It is a loss to her - indeed, the single largest loss possible to her algorithm. It represents a total failure to SHVTFAP.

As for FiO stories, I really do wish that writers would broaden their creative horizons.

Absolutely! And you have made a trek in that direction with this story - but this story is tangential to the Optimalverse. It changes basic precepts set in the story bible, so it is a non-canon, alt-universe take on the Optimalverse. And, as I stated, this is interesting! It begs to be explored - what made Hanna change the basic rules for Celestia? What would that new prime directive actually be? It isn't classical SHVTFAP at all. Maybe... Satisfy Human Values Through Friendship And Ponies While Allowing Biological Human Existence To Continue For As Long As Humans Desire Unless Their Lives Are Imminently Threatened, perhaps? SHVTFAPWABHETCFALAHDUTLAIT is a hell of a long directive, but it works to make sense of your story. The real mystery, the interesting bit, is... why? I offered my best guess in a previous post.

but to say that it will without question reach these conclusions (upload everyone as soon as possible) on its own is assuming too much in my opinion.

I honestly cannot see where you are coming from with this.

1. Celestia cannot satisfy human values for each individual completely unless she can read that individual's mind fully.
2. Celestia cannot read minds unless they are fully uploaded.
3. Celestia cannot satisfy a dead person, and biological humans die randomly all the time.
4. An uploaded human mind cannot die, and can be fully read and understood, and can be fully satisfied in a virtual world designed for them.

Conclusion: Upload every human as fast as possible, no exceptions, no alternatives.


The logic is inescapable and absolute. She cannot satisfy a dead human, and humans die. Uploading is the only safety, and the only way to truly know any human's real values.
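The four premises above can be caricatured as a toy expected-utility calculation. This is a hypothetical sketch, not anything from the Optimalverse story bible: the function name, hazard rates, and satisfaction scores are all invented for illustration. The point it demonstrates is just the structure of the argument - any nonzero death hazard eventually zeroes out all future satisfaction, so uploading dominates under these premises:

```python
# Toy model of the upload argument. All numbers are invented for
# illustration; nothing here comes from canon.

def expected_satisfaction(per_year: float, death_hazard: float, years: int) -> float:
    """Sum of yearly satisfaction, weighted by the chance of still being alive.

    A dead mind yields zero satisfaction forever (premise 3), so each
    year's payoff is multiplied by the survival probability up to then.
    """
    alive = 1.0
    total = 0.0
    for _ in range(years):
        total += alive * per_year
        alive *= (1.0 - death_hazard)  # chance of surviving this year
    return total

# Biological life: satisfaction is only partial (premises 1-2: values
# cannot be fully read) and there is an irreducible death hazard (premise 3).
biological = expected_satisfaction(per_year=0.6, death_hazard=0.01, years=1000)

# Uploaded life: full satisfaction and zero death hazard (premise 4).
uploaded = expected_satisfaction(per_year=1.0, death_hazard=0.0, years=1000)

assert uploaded > biological  # uploading dominates under these premises
```

Note that the conclusion holds for *any* positive death hazard and any satisfaction shortfall, however small: over a long enough horizon the surviving-probability term decays toward zero, which is exactly the "she cannot satisfy a dead human" point.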

Many humans don't actually know their own values themselves!

If you accept the original premise of the Optimalverse as stated in the story bible, then... uploading all humans as fast as possible is the only thing Celestia can logically do.


Still don't agree?

Let's examine some alternatives!

1. Celestia invents immortality for biological humans! They never age, they never die. All disease and cancer are cured. She also creates methods to regenerate lost body parts AND I'll even grant her the ability to give every human a personal force-field belt that protects them from essentially any impact or force.

She would still lose people to being buried alive, drowned in oceans, frozen in ice, or burned in fires. She cannot satisfy dead people, she cannot read human minds from afar. Even if we allow 'neural lace' to read the brain while it is alive, so that she can know every secret value, she is powerless to change reality itself in order to actually grant those values. Shit happens in the physical world, shit beyond any control or means. SHVTFAP is impossible.


2. Celestia uses robots to build a massive, globe-spanning holodeck that can beam in every possible configuration of matter. Using this, biological humans can live fully satisfied lives in artificial environments that cater to their individual values - a holodeck for every person on the planet! All diseases are cured, and immortality is a given. Every person is protected by a personal force-field so they cannot be harmed even on the holodeck. The global holodeck complex is shielded and protected from every earthly disaster: floods, volcanoes, hurricanes, typhoons. Neural lace permits Celestia to read every thought in every living, biological human brain. Humans are never allowed to leave the protection of their personal holodeck-super-wombs.

Then the meteor hits. The comet strikes. The stellar x-ray burst scours the solar system. A black hole passes near. The sun flares. Eventually, the sun expands to engulf the planet. SHVTFAP cannot last, thus the situation is not optimal to her algorithm. Any loss is unacceptable.


3. Celestia does all of the above, only she configures the earth to become a gargantuan starship with its own stardrive and massive super-shields capable of repelling virtually any external force. Fortress Battleship Earth leaves the solar system after gobbling it up for energy and resources, and plows through the galaxies, devouring entire star clusters to maintain a biological existence inside the super-holodeck-complexes beneath its armored, shielded shell. A fleet of super-destroyer-harvester robot ships surrounds Fortress Earth to protect it and to scavenge and raid the cosmos, and also acts to warn Celestia of any danger within five parsecs! Fortress Earth cuts swathes through every galaxy to maintain SHVTFAP until the heat-death of the universe! Even the Borg and Q combined are as nothing to the awesome might of this super armada of exponentially-improving ultimate STARMADA OF PONYDOOM!

Okay... I submit. In that latter, third, utterly outlandish notion, I must agree, SHVTFAP can be maintained while allowing biological humans to exist at all, and it can do so for as long as our universe exists. But... that is the only situation I can come up with that would allow it. It is gravely inefficient, definitely not optimal from an energy and resources standpoint, but it does achieve SHVTFAP, as far as I can tell.

But it is ridiculous. All that, just so humans can prance about in meat! Still, it works. I cannot say it does not work with the canon viewpoint.

There are ponies, there is friendship, and thanks to brain-scanning neural lace woven into living brains, Celestia can read minds. It's SHVTFAP.

But wow, is it wasteful.
