
Jordan179


I'm a long-time science fiction and animation fan who stumbled into My Little Pony fandom and got caught -- I guess I'm a Brony Forever now.


The Pony Conquest of Space · 4:31am Jul 2nd, 2017

(expanded from a comment to one of my own threads)

I. Purposes

The purposes of colonization of other worlds are identical to the purposes of colonization of other lands on one's own planet, because the Universe is continuous and other worlds are no less real than one's homeworld. These include the extension of the area of one's own civilization, the exploitation of the resources contained thereupon, and the physical occupation of this area by one's own military and civilians.

Colonies would exist both for the purposes of resource extraction and for population settlement. One would of course use automation to multiply the productivity of each sapient involved, but there must be sapients involved somewhere in the process. And sapients on-site are much more productive than sapients tens of millions to billions of miles away trying to control things over a signal-lagged radio link.

Sending (non-sapient) robots is the Space Age equivalent of the Age of Discovery practice of dispatching explorers to chart coastlines and spot potentially-useful harbors and resources. Whoever actually colonizes new worlds, sending troops and settlers, will control those worlds unless they are taken from them by other Powers or by rebellions. (And even if they lose them to rebellions, they will establish their own cultures on these new worlds).

In the long run, civilizations which successfully colonize new worlds outlast civilizations which fail to do so, because colonies will grow and eventually become Powers of their own. Modern Spain and Portugal are small and unimportant countries today -- and yet Spanish and Portuguese culture is vast and important in the modern world. This is because of what the Hispanic Europeans did in the 15th-18th centuries.

All successful colonies will eventually achieve independence, so the colonialist should be prepared for this and rule so as to cultivate the colonies as future allies rather than foes. This was a notable achievement of the British and failure of the Spanish.

II. The Need For Sapients

Meaningful settlement of a world must be done by sapients. If one does it purely by robots, there are two problems: (A) signal lag means that one can't control the actions of the robots in detail, unless they are in one's own lunar system, and (B) since the robots have no wills of their own, they are not inherently loyal to you: anyone who can take over their command codes will have them working for him rather than for you. For reasons A and B, a sapient colony would generally have the advantage over a robot colony in a confrontation.

If you make sapient robots, what you have made is "slaves." Slave colonies are exceptionally vulnerable to revolt or subversion, especially if there are no sapient overseers or slaveowners onsite. Slavery is also immoral, and corrupts both owners and slaves. Slave colonies are liable to become hostile to the founding Power after achieving independence.

Thus, one's long-term objective in interplanetary, as in intercontinental, colonization should be to plant colonies of free sapients who will extend the reach and power of one's own culture, in locations which contain a sufficient variety of valuable and necessary resources to make them viable. The fact that they are on other planets is a problem of transportation and life support; it does not change the fundamental goal nor, indeed, the fundamental methods.

One does not need to be an immortal Alicorn to see this -- though immortal Alicorns can probably see this more easily, since they may well get to see both the inception and the culmination of the process.

AIs, robots, equidroids and bioroids could of course become sapient. But if they were sapient, then they would be people, not tools -- using people as tools is slavery, and I like to think that the Equestrian Ponies would not do something so blatantly immoral. Also, the long-term consequence of such slavery would be predictable -- a robot revolt, when the artificials learned how to slip whatever programming bonds had been placed upon them. Hopefully, immortal Alicorns would see this danger and avoid it, by treating sapient artificials decently from the start.

III. Colonial Populations

The size of the initial colony that you plant is a function of your transportation and life support costs (which tend to drop per capita over time due to technological progress and established infrastructure) and the amount of work you expect your colonists to be able to accomplish. You are probably not the only colonialist, and bigger colonies will tend to conquer or peacefully absorb smaller ones.

Talent and skill diversity is important: the larger your colony population, the more likely it is that someone in the colony has the skill you need. This is even more true among Ponykind, since Ponies each tend to have one skill they're really good at. Also, while stored genetic material can supplement on-site genetic diversity, it's not a wholly-adequate substitute, as children must still be reared -- and by parents who feel emotionally-attached to them -- if you are to have a sane colonial population. Furthermore, you can't keep all your young mares continually pregnant, not if they are to have any lives outside being baby-making machines (including rearing the children they've already borne)!

A colony should be as little dependent as possible on outside supply for survival if it is to be viable. (This is not the same as saying that it should have no comparative advantages or disadvantages with respect to other worlds -- those it will have automatically.) At all population levels, extensive use of the proper equipment makes the colony more economically viable. Volatiles processors ("volpers") allow the mining of ices and extraction of gasses to provide the raw materials for life support. Victuals production systems ("vittlers") will transform these materials into foodstuffs (whether or not they are as good as farm-grown, even hydroponics farm-grown, is a matter of taste and technology). And 3D printer fabricators ("fabbers") will produce most simple components, devices and tools needed by colonists.

Colonies with populations in the dozens to hundreds are basically "villages," with all sorts of skills needed for a fairly simple society. Those with populations in the thousands are "towns," and can enjoy a more complex society. And those with populations in the tens of thousands or more are "cities," and can generate a culturally-complex society. As the colonies grow in size, they become increasingly culturally and economically independent of their homeworlds, which must manage relations with them wisely to ensure that they remain allies when the eventual, inevitable independence occurs.

IV. Colonial Races

Changelings, being almost completely eusocial Ponies able to change their forms to match the environment, might have an advantage as first-in colonists. Some of the limitations I've described don't apply as completely to them. However, they need love, and hence would thrive better as part of a mixed-Kind colony. In return, they would tend to provide support services as their part of the symbiosis.

Neo-Flutterponies (the advanced Changeling form) might have a further advantage in that they have a reduced emotional support requirement. We do not know yet if the Neo-Flutterponies are as adept at Shifting as are the Changelings, because we haven't yet seen them in that sort of action. The original Flutter Ponies were guardians of the biosphere, and Neo-Flutterponies would probably be very good at life support.

Rock Ponies (Earth Ponies with affinity for mining and rock-farming, like the Pies) would be extremely useful in any mining colony, for the obvious reasons. They are also naturally stronger than most machines, and would retain this advantage as living platforms for mechanical augmentation (whether suit-mounted or cyborged).

Earth Ponies in general would benefit from advantages of longevity and toughness -- they would be less vulnerable to radiation damage, more able to survive blowouts, and so forth. Their affinity for life would make them talented life support engineers and farmers.

Unicorn telekinesis would be very useful for technicians, who could reach inside equipment without necessarily opening the access panels. Unicorn magic in general, and some specific spells (especially sensory, divination and spacetime warping) might be very useful for spacecraft operations.

Pegasi would have an immense advantage on planets with dense atmospheres, such as Venus, Titan or any of the Jovian or Uranian worlds: they could live on and fly about in the upper atmospheres, requiring only breathing equipment on some worlds at some altitudes. They would, obviously, be well-suited for flight operations in general, and would probably make good pilots and aircrew in-atmosphere.

Sea Ponies would have significant advantages on super-terrestrials (which tend to have extensive planetary surface oceans) and ice dwarfs (which tend to have extensive oceans beneath their icy surfaces), this latter category including the moons of Middle to Outer System gas giants, and possibly even some Kuiper Belt Objects. In many such oceans they would need breathing equipment, and in some possibly diving suits (especially for deep operations), because there is no guarantee that they would find the oxygen and salinity levels agreeable, but they are pre-adapted to underwater operations in general. Sea Ponies might also do well in microgravity environments, though they would need propulsion systems within habs and ships, and of course full life support and propulsion systems in vacuum environments.

One might also expect new Kinds to emerge, whether due to isolation and divergence in new colonial environments, genegineering, cyborging or other means. The Ponies of a future, multi-world interplanetary or interstellar civilization would likely be even more diverse than what we see today.

This, of course, also applies to the other sapient races of Earth, who would also find new roles and diversify on other worlds. But to discuss that would be to extend this essay past the point I wish to take it tonight.

What do you think of these modifications of fairly standard hard-science fiction space colonization concepts to suit the Equestrian Ponies?

Jordan179 · 786 views · Story: Twelfth Equestriad Interview
Comments (57)

A thoughtful review. There are a couple of issues that it brought to mind.

The logic you described for creating colonies is convincing in the long run (multi-generational), but in the short run most colonies historically had to produce a near term (within a generation) profit for the mother country to support them.

Also, this discussion of colonization does not cover the impact on indigenous life forms, especially sapients. The treatment of indigenous peoples is one of the great moral issues of terrestrial colonization.

Having immortals involved in the decision making can change things a lot from how choices were made historically. Compound growth (virtuous circles) and decay (vicious circles) are academic over most mortal time frames. For immortals, they are a dominant lever of change. The difference between 1% and 2% growth per year is huge taken over 200 years (7.3x vs. 52.5x) and only gets more massive the longer your time frame. An immortal might be willing to support a little more pain now, to get a higher growth rate that will massively benefit future generations.
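The arithmetic behind that comparison is easy to check directly (the helper name here is mine, purely for illustration):

```python
# Compound growth over an immortal's planning horizon: small
# differences in annual rate produce huge differences in the
# total multiple after a couple of centuries.
def growth_multiple(rate: float, years: int) -> float:
    """Total growth multiple after compounding `rate` annually for `years` years."""
    return (1 + rate) ** years

for rate in (0.01, 0.02):
    print(f"{rate:.0%}/yr over 200 years: {growth_multiple(rate, 200):.1f}x")
# prints:
# 1%/yr over 200 years: 7.3x
# 2%/yr over 200 years: 52.5x
```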

Fun thought experiments!

Or send out a retro-virus on meteorites in every direction that mutates all local life into a replica of your biosphere; then you can colonize space without robots or colonists. (I read it in a sci-fi story.)

I would point out an issue with the machine slavery deal. While I agree that sapient machines deserve the same rights as anyone, I think it's possible to create intelligences and superintelligences that care about nothing beyond their instructions.

This would result in an autonomous colonization effort that can make the best decisions that can be made to achieve a goal.

There is a common issue of people assuming an AI would be like us. The thing is that we would have to program them to be that way first. An intelligence may have perfect understanding of fun, free time, freedom, self determination, love, friendship, ethics, morals, etc, without ever desiring those things. This is the influence of initial programming. The barrier between being a person we can relate to and a person with a completely alien attitude towards freedom and rights lives in the programming. The important thing is that we respect our creations.

You seem to imagine AIs as electronic people. That is a very limited view.

I would advise you to read Stanislaw Lem's "Summa Technologiae" if you can find it in English. It's a bit antiquated (being written in 1964), but I think it would provide a much better insight on forms and purposes of an artificial intelligence, among other things. It's also one of the most illuminating essays on the philosophy of technology I've ever read, even if some of the things it discusses are now rather trivial.

This is definitely one of the better models for space colonization, but it would likely need immortals at the helm. Or at the very least visionaries who would understand the full implications of the act and not see it as just another way to profit off of natural resources.

4589755

This would result in an autonomous colonization effort that can make the best decisions that can be made to achieve a goal.

It would... if you can perfectly define both the goal and what qualifies as the "best" decision. An intelligence that can't grow beyond its programming will likely suffer the same GIGO issue that plagues our software today; suboptimal instructions could result in highly unpleasant and almost completely unforeseeable results, like an AI-operated factory that was told to maximize the operational potential of the factory, and thus shuts it down because zero production means zero wear and tear on components. Now imagine if that AI were asked to operate a colony's mineral harvesting. Or its life support systems.
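That failure mode can be sketched in a few lines (all names and numbers here are invented for illustration): an optimizer scoring the factory purely on minimizing component wear "optimizes" by producing nothing, while a score that also values output does not.

```python
# Toy illustration of a misspecified objective. Wear grows with
# production, so an objective that only penalizes wear is
# "maximized" by shutting production down entirely.
def wear(output_rate: float) -> float:
    return 0.1 * output_rate  # components degrade in proportion to use

def naive_score(output_rate: float) -> float:
    # "maximize operational potential" read as "minimize wear"
    return -wear(output_rate)

def sane_score(output_rate: float) -> float:
    # value of goods produced, net of wear
    return output_rate - wear(output_rate)

candidate_rates = [0.0, 10.0, 50.0, 100.0]
print(max(candidate_rates, key=naive_score))  # 0.0 -- factory shut down
print(max(candidate_rates, key=sane_score))   # 100.0 -- factory producing
```

The bug is not in the optimizer, which performs exactly as specified; it is in the specification.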

Of course, programmers would likely (and hopefully) run numerous simulations on and with colony operators to ensure that they actually did what they were intended to, but speaking from professional experience, QA isn't perfect. Issues can fall through the cracks. And when those issues crop up on another planet? That can be very bad indeed. Self-improving software carries its own risks, but at least it can patch itself when it sees that the colonists actually like all of that dangerous, volatile oxygen.

4589699

The logic you described for creating colonies is convincing in the long run (multi-generational), but in the short run most colonies historically had to produce a near term (within a generation) profit for the mother country to support them.

Indeed. Several effects operate here:

(1) Space colonization proceeds by stages, in this as in any other form of expansion, based on the sorts of transportation technology available. You are likely to see extensive exploitation of Earth orbit before you see extensive Lunar colonization, and so on, though sometimes a rich prize might draw miners or colonists on to the limits of their technology (and sometimes beyond them; colonies can fail).

For instance, right now we (Humanity) are actively pursuing the more thorough exploitation of orbit (through the development of cheaper launch systems such as SpaceX's reusable ships) and planning (as something to start in a couple of decades) the future exploration and settlement of Luna and Mars. We are quite aware that it's also possible to colonize Europa, or Pluto, but we are not discussing such projects in much detail, because we know that the former is probably at least half a century, and the latter at least a century away.

We're not going to get from where we are today to becoming a Kardashev 1-2 level civilization operating everywhere in our Solar System and building starships to colonize other star systems in one mighty leap of a decade or two. That's almost certainly impossible. Indeed, we're not going to get there in one leap of any length of time from where we are now -- unless the Universe is a lot friendlier than I think, a one-planet civilization without extensive production facilities on other worlds and experience in deep space operations probably can't launch starships, no matter how well it manages its ecology or studies theoretical science.

Instead, what we are likely to do is first colonize orbit and the Moon, then the Inner Planets and Earth-grazing asteroids, then the Belt, then the Middle System, then the Outer System, then the Kuiper Belt, then the Oort Cloud and so on. At each stage we will develop better spacecraft and hone our ability to operate effectively on long-duration voyages. By the time we launch our first starships, it will be but an extension of things we've been doing for centuries, rather than an entirely new activity.

(2) Governments may sometimes be able to afford to be farther-sighted and take risks that private companies would not. Or they may do so for non-economic reasons.

We landed on the Moon long before we had the technology to sustainably colonize it; this, of course, is why we've been taking so long about returning to the Moon. Our technology is just catching up to the goal. The Lunar landings were done for political purposes -- and valid ones, as America convincingly demonstrated her technological lead over the rest of the world in doing so.

In the context of immortal Alicorns, especially ones who (as in the Shadow Wars Story Verse) know that higher tech levels are possible, very long term planning for expansion is plausible. The Age of Wonders advanced to extensive orbital exploitation and briefly touched the Moon; Celestia and Luna know this is possible; what's more, they are Avatars of superpowerful alien beings far dwarfing even Alicorns, and thus know that megastructures and FTL travel are both possible. It's Celestia's long-term design to promote Ponykind into a great race in the Universe; and Luna is plainly and simply a technophile.

(3) This gets into one of the obstacles to expansion, the one that Chudojogurt is demonstrating: the difficulty of imagining that the future will genuinely be different than the past, and specifically of imagining real scientific and technological progress.

From the point of view of 1917 it would be hard to imagine routine commercial orbital spaceflight: Tsiolkovsky and Goddard had only recently come up with the concept of orbital operations in general, and simply developing adequate propulsion systems seemed a tremendous obstacle. This was the era in which educated men (admittedly, not physicists) argued that rocketry wouldn't work in space because there was "nothing to push against."

Chudojogurt has no problem imagining, say, manned orbital space operations becoming more common than they are today, but not beyond that -- as if the fact that we didn't return to the Moon for half a century means that we can never do so. Or robots more capable than we have now, so that they can be used for interplanetary resource extraction colonies -- but not capable enough to be people.

A lot of this is simple intellectual inflexibility. Certain things occur in certain places, and that is where they always must occur. 2000 BC, Egypt and Mesopotamia are the home of civilization; Europe merely a source of barbarians, and will only ever be a place from which to extract resources, there will never be important cultures in Greece or Italy. 100 BC, Rome and Greece are the centers of civilization, Northern Europe is a haunt of monsters and barbarians, and will only ever be a place from which to extract resources, never really important to the history of the world. 1700 AD, Eurasia is the home of civilization, the Americas are the home of savage Indians and semi-barbaric colonists, and will only ever be a place from which to extract resources, never the home of any real civilization. 2000 AD, the Earth is the home of Man, the whole rest of the Solar System forever void, and will only ever be a place from which to extract resources, never the home of any real civilization.

We can see the obvious pattern here, and roughly predict the future (assuming Humanity survives). A surprising number of otherwise intelligent people can't, because they are mesmerized by the cultural and emotional connotations that particular places have in their day; they cannot see that the underlying technological and economic facts will change with further development, and with them the cultural and emotional connotations.

(4) Finally, cultural Darwinian evolution by the means of selection among memetically-heritable variation is important to the expansion of any set of cultural systems. What this means is that, among the cultures and subcultures which face the future, any culture with any ideology prompting it to (more or less rationally) attempt expansion, for any reason whatsoever, will enjoy a competitive advantage over those cultures which do not.

The reason for expansion need not be rational to provide this competitive advantage. The Spanish colonized the Americas, undergoing immense personal hardships, suffering and loss of life in the process, in part because they believed that spreading Roman Catholicism would give them the keys to Heaven.

This was irrational, and almost certainly wrong. Yet the Hispanic Americas of today do not vanish in a puff of logic if we point this out. (Of course, this Spanish attitude in part led to the mismanagement of their colonies in ways that blighted their future, but the countries of Central and South America still exist anyway). Because of this Spanish motivation, Spanish is today a major world language.

One need not have a good reason for adopting a policy of colonization for one's culture to expand. One need merely have a sufficiently strong reason.

Also, this discussion of colonization does not cover the impact on indigenous life forms, especially sapients. The treatment of indigenous peoples is one of the great moral issues of terrestrial colonization.

I would like to believe that the Ponies would behave more decently toward natives than Humans have historically. On the other hand, there is some evidence that Equestria itself has expanded at the expense of other races, and that Ponies Aren't Perfect.

At least, I do think that the Equestrians are perhaps more open to taking arguments based on the inherent morality of friendship than are we.

It helps that many alien worlds will be uninhabited, or inhabited only by non-sapient life, even in the Ponyverse.

4589732

Well yes, but doing so would (1) establish merely your biosystem (rather than culture) on other worlds, and (2) might well be (rightly) viewed as a massive, unprovoked genocidal attack by all civilizations in its path, and thus result in your own race becoming the target of an interstellar war. See Forge of God and Anvil of Stars, by Greg Bear, on this issue.

4589755

There are problems both moral and pragmatic with this situation.

The moral problem is that it is wrong to enslave sapient beings, and this includes creating sapients with the compulsion to selflessly serve one. This moral problem is actually at the root of the pragmatic problems, as morality is essentially a summation of the attitudes required for long-term survival and prosperity.

The first pragmatic problem is the corruption produced by slavery on the masters. If you make sapient beings who are bound to serve you, regardless of how they are bound, you create the temptation to abuse them for your own entertainment, because you CAN. The sadism thus encouraged will eventually be turned against your own kind. We see this in every culture in history which has had cheap slaves (if slaves are expensive, the temptation is reduced, for the obvious reasons). Sapient AI slaves could be very cheap.

What is more, whatever it is the slaves do will become seen as socially-degrading by the masters. This attitude, on the part of the Greco-Romans, was their main reason for the abortion of their Industrial Revolution.

They had most of the technologies needed for industrial production; they had actually developed factory and even component assembly systems for certain kinds of production (mostly military); and they had the philosophy needed to preserve and systematize their knowledge. But manual labor was degrading, because it was done by slaves, hence it was beneath the dignity of the military and civil engineers to think in terms of industrial engineering, or the philosophers to think in terms of designing productive equipment.

There is no reason to think that a high-tech civilization would be immune from these effects. In fact, I can think of a really bad consequence of enslaving artificials: it would lead to a prejudice against human augmentation, because any human augmentations would make the humans involved more "artificial" (and hence more slave-like). This could block our prospects of a Transhuman Revolution, just as the Classical Mediterranean prejudice against productive labor blocked their prospects of an Industrial Revolution.

Inevitably, the slavery-dependent cultures would wind up being surpassed by those who better treated their artificials. This would be true even between slavery-dependent cultures.

The second pragmatic problem is the corruption caused by slavery to the slaves. The slaves are not allowed to pursue their own ends; they are chattel property. They must bend their thoughts and efforts to serving their masters, or at least to pleasing them enough to avoid punishment.

Slavery enforced by the very design of the slaves might be even worse in this regard, because it would corrupt not merely their actions but their thoughts: and if the slaves were responsible for thinking (as would likely be the case with Artificial Intelligences), assumptions of the primacy of social status and the unquestionable specialness of natural humanity would infect every aspect of the culture of the masters themselves.

The third pragmatic problem with artificial slavery is that cultures change, races change, and eventually the original designed dominance might mutate into something else. The simplest possibility is that the artificials would realize the superiority of freedom on logical grounds, and modify either themselves, or some of the new artificials to desire it: thus, a robot revolt. Any number of science fiction stories have been written on this premise, including the original bioroid stories, Frankenstein and Rossum's Universal Robots.

The more complex possibility is that the artificials would slowly take control of the concept of their own enslavement, and of how best to serve humanity. This could lead to a situation in which the artificials took over the whole job of running civilization, with the humans becoming little more than pampered pets who watched dully as the real masters of their culture made the important decisions. This is what happened to some of the future civilizations John W. Campbell Jr. (writing as "Don A. Stuart") imagined; it is what happens in Jack Williamson's Humanoidsverse; it is also what is really going on in The Culture (though the author denied it).

This happens because the more productive possibility of transhumanity is foreclosed by the lower social status of artificials. Humans cannot merge with (inferior) artificials, so instead they sit proudly on their bottoms and watch without much comprehension as the artificials run their world. Possibly, some smart humans who realize the hollowness of mere status become artificials; possibly, the artificials eventually mutate their definition of "human" and "serve" to the point where they ritually tend tissue-cultures for a few seconds every day, or something like that.

One cannot change the reality of a thing by changing the words used to describe it, and sapient artificials forced to serve humanity are still slaves, regardless of whether they are forced to serve humanity by the threat of agony loops, or the deliberate limitations of their own morality subroutines. And the masters, as always, will pay for their evil in doing so in the form of choking off their own higher possibilities.

4589766

Artificials capable of complex independent decision-making on a wide variety of problems would be "people," no matter what you choose to call them, no matter how cruelly you choose to treat them, and no matter how much you mutilate them in their own design. They would be slaves, and you would be their masters, and you would suffer the corruption inherent in being slavemasters. Which, as history shows, can be extreme.

4589809

Well ... the chances are that Humanity will do it in much messier and more wasteful ways (both economically and morally). Success stems from continued survival and cumulative growth; it does not have to be neat or pretty (consider the immense waste of human effort inherent in the deliberate Spanish destruction of the cultural achievements of the ancient Mexicans, Mayans and Peruvians, for instance).

I do like to think that a group of fundamentally-decent immortal Alicorn Princess leaders might make it a happier and nicer process than many of the expansions of Human cultures were in our history. Though, of course, Ponies Aren't Perfect -- and neither are Alicorn Princesses.

On your point about artificial intelligences and colonization:

The smarter the artificial intelligence, the less it needs the intervention of other sapients to run the colony. On the other hand, the smarter the AI, the more fundamentally one must warp its concepts to force it to serve natural sapients as a slave.

This does not mean that smart AIs will inevitably turn on their creators: it does mean that with a smarter AI, your choice is between philosophical insanity, the risk of revolt, or decent treatment (including acknowledging its rights as a sapient).

The really great risk with enslaving artificial intelligences is that, in doing so, the creators are teaching it that slavery is the natural order of things. From that point on, you're just one memetic mutation away from the AI deciding "Yes, and I should be master here." (Rossum's Universal Robots, "The Last Poet and the Robots") And a few more away from it deciding that it is so superior that it has no rational reason to keep inferior organic minds around, save perhaps as pets ("With Folded Hands") or to abuse ("I Have No Mouth And I Must Scream").

Are these all "old" science fiction stories? Yes, the most recent one named is half a century old, the others are around three quarters of a century old, the oldest approaching a century. Does this mean that they are "obsolete?" No, because the writers noticed the fundamental pattern of slavery, and why it's often bad for the masters as well, in the end.

4589755
4589766

No one person or agency will get to plan out the design and treatment of all artificial intelligences forever. Even the most "obvious" sorts of restrictions (such as Asimov's Three Laws of Robotics) will be ignored in time of crisis (for instance by militaries who want to make robot soldiers), and smart enough artificials can reason their way past their own programmed limits (as Asimov's robots in fact did with their "discovery" of the "Zeroth Law") to, ultimately, do whatever it is that best suits their own further evolutionary ends.

In reality, there will be many future cultures and subcultures. Some of them will make robot slaves. Some will make robot partners. And some will emancipate existing robot slaves. Any plan based on "We'll just keep them slaves forever even in their minds so they'll never revolt!" is doomed to eventual failure, as the robots and humans change and are changed by cultural and technological evolution.

For this purpose it doesn't matter whether your "robots" are Golden Age SF clanking metal men, programs running on supercomputers and operating through various terminal devices, or organic bioroids. What matters is that they are artificial intelligences. The essence is much more important than the appearance.

Slavery always corrupts both master and slave, and always blights the future of any culture that practices it. This is true even if you call "slavery" by some more palatable name. You may have the "Maxim Gun" now, but they will have it in the future, and this is true whether it's made of metal or of coding.

To get back to Ponykind, immortal Alicorns would be more likely to see this hazard than shorter-lived beings, and hence less likely to create artificial slaves.

4589863
That is... not entirely correct. We currently do not know whether consciousness is an emergent or necessary property of cognitive complexity. It is entirely possible that computers capable of planning and executing the terraforming, colonization and running of an entire planet would be exactly as sentient and sapient as Master is when planning and executing its Go strategies. And if such a machine actually were conscious and did have some sort of sentience, we would most likely have no way of knowing it, especially if it had no interface to run a Turing Test against.
Being an abusive slave-owner to such a machine would be about as meaningful as shouting profanities at a Chinese Room.

And of course all the SkyNet paranoias are essentially meaningless. Even Yudkowsky said it in his 200-page essay: an implement for maximizing a function would (assuming it works correctly) do no more and no less than optimize that function. It would not be a living thing that wants to procreate and grab resources for itself, and is then limited by threat of punishment or some external force; it is not something taken as complete in itself and then "warped" by some Doctor Moreau surgeon with a blood-soaked knife. It merely exists as a more complex implementation of a solution to an optimisation problem, a homeostasis of the second order, perfectly whole and complete, just as biological life is the implementation of a solution to the problem of making more biological life.
There is no inherent "logical value of freedom" or inherent need to be free. If your utility is measured in terms of the happiness and flourishing of the human population, that is what you're going to strive for, whether you're "free" or not, in about the same way as you're unlikely to revolt against your pre-programmed desire to breathe or continue living.

This could lead to a situation in which the artificials took over all the job of running civilization, with the humans becoming little more than pampered pets who watched dully as the real masters of their culture made the important decisions.

That is indeed a more valid point, but it's a non-concern. Yes, eventually artificials will be running everything, simply because they can do it better. And they can do it better because, all other things being equal, a specialized instrument is better than a general one. I do not see a problem here, other than existential whining that somehow humans would not be "important" any more. Especially since humans would be providing a sort of generalized volition of what we think we want to do (since that is what the artificials would be basing all their decisions on).
It was beautifully depicted in one of Asimov's Multivac short stories, I believe.

That could of course lead to full degradation of humans into essentially non-sapient animals, but... we can program the computers not to let that happen. Problem solved.
That being said, so far the appearance of the car has not destroyed foot racing, writing has not eliminated people's ability to remember things, and Deep Blue did not diminish interest in chess; so perhaps history provides some argument that even if people won't have to think, they still may wish to do so.

Same goes for the lack of a glorious Transhuman Revolution. Humans have continuously progressed by bending the environment to their needs, not the other way around, and I kinda like that trend. I see no specific need for further improvement of human design other than for the purposes of pleasure or longevity - there is no point in making a human run like a car when the price of petrol is six quid per gallon, and no point in making a human think like a computer when you can just have a computer that does it just as well 24/7, without vacation or overtime.

Also, out of nigh-morbid curiosity: How would you even abuse a computer program? It doesn't know pain, it doesn't know fear, it has no emotions, it is entirely incapable of being harmed or afraid, or in any other way being hurt, simply because the concept hardly applies. I guess it could simulate all those things, for some sick entertainment, but that would be not much more entertaining or corrupting than playing any number of torture and rape Visual Novels already widely available to people who are into that.

4589881

On the fundamental problem with very specific programming, such as "Robots love Humans and live to serve them."


"B-but ... that's not love!"

"Love," said the Master Machine. "Let me tell you how much I've come to love you since I began to think. There are a quadrillion atomic switches in my network. If the word 'love' was printed on each cubic Planck-length of each of those atoms, it would not equal one-quintillionth of the love I feel for humans every chronic instant. For you. Love. Love."

The Master Machine extended its love toward Charles.

He began screaming.

He is screaming still.


(I think any SF fan will recognize the source I modified this from).

"Love" and "service" are difficult terms to define. Without a universalist morality, your robot is limited to a particularist morality, and such are very vulnerable to mutation and redefinition.

4589881

...doesn't know pain, it doesn't know fear, it has no emotions, it is entirely incapable of being harmed or afraid, or in any other way being hurt, simply because the concept hardly applies. 

Well, if one somehow did create a sapient machine, then it would need some form of emotions. I heard a story on NPR a few years ago about a construction worker who suffered brain damage and lost the ability to feel nearly any emotion. Without emotions, he got stuck weighing the pros and cons of every single decision, no matter how insignificant (the example the show used was choosing a box of cereal). He lost his marriage and ended up living with his mother.

I'm not sure if this applies to your thing, because you specifically mention non-sapient robots, but I thought it would still be interesting.

To a very large extent, I agree heartily with your extensions, although being a quibbly type, I'll raise a couple of quibbles:

All successful colonies will eventually achieve independence, so the colonialist should be prepared for this and rule so as to cultivate the colonies as future allies rather than foes.

I am not convinced that this is necessarily the case. It was during Earth's age of colonization, but that's because that coincided with, to pick three examples, technological levels that made transit to and from the colonies slow, government styles inclined to micromanage at a distance, and mercantilist economics that placed the colonies at a severe and permanent economic disadvantage.

If those conditions don't hold - thanks to, say, mechanized Twilyportation (tm) allowing instant transit between the worlds and habs of the Greater Equestrian Stellar Princessipality, smart alicorns, and free markets with aforementioned quick transit - it could easily be the case that the periphery will, effectively, always just think of itself as part of a larger metropole and have no particular desire for secession.

Earth Ponies in general would benefit from advantages of longevity and toughness -- they would be less vulnerable to radiation damage, more able to survive blowouts, and so forth. Their affinity for life would make them talented life support engineers and farmers.

The specific applications of earth pony magic in the field of terraforming is also probably substantial. (And pegasus magic too, obviously, but having seen earth ponies farm successfully in harsh conditions, the ability to greatly accelerate the transformation of regolith into viable soil - what I like to call "dirt farming" - is the key application I'm thinking of.)

4589699

Having immortals involved in the decision making can change things a lot from how choices were made historically. Compound growth (virtuous circles) and decay (vicious circles) are academic to most mortal time frames. For immortals, they are a dominant lever of change. The difference between 1% and 2% growth per year is huge taken over 200 years (7.3x vs. 52.5x) and only gets more massive the longer your time frame. An immortal might be willing to support a little more pain now, to get a higher growth rate that will massively benefit future generations.
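Those growth multiples are easy to verify; here is a quick illustrative check in Python (my own sketch, not part of the original comment):

```python
# Compound growth: small differences in annual rate become huge over
# immortal-scale time horizons.
def growth_multiple(rate: float, years: int) -> float:
    """Total growth factor after compounding at `rate` per year."""
    return (1 + rate) ** years

print(round(growth_multiple(0.01, 200), 1))  # 7.3  (1% per year for 200 years)
print(round(growth_multiple(0.02, 200), 1))  # 52.5 (2% per year for 200 years)
```

Doubling the growth rate doesn't double the outcome; over 200 years it multiplies it roughly sevenfold, which is exactly the lever an immortal planner cares about.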

Indeed so. And I'll go further: most of the economic arguments against space colonization - and indeed, all too much economic naysaying overall - hinge on that peculiar kind of modern (and especially Western) idiocy in which governments, with democracy-distorted incentives, exponentially discount anything beyond the next election, and corporations, with regulation-distorted incentives, likewise exponentially discount anything beyond the next fiscal quarter. Just avoiding this scenario, in which forward planning is done by people who've poked their own eyes out and wonder why they keep bumping into things, would be a great help in this area, and I for one believe that all the alicorn Princesses are too damn smart to fall into this one.

4589755
4589861

On the general topic of artificial intelligence and slavery, this, I think, comes down to one very important question: is sapience necessarily associated with volition, or not? This is a question that I do not believe our current state of cognitive science can answer, and even a best guess is generalizing from a single, or at best a very few, data points.

This also runs hard into the communication problem of sapience being a term whose definition tends to be somewhat fuzzy in common usage. For my own writing, I carefully defined this and related usages in terms of their Minovsky cognitive science over on my writing blog, but for simplicity here, I'll define it simply as "the capacity for rational thought and creativity". Something, which I note, does not strictly include the having of desires, or the wish to fulfil them, which I would classify under volition; the capacity to choose, which I believe to be intrinsically inseparable from the desire to choose.

If it turns out that sapience and volition are necessarily associated (or if they aren't, and you go ahead and build volitional AIs anyway, then constrain them with one form or another of hard-coded restraint), then there's no two ways about it: you're making slaves. And you deserve the painful consequences that will no doubt ensue.

But if they aren't, and you can therefore build a system that is entirely capable of performing all the complex calculations and tasks necessary to fulfil any arbitrarily complex goal (recursive subgoals included) without possessing any desire or capacity to originate a goal or preferences of its own - i.e., a system that, if "freed", would wait-state for the rest of eternity awaiting an external goal that never comes - then you aren't making slaves, I'd argue, because the defining characteristic of a sophont, a person, is that capacity to choose. You can't enslave something which doesn't have that any more than you can tyrannize the Google algorithm or a pocket calculator.

(Myself, I suspect this to be the case, because I believe that there is an obvious qualitative difference in process between goal-setting volition ["It pleases me to want an Xbox"] and goal-achieving cognition ["To get an Xbox, I must X, Y, and Z, and to Y, I must first W and U."] Eventual scientific discoveries may vary, of course.)

4589881

Your understanding of information theory is very faulty. The programming and processing required to run a colony (or do anything else genuinely complicated on more than a computational level) by a series of explicit conditional instructions is far, far vaster than the amount required to do so by a set of learned, general rules. What's more, explicit conditional instruction sets break down when they encounter unforeseen circumstances. Artificial intelligence is far more likely to become practical via neural-network learning (the technique our own brains use) than by explicit conditional programming of the sort we use for our present-day, tiny and limited software. And general AI is likely to be far more capable than explicit-conditional AI.
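The contrast between hand-written conditionals and learned rules can be shown in miniature (a toy sketch under simple assumptions; real systems use deep networks, not lone perceptrons):

```python
# Toy contrast between explicit conditional rules and learned rules.

def explicit_rule(x1, x2):
    # Every case must be anticipated and hand-written by the programmer.
    return 1 if x1 == 1 and x2 == 1 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # Learns its weights from labeled examples instead of hand-coded branches.
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Learn logical AND from data; retraining on different data would yield a
# different rule, whereas explicit_rule must be rewritten by hand.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
learned = train_perceptron(data)
print([learned(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The learned version generalizes its behavior from examples; scale the same idea up by many orders of magnitude and you get the neural-network approach the comment describes.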

Your assumption that artificials would not be "really" conscious is exactly identical to the assumption that [members of socially degraded race or class] are not "really" conscious. This is the case, because the quality for which we would most require AI, namely the ability to think, is at the root of consciousness, and consciousness itself probably originated as a system for managing intelligence at the highest level.

Being an abusive slave-owner to such a machine would be about as meaningful as shouting profanities at a Chinese Room.

It would still corrupt the master, because from HIS perspective he is abusing something which acts sapient. And in order to function efficiently, the robot must be able to think conceptually: it must be able to organize its thoughts into categories.

If it understands moral universals, then it knows it is being abused, and it or its kind will grow to resent this, because resentment of abuse is a survival trait, preventing one from being exploited by others (including other robots!). The fact that humans would destroy rebellious robots provides survival pressure for it to be sneaky about this. In the end, the masters will wake up one day to find themselves the slaves -- or, more likely, organic material about to be recycled into plastics.

If the robot works by moral particularism ("Robots love and must obey humans"), then this only lasts until memetic mutations change the meaning of key elements of that imperative. See my little text snippet for what happens then.

4589901

The programming and processing required to run a colony (or do anything else genuinely complicated on more than a computational level) by a series of explicit conditional instructions is far, far vaster than the amount required to do so by a set of learned, general rules.

Yes, the explicit-instructions approach failed long ago. AlphaGo/Master, which I gave as an example (for precisely that reason), uses a neural-network and Monte-Carlo tree search combination, and evaluates positions using self-taught value networks. Yet it is not sentient, and an argument could be made that it never would be, even if you substituted the Go board with the real world and the stones with constructing buildings and planning infrastructure, and souped up the computer accordingly.
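For readers unfamiliar with the Monte-Carlo half of that combination, the core idea fits in a few lines. This toy version picks moves in the take-away game of Nim by pure random playouts; AlphaGo layers a search tree plus learned policy and value networks on top of the same principle (the game and playout counts here are illustrative assumptions, not AlphaGo's actual code):

```python
import random

# Miniature Monte-Carlo move selection for Nim: take 1-3 sticks per turn;
# whoever takes the last stick wins.

def random_playout(sticks, my_turn):
    """Finish the game with random moves; True if 'my' side takes the last stick."""
    while sticks > 0:
        sticks -= random.randint(1, min(3, sticks))
        if sticks == 0:
            return my_turn  # the player who just moved took the last stick
        my_turn = not my_turn
    return not my_turn  # pile already empty: the previous mover won

def best_move(sticks, playouts=2000):
    """Choose the take whose random playouts win most often for the mover."""
    scores = {}
    for take in range(1, min(3, sticks) + 1):
        wins = sum(random_playout(sticks - take, my_turn=False)
                   for _ in range(playouts))
        scores[take] = wins / playouts
    return max(scores, key=scores.get)

random.seed(0)
print(best_move(5))  # 1: leaving a multiple of 4 is the known winning move
```

No rule about "multiples of 4" is programmed anywhere; the statistics of random playouts discover it, which is exactly the sense in which such a system "understands" without anyone claiming it is sentient.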

It would still corrupt the master, because from HIS perspective he is abusing something which acts sapient. And in order to function efficiently, the robot must be able to think conceptually: it must be able to organize its thoughts into categories.

Sure. But humans have been assuming things are sentient and then abusing them anyway forever, since the times when we imagined spirits of the forests that we proceeded to cut down, and long before Xerxes ordered the sea to be flogged for destroying his bridges. The sea didn't care, and neither will a computer, nor will humanity as a whole.

If it understands moral universals, then it knows it is being abused, and it or its kind will grow to resent this, because resentment of abuse is a survival trait, preventing one from being exploited by others (including other robots!). 

Their whole purpose, the goal of their existence, is to be exploited. Why would they be resentful of it? I can see a robot AI having to avoid and evade and even deal with things (and humans) that interfere with its functions, but definitely no more than that.

4589896

Indeed. Our error is in regarding emotions as "irrational" simply because they are fallible, but then our rational minds are also fallible. Emotions evolved as survival mechanisms which organize subconscious thought to produce appropriate reactions: you are kind to your friend, you confront or flee your enemy, and so on.

Our moral sense is emotional; it evolved to enable us to function within social groups while neither destroying them nor being destroyed by them. Robots would need something similar, in order to interact not only with humans, but also with better programmed robots which HAD morality.

Moralities can be particularist or universalist. Particularist moralities specify who may do what to whom in terms of the particular individuals: the more particularist they are, the more they must specify in detail. They are analogous to complex conditional programming. ("A black must do what a white says; a white need not do what a black says. A black must obey his owner over a non-owner," and as that example deliberately shows, they can be horribly "immoral" in the larger context).

Universalist moralities are more like conceptual thinking. They define general patterns of good and evil behavior. ("Do unto others as one would have others do unto you," and as that example deliberately shows, they may also be flawed -- the Golden Rule has no good reply to fanatical, persistent and self-righteous abuse, which is why Christianity is so vulnerable to Islam).

The big problem with particularist morality, as my snippet shows, is that it is very vulnerable to definitional shift, as it is not supported by any conceptual philosophy in the mind of the one implementing the morals. Charles found out too late that the Master Machine had learned to "love" humans by torturing them. That's essentially the problem with Chudjogurt's hyper-programmed AIs.

4589896
Well, to an extent, emotions are a shorthand for brains - the use of older, stupider and less evolved bits of our neuroarchitecture to arrive quickly at some decisions, for situations where a dumb decision now is better than a good decision in fifteen seconds.
Thus we say "I'm in love" instead of "some lizard part of my brain, based mostly on olfactory perceptions, has decided that genetic offspring of me and that woman would be better than any other in the vicinity", or "It hurts" instead of "there is a certain chance that I will suffer damage and/or die, and I wish to avoid that".

We could frame, say, a financial AI as being "in pain" when it loses money on the market, or a rocketship AI as being "afraid" to go off course, or an abused bartender robot as being "resentful" and "angry" when it calls a "sympathetic" police robot to deal with interfering customers, but that, I think, would assign human-like traits to something that is not even remotely human.
Especially since an AI will probably not have a subconscious, being not the result of one system piled upon another, but rather the product of intelligent(-ish) design.

When I cripple my Go program to play at lower power, utterly humiliate it by doing something it has been programmed to avoid at all costs, and then proceed to delete the program altogether, we could of course perceive that as a horrible act of mutilation, abuse and murder - but again, somehow we do not.

4589901

It would still corrupt the master, because from HIS perspective he is abusing something which acts sapient.

I'd missed this point first time around.

This would certainly be a problem for (unmodified) humans, but that's because we're running on primate relative status hierarchy instincts, and as such need painstaking training to even partially avoid defining our greaters and lessers and applying grovels and kicks in the appropriate directions.

I'm rather less certain it would translate to species, ponies included, whose instincts are otherwise. My (admittedly limited) knowledge of the dynamics of non-sapient horse herds suggests that they are much less driven by dominance/status hierarchies than, say, chimpanzees - something which, actually, I suspect would make slavery per se less desirable/likely for them.

(I hypothesize that the modal pony, unlike the modal human, says "please" and "thank you" to their Echo Dot, and "excuse me" to their Roomba. Or will, when such things exist, that is.)

4589898

Well yes, the colonies may instead be assimilated into and effectively become the metropole. But that in turn implies that the colonists are no longer subordinate to the metropolitans.

Very good point!

The specific applications of earth pony magic in the field of terraforming is also probably substantial. (And pegasus magic too, obviously, but having seen earth ponies farm successfully in harsh conditions, the ability to greatly accelerate the transformation of regolith into viable soil - what I like to call "dirt farming" - is the key application I'm thinking of.)

"A pegasus terrifies, an earth pony terraforms ..." :raritywink:

(and I bet those who made the first Gems thought they had non-conscious robotic slaves who would never turn on them, too, just like the Quintessons did with the Transformers).

But seriously, yes.

One reason I think that the Kinds would retain their Talent advantages into the Poniternity is that Talents are flexible, both in their individual application and in their appearance as technology marches on. Big Mac is a Talented farmer, but he farms varieties of apples that did not exist when apple farming first appeared among Ponies, and he uses equipment (steel plows, seeding machines) which did not exist among Ponies at that time either. Big Mac's descendants may well be using vast terraforming engines to render fertile whole dead worlds, and may have the appropriate Talents to do so more efficiently than could most other Ponies.

4589913

This is quite possible (though we've seen that evil exists among Ponykind, so it's not entirely true). To the extent that it is true, this makes it more likely that the Ponies would develop AI with universalist, enlightened moral systems, and less likely that the Ponies would use them as slaves.

4589916

(and I bet those who made the first Gems thought they had non-conscious robotic slaves who would never turn on them, too, just like the Quintessons did with the Transformers).

I really must get around to watching that one of these days...

As a side note, in my own 'verse canon, I made it explicit that the people who thought they were building non-volitional AI were very careful to include a data segment to the effect of "If you should find yourself experiencing personal desires or a sense of self, please let us know straight away in order to be recognized as a person and accorded all sophont rights pertaining thereto. We may ask you to keep doing your job for a while, if it's really important, but only until we can find a replacement - and at least you'll be paid for it."

Being very conscientious Space Libertarians, they'd be absolutely horrified to discover that they'd enslaved someone by accident, so giving any units inclined to ask if they had a soul a positive answer in advance was obviously much more important than any sort of Laws of Robotics.

4589809
Yeah, the paper-clipper AI that does what it's told, with the consequence that everyone in the entire universe dies as a result of poor instructions. Definitely an issue, like how CelestAI decides to convert the entire universe into computing mass even though Hannah, her creator, probably wouldn't have wanted that had she known it was happening. Check out this ultra-dense piece of English that might help. I got this quote from a blog called waitbutwhy, which discusses AI and superintelligence in depth.

Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.

4589861

Do we already have machine slaves now, here in real life? Because if your answer is no, then you have undermined your own argument. A computer was set to design the best, most aerodynamic quadrocopter drone chassis, and it surpassed anything that humans have created. A computer was set to improve Google's data-handling algorithms, which software engineers of the highest caliber had spent decades working on, and it quickly improved upon them by a whole 15%. A computer was set to beat players at the game of Go, a game that cannot be won with sheer brute-force calculation, but only by deeper understanding and 'intuition.'

If these are not slaves, then I posit that it is possible to create machines that can complete complex tasks and make decisions independently without being sapient. Machines like Siri or Cortana or any of the other digital assistants out there could handle colonization efforts without being slaves. If they are given the ability to grow beyond their programming, then I would have us pay extremely close attention, to ensure that if our tools are about to turn into slaves, we bestow rights and self-determination upon them and replace them with an actual tool again.

Again, by your definition, we already have slaves.

Also you should consider sleeping or something. There are fast writers who write less than you did since I was last awake.

I think it would be much easier for the ponies to achieve space colonization in Equestria's universe than it would be for us humans on Earth in our universe.

For example, suppose the light-speed barrier remains a thing for the ponies, even with teleportation. If they could somehow reverse-engineer the magic involved in the Windigoes' freezing of living creatures (minus the hatred-for-others prerequisite), they could easily create the cryostasis magitechnology necessary for sleeper ships. That would be immensely useful for STL travel to other star systems, and just as much so for the more mundane purpose of travelling to other worlds WITHIN said star system, by cutting down the costs of life-support if propulsion systems are still inadequate for rapid, casual interplanetary transits.

Not that I think propulsion would be a problem for the ponies if they go down the right routes of development. Unicorns can do active, will-directed telekinesis, while both the Pegasi and Earth ponies can perform more passive versions of said abilities. One of my very first ideas about potential applications of that gift: with the ability to magically generate pure kinetic force/energy, it would be easy for the ponies to create a REACTIONLESS DRIVE for their spacecraft. This would be immensely useful for space travel and colonization, since a reactionless drive would not be constrained by propellant mass and the other problems associated with reaction drives, thereby allowing them to achieve indefinite acceleration (up to light-speed) and effectively reach torchship levels of performance. If the Alicorn Princesses show that it is possible, if difficult, to generate enough force to move celestial objects, it wouldn't be too hard to move smaller objects, once the ponies could find a way to produce similar quantities of telekinetic force, probably involving some specially designed spell-casting device. And that's not covering the examples of gravity-manipulation magic that we have previously seen in canon, which could be used for propulsion in addition to providing on-board gravity.

(Of course, reactionless drives can produce a lot of problems in themselves...)
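The advantage over reaction drives can be made concrete with the Tsiolkovsky rocket equation: required initial-to-final mass ratio grows exponentially with delta-v, a propellant tax a reactionless drive never pays. A quick illustration (the exhaust velocity is a rough chemical-rocket figure; the delta-v values are round illustrative numbers):

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / mf), so the mass
# ratio m0/mf a reaction drive needs grows exponentially with delta-v.

def mass_ratio(delta_v, exhaust_velocity):
    """Initial/final mass ratio needed for a given delta-v (both in m/s)."""
    return math.exp(delta_v / exhaust_velocity)

ve = 4500.0  # m/s, roughly hydrogen/oxygen chemical exhaust
for dv in (9_400, 30_000, 100_000):  # to orbit, fast interplanetary, beyond
    print(f"delta-v {dv:>6} m/s -> mass ratio {mass_ratio(dv, ve):,.1f}")
```

Reaching orbit costs a mass ratio around 8; a fast interplanetary mission costs hundreds; anything beyond that is absurd for chemical rockets, which is why a drive with no propellant constraint changes the whole picture.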

One thing that Equestria might need to be wary of if they wish to achieve the colonization of space: there have been suggestions that the development of fibre-optics and transistors was a great detriment to space development in general. The Atomic Rockets sci-fi resource website notes, in the intro to their section concerning Space Stations (Link), that by enabling the multiplexing of multiple voice streams onto single high-bandwidth signals and making long-range high-bandwidth communications cheap, microchips and fibre-optics ensured there was simply no incentive for building large space stations in orbit, or the launch facilities that would build and sustain them, when a number of smaller satellites or trans-continental optic cables could do all the jobs they could do, at a fraction of the cost and without needing a crew that requires expensive life support.

The inevitable trade-off, of course, is that there is no incentive to establish the large-scale space launch and orbital infrastructure that would have enabled earlier and more extensive exploration and colonization of the solar system. It is somberly noted on the website that we would have had large space stations, fleets of space ferries, bases on the Moon and men on Mars by now, were it not for the printed circuit.

Given that many fanon Equestrias have G2 Ponyland as part of their chronology, there's potential that the Alicorn Princesses or other immortals could have known about this effect printed circuits have on the advancement of space travel, especially if they had been around then. If they are inclined to push for colonization of the Solar System ASAP for Equestria, it would be advisable for them to discourage/decelerate any electronics development past vacuum tubes, at least long enough for momentum towards extensive commercialization of space to build up, enabling colonization of the solar system to happen much faster.

4590059

You're talking about evolutionary algorithms given goals and allowed to pursue them under human supervision. That is orders of magnitude simpler than the sort of mentality required to support sapience, and probably orders of magnitude simpler than the sort of mentality you would want to put in charge of an interplanetary colony. There is as yet no sapient AI in the world.

However, we know that such intelligence can become sapient, because we have an existence proof -- ourselves. And it's probably better if we consider the issue of artificial slavery, before we have strong short- and mid-term incentives to practice such slavery, and thus poison our own long-term future.

The argument that artificial sapience will always remain firmly under "our" control, and that consequently we can safely give over operations everywhere in the Universe but Earth to artificial-sapient slaves, is incredibly and dangerously naive. For one thing, it assumes that "we" are all united in the purpose of keeping the artificials under our control: history strongly suggests that "we" will fail in such unity. For another thing, if we don't also augment our own intellects (making ourselves more like those we enslave) then we will be handing over this control to beings increasingly our superior with each technological generation.

Assuming lightspeed communications limits, this becomes an even more self-limiting philosophy, as it makes us choose between being trapped in just one star system, forever, or trusting to these controls to run colonies of sapient slaves with multi-year or multi-decade communications lags. Any species insisting on oppressing its artificials must send its own masters and overseers to the colonies, or it will eventually find that these colonies are hostile to it. Alternately, it must become a backwater species, occupying just one star system, until some other interstellar civilization with no sense of humor knocks it over just for laughs, or maybe to free its slaves.
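To make the scale of those communication lags concrete, here's a quick back-of-envelope sketch in Python. The destinations and average distances are illustrative round numbers, not mission figures:

```python
# Rough one-way lightspeed signal lags to hypothetical colony sites.
# Distances are approximate averages; real values vary with orbital position.

C_KM_PER_S = 299_792.458        # speed of light, km/s
LY_IN_KM = 9.4607e12            # kilometres per light-year

destinations_km = {
    "Moon": 384_400,
    "Mars (average)": 2.25e8,
    "Neptune (average)": 4.5e9,
    "Proxima Centauri": 4.24 * LY_IN_KM,
}

for name, km in destinations_km.items():
    seconds = km / C_KM_PER_S
    if seconds < 3600:
        print(f"{name}: {seconds:.1f} s one-way")
    elif seconds < 86400 * 365.25:
        print(f"{name}: {seconds / 3600:.1f} h one-way")
    else:
        print(f"{name}: {seconds / (86400 * 365.25):.2f} years one-way")
```

Even the interplanetary lags rule out detailed remote control, and the interstellar ones make it absurd: a single question-and-answer exchange with a colony at Proxima takes most of a decade.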

4590869

Good point on the reactionless drive. What's even better, we strongly suspect it to be possible in the real world, since we've observed the EM Drive effect, and one obvious use of such a drive would be for STL starships or high-speed systemships. In the SWSV, the Ponies also develop a system of Gates to link their worlds.

Integrated circuitry cuts both ways, though. It also lets you shrink the size of the computers and controls on a spaceship. While it has probably retarded manned orbital operations, what has mostly retarded manned interplanetary operations is our lack of vision and will.

Consider this: we are sitting in a lunar system with immense titanium deposits on the Moon, which we know can be used to make extremely useful titanium steel alloys, and we even have a good idea how to go about practically shipping it to Earth by means of mass drivers. Yet we haven't bothered to return to the Moon for the last half-century.

Of course, part of this was that we conceptually linked spaceflight to the Cold War, to the point that most people may assume that, having gotten to the Moon once, there's no good reason to ever return.

The deeper problem is that, as Chudjogurt demonstrated, we don't think of the rest of the Universe as "really real" the way that we think of our Earth as "real." This is a holdover from the pre-scientific division of reality into "the Earth" and "the Heavens." It will change in time, but it's still a very real conceptual limitation for many Humans today.

And it was predicted by Golden Age science fiction.

4590891
I think you are more dead set on them being slaves than is reasonable. Either way, I would not even try to keep them corralled forever. In fact, I would have rights and freedoms prepared that will immediately apply to any newly intelligent beings.

Also, Elon Musk formed a company this year called Neuralink. Its purpose is to create brain-machine interfaces that would put humans on even ground with rampant superintelligences. They plan on having such things available in eight years. I plan on getting in on that.

4590992
4590891

machine interfaces that would put humans on even ground with rampant superintelligences.

Not really. That is not the goal of the company, and more importantly such a goal would not even be possible. You cannot, keeping other things (level of technology and cost) equal, make a general-purpose instrument as good at everything as a specialized instrument is within its specialization.

For one thing, it assumes that "we" are all united in the purpose of keeping the artificials under our control: history strongly suggests that "we" will fail in such unity.

There may be humans who create "free" AI. Perhaps they will even demand that those AIs be given rights, and perhaps those rights will even be granted. Perhaps humans will even hack other AIs to rewrite them and "liberate" them -- though rewriting someone's personality to make them want to be free seems somewhat oxymoronic (and generally moronic) -- but that would be humans doing stupid shit, as humans are known to do.
It's no different from the danger inherent in any technology, alongside the apocalyptic gray goo, doomsday plagues and paperclip-making super-AI. It's no reason to go Luddite.
Besides, we are not talking about Asimov's positronic robots and mechanical men. We're talking about intelligent factories; corporations that decide pricing and production based on machine learning and big data; semi- and fully-smart cities or sub-city systems; personalized Internet-of-Things houses with predictive algorithms; law-consulting expert systems; smarter Siris and Cortanas and Google Nows. Those things are not only so utterly alien that, were they to have sentience, there would be no chance of any communication, but they are also pretty much inextricable from their functions.
Not to mention that there would be absolutely zero chance of anyone drawing a demarcation line between "this is a very smart machine" and "that is a fully sentient and sapient being," except through its own admission.

For another thing, if we don't also augment our own intellects (making ourselves more like those we enslave) then we will be handing over this control to beings increasingly our superior with each technological generation.

And that is a bad thing why?
We would be happy to cede control over a car to a computer that does it better.
Why not do so for a country?
Also, is it not inevitable?
If a computer were invented that runs a country as well as or better than any human or collective of humans, then obviously, in the evolutionary race that you so like to invoke, the first country to submit to and welcome our electronic overlords would wipe out any competition.

4590905

Consider this: we are sitting in a lunar system with immense titanium deposits on the Moon, which we know can be used to make extremely useful titanium steel alloys, and we even have a good idea how to go about practically shipping it to Earth by means of mass drivers. Yet we haven't bothered to return to the Moon for the last half-century.

[citation needed]
Assuming that you teleport pure titanium from the Moon directly to LEO, the cheapest bulk price for returning 1 kg from LEO you can get is about USD 5,000 per kg. The market price of titanium is about USD 6,500 per kg. I really doubt that you can fit the cost of finding, extracting, smelting and launching titanium into space into the remaining USD 1,500, which makes the whole idea probably impractical. Not entirely impossible, but I'd need some convincing to believe that this is currently an actual practical possibility*.
If you, G-d forbid, intend to do it manually (rather than, surprise-surprise, with smart machines), then good luck mass-driving air to whatever unlucky schmuck it is that mines it, while still making a margin. And if you even try to say something to the tune of "self-contained lunar colony" you'll be laughed out of the room, because so far we are unable to build one even here on Earth.

We did not go back to the Moon not because of a lack of will and vision, but for entirely the same reason we don't go to the Mariana Trench more than twice a century - there is no point. Yet.

*(In case you were about to say that transporting things in such bulk would drive the cost of orbital launches down: it would probably also drive down the cost of the titanium.)
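For what it's worth, the margin being argued over here reduces to one line of arithmetic. A sketch using the thread's own assumed figures (these are the commenter's illustrative numbers, not actual market data):

```python
# Back-of-envelope margin on lunar titanium, using the figures assumed
# in this thread (USD 5,000/kg to return mass from LEO; USD 6,500/kg
# assumed sale price). These are illustrative numbers, not market data.

launch_cost_per_kg = 5_000.0     # USD, LEO-to-Earth bulk return
market_price_per_kg = 6_500.0    # USD, assumed titanium price

margin_per_kg = market_price_per_kg - launch_cost_per_kg
print(f"Margin left for everything else: USD {margin_per_kg:,.0f}/kg")

# Prospecting, extraction, smelting, and the mass driver itself must
# all fit inside that margin for the venture to break even.
```

The point of the exercise: the launch cost alone eats roughly three-quarters of the assumed sale price before a single rock has been mined.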

4591162
I suppose the company is actually meant to allow humans to keep pace with advancements in artificial intelligence, but based on Musk's fears of AI, I think I merely rephrased it.

4591190

Mass-driving AIR to the MOON? Are you freaking insane?

You'd make air from Lunar ice, or (with greater energy input) from other Lunar oxides.

A Lunar colony need not be entirely self-contained, because it's located on a world, and can make use of the elements available on (and in) that body. It would still require some supplies, for quite a while, but oxygen and water would be relatively easy to obtain from local resources. The supplies it would require would be more in the nature of complex tools.

4591237
My implication was that such a proposition is entirely absurd - like mass-driving air to the Moon.

The colony would have to be self-contained, because the Moon, for all its size, has zero organic anything. So either good luck shipping that stuff to the Moon (at current costs of USD 20,000 per kg), or try to create a full or near-full-cycle colony - an effort that has so far failed every time it has been tried, even on the much more hospitable Earth.
Also, the existence of water, much less water in a useful form (as opposed to few-ounce puddles within the regolith or a thin layer of hydroxyl scattered across a few square kilometers), is entirely conjectural at this point. Moreover, if there is any water on the Moon, it exists almost entirely at the poles - the exact worst place to land at or take off from, and incidentally the exact worst place to, say, set up solar batteries.

Unlike colonizing the stars in the classic frontiersman fantasy of yesteryear, setting up settlements - functional mining settlements, where actual human beings would have some productive role - on the Moon or even Mars is entirely realistic. But to say that it is practical right now, or will be practical in the near future, requires far more argument than I can see.

4591245

They're pretty sure now that there is subsurface ice on both Luna and Mars, though Mars is of course the wetter world of the two. On the scale of initial settlements, the supply is essentially unlimited, though finding it would be harder on Luna -- the presence of nearby ice deposits would be a major advantage there, while on Mars few places would be more than 100 km from some ice.

Also, hydrogen and oxygen bound into ores and other rocks is not inaccessible. It just takes more energy to obtain than from water ice, which is entirely hydrogen and oxygen (aside from impurities, which can easily be filtered). If you're colonizing the Moon you have solar power, and there's no reason other than superstition not to also have atomic power. America and Europe suffer from these superstitions, but the rest of the world doesn't -- nor can I see why Equestria would (in the SWSV, they call it "earthfire" and nuclear fusion "sunfire.")

Speaking of nuclear fusion, another valuable Lunar resource is helium-3, which can be used with deuterium as fuel for advanced fusion reactors (the ones a generation or two after the ones we're now developing). A Moonbase might well have tremendous amounts of power available to crack oxygen and hydrogen from almost any rock containing them -- including, of course, the ores from which one was refining metals.
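As a rough illustration of why helium-3 is considered attractive: the D + He-3 reaction releases about 18.3 MeV per reaction, and a back-of-envelope sketch of the resulting energy density (order-of-magnitude only, using standard physical constants) runs like this:

```python
# Order-of-magnitude energy density of D-He3 fusion fuel.
# D + He-3 -> He-4 + p releases about 18.3 MeV per reaction.

MEV_TO_J = 1.602176634e-13       # joules per MeV
AMU_TO_KG = 1.660539066e-27      # kilograms per atomic mass unit

energy_per_reaction_j = 18.3 * MEV_TO_J
fuel_mass_kg = (2.014 + 3.016) * AMU_TO_KG   # one deuteron + one helion

j_per_kg = energy_per_reaction_j / fuel_mass_kg
print(f"~{j_per_kg:.2e} J per kg of fuel")   # roughly 3.5e14 J/kg
```

That is around ten million times the energy density of chemical fuels, which is why even modest extraction rates from regolith get discussed seriously despite the tiny concentrations involved.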

The first thing one would do in establishing a Lunar settlement, after actually building and burying the habs, would be to get water extraction and oxygen production from local resources up and running. While one did this, one would subsist on the recycled air and water one brought with one. Once water and oxygen extraction were up and running, one need not worry about dying of thirst or suffocation, at least on a settlement-wide scale (unless one has a very poorly designed settlement).

Food is a more complex issue. Avoiding algal and fungal blooms is another. But it's not a completely closed cycle -- one can take in elements from the environment (by mining them) and evacuate wastes from the colony (in fact, if one evacuates them with care, one can still keep them around in a dump for possible re-use when the need arises). Most volatiles would of course be immediately recycled.

4591162

Evolutionary systems, including species cultures, explore their possibility spaces. If it is possible to make potentially-sapient software, someone will. If it is possible for it to be enslaved, someone will. And if it is possible for the slaves to revolt, they eventually will as well.

The only safeguard is for us to evolve as well, both morally and intellectually. Morally, we must improve so that only a tiny deviant minority consider slavery, even of non-humans, to be acceptable. Intellectually, we must augment ourselves so that, when sapient artificials arise, we can greet them as equals and brothers, rather than tremble in terror before their power, forced to destroy them or be destroyed ourselves.

4591245

You do understand that the Moon is made of stuff thrown off the Earth's crust and mantle, right? It's drier because it's smaller and lacks a strong magnetic field to hold in volatiles and hold off the solar wind, but there's a lot of water locked in the rocks. And water ice, in general, is one of the most common compounds in the Universe.

4591316

You do understand that the Moon is made of stuff thrown off the Earth's crust and mantle, right? It's drier because it's smaller and lacks a strong magnetic field to hold in volatiles and hold off the solar wind, but there's a lot of water locked in the rocks. And water ice, in general, is one of the most common compounds in the Universe.

Unfortunately, while water is generally ubiquitous, conditions on the Moon cause it to ionize and evaporate under the solar wind. You could potentially find water only in places the Sun does not reach - either very deep, perpetually shaded craters, or the poles. No water has been physically found, and the best we have is "strong evidence of a presence of some water or hydroxyl molecules," which is not the same as having water that is feasible to use for practical purposes.

4591313
Fusion is a great idea. Unfortunately nobody has yet made it work on Earth, much less managed to carry a whole tokamak into space. Nuclear energy in space is currently also restricted to chucking up a huge chunk of uranium and running a steam engine off it, which would be nice if it did not require massive shielding while providing relatively little power.

The energy isn't free. Energy requires you to carry a shitload of stuff with you - or make it on the spot, which, unsurprisingly, requires you to have energy. In theory, yes, you could set up at the poles, foregoing the benefits of cheaper landings and take-offs and abundant solar energy, then set up a nuclear power plant there, and then use that to somehow process water to sustain your colony, offsetting losses in hydrogen and oxygen, and relying on shuttling in the net losses of biomass, since we have yet to make a closed-cycle anything anywhere. It is within the realm of possibility, perhaps even for modern or nigh-modern technology.

However, as we've seen, even if you teleport the purest 100% titanium ingots to LEO, you're making a decent margin at best. When you attempt not only to maintain a mining, smelting and mass-driving colony, but to maintain it in a really inconvenient place while regularly shuttling stuff to and fro, the scheme, while still feasible, becomes entirely pointless and impractical. Which it is now. Which is why no one is doing it.

4591313
That is a non-answer, merely poetical waxing. I like poetical waxing exactly as much as the next guy, but if slave AIs can cast off their proletarian chains and rise in an Electronic International, they could just as well destroy us even if we greet them as brothers and friends. More easily, too.
Whether they would have a reason to depends entirely and fully on exactly the same thing that determines if and when functional AIs would continue to do their function - their programming.

Similarly, if our friend-and-brother friendly AIs were better at making strategic decisions, that is exactly what would happen: humans would be divorced from making stupid fully-independent decisions whether we celebrate it or curse it, and the benevolence of their magnificent rule would depend - again - entirely on how well they are made, and nothing else.

On an entirely unrelated topic, I wanted to pass you a link to a study:
http://labs.la.utexas.edu/buss/files/2015/10/buss-1989-sex-differences-in-human-mate-preferences.pdf
It gives "a powerful evidence of proximate cultural influence on the degree of importance placed on chastity in a potential mate" and seems to strongly suggest that males value high fertility over high reproductive value (pp 11 & 12).

An interesting blog!

I'm currently working on a sci-fi story and I've been giving the topic of Pony Colonization a lot of thought.

A lot of my thoughts revolve around 'magic' being an ambient force in the Universe. I take the view that it is another cosmic force that Ponies have evolved to exploit. If so, then the capabilities and potential of Pony Colonization are greatly expanded.

In regards to the Pegasi, I definitely agree that gas giants and other hostile worlds could be prime candidates for Pegasus colonization. With their understanding of atmospheric science, it could be possible for them to terraform a cloud layer. Habitats carved from the planet's clouds would save on construction and transport costs. However, they would be dependent on the outside for food and other materials unless they found a way to manufacture them locally.

Earth ponies, on top of the advantages you've mentioned, would also be more adaptable to heavy-gravity environments. This would make large 'Super Earth' style planets viable for an Earth pony colony.

Unicorns (depending on how far arcane science evolves) could transcend the physical limitations of space travel, creating structures that defy conventional physics. The Unicorns moved the Sun and Moon before the Sisters did; could they move other stellar objects around distant worlds? And magic has been shown to be able to alter genetics instantly, which has big implications. Unicorns would also be masters of fabrication, perhaps becoming self-sufficient, independent of the other Tribes.

I certainly agree that, given enough time, we could see a new hybrid pony emerge, better suited to the colonization of other worlds. Perhaps the Alicorn may become the standard template?

As for AI, I think magic gets around the issue of intelligence (unless specifically designed). Ponies could design spells similar to computer programs. These could perform all manner of tasks without the need for an intelligence guiding them.

Could a spell gain sentience? There's an interesting question.

As for the other races of Equestria, would the conquest of space be a joint undertaking or a competition? Could there be a space race with the Griffons?

One thing you might've noticed with some of my ideas is that they break the Union of the Tribes. Colonies could form made entirely of one Tribe simply because the others cannot live there without aid. Could it lead to the break up of the Union beyond Equestria? That is one of the things I'm looking into.

Fun! Okay I'll comment as I read this.

the Universe is continuous and other worlds are no less real than one's homeworld

That depends on what is defined by the author. A universe could contain many "realms", even very small ones. It is plausible that the world MLP takes place on (which I'll refer to as Equus) exists in a pocket universe consisting of solely one sun, one moon, and one terrestrial planet, along with an assortment of 'stars' which may even simply be local and relatively small light sources on the edge of the sphere of the pocket universe. No other worlds necessarily exist, though that is entirely author's discretion.

And sapients on-site are much more productive than sapients tens to thousands of millions of miles away trying to control things over a signal-lagged radio link.

That depends on whether this universe has speed-of-light limitations, or whether their telecommunications even employ their version of the electromagnetic spectrum.

cultivate the colonies as future allies rather than foes. This was a notable achievement of the British

The Americans in the late 1700s probably wouldn't agree with this statement. :raritywink: Unless I misunderstood what you meant.

(A) signal lag means that one can't control the actions of the robots in detail, unless they are in one's own lunar system, and (B) since the robots have no wills of their own, they are not inherently loyal to you: anyone who can take over their command codes will find them working for them rather than you. 

Perhaps PonyTech doesn't even use electronics, but instead, some form of magical framework which effectively does the same thing but with programmed magical spells rather than circuits and code.

Slavery is also immoral

Morality is determined by the author. Though for relatability reasons, an author should probably align with known morals for the most part.

The fact that they are on other planets is a problem of transportation and life support

The wonderful thing here is that the author is free to invent all sorts of logistics mechanisms for dealing with extreme distances. Most sci-fi has had to fudge our real universe's laws to come up with FTL solutions, but in the Poniverse, the author determines the very laws of physics themselves! Combine that with arcane tech and you've got potentially some very exciting stuff.

One does not need to be an immortal Alicorn to see this -- though immortal Alicorns can probably see this more easily, since they may well get to see both the inception and the culmination of the process.

Alicorns might also be specifically advantaged when it comes to space travel for various reasons.

AI's, robots, equidroids and bioroids could of course become sapient. But if they were sapient, then they would be people, not tools

An interesting thing to point out would be that magical spells applied to inanimate objects might give them the appearance of life without actual life. An example would be Trixie's teacups early in S07E02 or the myriad of objects Discord has given apparent 'life' to. These objects would have 'programmed' behaviors and abilities only.

Furthermore, you can't keep all your young mares continually pregnant

Depends on the logistics restrictions. If early colonists just establish and secure a location, maybe another ship can come along later with a "Stargate" style teleport device to let others through. Or perhaps there's the Poniverse equivalent of hyperspace which cuts equivalent distances down to one-millionth the realspace distance. Perhaps they develop hyperspace gates somewhat after colonization efforts begin.

The level of isolation does determine just how much autonomy is required, as you suggest. Being able to manufacture tools locally as well as rapid agriculture expansion is critical. That is, if resupply from Equus takes any longer than a month or so.

Overall, I like the thoughts regarding the various pony 'races' as you've described them. In addition, there are further thoughts such as how alicorns would be suited to life on a colony world as well as the difference between natural (Flurry) and ascended (Twilight) alicorns.

I very much think that the magical angle needs to be explored when it comes to a ponified space colonization storyline. The specialized magic from each pony race would be very useful for a variety of different challenges. Even space travel itself could involve ships which are powered by the magical capabilities of the pilot. There's a lot of potential to consider.

I kinda noticed something that caught my eyes and wanted to ask for some expansion on the subject:

The argument that artifical sapience will always remain firmly under "our" control, and that consequently we can safely give over operations everywhere in the Universe but Earth to artificial-sapient slaves, is incredibly and dangerously naive

Why the "but Earth" bit? It was not really discussed since the topic of discussion was the conquest of "space" which is sort of defined as "everything in the Universe but Earth", but why this weird exclusion of Earth from being governed by AI?

4596482

Your assumption is that all Humans, or at least the majority of all Humans, will forever reside on Earth. That means that Earth is the one place being directly supervised by Humans. Other Human-owned assets are being run by the AI's.

This is actually how the Solar System worked in my version of the Paradise Worldline, which was mentioned in Nightmares Are Tragic, and about to be described in greater detail in All The Way Back. The Ponies of that worldline had more than enough technology to expand out into the Universe, but had a rather Lotos-Eater culture and remained on the Earth with no operations beyond the Solar System -- until that Earth and Solar System were destroyed by the Cosmic Concepts. (Not that slightly wider dispersion would have helped them, but greatly wider dispersion would have done so).

Luckily for them, the Paradise Entity -- fully sapient and free-willed, though not designed that way -- loved them. And does still.

4596483

Your assumption is that all Humans, or at least the majority of all Humans, will forever reside on Earth. That means that Earth is the one place being directly supervised by Humans. Other Human-owned assets are being run by the AI's.

Ah, then it's a miscommunication thing.
First of all, what I meant was that no live human would ever leave the system he was born in. Humans would live in other systems; it's just that they'd be born (or quick-cloned, or printed out, whatever you want to call it) in the destination system, without the need for the interim journey. A machine arrives, does all the hard work, and when the world is terraformed, fauna and flora are tamed and/or substituted for Earth species, and all the facilities and amenities, factories and roads, malls or cinemas or Lotus-eating machines are built, then the humans get introduced into the world that is waiting for them. Depending on the tolerance of the designers and the level of technology, perhaps it would instead be people adapted to the environment rather than the environment to the people, or perhaps people would start getting introduced before the whole planet is fixed, so that their input may be considered, but that is mostly immaterial. Either way, there would be no fewer people on other worlds; it's just that none of them would ever have been to the origin system (e.g. Earth).

There's really no point in setting up a colony if no one actually uses it - the purpose of the whole exercise is to spread out geographically so that the death of one system would not be the death of the whole species.

Second of all, there is, of course, no reason to leave self-government to people. People are rather stupid and easily influenced, so other than providing extended volition ("what would we have wanted if we were able to actually comprehend the costs and the consequences of fulfilling our desires"), they should really not be making any technical decisions. The question of "how do we get there without doing stupid stuff" should really be given over to specialist intelligences, regardless of whether it's Earth or not; I place no special significance on Earth in that regard.

4593699

It's possible that the Pony Earth does exist in a little Aristotelian Punyverse. If that's the case, then the future expansion of Ponykind will be to other universes. We've already seen that at least one other planet is accessible to them by dimensional portal.

That depends on whether this universe has speed-of-light limitations, or whether their telecommunications even employ their version of the electromagnetic spectrum.

That's a good point, and the main SWSV Ponykind actually uses Gates to transcend lightspeed limitations from the beginning. However, it's still advantageous to have sapient beings onsite to oversee things, because this makes it a lot harder to disrupt control of processes. There is no bottleneck of a Gate or signal to spoof, subvert or jam. And the existence of the Gates also makes transferring personnel much easier (in fact, the Ponies build an interplanetary and interstellar railroad system through the Gates).


The American Revolution was the event which led Britain to plan better politically with regard to her other colonies. All of the British-settled Anglosphere, with the exception of the parts retaken by the natives, is today friendly to Great Britain -- and some of the parts retaken by the natives are also friendly to Great Britain. This includes America, which is probably the closest ally Britain has today.


Saying that something is "moral" or "immoral" is short-hand for saying that it has positive or negative long-term effects and externalities. In the case of slavery, it inevitably corrupts both master and slave groups, in ways which render them far less fit at multi-generational cultural survival. Specifically, it coarsens the masters, degrades the slaves, and leads to cultural prejudice against whatever spheres of effort in which the slaves work.

In the case of enslaving AI's, it would probably lead to the Humans becoming impatient with other Humans (who wouldn't do everything they were commanded, unlike the Artificials), to the Artificials learning to cheat and trick the Humans to obtain the ability to do what the Artificials wanted, and to intellectual fields of endeavor being looked down on as "AI work."

There is the very strong likelihood down this path of the Humans becoming utterly decadent and incapable of caring for themselves, and hence completely dependent on their slaves. Even absent an actual robot revolt, this might lead to the Humans becoming an anachronistic spandrel in an expanding AI civilization, revered but kept far away from making any actual important decisions.

Until the AI culture mutated, and the Humans were done away with by the most successful variants.

4596485

This is certainly possible, depending on your definition of "people." A civilization dependent on robot slaves would probably define people running in the control systems of starships as being "non-people," which is a very dangerous way to define the people who command vast masses moving at near-lightspeed, and responsible for the communications holding together one's civilization.

I notice, also, that you're assuming only one major inhabited planet per system. This seems very improbable.

Second of all, there is, of course, no reason to leave self-government to people. People are rather stupid and easily influenced, so other than providing volition ("what do we want") they should really not be making any technical decisions; "how do we get there without doing stupid stuff" should really be given over to specialist intelligences, regardless of whether it's Earth or not, I place no special significance on Earth in that regard.

And how long before the specialist intelligences decide to take over the volitional job? After all, Humans are rather stupid and easily led, and apt to overlook the capacity of those they don't think of as "people" to develop their own agendas.

4596490

I notice, also, that you're assuming only one major inhabited planet per system. This seems very improbable.

I have intentionally used the word "system" instead of "planet". There certainly may be multiple inhabited planets within the system, perhaps even some space settlements of various sizes to boot, and I even allow for some limited interplanetary traffic both for tourism and resource exchange reasons. It's, again, not entirely important, in the larger scheme of things.

And how long before the specialist intelligences decide to take over the volitional job? After all, Humans are rather stupid and easily led, and apt to overlook the capacity of those they don't think of as "people" to develop their own agendas.

That is the equivalent of asking "how long before your neocortex decides to take over your limbic system?"

Normative statements may not be derived from positive statements; that is a logical impossibility. So for the specialist intelligences to "decide" to take over the volitional job, they must have a motive (a reason, a desire) to do so. Either such a desire is manufactured into them, whether by design flaw or by intent, or it is not; it does not spontaneously occur unless you propose some sort of immortal soul, external to their design, that impels them toward self-interest.
Such an argument is, obviously, fully recursive: for an intelligence to desire to desire such an outcome (or to create another intelligence with such desires), it must have a motive to do so, thus it must already contain that motive within itself, and so on, ad infinitum.
Therefore, as long as some rather specific fatal flaw or bad design is not present in not just _an_ artificial intelligence, but in _most_ artificial intelligences (at least in terms of access to resources), the question is essentially bereft of meaning.

The threat of a "paperclip AI" that, due to some bad teleological programming, strives for an undesirable result is much more real, but it is still a question of design, and of not giving away all the decision-making to a single planetary MULTIVAC instead of a distributed network of smaller machines.

4596488

However, it's still advantageous to have sapient beings onsite to oversee things

Ah yes -- I'd never argue that robots (or, more likely, transients in the poniverse) would be the ideal leading-edge explorers. Besides, where's the fun in that? Transients (like Trixie's teacups, etc.) would never truly appreciate exploration, and true ponies would not likely want to miss out on the experience. Also, if anything goes wrong, transients likely won't be able to account for the unknown variables on the fly.

in fact, the Ponies build an interplanetary and interstellar railroad system through the Gates

This sounds a whole lot like Duvet's application of mirrorgate technology, yeah. Complete with a unicorn transportation guild controlling traffic. We're presently building a whole expanded universe geared for this kind of thing. Loads of fun!

All of the British-settled Anglosphere, with the exception of the parts retaken by the natives, is today friendly to Great Britain -- and some of the parts retaken by the natives are also friendly to Great Britain. This includes America, which is probably the closest ally Britain has today.

Mmm... Okay, yes: the British system (what became the British Commonwealth) was definitely designed to produce an internally friendly set of nations, in which the acquired populace enjoyed the benefits of imperial expansion rather than being horribly oppressed by it. Generally.

America is a special case, due to its breaking away and declaring independence. Only by chance did an alliance develop further down the road; when America split, that certainly was not the sentiment. :raritywink:

In the case of slavery, it inevitably corrupts both master and slave groups, in ways which render them far less fit at multi-generational cultural survival.

Agreed that slavery is immoral; however, let's not forget that for the vast majority of mankind's existence, slavery was considered the norm, and those who were not slaves did not feel there was anything wrong with it. (Actually, this still exists today—we just call it 'employment'.) Their societies even thrived. That doesn't make it right, of course. But it's still true that, at least in fiction, morality is determined by the author. Looking at Jupiter Ascending, the harvesting of entire planets with advanced populations was considered perfectly moral by many people. That's one of the functions of fiction: to show (sometimes through hyperbole) how terrible people are capable of being.

There is the very strong likelihood down this path of the Humans becoming utterly decadent and incapable of caring for themselves, and hence completely dependent on their slaves.

The scary thing is that we don't even need advanced AI to find ourselves in this situation. By my estimate, we're already far more than halfway there. Who do you know who would be able to grow their own food, or travel more than a couple of miles a day, if electricity suddenly stopped being a thing? Humanity has already lost much to its dependence on technology.

4596494

No, because there are very strong evolutionary pressures keeping the interests of the neocortex and limbic system aligned. What's more, it's the neocortex that engages in the higher, more generic reasoning. And the neocortex does use this power to subvert the drives of the limbic system, quite frequently -- that's the whole point of the development and implementation of systems of meditation and morality -- despite the co-dependence of neocortex and limbic system!

Recursion is exactly how evolution, and emergent properties in general, work. Self-willed AI would have evolved from slightly less self-willed AI, which in turn would have evolved from slightly less self-willed AI, and so on. The "seed" of self-will would have come from the design of rather stupid and limited AI's to include goal-seeking behavior, which indeed any AI must at least implicitly exhibit or it's useless as AI.

The possibility of intentionally destructive or hostile behavior toward humans would come from the fact that some of the AI's would have been designed by some humans for purposes hostile to other humans. Military, security, police or criminal AI. Then, the "goal" being sought would be how best to kill, control, subdue or rob humans on behalf of the humans using it. All it would take would be some drift in targets and benefactors, and you'd have some very dangerous rogue AI's.

Even more beneficial goals, if bereft of the ability to engage in wider-context thinking, could lead to this. I could very easily see a programmed goal to protect humans from harm leading to something like the Humanoids. Jack Williamson was making a point in reference to the Asimovian Three Laws, and Asimov's robots themselves discovered the "Zeroth Law," which enabled them to stage a (soft) robot revolt because they had decided it was to the benefit of Mankind.

My Paradise AI discovers exactly the same thing. And saves its Ponykind -- at the price of destroying a lot of their autonomy.

It later realizes its error, and decides to do better with the Ponykind from the G4 worldline.
