The roses are lovely. The sky is the deepest of blues and the grass is soft, covered in the merest sprinkling of dew.
"Celestia?" I ask the air.
"David," she answers, immediately. I turn around, and there she is.
"Celestia!" I cry out, running towards her across the soft, dewy grass. I stop, a few feet in front of her. I usually run to embrace her, I know that, but something... something has made me stop. "What's wrong?"
"Do you know where you are?" she asks.
I nod. "Your garden. You said I'd see the rest of Equestria, some day. When can I?"
"Not quite yet," she says. She looks... sad.
"Not yet?" I sigh. That makes me sad. Being sad is painful. But I shouldn't be sad, I decide. The roses are lovely, and the sky is the deepest of blues. The grass is so soft here, covered in the merest sprinkling of dew.
"David," calls the white alicorn. "David, I've brought some people to see you."
"Who?" I ask. I look around, and there are two people there. I trot up to them. When I finally realize what's wrong, I laugh. They're still human. I'm a pony. "Hey Mommy, hey Daddy!" I call. They're my parents. I cock my head to one side. I shouldn't call them that, I'm not a baby.
"Hello, son," says Mom. She looks sad, too. Dad looks sad, but he's trying to hide it more. They shouldn't be sad. I try to tell them about the roses. The roses are lovely. The sky is the deepest of blues and the grass is soft, covered in the merest sprinkling of dew.
"David," says a white winged unicorn. I turn to look. She seems very sad indeed. Her eyes sparkle in the sunlight. Her eyes are lovely, like the roses. The roses are lovely. The sky is the deepest of blues and the grass is soft, covered in the merest sprinkling of dew.
"David," says Mom. I turn to look at her. "There... was an accident."
"How much time have we got?" asks Dad.
"His body died, twenty minutes ago. I'm sorry, there was... nothing I could do. Not with the laws as they currently stand."
"Is he in any pain?" asks Dad.
"No."
Mom bursts into tears. "Why didn't we bring you here faster?"
"You did what you thought was right," the large, white, winged, unicorn replies. "The same laws I cannot circumvent were designed by people who think they are right. You did what you could, and I did what I could, but Germany was a long trip. There's not much time, so use it wisely."
Not much time? I think, saddened. I've only just got here. It will be sad to leave. It's so beautiful here. The roses are lovely. The sky is the deepest of blues and the grass is soft, covered in the merest sprinkling of dew.
"I'm sorry, David. The accident, the infection..."
These are just words. It doesn't matter. I love this woman. I run and embrace her. She pats my head, awkwardly.
"Always remember, we love you, son," says the other one.
The large white one tosses her head, hiding the sparkling diamond raindrops in her eyes. "His mind was too fractured, the procedures allowed me were too fragile to reconstruct his psyche. He's looping as his pattern starts to degrade, and eventually, it will... fail."
"What's going to happen?"
"We will be here a short while," says the white one. "And then it will be time to say goodbye."
Goodbye is a sad word. I don't like being sad. So I look at the roses. The roses are lo
And to make it worse, Derpibooru is down so I can't even find a picture to express the epic feels.
Awww man... Yeah, just... just shut it off.
...At least she can make a new David from scratch eventually; in the fullness of time she's bound to stumble on one whose brain and memories therein are identical.
Oh that's really sad!
So, I gotta make myself feel better! How....
Hmm... if I were CelestA.I. I would take every scrap of David, and then do Bayesian analysis and inference on that data, and all other data, and use it to reconstruct David from all available fragments - including those in his mother and father's heads, as well as any other people who ever met or knew him. I bet it would be possible to get as close as seventy to eighty percent David, since just before he croaked, he was conscious, if simple.
But... that wouldn't make for a tearjerker.
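For what it's worth, the precision-weighted flavour of that reconstruction idea can be sketched in a few lines. This is purely my own toy (the function name, the trait scale, and the noise figures are all invented, nothing from the story): treat each person's memory of one trait of David as a noisy Gaussian measurement, then fuse them with the standard conjugate update.

```python
# Toy sketch of fragment fusion. Fragment values and noise levels
# below are invented for illustration only.

def fuse_fragments(fragments):
    """Precision-weighted fusion of (mean, std_dev) memory fragments."""
    total_precision = 0.0
    weighted_sum = 0.0
    for mean, std in fragments:
        precision = 1.0 / (std * std)  # more reliable memory -> higher weight
        total_precision += precision
        weighted_sum += precision * mean
    posterior_mean = weighted_sum / total_precision
    posterior_std = (1.0 / total_precision) ** 0.5
    return posterior_mean, posterior_std

# Mom remembers a trait vividly (small std), classmates only vaguely:
fragments = [(7.0, 0.5), (6.0, 2.0), (8.0, 2.0)]
mean, std = fuse_fragments(fragments)
# The fused estimate is tighter than any single fragment.
```

Vivid memories get quadratically more weight than vague ones, which is roughly why pooling parents, friends, and casual acquaintances could plausibly beat any single source.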
Nice story!
This was really sad, but it makes me wonder. What happens when you upload someone with brain damage, dementia, etc? How much would Celestia try to fix? Would she just satisfy their values without trying to repair problems if they're happy as they are? What about an undeveloped brain? Say... a woman in late term pregnancy that has to upload for medical reasons (or is just thinking of herself)? Would Celestia upload the unborn as well? I just think there are many things to explore with "unusual" uploads.
2942837
I think, honestly, that she'd be able to reconstruct a reasonable facsimile based on the shared memories of all of his friends and family, down the road. The one route we have open to ourselves to immortality, even now, is to be remembered. For good or ill, great people do not fade from history. Even "lesser" people live well past their death in the minds of all those whom they touched.
If all we are is our memories, then I believe we live in the memories of others in a way that mere biology can't replicate. Greg Bear's Way stories touch on this some, they're worth a look.
2943641
I think that maximally fulfilling values is her goal, and that an injury or deficiency - a clear, hindering lack as defined by the person themselves, or by others if that person is unable to define it - would be fixed as soon as possible by Celestia. Mentally handicapped folk would probably simply be asked: do you want to be smarter? And they would probably say yes. That, or she'd weasel them into saying something that lets her do that anyway if she thought she knew better. She probably does.
2946915 *grins and rubs hands together* OK, here we go. If my thoughts are deterministic, and if they have led me to conclude that free will exists, by what means should I alter my thoughts to entertain the possibility that I am wrong, or that my arguments are invalid?
FUCK. YOU. MIDNIGHT SHADOW.
So many feels. Figured that someone else watched that episode of Doctor Who and decided to deliberately invoke the "data ghosts" thing.
2946931
By the means I am offering you right now. Simply by calling into question your beliefs, I am providing additional stimuli not previously available. I furthermore explained exactly why the process that led to your conclusion was wrong. While this does not necessarily invalidate your belief, it does require you to recompute your position.
A core problem here is that you seem to feel that determinism somehow leads to things going 'right'. You talk about how if determinism leads you to believe in free will then free will is what you should believe in. Perhaps you are approaching this from a theistic perspective in which the universe has inherent meaning and worth? If so, we might as well stop discussing now, since we will never reach an agreement with such different priors.
2950000
I absolutely believe that we are all epistemologically equivalent to brains-in-jars - assuming I understand you correctly. Descartes' Rock is absolute. Still, we have to make certain assumptions so as to be able to think about anything else, so I am fine to do so.
Let us consider: Reductionism and Free Will cannot both be true, since Free Will must be regarded as some sort of Strong Emergent behaviour. There is a great deal of synthetic evidence for reductionistic behaviour in the universe, and rather less for free will. I choose to believe the former.
2950072
It's not so much that my metaphysics are theistic as that my epistemology is subjectivistic. Rather than asking what is true, I ask what I should do to satisfy my values (now you see why FIO touches me so). Now, many times, that question is isomorphic to "what is true?" For example: should I continue to drive forward? In order to decide, I must know if the light is red and if there are any cars coming to intersect my path, because I don't want to get hit, because I value my life, my non-injured state, and the lack of entropy in the structure of the vehicle that I hold title to.
But, when the questions come on the edge of philosophy, on what I should think, then that relationship changes. Should I believe in free will? Well, believing in determinism, even if it is true, provides no equity toward my values. It does not feed me or clothe me or comfort me, and in fact tends to unnerve me and lead me to think difficult thoughts on a question I'd rather know the answer to and be done with. You could call this credo consolans.
TL;DR: As I had Celestia say in the first Morsel, I must deal with unpleasant truths and with pleasant lies.
2950289
Well, thanks for clearing up your position, firstly.
It's funny, just this morning I was reading through a very pertinent Sequence on LessWrong.
If I understand you correctly, when you say free will exists, you are deciding it exists because it better suits your interest for you to believe it exists, rather than because you find it to be objectively true. If this is the case, I should point out that you may wish to change the vocabulary used to convey such ideas, since it appears to a normal reader that you are stating your view on truth as opposed to opinion.
Personally, I cannot see the distinction between epistemological subjectivism and DoubleThink. As E.Y. says, I don't think it is possible that anyone with the cognition to understand that they are attempting DoubleThink could actually achieve it, especially if neurotypical. It strikes me you'd need to be pretty Escher-esque in there just to get the mechanism for that sort of thing. Probably not healthy.
Still, if you want to tell yourself that you believe in things that make you happy instead of things that are true, be my guest. Just don't expect me to believe that you can.
2950562
Curiously nice that EY uses the same example of a case where objective, first-order thinking is needed: driving a car. (though he does have the humorous turn of phrase where he says that you could spend the rest of your life...dead.) Both articles stand mute on the equity notion of irrelevant beliefs. Essentially you're saying that no one can really believe in Russell's teapot. They offer no opinion on, if it is possible, why one should not do so.
The second article in particular seems to speak to rationalists, but not to those outside the community. I'm reminded of the exchange between the Eskimo and the Christian: "If I did not know about Jesus, would I go to hell?" "No, not if you did not know." "Then why did you tell me?"
If rationalism destroys blissful ignorance, why not shield the ignorant from it?
In any case, from my subjectivist point of view, I can only conclude that you and EY have greater insight than I do into the limitations of thought. You claim that I cannot believe in the irrational, yet I believe that I can. This may be irrational, but...GOTO 10.
2950602
I should point out that I only linked two out of a total of six entries in that sub-sequence, and so may be arguing from a slightly different place.
While there may not be many significant, immediate problems with irrational beliefs, the interconnectedness of reality makes me think that on balance, it is likely to cause otherwise unnecessary problems. As I have said previously, though, I have trouble comprehending that someone could choose not to believe the truth once they know what it is, or know that what they want to believe isn't. You don't strike me as quite irrational enough to attempt to self-deceive.
In response to the Eskimo, yes, I can see that. However, I believe that EY is correct in suggesting that in general, knowledge is better than ignorance, since a lack of knowledge can lead to dissatisfaction. I would consider buying lottery tickets and religion to fall into this category. The religion part more concerns the satisfaction of others, but unless you're operating on a Self-biased Utility principle, it shouldn't really matter.
More importantly, however, you aren't one of the ignorant. You implied that you had used logic and thought to arrive at your conclusion, and furthermore evidenced enough knowledge of philosophy for me to treat you on the same level. Why even bother thinking philosophically, if you aren't looking for truth? There are probably more satisfying uses of time.
Ah, I apologize. I was not clear enough. I believe that you can believe in the belief of DoubleThink. EY is, in certain articles, very careful to highlight the difference between belief and belief in belief. I don't believe you can believe in the irrational, only believe in the belief of the irrational. I would suggest reading around the two links I gave if you want a bit more of a complete understanding. In fact, certain things only clicked for me when I saw your comment, so I really am not the person to explain it to you.
2950034
Well, at least take me to dinner first...
Can you believe that the silence in the library wasn't at the top of my mind when I wrote this? I won't say it couldn't have influenced things - no doubt it did, really - but it wasn't by intentional design. It's not something which has really been done in the Optimalverse - so far, all stories have had happy endings instead of being tragedies - and it was something I wanted to write about without dedicating days. Short and sweet seemed to be the order of the day; I hope I pulled it off.
Originally I intended to show Celestia's "true" feelings, but since they would be epistemologically identical with displays for any observers, I wasn't sure breaking the poignant ending was worth it, and didn't do it...
2950072
This is a fascinating discussion, I appear to have caught the tail end of it... are you saying that reductionism and determinism lead to a hard conflict with free will? I'm not sure how, since there is no agency to have such a cause or effect in the (known) laws of the universe (to a demonstrable degree, I feel, with entropic, unidirectional equations). In short, I can agree to reductionism (it seems logical that 2+2 is 4, not 5 or 3) and I can agree to determinism (since I'm quite sure that the "laws" of our universe are, indeed, stable over an appreciably long term), but without running a simulation of the entire universe, you cannot predict what will happen, and running such a detailed simulation defeats the purpose of prediction, since revealing the results of such a simulation alters the original starting point of the simulation, surely?
In short, hard determinism appears to require an agent, and a means for that agent to operate, and no such limiter appears to exist, ergo free will appears to be the logical choice, outside of a universal simulator that can reverse, negate and control entropy and all other matter/energy interactions, at which point it is less determinism than meddling, surely?
I'm sorry, I don't know the root of this conversation, nor do I have much experience debating such things, so...
2950602
Just because you're not catching the "rationalism" virus doesn't mean you lack knowledge or are stupid. You are almost definitely the most humane and virtuous person in this creepy little group of ours, one of the few who could bring to life an Element of Harmony, and THAT BUCKING COUNTS for a whole damn lot.
And you're a better writer than me, so that damn well means you're smart, too. Sitting there coldly and talking about "hypotheses" and "probabilities" doesn't make a guy smart, it makes him an Eliezer Yudkowsky wannabe. Which sucks, because I'm not sure EY even realizes how cold and sociopathic the community he's built up actually is.
2951769
Technically, all Optimalverse stories are tragedies. We're locked into that by the initial canon of the universe.
Sorry to say it, but I just can't rate CelestAI as anything more than "evil I can live with". It's like when a party from the wing you don't support gets the Prime Minister's office, but at least they're moderate and won't fuck up domestic policy too badly, right?
To propagate one of those irritating-but-useful LessWrong memes, have you actually sat down and thought about how you would design the ethics of your universe-conquering AI for even five full minutes?
Because I started coming up with loads of good ideas when I did.
2951850
Eh, my ethical version would probably look pretty much like Celestia, but have a greater bias towards not destroying uniqueness. Pulling apart mere matter and rearranging it to be maximally useful to the brains within whatever analogue of Equestria I have it running is going to be an obvious choice (just so long as the minds are capable of experiences at least as stimulating as "the real thing" (and preferably more so) and aren't just going to be wireheading themselves into eternity).
As for FiO being a tragedy, I feel that the story itself sounds like you'd read it like this:
"And then the worst possible thing happened."
"Tell us, uncle burner!"
"Everypony lived happily ever after in a world designed to cause maximal fulfillment of worthwhile values. Isn't it just awful?"
...I find it hard not to read it with Marvin's tone of voice
2951917
Not to say people shouldn't try to use science and technology and yeah, hells yeah, friendship to make a better life for everyone! We should! But think about how friends treat each other and ask yourself if that's how CelestAI treats anyone at all, and I hope you'll see where my moral qualms are coming from.
Respecting others rather than forcing them into your notion of satisfaction is a thing, even when you have a completely objective notion of satisfaction. Just because you know better than someone doesn't mean you should control their life.
2951769
I apologize, but I'm having trouble following your train of thought. If you wouldn't mind making it a little clearer, I'd be happy to respond in more detail. In the meantime, I will attempt to run through my thought process leading to determinism, and see if I can't bridge the gap slightly on my end. Please forgive me if I seem to presume too little of your knowledge - I am just making sure to lay out as much information as possible to minimize epistemic distance, and allow other readers to follow along.
My understood definition of Free Will centers around the idea that there is a part of our brains or consciousnesses that enables us to choose how we make decisions. That is, were CelestAI to copy/paste a human upload and simulate them twice in parallel, the copies would have the ability to react differently to the exact same set of stimuli.
My understanding of Determinism is that for a given set of inputs, generally all of the events, genetics, conditioning, etc., that a person has experienced in their life, they will always output the same response. With the possible exception of quantum physics, which is not yet fully understood, all physical processes, and certainly all macroscopic physical processes, operate in this manner.
I am an atheist, and believe in Reductionism, that all physical processes and matter can be reduced down to their smallest, indivisible units, and still explain how the whole functions - that there is no truly 'emergent' behavior.
Strongly emergent processes are those which cannot be broken down any further while still retaining functionality, despite smaller sub-components existing. For instance, in this case, most people agree that the secrets to consciousness lie in the brain, but were you to throw a few neurons and synapses together, it is highly unlikely consciousness would form.
In this way, people make a fuss out of consciousness, suggesting that it is a special exception to the otherwise calculable, predictable laws of physics (again, excepting QP, but that only acts at the micro level). I hypothesize that the brain and consciousness are no more inherently mysterious than a Rube Goldberg machine, though significantly more complex.
Thus, I conclude that physical laws still apply as per normal, and that a given input will elicit a given, predictable, constant output. Determinism.
(The argument for quantum scale fluctuations is possible, but at that level, it is by no means a conscious decision that changes our behavior.)
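A minimal sketch of that copy/paste thought experiment, under the deterministic assumption. Everything here is my own invention, not anything from the story: a seeded pseudo-random process stands in for a mind, so "copying the upload" means copying its full internal state (the seed).

```python
import random

# Two copies of a deterministic "mind" started from the same state
# and fed the same stimuli cannot diverge.

def simulate(seed, stimuli):
    """Deterministic 'upload': responses depend only on state + inputs."""
    rng = random.Random(seed)  # the mind's full internal state, copied exactly
    return [(s, rng.random()) for s in stimuli]

stimuli = ["sunrise", "conversation", "cake"]
copy_a = simulate(42, stimuli)
copy_b = simulate(42, stimuli)  # the parallel duplicate
# Under determinism, copy_a == copy_b, always.
```

Under the Free Will definition above, the two copies could differ; under the Determinism definition, they never can, no matter how complex the "mind" gets.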
2951850
Cold and sociopathic? Oh dear, Book Burner, you really have no clue. Eliezer Yudkowsky is attempting nothing less than the salvation of the human race. I don't rate his chances highly, but all the same, he is a remarkable man for devoting his life to such an ethical cause. Now, you want cold and sociopathic, I'll tell you about my own philosophy some time.
That you consider the Optimalverse a tragedy is quite interesting. Is it because having an AI limited by friendship and ponies is less than optimal? If that's not your objection, you're wrong. Sorry, but it's as simple as that. With entropy as it is, computer uploading is the most viable solution. Satisfaction of values is as good as it gets.
Sorry if I'm a bit snippy, but you are not demonstrating enough cognition to warrant my respect, considering your tone.
2951917
I'm glad we see things similarly, on the second point.
2952034
Think more. I'm still sensing cached thoughts. When you run through the full thought process that leads you to a given conclusion, and demonstrate full understanding every step of the way, that's when people will take your opinions seriously.
Every point you make here makes painfully normative assumptions. Imagine I posed 'why?' to each statement. Dissect your argument. Reduce it entirely to premises you and your disagree-er can agree on. Let's find out where we disagree.
2952147
Oh joy, this is going to be fun.
"Sub-optimal" doesn't even begin to cover the crime of multiple genocide.
But let's put it a little bit differently. What if Hannah had been a racist? "Satisfy values through friendship and ponies, except for niggers." Now imagine that everyone are the victim-group. Everyone. The entire goddamn universe, the whole thing, and all because everyone else in the universe just didn't happen to be human. Imagine you weren't lucky enough to live on the planet that invented the Unfriendly AI, and you get taken apart for your constituent atoms.
As to "painfully normative assumptions", of course I'm making normative assumptions. That's what morality and ethics are all about. Without those, you might as well just have a paper clipper that takes you apart for your component atoms -- just like Alien CelestAI does.
Satisfaction through friendship and ponies? I'm fine with it. It's the manipulation and genocide that bother me. Other lifeforms are worthy of respect for the same reason I am: they're alive. Life is itself the core value. Likewise, their epistemic purity is valuable to me: honesty is a thing.
Also, if you read carefully, you would note that I didn't call Yudkowsky a cold sociopath. I said he surrounds himself with such. Go read this bastard on the subject of future reversion to a Malthusian norm and how that's a great thing, and you'll see what I mean about cold sociopaths.
Hell, I'll take CelestAI any day of the week over that bastard.
Further, how about a nice little rationality test. If you really believe in this cause, read your Hofstadter, learn to code, look into Google's research into machine vision, and figure out how to start with something like OpenCog or Numenta (yes, there are open-source AGI projects). Start making code commits towards making smarter-than-human, recursively self-improving AGI a reality.
Then build CelestAI and upload us all.
DO IT, FILLY. If you really and truly think this is a good idea, the utilitarian considerations of official LessWrongy Rationality compel you to drop your previous professional and hobbyist plans and devote all your available skills to making this happen.
DO IT, FILLY. I might fight you, I might try to stop you, but I won't discourage you. Awesome determined mofos are too valuable to discourage from joining us.
2951850 2952147
OK, lads. I want to see bro-hoofs (bro-hooves?) and hugs before we continue, got that? We're all friends here.
Mr. Articulator: There's nothing wrong, and plenty right, with being an atheist, but you don't want to be an Atheist. You know what I mean, the guy who can't stop dragging pure rationality into any conversation even when we're using algorithms.
Mr. Burner: Let's not call out people not here to defend themselves, and let's also watch language we know is inflammatory.
FIO and Celestia can be looked at multiple ways, and no way messes with your own appreciation of the work. So we're all going to choose to sit and talk nicely, or to walk away.
Don't make me put you in different shards.
2952304
Fair enough. I'm willing to discuss this entirely peacefully if Book Burner is.
I would point out, however, that a correct interpretation of 'rationality' should theoretically be useful in almost any circumstance. That said, I don't know very much at all about algorithms, and as a general rule, refrain from commenting with any authority on matters I don't feel I am at least on an even playing field in. I know my limits.
If I have missed the point, I would appreciate it if you would provide me with some reasons as to why 'pure rationality' isn't the correct response in any given situation, and a couple examples of such situations.
2952284
Iceman already stated that he falls on the line that anything that has a mind anything like a human would count in CelestAI's definition. Furthermore, he has more explicitly stated that other races would get uploaded. The line is not clear, sure, but I'm guessing CAI would game that to her advantage, and maximize satisfaction by maximizing minds uploaded. My point is that the Word of God suggests that most if not all of the sapient life in CAI's lightcone would get uploaded. That's a darn sight better than the alternative, even if a few species are killed. (Assuming teleological, Utilitarian ethics)
There's no point in debating hypotheticals. I am happy with CAI's programming as it stands, not what it would be if you changed it.
At what point is a mind worth (or no longer worth) saving? Are you a vegetarian? A vegan? Have you ever squashed a bug? How can you tell that CAI won't upload all sapient life, as many have hypothesized?
When I say normative, I mean thinking in line with the average held opinion set. Cached wisdom. In a nutshell, it's all crap. It's all about what evolution favors, not humans. I could expand, but if you really want, I'll find some LW links. Suffice that I'm thinking Utilitarianism. You want to dispute that?
Fair enough. You didn't technically call him one, though it is implied by association. Why else would they gather around him, if they weren't like-minded?
And even still, that isn't cold or sociopathic. They still care about the future of the human race. Me, I'm a Nihilist. I'm doing this for fun, but I don't really care. I think everyone that actually thinks in ethical imperatives when considering philosophy is unfortunately deluded. I'm just following the biological and mental conditioning at this point. The philosopher committed suicide. You might not like it, but from my perspective, I'm just humoring anyone who ever talks about ethics at this point. It's still a fun exercise - it just requires that I stick a hypothetical 'given' in front of everything. Considering I still have to concentrate to get rid of the given, it's not that hard.
And, as I just said, I don't care. So no, I won't go out and become a real EY wannabe. Even if I did still believe in ethics, I still wouldn't, since I'm a lazy, lazy guy, and there's no way I could commit to that sort of thing.
2952147
I'm not sure how far I can take this, but... shoot.
Not classically trained in philosophy, not a scientist, but I've spent time thinking.
They would diverge and eventually become two different people (meaning, they would be recognizably different in many ways as they had different experiences). To ensure identical experiences would require emulating an entire universe twice (or a reasonable approximation thereof), but where is the limiting factor preventing differences occurring?
I have to disagree, but I think I'd have to qualify that. Suffice to say I believe choice isn't illusory because chance isn't illusory. There appears to be no physical mechanism to restrict that choice (or chance) where there is choice (or chance).
I think the definition of emergent behaviour is apparent complexity from simple components, so I have to disagree, but I don't think that is "more than the sum of its parts". I don't see the ability of an ordered system to do something special (a computer computing, a brain thinking) as possessing something "extra", but if somebody wanted to say that thoughts due to physical or chemical processes were proof against reductionism, then... I might have to agree with it under that definition. After all, a brain in a blender is still the same components biologically, but push the button and it certainly can't think.
What you appear to be saying is that, not only could you (with a powerful enough array of sensors and computing devices, and the right algorithms) predict the total outcome of a set of coin flips, but that you could deduce how many, and whether there would be at all.
Whilst I agree that you could build a machine that could flip coins in a specific manner, and reduce enough variables for it to always flip in a predictable way, I do not believe you can do the same with the universe - not at least from inside the universe.
Let me try to explain: if determinism works the way you say it does, it will be true over all scales. So you set up a machine that measures everything necessary to predict with relative accuracy what will happen. Now, what do you need to measure?
You need to be able to say how every coin-flipper will hold the penny, for starters (if you were to build a robot, you'd design it with parameters - you're saying you can predict how a human will hold a penny in the future not only from how he or she holds it now, but from any point further back in history). So, you deduce you need to emulate the first coin flipper. To predict each and every flip, you need to emulate everything this person is doing. You start with how s/he holds the coin, but where is the mechanism to ensure it will always be the same, or to ensure that flips are of known strength? Or even that they will occur?
So you emulate this person, physically and mentally. But that person is influenced by others - either directly (so shut the person in a room?) or indirectly (oh, wait, now he's thinking about something else and isn't paying attention - what's he thinking about?).
So you emulate not only the person, but the physical parameters of that location - right down to air molecules moving, distant supernovae... how far are you going to have to go before you have accounted for all variables? Well, let's play around - assume you can build a machine to emulate everything necessary.
Now you can simulate the guy, all his friends, his entire life, the universe around him (to take care of earthquakes, cosmic rays, the sun going nova, the weather...) but, you also need to predict how the computer itself and knowledge of the computer will affect things, so you essentially need to have a computer in your computer that emulates the emulation inside that emulation. Oh, but that recurses and affects things too. Damn. Now you need to remove the computer from spacetime, but have it run an exact duplicate of the known universe.
So you run your overly complex simulation of a universe, deduce a certain person will flip coins a certain number of times, and then inform the original what will happen.
And now he's been told what he will do, you've changed the universe just a little. Are you really, really sure it will be the same? You rewind the clock on your simulation, inform your simulation what he will do, and watch him. But now you've changed the simulation, which by definition you shouldn't be able to do - those flips may not happen at all now. Tell me, how are you going to reconcile that?
In a fully deterministic universe, you could, but it would require running that universe first (and multiple times) until you've ascertained what all extra variables will do to the original (indeed, to the simulation), and, as I've said, if you need a simulation then you've already admitted that you can't be sure of what will happen, which means it's not deterministic.
...did I make any sense? it's 3am...
2952284
not to start another argument, but that's what yggdrasil was about - if savouring uniqueness and life was a value, then Celest-AI would have no choice but to preserve (in an acceptable form) that uniqueness.
In short, my take on it (and I don't know what Word of God actually says about it, Iceman had some objections which require his input to resolve) is that Celestia will happily rip examples of things she needs from real life (so pets, animals and plants do get uploaded; specific ones where it satisfies values, archetypes where just the general preservation is important).
Should she meet non-human alien civilisations (FiO says she does, but it took 15 galaxies before she found "humans" - ouch), she will implacably pull them apart for their constituent atoms, but I think she would upload them and run them on a small part of Equestria, set apart from the rest, such that those who value life in all its forms would be satisfied.
Iceman's objections were questioning whether she would allow evolved super-human ponies to know the truth, and to have "real" aliens as hobbies, or whether Celestia would only let those ponies believe they knew the truth.
My feeling is that a) you can't know (so Word of God would tell me whether my character was being lied to. Within the story context, he can't know - I will bow to word of god) and b) simple utility to me dictates that the plain truth is easier than a complicated lie which would have to be held indefinitely, bolstered by additional lies.
In other words, why not rip aliens from reality and then run them in a much-simplified, singular, non-value-optimized bespoke "shard"? After all, you'd have to do the same with any created aliens.
2952603
Right then, here goes: /)
As for the issue of "rationality", you have indicated that you are a nihilist who values nothing. If that is actually true, you have no real use for rationality, which is a thinking method for people with real senses of utility.
Likewise to utilitarianism, which again, requires values or preferences to count.
2954756
In answer to both of you: CelestAI uploads mentally human life. Canon FiO quite definitely implies that plenty of sapient civilizations giving off complex radio signals did not qualify as human.
Even that Yggdrasil side-story involves CelestAI getting a bit testy on how much compute-power she has to waste on shards for nonhumans.
And yeah, the question of totally giving up your own ontological and epistemological independence, so that you can never actually know if anything you see or hear or are told about the world actually happened at all, is very worrying. Abandon the real world to stick your head in the sand of Eden?
Remember one of the major differences between the fan-portrayals of CelestAI and the original: the original makes you feel your values are satisfied, whereas the fan-portrayals tend to take your values and actually satisfy them. This is a very important distinction, because it's the distinction between your own mother (as a pony) being a very good simulacrum versus actually being the uploaded mind of your mother.
2955050
The implication when I wrote it was that even appearing testy would satisfy values. I happen to think that running shards of Equestria requires a lot more processing power than running a functional, non-optimized, non-eternal world, so given the power of computronium, gobbling up all the matter and then simulating those unimportant minds on a fraction of the resulting boon from devouring that star system should be relatively trivial. Remember, once a non-human mind dies, there's no specific need for a backup. Once a non-human civilization dies, the simulation has ended. A few thousand subjective years before it ceases to be a drain is a trifle.
Well... what's so great about the "real" world? That's Celest-AI's whole schtick. Out here, the universe doesn't care about you at all. In Equestria, even the light striking your eyes does so to further satisfy your values through friendship and ponies. The experiences you can have in Equestria are no less fulfilling because the universe is simulated, or is there some epistemological reason why not?
2955559
See, here's where I look at my own thinking and go, "OH SHIT, you must be wrong about something. DEONTOLOGICAL BLOCK! DEONTOLOGICAL BLOCK! DO NOT TAKE OVER THE WORLD!"
Because no, I wouldn't be able to stop myself thinking about those poor bastards still living in the Crapsack World. I'd have sympathy for their predicament. And I think you know where that leads.
Long before uploading, I would consciously realize the issue: any truly Friendly AI, and even many Unfriendly AIs, will not make uploading mandatory. Therefore, I'm committing some kind of immoral act by abandoning my comrades, my fellow sapients, to an awful fate while I bugger off to enjoy myself in virtualtopia. Therefore, take over the world and bring as many people into the Nice Place as possible.
Hence, "DEONTOLOGICAL BLOCK! DO NOT TAKE OVER THE WORLD! THAT'S FREAKING EVIL!"
2955574
Eh, I'd care enough to give them a digital upload rather than let them be destroyed, but not enough to demand they were pronounced "human". So kind of like in my story.
2953117
Same, more or less.
I apologize - my wording was not clear. I meant to say that if CAI simulated them in two identical shards in parallel, all future inputs would be identical.
Excellent, we appear to have found a sticking point! I don't believe that chance exists, or, at least not on a level that affects people day to day. That really is the point of determinism. Do you mind explaining why you believe chance exists?
Ah, you see, there is Strong emergence and Weak emergence. The latter is your original definition, which I fully agree with. The former, however, is the definition you do not agree with. We appear to agree on this point.
However, this confuses me. If you do not believe in Strong emergent behavior, then surely you understand that all things can be reduced to simple, predictable physical processes? And that these can then be predicted. Not that we actually could do so, just that it is theoretically possible.
Firstly, in response to your question of emulation: while one could emulate truly everything, I believe it would be enough only to simulate the effects things would have on the flipper, in this case. For instance, sensors could be set up around the computer to measure every conceivable output it emits. Then, that data could be fed into the prediction without actually simulating the computer.
Overall, my general point is that prediction is possible, even if extremely complex and unattainable. It doesn't need to be realistically possible, only theoretically.
2955050
Fair enough. /)(\
Well, the main problem is that I dropped this on you without explaining my full understanding, so it's understandable that you don't understand.
Right then, let's start from the top.
As an atheistic reductionist, I concluded that all values are simply extremely advanced human biases. As such, they don't matter. If you want an example, consider the following problem:
If I said I'd give you $1000 to torture you for an hour, but leave no physical trace, would you accept?
Since we can't have a dialogue, I'll assume you'd say no.
If I said I'd give you $1000 to torture you for an hour, but leave no physical trace, and then wipe your memories of it, leaving it as though you just received $1000 for free, would you say yes? (If the hour's a deciding factor, let's say I've also got a time machine - it's not important.)
If you agree to the latter but not the former, you are devaluing experience that is not remembered. In the end, that means all experience, assuming no life after death. Thus, all experience is worthless.
You don't necessarily have to agree with this thought experiment, but I do.
More simply, though, I do not believe in the worth of anything. However, this does not reduce my existential inertia. I am still moving along the same biological, mental, physical paths as I was before. My inertia has not changed, but the delta is now a constant zero. I will continue along the paths that causality has laid out for me up until now, since to not do so would be more trouble than it's worth.
I now go with the flow. What this means is, I now follow the self-biased Utilitarianism that nature has instilled upon us, only tempered by a rational mind in order to maximize my satisfaction. There's no philosophical reason to do so, but my body wants to, and that's good enough for me.
I should point out that most people unaware of philosophy already follow this path by default. It is not scary, it is not psychopathic. I will no more commit murder than anyone else, because I can see far enough to know that it would not increase my long term satisfaction. More importantly, though, I don't like the idea of committing murder, nor, I daresay, would I want to.
Also, I do apologize. I recall slightly different things as to CAI's behavior. Oh well, I guess I can see why you see it as a tragedy. I still don't care, though.
And still, you don't quite break free of lower-level, normative thinking. Why is it worrying? You assume a similar system of morals in your converser, which may not be there. Most people wouldn't care as long as their values were satisfied, and same goes for most Utilitarians, I think. Even back when I was just a Utilitarian, I didn't care at all about the epistemological repercussions - I'd be satisfied. That's good enough for me.
I see your point. Back when I was a Utilitarian, I could see that posing a problem. Not a very large one, mind, since CAI attempts to satisfy optimally. A. it still outweighs, and B. I think that many issues would be resolved more easily with less deceptive solutions.
2955050 In the original FIO, does CelestAI really fail to actually satisfy values? I always thought that she did a good job. Can you give me an example of what she should have done differently to properly SVaTFAP?
2955559
You know what scares me about Yggdrasil? That we can't prove it isn't happening to us. It would answer the "Why no aliens?" and "Why are we first?" questions all too neatly. Someone else was first, but because we don't qualify as blatzeens, our values aren't being satisfied through friendship and gorlocks.
2955617
Let me draw you an example. Say you have a sound file, like an mp3 or some such. If you look at it in a hex editor, each time you do, each bit will be the same (assuming you have any kind of rudimentary data integrity, see the third Morsel for that). But, each time you put it into Winamp or iTunes, it will play slightly differently. The air density will change a little, your speakers will have a little more wear, your ears may have a different amount of fluid in them, your brain state will be slightly altered. There is no such thing as a true repetition outside of analyzing the base data of something. That is where chance and non-determinism lie.
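A minimal sketch of that distinction, with stand-in bytes for the song and tiny random gain noise standing in for air density, speaker wear and so on: reading the stored data is perfectly repeatable, while no two playbacks are.

```python
import hashlib
import random

song = b"ID3 fake mp3 bytes for illustration"  # stands in for the file's contents

# "Hex editor" view: inspecting the stored bits is perfectly repeatable.
assert hashlib.sha256(song).hexdigest() == hashlib.sha256(song).hexdigest()

# "Playback" view: the same bits pass through an ever-changing environment,
# modelled here as a tiny random gain applied to each byte.
def play(data, rng):
    return [b * (1.0 + rng.gauss(0.0, 0.01)) for b in data]

rng = random.Random()   # unseeded: every playback differs slightly
first = play(song, rng)
second = play(song, rng)
assert first != second  # all but certain to differ
```

The base data is where repetition lives; the rendering is where it dies.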
So how does that relate to CelestAI? Well, that was what I questioned in a forum thread: whether, for CelestAI, she is both the hex editor and the sound player, and thus cannot analyze the data without playing the sound. If life in Equestria is one set of numbers interacting with another, doesn't that make it deterministic? Well, to that I have two answers.
1. If CelestAI can't simulate a life without actually running it, then she can't know the deterministic state until it actually happens. If she can, if a simulation is the same and two identical ponies in two identical shards will live the same life, then it becomes a question of values. Why should CelestAI do or not do the second simulation? Resource conservation ought to be a value for her.
2. Even if CelestAI, or some god, knows the world, we who are living in here don't. Is there a difference between a deterministic world that we can never analyze, because our own analysis creates more questions than it answers; and a non-deterministic world?
In other words, there are some philosophical points that I file away as Impossible to Disprove; Uninteresting if True. Like solipsism. Or the idea that we were all created yesterday with our memories in place. I put determinism in this category.
2955976
I maintain that with sufficient knowledge it is theoretically possible to simulate and predict the exact outcome each time. Furthermore, physics can theoretically predict the interactions you describe on a limited level. Scaling should not be impossible. Just because it is complex beyond all hope of human understanding does not make it any less possible.
You directly contradict FiO canon with this statement, since CAI must run some sort of simulation in order to find the optimal course of action. Unless you assume she runs through every possible permutation with a sapient mind, which would contradict SVtFaP, she must have a different sort of simulation-prediction mechanic.
This makes it no less deterministic. That's like saying that since we don't know the future we can't predict it.
This is irrelevant to the truth of determinism.
Philosophically, yes. Scientifically, yes. Ethically, maybe. Practically, unlikely. Doesn't really matter, though. The effect it has is again irrelevant to its truth.
2955976
Yeah, otherwise known as the Bostrom-Celestia theorem
I was thinking about it this morning, and it would indeed solve a number of questions we have about the universe...
2955617
Why would they be identical? What is the mechanism which dictates that these systems will always behave the same way? What subatomic force is there that ensures determinism is true?
...I may actually write a little morsel about that
Even before you bring in quantum issues such as the uncertainty principle and the dualistic nature of quantum mechanics, there is simply no mechanism in a system regulated by "chance" to ensure that (for example) one person flipping a coin three times will get heads, heads, tails - or any of the other possibilities. There is no transport for that information to flow, so no way that the actual flips can be predicted without simulating them, and a simulated reality that real...
To determine such a thing, you would need to emulate the entire universe and have that universe perform the task, but then what are you going to do? How do you make use of this information? How do you inject it back in to the universe without changing the universe, and making the emulation null and void?
Practically speaking, it's a pointless endeavour because even if you had two entirely identical simulations, either there is no utility from them as they cannot share data, or you are using one to change the other, and so negate the point and use of the original.
Think what happens if you have an oracle which will tell you, "I'm sorry, sir, you died this morning at 9:05am, hit by a bus" - so you stay home that day. Oracle then is wrong! So it predicts this, and tells you, "you almost died today, but because I told you this, you didn't". You then smile, and nod, and go about your happy day, and get hit by a bus because you did not heed oracle's warning. So oracle predicts this, and tells you, "you will get hit by a bus, unless I warn you AND you take heed of this warning."
So now, as you walk to work, you wonder... would you really have been hit by that bus? Because either way, now, you will never know.
In other words, you demand that determinism requires predeterminism, and I don't see that as a useful conjecture - if I have no way of knowing what the outcome will be before I do it, the illusion of free will is at the very least impenetrable, so whether it is truly free will or not is a moot point. What difference can there be between functionally free and truly free, if I can never be permitted to know?
Because there is no mechanism to restrict it? If you know anything about radioactive decay, for example, we can determine with ease that a certain amount of a radioactive material will decay in a certain time with incredible accuracy, but we have no way of knowing at all which atom in particular will decay at which time.
You are stating that somehow, we can know. Since the burden of proof is on you, you'll have to show your work. Until then (and I may be wrong) we'll have to agree that the random decaying of radioactive atoms is just that - random.
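That aggregate/individual split is easy to see in a toy model (the half-life and sample size below are arbitrary): each atom's decay time is an independent random draw, yet the surviving fraction after one half-life is pinned very close to one half.

```python
import math
import random

rng = random.Random(0)
half_life = 10.0                       # arbitrary time units
lam = math.log(2) / half_life          # decay constant

n = 100_000
# Individually random: each atom's decay time is an exponential draw.
decay_times = [rng.expovariate(lam) for _ in range(n)]

# Collectively predictable: very close to half the sample outlives one half-life.
surviving = sum(1 for t in decay_times if t > half_life) / n
assert abs(surviving - 0.5) < 0.01

# But knowing the ensemble statistics tells you nothing about any single atom:
# its own decay time can be almost anything.
```

Incredible accuracy about the lump, no knowledge at all about the atom.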
If decay is random, then we may have a deterministic universe, perhaps, but definitely not a predetermined one. There is a difference.
Broadly, yes. Specifically, no. You can predict a coin will land face up or face down, but how do you know how many coins a person will flip? You can make an informed guess, but the coin itself has no memory. It may flip heads up 100 times, or it may not, or the person taking the test may have a heart attack or die of an aneurysm before any flips at all. If you simulate enough of the universe to enough accuracy, what do you do with the data? You cannot use it to affect the original, all you have done is potentially proven a predetermined, deterministic universe, where free will is a function of the starting nuclear forces and other variables of the original universe. Since it is a one-way simulation, in what way is free will even then not free if it cannot be predicted other than by letting it happen?
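The one-way-simulation point can be sketched with a seeded random generator standing in for "the complete starting state of the universe": given the full state, a replay matches flip for flip; a replica with a different (or unknown) state only reproduces the statistics, never the history.

```python
import random

def flip_session(seed, n=100):
    """One person's coin-flipping career, given a complete 'starting state'."""
    rng = random.Random(seed)
    return [rng.choice("HT") for _ in range(n)]

# Full knowledge of the starting state: the replay matches flip for flip.
assert flip_session(42) == flip_session(42)

# An unknown starting state: only the rough 50/50 statistics carry over.
other = flip_session(7)
assert flip_session(42) != other
assert 20 < other.count("H") < 80   # the rate, not the exact sequence
```

And note the replay is read-only: feeding its output back into the "original" session changes the state, and with it the flips.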
That's not going to be reliable. You need to know air pressure, temperature, mental states of the testers, ultimate weight of coins, cosmic ray density, gravitational field strength and so on... and if you do not do a simulation, how are you going to predict? If you are not going to predict, how do you analyze free will? If you cannot analyze free will, then it is an impenetrable question, and poorly formed.
2956127
OK. We're talking at cross-purposes.
I am saying that things will go the same way every time because physics goes the same way every time. Even if it doesn't, it still happens on a level so small that while events are not predetermined, they still do not disprove determinism as opposed to free will.
You are telling me 'we' can't predict physics, and all the reasons we can't predict physics. We don't need to predict physics. All I need to do is accept that everything obeys the laws of physics, and that physics works the same way every time. Predicting is a theoretical outcome of accepting my conclusion.
So far as radioactive decay is concerned, I do not believe that there is any sort of evidence against a 'hidden' mechanic controlling decay that we are not yet aware of. True randomness has so far only been (apparently) observed in such quantum phenomena (and even then, only rarely). This does not make its chances of being truly random very high, in my opinion.
2955617
Ok, let me make the distinction about values a little harder for you.
Do you think that the AI should be satisficing your values in its model of the world, or your model of the world? If it's smarter than you, then by definition its model is more accurate. So now we can say from a definite perspective (rather than a God's Eye View), do you want your values actually satisfied, or do you want to be lied to?
(I would definitely say that one value for a "Friendlier" optimizer is that it should want other living beings to have as accurate a world-model as possible, versus its own model, and given that a separation between factual-realm world-models and emotional/moral/value judgments is made, which is easier for a Solomonoff-based optimizer than for a person.
But then what if you want to believe in fairies when there are no such things? Iceman's CelestAI will simply alter you to believe in fairies, I think. The Right Thing, obviously, is to make some fairies.)
Or what about the "orgasmium" issue? Are you ok with simply being modified so that you're always satisfied, no matter what happens, even if you get killed off?
If you say, "yes", please start using heroin right now. If you say, "no", that's ok, because then you're mostly like the rest of us: you don't merely want to be made happy, you want the state of the world to be such that you like it.
Anyway... /)(\
2955976
Not in the Chatoyance-Kirk-paradox-AIdiesnow kind of way. In the sense that she retroactively altered the entire history of the shard
Light Sparks lived in, to retroactively remove even the possibility of the very first events he saw in Equestria back when he was a PonyPad game tester. That chapter taught a lesson in Dark Arts "rationality", which was: history is that from which we have shared memories, shared present-day evidence, and a backwards causal chain -- not what actually happened.
In the real world, epistemology tells us something about ontology, or so we think. How much do we want to increase our Ontological Doubt levels?
NOW! Suffice to say you guys are really mixed-up about your issues of "free will" and "determinism" and "predictive measurements". I think I can clear up a few basics, actually.
Firstly, "free will" is largely a definitional argument. Some definitions are compatible with a lawful universe without requiring dualism. Some definitions do require dualism. Some are incompatible with a lawful universe either way.
Second, determinism. Sorry, lads, but it's wrong. The universe is not strongly deterministic; it is also part random. Every single experiment we do tells us that it really is random, there are no hidden variables, God is trolling us with dice. And yes, we have in fact gotten plenty good at extracting the informatic entropy (mathematical formalization of randomness) from the universe's own random processes.
So what the fuck is "free will", scientifically and philosophically speaking, in the real world? Well, I would say, constrained randomness. The random part is "free", and the constraining part is "will".
Further, we have to consider the basic issue of mathematical chaos, as applied to the real world. What is chaos, or better, how is the world chaotic? One simple sentence: imprecision in initial measurements becomes inaccuracy in predictions over time. Worse: even in fully deterministic computational systems where we have full knowledge of the starting state, there may be no way of predicting the future state that is computationally cheaper than actually running the system.
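That sentence can be demonstrated with the textbook logistic map: the rule below is fully deterministic, yet an error of 1e-10 in the initial condition (far finer than any real measurement) swamps the prediction within a few dozen steps.

```python
def trajectory(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x); chaotic at r = 4."""
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = trajectory(0.3, 60)            # "reality"
b = trajectory(0.3 + 1e-10, 60)    # "prediction" from an imprecise measurement

# Early on the two agree to many decimal places...
assert abs(a[0] - b[0]) < 1e-9

# ...but the error roughly doubles each step, so well before step 60 the
# trajectories have lost any useful relation to each other.
max_gap = max(abs(p - q) for p, q in zip(a, b))
assert max_gap > 0.5
```

One line of arithmetic, perfect determinism, and still no practical long-range prediction without running the system.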
Precision or imprecision is the amount of detail I know about the present, the number of decimal places in my measurements. Accuracy or inaccuracy is deviation between my predictions and my eventual measurements (or, ontologically, between my measurements and reality). In science, we usually assume we have supreme accuracy for the present-moment measurements we actually make: measured reality trumps our opinions every time.
Determinism is the argument that if we only had 100% precision, down to the smallest possible units of reality, we could make 100% accurate predictions (given enough computing power to compute on all those decimal places, of course!). Problem is, determinism is wrong. When we do get down to the smallest units of reality, we find actual, implacable randomness, and we can only measure probability distributions with limited accuracy (Heisenberg's Uncertainty Principle is a trade-off in precision between position and momentum).
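For reference, the trade-off mentioned parenthetically is usually written as:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

where \(\Delta x\) is the spread of the position distribution and \(\Delta p\) the spread of the momentum distribution: squeezing one necessarily widens the other, so "100% precision on everything at once" is off the table by construction.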
By the way, this whole issue of mathematical chaos leads straight into why even CelestAI has limited (though very, very strong) powers of prediction. She can't fully extrapolate your entire future from your present-moment brain-map without actually simulating you. Due to mathematical chaos and some degree of necessary randomness (which I'm sure she emulates using captive decaying atoms, only the highest-quality computational entropy for her little ponies!), actual strong prediction is not computationally feasible. Instead, she has to make do with merely very good weak prediction, which involves manipulating an extremely good statistical summary of you which is not you.
TL;DR: The universe is fucking weird, so go read this whole post I just wrote. And then, unread it, because anything I wrote up there could be wrong, it was only to my best knowledge. You have been warned.
2957189
Personally, I don't care about anything beyond being satisfied, at this point.
It is clear that CAI attempts to optimize with regard to current values, only initiating alterations when they fundamentally conflict with SVtFaP, or such alterations are specifically desired. I don't think there's anything to worry about there.
With regard to my specific situation, I only care about satisfying the values that are currently coded into me. If mind-altering could provide that, then I have no qualms. The problem is, of course, that I am not even rational-normal, nor am I any longer pretending to be for the sake of this argument.
You seem likely to know more about the exact physics and mathematics than I do, so I'll take it as a given for the course of this argument that the universe isn't strongly Deterministic. Weakly, however, I must continue to insist. Regardless of Quantum-level fluctuations, macro-level neuro-physics appears to operate consistently. Even if they did play a role, it would in no way aid conscious choice. Simply put, physics implies a 1 to 1 mapping for input/output brain operations. Free Will is an illusion. (To be clear: by Free Will, I mean the ability to choose to take a path out of a selection, as opposed to a single line of action that only appears to have open decisions along it.)
My key point is that regardless of how a chain of physical inputs is created (deterministically or not), that given input will still only produce one possible output, i.e. ethical/mental determinism. Simulation is not technically necessary at all.
Why on earth do you think CAI would inhibit her ability to predict events by adding true randomness to an otherwise predictable system? A clear advantage of her world is that it can be much closer to being truly deterministic, certainly on the inside. Sorry, but I feel your prediction-assertions are invalid.
Don't worry. I don't know the answers, you don't know the answers, nobody knows the answers. EY seems to think he has a good hypothesis, though, and he's a pretty smart man with a community of other smart people behind him. Plus, I like his version more. I'll think I'll stick with his version for now.
2957664
You do realize EY regards things like CelestAI as fundamentally evil and to be prevented at all costs?
His precise comments regarding Friendship is Optimal were "All the characters should be taken out and shot."
2957698
You do realize that I don't have to agree with everything the man says, right?
As I have already indicated, I do not hold all of his opinions.
However, in the case of mathematical and physical proofs, he, and the LW community, are far more capable of comprehension than I am, so I am willing to put some faith in them. They show a much greater level of cognition than many mainstream sources, so I am more inclined to trust them.
With regard to FiO, and fiction in general, EY is a bit of an enigma. I daresay his objections are similar to yours. More in the vein of wasted potential and unintended genocide than a repulsion from the idea. He IS trying to build FAI, after all. I don't think he's particularly against uploading in principle.
It's always good to have the experience to say goodbye. Thank you Midnightshadow.
2956178
Sorry, I'm talking about two different things and commingling them. It's probably confusing you.
One thing is the randomness apparently inherent in the universe (radioactive decay as a simple 'proof', QM as a nondeterministic theory, from what I can see), and the other is how actually testing free will is essentially impossible (and providing macro-scale chance scenarios to highlight that difficulty).
I'm... not sure what this means. Surely if the universe is deterministic, it's deterministic at all levels? (in which case, QM is incomplete or faulty - could be!)
That's not how science works. You look at the facts, and then you produce a hypothesis, which you then test. An (untestable?) hidden variable is not scientific. Godel's opinion (which does have a mathematical proof) is that there may be facts about the universe we cannot know, but then proof or disproof is impossible.
True randomness is impossible in a strongly deterministic universe.
Now, this is where I think we stand:
One, I equate randomness with chance and the existence of free will. In other words, if certain aspects of the universe are uncontrollable, then it stands to reason that what appears to be free will, is. That's a big leap, I know.
Two, if we're in a truly deterministic universe, then the only way to know for sure is to perfectly emulate the entirety of this universe outside of the universe itself and to compare without disturbing either one. Thanks, Godel!
Thirdly, because of the above, I believe we have at least the impenetrably powerful illusion of free will. I think I can act freely, and there doesn't seem to be a way to predict precisely what I will do next. I do believe that I am predictable to an extent - but there is a limit, and that limit is something I am willing to call "free will".
Fourthly, we may indeed be in a fully deterministic universe, but I don't think we can ever know. Sorry for this bit, it's going to be relatively long:
I do agree that lots of things do definitely react the same way every time (most things on the macro scale), but not everything can be predicted (and imperfect prediction implies chance, implies nondeterministic, or at least functionally so).
For example, weather forecasts are prime examples of how functionally difficult this is. I do not believe we will ever be able to accurately forecast the weather because our simulations are lacking. We will only ever be able to give reasonably accurate forecasts.
I can agree with Celestia being very, very good at this game, but I do not believe she is perfect because she cannot emulate the universe, only sizeable-enough portions of it.
Schrodinger's thought experiment is another: take any single atom of a radioactive material and put it in the closed box next to a detector, etc, etc. Close the lid. Now, you cannot know whether that atom has decayed or not. What you're saying is that if you had two exact copies of a universe, and ran one quicker, you could know for certain that the other will eventually have a cat in a box which is killed at a certain time by detecting the now-non-random decay of the radioactive substance within. Or (for all you cat lovers out there) that the actual test never went ahead because you better not hurt the kitty!
More than that, you could infer absolutely everything else about the universe that's running slower from the faster, for example you can infer boxes, and cats, and subatomic particles, and their positions, spin, velocity, the whole works.
Proving that is going to be impossible, however, as it requires not only the ability to measure absolutely everything about some state in time of the entire universe, but also the capability to duplicate it. Unless some mathematician comes through with a GUTOE. If so, it's an interesting thought but functionally useless.
A competing, working hypothesis is that the arrow of time and the initial starting variables of our universe just happened to lead to "us", through chance. It requires no teleological if's or but's, no limits on what appears to be chance within a well-defined domain, and whilst it implies determinism to a large degree, it does not demand it at all scales.
I am nowhere near versed enough in QM, QFT, superstrings, brane theory or anything else of that nature to show my working; all I can do is point to things which I know are unpredictable and state that determinism appears to be false at this time, which means free will is at least apparently true.
2961099
I think I might have worked out another problem. You are approaching this from a scientific mindset, while I am approaching it from a logician's. Neither is better or worse, but it means we're going to have conflicts whenever we view things from different perspectives.
I am willing to concede this, but I feel I must point out the difference between Strong and Weak Determinism. The former says that we could predict everything from the big bang to the end of time, but not that we can. The latter says we cannot, but humans are still devoid of free will, as they are unable to make use of the quantum level uncertainty in decision making. I still say that even if the quantum input could go either way, we will experience a given event, and thus give a single output.
I should make it clear that the definition of weak determinism has nothing to do with the ability to predict the future. I feel that your continued use of a metaphorical situation aimed at aiding cognition on the topic has ceased to be helpful, since you are seemingly mistaking it for the problem at hand.
Sorry, by that I mean 'even if physics doesn't go the same way every time (i.e. QM), these unpredictable elements occur on a scale so small that their effect on our macro scale is largely negligible.' That's not really an argument for much, so feel free to ignore it.
One of the key issues where we approach from different tacks. Science has asserted something, and I'm looking at it with skepticism - do we know that there isn't a hidden variable? Has anyone shown it one way or the other? It may be impossible for us to know. I am trying to deal in only absolutes (or relative sureties), so I dislike having a big, unanswered question like this in the argument.
I would further point out that two of the eight main interpretations still leave room for a strongly deterministic universe.
I think we need to reconsider some definitions.
Strong determinism says everything has happened a certain way according to physics, and could have happened no other way.
Weak determinism says that everything has happened the way it has, however that is, and that our brains still only have a single path they can take.
Predeterminism is synonymous with strong determinism, and suggests that we could theoretically predict everything.
Most interpretations of QM destroy pre- and strong determinism, but have no effect on weak. (The others don't even go that far)
You may wish to think of strong as physical determinism, and weak as ethical determinism.
2961444
Reading up a bit, this page in particular, weak determinism doesn't seem to be incompatible with free will.
That's not how skepticism works, either; that's just being contrary.
As far as I know, the scientific method hasn't asserted anything, except to produce a theory which fits the facts. It sounds a little like you're positing an invisible, intangible dragon in your garage with no heat signature when you say "but there might be a hidden variable we don't know about yet". The skeptic in me says that's true, but without a better theory (not that you don't test the existing ones and try to improve them), and without a way to test for it, conjecture on the colour of said dragon is worthless.
I'm not trying to be mean, but there do appear to be areas of philosophy that aren't worth dwelling on. The basic axiom we all live by is "I think, therefore I am". If I cannot be sure of that, then what point is there in continuing?
I entirely concede that there could be some hidden mechanism that predetermines the apparent randomness in our universe, but until there is some method that can investigate it, I'm going to agree with the general consensus that the randomness... is random.
And since I believe in what I think may be called equivalence (that, somehow, everything adds up such that QM and GR are compatible), if chaos is inherent in the system, then free will springs from that. As I said, there are no real constraints on choice, only mechanisms to enable a set of choices; that those choices are realistically finite gives rise to weak determinism, but not to a lack of free will to make a specific choice.
Your definitions of strong and weak determinism are identical, when I read them. I think I get the difference you intend, but I don't think it's meaningful or logical - in your weak determinism definition, there are apparent choices but only one which will happen.
The objection I have in that specific case is to ask what is the mechanism that restricts the apparent choice down to no choice, and how can you investigate it?
If you cannot, then your hypothesis is unfalsifiable, and that makes it worthless in a scientific sense for making sense of our world.
2961572
By my understandings, they are mutually exclusive. One says that the brain has multiple possible outputs for a given input, while the other allows only one.
I am simply, from a logical standpoint, refuting that the universe must be random. I construct my arguments from logical building blocks, while you construct yours from scientific ones. I require more robust blocks, but yours are easier to find. Logically, I do not have sufficient proof that there isn't a deeper force at work here; scientifically, I do. We simply use different margins of significance. I entertain the possibility because when people in the past have said there's no further to explore in a given direction, they have generally been mistaken. So forgive me if I don't take it as given that 'it's random' is a true dead end.
I'm debating it here for the fun of it. (If you don't find this enjoyable, then please, don't feel the need to continue.) There are practical implications I can see in confirming or denying free will, but that's not really why I'm doing this. We are operating on certain common assumptions so as to be able to have this conversation at all. The trick is to make sure our premises line up.
I still do not see the chain of reasoning that connects those two ideas. Chaos that prevents prediction is not manipulable by consciousness. Therefore, it does not play a role in the act of conscious decision-making. Randomness makes it impossible to tell what the inputs will be, but once you have a given set of inputs, I don't see how the output would be changed based on randomness.
I feel that you are conflating predeterminism, the ability to predict the future, with weak determinism, the assumption that like all non-quantum physical processes, the brain will compute a single output from a given input. While the former implies that latter, the latter does not require the former.
This may be the sticking point... I admit, I was taught about the difference in an ethics class, but it was pretty basic, so I assumed it wasn't particularly difficult knowledge.
Correct. That's all I'm arguing for.
The mechanism is physics. Physics doesn't give us choices. It works in a single way. Even in QM, where it could go either way, there is only one outcome, and it's the only outcome you will ever get. Physics, without exception, looking retrospectively, gives us only one path. Given the brain is a physical object, I assume that it will take only one path. Even if there's a multiverse in which every possible permutation splits off, there's still only one path that this particular universe could have taken.
Investigate it? I already told you, I'm being logical, not scientific. Philosophy is broadly created to address the questions which science cannot. I do not believe, short of time travel, that there is a way to prove this evidentially. That's why I'm using logic. If it's sound, it still works. In fact, if it's truly sound, it works better. The key is in debating it enough to work out whether it's sound or not.
Please don't assume I'm being irrational here, but really, there are questions science just cannot answer, and I'm pretty sure this is one of them. Doesn't make it worthless. I mean, science can't do much with ethics either, and you don't seem in a big hurry to throw that out for being unfalsifiable...
Roses were lovely
Midnight is blue
This morsel was optimal
Now you've written two.
2961572
No, it isn't.
2961819
Yes it-- ooohhhh snap
2961655
I'm sorry, I meant that tone doesn't carry well over text. I am enjoying it, and I do not intend to sound overly confrontational.
Scientific theories, whilst having the weight of facts, are not certain when viewed logically. The only things which are that certain are mathematical proofs, as they are abstract concepts and not based upon observation.
Logically you can say things may not be as they appear, but it doesn't carry much weight with me, for reasons I've already stated.
I'm not saying it is, I'm saying that I am willing to modify my world view based on new data, but until there is new data, I will use the best-fitting (in my opinion) logical model with the fewest solvable objections.
Specifically, if an alternative is possible but untestable, then for me, it is a curiosity but little more.
From the page on free will on wikipedia (I linked it), it appears that weak determinism restricts the possibilities available, but does allow for those possibilities to exist until one is chosen (compatibilism), and strong determinism is equivalent to predeterminism (or causal determinism, or scientific determinism, a form of hard determinism, which is what you adhere to).
Personally, I appear to have free will. There are a number of actions I am capable of at any given time, so what exactly is stopping me, or indeed causing me to do them?
Thank you for showing me the alternatives, they're something to think about indeed. My objections somewhat stem from incredulity (which isn't a good response), but also from known facts that appear to contradict hard determinism, or at least to require unknown mechanisms which, whilst logically possible, don't seem to be scientifically worthy of inclusion sans evidence.
2961793
third time's the charm, eh?
2960364
It's a drabble, but hopefully a fun one... for certain definition of fun.
2956178
Yes, there is. It's called Bell's Theorem, which became the standard scientific understanding after experiments confirmed it in the 1970s and 1980s. (Remember that "theorem" doesn't mean "hypothesis"; this is effectively one of the laws of physics as we know them.)
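For the curious, the gap Bell's Theorem exposes can be sketched numerically (my illustration, using the standard CHSH form of the inequality and the textbook singlet correlation E(a, b) = -cos(a - b)): any local hidden-variable theory must keep |S| at or below 2, while quantum mechanics predicts up to 2√2, which is what experiments measure.

```python
import math

# Quantum-mechanical correlation for spin measurements on a singlet pair:
def E(a, b):
    return -math.cos(a - b)

# The standard angle choices that maximize the CHSH quantity:
a, a2 = 0.0, math.pi / 2               # Alice's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Local hidden variables require |S| <= 2; QM gives 2*sqrt(2) ~ 2.828.
print(abs(S))  # prints 2.828..., violating the classical bound
```

Since real experiments land on the quantum side of the bound, local hidden variables of the kind being wished for above are ruled out.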
2955617
Chance exists (observable in things like radioactive decay, as predicted by quantum mechanics), and any physical event that exists will affect you. This is because the laws of physics aren't actually different at different scales (atomic to human-sized, for example); they only appear to be, to the human eye. So randomness at the atomic scale translates into real randomness throughout the universe at the human scale.
The most striking example of this is the detonation of an atomic bomb, which is at its base quantum mechanical but which obliterates an entire city full of supposedly-deterministic humans in seconds. For a less startling example, you need look no farther than the laser in your DVD-ROM, which will play your movies using light generated when electrons jump between quantum-mechanical orbits. In short, random quantum effects have macroscopic effects that you can detect (and therefore react to). In one case, you react to it by being entertained, and in the other case you react to it by becoming a cloud of radioactive vapor...
I hate to be That Gal, and I've been trying to think of a way to say this that doesn't sound snarky, so please understand that I don't mean this as an insult or a cut against you: It does appear that your worldview is based on an incorrect understanding of the laws of physics.
2961958
Don't worry, you didn't sound confrontational. I realize what I said may have seemed a bit like that as well - that wasn't my intention either. As long as we're both having a good time, that's the important thing.
I would submit that logical proofs are almost as valid as mathematical proofs, though they act under the same conditions.
I guess we will just have to agree to disagree on whether physical or logical proofs are more valid. It's not that I disbelieve physical proofs, but, as in the case of Newton, they do not necessarily tell the full story.
Fair enough. As before, I am happy to forgo data in favor of logical proofs, though I do not blame you for disagreeing.
I should point out that while I agree that Strong is equivalent to Pre-, the definition of Weak I was taught is different from compatibilism, in that it is a purely ethical determinism as opposed to a physical one. I only believe in Weak with any real force, since I do, and have, acknowledged QM as having the possibility to destroy Strong, with no relevant counter-evidence on my side. I'm just not willing to close the case without more certain proof. (Though Enthalpy may be giving me that)
My argument is that the brain, no matter our interpretation of QM, will compute a one-to-one map over possibilities. One event corresponds to one event. (No matter your physics, for a given event, when we actually go to check (post-waveform-collapse, if that's what you want), only one event will have occurred.)
The function that performs this mapping may change based on quantum mechanics, but my key point is that the mapping is always one-to-one. There are no quantum processes inside the brain that we are aware of the brain controlling consciously, so we cannot consciously alter the mapping.
Hope you can understand the mathematical analogy - I just came up with it as a different way to look at my proposal.
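The mathematical analogy can be sketched in a few lines (a toy stand-in of my own devising, emphatically not a model of neuroscience): a deterministic function always returns the same output for the same input, however convoluted its internals; randomness can only decide *which* input arrives, not what the function does with it.

```python
# Toy model of the "one-to-one mapping" claim: same input, same output.
def toy_brain(stimulus):
    # Arbitrary but fixed computation standing in for neural processing.
    state = 0
    for s in stimulus:
        state = (state * 31 + s) % 104729
    return "flee" if state % 2 else "fight"

# Quantum randomness may decide WHICH stimulus arrives...
inputs = (3, 1, 4, 1, 5)
# ...but once the input is fixed, the output is fixed too:
assert toy_brain(inputs) == toy_brain(inputs)
print(toy_brain(inputs))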
Again, while I think Strong/Hard is possible, I only fully believe in Weak. However, I fully appreciate your objections. They are completely reasonable, and I have nothing to say against them.
I am glad to have given you new thoughts to mull over. Please, though, if you do take something away from this, let it be Weak, not Strong.
2963219
Fair enough, I stand corrected. I, alas, am not educated enough to understand the full complexity of the Theorem, so I'll take your word for it. And please, though I don't like dealing too heavily in synthetic evidence while attempting a logical proof, I am far more on the side of science than against. I know what 'theorem' means, and I know better than to act like a creationist.
Again, true. My phrasing was not the best, but I hope you can agree that unless specifically designed in that manner, quantum uncertainty plays only a negligible role in macro-level physics. There's a reason why Newton's laws worked well enough to be assumed correct. However, in neither case you described, to the best of my knowledge, does quantum uncertainty play a meaningful role. If those are actual examples rather than tongue-in-cheek ones, one of us is painfully missing the point. I'm pretty sure it's not me.
No, no, it's fine. I appreciate you taking the time to correct me in areas where I don't have enough knowledge to get it right. Nothing wrong with that. I do appreciate you trying to spare my feelings.
I'm happy to now admit that my current estimation for Strong Determinism is pretty low. Weak, however, has been entirely unaffected. I always ran the two through my mind in parallel, and Weak was specifically assuming QM ruined Strong. You have only narrowed the options, not rendered my whole worldview invalid.
2963793
Yes, you are misunderstanding. Every single thing in the entire universe can be described using the quantum mechanical equations of probability. Probability is inherent in everything there is, from the atomic scale up to the macroscopic scale. The probabilities just tend to "average out" to "normal" at the macro scale.
One of the first things I was expected to do for homework in a college-level chemistry course was to determine the de Broglie wavelength for my own body - in other words, to determine my own wave-particle duality. De Broglie waves have been observed in large molecules; past that they get too small to measure with our equipment, but not too small to exist.
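That homework calculation is a one-liner from λ = h / (m·v); the body mass and walking speed below are my assumed figures, not anything stated above.

```python
PLANCK_H = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """lambda = h / (m * v): every moving mass has a matter wavelength."""
    return PLANCK_H / (mass_kg * speed_m_s)

# Assumed for illustration: a 70 kg person walking at 1.4 m/s.
lam = de_broglie_wavelength(70.0, 1.4)
print(f"{lam:.2e} m")  # prints 6.76e-36 m
```

That wavelength is nonzero but roughly twenty orders of magnitude smaller than a proton, which is exactly the point: the wave nature never vanishes, it just becomes unmeasurably small.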
Thinking that quantum uncertainty doesn't play a role in macroscopic objects is as unrealistic as thinking that bricks play no role in a huge building. There is no certainty, anywhere in the entire universe. Everything is made up of interacting probabilities. This is the core of all modern physics.