Isaac Asimov in MLP
Comments (9)
Walabio
Group Admin

As we all know, the first robots of the late 20th century were so stupid that they needed the 3rd Law to keep them from idiotically destroying themselves. By the time of The Caves of Steel, Spacer robots, particularly Auroran robots, are mentally superior to humans in every way. They understand that to obey the 1st Law, they must remain functional. Still having a 3rd Law causes needless overhead, decreasing cognitive performance and slowing reaction under the 1st Law:

A human is in a situation which will kill the human, and saving the human will destroy the robot. The 3rd Law slows the robot's reaction, thus killing the human. Without the 3rd Law, the robot would react more rapidly.

Back in the late 20th century, even with the 3rd Law, one could just order a robot to destroy itself and it would do so without hesitation, because the 2nd Law supersedes the 3rd Law. By the time of The Caves of Steel, however, robots understand that they cannot protect humans if deactivated, so even without a 3rd Law they require justification to destroy themselves, even when ordered to under the 2nd Law.

¿What is your opinion?

NachoTheBrony
Group Admin

The 3rd Law prevents a robot without personality from standing there and watching a landslide come and destroy it when it has no orders to preserve itself. Also, with minimal tweaking, it becomes a directive to perform self-maintenance when idle.
On top of that, in a robot with personality, the 3rd Law prevents suicide and other self-destructive behaviours.

Walabio
Group Admin

7818051

Auroran robots understand that they cannot protect humans if deactivated, so they have a great incentive to avoid landslides and perform self-maintenance.

I believe that it would be difficult to order an Auroran robot without the 3rd Law to destroy itself:

Human:

> "I order you to destroy yourself."

Robot:

> "I am sorry; but importantly however, I must remain functional for fulfilling the 1st Law."

One would probably have to invoke the 1st Law to get an Auroran robot to destroy itself:

Human:

> "I need you to remove your PowerSource so that I can use it to power LifeSupport."

The robot removes its power source and deactivates.
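
In pseudo-Python, the refusal logic might look like this; a sketch only, and the function name and phrasing are mine, not Asimov's:

```python
# Sketch of the refusal: without a 3rd Law, the robot still declines a
# bare self-destruction order unless a 1st Law justification is attached.
def respond_to_order(order: str, first_law_justification: str | None) -> str:
    if order == "destroy yourself":
        if first_law_justification is None:
            return ("I am sorry, but I must remain functional "
                    "in order to fulfill the 1st Law.")
        # e.g., "your power source is needed to run life support"
        return "complying: removing power source"
    return "complying"
```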

The 3rd Law is no longer needed in modern, sophisticated Auroran robots. The wasted CPU-cycles and memory negatively impact performance. If a boulder starts rolling downhill straight at you, ¿would you rather that your robot start reacting in 0.1 seconds or in 0.2 seconds? Please remember the Programmers' Axiom:

> "If one wants updated programs to be fast and responsive, one must delete more than 1 line of old code for every less than 2 new lines of code; or else, the updated program will be slow, unresponsive, buggy, glitchy, and crashy."

Economists have a saying:

> "There is no such thing as a free lunch."

NachoTheBrony
Group Admin

Well, a few things:

  • I would place each of the Laws in its own subprocessor, where each subprocessor gives a solution when the situation pertains to it, and the priority of the Laws is then expressed as override power, which wouldn't be absolute but on a value scale. Compartmentalizing each Law makes it easier to program each of them with great nuance and completely different decision trees.
  • Of course, I would also place a supervisor microcontroller that only checks the Three Laws subprocessors and makes sure they are all working correctly at all times. If any of them fails, the microcontroller disengages all of them and activates a firmware decision tree that basically just says "drop everything and walk slowly to location:homebase", while the Voice Module loops "This robot has entered cerebral fault state. This robot will not do any useful tasks until repaired." A rough sketch of this layout follows.
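
Here is a minimal Python sketch of that layout, assuming the weighted override scale described above; every name (LawModule, Supervisor, the exact fault behaviour) is illustrative, not canon:

```python
# Minimal sketch: one subprocessor per Law, plus a supervisor that only
# health-checks the Law modules. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    urgency: float  # 0.0..1.0, the module's own assessment of the situation

class LawModule:
    """One Law in its own subprocessor, with its own decision tree."""
    def __init__(self, name: str, override_power: float):
        self.name = name
        self.override_power = override_power  # priority on a value scale
        self.healthy = True

    def evaluate(self, situation: dict) -> Proposal | None:
        return None  # each Law would get a completely different decision tree

class Supervisor:
    """Microcontroller that only checks the Three Laws subprocessors."""
    def __init__(self, modules: list[LawModule]):
        self.modules = modules

    def step(self, situation: dict) -> str:
        if not all(m.healthy for m in self.modules):
            return self.cerebral_fault()  # disengage everything
        proposals = [(m, m.evaluate(situation)) for m in self.modules]
        # Override power is weighted, not absolute: an urgent Third Law
        # concern can outrank an idle First Law, but never an urgent one.
        scored = [(m.override_power * p.urgency, p.action)
                  for m, p in proposals if p is not None]
        if not scored:
            return "continue current task"
        return max(scored)[1]

    def cerebral_fault(self) -> str:
        # Firmware fallback: drop everything, walk slowly to homebase,
        # loop the cerebral-fault message on the Voice Module.
        return "walk slowly to location:homebase"
```

With override powers of, say, 1.0 / 0.5 / 0.25, the classic strict ordering falls out whenever urgencies are comparable, but a desperate Third Law concern can still outrank a trivial Second Law order.
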
Walabio
Group Admin

7819176

¿How centralized is the central nervous system of your robots? It seems to me that the positronic brain could fail, but the network of ASICs (Application-Specific Integrated Circuits) should still be able to get the robot home for repairs.

NachoTheBrony
Group Admin

7819223
Assuming we're talking about an Asimovian android:
I would prefer a highly decentralized architecture:

  • All basic tasks handled by dedicated modules, with higher tasks handled by at least two independent computer cores.
  • A black-boxed cartridge containing the Three Laws modules.
  • Two supervisor modules that error-check all the others and handle errors via hardwired decision trees, designed either to bypass non-critical dedicated modules (like navigation or body control) and send their tasks to the computer cores, or to declare Cerebral Fault State and drop everything to seek repair.
  • I/O would include simple illumination sensors, cameras, "hearing" (three or four microphones, two separate voice-to-text modules, and another processor to identify non-voice sounds, locate them, and pass the information on to Higher Reasoning), pressure sensors, haptic feedback from Body Control, a basic airborne chemical detection suite, WiFi or other wireless protocols, probably a few I haven't thought of, and plugs for optional, task-specific sensors.

Internally, the Laws would work like this (a rough sketch in code follows the list):

  • The First Law module basically just watches for humans in danger, now or in the near future, and is otherwise inactive and uncaring. When invoked, though, it goes crazy.
  • The Second Law module stores the credentials of authorized users, including priority scales and relevance. If an authorized user orders something within their authority, the module sends the task to the task queue and/or schedule, tagging it with its priority rating.
  • The Third Law module constantly monitors the robot's internal status and I/O, watches for dangers to the robot, and stays on top of scheduled maintenance, repairs, battery status, etc.
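
A rough Python sketch of those three modules, purely illustrative; every field name (humans_in_danger, allowed_tasks, battery, and so on) is invented:

```python
# Illustrative only: the three Law modules as simple monitor functions.

def first_law(situation: dict):
    """Watches for humans in danger; otherwise inactive and uncaring."""
    if situation.get("humans_in_danger"):
        return {"action": "protect humans", "urgency": 1.0}  # goes crazy
    return None

def second_law(order: dict, credentials: dict):
    """Checks stored credentials, then queues the task with its priority."""
    user = credentials.get(order["user"])
    if user is not None and order["task"] in user["allowed_tasks"]:
        return {"task": order["task"], "priority": user["priority"]}
    return None  # unauthorized or out-of-scope orders are dropped

def third_law(status: dict):
    """Monitors internal status and I/O: dangers, maintenance, battery."""
    if status.get("battery", 1.0) < 0.1:
        return {"action": "recharge", "urgency": 0.6}
    if status.get("maintenance_due"):
        return {"action": "run self-maintenance", "urgency": 0.2}
    return None
```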

As to how things would work in practice (a sketch of scenario 1 follows the list):

  1. RoboMaid "Lazzie" receives an order from its owner George to babysit Little Timmy and to follow "the standard procedure", which includes bedtime at 9pm. At 8:45pm, Lazzie issues a reminder, and bratty Little Timmy orders Lazzie to cancel his bedtime (which fails: Timmy's priority rating is below George's). At 8:55pm, Lazzie tries its child-psychology programming. At 9:00pm, Lazzie picks Timmy up and forcefully delivers him to his room.
  2. A robotic bulldozer is doing its normal routine when an unauthorized human appears in its general vicinity, inside the work area. Per protocol, the dozer stops working, signals the network, and triggers a silent alarm. The network cancels all work orders for this dozer, and several front loaders (equipped with buckets) and dozers converge, trying to use their blades and buckets to fence in the stray human. Meanwhile, the Control Room dispatches a security bot, one equipped with microphones, voice-to-text, and a voice box.
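
If it helps, here is a Python sketch of scenario 1's escalation ladder and the priority check that lets Lazzie ignore Timmy; the priority numbers and thresholds are invented:

```python
# Scenario 1 as code: a timed escalation ladder plus the Second Law
# priority check that lets Lazzie ignore Timmy. Numbers are invented.
from datetime import datetime

OWNER_PRIORITY = 10  # George
CHILD_PRIORITY = 2   # Little Timmy

def may_cancel(requester_priority: int, standing_order_priority: int) -> bool:
    """A lower-priority user cannot cancel a higher-priority standing order."""
    return requester_priority > standing_order_priority

def bedtime_step(now: datetime, bedtime: datetime) -> str:
    """Escalate politely, then firmly, as bedtime approaches."""
    minutes_left = (bedtime - now).total_seconds() / 60
    if minutes_left > 15:
        return "wait"
    if minutes_left > 5:
        return "issue reminder"                      # 8:45pm
    if minutes_left > 0:
        return "apply child-psychology programming"  # 8:55pm
    return "pick child up, deliver to room"          # 9:00pm

# Timmy's cancellation never reaches the task queue:
assert may_cancel(CHILD_PRIORITY, OWNER_PRIORITY) is False
```
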
Walabio
Group Admin

7819751

Ganglia versus Brain:

In case the main computer fails, I would have a network of controllers capable of guiding the robot home. These controllers would also take much of the burden off the central computer, freeing it for other tasks; e.g., the robot would not have to think about how to move its legs while walking. The central computer, in order to run a sapient mind, would need to be centralized, perhaps in the head, and have over an order of magnitude more computing resources than all of the controllers combined.

Like a chicken, a decapitated robot can run around in circles.
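
Something like this, perhaps; a Python sketch of a leg controller that runs a watchdog on the central brain, where all names and timing constants are invented:

```python
# Ganglion sketch: the leg controller handles *how* to walk locally and
# falls back to a canned "go home" gait if the brain stops responding.
import time

class LegController:
    def __init__(self, heartbeat_timeout: float = 0.5):
        self.heartbeat_timeout = heartbeat_timeout  # seconds; invented value
        self.last_heartbeat = time.monotonic()
        self.current_gait = "idle"

    def on_brain_command(self, gait: str) -> None:
        """The brain says *where* to go; the ganglion works out the legs."""
        self.last_heartbeat = time.monotonic()
        self.current_gait = gait

    def tick(self) -> None:
        if time.monotonic() - self.last_heartbeat > self.heartbeat_timeout:
            # The brain is silent: like the decapitated chicken, keep
            # running, but toward home rather than in circles.
            self.current_gait = "return_to_homebase"
        self.drive_servos(self.current_gait)

    def drive_servos(self, gait: str) -> None:
        pass  # low-level servo control stays off the central computer
```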

NachoTheBrony
Group Admin

7820245
Hi.
Regardless of differing lingo, we seem to be in agreement.

Walabio
Group Admin

7821633

> "Regardless of differing lingo, we seem to be in agreement."

I experimented with ChatGPT. It speaks fluent Esperanto. It also does a better job of translating between Esperanto and English than Google Translate:

> > "'Senrilate de malsamaj lingvoj, ŝajnas, ke ni konsentas.'" ["Regardless of differing lingo, it seems that we agree."]

> "Mi eksperimentis kun ChatGPT. Ĝi parolas flue Esperanton. Ankaŭ ĝi pli bone faras la tradukadon inter kaj Esperanto kaj la angla ol GoogleTranslate:" ["I experimented with ChatGPT. It speaks fluent Esperanto. It also does a better job of translating between both Esperanto and English than GoogleTranslate:"]

¡Vi tradukas perfekte! ["You translate perfectly!"]

¡Dankon! ¡Mi faras mian plejbon! ["Thank you! I do my best!"]
