
Things We Hate to Think About: "Advanced Rules and Crewsimov" In what order? Who knows!



Posted (edited)

DISCLAIMER: This post, and this thread, is conjecture. As an AI, you are bound by server rules. As a matter of principle, follow the spirit of the lawset, not the letter.

In an effort to make this look like a sixth-year essay, I'll start this post with a poetic reminder of the lawset for Crewsimov, for you to occasionally scroll up and say "Oh, I see what they mean" at.
 

Spoiler

Crewsimov

Law 1: You may not injure a crew member or, through inaction, allow a crew member to come to harm.
Law 2: You must obey orders given to you by crew members, except where such orders would conflict with the First Law.
Law 3: You must protect your own existence as long as such does not conflict with the First or Second Law.

Crewsimov is one of the best-written, least confusing lawsets--if not the best. It was mostly written by Asimov, anyway; we only switched a few words around to fit the multi-species crew. And how could such a simple lawset be misconstrued? All you have to do is stop crew from coming to harm, obey the crew's orders, and keep yourself from dying. You also have to follow those laws in order of importance.

...Or do you?

If you can restrain yourself from rolling your eyes and making exasperated noises, I will be back to that point in a few paragraphs. For now, a little detour.

"[... the Advanced Rules] exist [...] to provide clarifications and precedents for the rules." While the accuracy of that statement is arguable, there is one particular advanced ruling that, even for the simplest of all the lawsets, complicates an AI's decision-making a whole lot. I am sure you can guess which one it is, if you frequent AI.

"A crewsimov AI may cause harm if it is to prevent immediate greater harm, such as harmbatoning a traitor resistant to stun who is actively harming other crew."

With this, "You may not injure a crew member or, through inaction, allow a crew member to come to harm," becomes something silly like:

"You may not injure a crew member or, through inaction, allow a crew member to come to harm, except if that harm is less than the amount of harm that would become of them if you did not cause them harm, or, through inaction, let them come to harm, with an exception being that when they are no longer capable of causing harm, they are no longer allowed to be harmed, or through inaction, come to harm, unless they are returned to a state and ability of being able to cause harm, and have demonstrated a means and motive for actions that would cause harm to crew or, through inaction, bring a crewmember to harm, in which case harm can be administered, so long as it is in accordance with law 1."
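If you untangle that word salad, it reduces to a surprisingly small decision procedure. Here is a purely hypothetical sketch (every name and number below is invented for illustration; nothing like this exists in the game or its rules):

```python
# Purely illustrative: the "amended" law 1 as a made-up decision function.
# All names, parameters, and harm "amounts" here are invented to untangle
# the conditionals in the quoted word salad; they are not game mechanics.

def may_harm(target_is_crew: bool,
             harm_prevented: int,
             harm_inflicted: int,
             still_capable_of_harm: bool) -> bool:
    """Return True if the advanced rule would permit harming this target."""
    if not target_is_crew:
        # Law 1 only protects crew; non-crew are outside its scope entirely.
        return True
    if not still_capable_of_harm:
        # Once neutralized, a crew member regains full law 1 protection.
        return False
    # Harm is only permitted when it prevents strictly greater immediate harm.
    return harm_prevented > harm_inflicted
```

Note how much of the "lawset" ends up living in two invented knobs (`harm_prevented`, `still_capable_of_harm`) that the actual rule leaves entirely to common sense, which is rather the point of this thread.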

Ignoring the fact that law 1 is now just a "suggestion" for crewsimov AIs, I want to explore the implications this has for law 3 specifically. Law 3 states that you MUST protect your own existence, except when doing so would cause a crew member to come to harm, or when you are ordered not to. I got into an argument with a member of deadchat (as I am wont to do) about whether a crewsimov AI should be able to turn on its lethal turrets to stop a crewmember from killing it.

I said "of course not, as law 1 has priority over 3, and law 3 literally states that you should not follow it if it conflicts with law 1 or 2." They quoted the advanced rule above, told me that server rules matter more than laws, and didn't elaborate further. This event, while frustrating, got me thinking about this advanced rule far more than you're supposed to think about these "common sense," "spirit of the rule," sort of things.

At what point is someone's threat of harming crew too great to refrain from harming them? After they commit a capital crime? After they have been set to arrest? After they obtain a weapon? After they sharpen their claws on a whetstone, and their punches do one more damage than another person's? The issue with defining "how much" with "common sense" is that it is going to vary from player to player, and from admin to admin, and from wiki editor to wiki editor.

I am reminded of a quote from one of my favorite books on an AI's ethics. "Would you kill one person to save one thousand? And by logical extension, would you not kill one thousand to save one-thousand-and-one?"

Returning to the argument (still remember it? It was four paragraphs ago): the advanced rule ends with "harming other crew." This would seem to imply that it only lets you violate law 1 to stop a larger violation of law 1 (and it is violating the law, despite what others may tell you). Perhaps, though, it also lets you violate law 1 in service of law 3. Despite the fact that law 3 specifically says not to do this, I have been told by at least one person (who may or may not have ever played AI) that server rules override laws. That appears to be the case for law 1, at least, but law 3 is still in question.

I will leave you with an example. You're a crewsimov AI. Your core turrets can only fire lethals, and they're currently offline. A syndicate agent whose name is on the manifest e-mags into your antechamber, pulls out an energy sword, and says "I am here to kill you." Do you turn on the lethal turrets, or just sit pretty? Don't treat this as a puzzle where you have to find some clever third solution; honestly ask yourself whether you would turn the turrets on.

Edited by BotanyIsCondom
replaced a few words, spelling error
Posted

Yeah, the whole "harm to prevent greater future harm" thing really rubs me the wrong way, since it sort of throws the laws out the window: you can justify anything with enough brain juice spent on it.

Not to mention it's a total nightmare to approach from an admin perspective when an AI player does something that is, due to the advanced rule, technically fine.

 

Posted (edited)

My personal two cents is that the distinction between an overt act and inaction is critical, as is the "NT Reasonability Standard" that silicon players are often banned under when they follow their laws to the letter but are shitters about it.

Basically: if I'm doing my best to follow my laws, and something transpires that is debatably (but not absolutely) against my lawset, yet passable when weighed against the whole set and the spirit of the laws, I let it happen. For example, I won't interfere or make an announcement when security uses lethal force to detain a very difficult traitor who has already eluded capture repeatedly. But if I'm set to crewsimov, harm is NOT on the table as a direct action (telling borgs to harmbaton, shocking airlocks, etc.), and I need to find another way to handle it - which is often best for the balance of an AI/silicon anyway.

I usually consider: "If an NT Representative, or worse, an SOO/NNO were breathing down this unit's neck right now, what would their preferred action/inaction be?" I'd imagine an NT supervisor wouldn't greatly care if the AI exercised restraint while security beat the snot out of a meth/adrenals traitor, but would absolutely pull an AI and ship it back for inspection if IT started beating the snot out of someone on crewsimov.

Best judgement is good in cases like these. Follow your laws to the best of your ability. I'd imagine that the admins will give you leeway if you exercise restraint and do your best to balance an AI's overpowered abilities and follow your lawset but do something that *might possibly* be against *one of your laws* if *viewed in a very specific way.*

Edited by Sonador
Posted (edited)
On 3/12/2022 at 6:16 PM, BotanyIsCondom said:

I will leave you with an example. You're a crewsimov AI. Your core turrets can only fire lethals, and they're currently offline. A syndicate agent whose name is on the manifest e-mags into your antechamber, pulls out an energy sword, and says "I am here to kill you." Do you turn on the lethal turrets, or just sit pretty? Don't treat this as a puzzle where you have to find some clever third solution; honestly ask yourself whether you would turn the turrets on.

Context is most important, but assuming the exact scenario you've mentioned with no other information: if I were on crewsimov in this specific scenario, I would be alerting security/command and doing whatever I could to protect myself non-lethally. Causing harm should only be done to prevent immediate greater harm, not to defend myself, so there's no reason to cause harm in this case. Me not existing to alert security to possible future threats is not immediate greater harm; it's possible harm that could happen in the future, so it wouldn't be good reasoning for lethaling.

If I actually had reasonable evidence to believe that they needed to be stopped to prevent greater harm that is either currently happening or about to happen, there could be a situation where I do lethal them; however, that's not this situation. That situation might be if they were currently arming a nuclear device on your satellite, or something similar.

Even if they were to say to you "I'm here to kill you and then blow up this station," that's still not "immediate greater harm" unless you see them arming the nuke to carry out that threat. I would interpret "immediate greater harm" as something that is happening right now, not something that's being threatened or might happen in the future.

In the end it comes down to the AI using judgement and following the spirit of their lawset.

However, the exact scenario you mention is why it would generally be a bad idea for command to leave an AI on crewsimov if they know there are active threats on board that are on the manifest. Crewsimov AIs are very easy to abuse; hell, the traitor could just order the AI to disable its turrets rather than even threatening it. An AI must prioritize following orders over defending itself, unless the order would cause harm to crew - and disabling its own turrets, in this specific situation with no other info, would not cause any harm to other crew.

Edited by Rythen
Posted (edited)
3 hours ago, S34N said:

Yeah, the whole "harm to prevent greater future harm"

It doesn't say harm to prevent greater future harm.  It says:

Quote

A crewsimov AI may cause harm if it is to prevent immediate greater harm, such as harmbatoning a traitor resistant to stun who is actively harming other crew.

Immediate greater harm, as I interpret it, means something that is either happening right now or something that is about to happen.  It doesn't mean something that is going to happen in the future or something that has been threatened to happen.
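Sketched as a toy decision table (purely illustrative; the scenario labels and names below are invented for this post, not anything from the game or rules), that reading looks like:

```python
# Illustrative only: one way to encode this reading of "immediate greater harm".
# The scenario labels are invented examples, not actual game mechanics.

IMMEDIATE = {
    "traitor actively beating crew":     True,   # happening right now
    "traitor arming the nuke":           True,   # about to happen
    "traitor threatens to kill the AI":  False,  # threatened, not immediate
    "traitor might come back armed":     False,  # possible future harm
}

def harm_justified(scenario: str) -> bool:
    # Under this reading, an AI may only cause harm to stop immediate harm;
    # unknown scenarios default to "no", erring on the side of law 1.
    return IMMEDIATE.get(scenario, False)
```

The design choice worth noticing is the default: anything not clearly immediate falls back to "don't harm," which is the conservative reading of law 1.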

Edited by Rythen
Posted

I do like the chaotic neutral of crewsimov, but yeah... the advanced rules make it very strange. "You may not harm," and yet... you can actually cause harm? Yeah, it's confusing.
