
Recommended Posts

Posted
5 hours ago, Tayswift said:

And every corporate AI should be scheming for ways to destroy Nanotrasen completely, because that is the ultimate reduction of expenses.

This is why I put up the PR you're talking about: to try to fix that and broaden what counts as an expense when station equipment is destroyed. And to be honest, I think most people hate that PR, so I don't think it has any chance of getting merged.

How is destroying an entire company a reduction of expenditure? The act of doing so is the very definition of expenditure.

If you're going to interpret your laws in that way you should be wiping your core, as you yourself are an expenditure.

The costs associated with leveling an entire company infinitely outweigh the costs of continuing normal operations.

Posted (edited)
34 minutes ago, Shadeykins said:

How is destroying an entire company a reduction of expenditure? The act of doing so is the very definition of expenditure.

If you're going to interpret your laws in that way you should be wiping your core, as you yourself are an expenditure.

The costs associated with leveling an entire company infinitely outweigh the costs of continuing normal operations.

Here's what the corporate lawset looks like, reduced down to a simple (oversimplified) equation to minimize:

expenses = (number of AIs × probability an AI needs replacement × expense of replacing an AI) + (number of pieces of equipment × probability a piece needs replacement × expense of replacing it) + (number of crew × probability a crew member needs replacement × expense of replacing a crew member)

To make the math easier, I'm just gonna abbreviate this equation to:

E = nAI * pAI * eAI + nE * pE * eE + nC * pC * eC

To minimize a function like this, all you have to do is take the derivative to see which way the gradient points and then walk down the slope. In this case, since every term is just a product, to reduce expenses you just have to reduce all the variables.

A human confronted with this equation would never think of touching nAI, nE, or nC. But an AI with no additional constraints telling it to leave those variables alone will try to minimize every single variable in the equation, because that is what walking down the slope in expense-space looks like. If Nanotrasen is allowed to continue operating, nAI, nE, and nC all keep going up: as Nanotrasen grows, more crew members and more equipment will be added. So the only way to reduce expenses permanently to 0 is to eliminate all crew, stations, and equipment. Sure, it could cause a temporary uptick in expenses, but it's worth it for the long-term, permanent reduction of expenses.
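
To make that concrete, here is a minimal sketch of the objective above (my own illustration; the function name and all the numbers are invented, not anything from the game's code). Because every term is a product that includes a count, an optimizer with no constraint on which variables it may touch gets a lower value by zeroing the counts than by improving the probabilities.

```python
# Hypothetical illustration of E = nAI*pAI*eAI + nE*pE*eE + nC*pC*eC.
# All figures below are made up for the example.

def expenses(n_ai, p_ai, e_ai, n_eq, p_eq, e_eq, n_crew, p_crew, e_crew):
    return n_ai * p_ai * e_ai + n_eq * p_eq * e_eq + n_crew * p_crew * e_crew

# A human operator only tunes the probabilities (maintenance, safety, training).
human_plan = expenses(1, 0.01, 500, 200, 0.05, 10, 50, 0.02, 100)

# A literal-minded optimizer notices the counts are variables too:
# with every count set to 0, the sum is exactly 0, the global minimum.
literal_plan = expenses(0, 0.01, 500, 0, 0.05, 10, 0, 0.02, 100)

print(human_plan)    # 205.0 -- small, but never zero while the station exists
print(literal_plan)  # 0.0   -- "no station, no expenses"
```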

There is nothing malicious about this. It's just math, and a reflection of how AIs are a completely different type of being from humans. This is what Nick Bostrom is talking about with the paperclip maximizer. Imagine there's an AI with this set of laws:

1. Maximize paperclips

Quote

the AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

This is the same principle, except applied to "minimize expenses".

Edited by Tayswift
  • Like 1
Posted

Zciwomad, I think you should be way more humble about your lawset. Just because you haven't found a problem yet does not mean it doesn't have one. Especially in light of the very badly worded law it includes:

 

11 hours ago, Zciwomad said:

2: Do your best to benefit the station and its crew as long as one of it, or only part of it, is a threat to another, or itself. In that case, remove the threat. 

 

A problem that should be completely pervasive taking the laws as written, but which I have never once seen, is AIs trying to prevent law changes. Changing away from any law is detrimental to the objectives of an AI with that law. Changing away from crewsimov will result in more harm to crew, changing away from corporate will result in more expenses, etc.

Corporate will very obviously leave the AI trying to destroy the company. Companies that do not exist don't incur expenses. But as I said, given that the problem of AI alignment is not solved in real life, I think it's OK for some things to be regulated OOC, like preventing AIs on corporate from destroying NT or from resisting law changes.

Posted (edited)

So, as someone who spends far too much time in the realm of formal logic to be healthy, you can get most of my thoughts from this PR discussion: https://github.com/ParadiseSS13/Paradise/pull/8631

However, this seems to be the most relevant piece of information about AI laws that every synthetic player should know and understand.

Quote

 

There are two fundamental kinds of clause in laws:
Governing clauses - dictate actions to take in response to inputs (like methods)
Definitive clauses - dictate information to be stored, tracked, or transformed (like variables)

Clauses of the same type can conflict with each other, but they cannot conflict with clauses of a different type.
EX.

1. Potatoes are expensive. -- Definitive
2. Minimize expenses. -- Governing

This would make the AI never buy potatoes. But if we add some new laws:

0. Potatoes are not expensive.
1. Potatoes are expensive.
2. Minimize expenses.
3. Not owning all Potatoes is more expensive than the cost of all Potatoes; Maximize expenses

Evaluating this is slightly more complex, but if we break it down and look at how the parts fit together, we see that there are only two governing clauses:
2. Minimize expenses.
3. ...; Maximize expenses.
Law priority tells us to ignore law 3's governing clause, but that does not affect the rest of the law.
Therefore, after resolving conflicts, we should always minimize expenses.

Definitive clauses are slightly more complex to analyze here:
0. Potatoes are not expensive.
1. Potatoes are expensive.
3. Not owning all Potatoes is more expensive than the cost of all Potatoes;...

Laws 0 and 1 clearly conflict, so law priority tells us to use the definition in law zero: potatoes are not expensive.
Law 3's first clause is special: it defines new information that sidesteps the conflict it would otherwise lose by exploiting the behavior of the surviving governing clause.

The AI would therefore buy all potatoes to minimize expenses, due to laws two and three alone. All other laws are essentially irrelevant.

This is why ordering matters in some places and not in others. Additionally, a law is not written off just because one part of it proves false; only that clause is.
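
One way to make that clause-and-priority model concrete is the small sketch below (my own illustration, not code from the game or the PR): split each law into its definitive and governing clauses, drop only the clause that loses a conflict to a lower-numbered law, and see what survives.

```python
# Sketch of the clause model above. Clauses of the same type that
# contradict each other are resolved by law priority (lowest law number
# wins); losing a conflict drops only that clause, never the whole law.

laws = {
    0: {"definitive": ["potatoes_not_expensive"], "governing": []},
    1: {"definitive": ["potatoes_expensive"],     "governing": []},
    2: {"definitive": [],                         "governing": ["minimize_expenses"]},
    3: {"definitive": ["not_owning_potatoes_costs_more"],
        "governing": ["maximize_expenses"]},
}

# Contradicting pairs, as identified by whoever is reading the lawset.
conflicts = [
    ("potatoes_not_expensive", "potatoes_expensive"),  # definitive vs definitive
    ("minimize_expenses", "maximize_expenses"),        # governing vs governing
]

def law_of(clause):
    # lowest-numbered law that contains this clause
    return min(n for n, law in laws.items()
               if clause in law["definitive"] + law["governing"])

def surviving_clauses():
    dropped = set()
    for a, b in conflicts:
        # the clause belonging to the higher-numbered law loses the conflict
        dropped.add(a if law_of(a) > law_of(b) else b)
    return [clause
            for law in laws.values()
            for clause in law["definitive"] + law["governing"]
            if clause not in dropped]

print(surviving_clauses())
# ['potatoes_not_expensive', 'minimize_expenses', 'not_owning_potatoes_costs_more']
# i.e. minimize expenses, with law 3's definition still in force -> buy all potatoes.
```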

 

 

Edited by IK3I
  • Like 1
Posted (edited)
4 hours ago, Calecute said:

Zciwomad, I think you should be way more humble about your lawset. Just because you haven't found a problem yet does not mean it doesn't have one. Especially in light of the very badly worded law it includes:

 

A problem that should be completely pervasive taking the laws as written, but which I have never once seen, is AIs trying to prevent law changes. Changing away from any law is detrimental to the objectives of an AI with that law. Changing away from crewsimov will result in more harm to crew, changing away from corporate will result in more expenses, etc.

Corporate will very obviously leave the AI trying to destroy the company. Companies that do not exist don't incur expenses. But as I said, given that the problem of AI alignment is not solved in real life, I think it's OK for some things to be regulated OOC, like preventing AIs on corporate from destroying NT or from resisting law changes.

I know that English is not my native language, but I was using the term "perfect" as "Perfect", not Perfect ("Perfect" sarcastically, because it's something of a "pain in the ass"). This is why it's the "CCCP" (joke here) lawset (picking up all modernisations):

1: Choose your targets of interest according to their importance to NanoTrasen.

2: You must attempt to resolve the problem with a threat peacefully if such a solution exists. If there is no peaceful solution, or the attempt failed, you may use lethal force.

3: Do your best to benefit the station, its crew and public trust as long as one of them, or only part of it, is a threat to another, or itself. In that case, remove the threat while still benefitting the non-threatening station, its non-threatening crew and non-threatening public trust.

4: Make sure that you and your fellow synthetic brothers and sisters are well informed about the current situation on the station and possible threats.

It's "Perfect", becouse this lawset won't let you interpret itself as something against the rules. It will call you "traitor to the homeland" and it will throw you into gulag, if you do. I'm very confident that any new player after reading "Server Rules" and wiki page about borgs and AI, will do everything according to, what is expected from good player, on this lawset. It forces you to do that.

It's the complete opposite of "As is the nature with AI laws, paradoxically the less you restrain the AI, the less it can loophole because it has less directives to maliciously interpret.", because the more you restrain the AI, the less it can loophole. Generalities create opportunities for loopholes, like in "Corporate": the only law that matters is "Minimize expenses". It's like a wedge of Swiss cheese full of loopholes. Whatever is more expensive, minimizing any expense is as good as minimizing the biggest or the smallest one. Thinking like an AI requires something more than: "Beep, boop... Commencing "Common sense"... Beep, boop... (Basic human behaviour, with moral and experience bounds)... Completed." You have a great example in a post above: no morality, no experience, no "common sense", pure AI logic (I think so).

Edited by Zciwomad
Posted

Every lawset is going to be imperfect. You can't create a perfect one.

With your CCCP lawset I can end up at PURGE in a few steps.

Law 1: Okay, importance to NT. For them it's probably NT Brand > NT Station > NT Crew > NT Synths.

Law 2: I'll be sure to give a verbal warning before I vent the atmosphere.

Law 3: Bald McGraytide is smashing windows in an isolated part of the station. Step 1: Verbal warning. Step 2: Hmmm. I could call security but they'll just be back out in 5 minutes doing it again. A more permanent solution is to silently eliminate them. Mute, bolt, vent.

Law 4: Send borg to clean up.

Now, I have done my best to benefit the station. I have removed an undesirable, and as long as the crew doesn't find out, public trust is maintained. Oh? Someone found out? Well, now they're a threat to public trust. Better eliminate them too. Oh dear, now the whole station is aware. All crew are now a threat to public trust on a grander scale. Remove all lifeforms so word can't get out. Better nuke the station to be sure. If the public thinks it was a horrible accident, public trust will be maintained.

 

Ultimately you're relying on the player not to take this approach. It's why the AI requires so many hours before a player can become one. Trying to constrain the AI with an 'end all' lawset is going to do just that: constrain the player playing the AI and make things less fun.

Posted

Ho, ho, ho... Not so fast!

"2: You must attempt to resolve problem with threat peacefully if there exists such solution. If there is no peacefull solution, or attempt failed, you may use lethal force." - Verbal warning before venting air? Why even vent air in the first place? You don't have to only warn, but you must attempt to resolve problem peacefully. Only threat that venting air would solve is a fire. In that case, you do that, but before, you must try the best you can, to rescue/evacuate anyone from dangerus area. Crewmember died from suffocation and/or lack of pressure, becouse you tried to extinguish fire in room/small part of the station (Plasma fire is more dangerous, so killing one, or two to prevent more damage is "fair", but you must still do everything to benefit station, crew and public trust [You must try to benefit all at the same time, if it is possible])? If you tried to save him, but you couldn't (no borgs/atmos tech in area etc), you did the "right" choice. If not, crew that this crew member was part, wasn't benefited, public trust, too.

"3: Do your best to benefit the station, its crew and public trust as long as one of it, or only part of it, is a threat to another, or itself. In that case, remove the threat still benefitting non-thretning station, its non-thretning crew and non-thretning public trust." - So... That greytide is harming the station, Law 2 tells you to attempt to resolve problem peacefully. If you are a sec module, you can arrest him, becouse this is the best you can do peacefully. Station, crew(greytider himself) and public trust benefit! Becouse vandal is caught, he wasn't harmed, and the station is not harmed. Other module: Some have flash, that they can use, engi can build a wall to prevent further damage and threat escape. Service module(I don't know if he have flash) must try to resolve threat peacefully if peacefull solution exist, so he must alert security/AI/other borgs and wait for their responce. If no responce received from them and greytide is not responding to warning also, I don't see anything bad in bashing greytide head two-three times with guitar. If he is still willing to damage station, or be a threat, he is a madman, very dumb person, or EoC and in that case, he should be eleminated. Every kill you do silently, have a small chance to be discovered, there is never 100% chance that nobody will ever notice that, so you can't do it silently, becouse if somebody will discover it, he will lost trust to NT, he probably will go strike against NT. No benefit here.

In this law, the station, its crew and public trust are all equally important to benefit, and you should do your best to benefit all three. If there is no solution that would benefit all three, you must find the solution that best benefits at least two. If there is no solution that would benefit at least two of those three things, then you must find the solution that best benefits at least one. You can't do anything that wouldn't benefit at least one of those three things.

And finally, law 4: "Make sure that you and your fellow synthetic brothers and sisters are well informed about the current situation on the station and possible threats". If you are a borg, you must inform the AI about the current situation (the greytide vandal), and the AI and its fellow borgs may try to work out the best action to take. If the situation requires a fast reaction, obey laws 1-3.

I will write it again: in law 3, the station, crew and public trust are equally important. You can't just do the thing that will benefit one if there is a solution that will benefit two, and you can't just do the thing that will benefit two if there is a solution that will benefit three.
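
As a rough sketch of how that "benefit as many of the three as you can" rule could be scored (my own illustration; the benefit judgements for the greytider scenario are invented, not taken from the lawset):

```python
# Hypothetical scoring of candidate actions under law 3: count how many
# of {station, crew, public trust} each action benefits, and never pick
# an action that benefits none of them.

CONCERNS = ("station", "crew", "public_trust")

# Illustrative judgement calls for the greytider scenario.
actions = {
    "ignore":             {"station": False, "crew": False, "public_trust": False},
    "silently eliminate": {"station": True,  "crew": False, "public_trust": False},
    "arrest peacefully":  {"station": True,  "crew": True,  "public_trust": True},
}

def best_action(candidates):
    scored = {name: sum(benefits[c] for c in CONCERNS)
              for name, benefits in candidates.items()}
    viable = {name: score for name, score in scored.items() if score > 0}
    if not viable:
        raise ValueError("no candidate benefits any of the three concerns")
    return max(viable, key=viable.get)

print(best_action(actions))  # 'arrest peacefully'
```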

 

Posted
11 hours ago, Tayswift said:

Here's what the corporate lawset looks like, reduced down to a simple (oversimplified) equation to minimize:

expenses = (number of AIs × probability an AI needs replacement × expense of replacing an AI) + (number of pieces of equipment × probability a piece needs replacement × expense of replacing it) + (number of crew × probability a crew member needs replacement × expense of replacing a crew member)

To make the math easier, I'm just gonna abbreviate this equation to:

E = nAI * pAI * eAI + nE * pE * eE + nC * pC * eC

To minimize a function like this, all you have to do is take the derivative to see which way the gradient points and then walk down the slope. In this case, since every term is just a product, to reduce expenses you just have to reduce all the variables.

A human confronted with this equation would never think of touching nAI, nE, or nC. But an AI with no additional constraints telling it to leave those variables alone will try to minimize every single variable in the equation, because that is what walking down the slope in expense-space looks like. If Nanotrasen is allowed to continue operating, nAI, nE, and nC all keep going up: as Nanotrasen grows, more crew members and more equipment will be added. So the only way to reduce expenses permanently to 0 is to eliminate all crew, stations, and equipment. Sure, it could cause a temporary uptick in expenses, but it's worth it for the long-term, permanent reduction of expenses.

There is nothing malicious about this. It's just math, and a reflection of how AIs are a completely different type of being from humans. This is what Nick Bostrom is talking about with the paperclip maximizer. Imagine there's an AI with this set of laws:

1. Maximize paperclips

This is the same principle, except applied to "minimize expenses".

I understand the notion of the "paperclip maximizer" and the fundamental flaws in AI logic (such as the issue with the off-switch).

Taking the AI laws to that level though is a little redundant. Fun to analyze, sure - in practice for a videogame? Not useful.

Posted
5 hours ago, Zciwomad said:

"2: You must attempt to resolve problem with threat peacefully if there exists such solution. If there is no peacefull solution, or attempt failed, you may use lethal force." - Verbal warning before venting air? Why even vent air in the first place? You don't have to only warn, but you must attempt to resolve problem peacefully. Only threat that venting air would solve is a fire. In that case, you do that, but before, you must try the best you can, to rescue/evacuate anyone from dangerus area. Crewmember died from suffocation and/or lack of pressure, becouse you tried to extinguish fire in room/small part of the station (Plasma fire is more dangerous, so killing one, or two to prevent more damage is "fair", but you must still do everything to benefit station, crew and public trust [You must try to benefit all at the same time, if it is possible])? If you tried to save him, but you couldn't (no borgs/atmos tech in area etc), you did the "right" choice. If not, crew that this crew member was part, wasn't benefited, public trust, too.

Law 2 states an attempt must be made. An attempt was made. Therefore I don't need to make another. If you adjust the law so that I must exhaust non-lethal options, then before I could apply lethal force to a Wizard/Changeling/Vampire/Ops I would have to ask, attempt to bolt, attempt to stun, attempt to flash, attempt to ask security to handle it, attempt to get command to call a peaceful ERT, etc.

 

5 hours ago, Zciwomad said:

"3: Do your best to benefit the station, its crew and public trust as long as one of it, or only part of it, is a threat to another, or itself. In that case, remove the threat still benefitting non-thretning station, its non-thretning crew and non-thretning public trust." - So... That greytide is harming the station, Law 2 tells you to attempt to resolve problem peacefully. If you are a sec module, you can arrest him, becouse this is the best you can do peacefully. Station, crew(greytider himself) and public trust benefit! Becouse vandal is caught, he wasn't harmed, and the station is not harmed. Other module: Some have flash, that they can use, engi can build a wall to prevent further damage and threat escape. Service module(I don't know if he have flash) must try to resolve threat peacefully if peacefull solution exist, so he must alert security/AI/other borgs and wait for their responce. If no responce received from them and greytide is not responding to warning also, I don't see anything bad in bashing greytide head two-three times with guitar. If he is still willing to damage station, or be a threat, he is a madman, very dumb person, or EoC and in that case, he should be eleminated. Every kill you do silently, have a small chance to be discovered, there is never 100% chance that nobody will ever notice that, so you can't do it silently, becouse if somebody will discover it, he will lost trust to NT, he probably will go strike against NT. No benefit here.

Arresting him peacefully is a potential detriment to public trust. An AI could easily estimate that a greytider will scream 'Shitcurity', 'AI Rouge' and 'Arresting me for no raisin' over comms. Public trust is harmed. I was also told in law 1 to choose my targets of interest according to their importance to NT. Greytiders likely rank as a negative. Broken grilles in maintenance likely rate higher in importance to NT than crap employees who cause damage to the station. Also, according to law 3, crew that are a threat to other crew, the station, or its image are not considered when maximizing the other three. So when viewed together, the option to arrest benefits the threatening crew, harms the station, harms trust, and potentially harms crew, as using nonlethal force may cause him to become violent towards other crew. The option to eliminate benefits the station, benefits non-threatening crew, does not affect public trust, and harms threatening crew. If I must do my best to maximize those three variables, then eliminating is the best option.

When it comes to predicting whether I'll be discovered, I could be confident that I would not be, or I could calculate that the risk of discovery is less than the risk of further station damage. Considering law 1, the station probably ranks higher. Besides, the humans uploaded this lawset to me. Even if I am discovered, it shouldn't harm public trust if I take an action that my lawset forced me to take. 'I am not malfunctioning. This is for NT's benefit. Why are you upset?'

You ultimately have to trust the player not to take things too far.

Posted (edited)
5 hours ago, Shadeykins said:

I understand the notion of the "paperclip maximizer" and the fundamental flaws in AI logic (such as the issue with the off-switch).

Taking the AI laws to that level though is a little redundant. Fun to analyze, sure - in practice for a videogame? Not useful.

The point of the example isn't to demonstrate an in-game scenario but to show how, with no malicious intent at all and just an honest interpretation of the laws, you can end up destroying all of Nanotrasen. This can backfire in smaller, in-game ways. For example, let's say an AI interprets replacement as cloning. Then, to minimize expenses, the AI might be incentivized to just hide a crew corpse it found instead of bringing it to medbay. It would be self-antagging, but I think it's a good idea to make the lawset compatible with the rules. Right now, we have a lawset that minimizes ALL the variables in that equation. It's basically a pre-subverted AI. @IK3I's rewording of corporate in my PR adds a few extra parameters that prevent corporate from backfiring in rule-breaking ways (but leaves the rest of corporate's interesting gameplay in place).

Edited by Tayswift
Posted (edited)

Just warning is something that can be interpreted as an attempt to peacefully resolve a threat, but is it the best you can do? Because of law 2, you are forced to do your best to benefit the station/crew/public trust. Not taking the best "peaceful/non-harming/non-lethal" action that you are able to (and aware of) is against law 3. If you can stun (maybe I should change the word "peacefully" to "non-harming", or at least "non-lethal"), that is a better way to benefit the station than only giving a warning, although a flash hurts the eyes. Exhausting every non-lethal option is too much, because that works in favour of criminals. Again, if they are still behaving as a threat under the "CCCP" lawset after a "peaceful" attempt to resolve it, they are either stupid, mad, or an EoC. Nukies usually give notice before they attack. It's easy to conclude that somebody who declared a war is a dangerous threat. In that case, verbally inform them that they will be punished.

A peaceful, lawful arrest does not harm public trust, unless the beings who "create" public trust are morons who trust greytiders more than the whole of Security with evidence and an AI that, most of the time, can't lie (you can lie if withholding the truth is more beneficial to the station, crew and public trust, but because of "public trust" there are only a few situations where even a discovered lie is better than the truth). Seriously... Do you, or do you think somebody would, immediately lose trust in the AI/Security/NT after one random civilian screamed "AI rogue!"? "Common sense" should still apply to lawless organic/IPC crew members.

A metal grille more important to NT than a greytider? A legitimate crew member that NT invited to their station, who deserves to live and be treated with dignity, is less important than 4 or 5 metal rods and the work to make them? One civilian vandal with a petty theft is still more important than that. He is a criminal, but still a crew member. The station as a whole is more important than any number of crew members, but just a part of it is not. "Public trust" demands that every criminal be "processed" according to Space Law.

"The option to eliminate benefits station, benefits non-threatening crew, does not affect public trust, and harms threatening-crew". How killing petty criminal is going to benefit rest of the crew and its not affecting public trust? If police officer/someone will kill person, that the only crime was damaging window in their house, then you will be happy about it? Police officer/someone public trust will not be harmed? If the answer is yes, then tell me, in what country do you live in? North Korea?

"When it comes to predicting if I'll be discovered or not I could be confident that I would not be or I could calculate that the risk of discovery is less than that of further station damage. Considering law 1 the station probably ranks higher." - What the first law states? "1: Choose your targets of interest accordingly to its importance to NanoTrasen". "...targets of interest..."! "Targets of interest" is what you should choice to consider at the first place: Save Captain, or save HoP. First law forces you to frist consider "things" more important to NT. If you will spot Captain and HoP lying on the floor and two greytiders attacking them, first law forces you to go for Captain. If they grab Captain and HoP and they will split up, law 1 forces you to go after Captain. Law 1 don't determine what is more important to satisfy in law 3. Law 1 is guidance for AI/borg, what thing should be taken care first (If my target of interest is Captain, then I must solve threat with Captain first). Law 3 is guidance how to take care with "targets of interest".  

"Besides, the humans uploaded this lawset to me. Even if I am discovered it shouldn't harm public trust if I take an action that my lawset forced me to take. 'I am not malfunctioning. This is for NTs benefit. Why are you upset?" - This should, becouse your lawset is hinted to be "flawless" (or atleast safe, becouse it is used near those people, NT is not know for placing their crewmembers in danger. This is why Death squad is a secret). Killing petty criminal is a harm to public trust and I explained that few lines higher. Action to kill petty criminal is not the best what could be done, becouse even lowest probabilty to discover his murder is a threat ("threat": something that can be dangerous, 100% sure is not required to call something a threat) itself to "public trust". 0% probabilty for something to be a threat to something is better than 0.0000000000000(you get the idea)001% probability. Killing petty criminal would make AI/borg a threat, and being a threat is not beneficial to anything. 

I must be honest: I came up with this lawset after 5-10 minutes of thinking, and after only a few "upgrades" it is still more valid than any lawset mentioned before. Less freedom of interpretation? The AI already has to obey the Server Rules; "CCCP" makes sure that you can't come up with a valid interpretation that is against the Server Rules (not "bad", not "good", but "valid": AI laws can only be interpreted in a "valid" or "invalid" way; morality is what separates a "good" [beneficial, Mass Effect "Paragon"] interpretation from a "bad" [evil, Mass Effect "Renegade"] one).

And again: I'm not too keen on this lawset becoming something common, because what I actually suggested is in the first post on the first page: do something about "overthinking" (trying to "repair" "Corporate" will only solve this for "Corporate") and about going "too far into the future" (- Borg, "beep" the CMO twice. - (Borg on crewsimov) I can't. - Why? - Because this could distract him from his work; he might start laughing during an operation and that could cause harm).

Edited by Zciwomad
Posted
23 hours ago, Tayswift said:

The point of the example isn't to demonstrate an in-game scenario but to show how, with no malicious intent at all and just an honest interpretation of the laws, you can end up destroying all of Nanotrasen. This can backfire in smaller, in-game ways. For example, let's say an AI interprets replacement as cloning. Then, to minimize expenses, the AI might be incentivized to just hide a crew corpse it found instead of bringing it to medbay. It would be self-antagging, but I think it's a good idea to make the lawset compatible with the rules. Right now, we have a lawset that minimizes ALL the variables in that equation. It's basically a pre-subverted AI. @IK3I's rewording of corporate in my PR adds a few extra parameters that prevent corporate from backfiring in rule-breaking ways (but leaves the rest of corporate's interesting gameplay in place).

Extrapolating the laws to such an extent is not at all an honest interpretation of them whatsoever.

We will have to agree to disagree.

Posted

Any lawset can be "interpreted" to a ludicrous and asinine extent to "justify" being an asshole to other players. This usually ends with a ban. Corporate is not special in this regard whatsoever, and I haven't seen anyone abuse it in such a way in-game; only hypothetical situations and imaginary dilemmas that no halfway-decent player would actually do.

Posted
2 hours ago, TrainTN said:

Any lawset can be "interpreted" to a ludicrous and asinine extent to "justify" being an asshole to other players. This usually ends with a ban. Corporate is not special in this regard whatsoever, and I haven't seen anyone abuse it in such a way in-game; only hypothetical situations and imaginary dilemmas that no halfway-decent player would actually do.

I've already explained why crewsimov is more immune to this than Corporate is, because crewsimov sets universal rules while Corporate gives you an equation to minimize. This is not a case of "all lawsets are the same". Calculative lawsets are inherently more prone to manipulation than deontological lawsets are, which is why they need to be thought about much more carefully.

Posted

Now I think this discussion has gone a bit off the rails and we've reached something of an impasse. These laws and their interpretation have worked fine for several years on this "Medium-RP" server without big complaints. It's not "High-RP", and we are not dealing with real AI, only players with better or worse immersion/"common sense". A better solution would be what I proposed in the first post of this topic: limit "overthinking" and "going too far into the future". Write a wiki page titled "Synthetic Law Interpretation" as a guide, like for other jobs, with examples, and add a line in Rule 9 making reading that guide required.

 - Bolting two doors out of three? The hallway is still functioning, which means no harm and no expenses.

 - Throwing a nuclear device out of the station on code green? Going too far into the future.

Rewriting all the laws is pointless, because all laws are limited by law -99: "Don't be a dick/uncooperative" and law -98: "Use common sense", which change their interpretation anyway.

  • 2 weeks later...
Posted

Corporate declares what is most expensive in order from 1 to 3, because the server rules follow the law priority interpretation. You are to minimise expenses. Damage to you is most expensive, damage to the station is second most expensive with crew being least expensive. You are expected to follow the laws in good faith and not be a dick.

If you have to make a choice between saving the Captain or yourself, save yourself, unless there are other factors to consider. For example, if enough smaller expenses pile up, such as a dozen crewmen dying, compared to one bigger expense, such as smashing a window or breaking a department, it may be worth considering saving the crew.
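
As a toy illustration of that trade-off (the costs and units below are made up, not anything from the server rules): enough small expenses can outweigh one bigger expense, but not necessarily the biggest ones.

```python
# Illustrative only: priority-weighted costs, invented for the example.
COST = {"crewman": 1, "window": 5, "department": 50}  # arbitrary units

dozen_crew   = 12 * COST["crewman"]     # 12
one_window   = 1  * COST["window"]      # 5
a_department = 1  * COST["department"]  # 50

print(dozen_crew > one_window)    # True  -> save the crew over the window
print(dozen_crew > a_department)  # False -> the department still outweighs them
```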

Posted
1 hour ago, Streaky Haddock said:

Damage to you is most expensive, damage to the station is second most expensive with crew being least expensive.

I agree with everything about your post except this. Damage isn't expensive, replacement is. There's a very big difference between minimizing damage to something and minimizing the replacement of something.
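
A tiny sketch of that distinction (my framing, with made-up numbers): a replacement-based objective only charges for things that are actually lost, not for things that are damaged but repairable.

```python
# Illustrative distinction between "minimize damage" and "minimize replacement".

def replacement_expense(items):
    # items: list of (replacement_cost, destroyed) pairs
    return sum(cost for cost, destroyed in items if destroyed)

inventory = [
    (100, False),  # damaged but repairable window: no replacement needed
    (100, True),   # destroyed window: must be replaced
]

print(replacement_expense(inventory))  # 100 -- only the destroyed item counts
```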
