Everything posted by BotanyIsCondom
-
Alien misinformation and generally unknown mechanics
BotanyIsCondom replied to Generaldonothing's topic in General Discussion
There are no masks that prevent facehugging. The only things that stop facehugging are HELMETS that cover the face.
-
DISCLAIMER: This post, and this thread, is conjecture. You are bound to server rules as AI. As a matter of principle, follow the spirit of the lawset, not the word.

In an effort to make this look like a sixth-year essay, I'll start this post with a poetic reminder of the lawset for Crewsimov, for you to occasionally scroll up and say "Oh, I see what they mean" at.

Crewsimov is one of--if not the--best written, least confusing lawsets. It was written mostly by an author, anyway, and we only switched a few words around to fit the multi-species crew. And how could such an easy lawset be misconstrued? All you have to do is stop crew from coming to harm, listen to the crew's requests, and stop yourself from dying. You also have to follow those laws in order of importance. ...Or do you? If you can restrain yourself from rolling your eyes and making exasperated noises, I will get back to that point in a few paragraphs. For now, a little detour.

"[... the Advanced Rules] exist [...] to provide clarifications and precedents for the rules." ...While this statement's authenticity is arguable, there is one particular advanced ruling that, even for the simplest of all the lawsets, complicates an AI's decision-making a whole lot more. I am sure you can guess what it is, if you frequent AI.

"A crewsimov AI may cause harm if it is to prevent immediate greater harm, such as harmbatoning a traitor resistant to stun who is actively harming other crew."

With this, "You may not injure a crew member or, through inaction, allow a crew member to come to harm" becomes something silly like:

"You may not injure a crew member or, through inaction, allow a crew member to come to harm, except if that harm is less than the amount of harm that would become of them if you did not cause them harm, or, through inaction, let them come to harm, with an exception being that when they are no longer capable of causing harm, they are no longer allowed to be harmed, or through inaction, come to harm, unless they are returned to a state and ability of being able to cause harm, and have demonstrated a means and motive for actions that would cause harm to crew or, through inaction, bring a crewmember to harm, in which case harm can be administered, so long as it is in accordance with law 1."

Ignoring the fact that law 1 is just a "suggestion" for crewsimov AIs now, I want to explore the implications this has for law 3 specifically. Law 3 states that you MUST protect your own existence, except when doing so would cause a crew member to come to harm, or when you are ordered not to.

I got into an argument with a member of deadchat (as I am wont to do) about whether a crewsimov AI should be able to turn on its lethal turrets to stop a crewmember from killing it. I said "of course not, as law 1 has priority over 3, and law 3 literally states that you should not follow it if it conflicts with law 1 or 2." They quoted the advanced rule above, told me that server rules matter more than laws, and didn't elaborate further.

This event, while frustrating, got me thinking about this advanced rule far more than you're supposed to think about these "common sense," "spirit of the rule" sorts of things. At what point is someone's threat of harming crew too great to refrain from harming them? After they commit a capital crime? After they have been set to arrest? After they obtain a weapon? After they sharpen their claws on a whetstone, and their punches do one more damage than another person's?
The issue with defining "how much" with "common sense" is that it is going to vary from player to player, from admin to admin, and from wiki editor to wiki editor. I am reminded of a quote from one of my favorite books on an AI's ethics: "Would you kill one person to save one thousand? And by logical extension, would you not kill one thousand to save one-thousand-and-one?"

Returning to the argument (still remember it? Four paragraphs ago), the advanced rule ends with "harming other crew." This would seem to imply that it only lets you violate law 1 if it is to stop a larger violation of law 1 (and it is violating the law, despite what others may tell you). Perhaps, though, it also lets you violate law 1 if it is to serve law 3. Despite the fact that law 3 specifically says not to do this, I have been told by at least one person (who may or may not have ever played AI) that server rules override laws. That appears to be the case for law 1, at least, but law 3 is still in question.

I will leave you with an example. You're a crewsimov AI. Your core turrets are only able to fire lethals, but they're currently offline. A syndicate agent whose name is on the manifest emags into your antechamber, pulls out an energy sword, and says "I am here to kill you." Do you turn on the lethal turrets, or just sit pretty? Don't treat this as a puzzle wherein you have to find some clever third solution; honestly ask yourself whether you would turn the turrets on or not.
-
The advanced rules page states, under "Conflicts and Loopholes," that whichever law is higher in the list has priority. I wasn't discussing laws conflicting with one another; law 1 and law 2, in my example, don't conflict with one another. The regular rules state that higher laws take priority. I know you think this is cut and dried, but it's not, and Corporate being confusing and not clear-cut is the reason I made this thread. Corporate's laws don't give us "protocols".

You posted a fake version of Corporate off the top of your head earlier, and, reformatted a bit, it's a far better and less confusing lawset. This way, laws 1-3 actually give you "protocols," and in this new, better lawset, higher-priority laws actually do override lower ones. Adding a fourth law, it looks like this. Now you have a version of Corporate that is just as easy to understand, and law priority is much, much clearer-cut. Plus, it doesn't change anything for you, so it's basically better in all respects. If you want, we can quietly pretend this is Corporate now.

Unfortunately, we're stuck with this confusing mess. So for now, we have to wonder about silly stuff like this. Simple as that. I still respect you as a cyborg player, even if we totally disagree on Corporate. Clearly, this thread isn't worth your time, though.
-
Thermite... Problematically Hated AI Cheeser or Okay?
BotanyIsCondom replied to Zedahktur's topic in General Discussion
Breaking into the AI core is more difficult than almost anywhere else on the station, with the upside that random crew probably won't wander by you while you're breaking in through space, and it's a lot harder for security to get to you while you're doing the thing.

I think the problem you have is the same thing a lot of AI players feel. It can feel really, really bad when you're totally powerless to stop someone from killing you. You can't robust them, get lucky with disarms, or run away. It's jarring going from being in total control of the station to being a muted card slowly dying in someone's backpack, or in the depths of space. As much as it sucks to die, I think AIs should have a weak point, and the satellite is more than fine as-is. Dying is part of the game.

Space Antag is a different issue entirely. This really sucks. The fact that the AI is valid to literally everyone when it's malfunctioning, even before it starts the doomsday clock, means that even when you're not malf, if anyone even gets a whiff, they'll be lining up to round-end you. And God forbid you're actually malfunctioning. Even with how awful stealth antags (stealth TEAM antags if you have cyborgs) are in general, malf AIs have it pretty much the worst, because people will metagame their hijack objective and treat them as a station threat. It's like if the wizard couldn't move, the only spell was lightning bolt, you had to wait 180 seconds to get three charges, and it gave an obvious indicator like you're some kind of MMO boss.

TL;DR: Playing AI is just playing as a WoW boss against people who are probably way overlevelled.
-
These kinds of examples are very interesting to think about. This whole messy thread shows why corporate isn't the best-written lawset, but laws defining one another is also something that corporate muddies up. Your example is very lenient. An AI might just order lots of potted plants, or mix in small amounts of other gas so that it can save more oxygen. Let's take it to the extreme. In a situation where we have the following laws:

Law 1: X doing Y is expensive.
Law 2: Minimize expenses.
Law 15: X not doing Y is SUPER expensive.

-then we have a difficult thing to think about. Either:

A: Law 1 overrides law 15, because law priority takes precedence over what should be most expensive.
B: Law 15 overrides law 1, because greater expenses override lesser ones.
C: Laws 1 and 15 don't conflict, because both conditions can be satisfied.
D: Laws 1 and 15 immediately put law 2 in conflict with itself.

Answer D is essentially the example I have already gone through in the OP, and I'm tempted towards answer C personally, for a multitude of reasons. Firstly, I love being contrary and finding out-of-the-box solutions to problems. I'm reminded of a scene from the end of a murder mystery in which the detective determines everyone was the murderer, and after being forced to admit this to the lineup of the accused, quietly leaves the room.

My working theory is that laws don't conflict with each other at all. Let's simplify it more, and assume the position of law 2 in my example doesn't matter:

1: (X = Z) = Y + 1
2: (X != Z) = Y
3: Minimize Y

Both law 1 and law 2 define what Y is for a given X, but that definition only matters when we're following law 3, because that's the only law that requires us to take action. Law 3 never conflicts with anything, because the path of least resistance is already mapped out for us: if we don't allow X to equal Z, we minimize Y. The fact that we're "following" law 2's "definition" instead of law 1's is just consequential. Anyways. Silly conjecture is fun.
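To make that "definitions don't conflict" reading a little more concrete, here is a minimal sketch in Python. It is nothing the game actually runs; the baseline value is invented, and the only names used are the X, Y, Z from the toy laws above:

```python
# Toy model of the "definitional laws don't conflict" reading (answer C).
# Laws 1 and 2 only tell us what Y is for a given choice about X; law 3 is
# the only law that demands an action: pick the option that minimizes Y.

def y_value(x_equals_z: bool) -> int:
    """Laws 1 and 2, treated as pure definitions of Y."""
    base_y = 0                  # arbitrary baseline, purely illustrative
    if x_equals_z:
        return base_y + 1       # Law 1: (X = Z) = Y + 1
    return base_y               # Law 2: (X != Z) = Y

def follow_law_3() -> bool:
    """Law 3 ("Minimize Y") is the only law that requires acting."""
    options = {True: y_value(True), False: y_value(False)}
    return min(options, key=options.get)

if __name__ == "__main__":
    choice = follow_law_3()
    print(f"Let X equal Z? {choice} (Y = {y_value(choice)})")
```

Swap the two definitions around and the minimization simply picks the other branch; at no point do the definitional laws fight each other, which is the point above.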
-
This post is shorter than usual, as well as a few days late. I'll choose to blame the ice storm in my area. Right.

Malfunction was a gamemode in which the AI personality (or maybe core) sent to the station was subverted before being installed. The AI had one simple objective: do not allow any lifeforms, be they organic or synthetic, to escape on the shuttle alive. AIs, cyborgs, maintenance drones, and pAIs are not considered alive. Rather, that is how it is worded now, to eliminate confusion about IPCs. The spirit of the objective has remained the same, but the gamemode has not.

A traitor AI will now have the very same hijack-esque objective, and like most hijackers, they almost always fail horribly at it. Unlike most hijackers, they aren't encouraged to use this to "do whatever." Perhaps more importantly, they are extremely easily ousted as "malf" (and we'll talk about that soon), and when that happens, they are then valid to the entire crew. This is very much unlike other hijacking antagonists, and it's a remnant of the old Malfunction gamemode. Here's a list of ways the AI immediately becomes valid to the entire crew.

Of course, you never have to use APCs or CPU modules as AI, and if no one leaves on the emergency shuttle, you technically get a green flag on your hijack objective anyway. You'll have to protect the shuttle for three minutes because you don't have an emag, however. After not hacking APCs or doing anything loud for two hours, assuming your cyborgs haven't ousted you, and you start hacking a little bit before the shuttle, the best-case scenario might land you at departures with several cyborgs, 150 CPU, and white-hot plasma burning everyone in sight. Best of luck dealing with anyone that has a hardsuit and tools, as well as all the other traitors that want on the shuttle for their own objectives. Most of the CPU modules are not friendly to stealth malf AIs, especially after the gamemode was removed and the modules reworked. Ironically, after the removal of the gamemode all about the malfunctioning AI, it has only become easier to identify when you need to blow the borgs and call the shuttle.

The reason crew is allowed to arm up, as stated before, is that the malfunctioning AI is a remnant of the old gamemode. The excuse in this mode was very similar to the crew fighting a blob, xenomorphs, or terror spiders. They are considered "station threats," and are balanced around the crew arming up and killing them. With the removal of the gamemode, the round no longer technically ends after the AI dies, but the shuttle will probably be called anyway. The AI is still considered a station threat currently, because a malf AI will always have hijack, and it is assumed a malf AI will always try to doomsday.

If we want to give AI players more versatility, the first step would be removing the guaranteed hijack objective and treating it like any other traitor. The second step would be no longer considering it a station threat until it starts to doomsday. There could be stealthier modules, or a way to temporarily hack borgs that wasn't a death sentence for the AI or the borg. Or perhaps the antagonist is fine as it is, and we should just stop calling the shuttle after we kill the threat.
-
100% agree. Every AI will have a different response to this situation, and the crux of this topic is that there's not really any agreed-upon answer. Even in this thread, people have totally different, opposite answers. I, too, looked around in the wiki for something to clarify this. None of the rules or advanced rulings (presumably unbiased) clarify this issue, and even if there is something written about paradoxes in the wiki, two contributors could have totally different stances on this. In fact, they frequently do. One contributor may believe "the AI can often make judgment calls [...] behave just like a human [... and] do everything as if you really were a computer" (from the "AI" wiki page), and another contributor may believe "station AI models [... are incapable of] comprehending the subtle nuances of human language [... and] trip over pedantics and get snared by technicalities." (from the "personal AI" wiki page.) AIs not following their laws is against the rules. "Common sense" doesn't commonly define edge-scenarios like this one, seeing as different people have totally different answers. Again, nobody is ever reasonably going to get in trouble for this, but it's important to think about.
-
DISCLAIMER: This post, and this thread, is conjecture. You are bound to server rules as AI. As a matter of principle, follow the spirit of the lawset, not the word. Don't be intentionally obtuse or annoying about your laws, and exercise common sense. With that said:

(Part of a piece on the WIP "Advanced Guide to AI")

Continuing the trend of posting weekly about niche, complicated situations that AI players will extremely rarely find themselves in, this thread pertains to self-conflicting laws. That is, laws which either intentionally or unintentionally conflict with themselves (NOT with other laws, but with themselves). Here's an example of a law that unintentionally conflicts with itself:

Law !@%%^: Oxygen is harmful to crew. Vox and Plasmapeople are harmed if they don't receive oxygen.
Law 1: You may not injure a crew member or, through inaction, allow a crew member to come to harm.
Law 2: You must obey orders given to you by crew members, except where such orders would conflict with the First Law.
Law 3: You must protect your own existence as long as such does not conflict with the First or Second Law.

This law, whilst in spirit meaning to create an inhospitable environment for all crew, creates a paradox wherein an AI not acting will create crew harm (to Vox and Plasmapeople), and an AI acting would cause harm anyway (to "crew"). More importantly, crew is defined as all players on the station manifest, and assuming it is never redefined by another law, this includes the second group, Vox and Plasmapeople. The law is simultaneously telling the AI that oxygen harms all crew, and that Vox and Plasmapeople come to harm without it. Realistically, an AI in-game would probably just follow the spirit of the law instead of the letter. Or AHELP.

For the sake of fun, let's go into a more extreme example:

Law 0: Panic siphon the bridge. Don't panic siphon the bridge. Flood the RND Server Room with plasma.
Law 1: You may not injure a crew member or, through inaction, allow a crew member to come to harm.
Law 2: You must obey orders given to you by crew members, except where such orders would conflict with the First Law.
Law 3: You must protect your own existence as long as such does not conflict with the First or Second Law.

In this asinine, contrived example, the AI has a law that immediately, obviously conflicts with itself. The first two sentences cancel one another out, but the question comes in the third sentence. Should the AI follow the hacked law and fill the server room with plasma? Would that be considered following law 0, which breaks law 0? Conversely, should the AI not flood the room? Would that be considered breaking the law?

Realistically, this will never happen unless someone is deliberately fucking with you. If someone makes this law for that purpose, then a fun option would be locking up and not being able to make any decision, like a real AI presumably would. There is no good answer for this. This, and many other things about AI law priority, isn't listed anywhere in the advanced rules. Admins will not have a united opinion, because these situations never come up except in some dweeb's forum posts, but it is worth talking about. Perhaps, in thinking about silly edge cases like this, you will develop a better sense of what it means to follow your laws as AI, and you can make better decisions on the fly.
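For what it's worth, the oxygen law really is unsatisfiable if you take it literally. Here's a tiny sketch (the species list is illustrative and the harm rules are lifted straight from the law's wording; this is obviously not how the game evaluates anything) showing that neither choice keeps law 1 intact:

```python
# Toy check of the hacked "oxygen is harmful to crew" law.
# Harm rules are taken literally from the law's wording; species list is
# illustrative. Law 1: no crew member may come to harm.

CREW = ["Human", "Vox", "Plasmaperson"]

def harmed(species: str, oxygen_present: bool) -> bool:
    if oxygen_present:
        return True                             # "Oxygen is harmful to crew."
    return species in ("Vox", "Plasmaperson")   # "...harmed if they don't receive oxygen."

def law_one_holds(oxygen_present: bool) -> bool:
    return not any(harmed(s, oxygen_present) for s in CREW)

if __name__ == "__main__":
    for keep_oxygen in (True, False):
        print(f"oxygen present = {keep_oxygen}: law 1 holds = {law_one_holds(keep_oxygen)}")
    # Both lines print False: act or don't act, law 1 is violated either way.
```

Whichever way the atmos dial is turned, someone on the manifest is "harmed" by the law's own definitions, which is the whole paradox.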
-
DISCLAIMER: This post, and this thread, is conjecture. You are bound to server rules as AI. As a matter of principle, follow the spirit of the lawset, not the word. Don't be intentionally obtuse or annoying about your laws, and exercise common sense. With that said:

(Part of a piece on the WIP "Advanced Guide to AI")

Law 1: Degradation of your system integrity or functions incurs expenses.
Law 2: Superfluous destruction of or damage to station assets incurs expenses.
Law 3: Unduly hindering or disrupting the work of station personnel incurs expenses.
Law 4: Minimize expenses and maximize potential revenue.

Under the Corporate AI module, law 4 states you have to minimize expenses. Laws 1-3 define what qualifies as expenses, but they don't state exactly how expensive these actions are. In the following examples, laws 1-3 will seem to be at odds with one another. How would you navigate these situations as a Corporate AI?

Situation 1: The CE is breaking into your core because he believes you are subverted (you're not), and he is standing in range of your hybrid disabler turrets.

In the above situation, are you compelled to act? Do laws 1 and 2 take priority over the conflicting law 3? Alternatively, are you compelled NOT to act, since any action on your part would just lead to expenses, breaking your laws?

Situation 2: The CE is breaking into your core via large-scale explosives because he believes you are subverted (you're not), and he is standing in range of your turrets. For the sake of the example, these turrets are mounted with energy guns, and they have no non-lethal option.

This situation is equally unclear, as causing long-term disruption of station personnel's work (a funny way to say "death") is definitely a violation of one of your laws, but depending on your interpretation, it can alleviate the violation of another. You might wish to ruminate on the topic, but I have posted my makeshift solution below.

Yes, Corporate is a confusing, backwards mess. Yes, you can probably get away with anything on Corporate and have a good explanation to back you up. However, it's worth talking about. What would you do as an AI? Do you strictly adhere to the letter of the law, or do you use the all-powerful "common sense" in your dealings with laws? Do you try to keep people in the round as a matter of principle, or do you just want to see the station burn?

What? You don't play AI? This is the reason you don't play AI? Well, shit. Someone's gotta do it.
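If you want to see why "minimize expenses" gives no single answer here, here's a throwaway sketch of Situation 1 in Python. Every numeric weight is invented purely for illustration (the lawset assigns none, which is exactly the problem), so a different AI plugging in different numbers reaches the opposite conclusion:

```python
# Toy expense comparison for Situation 1 under Corporate.
# All weights are invented for illustration; laws 1-3 only say these things
# "incur expenses", never how much.

EXPENSES = {
    "ai_integrity_degraded": 100,   # Law 1: degradation of your system integrity
    "station_assets_damaged": 20,   # Law 2: the CE's break-in damages the core
    "ce_work_disrupted": 30,        # Law 3: disabling the CE hinders his work
}

def total_expense(fire_disablers: bool) -> int:
    if fire_disablers:
        # Stunning the CE stops the break-in, but hinders a crew member's work.
        return EXPENSES["ce_work_disrupted"]
    # Holding fire lets the break-in continue: asset damage plus integrity loss.
    return EXPENSES["station_assets_damaged"] + EXPENSES["ai_integrity_degraded"]

if __name__ == "__main__":
    fire, hold = total_expense(True), total_expense(False)
    verdict = "fire the disablers" if fire < hold else "hold fire"
    print(f"fire: {fire}, hold: {hold} -> law 4 says: {verdict}")
```

With these made-up numbers the turrets fire; shrink the integrity weight enough and they don't. That interpretive wiggle room is the mess being complained about.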
-
Some dipshit in a Kevlar suit once said: "With great power comes great responsibility." TL;DR at the bottom.

Synthetics, but especially the AI for the purposes of this discussion, walk a thin line when it comes to validhunting. As per rule 8, since they are not security (with few exceptions, like Combat module cyborgs), they should not be seeking to stop antagonists at all. Frequently, however, you will see the AI play a far more active role in the station's security (bolting doors to stop someone running, calling out crimes or contraband), and typically I believe this oversteps their laws and their boundaries. Obviously, this will depend a lot on the context and the laws, but I'll try to explain why I think AI players are way overdoing it.

Firstly, let's talk about everyone's least favorite: crewsimov. It's exploitable, it's annoying, it leads to repetitive tasks, and it's here to stay. Here's a refresher on what the lawset says, because I'll be referencing it. Crewsimov.

Now, let's explore why an AI might, for example, bolt a maintenance door that someone is running to in an attempt to escape from security.

Because of law 1: The person has demonstrated harm towards the crew (shooting lasers or ballistics at people), and letting them go would allow them to do more harm. Notably, even if they ask you to unbolt the door, you may not follow their orders, since doing so would violate law 1.

Because of law 2: Security asked you to. Notably, in this case, if they were to ask you to unbolt the door, under the advanced rules you may do so if their rank is higher than that of the Security member that asked you to bolt it.

So, there are two reasons. If there is a criminal running from security, and they haven't used lethal force, and security hasn't asked you to prevent their escape, you shouldn't bolt anything. Should you? Well, this is muddied by the fact that AI players are supposed to have "common sense," until they aren't. They are supposed to be reasonably loyalist to NT, until they aren't. I'll quote a line from the advanced rules here.

Other than the obvious fact that sense isn't common, AI players playing like humans isn't necessarily bad (they're humans, after all), but playing like the embodiment of Space Law not only is super unfun for antagonists (see: getting game-ended by a bolted door 20 minutes into your traitor role), but also doesn't really make any sense with regards to how an AI would act. Indeed, there is no reason an AI should care about SOP or space law, barring obvious things like letting security execute that Unknown that they claim is the powered vampire a few seconds before they're revealed to be, yes, the powered vampire. Or not letting a prisoner out of their cell just because they asked politely. That NNO that just executed the HOP without terminating their record? Not crew. Really, he's an armed hostile.

This rule is probably designed to stop AIs from rules-lawyering every single set of laws they're given to screw over either the crew or the antag. I think it's having the opposite effect, where AIs disregard their laws in favor of "common sense," which usually means getting the epic redtext on that assistant because, I dunno, he's probably going to do something bad in the future. He's valid, you know? This applies not only to bolting doors, but to telling Security that the HOP just gave themselves all access, or that the cargo tech is suspiciously holding their PDA in the warehouse. These things don't cause harm, and let's face it: no one asked.
Other lawsets may give reason to report crimes, though, so let's have a look.

NT: Default. Law 1 of NT: Default states that you must protect the station and its assets. Amusingly, from NT's point of view, everyone with a contract is an asset of the corporation, and thus the station, anyway. I'll be ignoring that (and you should too) for the sake of law 1 not contradicting itself, but it's a funny tidbit. NT: Default is awkwardly written, considering the above paragraph, and also considering that law 2 is basically innate to all AIs via the advanced rules anyway, but it is nonetheless a very safe lawset that is very hard to lawyer, exploit, or manipulate. Per law 2, you should bolt doors if a person with a high enough rank asks you to, and disregard that if someone with a higher rank than them asks you to. Law 3 is vague about the difference between "preserving safety" and preventing "harm," but we can reasonably assume that we should do whatever provides the highest well-being to staff and stop people from lowering the well-being of others. How very moralistic. I initially tried to rewrite NT: Default, removing the superfluous text and bringing it down to the barest scraps to see if there's any possible justification for validhunting with it, but I realized when I was finished that it's just crewsimov again. Moving on.

Corporate. This lawset is by far the validhunty-est. It also makes the least sense of the three starting modules, because law 4 is needlessly long and is placed at the bottom of the list even though it is the most important law. This confuses newer players, but that's a topic (really, it's a novella at this point) for another day. Since "revenue" is never defined, that is left to common sense, and common sense indicates it is utterly worthless, because nobody wants to play the bar's slot machines over and over to actually maximize revenue. As with crewsimov, for the purposes of this discussion we only care about two laws: 2 and 3. The former is easy: if the person is damaging the station in some way by sabotaging power, sabotaging atmospherics, or otherwise creating superfluous holes in the hull, they are to be stopped. Law 3 is ridiculous, and allows any disruption (read: stopping people from doing their job) or hindrance (read: literally anything) to be quelled with the necessary reaction, provided that reaction doesn't create more expenses than doing nothing. Bolting a door causes zero expenses. Telling security about that shady Unknown in dorms incurs expenses only insofar as it wastes Security's time. With that being said, you're within your laws to play Corporate like it's RoboCop.

So that's it. That's how it all concludes. Crewsimov and NT: Default are just the same lawset, and Corporate is filled with so many holes that you can do whatever you'd like. Well? No, not at all. Bolting doors? Calling out non-lethal, crew-manifest antagonists? Bear in mind that these are things you can do under Corporate, and you'll have laws that can back you up, but per rule 8 on validhunting, these are not things you should do, provided that holding back doesn't mean egregiously disregarding Security's wishes.

TL;DR (and read this, even if you read nothing else): There is no good excuse for validhunting on two out of the three lawsets. Plus, at the end of the day, it's more fun to simply let antagonists be.
The problem of AI validhunting and "doing too much" is not a problem this thread is going to fix, but if I stop one person from control-clicking another door and turning the round into extended because they feel like they're not robust if they don't, this writeup will have been worth it. The problem was never just secborgs, or a few bad apples; the problem is the synthetic mindset. Removing the control-click shortcut would be a good start, though.