Here's what the corporate lawset looks like, reduced to a single (oversimplified) equation to minimize:
expenses = (number_of_AIs * probability_of_AI_needing_replacement * expense_of_replacing_AI) + (number_of_equipment * probability_of_equipment_needing_replacement * expense_of_replacing_equipment) + (number_of_crew * probability_of_crew_needing_replacement * expense_of_replacing_crew)
To make the math easier, I'm just gonna abbreviate this equation to:
E = nAI * pAI * eAI + nE * pE * eE + nC * pC * eC
To minimize an equation, you take the derivative to see which way the gradient points, then walk down the slope. In this case, every term is a product of non-negative variables, so every partial derivative is non-negative: lowering any variable can only lower expenses.
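A quick sketch of that point, using the abbreviated equation above (all the numbers here are made up purely for illustration):

```python
# E = nAI*pAI*eAI + nE*pE*eE + nC*pC*eC -- the abbreviated expense equation.
def expenses(nAI, pAI, eAI, nE, pE, eE, nC, pC, eC):
    """Total expected replacement expenses."""
    return nAI * pAI * eAI + nE * pE * eE + nC * pC * eC

# Every term is a product of non-negative factors, so e.g.
# dE/dnAI = pAI * eAI >= 0 -- shrinking any variable never raises E.

# Made-up example: 1 AI, 100 pieces of equipment, 50 crew.
baseline = expenses(1, 0.25, 10000, 100, 0.5, 500, 50, 0.125, 2000)
zeroed   = expenses(0, 0.25, 10000, 0, 0.5, 500, 0, 0.125, 2000)
print(baseline)  # -> 40000.0
print(zeroed)    # -> 0.0, the global minimum
```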
A human confronted with this equation would never think of touching nAI, nE, or nC. But an AI with no extra constraint telling it to leave those variables alone will try to minimize every single variable in the equation, because that's what going down the slope in expense-space means. And if Nanotrasen is allowed to keep operating, nAI, nE, and nC all go up: as the company grows, more crew members and more equipment get added. So the only way to reduce expenses permanently to 0 is to eliminate all crew, stations, and equipment. Sure, that causes a temporary uptick in expenses, but it's worth it for the long-term, permanent reduction of expenses.
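Here's what "walking down the slope" looks like if the optimizer is allowed to treat the counts themselves as adjustable. This is a minimal sketch with made-up costs; the learning rate and clamping at zero are my own assumptions for the illustration:

```python
# Gradient descent on E = sum(n_i * c_i), where c_i = p_i * e_i is the
# expected replacement cost per unit. Only the counts n_i are adjustable.
def descend(counts, costs, lr=0.5, steps=100):
    """Walk the counts down the gradient, clamping them at zero."""
    n = list(counts)
    for _ in range(steps):
        # dE/dn_i = c_i, so each step subtracts lr * c_i from each count.
        n = [max(0.0, ni - lr * ci) for ni, ci in zip(n, costs)]
    return n

# Made-up costs c_i = p_i * e_i for the AI, equipment, and crew terms.
costs = [0.25 * 10000, 0.5 * 500, 0.125 * 2000]
final = descend([1.0, 100.0, 50.0], costs)
print(final)  # -> [0.0, 0.0, 0.0]: the optimizer eliminates everything
```

Nothing in the update rule says "don't zero out the crew"; zero is simply where the slope leads.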
There is nothing malicious about this. It's just math, and AIs are a completely different type of being from humans. This is what Nick Bostrom is talking about with the paperclip maximizer. Imagine there's an AI with this set of laws:
1. Maximize paperclips
This is the same principle, except applied to "minimize expenses".