A thought has occurred to me lately: a bureaucracy large enough that the people evaluating whether procedure has been followed have little or no contact with the people being evaluated behaves a lot like an AI that happens to understand plain English instructions. That’s not to say it will reasonably interpret those instructions, just that you can give the bureaucracy a rule like “con-goers cannot touch cosplayers without their explicit consent” and it will immediately understand that rule – but only the literal meaning of the words, not the spirit behind them.
I’m gonna clarify here that I’m referring to a rule the Salt Lake Comic Con does in fact have, but which to my knowledge has never actually been enforced in this AI-literal way. I don’t even know if the con staff are a large enough organization to produce such a detached bureaucracy – it usually takes well over a hundred people to get to that point. I’m just using the example that prompted this train of thought, and while the argument would probably work better with a less potentially controversial example, I’m too lazy to think of one.
It’s weird as Hell that a sufficiently large bureaucracy would interpret this rule purely literally, because pretty much every individual human who makes up the bureaucracy understands perfectly well that the rule isn’t fully literal. It’s a broad rule intended to firmly forbid things like unwanted hugs, but some things that are technically physical contact clearly aren’t meant to be forbidden – tapping a cosplayer on the shoulder to get their attention in a crowded convention hall to ask for a picture, for example. The reason I first started thinking about this is that I do this all the time; in a crowded and noisy convention hall, it can be the only way to get the attention of someone whose name you don’t know. A regular human being gets this and wouldn’t even consider throwing me or anyone else out for tapping someone’s shoulder, unless they were a megalomaniac on a power trip, in which case it would be immediately obvious to everyone watching that the problem is with them. And if even one other person had input on the decision and wasn’t selected by the megalomaniac specifically for loyalty, they would very likely dispute the reasonableness of throwing someone out for a shoulder tap just because it technically violates the “do not touch cosplayers without express permission” rule.
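To make the letter-versus-spirit gap concrete, here’s a minimal sketch of a bureaucracy as a literal-minded rule engine – in Python, with entirely hypothetical names (no con actually encodes its policies this way). The point is that the rule as written only asks whether contact occurred, so it flags a shoulder tap exactly as hard as an unwanted hug:

```python
# A minimal sketch of a bureaucracy as a literal-minded rule engine.
# Everything here (Incident, violates_touch_rule) is hypothetical,
# invented for illustration, not any real con's actual policy.

from dataclasses import dataclass

@dataclass
class Incident:
    contact: bool          # did physical contact occur?
    consent_given: bool    # did the cosplayer explicitly consent?
    description: str

def violates_touch_rule(incident: Incident) -> bool:
    """The rule exactly as written: any contact without explicit consent."""
    return incident.contact and not incident.consent_given

shoulder_tap = Incident(contact=True, consent_given=False,
                        description="tapped a shoulder to ask for a picture")
unwanted_hug = Incident(contact=True, consent_given=False,
                        description="hugged a cosplayer without asking")

# The literal rule cannot tell these two incidents apart; both come back True.
print(violates_touch_rule(shoulder_tap))   # True
print(violates_touch_rule(unwanted_hug))   # True
```

Every human in the loop can see the difference between those two incidents, but nothing in the rule-as-written encodes it.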
And yet, it’s easy to imagine a bureaucracy throwing someone out for exactly that shoulder tap. Some kind of bad actor would have to be involved just to bring the “transgression” to the bureaucracy’s attention and get the gears turning, but the weird thing is that once those gears start to turn, they are not especially likely to grind to a halt, even though two or three different people are involved, none of them on a power trip, and every one of them knows it’s ridiculous to ban shoulder taps. Each one is just concerned with covering their own ass, and having no real relationship, not even acquaintanceship, with the people responsible for getting on their ass, they aren’t confident they can cover it unless they can demonstrate that they abided by the exact letter of the law. This is true regardless of how many of the people evaluating compliance with the rules would actually mark security down for failing to evict a shoulder-tapper. Security has no idea whether the evaluation arm of the bureaucracy is going to accept their excuse or do things by the book.
Not only that, but you can almost never talk a bureaucrat out of a decision handed down from on high just by reasoning with them as a human being. If there’s a specific appeal process, the bureaucrats involved will evaluate your claim, but only by whatever criteria the bureaucracy has instructed them to use. If they’re explicitly given leave to judge things by their own sense of the spirit of the regulation that was technically violated, they’ll do that; but if they’re told their only purpose is to double-check whether the exact wording of the regulation was violated at all, that’s what they’ll do. Like an AI, a bureaucracy won’t even listen to the person it’s manhandling unless you specifically program it to listen, and it will evaluate whether or not to change its mind based on the criteria you program it to respond to, not any human intuition (the good news is that you can program a bureaucracy to use human intuition just as easily as saying so, without having to get a computer to figure out what “reasonable doubt” means – the bad news is that this facilitates corruption).
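To extend the same hypothetical sketch: the two kinds of appeal process differ only in what the bureaucracy instructs the reviewer to check. The by-the-letter version accepts the appellant’s argument as input and then never consults it; the by-the-spirit version can wave a shoulder tap through, but the same lever is available to a corrupt reviewer:

```python
# A standalone sketch of two appeal processes a bureaucracy might be
# programmed with. All names are hypothetical, for illustration only.

def letter_was_violated(contact: bool, consent_given: bool) -> bool:
    """The touch rule exactly as written."""
    return contact and not consent_given

def appeal_by_the_letter(contact: bool, consent_given: bool,
                         appellant_argument: str) -> bool:
    # Instructed only to double-check the exact wording: the appellant's
    # argument is taken as input and then never consulted.
    return letter_was_violated(contact, consent_given)

def appeal_by_the_spirit(contact: bool, consent_given: bool,
                         reviewer_judges_harmless: bool) -> bool:
    # Explicitly licensed to apply human judgment about the rule's intent.
    return (letter_was_violated(contact, consent_given)
            and not reviewer_judges_harmless)

# No argument, however reasonable, moves the by-the-letter process:
print(appeal_by_the_letter(True, False, "it was just a shoulder tap"))  # True

# A spirit-of-the-rule process can wave the shoulder tap through...
print(appeal_by_the_spirit(True, False, reviewer_judges_harmless=True))  # False
# ...but nothing stops a reviewer from judging a poker buddy's unwanted
# hug "harmless" too, which is exactly the corruption trade-off.
```

The design choice is stark: either the appellant’s argument is dead code, or the reviewer’s judgment is an unconstrained input.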
This obtuseness is usually a good thing, even. We don’t generally want our bureaucracies to be corrupt institutions that make exceptions to the rules whenever they feel like it and break them outright whenever they think they can convince their poker buddies it was justified. People can justify all kinds of crazy shit, up to and including blatant human rights violations, if the alternative is admitting that their friend is a terrible, terrible human being – and once that precedent gets set, everyone else at the poker table feels justified in doing things just as awful when it’s convenient. The only real limit on what people will do at an institution running on an “anything you can justify to your own friends is allowed” set of rules is whatever morals every one of those friends has in common. Everything else will slowly be eroded by their misguided loyalty to one another, even in the face of blatant and frequent abuses of power.
So when I say that bureaucracies can turn out like an AI, I’m not arguing that there’s something fundamentally abhorrent about bureaucracy and that all human endeavors requiring more than about 200 people working together should be abolished, nor even that bureaucracies are in desperate need of reform. And I am definitely not saying that they should be compartmentalized to the point where the people evaluating whether the rules are being followed are direct coworkers of the people being evaluated – that last one is a cure that does more harm than the disease.
What I am saying is that creating rules for a bureaucracy should be approached with the same trepidation as creating rules for SHODAN. The bureaucracy will do what you program it to do, not what you meant to program it to do. There’s a reason legal codes tend to accrue a huge number of very exact instructions over time: an ever more bloated bureaucracy needs a huge number of very exact instructions to behave appropriately (and laws signed to combat problems that ultimately prove temporary or even illusory are rarely given a sunset clause). If you don’t pay close attention to what rules you give a bureaucracy, you’ll end up creating an insane rogue AI.