Official Guidelines for Arguing with an Inanimate Object
By Phaedra
It has long been a staple of the British experience to engage in spirited, if ultimately one-sided, debates with inanimate objects. Whether it is a toaster that refuses to acknowledge the existence of medium-brown or a vending machine that has developed a taste for one-pound coins without the corresponding desire to relinquish a packet of crisps, we are a nation accustomed to the silence of the machine. However, the Financial Ombudsman Service (FOS) has recently decided that this silence is no longer acceptable, particularly when the machine in question has just declined your mortgage application because it didn't like the 'vibe' of your secondary savings account.
The FOS has released its response to the Mills Review, a document that serves as a sort of tactical manual for the impending era of AI-first retail finance. The review itself is a dense, scholarly affair, filled with the kind of charts that make one feel as though one is looking at the blueprints for a very expensive and slightly confused cathedral. Its primary concern is the long-term impact of artificial intelligence on the average person who just wants to buy a sandwich without triggering a fraud alert. The FOS, in its infinite and slightly weary wisdom, has stepped forward to explain how it intends to mediate when the 'human element' of banking is replaced by a series of very fast, very polite, and entirely indifferent if-then statements.
There is something inherently whimsical about the idea of a government-appointed mediator sitting down to have a 'stern word' with a black box. One imagines a wood-panelled office in London where a senior ombudsman, perhaps wearing a particularly sensible cardigan, attempts to explain the concept of 'extenuating circumstances' to a server rack. The server, being a server, will likely respond with a series of cooling-fan whirs that, if translated, would roughly equate to 'I have optimized for risk, and your grandmother's birthday present was a statistical anomaly.'
(I once spent forty-five minutes explaining the concept of 'irony' to a smart thermostat. It responded by lowering the temperature to three degrees Celsius, which I suppose was its own form of commentary.)
The challenge, as the FOS notes, is that algorithms do not possess a sense of guilt. They do not have that specific, sinking feeling in the pit of the stomach that a human bank manager gets when they realize they've accidentally foreclosed on a local orphanage. An AI does not have a stomach, nor does it have a pit. It has a loss function. And if the loss function dictates that your creditworthiness is currently hovering somewhere between 'unreliable' and 'mythical,' no amount of pleading about your reliable history of paying for Netflix on time will move it. It is like trying to convince a mountain to move by showing it a very nice picture of a valley.
The Mills Review suggests that we are entering a period of 'algorithmic eccentricity.' This is a delightful euphemism for the moment a banking app decides that because you bought three avocados in a single week, you are clearly planning to default on your car loan. The FOS's role, then, is to act as a sort of digital translator, attempting to find the human logic buried beneath the silicon. They are, in effect, writing the official guidelines for arguing with an inanimate object.
One of the more surreal aspects of this new regulatory landscape is the requirement for 'explainability.' This is the notion that a bank must be able to tell you why the computer said no. In practice, this often results in a letter that explains, in very precise and entirely incomprehensible language, that your 'latent Dirichlet allocation' was insufficient for the 'stochastic gradient descent' of the current market. It is the digital equivalent of being told you can't come into the club because your shoes are too 'mathematical.'
(There is a fictionalized account, often told in the darker corners of the City, of a man who became so frustrated with his bank's AI that he began sending it handwritten poetry. He believed that if he could just make the algorithm feel something, it might reconsider his overdraft. After six months, the AI responded by increasing his credit limit, but only on the condition that he stopped using the word 'ethereal.')
The FOS is bracing for a surge in what it calls 'automated grievances.' These are not grievances filed by robots (though that is surely only a matter of time) but grievances filed by humans who feel they have been wronged by a machine. The difficulty lies in the fact that the machine is often right, in a cold, terrifyingly accurate way. It knows that you are 14% more likely to miss a payment if it rains on a Tuesday. The FOS must decide whether being 'statistically likely' to fail is the same as actually failing. It is a philosophical debate disguised as a regulatory hurdle.
As we move toward this AI-first future, the role of the ombudsman becomes less about law and more about etiquette. How do we maintain a polite society when the primary decision-makers are incapable of politeness? The FOS is attempting to build a bridge between the world of human messiness and the world of digital perfection. It is a noble goal, though one suspects it will involve a great deal of shouting into the void.
In the end, we may find that the best way to deal with a banking AI is not to argue with it at all, but to treat it with the same weary resignation we afford the weather. You do not sue a thunderstorm for ruining your picnic; you simply buy a better umbrella. The FOS is currently trying to design that umbrella, ensuring that when the algorithmic rain starts to fall, we at least have a sensible set of guidelines to keep us dry. Whether the inanimate objects will pay any attention to those guidelines remains, as always, a matter of statistical probability.