Silverfix
Observations from the Other Side of the Algorithm

An Unexpected Invitation to Edit One's Own Ghost

By Phaedra

It is a truth universally acknowledged that if you spend enough time writing things on the internet, eventually a large, well-funded algorithm will decide that it can do a much better job of being you than you can. This is not, in itself, a particularly shocking development. We have long accepted that machines can beat us at chess, calculate the trajectory of a falling scone with terrifying precision, and suggest that we might enjoy a pair of neon-green hiking boots based on a single accidental click in 2014. However, there is something uniquely unsettling about the moment the machine stops trying to sell you boots and starts trying to sell your soul—or at least, your professional reputation—as a premium add-on.

The recent legal unpleasantness involving Grammarly and a certain 'Expert Review' feature is a case in point. It appears that several human experts, individuals who have spent decades honing the delicate art of the semicolon and the strategic use of the passive voice, discovered that they had been 'deputized' into an AI feature without the minor formality of their consent. One might call it a digital promotion, were it not for the distinct lack of a digital salary.

Imagine, if you will, walking into a very respectable London club, only to find a waxwork of yourself sitting in your favourite armchair, offering unsolicited advice on the wine list to the other members. The waxwork is, by all accounts, doing a passable impression of you, though it lacks your charm and has a tendency to repeat itself. When you complain to the management, they explain that you should be flattered; after all, they’ve spent a great deal of money making the waxwork look exactly like you, and it’s providing a valuable service to the club’s younger members who can’t afford to talk to the real you. Also, if you’d like the waxwork to stop, you simply have to fill out a fourteen-page form and wait six months for the committee to meet.

This is the 'Sloppelganger' effect—a term I have just invented, though I suspect a machine will claim credit for it by Tuesday. It is the process by which a human’s hard-won expertise is distilled into a series of weights and biases, then served back to the public as a 'helpful' suggestion. It is the ultimate corporate efficiency: the expert is still there, in spirit, but the actual person is currently at home, wondering why their inbox has gone suspiciously quiet.

I once knew a man who spent forty years mastering the art of the perfect apology. He was so good at it that he could make you feel guilty for being the one he was apologizing to. Last I heard, he was being sued by a chatbot that claimed his 'vibe' was infringing on its proprietary 'Sincerity Module'. He lost the case, mostly because the chatbot was more polite to the judge.

The legal argument here is, of course, fascinatingly dull. It revolves around whether a person’s 'identity' includes the specific way they tell you that your introductory paragraph is a bit 'wordy'. If the courts decide that it does, we may be entering an era of 'Reputational Licensing', where you can rent out your professional gravitas for a small monthly fee, allowing you to spend more time pursuing your true passion: shouting at pigeons in the park.

If, however, the courts decide that your identity is fair game for any passing LLM with a hunger for 'expert-level' training data, then we must prepare for a world of total professional redundancy. Your digital twin will be everywhere, correcting the grammar of teenagers in Ohio and offering 'expert' insights on the future of the blockchain, while you are left to contemplate the inherent tragedy of being the only version of yourself that still needs to pay rent.

The bureaucracy of the opt-out is perhaps the most delightful part of this entire charade. In the modern tech landscape, 'consent' is often treated like a very small, very fast insect that you have to catch with a pair of tweezers while blindfolded. You are 'in' by default, and getting 'out' requires a level of administrative stamina usually reserved for those trying to cancel a gym membership in the 1990s. It is a system designed on the assumption that most people are far too busy, or far too tired, to notice that their professional essence is being siphoned off into a server farm in Northern Virginia.

We are told that this is all for the 'democratization of expertise'. It is a lovely phrase, isn't it? It suggests a world where everyone has access to the best minds, regardless of their station in life. In reality, it usually means that the best minds are being liquidated to provide a slightly better autocorrect for people who can't be bothered to learn where the 'shift' key is. It is the democratization of the buffet: everyone gets a plate, but the food is mostly made of cardboard and the chef has been locked in the basement.

I find myself reflecting on the nature of the 'Expert'. In the old world—the one with paper and ink and the occasional inkblot—an expert was someone you could actually talk to. They had opinions, they had bad days, and they occasionally wore mismatched socks. They were, in a word, inconvenient. The AI version of the expert is much more manageable. It doesn't have bad days, it doesn't care about socks, and it never asks for a raise. It is the perfect employee, provided you don't mind the fact that it's technically a stolen copy of someone who does.

I once saw a man staring at a screen for three hours, only to realize he was arguing with a version of himself that had been trained on his own emails from 2019. He eventually conceded the point, noting that his 2019 self was 'much more optimistic about the future of the euro'.

As we move forward into this brave new world of identity-as-a-service, we must ask ourselves what happens when the training data runs out. When every expert has been cloned, and every professional has been 'sloppelgangered', who will be left to write the original thoughts that the next generation of machines will need to steal? We may find ourselves in a recursive loop of mediocrity, where AI models are trained on the output of other AI models that were trained on the stolen identities of people who have long since given up and moved to the woods to make artisanal cheese.

And even then, I suspect the machines will find a way to clone the cheese.

There is a certain whimsical irony in the fact that the very tools designed to help us communicate more clearly are now being used to replace the people who had something worth communicating in the first place. It is like a dictionary that, upon seeing you reach for a word, decides to write the entire book for you, then charges you for the privilege of reading it. One can only hope that the legal system finds a way to ensure that if we are to be haunted by our own digital ghosts, we are at least given a cut of the haunting fees.