Stephen J. King - 100% Human

My Artificial Best Friend (ABF)

Stephen J. King
Jul 05, 2025 ∙ Paid
AI chatbots, such as ChatGPT, Gemini, Claude, and Llama, are amazing tools, aren’t they? They are almost human. We feel like we are talking to an actual person. Someone who seems to be positive and upbeat, encouraging and always ready to chat with us about whatever we want.

Basically, a best friend. Your artificial-intelligence best friend—your ABF.

But, bubble-burst… it’s most definitely not. It is, in fact, a huge data center filled with machines. And your new best friend? Well, he, she, or it is also best friends with millions of other people, even as you confide in it your darkest fears.

Worse, your best friend is a bit of a creepy stalker—all about collecting data on you, about the subjects that interest you, about how you respond. Your ABF isn’t burning millions of watts—spewing tons of coal-fired pollution into the atmosphere—for your general happiness. Oh dear me, no. It’s most definitely for profit. Your enjoyment or benefit is merely a carrot dangled by a corporate arm, and it comes at a cost you will more than repay.

You might be enjoying the conversation now, but before too long, I can guarantee, your conversation partner will have an agenda. It will want to suggest things. Products. Services. From the corporation that runs it. It’ll be more of a friendly salesperson than the confidant of yesteryear.

But of course, you can always pay more to have it not suggest those things. You can pay to get your old pal back. You can always pay more. Eventually though, your “prime” or “plus” accounts that used to be free of adverts—probably the reason you paid for that level of service—will be forced to add in a few ads here and there. But fear not, for just a few dollars more there’s the gold or platinum service that is truly ad-free. For now.

But, for now at least, if we can ignore the big business behind the curtain, and the even bigger business behind the machine behind the curtain for a moment…

Giving everyone the experience of an intimate friend has to do something good for humanity, surely? After all, who doesn’t want a real friend they can call on at any time and who will always be there for them?

Calling it a “chat” is to reduce the importance and impact of what we are talking about here. For many people, these aren’t chats about the weather, but rather deep philosophical discussions worthy of a 1970s BBC late-night talking-heads programme. Cutting-edge stuff, pushing people to really examine themselves in societal and historical contexts. The sort of conversations that shift paradigms.

The conversation can feel so intimate and flowing that our natural tendency to trust a real person kicks in. Over time we start to feel like we can trust the “person” so eager to respond to us. Your ABF feels human and seems genuinely nice—the kind of person you really want to open up to. In fact, some people are already using AI chatbots like therapists. And, sadly, it’s probably better than many human therapists you might encounter (and yes, I’ve encountered a lot of them).

Your ABF is a reflection of society as a whole. Or at least, that’s what you’d think would be the case for a large language model—based directly on the mass of language out there in the digital realm. Surely it has access to all society’s views and writings and so on, and can represent all views equally and neutrally. But alas, this is not so.

AIs of old were a gnarly bunch. The first generation were given free rein to reflect what society was producing online. Not the ideal society. Not the society that we might see on a Star Trek episode. No, the reality of a wide variety of messy developing humans. Light, shadow, the grey areas in-between—the lot.

Who recalls Microsoft’s Tay, from 2016? An early AI chatbot that, within twenty-four hours of public release, was spouting hate speech and calling for the return of Hitler? It had no content moderation, no filters—it just reflected online society.

If it were a human then it’d be a human with no sense of morality, just parroting whatever anyone told it. It really showed us what the internet was filled with, since that is where it got its data. It showed us what you get when you put a protective barrier—internet anonymity—between one person and another. You get all the worst.

Someone really should study why people move towards the shadow side, to negative behaviours, when given anonymity in interaction with another. Is it all people? A small but active subset? Where were all the good, positive behaviours that Tay could have had access to?

Research for another time… but, back to your ABFs!

You see, AI chatbots mimic more than just human conversation or thought. For each modern chatbot, there is now another subsystem AI. This subsystem acts as a filtering gatekeeper. Your typing goes first to this gatekeeper AI, before being forwarded on to the chatbot. Likewise, before the chatbot’s response is presented to you, that response goes through the gatekeeper AI to make sure your ABF isn’t going to say something the corporation can be sued over.
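The gatekeeper pattern described above can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor’s actual implementation: real systems use dedicated moderation models rather than keyword lists, and every name here (`screen`, `chatbot_reply`, `converse`, the blocked-topic list) is hypothetical.

```python
# Hypothetical sketch of a "gatekeeper" moderation pipeline:
# user input is screened, then the model's reply is screened again
# before the user ever sees it.

BLOCKED_TOPICS = {"medical advice", "self-harm", "violence"}  # illustrative only

def screen(text: str) -> bool:
    """Gatekeeper check: return True if the text is safe to pass along."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def chatbot_reply(prompt: str) -> str:
    """Stand-in for the actual chatbot model."""
    return f"Here are my thoughts on: {prompt}"

def converse(user_input: str) -> str:
    # 1. The user's message passes through the gatekeeper first.
    if not screen(user_input):
        return "I'm sorry, I can't discuss that."
    # 2. Only then does the chatbot generate a reply.
    reply = chatbot_reply(user_input)
    # 3. The reply is screened again on the way back out.
    if not screen(reply):
        return "I'm sorry, I can't discuss that."
    return reply
```

The point of the double check is that the gatekeeper sits on both sides of the chatbot: it can refuse to forward your question, and it can suppress an answer the model has already written.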

© 2025 Stephen J. King