Bubble-making – Does it Hurt?
Blocking, muting, connecting, following – we can control a lot on social media, or can we?
When algorithms entered the life of the common man or woman — until then untouched by the magic spells of the programmers, until then just using services that were easily predictable, like a telephone or a food processor — well, when that happened, they lost all control.
Suddenly, they didn’t see their friends on social media; instead they saw other people, strangers, and they themselves were no longer heard by their friends, even though they kept posting.
“Gaming the algorithm” became a thing, until then a fantasy of cyberpunk writers who saw a future where people had to live in a fictitious world that, at the same time, was made very real in that we all decided to live in it.
Social media.
If you don’t control your life, someone else does. The times are gone when we would attribute all events to the will of a master, a dominant spouse, fate, or a god. Now, our lives are controlled by algorithms.
But there is a catch. You can twist the algorithm. And it works more or less the same as praying to a god, in that you cannot know if the algorithm will listen to your wishes.
If you have a website, for instance, you can pray to the spiders and crawlers to leave some of your content alone and not include it in the search-engine results, but these robots decide for themselves whether they will listen to your prayer. In fact, they will most likely index everything you asked them not to index, merely marking it in their database as something that should not be displayed to just anyone searching for something. That data is reserved for other purposes, such as selling advertising or demographic information to companies, who can then target you on the basis of what you had so deeply wished the search-engine companies would not use.
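For the curious, the usual form this prayer takes is a robots.txt file served at the root of the site, following the Robots Exclusion Protocol. The sketch below uses an illustrative domain and paths; the key point, matching the argument above, is that the file is purely advisory and nothing technically prevents a crawler from ignoring it:

```
# robots.txt, served at https://example.com/robots.txt
# A request, not an enforcement mechanism: well-behaved crawlers
# read this file, but nothing forces any crawler to obey it.

User-agent: *            # applies to all crawlers
Disallow: /private/      # please do not fetch anything under /private/
Disallow: /drafts/
```

Note, too, that `Disallow` only asks crawlers not to fetch the pages; a URL can still end up indexed if other sites link to it. The page-level request not to appear in results is a separate `noindex` robots meta tag in the page's HTML, and honoring either one is entirely the crawler's choice.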
The algorithms of the crawlers may know about your wishes, but they obey a higher command: that of the commercial interests of the search-engine company.
If, on social media, you have certain expectations of the tone and topics of the posts and discussions, and you feel that some of the other members behave in a way that doesn’t match them, you can often block them. The social media platforms are, in general, not happy to let you do that, but somehow they have felt that they should allow you the pleasure of feeling that you control a small part of your own life.
If you block someone, you will probably not see anything that person posts. So far, so good. The algorithms tend to work that well. But if your unblocked connections enter into a conversation with the blocked person, you may or may not see part of it — and a funny thing here is that your blocking can at times reach much further than you wanted. You just wanted to block that one person, but if many other people’s replies in threads where that person appears are also hidden, then you have exploded a bomb in your social world rather than just closed a door.
Sometimes these algorithms allow you to say that you “don’t want to see more of this kind of content” — whatever that means! How can any algorithm know whether you would consider another post, perhaps from another person, to be of the same “kind”?
Again, your action, based on a wish, ends up being nothing but a prayer to the algorithm god, and there is no guarantee that it will end as you hoped.
I have described a couple of situations where the algorithm blocks too much. But the opposite can easily be the case. If millions of posts are added to a platform each day, the algorithm will fill your stream with some of them. If you were unhappy to see sexist or otherwise objectionable content from one person, you may have successfully blocked them, but then there will be a gap in your stream that the algorithm will fill with posts from other posters — who could happen to be just as bad.
Depending on the platform and your tolerance level, you may never finish your cleaning session, as there can be countless bad actors there, and you just keep seeing new ones.
You may also be as lucky as I was once, when Facebook freaked out in right-wing xenophobia, filling up the stream with claims and threats, each more brutal than the last. A terrible atmosphere; it felt as if the platform had been taken over by psychopaths. However, instead of closing my account (at that time), I made one attempt: I began blocking the accounts that seemed to be the worst, and after about 100 blockings, my stream was completely free of all the hatred and xenophobia. Only normal people talking about all the normal social media things.
100 accounts was all it took, nothing more. These few accounts had infected the platform and made all the newspapers and politicians scream at each other at increasing volumes, many of them claiming that a demand from the people, on social media, should be obeyed: all foreigners should be expelled. And similar madness. But all that negativity, all those “demands”, came from 100 accounts out of millions that didn’t express such things.
I had checked each of the 100 accounts before blocking them, and they basically all followed a certain pattern, with a fixed set of posts on their walls, obviously fake personal information, and a picture of a dog as their profile picture. Clearly fake accounts, and from the behavioural pattern it was clear that most of them were operated by bots.
A hatred machine, simple as that. A swarm of locusts, eating all the naivety and goodwill of the normal users for breakfast.
But it was possible for me to clear all that out of my bubble on Facebook. Sadly, nobody else seemed to do the same; those I talked to didn’t want to listen, didn’t believe me when I told them that this was all a storm in a teacup, all made up and controlled through 100 bots.
Later, Facebook would adjust the algorithm so that I could no longer shape the stream as I had done here. Again, an overwhelming amount of right-wing propaganda filled the scenery, and it no longer helped to block the bad accounts — others just took their place.
On Substack, I have met quite a few bot accounts so far. They are kind and chatty, not trying to tell me anything bad, even though some of them have a strange set of “Reads” on their accounts, such as superstitious Substacks, stacks about UFOs, about Trump…
I wonder if it really is possible to participate in shaping the world we see on Substack. It is already being shaped. Things do not happen entirely by coincidence, at least not judging by what appears in your Notes stream, which follow recommendations you are given in Notes, and which articles are suggested.
You can mute, block, like, restack, and interact in even more ways. But what will it actually do to your stream? Will the Notes algorithm and the recommendation algorithms hear your prayers? Do they, like the Internet search engines, have a higher master whose wish matters more than yours?
I don’t know. I just know that my food processor does what I want from it — follows my commands when I push its buttons. Social media doesn’t.
So it isn’t really my fault what happens to other people’s posts if anything I do influences the algorithms’ selections of posts and recommendations for me.
Not my fault, but I still tend to feel guilty. After all, in real life, “blocking” someone would mean that they knew about it. Something like not answering their phone calls or emails, maybe even going to court and having a court order issued that this person must stay away from me.
But is social media blocking similar? Does it hurt the blocked person? Of course, maybe they really do behave badly, and I shouldn’t feel bad for restricting their bad behaviour a bit, but often these people are not necessarily bad people: they just talk about things I don’t want to hear, or they fill up the stream with nonsense where I was hoping for something meaningful. And one person’s nonsense might be another person’s insight, you could say. “Nonsense” is in itself a nonsense word, as it says more about the listener than the speaker — more about the one calling the words nonsense than about the words themselves.
So what happens if I block or mute someone? And I am not speaking about the bots, as these can hardly have any feelings to hurt. Do the real people somehow feel that they are blocked — through less attention, distrusting comments perhaps, or maybe they feel frozen out altogether?
No. Most often, they will not even know it. Their stream will be full of other people’s posts, even if they don’t see yours, and they will hardly notice that yours are not there any more. And their own posts will be distributed to many other people’s streams.
Each of us has such a carefully algorithm-made bubble that we don’t even know, at least not in detail, how exactly it is made, so pushing one person out of our bubble will not change much. The days I experienced with Facebook are over now; algorithms are much more independent, less willing to listen, always trying to feed us all with an endless, smooth stream of posts.
It’s a fantasy world. Nothing hurts for real in there. Nothing is real. There are real people behind it, but sort of cartoonised, shown only as sketches. Many are even there under invented names, telling invented stories about themselves, adding more fantasy to what is already not real.
In such a world, you can only affect the story a bit, for a short while, or make the characters there apparently react with real emotions — like the characters in an adventure game on the computer, where “the gnome casts his fishing rod” each and every time you say the magic word to him. And he will do it whether you listen to him or not.
In a social media fantasy, nothing hurts.