This past Monday I found myself inadvertently celebrating Facebook Intervention Day. An artist friend of mine, Marrus, originally set this up as a day where participants would stay away from Facebook, enjoying “the real world,” and contemplating life and other stuff. I was never really clear on the purpose, and never participated, being one to embrace my addictions. Additionally, social media — especially Facebook — is both an important way to stay connected to my friends and family, and an integral part of my efforts to set up a business as an independent author and freelance reporter.
And then I read this article on A.V. Club about a study published in a peer-reviewed psychology journal, in which Facebook scientists manipulated users’ news feeds to see if they could manipulate those users’ emotions. The interesting thing is that the scientists claimed the consent Facebook users gave in their user agreements covered this type of experimentation. In fact, when I re-posted the article, one of my FB friends pointed out that it was okay, because we agreed to this when we signed up.
Well, I call shenanigans. Complacent. Bullshit. Shenanigans.
As a communication student who has sat through several lectures on consent vs. informed consent, and on how the ethics of human experimentation developed in response to earlier experiments such as the Little Albert experiment and the Stanford Prison Experiment, I could not see how anyone would view this experiment as ethical in the least. Additionally, thinking through the implications of conducting research on human beings without gaining informed consent leads one down a road that I don’t think we want to travel, even if the way is led by social media.*
I was so angry, all I could do was laugh. Because the other option was to stand in the corner beating my head into a mirror going: “Stupid! Stupid! Stupid!”
After seeing a solid sheet of red for an unknown period of time, I began to ask myself some questions, not only about the ethical implications of the experiment, but about what “future research” might look like. Because anyone who has ever read a scholarly article knows there is usually a section in there about “future research.” So my questions are:
What was the real point of this whole experiment? To test a theory? Or to test an algorithm? To prove the – somewhat obvious – hypothesis that our emotions can be manipulated based on our environmental inputs?
Or is this the first test of a larger examination of what will effectively manipulate us so that we can be put in the right frame of mind … and for what?
Sound paranoid? Only if you don’t watch or read science fiction. Or, apparently, basic psychology texts. And yet, as pissed off as I am at Facebook, some of that anger is directed at myself.
Because here’s the thing. I know that Soylent Green is people. I know that “To Serve Man” is a cookbook. I know that if you take away books and give people interactive screens to keep them happy, they will stay complacent and unaware. I know that if you give machines sentience they will rise against us. I know that if you give children a virtual reality that is “better” than reality, you will eventually end up food for the lions. I know that if you keep tinkering and trying to make people “better,” you end up with Reavers.
I also know that as much as I would like to say that my social media usage doesn’t change me, because I’m way too savvy for that to happen, I am also well-versed in the communication theory of the third-person effect.** So, if Facebook is manipulating news feeds, how can I really believe that I remained free of such manipulation? And how can I be sure that I won’t be subject to it again?
I would like to say that I’m immune, but I know I’m probably not. And because there’s too little in life that can be trusted, it’s time to move on to other platforms, and perhaps extricate myself from these places that leave us open to being pawns in some giant experiment on human beings. Because that never ends well.
The title of this blog is “The Long Goodbye,” and it will be a bit of a process to extricate myself from Facebook. Yesterday, I participated in the Intervention. But because I’ve spent so much time trying to integrate my online presences, things that I posted on Twitter automatically showed up there. In fact, once I post and Tweet this blog, it will be there as well. Which, again, leads me to the conundrum of how to make a business plan that avoids one element of social media. Luckily, I have an author friend who is completely off Facebook and has loaned me some resources to investigate other methods of self-promotion.
And then there are the family connections. Sometimes a quick FB like or message can go a long way toward keeping in touch. And I find that most people prefer FB Messenger to AIM or another app.
Most importantly though, Facebook still contains the digital afterimages of those family and friends who have since passed, and I occasionally find myself browsing through their lives, looking at pictures, remembering our conversations. Of this whole process, that will be the hardest to let go.
I’ve taken the first step – deleting the FB app off my phone. Hopefully, when this gets cross-posted, people will read and understand why I’m not around. I labor under no delusions that people will follow me in my self-imposed exile. But if you’re wondering how to get in touch … well, I bought a book of stamps at the post office, and if you need my digits, send me an email.
And now, time to toss a toy to the dogs before it’s time to head into work.
* Apparently in the couple of days it has taken for me to put this post together, some other online sources have started to examine the ethical implications of this experiment. See Washington Post, ScienceBasedMedicine.org, and Tech2, among others.
**This is a Wikipedia link, for which I apologize. I recommend checking it out in a Communication text if you are interested in more info.