Thursday, May 9, 2019

Fear-mongering Vox joins the Fake News crew

Vox calls Nextdoor "fear-based." For the sake of argument, Vox might as well call Facebook "loneliness-based" or Twitter "narcissism-based."

This morning, an article from Vox was presented to me in my Firefox new-tab news feed:

The rise of fear-based social media like Nextdoor, Citizen, and now Amazon’s Neighbors

The title was provocative, so I clicked on it.

"Nextdoor" is an invaluable tool for communities to reestablish their status as communities in this digital day and age. Block parties and other face-to-face activities have gone by the wayside, so apps like Nextdoor give people a sense of that lost community. In any community, there will not be one hundred percent agreement about anything, much less about how people view other human beings.

With this article (written by Rani Molla, a reporter for Recode and formerly of Bloomberg Gadfly), Vox is attempting to turn the reader against a positive force for the community by, once again, playing that tried-and-true, unassailable, and punishable-by-shame-if-resisted racism card.

The article, besides referring to Nextdoor as a "fear-based social media app," which immediately paints a very one-dimensional picture of its users, also states that Nextdoor's Crime and Safety section is a "hotbed for racial stereotyping."

The Wired article cited for the "hotbed" claim is here:

For Nextdoor, Eliminating Racism Is No Quick Fix

If you read the Wired piece, you will find in the fourth paragraph the following information:

"Caught off guard, Tolia asked his neighborhood operations team, which handles customer service, to review Nextdoor postings. They discovered several dozen messages posted over the course of the previous year that identified suspicious characters only by race. By some measures, it was a tiny issue. The site distributes four million messages daily."

In the world of perception, journalists know all too well the power of words to manipulate. To refer to "several dozen" messages over the course of an entire year as a "hotbed," when the total messages for that same year amounted to roughly 1.46 billion (call it 50 messages as compared to 1,460,000,000 messages), is nothing short of exaggeration of the highest order.
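To make the scale concrete, here is a quick back-of-the-envelope check using only the figures above: four million messages per day, and "several dozen" flagged posts in a year, estimated here as 50 for the sake of arithmetic.

```python
# Rough proportion check using the article's own figures.
daily_messages = 4_000_000                    # "four million messages daily" (Wired)
yearly_messages = daily_messages * 365        # = 1,460,000,000 per year
flagged = 50                                  # "several dozen" -- an estimate

share = flagged / yearly_messages
print(f"messages per year: {yearly_messages:,}")
print(f"flagged share: {share:.10f}")
print(f"about 1 in {yearly_messages // flagged:,} messages")
```

By that estimate, the posts in question amount to roughly one in every twenty-nine million messages.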

We are told that "by some measures" it is a tiny issue. By what other measure is it not a tiny issue? I'm a bit puzzled by that statement. Later in the same paragraph, Tolia (founder of Nextdoor) expresses his dismay that even a tiny problem can cast his service as racist. Sure, in the minds of those who think the fringe are as relevant as the average user. Most adults are coherent enough to know this is not true.

Coming back to the Vox piece, there are other manipulative statements, some allegedly backed by handy graphs, such as:

- "Public perception of crime rate at odds with data"
- "Apps can fuel a vicious cycle of fear and violence"
- "Citizen — whose previous form was called Vigilante and which appeared to encourage users to stop crimes in action"
- "These apps have become popular because of — and have aggravated — the false sense that danger is on the rise"
- "Examples abound of racism on these types of apps, usually in the form of who is identified as criminal"
- "Apps didn’t create bias or unfair policing, but they can exacerbate it"
- "These apps can also be psychologically detrimental to the people who use them"
- "Like all new technology, we’re struggling to use it correctly"
- "But why would we use something that plays on demonstrably false fears and has so many negative side effects?"
- "The rise of fear-based social media apps might also have to do with the decline of local news"

Notice all the "cans" and similar hedging words used to suggest that the most negative possibility is the likely one. This is easily identifiable manipulation.

Of course, Rani brings in an 'expert' to tell us that "These apps foment fear around crime, which feeds into existing biases and racism and largely reinforces stereotypes around skin color."

That same expert is further quoted that there's "very deep research" indicating we're all predisposed to mentally picture a black person when we hear about or read about a crime. Oh, I'm sure there is this sort of biased research, especially if it will contribute to the reader further believing this hit piece, and the overarching Leftian narrative that the entire country is teeming with white racists who hate "people of color."

Then another convenient expert is quoted, telling us that these apps don't actually help us as advertised, but instead simply reflect our own ugly biases, an accusation easily leveled at a nebulous crowd, but not so easily proven on an individual basis.

Other articles are cited, from Motherboard, Vice, and The Outline, none of which is necessarily about racism, but all of which conveniently refer to possibilities that Rani found advantageous to her point.

At the end of all this, I have to ponder:

Why take an app that millions of people enjoy and rely upon, and falsely cast it as racist and fear-mongering, based on the questionable use of it by only a handful of people?

Are some of us losing our sense of proportion and accurate perception?

Or is the answer a bit more sinister, such as a desire to dismantle something that people use to protect each other without government or media control?


