The Kenyan moderators' PTSD reveals the fundamental paradox of content moderation: we've created an enterprise-grade trauma processing system that requires concentrated psychological harm to function, and then we act surprised when it causes trauma. The knee-jerk reaction of suggesting AI as the solution is, IMO, just wishful thinking - it's trying to technologically optimize away the inherent contradiction of bureaucratized thought control. The human cost isn't a bug that better process or technology can fix - it's the inevitable result of trying to impose pre-internet regulatory frameworks on post-internet human communication, frameworks that large segments of the population may simply be incompatible with.
I'm wondering if there are precedents in other domains. There are other jobs where you do see disturbing things as part of your duty. E.g. doctors, cops, first responders, prison guards and so on...
What makes moderation different? and how should it be handled so that it reduces harm and risks? surely banning social media or not moderating content aren't options. AI helps to some extent but doesn't solve the issue entirely.
I don’t have any experience with this, so take this with a pinch of salt.
What seems novel about moderation is the frequency with which you confront disturbing things. I imagine companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit. And as soon as you're done with one post, the next is right there. I doubt moderators spend more than 30 seconds on the average image, which is an awful lot of stuff to see in one day.
A doctor just isn’t exposed to that sort of imagery at the same rate.
> I imagine companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit.
On the contrary, I would expect that it's the edge cases that get shown to them - why loop in a content moderator if you can already be sure it's prohibited on the platform?
In this light, it might make sense why they sue: They are there more as a political org so that facebook can say: "We employ 140 moderators in Kenya alone!" while they do indifferent work that facebook already can filter out.
> They are there more as a political org so that facebook can say: "We employ 140 moderators in Kenya alone!" while they do indifferent work that facebook already can filter out.
Why assume they're just token diversity hires who don't do useful work..?
Have you ever built an automated content moderation system before? Let me tell you something about them if not: no matter how good your automated moderation tool, it is pretty much always trivial for someone with familiarity with its inputs and outputs to come up with an input it mis-predicts embarrassingly badly. And you know what makes the biggest difference? Humans specifying the labels.
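If it helps to make that concrete, here's a deliberately toy sketch (nothing like a real production system; the phrases and names are made up) of why the human-supplied labels are the part that matters:

```python
# Toy illustration: a naive automated filter, and the human labels that correct its misses.

def auto_moderate(post_text: str, banned_phrases: set) -> str:
    """Naive filter: trivially fooled by any input it has never seen labeled."""
    text = post_text.lower()
    return "remove" if any(phrase in text for phrase in banned_phrases) else "publish"

banned = {"obvious slur", "graphic violence"}

print(auto_moderate("obvious slur", banned))   # -> "remove"
print(auto_moderate("0bv1ous slur", banned))   # -> "publish" (an embarrassing miss)

# It's a human reviewer who supplies the corrective label; those labels are what
# the next iteration of the automated system gets trained and evaluated on.
human_labels = {"0bv1ous slur": "remove"}
```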
I don't assume diversity hires, I assume that these people work for the Kenyan part of Facebook and that Facebook employs an equivalent workforce elsewhere.
I am also not saying that content moderation should catch everything.
What I am saying is that the content moderation teams should ideally decide on the edge cases as they are hard for automated system.
In turn that also means that these people ought not to be exposed to too hardcore material - as that is easier to classify.
Lastly I say that if that is not the case - then they are probably not there to carry out a function but to fill a political role.
Content moderation also involves reading text, so you’d imagine that there’s a benefit to having people who can label data and provide ground truth in any language you’re moderating.
Even with images, you can have different policies in different places or the cultural context can be relevant somehow (eg. some country makes you ban blasphemy).
Also, I have heard of outsourcing to Kenya just to save cost. Living is cheaper there so you can hire a desk worker for less. Don’t know where the insistence you’d only hire Kenyans for political reasons comes from.
Also, a doctor is paid $$$$$ and it's mostly a vocational job.
Content moderator is a min wage job with bad working hours, no psychological support, and you spend your day looking at rape, child porn, torture and executions.
How about grouping the jobs into two categories: A) Causes PTSD and B) Doesn't cause PTSD
If a job has a consistently high percentage of people ending up with PTSD, then the company employing them isn't equipping them well enough to handle it.
>How about grouping the jobs into two categories: A) Causes PTSD and B) Doesn't cause PTSD
I fail to see how this addresses my previous questions of "it's purely a monetary dispute?" and "where do you draw the line?". If a job "Causes PTSD" (whatever that means), then what? Are you entitled to hazard pay? Does this work out in the end to a higher minimum wage for certain jobs? Moreover, we don't have similar classifications for other hazards, some of which are arguably worse. For instance, dying is probably worse than getting PTSD, but the most dangerous jobs have pay that's well below the national median wage[1][2]. Should workers in those jobs be able to sue for redress as well?
What could a company provide a police officer with to prevent PTSD from witnessing a brutal child abuse case? A number of sources I found estimate that, at the top of the range, ~30% of police officers may be suffering from it.
I wouldn't say purely, but substantially yes. PTSD has costs. The article lays some out: therapy, medication, mental, physical, and social health issues. Some of these money can directly cover, whereas others can only be kinda sorta justified with high enough pay.
I think a sustainable moderation industry would try hard to attract the kinds of people who are able to perform this job without too much negative impact, quickly relieve those who try but are not well suited, and pay for some therapy.
“I would imagine that companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit.”
This doesn’t make sense to me. Their automated content moderation is so good that it’s unable to detect “almost certainly disturbing shit”? What kind of amazing automation only works with subtleties but not certainties?
I assumed that, at the margin, Meta would prioritise reducing false negatives. In other words, they would prefer that as many legitimate posts are published as possible.
So the things that are flagged for human review would be on the boundary, but trend more towards disturbing than legitimate, on the grounds that the human in the loop is there to try and publish as many posts as possible, which means sifting through a lot of disturbing stuff that the AI is not sure about.
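As a rough sketch of what that kind of routing looks like (toy thresholds and names of my own, not anything Meta actually does):

```python
# Toy sketch of confidence-based routing: the confident ends are handled automatically,
# and only the ambiguous middle reaches a human reviewer.

def route(score: float, publish_below: float = 0.2, remove_above: float = 0.95) -> str:
    """score: an automated classifier's estimated probability that a post violates policy."""
    if score < publish_below:
        return "publish"        # confidently fine; no human ever sees it
    if score > remove_above:
        return "auto_remove"    # confidently violating; also never seen by a human
    return "human_review"       # the uncertain, often disturbing, middle

print(route(0.05))   # publish
print(route(0.99))   # auto_remove
print(route(0.60))   # human_review
```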
There’s also the question of training the models - the classifiers may need labelled disturbing data. But possibly not these days.
However, yes, I expect the absolute most disturbing shit to never be seen by a human.
—
Again, literally no experience, just a guy on the internet pressing buttons on a keyboard.
>In other words, they would prefer that as many legitimate posts are published as possible.
They'd prefer that as many posts as possible are published, but they probably also don't mind some posts being removed if it means saving a buck. When Canada and Australia implemented a "link tax", they were happy to ban all news content to avoid paying it.
> I imagine companies like Meta have such good automated moderation
I imagine that they have a system that is somewhere between shitty and non-functional. This is the company that will more often than not flag marketplace posts as "Selling animal", either completely at random or because the pretty obvious "from an animal-free home" phrase is used.
If they can't get this basic text parsing correct, how can you expect them to correctly flag images with any real sense of accuracy?
I watch surgery videos sometimes, out of fascination. It's not gore to me - sure it's flesh and blood but there is a person whose life is going to be probably significantly better afterwards. They are also not in pain.
I exposed myself to actual gore vids in the aughts and teens... That stuff still sticks with me in a bad way.
My understanding is that during surgery, your body is most definitely in pain. Your body still reacts as it would to any damage, but anesthetics block the pain signals from reaching the brain.
But there is a difference between watching someone make an effort to heal someone else and content whose implication is that something really disturbing happened, the kind that makes you lose faith in humanity.
I agree. But that might be comorbid with PTSD. It’s probably not good for you to be _that_ desensitised to this sort of thing.
I also feel like there’s something intangible regarding intent that makes moderation different from being a doctor. It’s hard for me to put into words, but doctors see gore because they can hopefully do something to help the individual involved. Moderators see gore but are powerless to help the individual, they can only prevent others from seeing the gore.
It's also the type of gore that matters. Some of the worst stuff I've seen wasn't the worst because of the visuals, but because of the audio. Hearing people begging for their life while being executed surely would feel different to even a surgeon who might be used to digging around in people's bodies.
Imagine if this becomes a specialized, remote job where one tele-operates the brain-and-blood-scrubbing robot all workday long, accident after accident after accident. I am sure they'd get PTSD too. Sure, sometimes it's just oil and coolant, but there's still a lot of body tissue involved.
Desensitization is only one stage of it. It's not permanent & requires dissociation from reality/humanity on some level. But that stuff is likely to come back and haunt one in some way. If not, it's likely a symptom of something deeper going on.
My guess is that's why it's after bulldozing hundreds of Palestinians, instead of 1 or 10s of them, that Israeli soldiers report PTSD.
If you haven't watched enough videos of the ongoing genocides in the world to realize this, it'll be a challenge to have a realistic take on this article.
Burnout, PTSD, and high turnover are also hallmarks of suicide hotline operators.
The difference? The reputable hotlines care a lot more about their employees' mental health, with mandatory breaks, free counseling, full healthcare benefits (including provisions for preventative mental health care like talk therapy).
Another important difference is that suicide hotlines are decoupled from the profit motive. As more and more users sign up to use a social network, it gets more profitable and more and more load needs to be borne by the human moderation team. But suicide and mental health risk is (roughly) constant (or slowly increasing with societal trends, not product trends).
There's also less of an incentive to minimize human moderation cost. In large companies, some directors view mod teams as a cost center that takes away from other ventures. In an organization dedicated only to suicide hotline response, a large share of the income (typically fundraising or donations) goes directly into the service itself.
A friend's friend is a paramedic and as far as I remember they can take the rest of the day off after witnessing death on duty, and there's an obligatory consultation with a mental healthcare specialist. From reading the article, it seems like those moderators are seeing horrific things almost constantly throughout the day.
I've never heard of a policy like that for physicians and doubt it's common for paramedics. I work in an ICU and a typical day involves a death or resuscitation. We would run out of staff with that policy.
Maybe it's different in the US where ambulances cost money, but here in Germany the typical paramedic will see a wide variety of cases, with the vast majority of patients surviving the encounter. Giving your paramedic a day off after witnessing a death wouldn't break the bank. In the ICU or emergency room it would be a different story.
Ambulances cost money everywhere, it's just a matter of who is paying. Do we think paramedics in Germany are more susceptible to PTSD when patients die than ICU or ER staff, or paramedics anywhere?
Not in the sense that matters here: the caller doesn't pay (unless the call is frivolous), leading to more calls that are preemptive, overly cautious or for non-life-threatening cases. That behind the scenes people and equipment are paid for, and a whole structure to do that exists, isn't really relevant here.
> Do we think paramedics in Germany are more susceptible to PTSD
No, we think that there are far more paramedics than ICU or ER staff, and helping them in small ways is pretty easy. For ICU and ER staff you would obviously need other measures, like staffing those places with people less likely to get PTSD or giving them regular counseling by a staff therapist (I don't know how this is actually handled, just that the problem is very different than the issue of paramedics)
My friend has repeatedly mentioned his dad became an alcoholic due to what he saw as a paramedic. This was back in the late 80s, early 90s so not sure they got any mental health help.
Trauma isn’t just a function of what you’ve experienced, but also of what control you had over the situation and whether you got enough sleep.
Being a doctor and helping people through horrific things is unlike helplessly watching them happen.
IIRC, PTSD is far more common among people with sleep disorders, and it’s believed that the lack of good sleep is preventing upsetting memories from being processed.
There's a very good reason moderators are employed in far-away countries, where people are unlikely to have the resources to gain redress for the problems they have to deal with as a result.
In many states, pension systems give police and fire service sworn members a 20 year retirement option. The military has similar arrangements.
Doctors and lawyers can’t afford that sort of option, but they tend to embrace alcoholism at higher rates and collect ex-wives.
Moderation may be worse in some ways. All day, every day, you see depravity at scale. You see things that shouldn’t be seen. Some of it you can stop, some you cannot due to the nature of the rules.
I think banning social media isn’t an answer, but demanding change to the algorithms to reduce the engagement to high risk content is key.
A content moderator for Facebook will invariably see more depravity and more frequently than a doctor or police officer. And likely see far less support provided by their employers to emotionally deal with it too.
This results in a circumstance where employees don’t have the time nor the tools to process.
Outside of some specific cities, I can guarantee it. Even a busy Emergency Dept on Halloween night had only a small handful of bloody patients/trauma cases, and nothing truly horrific when I did my EMT rotation.
I think part of it is the disconnection from the things you're experiencing. A paramedic or firefighter is there, acting in the world, with a chance to do good and some understanding of how things can go wrong. A content moderator is getting images beamed into their brain that they have no preparation for, of situations that they have no connection to or power over.
As other sibling comments noted: most other jobs don't have the same frequent exposure to disturbing content. The closest are perhaps combat medics in an active warzone, but even they usually get some respite by being rotated.
at least in the US, those jobs - doctors, cops, firefighters, first responders - are well compensated (not sure about prison guards), certainly compared to content moderators who are at the bottom of the totem pole in an org like FB
What does compensation have to do with it? Is someone who stares at thousands of traumatizing, violent images every day going to be less traumatized if they're getting paid more?
Yes, they will be much more able to deal with the consequences of that trauma than someone who gets a pittance to do the same thing. A low-wage peon won't even be able to afford therapy if they need it.
From those I know that worked in the industry, contractor systems are frequently abused to avoid providing the right level of counseling/support to moderators.
Billions of people use them daily (facebook, instagram, X, youtube, tiktok...). Surely we could live without them like we did not long ago, but there's so much interest at play here that I don't see how they could be banned. It's akin to shutting down internet.
I worked at FB for almost 2 years. (I left as soon as I could, I knew it wasn't a good fit for me.)
I had an Uber from the campus one day, and my driver, a twenty-something girl, was asking how to become a moderator. I told her, "no amount of money would be enough for me to do that job. Don't do it."
I don't know if she eventually got the job, but I hope she didn't.
Yes, these jobs are horrible.
However, I do know from accidentally encountering bad stuff on the internet that you want to be as far away from a modern battlefield as possible.
It's just kind of ridiculous how people think war is like Call of Duty. One minute you're sitting in a trench, the next you're a pile of undifferentiated blood and guts. Same goes for car accidents and stuff. People really underestimate how fragile we are as human beings. Becoming aware of this is super damaging to our concept of normal life.
Watching someone you love die of cancer is also super damaging to one's concept of normal life. Getting a diagnosis, or being in a bad car accident, or the victim of a violent assault is, too. I think a personal sense of normality is nothing more than the state of mind where we can blissfully (and temporarily) forget about our own mortality. Obviously, marinating yourself in all the horrible stuff makes it really hard to maintain that state of mind.
On the other hand, never seeing or reckoning with or preparing for how brutal reality actually is can lead to a pretty bad shock once something bad happens around you. And maybe worse, can lead you to under-appreciate how fantastic and beautiful the quotidian moments of your normal life actually are. I think it's important to develop a concept of normal life that doesn't completely ignore that really bad things happen all around us, all the time.
there’s a difference between a one-, two- or even ten-off exposure to the brutality of life, where various people in your life will support you and help you acclimate to it
Versus straight up mainlining it for 8 hours a day
hey kid, hope you're having a good life. I'll look at a screen full of the worst the internet and humanity have produced for eight hours.
I get your idea but in the context of this topic I think you're overreaching
Actually reckoning with this stuff leads people into believing in anti-natalism, negative utilitarianism, Schopenhauer/Philipp Mainländer (Mainländer btw was not just pro-suicide, he actually killed himself!), and the voluntary extinction movement. This terrified other philosophers like Nietzsche, who spends most of his work defending reality even if it's absolute shit. "Amor Fati", "Infinite Regress/Eternal Recurrence", "Übermensch" vs the literal "Last Man". "Wall-E" of all films was the modern quintessential Nietzschean fable, with maybe "Children of Men" being the previous good one before that.
You're literally not allowed to acknowledge that this stuff is bad and adopt one of the religions that see this and try to remove suffering - i.e. Jainism - because at least historically doing so meant you couldn't use violence in any circumstances, which also meant that your neighbor would murder you. There's a reason the Jain population is in the low millions.
Reality is actually bad, and it should be far more intuitive to folks. The fact that positive experience is felt "quickly" and negative experience is felt "slowly" was all the evidence I needed that I wouldn't just press the "instantly and painlessly and without warning destroy reality" (benevolent world-exploder) button, I'd smash it!
Interesting to see this perspective here. You’re not wrong.
> There's a reason that Jain's population are in the low millions
The two largest Vedic religions both have hundreds of millions of followers. Is Jainism that different from them in this regard? I know Jainism is very pacifist, but I mean specifically on the question of suffering.
Emergency personnel might need to brace themselves for car accidents every day. That Kenyans need to be traumatized by internet content in order to make a living is just silly and unnecessary.
Even the wording is wrong - those aren’t accidents, it is something we accept as byproduct of a car-centric culture.
People feel it is acceptable that thousands of people die on the road so we can go places faster.
Similarly they feel it’s acceptable to traumatise some foreigners to keep social media running.
I spent my civil service as a paramedic assistant in the countryside, close to a mountain road that was very popular with bikers. I was never interested in motorbikes in the first place, but the gruesome accidents I've witnessed turned me off for good.
Yes, but you're also far less likely to kill other people on a motorcycle than in a car (and even less likely than in an SUV or pick-up truck). So some people live much less dangerously with respect to the people around them.
It's not that we're particularly fragile, given the kind of physical trauma human beings can survive and recover from.
It's that we have technologically engineered things that are destructive enough to get even past that threshold. Modern warfare in particular is insanely energetic in the most literal, physical way - when you measure the energy output of weapons in joules. Partly because we're just that good at making things explode, and partly because improvements in metallurgy and electronics made it possible over time to locate targets with extreme precision in real time and then concentrate a lot of firepower directly on them. This, in particular, is why the most intense battlefields in Ukraine often look worse than WW1 and WW2 battles of similar intensity (e.g. Mariupol had more buildings destroyed than Stalingrad).
But even our small arms deliver much more energy to the target than their historical equivalents. Bows and arrows pack ~150 J at close range, rapidly diminishing with distance. Crossbows can increase this to ~400 J. For comparison, an AK-47 firing standard issue military ammo is ~2000 J.
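For a rough back-of-envelope check of that last figure (my own numbers, using commonly cited ballistics values rather than anything from this thread): muzzle energy is E = ½mv², and a 7.62×39mm bullet of roughly 8 g leaving the barrel at about 715 m/s gives 0.5 × 0.008 kg × (715 m/s)² ≈ 2,000 J, which is where the AK figure above comes from.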
Watch how a group of wild dogs kill their prey, then realise that for millennia human-like apes were part of their diet. Even the modern battlefield is more humane than the African savannah.
Funny you mention crossbows; the Church at one point in time tried to ban them because they democratized violence to a truly trivial degree. They were the nuclear bombs and assault rifles of medieval times.
Also, I will take this moment to also mention that the "problem" with weapons always seem to be how quickly they can kill rather than the killing itself. Kind of takes away from the discussion once that is realized.
> Humans can render other humans unrecognizable with a rock.
They are much less likely to.
We have instinctive repulsion to violence, especially extending it (e.g. if the rock does not kill at the first blow).
It is much easier to kill with a gun (and even then people need training to be willing to do it), and easier still to fire a missile at people you cannot even see.
Extreme violence then? With rocks, clubs of bare hands? I was responding to "render other humans unrecognizable with a rock" which I am pretty sure is uncommon in schools.
Not in public schools in the British sense. I assume it varies in public schools in the American sense, and I am guessing violence sufficient to render someone unrecognisable is pretty rare even in the worst of them.
You are discounting the complexity of the logistics required for an AK47 army. You need ammo, spare parts, lubricant and cleaning tools. You need a factory to build the weapon, and churn out ammunition.
Or, gather a group of people, tell them to find a rock, and go bash the other sides head.
Complexity of logistics applies to any large army. The single biggest limiting factor for most of history has been the need to either carry your own food, or find it in the field. This is why large-scale military violence requires states.
It should be noted that the purported advantages of AK action over its competitors in this regard are rather drastically overstated in popular culture. E.g. take a look at these two vids showing how AK vs AR-15 handle lots of mud:
As far as cleaning, AK, like many guns of that era, carries its own cleaning & maintenance toolkit inside the gun. Although it is a bit unusual in that regard in that this kit is, in fact, sufficient to remove any part of the gun that is not permanently attached. Which is to say, AK can be serviced in the field, without an armory, to a greater extent than most other options.
But the main reason why it's so popular isn't so much because of any of that, but rather because it's very cheap to produce at scale, and China especially has been producing millions of AKs specifically to dump them in Africa, Middle East etc. But where large quantities of other firearms are available for whatever reason, you see them used just as much - e.g. Taliban has been rocking a lot of M4 and M16 since US left a lot of stocks behind.
1) I'm not squeamish about trauma. In the end, we are all blood and tissue. The calls that get to me are the emotionally traumatic ones: the child abuse, domestic violence, elder abuse (which of course often have a physical component too, but it's the emotional part for me), the tragic, often preventable accidents.
2) There are many people, and I get the curiosity, that will ask "what's the worst call you've been on?" - one, you don't really want to hear it, and two, it amounts to "Hey, person I may barely know, do you think you can revisit something traumatic for my benefit/curiosity?"
That’s an excellent way to put it, resonates with my (non medical) experience. It’s the emotional stuff that will try to follow me around and be intrusive.
I won’t watch most movies or TV because they are just some sort of tragedy porn.
What's interesting now is how many patients will say "You're not going to give me fentanyl are you? That's really dangerous stuff", etc.
It's their perfect right, of course, but it's sad that that's the public perception - it's extremely effective, and quite safe, used properly (for one, we're obviously only giving it from pharma sources, with properly dosed solutions for IV).
It's also super easy to come up with better questions: "What's the funniest call you've ever been on?" "What call do you feel like you made the biggest difference?" "What's the best story you have?"
I'm pretty sure watching videos on /r/watchpeopledie or rekt threads on 4chan has been a net positive for me. I'm keenly aware of how dangerous cars are, that wars (including narco wars) are hell, that I should never stay close to a bus or truck as a pedestrian or cyclist, that I should never get into a bar fight... And that I'm very very lucky that I was not born in the 3rd world.
I get more upset watching people lightly smack and yell at each other on public freakout than I do watching people die. It's not that I don't care about the dead either, I watched wpd and similar sites for years. I didn't enjoy watching it, but I liked knowing the reality of what was going on in the world, and how each one of us has the capacity to commit these atrocities. I'm still doing a lousy job at describing why I like to watch it. But I do.
One does not fully experience life until you encounter the death of something you care about, be it a pet or a person; nothing gives you that real sense of reality until your true feelings are challenged.
I used to live in the Disney headspace until my dog had to be put down. Now with my parents being in their seventies, and me in my thirties I fear losing them the most as the feeling of losing my dog was hard enough.
That's the tragic consequence of being human. Either the people you care about leave first or you do, but in the end, everyone goes. We are blessed and cursed with the knowledge to understand this. We should try to maximize the time we spend with those that are important to us.
Well, I think it goes to a point. I'd imagine there's some goldilocks zone of time spent with the animal, care experienced from the animal, dependence on the animal, and manner/speed of death / time spent watching the thing die.
I say animal to explicitly include humans. Finding my hamster dead in fifth grade did change me. But watching my mother slowly die a horrible, haunting death didn't make me a better person. I'm just saying that there's a spectrum that goes something like: easy to forget about, I'm able to not worry, sometimes I think about it when I don't want to, often I think about it, often it bothers me, and so on. You can probably imagine the cycle of obsession and stress.
This really goes for all traumatic experiences. There's a point where they can make you a better person, but there's a cliff after which you have no guarantees that it won't just start obliterating you and your life. It's still a kind of perspective. But can you have too much perspective? Lots of times I feel like I do.
>> ridiculous how people think war is like Call of Duty.
It is also ridiculous how people think every soldier's experience is like Band of Brothers or Full Metal Jacket. I remember an interview with a WWII vet who had been on Omaha Beach: "I don't remember anything happening in slow motion ... I do remember eating a lot of sand." The reality of war is often just not visually interesting enough to put on the screen.
I don't mean to trivialize traumatic experiences but I think many modern people, especially the pampered members of the professional-managerial class, have become too disconnected from reality. Anyone who has hunted or butchered animals is well aware of the fragility of life. This doesn't damage our concept of normal life.
My brother, an Eastern-European part-time farmer and full-time lorry driver, just texted me a couple of hours ago (I had told him I would call him in the next hour) that he might be with his hands full of meat by that time as “we’ve just butchered our pig Ghitza” (those sausages and piftii aren’t going to get made by themselves).
Now, ask a laptop worker to butcher an animal that used to have a name and to literally turn its meat into sausages, and see what said worker's reaction would be.
Lots of people who spend time working with livestock on a farm describe a certain acceptance and understanding of death that most modern people have lost.
I don't have any data, but my anecdotal experience is a yes to those questions.
>Are there other ways we can get a sense of how a more healthy acceptance of mortality would manifest?
In concept, yes, I think home family death can also have a similar impact. It is not very common in the US, but 50 years ago, elders would typically die at home with family. There are cultures today, even materially advanced ones, where people spend time with the freshly dead body of loved ones instead of running from it and compartmentalizing it.
I don't know what people are seeing in my questions, but apparently they don't like answering them, because no one has.
I'm trying to understand what people mean by 'detachment from reality' and how such a thing is related to 'understanding of mortality', and how a deeper understanding of mortality and acceptance of death would manifest in ways that can be seen.
If 'acceptance of death' does not actually mean that they are more comfortable talking about death, or allowing people to choose their own deaths, or accepting their loved one's deaths with more ease, then what does it mean? Is it something else? Why can't anyone say what it is?
Why is it so obvious to the people stating that it happens, while no one can explain why the questions I asked are not being answered or are wrong?
If this is some basic conflict of frameworks wherein I am making assumptions that make no sense to the people who are making the assertions I am questioning, then what am I missing here?
> Why is it so obvious to the people stating that it happens, while no one can explain why the questions I asked are not being answered
you seem to be a bit anxious, like you're waiting for the answer to your existential crisis.
> or are wrong?
that’s exactly what we did but you think your questions are actually super smart and we, fools, can’t answer them. Well, go on a quest to find the answers by yourself, young one.
In Japan, some sushi bars keep live fish in tanks that you can order to have served to you as sushi/sashimi.
The chefs butcher and serve the fish right in front of you, and because it was alive merely seconds ago the meat will still be twitching when you get it. If they also serve the rest of the fish as decoration, the fish might still be gasping for oxygen.
Japanese don't really think much of it, they're used to it and acknowledge the fleeting nature of life and that eating something means you are taking another life to sustain your own.
The same environment will likely leave most westerners squeamish or perhaps even gag simply because the west goes out of its way to hide where food comes from, even though that simply is the reality we all live in.
Personally, I enjoy meats respecting and appreciating the fact that the steak or sashimi or whatever in front of me was a live animal at one point just like me. Salads too, those vegetables were (are?) just as alive as I am.
Plenty of westerners are not as sheltered from their food as you. Have you never gone fishing and watched your catch die? Have you never boiled a live crab or lobster? You've clearly never gone hunting.
Not to mention the millions of Americans working in the livestock and agriculture business who see up close every day how food comes to be.
A significant portion of the American population engages directly with their food and the death process. Citing one gimmicky example of Asian culture where squirmy seafood is part of the show doesn't say anything about the culture of entire nations. That is not how the majority of Japanese consume seafood. It's just as anomalous there. You only know about it because it's unusual enough to get reported.
You can pick your lobster out of the tank and eat it at American restaurants too. Oysters and clams on the half-shell are still alive when we eat them.
>Plenty of westerners are not as sheltered from their food as you. ... You only know about it because it's unusual enough to get reported.
In case you missed it, you're talking to a Japanese person.
Some restaurants go a step further by letting the customers literally fish for their dinner out of a pool. Granted those restaurants are a niche, that's their whole selling point to customers looking for something different.
Most sushi bars have a tank holding live fish and other seafood of the day, though. It's a pretty mundane thing.
If I were to cook a pork chop in the kitchen of some of my middle eastern relatives they would feel sick and would probably throw out the pan I cooked it with (and me from their house as well).
Isn't this similar to why people unfamiliar with that style of seafood would feel sick -- cultural views on what is and is not normal food -- and not because of their view of mortality?
You're not grasping the point, which I don't necessarily blame you.
Imagine that to cook that pork chop, the chef starts by butchering a live pig. Also imagine that he does that in view of everyone in the restaurant rather than in the "backyard" kitchen let alone a separate butchering facility hundreds of miles away.
That's the sushi chef butchering and serving a live fish he grabbed from the tank behind him.
When you can actually see where your food is coming from and what "food" truly even is, that gives you a better grasp on reality and life.
It's also the true meaning behind the often used joke that goes: "You don't want to see how sausages are made."
I grasp the point just fine, but you haven't convinced me that it is correct.
The issue most people would have with seeing the sausage being made isn't necessarily watching the slaughtering process but with seeing pieces of the animal used for food that they would not want to eat.
I grew up with my farmer grandpa who was a butcher, and I've seen him butcher lots of animals. I always have and probably always will find tongues & brains disgusting, even though I'm used to seeing how the sausage is made (literally).
Some things just tickle the brain in a bad way. I've killed plenty of fish myself, but I still wouldn't want to eat one that's still moving in my mouth, not because of ickiness or whatever, but just because the concept is unappealing. I don't think this is anywhere near as binary as you make it seem, really.
I wouldn't want to eat a cockroach regardless of whether I saw it being prepared or not. The point I am making is that 'feeling sick' and not wanting to eat something isn't about being disconnected from the food. Few people would care if you cut off a piece of steak from a hanging slab and grilled it in front of them, but would find it gross to pick up all the little pieces of gristle and organ meat that fell onto the floor, grind it all up, shove it into an intestine, and cook it.
> Few people would care if you cut off a piece of steak from a hanging slab
The analogy here would be watching a live cow get slaughtered and then butchered from scratch in front of you, which I think most Western audiences (more than a few) might not like.
A cow walks into the kitchen, it gets a captive bolt shoved into its brain with a person holding a compressed air tank. Its hide is ripped off and it is cut into two pieces with all of its guts on the ground and the flesh and bones now hang as slabs.
I am asserting that you could do all of that in front of a random assortment of modern Americans, and then cut steaks off of it and grill them and serve them to half of the crowd, and most of those people would not have a problem eating those steaks.
Then if you were to scoop up all the leftover, non-steak bits from the ground with shovels, throw it all into a giant meat grinder and then take the intestines from a pig, remove the feces from them and fill them with the output of the grinder, cook that and serve it to the other half of the crowd, then a statistically larger proportion of that crowd would not want to eat that compared to the ones who ate the steak.
> I am asserting that you could do all of that in front of a random assortment of modern Americans, and then cut steaks off of it and grill them and serve them to half of the crowd, and most of those people would not have a problem eating those steaks.
I am asserting that the majority of western audiences, including Americans, would dislike being present for the slaughtering and butchering portion of the experience you describe.
Most audiences wouldn’t like freshly butchered cow - freshly butchered meat is tough and not very flavorful, it needs to be aged to allow it to tenderize and develop.
That the point is being repeated to no effect ironically illustrates how most modern people (westerners?) are detached from reality with regards to food.
In the modern era, most of the things the commons come across have been "sanitized"; we do a really good job of hiding all the unpleasant things. Of course, this means modern day commons have a fairly skewed, "sanitized" impression of reality, and will get shocked awake if or when they see what is usually hidden (eg: butchering of food animals).
That you insist on contriving one zany situation after another instead of just admitting that people today are detached from reality illustrates my point rather ironically.
Whether it's butchering animals or mining rare earths or whatever else, there's a lot of disturbing facets to reality that most people are blissfully unaware of. Ignorance is bliss.
To be blunt, the way you express yourself on this topic comes off as very "enlightened intellectual." It's clear that you think that your views/assumptions are the correct view and any other view is one held by the "commons"; one which you can change simply by providing the poor stupid commons with your enlightened knowledge.
Recall that this whole thread started with your proposition that seeing live fish prepared in front of someone "will likely leave most westerners squeamish or perhaps even gag simply because the west goes out of its way to hide where food comes from, even though that simply is the reality we all live in." You had no basis for this as far as I can tell, it's just a random musing by you. A number of folks responded disagreeing with you, but you dismissed their anecdotal comments as being wrong because it doesn't comport with your view of the unwashed masses who are, obviously, feeble minded sheep who couldn't possibly cope with the realities of modern food production in an enlightened way like you have whereby you "enjoy meats respecting and appreciating the fact that the steak or sashimi or whatever in front of me was a live animal at one point just like me." How noble of you. Nobody (and I mean this in the figurative sense not the literal sense) is confused that the slab of meat in front of them was at one point alive.
Then you have the audacity to accuse someone of coming up with "zany" situations? You're the one that started the whole zany discussion in the first place with your own zany musings about how "western" "commons" think!
Earlier this year, I was at ground zero of the Super Bowl parade shooting. I didn’t ever dream about it, but I spent the following 3-4 days constantly replaying it in my head in my waking hours.
Later in the year I moved to Florida, just in time for Helene and Milton. I didn’t spend much time thinking about either of them (aside from during prep and cleanup and volunteering a few weeks after). But I had frequent dreams of catastrophic storms and floods.
Different stressors affect people (even myself) differently. Thankfully I’ve never had a major/long-term problem, but I know my reactions to major life stressors never seemed to have any rhyme or reason.
I can imagine many people might’ve been through a few things that made them confident they’d be alright with the job, only to find out dealing with that stuff 8 hours a day, 40 hours a week is a whole different ball game.
A parade shooting is bad, very bad, but is still tame compared to the sorts of things to which website moderators are exposed on a daily/hourly basis. Footage of people being shot is actually allowed on many platforms. Just think of all the war footage that is so common these days. The dark stuff that moderators see is way way worse.
I have often wondered what would happen if social product orgs required all dev and product team members to temporarily rotate through moderation a couple times a year.
I can tell you that back when I worked as a dev for the department building order fulfillment software at a dotcom, my perspective on my own product changed drastically after I spent a month at a warehouse that was shipping orders coming out of the software we wrote. Eating my own dog food was not pretty.
Many (all?) Japanese schools don't have janitors. Instead students clean on rotation. Never been much into Japanese stuff but I absolutely admire this about their culture, and imagine it's part of the reason that Japan is such a clean and at least superficially respectful society.
Living in other Asian nations where there are often de facto invisible caste systems can be nauseating at times - you have parents that won't allow their children to participate in clean-up efforts because their child is 'above handling trash.' That's gonna be one well-adjusted adult...
Perhaps this is what happens when someone creates a mega-sized website comprising hundreds of millions of pages using other peoples' submitted material, effectively creating a website that is too large to "moderate". By letting the public publish their material on someone else's mega-sized website instead of hosting their own, perhaps it concentrates the web audience to make it more suitable for advertising. Perhaps if the PTSD-causing material was published by its authors on the authors' own websites, the audience would be small, not suitable for advertising. A return to less centralised web publishing would perhaps be bad for the so-called "ad ecosystem" created by so-called "tech" company intermediaries. To be sure, it would also mean no one in Kenya would be intentionally be subjected to PTSD-causing material in the name of fulfilling the so-called "tech" industry's only viable "business model": surveillance, data collection and online ad services.
It's a problem when you don't verify the identity of your users and hold them responsible for illegal things. If Facebook verified you were John D SSN 123-45-6789, they could report you for uploading CSAM and otherwise permanently block you from using the site if you upload objectionable material, meaning exposure to horrific things is only necessary once per banned user. I would expect that to be orders of magnitude less than what they deal with today.
You can thank IRL privacy activists for the lack of cameras in every room in each house; Just imagine how much faster domestic disputes could be resolved!
Sure, there’s a cost-benefit to it. We think that privacy is more important than rapid resolution of domestic disputes and we think that privacy is more important than stopping child porn. That’s fine as a statement.
Rubbish. The reason Facebook doesn't want to demand ID for most users is that it adds friction to using their product, which means fewer users and less profit.
A return to less centralized web publishing would also be bad for the many creators who lack the technical expertise or interest to jump through all the hoops required for building and hosting your own website. Maybe this seems like a pretty small friction to the median HN user, but I don't think it's true for creators in general, as evidenced by the enormous increase in both the number and sophistication of online creators over the past couple of decades.
Is that increase worth traumatizing moderators? I have no idea. But I frequently see this sentiment on HN about the old internet being better, framed as criticism of big internet companies, when it really seems to be at least in part criticism of how the median internet user has changed -- and the solution, coincidentally, would at least partially reverse that change.
Introducing a free unlimited hosting service where you could only upload pictures, text or video. There's a public page to see that content among ads and links to your friends' free hosting service pages. The TOS is a give-give: you give them the right to extract all the aggregated stats they want and display the ads, and they give you the service for free so you own your content (and are legally responsible for it).
Conversely, those who are subjected to harsh conditions often develop a cynical view of humanity, one lacking empathy, which also perpetuates the same harsh conditions. It's almost like protection and subjection aren't the salient dimensions, but rather there is some other perspective that better explains the phenomenon.
Just scrolled a lot to find this. And I do believe that moderators in a not-so-safe country have seen a lot in their lives. But that should also make them less vulnerable to this kind of exposure, and it looks like it does not.
I tend to agree with growth through realism, but people often have the means and ability to protect themselves from these horrors. I'm not sure you can systemically prevent this without resorting to big brother shoving propaganda in front of people and forcing them to consume it.
> The moderators from Kenya and other African countries were tasked from 2019 to 2023 with checking posts emanating from Africa and in their own languages but were paid eight times less than their counterparts in the US, according to the claim documents
Why would pay in different countries be equivalent? Pretty sure FB doesn’t even pay the same to their engineers depending on where in the US they are, let alone which country. Cost of living dramatically differs.
Some products have factories in multiple countries. For example, Teslas are produced in both US and China. The cars produced in both countries are more or less identical in quality. But do you ever see that the market price of the product is different depending on the country of manufacture?
If the moderators in Kenya are providing the same quality labor as those from the US, why the difference in price of their labor?
I have a friend who worked for FAANG and had to temporarily move from US to Canada due to visa issues, while continuing to work for the same team. They were paid less in Canada. There is no justification for this except that the company has price setting power and uses it to exploit the sellers of labor.
A million things factor into market dynamics. I don’t know why this is such a shocking or foreign concept. Why is a waitress in Alabama paid less than in San Francisco for the same work? It’s a silly question because the answers are both obvious and complex.
Because people chose to take the jobs, so presumably they thought it was fair compensation compared to alternatives. Unless there's evidence they were coerced in some way?
Note that I'm equating all jobs here. No amount of compensation makes it worth seeing horrible things. They are separate variables.
No amount? So you wouldn't accept a job to moderate Facebook for a million dollars a day? If you would, then surely you would also do it for a lower number. There is an equilibrium point.
Sorry, but I don't believe you. You could work for a month or two and retire. Or hell, just do it for one day and then return to your old job. That's a cool one mill in the bank.
> work for a month or two and retire --> This is a dream of many, but there exists a set of people that really like their job and have no intention to retire.
> just do it for one day and then return to your old job. --> Cool mill in the bank and dreadful images in your head. Perhaps Apitman feels he has enough cash and won't be happier with a million (more?).
Also, your point is true, but it ignores that Facebook has no interest in raising that number. I guess it was more a theoretical reflection than an argument about the concrete economics.
Because that’s the only reason why anyone would hire them. If you’ve ever worked with this kind of contract workforce they aren’t really worth it without massive cost-per-unit-work savings. I suppose one could argue it’s better that they be unemployed than work in this job but they always choose otherwise when given the choice.
You haven't actually explained why it's bad, only slapped an evil sounding label on it. What's "exploitative" in this case and why is it morally wrong?
>they're imprisoned within borders
What's the implication of this then? That we remove all migration controls?
Of course. Not all at once, but gradually over time like the EU has begun to do. If capital and goods are free to move, then so must labor be. The labor market is very far from free if you think about it.
If that's the case then there can also be no ethical employment, either, both for employer and for employee. So that would seem to average out to neutrality.
That's only exploitation if you combine it with the fact of the enclosure of the commons and that all land and productive equipment on Earth is private or state property and that it's virtually impossible to just go farm or hunt for yourself without being fucked with anymore, let alone do anything more advanced without being shut down violently.
>the enclosure of the commons and that all land and productive equipment on Earth is private or state property and that it's virtually impossible to just go farm or hunt for yourself without being fucked with anymore, let alone do anything more advanced without being shut down violently.
How would land allocation work without "enclosure of the commons"? Does it just become a free-for-all? What happens if you want to use the land for grazing but someone else wants it for growing crops? "enclosure of the commons" conveniently solves all these issues by giving exclusive control to one person.
Elinor Ostrom covered this extensively in her Nobel Prize-winning work if you are genuinely interested. Enclosure of the commons is not the only solution to the problems.
That's actually an interesting question. I would love to see some data on whether it really is impossible for the average person to live off the land if they wanted to.
An adjacent question is whether there are too many people on the planet for that to be an option anymore even if it were legal.
Probably way, way over the line. Population sizes exploded after the agricultural revolution. I wouldn't be surprised if the maximum is like 0.1-1% of the current population. If we're talking about strictly eating what's available without any cultivation at all, nature is really inefficient at providing for us.
Worked at PornHub's parent company for a bit and the moderation floor had a noticeable depressive vibe. Huge turnover. Can't imagine what these people were subjected to.
You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.
I will go ahead and assume that, in the wild/carefree days of PornHub, when anyone could upload anything and everything, pedophilia videos, bestiality, etc. were rampant, going by what that lady said.
Yeah, it was during that time, before the great purge. It's not just sexual depravity, people used that site to host all kinds of videos that would get auto-flagged anywhere else (including, the least of it, full movies).
> You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.
Laila Mickelwait is a director at Exodus Cry, formerly known as Morality in Media (yes, that's their original name). Exodus Cry/Morality in Media is an explicitly Christian organization that openly seeks to outlaw all forms of pornography, in addition to outlawing abortion and many gay rights including marriage. Their funding comes largely from right-wing Christian fundamentalist and fundamentalist-aligned groups.
Aside from the fact that she has an axe to grind, both she (as an individual) and the organization she represents have a long history of misrepresenting facts or outright lying in order to support their agenda. They also intentionally and openly refer to all forms of sex work (from consensual pornography to stripping to sexual intercourse) as "trafficking", against the wishes of survivors of actual sex trafficking, who have extensively documented why Exodus Cry actually perpetuates harm against sex trafficking victims.
> everything, from what that lady said, the numbers of pedophilia videos, bestiality, etc. was rampant.
This was disproven long ago. Pornhub was actually quite good about proactively flagging and blocking CSAM and other objectionable content.
Ironically (although not surprisingly, if you're familiar with the industry), Facebook was two to three orders of magnitude worse than Pornhub.
But of course, Facebook is not targeted by Exodus Cry because their mission - as you can tell by their original name of "Morality in Media" - is to ban pornography on the Internet, and going after Facebook doesn't fit into that mission, even though Facebook is actually way worse for victims of CSAM and trafficking.
I have a throwaway Facebook account. In the absence of any other information as to my interests, Facebook thinks I want to see flat earth conspiracy theories and CSAM.
When I report the CSAM, I usually get a response that says "we've taken a look and found that this content doesn't go against our Community Standards."
They should probably hire more part time people working one hour a day?
Btw, it’s probably a different team handling copyright claims, but my run-in with Meta’s moderation gives me the impression that they’re probably horrifically understaffed. I was helping a Chinese content creator friend taking down Instagram, YouTube and TikTok accounts re-uploading her content and/or impersonating her (she doesn’t have any presence on these platforms and doesn’t intend to). Reported to TikTok twice, got it done once within a few hours (I was impressed) and once within three days. Reported to YouTube once and it was handled five or six days later. No further action was needed from me after submitting the initial form in either case. Instagram was something else entirely; they used Facebook’s reporting system, the reporting form was the worst, it asked for very little information upfront but kept sending me emails afterwards asking for more information, then eventually radio silence. I sent follow-ups asking about progress, again, radio silence. Impersonation account with outright stolen content is still up till this day.
I’m wondering if, like looking out from behind a blanket at horror movies, getting a moderately blurred copy of images would reduce the emotional punch of highly inappropriate pictures. Or just scaled down tiny.
If it’s already clearly bad when blurred or viewed as a thumbnail, don’t click through to the real thing.
This is more or less how police do CSAM classification now. They start with thumbnails, and that's usually enough to determine whether the image is a photograph or an illustration, involves penetration, sadism etc without having to be confronted with the full image.
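As a rough illustration of the "blurred preview first" idea: something like the sketch below (using Pillow; the file names, thumbnail size, and blur radius are illustrative assumptions, not any platform's actual tooling) could sit in front of a review queue, so the full-resolution image is only opened when the blurred thumbnail isn't enough to decide.

```python
from PIL import Image, ImageFilter

def make_review_preview(path, thumb_size=(128, 128), blur_radius=12):
    """Produce a small, heavily blurred preview so a reviewer can triage
    an item without being confronted with it at full fidelity."""
    img = Image.open(path)
    img.thumbnail(thumb_size)                        # shrink in place
    return img.filter(ImageFilter.GaussianBlur(blur_radius))

# The reviewer only opens the original if the preview isn't already conclusive.
make_review_preview("queued_upload.jpg").save("queued_upload_preview.jpg")
```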
We’re talking about Facebook here. You shouldn’t have the assumption that the platform should be “uncensored” when it clearly is not.
Furthermore, I’d rather have the picture of my aunt’s vacation taken down by an AI mistake than have hundreds of people getting PTSD because they have to manually review, on an hourly basis, whether some decapitation was real or illustrated.
Then what is freedom of speech if every platform deletes your content? Does it even exist? Facebook and co. are so ubiquitous, we shouldn't just apply normal laws to them. They are bigger than governments.
Freedom of speech means that the government can't punish you for your speech. It has absolutely nothing to do with your speech being widely shared, listened to, or even acknowledged. No one has the right to an audience.
Not if we retain control and each deploy our own moderation individually, relying on trust networks to pre-filter. That probably won't be allowed to happen, but in a rational, non-authoritarian world, this is something that machine learning can help with.
The solution to most social media problems in general is:
`SELECT * FROM posts WHERE author_id IN (:follow_ids) ORDER BY date DESC`
At least 90% of the ills of social media are caused by using algorithms to prioritize content and determine what you're shown. Before these were introduced, you just wouldn't see these types of things unless you chose to follow someone who chose to post it, and you didn't have people deliberately creating so much garbage trying to game "engagement".
Currently content is flagged and moderators decide whether to take it down. Using AI, it's easy to conceive of a process where some uploaded content is pre-flagged and requires an appeal (otherwise it's the same as before, with a pair of human eyes automatically looking at uploaded material).
Uploaders trying to publish rule-breaking content would not bother with an appeal that would reject them anyway.
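A minimal sketch of the kind of triage being described, assuming a classifier that outputs a violation score; the threshold values, labels, and queue names are made up for illustration, not any platform's actual pipeline:

```python
AUTO_BLOCK = 0.98   # near-certain violation: block immediately, let the uploader appeal
AUTO_ALLOW = 0.05   # near-certain benign: publish with no human eyes on it

def triage(score: float) -> str:
    """Route an upload based on the classifier's violation score (0..1)."""
    if score >= AUTO_BLOCK:
        return "blocked_pending_appeal"   # a human only gets involved if an appeal is filed
    if score <= AUTO_ALLOW:
        return "published"
    return "human_review_queue"           # only the ambiguous middle band reaches a person
```

Under a scheme like this, most clearly rule-breaking uploads would never reach a reviewer at all, which is the point the comment above is making.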
Because edge cases exist, and it isn't worth it for a company to hire enough staff to deal with them: one user with a problem, even if that problem is highly impactful to their life, just doesn't matter when the user is effectively the product and not the customer. Once the AI works well enough, the staff is gone, and the cases where someone's business or reputation gets destroyed because there is no way to appeal a wrong decision by a machine get ignored. And of course 'the computer won't let me' or 'I didn't make that decision' is a great way for no one to ever have to feel responsible for any harms caused by such a system.
This and social media companies in the EU tend to just delete stuff because of draconian laws where content must be deleted in 24 hours or they face a fine. So companies would rather not risk it. Moderators also only have a few seconds to decide if something should be deleted or not.
I already addressed this and you're talking over it. Why are you making the assumption that AI == no appeal and zero staff? That makes zero sense, one has nothing to do with the other. The human element comes in for appeal process.
> I already addressed this and you're talking over it.
You didn't address it, you handwaved it.
> Why are you making the assumption that AI == no appeal and zero staff?
I explicitly stated the reason -- it is cheaper and it will work for the majority of instances while the edge cases won't result in losing a large enough user base that it would matter to them.
I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.
> That makes zero sense, one has nothing to do with the other.
"Cheaper, mostly works, and the losses from people leaving are not more than the money saved by removing support staff" makes perfect sense, and the two things are related to each other like identical twins are related to each other.
> The human element comes in for appeal process.
What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time? Corporations don't exist to do the right thing or to make people happy, they are extracting value and giving it to their shareholders. The shareholders don't care about anything else, and the way I described returns more money to them than yours.
> I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.
Their copyright takedown system has been around for many years and wasn't contingent on AI. It's a "take-down now, ask questions later" policy to please the RIAA and other lobby groups. Illegal/abuse material doesn't profit big business, their interest is in not having it around.
You deliberately conflated moderation & appeal process from the outset. You can have 100% AI handling of suspect uploads (for which the volume is much larger) with a smaller staff handling appeals (for which the volume is smaller), mixed with AI.
Frankly if your hypothetical upload is still rejected after that, it 99% likely violates their terms of use, in which case there's nothing to say.
> it is cheaper
A lot of things are "cheaper" in one dimension irrespective of AI, doesn't mean they'll be employed if customers dislike it.
> the money saved by removing support staff makes perfect sense and the two things are related to each other like identical twins are related to each other.
It does not make sense to have zero staff as part of managing an appeal process (precisely to deal with edge cases and the fallibility of AI), and it does not make sense to have no appeal process.
You're jumping to conclusions. That is the entire point of my response.
> What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time?
AI isn't there yet, notwithstanding, if they did a good job 98% of the time then who cares? No one.
There's a huge gap between "we will scan our servers for illegal content" and "your device will scan your photos for illegal content" no matter the context. The latter makes the user's device disloyal to its owner.
The choice was between "we will upload your pictures unencrypted and do with them as we like, including scan them for CSAM" vs. "we will upload your pictures encrypted and keep them encrypted, but will make sure beforehand on your device only that there's no known CSAM among it".
> we will upload your pictures unencrypted and do with them as we like
Curious, I did not realize Apple sent themselves a copy of all my data, even if I have no cloud account and don't share or upload anything. Is that true?
Apple doesn't do this. But other service providers do (Dropbox, Google, etc).
Other service providers can scan for CSAM from the cloud, but Apple cannot. So Apple might be one of the largest CSAM hosts in the world, due to this 'feature'.
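For what it's worth, the "scan for known CSAM" step in these systems is a lookup against a list of hashes of already-identified material (distributed by clearinghouses such as NCMEC), not a model guessing about new images. Real deployments use perceptual hashes (PhotoDNA, Apple's NeuralHash) so resized or re-encoded copies still match; the sketch below uses plain SHA-256 only to show the shape of the check, and the hash list entries are placeholders:

```python
import hashlib

KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb924...",  # placeholder entries, not real hashes
}

def matches_known_material(image_bytes: bytes) -> bool:
    """Exact-hash stand-in for the perceptual-hash match real systems perform."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```

Server-side scanning runs this kind of check over stored files; the contested Apple design would have run the (perceptual) equivalent on the device before upload.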
Apple is already categorizing content on your device. Maybe they don't report what categories you have. But I know if I search for "cat" it will show me pictures of cats on my phone.
"In 2023, ESPs submitted 54.8 million images to the CyberTipline of which 22.4 million (41%) were unique. Of the 49.5 million videos reported by ESPs, 11.2 million (23%) were unique."
And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they. The slippery slope argument only applies if the slope is slippery.
This is analogous to the police's use of genealogy and DNA data to narrow searches for murderers, who they then collected evidence on by other means. There is risk there, but (at least in the US) you aren't going to find a lot of supporters of the anonymity of serial killers and child abusers.
There are counter-arguments to be made. Germany is skittish about mass data collection and analysis because of their perception that it enabled the Nazi war machine to micro-target their victims. The US has no such cultural narrative.
> And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they.
I wouldn't be so sure.
When Apple was going to introduce on-device scanning they actually proposed to do it in two places.
• When you uploaded images to your iCloud account they proposed scanning them on your device first. This is the one that got by far the most attention.
• The second was to scan incoming messages on phones that had parental controls set up. The way that would have worked is:
1. if it detects sexual images it would block the message, alert the child that the message contains material that the parents think might be harmful, and ask the child if they still want to see it. If the child says no that is the end of the matter.
2. if the child says they do want to see it and the child is at least 13 years old, the message is unblocked and that is the end of the matter.
3. if the child says they do want to see it and the child is under 13 they are again reminded that their parents are concerned about the message, again asked if they want to view it, and told that if they view it their parents will be told. If the child says no that is the end of the matter.
4. If the child says yes the message is unblocked and the parents are notified.
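The four steps above amount to a small decision function. A sketch, assuming the ages and prompts exactly as described in the list (the function and field names are mine, not Apple's):

```python
def handle_flagged_message(child_age: int, wants_to_view: bool,
                           confirms_after_warning: bool = False) -> dict:
    """Decision flow for a message the on-device classifier thinks is sexual,
    received on an account with parental controls enabled."""
    if not wants_to_view:
        return {"shown": False, "parents_notified": False}   # step 1: child declines
    if child_age >= 13:
        return {"shown": True, "parents_notified": False}    # step 2: unblocked, no notification
    if not confirms_after_warning:
        return {"shown": False, "parents_notified": False}   # step 3: under-13 declines second prompt
    return {"shown": True, "parents_notified": True}         # step 4: shown, parents notified
```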
This second one didn't get a lot of attention, probably because there isn't really much to object to. But I did see one objection from a fairly well known internet rights group. They objected to #4 on the grounds that the person sending the sex pictures to your under-13 year old child sent the message to the child, so it violates the sender's privacy for the parents to be notified.
No, they had backlash against using AI on devices they don’t own to report said devices to police for having illegal files on them. There was no technical measure to ensure that the devices being searched were only being searched for CSAM, as the system can be used to search for any type of images chosen by Apple or the state. (Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}.)
That’s a very very different issue.
I support big tech using AI models running on their own servers to detect CSAM on their own servers.
I do not support big tech searching devices they do not own in violation of the wishes of the owners of those devices, simply because the police would prefer it that way.
It is especially telling that iCloud Photos is not end to end encrypted (and uploads plaintext file content hashes even when optional e2ee is enabled) so Apple can and does scan 99.99%+ of the photos on everyone’s iPhones serverside already.
> Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}
It hasn’t been redefined. The legal definition of it in the UK, Canada, Australia, New Zealand has included computer generated imagery since at least the 1990s. The US Congress did the same thing in 1996, but the US Supreme Court ruled in the 2002 case of Ashcroft v Free Speech Coalition that it violated the First Amendment. [0] This predates GenAI because even in the 1990s people saw where CGI was going and could foresee this kind of thing would one day be possible.
Added to that: a lot of people misunderstand what that 2002 case held. SCOTUS case law establishes two distinct exceptions to the First Amendment – child pornography and obscenity. The first is easier to prosecute and more commonly prosecuted; the 2002 case held that "virtual child pornography" (made without the use of any actual children) does not fall into the scope of the child pornography exception – but it still falls into the scope of the obscenity exception. There is in fact a distinct federal crime for obscenity involving children as opposed to adults, 18 USC 1466A ("Obscene visual representations of the sexual abuse of children") [1] enacted in 2003 in response to this decision. Child obscenity is less commonly prosecuted, but in 2021 a Texas man was sentenced to 40 years in prison over it [2] – that wasn't for GenAI, that was for drawings and text, but if drawings fall into the legal category, obviously GenAI images will too. So actually it turns out that even in the US, GenAI materials can legally count as CSAM, if we define CSAM to include both child pornography and child obscenity – and this has been true since at least 2003, long before the GenAI era.
Thanks for the information. However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity. If no other crime (like against a real child) is committed in creating the content, what makes it different from any other speech?
> However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity
If you look at the question from an originalist viewpoint: did the legislators who drafted the First Amendment, and voted to propose and ratify it, understand it as an exceptionless absolute or as subject to reasonable exceptions? I think if you look at the writings of those legislators, the debates and speeches made in the process of its proposal and ratification, etc, it is clear that they saw it as subject to reasonable exceptions – and I think it is also clear that they saw obscenity as one of those reasonable exceptions, even though they no doubt would have disagreed about its precise scope. So, from an originalist viewpoint, having some kind of obscenity exception seems very constitutionally justifiable, although we can still debate how to draw it.
In fact, I think from an originalist viewpoint the obscenity exception is on firmer ground than the child pornography exception, since the former is arguably as old as the First Amendment itself is, the latter only goes back to the 1982 case of New York v. Ferber. In fact, the child pornography exception, as a distinct exception, only exists because SCOTUS jurisprudence had narrowed the obscenity exception to the point that it was getting in the way of prosecuting child pornography as obscene – and rather than taking that as evidence that maybe they'd narrowed it a bit too far, SCOTUS decided to erect a separate exception instead. But, conceivably, SCOTUS in 1982 could have decided to draw the obscenity exception a bit more broadly, and a distinct child pornography exception would never have existed.
If one prefers living constitutionalism, the question is – has American society "evolved" to the point that the First Amendment's historical obscenity exception ought to be jettisoned entirely, as opposed to merely being read narrowly? Does the contemporary United States have a moral consensus that individuals should have the constitutional right to produce graphic depictions of child sexual abuse, for no purpose other than their own sexual arousal, provided that no identifiable children are harmed in its production? I take it that is your personal moral view, but I doubt the majority of American citizens presently agree – which suggests that completely removing the obscenity exception, even in the case of virtual CSAM material, cannot currently be justified on living constitutionalist grounds either.
My understanding was that the concern was the false-positive (FP) risk. The hashes were computed on device, but the device would self-report to LEO if it detected a match.
People designed images that were FPs of real images. So apps like WhatsApp that auto-save images to photo albums could cause people a big headache if a contact shared a legal FP image.
No, the point of on-device scanning is to enable authoritarian government overreach via a backdoor while still being able to add “end to end encryption” to a list of product features for marketing purposes.
If Apple isn’t free to publish e2ee software for mass privacy without the government demanding they backdoor it for cops on threat of retaliation, then we don’t have first amendment rights in the USA.
Already happened/happening. I have an ex-coworker that left my current employer for my state's version of the FBI. Long story short, the government has a massive database to crosscheck against. Oftentimes, they would use automated processes to filter through suspicious data they would collect during arrests.
If the automated process flags something as a potential hit, then they, the humans, would review those images to verify. Every image/video that is discovered to be a hit is inserted into a larger dataset as well. I can't remember if the Feds have their own DB (why wouldn't they?), but the National Center for Missing and Exploited Children runs a database that I believe government agencies use too. Not to mention, companies like Dropbox, Google, etc. all check against the database(s) as well.
Borrowing the thought from Ed Zitron, but when you think about it, most of us are exposing ourselves to low-grade trauma when we step onto the internet now.
What's more, popular TV shows regularly have scenes that could cause trauma; the media has been ramping up the intensity of content for years. I think it's simply seeking more word of mouth: 'did you see GoT last night? Oh my gosh so and so did such and such to so and so!'
That's the risk of being in a society in general, it's just that we interact with people outside way less now. If one doesn't like it, they can always be a hermit.
Not just that, but that algorithms are driving us to the extremes. I used to think it was just that humans were not meant to have this many social connections, but it's more about how these connections are mediated, and by whom.
Worth reading Zitron's essay if you haven't already. It sounds obvious, but the simple cataloging of all the indignities we take for granted builds up to a bigger condemnation than just Big Tech.
https://www.wheresyoured.at/never-forgive-them/
Is there any way to look at this that doesn't resort to black or white thinking? That's a rather extreme view in itself that could use some nuance and moderation.
There have been multiple instances where I would receive invites or messages from obvious bots - users having no history, generic name, sexualised profile photo. I would always report them to Facebook just to receive a reply an hour or a day later that no action has been taken. This means there is no human in the pipeline and probably only the stuff that's not passing their abysmal ML filter goes to the actual people.
I also have a relative who is stuck with their profile being unable to change any contact details, neither email nor password because FB account center doesn't open for them. Again, there is no human support.
BigTech companies should be mandated by law to keep a number of live, reachable support people that is a fixed fraction of their user count. Then they would have no incentive to inflate their user numbers artificially. As for the moderators, there should also be a strict upper limit on the amount of content (content tokens, if you will) they view during their work day. Then the companies would also be more willing to limit the amount of content on their systems.
Yeah, it's bad business for them but it's a win for the people.
I have several friends who do this work for various platforms.
The problem is, someone has to do it. These platforms are mandated by law to moderate it or else they're responsible for the content the users post. And the companies can not shield their employees from it because the work simply needs doing. I don't think we can really blame the platforms (though I think the remuneration could be higher for this tough work).
The work tends to suit some people better than others. The same way some people will not be able to be a forensic doctor doing autopsies. Some have better detachment skills.
All the people I know that do this work have 24/7 psychologists on site (most of them can't work remotely due to the private content they work with). I do notice though that most of them do have an "Achilles heel". They tend to shrug most things off without a second thought but there's always one or two specific things or topics that haunt them.
Hopefully eventually AI will be good enough to deal with this shit. It sucks for their jobs of course, but it's not the kind of job anyone really does with pleasure.
Uhh no I'm not giving up my privacy because a few people want to misbehave. Screw that. My friends know who I am but the social media companies shouldn't have to.
Also, it'll make social media even more fake than it already is. Everyone trying to be as fake as possible. Just like LinkedIn is now. It's sickening, all these people toeing the company line. Even though they do nothing but complain when you speak to them in person.
And I don't think it'll actually solve the problem. People find ways to get through the validation with fake IDs.
So brown/black people in the third world who often find that this is their only meaningful form of social mobility are the "someone" by default? Because that's the de-facto world we have!
That's not true at all. All the people I speak of are here in Spain. They're generally just young people starting a career. Many of them end up in the fringes of cybersecurity work (user education etc) actually because they've seen so many scams. So it's the start of a good career.
Sure, some companies also outsource to Africa, but that doesn't mean this work is only available in third-world countries. And there aren't that many jobs in it. It's more than possible to find enough people who can stomach it.
There was another article a few years back about the poor state of mental health of Facebook moderators in Berlin. This is not exclusively a poor people problem. More of a wrong people for the job problem.
And of course we should look more at why this is the only form of social mobility for them if it's really the case.
> post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism.
If you want a taste of the legal portion of this, just go to 4chan.org/gif/catalog and look for a "rekt", "war", "gore", or "women hate" thread. Watch every video there for 8-10 hours a day.
Now remember this is the legal portion of the content moderated as 4chan does a good job these days of removing illegal content mentioned in that list above. So all these examples will be a milder sample of what moderators deal with.
And do remember to browse for 8-10 hours a day.
edit: it should go without saying that the content there is deep in the NSFW territory, and if you haven't already stumbled upon that content, I do not recommend browsing "out of curiosity".
As someone that grew up with 4chan I got pretty desensitized to all of the above very quickly. Only thing I couldn’t watch was animal abuse videos. That was all years ago though, now I’m fully sensitized to all of it again.
These accounts like yours and this report of PTSD don't line up. Both of them are credible. What's driving them crazy but not Old Internet vets?
Could it be:
- the fact that moderators are hired and paid
- that kids are young and a lot more tolerant
- that moderators aren't intended audiences
- backgrounds, sensitivity in media at all
- the amount of disturbing images
- the amount in total, not just the bad ones
- anything else?
Personally, I'm suspecting that difference in exposure to _any kind of media_ might be a factor; I've come across stories online that imply visiting and staying at places like Tokyo can almost drive people crazy, from the amount of stimuli alone.
Doesn't it sound a bit too shallow and biased to determine it was specifically CSAM or whatever specific type of data that did it?
Of course not. What drew me in was the edginess. What kept me there was the very dark but funny humor. This was in 2006-2010, it was all brand new, it was exciting.
I have a kid now and my plan is to not give her a smartphone/social media till she’s 16 and to heavily monitor internet access until she’s at least 12. Obviously I can’t control what she will see with friends but she goes to a rigorous school and I’m hoping that will keep her busy. Other than that I’m hoping the government comes down hard on social media access to kids/teenagers and all the restrictions are legally codified by the time she’s old enough.
The point is that you don't know which one will stick. Even people who are desensitized will remember certain things, a person's facial expression or a certain sound or something like that, and you can't predict which one will stick with you.
One terrible aspect of online content moderation is that, no matter how good AI gets and no matter how much of this work we can dump in its lap, to a certain extent there will always need to be a "human in the loop".
The sociopaths of the world will forever be coming up with new and god-awful types of content to post online, which current AI moderators haven't encountered before and which therefore won't know how to classify. It will therefore be up to humans to label that content in order to train the models to handle that new content, meaning humans will have to view it (and suffer the consequences, such as PTSD). The alternative, where AI labels these new images and then uses those AI-generated labels to update the model, famously leads to "model collapse" [1].
Short of banning social media at a societal level, or abstaining from it at an individual level, I don't know that there's any good solution to this problem. These poor souls are taking a bullet for the rest of us. God help them.
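The labeling loop described above is essentially active learning with human annotators in the loop. A minimal sketch, where `model`, `human_queue`, and the confidence cutoff are hypothetical stand-ins rather than any real system's API:

```python
def route_for_labeling(item, model, human_queue, training_set, cutoff=0.6):
    """Send low-confidence items to human labelers; only their labels
    (never the model's own guesses) go back into the training set,
    which is what avoids the self-training collapse mentioned above."""
    label, confidence = model.predict(item)
    if confidence < cutoff:
        human_label = human_queue.ask(item)        # a person has to look at it
        training_set.append((item, human_label))   # ground truth for the next model version
        return human_label
    return label
```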
Autistic people do have empathy, it just works differently. Most of them are actually really caring, just not very good at showing it. Nor at picking up others' feelings. But they do care about them in my experience.
Most of them I know will have more difficulty with this type of work, not less. Because they don't tend to process it as well. This includes myself as I do have some autistic tendencies. No way I could do this.
You think people who took these jobs had a list of job offers and were jumping for joy to be able to pick this one out? Or that they stuck with it after the first 5 minutes of moderating necrophilia because they believed other jobs would have similar downsides? You’re so out of touch with the real world and hardships people face trying to make a living for themselves and their family.
I’m curious of other perspectives and conclusions on this.
Why do you think Facebook is the responsible party and not the purveyors of the content that caused them PTSD? From my perspective, Facebook hired people to prevent this content from reaching a wider audience. Thanks for any insight you can provide.
I never said Facebook is the responsible party. I’m saying these workers deserve our sympathy and I’m saying it’s not a case of people who had a simple choice but willingly chose a job that caused them PTSD.
I don’t think Facebook is blameless though. They practically brag about their $40B of AI spend per year and absolutely brag about how advanced their AI is. You can’t focus some of your R&D on flagging content that’s instantly recognizable as disgusting, like pedophilia, necrophilia, and bestiality? There’s already a ton of pre-labeled data they can use from all these workers. No, they don’t get a pass on that. I think it’s shameful they focus all their AI compute and engineering on improving targeted ads and don't put a major focus on solving this specific problem that’s directly hurting so many people.
While that would solve the problem within Facebook, I think you're kidding yourself if you think that's going to stop the demand or supply of horrible content.
If others want to moderate why should these complainers get in the way? They are free to not take the job, which obviously involves looking at repulsive content so others don’t have to. Most people don't have a problem with social media existing or moderators having the job of a moderator.
At first glance you may have a point. Thing is they’re often recruited with very promising job titles and descriptions, training on mild cases. Once they fully realize what they got themselves into the damage has been done. If they’re unlucky, quitting also means losing their house.
This may help empathize a bit with their side of this argument.
If you pay someone to deliver post and they get their leg blown off because you ordered them to go through a minefield, you can’t just avoid responsibility by saying “that’s what they signed up for”. Obviously the responsibility for ensuring that the job can be carried out safely lies with the employer, and workers are well within reason to demand compensation if the employer hasn’t ensured the job can be safely carried out.
I think a better example is mining, where miners received no safety equipment, and the mines were not built with safety foremost.
The idea was, if you didn't like it, leave. If you wanted safety equipment, buy it yourself. Or leave. Can't work due to black lung disease partially from poor ventilation the company was responsible for? You're fired; should have left years ago.
There are still people who believe the contract is all that counts, nothing else matters, and if you don't like it, leave.
> It’s the job they signed up for. I don’t understand the complaint. If they don’t want to do the part of the job that is obviously core to it, they should move on. The mass diagnosis just seems like a tactic to generate “evidence”. And the mention of pay compared to other countries makes this look like a bad faith lawsuit to get more compensation.
It's also their right to sue their employer for damages if they believe the work affected them in an extremely harmful way. Signing up for a job doesn't make the employer above the law.
But some here can't fathom that workers also have rights.
They aren’t exploited. They’re paid money in return for viewing and filtering content for others. They could not apply or decline the offer and look at other jobs. The availability of this job doesn’t change the rest of their employment options. But it’s pretty clear what this job is. If it was just looking at friendly content, it wouldn’t need to exist.
Exploitation nearly always involves paying. Plenty of people caught up in sex trafficking still get paid, they just don't have a viable way out. Plenty of people working in sweat shops still get paid, but again not enough with enough viable alternatives to get out.
You’re still not acknowledging the key points - that it is obvious up front that the job fundamentally involves looking at content others don’t want to, and that it is a new job that can be accepted or avoided without taking away from other employment opportunities. Therefore it doesn’t match these other situations you’re drawing a comparison to.
Most of these people are forced to take these jobs, because nothing else is available, they don't have the power to avoid this job. You cannot make a principled decision if your basic needs, or those of your family are not met. In fact, many well-off, privileged people who are simply stressed cannot make principled decisions if their livelihood is at stake.
The world is not a tabula rasa where every decision is made in isolation, you can't just treat this like a high school debate team exercise.
Not acknowledging the social factors at play is bordering on bad faith in this case. The social conditions of the moderators is _the_ key factor in this discussion. The poorer you are, the more likely you are to be forced to take a moderator job, the more likely you are to get PTSD. Our social and economic systems are essentially punishing people for being poor.
It isn’t known in advance though. These people went to that job and got psychiatric diseases that, considering the thirdworldiness, they are unlikely to get rid of.
I’m not talking about obvious “scream and run away” reaction here. One may think that it doesn’t affect them or people on the internet, but then it suddenly does after they binge it all day for a year.
The fact that no less than 100% of them got PTSD should tell us something here.
The 100+ years of research on PTSD, starting from shell shock studies in WWI shows that PTSD isn't so simple.
Some people come out with no problems, while their trenchmate facing almost identical situations suffers for the rest of their lives.
In this case, the claim is that "it traumatised 100% of hundreds of former moderators tested for PTSD … In any other industry, if we discovered 100% of safety workers were being diagnosed with an illness caused by their work, the people responsible would be forced to resign and face the legal consequences for mass violations of people’s rights."
Do those people you know look at horrible pictures on the internet for 8-10 hours each day?
> Perhaps if looking at pictures of disturbing things on the internet gives you PTSD than this isn’t the kind of job for you?
Perhaps these are jobs people are forced to do because labour isn't paid as richly as in other countries, because they're trafficked, and the like.
> I know lots of people who can and do look at horrible pictures on the internet and have been doing so for 20+ years with no ill effects.
Looking at it is different to moderating it. I've seen my fair share of snuff, from the first Iraqi having their head cut off in 2005 all the way down to the ogrish/liveleak, goatse, tubgirl, and 2girls1cup shock sites.
But when you are faced with imagery of gruesome material day-in day-out on 12-hour shifts if not longer non-stop, being paid very little, it would take a toll on anyone.
I've done it: lone-wolf sysop for an adult dating website for two years, and the stuff I saw was moderate but still made me feel mentally disturbed. The normality wears off very quickly.
Could you work a five days week looking at extreme obscenity imagery for $2 an hour?
The alternative is they have no job. And it is clear what this job entails, so complaining about the main part of the job afterwards, as this small group of moderators is doing, seems disingenuous.
Maybe so, but in places with good civil and human rights, you can't sign them away via contract, they're inalienable. If Kenya doesn't offer these protections, and the allegations are correct, then Facebook deserves to be punished regardless for profiting off inhumane working conditions.
If I was a tech billionaire, and there was so much uploading of stuff so bad, that it was giving my employee/contractors PTSD, I think I'd find a way to stop the perpetrators.
(I'm not saying that I'd assemble a high-speed yacht full of commandos, who travel around the world, righting wrongs when no one else can. Though that would be more compelling content than most streaming video episodes right now. So you could offset the operational costs a bit.)
Large scale and super sick perpetrators exist (as compared to small scale ones who do mildly sick stuff) because Facebook is a global network and there is a benefit to operating on such a large platform. The sicker you are, while getting away with it, the more reward you get.
Switch to federated social systems like Mastodon, with only a few thousand or ten thousand users per instance, and perpetrators will never be able to grow too large. Easy for the moderators to shut stuff down very quickly.
> Switch to a federated social systems like Mastodon, with only a few thousand or ten thousand users per instance, and perpetrators will never be able to grow too large.
The #2 and #3 most popular Mastodon instances allow CSAM.
Tricky. It also gives perpetrators a lot more places to hide. I think the jury is out on whether a few centralized networks or a fediverse makes it harder for attackers to reach potential targets (or customers).
The purpose of facebook moderators (besides legal compliance) is to protect normal people from the "sick" people. In a federated network, of course, such people will create their own instances, and hide there. But then no one is harmed by them, because all such instances will be banned quite quickly, same as all spam email hosts are blocked very quickly by everyone else.
From a normal person perspective on not seeing bad stuff, the design of a federated network is inherently better than a global network.
That's the theory. I'm not sure yet that it works in practice; I've seen a lot of people on Mastodon complaining about how, as a moderator, keeping up with the bad services is a perpetual game of whack-a-mole because access is on by default. Maybe this is a Mastodon-specific issue.
That's because Mastodon or any other federated social network hasn't taken off, and so not enough development has gone into them. If they take off, naturally people will develop analogs of spam lists and SpamAssassin etc for such systems, which will cut down moderation time significantly. I run an org email server, and don't exactly do any thing besides installing such automated tools.
On Mastodon, admins will just have to do the additional work to make sure new accounts are not posting weird stuff.
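The "analogs of spam lists" idea is roughly this: instances subscribe to shared deny lists and refuse federation from known-bad servers, the way mail servers consult RBLs. A minimal sketch with made-up domain names, not Mastodon's actual mechanism:

```python
SHARED_DENYLIST = {"csam-host.example", "spam-mill.example"}  # illustrative entries

def accept_remote_post(origin_domain: str) -> bool:
    """Drop federation traffic from instances on a shared deny list;
    everything else falls through to normal per-post moderation."""
    return origin_domain not in SHARED_DENYLIST
```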
Big tech vastly underspends on this area. You can find a stream of articles from the last 10 years where BigTech companies were allowing open child prostitution, paid-for violence, and other stuff on their platforms with little to no moderation.
If you were a tech billionaire you'd be a sociopath like the others and wouldn't give a single f about this. You'd be going on podcasts to tell the world that markets will fix everything if given the chance.
They are not wrong. Do you know any mechanism other than markets that work at scale and that don’t cost a bomb and don’t involve abusive central authority?
Tech billionaires usually advocate for some kind of return to the gilded age, with minimal workers rights and corporate tax. Markets were freer back then, how did that work out for the average man? Markets alone don't do anything for the average quality of life.
But is it solely because of markets? Would deregulation improve our lives further? I don't think so, and that is what I am talking about. Musk, Bezos, Andreessen and cie. are advocating for a particular laissez-faire flavor of capitalism, which historically has been very bad for the average man.
"More than 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism."
What part here are you suggesting is similar to seeing two men kissing?
Not defending this person in particular, but you should take a look at how anti-LGBT, including at the government level, most countries are in Africa. Maybe a decent number of them do regard seeing homosexuality as PTSD inducing.
There are several places where they legally can and will kill you for homosexuality.
The existence of anti-LGBTQ attitudes wasn't where their argument was leading, though.
Their line of logic was that our society's moral values are but a social construct that changes from place to place and over time, and therefore being exposed to sexual violence, child abuse, gore, etc. is PTSD-inducing only because we're examining it through our limited perspective, whereas it's quite possible that all the things those FB mods were exposed to would be perfectly normal in some cultures.
I wanted to see where that argument would lead them, as in, what kind of people FB should have hired that would be resistant to all this horrible stuff, but other than letting me know that such cultures do in fact exist, I never got a straight answer out of them.
But we're not talking about an 80's christian mom since you proceeded to make the observation that "People are outraged by whatever they are told to be outraged over and ignore everything that they aren't".
Which is to say, being exposed to extreme violent/abusive content could only cause PTSD iff one is subject to a specific social construct that define these acts in a certain way.
Let's just assume you're right, what kind of employees would that imply are immune to getting PTSD from such content given your previous observation?
You're lying and you know it. I remember the 80s as much as anyone else. Especially the part where Elton John and Freddie Mercury were at the peak of their popularity, unless you were living in a religious shithole, but that was (and still is) a small part of the world.
Seeing something you think is culturally wrong is not necessarily traumatizing, is it? And surely there are degrees of "wrongness", ranging from the merely uncomfortable to the truly gross to the utterly horrifying. Even within the horrifying category, one can differentiate between things like a finger being chopped off and e.g. disembowelment. It's reasonable to expect a person would be more traumatized the further up the ladder of horror they're forced to look.
This would be believable if not for the fact that hanging, gutting, and quartering was considered good wholesome family entertainment to watch while getting fast food in the market not three centuries ago, literally everywhere.
Depends seriously on the country, the Netherlands was way ahead there. In many ways more ahead than it is now because it has become so conservative lately.
In what 80s fantasy were you living where gay people were open? Rumor was John was bisexual, and Freddie getting AIDS in the late 80s was a huge deal. Queen's peak is around 1992 with Wayne's World. No men kissed on stage or in movies, and neither did women.
Equating outrage to PTSD is absolute nonsense. As someone that lives with a PTSD sufferer, it is an extremely severe and debilitating condition that has nothing to do with “outrage” and can’t be caused by seeing people kiss.
You’re very wrong, it can be caused by different things to different people- as the causes are emotional it requires severe emotional trauma, which does not have to happen through a specific category of event- a lot of different types of trauma and abuse can cause it.
It’s hard to imagine a more disgusting thought process than someone trying to gatekeep others suffering like you are doing here.
Actually, not everybody gets PTSD in, for example, a combat situation, and Gabor Maté says that people who do develop PTSD are the people who have already suffered traumas as children; in a sense, childhood trauma is a preexisting condition.
A lot of PTSD is also not from combat at all; childhood emotional trauma alone can cause it. This is recognized now, but it took a while because initially it was discovered in war veterans and categorically excluded other groups; eventually they discovered that war wasn't unique in causing the condition.
However, I would point out that Maté's views are controversial, and don't fully agree with other research on trauma and PTSD. He unrealistically associates essentially all mental illness and neurodivergence with childhood trauma, even in cases where good evidence contradicts that view. He claims ADHD is caused by childhood emotional trauma, although that is proven not to be the case, so I don't put much stock in his scientific reasoning abilities; he has his hammer and sees everything as a nail.
HN ethos is to assume good faith, but my imagination is failing me here as to how you might be sincere and not trolling- can you please share more info to help me out?
What makes you think people have experienced clinically diagnosed or diagnosable PTSD from seeing someone kiss? Has anyone actually claimed that?
You used the word outrage, and again outrage is not trauma- it describes an outer reaction, not an inner experience. They’re neither mutually exclusive nor synonymous.
Your assertion seems to be that only being physically present for a horrific event can be emotionally traumatic- that spending years sitting in a room watching media of children being brutally murdered and raped day in and day out is not possibly traumatic, but watching someone kiss that you politically think should be banned from doing so, can be genuinely traumatic?
In this context, this is dangerously close to asserting "people are only outraged about CSAM because they're told to be." I don't think that's what you mean.
I think your logic is backwards. The main reason for a culture to ban pedophilia is because it causes trauma in children. For thousands of years, cultures have progressed towards protecting children. This came from a natural sense of outrage in a majority of people, which became part of the culture. Not vice versa. In many of your comments, you seem to assume that people are only automatons who think and do exactly what their culture teaches them, but that's not the truth. The culture is made up of individuals, and individual conscience frequently - thankfully - overrides cultural diktat. Otherwise no dictatorship would ever fall, no group of people would ever be freed, and no wicked practices would ever be stamped out. It has always been individual people acting against the culture whose outrage has caused the culture to change. Which strongly implies that people's sense of outrage is at least partly intrinsic to human nature, totally apart from cultural practices of the time.
I'm now old enough to have seen people who treated homosexuals in the 1980s the same way we treat pedophiles today start waving rainbow flags and calling the people they beat up for being gay in high school Nazis.
There may be a few people with principles who stick with them.
The majority will happily shove whoever they are told to into a gas chamber.
I'm not saying there aren't a lot of people who are natural conformists, who do whatever they're told to, and hate or love whatever the prevailing culture hates or loves. They may be a majority. And yes, a prevailing culture can take even the revulsion of murder out of people to some extent (although check out the state sanctioned degree of alcohol and drug use among SS officers and you'll see it's not quite so easy to make people do acts of murder and torture every day).
What I am saying is that the conformists don't drive the culture, they're just a blunt weapon of whoever is driving the culture. That weapon can be turned toward gay rights or toward burning people at the stake, but what changes a culture are the individuals with either a conscience or the individuals with wicked plans. Both of which exist outside the mainstream in any time and place.
Maybe another way of saying this is that I think most people are capable of murder and most people are capable of empathy (and therefore trauma) with someone being tortured, but primarily they're concerned with being a good guy. What throws the arc of history towards a higher morality is that maybe >0% of people naturally need to perceive themselves as "good" by defending life and the dignity and humanity of other people, to the extent that needing to be a good person overrides their cultural programming. And those are not the only people who change a culture, but 51% of the time they change it for the better instead of worse.
Wait, why are they calling gay people Nazis? This story is very unclear. And I can't see how it relates to CSAM and the moderators who have to see it, which is a categorically different issue to homosexuality, so different as to be completely unconflatable.
The nature of the job really sucks. This is not unusual; there are lots of sucky jobs. So my concern is really whether the employees were informed what they would be exposed to.
Also I’m wondering why they didn’t just quit. Of course the answer is money, but if they knew what they were getting into (or what they were already into), and chose to continue, why should they be awarded more money?
Finally, if they can’t count on employees in poor countries to self-select out when the job became life-impacting, maybe they should make it a temporary gig, eg only allow people to do it for short periods of time.
My out-of-the-box idea is: maybe companies that need this function could interview with an eye towards selecting psychopaths. This is not a joke; why not select people who are less likely to be emotionally affected? I’m not sure anyone has ever done this before and I also don’t know if such people would be likely to be inspired by the images, which would make this idea a terrible one. My point is find ways to limit the harm that the job causes to people, perhaps by changing how people interact with the job since the nature of the job doesn’t seem likely to change.
So you're expecting these people to have the deep knowledge of human psychology to know ahead of time that this is likely to cause them long term PTSD, and the impact that will have on their lives, versus simply something they will get over a month after quitting?
I don’t think it takes any special knowledge of human psychology to understand that horrific images can cause emotional trauma. I think it’s a basic due diligence question that when considering establishing such a position, one should consult literature and professionals to discover what impact there might be and what might be done to minimize it.
The Kenyan moderators' PTSD reveals the fundamental paradox of content moderation: we've created an enterprise-grade trauma processing system that requires concentrated psychological harm to function, then act surprised when it causes trauma. The knee-jerk reaction of suggesting AI as the solution is, IMO, just wishful thinking - it's trying to technologically optimize away the inherent contradiction of bureaucratized thought control. The human cost isn't a bug that better process or technology can fix - it's the inevitable result of trying to impose pre-internet regulatory frameworks on post-internet human communication that large segments of the population may simply be incompatible with.
I'm wondering if there are precedents in other domains. There are other jobs where you do see disturbing things as part of your duty. E.g. doctors, cops, first responders, prison guards and so on...
What makes moderation different? and how should it be handled so that it reduces harm and risks? surely banning social media or not moderating content aren't options. AI helps to some extent but doesn't solve the issue entirely.
I don’t have any experience with this, so take this with a pinch of salt.
What seems novel about moderation is the frequency that you confront disturbing things. I imagine companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit. And as soon as you’re done with one post, the next is right there. I doubt moderators spend more than 30 seconds on the average image, which is an awful lot of stuff to see in one day.
A doctor just isn’t exposed to that sort of imagery at the same rate.
> I imagine companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit.
On the contrary I would expect that it would be the edge cases that they were shown - why loop in a content moderator if you an be sure that it is prohibited ont he platform without exposing a content moderator?
In this light, it might make sense why they sue: They are there more as a political org so that facebook can say: "We employ 140 moderators in Kenya alone!" while they do indifferent work that facebook already can filter out.
Even if 1% of images are disturbing, that’s multiple per hour, let alone across months.
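(Back-of-the-envelope, assuming roughly 30 seconds per item: an 8-hour shift is about 960 items, so even a 1% rate works out to around 10 disturbing items a day, comfortably more than one per hour.)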
US workman’s comp covers PTSD acquired on the job, and these kinds of jobs are rife with it.
> They are there more as a political org so that facebook can say: "We employ 140 moderators in Kenya alone!" while they do indifferent work that facebook already can filter out.
Why assume they're just token diversity hires who don't do useful work..?
Have you ever built an automated content moderation system before? Let me tell you something about them if not: no matter how good your automated moderation tool, it is pretty much always trivial for someone with familiarity with its inputs and outputs to come up with an input it mis-predicts embarrassingly badly. And you know what makes the biggest difference.. is humans specifying the labels.
I don't assume diversity hires, I assume that these people work for the Kenyan part of Facebook and that Facebook employs an equivalent workforce elsewhere.
I am also not saying that content moderation should catch everything.
What I am saying is that the content moderation teams should ideally decide on the edge cases as they are hard for automated system.
In turn that also means that these people ought not to be exposed to too hardcore material - as that is easier to classify.
Lastly I say that if that is not the case - then they are probably not there to carry out a function but to fill a political role.
Content moderation also involves reading text, so you’d imagine that there’s a benefit to having people who can label data and provide ground truth in any language you’re moderating.
Even with images, you can have different policies in different places or the cultural context can be relevant somehow (eg. some country makes you ban blasphemy).
Also, I have heard of outsourcing to Kenya just to save cost. Living is cheaper there so you can hire a desk worker for less. Don’t know where the insistence you’d only hire Kenyans for political reasons comes from.
Also a doctor is paid $$$$$ and it mostly is a vocational job
Content moderator is a min wage job with bad working hours, no psychological support, and you spend your day looking at rape, child porn, torture and executions.
>Also a doctor is paid $$$$$
>Content moderator is a min wage job
So it's purely a monetary dispute?
>bad working hours, no psychological support, and you spend your day looking at rape, child porn, torture and executions.
Many other jobs have the same issues, though admittedly with less frequency, but where do you draw the line?
> but where do you draw the line?
How about grouping the jobs into two categories: A) Causes PTSD and B) Doesn't cause PTSD
If a job has a constantly high percentage of people ending up with PTSD, then the company who employs them isn't equipping them well enough to handle it.
>How about grouping the jobs into two categories: A) Causes PTSD and B) Doesn't cause PTSD
I fail to see how this addresses my previous questions of "it's purely a monetary dispute?" and "where do you draw the line?". If a job "Causes PTSD" (whatever that means), then what? Are you entitled to hazard pay? Does this work out in the end to a higher minimum wage for certain jobs? Moreover, we don't have similar classifications for other hazards, some of which are arguably worse. For instance, dying is probably worse than getting PTSD, but the most dangerous jobs have pay that's well below the national median wage[1][2]. Should workers in those jobs be able to sue for redress as well?
[1] https://www.ishn.com/articles/112748-top-25-most-dangerous-j...
[2] https://www.bls.gov/oes/current/oes_nat.htm
What could a company provide a police officer with to prevent PTSD from witnessing a brutal child abuse case? A number of sources I found estimate that, at the top of the range, ~30% of police officers may be suffering from it.
[1] https://www.policepac.org/uploads/1/2/3/0/123060500/the_effe...
You can’t prevent it but you can help deal with it later.
> So it's purely a monetary dispute?
I wouldn't say purely, but substantially, yes. PTSD has costs. The article lays some out: therapy, medication, and mental, physical, and social health issues. Some of these money can directly cover, whereas others can only be kinda sorta justified with high enough pay.
I think a sustainable moderation industry would try hard to attract the kinds of people who are able to perform this job without too many negative impacts, quickly relieve those who try but are not well suited, and pay for some therapy.
Also doctors are very frequently able to do something about it. Being powerless is a huge factor in mental illness.
“I would imagine that companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit.”
This doesn’t make sense to me. Their automated content moderation is so good that it’s unable to detect “almost certainly disturbing shit”? What kind of amazing automation only works with subtleties but not certainties?
I assumed that, at the margin, Meta would prioritise reducing false negatives. In other words, they would prefer that as many legitimate posts are published as possible.
So the things that are flagged for human review would be on the boundary, but trend more towards disturbing than legitimate, on the grounds that the human in the loop is there to try and publish as many posts as possible, which means sifting through a lot of disturbing stuff that the AI is not sure about.
There’s also the question of training the models - the classifiers may need labelled disturbing data. But possibly not these days.
However, yes, I expect the absolute most disturbing shit to never be seen by a human.
—
Again, literally no experience, just a guy on the internet pressing buttons on a keyboard.
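To make the two-threshold idea concrete: a confidence score from the classifier can be bucketed into auto-remove, auto-publish, and an uncertain middle band that goes to humans. Here is a minimal sketch, assuming made-up thresholds and function names (nothing here reflects Meta's actual pipeline):

```python
def route_post(prohibited_score: float,
               remove_threshold: float = 0.98,
               publish_threshold: float = 0.10) -> str:
    """Route a post given a classifier's estimated probability that it is prohibited.

    Very confident predictions are handled automatically; the uncertain
    middle band is queued for human review. Thresholds are illustrative only.
    """
    if prohibited_score >= remove_threshold:
        return "auto_remove"
    if prohibited_score <= publish_threshold:
        return "auto_publish"
    return "human_review"

# Lowering publish_threshold (fewer bad posts slipping through automatically)
# widens the middle band, so the human queue fills with borderline, often
# disturbing content that the model was not sure about.
print(route_post(0.99))  # auto_remove
print(route_post(0.03))  # auto_publish
print(route_post(0.60))  # human_review
```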
>In other words, they would prefer that as many legitimate posts are published as possible.
They'd prefer that as many posts as possible are published, but they probably also don't mind some posts being removed if it means saving a buck. When Canada and Australia implemented a "link tax", they were happy to ban all news content to avoid paying it.
Yes, Meta are economically incentivised to reduce the number of human reviews (assuming the cost of improving the model is worthwhile).
This probably means fewer human reviewers reviewing a firehose, not the same number of human reviewers reviewing content at a slower rate.
> I imagine companies like Meta have such good automated moderation
I imagine that they have a system that is somewhere between shitty and non-functional. This is the company that will more often than not flag marketplace posts as "Selling animal", either completely at random or because the pretty obvious "from an animal-free home" phrase is used.
If they can't get this basic text parsing correct, how can you expect them to correctly flag images with any real sense of accuracy?
I’d think the higher density/frequency of disturbing content would cause people to be desensitized.
I have never seen blood or gore in my life and find seeing it shocking.
But I’d imagine gore is a weekly situation for surgeons.
I watch surgery videos sometimes, out of fascination. It's not gore to me - sure it's flesh and blood but there is a person whose life is going to be probably significantly better afterwards. They are also not in pain.
I exposed myself to actual gore vids in the aughts and teens... That stuff still sticks with me in a bad way.
Context matters a lot.
> They are also not in pain.
My understanding is that during surgery, your body is most definitely in pain. Your body still reacts as it would to any damage, but anesthetics block the pain signals from reaching the brain.
But there is a difference between someone making an effort to heal someone else and content implying that something really disturbing happened, the kind that makes you lose faith in humanity.
I agree. But that might be comorbid with PTSD. It’s probably not good for you to be _that_ desensitised to this sort of thing.
I also feel like there’s something intangible regarding intent that makes moderation different from being a doctor. It’s hard for me to put into words, but doctors see gore because they can hopefully do something to help the individual involved. Moderators see gore but are powerless to help the individual, they can only prevent others from seeing the gore.
It's also the type of gore that matters. Some of the worst stuff I've seen wasn't the worst because of the visuals, but because of the audio. Hearing people begging for their life while being executed surely would feel different to even a surgeon who might be used to digging around in people's bodies.
There are many common situations where professionals are helpless, like people who need to clean up dead bodies after an accident.
Imagine if this became a specialized, remote job where one tele-operates the brain-and-blood-scrubbing robot all workday long, accident after accident after accident. I am sure they'd get PTSD too. Sure, sometimes it's just oil and coolant, but there's still a lot of body tissue involved.
Desensitization is only one stage of it. It's not permanent & requires dissociation from reality/humanity on some level. But that stuff is likely to come back and haunt one in some way. If not, it's likely a symptom of something deeper going on.
My guess is that's why it's after bulldozing hundreds of Palestinians, instead of 1 or 10s of them, that Israeli soldiers report PTSD.
If you haven't watched enough videos of the ongoing genocides in the world to realize this, it'll be a challenge to have a realistic take on this article.
Burnout, PTSD, and high turnover are also hallmarks of suicide hotline operators.
The difference? The reputable hotlines care a lot more about their employees' mental health, with mandatory breaks, free counseling, full healthcare benefits (including provisions for preventative mental health care like talk therapy).
Another important difference is that suicide hotlines are decoupled from the profit motive. As more and more users sign up to use a social network, it gets more profitable and more and more load needs to be borne by the human moderation team. But suicide and mental health risk is (roughly) constant (or slowly increasing with societal trends, not product trends).
There's also less of an incentive to minimize human moderation cost. In large companies, some directors view mod teams as a cost center that takes away from other ventures. In an organization dedicated only to suicide hotline response, a large share of the income (typically fundraising or donations) goes directly into the service itself.
A friend's friend is a paramedic and as far as I remember they can take the rest of the day off after witnessing death on duty, and there's an obligatory consultation with a mental healthcare specialist. From reading the article, it seems like those moderators are seeing horrific things almost constantly throughout the day.
I've never heard of a policy like that for physicians and doubt it's common for paramedics. I work in an ICU and a typical day involves a death or resuscitation. We would run out of staff with that policy.
I might have misremembered that, but remember hearing the story. Maybe it was only after unsuccessful CPR attempts or something like that.
Maybe it's different in the US where ambulances cost money, but here in Germany the typical paramedic will see a wide variety of cases, with the vast majority of patients surviving the encounter. Giving your paramedic a day off after witnessing a death wouldn't break the bank. In the ICU or emergency room it would be a different story.
Ambulances cost money everywhere, it's just a matter of who is paying. Do we think paramedics in Germany are more susceptible to PTSD when patients die than ICU or ER staff, or paramedics anywhere?
> Ambulances cost money everywhere
Not in the sense that matters here: the caller doesn't pay (unless the call is frivolous), leading to more calls that are preemptive, overly cautious, or for non-life-threatening cases. That behind the scenes people and equipment are paid for, and a whole structure to do that exists, isn't really relevant here.
> Do we think paramedics in Germany are more susceptible to PTSD
No, we think that there are far more paramedics than ICU or ER staff, and helping them in small ways is pretty easy. For ICU and ER staff you would obviously need other measures, like staffing those places with people less likely to get PTSD or giving them regular counseling by a staff therapist (I don't know how this is actually handled, just that the problem is very different than the issue of paramedics)
Maybe a different country than yours?
My friend has repeatedly mentioned his dad became an alcoholic due to what he saw as a paramedic. This was back in the late 80s, early 90s so not sure they got any mental health help.
Sounds crazy. Just imagine dying because the paramedic responsible for your survival just wanted to end his day early.
Trauma isn’t just a function of what you’ve experienced, but also of what control you had over the situation and whether you got enough sleep.
Being a doctor and helping people through horrific things is unlike helplessly watching them happen.
IIRC, PTSD is far more common among people with sleep disorders, and it’s believed that the lack of good sleep is preventing upsetting memories from being processed.
I expect first responders rarely have to deal with the level of depravity mentioned in this Wired article from 2014, https://www.wired.com/2014/10/content-moderation/
You probably DO NOT want to read it.
There's a very good reason moderators are employed in far-away countries, where people are unlikely to have the resources to gain redress for the problems they have to deal with as a result.
In many states, pension systems give police and fire service sworn members a 20 year retirement option. The military has similar arrangements.
Doctors and lawyers can’t afford that sort of option, but they tend to embrace alcoholism at higher rates and collect ex-wives.
Moderation may be worse in some ways. All day, every day, you see depravity at scale. You see things that shouldn’t be seen. Some of it you can stop, some you cannot due to the nature of the rules.
I think banning social media isn’t an answer, but demanding changes to the algorithms to reduce engagement with high-risk content is key.
Frequency plus lack of post traumatic support.
A content moderator for Facebook will invariably see more depravity, and see it more frequently, than a doctor or police officer. And they likely get far less support from their employers to emotionally deal with it too.
This results in a circumstance where employees have neither the time nor the tools to process it.
I'm not sure your comparisons are close enough to be considered precedents.
My guess is that even standing at the ambulance drive-in of a big hospital, you'll not see as many horrors in a day as these people see in 30 minutes.
My friends who are paramedics have seen some horrific scenes. They have also been shot, stabbed, and suffered lifelong injuries.
They are obviously not identical scenarios. They have similarities and they also have differences.
Outside of some specific cities, I can guarantee it. Even a busy Emergency Dept on Halloween night had only a small handful of bloody patients/trauma cases, and nothing truly horrific when I did my EMT rotation.
I think part of it is the disconnection from the things you're experiencing. A paramedic or firefighter is there, acting in the world, with a chance to do good and some understanding of how things can go wrong. A content moderator is getting images beamed into their brain that they have no preparation for, of situations that they have no connection to or power over.
> A paramedic or firefighter is there, acting in the world, with a chance to do good and some understanding of how things can go wrong.
That's bullshit. Ever talked to a paramedic or firefighter?
As other sibling comments noted: most other jobs don't have the same frequent exposure to disturbing content. The closest are perhaps combat medics in an active warzone, but even they usually get some respite by being rotated.
at least in the US, those jobs - doctors, cops, firefighters, first responders - are well compensated (not sure about prison guards), certainly compared to content moderators who are at the bottom of the totem pole in an org like FB
What does compensation have to do with it? Is someone who stares at thousands of traumatizing, violent images every day going to be less traumatized if they're getting paid more?
Yes, they will be much more able to deal with the consequences of that trauma than someone who gets a pittance to do the same thing. A low-wage peon won't even be able to afford therapy if they need it.
At least they can pay for therapy and afford to stop working or find another job
Shamefully, first responders are not well compensated - usually it's ~$20 an hour.
I've lived places where the cops make $100k+. It all depends on location.
Sorry - I'm specifically referring to EMTs and Paramedics, who usually make somewhere in the realm of $18-25 an hour.
From those I know that worked in the industry, contractor systems are frequently abused to avoid providing the right level of counseling/support to moderators.
ER docs definitely get PTSD. Cops too.
Doctors, cops, first responders, prison guards see different horrible things.
Content moderators see all of that.
Don't forget judges, especially the ones in this case ...
And it used to be priests who had to deal with all the nasty confessions.
Judges get loads of compensation and perks.
> surely banning social media or not moderating content aren't options
Why not? What good has social media done that can't be accomplished in some other way, when weighed against the clear downsides?
That's an honest question, I'm probably missing lots.
Billions of people use them daily (facebook, instagram, X, youtube, tiktok...). Surely we could live without them like we did not long ago, but there's so much interest at play here that I don't see how they could be banned. It's akin to shutting down internet.
Doctors, cops, first responders, prison guards, soldiers etc also just so happen to be the most likely groups of people to develop PTSD.
I worked at FB for almost 2 years. (I left as soon as I could, I knew it wasn't a good fit for me.)
I had an Uber from the campus one day, and my driver, a twenty-something girl, was asking how to become a moderator. I told her, "no amount of money would be enough for me to do that job. Don't do it."
I don't know if she eventually got the job, but I hope she didn't.
Yes, these jobs are horrible. However, I do know from accidentally encountering bad stuff on the internet that you want to be as far away from a modern battlefield as possible.
It's just kind of ridiculous how people think war is like Call of Duty. One minute you're sitting in a trench, the next you're a pile of undifferentiated blood and guts. Same goes for car accidents and stuff. People really underestimate how fragile we are as human beings. Becoming aware of this is super damaging to our concept of normal life.
Watching someone you love die of cancer is also super damaging to one's concept of normal life. Getting a diagnosis, or being in a bad car accident, or the victim of a violent assault is, too. I think a personal sense of normality is nothing more than the state of mind where we can blissfully (and temporarily) forget about our own mortality. Obviously, marinating yourself in all the horrible stuff makes it really hard to maintain that state of mind.
On the other hand, never seeing or reckoning with or preparing for how brutal reality actually is can lead to a pretty bad shock once something bad happens around you. And maybe worse, can lead you to under-appreciate how fantastic and beautiful the quotidian moments of your normal life actually are. I think it's important to develop a concept of normal life that doesn't completely ignore that really bad things happen all around us, all the time.
Frankly, there’s a difference between a one-off, two-off, or even ten-off exposure to the brutality of life, where various people in your life will support you and help you acclimate to it, versus straight up mainlining it for 8 hours a day.
Hey kid, hope you're having a good life. I'll be looking at a screen full of the worst that humanity has produced on the internet for eight hours.
I get your idea but in the context of this topic I think you're overreaching
Actually reckoning with this stuff leads people into believing in anti-natalism, negative utilitarianism, Schopenhauer/Philipp Mainländer (Mainländer btw was not just pro-suicide, he actually killed himself!), and the voluntary extinction movement. This terrified other philosophers like Nietzsche, who spends most of his work defending reality even if it's absolute shit. "Amor Fati", "Infinite Regress/Eternal Recurrence", "Übermensch" vs the literal "Last Man". "Wall-E" of all films was the modern quintessential Nietzschean fable, with maybe "Children of Men" being the previous good one before that.
You're literally not allowed to acknowledge that this stuff is bad and adopt one of the religions that see this and try to remove suffering - i.e. Jainism, because at least historically doing so meant you couldn't use violence in any circumstances, which also meant that your neighbor would murder you. There's a reason that the Jain population is in the low millions.
Reality is actually bad, and it should be far more intuitive to folks. The fact that positive experience is felt "quickly" and negative experience is felt "slowly" was all the evidence I needed that I wouldn't just press the "instantly and painlessly and without warning destroy reality" (benevolent world-exploder) button, I'd smash it!
Interesting to see this perspective here. You’re not wrong.
> There's a reason that Jain's population are in the low millions
The two largest Vedic religions both have hundreds of millions of followers. Is Jainism that different from them in this regard? I know Jainism is very pacifist, but is it really that different on the question of suffering?
... okay.
Emergency personnel might need to brace themselves for car accidents every day. That Kenyans need to be traumatized by Internet Content in order to make a living is just silly and unnecessary.
Car “accidents” are also completely unnecessary.
Even the wording is wrong - those aren’t accidents, it is something we accept as byproduct of a car-centric culture.
People feel it is acceptable that thousands of people die on the road so we can go places faster. Similarly they feel it’s acceptable to traumatise some foreigners to keep social media running.
Nitpick that irrelevant example if you want.
ISISomalia loves that recruitment pool though
I'm no longer interested in getting a motorcycle, for similar reasons.
I spent my civil service as a paramedic assistant in the countryside, close to a mountain road that was very popular with bikers. I was never interested in motorbikes in the first place, but the gruesome accidents I've witnessed turned me off for good.
The Venn diagram for EMTs, paramedics, and motorbikes is disjoint.
You’re only about 20x as likely to die on a motorcycle as in a car.
What can I say? People like to live dangerously.
Yes, but you're also far less likely to kill other people on a motorcycle as in a car (and even less, as in an SUV or pick-up truck). So some people live much less dangerously with respect to the people around them.
I suppose 20x a low number is still pretty low, especially given that number includes the squid factor.
It's not that we're particularly fragile, given the kind of physical trauma human beings can survive and recover from.
It's that we have technologically engineered things that are destructive enough to get even past that threshold. Modern warfare in particular is insanely energetic in the most literal, physical way - when you measure the energy output of weapons in joules. Partly because we're just that good at making things explode, and partly because improvements in metallurgy and electronics made it possible over time to locate targets with extreme precision in real time and then concentrate a lot of firepower directly on them. This, in particular, is why the most intense battlefields in Ukraine often look worse than WW1 and WW2 battles of similar intensity (e.g. Mariupol had more buildings destroyed than Stalingrad).
But even our small arms deliver much more energy to the target than their historical equivalents. Bows and arrows pack ~150 J at close range, rapidly diminishing with distance. Crossbows can increase this to ~400 J. For comparison, an AK-47 firing standard issue military ammo is ~2000 J.
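Those figures are just projectile kinetic energy, so they are easy to sanity-check with E = ½mv². A rough back-of-the-envelope in Python, using commonly cited but approximate masses and velocities (my assumptions, not measured data):

```python
def kinetic_energy_joules(mass_kg: float, velocity_ms: float) -> float:
    """Kinetic energy E = 0.5 * m * v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

# Approximate projectile figures (assumptions for illustration):
#   heavy war arrow: ~65 g at ~65 m/s
#   AK-47 (7.62x39mm) bullet: ~7.9 g at ~715 m/s
print(round(kinetic_energy_joules(0.065, 65)))    # ~137 J, roughly the ~150 J ballpark
print(round(kinetic_energy_joules(0.0079, 715)))  # ~2019 J, matching the ~2000 J figure
```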
Watch how a group of wild dogs kill their prey, then realise that for millennia human-like apes were part of their diet. Even the modern battlefield is more humane than the African savannah.
That reminds me of this[0]. It's a segment of BBC's Planet Earth, where a pack of Cape Hunting Dogs are filmed, hunting.
It's almost military precision.
[0] https://www.youtube.com/watch?v=MRS4XrKRFMA
>Crossbows can increase this to ~400 J.
Funny you mention crossbows; the Church at one point in time tried to ban them because they democratized violence to a truly trivial degree. They were the nuclear bombs and assault rifles of medieval times.
Also, I will take this moment to mention that the "problem" with weapons always seems to be how quickly they can kill rather than the killing itself. Kind of takes away from the discussion once that is realized.
Humans can render other humans unrecognizable with a rock.
Brutal murder is low tech.
> Humans can render other humans unrecognizable with a rock.
They are much less likely to.
We have an instinctive repulsion to violence, especially to extending it (e.g. if the rock does not kill at the first blow).
It is much easier to kill with a gun (and even then people need training to be willing to do it), and easier still to fire a missile at people you cannot even see.
Than throwing a face punch or a rock? You should check public schools.
Than killing with bare hands or a rock, which I believe is still pretty uncommon in schools.
GP didn't talk about killing
Extreme violence then? With rocks, clubs, or bare hands? I was responding to "render other humans unrecognizable with a rock", which I am pretty sure is uncommon in schools.
Render unrecognizable? Yeah, I guess that could be survivable, but it's definitely lethal intent.
That's possible with just a well placed punch to the nose or to one of the eyes. I've seen and done that, in public schools.
Not in public schools in the British sense. I assume it varies in public schools in the American sense, and I am guessing violence sufficient to render someone unrecognisable is pretty rare even in the worst of them.
Not at scale.
Armies scale up.
It’s like the original massive scale organization.
Scaling an army of rock swingers is a lot more work than giving one person an AK47 (when all who would oppose them have rocks).
(Thankfully in the US we worship the 2A and its most twisted interpretation. So our toddlers do shooter drills. /s)
You are discounting the complexity of the logistics required for an AK47 army. You need ammo, spare parts, lubricant and cleaning tools. You need a factory to build the weapon, and churn out ammunition.
Or, gather a group of people, tell them to find a rock, and go bash the other side's heads in.
Complexity of logistics applies to any large army. The single biggest limiting factor for most of history has been the need to either carry your own food, or find it in the field. This is why large-scale military violence requires states.
> You need ammo, spare parts, lubricant and cleaning tools.
The AK-47 famously only needs the first item in that list.
That being the key to its popularity.
It should be noted that the purported advantages of AK action over its competitors in this regard are rather drastically overstated in popular culture. E.g. take a look at these two vids showing how AK vs AR-15 handle lots of mud:
https://www.youtube.com/watch?v=DX73uXs3xGU
https://www.youtube.com/watch?v=YAneTFiz5WU
As far as cleaning, AK, like many guns of that era, carries its own cleaning & maintenance toolkit inside the gun. Although it is a bit unusual in that regard in that this kit is, in fact, sufficient to remove any part of the gun that is not permanently attached. Which is to say, AK can be serviced in the field, without an armory, to a greater extent than most other options.
But the main reason why it's so popular isn't so much because of any of that, but rather because it's very cheap to produce at scale, and China especially has been producing millions of AKs specifically to dump them in Africa, Middle East etc. But where large quantities of other firearms are available for whatever reason, you see them used just as much - e.g. Taliban has been rocking a lot of M4 and M16 since US left a lot of stocks behind.
Normal does not exist - it’s just the setting on your washing machine.
> Becoming aware of this is super damaging to our concept of normal life.
Not being aware of this is also a cause of traffic accidents. People should be more careful driving.
Speaking as a paramedic, two things come to mind:
1) I don't have squeamishness about trauma. In the end, we are all blood and tissue. The calls that get to me are the emotionally traumatic, the child abuse, domestic violence, elder abuse (which of course often have a physical component too, but it's the emotional for me), the tragic, often preventable accidents.
2) There are many people, and I get the curiosity, that will ask "what's the worst call you've been on?" - one, you don't really want to hear, and two, "Hey, person I may barely know, do you think you can revisit something traumatic for my benefit/curiosity?"
That’s an excellent way to put it, resonates with my (non medical) experience. It’s the emotional stuff that will try to follow me around and be intrusive.
I won’t watch most movies or TV because they are just some sort of tragedy porn.
> movies or TV because they are just some sort of tragedy porn
100% agree. Most TV series nowadays are basically violence porn, now that real porn is not allowed for all kinds of reasons.
I'd be asking "how bad is the fentanyl situation in your area?"
Relatively speaking, not particularly.
What's interesting now is how many patients will say "You're not going to give me fentanyl are you? That's really dangerous stuff", etc.
It's their perfect right, of course, but it is sad that that's the public perception - it's extremely effective, and quite safe, used properly (for one, we're obviously only giving it from pharma sources, with actually properly dosed solutions for IV).
It's also super easy to come up with better questions: "What's the funniest call you've ever been on?" "What call do you feel like you made the biggest difference?" "What's the best story you have?"
I'm pretty sure watching videos on /r/watchpeopledie or rekt threads on 4chan has been a net positive for me. I'm keenly aware of how dangerous cars are, that wars (including narcowars) are hell, that I should never stay close to a bus or truck as a pedestrian or cyclist, that I should never get into a bar fight... And that I'm very very lucky that I was not born in the 3rd world.
I get more upset watching people lightly smack and yell at each other on public freakout than I do watching people die. It's not that I don't care about the dead either, I watched wpd and similar sites for years. I didn't enjoy watching it, but I liked knowing the reality of what was going on in the world, and how each one of us has the capacity to commit these atrocities. I'm still doing a lousy job at describing why I like to watch it. But I do.
Street fight videos, where the guy recording is hooting and egging people on, are disgusting.
One does not fully experience life until you encounter the death of something you care about. Be it a pet or a person, nothing gives you that real sense of reality until your true feelings are challenged.
I used to live in the Disney headspace until my dog had to be put down. Now with my parents being in their seventies, and me in my thirties I fear losing them the most as the feeling of losing my dog was hard enough.
That's the tragic consequence of being human. Either the people you care about leave first or you do, but in the end, everyone goes. We are blessed and cursed with the knowledge to understand this. We should try to maximize the time we spend with those that are important to us.
Well, I think it goes to a point. I'd imagine there's some goldilocks zone of time spent with the animal, care experienced from the animal, dependence on the animal, and manner/speed of death/time spent watching the thing die.
I say animal to explicitly include humans. Finding my hamster dead in fifth grade did change me. But watching my mother slowly die a horrible, haunting death didn't make me a better person. I'm just saying that there's a spectrum that goes something like: easy to forget about, I'm able to not worry, sometimes I think about it when I don't want to, often I think about it, often it bothers me, and so on. You can probably imagine the cycle of obsession and stress.
This really goes for all traumatic experiences. There's a point where they can make you a better person, but there's a cliff after which you have no guarantees that it won't just start obliterating you and your life. It's still a kind of perspective. But can you have too much perspective? Lots of times I feel like I do.
>> ridiculous how people think war is like Call of Duty.
It is also ridiculous how people think every soldier's experience is like Band of Brothers or Full Metal Jacket. I remember an interview with a WWII vet who had been on omaha beach: "I don't remember anything happening in slow motion ... I do remember eating a lot of sand." The reality of war is often just not visually interesting enough to put on the screen.
I don't mean to trivialize traumatic experiences but I think many modern people, especially the pampered members of the professional-managerial class, have become too disconnected from reality. Anyone who has hunted or butchered animals is well aware of the fragility of life. This doesn't damage our concept of normal life.
My brother, an Eastern-European part-time farmer and full-time lorry driver, just texted me a couple of hours ago (I had told him I would call him in the next hour) that he might be with his hands full of meat by that time as “we’ve just butchered our pig Ghitza” (those sausages and piftii aren’t going to get made by themselves).
Now, ask a laptop worker to butcher an animal that used to have a name and to literally turn its meat into sausages, and see what said worker’s reaction would be.
What is it about partaking in or witnessing the killing of animals or humans that makes one more connected to reality?
Lots of people who spend time working with livestock on a farm describe a certain acceptance and understanding of death that most modern people have lost.
Are farmers more willing to discuss things like end of life medical decisions?
Are they more amenable to terminally ill people having access to euthanasia?
Do they cope better after losing loved ones?
Are there other ways we can get a sense of how a more healthy acceptance of mortality would manifest?
Would be interested in this data if it is available.
I don't have any data, but my anecdotal experience is a yes to those questions.
>Are there other ways we can get a sense of how a more healthy acceptance of mortality would manifest?
In concept, yes, I think home family death can also have a similar impact. It is not very common in the US, but 50 years ago, elders would typically die at home with family. There are cultures today, even materially advanced ones, where people spend time with the freshly dead body of loved ones instead of running from it and compartmentalizing it.
Never seen a rhetorical question before?
Socratic questioning is not cluelessness and your inability to answer does not bolster your position.
Socratic questioning requires the asker to have a deeper understanding whereby they guide with their questions.
Do you think that’s what people see in yours?
I don't know what people are seeing in my questions, but apparently they don't like answering them, because no one has.
I'm trying to understand what people mean by 'detachment from reality' and how such a thing is related to 'understanding of mortality', and how a deeper understanding of mortality and acceptance of death would manifest in ways that can be seen.
If 'acceptance of death' does not actually mean that they are more comfortable talking about death, or allowing people to choose their own deaths, or accepting their loved one's deaths with more ease, then what does it mean? Is it something else? Why can't anyone say what it is?
Why is it so obvious to the people stating that it happens, yet no one can explain why the questions I asked are not being answered or are wrong?
If this is some basic conflict of frameworks wherein I am making assumptions that make no sense to the people who are making the assertions I am questioning, then what am I missing here?
> Why it is so obvious to the people stating that it happens, but no one can explain why the questions I asked are not being answered
you seem to be a bit anxious, like you're waiting for the answer to your existential crisis.
> or are wrong?
that’s exactly what we did but you think your questions are actually super smart and we, fools, can’t answer them. Well, go on a quest to find the answers by yourself, young one.
In Japan, some sushi bars keep live fish in tanks that you can order to have served to you as sushi/sashimi.
The chefs butcher and serve the fish right in front of you, and because it was alive merely seconds ago the meat will still be twitching when you get it. If they also serve the rest of the fish as decoration, the fish might still be gasping for oxygen.
Japanese don't really think much of it, they're used to it and acknowledge the fleeting nature of life and that eating something means you are taking another life to sustain your own.
The same environment will likely leave most westerners squeamish or perhaps even gag simply because the west goes out of its way to hide where food comes from, even though that simply is the reality we all live in.
Personally, I enjoy meats respecting and appreciating the fact that the steak or sashimi or whatever in front of me was a live animal at one point just like me. Salads too, those vegetables were (are?) just as alive as I am.
Plenty of westerners are not as sheltered from their food as you. Have you never gone fishing and watched your catch die? Have you never boiled a live crab or lobster? You've clearly never gone hunting.
Not to mention the millions of Americans working in the livestock and agriculture business who see up close every day how food comes to be.
A significant portion of the American population engages directly with their food and the death process. Citing one gimmicky example of Asian culture where squirmy seafood is part of the show doesn't say anything about the culture of entire nations. That is not how the majority of Japanese consume seafood. It's just as anomalous there. You only know about it because it's unusual enough to get reported.
You can pick your lobster out of the tank and eat it at American restaurants too. Oysters and clams on the half-shell are still alive when we eat them.
>Plenty of westerners are not as sheltered from their food as you. ... You only know about it because it's unusual enough to get reported.
In case you missed it, you're talking to a Japanese person.
Some restaurants go a step further by letting the customers literally fish for their dinner out of a pool. Granted those restaurants are a niche, that's their whole selling point to customers looking for something different.
Most sushi bars have a tank holding live fish and other seafood of the day, though. It's a pretty mundane thing.
If I were to cook a pork chop in the kitchen of some of my middle eastern relatives they would feel sick and would probably throw out the pan I cooked it with (and me from their house as well).
Isn't this similar to why people unfamiliar with that style of seafood would feel sick -- cultural views on what is and is not normal food -- and not because of their view of mortality?
You're not grasping the point, which I don't necessarily blame you for.
Imagine that to cook that pork chop, the chef starts by butchering a live pig. Also imagine that he does that in view of everyone in the restaurant rather than in the "backyard" kitchen let alone a separate butchering facility hundreds of miles away.
That's the sushi chef butchering and serving a live fish he grabbed from the tank behind him.
When you can actually see where your food is coming from and what "food" truly even is, that gives you a better grasp on reality and life.
It's also the true meaning behind the often used joke that goes: "You don't want to see how sausages are made."
I grasp the point just fine, but you haven't convinced me that it is correct.
The issue most people would have with seeing the sausage being made isn't necessarily watching the slaughtering process but with seeing pieces of the animal used for food that they would not want to eat.
But isn't that the point? If someone is fine eating something so long as he is ignorant or naive, doesn't that point to a detachment from reality?
I grew up with my farmer grandpa who was a butcher, and I've seen him butcher lots of animals. I always have and probably always will find tongues & brains disgusting, even though I'm used to seeing how the sausage is made (literally).
Some things just tickle the brain in a bad way. I've killed plenty of fish myself, but I still wouldn't want to eat one that's still moving in my mouth, not because of ickiness or whatever, but just because the concept is unappealing. I don't think this is anywhere near as binary as you make it seem, really.
I wouldn't want to eat a cockroach regardless of whether I saw it being prepared or not. The point I am making is that 'feeling sick' and not wanting to eat something isn't about being disconnected from the food. Few people would care if you cut off a piece of steak from a hanging slab and grilled it in front of them, but would find it gross to pick up all the little pieces of gristle and organ meat that fell onto the floor, grind it all up, shove it into an intestine, and cook it.
> Few people would care if you cut off a piece of steak from a hanging slab
The analogy here would be watching a live cow get slaughtered and then butchered from scratch in front of you, which I think most Western audiences (more than a few) might not like.
A cow walks into the kitchen, it gets a captive bolt shoved into its brain with a person holding a compressed air tank. Its hide is ripped off and it is cut into two pieces with all of its guts on the ground and the flesh and bones now hang as slabs.
I am asserting that you could do all of that in front of a random assortment of modern Americans, and then cut steaks off of it and grill them and serve them to half of the crowd, and most of those people would not have a problem eating those steaks.
Then if you were to scoop up all the leftover, non-steak bits from the ground with shovels, throw it all into a giant meat grinder and then take the intestines from a pig, remove the feces from them and fill them with the output of the grinder, cook that and serve it to the other half of the crowd, then a statistically larger proportion of that crowd would not want to eat that compared to the ones who ate the steak.
> I am asserting that you could do all of that in front of a random assortment of modern Americans, and then cut steaks off of it and grill them and serve them to half of the crowd, and most of those people would not have an problem eating those steaks.
I am asserting that the majority of western audiences, including Americans, would dislike being present for the slaughtering and butchering portion of the experience you describe.
You’re just going down the list of things that sound disgusting. The second sounds worse than the first but both sound horrible.
Most audiences wouldn’t like freshly butchered cow - freshly butchered meat is tough and not very flavorful, it needs to be aged to allow it to tenderize and develop.
The point is that most Western audiences would likely find it unpleasant to be there for the slaughtering and butchering from scratch.
That the point is being repeated to no effect ironically illustrates how most modern people (westerners?) are detached from reality with regards to food.
To me, the logical conclusion is that they don't agree with your example and think that you are making connections that aren't evidenced from it.
I think you are doing the same exact thing with the above statement as well.
In the modern era, most of the things the commons come across have been "sanitized"; we do a really good job of hiding all the unpleasant things. Of course, this means modern day commons have a fairly skewed "sanitized" impression of reality who will get shocked awake if or when they see what is usually hidden (eg: butchering of food animals).
That you insist on contriving one zany situation after another instead of just admitting that people today are detached from reality illustrates my point rather ironically.
Whether it's butchering animals or mining rare earths or whatever else, there's a lot of disturbing facets to reality that most people are blissfully unaware of. Ignorance is bliss.
To be blunt, the way you express yourself on this topic comes off as very "enlightened intellectual." It's clear that you think that your views/assumptions are the correct view and any other view is one held by the "commons"; one which you can change simply by providing the poor stupid commons with your enlightened knowledge.
Recall that this whole thread started with your proposition that seeing live fish prepared in front of someone "will likely leave most westerners squeamish or perhaps even gag simply because the west goes out of its way to hide where food comes from, even though that simply is the reality we all live in." You had no basis for this as far as I can tell, it's just a random musing by you. A number of folks responded disagreeing with you, but you dismissed their anecdotal comments as being wrong because it doesn't comport with your view of the unwashed masses who are, obviously, feeble minded sheep who couldn't possibly cope with the realities of modern food production in an enlightened way like you have whereby you "enjoy meats respecting and appreciating the fact that the steak or sashimi or whatever in front of me was a live animal at one point just like me." How noble of you. Nobody (and I mean this in the figurative sense not the literal sense) is confused that the slab of meat in front of them was at one point alive.
Then you have the audacity to accuse someone of coming up with "zany" situations? You're the one that started the whole zany discussion in the first place with your own zany musings about how "western" "commons" think!
Earlier this year, I was at ground zero of the Super Bowl parade shooting. I didn’t ever dream about it, but I spent the following 3-4 days constantly replaying it in my head in my waking hours.
Later in the year I moved to Florida, just in time for Helene and Milton. I didn’t spend much time thinking about either of them (aside from during prep and cleanup and volunteering a few weeks after). But I had frequent dreams of catastrophic storms and floods.
Different stressors affect people (even myself) differently. Thankfully I’ve never had a major/long-term problem, but I know my reactions to major life stressors never seemed to have any rhyme or reason.
I can imagine many people might’ve been through a few things that made them confident they’d be alright with the job, only to find out dealing with that stuff 8 hours a day, 40 hours a week is a whole different ball game.
A parade shooting is bad, very bad, but is still tame compared to the sorts of things to which website moderators are exposed on a daily/hourly basis. Footage of people being shot is actually allowed on many platforms. Just think of all the war footage that is so common these days. The dark stuff that moderators see is way way worse.
> Footage of people being shot is actually allowed on many platforms.
It's also part of almost every American cop and military show and movie. Of course it's not real but it looks the same.
> Of course it's not real but it looks the same.
I beg to differ. TV shows and movies are silly. Action movies are just tough-guy dancing.
"Tough guy dancing" is such an apt phrase.
The organizer is even called a "fight choreographer".
I mean more the gory parts. Blood, decomposed bodies everywhere etc.
And I wasn't talking about action hero movies.
I have often wondered what would happen if social product orgs required all dev and product team members to temporarily rotate through moderation a couple times a year.
I can tell you that back when I worked as a dev for the department building order fulfillment software at a dotcom, my perspective on my own product has drastically changed after I had spent a month at a warehouse that was shipping orders coming out of the software we wrote. Eating my own dog food was not pretty.
Yeah I've wondered the same thing about jobs in general too.
Society would be a very different place if everyone had to do customer service or janitorial work one weekend a month.
Many (all?) Japanese schools don't have janitors. Instead students clean on rotation. Never been much into Japanese stuff but I absolutely admire this about their culture, and imagine it's part of the reason that Japan is such a clean and at least superficially respectful society.
Living in other Asian nations where there are often de facto invisible caste systems can be nauseating at times - you have parents that won't allow their children to participate in clean-up efforts because their child is 'above handling trash.' That's gonna be one well adjusted adult...
Perhaps this is what happens when someone creates a mega-sized website comprising hundreds of millions of pages using other peoples' submitted material, effectively creating a website that is too large to "moderate". By letting the public publish their material on someone else's mega-sized website instead of hosting their own, perhaps it concentrates the web audience to make it more suitable for advertising. Perhaps if the PTSD-causing material was published by its authors on the authors' own websites, the audience would be small, not suitable for advertising. A return to less centralised web publishing would perhaps be bad for the so-called "ad ecosystem" created by so-called "tech" company intermediaries. To be sure, it would also mean no one in Kenya would be intentionally be subjected to PTSD-causing material in the name of fulfilling the so-called "tech" industry's only viable "business model": surveillance, data collection and online ad services.
It's a problem when you don't verify the identity of your users and hold them responsible for illegal things. If Facebook verified you were John D SSN 123-45-6789, they could report you for uploading CSAM and otherwise permanently block you from using the site if you upload objectionable material, meaning exposure to horrific things is only necessary once per banned user. I would expect that to be orders of magnitude less than what they deal with today.
You can thank online privacy activists for this.
You can thank IRL privacy activists for the lack of cameras in every room in each house; Just imagine how much faster domestic disputes could be resolved!
Sure, there’s a cost-benefit to it. We think that privacy is more important than rapid resolution of domestic disputes and we think that privacy is more important than stopping child porn. That’s fine as a statement.
Unsurprising lack of response to this statement. It's 100% true, and any cost-benefit calculation of privacy should account for it.
Rubbish. The reason Facebook doesn't want to demand ID for most users is that it adds friction to using their product, which means fewer users and less profit.
A return to less centralized web publishing would also be bad for the many creators who lack the technical expertise or interest to jump through all the hoops required for building and hosting your own website. Maybe this seems like a pretty small friction to the median HN user, but I don't think it's true for creators in general, as evidenced by the enormous increase in both the number and sophistication of online creators over the past couple of decades.
Is that increase worth traumatizing moderators? I have no idea. But I frequently see this sentiment on HN about the old internet being better, framed as criticism of big internet companies, when it really seems to be at least in part criticism of how the median internet user has changed -- and the solution, coincidentally, would at least partially reverse that change.
Content hosting for creators can be commoditized.
Content discovery may even be able to remain centralized.
No idea if there's a way for it to work out economically without ads, but ads are also unhealthy so maybe that's ok.
Introducing a free unlimited hosting service where you could only upload pictures, text, or video. There’s a public page to see that content among ads and links to your friends' free hosting service pages. The TOS is a give-give: you give them the right to extract all the aggregated stats they want and display the ads, they give you the service for free so you own your content (and are legally responsible for it).
I mean, the technical expertise thing is solvable, it’s just that no-one wants to solve it because SaaS is extremely lucrative.
When people are protected from the horrors of the world they tend to develop luxury beliefs which leads them to create more suffering in the world.
Conversely, those who are subjected to harsh conditions often develop a cynical view of humanity, one lacking empathy, which also perpetuates the same harsh conditions. It's almost like protection and subjection aren't the salient dimensions, but rather there is some other perspective that better explains the phenomenon.
Just scrolled a lot to find this. And I do believe that moderators in a not-so-safe country have seen a lot in their lives. But this should also make them less vulnerable to this kind of exposure, and it looks like it does not.
I tend to agree with growth through realism, but people often have the means and ability to protect themselves from these horrors. I'm not sure you can systemically prevent this without resorting to big brother shoving propaganda in front of people and forcing them to consume it.
I don't think it needs to be forced, just don't censor so much.
Isn't that forcing? Who decides how much censorship people can voluntarily opt into?
If given control, I think many/most people would opt into a significant amount of censorship.
> The moderators from Kenya and other African countries were tasked from 2019 to 2023 with checking posts emanating from Africa and in their own languages but were paid eight times less than their counterparts in the US, according to the claim documents
Why would pay in different countries be equivalent? Pretty sure FB doesn’t even pay the same to their engineers depending on where in the US they are, let alone which country. Cost of living dramatically differs.
Some products have factories in multiple countries. For example, Teslas are produced in both US and China. The cars produced in both countries are more or less identical in quality. But do you ever see that the market price of the product is different depending on the country of manufacture?
If the moderators in Kenya are providing the same quality labor as those from the US, why the difference in price of their labor?
I have a friend who worked for FAANG and had to temporarily move from US to Canada due to visa issues, while continuing to work for the same team. They were paid less in Canada. There is no justification for this except that the company has price setting power and uses it to exploit the sellers of labor.
A million things factor into market dynamics. I don’t know why this is such a shocking or foreign concept. Why is a waitress in Alabama paid less than in San Francisco for the same work? It’s a silly question because the answers are both obvious and complex.
> Why would pay in different countries be equivalent?
Why 8 times less?
GDP per capita in Kenya is a little less than $2k. In the United States, it’s a bit over $81k.
Median US salary is about $59k. Gross national income (not an identical measure but close) in Kenya about $2.1k.
1/8th is disproportionately in favor of the contractors, relative to market.
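The arithmetic behind "disproportionately in favor of the contractors" can be made explicit: whatever the actual US moderator wage is, paying 1/8th of it in Kenya still works out to roughly 3.5x the US level when measured against local median income. A quick sketch using the rough figures above (GNI per capita is only a loose proxy for a local market wage):

```python
# Rough figures from the comment above:
us_median_income = 59_000     # USD per year
kenya_income = 2_100          # USD per year, GNI per capita as a loose proxy
pay_ratio = 8                 # US moderators reportedly paid 8x the Kenyan rate

# For any US moderator wage W:
#   US pay relative to local income:     W / us_median_income
#   Kenyan pay relative to local income: (W / pay_ratio) / kenya_income
# The ratio of the two does not depend on W:
relative_advantage = us_median_income / (pay_ratio * kenya_income)
print(round(relative_advantage, 1))  # ~3.5
```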
Because people chose to take the jobs, so presumably they thought it was fair compensation compared to alternatives. Unless there's evidence they were coerced in some way?
Note that I'm equating all jobs here. No amount of compensation makes it worth seeing horrible things. They are separate variables.
No amount? So you wouldn't accept a job to moderate Facebook for a million dollars a day? If you would, then surely you would also do it for a lower number. There is an equilibrium point.
> So you wouldn't accept a job to moderate Facebook for a million dollars a day?
I would hope not.
Sorry, but I don't believe you. You could work for a month or two and retire. Or hell, just do it for one day and then return to your old job. That's a cool one mill in the bank.
My point is, job shittiness can be priced in.
> work for a month or two and retire --> This is a dream of many, but there exists a set of people who really like their jobs and have no intention of retiring.
> just do it for one day and then return to your old job. --> Cool mill in the bank and dreadful images in your head. Perhaps Apitman feels he has enough cash and won't be happier with a million (more?).
Also, your point is true, but it overlooks that Facebook has no interest in raising that number. I guess it was more a theoretical reflection than an argument about the concrete economics.
Because that’s the only reason why anyone would hire them. If you’ve ever worked with this kind of contract workforce they aren’t really worth it without massive cost-per-unit-work savings. I suppose one could argue it’s better that they be unemployed than work in this job but they always choose otherwise when given the choice.
Because prices are determined by supply and demand
The same is true for poverty and the poor who will work for any amount, the cheap labor the rich need to make their riches.
> Why would pay in different countries be equivalent?
Because it's exploitative otherwise. It's just exploiting the fact that they're imprisoned within borders.
You haven't actually explained why it's bad, only slapped an evil sounding label on it. What's "exploitative" in this case and why is it morally wrong?
>they're imprisoned within borders
What's the implication of this then? That we remove all migration controls?
Of course. Not all at once, but gradually over time like the EU has begun to do. If capital and goods are free to move, then so must labor be. The labor market is very far from free if you think about it.
Interesting perspective. I wonder if you yourself take part in the exploitation by purchasing things made/grown in poor countries due to cost.
vegans die of malnutrition.
There's no ethical consumption under capitalism.
If that's the case then there can also be no ethical employment, either, both for employer and for employee. So that would seem to average out to neutrality.
Paying local market rates is not exploitative.
Artificially creating local market rates by trapping people is.
In what sense were these local rates "created artificially"? Are you suggesting that these people are being forced to work against their will?
It is also exploiting the fact that humans need food and shelter to live and money is used to acquire those things.
That's only exploitation if you combine it with the fact of the enclosure of the commons: that all land and productive equipment on Earth is private or state property, and that it's virtually impossible to just go farm or hunt for yourself without being fucked with anymore, let alone do anything more advanced without being shut down violently.
>the enclosure of the commons and that all land and productive equipment on Earth is private or state property and that it's virtually impossible to just go farm or hunt for yourself without being fucked with anymore, let alone do anything more advanced without being shut down violently.
How would land allocation work without "enclosure of the commons"? Does it just become a free-for-all? What happens if you want to use the land for grazing but someone else wants it for growing crops? "enclosure of the commons" conveniently solves all these issues by giving exclusive control to one person.
Elinor Ostrom covered this extensively in her Nobel Prize-winning work if you are genuinely interested. Enclosure of the commons is not the only solution to the problems.
That's actually an interesting question. I would love to see some data on whether it really is impossible for the average person to live off the land if they wanted to.
An adjacent question is whether there are too many people on the planet for that to be an option anymore even if it were legal.
>An adjacent question is whether there are too many people on the planet for that to be an option anymore even if it were legal.
Do you mean for everyone to be hunter-gatherers? Yes, that would be impossible. If you mean for a smaller number then it depends on the number.
Yeah I think it would be interesting to know how far over the line we are.
Probably way, way over the line. Population sizes exploded after the agricultural revolution. I wouldn't be surprised if the maximum is like 0.1-1% of the current population. If we're talking about strictly eating what's available without any cultivation at all, nature is really inefficient at providing for us.
Worked at PornHub's parent company for a bit and the moderation floor had a noticeable depressive vibe. Huge turnover. Can't imagine what these people were subjected to.
You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.
I will go ahead and assume this was during the wild/carefree era of PornHub, when anyone could upload anything and everything; from what that lady said, pedophilia videos, bestiality, etc. were rampant.
Yeah, it was during that time, before the great purge. It's not just sexual depravity, people used that site to host all kinds of videos that would get auto-flagged anywhere else (including, the least of it, full movies).
> You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.
Laila Mickelwait is a director at Exodus Cry, formerly known as Morality in Media (yes, that's their original name). Exodus Cry/Morality in Media is an explicitly Christian organization that openly seeks to outlaw all forms of pornography, in addition to outlawing abortion and many gay rights including marriage. Their funding comes largely from right-wing Christian fundamentalist and fundamentalist-aligned groups.
Aside from the fact that she has an axe to grind, both she (as an individual) and the organization she represents have a long history of misrepresenting facts or outright lying in order to support their agenda. They also intentionally and openly refer to all forms of sex work (from consensual pornography to stripping to sexual intercourse) as "trafficking", against the wishes of survivors of actual sex trafficking, who have extensively documented why Exodus Cry actually perpetuates harm against sex trafficking victims.
> everything, from what that lady said, the numbers of pedophilia videos, bestiality, etc. was rampant.
This was disproven long ago. Pornhub was actually quite good about proactively flagging and blocking CSAM and other objectionable content. Ironically (although not surprisingly, if you're familiar with the industry), Facebook was two to three orders of magnitude worse than Pornhub.
But of course, Facebook is not targeted by Exodus Cry because their mission - as you can tell by their original name of "Morality in Media" - is to ban pornography on the Internet, and going after Facebook doesn't fit into that mission, even though Facebook is actually way worse for victims of CSAM and trafficking.
Sure, but who did the proactive flagging back then? Probably moderators. Seems like a shitty job nonetheless
As far as I can tell, Facebook is still terrible.
I have a throwaway Facebook account. In the absence of any other information as to my interests, Facebook thinks I want to see flat earth conspiracy theories and CSAM.
When I report the CSAM, I usually get a response that says "we've taken a look and found that this content doesn't go against our Community Standards."
[flagged]
They should probably hire more part time people working one hour a day?
Btw, it’s probably a different team handling copyright claims, but my run-in with Meta’s moderation gives me the impression that they’re probably horrifically understaffed. I was helping a Chinese content creator friend take down Instagram, YouTube and TikTok accounts re-uploading her content and/or impersonating her (she doesn’t have any presence on these platforms and doesn’t intend to). Reported to TikTok twice; they got it done once within a few hours (I was impressed) and once within three days. Reported to YouTube once and it was handled five or six days later. No further action was needed from me after submitting the initial form in either case. Instagram was something else entirely; they used Facebook’s reporting system, the reporting form was the worst, it asked for very little information upfront but kept sending me emails afterwards asking for more information, then eventually radio silence. I sent follow-ups asking about progress; again, radio silence. The impersonation account with outright stolen content is still up to this day.
Absolutely grim. I wouldn't wish that job on my worst enemy. The article reminded me of a Radiolab episode from 2018: https://radiolab.org/podcast/post-no-evil
One of few fields where AI is very welcome
Until the AI moderator flags your home videos as child porn, and you lose your kids.
I’m wondering if, like looking out from behind a blanket at horror movies, getting a moderately blurred copy of images would reduce the emotional punch of highly inappropriate pictures. Or just scaled down tiny.
If it’s already bad blurred or as a thumbnail, don’t click on the real thing.
This is more or less how police do CSAM classification now. They start with thumbnails, and that's usually enough to determine whether the image is a photograph or an illustration, involves penetration, sadism etc without having to be confronted with the full image.
I'd be fine with that as long as it was something I could turn off and on at will
No, this just leads to more censorship without any option to appeal.
We’re talking about Facebook here. You shouldn’t have the assumption that the platform should be “uncensored” when it clearly is not.
Furthermore, I’d rather have the picture of my aunt’s vacation taken down by an AI mistake than hundreds of people getting PTSD because they have to manually review whether some decapitation was real or illustrated on an hourly basis.
Nobody has a right to be published.
Then what is freedom of speech if every platform deletes your content? Does it even exist? Facebook and co. are so ubiquitous, we shouldn't just apply normal laws to them. They are bigger than governments.
Freedom of speech means that the government can't punish you for your speech. It has absolutely nothing to do with your speech being widely shared, listened to, or even acknowledged. No one has the right to an audience.
> Then what is freedom of speech if every platform deletes your content?
Freedom of speech is between you and the government and not you and a private company.
As the saying goes, if I don't like your speech I can tell you to leave my home; that's not censorship, that's how freedom works.
If I don't like your speech, I can tell you to leave my property. Physical or virtual.
The government is not obligated to publish your speech. They just can't punish you for it (unless you cross a few fairly well-defined lines).
If this was the case then Facebook shouldn’t be liable to moderate any content. Not even CSAM.
Each government and in some cases provinces and municipalities should have teams to regulate content from their region?
Not if we retain control and each deploy our own moderation individually, relying on trust networks to pre-filter. That probably won't be allowed to happen, but in a rational, non-authoritarian world, this is something that machine learning can help with.
Curious, do you have a better solution?
The solution to most social media problems in general is:
`select * from posts where author_id in @follow_ids order by date desc`
At least 90% of the ills of social media are caused by using algorithms to prioritize content and determine what you're shown. Before these were introduced, you just wouldn't see these types of things unless you chose to follow someone who chose to post it, and you didn't have people deliberately creating so much garbage trying to game "engagement".
I'd love a chronological feed but if you gave me a choice I'd get rid of lists in SQL first.
> select * from posts where author_id in @follow_ids order by date desc
SELECT posts.* FROM posts JOIN follows ON posts.author_id = follows.author_id WHERE follows.user_id = $session.user_id ORDER BY posts.date DESC;
That's a workflow problem.
> without any option to appeal.
Why would that be?
Currently content is flagged and moderators decide whether to take it down. Using AI, it's easy to conceive of a process where some uploaded content is preflagged, requiring an appeal (otherwise it's the same as before, with a pair of human eyes automatically looking at uploaded material).
Uploaders trying to publish rule-breaking content would not bother with an appeal that would reject them anyway.
Because edge cases exist, and it isn't worth it for a company to hire enough staff to deal with them when one user with a problem, even if that problem is highly impactful to their life, just doesn't matter when the user is effectively the product and not the customer. Once the AI works well enough, the staff is gone and the cases where someone's business or reputation gets destroyed because there are no ways to appeal a wrong decision by a machine get ignored. And of course 'the computer won't let me' or 'I didn't make that decision' is a great way for no one to ever have to feel responsible for any harms caused by such a system.
This, plus social media companies in the EU tend to just delete stuff because of draconian laws where content must be deleted within 24 hours or they face a fine. So companies would rather not risk it. Moderators also only have a few seconds to decide if something should be deleted or not.
> because there are no ways to appeal
I already addressed this and you're talking over it. Why are you making the assumption that AI == no appeal and zero staff? That makes zero sense, one has nothing to do with the other. The human element comes in for appeal process.
> I already addressed this and you're talking over it.
You didn't address it, you handwaved it.
> Why are you making the assumption that AI == no appeal and zero staff?
I explicitly stated the reason -- it is cheaper and it will work for the majority of instances while the edge cases won't result in losing a large enough user base that it would matter to them.
I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.
> That makes zero sense, one has nothing to do with the other.
"Cheaper, mostly works, and the losses from people leaving are smaller than the money saved by removing support staff" makes perfect sense, and the two things are related to each other like identical twins are related to each other.
> The human element comes in for appeal process.
What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time? Corporations don't exist to do the right thing or to make people happy, they are extracting value and giving it to their shareholders. The shareholders don't care about anything else, and the way I described returns more money to them than yours.
> I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.
Their copyright takedown system has been around for many years and wasn't contingent on AI. It's a "take-down now, ask questions later" policy to please the RIAA and other lobby groups. Illegal/abuse material doesn't profit big business, their interest is in not having it around.
You deliberately conflated moderation & appeal process from the outset. You can have 100% AI handling of suspect uploads (for which the volume is much larger) with a smaller staff handling appeals (for which the volume is smaller), mixed with AI.
Frankly if your hypothetical upload is still rejected after that, it 99% likely violates their terms of use, in which case there's nothing to say.
> it is cheaper
A lot of things are "cheaper" in one dimension irrespective of AI, doesn't mean they'll be employed if customers dislike it.
> the money saved by removing support staff makes perfect sense and the two things are related to each other like identical twins are related to each other.
It does not make sense to have zero staff as part of managing an appeal process (precisely to deal with edge cases and the fallibility of AI), and it does not make sense to have no appeal process.
You're jumping to conclusions. That is the entire point of my response.
> What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time?
AI isn't there yet, notwithstanding, if they did a good job 98% of the time then who cares? No one.
And then the problem is moved to the team curating data sets.
I would have hoped the previously-seen & clearly recognisable stuff already gets auto-flagged.
I think they use sectioned hashes for that sort of thing. They certainly do for, e.g., ISIS videos; see https://blogs.microsoft.com/on-the-issues/2017/12/04/faceboo...
You know what is going to end up happening, though, is something akin to Tesla's "autonomous" Optimus robots.
Maybe. Apple had a lot of backlash for using AI to detect CSAM.
Wasn’t the backlash due to the fact that they were running detection on device against your private library?
Yes. As opposed to running it on their servers like they do now.
And it was only for iCloud synced photos.
There's a huge gap between "we will scan our servers for illegal content" and "your device will scan your photos for illegal content" no matter the context. The latter makes the user's device disloyal to its owner.
The choice was between "we will upload your pictures unencrypted and do with them as we like, including scan them for CSAM" vs. "we will upload your pictures encrypted and keep them encrypted, but will make sure beforehand, on your device only, that there's no known CSAM among them".
> we will upload your pictures unencrypted and do with them as we like
Curious, I did not realize Apple sent themselves a copy of all my data, even if I have no cloud account and don't share or upload anything. Is that true?
Apple doesn't do this. But other service providers do (Dropbox, Google, etc).
Other service providers can scan for CSAM from the cloud, but Apple cannot. So Apple might be one of the largest CSAM hosts in the world, due to this 'feature'.
Apple is already categorizing content on your device. Maybe they don't report what categories you have. But I know if I search for "cat" it will show me pictures of cats on my phone.
And introduces avenues for state actors to force the scanning of other material.
This was also during a time where Apple hadn’t pushed out e2ee for iCloud, so it didn’t even make sense.
This ship has pretty much sailed.
If you are storing your data in a large commercial vendor, assume a state actor is scanning it.
I'm shocked at the number of people I've seen on my local news getting arrested lately for it, and it all comes from the same starting tip:
"$service_provider sent a tip to NCMEC" or "uploaded a known-to-NCMEC hash", ranging from GMail, Google Drive, iCloud, and a few others.
https://www.missingkids.org/cybertiplinedata
"In 2023, ESPs submitted 54.8 million images to the CyberTipline of which 22.4 million (41%) were unique. Of the 49.5 million videos reported by ESPs, 11.2 million (23%) were unique."
And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they. The slippery slope argument only applies if the slope is slippery.
This is analogous to the police's use of genealogy and DNA data to narrow searches for murderers, whom they then collected evidence on by other means. There is risk there, but (at least in the US) you aren't going to find a lot of supporters of the anonymity of serial killers and child abusers.
There are counter-arguments to be made. Germany is skittish about mass data collection and analysis because of their perception that it enabled the Nazi war machine to micro-target their victims. The US has no such cultural narrative.
> And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they.
I wouldn't be so sure.
When Apple was going to introduce on-device scanning they actually proposed to do it in two places.
• When you uploaded images to your iCloud account they proposed scanning them on your device first. This is the one that got by far the most attention.
• The second was to scan incoming messages on phones that had parental controls set up. The way that would have worked is:
1. if it detects sexual images it would block the message, alert the child that the message contains material that the parents think might be harmful, and ask the child if they still want to see it. If the child says no that is the end of the matter.
2. if the child say they do want to see it and the child is at least 13 years old, the message is unblocked and that is the end of the matter.
3. if the child says they do want to see it and the child is under 13 they are again reminded that their parents are concerned about the message, again asked if they want to view it, and told that if they view it their parents will be told. If the child says no that is the end of the matter.
4. If the child says yes the message is unblocked and the parents are notified.
This second one didn't get a lot of attention, probably because there isn't really much to object to. But I did see one objection from a fairly well known internet rights group. They objected to #4 on the grounds that the person sending the sex pictures to your under-13 year old child sent the message to the child, so it violates the sender's privacy for the parents to be notified.
I don't think the problem there is the AI aspect
My understanding was the FP risk. Everything was on device. People designed images that were FPs of real images.
FP? Let us know what this means when you have a chance. Federal Prosecution? Fake Porn? Fictional Pictures?
My guess is False Positive. Weird abbreviation to use though.
No, they had backlash against using AI on devices they don’t own to report said devices to police for having illegal files on them. There was no technical measure to ensure that the devices being searched were only being searched for CSAM, as the system can be used to search for any type of images chosen by Apple or the state. (Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}.)
That’s a very very different issue.
I support big tech using AI models running on their own servers to detect CSAM on their own servers.
I do not support big tech searching devices they do not own in violation of the wishes of the owners of those devices, simply because the police would prefer it that way.
It is especially telling that iCloud Photos is not end to end encrypted (and uploads plaintext file content hashes even when optional e2ee is enabled) so Apple can and does scan 99.99%+ of the photos on everyone’s iPhones serverside already.
> Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}
It hasn’t been redefined. The legal definition of it in the UK, Canada, Australia, New Zealand has included computer generated imagery since at least the 1990s. The US Congress did the same thing in 1996, but the US Supreme Court ruled in the 2002 case of Ashcroft v Free Speech Coalition that it violated the First Amendment. [0] This predates GenAI because even in the 1990s people saw where CGI was going and could foresee this kind of thing would one day be possible.
Added to that: a lot of people misunderstand what that 2002 case held. SCOTUS case law establishes two distinct exceptions to the First Amendment – child pornography and obscenity. The first is easier to prosecute and more commonly prosecuted; the 2002 case held that "virtual child pornography" (made without the use of any actual children) does not fall into the scope of the child pornography exception – but it still falls into the scope of the obscenity exception. There is in fact a distinct federal crime for obscenity involving children as opposed to adults, 18 USC 1466A ("Obscene visual representations of the sexual abuse of children") [1] enacted in 2003 in response to this decision. Child obscenity is less commonly prosecuted, but in 2021 a Texas man was sentenced to 40 years in prison over it [2] – that wasn't for GenAI, that was for drawings and text, but if drawings fall into the legal category, obviously GenAI images will too. So actually it turns out that even in the US, GenAI materials can legally count as CSAM, if we define CSAM to include both child pornography and child obscenity – and this has been true since at least 2003, long before the GenAI era.
[0] https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalit...
[1] https://www.law.cornell.edu/uscode/text/18/1466A
[2] https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...
Thanks for the information. However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity. If no other crime (like against a real child) is committed in creating the content, what makes it different from any other speech?
> However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity
If you look at the question from an originalist viewpoint: did the legislators who drafted the First Amendment, and voted to propose and ratify it, understand it as an exceptionless absolute or as subject to reasonable exceptions? I think if you look at the writings of those legislators, the debates and speeches made in the process of its proposal and ratification, etc, it is clear that they saw it as subject to reasonable exceptions – and I think it is also clear that they saw obscenity as one of those reasonable exceptions, even though they no doubt would have disagreed about its precise scope. So, from an originalist viewpoint, having some kind of obscenity exception seems very constitutionally justifiable, although we can still debate how to draw it.
In fact, I think from an originalist viewpoint the obscenity exception is on firmer ground than the child pornography exception, since the former is arguably as old as the First Amendment itself is, the latter only goes back to the 1982 case of New York v. Ferber. In fact, the child pornography exception, as a distinct exception, only exists because SCOTUS jurisprudence had narrowed the obscenity exception to the point that it was getting in the way of prosecuting child pornography as obscene – and rather than taking that as evidence that maybe they'd narrowed it a bit too far, SCOTUS decided to erect a separate exception instead. But, conceivably, SCOTUS in 1982 could have decided to draw the obscenity exception a bit more broadly, and a distinct child pornography exception would never have existed.
If one prefers living constitutionalism, the question is – has American society "evolved" to the point that the First Amendment's historical obscenity exception ought to jettisoned entirely, as opposed to merely be read narrowly? Does the contemporary United States have a moral consensus that individuals should have the constitutional right to produce graphic depictions of child sexual abuse, for no purpose other than their own sexual arousal, provided that no identifiable children are harmed in its production? I take it that is your personal moral view, but I doubt the majority of American citizens presently agree – which suggests that completely removing the obscenity exception, even in the case of virtual CSAM material, cannot currently be justified on living constitutionalist grounds either.
My understanding was the FP risk. The hashes were computed on device, but the device would self-report to LEO if it detects a match.
People designed images that were FPs of real images. So apps like WhatsApp that auto-save images to photo albums could cause people a big headache if a contact shared a legal FP image.
Weird take. The point of on-device scanning is to enable E2EE while still mitigating CSAM.
No, the point of on-device scanning is to enable authoritarian government overreach via a backdoor while still being able to add “end to end encryption” to a list of product features for marketing purposes.
If Apple isn’t free to publish e2ee software for mass privacy without the government demanding they backdoor it for cops on threat of retaliation, then we don’t have first amendment rights in the USA.
> they don’t own to report said devices to police for having illegal files on them
They do this today. https://www.apple.com/child-safety/pdf/Expanded_Protections_...
Every photo provider is required to report CSAM violations.
Actually they do not.
https://forums.appleinsider.com/discussion/238553/apple-sued...
Probably because you need to feed it child porn so it can detect it...
Already happened/happening. I have an ex-coworker who left my current employer for my state's version of the FBI. Long story short, the government has a massive database to crosscheck against. Oftentimes, they would use automated processes to filter through suspicious data they would collect during arrests.
If the automated process flags something as a potential hit, then they, the humans, would review those images to verify. Every image/video that is discovered to be a hit is also inserted into a larger dataset as well. I can't remember if the Feds have their own DB (why wouldn't they?), but the National Center for Missing and Exploited Children runs a database that I believe government agencies use too. Not to mention, companies like Dropbox, Google, etc. all check against the database(s) as well.
Apple had a lot of backlash by using AI to scan every photo you ever took and sending it back to the mothership for more training.
Borrowing the thought from Ed Zitron, but when you think about it, most of us are exposing ourselves to low-grade trauma when we step onto the internet now.
What's more, popular TV shows regularly have scenes that could cause trauma; the media has been ramping up the intensity of content for years. I think it's simply seeking more word of mouth: 'did you see GoT last night? Oh my gosh, so-and-so did such-and-such to so-and-so!'
It really became apparent to me when I watched the FX remake of Shogun, the 1980 version seems downright silly and carefree by comparison.
That's the risk of being in a society in general, it's just that we interact with people outside way less now. If one doesn't like it, they can always be a hermit.
Not just that, but that algorithms are driving us to the extremes. I used to think it was just that humans were not meant to have this many social connections, but it's more about how these connections are mediated, and by whom.
Worth reading Zitron's essay if you haven't already. It sounds obvious, but the simple cataloging of all the indignities we take for granted builds up to a bigger condemnation than just Big Tech. https://www.wheresyoured.at/never-forgive-them/
Is there any way to look at this that doesn't resort to black or white thinking? That's a rather extreme view in itself that could use some nuance and moderation.
There have been multiple instances where I would receive invites or messages from obvious bots - users having no history, generic name, sexualised profile photo. I would always report them to Facebook just to receive a reply an hour or a day later that no action has been taken. This means there is no human in the pipeline and probably only the stuff that's not passing their abysmal ML filter goes to the actual people.
I also have a relative who is stuck, unable to change any contact details on their profile, neither email nor password, because the FB account center doesn't open for them. Again, there is no human support.
BigTech companies must be mandated by law to have a number of live support people, working and reachable, that is a fixed fraction of their user count. Then they would have no incentive to inflate their user numbers artificially. As for the moderators, there should also be a strict upper limit on the amount of content (content tokens, if you will) they view during their work day. Then the companies would also be more willing to limit the amount of content on their systems.
Yeah, it's bad business for them but it's a win for the people.
I have several friends who do this work for various platforms.
The problem is, someone has to do it. These platforms are mandated by law to moderate it, or else they're responsible for the content the users post. And the companies cannot shield their employees from it because the work simply needs doing. I don't think we can really blame the platforms (though I think the remuneration could be higher for this tough work).
The work tends to suit some people better than others. The same way some people will not be able to be a forensic doctor doing autopsies. Some have better detachment skills.
All the people I know that do this work have 24/7 psychologists on site (most of them can't work remotely due to the private content they work with). I do notice though that most of them do have an "Achilles heel". They tend to shrug most things off without a second thought but there's always one or two specific things or topics that haunt them.
Hopefully eventually AI will be good enough to deal with this shit. It sucks for their jobs of course, but it's not the kind of job anyone really does with pleasure.
Someone has to do it is a strong claim. We could not have the services that require it instead.
Absolutely. The platforms could reasonably easy stop allowing anonymous accounts. They don’t because more users means more money.
Not what I was saying. I'm questioning the need for the thing entirely.
Uhh no I'm not giving up my privacy because a few people want to misbehave. Screw that. My friends know who I am but the social media companies shouldn't have to.
Also, it'll make social media even more fake than it already is. Everyone trying to be as fake as possible. Just like LinkedIn is now. It's sickening, all these people toeing the company line, even though they do nothing but complain when you speak to them in person.
And I don't think it'll actually solve the problem. People find ways to get through the validation with fake IDs.
So brown/black people in the third world who often find that this is their only meaningful form of social mobility are the "someone" by default? Because that's the de-facto world we have!
That's not true at all. All the people I speak of are here in Spain. They're generally just young people starting a career. Many of them end up in the fringes of cybersecurity work (user education etc) actually because they've seen so many scams. So it's the start of a good career.
Sure, some companies also outsource to Africa, but that doesn't mean this work is only done in third-world countries. And there aren't that many jobs in it. It's more than possible to find enough people who can stomach it.
There was another article a few years back about the poor state of mental health of Facebook moderators in Berlin. This is not exclusively a poor people problem. More of a wrong people for the job problem.
And of course we should look more at why this is the only form of social mobility for them if it's really the case.
What do you call ambulance chasers, but they go after tech companies? Cause this is that.
I'm curious about the contents that these people moderated. What is it that seeing it fucks people up?
From the first paragraph of the article:
> post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism.
If you want a taste of the legal portion of these, just go to 4chan.org/gif/catalog and look for a "rekt", "war", "gore", or "women hate" thread. Watch every video there for 8-10 hours a day.
Now remember this is the legal portion of the content moderated as 4chan does a good job these days of removing illegal content mentioned in that list above. So all these examples will be a milder sample of what moderators deal with.
And do remember to browse for 8-10 hours a day.
edit: it should go without saying that the content there is deep in the NSFW territory, and if you haven't already stumbled upon that content, I do not recommend browsing "out of curiosity".
As someone that grew up with 4chan I got pretty desensitized to all of the above very quickly. Only thing I couldn’t watch was animal abuse videos. That was all years ago though, now I’m fully sensitized to all of it again.
These accounts like yours and this report of PTSD don't line up. Both of them are credible. What's driving them crazy but not Old Internet vets?
Could it be:
Personally, I'm suspecting that difference in exposure to _any kind of media_ might be a factor; I've come across stories online that imply visiting and staying at places like Tokyo can almost drive people crazy, from the amount of stimuli alone. Doesn't it sound a bit too shallow and biased to determine it was specifically CSAM or whatever specific type of data that did it?
Did your parents know what you were seeing? Advice to others to not have kids see this kind of stuff, let alone get desensitized to it?
What drew you to 4chan?
Of course not. What drew me in was the edginess. What kept me there was the very dark but funny humor. This was in 2006-2010, it was all brand new, it was exciting.
I have a kid now and my plan is to not give her a smartphone/social media till she’s 16 and heavily monitor internet access until she’s at least 12. Obviously I can’t control what she will see with friends, but she goes to a rigorous school and I’m hoping that will keep her busy. Other than that, I’m hoping the government comes down hard on social media access for kids/teenagers and all the restrictions are legally codified by the time she’s old enough.
The point is that you don't know which one will stick. Even people who are desensitized will remember certain things, a person's facial expression or a certain sound or something like that, and you can't predict which one will stick with you.
That fucking guy torturing monkeys :(
things that you cannot unsee, the absolute worst of humanity
There was a report by 60 minutes (I think) on this fairly recently. I’m not surprised the publicity attracted lawyers soon after.
https://news.ycombinator.com/item?id=42465459
One terrible aspect of online content moderation is that, no matter how good AI gets and no matter how much of this work we can dump in its lap, to a certain extent there will always need to be a "human in the loop".
The sociopaths of the world will forever be coming up with new and god-awful types of content to post online, which current AI moderators haven't encountered before and which therefore won't know how to classify. It will therefore be up to humans to label that content in order to train the models to handle that new content, meaning humans will have to view it (and suffer the consequences, such as PTSD). The alternative, where AI labels these new images and then uses those AI-generated labels to update the model, famously leads to "model collapse" [1].
Short of banning social media at a societal level, or abstaining from it at an individual level, I don't know that there's any good solution to this problem. These poor souls are taking a bullet for the rest of us. God help them.
1. https://en.wikipedia.org/wiki/Model_collapse
Obvious job that would benefit everyone for AI to do instead of humans.
Good! I hope they get every penny owed. It's an awful job and outsourcing if to jurisdictions without protection was naked harm maximization.
This is the one job we can probably automate now.
it's kinda crazy that they have normies doing this job
Normies? As opposed to who?
[flagged]
I’m not sure what is behind your assumption; if it’s the "autistic people do not have empathy" myth, please read up on the topic.
Autistic people do have empathy, it just works differently. Most of them are actually really caring, just not very good at showing it. Nor at picking up others' feelings. But they do care about them in my experience.
Most of them I know will have more difficulty with this type of work, not less. Because they don't tend to process it as well. This includes myself as I do have some autistic tendencies. No way I could do this.
Autistic people often have stronger empathy than neurotypical people, sometimes much much stronger. Especially towards animals.
I'd wager they'd still have PTSD, but wouldn't be able to communicate it as well as a normal person.
What you really want is AI doing this job. Or psychopaths/unempathetic people if that's not an option.
[dead]
[dead]
[dead]
[flagged]
You think people who took these jobs had a list of job offers and were jumping for joy to be able to pick this one out? Or that they stuck with it after the first 5 minutes of moderating necrophilia because they believed other jobs would have similar downsides? You’re so out of touch with the real world and hardships people face trying to make a living for themselves and their family.
I’m curious of other perspectives and conclusions on this.
Why do you think Facebook is the responsible party and not the purveyors of the content that caused them PTSD? From my perspective, Facebook hired people to prevent this content from reaching a wider audience. Thanks for any insight you can provide.
I never said Facebook is the responsible party. I’m saying these workers deserve our sympathy and I’m saying it’s not a case of people who had a simple choice but willingly chose a job that caused them PTSD.
I don’t think Facebook is blameless though. They practically brag about their $40B of AI spend per year and absolutely brag about how advanced their AI is. You can’t focus some of your R&D on flagging content that’s instantly recognizable as disgusting, like pedophilia, necrophilia, and bestiality? There’s already a ton of pre-labeled data they can use from all these workers. No, they don’t get a pass on that. I think it’s shameful they focus all their AI compute and engineering on improving targeted ads and don’t put a major focus on solving this specific problem that’s directly hurting so many people.
Very good point. Thanks for taking the time to respond and for your thoughtfulness!
Maybe the solution is that Facebook shouldn't exist. It solves both the problem of distribution and the problem of moderation.
While that would solve the problem within Facebook, I think you're kidding yourself if you think that's going to stop the demand or supply of horrible content.
If others want to moderate why should these complainers get in the way? They are free to not take the job, which obviously involves looking at repulsive content so others don’t have to. Most people don't have a problem with social media existing or moderators having the job of a moderator.
At first glance you may have a point. Thing is, they’re often recruited with very promising job titles and descriptions, and trained on mild cases. Once they fully realize what they got themselves into, the damage has been done. If they’re unlucky, quitting also means losing their house. This may help empathize a bit with their side of this argument.
If you pay someone to deliver post and they get their leg blown off because you ordered them to go through a minefield, you can’t just avoid responsibility by saying "that’s what they signed up for". Obviously the responsibility for ensuring that the job can be carried out safely rests with the employer, and workers are well within reason to demand compensation if the employer hasn’t ensured the job can be safely carried out.
I think a better example is mining, where miners received no safety equipment, and the mines were not built with safety foremost.
The idea was, if you didn't like it, leave. If you wanted safety equipment, buy it yourself. Or leave. Can't work due to black lung disease partially from poor ventilation the company was responsible for? You're fired; should have left years ago.
There are still people who believe the contract is all that counts, nothing else matters, and if you don't like it, leave.
> It’s the job they signed up for. I don’t understand the complaint. If they don’t want to do the part of the job that is obviously core to it, they should move on. The mass diagnosis just seems like a tactic to generate “evidence”. And the mention of pay compared to other countries makes this look like a bad faith lawsuit to get more compensation.
It's also their right to sue their employer for damages if they believe the job affected them in an extremely harmful way. Signing up for a job doesn't make the employer above the law.
But some here can't fathom that workers also have rights.
Exploited people of the world should just pull themselves up by their bootstraps and work harder to get what they want, like you did?
They aren’t exploited. They’re paid money in return for viewing and filtering content for others. They could have not applied, or declined the offer and looked at other jobs. The availability of this job doesn’t change the rest of their employment options. But it’s pretty clear what this job is. If it were just looking at friendly content, it wouldn’t need to exist.
Exploitation nearly always involves paying. Plenty of people caught up in sex trafficking still get paid, they just don't have a viable way out. Plenty of people working in sweat shops still get paid, but again not enough with enough viable alternatives to get out.
You’re still not acknowledging the key points - that it is obvious up front that the job fundamentally involves looking at content others don’t want to, and that it is a new job that can be accepted or avoided without taking away from other employment opportunities. Therefore it doesn’t match these other situations you’re drawing a comparison to.
Most of these people are forced to take these jobs, because nothing else is available, they don't have the power to avoid this job. You cannot make a principled decision if your basic needs, or those of your family are not met. In fact, many well-off, privileged people who are simply stressed cannot make principled decisions if their livelihood is at stake.
The world is not a tabula rasa where every decision is made in isolation, you can't just treat this like a high school debate team exercise.
Not acknowledging the social factors at play is bordering on bad faith in this case. The social conditions of the moderators is _the_ key factor in this discussion. The poorer you are, the more likely you are to be forced to take a moderator job, the more likely you are to get PTSD. Our social and economic systems are essentially punishing people for being poor.
[dead]
Perhaps if looking at pictures of disturbing things on the internet gives you PTSD, then this isn’t the kind of job for you?
Not everyone can be a forensic investigator or coroner, too.
I know lots of people who can and do look at horrible pictures on the internet and have been doing so for 20+ years with no ill effects.
It isn’t known in advance though. These people went to that job and got psychiatric disorders that, considering the third-world conditions, they are unlikely to get rid of.
I’m not talking about obvious “scream and run away” reaction here. One may think that it doesn’t affect them or people on the internet, but then it suddenly does after they binge it all day for a year.
The fact that no less than 100% got PTSD should be telling us something here.
Perhaps life in Kenya isn't as easy as yours?
The 100+ years of research on PTSD, starting from shell shock studies in WWI shows that PTSD isn't so simple.
Some people come out with no problems, while their trenchmate facing almost identical situations suffers for the rest of their lives.
In this case, the claim is that "it traumatised 100% of hundreds of former moderators tested for PTSD … In any other industry, if we discovered 100% of safety workers were being diagnosed with an illness caused by their work, the people responsible would be forced to resign and face the legal consequences for mass violations of people’s rights."
Do those people you know look at horrible pictures on the internet for 8-10 hours each day?
[flagged]
I didn’t make any claims about me. Read it again, more carefully, before making personal attacks.
> Perhaps if looking at pictures of disturbing things on the internet gives you PTSD than this isn’t the kind of job for you?
Perhaps these are jobs people are forced to do because local labour doesn't pay as well as in other countries, or because they were trafficked, and the like.
> I know lots of people who can and do look at horrible pictures on the internet and have been doing so for 20+ years with no ill effects.
Looking at is different from moderating. I've seen my fair share of snuff, from the first Iraqi having their head cut off in 2005 all the way down to ogrish/liveleak, goatse, tubgirl, and 2girls1cup shock sites.
But when you are faced with imagery of gruesome material day in, day out, on 12-hour shifts if not longer, non-stop, while being paid very little, it would take a toll on anyone.
I've done it: lone-wolf sysop for an adult dating website for two years, and the stuff I saw was moderate but still made me feel mentally disturbed. The normality wears off very quickly.
Could you work a five days week looking at extreme obscenity imagery for $2 an hour?
The alternative is they have no job. And it is clear what this job entails, so complaining about the main part of the job afterwards, as this small group of moderators is doing, seems disingenuous.
Leave these empty personal attacks off HN, please. Respond substantively.
I wish they'd get a trillion dollars, but I am sure they signed their lives away via waivers and whatnot when they got the job :(
Maybe so, but in places with good civil and human rights, you can't sign them away via contract, they're inalienable. If Kenya doesn't offer these protections, and the allegations are correct, then Facebook deserves to be punished regardless for profiting off inhumane working conditions.
absolutely!!
If I was a tech billionaire, and there was so much uploading of stuff so bad, that it was giving my employee/contractors PTSD, I think I'd find a way to stop the perpetrators.
(I'm not saying that I'd assemble a high-speed yacht full of commandos, who travel around the world, righting wrongs when no one else can. Though that would be more compelling content than most streaming video episodes right now. So you could offset the operational costs a bit.)
How else would you stop the perpetrators?
Large scale and super sick perpetrators exist (as compared to small scale ones who do mildly sick stuff) because Facebook is a global network and there is a benefit to operating on such a large platform. The sicker you are, while getting away with it, the more reward you get.
Switch to a federated social systems like Mastodon, with only a few thousand or ten thousand users per instance, and perpetrators will never be able to grow too large. Easy for the moderators to shut stuff down very quickly.
> Switch to a federated social systems like Mastodon, with only a few thousand or ten thousand users per instance, and perpetrators will never be able to grow too large.
The #2 and #3 most popular Mastodon instances allow CSAM.
Tricky. It also gives perpetrators a lot more places to hide. I think the jury is out on whether a few centralized networks or a fediverse makes it harder for attackers to reach potential targets (or customers).
The purpose of Facebook moderators (besides legal compliance) is to protect normal people from the "sick" people. In a federated network, of course, such people will create their own instances and hide there. But then no one is harmed by them, because all such instances will be banned quite quickly, the same as spam email hosts are blocked very quickly by everyone else.
From a normal person perspective on not seeing bad stuff, the design of a federated network is inherently better than a global network.
That's the theory. I'm not sure yet that it works in practice; I've seen a lot of people on Mastodon complaining about how, as a moderator, keeping up with the bad services is a perpetual game of whack-a-mole because access is open by default. Maybe this is a Mastodon-specific issue.
That's because Mastodon and other federated social networks haven't taken off, and so not enough development has gone into them. If they take off, naturally people will develop analogs of spam lists and SpamAssassin etc. for such systems, which will cut down moderation time significantly. I run an org email server, and don't exactly do anything besides installing such automated tools.
On Mastodon, admins will just have to do the additional work to make sure new accounts are not posting weird stuff.
Big tech vastly underspends on this area. You can find a stream of articles from the last 10 years where BigTech companies were allowing open child prostitution, paid-for violence, and other stuff on their platforms with little to no moderation.
If you were a tech billionaire you'd be a sociopath like the others and wouldn't give a single f about this. You'd be going on podcasts to tell the world that markets will fix everything if given the chance.
They are not wrong. Do you know any mechanism other than markets that works at scale, doesn't cost a bomb, and doesn't involve an abusive central authority?
Tech billionaires usually advocate for some kind of return to the gilded age, with minimal workers rights and corporate tax. Markets were freer back then, how did that work out for the average man? Markets alone don't do anything for the average quality of life.
Quality of life for the average man now is way better than it was at any time in history. A fact.
But is it solely because of markets? Would deregulation improve our lives further? I don't think so, and that is what I am talking about. Musk, Bezos, Andreessen and co. are advocating for a particular laissez-faire flavor of capitalism, which historically has been very bad for the average man.
[flagged]
"More than 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism."
What part here are you suggesting is similar to seeing two men kissing?
Not defending this person in particular, but you should take a look at how anti-LGBT, including at the government level, most countries are in Africa. Maybe a decent number of them do regard seeing homosexuality as PTSD inducing.
There are several places where they legally can and will kill you for homosexuality.
The existence of anti-LGBTQ wasn't where their argument was leading to though.
Their line of logic was that our society's moral values are but a social construct that changes from place to place and with time; therefore being exposed to sexual violence, child abuse, gore, etc. is PTSD-inducing only because we're examining it through our limited perspective, whereas it's quite possible that all these things those FB mods were exposed to can be perfectly normal in some cultures.
I wanted to see where that argument would lead them, as in, what kind of people FB should have hired that would be resistant to all this horrible stuff, but other than letting me know that in fact there are such cultures, I never got a straight answer out of them.
[flagged]
But we're not talking about an 80's christian mom since you proceeded to make the observation that "People are outraged by whatever they are told to be outraged over and ignore everything that they aren't".
Which is to say, being exposed to extreme violent/abusive content could only cause PTSD iff one is subject to a specific social construct that define these acts in a certain way. Let's just assume you're right, what kind of employees would that imply are immune to getting PTSD from such content given your previous observation?
The answer is in the question: Whoever has been raised in a culture that doesn't make a big deal out of whatever the given content is.
And what culture is that?
You're lying and you know it. I remember the 80s as much as anyone else. Especially the part where Elton John and Freddie Mercury were at the peak of their popularity, unless you were living in a religious shithole, but that was (and still is) a small part of the world.
In 1980 75% of adults thought that homosexuality was always wrong and 20% that it was never wrong or only sometimes wrong.
https://lgbpsychology.org/html/gss4.html
https://lgbpsychology.org/html/prej_prev.html
Your feelings about the period mean nothing.
Seeing something you think is culturally wrong is not necessarily traumatizing, is it? And surely there are degrees of "wrongness", ranging from the merely uncomfortable to the truly gross to the utterly horrifying. Even within the horrifying category, one can differentiate between things like a finger being chopped off and e.g. disembowelment. It's reasonable to expect a person would be more traumatized the further up the ladder of horror they're forced to look.
This would be believable if not for the fact that hanging, gutting, and quartering was considered good wholesome family entertainment to watch while getting fast food in the market not three centuries ago, literally everywhere.
How did that go away, if people only do what they're brought up to do?
We found out that keeping people homeless to die of exposure slowly was much more effective at keeping the majority in line.
Depends seriously on the country, the Netherlands was way ahead there. In many ways more ahead than it is now because it has become so conservative lately.
In what 80s fantasy were you living where gay people were open? The rumor was that John was bisexual, and Freddie getting AIDS in the late 80s was a huge deal. Queen's peak is around 1992 with Wayne's World. No men kissed on stage or in movies, and neither did women.
It might have induced 'disgust', but no, two men kissing didn't give 1980s Christian moms actual PTSD.
You have no idea what content is being discussed here if you even think about bringing identity politics into this topic.
[flagged]
Equating outrage to PTSD is absolute nonsense. As someone that lives with a PTSD sufferer, it is an extremely severe and debilitating condition that has nothing to do with “outrage” and can’t be caused by seeing people kiss.
PTSD is what happens when you see someone standing next to you reduced to a chunky red salsa in a split second.
The idea that seeing images of that can match the real thing can only be said by people who haven't smelled the results.
You’re very wrong; it can be caused by different things for different people. As the causes are emotional, it requires severe emotional trauma, which does not have to happen through a specific category of event; a lot of different types of trauma and abuse can cause it.
It’s hard to imagine a more disgusting thought process than someone trying to gatekeep others suffering like you are doing here.
Actually, not everybody gets PTSD in, for example, a combat situation, and Gabor Mate says that people who do develop PTSD are the people who have already suffered traumas as children; in a sense, childhood trauma is a preexisting condition.
A lot of PTSD is also not from combat at all; childhood emotional trauma alone can cause it. This is recognized now, but it took a while because initially it was discovered in war veterans and categorically excluded other groups; eventually they discovered that war wasn't unique in causing the condition.
However, I would point out that Mate's views are controversial and don't fully agree with other research on trauma and PTSD. He unrealistically associates essentially all mental illness and neurodivergence with childhood trauma, even in cases where good evidence contradicts that view. He claims ADHD is caused by childhood emotional trauma, although that has been shown not to be the case, so I don't put much stock in his scientific reasoning abilities; he has his hammer and sees everything as a nail.
You were literally gatekeeping PTSD from Christian moms not one post ago.
HN ethos is to assume good faith, but my imagination is failing me here as to how you might be sincere and not trolling. Can you please share more info to help me out?
What makes you think people have experienced clinically diagnosed or diagnosable PTSD from seeing someone kiss? Has anyone actually claimed that?
You used the word outrage, and again, outrage is not trauma: it describes an outer reaction, not an inner experience. The two are neither mutually exclusive nor synonymous.
Your assertion seems to be that only being physically present for a horrific event can be emotionally traumatic: that spending years sitting in a room watching media of children being brutally murdered and raped, day in and day out, cannot possibly be traumatic, yet watching a kiss between people you politically think should be banned from kissing can be genuinely traumatic?
In this context, this is dangerously close to asserting "people are only outraged about CSAM because they're told to be." I don't think that's what you mean.
It is exactly what I mean.
If you don't nurture that outrage every day, then you'd be rather surprised what can happen to a culture in a single generation.
I think your logic is backwards. The main reason for a culture to ban pedophilia is because it causes trauma in children. For thousands of years, cultures have progressed towards protecting children. This came from a natural sense of outrage in a majority of people, which became part of the culture. Not vice versa. In many of your comments, you seem to assume that people are only automatons who think and do exactly what their culture teaches them, but that's not the truth. The culture is made up of individuals, and individual conscience frequently - thankfully - overrides cultural diktat. Otherwise no dictatorship would ever fall, no group of people would ever be freed, and no wicked practices would ever be stamped out. It has always been individual people acting against the culture whose outrage has caused the culture to change. Which strongly implies that people's sense of outrage is at least partly intrinsic to human nature, totally apart from cultural practices of the time.
I'm now old enough to have seen people who treated homosexuals in the 1980s the same way we treat pedophiles today start waving rainbow flags and calling the people they beat up for being gay in high school Nazis.
There may be a few people with principles who stick with them.
The majority will happily shove whoever they are told to into a gas chamber.
I'm not saying there aren't a lot of people who are natural conformists, who do whatever they're told and hate or love whatever the prevailing culture hates or loves. They may be a majority. And yes, a prevailing culture can take even the revulsion at murder out of people to some extent (although check out the state-sanctioned degree of alcohol and drug use among SS officers and you'll see it's not quite so easy to make people commit acts of murder and torture every day).
What I am saying is that the conformists don't drive the culture, they're just a blunt weapon of whoever is driving the culture. That weapon can be turned toward gay rights or toward burning people at the stake, but what changes a culture are the individuals with either a conscience or the individuals with wicked plans. Both of which exist outside the mainstream in any time and place.
Maybe another way of saying this is that I think most people are capable of murder and most people are capable of empathy (and therefore trauma) with someone being tortured, but primarily they're concerned with being a good guy. What throws the arc of history towards a higher morality is that maybe >0% of people naturally need to perceive themselves as "good" by defending life and the dignity and humanity of other people, to the extent that needing to be a good person overrides their cultural programming. And those are not the only people who change a culture, but 51% of the time they change it for the better instead of worse.
That's just my view on it.
Wait, why are they calling gay people Nazis? This story is very unclear. And I can't see how it relates to CSAM and the moderators who have to see it, which is a categorically different issue to homosexuality, so different as to be completely unconflatable.
I'm trying to interpret this post in the best light and, regrettably, I'm failing.
Can you clarify what you think the change to society will be if we expose more people online to CSAM and normalize it?
I have a lot of questions.
The nature of the job really sucks. This is not unusual; there are lots of sucky jobs. So my concern is really whether the employees were informed what they would be exposed to.
Also I’m wondering why they didn’t just quit. Of course the answer is money, but if they knew what they were getting into (or what they were already into), and chose to continue, why should they be awarded more money?
Finally, if they can't count on employees in poor countries to self-select out when the job becomes life-impacting, maybe they should make it a temporary gig, e.g. only allow people to do it for short periods of time.
My out-of-the-box idea: maybe companies that need this function could interview with an eye towards selecting psychopaths. This is not a joke; why not select people who are less likely to be emotionally affected? I'm not sure anyone has ever done this before, and I also don't know whether such people would be likely to be inspired by the images, which would make this idea a terrible one. My point is to find ways to limit the harm the job causes to people, perhaps by changing how people interact with the job, since the nature of the job doesn't seem likely to change.
So you're expecting these people to have the deep knowledge of human psychology needed to know ahead of time that this is likely to cause them long-term PTSD, with lasting impact on their lives, rather than simply being something they'll get over a month after quitting?
I don’t think it takes any special knowledge of human psychology to understand that horrific images can cause emotional trauma. I think it’s a basic due diligence question that when considering establishing such a position, one should consult literature and professionals to discover what impact there might be and what might be done to minimize it.