Removed by mod
Those less responsible authors should be shown the study the same organization published last month showing similar problems on Twitter:
In the course of the investigation, researchers found that despite the availability of image hashes to identify and remove known CSAM, Twitter experienced an apparent regression in its mitigation of the problem. Using PhotoDNA, a common detection system for identified instances of known CSAM, researchers identified matches on public profiles, bypassing safeguards that should have been in place to prevent the spread of such content. This gap was disclosed to Twitter’s Trust & Safety team, which responded to address the issue. However, the failure highlights the need for platforms to prioritize user safety and the importance of collaborative research efforts to mitigate and proactively counter online child abuse and exploitation.
That being said, people who code for the Fediverse should see this report and pay particular attention to things like:
Current tools for addressing child sexual exploitation and abuse online—such as PhotoDNA and mechanisms for detecting abusive accounts or recidivism—were developed for centrally managed services and must be adapted for the unique architecture of the Fediverse and similar decentralized social media projects.
I honestly don’t know crap about coding, but this seems like a very solvable problem and something I’d very much like for the people who do to engage with. I would absolutely donate some money to support a project like this.
edit: I guess what I meant to say is I would absolutely donate some money to purchase API keys from Microsoft
I actually just saw that Dansup is working on adding optional, opt-in support for PhotoDNA in Pixelfed, enabled if an instance admin adds a PhotoDNA API key. I wonder if that was spurred by this report. Hopefully Mastodon also looks into adding support.
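For the curious, that kind of integration is mostly plumbing: the instance sends each new upload to Microsoft’s hosted matching service and rejects the file if it matches a known hash. Here’s a minimal sketch in Python, assuming the publicly documented PhotoDNA Cloud Service shape; the endpoint URL, header, and response field names here are my assumptions, not how Pixelfed is actually wiring it up:

```python
# Hypothetical upload hook: check new media against the PhotoDNA
# Cloud Service before accepting it. The endpoint, header, and
# response field below are assumptions, not Pixelfed's real code.
import requests

# Assumed endpoint for the PhotoDNA Cloud Service "Match" API.
PHOTODNA_MATCH_URL = "https://api.microsoftmoderator.com/photodna/v1.0/Match"

def is_known_csam(image_bytes: bytes, api_key: str) -> bool:
    """Return True if the service reports a match against known hashes."""
    resp = requests.post(
        PHOTODNA_MATCH_URL,
        headers={
            # Standard Azure API Management subscription header.
            "Ocp-Apim-Subscription-Key": api_key,
            "Content-Type": "image/jpeg",
        },
        data=image_bytes,
        timeout=10,
    )
    resp.raise_for_status()
    # "IsMatch" is the assumed name of the match flag in the response.
    return bool(resp.json().get("IsMatch", False))
```

Note that a real deployment also has to handle the mandatory NCMEC reporting that follows a match, which is a legal workflow on top of the code.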
Nice, yeah, hopefully this feature or something that accomplishes the same thing spreads* throughout the Fediverse quickly
*Like, it would be really cool if there were a way to fight child porn that didn’t involve relying on a for-profit company, but chipping away at our screwed-up economic system is a lower priority than stopping child abuse
After a bit of reading, another option may simply be to include a “report” button that generates a hash of the image and federates the list. That being said, there may be a similarity algorithm under the hood of PhotoDNA that works better. Hard to say, since it’s all proprietary and pay-for-membership; prices aren’t even listed publicly except through the cloud API.
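To illustrate the difference: a plain cryptographic hash only catches byte-identical copies, while a perceptual hash (roughly the category PhotoDNA falls into) survives resizing and re-encoding. A minimal sketch, using SHA-256 and the open-source imagehash library as a stand-in for whatever PhotoDNA does internally; the distance threshold is an arbitrary assumption:

```python
# Sketch: why federating plain file hashes is weaker than perceptual
# hashing. imagehash is an open-source stand-in; PhotoDNA's actual
# algorithm is proprietary.
import hashlib

import imagehash       # pip install imagehash
from PIL import Image  # pip install Pillow

def exact_hash(path: str) -> str:
    """SHA-256: changes completely if even one byte differs."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def perceptual_hash(path: str) -> imagehash.ImageHash:
    """pHash: stays close under resizing, recompression, small edits."""
    return imagehash.phash(Image.open(path))

# A federated report list would ship hashes like these between
# instances; perceptual matching is a Hamming-distance comparison.
h1 = perceptual_hash("reported.jpg")
h2 = perceptual_hash("resized_copy.jpg")
if h1 - h2 <= 8:  # hypothetical threshold
    print("likely the same image")
```

The hard part isn’t the hashing, though: it’s governance of the shared list (who gets to add to it, how false or malicious reports are handled) and robustness against adversarial edits.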
Yeah, I’m just discovering that it’s proprietary
In 2009, Microsoft partnered with Dartmouth College to develop PhotoDNA…
Good to know my tax dollars went to helping Microsoft develop another product! /s
Or… license PhotoDNA of course!
Oh, surely they don’t charge for a tool to stop child abus-
Wow, say what you will about capitalism, but it really is an engine for innovation and coming up with new ways to make me lose faith in humanity
There might be other options:
https://www.hackerfactor.com/blog/index.php?archives/931-PhotoDNA-and-Limitations.html
I was informed that, in the last few years, NCMEC has added additional solutions beyond PhotoDNA. This includes Google’s CSAI and Facebook’s open-source video/image matching tools.
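Facebook’s open-source tools mentioned there are PDQ for images and TMK+PDQF for video, released through the ThreatExchange project, so there is at least one non-proprietary option. A minimal sketch using the community pdqhash Python bindings; the exact API and the match threshold are assumptions based on that package’s documentation:

```python
# Sketch: PDQ, Facebook's open-source perceptual image hash.
# pip install pdqhash opencv-python numpy
import cv2
import numpy as np
import pdqhash

def pdq_hash(path: str) -> np.ndarray:
    """Compute a 256-bit PDQ hash as a binary vector."""
    image = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    hash_vector, quality = pdqhash.compute(image)
    return hash_vector

# Matching is a Hamming-distance comparison against a shared hash list.
a, b = pdq_hash("one.jpg"), pdq_hash("two.jpg")
distance = int(np.count_nonzero(a != b))
print(f"Hamming distance: {distance} (<= ~31 is a commonly cited match cutoff)")
```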
That’s good, but it is still just mind-blowing to me that we let a bunch of private, for-profit companies take the lead on this. This is the sort of thing the FBI ought to be all over developing, maintaining, and handing out to everyone, if they weren’t a bunch of stupid assholes busy harassing environmentalists and police brutality protesters.
I appreciate you doing a level headed job of explaining this and dropping a link. Cheers!
Commenting my $0.02: how fucking bullshit is this. I looked into the report, and it’s a report from Stanford. It’s the same loli crap you see on 4chan and even Twitter sometimes.
It’s not bullshit that the CSAM content is there. It’s bullshit to imply that this is a unique problem for Mastodon. Every social media platform is rife with child-abuse material.
Who made the study? It’s unknown? Let’s put an X next to the unknown :)
Who made the study? It’s right there in the article and the associated links. David Thiel is one of the two authors. He works at Stanford and previously worked at Facebook. He’s a security and safety guy, and he seems to know what he’s talking about going by his publications and history.
It was a joke
Who funded him?
Definitely “Big Centralised”.
See Fubo’s comment here, they did a great job: https://lemdit.com/comment/665534
In short, though, Stanford did. And Stanford is a Silicon Valley school, lol.
The report hints at it but doesn’t really say it out loud: get rid of one particular server and there goes 99% of it, along with roughly 90% of the overall Japanese userbase (as they were the first big Japanese instance and had a mostly-trusted, locally relevant company behind it). But nearly every non-Japanese-oriented instance has already either fully defederated from it or strips media content from it. It’s essentially its own thing, not really related to Mastodon aside from the software in use.
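For admins, that media-stripping is a built-in federation control. A sketch of applying it via Mastodon’s admin API (POST /api/v1/admin/domain_blocks, available since Mastodon 4.0); the instance URL, token, and target domain below are placeholders:

```python
# Sketch: add a domain block that strips media from a remote instance
# without fully defederating. Requires an admin-scoped access token.
import requests

INSTANCE = "https://your.instance"      # placeholder
TOKEN = "ADMIN_SCOPED_ACCESS_TOKEN"     # needs admin:write:domain_blocks

resp = requests.post(
    f"{INSTANCE}/api/v1/admin/domain_blocks",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "domain": "problem.example",    # placeholder target
        "severity": "noop",             # keep federating, but...
        "reject_media": True,           # ...drop all media from it
        "private_comment": "Strip media per the SIO report findings",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```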
Luckily, I don’t see anything like that on Mastodon. Unfortunately, I don’t see much of anything on Mastodon.
I’ve seen a lot of great stuff on Mastodon, but when it starts to become overwhelmingly political, I dip out. Problem is that you have to follow an absurd number of random people to start seeing anything across the Fediverse. It’s shit in the beginning, but awesome (it hits the mark, at least as a Twitter replacement) after you follow enough people.
You’re right, I do need to build up my watch list. I’m just not sure where to start. I’ll probably have to parasitize the watch lists of the few people I’m already watching.
Hashtags are also a good place to start! For example, if you’re looking for science content, you can follow the #science hashtag. Once you have those posts coming into your feed, start following the people and hashtags you’re seeing on the posts you like best. It’ll start snowballing from there.
Also, don’t worry too much about following too much at first. Get that feed populated, then pare it down later. Filtering is pretty powerful too, so a lot of times you can get the good parts of a hashtag and filter out the bad parts instead of the all-or-nothing following of some social media.
Study funded by Eloon and friends? :)
There was some theorycrafting from some of the React devs on Twitter that Twitter was striking Mastodon links on purpose, to keep people from becoming aware of it.
Good call imo.
WaPo’s coverage of the 2016 Presidential race was a master class in journalistic nihilism. Why sure, let’s read their point of view on social media via a study they found that supports it.
Mastodon is the new Discord!
This post brought to you by Elon’s Muskrat.
Muskrat Love 💨 🎸 🤘
@postmateDumbass @RagingNerdoholic
Well hell. I didn’t see that one coming. lol