🤮, +1
If the other comments don’t get you sorted, Scrounger does it nicely (if you trust some random site and have no sensitive bookmarks; other options exist if not).
Hmm. Are you asking in good faith, or to dogpile? Anyway, sure; I can explain why.
The Gruesome - clickbait because “if it bleeds it leads.”
Story - words like “story” are often plainly false when the article is a tiny blurb or fluff piece. Thankfully, this article is an actual story. But remember, it’s still bait.
of How - clickbait because it asks a question it doesn’t answer, baiting the headline-reader to click.
Neuralink’s Monkeys - oh, another Elon Musk altar. The press can’t get enough of Musk.
Actually Died - more bleeding leading.
Headlines can just be content, rather than a tease. This article title intentionally relays no new info.
Ah, so I was wrong. Gotcha.
Clickbait headline, no tldr? That’s a downvote for me dawg.
Skyler Hornback. Great contestant!
I’ll be the one to stoop to a name and shame. From the receipt, that’s Jon & Vinny’s Brentwood. Thanks—will now be sure to avoid going there.
tldr: parody petition for a six-month moratorium on superconductor development because it needs more tracking and government intervention.
Chop score: D+
for the curious, the QR code is https://watchdominion.org , which is a movie by The Vegan Hacktivists.
anti-clickbait tldr: system uses facial recognition, complete with the expected false positives, false negatives, and bias.
Key passage:
A review of Clear’s methods determined its facial-recognition system to enroll new members was vulnerable to abuse, said people familiar with the review, who asked not to be identified discussing security-sensitive information.
The computer-generated photos of prospective customers at times captured blurry images that only showed chins and foreheads, or faces obscured by surgical masks and hoodies.
The process — which allowed Clear employees to manually verify prospective customers’ identities after its facial recognition system raised flags — created the potential for human error.
Apparently last July “a man slipped through Clear’s screening lines at Reagan National Airport near Washington, before a government scan detected ammunition — which is banned in the cabin — in his possession.” And he’d “almost managed to board a flight under a false identity.” The TSA checkpoint found the ammunition, which is what it is supposed to do. This had nothing to do with his identity. There’s no suggestion that the passenger intended to do anything nefarious.
tldr: author is plainly dying, but can’t try risky new treatments because they might… harm his dying body(!?) and the poor widdle FDA might wook bad.
We need to have a much stronger “right to try” presumption: “When Dying Patients Want Unproven Drugs,” we should let those patients try. I have weeks to months left; let’s try whatever there is to try, and advance medicine along the way. The “right to try” is part of fundamental freedom—and this is particularly true for palliative-stage patients without a route to a cure anyway. They are risking essentially nothing.
Paywall. tldr?
Guessing… corporate incompetence and scaling problems and logistics and “muh supply chain” nonsense.
*yawn* tl;dw?
Very far away…
Can’t say for sure it’ll meet your needs and work with your Logi gear, but I use AntiMicroX to re-bind unrecognized controls.
Near Riyadh Tower in the King Abdullah Financial District. (see also, reddit post 2mos ago)