• 5 Posts
  • 647 Comments
Joined 6 months ago
Cake day: June 9th, 2024



  • The lie was WORSE than that.

    A lot of the fintechs involved actually told people their money was safe because it was subject to “passthrough FDIC insurance”: their money was ultimately put in an insured bank, and thus was safe.

    Problem is that’s not how it actually worked, so basically everyone was straight up lied to.

    The whole thing hinges on the bank keeping track of who owns which account and how much money they have, so if it goes bust, the FDIC just comes in, uses that data, and basically writes checks (rough sketch of that ledger at the end of this comment).

    Except since they’re disrupting banking, they also decided to just fucking not bother, so even if there was going to be a payout, nobody has any fucking clue who has how much, or which bank said money is actually in.

    Absolute clusterfuck, and about what you’d expect from silly-con valley types.
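
    For what it’s worth, the ledger in question is conceptually dead simple; a minimal sketch (purely hypothetical banks, names, and balances, not anything any of these fintechs actually ran):

    ```python
    # Hypothetical sketch of the "who owns what, at which insured bank" ledger
    # that passthrough FDIC coverage depends on. Banks, names, and balances
    # here are made up for illustration.
    from collections import defaultdict

    ledger = [
        {"customer": "alice", "bank": "Bank A", "balance_cents": 1_250_00},
        {"customer": "bob",   "bank": "Bank A", "balance_cents": 40_000_00},
        {"customer": "carol", "bank": "Bank B", "balance_cents": 7_32},
    ]

    def totals_per_bank(ledger):
        """What the FDIC needs: per-customer balances that reconcile against
        each bank's actual FBO account balance."""
        totals = defaultdict(int)
        for row in ledger:
            totals[row["bank"]] += row["balance_cents"]
        return dict(totals)

    print(totals_per_bank(ledger))  # {'Bank A': 4125000, 'Bank B': 732}
    ```

    If that mapping exists and reconciles, the FDIC payout is boring paperwork; if it doesn’t, you get exactly the mess described above.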


  • Both!

    The native automation is perfectly cromulent for what I want, usually, but there’s a couple of cases where the integrations either don’t exist or don’t return meaningful data.

    FOR EXAMPLE, the video playback in the living room thing. Sure, the Roku integration says “something is playing”, but it’s shockingly wrong and unreliable: it falls into ‘idle’ status between videos, or sometimes while you’re fast-forwarding, so the automation wasn’t doing exactly what I wanted.

    The Jellyfin API, though, can look at the living room TV user and is spot on about play/pause/stopped status, so I have Node-RED yank that data straight from the API and it works great.
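
    If anyone’s curious, the check itself is tiny. Here’s roughly what it boils down to, as Python rather than the actual Node-RED flow (server URL, API key, and user name are placeholders for your own setup):

    ```python
    # Rough sketch of the Jellyfin status check, as Python instead of the real
    # Node-RED flow. Server URL, API key, and user name are placeholders.
    import requests

    JELLYFIN = "http://jellyfin.local:8096"  # assumption: your server URL
    API_KEY = "your-api-key-here"            # assumption: key from the admin dashboard
    TV_USER = "livingroom"                   # assumption: the user the TV signs in as

    def living_room_state() -> str:
        """Return 'playing', 'paused', or 'stopped' for the living room TV user."""
        sessions = requests.get(
            f"{JELLYFIN}/Sessions",
            headers={"X-Emby-Token": API_KEY},
            timeout=5,
        ).json()
        for session in sessions:
            if session.get("UserName") != TV_USER:
                continue
            if "NowPlayingItem" not in session:
                return "stopped"
            return "paused" if session.get("PlayState", {}).get("IsPaused") else "playing"
        return "stopped"

    print(living_room_state())
    ```

    Node-RED does the equivalent of that and hands the playing/paused/stopped state to the automations.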



  • I’ve gone way too far down the automation path.

    All manner of temperature, humidity, occupancy, motion, and air quality sensors make all sorts of things respond appropriately.

    For example, I’ve got an mmWave motion/occupancy sensor in the bathroom, and if there’s no motion/occupancy and the humidity is more than 5% higher than the hallway sensor’s reading, the exhaust fan turns on until it’s not (sketch of that logic at the end of this comment).

    Or, if the air particulate count in the kitchen is too high, turn on the exhaust fan until it’s not.

    Or, if the living room is occupied, and the tv is on and playing media, turn the overhead lights off and turn the RGB accent light on very dimly. And if the media is paused or stopped, increase the brightness of the RGB lighting so you can see where you’re walking, and if it stays paused or stopped for more than 10 minutes, turn the main lights back to whatever state they were in before media playback started.

    No dashboards though, since the goal is essentially that you don’t have to think about what is going on, because it should Just Work™ and never be something you have to deal with.

    …though, really, I’d say we’re at like 80% successful with that.

    For manual interactions I’ve got a bunch of NFC tags in various places that trigger the appropriate automation in case you want to do it by hand or it fails to do the needful, plus the app is configured to allow manual control of any device and to trigger specific automations.
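
    And since I mentioned it above, the bathroom fan rule really is about as dumb as it sounds; a sketch of the logic (the real thing lives in Home Assistant, and the sensor values and 5% threshold are just my numbers):

    ```python
    # Sketch of the bathroom exhaust fan rule mentioned above. The real thing
    # is a Home Assistant automation; values here are illustrative only.

    HUMIDITY_DELTA = 5.0  # percent above the hallway reading before the fan runs

    def bathroom_fan_should_run(bathroom_humidity: float,
                                hallway_humidity: float,
                                bathroom_occupied: bool) -> bool:
        """Run the fan only when the room is empty and meaningfully more humid
        than the hallway baseline; it switches back off once the delta closes."""
        if bathroom_occupied:
            return False
        return bathroom_humidity - hallway_humidity > HUMIDITY_DELTA

    # e.g. after a shower: 78% in the bathroom vs 45% in the hallway, nobody inside
    print(bathroom_fan_should_run(78.0, 45.0, False))  # True -> fan on
    ```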





  • AI model of that type is safe to deploy anywhere

    Yeah, I think you’ve made a mistake in thinking that this is going to be usable as generative AI.

    I’d bet $5 this is just a fancy machine-learning classifier that takes a submitted image, does the usual ML nonsense with it, and returns a ‘there is a high probability this is an illicit image of a child’, and not something you could use to actually generate CSAM with.

    You want something that’s capable of assessing the similarities between a submitted image and a group of known bad images, but that doesn’t mean the dataset is in any way usable for anything other than that one specific task (rough sketch of what I mean at the bottom of this comment). AI/ML in use cases like this is super broad and has been a thing for decades, long before ‘AI == generative AI’ became what everyone assumes.

    But, in any case: the PhotoDNA database is in one place, and access to it is gated by the merit of, uh, having lots of money?

    And of course, any ‘unscrupulous engineer’ who has plans to do anything with this is probably not a complete idiot, even if they’re a pedo: these systems are going to have shockingly good access controls and logging, and, if you’re in the US, taking this database and generating even a couple of CSAM images with it means, for most people, spending the rest of their life in prison.

    Feds don’t fuck around with creation or distribution charges.
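
    To make the classifier-vs-generator distinction concrete: a detection-only model is basically an encoder with a score bolted on. A minimal sketch (illustrative only, assuming torch/torchvision are installed, and absolutely not whatever they actually built):

    ```python
    # Sketch of a detection-only model: image in, single probability out.
    # Purely illustrative (assumes torch/torchvision), not the actual tool.
    import torch
    import torch.nn as nn
    from torchvision import models

    class IllicitImageDetector(nn.Module):
        def __init__(self):
            super().__init__()
            backbone = models.resnet18(weights=None)              # feature extractor
            backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # single score head
            self.backbone = backbone

        def forward(self, images: torch.Tensor) -> torch.Tensor:
            # Returns P(illicit) per image; nothing in here produces images.
            return torch.sigmoid(self.backbone(images))

    detector = IllicitImageDetector()
    dummy_batch = torch.randn(1, 3, 224, 224)  # stand-in for a submitted image
    print(detector(dummy_batch))               # e.g. tensor([[0.49]])
    ```

    There’s no decoder anywhere in that, so even with the weights in hand there’s nothing in there that produces images.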


  • comparative scale of the content involved

    PhotoDNA is based on image hashes, as well as some magic that works on partial hashes: resizing the image, or changing the focus point, or fiddling with the color depth or whatever won’t break a PhotoDNA identification.

    But, of course, that means that for PhotoDNA to be useful, the training set is literally ‘every CSAM image in existence’, so it’s not really like you’re training on a lot less data than an AI model would want or need.

    The big safeguard, such as it is, is that you basically only query an API with an image and it tells you if PhotoDNA has it in the database, so there’s no chance of the training data being shared.

    Of course, there’s also no reason you can’t do that with an AI model, either, and I’d be shocked if that’s not exactly how they’ve configured it.
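
    PhotoDNA itself is proprietary, but the general trick is the same family as any perceptual hash: small edits to the image barely move the hash. A toy illustration with the imagehash library (which is NOT PhotoDNA, just the same general idea):

    ```python
    # Toy illustration of perceptual hashing (imagehash's pHash), NOT PhotoDNA:
    # the point is that resizing/recompressing barely changes the hash, so a
    # lookup service only ever compares hashes, never shares the images.
    from PIL import Image
    import imagehash

    original = Image.open("example.jpg")  # placeholder file name
    resized = original.resize((original.width // 2, original.height // 2))

    h1 = imagehash.phash(original)
    h2 = imagehash.phash(resized)

    print(h1, h2, h1 - h2)  # the difference (Hamming distance) stays small
    ```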


  • first time law enforcement are sharing actual csam with a technology company

    It’s very much not: PhotoDNA, which is/was the gold standard for content identification, is a collaboration between a whole bunch of LEOs and Microsoft. The end user is only going to get a ‘yes/no idea’ result on a matched hash, but that database was built on real content working with Microsoft.

    Disclaimer: below is my experience dealing with this shit from ~2015-2020, so ymmv, take it with some salt, etc.

    Law enforcement is rarely the first responder to these issues, either: in the US, at least, reports go to the hosting/service provider first for validation and THEN to NCMEC and LEOs, if the hosting provider confirms what the content is. Even reports that come from NCMEC to the provider aren’t being handled by law enforcement as the first step, usually.

    And as for validating reports, that’s done by looking at the content without all the ‘access controls and safeguards’ you think there are, other than a very thin layer of CYA on the part of the company involved. You get a report, and once PhotoDNA says ‘no fucking clue, you figure it out’ (which, IME, was basically 90% of the time), a human is going to look at it, make a determination, and then file a report with NCMEC or whatever, if it turns out to be CSAM.

    Frankly, after having done that for far too fucking long, if this AI tool can reduce the amount of horrible shit someone doing the reviews has to look at, I’m 100% for it.

    CSAM is (grossly) a big business, and the ‘new content’ funnel is fucking enormous. That’s why an extremely delayed and reactive thing like PhotoDNA isn’t all that effective: there’s a fuckload of children being abused and a fuckload of abusers escaping being caught, simply because there’s too much shit to look at and handle effectively, so any response to anything is super, super slow.

    This looks like a solution to make it so fewer people have to be involved in validation, and it could be damn near instant in responding to suspected material that does need review. That would at least do a good job of pushing the shit out of easy (ier?) availability and out of more public spaces, which, honestly, is probably the best thing that’s going to be managed unless the countries producing this shit start caring and going after the producers, which I’m not holding my breath on.
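
    The triage flow I’m describing looks, very roughly, like this (a sketch of the process only; every function is a placeholder stub, not any provider’s actual pipeline):

    ```python
    # Very rough sketch of the report-triage flow described above; every
    # function here is a placeholder stub, not any provider's real pipeline.

    def photodna_match(image) -> bool:         # stub: hash lookup against known content
        return False

    def classifier_score(image) -> float:      # stub: where an AI detector would slot in
        return 0.7

    def human_review_confirms(image) -> bool:  # stub: the part worth shrinking
        return True

    def handle_report(image) -> str:
        if photodna_match(image):
            return "reported"                  # known hash: file with NCMEC immediately
        # ~90% of the time (in my experience) the hash lookup can't help, and a
        # human has to make the call; an up-front classifier cuts down how much
        # of that a person actually has to see.
        if classifier_score(image) < 0.5:      # threshold is made up
            return "cleared"
        return "reported" if human_review_confirms(image) else "cleared"

    print(handle_report("submitted-image"))    # -> "reported"
    ```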







  • That’s a wee bit revisionist: Zen/Zen+/Zen2 were not especially performant and Intel still ran circles around them with Coffee Lake chips, though in fairness that was probably because Zen forced Intel to stuff more cores onto them.

    From Zen3 onward, though, yeah, Intel has been firmly in 2nd place, or 1st place with asterisks.

    But the last 18 months have had them fucking up in such a way that if you told me they were doing it on purpose, I wouldn’t really doubt it.

    It’s not so much failing to execute well-conceived plans as it is shipping meltingly hot, sub-par-performing chips that turned out to self-immolate, combined with also giving up on being their own fab, and THEN torching the relationship with TSMC before launching the first products TSMC is fabbing for them.

    You could write the story as a malicious, evil CEO wanting to destroy the company and it’d read much the same as what’s actually happening right now (not that I think Patty G is doing that, mind you).


  • Yeah, but it’s priced the same as a cheap laptop or desktop, which of course doesn’t then require you to pay a monthly fee to actually use the stupid thing.

    It feels like another ‘Microsoft asked Microsoft what Microsoft management would buy, and came up with this’ product, rather than one that actually has a substantial market, especially when you’re trying to sell a $350 box that costs you $x a month to actually use as a ‘business solution’.

    This would probably be a cool product at $0 with-a-required-contract-with-Azure, but at $350… meh, I suspect it’s a hard sell given the VDI stuff on Azure isn’t cheap.