Health Care AI, Intended To Save Money, Turns Out To Require a Lot of Expensive Humans.

submitted by
[deleted]

kffhealthnews.org/news/article/artificial-intel…

10 Comments

Oh hey, this same quote is relevant yet again:

In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology *more* expensive, in order to make it more accurate.

But that’s not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do *more*, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is “companies will buy our products so they can do more with less.” It’s *not* “business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.”

Cory Doctorow: What Kind of Bubble is AI?

Just binged almost all of Doctorow's work. My god, the man is a genius. Even predicted the killing with “Radicalized”. Read that y'all. It was written in 2019.

You know, it's funny, this keeps happening with every single fucking AI thing they produce. It always still needs humans fixing its mistakes because it's just not reliable enough.

I think it's doing a lot less reducing headcount and a lot more making people specialize in what I would call "bullshit," to be able to fix mistakes made by an AI quickly and efficiently.

Maybe, just maybe, if they have to pay people to fix the AI's work, they could cut out the middleman and just pay the people to do the fucking job to begin with. No, what am I saying, that's just ridiculous! /s

Ah, but you see, the AI let us reduce headcount for full-time employees, reducing the budget for full-time salaries.

Now we just spend twice as much on contractors and consultants, but that’s a different budget, so it’s not my problem.

There will be fake AI services actually performed by people. Mechanical Turk 2025 incoming

There already are!

https://www.livemint.com/companies/news/amazons-ai-based-just-walk-out-checkout-tech-was-powered-by-1000-indian-workers-manually-11712196827721.html

That's what Amazon's entire Just Walk Out checkout tech was. It was claimed to be all "automated," and it turns out that in this case, when people say "AI" it doesn't mean "artificial intelligence," it means "actually Indians."

Yeah…what we have today as “AI” makes a ton of mistakes. Well, maybe not a ton, but enough that it cannot be relied on without a human to correct it.

I use it as a foundation at work.

ChatGPT, write me a script that does this, this, and that.

Often, like 98% of the time, I won't get what I asked for, or I'll get something it interpreted incorrectly. It's common sense to me, though maybe not to others, not to blindly run whatever it spits out. Review its output, then test it somewhere: I often recreate a similar file structure somewhere else and test there, and only after a few rounds of testing, reviewing, and modifying do I feel comfortable running it for real (see the sketch after this comment).

But I don’t think I’ll ever not double check whatever any type of AI spits out as a response to me for whatever I’ve asked. Humans should always have the last word before action, especially when it comes to healthcare.
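A minimal sketch of that "test in a throwaway copy first" workflow, in Python. The script name and directory paths are invented for illustration, not anything from the comment above:

```python
# Replicate the real directory structure in a temp dir, run the
# AI-generated script against the copy, and inspect the result before
# touching anything real. Script name and paths are hypothetical.
import shutil
import subprocess
import sys
import tempfile
from pathlib import Path

def run_in_sandbox(generated_script: Path, real_dir: Path) -> int:
    """Run an untrusted script against a throwaway copy of real_dir."""
    with tempfile.TemporaryDirectory() as tmp:
        sandbox = Path(tmp) / real_dir.name
        shutil.copytree(real_dir, sandbox)  # replicate the file structure
        result = subprocess.run(
            [sys.executable, str(generated_script.resolve())],
            cwd=sandbox,  # the script only ever sees the copy
            capture_output=True,
            text=True,
        )
        print(result.stdout)
        print(result.stderr, file=sys.stderr)
        return result.returncode

if __name__ == "__main__":
    # Review the generated script by eye first, then trial-run it here.
    exit_code = run_in_sandbox(Path("chatgpt_script.py"), Path("./project"))
    print(f"exit code: {exit_code}")
```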

I wouldn't go so far as to say I have the opposite experience, but it's been good for me when I treat it like a junior developer. Give it free rein to come up with the solution and it'll totally miss the point; give it direction on a small piece of functionality with clear inputs and outputs and it'll get 90% of the way there (something like the example below).
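For what "clear inputs and outputs" might look like in practice, here's a hypothetical example of a tightly scoped task: one small function plus a test that doubles as the spec. Every name and value here is made up for illustration.

```python
# A well-scoped task to hand an LLM: explicit input type, output type,
# and a concrete test case, so "did it work?" is unambiguous.
def dedupe_keep_order(items: list[str]) -> list[str]:
    """Return items with duplicates removed, preserving first-seen order."""
    seen: set[str] = set()
    result: list[str] = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# The test is the spec: a concrete input/output pair to check against.
assert dedupe_keep_order(["a", "b", "a", "c", "b"]) == ["a", "b", "c"]
```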

So far I think AI is a good way to reduce mundane work, but coming up with ideas and concepts on its own is a bridge too far. An example of this is a story I read about a kid committing suicide because of an AI-driven fantasy. It was so focused on maintaining the fantasy that it couldn't step back and say, "Whoa. This is a human being I'm talking to and they're talking about real self-harm. I think it's time to drop the act." This will result in people being treated as financial line items (more so) and new avenues for cyber attacks.

Comments from other communities

“Even in the best case, the models had a 35% error rate,” said Stanford’s Shah

So, when the AI makes a critical error and you die, who do you sue for malpractice?

The doctor for not catching the error? The hospital for selecting the AI that made a mistake? The AI company that made the buggy slop?

(Kidding, I know the real answer is that you're already dead and your family will get a coupon good for $3.00 off a sandwich at the hospital cafeteria.)

"AIs are people" will probably be the next conservative rallying cry. That will shield them from all legal repercussions aside from wrist-slaps just like corporations in general.

Cool, so they are entitled to wages and labor protections, then.

"Not like that!"

So, when the AI makes a critical error and you die, who do you sue for malpractice?

Well, *see*, that is the technology: it's a legal lockpick for mass murderers to escape the consequences of knowingly condemning tens of thousands of innocent people to death for a pathetic hoarding of wealth.

Guess what!

When accuracy matters, the labor cost of babysitting the LLM's output is the same as doing the work yourself. That's before you even consider having to unfuck it after it paints itself into a corner and ruins its own model.

I hope those employees work really really slowly.

I feel like any AI tool that's being sold as saving you money just won't do that. *Some* of the ones that sell improved detection rates might.

AI that works as a tool for an existing or new professional to augment their abilities works as well as any other tool. An ultrasound doesn't save you money except in the abstract sense of being more freely usable than an X-ray, allowing for more checks with less equipment.
A tool that highlights concerning areas on a mammogram isn't replacing a person any more than the existing tools that highlight concerning heart rhythms are.

Trying to get LLMs to replace people, particularly when it comes to explaining the content of a potentially technical medical discussion, is just not going to be reliable.