AI doom
It’s no secret: I dislike AI. There is nothing of intelligence in it. It’s just an unethical application of gigantic statistical models that reinforces bias (racism, among other things) and presents completely wrong answers as if they were accurate.
We are doomed. Literally billions of dollars are spent burning the planet so that a few fascists can dehumanize work, art, etc. by forcing this onto us.
Gizmodo reports After AI Led to Layoffs, Coders Are Being Hired to Fix ‘Vibe-Coded’ Screwups:
Most of all, it [Generative “AI”] seems to have disrupted the tech industry itself, where what was once a profitable career (software development) increasingly seems to be more of a precarious one, thanks to the rise of so-called “vibe coding”—a form of AI-assisted software development that requires less experience and more automation.
It has worked so well that:
404 Media writes about the rise of an entire new class of coders, dubbed the “vibe coding cleanup specialists,” who can swoop in to fix the problems that AI-generated code creates for companies.
The CEOs who pretend a computer can replace workers don’t want to admit that computers can replace being in the office. And guess which of the two is proven to work? The latter.
Maybe CEOs should be replaced by computers. It would cost so much less and be just as bad.
Then there is privacy.
Futurism tells us that OpenAI Says It’s Scanning Users’ ChatGPT Conversations and Reporting Content to the Police:
In a new blog post admitting certain failures amid its users’ mental health crises, OpenAI also quietly disclosed that it’s now scanning users’ messages for certain types of harmful content, escalating particularly worrying content to human staff for review — and, in some cases, reporting it to the cops.
Whatever the reason, the fact that they report users means the chatbot can’t be trusted. Thought crime is the phrase here, with global surveillance on top.
But then what if you see a doctor and they use one of these chatbots?
Pivot to AI tells us ‘Optional’ AI scribe is mandatory if you want to see the doctor:
Australian AI ethics lawyer Kobi Leins took her child to a medical specialist. The specialist asked to use an AI transcription system for the appointment.
Leins had actually reviewed this particular system for privacy and security. She knew it was trash and didn’t want her kid’s data “anywhere near” this thing.
Poorly reviewed technology, in use for a very sensitive case. So far, medical records are confidential. But not anymore if you violate your patients’ privacy by sending it all to one of these buggy systems. Not only does it violate privacy, chances are the output will be incorrect too. It seems that even the Australian regulator says these “scribes” are not approved. But the lawmakers? They are probably dreaming of it.
We are doomed.