

Do you have to ask? They’re trying to kill you because you’re insufficiently productive. And by that, I mean that you’re not generating enough additional wealth for them.



I’ve never had a horse in this race, and I never will - but I’m sure this will work out well for those who do. /s
Haha! Yeah, sure it is.


You may be right, but I’d still think the Captain of a vessel like this should be capable of metric-to-imperial conversion as a prerequisite of holding the position. While Americans definitely should grow up and adopt a proper, sensible, modern system of measurement, it really shouldn’t be beyond a person captaining a ship to employ a calculator.


Surely the Captain knew - or should have known - the clearance height of his own damn vessel. It’s bad enough when the average trucker misses the local signage and shears off the top of their truck on a low underpass or tunnel, but passing under the Brooklyn Bridge isn’t a spur-of-the-moment decision, and this is just inexcusably incompetent.


Yeah. Let’s not get started on fucking Oracle. We’ll be here all day. Or year, possibly.


Kudos! I wish you the best of luck and hope for your success.


That seems unlikely to persuade those people to continue using VMware, but good luck with that business strategy, Broadcom.


If you have to supply your users with AI support to figure out how to configure your OS, you might be doing something wrong.


Almost certainly not, no. Evolution may work faster than once thought, but not that fast. The problem is that societal, and in particular, technological development is now vastly outstripping our ability to adapt. It’s not that people are getting dumber per se - it’s that they’re having to deal with vastly more stuff. All. The. Time. For example, consider the world as it was a scant century ago - virtually nothing in evolutionary terms. A person did not have to cope with what was going on on the other side of the planet, and probably wouldn’t even know for months if ever. Now? If an earthquake hits Paraguay, you’ll be aware in minutes.
And you’ll be expected to care.
Edit: Apologies. I wrote this comment as you were editing yours. It’s quite different now, but you know what you wrote previously, so I trust you’ll be able to interpret my response correctly.


Thank you. I appreciate you saying so.
The thing about LLMs in particular is that - when used like this - they constitute one such grave positive feedback loop. I have no problem in principle with machine learning. It can be a great tool for illuminating otherwise completely opaque relationships in large scientific datasets, for example, but a polynomial binary space partitioning of a hyper-dimensional phase space is just a statistical knowledge model. It does not have opinions. All it can do is codify what appears to be the consensus of the input it’s given. Even assuming - which may well be far too generous - that the input is truly unbiased, at best all it’ll tell you is what a bunch of morons think is the truth. At worst, it’ll just tell you what you expect to hear. It’s what everybody else is already saying, after all.
And when what people think is the truth and what they want to hear are both nuts, this kind of LLM-echo chamber suddenly becomes unfathomably dangerous.


Of course, that has always been true. What concerns me now is the proportion of useful to useless people. Most societies are - while cybernetically complex - rather resilient. Network effects and self-organization can route around and compensate for a lot of damage, but there comes a point where having a few brilliant minds in the midst of a bunch of atavistic confused panicking knuckle-draggers just isn’t going to be enough to avoid cascading failure. I’m seeing a lot of positive feedback loops emerging, and I don’t like it.
As they say about collapsing systems: First slowly, then suddenly very, very quickly.


Our species really isn’t smart enough to live, is it?


I’m all for draining what little brains remain in the carcass. It’s not like they’re using them anyway.
Oh? Just what I was looking for! An opportunity to be manipulated more effectively by my owners.
You can change or remove this any time…
Haha, cute.


Those are some excellent points. The root cause seems to me to be the otherwise generally positive human capacity for pack-bonding. There are people who can develop affection for their favorite toaster, let alone for something that can trivially pass a Turing test.
This… is going to become a serious issue, isn’t it?


Look, I realize the frontal lobes of the average fifteen-year-old aren’t fully developed, I don’t want to be insensitive, and I fully support the lawsuit - there must be accountability for what any entity, corporate or otherwise, opts to publish, especially for direct user interaction - but if a person reenacts Romeo and Juliet with a goddamn AI chatbot and a gun, something else is seriously wrong.
I really don’t, but thanks for asking I guess.