Because AI cannot possibly generate faces.


I would replace “dramatic” with “predictable”. Everybody knew it was bullshit. It was like tulip mania, but without actual tulips.


That’s exactly what I’m saying: more options to be safer.
Choosing between no men or all men is certainly better than having no choice at all, but being able to filter out just the creeps would be even better.
And yes, that does mean you need to detect who the creeps are, but sexual harassment is already happening, and it would be good to use that information to stop it from happening.


I think this rule is a massive improvement, but I also think it’s very restrictive. Women can only choose to avoid all men, rather than just the creeps. So female drivers who need more passengers might feel forced to accept all men, and female passengers who can’t find a ride might be forced to accept a ride from any male driver, who might still be a creep.
I think it’s better to weed out the creeps. I think that’s ultimately better for everybody. Make it harder for creeps to get a ride or passenger, instead of making it harder for women.
Maybe both should be an option.


Quite the opposite.


I wouldn’t mind if they’d implemented this the opposite way: if a woman, driver or passenger, encounters a creep, they could report that in the app and then the creep would automatically be banned from riding with women. That way decent men aren’t affected and women keep more choice in drivers/passengers, and only the creeps are singled out.


I’m pretty sure it’s much, much worse than you think. In fact, I’m fairly sure it’s much worse than I think. Men don’t experience it, and women are reluctant to talk about it, because some men react aggressively to claims that men react aggressively.


Yeah, this does not sound like it’s going to make our children any safer.


Exactly. We are extremely social animals, hardwired to recognise ourselves in things around us, which I’m sure is super useful and vital for a tribe of hunter-gatherers living in a hostile environment. But it means that now we recognise faces and emotions in power outlets and lawn chairs. It’s really not surprising we see intelligence and awareness in LLMs, because we recognise that stuff in everything. We are really poor at the level of critical thought required to deal with this responsibly.


An AI response should always be treated as a suggestion, not an answer
Exactly. An AI response can be a great way to get started on a topic you know little about, but it’s never a definitive answer. You have to verify whether it’s actually true. Whether it works. Never trust it blindly.


Especially rude if you want to charge money for it. If your boss wanted an AI answer, they would have asked an AI. You don’t need an expensive consulting company for that.


I still have some DDR3, but no motherboard for it. We need DDR4, but I don’t suppose anyone will want to trade in that direction.


Why are they collecting this data in the first place? You can’t mishandle data you don’t have. The fact that remote access to video is even possible is very alarming.


Not for you perhaps, but for a lot of people that’s important. There are way too many people who falsely think the Republicans are the Christian party, and that’s unfortunately helped by Christians who separate their faith from politics. It’s vital that more Christians speak out about the hypocrisy and perversion of the “Religious Right”, or uninformed people will continue to be led astray by them.
It’s not good for you either if they continue to vote Republican, so give him this opportunity to set them straight.


I think he’s cancelled in May or thereabouts. It’s a very slow cancellation.


Exactly. They’re trained to produce plausible answers, not correct ones. Sometimes they also happen to be correct, which is great, but you can never trust them.


I’ve started using it as an interactive rubber duck. When I’ve got a problem, I explain it to the AI, after which it gives a response that I ignore because after explaining it, I figured it out myself.
AI has been very helpful for finding my way around Azure deploy problems, though. And other complex configuration issues (I was missing a certificate to use az login). I fixed problems I probably couldn’t have solved without it.
But I’ve lost a lot of time trying to get it to solve complex coding problems. It makes a heroic effort trying to combine aspects of known patterns and algorithms into something resembling a solution, and it can “reason” about how it should work, but it doesn’t really understand what it’s doing.


Use open source maintainers as free volunteers to check whether your AI coding experiment works.


It sounds crazy, but it can have an impact. It might follow some coding standards it wouldn’t otherwise.
But you don’t really know. You can also explicitly tell it which coding standards to follow and it still won’t.
All code needs to be verified by a human. If you can tell it’s AI, it should be rejected. Unless it’s a vibe coding project I suppose. They have no standards.


It already has fields for personal information, though, and they’re every bit as sensitive as your birthdate: realName, emailAddress, location, and timezone are already in there. The important part is that they’re all optional; you don’t have to fill them in at all, or you can fill them in with fake data. The system still serves you, not some outside party.
But the timing of it does have a lot of people freaking out.
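
A minimal sketch of what that profile record might look like, written as a TypeScript interface purely for illustration. The field names realName, emailAddress, location, and timezone come from the comment above; the interface name and the birthdate field are my own assumptions, added only to show that a new field could be made just as optional as the existing ones:

```typescript
// Hypothetical shape of the profile record described above.
// The "?" marks every field as optional, so the software must
// keep working when none of them are filled in.
interface UserProfile {
  realName?: string;     // may be left empty, or filled with fake data
  emailAddress?: string;
  location?: string;
  timezone?: string;
  birthdate?: string;    // a newly added field would be equally optional
}

// An empty profile is perfectly valid: the system serves you either way.
const minimal: UserProfile = {};

// Or fill in only what you're comfortable sharing (real or not).
const partial: UserProfile = { realName: "Jane Doe", timezone: "UTC" };

if (Object.keys(minimal).length !== 0) throw new Error("minimal should be empty");
if (Object.keys(partial).length !== 2) throw new Error("partial should have 2 fields");
```

The point the types make explicit: because nothing is required, a user who provides no data at all still gets a working account, which is what keeps the record serving the user rather than an outside party.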