

Horses seem to be fearful of just about everything. I wonder how they distinguish between the scent of their own fear and the fear of others.
Freedom is the right to tell people what they do not want to hear.


New HW4 Teslas do in fact include a front-facing radar, but it’s currently only used for collecting data - not for FSD.
Still, gotta give them credit for getting by with vision-only quite well. I don’t personally see any practical reason why you absolutely must include LiDAR. We already know driving relatively safely with vision only is possible - all the best drivers in the world do it.


And they always will. You need to look at the big picture here, not individual cases. If we replaced every single car on US roads with one driven by an AI proven to be ten times safer than a human driver, that would still mean around 4,000 people killed by them each year. That, however, doesn’t mean we should go back to human drivers and 40,000 deaths annually.


Broadly speaking, an AI driver getting stumped means it’s stuck in the middle of the road - while a human driver getting stumped means plowing into a semi truck.
I’d rather be inconvenienced than killed. And from what I’ve seen, even our current AI drivers are already statistically safer than the average human driver - and they’re only going to keep getting better.
They’ll never be flawless though. Nothing is.


If it’s connected to the internet, it can be hacked.


I think the interventions here are more like: “that’s a trash can someone pushed onto the road - let me help you around it” rather than: “let me drive you all the way to your destination.”
It’s usually not the genuinely hard stuff that stumps AI drivers - it’s the really stupid, obvious things they’ve simply never encountered in their training data before.


I don’t really understand what you mean by suggesting that I have mental imagery I’m unaware of.
I haven’t ever claimed such a thing. The point was that when you ask someone about the state of their mind, you’re then relying on their report being accurate - with no good way to verify it.
Although in this case, people pointed out that researchers also monitored the visual regions of the brain lighting up.
If someone asks me to visualize an object, I can easily do it. If they then ask whether I can literally see it, I’d say no - but also kind of yes. It’s not a photograph I’m viewing in my mind, but there’s definitely something there. Both yes and no would be truthful answers to “can I see it?”
Still, there’s always a chance that if they could peek inside my mind, they’d find out the thing I report seeing isn’t actually there - at least not when compared to someone who really does see it.


I’ve seen a few videos of this thing in action, and while I like the concept - especially that the same device can also mow the lawn with the lawnmower attachment - it’s still quite painful to watch it work.
Especially with snow blowing, it’s just so disorganized: driving all over the place and making quite the mess. If I’m dropping 5k on an automatic snow blower, I don’t want to have to clean up after it.


The issue with all these studies about people’s subjective experiences is that they rely on self-reporting. Just because someone says they have no mental imagery doesn’t mean they actually don’t. They may simply be unaware of it. After all, how many people actually spend any significant amount of time learning to pay attention to their minds? The vast majority don’t.
It’s a bit like asking people whether they have an optic blind spot in their vision without teaching them how to look for it. Virtually everyone would say that they don’t, and they’d all be wrong.


They’re leaving like redditors to Lemmy. There’s dozens of them!


Nobody knows what it actually takes to reach AGI, so nobody knows whether a certain system has enough compute and context size to get there.
For all we know, it could turn out way simpler than anyone thought - or the exact opposite.
My point still stands: you (or Cameron) couldn’t possibly know with absolute certainty.
I’d have zero issue with the claim if you’d included even a shred of humility and acknowledged you might be wrong. You made an absolute statement instead. That I disagree with.


It’s perfectly valid to discuss the dangers of AGI whether LLMs are the path there or not. I’ve been concerned about AGI and ASI for far longer than I’ve even known about LLMs, and people were worried about exactly the same stuff back then as they are now.
This is precisely the kind of threat you should try to find a solution for before we actually reach AGI - because once we do, it’s way, way too late.
Also:
There is factually 0 chance we’ll reach AGI with the current brand of technology.
You couldn’t possibly know that with absolute certainty.


If anyone who wasn’t the head of an LLM company spouted this drivel, they’d be locked away in a padded room and Axios would rightly be called out for exacerbating the mental health crisis of a paranoid schizophrenic.
Like Eliezer Yudkowsky, Roman Yampolskiy, Stuart Russell, Nick Bostrom, Yoshua Bengio, Geoffrey Hinton, Max Tegmark and Toby Ord?


This comes across more as a warning than a sales pitch.
If only I could convince myself to be as dismissive about the threats of AGI as the average user here seems to be…


…and anxious!


It already had a new owner, which is why I stopped using it years ago.


This is quite a toxic attitude. You should focus on what’s good for you - not on what’s bad for someone else.


Interestingly enough, the vet has so far failed to cure any of the sick gerbils I’ve taken there, and they’ve all died. Technically not a mouse but a mole, but still.


AI has existed since 1956.
My unpopular opinion is that social media is simply inherently incompatible with human nature. I don’t think it’s anyone’s fault per se. It’s like heroin in the sense that it doesn’t matter how you distribute it - it’s going to cause harm because hijacking our reward systems is the reason we use it in the first place. If you modify it so all that goes away, then what you’re left with is water - and nobody wants that.
I don’t know what the solution is, though. I don’t think banning it is a solution, but I’m also not sure how else to deal with its harmfulness. It’s not just kids it’s bad for - it’s everyone. And yeah, there are degrees to it - perhaps Lemmy is objectively better than an algorithm-based message board like Reddit, but something being better doesn’t make it good. A non-toxic heroin that you can’t OD on is also better than the alternative, but it’s still going to be harmful. It’s an arbitrary line we collectively just decide to draw somewhere - even though you could argue infinitely about nudging it one way or the other.