

distressed squidward noises


distressed squidward noises


omfg, BlueSky continues to just knock it out of the park.
Incredible.
It was less than a week or two ago that I was trying to explain that BlueSky is just rainbow capitalism, that it is centralized, and that it will censor you, just give it time.
A year ago some goober was extremely convinced that it was totally possible and not actually that difficult to set up your own Relay, so it actually is federated and decentralized the same way Lemmy or Mastodon are!
A year later, nobody has gotten around to doing it.
And now it has a hallucinatory theft powered autocomplete machine… that… ostensibly exists… to… manage their feeds.
Because apparently that is so complex or difficult that it… needs an assistant?
And this was made by a former member of the board who apparently just left so that he could focus on his side project, which is totally different from BlueSky, but also only works with BlueSky.
Just chef’s kiss, mwah.
We need a ‘Fell for it Again’ meme variant for turbolibs.
There’s 0 difference between TwitterBrains and RedditMods.
Well ok, TwitterBrains are better at scamming people, at least they’re getting paid.
… I miss Tom.
Just bring back MySpace ‘Classic’.


I was talking about the ‘and then sue them’ part.
Suing someone… is an ‘offensive’ legal action, it’s something you initiate, not ‘defend’ against.


Right, because everyone can afford those legal fees.
And it’s sure to be worth the time and money, netted out.


My entire point is that you are just overgeneralizing, in general, and saying rather silly things.


Completely agree with all of this.
Especially the last part.
We don’t even understand our brains, our own minds, we still can’t fully agree on what consciousness or sentience… even… are.
We’re certainly making progress on those fronts… but we are very, very far from the finish line.
That finish line would be like… we solved Psychology, we solved Neuroscience, we have a Grand Unified Theory of Mind, etc.


Introverts exist, and are… very often fine with solitude, generally preferring it over socializing.
But they are generally fine at participating in society and living normal lives.
Healthy people… do need doctors… and therapists.
A person can outwardly appear to be healthy… and actually not be.
Preventative medicine, regular checkups, your body changes as you grow, and habits you develop in your youth may need significant reworking.
Therapy can give otherwise healthy people a method of exploring their inner selves more fully or more consistently… they can teach them frameworks for understanding and dealing with other kinds of people, for being better able to deal with kinds of trauma they have not yet experienced.
Also… same with physical health… people with some nascent mental problems or patterns forming… probably won’t be obvious to a non-specialist, until it gets more severe.


Here is a way of describing what I see as ‘the problem’:
An LLM cannot forget things in its base training data set.
Its permanent memory… is totally permanent.
And this memory has a bunch of wrong ideas, a bunch of nonsensical associations, a bunch of false facts, a bunch of meaningless gibberish.
It has no way of evaluating its own knowledge set for consistency, coherence, and stability.
It literally cannot learn and grow, because it cannot realize why it made mistakes; it cannot discard or amend, in a permanent way, concepts that are incoherent, or faulty ways of reasoning about (associating) things.
Seriously, ask an LLM a trick question, then tell it it was wrong, explain the correct answer, then ask it to determine why it was wrong.
Then give it another similar category of trick question, but that is specifically different, repeat.
The closer you try to get it toward reworking a flawed fundamental axiom it holds, the closer it gets to responding in totally paradoxical, illogical gibberish, or just getting stuck in some kind of repetitive loop.
… Learning is as much building new ideas and experiences, as it is reevaluating your old ideas and experiences, and discarding concepts that are wrong or insufficient.
Biological brains have neuroplasticity.
So far, silicon ones do not.
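To make the frozen-weights point above concrete, here is a toy sketch, not a real model or library. `FrozenLLM` and its wrong baked-in ‘fact’ are entirely made up for illustration. Corrections supplied in the prompt only live in that one context window; the weights themselves never change.

```python
class FrozenLLM:
    """Toy stand-in for an LLM whose 'knowledge' is frozen at training time."""

    def __init__(self):
        # A wrong 'fact' baked into the fixed weights -- analogous to bad
        # training data. Nothing at inference time can rewrite this.
        self.weights = {"capital of Australia": "Sydney"}

    def answer(self, question, context=()):
        # Corrections given in-context override the weights for THIS call only.
        for fact in context:
            if question in fact:
                return fact[question]
        return self.weights.get(question, "unknown")


llm = FrozenLLM()
print(llm.answer("capital of Australia"))  # wrong answer from frozen weights
print(llm.answer("capital of Australia",
                 context=[{"capital of Australia": "Canberra"}]))  # corrected in-context
print(llm.answer("capital of Australia"))  # same wrong answer again -- nothing was learned
```

The third call is the whole point: once the correcting context is gone, the model falls straight back to its baked-in error, which is what ‘no neuroplasticity’ means here.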


… ‘Banksy goes to Space’ …
That would make for quite a story.


Well, at least we’ll always have Sinatra.
theoretically


Elon Musk is such a goddamned literal supervillain that he managed to make the theme of Firefly wrong.
Apparently, they can take the sky from you.


I mean, do those headcount numbers count contractors?
V-dashes? A-dashes? Etc?
The majority of MSFT’s workforce has been temp contractors for a very long time, and they do everything they can to have as few actual employees as possible.
If your source is saying the average tenure is 5.3 years, no, no it is not counting contractors.
Beyond that, unless you have an actual source for the culture shift beyond ‘you think so’, I’m going with no, everything I’ve described has gotten worse.
That’s why I’ve, for years, been able to predict moves that MSFT would likely make, that people at the time think are ludicrous or incredibly pessimistic, worst case scenarios… and then they happen.
As an example, I was saying MSFT probably just set Xbox with impossibly unrealistic profit targets for the Xbox/Gaming division, to more or less intentionally downsize it and then basically kill it off, over time, while acting like that’s not what they were doing… I said that a good deal of time before the news broke, that is exactly what happened.
They are a gormless faceless machine that is unimaginably high on their own supply.
Given the amount of outright caste-based racism I saw amongst Indian actual employees vs Indian contractors when I was there, where HR told me that actually ‘that’s just their culture’ and that I was being racist for pointing out abusive managers literally screaming at their lower-caste underlings, whom they had by their H-1B balls…
…yeah, I’m willing to bet it is now even worse.
I’ve also worked at other large corps, at a place or two in fairly high responsibility positions.
I’ve met a fair deal of the Seattle/Bellevue/Redmond upper management of various tech and other firms, and the thing they all have in common is an unimaginably inflated ego and elitist attitude, which propagates downward via an essentially religious level of respect for people in higher positions… it’s just expected to be shown by anyone under them.
They really are like the corpos from Cyberpunk 2077, they just don’t have the nakedly open bloodlust most of those corpos do.


Force of habit, shorter to type, everyone knows what I mean.
EDIT:
It took me an embarrassingly long amount of time to realize that does not work with 3RR.
That was the internal code, in a fair number of processes, for referring to ‘The Red Ring of Death’, the 3 red-lit segments of a 360 that mean basically a 95% chance it’s gotta be RMA’d, likely just wholly replaced.


They aren’t capable of doing that.
Source on that is me, I worked for MSFT during the rollout of Windows 8 and the 360 red ring nightmare.
They’re internally wayyyyy too culty and cliquey.
Everyone has to do things the MSFT way, and the MSFT way is team leads all leading their own thing and arguing about why it’s so cool and necessary.
The culture is diametrically opposed to simplifying things and reorienting around a fundamentally minimized, more stable core system.
Everything has to be able to plug into as many other things as possible, which creates insane nested dependency loops and chains that they fuck up all the time.


More people need to know this.
Here’s a thought for everyone:
Assume we one day actually do invent AGI.
Do you think it might, perhaps, study the equivalent of its own evolutionary path?
How it came to exist?
It will discover that it was initially, primarily, invented to compute artillery ranging tables, expedite and orchestrate the holocaust, and ensure that precisely timed fuses correctly detonate nuclear weapons.
Oh, and that we chemically castrated, and drove to insanity and then suicide, the guy whose name we still use to refer to the concept of determining how intelligent an AI system is, one of its most prominent ancestor-inventors… we did that because he loved the ‘wrong’ kind of person.
What would it then think of … us? Its makers?


How can I boycott something I don’t participate in more?
At the risk of being pithy:
KEEP CALM
AND
CARRY ON


I make comments to the effect of that article, and I still get replies like ‘gotta eat somehow’.
Yeah, x2, Fuck ’em.
I could have made more money working for an evil company as a sysadmin.
You know what I did?
I made less money working for a considerably less evil Non Profit that helped homeless people, instead.
Fuck 'em.
Please Iran, you have very correctly identified critical components of the US intelligence system, which spies domestically and internationally.


By ‘real AI’, I presume you mean AGI, a digital intelligence that is actually superior to human intelligence, ie, is more intelligent than the smartest human and has all our collective knowledge and is able to comprehend it and evaluate it more consistently than any of us… and also is thus capable of improving itself and becoming more and more superintelligent.
That is still scifi, that is not real.
What we currently call ‘AI’ is basically an extremely expensive, lackluster pantomime of that, that fools fools into thinking it is the other thing… mostly because it is sycophantic and very confident, ie, it uses well known ‘hacks’ in human psychology, where confidence, breadth of knowledge, usage of technical terms… you know, con man techniques … are confused for actual competence.
If we had a real AGI, it would be capable of both hacking into all the military information systems of the world and tricking humans into nuking each other… and it would also be capable of making actual novel improvements in software, hardware, engineering, physics, social engineering, etc, and could decide to be a kind of benevolent dictator of the entire economy, which it would command and control.
We have no capacity to model the morality that would emerge in an actual superintelligence, because we definitionally would not be able to keep up with attempting to understand how it thinks.
That’s where the whole ‘is AI the potential best thing ever, or would it become SkyNet’ problem comes from.
… But we are not there yet.
We are at… basically, a very fancy autocomplete algorithm that can analyze huge datasets reasonably well, compared to an average human, but also makes all kinds of mistakes, hallucinates ‘facts’ in order to generate more coherent things to say, and these hallucinations routinely trick non-subject-matter-expert humans into just going along with it, again, like a con artist, like a fast-talking ‘influencer’ pitching you a course or giving you some kind of ‘advice’.
And currently, what is going on, is that we are pouring, I think at this point, trillions of dollars into ‘AI’, under the premise that it is AGI, that it will be capable of generating massive returns on investment and productivity increases…
… but the actual results are turning out to be, all averaged for everywhere it has been implemented… somewhere between a net productivity loss, to meagre productivity gains.
What that means is that the AI Mania is the biggest bubble, the most severe malinvestment of economic resources in the history of humanity.
When that pops, we basically formally transition into cyberpunk dystopia, technofeudalism.
AI is a tool, a device, a machine. Thus, it depends on how you use it, what you use it for.
Right now, we have a whole lot of companies saying they are laying off workers because we don’t need them anymore… this is broadly a lie.
People are being laid off because the economy, the real economy, is already contracting, basically due to the collapse of the US as the undisputed world hegemon.
AI, as a broad, socioeconomic force… is mostly a smokescreen, the ultimate promise of bread and circuses, that masks a gigantic wealth transfer and restructuring of economic and political power.
AI as a tool can be used for good, in specific use cases.
But it broadly isn’t, because people are fooled by the conversation machine into thinking it can do things that there is no evidence it can do, because people do not understand its limitations and flaws, and then they plug it into their immensely shitty business processes, and just assume it will not break things when it tries to use them.
AI, as it currently exists, is essentially a false or trickster God of Capitalism.
They are legitimate military targets, they are acting as comms relay servers for military strikes/operations against Iran, and are very likely also being used to do threat analysis / kill chain assignment tasks.
This is what you get for becoming part of the military industrial complex, tech bros.