

I’m amazed that anybody could even be surprised at this end result. We truly live with some moronic apes on this earth.


I mean, yeah, I left during the first wave of API changes and before the company went public, but the AI has definitely made it completely unusable.


Which country do you live in?


LLMs have been at a standstill since 2021. I would argue the underlying techniques were around in the late ’80s; they’re just burning more compute now, and it’s being marketed as the future to confuse a billion dopes like you who don’t understand technology. It’s the ultimate Ponzi scheme: the companies are making no money but their valuations keep rising.
To clarify: OpenAI wrote a paper showing their models will never reach human output accuracy. They showed that gaining the same level of benefit from GPT-3 to GPT-4 as from GPT-2 to GPT-3 would cost a literally EXPONENTIAL amount of resources, which was borne out in practice when they actually did it a couple of years later. Improving it again would cost more power than mankind currently produces in total, and the end result would still be hallucinating, liability-filled garbage, because in 2022 DeepMind showed that even with LITERALLY INFINITE POWER AND TRAINING DATA it would not reach human output, that the hard limit doesn’t even reach the mid-90s.
You are arguing with the AI companies and researchers themselves. Y’all need to understand that AI, as it is, is a fucking scam.
The OpenAI paper (Kaplan et al., 2020, “Scaling Laws for Neural Language Models”): https://arxiv.org/pdf/2001.08361
The follow-up DeepMind paper (Hoffmann et al., 2022, “Training Compute-Optimal Large Language Models”): https://arxiv.org/pdf/2203.15556
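To make the scaling-law argument concrete: both papers model loss as a power law in compute, roughly L(C) = (C_c / C)^α, which means every equal-sized drop in loss costs a multiplicatively bigger compute budget. Here’s a minimal sketch; the constants are illustrative placeholders, not the papers’ fitted values:

```python
# Toy power-law scaling curve: loss(C) = (C_c / C) ** alpha.
# alpha and C_c are illustrative placeholders, not fitted values.
alpha = 0.05
C_c = 3.1e8  # reference compute scale (arbitrary units)

def loss(compute):
    """Cross-entropy loss as a power law in training compute."""
    return (C_c / compute) ** alpha

def compute_for(target_loss):
    """Invert the power law: compute needed to hit target_loss."""
    return C_c / target_loss ** (1.0 / alpha)

# Equal steps down in loss cost multiplicatively more compute.
base = loss(1e6)
for step in range(1, 5):
    target = base - 0.1 * step
    print(f"loss {target:.2f} -> compute {compute_for(target):.2e}")
```

Each 0.1 drop in loss costs roughly 5-8x the compute of the previous one in this toy setup, which is the “exponential resources for linear gains” point.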


You literally don’t understand.
The human statements are the baseline, right or wrong, and the AI struggles to stay above 80% of that baseline.
Take however often a person is wrong and multiply it: that’s AI. They like to call it “hallucination” and it will never, ever, go away; in fact it will get worse, because it has already polluted its own datasets, which it will keep pulling from and producing even worse output, like noise coming from an amp in a feedback loop.
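Toy numbers on that multiplication (placeholders, not measurements): if the human baseline is right 95% of the time and a model retains 85% of that baseline, you’re already at roughly 0.95 × 0.85 ≈ 81% before the feedback loop even starts. A minimal sketch of the compounding:

```python
# Toy model of error compounding when output feeds back into training.
# Both rates are made-up placeholders, not measured figures.
human_accuracy = 0.95   # how often the human baseline is right
retention = 0.85        # fraction of the baseline a model keeps

accuracy = human_accuracy
for generation in range(1, 6):
    accuracy *= retention  # each generation degrades the previous one
    print(f"generation {generation}: ~{accuracy:.1%} accurate")
```

Five generations of that and you’re under 45%: the amp-feedback analogy in numbers.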


If you think a 2:7 ratio after insulting a bunch of net-negative slopper subhumans is enough to change my mind, then welcome to the internet, my friend. That’s a figure of speech, btw; I am not some dirty slopper’s friend.


Removed by mod


“Hey AI, I want to do this very specific thing but I don’t really know what it is called, can you help me?”
That was your previous example. You had a very specific thing in mind, meaning you knew what to search for from reputable sources. There are tons of ways to discover previously unknown things, all of which are better than being a filthy, stupid slopper.
“Hey AI, can you please think for me? Please? I need it, idk what to do.”


wow thanks for that /s


And I explained why that makes them a moron.


I think there’s a point where you have to realize the topic of discussion is LLMs like ChatGPT, and that point was around the time we compared them to Web 3.0, something that people hate and associate with tech bros and evil corporations.
The meaning of words change based on context.


Unfortunately, an LLM lies about 1 time in 5 to 1 time in 10: 80% to 90% accuracy, with a hard limit that the OpenAI and DeepMind research papers argue holds even with infinite power and resources, never approaching human language accuracy. On top of that, the model is trained on human inputs, which are themselves flawed, so you multiply in an average person’s rate of being wrong.
In other words, you’re better off browsing forums and asking people, or finding books on the subject, because the AI is full of shit and you’ll end up as one of those idiot sloppers everybody makes fun of: you won’t know jack shit and you’ll be confidently incorrect.


AI has no use. It only subtracts value and creates liabilities.
Are they asking for ID?
Then they’re breaching privacy.


Did the EU suddenly develop a tech industry overnight or are you unaware where all the major AI companies are located?


Academia literally got its funding cut by more than a third, and Microsoft is paying to restart mothballed nuclear reactors for data-center power.
You might think academia will work on the problem, but the people running these things absolutely do not.


Because the training has diminishing returns, meaning the small improvement between (for example purposes) GPT-3 and GPT-4 will need exponentially more power to reproduce between GPT-4 and GPT-5. In 2020 and 2022, OpenAI and DeepMind respectively published scaling-law papers predicting that human accuracy could never be reached, the latter concluding it would hold even with infinite power.
So in order to get as close as possible, in the future they will need to get as much power as possible. Academic papers outline it as the one true bottleneck; see the sketch below for rough numbers.
It’s a problem inherent to centralized online platforms.
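For scale, the DeepMind paper above also found that compute-optimal training grows both parameter count and training tokens roughly with the square root of compute, so 100x the compute wants about 10x the parameters and 10x the data. A rough sketch using the Chinchilla model’s published configuration (70B parameters, 1.4T tokens) as the baseline; the exponents are approximations:

```python
# Rough compute-optimal scaling per the Chinchilla paper: optimal
# parameters N and training tokens D both grow ~ sqrt(compute).
# Exponents are approximate; baseline is Chinchilla's published config.
N0, D0 = 70e9, 1.4e12   # 70B parameters, 1.4T tokens
C0 = 6 * N0 * D0        # common FLOPs estimate: C ~= 6 * N * D

for factor in (10, 100, 1000):
    n = N0 * factor ** 0.5
    d = D0 * factor ** 0.5
    print(f"{factor:>4}x compute -> {n / 1e9:6.0f}B params, {d / 1e12:5.1f}T tokens")
```

Power isn’t the only input that scales: the token budget grows right alongside it, which is where the polluted-dataset problem bites.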