

Yeah it takes insane amounts of electricity. There’s an aluminium smelter in NZ with an entire hydroelectric power plant dedicated to it. 13% of the total electricity supply of NZ dedicated to just one smelter.


You mean it hallucinated a positive response to your leading question, as it's designed to? You're operating on a fundamental misunderstanding of what LLMs do. Even if what you said is true, an LLM would have no knowledge of it unless that information was explicitly fed in as input, and why would they be stupid enough to do that?


My IT teacher’s password was his personalised number plate. I only used it to unblock Newgrounds for my friends.


Any Kobo can install KOReader with a minor firmware update (which only needs to be done once). There are instructions here: https://github.com/koreader/koreader/wiki/Installation-on-Kobo-devices It supports OPDS out of the box (though IMO it's a little unintuitive to use).


I think YouTube has a lot of room for improvement, but why are people still so ignorant about the DMCA and the obligations platforms must meet to keep safe-harbour status? YouTube must take down content when a claim is filed, or it opens itself up to legal liability for all user-generated content. This Tom Scott video is still relevant: https://youtu.be/1Jwo5qc78QU


Pretty sure it’s a bad LLM “analysis” of the code. It has that flavour to it.


Sure, I don’t get any retirement or healthcare benefits, but look at all the company scrip I get for the sloppy autocomplete that’s stealing all my groundwater, no matter how many times someone says “closed-loop cooling”.


Unfortunately it’s the best option in Germany after Google threw a tantrum over privacy laws here. However OpenStreetMap is getting better.


Shotcut is my personal favourite of those. Simple but powerful, though the UI is admittedly clunky.


If my job ends up being reviewing AI code spammed at me by vibe coding juniors all day, I’m joining a nunnery.


The Xenoblade Chronicles trilogy if you like JRPGs.


I’m switching hobbies to gunpla. No one has managed to put DRM in an airbrush, to the best of my knowledge.


It’s discussed in the Bluesky thread, but the CI costs are too high on GitLab and Codeberg for Godot’s workflow.


With what money? In case you haven’t been paying attention, there’s been almost zero investor interest in gaming since the post-COVID contraction, unless it’s a massive controlling buyout. What little there is requires sacrificing yourself on the altar of gen AI. No one working in games has enough money to self-fund a new studio unless they have rich parents (Sandfall) or are one of the problem executives.


It was more that graphics hardware got a lot more flexible. Less fixed functionality meant that DXVK (the DirectX 8-11 to Vulkan translation layer) became far more viable, since old fixed-function behaviour could be emulated on the GPU through Vulkan.
Graphics APIs are a lot thinner these days as well. Creating a Vulkan renderer from scratch is like “first one must enumerate the universe”, but it means DX12<->Vulkan translation is relatively straightforward, and all the crazy stuff is done in shaders, which can be recompiled for different APIs.


Speaking of carbon, did scientists give up on lengthening carbon nanotubes at some point? They were supposed to be a miracle material as well.


I’ve never stopped hating being forced to use that piece of monopolistic trash, ever since I was on dial-up when HL2 released. I buy everything I can on GOG.
I especially resent how closed off the Steam Workshop has made the mod ecosystem for a lot of games.


Quantum computing can’t achieve better outcomes for general computing problems than classical computing can. It only speeds up particular kinds of algorithms (like Shor’s algorithm for factoring large integers into primes) that classical computers can’t run efficiently. It’s still a lot of smoke and mirrors at the moment though.
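For what it’s worth, the quantum speedup in Shor’s algorithm lives entirely in one subroutine, order finding; the surrounding factoring reduction is ordinary classical number theory. Here’s a minimal sketch (function names are my own, with a brute-force classical stand-in for the quantum step):

```python
from math import gcd
from random import randrange

def order(a, n):
    # Smallest r with a**r % n == 1. Brute force here; this is the
    # one step Shor's algorithm replaces with a quantum subroutine.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n):
    # Classical skeleton of Shor's factoring reduction.
    # Assumes n is odd, composite, and not a prime power.
    while True:
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:
            return d            # lucky: a already shares a factor with n
        r = order(a, n)
        if r % 2:
            continue            # need an even order
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue            # trivial square root of 1, retry
        return gcd(y - 1, n)    # y² ≡ 1 (mod n), y ≠ ±1 → nontrivial factor

print(shor_classical(15))  # prints a nontrivial factor of 15: 3 or 5
```

A quantum computer replaces `order` with a polynomial-time period-finding circuit; everything else stays classical, which is why the speedup applies only to problems with that kind of hidden-period structure.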


I don’t need to try. You aren’t learning facts from interrogating an LLM. If it doesn’t have information, it will make up a result. If it does have information, it will still make up a result. Even that is personifying it too much, because the transformer really has no concept of what “making something up” is. It takes an input and gives an output, no matter what.