

Ollama + Mistral models are pretty open. All the code and weights are released under permissive licenses.




Isn’t Servo what FF uses for rendering? I thought they had built that layer from scratch in Rust.


It’d be nice if it integrated with my local ollama instance and let me pick which models I wanted to use on the fly with whatever part of the page I want.


Were these guys smuggling drugs into the US?


Fair point… it’s a very conservative floor number though, lol. The wording is doing some heavy lifting.


I figured the % would be significantly higher, like 40% at this rate.


I would say it’s more like 1000 times more energy. Trillions of matrix-math operations for a handful of tokens at max CPU/GPU usage, compared to a ~10 millisecond database query (or, in wiki’s case, probably just a direct edge-node cache hit with no processing involved).
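A rough back-of-envelope sketch of that ratio. Every number here is an assumption for illustration, not a measurement: a ~7B-parameter model, ~2 FLOPs per parameter per generated token, and guessed power draws for the GPU and the database server.

```python
# Assumed LLM inference cost
PARAMS = 7e9            # assumed model size (parameters)
FLOPS_PER_PARAM = 2     # rough FLOPs per parameter per generated token
TOKENS = 100            # tokens in a short answer

GPU_WATTS = 300         # assumed GPU board power while generating
GPU_SECONDS = 2.0       # assumed wall-clock time for the answer

# Assumed database query cost
DB_WATTS = 50           # assumed share of server power for one query
DB_SECONDS = 0.010      # the ~10 ms query from the comment

llm_flops = PARAMS * FLOPS_PER_PARAM * TOKENS   # total matrix-math work
llm_joules = GPU_WATTS * GPU_SECONDS            # energy for the LLM answer
db_joules = DB_WATTS * DB_SECONDS               # energy for the DB query

print(f"LLM answer: ~{llm_flops:.1e} FLOPs, ~{llm_joules:.0f} J")
print(f"DB query:   ~{db_joules:.1f} J")
print(f"Ratio:      ~{llm_joules / db_joules:.0f}x")
```

With these made-up but plausible inputs the ratio lands around three orders of magnitude, consistent with the “1000 times” guess; swap in your own hardware numbers and it moves accordingly.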


The problem with that is that it’s so locked down now you need an account of X age with Y karma, so the majority of the site isn’t something you can participate in. And I get it, lots of spam accounts and whatnot, but it’s still shitty that they’re on a hair trigger, ready to destroy years’ worth of built-up karma over nothing.


I only got a warning, they said if I did it again I could face a site-wide ban though. Pretty disturbing that they’re tracking upvotes to a post… they said I also upvoted a Luigi comment… but in both cases, they wouldn’t tell me specifically which post or comment triggered it.


The human brain contains roughly 86 billion neurons, while the model behind ChatGPT has 175 billion parameters. Parameters are weights, closer to synapses than to neurons, so while the model has more “neurons” in this loose sense, these are not the same as biological neurons and the comparison is not straightforward.
86 billion neurons in the human brain isn’t that much compared to some of the larger neural networks with 1.7 trillion parameters though.


I see two new features that look fantastic, but the rest of the UI seems largely unchanged. I’ll definitely give it a shot though.


GIMP is unfortunately not a good competitor, the UX/UI is atrocious, and that’s after spending 25 years using it now… I switched to Krita for most things at this point. GIMP needs some sort of revamp.


Collection of personal data is arguably worth money to them though, for advertising and whatever else they’re doing.


A lot of their chips are fabbed in the US, Israel, Germany, and elsewhere though. It’s weird that nobody has mentioned all their US fabs. The new ones coming up in Ohio (construction is already underway) will be two next-gen fab plants.


Does Intel make its main CPUs in China, such that those high tariffs would apply?
Looked it up and found this info at least:
Key US locations: Arizona (Fab 52 and 62), New Mexico (Fab 9 and 11x), and Oregon (Hillsboro) are major Intel manufacturing hubs, with Fab 42 and 32 also part of the larger Arizona campus. Ohio is another major site, with construction well underway on two new leading-edge chip factories.
Global footprint: Intel also has manufacturing facilities in Israel (Jerusalem, Kiryat Gat) and Ireland (Leixlip).
Expansion and future: Intel is actively expanding its global network with new fabs in Ohio, Germany, and other locations, according to Intel Newsroom, and plans to make the German fab one of the most advanced in the world.


Ren from Ren and Stimpy?
https://ollama.ai/, this is what I’ve been using for over a year now. New models come out regularly, and you just run “ollama pull <model ID>” and then the model is available locally. Then you can use Docker to run https://www.openwebui.com/ locally, giving it a ChatGPT-style interface (but more configurable, and you can run prompts against any number of selected models at once).
All free and available to everyone.
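For anyone who wants to try the setup described above, a minimal sketch. The model name is just an example, and the docker invocation follows Open WebUI’s documented quick-start (port mapping and volume name are the defaults from their docs):

```shell
# Pull a model and run it locally with Ollama
ollama pull mistral
ollama run mistral "Say hello in one sentence."

# Run Open WebUI in Docker, pointed at the host's Ollama instance
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
# Then browse to http://localhost:3000
```

Open WebUI auto-detects the local Ollama server, so any model you’ve pulled shows up in its model picker.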
In my experience it depends on the math. Every model seems to have different strengths across a wide range of prompts and information.
+1 for Mistral, they were the first (or one of the first) to release Apache-licensed open-source models. I run Mistral-7B and fine-tuned variants locally, and they’ve always been really high quality overall. Mistral-Medium packed a punch (mid-size obviously), and it at least competes with the big ones.
Same here, except 64GB of RAM, I can’t even remember how much that cost 4 years ago but I’m afraid to check the receipt at this point.