

Whoa I didn’t know that was an option, is it part of the export menu? That would make some of my - we needed to change something after all - situations much easier at work.


It uses a completely different paradigm of process chaining and management from POSIX and the underlying Unix architecture.
I think that’s exactly it for most people. The socket, mount, and timer unit files; the path/socket activations; the After=, Wants=, Requires= dependency graph; and the overall architecture as a more unified ‘event’ manager are what feel really different from most everything else in the Linux world.
That, coupled with the ini-style VerboseConfigurationNamesForThatOneThing and the binary journals, made me choose a non-systemd distro for personal use, where I can tinker around and it all feels nice and unix-y. On the other hand, I am really thankful to have systemd in the server space and for professional work.
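For anyone who hasn’t seen one, here’s a minimal sketch of such a unit file (the service name and paths are made up for illustration; After=, Wants=, and Requires= are the real directives mentioned above):

```ini
# /etc/systemd/system/myapp.service (hypothetical example unit)
[Unit]
Description=Example service showing the dependency directives
# Ordering only: start this unit after the network target is up
After=network-online.target
# Weak dependency: pull the target in, but keep running without it
Wants=network-online.target
# Hard dependency: this unit is deactivated if postgresql.service stops
Requires=postgresql.service

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```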


While I think I agree with your general stance, I also believe ‘no knowledge is lost’ is pure hyperbole.
Setting aside the many quasi-documentaries, video essayists, and slice-of-life bloggers (whose content is surely backed up on other platforms or by data hoarders), the sheer amount of tacit knowledge about small computer/electronics/hardware repairs and the like, especially on smaller channels, is in no way either ‘not knowledge’ or not ‘lost’ should the platform go up in flames tomorrow.


I am fairly sure this is the actual point of the campaign. The selection bias for a ‘poll’ like this (one that instantly on-boards you to the AI-disabled version of the product if you answer negatively, no less) is so great that I don’t believe the suits/analysts at DDG ever envisioned a different result. Polls and comment sections attract extreme viewpoints, and the DDG crowd already skews privacy-conscious, so this was a highly expected outcome.
What the campaign does instead is:
It’s quite clever imo, and there’s no real bad outcome for what I assume is a pretty inexpensive campaign.


Hey, this seems neat, but I think you might have more success posting over on [email protected] or [email protected], since those communities are generally more open to individual project promotion.


I see the misunderstanding: I didn’t consciously register the ‘used’ hardware in your post above. That makes a lot more sense!


https://pcpartpicker.com/list/XpFtXR
That setup would currently run around $1730, without a monitor or any peripherals like a keyboard and mouse, and with a relatively cheap PSU/case/cooler combo.
Maybe I misunderstood but seems a far cry from €850.


As far as I know that’s generally what’s done, but it’s a surprisingly hard problem to solve ‘completely’, for two reasons:
The more obvious one - how do you define quality? With the amount of data LLMs require as input, and that needs to be checked on output, you have to automate these quality checks, and one way or another it comes back to some system having to define this score and judge against it.
There are many different benchmarks out there nowadays, but it’s still virtually impossible to have just ‘a’ quality score for such a complex task.
Perhaps the less obvious one - you generally don’t want to ‘overfit’ your model to whatever quality scoring system you set up. If it gets too close, your model typically won’t be generally useful anymore; it will just always output things that exactly satisfy the scoring principle and nothing else.
If it reaches a theoretical perfect score, it would just end up being a replication of the quality score itself.
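A toy sketch of that second point (the scorer and ‘model’ here are my own contrived stand-ins, not anything from a real training pipeline): a generator that only maximizes a naive automated quality score converges on pure score-bait rather than useful text.

```python
# Goodhart's law in miniature: once the metric becomes the target,
# the 'model' replicates the metric instead of producing quality.

def quality_score(text: str) -> float:
    """Naive automated scorer: rewards a fixed set of 'quality' words."""
    good_words = {"therefore", "precisely", "evidently"}
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in good_words)
    return hits / len(words)  # fraction of 'quality' words

def optimize_against_score(vocab, length=5):
    """Greedy 'model' that picks, word by word, whatever maximizes the score."""
    out = []
    for _ in range(length):
        best = max(vocab, key=lambda w: quality_score(" ".join(out + [w])))
        out.append(best)
    return " ".join(out)

vocab = ["the", "cat", "therefore", "precisely", "runs"]
degenerate = optimize_against_score(vocab)
# The output is degenerate score-bait, not meaningful text -- a perfect
# score that is just a replication of the scoring rule itself.
```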


But that ‘brick’ of posted text is just the linked article. So if we are commenting under a post dedicated to the article, it would stand to reason that we read the article itself, would you not agree?


I think you really nailed the crux of the matter.
With the ‘autocomplete-like’ nature of current LLMs, the issue is precisely that you can never be sure of any answer’s validity. Some approaches try to address this by listing ‘sources’ next to the answer, but that doesn’t mean those sources’ findings actually match the text output, and it’s not a given that the sources themselves are reputable - so you’re back to perusing them to make sure anyway.
If there were a meter of certainty next to the answers, this would be much more meaningful for serious use cases, but of course such a thing seems impossible to implement with the current approaches by design.
I will say that in my personal (hobby) projects I have found a few good use cases for letting the models spit out some guesses, e.g. for the causes of a programming bug or for proposing directions to research, but I am just not sold that the weight of all the costs (cognitive, social, and of course environmental) is worth it for that alone.


One point I would refute here is determinism. AI models are, by default, deterministic. They are made from deterministic parts and “any combination of deterministic components will result in a deterministic system”. Randomness has to be externally injected into e.g. current LLMs to produce ‘non-deterministic’ output.
There is the notable exception of newer models like ChatGPT4, which seemingly produce non-deterministic outputs (i.e. give them the same sentence and they produce different outputs even with the temperature set to 0) - but my understanding is that this is due to floating-point inaccuracies that lead to different token selections, and is thus a function of our current processor architectures, not something inherent to the model itself.
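A minimal sketch of where the injected randomness lives (toy logits and a hand-rolled softmax, not any real LLM’s API): greedy argmax decoding is fully deterministic, and non-determinism only appears once you sample from the temperature-scaled distribution.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Temperature scales the logits; lower values sharpen the distribution.
    # (temperature=0 is handled by greedy decoding instead, to avoid
    # dividing by zero -- greedy is the limit case of sampling.)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def pick_token_greedy(logits):
    # Deterministic: the same logits always yield the same token index.
    return max(range(len(logits)), key=lambda i: logits[i])

def pick_token_sampled(logits, temperature, rng):
    # Randomness enters only here, through the externally supplied RNG.
    probs = softmax(logits, temperature)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]  # made-up scores for a three-token vocabulary
assert pick_token_greedy(logits) == pick_token_greedy(logits)  # always identical
```

Given a fixed seed, even the sampled path is reproducible - which is the point: the randomness is an external input, not a property of the deterministic network itself.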


I am not sure what your contention, or gotcha, is with the comment above, but they are quite correct. And they chose quite an apt example with video compression, since in most ways current ‘AI’ effectively functions as a compression algorithm, just for our language corpora instead of video.


Otherwise, codeberg.org has also had a Pages feature for a while.
Others that come to mind are surge.sh, Netlify, and Vercel, which I think all offer simple one-push static hosting. Vercel and Render can also do dynamic pages; not sure about the others.
Edit: oh and of course GitLab if you’re looking for an almost 1-to-1 Pages experience.