Some weird, German communist, hello. He/him pronouns and all that. Obsessed with philosophy and history, secondarily obsessed with video games as a cultural medium. Also somewhat able to program.

https://abnormalbeings.space/

https://liberapay.com/Wxnzxn/

  • 2 Posts
  • 21 Comments
Joined 1 year ago
Cake day: March 6th, 2025

  • Not impossible, although, sadly, any system where anonymity is the prime focus will also invite fucked up shit alongside legitimate use, without any complicated motives behind it. There’s just a relevant fraction of humanity who are, sometimes essentially, sometimes temporarily, messed up fucks. Which is why I think providing ways to combat abuse has to be a high priority in the underlying development of any project like it, unless it explicitly doesn’t aim for mainstream adoption.


  • I had a wild ride with Matrix, originally wanting to run a node on my server. That did not turn out well, because I was a bit stupid and just assumed there would be more admin/mod tools out of the box. As it turned out, I had inadvertently allowed spam/abuse accounts on my node without even noticing, because, naive as I was, I assumed my admin-level account would be informed of things like user registrations and abuse reports in the standard Element frontend. As a bonus, when I checked what was supposedly the official Matrix support channel, it was repeatedly getting spammed with CSAM and gore at the time. That was when I realised that it definitely was not the ecosystem for me, and that running a node without experience had been a pretty stupid idea on my end.


  • A mere 0.1% of users share 80% of fake news. Twelve accounts – known as the “disinformation dozen” – created most of the vaccine misinformation on Facebook during the pandemic. These few hyperactive users produced enough content to create the false perception that many people were vaccine hesitant.

    So, this is super anecdotal, but through the father of a friend I learned about a guy who was just downright a walking stereotype in that regard. Said father is a rather conservative guy (ex-cop, actually), got lucky and rather rich, and he lived in a suburban village here in Germany. Said neighbour, as described by him: also an ex-cop, old acquaintance, wife and kids left him because he was violent, living financially comfortably in a large house in that suburban German village on his own, but miserable. And he, unironically, sent said father of my friend far-right propaganda articles, images and messages just… all day long. Every 10 minutes or so. Presumably as mass messages to just about anyone who still had a semblance of contact with him. Anecdotal, hearsay at two degrees of separation, but it was the first time I realised those people exist as actual people, just casually living their lives around us all.


  • It’s definitely not the same, but I am somewhat reminded of Robert Sapolsky’s baboon stress study.

    Some key paragraphs:

    Robert Sapolsky and Lisa Share report evidence of a higher order cultural tradition in wild baboons in Kenya. Rooted in field observations of a group of olive baboons (called the Forest Troop) since 1978, Sapolsky and Share document the emergence of a unique culture affecting the “overall structure and social atmosphere” of the troop.

    Through a heartbreaking twist of fate, the most aggressive males in the Forest Troop were wiped out. The males, which had taken to foraging in an open garbage pit adjacent to a tourist lodge, had contracted bovine tuberculosis, and most died between 1983 and 1986. Their deaths drastically changed the gender composition of the troop, more than doubling the ratio of females to males, and by 1986 troop behavior had changed considerably as well; males were significantly less aggressive.

    After the deaths, Sapolsky stopped observing the Forest Troop until 1993. Surprisingly, even though no adult males from the 1983–1986 period remained in the Forest Troop in 1993 (males migrate after puberty), the new males exhibited the less aggressive behavior of their predecessors.

    The authors found that while in some respects male to male dominance behaviors and patterns of aggression were similar in both the Forest and control troops, there were differences that significantly reduced stress for low ranking males, which were far better tolerated by dominant males than were their counterparts in the control troops. The males in the Forest Troop also displayed more grooming behavior, an activity that’s decidedly less stressful than fighting. Analyzing blood samples from the different troops, Sapolsky and Share found that the Forest Troop males lacked the distinctive physiological markers of stress, such as elevated levels of stress-induced hormones, seen in the control troops.

    But if aggressive behavior in baboons does have a cultural rather than a biological foundation, perhaps there’s hope for us as well.







  • Psychopaths, sycophants and grifters vying for power were all very prevalent at German universities and research labs at that time. While engineering still kind of worked - as it was needed for the war machinery and heavy industry - even there, despite it being “politically neutral”, there was a brain drain: education that allowed for creative thinking was curtailed more broadly, and many talented minds were killed, displaced, or simply passed over in favour of more nepotistic choices.

    And the myth of “German engineering” being fundamentally way above Allied engineering during the war still holds in some circles, when mostly it was about different priorities (reliability instead of complex engineering, the proximity fuse instead of rocketry, radar instead of jet engines), and even in the areas where the Germans had a leg up on their enemies, it was not a fundamental advantage, but a gap that was being bridged even before German scientists were recruited after the war.




  • Note that this works for PET (it’s a pet peeve of mine when these articles just say “plastic” in a way that makes it sound like this can recycle all types of plastics), and more precisely, PET after chemical processing into something else that is usable by the modified bacteria. While still a great avenue for recycling, the article does not elaborate on how feasible this could become at large scales (unless I missed a sentence or paragraph, that can happen with my brain):

    The team made their discovery when they took polyethylene terephthalate (PET) – a type of plastic often found in food packaging and bottles – and, using sustainable chemical methods, converted it into a new material.

    When the researchers incubated this material with a harmless strain of E coli they found it was converted into another substance known as PABA in a process that must have involved a Lossen rearrangement.

    Crucially, while the Lossen rearrangement typically involves harsh laboratory conditions, it occurred spontaneously in the presence of the E coli, with the researchers discovering it was catalysed by phosphate within the cells themselves.

    The team add that PABA is an essential substance that bacteria need for growth, in particular the synthesis of DNA, and is usually made within the cell from other substances. However, the E coli used in the experiments was genetically modified to block these pathways, meaning the bacteria had to use the PET-based material.

    The researchers say the results are exciting as they suggest plastic waste can be converted into biological material.

    “It is a way to just completely hoover up plastic waste,” said Wallace.

    The researchers then genetically modified the E coli further, inserting two genes – one from mushrooms and one from soil bacteria – that enabled the bacteria to convert PABA into paracetamol.

    The team say that by using this form of E coli they were able to turn the PET-based starting material into paracetamol in under 24 hours, with low emissions and a yield of up to 92%.

    While further work would be needed to produce paracetamol in this way at commercial levels, the results could have a practical application.




  • I think you are underestimating that some skills, like reading comprehension, deliberate communication and reasoning, can only be acquired and honed by actually doing very tedious work that can at times feel braindead and inefficient. Offloading that onto something else (that is essentially out of your control, too), and making it a skill that is more and more a fringe “enthusiast” one, has more implications than losing the skill to patch up your own clothing or to calculate things in your head. Understanding and processing information, and communicating it to yourself and others, is a more essential skill than calculating by hand.

    I think the way the article compares it to walking to a grocery store vs. using a car for even just 3 minutes of driving is pretty fitting. By only thinking about efficiency, one risks losing sight of the additional effects that actually doing tedious stuff has. This also highlights that this is not simply about the technology, but also about the context in which it is used - though technology also dialectically influences that very context. While LLMs and other generative AIs have their place, where they are useful and beneficial, it is hard to untangle those uses from genuinely dehumanising ones. Especially in a system in which dehumanisation and efficiency-above-contemplation are already incentivised. As an anecdote: a few weeks ago, I saw someone in an online debate openly stating that they use AI to have their arguments written, because it makes them “win” the debate more often. That makes winning with the lowest invested effort the goal of arguments, instead of processing and developing your own viewpoint along counterarguments - clearly a problem of ideology as it structures our understanding of ourselves in the world (and possibly just a troll, of course), but a problem that can be exacerbated by the technology.

    Assuming AI will just be like past examples of technology scepticism seems like a logical fallacy to me. It’s more than just letting numbers be calculated; it is giving up your own understanding of the information you process, and how you communicate it, on a more essential level. That, and as the article points out with the studies it quotes, technology that changes how we interface with information has already changed more fundamental things about our thought processes and memory retention. Just because the world hasn’t ended does not mean that it did not have an effect.

    I also think it’s a bit presumptuous to just say “it’s true” with your own intuition as the source. You are also qualifying that there are “lazy/dumb” people as an essentialist statement, when laziness and stupidity aren’t simply essential attributes, but manifest as a consequence of systemic influences in life, and as behaviours then feed back into the system - including learning and practising skills, such as the ones you mention as not being a “bad thing” for them to become more esoteric (so: essentially lost).

    To highlight why essentialism is, in my opinion, fallacious here, an example that uses a hyperbolic situation to highlight the underlying principle: imagine saying there should be a totally unregulated market for highly addictive drugs, arguing that “only addicts” would be in danger of being negatively affected, ignoring that addiction is not something simply inherent in a person, but grows out of their circumstances, and that such a market would add more incentives that create more addicts in the system. In a similar way, people aren’t simply lazy or stupid intrinsically; they act lazily and stupidly due to more complex, potentially self-reinforcing dynamics.

    You focus on deliberately unpleasant examples that seem like a no-brainer to skip. I see no indication of LLMs being used exclusively for those, and I also see no reason to assume that only “deep, rigorous thinking” is necessary to keep up the ability to process and communicate information properly. It’s like saying that practice drawings aren’t high art, so skipping them is good, when you simply can’t produce high art without often tedious practising.

    Highlighting the problem of students cheating to avoid being “properly educated” misses an important point, IMO - the real problem is a potential shift in culture, in what it even means to be “properly educated”. Along the same dynamic that leads to arguing that school should teach children only how to work, earn and properly manage money, instead of more broadly understanding the world and themselves within it, the real risk is in saying that certain skills won’t be necessary for that goal, so it’s more efficient not to teach them at all. AI has the potential to move culture further in that direction, and to shift the definition of what “properly educated” means. And that in turn poses a challenge to us and how we want to manifest ourselves as human beings in this world.

    Also, there is quite a bit of hand-waving in “homework structured in such a way that AI cannot easily do it, etc.” - in the worst case, it’d give students busywork just to make them do something, because the exercises that would actually teach e.g. reading comprehension would be too easy for AI to do.



  • I don’t think books ever prompted the same amount of discussion about their impact on our global carbon footprint, and when it comes to “houses” - I doubt people in the Neolithic said about their new invention what is being said about AI. It is a disingenuous comparison. (And sure, someone somewhere may have said something like that about basically anything, but usually not a large part of professionals from within the field, as is the case with AI.)

    This is also not simply Luddism. The way AI is currently used goes far beyond where it is genuinely useful, in a case of investor hype FOMO, and the hidden costs for our efforts against climate change are real. So are the problems for creatives - who sadly need a lot of the “bullshit work” that AI can substitute in order to survive while honing their craft - as is the quality drop in journalism, as are fundamental questions about how far generative AI models can truly improve in quality for the massive amount of energy invested. So the usual “just wait until the tech gets better” is not an easy way out to justify draining said energy (and fresh water) on top of what crypto mining has been wasting in data centres over the past years.

    Now, those problems aren’t simply problems of the technology, but also of how that technology manifests within market dynamics. But the technology still is not neutral, and even if we view it as an inevitability, that inevitability does not have to manifest without regulation, in a context of hyped, often unwanted application to basically everything.

    Without mechanisms to address problems and to enforce regulation, and in lieu of fundamental changes to what market/investment dynamics demand, this is indeed a very questionable technology at this point. And also: to truly love something abstract, like “technology”, means being able to criticise it, sometimes harshly. Think of the meme of a “tech bro” with a fully automated house vs. the IT guy who barely has tech beyond their PC and some stuff passionately tinkered on in their own time.




  • Oh, yes, it wasn’t a direct answer - also, I’m not the person you replied to. Ultimately, my comment was more meant as an overall addition to the discussion, building on the idea of what a solution to:

    Which I think is one of the big issues with OSS projects - many are based around a very small number of people being motivated to work on something for free. And it dies if that stops.

    might be.

    But as answers to your two points: #1 - I have no idea where they got that from, myself. #2 - I think you answered that one yourself rather well, and I wanted to build on it.

    Sorry if that was confusing - my brain is good at confusing even myself at times, so I can’t imagine how that is for others.