• 0 Posts
  • 42 Comments
Joined 1 year ago
Cake day: January 5th, 2025



  • If there were an actor behind a handful of accounts mostly run by LLMs (which mimic human input and interaction), it would be easily viable for state-level or professional actors to pull such an operation off at scale and successfully manipulate a small platform like the fediverse - especially with some level of manual input or confirmation. Even producing believable selfies of real people who fit the profile is possible, and should be anticipated if the actor or the organization behind them is resourceful.

    I’m not entirely against instance-level detection that attempts to understand user patterns and prevent or flag abuse to mods and admins, but I do believe that humanized input and interaction can already be effectively emulated and will only advance as time passes.

    I believe that increased scrutiny of users in a centralized manner is a privacy violation. I picked my instance intentionally and I give some level of trust to the instance owners, but I wouldn’t consent to them (or the software they choose to use) handing over my PII or usage patterns to a third-party group that suspects me (even through purely automated mechanisms). I would discontinue using the service in such a scenario.

    To support my point that bot detection is mostly futile on the fediverse, I’d like to draw your attention to a parallel to this situation in gaming with humanized aimbots - which are already incredibly viable and are implemented in a variety of ways.

    There are usually actual human actors guiding input to some degree, but the aimbot is designed to mimic human input to achieve believable results. I believe this technology can still advance quite a bit, and new methods pop up every day.

    The key difference between gaming and the fediverse is that the fediverse is not software running on our computers at the kernel level (as most anti-cheat does) - it’s a website running in a browser.

    Ultimately, I feel it boils down to blocking instances whose operation you disagree with to curate your experience - which is already available on Lemmy.



  • Are you personally invested in the AI/LLM space? I’m wondering why you chose to engage with so few of my arguments. Is your account a troll account? If you’re not trolling: re-read. I will not engage further until you adequately address my points.

    I was pretty clear: there is no intelligence. AGI is an absolute pipe dream, and even if achieved it would be a far cry from actual intelligence if you look into it. Hallucinations won’t be fixed unless the technology itself evolves - adding more GPU power won’t fix them.

    The copyright theft is an extreme issue, regardless of your hand-waving of it. Copyright law reform is not perceivably on the table. Major companies have been caught red-handed stealing, and these companies have no intention of compensating the rights-holders they stole from.


  • The technology (at least with current methodologies) is flawed: that’s why people are warning of the bubble bursting. We can’t properly scale LLMs on our current grid in the same capacity as China can on theirs. Our technologies are also incredibly energy-intensive compared to theirs.

    There is no intelligence, the hallucinations are likely fundamental, the cases of people being given dangerous or harmful advice are rising, human AI psychosis is a real concern, the sycophancy/bias confirmation is still present, and major actors in the AI space are existentially afraid of any form of regulation of the technology/industry (which does not signal confidence).

    Also, it’s critical to factor in the whole copyright issue with training data… one domino is all it takes to collapse the whole thing.


  • Meta/OpenAI openly pirating everything they can to train their LLMs is a good example of how data-hungry these AI companies are.

    Is it plausible that companies ask Reddit to narrow data down - e.g. by demographic, geographic location, or likelihood of being a real person - and purchase that subset? Sure. But the LLMs seemingly require all the data these companies can get their hands on, so given the scale of data being consumed (and data theft being committed), I highly doubt the big players care much about Reddit data being tainted. If anything, it might even be desirable to them.