• 0 Posts
  • 21 Comments
Joined 3 years ago
Cake day: July 13th, 2023

  • There is a lot to hate about AI. A lot of dangers and valid criticism. But AI chatbots convincing people to kill themselves isn’t a problem with chatbots, it’s a problem with the user.

    To me this seems like an obvious problem with the chatbots. These things are marketed as “PhD level experts” and so advanced that they are about to change the nature of work as we know it.

    I don’t think the companies or their supporters can make these claims, then turn around and say “well obviously you shouldn’t take its output seriously” when a delusional person is tricked by one into doing something bad.


  • Last attempt, I swear.

    By digressing to abstraction, good people can and do justify building tech for immoral purposes. It is irrelevant that tech is not inherently good or bad in cases where it is built to do bad things. Talking about potential alternate uses in cases where tech is being used to do bad is just a way of avoiding the issues.

    I have no problem calling Flock’s or Facebook’s tech stack bad, because the intentions behind the tech are immoral. The application of the tech by those organizations is for immoral purposes (making people addicted, invading their privacy, etc.). The tech is an extension of bad people trying to do bad things. Commentary about tech’s abstract nature is irrelevant at that point. Yeah, it could be used to do good. But it’s not. Yeah, it is not in and of itself good or bad. Who cares? This instantiation of the tech is immoral, because its purposes are immoral.

    The engineers who help make immoral things possible should think about that, rather than the abstract nature of their technology. In these cases, saying technology is neutral is to invite the listener to consider a world that doesn’t exist instead of the one that does.




  • I don’t see how that is the case.

    It is literally the case. People who have literally made tools to do bad things justified it by claiming that tech is neutral in an abstract sense. Find an engineer who is building a tool to do something they think is bad, and they will tell you that bromide.

    OpenCV is not, in itself, immoral. But OpenCV is, once again, actual tech that exists in the actual world. In fact, that is how I know it is not bad: I use the context of reality—rather than hypotheticals or abstractions—to assess the morality of the tech. The tech stack that makes up Flock is bad; once again, I make that determination by using the actual world as a reference point. It does not matter that some of the tech could be used to do good. In the case of Flock, it is not, so it’s bad.


  • As I said before: in a conversation about technology as it actually exists, talking about potentials is not interesting. Yes, all technology has the potential to be good or bad. The massive surveillance tech is actually bad right now in the real world.

    The issue with asserting that technology is neutral is that it lets the people who develop it ignore the impacts of their work. The engineers who make surveillance tech make it, ultimately, for immoral purposes. When they are confronted with the effects of their work on society, they avoid reckoning with the ethics of what they are doing by deploying bromides like “technology is neutral.”

    Example: Building an operant conditioning feedback system into a social media app or video game is not inherently bad; you could use it to reinforce good behaviors and deploy it ethically by obtaining the consent of the people you use it on. But the operant conditioning tech in social media apps and video games that actually exists is very clearly and unambiguously bad. It exists to get people addicted to a game or media app, so that they can be more easily exploited. Engineers built that tech stack out for the purpose of exploiting people. The tech, as it exists in the real world, is bad.

    When these folks were confronted with what they had done, they responded by claiming that tech is not inherently good or bad. (This is a real thing social media engineers really said.) They ignored the tech—as it actually exists—in favor of an abstract conversation about some potential alternative tech that does not exist. The effect is that the people doing harm built a terrible system without ever confronting what it was they were doing.


  • “Technology is neutral” is a bromide engineers use to avoid thinking about how their work impacts people. If you are an engineer working for Flock or a similar company, you are harming people. You are doing harm through the technology you help to develop.

    The massive surveillance systems that currently exist were built by engineers who advanced technology for that purpose. The scale and totality of the resulting surveillance states are simply not possible without the tech. The closest alternatives are Stasi-like systems that are nowhere near as vast or continuous. In the actual world, the actual tech is immoral, because it was created for immoral purposes and because it is used for immoral purposes.


  • If you are in a discussion about the development and deployment of technology to facilitate a surveillance state, then saying “technology is neutral” is the least interesting thing you could possibly say on the subject.

    In a completely abstract, disconnected-from-society-and-current-events sense it is correct to say technology is amoral. But we live in a world where surveillance technology is developed to make it easier for corporations and the state to invade the privacy of individuals. We live in a world where legal rights are being eroded by the use of this technology. We live in a world where this technology is profitable because it helps organizations violate individual rights. If you live in the US, as I do, then you live in a world where federal law enforcement agencies have become completely contemptuous of the law and are literally abducting innocent people off the street. They use the technology under discussion here to help them do that.

    That a piece of tech might potentially be used for a not-immoral purpose is completely irrelevant to how it is actually being used in the real world.








  • Is this really that precise? Reading through these 10 points, many of them seem quite vague to me. Phrases like:

    [. . .] a structural renewal of a wider movement for social autonomy [. . .]

    or

    [ . . .] emancipatory defence [sic] of the need for communal constraint of harmful technology [. . .]

    could mean a million different things, for example.




  • What does dystopia mean to you?

    In this particular case, the thing I find dystopian is the tendency of a disconcertingly high number of people to allow a tech company to mediate (and eventually monetize) every aspect of their social lives. The point I was making is that if this tool were to experience widespread adoption, even putting aside the massive surveillance and manipulation issues, what will inevitably happen is that a subset of people will come to rely on the tool to the point where they cannot interact with others outside of it. That is bad. It’s bad because it takes a fundamental human experience and locks it behind a paywall. It is also bad because the sort of interactions that this tool could facilitate are going to be, by their nature, superficial. You simply cannot have meaningful interactions with someone else if you are relying on a crib sheet to navigate an interaction with them.

    This tool would inevitably lead to the atrophy of social skills, in the same way that overusing a calculator causes arithmetic skills to atrophy, and in the same way that overusing a GPS causes spatial reasoning skills to atrophy. But in this case it is worse, because this tool would be contributing to the further isolation of people who, judging by the excuses offered in this thread, are already bad at social interactions. People are already lonely, and apparently social media is contributing to that trend; allowing it to come between you and personal interactions in the face-to-face world is not going to help.

    This is akin to having sticky notes to remember things, just in a more compact convenient application.

    I really disagree with this analogy. It would be more appropriate to say that this is like carrying around a stack of index cards with notes about people in your life and pulling them out every time you interact with someone. If someone in my life needed an index card to interact with me, I would find that insulting, because it is insincere and dehumanizing. It communicates to others, “I don’t care enough about you to bother to learn even basic information about who you are.”

    The problem isn’t the technology, it’s the application

    I really cannot stand this bromide. We are talking about a company with a track record of using technology to abuse people. They facilitated a genocide (by incompetence, but they clearly did not give a shit). They prey on people when they feel bad. They researched ways to make people feel bad (so they will be easier to manipulate). They design their tools to be addictive and then manipulate and abuse people on their platform. Saying “technology is neutral” is the least interesting thing you can say about tech in the context of the current trends of Silicon Valley, a place whose thought leaders and influencers are becoming ever more obsessed with manipulation, control, and fascism. We don’t need to speculate about technology; we already know the applications of this technology won’t be neutral. They will be used to harm people for profit.


  • A tool that keeps track of people in your life and gives you small talk cues seems dystopian in itself. Relying on it, you would just further isolate yourself from others.

    Thinking about it, I am pretty sure I would immediately despise anyone who used this tool on me, even apart from the fact that they would be putting me into a meta database without my consent. I would despise people who use this tool for the same reason I despise people who crudely implement the strategies from “How to Win Friends and Influence People”. Their interactions are insincere and manipulative.


  • I think when most people say something like “technology is making the world worse” they mean the technology as it actually exists and as it is actually developing, not the abstract sense of possible futures that technology could feasibly deliver.

    That is clearly what the author of the piece meant.

    If the main focus of people who develop most technology is getting people more addicted to their devices so they are easier to exploit, then technology sucks. If the main focus is to generate immoral levels of waste to scam venture capitalists and idiots on the internet, then technology sucks. If the main focus is to use technology to monetize every aspect of someone’s existence, then I think it is fair to say that technology, at this point in history, sucks.

    Saying “technology is neutral” is not super insightful if, in the present moment, the trend in technological development and its central applications are mostly evil.

    Saying “technology is neutral” is worse than unhelpful if, in the present moment, the people who want to use technology to harm others are also using that cliche to justify their antisocial behavior.