• 1 Post
  • 46 Comments
Joined 6 months ago
Cake day: September 27th, 2025

  • Chime in if you disagree, but there are really only two reasons a company like OpenAI shuts down a core service like Sora:

    • The service is hemorrhaging money to the point of financial unsustainability.
    • The service is not popular enough to drive investor hype as a “loss leader”.

    We already know that OpenAI is losing money on their generative “AI” products across the board, to the tune of billions of dollars per year. The economic woes that come from rising hardware prices, oil and gas shortages, and another pointless war in the Middle East only make the situation worse for them financially.

    And so that really just leaves me to conclude that Sora has not maintained the level of popularity and growth needed to impress investors as Q1 comes to a close. Whether it’s user counts, subscriptions, or time spent, they must have looked at the numbers and really didn’t like what they saw.

    Hopefully this is the beginning of the end of the ridiculous “AI” bubble, and the start of a new tech sector correction.

  • Obviously an AI can’t work without being trained. Neither can a human.

    This is a false equivalency that equates natural learning and human agency with “machine learning”, when they are not remotely the same. It’s a common and deeply flawed personification of a mathematical system that simply does not “learn” in the same way that a human being does.

    Contrary to what seems to be a popular belief today, the creative insight of a human artist is not simply a combination of all of the other works of art that they have seen (akin to training data superimposed into a model). A human artist has the x-factors of personal agency, taste, and the constant sensory barrage of simply living as a huge part of their creative development. For every painting that a human artist sees, they see an unknowable number of other things that influence their perception of the world and of art.

    This is very much not a legal point that you’re arguing here, by the way; it’s a technical and practical one.

    I should note that it’s a very long-standing and well established principle that style cannot be copyrighted.

    “Style” is not what’s in question. It never was, and it wasn’t a word that I used in my example.

    ML models are not trained on “style”. They are trained on actual works.

    And in many cases (including OpenAI’s) they are trained on an unimaginable number of full copyrighted works, in their entirety, without license or consent from the copyright holders, oftentimes pirated with DRM circumvented.

    It’s a simple fact of the technology that OpenAI’s Ghibli filter could not have been made without training on a large amount of their actual artistic work (probably every frame of every film, if I had to make an educated guess). OpenAI have admitted that much themselves in court.

    Okay, you think that. What do the judges think? That’s what it ultimately comes down to.

    You seem to have forgotten that this is a social media website comments section discussion, not a court of law.

    I’m sharing my personal opinion, with a background in art, music, and programming, not law.

    I’m entitled to do so, and I won’t stop, because it should go without saying that the copyright system matters a great deal to people who actually make things.

    If you think you’re above that then I’m not sure why you’re even here, frankly. Are you here to argue that any of this is fair use? I don’t see you making that case… (Maybe slightly timidly making that case, but not really going for it.)

    In the end this topic is central to human culture and society; it’s not some kind of intellectual exercise for only people in blue suits to muse about.

    Welcome to “the court of public opinion”, where Texan judges and Roman popes alike can be wrong.