

You seem to be confusing researchers and publishers.


That’s part of the cost of AI that the AI companies leave to their customers. There is a tradeoff, and we know from a long history of for-profit corporate behaviour that companies will generally prefer the lower short-term cost, despite the consequent risk and harm. If the companies that sell AI services don’t take care to ensure the outputs are true, and the companies that use AI don’t take care either, that leaves the ultimate customer or consumer to fact-check everything — or else to remain oblivious, or to stop trusting anything. The problem is made worse by the fact that most companies won’t disclose their use of AI, because of the adverse impact on their reputation, unless they are compelled to do so. So far, I don’t see any legislation compelling disclosure.


You’re absolutely correct. But the problem is bigger than the rogue journalist. Separation of duties is a well-known requirement for robust, reliable processes immune to single points of failure (whether malicious or, as I suspect in this case, merely grossly negligent and irresponsible). Holding just the journalist who used AI responsible for the publication of false statements is necessary, but not sufficient.


Maybe they should do more than just fire a person who was caught using AI. Maybe they should establish a process of independent fact-checking before publication, regardless of whether AI was known or intended to have been used to produce the article. It is a problem that AI was used in a way that introduced factual errors, and it’s fair that the person responsible was fired. But all processes need quality control. Why hasn’t the person who failed to wrap quality-control processes around the author been fired?


This is a very large part of the problem. This, and the fact that, by design, the output of AI is, despite its faults, increasingly difficult to distinguish from good work. Accountants’ spreadsheets and traditional software systems can be audited, but AI output can’t: there’s no auditable process. The output doesn’t come out of nowhere, but the process is fundamentally resistant to inspection and validation. The only choices, then, are to run a parallel auditable system and compare the results, or to run without quality control. It’s a crazy risk, but how many companies will spend the money to mitigate it? How many can survive the short-term consequences of doing so?


I don’t disagree with you. I wish there were more companies refusing to use AI, at least without the necessary quality controls, and enough customers to support them. But did you see “Visualising AI spending: How does it compare with history’s mega projects?”? I don’t think Ars can compete with that kind of funding, spent ruthlessly to eliminate competition.

People need to wake up and realize that they, not just the companies, are the target of the predatory pricing of AI services: ordinary people doing good work can’t compete with AI given away for free. Manufacturing didn’t survive the competition of lower-cost products from China, and I don’t think Ars and companies like it can survive the competition of AI being given away practically for free. It’s not even that I think AI has no value: it clearly does have enormous value, and I expect it will get better. But current AI needs more oversight and control, and those using it should be required to pay the real cost, so that those who choose not to use it can compete fairly.

Markets that are too free can’t constrain investment on this scale until it is too late. We need not just a few companies resisting on principle but some regulation of AI companies to preserve fairness in the markets, so that companies who use less AI, and ordinary people, can compete and survive on a level playing field. There are laws against unfair practices for good reason. We all need them to be enforced now.


Her company has been good, though a recent restructuring is worrying. The advice came at an assembly of CFOs, and the problem is much bigger than her company. This was the second piece of professional development guidance promoting AI that she has received in the past month. I give her examples of unreliability and advise caution. At the session, they advised that no one should study programming or accounting any more. My advice was that they should study how to audit, and that the use of AI would make effective auditing much harder than it has been, but also more necessary. The clusterfuck is going to affect everyone, unfortunately. You can’t avoid it by avoiding her company.


My wife is an accountant. She went to a seminar today where they were told to start using AI or get out of the way. They were shown an AI that can produce, in a few minutes, the consolidated annual accounts and financial statements that take her and the auditors a month to prepare. And they look very good! The company is unlikely to keep paying her and waiting for the quality reports she has been producing for years. She’s on notice: start prompting the AI or move on. The AI promoters are going to run her, and me, and probably you into the ground and walk over us all as they move on to their glorious future.


The elite don’t need the masses to be informed; they need them placated, and oblivious or confused about what is happening, so that they idolize the elite and support what is contrary to their own interests. Good newsrooms don’t serve the purposes of those that own them; AI producing slop with embedded propaganda does. It has only just begun. Watch young people on TikTok, soaking up the numbing propaganda. It is the future, now controlled by US elites. Like programmers who know their code, accountants who know their books, and so many other professionals who pride themselves on the quality of their work, journalists who do their jobs to a high standard are being replaced. It will be very good for a few: those who can afford quality, free from slop and misinformation. But that’s not the audience of Ars.


Best if you don’t, if quality matters more to you than financial viability. But no one can compete financially with the flood of AI/LLM output being given away for free or, at most, far below actual cost. It’s not good for anyone but the billionaires. Have you noticed how much wealth they have accumulated in the past few years? It’s very, very good for them.


And yet, if you don’t, you will be undercut by the grossly subsidized AI and out of a job: either individually, if your management leans AI, or along with the whole enterprise if they don’t and it is replaced by the AI slop factories.


AI - damned if you do and damned if you don’t. And it’s not just journalism affected.


Who needs separate apps when you can just tell copilot what you want and it can put the slop straight into your trough?


Didn’t OpenAI say it’s right about 25% of the time?


Maybe the WHO could declare a mental health emergency and mandate a lockdown and a minimum of 8 hours of AI use per day. And ban references to ‘slop’: it’s really bad for the mental health of the billionaires when they hear their scam called ‘slop’.


Is it disabled in hardware, firmware or software? Does Linux enable it?
It’s an interesting article but it seems to me that when it comes to opposing abuse of power, free communication is more fundamental than free software. Without sufficiently free communication, free software is practically unavailable and for many purposes (anything that involves communication with others) it is unusable. Without sufficiently free means of communication, the fediverse will cease to exist. Access to and use of the Internet is increasingly regulated.


According to “Are Song Lyrics Copyrighted? How the Law Works”, unless their use is ‘fair use’ or they have a license, they are violating copyright, if I understand the article correctly. I believe that site explains laws in the United States. It probably varies somewhat by jurisdiction, so I expect it would depend on who owns the website and where they are based.


Is ChatGPT a legal entity competent to violate copyright law? I don’t think that’s likely.
I do think OpenAI violated copyright law by copying song lyrics and other media to use as input to their LLM systems, embedding the essence of those works into their LLMs for commercial benefit. Judging by the valuations of companies that do not yet have income significant compared to the investments, the IP they copied, often without a license as far as I know, is on the face of it fantastically valuable.
The problem is cultural, not technical or legal. Most people are at best indifferent to, and more often supportive of, the exploitation of others. Unless that changes, the exploitation will be relentless. AI is a new tool that facilitates a kind of exploitation, but the fundamental inclination to exploit with minimal appreciation and compensation is nothing new. Exploitation is not merely tolerated; it is broadly encouraged and venerated.

The law is primarily a tool of the elite to protect themselves. It does little to protect the interests of a typical FOSS contributor, and the state does even less. There have been a few cases fought and won, but compared to the scale of the industry, the resources committed to defending FOSS are trivial.

That’s no more the end of FOSS now than it was in the beginning. It will probably reduce revenue for a few companies that have been exploiting FOSS and FOSS producers for profit. The vast majority of contributors were never compensated, and of those who were, it was typically for far less than the value of their contributions.