

This really has nothing to do with Smart TVs in itself though… It’s just a problem if you choose to play YouTube videos on your TV, which seems like a pretty reasonable thing to want to do.


Nah man, this is just some divisive bullshit. How many people have you converted by leading with telling them they’re getting cucked? I think there’s a much greater chance that if you ‘accuse’ someone of “cuckloading” they’ll just become defensive.
I’m also a bit impressed by how quickly you brought US politics, slavery and world wars into a discussion about online privacy.


Jesus Christ, this is such a toxic attitude… If you want people to take you seriously, I don’t think being an ass about it and rage-baiting people is the right strategy.


Those seem to be the terms for the personal edition of Microsoft 365 though? I’m pretty sure the enterprise edition, which has features like DLP and tagging content as confidential, has a separate agreement under which they don’t pass on the data.
That is like the main selling point of paying extra for enterprise AI services over the free publicly available ones.
Unless this boundary has actually been crossed, in which case, yes, it’s very serious.


That is kind of assuming the worst-case scenario though. You wouldn’t assume that QA can read every email you send through their mail servers “just because”.
This article sounds a bit like engagement bait based on the idea that any use of LLMs is inherently a privacy violation. I don’t see how pushing the text through a specific class of software is worse than storing confidential data in the mailbox, though.
That is, assuming they don’t leak data for training, but the article doesn’t mention that.


They’re probably not going to use it…
… but if they do, it’s going to be a hell of a good starting point for motivating people to leave Facebook.


Honestly, you pretty much don’t. LLMs are insanely expensive to run, as most of the model improvements will come from simply growing the model. It’s not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you’re going to be behind the purpose-made GPUs with 80 GB of VRAM.
Maybe it could work for some use cases, but I’d rather just not use AI.


Maybe I misunderstand what you mean, but yes, you kind of can. The problem in this case is that the user sends two requests in the same input, and the LLM isn’t able to deal with conflicting commands between the system prompt and the input.
The post you replied to kind of seems to imply that the LLM can leak info to other users, but that is not really a thing. As I understand it, when you call the LLM it’s given your input plus a bunch of context: a hidden system prompt, perhaps your chat history, and other data that might be relevant for the service. If everything is properly implemented, any information you give it will only stay in your context, assuming nobody does anything stupid like sharing context data between users.
What you need to watch out for, though, especially with free online AI services, is that they may use anything you input to train and evolve the model. This is a separate process, but if you give personal information to an AI assistant it might end up in the training dataset, and parts of it could end up in the next version of the model. This shouldn’t be an issue if you have a paid subscription or an enterprise contract, which would likely state that no input data can be used for training.
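The “stays in your context” part can be sketched in a few lines of Python. This is a hypothetical illustration, not any particular vendor’s API; the function and field names are invented, but the isolation principle is the same as what chat APIs actually do:

```python
# Hypothetical sketch of per-user context assembly for an LLM call.
# build_messages and the message dicts are made-up names for illustration.

def build_messages(system_prompt, user_history, user_input):
    """Assemble the context window for a single request.

    Only this user's own history is included, so the model never sees
    another user's data unless the service wrongly shares context.
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(user_history)  # this user's past turns only
    messages.append({"role": "user", "content": user_input})
    return messages

# Two users calling the same service get fully independent contexts:
alice = build_messages("You are a helpful assistant.", [], "My order id is 1234")
bob = build_messages("You are a helpful assistant.", [], "What was the other order id?")
assert all("1234" not in m["content"] for m in bob)  # nothing of Alice's in Bob's context
```

The model only ever sees what the service puts in `messages` for that one call, which is why cross-user leaks require an implementation bug (shared context) or the separate training pipeline, not the model itself “remembering” your chat.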


I can’t really tell if you’re joking or not, but no, I’m saying that it’s a bug, and at no point is anything sent off your computer.


I like that the article excerpt clearly says that it’s simply about files not being removed when the trash bin is emptied, and it’s a problem specific to the Canonical snap system… Yet every single other comment in here rants about Microsoft spyware. Not many people read beyond the headline, lol.
I think you are vastly underestimating the number of people who only interact with social media through apps these days. Even on Reddit I would be surprised if the old design is more than a few percent of the user base.
It honestly does raise the question of why they are still maintaining the old UI. It seems like an annoying legacy product to keep alive for a tiny part of the user base. Perhaps it’s used as an API stability canary or something, though.