• 4 Posts
  • 122 Comments
Joined 4 months ago
Cake day: January 7th, 2026

  • I hope this discussion gains some more mainstream traction, because it’s the more immediate threat in practice, while not being addressed nearly sufficiently. The only thing I fundamentally disagree with is the perceived notion that the lack of transparency is the problem. The fact that these systems are in place is the problem: stop collecting excessive personal data on your workers (who depend on their jobs for an income), and all the problems magically disappear, without ambiguity.

    I’m of the opinion that “private” property which structurally invites the public (including workers) shouldn’t be classified as “private” property, and should be subject to much stricter regulations on the deployment of surveillance systems. It’s not ethically justifiable for people to be forced to subject themselves to surveillance simply to be able to make a living and buy food for themselves. Advances in technology have made surveillance tech no longer compatible with modern society: supervisors (including law enforcement) got too greedy, and now it’s (hopefully) coming back to bite them in the ass.


  • The EU’s executive arm has also developed its own age verification app, which has the “highest privacy standards in the world,” according to Von der Leyen. Member states will soon be able to integrate it into their digital wallets, and it can easily be enforced by online platforms. “No more excuses – the technology for age-verification is available,” the EU chief said.

    So more pro-age-verification propaganda, instead of what the headline suggests: nothing concrete in that regard. How about devices with parental controls for kids, which only allow access to platforms suitable for that demographic, instead of “age”-gating the entire internet (including for adults)?









  • Yeah, on the rare occasion that I do order something online and happen to trip up the convoluted system, I’d rather give the webshop a call than give in to this dystopian nonsense. And I’ve long done away with any mainstream platforms, which I suspect will happily adopt the system (especially those that forced me to adopt 2FA: which was seemingly just in preparation for this…). I’ll happily serve as an example to illustrate just how morally unjust it is to effectively force someone to purchase and use a device they explicitly chose not to use; I really want to hear someone justify that.

    I think there are few people left who don’t believe we’ve gone too far with technology, so the Amish, to some extent, are definitely onto something.


  • It turns out reCAPTCHA has been a privacy nightmare from the beginning: from silently monitoring user activity in the background, to sending payment information to Google so an AI can assess the data and return a risk score to the website. But that apparently wasn’t bold enough, and now an effective 2FA is required, which provides additional telemetry to Google (but not to the website or app, which is obviously the privacy concern). So get ready to 2FA with Google upon registration, login, updating your cart, and payment; or, to skip the hassle, just let an approved “shopping assistant” make purchases for you (“that drive a projected 25% increase in average order value”). I don’t even own a modern Android or iOS device, so how am I supposed to solve these?


  • It seems they’re really focusing on “registration, login, cart, and payment”, which would mean the customer has to do this effective 2FA (which most consumers have conveniently been conditioned into using…) at least during these stages. This, paired with the ability to allow “trusted” AI agents (including shopping assistants “that drive a projected 25% increase in average order value”), really makes it appear they’re incentivizing the use of these shopping assistants (in order to avoid the 2FA hassle). It’s batshit insane that the big-tech oligopoly has enshittified the internet to such a degree that the average consumer is required to outsource their usage to a big-tech agent (or at least one “trusted” by these platforms) to get any meaningful use out of it. And the rogue actors? Well, they’ll probably just go back to exploiting the third world for solving CAPTCHAs…


  • I’ll give it a year before this “voluntary” evaluation becomes mandatory, while standards based on industry-leading models dictate guardrails impossible for upcoming models to implement; thus giving reason to consider would-be competitors’ models a “national security risk”, evaluated by a board which, by then, is composed of “experts” with a vested interest in the leading industry…

    Personally, I believe AI models using content for which they do not have the creator’s explicit permission have no right to exist (at least as a commercially available product).


  • Due to advances in technology, camera surveillance is simply no longer compatible with use in spaces that structurally invite the general public. You cannot physically limit what’s being captured by an image sensor: it captures everything, and filtering (including removable masking) can only happen after collection. This also means the data itself, or derivatives thereof, may be stored indefinitely, and could at any time in the future be used as evidence against members of society.

    The only meaningful strategy to prevent this is to physically remove these surveillance systems, so personal data isn’t collected to begin with. Don’t even get me started about the GDPR supposedly protecting citizens against this type of surveillance: it pushed for modernization of the systems, legitimized the “collect but protect” approach, created physical backdoors for the government to get ahold of the personal data being collected, and incentivized member states to piggyback off of it.

    But I’m glad the cracks are beginning to surface, and ordinary folks are starting to grow uncomfortable around modern camera surveillance too, because that’s the only reasonable response to it.



  • Yeah, but I guess YouTube is where their target demographic lives: the big-tech user possibly interested in changing their habits (as opposed to a privacy-conscious demographic long aware of most of what’s being discussed). In the best-case scenario they actively promote alt-platforms, but most of these rarely receive any attention from the creator. Naomi Brockwell quite actively engages on Odysee, and I’ve seen Rob Braxman engage as well, which is respect-worthy; unlike Techlore, who has comments disabled on all his videos for some reason. I also find it remarkable that they strongly advocate against “age” verification, even though YouTube has been a forerunner of this for years (requiring submission of your government-issued ID in order to view age-restricted content: you know, some album cover with something possibly falsely flagged as a nipple, or heaven forbid a naughty track title…), which, as far as I’m aware, isn’t being actively scrutinized; or at least not in any meaningful way.





  • I get what you’re saying, but at least they control what they share on their channels, unlike a Tesla car, which might disclose intimate moments not intended for sharing (including of others captured incidentally). There are a lot of gaps in camera coverage, especially in public space (at least here in The Netherlands), but when these mobile camera systems move through the streets, they capture a lot more than the relatively few stationary cameras would. I do value Rob’s content from time to time, whenever he covers subjects which I believe are underdiscussed (EV surveillance not being one of them haha).