• 0 Posts
  • 21 Comments
Joined 3 years ago
Cake day: July 5th, 2023

  • Ok, so the study was specifically targeting a genetic form of deafness linked to mutations in a gene called OTOF. These mutations prevent the body from producing enough of the protein otoferlin, which is essential for sending sound signals from the inner ear to the brain.

    Researchers used a synthetic adeno-associated virus (AAV) to deliver a working copy of the OTOF gene directly into the inner ear.

    Super cool! And now they’re targeting other genetic forms of deafness hoping to deliver similar results. Neat!

    “OTOF is just the beginning,” says Dr. Duan. “We and other researchers are expanding our work to other, more common genes that cause deafness, such as GJB2 and TMC1. These are more complicated to treat, but animal studies have so far returned promising results. We are confident that patients with different kinds of genetic deafness will one day be able to receive treatment.”



  • So, they made it a “suggestion” to not contaminate the environment, got rid of a bunch of security requirements (Who needs security at a plant powering our cities anyway??), AND are upping the amount of radiation exposure someone can have before they seek treatment.

    So, basically a bunch of evil shit. Huh, who’da thunk it? The same people that saw the increased incidence of new cancers from all the PFAS in our water and food decided to roll back regulations on it while introducing a brand-new PFAS-laden pesticide for all our crops! Wonderful! We’ll all get cancer and die early deaths because of them!

    They’re making literally everything worse, murdering us in the street, arresting dissidents, covering up their crimes, and stealing billions while they do it.


  • oh awesome, Palantir, the same company profiting off the genocide in Gaza

    CEO Alex Karp has been a vocal supporter of Israel. In November 2023, he stated, “I am proud that we are supporting Israel in every way we can.”

    Since October 2023, Palantir has provided Israel with multiple AI-powered data-analytics tools for military and intelligence purposes.

    In January 2024, Palantir held its board meeting in Israel and entered into a “strategic partnership” with Israel’s Ministry of Defense to help Israel’s “war effort.”

    In October 2024, Norway’s largest asset manager, Storebrand, divested its Palantir shares, worth $24 million, due to concerns that Palantir’s work for Israel might implicate Storebrand in violations of international humanitarian law and human rights.




  • But that’s the problem: AI companies are pushing it as a universal tool. The huge push we saw to put AI in everything is proof of that.

    People taking LLM responses at face value is a problem.

    So we can’t trust it. But in addition to that, we also can’t trust people on TV, or people writing articles for official-sounding websites, or the White House, or pretty much anything anymore, and that’s the real problem. We’ve cultivated an environment where facts and realities are twisted to fit a narrative, and then demanded equal air time and consideration for literal false information peddled by hucksters.

    These LLMs probably wouldn’t be so bad if we didn’t feed them the same derivative and nonsensical BS we consume on a daily basis. But at this point we’ve introduced, and are now relying on, a flawed tool basing its knowledge on flawed information, and it just creates a positive feedback loop of bullshit. People are using AI to write BS articles that are then referenced by AI. It won’t ever get better; it will only get worse.


  • I don’t think using an inaccurate tool gives you extra insight into anything. If I asked you to measure the size of objects around your house, and gave you a tape measure that was incorrectly calibrated, would that make you better at measuring things? We learn by asking questions and getting answers. If the answers given are wrong, then you haven’t learned anything. In fact, it makes you dumber.

    People who rely on AI are dumber, because using the tool makes them dumber. QED?




  • I think AI being used by teachers and administrators to off-load menial tasks is great. Teachers often work something like 90 hours a week just to meet all the requirements put upon them, and a lot of those tasks don’t require much thought, just a lot of time.

    In that respect, yeah sure, go for it. But at this point it seems like they’re encouraging students to use these programs as a way to off-load critical thinking and learning, and that… well, that’s horrifyingly stupid.


  • When I was in medical school, the one thing that surprised me the most was how often a doctor will see a patient, get their history/work-up, and then step outside into the hallway to google symptoms. It was alarming.

    Of course, the doctor is far more aware of ailments, and his googling is more sophisticated than just typing in whatever the patient says (you have to know what info is important in the pt. history, because patients will include/leave out all sorts of info), but still. It was unnerving.

    I also saw a study way back that found hanging a decision-tree flow chart in emergency rooms, and having nurses work through all the steps, drastically improved patient care. Additionally, new programs can spot a cancerous mass on a radiograph/CT scan long before the human eye can discern it, and that’s great, but… We still need educated and experienced doctors, because a lot of stuff looks like other stuff, and sometimes the best way to tell them apart is through weird tricks like “smell the wound, does it smell fruity? Then it’s this. Does it smell earthy? Then it’s this.”


  • I gotta be honest. Whenever I find out that someone uses any of these LLMs or AI chatbots, hell, even Alexa or Siri, my respect for them instantly plummets. What these things are doing to our minds is akin to how your diet and cooking habits change once you start using DoorDash extensively.

    I say this with full understanding that I’m coming off as just some luddite, but I don’t care. A tool is only as useful as it improves your life, and off-loading critical thinking does not improve your life. It actively harms your brain’s higher functions, making you a much easier target for propaganda and conspiratorial thinking. Letting children use this is exponentially worse than letting them use social media, and we all know how devastating the effects of that are… This would be catastrophically worse.

    But hey, good thing we dismantled the Department of Education! Wouldn’t want kids to be educated! Just make sure they know how to write a good AI prompt, because that will be so fucking useful.



  • I was thinking about their horrifying conclusion as well, and your comment made me pine for the days when you wouldn’t know something. Think about it, back before the internet, if you had a random question, you either had to interact with some trusted person, or you went to the library and looked it up. It’s like the ever-present access to all information has quelled or killed any notion of curiosity or boredom, and it’s within those frames of mind that learning and inspiration come. I remember as a kid when I wouldn’t know the answer to something, I’d think on it for days, weeks. I’d get stuck on a video game level, and hit my head against the wall for hours trying to overcome it, only to pick up a random gamer magazine off the rack at the mall, and read the solution. Treating that magazine like it was the lost treasure map of some ancient expedition, passing it around my group of friends… Interactions and experiences that are gone forever.

    We’ve gradually gone from relying on trusted professionals, learned educators, and scientific rigor to a corporation’s data-harvesting LLM, online influencers, and click-bait “journals” cosplaying as academic centers with integrity. This article is basically celebrating the fact that we’ve off-shored all of our thinking, curiosity, and inquisitiveness to machines, all while we struggle for scraps in a corporation-dominated life devoid of genuine human interaction. We’re all too busy sipping dopamine hits from a screen instead of actually living our lives.

    I grew up while the internet was slowly being rolled out, and as part of the last generation to remember what life was like before it, I can say that the things I miss most are privacy, the ability to be bored, and not knowing.

    It’s worse now, and it’s harder everyday to imagine that life on this planet will improve.


  • The existence of this kind of instinct within an LLM is extremely concerning. Acting out toward self-preservation via unethical means can be hand-waved away in an LLM, but once we reach true AGI, the same behavior will pop up, and there’s no reason to believe that (1) we would notice, or (2) we would be able to stop it. This is the kind of thing that should, ideally, give us pause enough to set some worldwide ground rules for the development of this new tech. Creating a thinking organism that can potentially access vital cyber infrastructure while acting unethically toward self-preservation is how you get Skynet.




  • When I was a cook, even if I was just making something simple, I could still find creative satisfaction in a variety of ways: how you sprinkle on the garnish, the plating, using a little more of this, a little less of that. Food, to a chef, is like art designed to be destroyed, and the temporary nature of the medium really allows you to be creative. You’re not hung up on making it perfect, because it’s just about to be eaten, so it lets you be more free with your design choices. It can be fun creating art while you’re supposed to be working.

    But if my job was suddenly just washing up after a machine… well. That would get old real quick.