• 0 Posts
  • 175 Comments
Cake day: July 3rd, 2023

  • The only one actually relevant to “cognitive decline” is the second; do we know what the math questions actually were?

    Also, I wasn’t making an analogy; I’m seriously asking whether we’d see the same drop-off. I think the root of the problem is more that humans will generally choose to use less effort rather than more, so any tool that reduces effort might produce the same drop-off in results when taken away.

    Going with analogies, though: people having cars means fewer people learning about or using horses, carriages, and bikes, and as cars become increasingly complex and less repairable, fewer people put in the effort to learn how to fix them when something goes wrong.

    Ultimately, though, I have to wonder whether that really matters in the long term. Did people stop doing or understanding math once calculators became common?

    One of the points the paper makes is that people who used it to help rather than to solve the problem for them performed better once it was taken away, which matches my own observations about how people use certain tools versus seeking to understand how those tools work and deepen their understanding. But again, is it really a problem that a majority of Americans (for example) don’t know how to change the oil in their car? Does that actually indicate they’re less intelligent or unable to reason logically? Or do they simply put the effort that would go into learning how their car works into other pursuits?

    Unfortunately, I think many are simply too burned out by day-to-day life to care much about learning at all, which is a much larger issue IMO.

    Though I will say I do think AI/LLMs will only reinforce that behavior; I’m not sure whether that’ll be all bad, or really all that different from the status quo prior to their spread.

    Edit: We could talk about the economic impact it will have, but the root cause is the same as all the other wealth inequality, and I can easily foresee how LLMs could be made much more equitable rather than used as vehicles for enrichment.

  • Lmfao and computers are just for nerds

    Edit: OpenAI, Anthropic, etc. can all die, but LLMs are not going anywhere. You can run a local model.

    Now, I completely agree that the hype train is completely out of control and it’s a monetary bubble, but the tool itself is not going away.

    Edit2: I think the dot-com bubble is a good analogy: the underlying idea of the internet and all it could do, online ordering and such, was solid; there was just an insane amount of hype on top that simply couldn’t be realized at the time. But now, the biggest companies ever are mainly internet/tech companies.


  • Ever heard of skills? You can essentially “teach” it new things that are not directly available in its model. Right now it’s still pretty early, but (to me) it feels like quite a leap compared to model-only usage.

    It’s by no means perfect, but I don’t think we’re even close to scratching the surface of what can be done with the tech.

    I would bet people back at the advent of computers would have scoffed at many of the things computers do now as fantasy.

    Edit: Right now, context size is a limiting factor, but you can do things like assign sub-agents to specific tasks/skills and have the overall agent call the sub-agent to complete the task, reducing the context the original agent’s call needs for that skill; the main agent sort of acts as a mediator. Of course, you still need to ensure you’re documenting what does and doesn’t work, and have that available for future tasks in the same vein so it doesn’t repeat mistakes.

    On your point about the underlying model used to train it, I imagine at some point there will be a breakthrough where it becomes more dynamic; I think skills are kind of a stepping stone to that. Maybe instead of models being gigantic, data gets broken down into individual skills that are called to inform specific actions, and those skills can already be dynamic.
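    The orchestrator/sub-agent pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual API: `call_model` is a stub standing in for a real LLM call, and the class and method names are made up for the example. The point it shows is that each sub-agent carries only its own skill’s documentation, so no single call has to hold every skill in context.

    ```python
    # Hypothetical sketch of the sub-agent pattern: an orchestrator routes
    # each task to a narrow sub-agent that carries only its own skill doc,
    # so no single model call needs the full set of skills in context.

    def call_model(context: str, task: str) -> str:
        # Stub: a real implementation would send context + task to an LLM.
        return f"[answered '{task}' using {len(context)} chars of context]"

    class SubAgent:
        def __init__(self, name: str, skill_doc: str):
            self.name = name
            self.skill_doc = skill_doc  # only this skill's instructions

        def run(self, task: str) -> str:
            return call_model(self.skill_doc, task)

    class Orchestrator:
        def __init__(self):
            self.agents: dict[str, SubAgent] = {}

        def register(self, topic: str, agent: SubAgent):
            self.agents[topic] = agent

        def handle(self, topic: str, task: str) -> str:
            # The orchestrator never loads the skill docs itself; it only
            # routes, so its own context stays small.
            return self.agents[topic].run(task)

    orch = Orchestrator()
    orch.register("sql", SubAgent("sql-agent", "SKILL: how to write safe SQL..."))
    orch.register("docs", SubAgent("docs-agent", "SKILL: house style guide..."))
    print(orch.handle("sql", "optimize this query"))
    ```

    The design choice is the same one the comment describes: the mediator stays cheap because it only dispatches, and each skill’s context cost is paid only on the calls that actually use it.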