vidarh 9 hours ago [-]
This is such a lazy argument. Every tool that displaces old tools causes skills to be lost when those skills are no longer needed.
To the extent that people still need to be able to critically assess what AI delivers to achieve their goals, they will still pick up those skills or fail. They will then need to either invest the time to learn, or they'll fail to find employment, or fail in other aspects of life.
When we see people lamenting lost skills like this, it is usually a result of them overestimating the continued necessity of certain skills in the face of new technology.
You won't suddenly have a generation of software developers (for example) who don't know the necessary skills to do their work, but you may get a generation of software developers who don't have the skills you think are necessary to do their work.
9dev 7 hours ago [-]
Skills no longer needed… as long as you have access to an AI model provided by a handful of companies at an arbitrary rate; with training cost so high that only huge corporations have the funds to pull it off, building an ever-growing moat over time.
This sounds like a great future! Nothing worrying here at all.
wilkystyle 3 hours ago [-]
This assumes that the way things are now is the way things always will be.
Right now AI is in its mainframe era (thin clients connecting to expensive compute somewhere else that you don't control), but I firmly believe that the AI version of the personal computing revolution is on the horizon. Democratized computing probably seemed pretty out of reach when all we had were mainframes, but in retrospect the progression from mainframe to personal computer to supercomputer in your pocket seems ordinary and almost expected.
I have no doubt that the technology needed to democratize personal AI will also advance in similar ways, and we will have no shortage of the next generation's "640K ought to be enough for anybody."
9dev 1 hours ago [-]
Maybe. Alternatively, things will veer further towards centralisation because that is where all the VCs bet on getting their investments back, and where they historically have seen the most revenue. I’m not convinced AI follows the same trajectory general computing did 50 years ago; the world has changed massively since then.
ThrowawayR2 1 hours ago [-]
"Past performance is not indicative of future results." and "Don't count your chickens before they've hatched." except in the world of the AI advocates, where they confidently assure us that it's perfectly fine to count our AI chickens before they've hatched because reasons.
oblio 1 hours ago [-]
We only got PCs because IBM screwed up. Every other ecosystem is walled off to various degrees. And absolutely every current corp knows about IBM's failure and definitely does not want to repeat it.
Nintendo? Walled garden. Playstation? Walled garden. Mac/iOS? Walled garden. Clouds? Obviously walled gardens; the higher the walls, the more advanced the services. SaaS? Walled gardens. Social media? Walled gardens.
cyanydeez 2 hours ago [-]
I think the problem is a stochastic one: more options seem to exist for this technology to abuse humanity via its "owners" than to democratize anything. It's not like it's been helping to wage war, mollify the public, and entrench pre-existing racism for the last decade.
These are all things happening today via AI, so really, this is an argument that's like, entropy. There are always way more ways in which things fall apart than ways they build to stability.
Being optimistic seems more like religiosity than any real accounting of the current system you're operating in (unless you're a billionaire).
saidnooneever 7 hours ago [-]
"I like money and sex. Do you like money and sex too? Maybe we can be friends!" - Idiocracy
alecbz 7 hours ago [-]
A car that can self-drive 100% of the time is a new tool that could make driving an obsolete skill. A car that can self-drive successfully 99% of the time is dangerous because it trains people to not be ready to take over for the 1% they need to.
vidarh 5 hours ago [-]
This is only a problem if regulators and/or courts and/or consumers all fail to recognise that said 99% car isn't safe enough.
alecbz 1 hours ago [-]
Sure -- I think articles like this are a warning that the skills we're losing are likely _not_ so completely supplanted by AI that they'll soon be irrelevant.
casey2 5 hours ago [-]
What actually happens is that the 1% is ignored or outlawed. The shovel doesn't do 100% of human excavating tasks better than hands, but we rightly realized that the space of possibilities involving a shovel was much greater than the 1% of hand-powered excavation.
alecbz 37 minutes ago [-]
If the 1% is just a bit less efficient with the new tech, sure, but it's different if the 1% means your car crashes.
array_key_first 1 hours ago [-]
AI is ultimately a thinking replacement tool. Losing the skill of critical thought is existential.
The actual lazy argument here is pretending that AI is like the fucking cotton gin or something. We all know and understand it's not. There are people getting whole-ass degrees using AI for everything.
legacynl 8 hours ago [-]
What is your argument actually based on? It seems you're just assuming this to be the case.
vidarh 5 hours ago [-]
All of human history.
legacynl 5 hours ago [-]
Notwithstanding that, knowledge of history still doesn't allow you to predict the future. In those cases we automated methods and tools; now we're automating humans. Don't you think that might possibly be a significant departure from what happened in history?
mpalmer 8 hours ago [-]
> To the extent that people still need to be able to critically assess what AI delivers to achieve their goals, they will still pick up those skills or fail.
Or, the people who evaluate them will be suffering from the exact same self-inflicted cognitive limitations, and promote them, or at least not fire them.
The quality of this firm's product suffers perhaps, but it doesn't matter. The consumer will again, in all likelihood, be limited in the same way.
Everyone's happy.
vidarh 5 hours ago [-]
More realistically, a company that fails to properly evaluate this in ways that reflect actual market needs will fail in the marketplace.
dyauspitr 8 hours ago [-]
It’s essentially about whether your skills are “Turing complete”. If you know only Java, building an app that requires assembly-tier efficiency may be painful, but you can do it. With vibe coding you just have to hope and pray. It’s not really a skill. Your skills are not Turing complete.
vidarh 5 hours ago [-]
So vibe coding won't be sufficient to replace a skilled Java developer and won't obsolete that skill; and if there aren't alternatives that more completely replace a skilled Java developer, then this isn't a relevant comparison.
bitbasher 10 hours ago [-]
It aligns with my experience and what I have seen. Looking at this through the lens of writing software: much of "learning" to write software comes down to experience.
When you see an error like, "error: expected ‘,’ or ‘;’ before ‘include’" you know what happened and where to look because you've seen it a hundred times before.
AI takes that away. It's not inherently bad; it's great that it can solve those sorts of things for you. However, the second-order effects are terrible. You end up never developing that experience. Is this simply evolution of the craft? Is that experience no longer necessary?
I could be wrong, but I believe that experience is necessary and losing it will be a net negative. Furthermore, the reduction of experience will increase dependency on these tools and the companies that provide them.
itmitica 10 hours ago [-]
Why would cognitive overload work better?
AI is a tool to help you see the forest for the trees.
Reading articles the old-fashioned way can be akin to seeing the trees but not the actual forest.
Young minds tend to learn. However they do it, the old-fashioned way or the new AI way, they will learn.
Many blank out in school on different subjects, and the cognitive-overload byproduct follows them all their life, making them wary of new things.
And finally, maybe you, personally, are reaching a limit in your comprehension of the modern world, and you show it by fighting the wrong battle with the wrong arguments.
Or maybe you are onto something.
legacynl 8 hours ago [-]
> Why would cognitive overload work better?
I don't know where you get the 'cognitive overload' term from (it's not in the article). But in general, cognitive effort is what drives our brains to learn in the first place.
As an organ in an organism, the brain is very averse to using energy, because the organism might need it later to run from or fight some danger. Learning costs energy, and the brain would rather not if it doesn't need to. The only reason the brain will ever learn anything is if you repeatedly expose it to cognitive effort, because in that case the effort of learning will save energy in the long run.
If you use AI so that most things don't require cognitive effort, your brain will not use those learned neural pathways, and they will atrophy over time.
The only thing that the brain learns from using AI is that the most efficient way of doing anything is having the AI do it for you.
itmitica 6 hours ago [-]
I'm going to comment on two subjects.
One, if cognitive overloading is not in the article, then that's good: it means I actually put some thought into my response.
Two, AI expands possibilities for those that want that, and offers shortcuts for those that want that. No different from any other learning process: you could actually learn something, or you could just do it. It makes sense; not all humans seek learning, but most humans look for results and answers.
legacynl 5 hours ago [-]
I don't know what your interpretation of 'expanding possibilities' is, but I suspect that those are shortcuts in some way too. If you only use AI to help you search the internet, you'll become less adept at searching yourself. If AI allows you to do something that you aren't able to do yourself, it is allowing you the shortcut to not have to learn that thing.
itmitica 5 hours ago [-]
AI is expanding my thinking possibilities.
Mastering a certain form of internet search does not mean you are learning; it means you are mastering, to some degree, a tool for searching. Shortcuts are OK here, in my view. Learning comes when you actually go beyond tool-use skills.
To be more explicit: the time spent learning a tool is not time spent learning, it's time spent preparing to learn. AI cuts that out, if you want, and you get straight to actually learning something instead of tripping over tools and infrastructure, becoming too overloaded to see the forest for the trees.
bogzz 10 hours ago [-]
The last four words would have sufficed.
itmitica 9 hours ago [-]
I always am open to learning, even by antithesis.
mpalmer 9 hours ago [-]
> Why would cognitive overload work better?
You mean the state of affairs humans have enjoyed for the last four millennia? The status quo that led to all of the technology you seem to think we now can't live without?
> Many blank out in school on different subjects and the cognitive overload byproduct follows them all their life making them wary of new things.
They should try putting their phones down before we double down on solving tech problems with more tech.
You are oversimplifying and sending confusing signals.
There was, and still is, constant progress that constantly demands more tech. Without more tech, that progress would have stalled, and would stall now.
JohnFen 5 hours ago [-]
There is only a loose connection between technological advances and progress, depending on how you define "progress". Technological advances have held back or reversed progress in many areas, just as they have advanced it in others.
To talk about "progress" as if it were some sort of simple, objective thing is misleading.
The real issue isn't about progress. It's about what sort of lives we want to be living.
legacynl 8 hours ago [-]
There are multiple ways that people and society can progress, and most of them have nothing to do with tech.
itmitica 8 hours ago [-]
More details please.
legacynl 8 hours ago [-]
Unless you're being willfully obtuse, I'm sure you can come up with your own examples of how some changes in society, culture or politics could massively improve the lives of everyone.
itmitica 7 hours ago [-]
You are willfully evasive, so I'm going to take this as a sign not to waste any more time.
legacynl 5 hours ago [-]
Perhaps some cultural phenomenon convinces people to start taking washing their hands after the bathroom very seriously, preventing tens of thousands of deaths every year.
It's a stupid example, but no tech would be needed. There are loads of problems in the world (wars, disease, famine, etc.) that can be massively improved (progress) without any change in tech.
itmitica 5 hours ago [-]
Should we go into how much tech is involved in making soap? Or how much tech is involved in running water facilities? Or how much tech is involved in transitioning from outside toilets to modern toilets? Cultural phenomena help no one without the tech to back them up. In fact, cultural phenomena not backed by tech are the ones making our society a regressive one.
gmerc 10 hours ago [-]
I’m sorry, what? Look at the world, overrun with slop, and say this again with a straight face.
itmitica 9 hours ago [-]
You are assuming slop is only an AI feature. You know the anecdotal aunt in the old days, confidently hallucinating answers? One had to carry half-truths around all the time, and things are still the same now. Only now, the mitigation can come sooner rather than later. From an AI model near you.
mpalmer 8 hours ago [-]
You really should read the linked article if you're going to comment this much.
itmitica 8 hours ago [-]
If you have meaningful commentaries, I'm ready to learn from them.
childintime 7 hours ago [-]
Skills have a life cycle. That's something you learn as you get older. You are inevitably a part, an expression, of the time you grow up in. We become obsolete by the time we die. We die knowing our exact knowledge can't be replaced; it dies with us.
legacynl 5 hours ago [-]
I think you're right, and I think that's in large part perfectly fine. As long as the important skills that we need keep replenishing themselves in young people.
The problem is identifying which skills those actually are. Without a true answer, I'm going to be prudent and assume that things involving learning and critical thinking are some of the important ones.
igor_mart 11 hours ago [-]
My skills were forged in the other type of school: I know how to operate a lathe and a milling machine, but I don't think that's a good thing now, and it's very dangerous too. The times dictate the skills. But an understanding of basic life and physical principles was fired into me by my father, so I don't rely on school for that; it's the parent who is responsible.
ookblah 10 hours ago [-]
i feel like people should be focusing on the damaging things that aren't just "ai" (like what the hell does that even mean? it's too broad).
dark app patterns, gambling, etc. like seriously, i know we all want to hate on llms or whatever stealing our jobs or making us stupider but has this been any different from the past in that regard?
whether it be radio, tv, computers, the internet, video games, etc., all of these were claimed to be doing something "to the children", but i agree with what another comment said: kids will figure out a way to learn and utilize the tools given to them.
did me "offloading" my thinking to google or some computer instead of cracking open some library book or doing calculations by hand damage my thinking at the time? no... because a sufficiently motivated person will learn regardless and figure out why things work the way they do; rather, it's better access to said information that helps.
we should be fixing the motivation problem rather than the tools which we've been trying to do for decades. teach people the framework for solving problems and critical thinking. kids nowadays have way more things demanding their attention and it's been on a decline far before this AI wave (cough social media). we literally sound like old farts lol.
lerp-io 14 hours ago [-]
don't you gain skills with ai? it teaches you how to do stuff, you ask it questions, etc., like a tutor?
array_key_first 1 hours ago [-]
Theoretically yes, in practice it's like 0.001% of people who are able to use it this way.
We saw the same thing with the Internet. Infinite knowledge, all at your fingertips. So surely everyone becomes doctors and everyone can program right? No more university needed?
Turns out no. The bottleneck has never been information, the bottleneck is you.
curt15 10 hours ago [-]
Perhaps if you're the highly motivated type who would excel even without ai. But it's far too easy to become like maths students who learn only how to use a calculator instead of how to actually add fractions.
Schmerika 13 hours ago [-]
Do you think that's how most students are using it? Teachers would quickly disabuse you of that notion [0]:
> In study hall, I watched a kid use Snapchat to take pictures of his computer screen. He was working on IXL skills. His Snap A.I. friend sent an immediate reply. He then clicked the answer on his screen. The next question popped up, he took a picture and got an answer. He swiftly went through the whole session this way. His right hand held the phone, he tapped the camera button, glanced at the reply, and his left hand entered the answers on his laptop. He didn’t know I was watching, but I saw the gold medal of 100 percent mastery bloom on his screen. I told the teacher who assigned the IXL. She didn’t realize Snapchat had an A.I. that would do her homework. It can answer all the questions.
... Now, can you use AI to learn things? Sure. But what the article is talking about it is critical thinking:
> Adults using AI mostly just sound generic. But for a child who never formed independent reasoning, "generic" is a major identity problem. The model’s reasoning doesn’t compete with the child’s reasoning but becomes the child’s reasoning. For children still building out the cognitive skills for evaluating the world, the effect will not be temporary but have a foundation impact on their thinking.
Americans' performance with critical thinking is already mixed at best. A new generation with even lower independent-thinking ability, combined with AI painstakingly engineered to suffer from severe bias, is a powerful recipe for (even more) horrors beyond human comprehension. Paid for by our tax dollars.
Learning and suffering seem to be linked to some degree. It takes a lot of up front pain to get to a point where you can become an effective autodidact. You have to develop an appreciation for the game. AI can accelerate aspects of this, but it often alleviates too much suffering for a novice to develop the fundamentals.
If you go into AI as a way to get your school work done more quickly, you won't experience the friction you need to. AI should be used to make the work longer and deeper. More engaging and adapted to the individual. Not quicker and easier.
The problem is that AI is the most effective dual use technology we have ever created with regard to education and cheating at education. The monkey brain doesn't like to suffer, so on average I think we find most people tend toward the shittier use case.
simianwords 11 hours ago [-]
One could have said the same things when calculators were invented. Is routine suffering by adding numbers by hand required? Or is it more important to delegate simpler things and focus on complex problems.
Ekaros 11 hours ago [-]
Certainly practising mental arithmetic helps your capability for mental arithmetic. Doing addition by hand probably also improves mental arithmetic.
Then again, we are not that far off from the time when your AI glasses will read the price label. And then automatically add up the total for you. Hopefully you then ask, each time, what that total means in the context of your finances...
modriano 8 hours ago [-]
I learned math long after the advent of the calculator and went on to study math-heavy fields (physics, mechanical engineering, and data science).
I wasn't ever able to really develop deep intuition about/understanding of a calculation until I did it by hand once or twice. I often just plugged in new models and algos just to see if performance was above a threshold, but when I wanted to productionize a new winner, I'd have to run through the algo by hand for a few steps to understand and tune it. And through doing it by hand, the complex became the simple.
functional_dev 7 hours ago [-]
I am no expert, but I heard brains learn way better when we actually use our hands to write stuff out.
like the logic sticks deeper in your head that way... using a computer is fast, but sometimes it just goes in one ear and out the other
bob1029 9 hours ago [-]
The point is not to make the suffering permanent. It is a temporary phase. A lesson. Once you complete it you can go on to do the automated thing without as much concern.
simianwords 8 hours ago [-]
Yes agreed
adi_kurian 9 hours ago [-]
Lots of people gaining skills with AI too.
ramesh31 8 hours ago [-]
The skills I've lost are no longer valuable. No one with a brain will ever spend another minute writing HTML/CSS by hand anymore. But I spent a decade of my career doing that all day long every day. It's time to move on and up. The horizon for software is limitless now that we've been freed of the drudgery.
RcouF1uZ4gsC 10 hours ago [-]
Books also cause loss of skills.
One effect of widespread books is we don’t have poets like Homer. We don’t develop the memorization skills like they did in the past.
And that’s ok.
We can use the bandwidth for other stuff.
plastic-enjoyer 9 hours ago [-]
I swear to god, people heard the story about how Socrates was against books and regurgitate it as an argument against any critical view of AI usage. If this is the level of reasoning people have, nothing will be lost when cognitive skills decline through AI usage anyway.
semilin 8 hours ago [-]
There's an irony to people repeating this claim without even having read the Phaedrus. If they had, they'd understand that the concern with writing was that it was not able to respond as a human in dialogue. One could think that LLMs are an improvement in this regard, but for the fact that LLMs are actually autonomous sophists.
Socrates would have been against LLMs, and for good reason. Writing isn't unequivocally bad, but it is simply not a substitution for real dialogue and thought. We use books as a means by which to have more things to discuss with humans. LLMs can supplant the desire to even have dialogue with others, which is perhaps the more insidious thing.
ramesh31 8 hours ago [-]
>I swear to god, people heard the story about how Socrates was against books and reurgitate this as argument against any critical view on AI usage.
It's something we all learn in freshman English class. But it comes up over and over again because the general idea is true. You have to temper the unbridled optimism that comes with any new technology with contemplation of what may be lost. Otherwise we're spinning in circles.
begemotz 9 hours ago [-]
claude told me to say it.
qsera 10 hours ago [-]
>We can use the bandwidth for other stuff.
Like fighting on social media...
Seriously, what was the other stuff that we used our bandwidth for when the books caused the loss of skills.
We have lost Homer, but what have we gained? A million social-media warriors?
Ekaros 6 hours ago [-]
I think some of them might even value social-media trolls in some ways... At least there is a lot of honest dishonest work there...
Now the future is to replace those with machines. No more human input. Just endless amounts of machines fighting with other machines...
coffeefirst 9 hours ago [-]
But they don’t. Seriously, do you read?
Books encode skill.
I’m not a hater. LLMs on search is the best research tool I’ve ever used because it’s read everything and can find minutia buried in places it would take me a long time to find.
But there’s a huge difference between using it to assist focus, or as a study aide, and offloading the whole act of thinking itself.
casey2 5 hours ago [-]
Research papers are already summarized: at the top there is a section called "Abstract" which contains the summary. Usually the first and last sentences are the relevant abstraction layer for most people.
When automation comes along it gives humans the time to actually think about what they are doing and whether it even makes sense. Is your goal to motivate some research? That likely requires a conversation with one or more authors. Otherwise it's an exercise in narcissism. I am the elite who will bestow this sacred knowledge unto the commoners and cross-disciplinary researchers who cannae understand it without me.
With automation more people are unfit, but some people are better on every metric that exists. What's important is that everybody has the freedom, if they wish, to achieve those top metrics. Insofar as those metrics don't involve direct control over resources, since those will always be gatekept and require the approval of others.
To the extent that people still need to be able to critically assess what AI delivers to achieve their goals, they will still pick up those skills or fail. They will then need to either invest the time to learn, or they'll fail to find employment, or fail in other aspects of life.
When we see people lamenting lost skills like this, it is usually a result of them overestimating the continued necessity of certain skills in the face of new technology.
You won't suddenly have a generation of software developers (for example) who don't know the necessary skills to do their work, but you may get a generation of software developers who don't have the skills you think are necessary to do their work.
This sounds like a great future! Nothing worrying here at all.
Right now AI is in its mainframe era (thin clients connecting to expensive compute somewhere else that you don't control), but I firmly believe that the AI version of the personal computing revolution is on the horizon. Democratized computing probably seemed pretty out of reach when all we had were mainframes, but in retrospect the progression from mainframe to personal computer to supercomputer in your pocket seems ordinary and almost expected.
I have no doubt that the technology needed to democratized personal AI will also advance in similar ways, and we will have no shortage of next generation's "640K ought to be enough for anybody."
Nintendo? Walled garden. Playstation? Walled garden. Mac/iOS? Walled garden? Clouds? Obviously walled gardens, the higher the walls the more advanced the services. SaaS? Walled gardens. Social media? Walled gardens.
These are all things happening today via AI, so really, this is an argument thats like, entropy. There's always way more ways in which things fall apart than they build to stability.
Being optimistic seems more like religiousity than any real accounting of the current system you're operating in (unless you're a billionaire).
The actual lazy argument here is pretending that AI is like the fucking cotton gin or something. We all know and understand it's not. There's people getting whole ass degrees using AI for everything.
Or, the people who evaluate them will be suffering from the exact same self-inflicted cognitive limitations, and promote them, or at least not fire them.
The quality of this firm's product suffers perhaps, but it doesn't matter. The consumer will again, in all likelihood, be limited in the same way.
Everyone's happy.
When you see an error like, "error: expected ‘,’ or ‘;’ before ‘include’" you know what happened and where to look because you've seen it a hundred times before.
AI takes that away. It's not inherently bad, it's great that it can solve those sort of things for you. However, the second order effects are terrible. You end up never developing that experience. Is this simply evolution of the craft? Is that experience no longer necessary?
I could be wrong, but I believe that experience is necessary and losing it will be a net negative. Furthermore, the reduction of experience will increase dependency on these tools and the companies that provide them.
AI is a tool to help you see the forest from the trees.
You reading articles the old fashion way can be akin to seeing the trees but not seeing the actual forest.
Young minds tend to learn. How they do it, the old fashion way, the new AI way, they will learn.
Many blank out in school on different subjects and the cognitive overload byproduct follows them all their life making them wary of new things.
And finally, maybe you, personally, are reaching a limit in your comprehension of the modern world, and you show it by fighting the wrong battle with the wrong arguments.
Or maybe you are onto something.
I don't know where you get the 'cognitive overload' term from (it's not in the article). But it general; cognitive effort is what drives our brains to learn in the first place.
As an organ in an organism, the brain is very adverse to using energy, because the organism might need it later to run or fight some danger. Learning costs energy, and the brain rather doesn't if it doesn't need to. The only reason that the brain will ever learn anything, is if you repeatedly expose it to 'cognitive effort', because in this case the effort of learning will save energy in the long run.
If you use AI so that most things don't require cognitive effort, your brain will not use those learned neural pathways, and they will atrophy over time.
The only thing that the brain learns from using AI is that the most efficient way of doing anything is having the AI do it for you.
One, if cognitive overloading is not in the article, then it's good, it means I actually put some thought in the responding effort.
Two, AI expands possibilities for those that want that, and offer shortcuts for those that want that. No different from any other learning process: you could actually learn something or you could just do it. It makes sense, not all humans seek learning, but most humans look for results and answers.
Mastering a certain form of internet search does not mean you are learning, it means you are mastering, to some degree, a tool to search. Shortcuts are OK here, per me. Learning comes when you actually go beyond tool-use skills.
To be more explicit, the time learning a tool is not time used learning, it's time used preparing for learning. AI cuts that, if you want, and you get straight to actually learning something instead of tripping over tools and infrastructure, becoming too overloaded to be able to see the forest from the trees.
You mean the state of affairs humans have enjoyed for the last four millennia? The status quo that led to all of the technology you seem to think we now can't live without?
> Many blank out in school on different subjects and the cognitive overload byproduct follows them all their life making them wary of new things.
They should try putting their phones down before we double down on solving tech problems with more tech.
https://news.ycombinator.com/item?id=47456153
There was/is constant progress that constantly demanded/demands more tech. Without more tech, the progress would have/would stalled.
To talk about "progress" as if it were some sort of simple, objective thing is misleading.
The real issue isn't about progress. It's about what sort of lives we want to be living.
It's a stupid example, but no tech would be needed. There's loads of problems in the worlds (wars, disease, famine, etc) that can be massively improved (progress) without any change in tech.
The problem is identifying which skills those actually are. Without an true answer, I'm going to be prudent, and assume that things involving learning and critical thinking are some of the important ones.
dark app patterns, gambling, etc. like seriously, i know we all want to hate on llms or whatever for stealing our jobs or making us stupider, but has this been any different from the past in that regard?
whether it be radio, tv, computers, the internet, video games, etc. all of these supposedly did something "to the children", but i agree with what another comment said: kids will figure out a way to learn and utilize the tools given to them.
did me "offloading" my thinking to google or some computer instead of cracking open a library book or doing calculations by hand damage my thinking at the time? no... because a sufficiently motivated person will learn regardless and figure out why things work the way they do; if anything, better access to that information helps.
we should be fixing the motivation problem rather than the tools which we've been trying to do for decades. teach people the framework for solving problems and critical thinking. kids nowadays have way more things demanding their attention and it's been on a decline far before this AI wave (cough social media). we literally sound like old farts lol.
We saw the same thing with the Internet. Infinite knowledge, all at your fingertips. So surely everyone becomes doctors and everyone can program right? No more university needed?
Turns out no. The bottleneck has never been information, the bottleneck is you.
> In study hall, I watched a kid use Snapchat to take pictures of his computer screen. He was working on IXL skills. His Snap A.I. friend sent an immediate reply. He then clicked the answer on his screen. The next question popped up, he took a picture and got an answer. He swiftly went through the whole session this way. His right hand held the phone, he tapped the camera button, glanced at the reply, and his left hand entered the answers on his laptop. He didn’t know I was watching, but I saw the gold medal of 100 percent mastery bloom on his screen. I told the teacher who assigned the IXL. She didn’t realize Snapchat had an A.I. that would do her homework. It can answer all the questions.
... Now, can you use AI to learn things? Sure. But what the article is talking about is critical thinking:
> Adults using AI mostly just sound generic. But for a child who never formed independent reasoning, "generic" is a major identity problem. The model’s reasoning doesn’t compete with the child’s reasoning but becomes the child’s reasoning. For children still building out the cognitive skills for evaluating the world, the effect will not be temporary but have a foundation impact on their thinking.
Americans' performance on critical thinking is already mixed at best. A new generation with even less independent thinking ability, combined with AI painstakingly engineered to suffer from severe bias, is a powerful recipe for (even more) horrors beyond human comprehension. Paid for by our tax dollars.
0 - https://www.nytimes.com/2026/02/26/learning/teachers-on-how-...
If you go into AI as a way to get your school work done more quickly, you won't experience the friction you need to. AI should be used to make the work longer and deeper. More engaging and adapted to the individual. Not quicker and easier.
The problem is that AI is the most effective dual use technology we have ever created with regard to education and cheating at education. The monkey brain doesn't like to suffer, so on average I think we find most people tend toward the shittier use case.
Then again, we are not that far off from the time when your AI glasses will read the price label and then automatically add up the total for you. Hopefully you will then each time ask what that total means in the context of your finances...
I wasn't ever able to really develop deep intuition about/understanding of a calculation until I did it by hand once or twice. I often just plugged in new models and algos just to see if performance was above a threshold, but when I wanted to productionize a new winner, I'd have to run through the algo by hand for a few steps to understand and tune it. And through doing it by hand, the complex became the simple.
like the logic sticks deeper in your head that way... using computer is fast, but sometimes it just goes in one ear and out the other
One effect of widespread books is we don’t have poets like Homer. We don’t develop the memorization skills like they did in the past.
And that’s ok.
We can use the bandwidth for other stuff.
Socrates would have been against LLMs, and for good reason. Writing isn't unequivocally bad, but it is simply not a substitution for real dialogue and thought. We use books as a means by which to have more things to discuss with humans. LLMs can supplant the desire to even have dialogue with others, which is perhaps the more insidious thing.
It's something we all learn in freshman english class. But it comes up over and over again because the general idea is true. You have to temper the unbridled optimism that comes with any new technology by contemplation of what may be lost. Otherwise we're spinning in circles.
Like fighting on social media...
Seriously, what was the other stuff we used our bandwidth for when books caused the loss of those skills?
We have lost Homer, but what have we gained? A million social-media warriors?
Now the future is to replace those with machines. No more human input. Just an endless number of machines fighting with other machines...
Books encode skill.
I’m not a hater. LLM-powered search is the best research tool I’ve ever used, because it’s read everything and can find minutiae buried in places that would take me a long time to reach.
But there’s a huge difference between using it to assist focus, or as a study aide, and offloading the whole act of thinking itself.
When automation comes along it gives humans the time to actually think about what they are doing and whether it even makes sense. Is your goal to motivate some research? That likely requires a conversation with one or more authors. Otherwise it's an exercise in narcissism. I am the elite who will bestow this sacred knowledge unto the commoners and cross-disciplinary researchers who cannae understand it without me.
With automation, more people are unfit, but some people are better on every metric that exists. What's important is that everybody has the freedom, if they wish, to achieve those top metrics. That is, insofar as those metrics don't involve direct control over resources, since those will always be gatekept and require the approval of others.