I encourage everyone thinking about commenting to read the article first.
When I finally read it, I found it remarkably balanced. It cites positives and negatives, all of which agree with my experience.
> Con: AI poses a grave threat to students' cognitive development
> When kids use generative AI that tells them what the answer is … they are not thinking for themselves. They're not learning to parse truth from fiction.
None of this is controversial. It happens without AI, too, with kids blindly copying what the teacher tells them. Impossible to disagree, though.
> Con: AI poses serious threats to social and emotional development
Yep. Just like non-AI use of social media.
> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn
No sh*t. This has probably been a recommendation for decades. How could you argue against it, though?
> AI designed for use by children and teens should be less sycophantic and more "antagonistic," pushing back against preconceived notions and challenging users to reflect and evaluate.
Genius. I love this idea.
===
ETA:
I believe that explicitly teaching students how to use AI in their learning process, and that a beautiful paper straight from AI is not something that will help them later, is another important ingredient. Right now we are in a time of transition, and even students who want to be successful are uncertain of what academic success will look like in 5 years, what skills will be valuable, etc.
> I believe that explicitly teaching students how to use AI in their learning process, and that a beautiful paper straight from AI is not something that will help them later, is another important ingredient.
IMNSHO as an instructor, you believe correctly. I tell my students how and why to use LLMs in their learning journey. It's a massively powerful learning accelerator when used properly.
Curricula have to be modified significantly for this to work.
I also tell them, without mincing words, how fucked they will be if they use it incorrectly. :)
>> AI designed for use by children and teens should be less sycophantic and more "antagonistic"
> Genius. I love this idea.
I don't think it would really work with current tech. Sycophancy lets LLMs be wrong about a lot of small things without the user noticing. It also lets them be useful in the hands of an expert: they don't question the premise, they just try their best to build on it.
If you instruct them to question ideas, they just become annoying and obstinate. So while it would be a great way to reduce the students' reliance on LLMs...
> pushing back against preconceived notions and challenging users to reflect and evaluate
Who decides what needs to be "pushed back"? Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately". Machine learning automatically extracts patterns from data, so if enough texts contain a "preconceived notion" that you don't like, the model will learn it anyway. You'll have to manually clean the data (extremely hard work, and lowkey censorship) or do extensive "post-training".
It's not clear what it means to "challenge users to reflect and evaluate". Making the model analyze different points of view and add a "but you should think for yourself!" after each answer won't work because everyone will just skip this last part and be mildly annoyed. It's obvious that I should think for myself, but here's why I'm asking the LLM: I _don't_ want to think for myself right now, or I want to kickstart my thinking. Either way, I need some useful input from the LLM.
If the model refuses to answer and always tells me to reflect, I'll just go back to Google search and not use this model at all. In this case someone just wasted money on training the model.
> Millions of teachers make these kinds of decisions every minute of every school day.
True, but teachers don't train LLMs. Good LLMs can only be trained by massive corporations, so training an "LLM for schools" must be centralized. This would of course be supervised by the government, so the government ends up deciding what needs pushback and what kind of pushback. That alone is not easy, because someone will have to enumerate the things that need pushback, provide examples of such "bad things", provide "correct" alternatives, and so on. This then feeds into data curation and so on.
Teachers are also "local". The resulting LLM will have to be approved nation-wide, which is a whole can of worms. Or do we need multiple LLMs of this kind? How are they going to differ from each other?
Moreover, people will hate this because they'll be aware of it. There will be a government-approved sanitized "LLM for schools" that exhibits particular "correct" and "approved" behavior. Everyone will understand that "pushing back" is one of the purposes of the LLM and that it was made specifically for (indoctrination of) children. What is this, "1984" or whatever other dystopian novel?
Many of the things that may "need" pushback are currently controversial. Can a man be pregnant? "Did the government just explicitly allow my CHILD to talk to this LLM that says such vile things?!" (Whatever the "things" may actually be.) I guarantee parents from all political backgrounds are going to be extremely mad.
> Good LLMs can only be trained by massive corporations, so training an "LLM for schools" must be centralized.
Then don't. It's easy enough to pay a teacher a salary.
> I believe that explicitly teaching students how to use AI in their learning process
I'm a bit nervous about that one.
I very firmly believe that learning well from AI is a skill that can and should be learned, and can be taught.
What's an open question for me is whether kids can learn that skill early in their education.
It seems likely to me that you need a strong baseline of understanding in a whole array of areas - what "truth" means, what primary sources are, extremely strong communication and text interpretation skills - before you can usefully dig into the subtleties of effectively using LLMs to help yourself learn.
Can kids be leveled up to that point? I honestly don't know.
>>> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn
>> How could you argue against it, though?
Because large-scale societies do use and deploy rote training, with grading and uniformity, to sift and sort for different kinds of talent (classical music, competitive sports, some maths) at a societal scale. Further, training individuals to play a routine specialized role is essential for large-scale industrial and government growth.
Individualist worldviews are shocked and dismayed, repeatedly, because this does not diminish; it has grown. All of the major economies of the modern world do this with students on a large scale. Theorists and critics would be foolish to ignore this, or to spin wishful-thinking scenarios opposed to it. My thesis here is that all large-scale societies will continue on this road, and in fact it is part of "competitiveness" from industrial and some political points of view.
The balance point between individual development and role-based training will have to evolve; indeed it will evolve. But with what extremes? And among whom?
I've listened to a handful of podcasts with education academics and professionals talking about AI. They invariably come across as totally lost, like a hen inviting a fox in to help watch the eggs.
It's perhaps to be expected, as these education people are usually non-technical. But it's definitely concerning that (once again) a lack of technical and media literacy among these education types will lead to them letting (overall) unhelpful tech swarm the system.
> But it's definitely concerning that (once again) a lack of technical and media literacy among these education types will lead to them letting (overall) unhelpful tech swarm the system.
I hate this kind of framing because it puts the burden on the teachers when the folks we should be scrutinizing are the administrators and other stakeholders responsible for introducing it.
AI companies sell this tech to administrators, who then tell their teachers to adopt it in the classroom. A ton of them are probably getting their orders from a supervisor to use AI in class. But it's so easy to condescend and ignore the conversations that took place among decision-makers long before a teacher introduced it to the classroom.
It's like being angry at doctors for how terrible the insurance system is in the US.
>It's perhaps to be expected, as these education people are usually non-technical.
I don't think that's totally correct. I think it's because AI has come at everyone, equally, all at once. Educational academics didn't have years to study this, because it was released on our kids at the same time as on everyone else.
I have two kids (a sophomore in HS and a middle schooler), and in both their individual studies and when I'm helping them with homework we use AI pretty extensively now.
The one-off stuff is mostly taking a picture of a math problem and asking it to walk step by step through the process. In particular this has been helpful to me, as the processes and techniques have changed.
It's been useful in foreign languages as well to rapidly check work, and make corrections.
On the generative side it's fantastic for things like "give me 3 more math problems similar to this one", or for generating worksheets and study guides.
As far as technological adoption goes, it's 100%: every kid knows what ChatGPT is (maybe even more than "AI" in general). There are some very mixed feelings from the kids, though: my middle schooler was pretty creeped out by the ChatGPT voice interface, for example.
Doesn't matter. Every time some maniac invents something, we all need to scramble to adopt it. This is what _progress_ is. If there's a new technology, we don't think about the consequences. We all just adopt it and use it so thoroughly that we cannot imagine living without it.
Calm down. What actually happens is that there is a reaction to new technology, and then, once it's been used, a counter-reaction that takes into account what works and what doesn't.
Is there a previous decade you'd prefer to return to for quality of life? Why?
Just before terminally online society.
A1 should not be in every classroom.
Furthermore, any books or teaching that do not feature medium rare as the correct way to cook a steak should be banned (and burned to well done).
The big issue I’ve faced and seen others face is LLM-induced skill atrophy.
For studying, LLMs feel like using a robot to lift weights for you at the gym.
——
If people used to get cardio as a side effect of having to walk everywhere, and we were forced to think as a side effect of having to actually do the homework, then are LLMs ushering in an era of cognitive ill health?
For what it’s worth, I spend quite a bit of effort to understand how people are using LLMs, especially non-tech people.
What’s the value of knowing 7283828*7282828 when you have a computer next to you? What’s the value of knowing something when an AI can do it in seconds? Maybe we need to realize that most knowledge is cheap now and deal with it.
Bloom's 2 sigma problem is well known and proven in education.
AI is the first thing that can positively personalize education and instruction and provide support to instructors.
The authors seem to lack the technical literacy to know that you can train and focus on textbooks alone, instead of their explorations using general models and the pitfalls those have. Not knowing this key difference affects some of the points being made.
Having a take on technology requires some semblance of digital and technical literacy in the paper to acknowledge or navigate it; otherwise it becomes a potential blind spot.
The report takes legitimate concerns and, ironically, explores them in average ways, much like an LLM returns average text for vague or incomplete questions.
There will, however, be a gigantic gulf between kids who use AI to learn vs those who use AI to aid learning.
In the rosiest view, the rich give their children private tutors (and always have), and now the poor can give their children private tutors too, in the form of AIs. More realistically, what the poor get is something which looks superficially like a private tutor, yet instead of accelerating and deepening learning, it is one that allows the child to skip understanding entirely. Which, from a cynical point of view, suits the rich just fine...
yeah, but not the way you are thinking
you think the rich are going to abolish a traditional education for their kids and dump them in front of a prompt text box for 8 years
that'll just be for the poor and (formerly) middle-class kids
Objective review of Alpha school in Austin:
https://www.astralcodexten.com/p/your-review-alpha-school
This is absolutely not an objective review. The person who wrote it is a very particular type of person, to whom Alpha School strongly appeals. I'm not saying anything in particular is wrong with the review, but calling it unbiased is incorrect.
Calling the Alpha school "AI" or even "AI to aid learning" is a massive stretch. I've read that article and nothing in there says AI to me. Data collection and on-demand computer-based instruction, sure.
I don't disagree with your premise, but I don't think that article backs it up at all.
Imagine a tutor that stays with you as long as you need on every concept of math, instead of the class moving on without you and the gaps compounding over years.
Rather than 1 teacher lecturing 30 students, 1 teacher can scale tutor-like attention to 30 students, which better addresses Bloom's 2 sigma problem: Bloom found that students tutored full time at a 1:2 ratio reliably ended up around the 98th percentile of students.
LLMs are capable of delivering this outright, or of providing serious inroads toward it, for those capable and willing to do the work beyond going through the motions.
https://en.wikipedia.org/wiki/Bloom's_2_sigma_problem (1984)