
Let’s face it: things are changing fast. At the time of writing this article, AI is relatively new but growing rapidly. Large Language Models, most notably ChatGPT, have been widely accepted and adopted by 52% of U.S. adults. Many schools and virtual education resources have started integrating AI into their education plans. We are even beginning to see AI play the role of the educator. Alpha School, for example, has fully replaced human educators with AI.
An executive order signed by Donald Trump calls for the integration of AI into public K-12 education nationwide:
“[AI] is rapidly transforming the modern world, driving innovation across industries, enhancing productivity, and reshaping the way we live and work,” the executive order reads. “To ensure the United States remains a global leader in this technological revolution, we must provide our nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology.”
What may have sounded like science fiction a few short years ago has now become our daily reality. While it may be easy to ignore today, this is the beginning of AI integration into the education system and the tools we use as home educators. So, do we adapt before we’re left behind?
First, I think we would be remiss if we didn’t consider the big picture. This may read as woo-woo, but hear me out.
AI inhibits the human mind’s natural state of curiosity. The human mind is the original processor. Outsourcing our internal intelligence and processes to an external, “all-knowing” framework limits not only our minds, but the essential spirit of who we are as human beings.
– Acorn to Oak
Now that we’ve gotten that out of the way, let’s consider the objective risks of AI as they exist today.
- AI is often wrong. Researchers found that ChatGPT agreed with false statements up to one-quarter of the time. AI is even known to hallucinate. This is when AI “provide(s) users with fabricated data that appears authentic. These inaccuracies are so common that they’ve earned their own moniker; we refer to them as ‘hallucinations’” (Generative AI Working Group, n.d.).
- AI is subject to bias. AI is trained by humans, and therefore adopts human flaws. LLMs often have political, racial, and social biases baked into their responses. The problem is, AI bias is not always as obvious as the bias of a known source. One may easily take on bias without realizing it.
- AI has a severe impact on our environment. Put simply, the computational power required to operate generative AI places substantial demands on resources such as electricity and water, which leads to an increase in emissions and pressure on the power grid. Data centers cause disruption in the communities where they exist, and are often placed in disadvantaged communities.
How much energy does ChatGPT use per prompt?
Each query to ChatGPT consumes approximately 0.001 to 0.01 kilowatt-hours (kWh) of electricity. This is roughly 10 to 15 times the energy consumed by a standard Google search.
Data centers need cooling systems, and many of these systems rely on water. It’s estimated that each ChatGPT prompt consumes around 2 to 5 liters of water due to the cooling requirements of the servers processing the queries.
Now, let’s get specific about AI use in homeschooling. There has been much discussion in the homeschool community about using AI for lesson planning, teaching children to use AI for self-directed learning, and adopting AI teaching tools.
Many claim that the world is moving rapidly toward AI, so we would be putting our children at a disadvantage by omitting these tools from our homeschools.
“If your child isn’t using AI in their learning, they’re falling behind. The future isn’t waiting — and neither should they.” @littlefenders
I cast no judgment on those who utilize these tools in their learning. However, I think this is an important discussion to have, and we should form our own fully developed, informed opinions before jumping into the world of AI. This is my perspective:
Depending on a language model to do your internal processing for you, before you have taken the steps to think for yourself, can lead to a habit with detrimental consequences.
– Acorn to Oak
On the surface, this claim rings a familiar bell. We all remember a math teacher who told the class, “What will you do when you don’t have a calculator? Will you carry a calculator around in your pocket?” Suddenly, we’ve been catapulted into a world where we will always have access to “shortcuts.” However, utilizing generative AI goes much deeper than using a calculator for basic equations, and it is important to know the difference. It comes down to how our brains, a.k.a. the original computers, compute and learn about the world around us.
How do we learn, and how does learning change us?
Neuroplasticity refers to the brain’s ability to change and become stronger through new experiences. Each new concept or skill creates neural pathways, which become stronger with repetition and practice.
Learning can be frustrating, but frustration is productive. Learning can feel uncomfortable because the process of learning is actively rewiring our brains. If we limit our exposure to difficulty, our minds cannot develop to their full capacity.
When learners grapple with difficult tasks, they are more likely to engage in deep cognitive processes, which result in stronger, more durable memories and skills (Bjork, 1994).
It is only through continual exposure and persistence through hard tasks that our minds can grow. In fitness, this is called progressive overload: if we aren’t continually practicing and increasing the weight, we will fail to build muscle. The same is true for our minds.
If AI is doing a portion of the processing for us, are we still learning?
Some scientists argue that AI tools limit, rather than enhance, understanding. Trusting AI tools can leave us vulnerable to what has been coined as illusions of understanding, “a class of metacognitive errors that arise from holding incorrect beliefs about the nature of one’s understanding.”
When we off-load our mental labor to AI tools, we can fall into the trap of feeling like we are learning when we are not. We then feel that we understand concepts that we have not yet processed.
Using LLMs can quickly lead us into the habit of off-loading our objectives, asking the LLM to redistribute them to us in bite-size, easy-to-manage pieces. AI can undermine our conclusions, because we have missed out on the critical tasks and experiences that would have allowed us to draw those very conclusions. Our minds did not go through the entire cognitive process, so we may not make the same neural connections we would have made without AI assistance. We cannot say what the long-term effect of this habit on adults and children will be, since these tools are so new and are changing rapidly every day.
Now, for some optimism.
If we don’t have hope for a better world, who will?
Homeschoolers have the privilege to resist the current state of affairs and forge their own path. The initial reaction may be, “I can’t make a difference,” or, “The world is changing and I need to prepare my child to live in the modern world.” Yes, this may be the way of the future, but it doesn’t have to be our future. Ultimately, I do not use AI because I enjoy the process of learning. I feel that using AI robs me of many parts of the learning process, so I choose not to use it. I hope that I can instill this love of learning in my children and in the children I educate, and that’s what feels right to me.