Google Search was introduced in 1998 and rapidly became the world’s most popular software. Today Search is visited an average of 8.5 billion times a day. That’s 100,000 times per second. (If you don’t believe me, just Google it.)
Today, it is used by almost everyone. School kids start using it in Kindergarten or First Grade, and unless innovative technologies supplant it, they will keep using it throughout their lives.
But it was not always so: Search got off to a rocky start as numerous teachers and school board members, fearing kids would use it to plagiarize in their essays, book reports, and homework assignments, banned it and gave failing grades to students caught using it.
But resistance proved futile: They might as well have tried preventing a tsunami from hitting land by punching it.
Repeating History
I share with you this trivial tidbit of computer history because history is being repeated today by educators who have started to ban ChatGPT and similar AI platforms for the same reasons.
Once again, there are well-founded fears that some kids may copy without attributing, and once again, there are educators moving to ban a world-changing technology.
In the US, school boards in New York City, Los Angeles, Washington, DC, Montgomery County, Ala., Seattle, and Bellevue, Wash., have all banned students from using GPT AI software.
Elsewhere in the world, entire countries have banned it as well, including China, Russia, Cuba, and others known to suppress free thinking and speech. These are not our usual bedfellows in public education practices.
Disruptive Pack
The issue is far greater than ChatGPT, which is enjoying first-mover advantages but is already being pursued by at least six other offerings, including Google Bard (my current favorite), Salesforce Einstein, Microsoft Bing, Jasper, Copilot, and more.
There is not yet a collective word for these products, which all use AI to perform similar functions, so for this report I am calling them Personal AI. That may not be precisely accurate, but it seems like a decent placeholder if you ask me.
The current Personal AI mass adoption rate is unprecedented in digital history. Observers seem to either love it or hate it, but they are almost unanimous in seeing it as a lasting game changer.
Simply put, Personal AI platforms are all built on Large Language Models (LLMs) trained on massive text datasets, which enable users to generate text, pictures, graphics, and translations; write creatively; and converse in an uncannily humanlike fashion.
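For readers who wonder what "trained on massive text datasets" actually means, here is a deliberately tiny sketch of the core idea: a model that learns from text which words tend to follow which, then generates new text by repeatedly predicting a next word. Real LLMs are vastly more sophisticated (billions of parameters, not word counts), so treat this as an illustration of the principle, not of how ChatGPT works internally.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record, for each word, the words that follow it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8):
    """Generate text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # no known continuation; stop early
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram(corpus)
print(generate(model, "the"))
```

Every word the toy model emits is one it has actually seen follow the previous word in its training text; scale that idea up enormously and you get software that can hold a humanlike conversation.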
My view is that the jargon will ultimately fade as mainstream adoption grows and the conversation shifts from the enabling technology to what people can do with it.
What is more important from my perspective is that this new technological generation is likely to permanently change the relationship between people and technology, and that will be most clearly reflected in the schoolchildren of today.
Just as sociologists once characterized Millennials (perhaps incorrectly) as digital natives, today's school-age children are likely to be characterized by their relationships with Personal AI.
If some students are prevented by Luddite educational decision-makers from early access to it, then those students might be left behind as their peers pull ahead in schools, colleges, and careers.
Teachers Staying Back
My wife Paula Israel worked for several years in Northern California schools that are generally regarded as among the most progressive in the US. She is the daughter, mother, and friend of educators and remains passionate about quality public school education.
At my request, Paula chatted briefly with just three of her former colleagues about ChatGPT and how they regard students using it.
In summary, two have spent only a few minutes with it and one has spent no time using it, but all three have taken steps to prevent students from using it because they suspect it would be used to cheat.
I find this disturbing for two reasons:
(1) Many Bay Area students will likely become job candidates at tech companies, where acceptance will require fluency in Personal AI.
(2) If this is a sample of teacher thinking in one of the most progressive school districts in the US, I shudder about educator attitudes in less forward-thinking school districts.
Not one of Paula’s associates seemed aware that ChatGPT and similar software are unprecedented as information-gathering tools, or that they will remain important for many years until eventually being eclipsed by even more powerful AI tools of the future.
One teacher is considering no longer assigning essays as homework, where students would use computers and the internet. Instead, she would have them write in the classroom using pen and paper.
The only snags for her are that handwriting is harder to read than typed text and that computerized grading would be impossible.
In short, to prevent cheating with new technology, she would have to fall back on old technology, not just for her students but for herself as well.
This disturbs me.
The schools where this is happening are about an hour's drive from where ChatGPT was developed. I shudder to picture how Personal AI is being dealt with in Wyoming, Alabama, and my new home state of Florida.
Catching Cheaters
I was pleased that at least one of Paula's three colleagues was aware of Turnitin and Wordtune, two of at least five AI programs that use pattern recognition and natural language processing to detect plagiarized content quickly.
I think using GPT AI software to detect plagiarism is a wiser and more effective approach than banning the use of something that most students will be expected to use later in life.
And there are better ways to encourage appropriate practices than classroom essays or punching tsunamis.
What to Teach?
If schools want students to succeed in the modern world, there are many ways for teachers to prepare them by providing lessons about AI, ethics, opportunities, and challenges.
Once again, Bard and I collaborated on ideas, and this is what we came up with:
1. Teach best practices. Instruct pupils about Generative AI and how it can be used to research, brainstorm and write.
2. Warn of consequences. Teach students how AI programs can easily detect improper copying and instruct them on penalties and damages to reputation.
3. Reward Originality. Make assignments that require students to think critically and creatively and grade them accordingly.
For example:
a. What would your life have been like if you had been with the Pilgrims when they landed at Plymouth?
b. What would you do if you had to live a full day without an internet connection?
c. How could AI slow climate change?
4. Attribute what you can. Teach how citing sources increases credibility and professionalism. Teach how to paraphrase accurately.
5. Be Journalistic. Whether you are a student, an employee, or a byline journalist, your writing should represent your best thinking. Quote others to support your thinking and research.
6. Praise Ethics. Punish Violators. Show praise and encouragement to students who demonstrate originality and ethical behavior, and hold accountable those who do not.
Conclusion
If you have children in schools today, be proactive: contact their teachers, school boards, and administrators. Do everything you can to ensure that their teachers are not technological Luddites.
AI is here to stay. There are many causes for concern, but it is unquestionably here to stay.