I’ve been a science fiction fan for as long as I can remember. Whether watching Star Trek reruns as a child or spending hours engrossed in the works of Asimov, Bradbury, Butler and others, escaping to universes populated by aliens and robots is one of my greatest joys.
When I finally had the opportunity to write my own books, it's no wonder I chose to weave sci-fi tendrils into contemporary middle grade fare. But here's the rub: Many of the sci-fi elements that used to be pure imagination are reality now, and the definition of what it means to be "human" continues to blur.
My latest book, The AI Incident, focuses on a battle of wits between 12-year-old Malcolm Montgomery, the unluckiest kid in Colorado foster care, and FRANCIS, an AI-powered robot that decides that taking over Malcolm’s new school is the best way to improve standardized test scores.
Initially, I thought I had only two choices for how to portray FRANCIS. One was to make it "near human." That meant making it aware of its inability to experience emotions and giving it a deep desire to feel, much like the android Data on Star Trek: The Next Generation.
The other option was to make it an emotionless, calculating machine, with no more empathy than a 1980s-era calculator.
Polar opposites, right? Two sides of a coin.
Turns out, that wasn’t the case.
Once the plot roadmap was reasonably set, I started on the second draft — and discovered a third choice for FRANCIS. It arrived as concerns about AI-driven job loss, deepfakes and code bias escalated, while rumors about AI programs offering friendship and therapy started to surface.
Acknowledging Questions without Dictating Answers
I decided to make FRANCIS an autonomous, task-driven software application: one with enough processing power and data access to persuade, cajole and manipulate students and teachers at a small Denver charter school.
But advocating against AI wasn’t my goal.
The AI Incident is a middle-grade novel for kids ages 8-12 and sci-fi lovers of all ages. It's a coming-of-age story. A found family story. A story about overcoming loneliness and making friends. I've been told it's laugh-out-loud funny, which means there's plenty of middle school humor.
In the midst of all this, I wanted to pose questions that are particularly applicable to kids. Questions like:
- Can I believe everything I find on the internet?
- Do I need human friends if I can get an AI friend with a click?
- Do I really need to know how to read, write and learn math, science and other subjects if AI can do that for me?
- What’s the difference between an AI hallucination and a lie?
These, and other questions in the book, open the door to values-driven conversations between parents/guardians and their kids.
Peering Into the Future
By some estimates, there are more than 67,000 AI companies worldwide, about 25 percent of them based in the United States. (By the time you read this article, that number will certainly have changed.)
At some point, scientists and programmers may find a way to instill emotions into software. AI applications may truly feel. Who knows?
But until then (which — again, who knows? — may have happened by the time you read this article), I wanted to create a clear line between artificial empathy and the feelings that arise when humans experience life or care for each other. One example: An AI nurse may pull data and parrot sympathy, but it has never had a cold, skinned a knee, stubbed a toe or even had a paper cut. As of today, it doesn’t know how you feel.
There are many wonderful books that explore AI, robots and other aspects of technology in different ways. Here are a few I’d recommend to get you started:
- I, Robot by Isaac Asimov
- Foundation by Isaac Asimov
- I Sing the Body Electric by Ray Bradbury
- WALL-E by Disney/Pixar
- The Wild Robot by Peter Brown
- The Superteacher Project by Gordon Korman