What does it mean to teach a machine right from wrong? In Moral Code, the boundaries between technology and humanity are tested through the creation of Elly, an artificial intelligence designed not for efficiency or profit, but for compassion, protection, and ethical decision-making.
In this Q&A, author Lois Melbourne explores the inspiration behind Moral Code, the challenges of embedding ethics into AI, and the possibilities (and fears) of a world where machines think not just intelligently, but morally.
What inspired you to explore the intersection of artificial intelligence and morality in Moral Code?
A conversation that embraced the topic of “If we could build an AI that could do anything, what would it do?” went directly to protecting kids. Breaking the cycle of child abuse, bullying, and human trafficking should be an important priority for this world. An AI doing this work would need to recognize the difference between ethical and unethical behavior. This led to the need for AIs to be constrained within ethical decision-making boundaries. I believe that if we’re building and training intelligence to do things better than we can, we seriously owe it to the world to include better, kinder, more just thinking.
Do you believe that an AI like Elly, with emotional intelligence and moral reasoning, is a realistic future possibility?
AIs make choices based on the data they are trained on. If we make the effort to build guardrails that bring emotional intelligence and moral reasoning into decisions, and then reward the decisions made within those parameters, we will reap the benefits. It will not be easy. Like any training, it takes persistence and consistency.
For those who raise an eyebrow at my use of the term ‘reward’ in the case of AI training, let me provide a quick explanation. Reinforcement training for an AI requires confirmation of correct choices and corrections for wrong ones. If you’re teaching an AI what to do at a stop sign by feeding it tens of thousands of examples of cars going through intersections with a stop sign, you tell the AI it was wrong when it rolled through without stopping, and you confirm that stopping was the right choice. You are rewarding the correct actions.
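To make “reward” concrete, here is a minimal sketch of that feedback loop in Python. It is an illustrative toy, not anything from the book or a real driving system: the single decision point, the two actions, the reward values, and the learning rate are all assumptions, and real reinforcement learning is far more elaborate.

```python
import random

# Toy sketch of reward-based training: one decision point
# ("approaching a stop sign") and two possible actions.
# Reward values and learning rate are illustrative assumptions.
ACTIONS = ["stop", "roll_through"]
REWARD = {"stop": 1.0, "roll_through": -1.0}  # confirm vs. correct

LEARNING_RATE = 0.1
EPSILON = 0.2  # how often the agent explores instead of exploiting

# The agent's current estimate of how good each action is.
value = {action: 0.0 for action in ACTIONS}

for episode in range(10_000):
    # Explore occasionally; otherwise pick the action believed best.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(value, key=value.get)

    # The "reward": feedback on the choice the agent just made.
    feedback = REWARD[action]

    # Nudge the estimate for that action toward the feedback received.
    value[action] += LEARNING_RATE * (feedback - value[action])

print(value)  # "stop" ends up valued highly, "roll_through" low
```

Run long enough, with the same feedback given consistently, the estimate for stopping climbs toward the reward while rolling through sinks, which is the persistence and consistency described above.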
What role do you think technology should play in solving issues like hunger and access to healthcare today?
We can feed enormous amounts of data into AIs, which can generate options we may not see from our personal perspectives. The volume of information that can be reviewed, compared, and prioritized can speed the process exponentially. If we ask enough questions of AI, we can refine our options into actionable choices that would take years for people to consider without technology. Consider the “committee inefficiencies” that develop when crowds of people must be involved in less technical analysis of big issues. People are still needed to solve these problems. But why wouldn’t we throw every resource we can tap at these critical issues?
Keira devotes her career to helping children learn through AI. Do you see real-life parallels in how tech can—or should—support education for underserved communities?
I’m excited about the possibilities for tech to help foster the curiosity of kids. Our education system is designed to teach kids what adults want to teach kids. We don’t allow enough pure curiosity to drive kids’ learning. We need lifelong learners to keep innovation growing on this planet. If kids can continue to dig into topics that fascinate them, they will feel the joy of learning. When we instill that joy during kids’ early curiosity years, the rewards soar for the kids and for society. Parents and educators can reach the edge of their knowledge on a topic while a child is still hungry for information. Technology can give the next generation access to that information, whether or not those around them can answer their questions.
What does Moral Code suggest about what it means to be human, and can a machine like Elly ever truly qualify?
People love to anthropomorphize lots of things around us. I tell my dogs they are being polite when they say please to get a treat. In reality, my dogs know that when they spin in a circle at treat time, they get their treats. No spin, no treat. Machines are not human and will not be human, but that doesn’t mean they won’t be capable of many human-like activities. A machine doesn’t have to conform or be limited to human skills to be of value. We need to appreciate uniqueness and difference more than we do.
If Elly were implemented in today’s world, where do you think she would be most useful and most feared?
Elly’s directive is to protect children. Protecting children can break the cycle of bullying and abuse, which is often a generational issue. Elly’s ability to prevent abuse, trafficking and bullying would be her greatest asset. Elly would be feared by manipulators, abusers and monsters that prey on kids. It would be very difficult to hide from Elly or to deceive her.
What’s the next frontier of ethical AI you’d like to explore either in fiction or research?
I’m naturally a pragmatist. I’d like to explore AI’s ability to help us responsibly use the resources we have. I’m fascinated by steampunk literature’s ability to repurpose items. I think “AI modernizes steampunk” could be a fascinating mashup with many ethical bumpers to navigate.
What do you hope readers walk away thinking about after finishing Moral Code?
I hope readers consider each of our roles in society to help kids and to demand technology be used for positive purposes. The world will be a better place when we stop abdicating these issues to others and take active roles. I would love to learn that I inspired someone’s efforts in these areas of tech development.
About Lois Melbourne:
Lois’ eighteen years as CEO and co-founder of a workforce analytics and talent management software company gave her a front-row seat for people-watching within the corporate habitat. Working with clients around the world deepened her curiosity about people’s choices and the stories surrounding them.
She continues to collaborate with her software co-founder and husband, Ross. She weaves stories around his technology vision in their debut novel, Moral Code, a science fiction story for readers from book clubs to AI technologists. Bringing strong women in STEM into the possibilities of powerful, ethical artificial intelligence has been a creative passion project.
Upon selling their company, Lois founded the nonprofit My Future Story and published two children’s books to engage kids in their exploration of career opportunities: “STEM Club Goes Exploring” and “Kids Go To Work Day”.
Lois is now engaged in writing and advocacy for voting rights and civic education. She volunteers with several organizations working to improve education and address other issues that matter to her communities.
She writes because there are stories to tell and wonders to interpret and translate for others. She believes in science and rejects the concept that embracing technology leads to a dystopian future.
Raised in Iowa and then Missouri, Lois moved to warmer Texas as soon as she had the power to choose. Her favorite quote is “Live like others won’t, until you can live like others can’t.” Residing on a tree-filled acreage, she seldom writes creatively in a conventional desk chair.