
2022-09-09

How I Define "Artificial Intelligence"

  This article is part of the series: How to Talk About AI Productizing Data with People

I recently read the novel "Klara and the Sun" by the Nobel Prize-winning writer Kazuo Ishiguro, and it was lovely. Beyond the simple joy of reading the story, the book led me to rethink the concept of artificial intelligence ("AI") from an ethical standpoint.

In my opinion, what's unique about Klara and the Sun is how the author portrays an AI, Klara, as a character. By that I mean how realistically the story depicts the emotions and behavior of the "machine," and I think this realism is one of the few differentiators among the numerous Sci-Fi books and movies, like A.I. and Her, that place an AI at the core of their storylines.

In other words, while Klara and the Sun does not stand out to me for the underlying messages it conveys, such as "love," "relationship," and the "beauty of the human world" that many Sci-Fi stories commonly explore, the author beautifully illustrates the interactions between humans (e.g., Josie, Rick, Mother) and AI (Klara) with detailed psychological description, both implicit and explicit.

Meanwhile, several thought-provoking paragraphs ask readers to think about AI ethics, for example by letting a boy, Rick, talk about the ethical aspects of drone technology. Ultimately, every single behavior of the characters illustrates the possibilities and limitations of AI, and this realism blurs the boundary between a machine-like program and a human-like AI throughout the book; as soon as Klara starts working for a girl, Josie, I could easily forget that Klara is actually a machine, because the artificial friend perceives the world and converses with others so naturally.

After reading this well-crafted story, I couldn't stop thinking about the following questions:

  • What do you imagine when someone calls a particular piece of code “AI”?
  • What is your definition of “AI” (vs. ordinary software program)?
  • Is an expert system (i.e., heuristics, a rule-based program that helps our decision-making process) still “AI”?

As a developer working on data analytics and machine learning, the technical foundations that make modern AI-ish applications possible, I want to be particularly conscious of the word because it implies so many different ideas depending on the context. It is not just a buzzword for fancy marketing communications, and I strongly believe answering these questions is a key first step toward discussing the ethical issues we face in the era of big data.

"AI": someone might immediately imagine a Sci-Fi-quality, sophisticated autonomous robot that looks no different from a real human, while others might give more realistic answers based on what we already have, e.g., robot vacuum cleaners, Siri-like voice assistants, and data-driven applications such as recommendation and machine translation.

For me, AI is ultimately defined by the "behavior" of the program and our perception of it. That is, as Klara demonstrates in the story, seamless interaction with end users (humans) indicates the sophistication of the machine. At the same time, sensitivity to user feedback (i.e., what humans say in front of Klara, and how Klara responds) determines whether we categorize the program as "just a machine" or as "AI." In that sense, any kind of software program can potentially be an AI, and vice versa, regardless of its technical complexity.

This is not about some unseen future; it is a matter we all already face daily. Therefore, it is important for developers to think carefully about how to implement machine-human interactions (i.e., user interface and interaction), and this is where the ideas of ethical product development and humane technology come into play.




  Categories

Data Science & Analytics, Business, Machine Learning


Last updated: 2022-09-09

  Author: Takuya Kitazawa

Takuya Kitazawa is a freelance software developer based in British Columbia, Canada. As a technologist specializing in AI and data-driven solutions, he has worked globally at Big Tech and start-up companies for a decade. At the intersection of tech and society, he is passionate about promoting the ethical use of information technologies through his mentoring, business consultation, and public engagement activities. See CV for more information, or contact at [email protected].


  Disclaimer

  • Opinions are my own and do not represent the views of any organization I belong or have belonged to.
  • I do my best to ensure the accuracy and fair use of the information. However, there may be errors, outdated information, or biased subjective statements, because the main purpose of this blog is to jot down my personal thoughts as soon as possible, before conducting an extensive investigation. Visitors should understand these limitations and rely on any information at their own risk.
  • That said, if there is any issue with the content, please contact me so I can take the necessary action.