In light of recent accidents with autonomous and semi-autonomous vehicles, will people put their trust in artificial intelligence? Missouri S&T researchers are digging for answers.
Given the choice of riding in an Uber driven by a human or a self-driving version, which would you choose?
Considering last month’s fatal crash of a self-driving Uber that took the life of a woman in Tempe, Arizona, and the recent death of the driver of a Tesla operating in semi-autonomous mode, people’s trust in the technology behind autonomous vehicles may also have taken a hit. The reliability of self-driving cars and other forms of artificial intelligence is one of several factors that affect human trust in AI, machine learning and other technological advances, write two Missouri University of Science and Technology researchers in a recent journal article.
“Trust is the cornerstone of humanity’s relationship with artificial intelligence,” write Dr. Keng Siau, professor and chair of business and information technology at Missouri S&T, and Weiyu Wang, a Missouri S&T graduate student in information science and technology. “Like any type of trust, trust in AI takes time to build, seconds to break and forever to repair once it is broken.”
The Uber and Tesla incidents point to the need to rethink the way AI applications such as autonomous driving systems are developed, and for designers and manufacturers of these systems to take certain steps to build greater trust in their products, Siau says.
Despite these recent incidents, Siau sees a strong future for AI, but one fraught with trust issues that must be resolved.
“Trust building is a dynamic process, involving movement from initial trust to continuous trust development,” Siau and Wang write in “Building Trust in Artificial Intelligence, Machine Learning, and Robotics,” published in the February 2018 issue of Cutter Business Technology Journal.
In their article, Siau and Wang examine prevailing concepts of trust in general and in the context of AI applications and human-computer interaction. They discuss the three types of characteristics that determine trust in this area: human, environment and technology. They also outline ways to engender trust in AI applications.
Siau and Wang point to five areas that can help build initial trust in artificial intelligence systems.
Beyond developing initial trust, however, creators of AI also must work to maintain that trust. Siau and Wang suggest seven ways of “developing continuous trust” beyond the initial phases of product development.
“The AI age is going to be unsettling, transformative and revolutionary,” Siau writes in another recent article (“How Will Technology Shape Learning?” published in the March 2018 issue of The Global Analyst). But in this unsettling environment, higher education can play a significant role.
“Higher education must rise to the challenge to prepare students for the AI revolution and enable students to successfully surf in the AI age,” Siau writes.
Already, Siau is working to prepare MBA students at Missouri S&T for the AI age through Artificial Intelligence, Robotics, and Information Systems Management, a course he introduced in 2017. As part of the coursework, Siau asks each student to present an article on a new artificial intelligence or machine learning technology or application.