Keith Abney, a senior lecturer in philosophy at California Polytechnic State University, presented “AI/Robots and Ethics: Surveying the Risk Environment,” a talk that surveyed the risks AI poses, the distinct types of risk and workable solutions.
Abney defined risk as the possibility that harm may occur. Some risks of AI are acceptable because the benefits they bring outweigh the potential harms. But in the words of Abney, “Can we trust AI… that is a complicated question.”
AI thus falls under the ethics of risk, which asks when it is morally acceptable to impose risks on a community. Abney identified eight factors that bear on whether a risk is acceptable.
The factors are Consent, Informed Consent, Affected Population, Step Risk vs. State Risk, Seriousness and Probability, False Positives and Negatives, Who Decides Acceptable Risk, and Existential Risk.
Consent involves the distinction between involuntary and non-voluntary risk, and whether it is proper to use AI without the consent of those it affects. An involuntary risk is one the affected party is aware of and does not consent to, yet is forced to bear. A non-voluntary risk is one the affected party is unaware of and therefore cannot consent to.
Informed Consent concerns knowing a risk and consenting to it regardless. It also raises the question of whether morally valid consent should require adequate knowledge of what is being consented to.
Affected Population asks who is at risk and whether those at risk understand the risk AI poses to them. Step Risk vs. State Risk asks which of the two matters more in debates over AI.
A state risk is time-dependent: it comes from being in a certain state, so the total risk is a direct function of how long one remains in that state. A step risk is not time-dependent: it comes from making a transition, so the amount of time spent on the step matters little or not at all.
Seriousness and Probability ask how bad the harm from AI would be and how likely it is to happen. A false positive occurs when AI finds a phenomenon to be present when it is actually absent; a false negative occurs when AI finds it to be absent when it is actually present.
Who Decides Acceptable Risk asks who gets to determine whether a given risk is acceptable or unacceptable.
Three different standards apply. Under a good faith subjective standard, it is up to the individual to decide whether a risk is acceptable. Under a reasonable person standard, a risk is acceptable if a fair, informed member of the relevant community would judge it to be. Objective standards require evidence or expert testimony to establish whether a risk is unacceptable.
An Existential Risk is one that would annihilate Earth’s intelligent life or permanently curtail its potential. A Catastrophic Risk, which falls short of existential, threatens something essential to humans, as climate change does.
“All of these can be a result of terrible things if AI is misused, but none of them would make humans extinct,” Abney said.
The workable solutions for AI are responsible AI and a space backup. Responsible AI requires regulation and auditing to determine the best AI practices and procedures to implement. The space backup means exploring and settling space as a backup plan for our biosphere.
This presentation and work are supported in part by the US National Science Foundation and by Cal Poly’s College of Liberal Arts and Philosophy Department.