In This Issue
Vol. 27 Issue 21
ETHICAL AI
- WHAT IS “ETHICAL AI”?
- CREATING AI
- BEWARE THE DATA
MORE FROM SNS
QUOTES
MEMBER SPOTLIGHT
ETHERMAIL
OUR PARTNERS
WHERE’S MARK?
—
“Ethical AI” (EAI) seems to be in the news daily, providing controversial, challenging, and often troubling stories about systems that, for one reason or another, fail the test.
What Is “Ethical AI”?
While everyone may have their own definition of “ethics” or “ethical,” EAI is a bit more focused. Here is a definition from faculty.ai:
Ethical AI is when AI systems are designed and deployed in ways that consider the harms associated with AI systems and [have] mitigated them. These harms include bias and discrimination, unsafe or unreliable outcomes, unexplainable outcomes and invasions of privacy.
What I find most technically interesting about this concept is that it is not a straight implementation of human morals or ethics, although they do play a role. From a compute standpoint, we might say that EAI is AI that passes certain objective tests and, in passing these, also allows for the implementation of human-rights values.
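To make the notion of an “objective test” concrete, here is a minimal sketch, in Python, of one common outcome-level check: the disparate impact ratio, which compares favorable-outcome rates across groups. The function, the toy data, and the 0.8 “four-fifths” rule of thumb are illustrative assumptions on my part, not a prescribed EAI standard.

```python
# Minimal sketch of one "objective test": the disparate impact ratio.
# The data, group names, and the 0.8 threshold below are illustrative assumptions.

def disparate_impact(decisions):
    """decisions: iterable of (group, favorable) pairs, favorable being True/False.

    Returns the ratio of the lowest group's favorable-outcome rate to the
    highest group's rate. A value near 1.0 suggests parity; below roughly 0.8
    (the informal "four-fifths rule") is a common flag for further review.
    """
    totals = {}
    for group, favorable in decisions:
        count, positives = totals.get(group, (0, 0))
        totals[group] = (count + 1, positives + (1 if favorable else 0))
    rates = {g: p / c for g, (c, p) in totals.items()}
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    # Hypothetical screening outcomes: (applicant group, advanced to interview?)
    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
    print(f"disparate impact ratio: {disparate_impact(outcomes):.2f}")  # 0.50 -> review
```

A test like this says nothing about anyone’s moral framework; it simply measures whether outcomes differ by group, which is what makes it usable across jurisdictions and cultures.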
EAI, in this compute design sense, neither dictates, nor adheres directly to, any particular human agenda, regardless of race, creed, or country. If it did, then our definition might apply in Turkey but not in Poland.
It’s worth saying at the start that everyone seems to be trying to impose their own agendas, morals, ethics, etc., onto this subject (after all, we’re human, an issue we’ll return to shortly), with predictably conflicting, and often angry, results. Among the more infamous examples of clearly failed EAI attempts, we would have to list Amazon’s HR system, which, after years of spending, training, and refining, had to be scrapped wholesale because it appeared to discriminate by race and gender in its hiring recommendations.
On the same theme, we should also mention the exodus (including firings) from Google’s EAI team over the last few months, with no clear resolution. It seems clear that neither the big tech companies nor the technology itself is currently capable of delivering the EAI we hope to achieve.
In short, the compute definition of EAI goals, at least from my own perspective, is one of eliminating both intentional and unintentional bias from analytical outcomes. Ultimately, this will require the same of our datasets, as well as of our analytical machines.
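Checking a dataset in the same spirit can be as simple as comparing base rates of the target label across groups before any model is trained; a large gap is an early warning that a model may inherit the skew. The sketch below is again purely illustrative: the column names, toy rows, and the use of a raw base-rate gap as the signal are my own assumptions, not an industry-standard audit.

```python
# Sketch of a pre-training dataset audit: compare base rates of the target
# label across groups. Column names and toy rows are assumptions for illustration.
from collections import defaultdict

def label_base_rates(rows, group_key="group", label_key="label"):
    """Return {group: fraction of rows with a positive label}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [rows, positive labels]
    for row in rows:
        counts[row[group_key]][0] += 1
        counts[row[group_key]][1] += int(bool(row[label_key]))
    return {g: positives / total for g, (total, positives) in counts.items()}


if __name__ == "__main__":
    training_rows = [
        {"group": "A", "label": 1}, {"group": "A", "label": 1}, {"group": "A", "label": 0},
        {"group": "B", "label": 1}, {"group": "B", "label": 0}, {"group": "B", "label": 0},
    ]
    rates = label_base_rates(training_rows)
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"base-rate gap: {gap:.2f}")  # a large gap means the data itself is skewed
```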