Brianna White

Jul 30, 2019
Artificial intelligence “acts” unethically in ways that differ from how humans act, even if the harms that both AI and humans can cause are similar. For example, even though both humans and AI can invade people’s privacy, discriminate, or cause physical harm, artificial intelligence does not act with the intention to cause such harm. Rather, the harm results from how artificial intelligence collects and processes data.
Currently, artificial intelligence cannot achieve consciousness, though one Google engineer disagrees. Today, the type of artificial intelligence that companies are creating and incorporating into their operations and decision systems is artificial narrow intelligence, which refers to a computer's ability to perform a single task or a limited set of tasks extremely well. Otherwise known as “machine learning,” in most cases that task is to process data by learning from examples. By discovering commonalities among data points, the AI constructs a pattern that it then uses to identify what the algorithm is meant to find, or to provide a solution, given the pattern revealed by the data.
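As a loose illustration of “learning by example” (not an example from the article), the sketch below uses a 1-nearest-neighbour rule: a new data point is labeled by finding the most similar labeled example. The feature values and labels are entirely made up for illustration.

```python
# Minimal sketch of learning by example: label a new point
# by finding the most similar labeled training example.

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(examples, point):
    # Pick the label of the closest labeled example.
    nearest = min(examples, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# Hypothetical labeled examples: (features, label).
examples = [
    ((1.0, 1.0), "spam"),
    ((1.2, 0.9), "spam"),
    ((5.0, 5.0), "not spam"),
    ((4.8, 5.2), "not spam"),
]

print(predict(examples, (1.1, 1.0)))  # falls near the "spam" cluster
print(predict(examples, (5.1, 4.9)))  # falls near the "not spam" cluster
```

The point of the sketch is that the “pattern” is nothing more than regularities in the training examples, which is why the quality of those examples matters so much.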
In many ways, “machine learning” is similar to reasoning by analogy or inductive reasoning by humans. Reasoning by analogy is when a person compares two cases to determine how similar they may be for the sake of applying the same conclusions from one case to the other. It is a way to establish a pattern for cases that may be diverse but have a common theme. Inductive reasoning is when a person draws a conclusion or a broad generalization from a set of specific examples. The larger the set, the more accurate the conclusion can be. It is a way to find the pattern among the particulars.
The major difference between these two forms of human reasoning and “machine learning” is that “machine learning” is more vulnerable to what G.E. Moore called the naturalistic fallacy, which occurs when one mistakes what currently exists for what should be seen as moral. For example, the naturalistic fallacy may conflate the existence of systemic bias with an assumption that it should continue. Among AI ethics scandals, there is no dearth of examples of AI algorithms reinforcing discriminatory practices simply because discrimination is already rampant. Most recently, the Department of Justice and the Equal Employment Opportunity Commission have found that AI may be discriminating against disabled applicants in the hiring process.
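To make the mechanism concrete, here is a deliberately simplified sketch (my own, with invented data, not the article's) of how a model trained purely on historical outcomes reproduces the bias in those outcomes: if group "A" was rarely hired in the past, a rule learned from that history simply encodes the past as the norm.

```python
# Hypothetical sketch of the naturalistic fallacy in machine learning:
# a model that mirrors historical hiring outcomes will reproduce any
# bias those outcomes contain. All data below is invented.

from collections import Counter

# Historical decisions as (group, hired) pairs; group "A" was rarely hired.
history = [
    ("A", False), ("A", False), ("A", False), ("A", True),
    ("B", True),  ("B", True),  ("B", False), ("B", True),
]

def fit_majority(records):
    # "Learn" by taking the majority historical outcome per group.
    outcomes = {}
    for group, hired in records:
        outcomes.setdefault(group, []).append(hired)
    return {g: Counter(v).most_common(1)[0][0] for g, v in outcomes.items()}

model = fit_majority(history)
print(model)  # the learned rule encodes the past bias: {'A': False, 'B': True}
```

Nothing in the training step is malicious; the discriminatory rule emerges only because the data reflects discrimination that already happened, which is exactly the failure mode the article describes.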
Continue reading: https://www.forbes.com/sites/irabedzow/2022/06/30/what-it-takes-to-create-and-implement-ethical-artificial-intelligence/?sh=4a62f110f928
 
