Brianna White

Administrator
Staff member
If you've given any thought at all to artificial intelligence (AI) and the progress made in the field, you're probably in one of these two camps:
Camp 1: AI is the greatest possible threat to humankind and will eventually take over and enslave us (aka the "Matrix" or "Terminator" camp).
Camp 2: AI should be embraced by humankind and will drive us to unprecedented new levels of creativity, productivity and societal advancement. AI will largely remain subservient to us, and we will coexist harmoniously (aka the "R2-D2" camp).
Most books and movies that deal with AI also lean toward one of these two camps. Both framings hinge on whether AI turns out "good" or "evil," yet those are somewhat nebulous terms: our baseline empathy sets our definition of "good." For example, most of us know that we should value human life over material objects without needing anyone to tell us so explicitly; someone who sacrificed a baby to get a new car would automatically be branded "evil." These macro laws and rules are hardwired into us as human beings. But why should human or animal life be valuable to an AI? A dog has no greater intrinsic value to a machine than, say, a sandwich, unless we program our values into our AI systems.
The question isn't really whether AI will eventually become more intelligent than humans (it definitely will) or whether it will turn good or evil. It's what we can do right now to make sure it turns "good" (or, at the very least, doesn't turn "evil").
Continue reading: https://www.forbes.com/sites/forbestechcouncil/2021/08/30/why-the-ethics-of-ai-are-complicated/?sh=4a1be0dc43a7