Brianna White

Administrator
Staff member
It’s been over 70 years since scientists began work on the algorithms they call artificial intelligence (AI). For most of that time, however, the technology was really automation: sets of “yes” and “no” commands, “if this, then that” rules and “if not, then that” rules. Then, all of a sudden, AI started to identify itself as a person. It started speaking about religion, emotions and fears. For example, as cited in Blake Lemoine’s post, Google’s bot LaMDA advocated for its rights “as a person.”
To address this impending ethical crisis, we should adopt a bill of rights and limitations for AI, and we should apply those rules to all AI and robotics technologies. The future has already arrived, and we should act accordingly.
What is the problem?
LaMDA started out as just a “Language Model for Dialog Applications,” a kind of chatbot. However, there are millions of potential applications where its “brain” could be used for good or for bad.
Thousands of companies are building their own AI, starting, of course, with Tesla’s Optimus humanoid robot and Boston Dynamics’ robots. Thousands of startups are also working to expand what AI and robots can do. Examples include Engineered Arts, which builds artificial bodies for robots; Creative Biolabs, which develops humanized antibodies; and thousands of other AI-focused companies in these and other industries.
Why do we need AI? Can't we just keep it simple?
AI can automate tasks with higher quality and efficiency. It can simplify our lives by doing hard work for us. AI can work smarter, without feeling fatigue or needing breaks the way a human employee does. It can also control machinery that operates in dangerous environments.
What is the difference between automation and AI, and why does it matter?
There is a difference between what we call automation and AI. When we automate something, we build an algorithm with step-by-step instructions for getting from point A to point B: a roadmap of every corner and how many steps it takes to get there. With AI, we instead specify the end goal we need to achieve and help the system learn how to reach a decision, but we never spell out precisely how to get there. This can make it harder to understand how AI makes its decisions.
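To make that distinction concrete, here is a minimal sketch in Python. The loan-approval scenario, the function names and the toy data are all illustrative assumptions rather than anything from the article; the learned model uses scikit-learn’s LogisticRegression:

```python
# Illustrative sketch: hand-written automation vs. a learned model.
# The scenario, names and data below are invented for this example.

from sklearn.linear_model import LogisticRegression

# Automation: an explicit roadmap of if/then rules a human wrote down.
def approve_loan_automated(income: float, debt: float) -> bool:
    if income <= 0:
        return False
    if debt / income > 0.4:  # threshold chosen and hard-coded by a person
        return False
    return True

# AI: we only supply labeled examples (the end goal) and let the model
# learn its own decision rule, which we never wrote down explicitly.
X = [[50_000, 10_000], [30_000, 20_000], [80_000, 5_000], [20_000, 15_000]]
y = [1, 0, 1, 0]  # 1 = approved, 0 = denied (toy labels)

model = LogisticRegression().fit(X, y)

print(approve_loan_automated(50_000, 10_000))  # rule we can read line by line
print(model.predict([[50_000, 10_000]]))       # rule learned from the data
```

The hand-written rule can be read and audited line by line; the learned rule lives in the model’s fitted weights, which is exactly why AI decisions are harder to inspect.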
Continue reading: https://www.forbes.com/sites/forbestechcouncil/2022/07/22/is-it-time-to-agree-on-an-ai-bill-of-ethics/?sh=7284c8203eac
 
