
Brianna White

Administrator
Staff member
Jul 30, 2019
Artificial Intelligence has become commonplace in the lives of billions of people globally. Research shows that 56% of companies have adopted AI in at least one business function, up six percentage points from 2020, with adoption especially high in emerging economies. AI is used in everything from optimizing service operations to recruiting talent. It can capture biometric data, and it already assists in medical applications, judicial systems, and finance, making key decisions in people's lives.
But one huge challenge remains: regulating its use. Is a global consensus possible, or is a fragmented regulatory landscape inevitable?
The concept of AI sparks fears of Orwell's novel "1984" and its "Big Brother is Watching You" notion. Products based on algorithms that violate human rights are already being developed, so now is the time to put standards and regulations in place to mitigate the risk of a surveillance-based society and other nightmarish scenarios. The US and the EU could take the lead on this, especially since both blocs have historically shared principles regarding the rule of law and democracy. But on either side of the Atlantic, different moral values underpin those principles, and they don't necessarily translate into similar practical rules: in the US the emphasis is on procedural fairness, transparency, and non-discrimination, while in the EU the focus is on data privacy and fundamental rights. Hence the challenge of finding common rules for digital services that operate across continents.
Why AI Ethics Is Not Enough
Not all uses of AI are savory or built on palatable values. AI could become 'god-like' in nature: left to self-proclaimed ethical safeguards, it has been shown to be discriminatory and subversive. Consider, for a moment, the AI underlying the so-called 'social credit' system in China. It ranks the Chinese population, and those considered untrustworthy are penalized for anything from jaywalking to playing too many video games. Punishments include losing rights, such as the ability to book tickets, or having internet speeds throttled.
Imposing mandatory rules on AI would help prevent the technology from infringing on human rights. Regulation has the potential to ensure that AI has a positive, not a negative, effect on people's lives. The EU has proposed an AI Act intended to address these types of issues. The law is the first of its kind from a major regulator worldwide, but other jurisdictions, such as China and the UK, are also entering the regulatory race to have a say in shaping the technologies that will govern our lives this century.
Continue reading: https://www.forbes.com/sites/hecparis/2022/09/09/regulating-artificial-intelligence--is-global-consensus-possible/?sh=20d025567035
 
