Brianna White

Administrator
Staff member
Jul 30, 2019
Eight days. That’s how long Google’s Advanced Technology External Advisory Council (ATEAC), an eight-member committee set up in 2019 to guide the company’s development of A.I., survived before the company dissolved it.
The committee imploded for several reasons. Google wanted the ATEAC to meet just four times a year. It expected its members to work pro bono. And although the digital giant claimed that the ATEAC’s efforts would inform its A.I. use, it wasn’t clear which projects the committee would monitor, to whom it would report, or which executive(s) would act on its recommendations. In retrospect, the ATEAC was consumed by rising organizational and societal skepticism about its role because it simply wasn’t set up for success.
As CEOs expand their organizations’ use of A.I., they face complex challenges. They must manage tradeoffs among objectives that are often in conflict with one another, such as profits, consumer safety, reputation, ethics, and values. These tradeoffs force them to choose between decisions with tangible short-term impacts and those with mid- to long-term implications that are difficult to evaluate.
These ever more complex tradeoffs are inevitable with A.I. First, A.I. allows companies to offer new services—such as personalized recommendations for each consumer and preventive maintenance of every machine—that they couldn’t offer before. Second, A.I.-generated tradeoffs often end up having a major impact because companies can scale A.I. rapidly. Third, A.I. learns and evolves over time, even without human supervision, so the risks are harder to predict. Microsoft’s chatbot Tay turned racist in 2016 less than 16 hours after it went online. Finally, in the absence of regulations and guidelines, business leaders find it tough to identify and manage the risks of using A.I.
Continue reading: https://fortune.com/2022/03/04/artificial-intelligence-ai-watchdog-review-board/