Eight days. That’s how long Google’s Advanced Technology External Advisory Council (ATEAC), an eight-member committee set up in 2019 to guide the company’s development of A.I., survived before the company dissolved it.
The committee imploded for several reasons. Google wanted the ATEAC to meet just four times a year, and it expected members to serve pro bono. And although the digital giant claimed that the ATEAC's work would inform its use of A.I., it wasn't clear which projects the committee would monitor, to whom it would report, or which executives would act on its recommendations. In retrospect, the ATEAC was overwhelmed by rising skepticism, inside and outside the company, about its role; it simply wasn't set up for success.
As CEOs expand their organizations' use of A.I., they face complex challenges. They must manage tradeoffs among objectives that often conflict, such as profits, consumer safety, reputation, ethics, and values. These tradeoffs force them to choose between decisions with tangible short-term impacts and those with mid- to long-term implications that are difficult to evaluate.
These ever more complex tradeoffs are inevitable with A.I. First, A.I. allows companies to offer services they couldn't before, such as personalized recommendations for each consumer and preventive maintenance of every machine. Second, A.I.-generated tradeoffs often have a major impact because companies can scale A.I. rapidly. Third, A.I. learns and evolves over time, even without human supervision, so its risks are harder to predict: Microsoft's chatbot Tay turned racist in 2016 less than 16 hours after it went online. Finally, in the absence of regulations and guidelines, business leaders find it difficult to identify and manage the risks of using A.I.
Continue reading: https://fortune.com/2022/03/04/artificial-intelligence-ai-watchdog-review-board/