Kathleen Martin

Guest
AI researchers often say good machine learning is really more art than science. The same could be said for effective public relations. Selecting the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen one’s brand image, but done poorly, it can trigger an even greater backlash.
The tech giants would know. Over the last few years, they’ve had to learn this art quickly as they’ve faced increasing public distrust of their actions and intensifying criticism of their AI research and technologies.
Now they’ve developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in.
accountability (n) - The act of holding someone else responsible for the consequences when your AI system fails.
accuracy (n) - Technical correctness. The most important measure of success in evaluating an AI model’s performance. See validation.
adversary (n) - A lone engineer capable of disrupting your powerful revenue-generating AI system. See robustness, security.
alignment (n) - The challenge of designing AI systems that do what we tell them to and value what we value. Purposely abstract. Avoid using real examples of harmful unintended consequences. See safety.
artificial general intelligence (phrase) - A hypothetical AI god that’s probably far off in the future but also maybe imminent. Can be really good or really bad, whichever is more rhetorically useful. Obviously you’re building the good one. Which is expensive. Therefore, you need more money. See long-term risks.
 
Continue reading: https://www.technologyreview.com/2021/04/13/1022568/big-tech-ai-ethics-guide