Brianna White

Administrator
Staff member
Jul 30, 2019
Investors, take note. Your due diligence checklist may be missing a critical element that could make or break your portfolio's performance: responsible AI. Beyond screening and monitoring companies for future financial returns, growth potential and ESG criteria, it's time for private equity (PE) and venture capital (VC) investors to start asking hard questions about how firms use AI.
Given the rapid proliferation and uptake of AI in recent years – 75 percent of all businesses already include AI in their core strategies – it's no surprise that the technology is top-of-mind for PE and VC investors. In 2020, AI accounted for 20 percent, or US$75 billion, of worldwide VC investments. McKinsey & Company has reported that AI could increase global GDP by roughly 1.2 percent per year, adding a total of US$13 trillion by 2030.
AI now powers everything from online searches to medical advances to job productivity. But, as with most technologies, it can be problematic. Hidden algorithms may threaten cybersecurity and conceal bias; opaque data practices can erode public trust. A case in point is BlenderBot 3, launched by Meta in August 2022. The AI chatbot made anti-Semitic remarks and factually incorrect statements about the United States presidential election, and even asked users for offensive jokes.
In fact, the European Consumer Organisation's latest survey on AI found that over half of Europeans believed companies use AI to manipulate consumer decisions, while 60 percent of respondents in certain countries thought that AI leads to greater abuse of personal data.
How can firms use AI in a responsible way and work with cross-border organizations to develop best practices for ethical AI governance? Below are some of our recommendations, which are covered in the latest annual report of the Ethical AI Governance Group, a collective of AI practitioners, entrepreneurs and investors dedicated to sharing practical insights and promoting responsible AI governance.
Continue reading: https://knowledge.insead.edu/responsibility/responsible-ai-has-become-critical-business