
Kathleen Martin

Hello and welcome to a special edition of Eye on A.I., recapping insights from Fortune's Brainstorm A.I. conference, which took place over the past two days in Boston. The second day of the conference saw fascinating discussions of everything from the use of A.I. in wealth management to the cutting edge of robotics.

Among the highlights was a fireside chat with Lynne Parker, director of the National AI Initiative Office in the White House Office of Science and Technology Policy. Parker said she sees great potential in privacy-preserving machine learning, in part as a way to create larger datasets that might help counter China's advantage in having access to more data. She also spoke about the need for explainable and transparent A.I. algorithms, especially in "high risk" use cases such as credit and mortgage underwriting or medical diagnoses. Parker's comments may hint that the U.S. government is considering following the European Union in taking a risk-based approach to A.I. regulation. The EU has proposed categorizing A.I. use cases as high, medium, or low risk, with different levels of regulatory compliance required for each category. "The right regulation can spur innovation," Parker said.

One of the overarching themes running through many of the conversations at the conference was that the success or failure of A.I. projects is rarely about the technology and almost always about the culture of the business in which the project is implemented. "Step back from the tech and bring people along on the journey," advised Carol Juel, the chief technology and operating officer at the online bank Synchrony. "A.I. will not solve the problems of any organization." Jeff McMillan, the chief data and analytics officer at Morgan Stanley Wealth Management, similarly said that if he could do things over again, he would be less of a perfectionist about the A.I. algorithms he was creating and worry much more about how those algorithms fit with the business. "You have to go there based on the business demand and not my A.I. strategy, which nobody cares about," he said.

On the other hand, Julie Sweet, Accenture's CEO, warned companies "not to confuse necessity with strategic choice." Many companies had to implement automation, such as chatbots to assist with customer service or new algorithms to handle pricing decisions, because of the pressures of the COVID-19 pandemic. That, she said, is necessity. But CEOs and boards need to start thinking more strategically about where they can derive the most value from A.I., and they should not be afraid to think big and move fast.

Still, many companies stumble in implementing A.I. projects, struggling to move from proofs of concept to full-scale implementations, and there was plenty of discussion at Brainstorm A.I. about how to avoid those pitfalls. Tony Kreager, the vice president of data engineering and data science at FedEx's Dataworks division, talked about the advantage of thinking in short timescales: six days, six weeks, and six months. That is, six days to cobble together a bare-bones proof of concept that data scientists and machine learning engineers can show to a business unit, six weeks to scale it up and prove that the application can deliver business value, and six months to go to full deployment. But, like many at the conference, Kreager also talked about the importance of being willing to walk away from a project if, after six days or six weeks, it isn't working.

At the same time, most of the executives at the conference said they recognize the need to have multi-functional, multi-disciplinary teams working on these A.I. projects from the outset. This can help spot potential challenges, including critical ethical or legal issues, before a company has invested too much time and money in building an application.
Continue reading: https://fortune.com/2021/11/10/successful-a-i-depends-on-people-not-tech/