
Brianna White

Administrator
Staff member
Jul 30, 2019
Artificial intelligence has a long and rich history stretching over seven decades. What’s interesting is that AI predates even modern computers: research on intelligent machines was among the starting points for digital computing itself. Early computing pioneer Alan Turing was also an early AI pioneer, developing his ideas in the late 1940s and 1950s. Norbert Wiener, the creator of cybernetics, built the first autonomous robots in the 1940s, when even transistors didn’t exist, let alone big data or the cloud. Claude Shannon developed a hardware mouse that could solve mazes without any deep learning neural networks. W. Grey Walter famously built two autonomous cybernetic tortoises in the late 1940s that could navigate the world around them and even find their way back to their charging spot, without a single line of Python being coded. It was only after these developments, and the subsequent coining of the term “AI” at the Dartmouth conference in 1956, that digital computing really took off.
So given all that, with our amazing computing power, limitless Internet and data, and cloud computing, we surely should have achieved the dreams of early AI researchers: the orbiting planets, autonomous robots, and intelligent machines envisioned in 2001: A Space Odyssey, Star Wars, Star Trek, and other science fiction of the 1960s and 1970s. And yet today, our chatbots are not much smarter than the ones developed in the 1960s, and our image recognition systems are satisfactory but still can’t recognize the elephant in the room. Are we really achieving AI, or are we falling into the same traps over and over? If AI has been around for decades, why are we still seeing so many challenges with its adoption? And why do we keep repeating the same mistakes from the past?
AI Sets Its First Trap: The First AI Winter
To better understand where we currently are with AI, you need to understand how we got here. The first major wave of AI interest and investment ran from the early 1950s through the early 1970s. Much of the early AI research and development stemmed from the burgeoning fields of computer science, neuropsychology, brain science, linguistics, and related areas. AI research built on exponential improvements in computing technology, and this, combined with funding from government, academic, and military sources, produced some of the earliest and most impressive advances in AI. Yet while computing technology continued to mature and progress, the AI innovations developed during this window ground to a near halt in the mid-1970s. The funders of AI realized they weren’t getting what had been expected or promised for intelligent systems, and AI began to feel like a goal that would never be achieved. This period of decline in interest, funding, and research is known in the industry as the first AI Winter, so called because of the chill that researchers felt from investors, governments, universities, and potential customers.
Continue reading: https://www.forbes.com/sites/cognitiveworld/2022/09/03/why-do-we-keep-repeating-the-same-mistakes-on-ai/?sh=bf50d461475c
 
