Kathleen Martin

Artificial Intelligence (AI) – the capability of a machine or piece of software to display human-like intelligence – permeates our daily lives, often in ways we do not notice. It operates behind the scenes, powering and optimizing smartphone apps, transportation, healthcare, retail, and more.
In the words of my colleague at EastBanc Technologies, Levon Paradzhanyan, “AI-capable agents have cognitive functions that allow them to observe, learn, take action, and solve a problem or task autonomously. … [AI-powered] computers can make use of human brain structure to perform natural language processing, image recognition, and much more, in some cases surpassing human expert benchmarks!”
How is this possible? It’s all about math and computing power. The key is the ability to process more complex mathematics faster and more accurately. That requires more computing power: faster processors and more memory to hold more data. And because Moore’s Law, which in 1975 predicted a doubling of transistors on a chip every two years, has largely held up, we continue to have the computational power needed to carry out increasingly complex and ambitious Deep Learning (DL) models and AI algorithms.
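As a rough illustration of what that doubling implies, here is a minimal Python sketch. The baseline transistor count and the time spans are assumptions chosen purely for illustration, not figures for any real chip.

```python
# Minimal sketch of the exponential growth Moore's Law describes:
# transistor counts doubling roughly every two years.
# The baseline figure is an illustrative assumption, not a real chip spec.

def transistors_after(years: float, baseline: int = 10_000, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years`, assuming one doubling per `doubling_period` years."""
    return baseline * 2 ** (years / doubling_period)

if __name__ == "__main__":
    for years in (0, 10, 20, 40):
        print(f"after {years:>2} years: ~{transistors_after(years):,.0f} transistors")
```

Running the sketch shows roughly a thousandfold increase over 20 years, which is why models that were computationally out of reach a couple of decades ago are routine today.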
Even though machines can solve certain tasks far more quickly than humans, today’s AI remains firmly at its first level, sometimes called “narrow AI” because it is limited to handling specific tasks in a very clearly defined – narrow – way. Beyond narrow AI are two more levels, “general AI” and “super AI,” which together offer a theoretical view of what AI may eventually become.
AI has long been a favored subject in popular culture, where it has at times attained an almost mythical status. The vision is often dystopian. “Blade Runner” – and the novel on which it was based, Philip K. Dick’s “Do Androids Dream of Electric Sheep?” – introduced us to android antagonists called “replicants,” which are generally superior to humans, but some of which – or whom? – do not actually know that they are machines. In “The Terminator,” innovative but reckless humans build intelligent machines, which become self-aware – i.e., autonomous and untethered from human control. Naturally, the machines decide to wage war against humankind in a near-successful attempt at wiping us out. In another genuine sci-fi classic, “2001: A Space Odyssey,” HAL, the onboard computer and spaceship operating system, goes rogue and tries to kill the crew in an act of very humanlike self-preservation. Fascination becomes fear as AI is used to exemplify humankind’s hubris.
Real-life AI is more benign. Driverless commuter trains. Semi-autonomous cars built by Tesla and a rapidly growing number of other innovative automakers. Smart home appliances that “talk” to each other (and us). Financial technology solutions that facilitate payments and incorporate advanced algorithms. Computer-driven coordination of public transportation. All examples of supercomputers running algorithms and making calculations far beyond what any human can do – making our lives just a little bit easier.
One of the most famous examples of just how smart a machine can be was IBM’s “Deep Blue” computer defeating world chess champion Garry Kasparov in 1997. How did a computer get the better of the world’s best chess player? Very simply put, IBM’s machine could evaluate hundreds of millions of chess positions per second, searching through possible moves and outcomes faster and more accurately than any human could. And that – carrying out a clearly defined intellectual task – is where current AI excels. The machines cannot think independently, but when it comes to crunching such numbers, they are far ahead of us.
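To give a flavor of what “searching through possible moves” means in code, here is a minimal, generic minimax sketch in Python. It is not Deep Blue’s actual implementation – the real system relied on specialized hardware, alpha-beta pruning, and handcrafted evaluation functions – and the Game interface below is a hypothetical placeholder used only for illustration.

```python
# Minimal, generic minimax sketch illustrating exhaustive game-tree search.
# NOT Deep Blue's implementation; the Game interface is a hypothetical placeholder.

from typing import Iterable, Protocol

class Game(Protocol):
    def is_terminal(self) -> bool: ...
    def score(self) -> float: ...            # evaluation from the maximizing player's point of view
    def moves(self) -> Iterable["Game"]: ... # successor positions reachable in one move

def minimax(state: Game, depth: int, maximizing: bool) -> float:
    """Return the best achievable score, looking `depth` plies ahead."""
    if depth == 0 or state.is_terminal():
        return state.score()
    child_values = (minimax(child, depth - 1, not maximizing) for child in state.moves())
    return max(child_values) if maximizing else min(child_values)
```

In practice, chess engines prune this search heavily (e.g., with alpha-beta pruning) and cut it off at a fixed depth with an evaluation function, because the full chess game tree is far too large to enumerate.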
Continue reading: https://customerthink.com/the-three-levels-of-artificial-intelligence-weve-only-just-begun/
 
