References to artificial beings endowed with intelligence have appeared since antiquity [1]. Indeed, it was the study of formal reasoning by the philosophers and mathematicians of those eras that began this line of inquiry. Much later, the study of mathematical logic led the computer scientist Alan Turing to develop his theory of computation.
Alan Turing is perhaps best known for his role at Bletchley Park in developing the Bombe, the electromechanical machine that decrypted the Nazi Enigma messages during World War II. However, it is arguably the Church-Turing thesis, named for him and Alonzo Church, which suggested that digital computers could simulate any process of formal reasoning, that is most influential in the field of AI today.
Such work led to much initial excitement. A workshop held at Dartmouth College in the summer of 1956, attended by many of the most influential computer scientists of the time, including Marvin Minsky, John McCarthy, Herbert Simon, and Claude Shannon, led to the founding of artificial intelligence as a field. The attendees were confident that the problem would soon be solved: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do," and Marvin Minsky agreed, suggesting that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved" [2]. This has not been the case. The problem proved far more difficult than they had imagined, and when ideas ran out, enthusiasm and funding faded, bringing about what is known as the AI winter of the 1970s.
More recently, however, there has been a revival of interest in AI and its approaches, such as the resurgence of deep learning in 2012, when George E. Dahl's team won the "Merck Molecular Activity Challenge" using multi-task deep neural networks to predict the biomolecular targets of drugs [3], and the development of deep reinforcement learning (deep Q-learning) in 2014 [4].
Continue reading: https://www.psychologytoday.com/us/blog/psychology-in-society/202203/towards-artificial-general-intelligence-agi