Early Advancements
The period from the 1950s to the 1980s was an era in which AI focused on learning and building on new ideas. It laid the groundwork for modern AI while pushing the boundaries of computation. Throughout this period, researchers and engineers were deeply optimistic about what AI would be able to do, a parallel to our current enthusiasm for AI development. This optimism, however, also marked the start of normalization of deviance within the field, as ambitious goals created a widening mismatch between aspirations and reality.
The normalization of deviance began with the AI community's ambitious early projections. Researchers, encouraged by publications such as Alan Turing's 1950 paper “Computing Machinery and Intelligence,” which proposed the Turing Test, were inspired to build machine intelligence that could compete with human cognition. This optimism grew following the 1956 Dartmouth Summer Research Project on Artificial Intelligence, widely considered the birthplace of AI. Attendees such as Marvin Minsky, John McCarthy, and Claude Shannon outlined goals centered on solving complex problems through computational logic, an early indicator of the progress-oriented nature the field would adopt.
Engineers such as Arthur Samuel demonstrated real progress. Samuel's checkers-playing program, developed at IBM in the 1950s, introduced innovations such as pruning the game's search tree and letting the program "learn" by repeatedly playing against itself. As early applications of machine learning, these successes made it appear that AI could quickly master tasks requiring human-like intelligence. However, researchers often underestimated the algorithmic complexity of replicating human cognition, and the limited hardware of the era widened the disconnect between what was promised and what was delivered.
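Samuel's actual pruning and self-play schemes were specific to his checkers engine; the Python sketch below only illustrates the general idea of pruning a game search tree, using a toy tree with made-up evaluation scores rather than checkers positions.

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Evaluate a toy game tree, skipping branches that cannot change the outcome."""
    if not isinstance(node, list):          # leaf: a static evaluation score
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # the opponent would avoid this line: prune
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Inner lists are positions; integers are illustrative evaluation scores.
tree = [[3, 5], [6, [9, 8]], [1, 2]]
print(alphabeta(tree, -math.inf, math.inf, maximizing=True))  # prints 6
```

The key point is that entire subtrees (such as the second score in the last branch above) are never examined once it is clear the opponent would never allow them, which is what made deep game search feasible on 1950s hardware.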
During this era, significant theoretical contributions, such as Warren McCulloch and Walter Pitts' work on artificial neurons, influenced the creation of early neural networks, including Minsky's SNARC. While promising, these early networks struggled with practical limitations such as insufficient computing power and training data. The challenges did little to dampen researchers' enthusiasm; many continued to believe that true AI breakthroughs were imminent. The normalization of deviance emerged as the field increasingly accepted this mismatch between its aspirations and actual outcomes, leading to recurring cycles of hype followed by disappointment.
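McCulloch and Pitts modeled the neuron as a binary threshold unit: it fires only when the weighted sum of its inputs reaches a threshold. A minimal sketch, with illustrative weights and a threshold chosen for this example rather than values from their 1943 paper, shows how such a unit can compute a simple logical function:

```python
def mp_neuron(inputs, weights, threshold):
    """Binary threshold unit: output 1 if the weighted input sum meets the threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# With unit weights and a threshold of 2, the neuron computes logical AND.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mp_neuron((x1, x2), (1, 1), threshold=2))
```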
Early Deviance
Deviation from standard practice grew alongside the field's advancements. Researchers had become so focused on their vision of AI as a new technology that they overlooked concerns such as human-computer interaction. One example was the response to Joseph Weizenbaum's ELIZA, a simple chatbot that held conversations using pattern matching and substitution rules. Designed to mimic a psychotherapist, ELIZA's DOCTOR script led users to form emotional connections with the program. This shocked Weizenbaum, who had expected people to understand that they were engaging with a machine. Instead, many users found meaning in their interactions, treating ELIZA as a real conversation partner. This response demonstrated how readily people project human qualities onto machines, raising ethical questions about AI's potential to manipulate users. ELIZA revealed the dangers of overestimating AI's capabilities and highlighted the need for researchers to consider the psychological impact of their work.
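A minimal sketch of ELIZA-style pattern matching and substitution follows. These rules and responses are invented for illustration and are not Weizenbaum's actual DOCTOR script, which was far larger and also reflected pronouns (for example, rewriting "my" as "your") before answering.

```python
import re

# Each rule pairs a pattern with a response template; captured text is
# substituted into the reply. Real ELIZA scripts ranked rules by keyword.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\b(mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance):
    """Return the first matching rule's template, filled with the captured text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am unhappy at work"))  # -> How long have you been unhappy at work?
print(respond("It rained today"))       # -> Please go on.
```

Even this crude echoing of a user's own words back as a question suggests why people read understanding into the program where none existed.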
By the 1970s, the shortcomings of early AI had become more evident, particularly when measured against the predictions of scholars like H.A. Simon and Minsky. Those expectations were built on an idealized view of AI's rapid progress that failed to account for the limitations of the technology at the time. As research continued, it became clear that tasks requiring human intelligence and natural language understanding were far more complex than anticipated. This disconnect between researchers' promises and the actual performance of their machines began to expose deeper issues within the field. One of the most important moments came with James Lighthill's 1973 report, which sharply criticized AI's development, particularly its failure to approach general human intelligence. Lighthill's findings emphasized the gaps between theory and practice, noting that despite impressive breakthroughs in narrow problem-solving tasks, AI still could not tackle the complexity of real-world problems.