Artificial Intelligence — How We Got Here

Discover how AI’s past, from early ideas to deep learning, is shaping the future of work.

Highlights:

  • The history of artificial intelligence dates back to the 1950s, when AI theory was first developed at Dartmouth College in New Hampshire.
  • Machine learning, which lets computers learn from data, marked a paradigm shift in AI history. 
  • AI’s impact is already significant across accounting and other industries, and it continues to expand as technology progresses. 

Artificial intelligence (AI) is poised to revolutionize countless industries, including audit and accounting. It promises to reduce manual effort, streamline processes and help drive greater efficiencies, among other benefits. And while AI is certainly a hot topic that in many ways seems to have just recently burst onto the scene, it actually has a long history spanning decades that has led to its current state.

What is AI?

At its core, AI’s story is about the development of computer systems: complex programs that can perform tasks normally requiring human intelligence. These tasks include understanding natural language, recognizing patterns, learning from experience, making decisions and solving complex problems.

AI is now widely applied and influential. It is reshaping industries, work environments and societal norms. We have personal assistants on our smartphones, like Siri and Google Assistant. There are also more intricate systems that drive autonomous vehicles, diagnose diseases and improve customer service through chatbots. The footprint of AI in our lives is significant and growing every day. 

In the workplace, AI streamlines operations: it automates routine tasks, predicts trends and personalizes customer experiences. These functions boost efficiency and open new vistas of innovation and opportunity in fields such as auditing. But the road to this point was paved with years of theory, technological tinkering and a quest to understand computational intelligence’s potential.

To answer the question, “What is artificial intelligence?” and fully appreciate AI’s present and future, we must first understand its past. Let’s take a journey back in time to fully appreciate how AI has developed into one of today’s most promising and exciting technological developments.

The history of AI, explained

Scientists sowed the first seeds of AI in a quest to build machines capable of human thought and action. What began as speculative fascination eventually took the form of practical computing. The first generation of AI research focused on problem-solving and logical reasoning, aiming to make machines process language, recognize patterns and solve complex problems.

Who first developed AI theory?

The history of AI theory began at Dartmouth College in New Hampshire in 1956, at a workshop led by John McCarthy and attended by prominent thinkers including Allen Newell and Herbert A. Simon. There, science fiction began its transition into a scientific pursuit, and researchers formally established the field of AI.

The ‘Logic Theorist’

The Logic Theorist, developed by Allen Newell, J.C. Shaw and Herbert Simon in 1956 at the RAND Corporation, automated mathematical problem-solving. It was one of the first AI programs shown to solve problems in a specific domain as well as, and sometimes better than, humans. It excelled at problems in propositional calculus, a branch of formal logic that deals with logical statements and the relationships between them.

The program represented problems symbolically and used algorithms to manipulate those symbols according to established rules, or “heuristics.” This let it break complex problems into simpler sub-problems and then solve the sub-problems step by step, much like a human mathematician. The Logic Theorist was an early example of AI applied to problem-solving and laid the groundwork for future research in this area.

General Problem Solver

Following the success of the Logic Theorist, Newell and Simon set out to create a more adaptable AI program, and in 1957 they succeeded with the General Problem Solver (GPS). They designed GPS as a universal problem solver that could tackle a broad range of problems rather than a single domain like its predecessor. This aspiration marked a significant milestone in AI research: the pursuit of a machine that could emulate the broad problem-solving skills of the human mind.

The General Problem Solver approached tasks by breaking them into smaller, more manageable parts. It used “means-ends analysis,” where it identified the differences between the present state and the goal state and searched for actions to minimize the gap. This method allowed the GPS to solve structured problems logically, mirroring the step-by-step reasoning process humans often employ.
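To make means-ends analysis concrete, here is a minimal sketch in Python. The errand-style states and operators are invented for illustration and are not GPS’s actual representation; the point is the core loop of finding a difference between the current and goal states and applying an operator that shrinks it.

```python
# A minimal sketch of means-ends analysis, loosely in the spirit of GPS.
# The goal, states and operators are illustrative assumptions, not GPS code.

GOAL = {"at_bank", "has_cash", "bills_paid"}

# Each operator: (name, preconditions, facts it adds)
OPERATORS = [
    ("drive to bank", set(),        {"at_bank"}),
    ("withdraw cash", {"at_bank"},  {"has_cash"}),
    ("pay bills",     {"has_cash"}, {"bills_paid"}),
]

def means_ends(state, goal):
    plan = []
    while not goal <= state:                      # a difference remains
        difference = goal - state
        # pick an operator whose effects reduce the difference
        # and whose preconditions are already satisfied
        for name, preconditions, effects in OPERATORS:
            if effects & difference and preconditions <= state:
                state = state | effects
                plan.append(name)
                break
        else:
            raise RuntimeError("no applicable operator")
    return plan

print(means_ends(set(), GOAL))
# ['drive to bank', 'withdraw cash', 'pay bills']
```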

Shakey the Robot

Developed during the late 1960s at Stanford Research Institute (now SRI International) in California, Shakey was the first robot able to make decisions and solve problems autonomously. Named for its somewhat unstable movement, Shakey was equipped with a camera, sensors and motors that allowed it to interact with and traverse its environment.

Shakey’s software let it view its surroundings, analyze situations and act on them using “if-then” statements. This approach helped Shakey navigate rooms, move items and execute tasks by fragmenting elaborate commands into simple actions.
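A toy condition-action loop gives a feel for this style of control. The percepts and actions below are invented for illustration and are not Shakey’s real software.

```python
# A toy "if-then" controller in the spirit of Shakey's task decomposition.
# The percepts and actions are invented for illustration only.

def choose_action(percepts):
    if percepts["obstacle_ahead"]:
        return "turn"
    if not percepts["at_target"]:
        return "move forward"
    return "push block"

print(choose_action({"obstacle_ahead": False, "at_target": False}))  # move forward
```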

Shakey’s development was a major advance in robotics and the history of artificial intelligence. It underscored the potential of merging movement with decision-making, and it served as a foundation for further research in robotics, particularly in self-navigation and problem-solving.

Expert systems 

By the 1970s, AI had started making its mark in business through the introduction of expert systems. Expert systems were a giant leap for AI: they tackled specific challenges by emulating the decision-making of human specialists. They were designed to solve complex problems in narrow domains, such as diagnosing diseases in medicine, making financial forecasts in economics or interpreting geological data for oil exploration.

Expert systems combined a knowledge base with a set of inference rules. They were effective because they could use vast, specialized knowledge that often surpassed that of any single human mind.
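A minimal forward-chaining sketch shows the basic mechanics of a knowledge base plus inference rules. The facts and rules below are invented for illustration and are not drawn from any real expert system.

```python
# A tiny rule-based inference loop: keep applying "if conditions then conclusion"
# rules until no new facts can be derived. Facts and rules are illustrative only.

facts = {"fever", "cough"}

# Each rule: (conditions that must all hold, conclusion to add)
rules = [
    ({"fever", "cough"},       "possible_flu"),
    ({"possible_flu", "rash"}, "see_specialist"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # ['cough', 'fever', 'possible_flu']
```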

Machine learning

Machine learning, on the other hand, marked a paradigm shift in AI history. Where expert systems rely on predefined rules, machine learning lets computers learn from data. This approach allows computers to improve their performance on a task over time without being explicitly programmed for every possible situation. Machine learning encompasses many techniques, including neural networks, decision trees and reinforcement learning, each suited to different types of tasks.
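The contrast with rule-based systems can be shown in a few lines: instead of an expert hand-writing a rule, a program derives one from labeled examples. The invoice amounts and labels below are made up purely for illustration.

```python
# "Learning" a simple decision rule from data rather than hard-coding it.
# The (amount, label) pairs are invented: 1 = flagged for human review, 0 = routine.

examples = [(120, 0), (300, 0), (450, 0), (2200, 1), (5100, 1), (8800, 1)]

# Try each observed amount as a threshold and keep the one with the fewest errors
best_threshold, best_errors = None, len(examples)
for threshold, _ in examples:
    errors = sum((amount >= threshold) != bool(label) for amount, label in examples)
    if errors < best_errors:
        best_threshold, best_errors = threshold, errors

print(best_threshold)  # 2200: flag invoices at or above this amount
```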

Machine learning models are flexible and adept at learning, and this ability has made them central to AI’s evolution. They power speech recognition, autonomous vehicles and personalized content recommendations. Expert systems and machine learning are two distinct but complementary approaches to AI, and together they bring us closer to developing machines that think and learn like humans.

AI winter

The 1980s marked a defining period in AI’s history. The technology captured the public’s imagination but over-promised and under-delivered. AI entered a period known as the ‘AI winter,’ characterized by reduced funding and interest in the field.

The AI winter was a sobering chapter when the limitations of early AI technologies became apparent. Computational power could not yet support the complex neural networks necessary for robust AI, and funding dried up. For a time, the field of AI lingered in relative obscurity.

However, an AI renaissance was waiting on the other side of the valley. Thanks to Moore’s Law and parallel computing, data volumes exploded and processing times shrank.

Moore’s Law and parallel computing

Moore’s Law, a prediction made by engineer Gordon Moore in 1965, remains a foundational principle in the technological world. It posits that the number of transistors on a microchip doubles roughly every two years, while the cost of that computing power falls.

This swift advancement has propelled the progression of computing power, enabling AI systems to grow in complexity and capability. Within AI development, Moore’s Law has played a pivotal role in making it possible to process vast amounts of data and execute intricate algorithms previously deemed unattainable.
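A quick back-of-the-envelope calculation shows how rapidly that doubling compounds. The 1971 starting point of roughly 2,300 transistors, a commonly cited figure for the Intel 4004, is used here only to illustrate the arithmetic.

```python
# Compounding Moore's Law: double the transistor count every two years.
# The starting figure (~2,300 transistors, circa 1971) is a commonly cited value.

transistors = 2_300
for year in range(1971, 2021, 2):   # 25 doublings over 50 years
    transistors *= 2

print(f"{transistors:,}")           # 77,175,193,600 (roughly 77 billion)
```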

Parallel computing entails dividing substantial problems into smaller components that are solved concurrently across multiple processors. This method dramatically cuts the time needed to process large volumes of data or run complex algorithms. In AI, parallel computing streamlines the training of deep learning models on large datasets: by spreading the load across many processing units, researchers can use more complex models and iterate faster.
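Here is a minimal sketch of that idea using Python’s standard multiprocessing module: a large job is split into chunks that are processed concurrently. The squaring “work” is a stand-in for something heavier, such as scoring batches of records with a model.

```python
# Splitting one large job across several worker processes.
# The per-chunk "work" here is a placeholder for an expensive computation.

from multiprocessing import Pool

def process_chunk(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]       # split the data across 4 workers
    with Pool(processes=4) as pool:
        partial_sums = pool.map(process_chunk, chunks)
    print(sum(partial_sums))                      # same answer, computed in parallel
```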

The synergy between Moore’s Law and parallel computing has dramatically propelled the advancement of AI, enabling systems that mimic aspects of human intelligence with unprecedented fidelity.

Deep learning

Deep learning is a revolutionary subset of machine learning. It aims to imitate the human brain through artificial neural networks composed of layered algorithms. Each layer processes an aspect of the data, from the simplest features to the most complex. The “deep” in deep learning refers to the number of layers that transform the data: more layers allow for greater abstraction and complexity, letting the model recognize patterns and make decisions with astonishing precision.

One of the most striking aspects of deep learning is its ability to learn feature representations automatically. Traditional machine-learning algorithms rely on human-engineered features, but deep learning models can discover useful patterns in data on their own, an advantage that is especially clear in fields like image and speech recognition.
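A toy forward pass illustrates the layered idea: each layer transforms the output of the previous one, so deeper layers can represent more abstract combinations of the input. The layer sizes and random weights below are placeholders, not a trained model.

```python
# A tiny "deep" forward pass: stacked layers, each transforming the previous output.
# Weights are random placeholders; a real model would learn them from data.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                    # 4 input features

layers = [rng.normal(size=(4, 8)),        # layer 1: 4 -> 8
          rng.normal(size=(8, 8)),        # layer 2: 8 -> 8
          rng.normal(size=(8, 1))]        # layer 3: 8 -> 1

for weights in layers:
    x = np.maximum(0, x @ weights)        # linear transform + ReLU activation

print(x)                                  # the network's single output value
```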

The future evolution of AI

AI and automation will significantly transform the future landscape of work, and this shift is especially true in fields like auditing and accounting. One key concept is blending human and machine work: AI could handle routine tasks while practitioners focus on analysis, strategy and human connections.

This collaboration will let practitioners shift to roles that require human judgment, innovation and ethical oversight, such as strategic advising and regulatory compliance. Another idea suggests that accountants will evolve into data scientists, focusing on understanding complex data patterns.

Moreover, there is speculation about new accounting jobs opening up that involve ethical monitoring and improving AI systems. These concepts point to a future where artificial intelligence technology will enhance and empower the profession. These transformations will require learning new skills and will create new opportunities to add value in unprecedented ways.

The AI renaissance has brought about a new era of possibilities. With advances in technology and computing power, AI research has seen rapid progress, particularly in deep learning. AI’s impact is already significant through applications developed for various industries, and it continues to expand as technology progresses. Its future promises to be just as exciting as its past.