- Hands-On Artificial Intelligence with Unreal Engine
- Francesco Sapio
A glance into the past
It might come as a surprise to some of you, but the story of AI started well before computers. In fact, even the ancient Greeks hypothesized the existence of intelligent machines. A famous example is the bronze giant Talos, who protected the island of Crete from invaders. Another is the golden helpers of Hephaestus, who assisted the god in his volcano forge along with the Cyclopes. In the 17th century, René Descartes wrote about automatons that could think, and believed that animals were no different from machines, which could be replicated with pulleys, pistons, and cams.
However, the core of this story starts in 1931, when the Austrian logician, mathematician, and philosopher Kurt Gödel proved that all true statements in first-order logic are derivable. On the other hand, this is not true for higher-order logics, in which some true (or false) statements are unprovable. This made first-order logic a good candidate for automating the derivation of logical consequences. Sounds complicated? Well, you can imagine how that sounded to the ears of his traditionalist contemporaries.

In 1937, Alan Turing, an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist, pointed out some of the limits of "intelligent machines" with the halting problem: it is not possible, in general, to predict a priori whether a program will terminate without actually running it. This has many consequences in theoretical computer science. However, the fundamental step happened thirteen years later, in 1950, when Turing wrote his famous paper "Computing Machinery and Intelligence", in which he described the imitation game, nowadays mostly known as "The Turing Test": a way to define what an intelligent machine is.
In the 1940s, some attempts were made to emulate biological systems: McCulloch and Pitts developed a mathematical model of the neuron in 1943, and Marvin Minsky built a machine that was able to emulate 40 neurons with 3,000 vacuum tubes in 1951. However, these early efforts soon faded into obscurity.
From the late 1950s through to the early 1980s, a great portion of AI research was devoted to "symbolic systems". These are based on two components: a knowledge base made of symbols, and a reasoning algorithm that uses logical inference to manipulate those symbols in order to expand the knowledge base itself.
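To make this more concrete, here is a minimal, hypothetical sketch (not taken from the book) of what such a system can look like in C++: a knowledge base of symbolic facts is repeatedly expanded by a forward-chaining reasoner that applies simple if-then rules until no new symbols can be derived. All the facts and rule names used here are illustrative assumptions.

```cpp
#include <iostream>
#include <set>
#include <string>
#include <vector>

// A rule: if every premise is already in the knowledge base,
// the conclusion can be added to it.
struct Rule {
    std::vector<std::string> premises;
    std::string conclusion;
};

int main() {
    // Knowledge base: the set of symbols we currently hold as true.
    std::set<std::string> facts = {"HasFur", "GivesMilk"};

    // Reasoning rules expressed over those symbols.
    std::vector<Rule> rules = {
        {{"HasFur", "GivesMilk"}, "IsMammal"},
        {{"IsMammal", "EatsMeat"}, "IsCarnivore"},
        {{"IsMammal"}, "IsAnimal"},
    };

    // Forward chaining: keep applying rules until the knowledge base stops growing.
    bool changed = true;
    while (changed) {
        changed = false;
        for (const Rule& rule : rules) {
            bool allPremisesKnown = true;
            for (const std::string& premise : rule.premises) {
                if (facts.count(premise) == 0) {
                    allPremisesKnown = false;
                    break;
                }
            }
            // insert().second is true only if the conclusion was not already known.
            if (allPremisesKnown && facts.insert(rule.conclusion).second) {
                changed = true;
            }
        }
    }

    // Print the expanded knowledge base: the original facts plus everything derived.
    for (const std::string& fact : facts) {
        std::cout << fact << "\n";
    }
    return 0;
}
```

Running this derives IsMammal and IsAnimal (but not IsCarnivore, since EatsMeat is never asserted), which is exactly the "expand the knowledge base" behavior described above.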
During this period, many brilliant minds made significant progress. A name worth mentioning is John McCarthy, who organized a conference at Dartmouth College in 1956, where the term "Artificial Intelligence" was first coined. Two years later, he invented the high-level programming language LISP, in which the first programs able to modify themselves were written. Other remarkable results include Gelernter's Geometry Theorem Prover in 1959, the General Problem Solver (GPS) by Newell and Simon in 1961, and the famous chatbot ELIZA by Weizenbaum, which in 1966 became the first program that could hold a conversation in natural language. Finally, the apotheosis of symbolic systems came in 1972 with the invention of PROLOG by the French scientist Alain Colmerauer.
Symbolic systems led to many AI techniques, which are still used in games, such as blackboard architectures, pathfinding, decision trees, state machines, and steering algorithms, and we will explore all of them throughout this book.
The trade-off of these systems is between knowledge and search: the more knowledge you have, the less you need to search, and the faster you can search, the less knowledge you need. This has even been proven mathematically by Wolpert and Macready in 1997 with their "No Free Lunch" theorems. We will have the chance to examine this trade-off in more detail later in this book.
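As a small, hypothetical illustration of this trade-off (not taken from the book), the sketch below answers the same pathfinding-style query in two ways: by running a breadth-first search at query time (all search, no stored knowledge), or by looking the answer up in a precomputed next-hop table (all knowledge, no search at query time). The tiny waypoint graph and the function names are illustrative assumptions.

```cpp
#include <iostream>
#include <queue>
#include <vector>

// A tiny waypoint graph: adjacency lists over 5 nodes.
const std::vector<std::vector<int>> graph = {
    {1},        // 0 <-> 1
    {0, 2, 3},  // 1 <-> 0, 2, 3
    {1, 4},     // 2 <-> 1, 4
    {1},        // 3 <-> 1
    {2}         // 4 <-> 2
};

// Pure search: run a breadth-first search at query time to find
// the next waypoint on a shortest path from 'start' toward 'goal'.
int NextHopBySearch(int start, int goal) {
    std::vector<int> parent(graph.size(), -1);
    std::queue<int> frontier;
    frontier.push(start);
    parent[start] = start;
    while (!frontier.empty()) {
        int node = frontier.front();
        frontier.pop();
        if (node == goal) break;
        for (int neighbor : graph[node]) {
            if (parent[neighbor] == -1) {
                parent[neighbor] = node;
                frontier.push(neighbor);
            }
        }
    }
    // Walk back from the goal until we reach the node right after 'start'.
    int node = goal;
    while (parent[node] != start) node = parent[node];
    return node;
}

int main() {
    // Pure knowledge: a next-hop table precomputed offline (here with the same
    // search), so answering a query becomes a single lookup with no search at all.
    const int n = static_cast<int>(graph.size());
    std::vector<std::vector<int>> nextHop(n, std::vector<int>(n));
    for (int s = 0; s < n; ++s)
        for (int g = 0; g < n; ++g)
            nextHop[s][g] = (s == g) ? s : NextHopBySearch(s, g);

    // Both approaches give the same answer; they just pay the cost at
    // different times (search at query time vs. stored knowledge).
    std::cout << NextHopBySearch(0, 4) << "\n";  // searches now
    std::cout << nextHop[0][4] << "\n";          // looks up precomputed knowledge
    return 0;
}
```

Both calls return the same waypoint; they simply pay the cost at different points, which is the knowledge-versus-search balance described above.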
At the beginning of the 1990s, symbolic systems became inadequate, because they proved hard to scale to larger problems. Also, some philosophical arguments arose against them, maintaining that symbolic systems are an incompatible model for organic intelligence. As a result, old and new techniques inspired by biology were developed. The old Neural Networks were dusted off the shelf, with the success of NETtalk in 1986, a program that was able to learn how to read aloud, and with the publication of the book "Parallel Distributed Processing" by Rumelhart and McClelland in the same year. In fact, the back-propagation algorithm was rediscovered, since it allows a Neural Network (NN) to actually learn.
In the last 30 years, AI research has taken new directions. Starting from Pearl's work on "Probabilistic Reasoning in Intelligent Systems", probability has been adopted as one of the principal tools for handling uncertainty. As a result, AI started to use many statistical techniques, such as Bayesian networks, Support Vector Machines (SVMs), Gaussian processes, and the Hidden Markov Model (HMM), which is widely used to represent the temporal evolution of the states of a system. Also, the introduction of large databases unlocked many possibilities in AI, and a whole new branch named "Deep Learning" arose.
However, it's important to keep in mind that, even as AI researchers discover newer and more advanced techniques, the old ones are not to be discarded. In fact, we will see how, depending on the problem and its size, a specific algorithm can shine.