Figure 0.8: Comparing neighboring Perlin noise values in one (left) and two (right) dimensions. The cells are shaded according to their Perlin noise value.
-
Two-dimensional noise works exactly the same way conceptually. The difference, of course, is that the values aren’t just written in a linear path along one row of the graph paper, but rather fill the whole grid. A given value will be similar to all of its neighbors: above, below, to the right, to the left, and along any diagonal, as in the right half of Figure 0.9.
+
Two-dimensional noise works exactly the same way conceptually. The difference, of course, is that the values aren’t just written in a linear path along one row of the graph paper, but rather fill the whole grid. A given value will be similar to all of its neighbors: above, below, to the right, to the left, and along any diagonal, as in the right half of Figure 0.8.
If you were to visualize this graph paper with each value mapped to the brightness of a color, you would get something that looks like clouds. White sits next to light gray, which sits next to gray, which sits next to dark gray, which sits next to black, which sits next to dark gray, and so on.
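Here’s a minimal sketch of that cloud-like visualization in p5.js. One assumption of my own: the 0.01 scaling factor, which controls how quickly the noise values change between neighboring pixels.
function setup() {
  createCanvas(200, 200);
  // Map 2D Perlin noise to grayscale brightness, pixel by pixel.
  for (let x = 0; x < width; x++) {
    for (let y = 0; y < height; y++) {
      // Scaling the coordinates down keeps neighboring values similar.
      let bright = noise(x * 0.01, y * 0.01) * 255;
      set(x, y, bright);
    }
  }
  updatePixels();
}
A larger scaling factor produces a choppier pattern; a smaller one, smoother clouds.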
The idea that an object’s state can vary over time is an important development. So far in this book, the objects (movers, particles, vehicles, boids, bodies) have generally existed in only one state. They might have moved with sophisticated behaviors and physics, but ultimately they remained the same type of object over the course of their digital lifetime. I’ve alluded to the possibility that these entities can change over time (for example, the weights of steering “desires” can vary), but I haven’t fully put this into practice. Now, with cellular automata, you’ll see how an object’s state can change based on a system of rules.
-
The development of cellular automata systems is typically attributed to Stanisław Ulam and John von Neumann, who were both researchers at the Los Alamos National Laboratory in New Mexico in the 1940s. Ulam was studying the growth of crystals, and von Neumann was imagining a world of self-replicating robots. You readd that right: robots that can build copies of themselves.
+
The development of cellular automata systems is typically attributed to Stanisław Ulam and John von Neumann, who were both researchers at the Los Alamos National Laboratory in New Mexico in the 1940s. Ulam was studying the growth of crystals, and von Neumann was imagining a world of self-replicating robots. You read that right: robots that can build copies of themselves.
Von Neumann’s original “cells” had 29 possible states, so perhaps the idea of self-replicating robots is a bit too complex of a starting point. Instead, imagine a row of dominoes, where each domino can be in one of two states: standing upright (1) or knocked down (0). Just as dominoes react to their neighboring dominoes, the behavior of each cell in a cellular automaton is influenced by the states of its neighboring cells.
This chapter will explore how even the most basic rules of something like dominoes can lead to a wide array of intricate patterns and behaviors, similar to natural processes like biological reproduction and evolution. Von Neumann’s work in self-replication and CA is conceptually similar to what’s probably the most famous cellular automaton, the “Game of Life,” which I’ll discuss in detail later in the chapter.
Perhaps the most significant (and lengthy) scientific work studying cellular automata arrived in 2002: Stephen Wolfram’s 1,280-page A New Kind of Science. Available in its entirety for free online, Wolfram’s book discusses how CA aren’t simply neat tricks, but are relevant to the study of biology, chemistry, physics, and all branches of science. In a moment, I’ll turn to building a simulation of Wolfram’s work, although I’ll barely scratch the surface of the theories Wolfram outlines—my focus will be on the code implementation, not the philosophical implications. If the examples spark your curiosity, you’ll find plenty more to read about in Wolfram’s book.
@@ -719,7 +719,7 @@
Exercise 7.11
Exercise 7.12
Create a CA in which each pixel is a cell and the pixel’s color is its state.
-
5) Historical. In the object-oriented Game of Life example, I used two variables to keep track of a cell’s current andd previous state. What if you use an array to keep track of a cell’s state history over a longer period? This relates to the idea of a “complex adaptive system,” one that has the ability to change its rules over time by learning from its history. (Stay tuned for more on this concept in Chapters 9 and 10.)
+
5) Historical. In the object-oriented Game of Life example, I used two variables to keep track of a cell’s current and previous state. What if you use an array to keep track of a cell’s state history over a longer period? This relates to the idea of a “complex adaptive system,” one that has the ability to change its rules over time by learning from its history. (Stay tuned for more on this concept in Chapters 9 and 10.)
Exercise 7.13
Visualize the Game of Life by coloring each cell according to how long it’s been alive or dead. Can you also use the cell’s history to inform the rules?
-
Take a moment to think back to a simpler time, when you wrote your first p5.js sketches and life was free and easy. What was a fundamental programming concept that you likely used in those first sketches and continue to use over and over again to this dady? Variables. Variables allow you to save data and reuse it while a program runs.
-
Of course, this is nothing new. In this book, you’ve moved far beyond sketches with just one or two simple variables, working up to sketches organized around more complex data structures: variables holding custom objects that include both data and functionality. You’ve used these complex data structures—classes—to build your own little worlds of movers and particles and vehicles and cells and trees. But there’s been a catch: in each and every example in this book, you’ve had to initialize the properties of these objects. Perhaps you made a whole set of particles with random colors and sizes, or a list of vehicles all starting at the same x,y position.
+
Take a moment to think back to a simpler time, when you wrote your first p5.js sketches and life was free and easy. What was a fundamental programming concept that you likely used in those first sketches and continue to use over and over again to this day? Variables. Variables allow you to save data and reuse it while a program runs.
+
Of course, this is nothing new. In this book, you’ve moved far beyond sketches with just one or two simple variables, working up to sketches organized around more complex data structures: variables holding custom objects that include both data and functionality. You’ve used these complex data structures—classes—to build your own little worlds of movers and particles and vehicles and cells and trees. But there’s been a catch: in each and every example in this book, you’ve had to worry about initializing the properties of these objects. Perhaps you made a whole set of particles with random colors and sizes, or a list of vehicles all starting at the same x,y position.
What if, instead of acting as “intelligent designers,” assigning the properties of the objects through randomness or thoughtful consideration, you could let a process found in nature—evolution—decide the values for you? Can you think of the variables of a JavaScript object as the object’s DNA? Can objects give birth to other objects and pass down their DNA to a new generation? Can a p5.js sketch evolve?
-
The answer to all these questions is a resounding yes, and getting to that answer will be the focus of this chapter . After all, this book would hardly be complete without tackling a simulation of one of the most powerful algorithmic processes found in nature itself, biological evolution. This chapter is dedicated to examining the principles behind evolutionary processes and finding ways to apply those principles in code.
+
The answer to all these questions is a resounding yes, and getting to that answer will be the focus of this chapter. After all, this book would hardly be complete without tackling a simulation of one of the most powerful algorithmic processes found in nature itself, biological evolution. This chapter is dedicated to examining the principles behind evolutionary processes and finding ways to apply those principles in code.
Genetic Algorithms: Inspired by Actual Events
The primary means for developing code systems that evolve are genetic algorithms (GAs for short), a type of algorithm inspired by the core principles of Darwinian evolutionary theory. In these algorithms, populations of potential solutions to a problem evolve over generations through processes that mimic natural selection in biological evolution. While computer simulations of evolutionary processes date back to the 1950s, much of our contemporary understanding of genetic algorithms stems from the work of John Holland, a professor at the University of Michigan whose 1975 book Adaptation in Natural and Artificial Systems pioneered GA research. Today, genetic algorithms are part of a wider field that’s often referred to as evolutionary computing.
To be clear, genetic algorithms are only inspired by genetics and evolutionary theory; GAs aren’t intended to precisely implement the science behind these fields. As I explore genetic algorithms in this chapter, I won’t be making Punnett squares (sorry to disappoint), and there will be no discussion of nucleotides, protein synthesis, RNA, or other topics related to the biological processes of evolution. I don’t care so much about creating a scientifically accurate simulation of evolution as it happens in the physical world; rather, I care about methods for applying evolutionary strategies in software.
@@ -18,7 +18,7 @@
Genetic Algorithms: Inspir
Ecosystem Simulation. The traditional computer science genetic algorithm and interactive selection technique are what you’ll likely find if you search online or read a textbook about artificial intelligence. But as you'll soon see, they don’t really simulate the process of evolution as it happens in the physical world. In this chapter, I’ll also explore techniques for simulating the process of evolution in an ecosystem of artificial creatures. How can the objects that move about a canvas meet each other, mate, and pass their genes on to a new generation? This could apply directly to the Ecosystem Project outlined at the end of each chapter. It will also be particularly relevant as I explore the concept of “neuro-evolution” in Chapter 10.
Why Use Genetic Algorithms?
-
To help illustrate the utility of the traditional genetic algorithm, I’m going to start with cats. No, not just your every day feline friends. I’m going to start with some purr-fect cats that possess a talent for typing, with the goal of producing the complete works of Shakespeare (Figure 9.1).
+
To help illustrate the utility of the traditional genetic algorithm, I’m going to start with cats. No, not just your everyday feline friends. I’m going to start with some purr-fect cats that paw-sess a talent for typing, with the goal of producing the complete works of Shakespeare (Figure 9.1).
Selection. There must be a mechanism by which some creatures have the opportunity to be parents and pass on their genetic information, while others don’t. This is commonly referred to as “survival of the fittest.” Take, for example, a population of gazelles that are chased by lions. The faster gazelles have a better chance of escaping the lions, increasing their chances of living longer, reproducing, and passing on their genetic information to offspring. The term fittest can be misleading, however. It’s often thought to mean biggest, fastest, or strongest, but while it can sometimes encompass physical attributes like size, speed, or strength, it doesn’t have to. The core of natural selection lies in whatever traits best suit an organism’s environment and increase the likelihood of survival and reproduction. Instead of asserting superiority, “fittest” can be better understood as “survival of those adapted to their environment” or even “survival of the survivors.” In the case of typing cats, for example, a more “fit” cat is one that has typed more words present in a given phrase of Shakespeare.
I want to emphasize the context in which I’m applying these Darwinian concepts: a simulated, artificial environment where specific goals can be quantified, all for the sake of creative exploration. Throughout history, the principles of genetics have been used to harm those who have been marginalized and oppressed by dominant societal structures. Therefore, it’s essential to approach projects involving genetic algorithms with careful consideration of the language used, and to ensure that the documentation and descriptions of the work are framed inclusively.
-
With these concepts established, I’ll begin walking through the narrative of the genetic algorithm. I'll do this in the context of typing monkeys. The algorithm itself will be divided into two parts: a set of conditions for initialization and the steps that are repeated over and over again until the correct phrase is found.
+
With these concepts established, I’ll begin walking through the narrative of the genetic algorithm. I'll do this in the context of typing cats. The algorithm itself will be divided into several steps that unfold over two parts: a set of conditions for initialization, and the steps that are repeated over and over again until the correct phrase is found.
Step 1: Creating a Population
-
For typing monkeys, the first step of the genetic algorithm is to create a population of phrases. I’m using the term “phrase” rather loosely to mean any string of characters. These phrases are the “creatures” of this example, though of course they aren’t very creature-like.
+
For typing cats, the first step of the genetic algorithm is to create a population of phrases. I’m using the term “phrase” rather loosely to mean any string of characters. These phrases are the “creatures” of this example, though of course they aren’t very creature-like.
In creating the population of phrases, the Darwinian principle of variation applies. Let’s say for the sake of simplicity that I’m trying to evolve the phrase “cat” and that I have a population of three phrases.
@@ -64,10 +64,10 @@
Step 1: Creating a Population
Sure, there’s variety in these three phrases, but try to mix and match the characters every which way and you’ll never get cat. There isn’t enough variety here to evolve the optimal solution. However, if there were a population of thousands of phrases, all generated randomly, chances are that at least one phrase will have a c as the first character, another an a as the second, and another a t as the third. A large population will most likely provide enough variety to generate the desired phrase. (In step 3 of the algorithm, I'll also demonstrate another mechanism to introduce more variation in case there isn’t enough in the first place.) Step 1 can therefore be described as follows:
Create a population of randomly generated elements.
-
“Element” is perhaps a better, more general-purpose term than “creature.” But what is the element itself? As you move through the examples in this chapter, you’ll see several different scenarios; you might have a population of images or a population of vehicles à la Chapter 5. The part that’s new in this chapter is that each element, each member of the population, has virtual “DNA,” a set of properties (you could also call them “genes”) that describe how a given element looks or behaves. In the case of the typing monkey, for example, the DNA could be a string of characters. With this in mind, I can be even more specific and describe step 1 of the genetic algorithm as:
+
“Element” is perhaps a better, more general-purpose term than “creature.” But what is the element itself? As you move through the examples in this chapter, you’ll see several different scenarios; you might have a population of images or a population of vehicles à la Chapter 5. The part that’s new in this chapter is that each element, each member of the population, has virtual “DNA,” a set of properties (you could also call them “genes”) that describe how a given element looks or behaves. In the case of the typing cats, for example, the DNA could be a string of characters. With this in mind, I can be even more specific and describe step 1 of the genetic algorithm as:
Create a population of N elements, each with randomly generated DNA.
-
In the field of genetics, there’s an important distinction between the concepts genotype and phenotype. The actual genetic code—the particular sequence of molecules in the DNA—is an organism’s genotype. This is what gets passed down from generation to generation. The phenotype, by contrast, is the expression of that data—this monkey will be tall, that monkey will be short, that other monkey will be a particularly fast and effective typist.
-
The genotype.phenotype distinction is key to creatively using genetic algorithms. What are the objects in your world? How will you design the genotype for those objects—the data structure to store each object’s properties, and the values those properties take on? And how will you use that information to design the phenotype? That is, what do you want these variables to actually express?
+
In the field of genetics, there’s an important distinction between the concepts genotype and phenotype. The actual genetic code—the particular sequence of molecules in the DNA—is an organism’s genotype. This is what gets passed down from generation to generation. The phenotype, by contrast, is the expression of that data—this cat will be big, that cat will be small, that other cat will be a particularly fast and effective typist.
+
The genotype/phenotype distinction is key to creatively using genetic algorithms. What are the objects in your world? How will you design the genotype for those objects—the data structure to store each object’s properties, and the values those properties take on? And how will you use that information to design the phenotype? That is, what do you want these variables to actually express?
We do this all the time in graphics programming, taking values (the genotype) and interpreting them in a visual way (the phenotype). The simplest example is probably color.
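As a quick sketch of that mapping (the three gene values here are arbitrary, chosen just for illustration):
// Three numbers between 0 and 1 serve as the genotype.
let genes = [0.1, 0.5, 0.9];

function setup() {
  createCanvas(200, 200);
  // Set the RGB ranges to 0-1 so genes translate directly to color.
  colorMode(RGB, 1);
  // The phenotype is the expression of that data, here as a color.
  background(color(genes[0], genes[1], genes[2]));
}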
@@ -124,15 +124,15 @@
Step 1: Creating a Population
-
A nice thing about the monkey-typing example is that there’s no difference between genotype and phenotype. The DNA data itself is a string of characters, and the expression of that data is that very string.
+
A nice thing about the cat-typing example is that there’s no difference between genotype and phenotype. The DNA data itself is a string of characters, and the expression of that data is that very string.
Step 2: Selection
-
The second step of the genetic algorithm is to apply the Darwinian principle of selection.This involves evaluating the population and determining which members are “fit” to be selected as parents for the next generation. The process of selection can be divided into two steps.
+
The second step of the genetic algorithm is to apply the Darwinian principle of selection. This involves evaluating the population and determining which members are “fit” to be selected as parents for the next generation. The process of selection can be divided into two steps.
Evaluate fitness.
Create a mating pool.
For the first of these steps, I’ll need to design a fitness function, a function that produces a numeric score to describe the fitness of a given element of the population. This, of course, isn’t how the real world works at all. Creatures aren’t given a score; rather, they simply survive or they don’t survive. In the case of a traditional genetic algorithm, however, where the goal is to evolve an optimal solution to a problem, a mechanism to numerically evaluate any given possible solution is required.
-
Consider the current scenario, the typing monkey. Again, for simplicity, I’ll say the target phrase is cat. Assume three members of the population: hut, car, and box. Car is obviously the most fit, given that it has two correct characters, hut has only one, and box has zero. And there it is, a fitness function:
+
Consider the current scenario, the typing cats. Again, for simplicity, I’ll say the target phrase is cat. Assume three members of the population: hut, car, and box. Car is obviously the most fit, given that it has two correct characters, hut has only one, and box has zero. And there it is, a fitness function:
\text{fitness} = \text{the number of correct characters}
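In code, that fitness function might look something like this minimal sketch (a standalone function for illustration; the chapter’s examples will fold this logic into a class):
// Count the characters that match the target, position by position.
function calculateFitness(phrase, target) {
  let score = 0;
  for (let i = 0; i < phrase.length; i++) {
    if (phrase.charAt(i) === target.charAt(i)) {
      score++;
    }
  }
  return score;
}
With the target cat, this returns 2 for car, 1 for hut, and 0 for box.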
@@ -271,7 +271,7 @@
Step 3: Reproduction
Crossover.
Mutation.
-
The first step, crossover, involves creating a child out of the genetic code of two parents. In the case of the monkey-typing example, say I’ve picked the following two parent phrases from the mating pool, as outlined in the selection step (I’m simplifying and using strings of length 6, instead of the 18 characters required for “to be or not to be”).
+
The first step, crossover, involves creating a child out of the genetic code of two parents. In the case of the cat-typing example, say I’ve picked the following two parent phrases from the mating pool, as outlined in the selection step (I’m simplifying and using strings of length 6, instead of the 18 characters required for “to be or not to be”).
@@ -294,7 +294,7 @@
Step 3: Reproduction
Figure 9.4: Two examples of crossover from a random midpoint
-
Another possibility is to randomly select a parent for each character in the child string, as in Figure 9.5. You can think of this as flipping a coin six times: heads, take a character from parent A; tails, from parent B. This yields even more possible outcomes: “codurg”, “natine”, “notune”, “cadune”, and so on.
+
Another possibility is to randomly select a parent for each character in the child string, as in Figure 9.5. You can think of this as flipping a coin six times: heads, take a character from parent A; tails, from parent B. This yields even more possible outcomes: “codurg,” “natine,” “notune,” “cadune,” and so on.
Figure 9.5: Crossover with a “coin-flipping” approach
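Here’s a rough sketch of both crossover strategies, operating on parent genes stored as arrays of characters (standalone functions for illustration; the DNA class later in the chapter implements its own crossover() method):
// Midpoint crossover: characters before a random midpoint come from
// parent A, and the rest come from parent B.
function crossoverMidpoint(parentA, parentB) {
  let midpoint = floor(random(parentA.length));
  let child = [];
  for (let i = 0; i < parentA.length; i++) {
    child[i] = (i < midpoint) ? parentA[i] : parentB[i];
  }
  return child;
}

// "Coin-flipping" crossover: each character comes from a random parent.
function crossoverCoinFlip(parentA, parentB) {
  let child = [];
  for (let i = 0; i < parentA.length; i++) {
    child[i] = (random(1) < 0.5) ? parentA[i] : parentB[i];
  }
  return child;
}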
@@ -307,7 +307,7 @@
Step 3: Reproduction
Figure 9.6: Mutating the child phrase
-
Mutation is described in terms of a rate. A given genetic algorithm might have a mutation rate of 5 percent, or 1 percent, or 0.1 percent, for example. Say I’ve arrived at the child phrase “catire”. If the mutation rate is 1 percent, this means that for each character in the phrase, there’s a 1 percent chance that it will mutate before being “born” into the next generation. What does it mean for a character to mutate? In this case, mutation could be defined as picking a new random character. A 1 percent probability is fairly low, so most of the time mutation won’t occur at all in a six-character string (about 94 percent of the time, in fact). However, when it does, the mutated character is replaced with a randomly generated one (see Figure 9.6).
+
Mutation is described in terms of a rate. A given genetic algorithm might have a mutation rate of 5 percent, or 1 percent, or 0.1 percent, for example. Say I’ve arrived through crossover at the child phrase “catire.” If the mutation rate is 1 percent, this means that for each character in the phrase, there’s a 1 percent chance that it will mutate before being “born” into the next generation. What does it mean for a character to mutate? In this case, mutation could be defined as picking a new random character. A 1 percent probability is fairly low, so most of the time mutation won’t occur at all in a six-character string (about 94 percent of the time, in fact). However, when it does, the mutated character is replaced with a randomly generated one (see Figure 9.6).
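Mutation could then be sketched as follows, assuming genes stored as an array of characters and the randomCharacter() helper shown with Example 9.1:
function mutate(genes, mutationRate) {
  for (let i = 0; i < genes.length; i++) {
    // Each character has a mutationRate chance of being replaced.
    if (random(1) < mutationRate) {
      genes[i] = randomCharacter();
    }
  }
}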
As you’ll see in the coming examples, the mutation rate can greatly affect the behavior of the system. A very high mutation rate (such as, say, 80 percent) would negate the entire evolutionary process and leave you with something more akin to a brute force algorithm. If the majority of a child’s genes are generated randomly, then you can’t guarantee that the more “fit” genes occur with greater frequency with each successive generation.
Overall, the process of selection (picking two parents) and reproduction (crossover and mutation) is repeated N times until there’s a new population of N child elements.
Step 4: Repeat!
@@ -335,7 +335,7 @@
Step 1: Initialization
class DNA {
}
-
What should go in the DNA class? For a typing monkey, its DNA would be the random phrase it types, a string of characters. However, using an array of characters (rather than a string object) provides a more generic template that can extend easily to other data types. For example, the DNA of a creature in a physics system could be an array of vectors—or for an image, an array of numbers (RGB pixel values). Any set of properties can be listed in an array, and even though a string is convenient for this particular scenario, an array will serve as a better foundation for future evolutionary examples.
+
What should go in the DNA class? For a typing cat, its DNA would be the random phrase it types, a string of characters. However, using an array of characters (rather than a string object) provides a more generic template that can extend easily to other data types. For example, the DNA of a creature in a physics system could be an array of vectors—or for an image, an array of numbers (RGB pixel values). Any set of properties can be listed in an array, and even though a string is convenient for this particular scenario, an array will serve as a better foundation for future evolutionary examples.
The genetic algorithm specifies that I create a population of N elements, each with randomly generated genes. The DNA constructor therefore includes a loop to fill in each element of the genes array.
class DNA {
constructor(length) {
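    // Hedged completion of the constructor, per the text above: fill
    // the genes array with a random character for each element.
    this.genes = [];
    for (let i = 0; i < length; i++) {
      this.genes[i] = randomCharacter();
    }
  }
}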
@@ -698,7 +698,7 @@
Example 9.1: Gene
let c = floor(random(32, 127));
return String.fromCharCode(c);
}
-
In Example 9.1, you might notice that new child elements are directly added to the population array. This approach is possible because I have a separate mating pool array that contains references to the original parent elements. However, if I were to instead use the “relay race” weightedSelection() function, I'd need to create a temporary “new population” array. This temporary array would hold the child elements and replace the original population array only after the reproduction step is completed. You’ll see this implemented in Example 9.2.
+
In Example 9.1, you might notice that new child elements are directly added to the population array. This approach is possible because I have a separate mating pool array that contains references to the original parent elements. However, if I were to instead use the “relay race” weightedSelection() function, I'd need to create a temporary array for the new population. This temporary array would hold the child elements and replace the original population array only after the reproduction step is completed. You’ll see this implemented in Example 9.2.
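As a rough preview of that temporary-array pattern (Example 9.2 has the actual implementation; weightedSelection() and the DNA methods are assumed from the surrounding text):
reproduction() {
  // Children accumulate in a separate array.
  let newPopulation = [];
  for (let i = 0; i < this.population.length; i++) {
    let parentA = this.weightedSelection();
    let parentB = this.weightedSelection();
    let child = parentA.crossover(parentB);
    child.mutate(this.mutationRate);
    newPopulation[i] = child;
  }
  // Only now does the new generation replace the old one.
  this.population = newPopulation;
}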
Exercise 9.6
Add features to Example 9.1 to report more information about the progress of the genetic algorithm itself. For example, show the phrase closest to the target in each generation, as well as a report on the number of generations, the average fitness, and so on. Stop the genetic algorithm once it has solved the phrase. Consider writing a Population class to manage the GA, instead of including all the code in draw().
@@ -789,7 +789,7 @@
Key #1: The Global Variables
-
Without any mutation at all (0 percent), you just have to get lucky. If all the correct characters are present somewhere in an element of the initial population, you’ll evolve the phrase very quickly. If not, there’s no way for the sketch to ever reach the exact phrase. Run it a few times and you’ll see both instances. In addition, once the mutation rate gets high enough (10 percent, for example), there’s so much randomness involved (1 out of every 10 letters is random in each new child) that the simulation is pretty much back to a random typing monkey. In theory, it will eventually solve the phrase, but you may be waiting much, much longer than is reasonable.
+
Without any mutation at all (0 percent), you just have to get lucky. If all the correct characters are present somewhere in an element of the initial population, you’ll evolve the phrase very quickly. If not, there’s no way for the sketch to ever reach the exact phrase. Run it a few times and you’ll see both instances. In addition, once the mutation rate gets high enough (10 percent, for example), there’s so much randomness involved (1 out of every 10 letters is random in each new child) that the simulation is pretty much back to a random typing cat. In theory, it will eventually solve the phrase, but you may be waiting much, much longer than is reasonable.
Key #2: The Fitness Function
Playing around with the mutation rate or population size is pretty easy and involves little more than typing numbers in your sketch. The real hard work of developing a genetic algorithm is in writing the fitness function. If you can’t define your problem’s goals and evaluate numerically how well those goals have been achieved, then you won’t have successful evolution in your simulation.
Before I move on to other scenarios exploring more sophisticated fitness functions, I want to look at flaws in my Shakespearean fitness function. Consider solving for a phrase that isn’t 18 characters long, but 1,000. And take two elements of the population, one with 800 characters correct and one with 801. Here are their fitness scores:
@@ -816,17 +816,10 @@
Key #2: The Fitness Function
There are a couple of problems here. First, I’m adding elements to the mating pool N times, where N equals fitness multiplied by 100. But objects can only be added to an array a whole number of times, so A and B will both be added 80 times, giving them an equal probability of being selected. Even with an improved solution that takes floating point probabilities into account, 80.1 percent is only a teeny tiny bit higher than 80 percent. But getting 801 characters right is a whole lot better than 800 in the evolutionary scenario. I really want to make that additional character count. I want the fitness score for 801 characters to be substantially better than the score for 800.
To put it another way, Figure 9.8 shows graphs of two possible fitness functions.
-
-
-
-
- Figure 9.8: On the left, a fitness graph of y=x, on the right y = x^2
-
-
-
-
-
-
+
+
+ Figure 9.8: On the left, a fitness graph of y=x, on the right y = x^2
+
On the left is a linear graph; as the number of characters goes up, so does the fitness score. By contrast, in the graph on the right, as the number of characters goes up, the fitness score goes way up. That is, the fitness increases at an accelerating rate as the number of correct characters increases.
I can achieve this second type of result in a number of different ways. For example, I could say:
\text{fitness} = \text{(correct characters)}^2
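To confirm the arithmetic for the earlier 1,000-character scenario: under the linear function, the two elements score 800 and 801, a difference of barely 0.1 percent. Squaring changes the picture:
\text{fitness}_{800} = 800^2 = 640{,}000
\text{fitness}_{801} = 801^2 = 641{,}601
That one extra correct character is now worth 1,601 points rather than 1, so the better element becomes proportionally much more likely to be selected.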
@@ -903,7 +896,7 @@
Key #3: The Genotype and Phenotype
I started with the Shakespeare example because of how easy it was to design both the genotype (an array of characters) and its expression, the phenotype (the string displayed on the canvas). It isn’t always this easy, however. For example, when talking about the fitness function for a soccer game, I happily assumed the existence of computer-controlled kickers that each have a “set of parameters that determine how they kick a ball towards the goal,” but actually determining what those parameters are and how you choose to encode them would require some thought and creativity. And of course, there’s no one correct answer: how you design the system is up to you.
The good news—and I hinted at this earlier in the chapter—is that you’ve been translating genotypes (data) into phenotypes (expression) all along. Anytime you write a class in p5.js, you make a whole bunch of variables.
What’s great about dividing the genotype and phenotype into separate classes (DNA and Rocket, for example) is that when it comes time to build all of the code, you’ll notice that the DNA class I developed earlier remains intact. The only thing that changes is the kind of data stored in the array (numbers, vectors, and so on) and the expression of that data in the phenotype class.
In the next section, I'll follow this idea a bit further and walk through the necessary steps to implement an example that involves moving bodies and an array of vectors as DNA.
Evolving Forces: Smart Rockets
-
I mentioned rockets for a specific reason: in 2009, Jer Thorp released a genetic algorithms example on his blog entitled “Smart Rockets.” Throp pointed out that NASA uses evolutionary computing techniques to solve all sorts of problems, from satellite antenna design to rocket firing patterns. This inspired him to create a Flash demonstration of evolving rockets.
+
I mentioned rockets for a specific reason: in 2009, Jer Thorp released a genetic algorithms example on his blog entitled “Smart Rockets.” Thorp pointed out that NASA uses evolutionary computing techniques to solve all sorts of problems, from satellite antenna design to rocket firing patterns. This inspired him to create a Flash demonstration of evolving rockets.
+
Here’s the scenario: a population of rockets launches from the bottom of the screen with the goal of hitting a target at the top of the screen. There are obstacles blocking a straight-line path to the target (see Figure 9.9).
-
- Figure 9.9: A population of smart rockets seeking a delicious strawberry planet.
+
+ Figure 9.9: A population of smart rockets seeking a delicious strawberry planet
-
Here’s the scenario: a population of rockets launches from the bottom of the screen with the goal of hitting a target at the top of the screen. There are obstacles blocking a straight-line path to the target (see Figure 9.9).
+
Each rocket is equipped with five thrusters of variable strength and direction (Figure 9.10). The thrusters don’t fire all at once and continuously; rather, they fire one at a time in a custom sequence.
-
- Figure 9.10: A single smart rocket with five thrusters, carrying Clawdius the astronaut.
+
+ Figure 9.10: A single smart rocket with five thrusters, carrying Clawdius the astronaut
-
Each rocket is equipped with five thrusters of variable strength and direction (Figure 9.10). The thrusters don’t fire all at once and continuously; rather, they fire one at a time in a custom sequence.
In this section, I'm going to evolve my own simplified smart rockets, inspired by Jer Thorp’s. When I get to the end of the section, I’ll leave implementing some of Thorp’s additional advanced features as an exercise.
My rockets will have only one thruster, and this thruster will be able to fire in any direction with any strength for every frame of animation. This isn’t particularly realistic, but it will make building out the example a little easier. (You can always make the rocket and its thrusters more advanced and realistic later.)
Developing the Rockets
@@ -1035,9 +1028,10 @@
Developing the Rockets
}
}
}
-
The happy news here is that I don’t really have to do anything else to the DNA class. All of the functionality for the typing cat (crossover and mutation) still applies. The one difference I do have to consider is how to initialize the array of genes. With the typing monkey, I had an array of characters and picked a random character for each element of the array. Now I’ll do exactly the same thing and initialize a DNA sequence as an array of random vectors.
+
The happy news here is that I don’t really have to do anything else to the DNA class. All of the functionality for the typing cat (crossover and mutation) still applies. The one difference I do have to consider is how to initialize the array of genes. With the typing cat, I had an array of characters and picked a random character for each element of the array. Now I’ll do exactly the same thing and initialize a DNA sequence as an array of random vectors.
Your instinct in creating a random vector might be as follows:
let v = createVector(random(-1, 1), random(-1, 1));
+
This is perfectly fine and will likely do the trick. However, if I were to draw every single possible vector that could be picked, the result would fill a square (see Figure 9.11, left). In this case, it probably doesn’t matter, but there’s a slight bias to the diagonals given that a vector from the center of a square to a corner is longer than a purely vertical or horizontal one.
@@ -1049,7 +1043,6 @@
Developing the Rockets
Figure 9.11: On the left, vectors created with random x and y values. On the right, using p5.Vector.random2D().
-
This is perfectly fine and will likely do the trick. However, if I were to draw every single possible vector that could be picked, the result would fill a square (see left of Figure 9.11). In this case, it probably doesn’t matter, but there’s a slight bias to the diagonals given that a vector from the center of a square to a corner is longer than a purely vertical or horizontal one.
What would be better here is to pick a random angle and make a vector of length 1 from that angle, such that the results form a circle (see right of Figure 9.11). This could be done with a quick polar to Cartesian conversion, but an even quicker path to the result is just to use p5.Vector.random2D().
for (let i = 0; i < length; i++) {
//{!1} A random unit vector
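  // Hedged completion of the truncated snippet, per the text above:
  this.genes[i] = p5.Vector.random2D();
}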
@@ -1124,7 +1117,7 @@
Managing the Population
}
}
- // Calculate fitness for each rocket
+ // Calculate the fitness for each rocket.
fitness() {
for (let i = 0; i < this.population.length; i++) {
this.population[i].calculateFitness();
@@ -1133,12 +1126,12 @@
Managing the Population
// The selection method normalizes all the fitness values.
selection() {
- // Sum all of the fitness values
+ // Sum all of the fitness values.
let totalFitness = 0;
for (let i = 0; i < this.population.length; i++) {
totalFitness += this.population[i].fitness;
}
- // Divide by the total to normalize the fitness values
+ // Divide by the total to normalize the fitness values.
for (let i = 0; i < this.population.length; i++) {
this.population[i].fitness /= totalFitness;
}
@@ -1156,7 +1149,7 @@
Managing the Population
// Rocket goes in the new population
newPopulation[i] = new Rocket(320, 240, child);
}
- // Now the new population is the current one
+ // Now the new population is the current one.
this.population = newPopulation;
}
There’s one more fairly significant change, however. With typing cats, a random phrase was evaluated as soon as it was created. The string of characters had no lifespan; it existed purely for the purpose of calculating its fitness. The rockets, however, need to live for a period of time before they can be evaluated—that is, they need to be given a chance to make their attempt at reaching the target. Therefore, I need to add one more method to the Population class that runs the physics simulation itself. This is identical to what I did in the run() method of a particle system: update all the particle positions and draw them.
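That additional method might be sketched as follows. I’m assuming the name live() and that each Rocket object has a run() method that updates and draws it, mirroring a particle system:
live() {
  for (let i = 0; i < this.population.length; i++) {
    // Update and draw each rocket, just like particles in a particle system.
    this.population[i].run();
  }
}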
@@ -1225,12 +1218,12 @@
Example 9.2: Smart Rockets
}
}
-// Move the target if the mouse is pressed, rockets will adapt to the new target
+// Move the target if the mouse is pressed. The rockets will adapt to the new target.
function mousePressed() {
target.x = mouseX;
target.y = mouseY;
}
-
At the bottom of the code, you’ll see that I’ve added a new feature: when the mouse is pressed, the target position is moved to the coordinates of the mouse cursor. This change allows you to observe how the rockets adapt and adjust their trajectories in real-time towards the new target position, as the system continuously evolves.
+
At the bottom of the code, you’ll see that I’ve added a new feature: when the mouse is pressed, the target position is moved to the coordinates of the mouse cursor. This change allows you to observe how the rockets adapt and adjust their trajectories toward the new target position as the system continuously evolves in real time.
Making Improvements
My smart rockets work, but they aren’t particularly exciting yet. After all, the rockets simply evolve toward having DNA with a bunch of vectors that point straight at the target. To make things more interesting, I’m going to suggest two improvements for the example. For starters, when I first introduced the smart rocket scenario, I said the rockets should evolve the ability to avoid obstacles. Adding this feature will make the system more complex and demonstrate the power of the evolutionary algorithm more effectively.
To evolve obstacle avoidance, I need some obstacles to avoid. I can easily create rectangular, stationary obstacles by implementing a class of Obstacle objects that store their own position and dimensions.
@@ -1289,12 +1282,12 @@
Making Improvements
if (distance < this.recordDistance) {
this.recordDistance = distance;
}
-
Additionally, a rocket deserves a reward based on the speed with which it reaches its target. Since the Obstacle class already has an implemented contains() method, there's no reason why the target can't also be an obstacle. It's just an obstacle that the rocket wants to hit! I can also add another flag called hitTarget to keep track of this.
+
Additionally, a rocket deserves a reward based on the speed with which it reaches its target. For that, I need a way of knowing when a rocket has hit the target. Actually, I already have one: the Obstacle class has a contains() method, and there’s no reason why the target can’t also be implemented as an obstacle. It’s just an obstacle that the rocket wants to hit! I can use the contains() method to set a new hitTarget flag on each Rocket object. A rocket will stop if it hits the target, just like it stops if it hits an obstacle.
// If the object reaches the target, set a boolean flag to true.
if (target.contains(this.position)) {
this.hitTarget = true;
}
-
Remember., I also want the rocket to have a higher fitness the faster it reaches the target. Conversely, the slower it reaches the target, the lower its fitness score. To implement this, a finishCounter can be incremented every cycle of the rocket’s life until it reaches the target. At the end of its life, the counter will equal the amount of time the rocket took to reach the target.
+
Remember, I also want the rocket to have a higher fitness the faster it reaches the target. Conversely, the slower it reaches the target, the lower its fitness score. To implement this, a finishCounter can be incremented every cycle of the rocket’s life until it reaches the target. At the end of its life, the counter will equal the amount of time the rocket took to reach the target.
// Increase the finish counter if it hasn't hit the target
if (!this.hitTarget) {
this.finishCounter++;
@@ -1325,7 +1318,7 @@
Example 9.3: Smarter Rockets
-
There are many ways in which this example could be improved and expanded further. The following exercises offer some ideas and challenges to explore genetic algorithms in more depth. What else can you try?
+
There are many ways in which this example could be improved and further expanded. The following exercises offer some ideas and challenges to explore genetic algorithms in more depth. What else can you try?
Exercise 9.8
Create a more complex obstacle course. As you make it more difficult for the rockets to reach the target, do you need to improve other aspects of the GA—for example, the fitness function?
@@ -1343,14 +1336,14 @@
Exercise 9.11
Another way to teach a rocket to reach a target is to evolve a flow field. Can you make the genotype of a rocket a flow field of vectors?
Interactive Selection
-
Karl Sims is a computer graphics researcher and visual artist who worked extensively with genetic algorithms (he is also well-known for his work with particle systems!). One of his innovative evolutionary projects is the museum installation Galapagos. Originally installed in the Intercommunication Center in Tokyo in 1997, the installation consists of twelve monitors displaying computer-generated images. These images evolve over time, following the genetic algorithm steps of selection and reproduction.
+
Karl Sims is a computer graphics researcher and visual artist who worked extensively with genetic algorithms. (He’s also well known for his work with particle systems!) One of his innovative evolutionary projects is the museum installation Galapagos. Originally installed in the Intercommunication Center in Tokyo in 1997, the installation consists of twelve monitors displaying computer-generated images. These images evolve over time, following the genetic algorithm steps of selection and reproduction.
The innovation here isn’t the use of the genetic algorithm itself, but rather the strategy behind the fitness function. In front of each monitor is a sensor on the floor that can detect the presence of a visitor viewing the screen. The fitness of an image is tied to the length of time that viewers look at the image. This is known as interactive selection, a genetic algorithm with fitness values assigned by people.
-
Far from being confined to art installations, interactive selection is quite prevalent in the digital age of user-generated ratings and reviews. Could you imagine evolving the perfect song based on your Spotify ratings? Or the ideal book according to Goodreads reviews?
+
Far from being confined to art installations, interactive selection is quite prevalent in the digital age of user-generated ratings and reviews. Could you imagine evolving the perfect song based on your Spotify ratings? Or the ideal book according to Goodreads reviews? In keeping with the book’s nature theme, however, I’ll illustrate how interactive selection works using a population of digital flowers like the ones in Figure 9.13.
-
- 9.13 Flower Design for Interactive Selection
+
  Figure 9.13: Flower design for interactive selection
-
To illustrate this technique, I’m going to build a population of digital flowers like the one in Figure 9.14. Each flower will have a set of properties: petal color, petal size, petal count, center color, center size, stem length, and stem color. A flower’s DNA (genotype) is an array of floating point numbers between 0 and 1, with a single value for each property.
+
Each flower will have a set of properties: petal color, petal size, petal count, center color, center size, stem length, and stem color. A flower’s DNA (genotype) is an array of floating point numbers between 0 and 1, with a single value for each property.
class DNA {
constructor() {
// The genetic sequence (14 properties for each flower)
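    // Hedged completion, per the text above: random floating point
    // numbers between 0 and 1, one for each of the 14 properties.
    this.genes = [];
    for (let i = 0; i < 14; i++) {
      this.genes[i] = random(1);
    }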
@@ -1368,11 +1361,11 @@
Interactive Selection
// How "fit" is this flower?
this.fitness = 1;
}
-
When it comes time to draw the flower on screen, I’ll use p5.js’s map() function to convert any gene value to the appropriate range for pixel dimensions or color values. (I’ll also use colorMode() to set the RGB ranges between 0 and 1.)
+
When it comes time to draw the flower, I’ll use p5.js’s map() function to convert any gene value to the appropriate range for pixel dimensions or color values. (I’ll also use colorMode() to set the RGB ranges between 0 and 1.)
show() {
//{.offset-top}
// The DNA values are assigned to flower properties
- // such as: petal color, petal size, number of petals, etc.
+ // such as petal color, petal size, number of petals, etc.
let genes = this.dna.genes;
// I'll set the RGB range to 0-1 with colorMode() and use map() as needed elsewhere for drawing the flower.
let petalColor = color(genes[0], genes[1], genes[2], genes[3]);
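    // Hedged continuation: illustrative map() calls with assumed gene
    // indices and ranges, not necessarily Example 9.4's exact values.
    let petalSize = map(genes[4], 0, 1, 4, 24);
    let petalCount = floor(map(genes[5], 0, 1, 2, 16));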
@@ -1425,7 +1418,7 @@
Example 9.4: Interactive Selection
It should be noted that this example is just a demonstration of the idea of interactive selection and doesn’t achieve a particularly meaningful result. For one, I didn’t take much care in the visual design of the flowers; they’re just a few simple shapes with different sizes and colors. (See if you can spot the use of polar coordinates in the code, though!) Sims used more elaborate mathematical functions as the genotype for his images. You might also consider a vector-based approach, in which a design's genotype is a set of points and/or paths.
-
The more significant problem here, however, is one of time. In the natural world, evolution occurs over millions of years. In the computer simulation world of the chapter’s first examples, the populations are able to evolve behaviors relatively quickly because the new generations are being produced algorithmically. In the typing monkey example, a new generation was born in each cycle through draw() (approximately 60 per second). Each generation of smart rockets had a lifespan of 250 frames—still a mere blink of the eye in evolutionary time. In the case of interactive selection, however, you have to sit and wait for a person to rate each and every member of the population before you can get to the next generation. A large population would be unreasonably tedious for the user to evaluate—not to mention, how many generations could you stand to sit through?
+
The more significant problem here, however, is one of time. In the natural world, evolution occurs over millions of years. In the computer simulation world of the chapter’s first examples, the populations are able to evolve behaviors relatively quickly because the new generations are being produced algorithmically. In the typing cat example, a new generation was born in each cycle through draw() (approximately 60 per second). Each generation of smart rockets had a lifespan of 250 frames—still a mere blink of the eye in evolutionary time. In the case of interactive selection, however, you have to sit and wait for a person to rate each and every member of the population before you can get to the next generation. A large population would be unreasonably tedious for the user to evaluate—not to mention, how many generations could you stand to sit through?
There are certainly clever ways around this problem. Sims’s Galapagos exhibit concealed the rating process from the viewers, as it occurred through the normal behavior of looking at artwork in a gallery setting. Building a web application that would allow many people to rate a population in a distributed fashion is also a good strategy for achieving ratings for large populations quickly.
In the end, a successful interactive selection system boils down to the same keys previously established. What is the genotype and phenotype? And how do you calculate fitness—or in this case, what’s your strategy for assigning fitness according to interaction?
@@ -1434,14 +1427,14 @@
Exercise 9.14
Exercise 9.12
-
Another of Karl Sims’ seminal works in the field of genetic algorithms is "Evolved Virtual Creatures." In this project, a population of digital creatures in a simulated physics environment is evaluated for the their ability to perform tasks, such as swimming, running, jumping, following, and competing for a green cube. The project uses a “node-based” genotype. In other words, the creature’s DNA is not a linear list of vectors or numbers, but a map of nodes (much like the “soft body simulation” in Chapter 6.) The phenotype is the creature’s body itself, a network of limbs connected with muscles.
+
Another of Karl Sims’s seminal works in the field of genetic algorithms is “Evolved Virtual Creatures.” In this project, a population of digital creatures in a simulated physics environment is evaluated for their ability to perform tasks, such as swimming, running, jumping, following, and competing for a green cube. The project uses a “node-based” genotype. In other words, the creature’s DNA isn’t a linear list of vectors or numbers, but a map of nodes (much like the soft body simulation in Chapter 6). The phenotype is the creature’s body itself, a network of limbs connected with muscles.
-
Can you design the DNA for a flower, plant, or creature as a “network” of parts? One idea is to use interactive selection to evolve the design. Alternatively, you could incorporate spring forces, perhaps with toxiclibs.js or matter.js, to create a simplified 2D version of Sims's creatures. What if they were to evolve according to a fitness function associated with a specific goal? For more about Sims’s techniques, you can read his 1994 Paper and watch the “Evolved Virtual Creatures” video on YouTube.
+
Can you design the DNA for a flower, plant, or creature as a “network” of parts? One idea is to use interactive selection to evolve the design. Alternatively, you could incorporate spring forces, perhaps with toxiclibs.js or Matter.js, to create a simplified 2D version of Sims’s creatures. What if they were to evolve according to a fitness function associated with a specific goal? For more about Sims’s techniques, you can read his 1994 paper and watch the “Evolved Virtual Creatures” video on YouTube.
@@ -1515,7 +1508,7 @@
Ecosystem Simulation
dead() {
return (this.health < 0.0);
}
-
This is a good first step, but I haven’t really achieved anything. After all, if all bloops start with 100 health points and lose 0.2 points per frame, then all bloops will live for the exact same amount of time and die together. If every single bloop lives the same amount of time, each one has an equal chance of reproducing, and therefore no evolutionary change will occur..
+
This is a good first step, but I haven’t really achieved anything. After all, if all bloops start with 100 health points and lose health at the same rate, then all bloops will live for the exact same amount of time and die together. If every single bloop lives the same amount of time, each one has an equal chance of reproducing, and therefore no evolutionary change will occur.
There are several ways to achieve variable lifespans with a more sophisticated world. One approach is to introduce predators that eat bloops. Faster bloops would be more likely to escape being eaten, leading to the evolution of increasingly faster bloops. Another option is to introduce food. When a bloop eats food, its health points increase, extending its life.
Let’s assume there’s an array of vector positions called food. I could test each bloop’s proximity to each food position. If the bloop is close enough, it eats the food (which is then removed from the world) and increases its health.
eat(food) {
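    // Hedged completion: the proximity threshold and health reward are
    // assumptions based on the text above, not the book's exact values.
    // Iterate backward so food can be removed safely while looping.
    for (let i = food.length - 1; i >= 0; i--) {
      let d = p5.Vector.dist(this.position, food[i]);
      // If the bloop is close enough, eat the food and gain health.
      if (d < this.r) {
        this.health += 100;
        food.splice(i, 1);
      }
    }
  }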
@@ -1536,24 +1529,20 @@
Ecosystem Simulation
Genotype and Phenotype
-
- Figure 9.14: A small and big “bloop” creature. The example will use simple circles, but you should try being more creative!
+
+ Figure 9.14: Small and big “bloop” creatures. The example will use simple circles, but you should try being more creative!
The ability for a bloop to find food is tied to two variables—size and speed (see Figure 9.14). Bigger bloops will find food more easily simply because their size will allow them to intersect with food positions more often. And faster bloops will find more food because they can cover more ground in a shorter period of time.
Since size and speed are inversely related (large bloops are slow, small bloops are fast), I only need a genotype with a single number.
class DNA {
- constructor(newgenes) {
- if (newgenes) {
- this.genes = newgenes;
- } else {
- // The genetic sequence is a single value!
- // It may seem absurd to use an array for just one number, but this will
- // scale for more sophisticated bloop designs.
- this.genes = new Array(1);
- for (let i = 0; i < this.genes.length; i++) {
- this.genes[i] = random(1);
- }
+ constructor() {
+ // The genetic sequence is a single value!
+ // It may seem absurd to use an array for just one number, but this will
+ // scale for more sophisticated bloop designs.
+ this.genes = [];
+ for (let i = 0; i < 1; i++) {
+ this.genes[i] = random(0, 1);
}
}
The phenotype is the bloop itself, whose size and speed are assigned by adding an instance of a DNA object to the Bloop class.
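Here’s a sketch of that assignment inside the Bloop constructor; the inverse relationship comes from flipping one map() range (the specific ranges are my own assumptions for illustration):
class Bloop {
  constructor(position, dna) {
    this.position = position.copy();
    this.dna = dna;
    // One gene controls both traits, inversely: bigger means slower.
    this.maxSpeed = map(this.dna.genes[0], 0, 1, 15, 0);
    this.r = map(this.dna.genes[0], 0, 1, 0, 50);
    this.health = 100;
  }
}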
@@ -1596,8 +1585,11 @@
Selection and Reproduction
class DNA {
//{!1} This copy() method replaces crossover().
copy() {
- let newgenes = this.genes.slice();
- return new DNA(newgenes);
+ // Create new DNA (with random genes)
+ let newDNA = new DNA();
+ //{!1} Overwrite the random genes with a copy of this DNA's genes
+ newDNA.genes = this.genes.slice();
+ return newDNA;
}
}
With the selection and reproduction pieces in place, I can finalize the World class to manage a list of all Bloop objects, as well as a Food object that contains a list of positions for the food (which I’ll draw as small squares).
Now that I have explained the computational process of a perceptron, let's take a look at an example of one in action. As I mentioned earlier, neural networks are commonly used for pattern recognition applications, such as facial recognition. Even simple perceptrons can demonstrate the fundamentals of classification. Consider the following scenario.
-
-
-
- Figure 10.4: A collection of points in two dimensional space divided by a line.
-
-
-
Consider a line in two-dimensional space. Points in that space can be classified as living on either one side of the line or the other. While this is a somewhat silly example (since there is clearly no need for a neural network; on which side a point lies can be determined with some simple algebra), it shows how a perceptron can be trained to recognize points on one side versus another.
-
Let’s say a perceptron has 2 inputs: x,y coordinates of a point). When using a sign activation function, the output will either be -1 or 1. The input data are classified according to the sign of the output, the weighted sum of inputs. In the above diagram, you can see how each point is either below the line (-1) or above (+1).
-
The perceptron itself can be diagrammed as follows. In machine learning x’s are typically the notation for inputs and y is typically the notation for an output. To keep this convention I’ll note in the diagram the inputs as x_0 and x_1. x_0 will correspond to the x cooordinate and x_1 to the y. I name the output simply “\text{output}”.
+
Imagine you have a dataset of plants and you want to classify them into two categories: “xerophytes” (plants that have evolved to survive in an environment with little water and lots of sunlight, like the desert) and “hydrophytes” (plants that have adapted to living submerged in water, with reduced light). On the x-axis, you plot the amount of daily sunlight received by the plant and on the y-axis, the amount of water.
+
+
+ Figure 10.4: A collection of points in two dimensional space divided by a line.
+
+
+ While this is an oversimplified scenario and real-world data would have more messiness to it, you can see how the plants can easily be classified according to whether they fall on one side of a line or the other. Classifying a new plant plotted in this space doesn't require a neural network (which side of the line a point lies on can be determined with some simple algebra), but I'll use this scenario as the basis to show how a perceptron can be trained to recognize points on one side versus the other.
+
+ Here the perceptron will have 2 inputs: the x,y coordinates of a point, representing the amounts of sunlight and water respectively. When using a sign activation function, the output will either be -1 or 1: the input data are classified according to the sign of the weighted sum of the inputs. In the above diagram, you can see how each point is either below the line (-1) or above it (+1). I can use this to signify hydrophyte (+1, above the line) or xerophyte (-1, below the line).
+
The perceptron itself can be diagrammed as follows. In machine learning x’s are typically the notation for inputs and y is typically the notation for an output. To keep this convention I’ll note in the diagram the inputs as x_0 and x_1. x_0 will correspond to the x-coordinate (sunlight) and x_1 to the y (water). I name the output simply “\text{output}”.
Figure 10.5: A perceptron with two inputs (x_0 and x_1), a weight for each input (\text{weight}_0 and \text{weight}_1), as well as a processing neuron that generates the output.
@@ -461,7 +460,7 @@
It’s a “Network,” Remember?
Figure 10.9: On the left, a collection of points that is linearly separable. On the right, non-linearly separable data where a curve is required to separate the points.
-
On the left of Figure 10.11, is an example of classic linearly separable data. Graph all of the possibilities; if you can classify the data with a straight line, then it is linearly separable. On the right, however, is non-linearly separable data. You can’t draw a straight line to separate the black dots from the gray ones.
+
+ On the left of Figure 10.9 is an example of classic linearly separable data, like the simplified plant classification of xerophytes and hydrophytes. Graph all of the possibilities; if you can classify the data with a straight line, then it is linearly separable. On the right, however, is non-linearly separable data. Imagine you are classifying plants according to soil acidity (x-axis) and temperature (y-axis). Some plants might thrive in acidic soils at a specific temperature range, while other plants prefer less acidic soils but tolerate a broader range of temperatures. There is a more complex relationship between the two variables, and a straight line cannot be drawn to separate the two categories of plants—“acidophilic” and “alkaliphilic.”
One of the simplest examples of a non-linearly separable problem is XOR, or “exclusive or.” I’m guessing, as someone who works with coding and p5.js, you are familiar with a logical \text{AND}. For A \text{ AND } B to be true, both A and B must be true. With \text{OR}, either A or B can be true for A \text{ OR } B to evaluate as true. These are both linearly separable problems. Let’s look at the solution space, a “truth table.”
@@ -501,17 +500,80 @@
Classification and Regression
While I won't be building a complete MNIST model with ml5.js (you could if you wanted to!), it serves as a canonical example of a training dataset for image classification: 70,000 images each assigned one of 10 possible labels. This idea of a “label” is fundamental to classification, where the output of a model involves a fixed number of discrete options. There are only 10 possible digits that the model can guess, no more and no less. After the data is used to train the model, the goal is to classify new images and assign the appropriate label.
Regression, on the other hand, is a machine learning task where the prediction is a continuous value, typically a floating point number. A regression problem can involve multiple outputs, but when beginning it’s often simpler to think of it as just one. Consider a machine learning model that predicts the daily electricity usage of a house based on any number of factors like number of occupants, size of house, temperature outside. Here, rather than a goal of the neural network picking from a discrete set of options, it makes more sense for the neural network to guess a number. Will the house use 30.5 kilowatt-hours of energy that day? 48.7 kWh? 100.2 kWh? The output is therefore a continuous value that the model attempts to predict.
Inputs and Outputs
-
Once the task has been determined, the next step is to finalize the configuration of inputs and outputs of the neural network. In the case of MNIST, each image is a collection of 28x28 grayscale pixels and each pixel can be represented as a single value (ranging from 0-255). The total pixels is 28 \times 28 = 784. The grayscale value of each pixel is an input to the neural network.
+
Once the task has been determined, the next step is to finalize the configuration of inputs and outputs of the neural network. Instead of MNIST, which involves using an image as the input to a neural network, let's use another classic “Hello, World” example in the field of data science and machine learning: Iris flower classification. This dataset can be found in the University of California Irvine Machine Learning Repository and originated from the work of American botanist Edgar Anderson. Anderson embarked on a data collection endeavor over many years that encompassed multiple regions of the United States and Canada. After carefully analyzing the collected data, he built a table to classify Iris flowers into three distinct species: Iris setosa, Iris virginica, and Iris versicolor.
Anderson included four numeric attributes for each flower: sepal length, sepal width, petal length, and petal width, all measured in centimeters. (He also recorded color information but that data appears to have been lost.) Each record is then paired with its Iris categorization.
sepal length | sepal width | petal length | petal width | classification
5.1 | 3.5 | 1.4 | 0.2 | Iris-setosa
4.9 | 3.0 | 1.4 | 0.2 | Iris-setosa
7.0 | 3.2 | 4.7 | 1.4 | Iris-versicolor
6.4 | 3.2 | 4.5 | 1.5 | Iris-versicolor
6.3 | 3.3 | 6.0 | 2.5 | Iris-virginica
5.8 | 2.7 | 5.1 | 1.9 | Iris-virginica
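For example, the first row of the table translates into an array of four input values paired with a target label. As a sketch (variable names mine):

let inputs = [5.1, 3.5, 1.4, 0.2];
let label = "Iris-setosa";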
In this dataset, the first four columns (sepal length, sepal width, petal length, petal width) serve as inputs to the neural network. The output classification is provided in the fifth column on the right. Figure 10.9 depicts a possible architecture for a neural network that can be trained on this data.
Since there are 10 possible digits 0-9, the output of the neural network is a prediction of one of 10 labels.
+
On the left of Figure 10.9, you can see the four inputs to the network, which correspond to the first four columns of the data table. On the right, there are three possible outputs, each representing one of the Iris species labels. The neural network's goal is to “activate” the correct output for the input data, much like how the Perceptron would output a +1 or -1 for its single binary classification. In this case, the output values are like signals that help the network decide which Iris species label to assign. The highest computed value “activates” to signify the correct classification for the input data.
+
In the diagram, you'll also notice the inclusion of a hidden layer. Unlike input and output neurons, the nodes in this “hidden” layer are not directly connected to the network's inputs or outputs. The hidden layer introduces additional complexity to the network's architecture, which, as I established earlier, is necessary for handling more complex, non-linearly separable data. The number of nodes depicted here (five) is arbitrary. Neural network architectures can vary greatly, and the number of hidden nodes is often determined through experimentation and optimization. In the context of this book, I'm relying on ml5.js to automatically configure the architecture based on the input and output data, simplifying the implementation process.
Consider the regression scenario of predicting the electricity usage of a house. Let’s assume you have a table with the following data:
+
Figure 10.10 shows a variety of homes and weather conditions. Let's use the scenario proposed earlier of a regression predicting the electricity usage of a house. Here, I'll use a “made-up” dataset.
@@ -564,11 +626,12 @@
Inputs and Outputs
-
Here in this table, the inputs to the neural network are the first three columns (occupants, size, temperature). The fourth column on the right is what the neural network is expected to guess, or the output.
+
Just as before, the inputs to the neural network are the first three columns (occupants, size, temperature). The fourth column on the right is what the neural network is expected to guess, or the output. The network architecture follows suit in Figure 10.10, also with an arbitrary choice of four nodes for the hidden layer.
-
- Figure 10.18 Possible network architecture for 3 inputs and 1 regression output
+
+ Figure 10.10: Possible network architecture for 3 inputs and 1 regression output
+
Unlike the Iris classification, since there is just one number to be predicted (rather than a choice among three labels), this neural network has only one output. I’ll note, however, that this is not a requirement of a regression. A machine learning model can perform a regression that predicts multiple continuous values.
Setting up the Neural Network with ml5.js
In a typical machine learning scenario, the next step after establishing the inputs and outputs is to configure the architecture of the neural network. This involves specifying the number of hidden layers between the inputs and outputs, the number of neurons in each layer, which activation functions to use, and more! While all of this is possible with ml5.js, it will make its best guess and design a model for you based on the task and data.
As demonstrated with Matter.js and toxiclibs.js in chapter 6, you can import the ml5.js library into your index.html file.
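For example, with a script tag in index.html (a typical CDN link; the exact version or URL may differ):

<script src="https://unpkg.com/ml5@latest/dist/ml5.min.js"></script>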
@@ -577,21 +640,21 @@
Setting up the Neural Network
To create a neural network, you must first create a JavaScript object that will configure the model. While there are many properties that you can set, most of them are optional, as the network will use default values. Let’s begin by specifying the "task" that you intend the model to perform: "regression" or "classification.”
let options = { task: "classification" }
let classifier = ml5.neuralNetwork(options);
-
This, however, gives ml5.js very little to go on in terms of designing the network architecture. Adding the inputs and outputs will complete the rest of the puzzle for it. In the case of MNIST, there are 784 inputs (grayscale pixel colors) and 10 possible output labels (digits “0” through “9”). This can be configured in ml5.js with a single integer for the number of inputs and an array of strings for the list of output labels.
+
This, however, gives ml5.js very little to go on in terms of designing the network architecture. Adding the inputs and outputs will complete the rest of the puzzle for it. In the case of Iris Flower classification, there are 4 inputs and 3 possible output labels. This can be configured in ml5.js with a single integer for the number of inputs and an array of strings for the list of output labels.
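Here's a sketch of that configuration for the Iris scenario (the exact label strings are my assumption; they would need to match the labels in the training data):

let options = {
  inputs: 4,
  outputs: ["Iris-setosa", "Iris-versicolor", "Iris-virginica"],
  task: "classification",
};
let classifier = ml5.neuralNetwork(options);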
The electricity regression scenario involved 3 input values (occupants, size, temperature) and 1 output value (usage in kWh).
+
The electricity regression scenario involved 3 input values (occupants, size, temperature) and 1 output value (usage in kWh). With regression there are no string labels, so only an integer indicating the number of outputs is required.
let options = {
inputs: 3,
outputs: 1,
task: "regression",
};
let energyPredictor = ml5.neuralNetwork(options);
-
While the MNIST and energy predictor scenarios are useful starting points for understanding how machine learning works, it's important to note that they are simplified versions of what you might encounter in a “real-world” machine learning application. Depending on the problem, there could be significantly higher levels of complexity both in terms of the network architecture and the scale and preparation of data. Instead of a neatly packaged dataset like MNIST, you might be dealing with enormous amounts of messy data. This data might need to be processed and refined before it can be effectively used. You can think of it like organizing, washing, and chopping ingredients before you can start cooking with them.
+
While the Iris flower and energy predictor scenarios are useful starting points for understanding how machine learning works, it's important to note that they are simplified versions of what you might encounter in a “real-world” machine learning application. Depending on the problem, there could be significantly higher levels of complexity both in terms of the network architecture and the scale and preparation of data. Instead of a neatly packaged dataset, you might be dealing with enormous amounts of messy data. This data might need to be processed and refined before it can be effectively used. You can think of it like organizing, washing, and chopping ingredients before you can start cooking with them.
The “lifecycle” of a machine learning model is typically broken down into seven steps.
Data Collection: Data forms the foundation of any machine learning task. This stage might involve running experiments, manually inputting values, sourcing public data, or a myriad of other methods.
@@ -610,7 +673,7 @@
Building a Gesture Classifier
After all, how are you supposed to collect your data without knowing what you are even trying to do? Are you predicting a number? A category? A sequence? Is it a binary choice, or are there multiple options? These considerations about your inputs (the data fed into the model) and outputs (the predictions) are critical for every other step of the machine learning journey.
Let’s take a crack at step 0 for an example problem of training your first machine learning model with ml5.js and p5.js. Imagine for a moment, you’re working on an interactive application that responds to a gesture, maybe that gesture is ultimately meant to be classified via body tracking, but you want to start with something much simpler—one single stroke of the mouse.
-
+
[POSSIBLE ILLUSTRATION OF A SINGLE MOUSE SWIPE AS A GESTURE: basically can the paragraph below be made into a drawing?]
Each gesture could be recorded as a vector (extending from the start to the end points of a mouse movement) and the model’s task could be to predict one of four options: “up”, “down”, “left”, or “right.” Perfect! I’ve now got the objective and boiled it down into inputs and outputs!
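Before any machine learning enters the picture, here's a minimal sketch of capturing such a gesture in p5.js (assuming only the start and end points of the stroke matter):

let start;

function mousePressed() {
  // The gesture begins where the mouse is pressed
  start = createVector(mouseX, mouseY);
}

function mouseReleased() {
  // The gesture is the vector from the start of the stroke to its end
  let end = createVector(mouseX, mouseY);
  let gesture = p5.Vector.sub(end, start);
  // These components could become the inputs to the model
  console.log(gesture.x, gesture.y);
}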
@@ -692,7 +755,7 @@
Callbacks
Evaluation
If debug is set to true in the initial call to ml5.neuralNetwork(), once train() is called, a visual interface appears covering most of the p5.js page and canvas.
-
+
Figure 10.19: The TensorFlow.js “visor” with a graph of the loss function and model details.
This panel, called “Visor,” represents the evaluation step, as shown in Figure 10.19. The Visor is a part of TensorFlow.js and includes a graph that provides real-time feedback on the progress of the training. Let’s take a moment to focus on the “loss” plotted on the y-axis against the number of epochs along the x-axis.
@@ -802,847 +865,10 @@
Exercise 10.5
Exercise 10.6
[Exercise around hand pose classifier?]
-
Reinforcement Learning
-
There is so much more to working with data, machine learning, ml5.js, and beyond. I’ve only scratched the surface. As I close out this book, my goal is to tie the foundational machine learning concepts I’ve covered back into animated, interactive p5.js sketches that simulate physics and complex systems. Let’s see if I can bring as many concepts from the entire book back together for one last hurrah!
-
Towards the start of this chapter, I referenced an approach to incorporating machine learning into a simulated environment called “reinforcement learning.” Imagine embedding a neural network into any of the example objects (walker, mover, particle, vehicle) and calculating a force or some other action. The neural network could receive inputs related to the environment (such as distance to an obstacle) and produce a decision that requires a choice from a set of discrete options (e.g., move “left” or “right”) or a set of continuous values (e.g., magnitude and direction of a steering force). This is starting to sound familiar: it’s a neural network that receives inputs and performs classification or regression!
-
Here is where things take a turn, however. To better illustrate the concept, let’s start with a hopefully easy to understand and possibly familiar scenario, the game “Flappy Bird.” The game is deceptively simple. You control a small bird that continually moves horizontally across the screen. With each tap or click, the bird flaps its wings and rises upward. The challenge? A series of vertical pipes spaced apart at irregular intervals emerges from the right. The pipes have gaps, and your primary objective is to navigate the bird safely through these gaps. If you hit one, it’s game over. As you progress, the game’s speed increases, and the more pipes you navigate, the higher your score.
Suppose you wanted to automate the gameplay, and instead of a human tapping, a neural network will make the decision as to whether to “flap” or not. Could machine learning work here? Skipping over the “data” steps for a moment, let’s think about “choosing a model.” What are the inputs and outputs of the neural network?
-
Let’s begin with the inputs. This is quite the intriguing question because there isn’t a definitive answer! In a scenario where you want to see if you could train an automated neural network player without any knowledge of the game itself, it might make the most sense to have the inputs be all the pixels of the game screen. Maybe you don’t want to put your thumb on the scale in terms of what aspects of the game are important. This approach attempts to feed everything about the game into the model.
-
As for me, I understand the Flappy Bird game quite well, and I believe I can identify the important data points needed to make a decision. I can bypass all the pixels and boil the essence of the game down into the key features that define it. Remember the discussion about features in the context of the gesture classifier? It applies here as well. These features are not arbitrary aspects of the game; they represent the distinct characteristics of Flappy Bird that are most salient for the neural network's decisions:

y position of the bird.
y velocity of the bird.
y position of the next pipe's top opening.
y position of the next pipe's bottom opening.
x distance to the next pipe.
These are the inputs to the neural network. But what about the outputs? Is the problem a "classification" or "regression" one? This may seem like an odd question to ask in the context of a game like Flappy Bird, but it's actually incredibly important and relates to how the game is controlled. Tapping the screen, pressing a button, or using keyboard controls are all examples of classification. After all, there is only a discrete set of choices: tap or not, press 'w', 'a', 's', or 'd' on the keyboard. On the other hand, using an analog controller like a joystick leans towards regression. A joystick can be tilted in varying degrees in any direction, translating to continuous output values for both its horizontal and vertical axes.
-
For Flappy Bird, it’s a classification decision with only two choices:
-
-
flap
-
don’t flap
-
-
-
- Figure 10.22: The neural network as ml5.js might design it
-
-
This gives me the information needed to choose the model and I can let ml5.js build it.
-
let options = {
- inputs: 5,
- outputs: ["flap", "no flap"],
- task: "classification",
-}
-let birdBrain = ml5.neuralNetwork(options);
-
Now if I were to continue this line of thinking further, I’d have to go back to steps 1 and 2 of the machine learning process: data collection and preparation. How exactly would that work here? One idea would be to scour the earth for the greatest Flappy Bird player of all time and record them playing for hours. I could log all of the input features for every moment of gameplay along with whether the player flapped or not. Feed all that data into the model, train it, and I can see the headlines already: “Artificial Intelligence Bot Defeats Flappy Bird.”
-
But um, wait a second here, has an agent really learned to play Flappy Bird on its own, or has it just learned to mirror the play of a human? What if that human missed a key aspect of Flappy Bird strategy? The automated player would never discover it. Not to mention the fact that collecting all that data would be an incredibly tedious and laborious process.
-
This is where reinforcement learning comes in. Reinforcement learning is a type of machine learning where an agent learns through interacting with the environment and receiving feedback in the form of rewards or penalties. Unlike supervised learning, where the “correct” answers are provided by a training dataset, the agent in reinforcement learning learns the answers, the optimal decisions, through trial and error. For example, in Flappy Bird, the bird could receive a positive reward every time it successfully navigates a pipe, but a negative reward if it hits a pipe or the ground. The agent's goal is to figure out which actions lead to the most cumulative rewards over time.
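To make the idea of cumulative reward concrete, here's a conceptual sketch (not code I'll actually use; the function and its arguments are hypothetical):

// A positive reward for clearing a pipe, a negative one for crashing
function rewardFor(passedPipe, hitPipe) {
  if (hitPipe) return -1;
  if (passedPipe) return 1;
  return 0;
}

let totalReward = 0;
totalReward += rewardFor(true, false); // cleared a pipe: +1
totalReward += rewardFor(false, true); // crashed: -1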
-
At the start, the Flappy Bird agent won't know the best time to flap its wings, leading to many crashes. But as it accrues more and more feedback from countless play-throughs, it begins to refine its actions and develop the optimal strategy to navigate the pipes without crashing, maximizing its total reward. This process of "learning by doing" and optimizing based on feedback is the essence of reinforcement learning.
-
In the next section, I'll explore the principles I’m outlining here with a twist. Traditional techniques in reinforcement learning involve defining something called a “policy” and a corresponding “reward function.” Instead of going down this road, however, I will introduce a related technique that is baked into ml5.js: neuroevolution. This technique combines the evolutionary algorithms from Chapter 9 with neural networks. By evolving the weights of a neural network, I’ll demonstrate how the bird can perfect its journey through the pipes! I'll then finish off the chapter with a variation of Craig Reynolds’s steering behaviors from Chapter 5 using neuroevolution.
-
Evolving Neural Networks is NEAT!
-
Instead of traditional backpropagation to train the weights in a neural network, neuroevolution applies principles of genetic algorithms and natural selection: the best-performing neural networks are "selected" and their "genes" (or weights) are combined and mutated to create the next generation of networks.
-
One of the first examples of neuroevolution can be found in the 1994 paper "Genetic Lander: An experiment in accurate neuro-genetic control" by Edmund Ronald and Marc Schoenauer. In the 1990s traditional neural network training methods were still nascent, and this work explored an alternative approach. The paper describes how a simulated spacecraft—in a game aptly named "Lunar Lander"—can learn how to safely descend and land on a surface. Rather than use hand-crafted rules or labeled datasets, the researchers opted for genetic algorithms to evolve and train neural networks over multiple generations. And it worked!
-
In 2002, Kenneth O. Stanley and Risto Miikkulainen expanded on earlier neuroevolutionary approaches with their paper titled "Evolving Neural Networks Through Augmenting Topologies." Unlike the lunar lander method that focused on evolving the weights of a neural network, Stanley and Miikkulainen introduced a method that also evolved the network's structure itself! The “NEAT” algorithm—NeuroEvolution of Augmenting Topologies—starts with simple networks and progressively refines their topology through evolution. As a result, NEAT can discover network architectures tailored to specific tasks, often yielding more optimized and effective solutions.
-
A comprehensive NEAT implementation would require going deeper into the neural network architecture with TensorFlow.js directly. My goal here is to emulate Ronald and Schoenauer’s research in the modern context of the web browser with ml5.js. Rather than use the lunar lander game, I’ll give this a try with Flappy Bird!
-
Coding Flappy Bird
-
The game Flappy Bird was created by Vietnamese game developer Dong Nguyen in 2013. In January 2014, it became the most downloaded app on the Apple App Store. However, on February 8th, Nguyen announced that he was removing the game due to its addictive nature. Since then, it has been one of the most cloned games in history. Flappy Bird is a perfect example of "Nolan's Law," an aphorism attributed to the founder of Atari and creator of Pong, Nolan Bushnell: "All the best games are easy to learn and difficult to master.”
-
Flappy Bird is also a terrific game for beginner coders to recreate as a learning exercise, and it fits perfectly with the concepts in this book. To create the game with p5.js, I’ll start by defining a Bird class. Now, I’m going to do something that may shock you here, but I’m going to skip using p5.Vector for this demonstration and instead use separate x and y properties for the bird’s position. Since the bird only moves along the vertical axis in the game, x remains constant! Therefore, the velocity (and all of the relevant forces) can be a single scalar value for just the y-axis. To simplify things even further, I’ll add the forces directly to the bird's velocity instead of accumulating them into an acceleration variable. In addition to the usual update(), I’ll include a flap() method for the bird to fly upward. The show() method is not included below as it remains the same and draws only a circle.
-
class Bird {
- constructor() {
- // The bird's position (x will be constant)
- this.x = 50;
- this.y = 120;
-
- // Velocity and forces are scalar since the bird only moves along the y-axis
- this.velocity = 0;
- this.gravity = 0.5;
- this.flapForce = -10;
- }
-
- // The bird flaps its wings
- flap() {
- this.velocity += this.flapForce;
- }
-
- update() {
- // Add gravity
- this.velocity += this.gravity;
- this.y += this.velocity;
- // Dampen velocity
- this.velocity *= 0.95;
-
- // Handle the "floor"
- if (this.y > height) {
- this.y = height;
- this.velocity = 0;
- }
- }
-}
-
The other primary element of the game are the pipes that the bird must navigate through. I’ll create a Pipe class to describe a pair of rectangles, one that emanates from the top of the canvas and one from the bottom. Just as the bird only moves vertically, the pipes slide along only the horizontal axis, so the properties can also be scalar values rather than vectors. The pipes move at a constant speed and don’t experience any physics.
-
class Pipe {
- constructor() {
- // The size of the opening between the two parts of the pipe
- this.spacing = 100;
- // A random height for the top of the pipe
- this.top = random(height - this.spacing);
- // The starting position of the bottom pipe (based on the top)
- this.bottom = this.top + this.spacing;
- // The pipe starts at the edge of the canvas
- this.x = width;
- // Width of the pipe
- this.w = 20;
- // Horizontal speed of the pipe
- this.velocity = 2;
- }
-
- // Draw the two pipes
- show() {
- fill(0);
- noStroke();
- rect(this.x, 0, this.w, this.top);
- rect(this.x, this.bottom, this.w, height - this.bottom);
- }
-
- // Update the pipe horizontal position
- update() {
- this.x -= this.velocity;
- }
-}
-
To be clear, the "reality" depicted in the game is a bird flying through pipes. The bird is moving along two dimensions while the pipes remain stationary. However, it is simpler in terms of code to consider the bird as stationary in its horizontal position and treat the pipes as moving.
-
With a Bird and Pipe class written, I'm almost set to run the game. However, there remains a key missing piece: collisions. The whole game rides on the bird attempting to avoid the pipes! This is nothing new; you’ve seen many examples of objects checking their positions against others throughout this book.
-
Now, there's a design choice to make. A function to check collisions could logically be placed in either the Bird class (to check if the bird hits a pipe) or in the Pipe class (to check if a pipe hits the bird). Either can be justified depending on your point of view. I'll place it in the Pipe class and call it collides().
-
It's a little trickier than you might think at first glance, as the function needs to check both the top and bottom rectangles of a pipe against the position of the bird. There are a variety of ways you could approach this; one way is to first check if the bird is vertically within the bounds of either rectangle (either above the top pipe or below the bottom one). But it's only actually colliding with the pipe if the bird is also horizontally within the boundaries of the pipe's width. An elegant way to write this is to combine each of these checks with a logical "and."
-
collides(bird) {
- // Is the bird within the vertical range of the top or bottom pipe?
- let verticalCollision = bird.y < this.top || bird.y > this.bottom;
- // Is the bird within the horizontal range of the pipes?
- let horizontalCollision = bird.x > this.x && bird.x < this.x + this.w;
- //{!1} If it's both a vertical and horizontal hit, it's a hit!
- return verticalCollision && horizontalCollision;
- }
-
The algorithm currently treats the bird as a single point and does not take into account its size. This is something that should be improved for a more realistic version of the game.
-
All that’s left to do is write setup() and draw(). I need a single variable for the bird and an array for a list of pipes. The interaction is just a single press of the mouse. Rather than build a fully functional game with a score, end screen, and other usual elements, I’ll just make sure things are working by drawing the text “OOPS!” near any pipe when there is a collision. The code also assumes an additional offscreen() method added to the Pipe class for when a pipe has moved beyond the left edge of the canvas.
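That method isn't shown in the text, but a reasonable sketch of it would be:

// The pipe is offscreen once its right edge moves past the left of the canvas
offscreen() {
  return this.x + this.w < 0;
}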
-
-
Example 10.3: Flappy Bird Clone
-
-
-
-
-
-
let bird;
-let pipes = [];
-
-function setup() {
- createCanvas(640, 240);
- //{!2} Create a bird and start with one pipe
- bird = new Bird();
- pipes.push(new Pipe());
-}
-
-//{!3} The bird flaps its wings when the mouse is pressed
-function mousePressed() {
- bird.flap();
-}
-
-function draw() {
- background(255);
- // Handle all of the pipes
- for (let i = pipes.length - 1; i >= 0; i--) {
- pipes[i].show();
- pipes[i].update();
- if (pipes[i].collides(bird)) {
- text("OOPS!", pipes[i].x, pipes[i].top + 20);
- }
- if (pipes[i].offscreen()) {
- pipes.splice(i, 1);
- }
- }
- // Update and show the bird
- bird.update();
- bird.show();
- //{!3} Add a new pipe every 75 frames
- if (frameCount % 75 == 0) {
- pipes.push(new Pipe());
- }
-}
-
The trickiest aspect of the above code lies in spawning the pipes at regular intervals with the frameCount variable and modulo operator %. In p5.js, frameCount is a system variable that tracks the number of frames rendered since the sketch began, incrementing with each cycle of the draw() loop. The modulo operator, denoted by %, returns the remainder of a division operation. For example, 7 % 3 would yield 1 because when dividing 7 by 3, the result is 2 with a remainder of 1. The boolean expression frameCount % 75 == 0 therefore checks if the current frameCount value, when divided by 75, has a remainder of 0. This condition is true every 75 frames and at those frame counts, a new pipe is spawned and added to the pipes array.
-
Exercise 10.7
-
Implement a scoring system that awards points for successfully navigating through each set of pipes. Feel free to add your own visual design elements for the bird, pipes, and environment!
-
-
Neuroevolution Flappy Bird
-
The game, as it currently stands, is controlled by mouse clicks. The first step to implementing neuroevolution is to give each bird a brain so that it can decide on its own whether or not to flap its wings.
-
The Bird Brain
-
In the previous section on reinforcement learning, I established a list of input features that comprise the bird's decision-making process. I’m going to use that same list with one simplification. Since the size of the opening between the pipes will remain constant, there’s no need to include both the y positions of the top and bottom; one will suffice.
-
-
y position of the bird.
-
y velocity of the bird.
-
y position of the next pipe’s top (or the bottom!) opening.
-
x distance to the next pipe.
-
-
The outputs have just two options: to flap or not to flap! With the inputs and outputs set, I can add a brain property to the bird’s constructor with the appropriate configuration. Just to demonstrate a different style here, I’ll skip including a separate options variable and pass the properties as an object literal directly into the ml5.neuralNetwork() function. Note the addition of a neuroEvolution property set to true. This is necessary to enable some of the features I’ll be using later in the code.
-
constructor() {
- this.brain = ml5.neuralNetwork({
- // A bird's brain receives 4 inputs and classifies them into one of two labels
- inputs: 4,
- outputs: ["flap", "no flap"],
- task: "classification",
- //{!1} A new property necessary to enable neuro evolution functionality
- neuroEvolution: true
- });
- }
-
Next, I’ll add a new method called think() to the Bird class where all of the necessary inputs for the bird are calculated. The first two are easy, as they are simply the y and velocity properties of the bird itself. However, for inputs 3 and 4, I need to determine which pipe is the “next” pipe.
-
At first glance, it might seem that the next pipe is always the first one in the array, since the pipes are added one at a time to the end of the array. However, once a pipe passes the bird, it is no longer relevant. I need to find the first pipe in the array whose right edge (x-position plus width) is greater than the bird’s x position.
-
think(pipes) {
- let nextPipe = null;
- for (let pipe of pipes) {
- //{!4} The next pipe is the one who hasn't passed the bird yet.
- if (pipe.x + pipe.w > this.x) {
- nextPipe = pipe;
- break;
- }
- }
-
Once I have the next pipe, I can create the four inputs:
-
let inputs = [
- // y-position of bird
- this.y,
- // y-velocity of bird
- this.velocity,
- // top opening of next pipe
- nextPipe.top,
- //{!1} distance from next pipe to this pipe
- nextPipe.x - this.x,
- ];
-
However, I have forgotten a critical step! The range of all input values is determined by the dimensions of the canvas. The neural network, however, expects values in a standardized range, such as 0 to 1. One method to normalize these values is to divide the inputs related to vertical properties by height, and those related to horizontal ones by width.
-
let inputs = [
- //{!4} All of the inputs are now normalized by width and height
- this.y / height,
- this.velocity / height,
- nextPipe.top / height,
- (nextPipe.x - this.x) / width,
- ];
-
With the inputs in hand, I’m ready to pass them to the neural network’s classify() method. There is, however, one small problem. Remember, classify() is asynchronous! This means I need to implement a callback inside the Bird class to process the decision! Unfortunately, doing so adds a level of complexity to the code here which is entirely unnecessary. Asynchronous callbacks with machine learning functions in ml5.js are typically necessary due to the time required to process a large amount of data in a model. Without a callback, the code might have to wait a long time, and if it’s in the context of a p5.js animation, it could severely impact the smoothness of any animation. The neural network here, however, only has four floating point inputs and two output labels! It’s tiny and can run so fast there’s no reason to implement this asynchronously.
-
For completeness, I will include a version of the example on this book’s website that implements neuroevolution with asynchronous callbacks. For the discussion here, however, I’m going to use a feature of ml5.js that allows me to take a shortcut. The method classifySync() is identical to classify(), but it runs synchronously, meaning that the code stops and waits for the results before moving on. You should be very careful when using this version of the method as it can cause problems in other contexts, but it will work well for this scenario. Here is the end of the think() method with classifySync().
-
let results = this.brain.classifySync(inputs);
- if (results[0].label == "flap") {
- this.flap();
- }
- }
-
The neural network's prediction is in the same format as the gesture classifier and the decision can be made by checking the first element of the results array. If the output label is "flap", then call flap().
-
Now is where the real challenge begins: teaching the bird to win the game and flap its wings at the right moment! Recalling the discussion of genetic algorithms from Chapter 9, there are three key principles that underpin Darwinian evolution: Variation, Selection, and Heredity. Let’s go through each of these principles, implementing all the steps of the genetic algorithm itself with neural networks.
-
Variation: A Flock of Flappy Birds
-
A single bird with a randomly initialized neural network isn’t likely to have any success at all. That lone bird will most likely jump incessantly and fly way offscreen or sit perched at the bottom of the canvas awaiting collision after collision with the pipes. This erratic and nonsensical behavior is a reminder: a randomly initialized neural network lacks any knowledge or experience! The bird is essentially making wild guesses for its actions and success is going to be very rare.
-
This is where the first key principle of genetic algorithms comes in: variation. The hope is that by introducing as many different neural network configurations as possible, a few might perform slightly better than the rest. The very first step towards variation is to add an array of many birds.
-
// Population size
-let populationSize = 200;
-// Array of birds
-let birds = [];
-
-function setup() {
- //{!3} Create the bird population
- for (let i = 0; i < populationSize; i++) {
- birds[i] = new Bird();
- }
-
- //{!1} Run the computations on the "cpu" for better performance
- ml5.setBackend("cpu");
-}
-
-function draw() {
- for (let bird of birds) {
- //{!1} This is the new method for the bird to make a decision to flap or not
- bird.think(pipes);
- bird.update();
- bird.show();
- }
-}
-
You might notice a peculiar line of code that's crept into setup: ml5.setBackend("cpu"). When running neural networks, a lot of the heavy computational lifting is often offloaded to the GPU. This is the default behavior, and especially critical for larger pre-trained models included as part of ml5.js.
-
-
GPU vs. CPU
-
-
GPU (Graphics Processing Unit): Originally designed for rendering graphics, GPUs are adept at handling a massive number of operations in parallel. This makes them excellent for the kind of math operations and computations that machine learning models frequently perform.
-
CPU (Central Processing Unit): Often considered the "brain" or general-purpose heart of a computer, a CPU handles a wider variety of tasks than the specialized GPU.
-
-
-
But there's a catch! Transferring data to and from the GPU introduces some overhead. In most cases, the gains from the GPU's parallel processing offset this overhead. However, for such a tiny model like the one here, copying data to the GPU and back slows things down more than it helps.
-
This is where ml5.setBackend("cpu") comes in. By specifying “cpu”, the neural network computations will instead run on the “Central Processing Unit”—the general-purpose heart of your computer—which handles the operations more efficiently for a population of many tiny bird brains.
-
Selection: Flappy Bird Fitness
-
Once I’ve got a diverse population of birds, each with their own neural network, the next step in the genetic algorithm is selection. Which birds should pass on their genes (in this case, neural network weights) to the next generation? In the world of Flappy Bird, the measure of success is the ability to stay alive the longest avoiding the pipes. This is the bird's "fitness." A bird that dodges many pipes is considered more "fit" than one that crashes into the first one it encounters.
-
To track the bird’s fitness, I am going to add two properties to the Bird class: fitness and alive.
-
constructor() {
- // The bird's fitness
- this.fitness = 0;
- //{!1} Keeping track if the bird is alive or not
- this.alive = true;
- }
-
I’ll assign the fitness a numeric value that increases by 1 every cycle through draw(), as long as the bird remains alive. The birds that survive longer should have a higher fitness.
-
update() {
- //{!1} Incrementing the fitness each time through update
- this.fitness++;
- }
-
The alive property is a boolean flag that is initially set to true. However, when a bird collides with a pipe, it is set to false. Only birds that are still alive are updated and drawn to the canvas.
-
function draw() {
- // There are now an array of birds!
- for (let bird of birds) {
- //{!1} Only operate on the birds that are still alive
- if (bird.alive) {
- // Make a decision based on the pipes
- bird.think(pipes);
- // Update and show the bird
- bird.update();
- bird.show();
-
- //{!4} Has the bird hit a pipe? If so, it's no longer alive.
- for (let pipe of pipes) {
- if (pipe.collides(bird)) {
- bird.alive = false;
- }
- }
- }
- }
-}
-
In Chapter 9, I demonstrated two techniques for running an evolutionary simulation. The first involved a population living for a fixed amount of time each generation. The same approach would likely work here as well, but I want to allow the birds to accumulate the highest fitness possible and not arbitrarily stop them based on a time limit. The second technique, demonstrated with the "bloops" example, involved eliminating the fitness score entirely and setting a random probability for cloning alive birds. However, this approach could become messy and risks overpopulation or all the birds dying out completely. Instead, I propose combining elements of both approaches. I will allow a generation to continue as long as at least one bird is still alive. When all the birds have died, I will select parents for the reproduction step and start anew.
-
Let’s begin by writing a function to check if all the birds have died.
-
function allBirdsDead() {
- for (let bird of birds) {
- //{!3} If a single bird is alive, they are not all dead!
- if (bird.alive) {
- return false;
- }
- }
- //{!1} If the loop completes without finding a living bird, they are all dead
- return true;
-}
-
When all the birds have died, then it’s time for selection! In the previous genetic algorithm examples I demonstrated a technique for giving a fair shot to all members of a population, but increasing the chances of selection for those with higher fitness scores. I’ll use that same weightedSelection() function here.
-
//{!1} See chapter 9 for a detailed explanation of this algorithm
-function weightedSelection() {
- let index = 0;
- let start = random(1);
- while (start > 0) {
- start = start - birds[index].fitness;
- index++;
- }
- index--;
- //{!1} Instead of returning the entire Bird object, just the brain is returned
- return birds[index].brain;
-}
-
However, for this algorithm to function properly, I need to first normalize the fitness values of the birds so that they collectively sum to 1. This way, each bird's fitness is equal to its probability of being selected.
-
function normalizeFitness() {
- // Sum the total fitness of all birds
- let sum = 0;
- for (let bird of birds) {
- sum += bird.fitness;
- }
- //{!3} Divide each bird's fitness by the sum
- for (let bird of birds) {
- bird.fitness = bird.fitness / sum;
- }
-}
-
Heredity: Baby Birds
-
There’s only one step left in the genetic algorithm—reproduction. In Chapter 9, I explored in great detail the two-step process for generating a “child” element: crossover and mutation. Crossover is where the third key principle of heredity arrives. After selecting the DNA of two parents, they are combined to form the child’s DNA. At first glance, the idea of inventing an algorithm for crossover of two neural networks might seem daunting. Yet, it’s actually quite straightforward. Think of the individual “genes” of a bird’s brain as the weights within the network. Mixing two such brains boils down to creating a new neural network, where each weight is chosen by a virtual coin flip—picking a value from the first or second parent.
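To make the coin flip concrete, here's what crossover could look like if each brain's weights were stored in a plain array (a conceptual sketch; this is not ml5.js's internal implementation):

function crossover(weightsA, weightsB) {
  let childWeights = [];
  for (let i = 0; i < weightsA.length; i++) {
    // A virtual coin flip picks each weight from one parent or the other
    if (random(1) < 0.5) {
      childWeights[i] = weightsA[i];
    } else {
      childWeights[i] = weightsB[i];
    }
  }
  return childWeights;
}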
-
// Picking two parents and creating a child with crossover
-let parentA = weightedSelection();
-let parentB = weightedSelection();
-let child = parentA.crossover(parentB);
-
As you can see, today is my lucky day, as ml5.js includes a crossover() method that manages the algorithm for mixing the two neural networks. I can happily move on to the mutation step.
-
// Mutating the child
-child.mutate(0.01);
-
The ml5.js library also provides a mutate() method that accepts a "mutation rate" as its primary argument. The rate determines how often a weight will be altered. For example, a rate of 0.01 indicates a 1% chance that any given weight will mutate. During mutation, ml5.js adjusts the weight slightly by adding a small random number to it, rather than selecting a completely new random value. This behavior mimics real-world genetic mutations, which typically introduce minor changes rather than entirely new traits. Although this default approach works for many cases, ml5.js offers more control over the process by allowing the use of a "custom" function as an optional second argument to mutate().
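Conceptually, the default mutation behaves something like the following sketch over a plain array of weights (again, not ml5.js's actual internal code):

function mutate(weights, mutationRate) {
  for (let i = 0; i < weights.length; i++) {
    // Each weight has a mutationRate chance of changing
    if (random(1) < mutationRate) {
      // Nudge the weight by a small random amount rather than replacing it
      weights[i] += random(-0.1, 0.1);
    }
  }
}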
-
These crossover and mutation steps are repeated for the size of the population to create an entire new generation of birds. This is accomplished by populating an empty local array nextBirds with the new birds. Once the population is full, the global birds array is then updated to this fresh generation.
-
function reproduction() {
- //{!1} Start with a new empty array
- let nextBirds = [];
- for (let i = 0; i < populationSize; i++) {
- // Pick 2 parents
- let parentA = weightedSelection();
- let parentB = weightedSelection();
- // Create a child with crossover
- let child = parentA.crossover(parentB);
- // Apply mutation
- child.mutate(0.01);
- //{!1} Create the new bird object
- nextBirds[i] = new Bird(child);
- }
- //{!1} The next generation is now the current one!
- birds = nextBirds;
-}
-
If you look closely at the reproduction() function, you may notice that I’ve slipped in another new feature of the Bird class, specifically an argument to the constructor. When I first introduced the idea of a bird “brain,” each new Bird object was created with a brand new brain—a fresh neural network courtesy of ml5.js. However, I now want the new birds to “inherit” a child brain that was generated through the processes of crossover and mutation.
-
To make this possible, I’ll subtly change the Bird constructor to look for an “optional” argument named, of course, brain.
-
constructor(brain) {
- //{!1} Check if a brain was passed in
- if (brain) {
- this.brain = brain;
- //{!1} If not, proceed as usual
- } else {
- this.brain = ml5.neuralNetwork({
- inputs: 4,
- outputs: ["flap", "no flap"],
- task: "classification",
- neuroEvolution: true,
- });
- }
- }
-
Here’s the magic: if no brain is provided when a new bird is created, the brain argument remains undefined. In JavaScript, undefined is treated as false, and so the code moves on to the else clause and calls ml5.neuralNetwork(). On the other hand, if I do pass in an existing neural network, brain evaluates to true and is assigned directly to this.brain. This elegant trick allows the constructor to handle both scenarios.
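Both scenarios now flow through the same constructor (childBrain below stands in for a network produced by crossover and mutation):

// A brand-new bird with a fresh, random neural network
let bird = new Bird();
// A baby bird inheriting an evolved brain
let baby = new Bird(childBrain);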
-
And with that, the example is complete. All that is left to do is call normalizeFitness() and reproduction() in draw() at the end of each generation when all the birds have died out.
-
-
Example 10.4: Flappy Bird NeuroEvolution
-
-
-
-
-
-
function draw() {
- //{inline} all the rest of draw
-
- //{!4} Create the next generation when all the birds have died
- if (allBirdsDead()) {
- normalizeFitness();
- reproduction();
- }
-}
-
Example 10.4 also adjusts the behavior of birds so that they die when they leave the canvas, either by crashing into the ground or soaring too high above the top.
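That adjustment might look something like this addition to the bird's update() method (a sketch; the exact bounds are a judgment call):

// The bird dies if it leaves the canvas, top or bottom
if (this.y < 0 || this.y > height) {
  this.alive = false;
}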
-
EXERCISE: SPEED UP TIME, ANNOTATE PROCESS, ETC.
-
EXERCISE: SAVE AND LOAD BIRD
-
Steering the Neuroevolutionary Way
-
Having explored neuroevolution with Flappy Bird, I’d like to shift the focus back to the realm of simulation, specifically the steering agents introduced in chapter 5. What if, instead of dictating the rules for an algorithm to calculate a steering force, a simulated creature could evolve its own strategy? Drawing inspiration from Craig Reynolds’ aim of “life-like and improvisational” behaviors, my goal is not to use neuroevolution to engineer the perfect creature that can flawlessly execute a task. Instead, I hope to create a captivating world of simulated life, where the quirks, nuances, and happy accidents of evolution unfold in the canvas.
-
Let’s begin with adapting the Smart Rockets example from Chapter 9. In that example, the genetic code for each rocket was an array of vectors.
-
this.genes = [];
-for (let i = 0; i < lifeSpan; i++) {
- //{!2} Each gene is a vector with random direction and magnitude
- this.genes[i] = p5.Vector.random2D();
- this.genes[i].mult(random(0, this.maxforce));
-}
-
I propose adapting the above to instead use a neural network to "predict" the vector or steering force, transforming the genes into a brain.
But what are the inputs and outputs? In the original example, the vectors from the genes array were applied sequentially, querying the array with a counter variable.
-
this.applyForce(this.genes[this.counter]);
-
Now, instead of an array lookup, I want the neural network to return a vector with predictSync().
-
// Get the outputs from the neural network
-let outputs = this.brain.predictSync(inputs);
-// Use one output for an angle
-let angle = outputs[0].value * TWO_PI;
-// Use another output for magnitude
-let magnitude = outputs[1].value * this.maxforce;
-// Create and apply the force
-let force = p5.Vector.fromAngle(angle).setMag(magnitude);
-this.applyForce(force);
-
The neural network brain outputs two values: one for the angle of the vector, one for the magnitude. You might think to use these outputs for the vector’s x and y components. However, the default output range for an ml5 neural network is between 0 and 1. I want the forces to be capable of pointing in both positive and negative directions! Mapping an angle offers the full range.
-
You may have noticed that the code includes a variable called inputs that I have yet to declare or initialize. Defining the inputs to the neural network is where you as the designer of the system can be the most creative, and consider the simulated biology and capabilities of your creatures.
-
As a first try, I’ll assign something very basic for the inputs and see if it works. Since the Smart Rockets environment is static, with fixed obstacles and targets, what if the brain could learn and estimate a "flow field" to navigate towards its goal? A flow field receives a position and returns a vector, so the neural network can mirror this functionality and use the rocket's position as input (normalizing the x and y values according to the canvas dimensions).
-
let inputs = [this.position.x / width, this.position.y / height];
-
That’s it! Everything else from the original example can remain unchanged: the population, the fitness function, and the selection process. The only other small adjustment is to use ml5.js’s crossover() and mutate() functions, eliminating the need for a separate DNA class with implementations of these steps.
-
-
Example 10.5: Smart Rockets Neuroevolution
-
-
-
-
-
-
reproduction() {
- let nextPopulation = [];
- // Create the next population
- for (let i = 0; i < this.population.length; i++) {
- // Spin the wheel of fortune to pick two parents
- let parentA = this.weightedSelection();
- let parentB = this.weightedSelection();
- let child = parentA.crossover(parentB);
- //{!1} Apply mutation
- child.mutate(this.mutationRate);
- nextPopulation[i] = new Rocket(320, 220, child);
- }
- //{!1} Replace the old population
- this.population = nextPopulation;
- this.generations++;
- }
-
EXERCISE: something about desired vs. steering and using the velocity as inputs also
-
A Changing World
-
In the Smart Rockets example, the environment was static. This made the rocket's task of finding the target easy to accomplish using only its position as input. However, what if the target and the obstacles in the rocket's path were moving? To handle a more complex and changing environment, I need to expand the neural network's inputs and consider additional "features" of the environment. This is similar to what I did with Flappy Bird, where I identified the key data points of the environment to guide the bird's decision-making process.
-
Let’s begin with the simplest version of this scenario, almost identical to the Smart Rockets, but removing obstacles and replacing the fixed target with a random “Perlin noise” walker. In this world, I’ll rename Rocket to Creature and write a new Glow class to represent a gentle, drifting orb. Imagine that the creature’s goal is to reach the light source and dance in its radiant embrace as long as it can.
-
class Glow {
- constructor() {
- //{!2} Two different perlin noise offsets
- this.xoff = 0;
- this.yoff = 1000;
- this.position = createVector();
- this.r = 24;
- }
-
- update() {
- //{!2} Assign the position according to Perlin noise
- this.position.x = noise(this.xoff) * width;
- this.position.y = noise(this.yoff) * height;
- //{!2} Move along the perlin noise space
- this.xoff += 0.01;
- this.yoff += 0.01;
- }
-
- show() {
- stroke(0);
- strokeWeight(2);
- fill(200);
- circle(this.position.x, this.position.y, this.r * 2);
- }
-}
-
As the glow moves, the creature should take the glow’s position into account, as an input to its brain. However, it is not sufficient to know only the light’s position; it’s the position relative to the creature’s own that is key. A nice way to synthesize this information as an input feature is to calculate a vector that points from the creature to the glow. Here is where I can reinvent the seek() method from Chapter 5 using a neural network to estimate the steering force.
-
seek(target) {
- //{!1} Calculate a vector from the position to the target
- let v = p5.Vector.sub(target, this.position);
-
This is a good start, but the components of the vector do not fall within a normalized input range. I could divide v.x by width and v.y by height, but since my canvas is not a perfect square, it may skew the data. Another solution is to normalize the vector, but with that, I would lose any measure of the distance to the glow itself. After all, if the creature is sitting on top of the glow, it should steer differently than if it were very far away. There are multiple approaches I could take here. I’ll go with saving the distance in a separate variable before normalizing and plan to use it as an additional input feature.
-
seek(target) {
- let v = p5.Vector.sub(target, this.position);
- // Save the distance in a variable (one input)
- let distance = v.mag();
- // Normalize the vector pointing from position to target (two inputs)
- v.normalize();
-
Now, if you recall, a key element of Reynolds’ steering formula involves comparing the desired velocity to the current velocity. How the vehicle is currently moving plays a significant role in how it should steer! For the creature to consider its own velocity as part of its decision-making, I can include the velocity vector in the inputs as well. To normalize these values, it works beautifully to divide the vector’s components by the maxspeed property. This retains both the direction and magnitude of the vector. The rest of the code follows the same pattern, with the output of the neural network synthesized into a force to be applied to the creature.
-
seek(target) {
- let v = p5.Vector.sub(target.position, this.position);
- let distance = v.mag();
- v.normalize();
- // Compiling the features into an inputs array
- let inputs = [
- v.x,
- v.y,
- distance / width,
- this.velocity.x / this.maxspeed,
- this.velocity.y / this.maxspeed,
- ];
- //{!5} Predicting the force to apply
- let outputs = this.brain.predictSync(inputs);
- let angle = outputs[0].value * TWO_PI;
- let magnitude = outputs[1].value;
- let force = p5.Vector.fromAngle(angle).setMag(magnitude);
- this.applyForce(force);
- }
-
Enough has changed here from the rockets that it is also worth reconsidering the fitness function. Previously, fitness was calculated based on the rocket's distance from the target at the end of each generation. However, since this new target is moving, I prefer to accumulate the amount of time the creature is able to catch the glow as the measure of fitness. This can be achieved by checking the distance between the creature and the glow in the update() method and incrementing a fitness value when they are intersecting. Both the Glow and Creature classes include a radius property r, which can be used to determine whether they intersect.
-
update(target) {
- //{inline} the usual updating of position, velocity, acceleration
-
- //{!4} Increase the fitness whenever the creature reaches the glow
- let d = p5.Vector.dist(this.position, target.position);
- if (d < this.r + target.r) {
- this.fitness++;
- }
- }
-
Now, one thing you may have noticed about these examples is that testing them requires a delightful exercise in patience as you watch the slow crawl of the simulation play out generation after generation. This is part of the point—I want to watch the process! It’s also a nice excuse to take a break, which is to be encouraged. Head outside, enjoy some non-simulated nature, perhaps a small cup of soothing tea while you wait? Take comfort in the fact that you only have to wait billions of milliseconds rather than the billions of years required for actual biology.
-
Nevertheless, for the system to evolve, there’s no inherent requirement that you draw and animate the world. Hundreds of generations could be completed in the blink of an eye if you could skip all that time spent rendering the scene.
-
One way to avoid tearing your hair out every time you change a small parameter and find yourself waiting what seems like hours to see if it had any effect is to render the environment, well, less often. In other words, you can compute multiple simulation steps per draw() cycle with a for loop.
-
Here is where I can make use of one of my favorite features of p5.js: the ability to quickly create standard interface elements. You saw this before in the interactive selection example from Chapter 9 with createButton(). In the following code, a "range" slider is used to control the skips in time. Only the code for the new time slider is shown here, excluding all the other global variables and their initializations in setup(). Remember, you will also need to separate the code for visuals from the physics to ensure that rendering still occurs only once.
-
//{!1} A variable to hold the slider
-let timeSlider;
-
-function setup() {
- //{!1} Creating the slider with a min and max range, and starting value
- timeSlider = createSlider(1, 20, 1);
-}
-
-function draw() {
- //{!5} All of the drawing code happening just once!
- background(255);
- glow.show();
- for (let creature of creatures) {
- creature.show();
- }
-
- //{!8} All of the simulation code running multiple times according to the slider
- for (let i = 0; i < timeSlider.value(); i++) {
- for (let creature of creatures) {
- creature.seek(glow);
- creature.update(glow);
- }
- glow.update();
- lifeCounter++;
- }
-}
-
In p5.js, a slider is defined with three arguments: a minimum value (for when the slider is all the way to the left), a maximum value (for when the slider is all the way to the right), and a starting value (for when the page first loads). This allows the simulation to run at 20X speed to reach the results of evolution more quickly, then slow back down to bask in the glory of the intelligent behaviors on display. Here is the final version of the example with a new Creature constructor to create a neural network. Everything else has remained the same from the Flappy Bird example code.
-
-
Example 10.6: Neuroevolution Steering
-
-
-
-
-
-
class Creature {
- constructor(x, y, brain) {
- this.position = createVector(x, y);
- this.velocity = createVector(0, 0);
- this.acceleration = createVector(0, 0);
- this.r = 4;
- this.maxspeed = 4;
- this.fitness = 0;
-
- if (brain) {
- this.brain = brain;
- } else {
- this.brain = ml5.neuralNetwork({
- inputs: 5,
- outputs: 2,
- task: "regression",
- neuroEvolution: true,
- });
- }
- }
-
- //{inline} seek() predicts a steering force as described previously
-
- //{inline} update() increments the fitness if the glow is reached as described previously
-
-}
-
Neuroevolution Ecosystem
-
If I’m being honest here, this chapter is getting kind of long. My goodness, this book is incredibly long. Are you really still here reading? I’ve been working on it for over ten years, and right now, at this very moment as I type these letters, I feel like stopping. But I cannot. I will not. There is one more thing I must demonstrate, that I am obligated to include, that I won’t be able to tolerate skipping. So bear with me just a little longer. I hope it will be worth it.
-
There are two key elements of what I’ve demonstrated so far that don’t fit into my dream of the Ecosystem Project that has been the through-line of this book. The first is something I covered in Chapter 9 with the introduction of the bloops: a system of creatures that all live and die together, starting completely over with each subsequent generation, is not how the biological world works! I’d like to also examine this in the context of neuroevolution.
-
But even more so, there’s a major flaw in the way I am extracting features from a scene. The creatures in Example 10.6 are all-knowing. They know exactly where the glow is, regardless of how far away it is or what might be blocking their vision or senses. Yes, it may be reasonable to assume they are aware of their current velocity, but I didn’t introduce any limits to their perception of external elements in the environment.
-
A common approach in reinforcement learning simulations is to attach sensors to an agent. For example, consider a simulated mouse in a maze searching for cheese in the dark. Its whiskers might act as proximity sensors to detect walls and turns. The mouse can’t see the entire maze, only its immediate surroundings. Another example is a bat using echolocation to navigate, or a car on a winding road that can only see what is projected in front of its headlights.
-
I’d like to build on this idea of the whiskers (or more formally the “vibrissae”) found in mice, cats, and other mammals. In the real world, animals use their vibrissae to navigate and detect nearby objects, especially in dark or obscured environments.
I’ll keep the generic class name Creature but think of them now as the circular “bloops” of Chapter 9, enhanced with whisker-like sensors that emanate from their center in all directions.
-
class Creature {
- constructor(x, y) {
- // The creature has a position and radius
- this.position = createVector(x, y);
- this.r = 16;
- // The creature has an array of sensors
- this.sensors = [];
-
- // The creature has 5 sensors
- let totalSensors = 5;
- for (let i = 0; i < totalSensors; i++) {
- // First, calculate a direction for the sensor
- let angle = map(i, 0, totalSensors, 0, TWO_PI);
- // Create a vector a little bit longer than the radius as the sensor
- this.sensors[i] = p5.Vector.fromAngle(angle).mult(this.r * 1.5);
- }
- }
-}
-
The code creates a series of vectors that each describe the direction and length of one “whisker” sensor attached to the creature. However, just the vector is not enough. I want the sensor to include a value, a numeric representation of what it is sensing. This value can be thought of as analogous to the intensity of touch. Just as a cat’s whisker might detect a faint touch from a distant object or a stronger push from a closer one, the virtual sensor’s value can range between 0 and 1 to represent proximity. Let’s assume there is a Food class to describe a circle of deliciousness that the creature wants to find.
-
class Food {
- //{!4} A piece of food has a random position and fixed radius
- constructor() {
- this.position = createVector(random(width), random(height));
- this.r = 50;
- }
-
- show() {
- noStroke();
- fill(0, 100);
- circle(this.position.x, this.position.y, this.r * 2);
- }
-}
-
A Food object is a circle drawn according to a position and radius. I’ll assume the creature in my simulation has no vision and relies on sensors to detect if there is food nearby. This raises the question: how can I determine if a sensor is touching the food? One approach is to use a technique called “raycasting.” This method is commonly employed in computer graphics to project rays (often representing light) from an origin point in a scene to determine what objects they intersect with. Raycasting is useful for visibility and collision checks, exactly what I am doing here!
-
Although raycasting is a robust solution, it requires more involved mathematics than I'd like to delve into here. For those interested, an explanation and implementation are available in Coding Challenge #145 on thecodingtrain.com. For this example, I’ll opt for a more straightforward approach and check whether the endpoint of a sensor lies inside the food circle.
-
-
- Figure 10.x: The endpoint of a sensor is inside or outside the food, based on its distance to the center of the food.
-
-
As I want the sensor to store a value for its sensing along with the sensing algorithm itself, it makes sense to encapsulate these elements into a Sensor class.
-
class Sensor {
- constructor(v) {
- this.v = v.copy();
- //{!1} The sensor also stores a value for the proximity of what it is sensing
- this.value = 0;
- }
-
- sense(position, food) {
- //{!1} Find the "tip" (or endpoint) of the sensor by adding position
- let end = p5.Vector.add(position, this.v);
- //{!1} How far is it from the food center
- let d = end.dist(food.position);
- //{!1} If it is within the radius light up the sensor
- if (d < food.r) {
- //{!1} The farther the endpoint is into the food, the higher the sensor's value
- this.value = map(d, 0, food.r, 1, 0);
- } else {
- this.value = 0;
- }
- }
-}
-
Notice how the sensing mechanism gauges how deep inside the food’s radius the endpoint is with the map() function. When the sensor's endpoint is just touching the outer boundary of the food, the value starts at 0. As the endpoint moves closer to the center of the food, the value increases, maxing out at 1. If the sensor isn't touching the food at all, its value remains at 0. This gradient of feedback mirrors the varying intensity of touch or pressure in the real world.
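-
To make this gradient concrete, here are a few sample values from the map() call, using the food radius of 50 from the Food class:
-
// d = 50 → map(50, 0, 50, 1, 0) yields 0 (just touching the edge)
-// d = 25 → map(25, 0, 50, 1, 0) yields 0.5 (halfway in)
-// d = 0 → map(0, 0, 50, 1, 0) yields 1 (at the center)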
-
Let’s look at testing the sensors with one bloop (controlled by the mouse) and one piece of food (placed at a random position). When the sensors touch the food, they light up, getting brighter as they near the food’s center.
-
-
Example 10.7: Bloops with Sensors
-
-
-
-
-
-
let bloop, food;
-
-function setup() {
- createCanvas(640, 240);
- //{!2} One bloop, one piece of food
- bloop = new Creature();
- food = new Food();
-}
-
-function draw() {
- background(255);
-
- // Temporarily control the bloop with the mouse
- bloop.position.x = mouseX;
- bloop.position.y = mouseY;
- // Draw the food and the bloop
- food.show();
- bloop.show();
- // The bloop senses the food
- bloop.sense(food);
-
-}
-
-class Creature {
- constructor(x, y) {
- this.position = createVector(x, y);
- this.r = 16;
-
- //{!8} Create the sensors for the creature
- this.sensors = [];
- let totalSensors = 15;
- for (let i = 0; i < totalSensors; i++) {
- let a = map(i, 0, totalSensors, 0, TWO_PI);
- let v = p5.Vector.fromAngle(a);
- v.mult(this.r * 2);
- this.sensors[i] = new Sensor(v);
- }
- }
-
- //{!4} Call the sense() method for each sensor
- sense(food) {
- for (let i = 0; i < this.sensors.length; i++) {
- this.sensors[i].sense(this.position, food);
- }
- }
-
- //{inline} see book website for the drawing code
-}
-
Are you thinking what I’m thinking? What if the values of those sensors are the inputs to a neural network?! Assuming I bring back all of the necessary physics bits in the Creature class, I could write a new think() method that processes the sensor values through the neural network “brain” and outputs a steering force, just as with the previous two examples.
-
think() {
- // Build an input array from the sensor values
- let inputs = [];
- for (let i = 0; i < this.sensors.length; i++) {
- inputs[i] = this.sensors[i].value;
- }
-
- // Predicting a steering force from the sensors
- let outputs = this.brain.predictSync(inputs);
- let angle = outputs[0].value * TWO_PI;
- let magnitude = outputs[1].value;
- let force = p5.Vector.fromAngle(angle).setMag(magnitude);
- this.applyForce(force);
- }
-
The logical next step would be to incorporate all the usual parts of the genetic algorithm: writing a fitness function (how much food did each creature eat?) and performing selection after a fixed generational time period. But this is a great opportunity to test out the principles of a “continuous” ecosystem, with a more sophisticated environment and set of potential behaviors for the creatures themselves.
-
Instead of a fixed lifespan cycle for the population, I will introduce the concept of health for each creature. For every cycle through draw() that a creature lives, its health deteriorates.
-
class Creature {
- constructor() {
- //{inline} All of the creature's properties
-
- // The health starts at 100
- this.health = 100;
- }
-
- update() {
- //{inline} the usual updating position, velocity, acceleration
-
- // Losing some health!
- this.health -= 0.25;
- }
-
Now in draw(), if any bloop’s health drops below zero, it dies and is deleted from the array. And for reproduction, instead of performing the usual crossover and mutation all at once, each bloop (with a health greater than zero) will have a 0.1% chance of reproducing.
-
function draw() {
- for (let i = bloops.length - 1; i >= 0; i--) {
- if (bloops[i].health < 0) {
- bloops.splice(i, 1);
- } else if (random(1) < 0.001) {
- let child = bloops[i].reproduce();
- bloops.push(child);
- }
- }
- }
-
This approach forgoes the crossover() functionality and instead uses the copy() method: the reproductive process in this case is cloning rather than mating. A higher mutation rate isn’t always ideal, but it will help introduce additional variation in the absence of mixing weights. However, I encourage you to consider ways that you could also incorporate crossover.
-
reproduce() {
- //{!2} copy and mutate rather than crossover and mutate
- let brain = this.brain.copy();
- brain.mutate(0.1);
- return new Creature(this.position.x, this.position.y, brain);
- }
-
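If you do want to experiment with reintroducing crossover, one possibility is to mix brains with a second living bloop. Here’s a minimal sketch, assuming a partner is picked elsewhere in draw() (the reproduceWith() method is a suggestion, not part of the example):
-
reproduceWith(partner) {
- //{!2} Crossover with the partner's brain, then mutate
- let brain = this.brain.crossover(partner.brain);
- brain.mutate(0.1);
- return new Creature(this.position.x, this.position.y, brain);
- }
-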
Now, for this to work, some bloops should live longer than others. By consuming food, their health increases, giving them a boost of time to reproduce. I’ll manage this in an eat() method of the Creature class.
-
eat(food) {
- // If the bloop is close to the food, increase its health!
- let d = p5.Vector.dist(this.position, food.position);
- if (d < this.r + food.r) {
- this.health += 0.5;
- }
- }
-
Is this enough for the system to evolve and find its equilibrium? I could dive deeper, tweaking parameters and behaviors in pursuit of the ultimate evolutionary system. The allure of the infinite rabbit hole is one I cannot easily escape. I will do that on my own time and, for the purposes of this book, invite you to run the example, experiment, and draw your own conclusions.
-
-
Example 10.8: Neuroevolution Ecosystem
-
-
-
-
-
-
let bloops = [];
-let timeSlider;
-let food = [];
-
-function setup() {
- createCanvas(640, 240);
- ml5.setBackend("cpu");
- for (let i = 0; i < 20; i++) {
- bloops[i] = new Creature(random(width), random(height));
- }
- for (let i = 0; i < 8; i++) {
- food[i] = new Food();
- }
- timeSlider = createSlider(1, 20, 1);
-}
-
-function draw() {
- background(255);
- for (let i = 0; i < timeSlider.value(); i++) {
- for (let i = bloops.length - 1; i >= 0; i--) {
- bloops[i].think();
- //{!3} Check each piece of food (eat() expects a food object)
- for (let treat of food) {
- bloops[i].eat(treat);
- }
- bloops[i].update();
- bloops[i].borders();
- if (bloops[i].health < 0) {
- bloops.splice(i, 1);
- } else if (random(1) < 0.001) {
- let child = bloops[i].reproduce();
- bloops.push(child);
- }
- }
- }
- for (let treat of food) {
- treat.show();
- }
- for (let bloop of bloops) {
- bloop.show();
- }
-}
-
The final example also includes a few additional features that you’ll find in the accompanying code, such as an array of food that shrinks as it gets eaten (re-spawning when it is depleted). Additionally, the bloops shrink as their health deteriorates.
-
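As a rough sketch, the shrinking food might be handled by a method like the following in the Food class (the shrink() method name and the amounts are placeholders; see the accompanying code for the actual implementation):
-
shrink() {
- // Lose a bit of radius each time a bloop takes a bite
- this.r -= 0.02;
- //{!4} Re-spawn at a new random position when depleted
- if (this.r < 1) {
- this.position = createVector(random(width), random(height));
- this.r = 50;
- }
- }
-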
The Ecosystem Project
Step 10 Exercise:
-
Try incorporating the concept of a “brain” into the creatures in your world!
-
-
What are each creature’s inputs and outputs?
-
How do the creatures perceive? Do they “see” everything or have limits based on sensors?
-
How can you find balance in your system?
-
+
+
-
The end
-
If you’re still reading, thank you! You’ve reached the end of the book. But for as much material as this book contains, I’ve barely scratched the surface of the physical world we inhabit and of techniques for simulating it. It’s my intention for this book to live as an ongoing project, and I hope to continue adding new tutorials and examples to the book’s website as well as expand and update the accompanying video tutorials on thecodingtrain.com. Your feedback is truly appreciated, so please get in touch via email at daniel@shiffman.net or by contributing to the GitHub repository at github.com/nature-of-code, in keeping with the open-source spirit of the project. Share your work. Keep in touch. Let’s be two with nature.
-
\ No newline at end of file
diff --git a/content/11_nn_ga.html b/content/11_nn_ga.html
new file mode 100644
index 00000000..65eddaf8
--- /dev/null
+++ b/content/11_nn_ga.html
@@ -0,0 +1,846 @@
+
+
Chapter 11. NeuroEvolution
+
Reinforcement Learning
+
There is so much more to working with data, machine learning, ml5.js, and beyond. I’ve only scratched the surface. As I close out this book, my goal is to tie the foundational machine learning concepts I’ve covered back into animated, interactive p5.js sketches that simulate physics and complex systems. Let’s see if I can bring as many concepts from the entire book back together for one last hurrah!
+
Towards the start of this chapter, I referenced an approach to incorporating machine learning into a simulated environment called “reinforcement learning.” Imagine embedding a neural network into any of the example objects (walker, mover, particle, vehicle) and calculating a force or some other action. The neural network could receive inputs related to the environment (such as distance to an obstacle) and produce a decision that requires a choice from a set of discrete options (e.g., move “left” or “right”) or a set of continuous values (e.g., magnitude and direction of a steering force). This is starting to sound familiar: it’s a neural network that receives inputs and performs classification or regression!
+
Here is where things take a turn, however. To better illustrate the concept, let’s start with a hopefully easy-to-understand and possibly familiar scenario: the game “Flappy Bird.” The game is deceptively simple. You control a small bird that continually moves horizontally across the screen. With each tap or click, the bird flaps its wings and rises upward. The challenge? A series of vertical pipes spaced apart at irregular intervals emerges from the right. The pipes have gaps, and your primary objective is to navigate the bird safely through these gaps. If you hit one, it’s game over. As you progress, the game’s speed increases, and the more pipes you navigate, the higher your score.
Suppose you wanted to automate the gameplay, and instead of a human tapping, a neural network will make the decision as to whether to “flap” or not. Could machine learning work here? Skipping over the “data” steps for a moment, let’s think about “choosing a model.” What are the inputs and outputs of the neural network?
+
Let’s begin with the inputs. This is quite the intriguing question because there isn’t a definitive answer! In a scenario where you want to see if you could train an automated neural network player without any knowledge of the game itself, it might make the most sense to have the inputs be all the pixels of the game screen. Maybe you don’t want to put your thumb on the scale in terms of what aspects of the game are important. This approach attempts to feed everything about the game into the model.
+
As for me, I understand the Flappy Bird game quite well, and I believe I can identify the important data points needed to make a decision. I can bypass all the pixels and boil the essence of the game down into the important features that define it. Remember the discussion about features in the context of the gesture classifier? It applies here as well. These features are not arbitrary aspects of the game; they represent the distinct characteristics of Flappy Bird that are most salient for the neural network's decisions.
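+
To be specific, here is the list of features I’ll use (five in total):
+
+
y position of the bird.
+
y velocity of the bird.
+
y position of the next pipe’s top opening.
+
y position of the next pipe’s bottom opening.
+
x distance to the next pipe.
+
+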
These are the inputs to the neural network. But what about the outputs? Is the problem a "classification" or "regression" one? This may seem like an odd question to ask in the context of a game like Flappy Bird, but it's actually incredibly important and relates to how the game is controlled. Tapping the screen, pressing a button, or using keyboard controls are all examples of classification. After all, there is only a discrete set of choices: tap or not, press 'w', 'a', 's', or 'd' on the keyboard. On the other hand, using an analog controller like a joystick leans towards regression. A joystick can be tilted in varying degrees in any direction, translating to continuous output values for both its horizontal and vertical axes.
+
For Flappy Bird, it’s a classification decision with only two choices:
+
+
flap
+
don’t flap
+
+
+
+ Figure 10.22: The neural network as ml5.js might design it
+
+
This gives me the information needed to choose the model and I can let ml5.js build it.
+
let options = {
+ inputs: 5,
+ outputs: ["flap", "no flap"]
+}
+let birdBrain = ml5.neuralNetwork(options);
+
Now if I were to continue this line of thinking further, I’d have to go back to steps 1 and 2 of the machine learning process: data collection and preparation. How exactly would that work here? One idea would be to scour the earth for the greatest Flappy Bird player of all time and record them playing for hours. I could log all of the input features for every moment of gameplay along with whether the player flapped or not. Feed all that data into the model, train it, and I can see the headlines already: “Artificial Intelligence Bot Defeats Flappy Bird.”
+
But um, wait a second here, has an agent really learned to play Flappy Bird on its own, or has it just learned to mirror the play of a human? What if that human missed a key aspect of Flappy Bird strategy? The automated player would never discover it. Not to mention the fact that collecting all that data would be an incredibly tedious and laborious process.
+
This is where reinforcement learning comes in. Reinforcement learning is a type of machine learning where an agent learns through interacting with the environment and receiving feedback in the form of rewards or penalties. Unlike supervised learning, where the “correct” answers are provided by a training dataset, the agent in reinforcement learning learns the answers, the optimal decisions, through trial and error. For example, in Flappy Bird, the bird could receive a positive reward every time it successfully navigates a pipe, but a negative reward if it hits a pipe or the ground. The agent's goal is to figure out which actions lead to the most cumulative rewards over time.
+
At the start, the Flappy Bird agent won't know the best time to flap its wings, leading to many crashes. But as it accrues more and more feedback from countless play-throughs, it begins to refine its actions and develop the optimal strategy to navigate the pipes without crashing, maximizing its total reward. This process of "learning by doing" and optimizing based on feedback is the essence of reinforcement learning.
+
In the next section, I'll explore the principles I’m outlining here with a twist. Traditional techniques in reinforcement learning involve defining something called a “policy” and a corresponding “reward function.” Instead of going down this road, however, I will introduce a related technique that is baked into ml5.js: neuroevolution. This technique combines the evolutionary algorithms from Chapter 9 with neural networks. By evolving the weights of a neural network, I’ll demonstrate how the bird can perfect its journey through the pipes! I'll then finish off the chapter with a variation of Craig Reynolds’ steering behaviors from Chapter 5 using neuroevolution.
+
Evolving Neural Networks is NEAT!
+
Instead of traditional backpropagation to train the weights in a neural network, neuroevolution applies principles of genetic algorithms and natural selection: the best-performing neural networks are "selected" and their "genes" (or weights) are combined and mutated to create the next generation of networks.
+
One of the first examples of neuroevolution can be found in the 1994 paper "Genetic Lander: An experiment in accurate neuro-genetic control" by Edmund Ronald and Marc Schoenauer. In the 1990s traditional neural network training methods were still nascent, and this work explored an alternative approach. The paper describes how a simulated spacecraft—in a game aptly named "Lunar Lander"—can learn how to safely descend and land on a surface. Rather than use hand-crafted rules or labeled datasets, the researchers opted for genetic algorithms to evolve and train neural networks over multiple generations. And it worked!
+
In 2002, Kenneth O. Stanley and Risto Miikkulainen expanded on earlier neuroevolutionary approaches with their paper titled "Evolving Neural Networks Through Augmenting Topologies." Unlike the lunar lander method that focused on evolving the weights of a neural network, Stanley and Miikkulainen introduced a method that also evolved the network's structure itself! The “NEAT” algorithm—NeuroEvolution of Augmenting Topologies—starts with simple networks and progressively refines their topology through evolution. As a result, NEAT can discover network architectures tailored to specific tasks, often yielding more optimized and effective solutions.
+
A comprehensive NEAT implementation would require going deeper into the neural network architecture with TensorFlow.js directly. My goal here is to emulate Ronald and Schoenauer’s research in the modern context of the web browser with ml5.js. Rather than use the lunar lander game, I’ll give this a try with Flappy Bird!
+
Coding Flappy Bird
+
The game Flappy Bird was created by Vietnamese game developer Dong Nguyen in 2013. In January 2014, it became the most downloaded app on the Apple App Store. However, on February 8th, Nguyen announced that he was removing the game due to its addictive nature. Since then, it has been one of the most cloned games in history. Flappy Bird is a perfect example of "Nolan's Law," an aphorism attributed to the founder of Atari and creator of Pong, Nolan Bushnell: "All the best games are easy to learn and difficult to master.”
+
Flappy Bird is also a terrific game for beginner coders to recreate as a learning exercise, and it fits perfectly with the concepts in this book. To create the game with p5.js, I’ll start by defining a Bird class. Now, I’m going to do something that may shock you here, but I’m going to skip using p5.Vector for this demonstration and instead use separate x and y properties for the bird’s position. Since the bird only moves along the vertical axis in the game, x remains constant! Therefore, the velocity (and all of the relevant forces) can be a single scalar value for just the y-axis. To simplify things even further, I’ll add the forces directly to the bird's velocity instead of accumulating them into an acceleration variable. In addition to the usual update(), I’ll include a flap() method for the bird to fly upward. The show() method is not included below as it remains the same and draws only a circle.
+
class Bird {
+ constructor() {
+ // The bird's position (x will be constant)
+ this.x = 50;
+ this.y = 120;
+
+ // Velocity and forces are scalar since the bird only moves along the y-axis
+ this.velocity = 0;
+ this.gravity = 0.5;
+ this.flapForce = -10;
+ }
+
+ // The bird flaps its wings
+ flap() {
+ this.velocity += this.flapForce;
+ }
+
+ update() {
+ // Add gravity
+ this.velocity += this.gravity;
+ this.y += this.velocity;
+ // Dampen velocity
+ this.velocity *= 0.95;
+
+ // Handle the "floor"
+ if (this.y > height) {
+ this.y = height;
+ this.velocity = 0;
+ }
+ }
+}
+
The other primary element of the game are the pipes that the bird must navigate through. I’ll create a Pipe class to describe a pair of rectangles, one that emanates from the top of the canvas and one from the bottom. Just as the bird only moves vertically, the pipes slide along only the horizontal axis, so the properties can also be scalar values rather than vectors. The pipes move at a constant speed and don’t experience any physics.
+
class Pipe {
+ constructor() {
+ // The size of the opening between the two parts of the pipe
+ this.spacing = 100;
+ // A random height for the top of the pipe
+ this.top = random(height - this.spacing);
+ // The starting position of the bottom pipe (based on the top)
+ this.bottom = this.top + this.spacing;
+ // The pipe starts at the edge of the canvas
+ this.x = width;
+ // Width of the pipe
+ this.w = 20;
+ // Horizontal speed of the pipe
+ this.velocity = 2;
+ }
+
+ // Draw the two pipes
+ show() {
+ fill(0);
+ noStroke();
+ rect(this.x, 0, this.w, this.top);
+ rect(this.x, this.bottom, this.w, height - this.bottom);
+ }
+
+ // Update the pipe horizontal position
+ update() {
+ this.x -= this.velocity;
+ }
+}
+
To be clear, the "reality" depicted in the game is a bird flying through pipes. The bird is moving along two dimensions while the pipes remain stationary. However, it is simpler in terms of code to consider the bird as stationary in its horizontal position and treat the pipes as moving.
+
With a Bird and Pipe class written, I'm almost set to run the game. However, there remains a key missing piece: collisions. The whole game rides on the bird attempting to avoid the pipes! This is nothing new, you’ve seen many examples of objects checking their positions against others throughout this book.
+
Now, there's a design choice to make. A function to check collisions could logically be placed in either the Bird class (to check if the bird hits a pipe) or in the Pipe class (to check if a pipe hits the bird). Either can be justified depending on your point of view. I'll place it in the Pipe class and call it collides().
+
It's a little trickier than you might think at first glance, as the function needs to check both the top and bottom rectangles of a pipe against the position of the bird. There are a variety of ways you could approach this; one way is to first check if the bird is vertically within the bounds of either rectangle (either above the top pipe or below the bottom one). But it's only actually colliding with the pipe if the bird is also horizontally within the boundaries of the pipe's width. An elegant way to write this is to combine each of these checks with a logical "and."
+
collides(bird) {
+ // Is the bird within the vertical range of the top or bottom pipe?
+ let verticalCollision = bird.y < this.top || bird.y > this.bottom;
+ // Is the bird within the horizontal range of the pipes?
+ let horizontalCollision = bird.x > this.x && bird.x < this.x + this.w;
+ //{!1} If it's both a vertical and horizontal hit, it's a hit!
+ return verticalCollision && horizontalCollision;
+ }
+
The algorithm currently treats the bird as a single point and does not take into account its size. This is something that should be improved for a more realistic version of the game.
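+
For instance, if the Bird class were given a radius property r (an assumption; the class as written doesn't include one), each check could be padded by that radius:
+
collides(bird) {
+ //{!2} The same checks as before, expanded by the bird's radius
+ let verticalCollision = bird.y - bird.r < this.top || bird.y + bird.r > this.bottom;
+ let horizontalCollision = bird.x + bird.r > this.x && bird.x - bird.r < this.x + this.w;
+ return verticalCollision && horizontalCollision;
+ }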
+
All that’s left to do is write setup() and draw(). I need a single variable for the bird and an array for a list of pipes. The interaction is just a single press of the mouse. Rather than build a fully functional game with a score, end screen, and other usual elements, I’ll just make sure things are working by drawing the text “OOPS!” near any pipe when there is a collision. The code also assumes an additional offscreen() method in the Pipe class for when a pipe has moved beyond the left edge of the canvas.
+
+
Example 10.3: Flappy Bird Clone
+
+
+
+
+
+
let bird;
+let pipes = [];
+
+function setup() {
+ createCanvas(640, 240);
+ //{!2} Create a bird and start with one pipe
+ bird = new Bird();
+ pipes.push(new Pipe());
+}
+
+//{!3} The bird flaps its wings when the mouse is pressed
+function mousePressed() {
+ bird.flap();
+}
+
+function draw() {
+ background(255);
+ // Handle all of the pipes
+ for (let i = pipes.length - 1; i >= 0; i--) {
+ pipes[i].show();
+ pipes[i].update();
+ if (pipes[i].collides(bird)) {
+ text("OOPS!", pipes[i].x, pipes[i].top + 20);
+ }
+ if (pipes[i].offscreen()) {
+ pipes.splice(i, 1);
+ }
+ }
+ // Update and show the bird
+ bird.update();
+ bird.show();
+ //{!3} Add a new pipe every 75 frames
+ if (frameCount % 75 == 0) {
+ pipes.push(new Pipe());
+ }
+}
+
The trickiest aspect of the above code lies in spawning the pipes at regular intervals with the frameCount variable and modulo operator %. In p5.js, frameCount is a system variable that tracks the number of frames rendered since the sketch began, incrementing with each cycle of the draw() loop. The modulo operator, denoted by %, returns the remainder of a division operation. For example, 7 % 3 would yield 1 because when dividing 7 by 3, the result is 2 with a remainder of 1. The boolean expression frameCount % 75 == 0 therefore checks if the current frameCount value, when divided by 75, has a remainder of 0. This condition is true every 75 frames and at those frame counts, a new pipe is spawned and added to the pipes array.
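+
A few sample values make the pattern clear:
+
// frameCount 74 → 74 % 75 yields 74, so no new pipe
+// frameCount 75 → 75 % 75 yields 0, spawn a pipe!
+// frameCount 150 → 150 % 75 yields 0, spawn another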
+
+
Exercise 10.7
+
Implement a scoring system that awards points for successfully navigating through each set of pipes. Feel free to add your own visual design elements for the bird, pipes, and environment!
+
+
Neuroevolution Flappy Bird
+
The game, as it currently stands, is controlled by mouse clicks. The first step to implementing neuroevolution is to give each bird a brain so that it can decide on its own whether or not to flap its wings.
+
The Bird Brain
+
In the previous section on reinforcement learning, I established a list of input features that comprise the bird's decision-making process. I’m going to use that same list with one simplification. Since the size of the opening between the pipes will remain constant, there’s no need to include both the y positions of the top and bottom; one will suffice.
+
+
y position of the bird.
+
y velocity of the bird.
+
y position of the next pipe’s top (or the bottom!) opening.
+
x distance to the next pipe.
+
+
The outputs have just two options: to flap or not to flap! With the inputs and outputs set, I can add a brain property to the bird’s constructor with the appropriate configuration. Just to demonstrate a different style here, I’ll skip including a separate options variable and pass the properties as an object literal directly into the ml5.neuralNetwork() function. Note the addition of a neuroEvolution property set to true. This is necessary to enable some of the features I’ll be using later in the code.
+
constructor() {
+ this.brain = ml5.neuralNetwork({
+ // A bird's brain receives 4 inputs and classifies them into one of two labels
+ inputs: 4,
+ outputs: ["flap", "no flap"],
+ task: "classification",
+ //{!1} A new property necessary to enable neuroevolution functionality
+ neuroEvolution: true
+ });
+ }
+
Next, I’ll add a new method called think() to the Bird class where all of the necessary inputs for the bird are calculated. The first two are easy, as they are simply the y and velocity properties of the bird itself. However, for inputs 3 and 4, I need to determine which pipe is the “next” pipe.
+
At first glance, it might seem that the next pipe is always the first one in the array, since the pipes are added one at a time to the end of the array. However, once a pipe passes the bird, it is no longer relevant. I need to find the first pipe in the array whose right edge (x-position plus width) is greater than the bird’s x position.
+
think(pipes) {
+ let nextPipe = null;
+ for (let pipe of pipes) {
+ //{!4} The next pipe is the one that hasn't passed the bird yet.
+ if (pipe.x + pipe.w > this.x) {
+ nextPipe = pipe;
+ break;
+ }
+ }
+
Once I have the next pipe, I can create the four inputs:
+
let inputs = [
+ // y-position of bird
+ this.y,
+ // y-velocity of bird
+ this.velocity,
+ // top opening of next pipe
+ nextPipe.top,
+ //{!1} x distance from the bird to the next pipe
+ nextPipe.x - this.x,
+ ];
+
However, I have forgotten a critical step! The range of all input values is determined by the dimensions of the canvas. The neural network, however, expects values in a standardized range, such as 0 to 1. One method to normalize these values is to divide the inputs related to vertical properties by height, and those related to horizontal ones by width.
+
let inputs = [
+ //{!4} All of the inputs are now normalized by width and height
+ this.y / height,
+ this.velocity / height,
+ nextPipe.top / height,
+ (nextPipe.x - this.x) / width,
+ ];
+
With the inputs in hand, I’m ready to pass them to the neural network’s classify() method. There is, however, one small problem. Remember, classify() is asynchronous! This means I need to implement a callback inside the Bird class to process the decision! Unfortunately, doing so adds a level of complexity to the code here that is entirely unnecessary. Asynchronous callbacks with machine learning functions in ml5.js are typically necessary due to the time required to process a large amount of data in a model. Without a callback, the code might have to wait a long time, and if it’s in the context of a p5.js animation, it could severely impact the smoothness of the animation. The neural network here, however, only has four floating point inputs and two output labels! It’s tiny and can run so fast that there’s no reason to implement this asynchronously.
+
For completeness, I will include a version of the example on this book’s website that implements neuroevolution with asynchronous callbacks. For the discussion here, however, I’m going to use a feature of ml5.js that allows me to take a shortcut. The method classifySync() is identical to classify(), but it runs synchronously, meaning that the code stops and waits for the results before moving on. You should be very careful when using this version of the method as it can cause problems in other contexts, but it will work well for this scenario. Here is the end of the think() method with classifySync().
+
let results = this.brain.classifySync(inputs);
+ if (results[0].label == "flap") {
+ this.flap();
+ }
+ }
+
The neural network's prediction is in the same format as the gesture classifier and the decision can be made by checking the first element of the results array. If the output label is "flap", then call flap().
+
Now is where the real challenge begins: teaching the bird to win the game and flap its wings at the right moment! Recalling the discussion of genetic algorithms from Chapter 9, there are three key principles that underpin Darwinian evolution: Variation, Selection, and Heredity. Let’s go through each of these principles, implementing all the steps of the genetic algorithm itself with neural networks.
+
Variation: A Flock of Flappy Birds
+
A single bird with a randomly initialized neural network isn’t likely to have any success at all. That lone bird will most likely jump incessantly and fly way offscreen or sit perched at the bottom of the canvas awaiting collision after collision with the pipes. This erratic and nonsensical behavior is a reminder: a randomly initialized neural network lacks any knowledge or experience! The bird is essentially making wild guesses for its actions and success is going to be very rare.
+
This is where the first key principle of genetic algorithms comes in: variation. The hope is that by introducing as many different neural network configurations as possible, a few might perform slightly better than the rest. The very first step towards variation is to add an array of many birds.
+
// Population size
+let populationSize = 200;
+// Array of birds
+let birds = [];
+
+function setup() {
+ //{!3} Create the bird population
+ for (let i = 0; i < populationSize; i++) {
+ birds[i] = new Bird();
+ }
+
+ //{!1} Run the computations on the "cpu" for better performance
+ ml5.setBackend("cpu");
+}
+
+function draw() {
+ for (let bird of birds) {
+ //{!1} This is the new method for the bird to make a decision to flap or not
+ bird.think(pipes);
+ bird.update();
+ bird.show();
+ }
+}
+
You might notice a peculiar line of code that's crept into setup: ml5.setBackend("cpu"). When running neural networks, a lot of the heavy computational lifting is often offloaded to the GPU. This is the default behavior, and especially critical for larger pre-trained models included as part of ml5.js.
+
+
GPU vs. CPU
+
+
GPU (Graphics Processing Unit): Originally designed for rendering graphics, GPUs are adept at handling a massive number of operations in parallel. This makes them excellent for the kind of math operations and computations that machine learning models frequently perform.
+
CPU (Central Processing Unit): Often considered the "brain" or general-purpose heart of a computer, a CPU handles a wider variety of tasks than the specialized GPU.
+
+
+
But there's a catch! Transferring data to and from the GPU introduces some overhead. In most cases, the gains from the GPU's parallel processing offset this overhead. However, for such a tiny model like the one here, copying data to the GPU and back slows things down more than it helps.
+
This is where ml5.setBackend("cpu") comes in. By specifying "cpu", the neural network computations will instead run on the “Central Processing Unit”—the general-purpose heart of your computer—which handles the operations more efficiently for a population of many tiny bird brains.
+
Selection: Flappy Bird Fitness
+
Once I’ve got a diverse population of birds, each with their own neural network, the next step in the genetic algorithm is selection. Which birds should pass on their genes (in this case, neural network weights) to the next generation? In the world of Flappy Bird, the measure of success is the ability to stay alive the longest avoiding the pipes. This is the bird's "fitness." A bird that dodges many pipes is considered more "fit" than one that crashes into the first one it encounters.
+
To track the bird’s fitness, I am going to add two properties to the Bird class: fitness and alive.
+
constructor() {
+ // The bird's fitness
+ this.fitness = 0;
+ //{!1} Keeping track if the bird is alive or not
+ this.alive = true;
+ }
+
I’ll assign the fitness a numeric value that increases by 1 every cycle through draw(), as long as the bird remains alive. The birds that survive longer should have a higher fitness.
+
update() {
+ //{!1} Incrementing the fitness each time through update
+ this.fitness++;
+ }
+
The alive property is a boolean flag that is initially set to true. However, when a bird collides with a pipe, it is set to false. Only birds that are still alive are updated and drawn to the canvas.
+
function draw() {
+ // There are now an array of birds!
+ for (let bird of birds) {
+ //{!1} Only operate on the birds that are still alive
+ if (bird.alive) {
+ // Make a decision based on the pipes
+ bird.think(pipes);
+ // Update and show the bird
+ bird.update();
+ bird.show();
+
+ //{!4} Has the bird hit a pipe? If so, it's no longer alive.
+ for (let pipe of pipes) {
+ if (pipe.collides(bird)) {
+ bird.alive = false;
+ }
+ }
+ }
+ }
+}
+
In Chapter 9, I demonstrated two techniques for running an evolutionary simulation. The first involved a population living for a fixed amount of time each generation. The same approach would likely work here as well, but I want to allow the birds to accumulate the highest fitness possible and not arbitrarily stop them based on a time limit. The second technique, demonstrated with the "bloops" example, involved eliminating the fitness score entirely and setting a random probability for cloning alive birds. However, this approach could become messy and risks overpopulation or all the birds dying out completely. Instead, I propose combining elements of both approaches. I will allow a generation to continue as long as at least one bird is still alive. When all the birds have died, I will select parents for the reproduction step and start anew.
+
Let’s begin by writing a function to check if all the birds have died.
+
function allBirdsDead() {
+ for (let bird of birds) {
+ //{!3} If a single bird is alive, they are not all dead!
+ if (bird.alive) {
+ return false;
+ }
+ }
+ //{!1} If the loop completes without finding a living bird, they are all dead
+ return true;
+}
+
When all the birds have died, it’s time for selection! In the previous genetic algorithm examples, I demonstrated a technique for giving a fair shot to all members of a population while increasing the chances of selection for those with higher fitness scores. I’ll use that same weightedSelection() function here.
+
//{!1} See chapter 9 for a detailed explanation of this algorithm
+function weightedSelection() {
+ let index = 0;
+ let start = random(1);
+ while (start > 0) {
+ start = start - birds[index].fitness;
+ index++;
+ }
+ index--;
+ //{!1} Instead of returning the entire Bird object, just the brain is returned
+ return birds[index].brain;
+}
+
However, for this algorithm to function properly, I need to first normalize the fitness values of the birds so that they collectively sum to 1. This way, each bird's fitness is equal to its probability of being selected.
+
function normalizeFitness() {
+ // Sum the total fitness of all birds
+ let sum = 0;
+ for (let bird of birds) {
+ sum += bird.fitness;
+ }
+ //{!3} Divide each bird's fitness by the sum
+ for (let bird of birds) {
+ bird.fitness = bird.fitness / sum;
+ }
+}
+
Heredity: Baby Birds
+
There’s only one step left in the genetic algorithm—reproduction. In Chapter 9, I explored in great detail the two-step process for generating a “child” element: crossover and mutation. Crossover is where the third key principle of heredity arrives. After selecting the DNA of two parents, they are combined to form the child’s DNA. At first glance, the idea of inventing an algorithm for the crossover of two neural networks might seem daunting. Yet, it’s actually quite straightforward. Think of the individual “genes” of a bird’s brain as the weights within the network. Mixing two such brains boils down to creating a new neural network, where each weight is chosen by a virtual coin flip—picking a value from the first or second parent.
+
// Picking two parents and creating a child with crossover
+let parentA = weightedSelection();
+let parentB = weightedSelection();
+let child = parentA.crossover(parentB);
+
As you can see, today is my lucky day: ml5.js includes a crossover() method that manages the algorithm for mixing the two neural networks. I can happily move on to the mutation step.
+
// Mutating the child
+child.mutate(0.01);
+
The ml5.js library also provides a mutate() method that accepts a "mutation rate" as its primary argument. The rate determines how often a weight will be altered. For example, a rate of 0.01 indicates a 1% chance that any given weight will mutate. During mutation, ml5.js adjusts the weight slightly by adding a small random number to it, rather than selecting a completely new random value. This behavior mimics real-world genetic mutations, which typically introduce minor changes rather than entirely new traits. Although this default approach works for many cases, ml5.js offers more control over the process by allowing the use of a "custom" function as an optional second argument to mutate().
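+
For example, a custom function could nudge selected weights with Gaussian noise of a chosen standard deviation. This is only a sketch; I'm assuming the function receives a weight value and returns its replacement, so check the ml5.js documentation for the exact signature.
+
// A sketch of a custom mutation function (signature assumed)
+child.mutate(0.01, (weight) => weight + randomGaussian(0, 0.1));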
+
These crossover and mutation steps are repeated for the size of the population to create an entire new generation of birds. This is accomplished by populating an empty local array nextBirds with the new birds. Once the population is full, the global birds array is then updated to this fresh generation.
+
function reproduction() {
+ //{!1} Start with a new empty array
+ let nextBirds = [];
+ for (let i = 0; i < populationSize; i++) {
+ // Pick 2 parents
+ let parentA = weightedSelection();
+ let parentB = weightedSelection();
+ // Create a child with crossover
+ let child = parentA.crossover(parentB);
+ // Apply mutation
+ child.mutate(0.01);
+ //{!1} Create the new bird object
+ nextBirds[i] = new Bird(child);
+ }
+ //{!1} The next generation is now the current one!
+ birds = nextBirds;
+}
+
If you look closely at the reproduction() function, you may notice that I’ve slipped in another new feature of the Bird class, specifically an argument to the constructor. When I first introduced the idea of a bird “brain,” each new Bird object was created with a brand new brain—a fresh neural network courtesy of ml5.js. However, I now want the new birds to “inherit” a child brain that was generated through the processes of crossover and mutation.
+
To make this possible, I’ll subtly change the Bird constructor to look for an “optional” argument named, of course, brain.
+
constructor(brain) {
+ //{!1} Check if a brain was passed in
+ if (brain) {
+ this.brain = brain;
+ //{!1} If not, proceed as usual
+ } else {
+ this.brain = ml5.neuralNetwork({
+ inputs: 4,
+ outputs: ["flap", "no flap"],
+ task: "classification",
+ neuroEvolution: true,
+ });
+ }
+ }
+
Here’s the magic: if no brain is provided when a new bird is created, the brain argument remains undefined. In JavaScript, undefined is treated as false, and so the code moves on to the else and calls ml5.neuralNetwork(). On the other hand, if I do pass in an existing neural network, brain evaluates to true and is assigned directly to this.brain. This elegant trick allows the constructor to handle different scenarios.
+
And with that, the example is complete. All that is left to do is call normalizeFitness() and reproduction() in draw() at the end of each generation when all the birds have died out.
+
+
Example 10.4: Flappy Bird NeuroEvolution
+
+
+
+
+
+
function draw() {
+ //{inline} all the rest of draw
+
+ //{!4} Create the next generation when all the birds have died
+ if (allBirdsDead()) {
+ normalizeFitness();
+ reproduction();
+ }
+}
+
Example 10.4 also adjusts the behavior of birds so that they die when they leave the canvas, either by crashing into the ground or soaring too high above the top.
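+
That check might look something like the following inside the draw() loop (a sketch; the exact bounds are a design choice):
+
if (bird.y > height || bird.y < 0) {
+ bird.alive = false;
+}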
+
EXERCISE: Speed up time by running multiple steps of the simulation per frame of animation, and annotate the process by displaying information such as the current generation number and the best fitness achieved so far.
+
EXERCISE: Save the brain of the best-performing bird and load it into a new sketch so that the trained behavior can be demonstrated without re-running the evolution.
+
Steering the Neuroevolutionary Way
+
Having explored neuroevolution with Flappy Bird, I’d like to shift the focus back to the realm of simulation, specifically the steering agents introduced in chapter 5. What if, instead of dictating the rules for an algorithm to calculate a steering force, a simulated creature could evolve its own strategy? Drawing inspiration from Craig Reynolds’ aim of “life-like and improvisational” behaviors, my goal is not to use neuroevolution to engineer the perfect creature that can flawlessly execute a task. Instead, I hope to create a captivating world of simulated life, where the quirks, nuances, and happy accidents of evolution unfold in the canvas.
+
Let’s begin with adapting the Smart Rockets example from Chapter 9. In that example, the genetic code for each rocket was an array of vectors.
+
this.genes = [];
+for (let i = 0; i < lifeSpan; i++) {
+ //{!2} Each gene is a vector with random direction and magnitude
+ this.genes[i] = p5.Vector.random2D();
+ this.genes[i].mult(random(0, this.maxforce));
+}
+
I propose adapting the above to instead use a neural network to "predict" the vector or steering force, transforming the genes into a brain.
But what are the inputs and outputs? In the original example, the vectors from the genes array were applied sequentially, querying the array with a counter variable.
+
this.applyForce(this.genes[this.counter]);
+
Now, instead of an array lookup, I want the neural network to return a vector with predictSync().
+
// Get the outputs from the neural network
+let outputs = this.brain.predictSync(inputs);
+// Use one output for an angle
+let angle = outputs[0].value * TWO_PI;
+// Use the other output for the magnitude
+let magnitude = outputs[1].value * this.maxforce;
+// Create and apply the force
+let force = p5.Vector.fromAngle(angle).setMag(magnitude);
+this.applyForce(force);
+
The neural network brain outputs two values: one for the angle of the vector, one for the magnitude. You might think to use these outputs for the vector’s x and y components. However, the default output range for an ml5.js neural network is between 0 and 1. I want the forces to be capable of pointing in both positive and negative directions! Mapping an angle offers the full range.
+
You may have noticed that the code includes a variable called inputs that I have yet to declare or initialize. Defining the inputs to the neural network is where you as the designer of the system can be the most creative, and consider the simulated biology and capabilities of your creatures.
+
As a first try, I’ll assign something very basic for the inputs and see if it works. Since the Smart Rockets environment is static, with fixed obstacles and targets, what if the brain could learn and estimate a "flow field" to navigate towards its goal? A flow field receives a position and returns a vector, so the neural network can mirror this functionality and use the rocket's position as input (normalizing the x and y values according to the canvas dimensions).
+
let inputs = [this.position.x / width, this.position.y / height];
+
That’s it! Everything else from the original example can remain unchanged: the population, the fitness function, and the selection process. The only other small adjustment is to use ml5.js’s crossover() and mutate() functions, eliminating the need for a separate DNA class with implementations of these steps.
+
+
Example 10.5: Smart Rockets Neuroevolution
+
+
+
+
+
+
reproduction() {
+ let nextPopulation = [];
+ // Create the next population
+ for (let i = 0; i < this.population.length; i++) {
+ // Spin the wheel of fortune to pick two parents
+ let parentA = this.weightedSelection();
+ let parentB = this.weightedSelection();
+ let child = parentA.crossover(parentB);
+ //{!1} Apply mutation
+ child.mutate(this.mutationRate);
+ nextPopulation[i] = new Rocket(320, 220, child);
+ }
+ //{!1} Replace the old population
+ this.population = nextPopulation;
+ this.generations++;
+ }
+
EXERCISE: Recall that Reynolds’ formula computes the steering force as the desired velocity minus the current velocity. Modify the example so that the neural network predicts a desired velocity rather than the steering force itself, and try adding the rocket’s current velocity as additional inputs.
+
A Changing World
+
In the Smart Rockets example, the environment was static. This made the rocket's task of finding the target easy to accomplish using only its position as input. However, what if the target and the obstacles in the rocket's path were moving? To handle a more complex and changing environment, I need to expand the neural network's inputs and consider additional "features" of the environment. This is similar to what I did with Flappy Bird, where I identified the key data points of the environment to guide the bird's decision-making process.
+
Let’s begin with the simplest version of this scenario: almost identical to the Smart Rockets, but with the obstacles removed and the fixed target replaced by a random Perlin noise walker. In this world, I’ll rename Rocket to Creature and write a new Glow class to represent a gentle, drifting orb. Imagine that the creature’s goal is to reach the light source and dance in its radiant embrace as long as it can.
+
class Glow {
+ constructor() {
+ //{!2} Two different Perlin noise offsets
+ this.xoff = 0;
+ this.yoff = 1000;
+ this.position = createVector();
+ this.r = 24;
+ }
+
+ update() {
+ //{!2} Assign the position according to Perlin noise
+ this.position.x = noise(this.xoff) * width;
+ this.position.y = noise(this.yoff) * height;
+ //{!2} Move along the Perlin noise space
+ this.xoff += 0.01;
+ this.yoff += 0.01;
+ }
+
+ show() {
+ stroke(0);
+ strokeWeight(2);
+ fill(200);
+ circle(this.position.x, this.position.y, this.r * 2);
+ }
+}
+
As the glow moves, the creature should take the glow’s position into account as an input to its brain. However, knowing the light’s absolute position isn’t sufficient; it’s the position relative to the creature’s own that is key. A nice way to synthesize this information as an input feature is to calculate a vector that points from the creature to the glow. Here is where I can reinvent the seek() method from Chapter 5, using a neural network to estimate the steering force.
+
seek(target) {
+ //{!1} Calculate a vector from the position to the target
+ let v = p5.Vector.sub(target, this.position);
+
This is a good start, but the components of the vector do not fall within a normalized input range. I could divide v.x by width and v.y by height, but since my canvas is not a perfect square, it may skew the data. Another solution is to normalize the vector, but with that, I would lose any measure of the distance to the glow itself. After all, if the creature is sitting on top of the glow, it should steer differently than if it were very far away. There are multiple approaches I could take here. I’ll go with saving the distance in a separate variable before normalizing and plan to use it as an additional input feature.
+
seek(target) {
+ let v = p5.Vector.sub(target, this.position);
+ // Save the distance in a variable (one input)
+ let distance = v.mag();
+ // Normalize the vector pointing from position to target (two inputs)
+ v.normalize();
+
Now, if you recall, a key element of Reynolds’ steering formula involves comparing the desired velocity to the current velocity. How the vehicle is currently moving plays a significant role in how it should steer! For the creature to consider its own velocity as part of its decision-making, I can include the velocity vector in the inputs as well. To normalize these values, it works beautifully to divide the vector’s components by the maxspeed property, retaining both the direction and relative magnitude of the vector. The rest of the code follows the same pattern as before, with the output of the neural network synthesized into a force to be applied to the creature.
+
seek(target) {
+ let v = p5.Vector.sub(target.position, this.position);
+ let distance = v.mag();
+ v.normalize();
+ // Compiling the features into an inputs array
+ let inputs = [
+ v.x,
+ v.y,
+ distance / width,
+ this.velocity.x / this.maxspeed,
+ this.velocity.y / this.maxspeed,
+ ];
+ //{!5} Predicting the force to apply
+ let outputs = this.brain.predictSync(inputs);
+ let angle = outputs[0].value * TWO_PI;
+ let magnitude = outputs[1].value;
+ let force = p5.Vector.fromAngle(angle).setMag(magnitude);
+ this.applyForce(force);
+ }
+
Enough has changed here from the rockets that it’s also worth reconsidering the fitness function. Previously, fitness was calculated based on the rocket’s distance from the target at the end of each generation. Since this new target is moving, however, I’d prefer to accumulate the amount of time the creature is able to catch the glow as the measure of fitness. This can be achieved by checking the distance between the creature and the glow in the update() method and incrementing a fitness value when they intersect. Both the Glow and Creature classes include a radius property r, which can be used to determine collision.
+
update(target) {
+ //{inline} the usual updating of position, velocity, acceleration
+
+ //{!4} Increase the fitness whenever the creature reaches the glow
+ let d = p5.Vector.dist(this.position, target.position);
+ if (d < this.r + target.r) {
+ this.fitness++;
+ }
+ }
+
Now, one thing you may have noticed about these examples is that testing them requires a delightful exercise in patience as you watch the slow crawl of the simulation play out generation after generation. This is part of the point—I want to watch the process! It’s also a nice excuse to take a break, which is to be encouraged. Head outside, enjoy some non-simulated nature, perhaps a small cup of soothing tea while you wait? Take comfort in the fact that you only have to wait billions of milliseconds rather than the billions of years required for actual biology.
+
Nevertheless, for the system to evolve, there’s no inherent requirement that you draw and animate the world. Hundreds of generations could be completed in the blink of an eye if you could skip all that time spent rendering the scene.
+
One way to avoid tearing your hair out every time you change a small parameter and find yourself waiting what seems like hours to see if it had any effect is to render the environment, well, less often. In other words, you can compute multiple simulation steps per draw() cycle with a for loop.
+
Here is where I can make use of one of my favorite features of p5.js: the ability to quickly create standard interface elements. You saw this before in the interactive selection example from Chapter 10 with createButton(). In the following code, a "range" slider is used to control the skips in time. Only the code for the new time slider is shown here, excluding all the other global variables and their initializations in setup(). Remember, you will also need to separate the code for visuals from the physics to ensure that rendering still occurs only once.
+
//{!1} A variable to hold the slider
+let timeSlider;
+
+function setup() {
+ //{!1} Creating the slider with a min and max range, and starting value
+ timeSlider = createSlider(1, 20, 1);
+}
+
+function draw() {
+ //{!5} All of the drawing code happening just once!
+ background(255);
+ glow.show();
+ for (let creature of creatures) {
+ creature.show();
+ }
+
+ //{!8} All of the simulation code running multiple times according to the slider
+ for (let i = 0; i < timeSlider.value(); i++) {
+ for (let creature of creatures) {
+ creature.seek(glow);
+ creature.update(glow);
+ }
+ glow.update();
+ lifeCounter++;
+ }
+}
+
In p5.js, a slider is defined with three arguments: a minimum value (for when the slider is all the way to the left), a maximum value (for when the slider is all the way to the right), and a starting value (for when the page first loads). This allows the simulation to run at 20X speed to reach the results of evolution more quickly, then slow back down to bask in the glory of the intelligent behaviors on display. Here is the final version of the example with a new Creature constructor to create a neural network. Everything else has remained the same from the Flappy Bird example code.
+
+
Example 10.6: Neuroevolution Steering
+
+
+
+
+
+
class Creature {
+ constructor(x, y, brain) {
+ this.position = createVector(x, y);
+ this.velocity = createVector(0, 0);
+ this.acceleration = createVector(0, 0);
+ this.r = 4;
+ this.maxspeed = 4;
+ this.fitness = 0;
+
+ if (brain) {
+ this.brain = brain;
+ } else {
+ this.brain = ml5.neuralNetwork({
+ inputs: 5,
+ outputs: 2,
+ task: "regression",
+ neuroEvolution: true,
+ });
+ }
+ }
+
+ //{inline} seek() predicts a steering force as described previously
+
+ //{inline} update() increments the fitness if the glow is reached as described previously
+
+}
+
Neuroevolution Ecosystem
+
If I’m being honest here, this chapter is getting kind of long. My goodness, this book is incredibly long, are you really still here reading? I’ve been working on it for over ten years and right now, at this very moment as I type these letters, I feel like stopping. But I cannot. I will not. There is one more thing I must demonstrate, that I am obligated to, that I won’t be able to tolerate skipping. So bear with me just a little longer. I hope it will be worth it.
+
There are two key elements of what I’ve demonstrated so far that don’t fit into my dream of the Ecosystem Project that has been the through-line of this book. The first is something I covered in Chapter 9 with the introduction of the bloops: a system of creatures that all live and die together, starting completely over with each subsequent generation, is not how the biological world works! I’d like to also examine this in the context of neuroevolution.
+
But even more so, there’s a major flaw in the way I’m extracting features from the scene. The creatures in Example 10.6 are all-knowing. They know exactly where the glow is, regardless of how far away it is or what might be blocking their vision or senses. Yes, it may be reasonable to assume they’re aware of their own current velocity, but I haven’t introduced any limits on their perception of external elements in the environment.
+
A common approach in reinforcement learning simulations is to attach sensors to an agent. For example, consider a simulated mouse in a maze searching for cheese in the dark. Its whiskers might act as proximity sensors to detect walls and turns. The mouse can’t see the entire maze, only its immediate surroundings. Another example is a bat using echolocation to navigate, or a car on a winding road that can only see what is projected in front of its headlights.
+
I’d like to build on this idea of the whiskers (or more formally the “vibrissae”) found in mice, cats, and other mammals. In the real world, animals use their vibrissae to navigate and detect nearby objects, especially in dark or obscured environments.
I’ll keep the generic class name Creature but think of them now as the circular “bloops” of Chapter 9, enhanced with whisker-like sensors that emanate from their center in all directions.
+
class Creature {
+ constructor(x, y) {
+ // The creature has a position and radius
+ this.position = createVector(x, y);
+ this.r = 16;
+ // The creature has an array of sensors
+ this.sensors = [];
+
+ // The creature has 5 sensors
+ let totalSensors = 5;
+ for (let i = 0; i < totalSensors; i++) {
+ // First, calculate a direction for the sensor
+ let angle = map(i, 0, totalSensors, 0, TWO_PI);
+ // Create a vector a little bit longer than the radius as the sensor
+ this.sensors[i] = p5.Vector.fromAngle(angle).mult(this.r * 1.5);
+ }
+ }
+}
+
The code creates a series of vectors that each describe the direction and length of one “whisker” sensor attached to the creature. However, just the vector is not enough. I want the sensor to include a value, a numeric representation of what it is sensing. This value can be thought of as analogous to the intensity of touch. Just as a cat's whisker might detect a faint touch from a distant object or a stronger push from a closer one, the virtual sensor's value could range to represent proximity. Let’s assume there is a Food class to describe a circle of deliciousness that the creature wants to find.
+
class Food {
+ //{!4} A piece of food has a random position and fixed radius
+ constructor() {
+ this.position = createVector(random(width), random(height));
+ this.r = 50;
+ }
+
+ show() {
+ noStroke();
+ fill(0, 100);
+ circle(this.position.x, this.position.y, this.r * 2);
+ }
+}
+
A Food object is a circle drawn according to a position and radius. I’ll assume the creature in my simulation has no vision and relies on sensors to detect whether food is nearby. This raises the question: how can I determine if a sensor is touching the food? One approach is to use a technique called “raycasting.” This method is commonly employed in computer graphics to project rays (often representing light) from an origin point in a scene to determine which objects they intersect with. Raycasting is useful for visibility and collision checks, exactly what I’m doing here!
+
Although raycasting is a robust solution, it requires more involved mathematics than I’d like to delve into here. For those interested, an explanation and implementation are available in Coding Challenge #145 on thecodingtrain.com. For now, I’ll opt for a more straightforward approach and check whether the endpoint of a sensor lies inside the food circle.
+
+
+ Figure 10.x: The endpoint of a sensor is inside or outside the food, based on its distance to the center of the food.
+
+
As I want the sensor to store a value for its sensing along with the sensing algorithm itself, it makes sense to encapsulate these elements into a Sensor class.
+
class Sensor {
+ constructor(v) {
+ this.v = v.copy();
+ //{!1} The sensor also stores a value for the proximity of what it is sensing
+ this.value = 0;
+ }
+
+ sense(position, food) {
+ //{!1} Find the "tip" (or endpoint) of the sensor by adding position
+ let end = p5.Vector.add(position, this.v);
+ //{!1} How far is it from the food center
+ let d = end.dist(food.position);
+ //{!1} If it is within the radius light up the sensor
+ if (d < food.r) {
+ //{!1} The farther the endpoint is toward the center of the food, the more the sensor activates
+ this.value = map(d, 0, food.r, 1, 0);
+ } else {
+ this.value = 0;
+ }
+ }
+}
+
Notice how the sensing mechanism gauges how deep inside the food’s radius the endpoint is with the map() function. When the sensor's endpoint is just touching the outer boundary of the food, the value starts at 0. As the endpoint moves closer to the center of the food, the value increases, maxing out at 1. If the sensor isn't touching the food at all, its value remains at 0. This gradient of feedback mirrors the varying intensity of touch or pressure in the real world.
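+
For example, assuming a hypothetical food radius of 50, the sensor value works out as follows:
+
map(50, 0, 50, 1, 0); // endpoint right at the boundary → 0
+map(25, 0, 50, 1, 0); // halfway to the center → 0.5
+map(0, 0, 50, 1, 0);  // endpoint at the exact center → 1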
+
Let’s look at testing the sensors with one bloop (controlled by the mouse) and one piece of food (placed at the center of the canvas). When the sensors touch the food, they light up, getting brighter as they get closer to the food’s center.
+
+
Example 10.7: Bloops with Sensors
+
+
+
+
+
+
let bloop, food;
+
+function setup() {
+ createCanvas(640, 240);
+ //{!2} One bloop, one piece of food
+ bloop = new Creature();
+ food = new Food();
+}
+
+function draw() {
+ background(255);
+
+ // Temporarily control the bloop with the mouse
+ bloop.position.x = mouseX;
+ bloop.position.y = mouseY;
+ // Draw the food and the bloop
+ food.show();
+ bloop.show();
+ // The bloop senses the food
+ bloop.sense(food);
+
+}
+
+class Creature {
+ constructor(x, y) {
+ this.position = createVector(x, y);
+ this.r = 16;
+
+ //{!8} Create the sensors for the creature
+ this.sensors = [];
+ let totalSensors = 15;
+ for (let i = 0; i < totalSensors; i++) {
+ let a = map(i, 0, totalSensors, 0, TWO_PI);
+ let v = p5.Vector.fromAngle(a);
+ v.mult(this.r * 2);
+ this.sensors[i] = new Sensor(v);
+ }
+ }
+
+ //{!4} Call the sense() method for each sensor
+ sense(food) {
+ for (let i = 0; i < this.sensors.length; i++) {
+ this.sensors[i].sense(this.position, food);
+ }
+ }
+
+ //{inline} see book website for the drawing code
+}
+
Are you thinking what I’m thinking? What if the values of those sensors are the inputs to a neural network?! Assuming I bring back all of the necessary physics bits in the Creature class, I could write a new think() method that processes the sensor values through the neural network “brain” and outputs a steering force, just as with the previous two examples.
+
think() {
+ // Build an input array from the sensor values
+ let inputs = [];
+ for (let i = 0; i < this.sensors.length; i++) {
+ inputs[i] = this.sensors[i].value;
+ }
+
+ // Predicting a steering force from the sensors
+ let outputs = this.brain.predictSync(inputs);
+ let angle = outputs[0].value * TWO_PI;
+ let magnitude = outputs[1].value;
+ let force = p5.Vector.fromAngle(angle).setMag(magnitude);
+ this.applyForce(force);
+ }
+
The logical next step would be to incorporate all the usual parts of the genetic algorithm: writing a fitness function (how much food did each creature eat?) and performing selection after a fixed generation time period. But this is a great opportunity to test out the principles of a “continuous” ecosystem, with a more sophisticated environment and set of potential behaviors for the creatures themselves.
+
Instead of a fixed lifespan cycle for the population, I’ll introduce the concept of health for each individual creature. For every cycle through draw() that a creature lives, its health deteriorates.
+
class Creature {
+ constructor() {
+ //{inline} All of the creature's properties
+
+ // The health starts at 100
+ this.health = 100;
+ }
+
+ update() {
+ //{inline} the usual updating position, velocity, acceleration
+
+ // Losing some health!
+ this.health -= 0.25;
+ }
+
Now in draw(), if any bloop’s health drops below zero, it dies and is deleted from the array. (The loop runs in reverse so that removing an element with splice() doesn’t disrupt the order of the bloops yet to be checked.) And for reproduction, instead of performing the usual crossover and mutation all at once, each bloop with a health greater than zero will have a 0.1% chance of reproducing.
+
function draw() {
+ for (let i = bloops.length - 1; i >= 0; i--) {
+ if (bloops[i].health < 0) {
+ bloops.splice(i, 1);
+ } else if (random(1) < 0.001) {
+ let child = bloops[i].reproduce();
+ bloops.push(child);
+ }
+ }
+ }
+
This methodology forgoes the crossover() functionality and instead uses the copy() method: the reproductive process in this case is cloning rather than mating. A higher mutation rate isn’t always ideal, but it helps introduce additional variation in the absence of mixing weights between two parents. Even so, I encourage you to consider ways that you could also incorporate crossover.
+
reproduce() {
+ //{!2} copy and mutate rather than crossover and mutate
+ let brain = this.brain.copy();
+ brain.mutate(0.1);
+ return new Creature(this.position.x, this.position.y, brain);
+ }
+
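One way to experiment with crossover in this continuous setting would be to pick a second parent whenever a bloop reproduces. Here is a minimal sketch, assuming a global bloops array and the same ml5.js crossover() and mutate() methods used earlier (the reproduceWith() method name is hypothetical):
+
reproduceWith(partner) {
+ // Mate two brains rather than cloning just one
+ let childBrain = this.brain.crossover(partner.brain);
+ // A lower mutation rate may suffice when weights are mixed
+ childBrain.mutate(0.01);
+ return new Creature(this.position.x, this.position.y, childBrain);
+ }
+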
Now, for this to work, some bloops should live longer than others. By consuming food, their health increases, giving them more time to reproduce. I’ll manage this in an eat() method of the Creature class.
+
eat(food) {
+ // If the bloop is close to the food, increase its health!
+ let d = p5.Vector.dist(this.position, food.position);
+ if (d < this.r + food.r) {
+ this.health += 0.5;
+ }
+ }
+
Is this enough for the system to evolve and find its equilibrium? I could dive deeper, tweaking parameters and behaviors in pursuit of the ultimate evolutionary system. The allure of that infinite rabbit hole is one I cannot easily escape, but I’ll explore it on my own time. For the purposes of this book, I invite you to run the example, experiment, and draw your own conclusions.
+
+
Example 10.8: Neuroevolution Ecosystem
+
+
+
+
+
+
let bloops = [];
+let timeSlider;
+let food = [];
+
+function setup() {
+ createCanvas(640, 240);
+ ml5.setBackend("cpu");
+ for (let i = 0; i < 20; i++) {
+ bloops[i] = new Creature(random(width), random(height));
+ }
+ for (let i = 0; i < 8; i++) {
+ food[i] = new Food();
+ }
+ timeSlider = createSlider(1, 20, 1);
+}
+
+function draw() {
+ background(255);
+ for (let i = 0; i < timeSlider.value(); i++) {
+ for (let i = bloops.length - 1; i >= 0; i--) {
+ bloops[i].think();
+ bloops[i].eat();
+ bloops[i].update();
+ bloops[i].borders();
+ if (bloops[i].health < 0) {
+ bloops.splice(i, 1);
+ } else if (random(1) < 0.001) {
+ let child = bloops[i].reproduce();
+ bloops.push(child);
+ }
+ }
+ }
+ for (let treat of food) {
+ treat.show();
+ }
+ for (let bloop of bloops) {
+ bloop.show();
+ }
+}
+
The final example also includes a few additional features that you’ll find in the accompanying code, such as an array of food that shrinks as it gets eaten (re-spawning when it is depleted). Additionally, the bloops shrink as their health deteriorates.
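+
As a rough sketch of how that shrinking behavior might work, the eat() method could operate on the whole food array, with each bite reducing the circle’s radius. The exact numbers and the respawn logic here are hypothetical; see the accompanying code for the real details.
+
eat(food) {
+ for (let treat of food) {
+ let d = p5.Vector.dist(this.position, treat.position);
+ if (d < this.r + treat.r) {
+ // Eating boosts health and shrinks the food
+ this.health += 0.5;
+ treat.r -= 0.05;
+ // Hypothetical respawn: once depleted, move and reset the food
+ if (treat.r < 5) {
+ treat.position = createVector(random(width), random(height));
+ treat.r = 50;
+ }
+ }
+ }
+ }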
+
+
The Ecosystem Project
+
Step 11 Exercise:
+
Try incorporating the concept of a “brain” into the creatures in your world!
+
+
What are each creature’s inputs and outputs?
+
How do the creatures perceive? Do they “see” everything or have limits based on sensors?
+
How can you find balance in your system?
+
+
+
The end
+
If you’re still reading, thank you! You’ve reached the end of the book. But for as much material as this book contains, I’ve barely scratched the surface of the physical world we inhabit and of techniques for simulating it. It’s my intention for this book to live as an ongoing project, and I hope to continue adding new tutorials and examples to the book’s website as well as expand and update accompanying video tutorials on thecodingtrain.com. Your feedback is truly appreciated, so please get in touch via email at (daniel@shiffman.net) or by contributing to the GitHub repository at github.com/nature-of-code, in keeping with the open-source spirit of the project. Share your work. Keep in touch. Let’s be two with nature.
+
+
\ No newline at end of file
diff --git a/content/chapters.json b/content/chapters.json
index 59ebfecf..f92cbbbe 100644
--- a/content/chapters.json
+++ b/content/chapters.json
@@ -1 +1 @@
-[{"title":"0. Randomness","src":"./00_randomness.html","slug":"random"},{"title":"1. Vectors","src":"./01_vectors.html","slug":"vectors"},{"title":"2. Forces","src":"./02_forces.html","slug":"force"},{"title":"3. Oscillation","src":"./03_oscillation.html","slug":"oscillation"},{"title":"4. Particle Systems","src":"./04_particles.html","slug":"particles"},{"title":"5. Autonomous Agents","src":"./05_steering.html","slug":"autonomous-agents"},{"title":"6. Physics Libraries","src":"./06_libraries.html","slug":"physics-libraries"},{"title":"7. Cellular Automata","src":"./07_ca.html","slug":"cellular-automata"},{"title":"8. Fractals","src":"./08_fractals.html","slug":"fractals"},{"title":"9. Evolutionary Computing","src":"./09_ga.html","slug":"genetic-algorithms"},{"title":"10. Neural Networks","src":"./10_nn.html","slug":"neural-networks"}]
\ No newline at end of file
+[{"title":"0. Randomness","src":"./00_randomness.html","slug":"random"},{"title":"1. Vectors","src":"./01_vectors.html","slug":"vectors"},{"title":"2. Forces","src":"./02_forces.html","slug":"force"},{"title":"3. Oscillation","src":"./03_oscillation.html","slug":"oscillation"},{"title":"4. Particle Systems","src":"./04_particles.html","slug":"particles"},{"title":"5. Autonomous Agents","src":"./05_steering.html","slug":"autonomous-agents"},{"title":"6. Physics Libraries","src":"./06_libraries.html","slug":"physics-libraries"},{"title":"7. Cellular Automata","src":"./07_ca.html","slug":"cellular-automata"},{"title":"8. Fractals","src":"./08_fractals.html","slug":"fractals"},{"title":"9. Evolutionary Computing","src":"./09_ga.html","slug":"genetic-algorithms"},{"title":"10. Neural Networks","src":"./10_nn.html","slug":"neural-networks"},{"title":"11. NeuroEvolution","src":"./11_nn_ga.html","slug":"neuro-evolution"}]
\ No newline at end of file
diff --git a/content/examples/03_oscillation/example_3_9_the_wave_a/screenshot.png b/content/examples/03_oscillation/example_3_9_the_wave_a/screenshot.png
new file mode 100644
index 00000000..06a8b890
Binary files /dev/null and b/content/examples/03_oscillation/example_3_9_the_wave_a/screenshot.png differ
diff --git a/content/examples/03_oscillation/example_3_9_the_wave_b/screenshot.png b/content/examples/03_oscillation/example_3_9_the_wave_b/screenshot.png
new file mode 100644
index 00000000..ff902518
Binary files /dev/null and b/content/examples/03_oscillation/example_3_9_the_wave_b/screenshot.png differ
diff --git a/content/examples/03_oscillation/example_3_9_the_wave_c/screenshot.png b/content/examples/03_oscillation/example_3_9_the_wave_c/screenshot.png
new file mode 100644
index 00000000..af661201
Binary files /dev/null and b/content/examples/03_oscillation/example_3_9_the_wave_c/screenshot.png differ
diff --git a/content/examples/09_ga/9_5_evolving_bloops/dna.js b/content/examples/09_ga/9_5_evolving_bloops/dna.js
index 34957b13..7c3dc920 100644
--- a/content/examples/09_ga/9_5_evolving_bloops/dna.js
+++ b/content/examples/09_ga/9_5_evolving_bloops/dna.js
@@ -6,22 +6,21 @@
// Constructor (makes a random DNA)
class DNA {
- constructor(newgenes) {
- if (newgenes) {
- this.genes = newgenes;
- } else {
- // The genetic sequence
- // DNA is random floating point values between 0 and 1 (!!)
- this.genes = new Array(1);
- for (let i = 0; i < this.genes.length; i++) {
- this.genes[i] = random(0, 1);
- }
+ constructor() {
+ // The genetic sequence
+ // DNA is random floating point values between 0 and 1 (!!)
+ this.genes = [];
+ for (let i = 0; i < 1; i++) {
+ this.genes[i] = random(0, 1);
}
}
copy() {
- let newgenes = this.genes.slice();
- return new DNA(newgenes);
+ // It gets made with random DNA
+ let newDNA = new DNA();
+ // But then it is overwritten
+ newDNA.genes = this.genes.slice();
+ return newDNA;
}
// Based on a mutation probability, picks a new random character in array spots
diff --git a/content/examples/11_nn_ga/10_3_flappy_bird/bird.js b/content/examples/11_nn_ga/10_3_flappy_bird/bird.js
new file mode 100644
index 00000000..608e9c0c
--- /dev/null
+++ b/content/examples/11_nn_ga/10_3_flappy_bird/bird.js
@@ -0,0 +1,38 @@
+class Bird {
+ constructor() {
+ // The bird's position (x will be constant)
+ this.x = 50;
+ this.y = 120;
+
+ // Velocity and forces are scalar since the bird only moves along the y-axis
+ this.velocity = 0;
+ this.gravity = 0.5;
+ this.flapForce = -10;
+ }
+
+ // The bird flaps its wings
+ flap() {
+ this.velocity += this.flapForce;
+ }
+
+ update() {
+ // Add gravity
+ this.velocity += this.gravity;
+ this.y += this.velocity;
+ // Dampen velocity
+ this.velocity *= 0.95;
+
+ // Handle the "floor"
+ if (this.y > height) {
+ this.y = height;
+ this.velocity = 0;
+ }
+ }
+
+ show() {
+ strokeWeight(2);
+ stroke(0);
+ fill(127);
+ circle(this.x, this.y, 16);
+ }
+}
diff --git a/content/examples/11_nn_ga/10_3_flappy_bird/index.html b/content/examples/11_nn_ga/10_3_flappy_bird/index.html
new file mode 100644
index 00000000..88b46a31
--- /dev/null
+++ b/content/examples/11_nn_ga/10_3_flappy_bird/index.html
@@ -0,0 +1,17 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/examples/11_nn_ga/10_3_flappy_bird/pipe.js b/content/examples/11_nn_ga/10_3_flappy_bird/pipe.js
new file mode 100644
index 00000000..9558ba9f
--- /dev/null
+++ b/content/examples/11_nn_ga/10_3_flappy_bird/pipe.js
@@ -0,0 +1,34 @@
+class Pipe {
+ constructor() {
+ this.spacing = 100;
+ this.top = random(height - this.spacing);
+ this.bottom = this.top + this.spacing;
+ this.x = width;
+ this.w = 20;
+ this.speed = 2;
+ }
+
+ collides(bird) {
+ // Is the bird within the vertical range of the top or bottom pipe?
+ let verticalCollision = bird.y < this.top || bird.y > this.bottom;
+ // Is the bird within the horizontal range of the pipes?
+ let horizontalCollision = bird.x > this.x && bird.x < this.x + this.w;
+ // If it's both a vertical and horizontal hit, it's a hit!
+ return verticalCollision && horizontalCollision;
+ }
+
+ show() {
+ fill(0);
+ noStroke();
+ rect(this.x, 0, this.w, this.top);
+ rect(this.x, this.bottom, this.w, height - this.bottom);
+ }
+
+ update() {
+ this.x -= this.speed;
+ }
+
+ offscreen() {
+ return this.x < -this.w;
+ }
+}
diff --git a/content/examples/11_nn_ga/10_3_flappy_bird/screenshot.png b/content/examples/11_nn_ga/10_3_flappy_bird/screenshot.png
new file mode 100644
index 00000000..890ca93d
Binary files /dev/null and b/content/examples/11_nn_ga/10_3_flappy_bird/screenshot.png differ
diff --git a/content/examples/11_nn_ga/10_3_flappy_bird/sketch.js b/content/examples/11_nn_ga/10_3_flappy_bird/sketch.js
new file mode 100644
index 00000000..d8b29c3c
--- /dev/null
+++ b/content/examples/11_nn_ga/10_3_flappy_bird/sketch.js
@@ -0,0 +1,36 @@
+let bird;
+let pipes = [];
+
+function setup() {
+ createCanvas(640, 240);
+ bird = new Bird();
+ pipes.push(new Pipe());
+}
+
+function draw() {
+ background(255);
+
+ for (let i = pipes.length - 1; i >= 0; i--) {
+ pipes[i].show();
+ pipes[i].update();
+
+ if (pipes[i].collides(bird)) {
+ text("OOPS!", pipes[i].x, pipes[i].top + 20);
+ }
+
+ if (pipes[i].offscreen()) {
+ pipes.splice(i, 1);
+ }
+ }
+
+ bird.update();
+ bird.show();
+
+ if (frameCount % 100 == 0) {
+ pipes.push(new Pipe());
+ }
+}
+
+function mousePressed() {
+ bird.flap();
+}
diff --git a/content/examples/11_nn_ga/10_3_flappy_bird/style.css b/content/examples/11_nn_ga/10_3_flappy_bird/style.css
new file mode 100644
index 00000000..9386f1c2
--- /dev/null
+++ b/content/examples/11_nn_ga/10_3_flappy_bird/style.css
@@ -0,0 +1,7 @@
+html, body {
+ margin: 0;
+ padding: 0;
+}
+canvas {
+ display: block;
+}
diff --git a/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/bird.js b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/bird.js
new file mode 100644
index 00000000..7eddf362
--- /dev/null
+++ b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/bird.js
@@ -0,0 +1,80 @@
+class Bird {
+ constructor(brain) {
+ // A bird's brain receives 4 inputs and classifies them into one of two labels
+ if (brain) {
+ this.brain = brain;
+ } else {
+ this.brain = ml5.neuralNetwork({
+ inputs: 4,
+ outputs: ["flap", "no flap"],
+ task: "classification",
+
+ // change to "neuroEvolution" for next ml5.js release
+ noTraining: true
+ // neuroEvolution: true,
+ });
+ }
+
+ // The bird's position (x will be constant)
+ this.x = 50;
+ this.y = 120;
+
+ // Velocity and forces are scalar since the bird only moves along the y-axis
+ this.velocity = 0;
+ this.gravity = 0.5;
+ this.flapForce = -10;
+
+ // Adding a fitness
+ this.fitness = 0;
+ this.alive = true;
+ }
+
+ think(pipes) {
+ let nextPipe = null;
+ for (let pipe of pipes) {
+ if (pipe.x + pipe.w > this.x) {
+ nextPipe = pipe;
+ break;
+ }
+ }
+
+ let inputs = [
+ this.y / height,
+ this.velocity / height,
+ nextPipe.top / height,
+ (nextPipe.x - this.x) / width,
+ ];
+
+ let results = this.brain.classifySync(inputs);
+ if (results[0].label == "flap") {
+ this.flap();
+ }
+ }
+
+ // The bird flaps its wings
+ flap() {
+ this.velocity += this.flapForce;
+ }
+
+ update() {
+ // Add gravity
+ this.velocity += this.gravity;
+ this.y += this.velocity;
+ // Dampen velocity
+ this.velocity *= 0.95;
+
+ // Handle the "floor" and the "ceiling"
+ if (this.y > height || this.y < 0) {
+ this.alive = false;
+ }
+
+ this.fitness++;
+ }
+
+ show() {
+ strokeWeight(2);
+ stroke(0);
+ fill(127, 200);
+ circle(this.x, this.y, 16);
+ }
+}
diff --git a/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/index.html b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/index.html
new file mode 100644
index 00000000..5a21001e
--- /dev/null
+++ b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/index.html
@@ -0,0 +1,16 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/pipe.js b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/pipe.js
new file mode 100644
index 00000000..9558ba9f
--- /dev/null
+++ b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/pipe.js
@@ -0,0 +1,34 @@
+class Pipe {
+ constructor() {
+ this.spacing = 100;
+ this.top = random(height - this.spacing);
+ this.bottom = this.top + this.spacing;
+ this.x = width;
+ this.w = 20;
+ this.speed = 2;
+ }
+
+ collides(bird) {
+ // Is the bird within the vertical range of the top or bottom pipe?
+ let verticalCollision = bird.y < this.top || bird.y > this.bottom;
+ // Is the bird within the horizontal range of the pipes?
+ let horizontalCollision = bird.x > this.x && bird.x < this.x + this.w;
+ // If it's both a vertical and horizontal hit, it's a hit!
+ return verticalCollision && horizontalCollision;
+ }
+
+ show() {
+ fill(0);
+ noStroke();
+ rect(this.x, 0, this.w, this.top);
+ rect(this.x, this.bottom, this.w, height - this.bottom);
+ }
+
+ update() {
+ this.x -= this.speed;
+ }
+
+ offscreen() {
+ return this.x < -this.w;
+ }
+}
diff --git a/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/screenshot.png b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/screenshot.png
new file mode 100644
index 00000000..fe2013fd
Binary files /dev/null and b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/screenshot.png differ
diff --git a/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/sketch.js b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/sketch.js
new file mode 100644
index 00000000..b776e39a
--- /dev/null
+++ b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/sketch.js
@@ -0,0 +1,96 @@
+let birds = [];
+let pipes = [];
+
+function setup() {
+ createCanvas(640, 240);
+ for (let i = 0; i < 200; i++) {
+ birds[i] = new Bird();
+ }
+ pipes.push(new Pipe());
+
+ ml5.tf.setBackend("cpu");
+}
+
+function draw() {
+ background(255);
+
+ for (let i = pipes.length - 1; i >= 0; i--) {
+ pipes[i].update();
+ pipes[i].show();
+ if (pipes[i].offscreen()) {
+ pipes.splice(i, 1);
+ }
+ }
+
+ for (let bird of birds) {
+ if (bird.alive) {
+ for (let pipe of pipes) {
+ if (pipe.collides(bird)) {
+ bird.alive = false;
+ }
+ }
+ bird.think(pipes);
+ bird.update();
+ bird.show();
+ }
+ }
+
+ if (frameCount % 100 == 0) {
+ pipes.push(new Pipe());
+ }
+
+ if (allBirdsDead()) {
+ normalizeFitness();
+ reproduction();
+ }
+}
+
+function allBirdsDead() {
+ for (let bird of birds) {
+ if (bird.alive) {
+ return false;
+ }
+ }
+ return true;
+}
+
+function reproduction() {
+ let nextBirds = [];
+ for (let i = 0; i < birds.length; i++) {
+ let parentA = weightedSelection();
+ let parentB = weightedSelection();
+ let child = parentA.crossover(parentB);
+ child.mutate(0.01);
+ nextBirds[i] = new Bird(child);
+ }
+ birds = nextBirds;
+}
+
+// Normalize all fitness values
+function normalizeFitness() {
+ let sum = 0;
+ for (let bird of birds) {
+ sum += bird.fitness;
+ }
+ for (let bird of birds) {
+ bird.fitness = bird.fitness / sum;
+ }
+}
+
+function weightedSelection() {
+ // Start with the first element
+ let index = 0;
+ // Pick a starting point
+ let start = random(1);
+ // At the finish line?
+ while (start > 0) {
+ // Move a distance according to fitness
+ start = start - birds[index].fitness;
+ // Next element
+ index++;
+ }
+ // Undo moving to the next element since the finish has been reached
+ index--;
+ //{!1} Instead of returning the entire Bird object, just the brain is returned
+ return birds[index].brain;
+}
diff --git a/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/style.css b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/style.css
new file mode 100644
index 00000000..9386f1c2
--- /dev/null
+++ b/content/examples/11_nn_ga/10_4_flappy_bird_neuro_evolution/style.css
@@ -0,0 +1,7 @@
+html, body {
+ margin: 0;
+ padding: 0;
+}
+canvas {
+ display: block;
+}
diff --git a/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/index.html b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/index.html
new file mode 100644
index 00000000..91fac77b
--- /dev/null
+++ b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/index.html
@@ -0,0 +1,16 @@
+
+
+
+
+
+
+ Nature of Code Example 10.5: Smart Rockets Neuroevolution
+
+
+
+
+
+
+
+
+
diff --git a/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/obstacle.js b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/obstacle.js
new file mode 100644
index 00000000..f9249f8d
--- /dev/null
+++ b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/obstacle.js
@@ -0,0 +1,35 @@
+// The Nature of Code
+// Daniel Shiffman
+// http://natureofcode.com
+
+// Pathfinding w/ Genetic Algorithms
+
+// A class for an obstacle, just a simple rectangle that is drawn
+// and can check if a Rocket touches it
+
+// Also using this class for target position
+
+class Obstacle {
+ constructor(x, y, w, h) {
+ this.position = createVector(x, y);
+ this.w = w;
+ this.h = h;
+ }
+
+ show() {
+ stroke(0);
+ fill(175);
+ strokeWeight(2);
+ rectMode(CORNER);
+ rect(this.position.x, this.position.y, this.w, this.h);
+ }
+
+ contains(spot) {
+ return (
+ spot.x > this.position.x &&
+ spot.x < this.position.x + this.w &&
+ spot.y > this.position.y &&
+ spot.y < this.position.y + this.h
+ );
+ }
+}
diff --git a/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/population.js b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/population.js
new file mode 100644
index 00000000..c32d9cf5
--- /dev/null
+++ b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/population.js
@@ -0,0 +1,91 @@
+// The Nature of Code
+// Daniel Shiffman
+// http://natureofcode.com
+
+// Pathfinding w/ Genetic Algorithms
+
+// A class to describe a population of "creatures"
+
+// Initialize the population
+class Population {
+ constructor(mutation, length) {
+ this.mutationRate = mutation; // Mutation rate
+ this.population = new Array(length); // Array to hold the current population
+ this.generations = 0; // Number of generations
+ // Make a new set of creatures
+ for (let i = 0; i < this.population.length; i++) {
+ this.population[i] = new Rocket(320, 220);
+ }
+ }
+
+ live(obstacles) {
+ // For every creature
+ for (let i = 0; i < this.population.length; i++) {
+ // If it finishes, mark it down as done!
+ this.population[i].checkTarget();
+ this.population[i].run(obstacles);
+ }
+ }
+
+ // Did anything finish?
+ targetReached() {
+ for (let i = 0; i < this.population.length; i++) {
+ if (this.population[i].hitTarget) return true;
+ }
+ return false;
+ }
+
+ // Calculate fitness for each creature
+ calculateFitness() {
+ for (let i = 0; i < this.population.length; i++) {
+ this.population[i].calculateFitness();
+ }
+ }
+
+ selection() {
+ // Sum all of the fitness values
+ let totalFitness = 0;
+ for (let i = 0; i < this.population.length; i++) {
+ totalFitness += this.population[i].fitness;
+ }
+ // Divide by the total to normalize the fitness values
+ for (let i = 0; i < this.population.length; i++) {
+ this.population[i].fitness /= totalFitness;
+ }
+ }
+
+ // Making the next generation
+ reproduction() {
+ let nextPopulation = [];
+ // Create the next population
+ for (let i = 0; i < this.population.length; i++) {
+ // Spin the wheel of fortune to pick two parents
+ let parentA = this.weightedSelection();
+ let parentB = this.weightedSelection();
+ let child = parentA.crossover(parentB);
+ // Mutate their genes
+ child.mutate(this.mutationRate);
+ nextPopulation[i] = new Rocket(320, 220, child);
+ }
+ // Replace the old population
+ this.population = nextPopulation;
+ this.generations++;
+ }
+
+ weightedSelection() {
+ // Start with the first element
+ let index = 0;
+ // Pick a starting point
+ let start = random(1);
+ // At the finish line?
+ while (start > 0) {
+ // Move a distance according to fitness
+ start = start - this.population[index].fitness;
+ // Next element
+ index++;
+ }
+ // Undo moving to the next element since the finish has been reached
+ index--;
+ return this.population[index].brain;
+ }
+}
diff --git a/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/rocket.js b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/rocket.js
new file mode 100644
index 00000000..ae9d4983
--- /dev/null
+++ b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/rocket.js
@@ -0,0 +1,146 @@
+// The Nature of Code
+// Daniel Shiffman
+// http://natureofcode.com
+
+// Rocket class -- this is just like our Boid / Particle class
+// the only difference is that it has DNA & fitness
+
+class Rocket {
+ constructor(x, y, brain) {
+ // All of our physics stuff
+ this.acceleration = createVector();
+ this.velocity = createVector();
+ this.position = createVector(x, y);
+ this.r = 4;
+ this.brain = brain;
+ this.finishCounter = 0; // We're going to count how long it takes to reach target
+ this.recordDistance = Infinity; // A distance so high it will be beaten instantly
+
+ this.fitness = 0;
+ this.geneCounter = 0;
+ this.hitObstacle = false; // Am I stuck on an obstacle?
+ this.hitTarget = false; // Did I reach the target
+ this.maxspeed = 4;
+ this.maxforce = 1;
+
+ if (brain) {
+ this.brain = brain;
+ } else {
+ this.brain = ml5.neuralNetwork({
+ inputs: 2,
+ outputs: 2,
+ task: "regression",
+ noTraining: true,
+ });
+ }
+ }
+
+ // FITNESS FUNCTION
+ // distance = the closest the rocket got to the target (recordDistance)
+ // finish = how many cycles it took to reach the target (finishCounter)
+ // f(distance, finish) = (1 / (finish * distance))^4
+ // a faster finish and/or a shorter distance to the target is rewarded (exponentially)
+ calculateFitness() {
+ // Reward finishing faster and getting close
+ this.fitness = 1 / (this.finishCounter * this.recordDistance);
+
+ // Let's raise it to the 4th power!
+ this.fitness = pow(this.fitness, 4);
+
+ //{!3} lose 90% of fitness hitting an obstacle
+ if (this.hitObstacle) {
+ this.fitness *= 0.1;
+ }
+ //{!3} Double the fitness for finishing!
+ if (this.hitTarget) {
+ this.fitness *= 2;
+ }
+ }
+
+ // Run in relation to all the obstacles
+ // If I'm stuck, don't bother updating or checking for intersection
+ run(obstacles) {
+ // Stop the rocket if it's hit an obstacle or the target
+ if (!this.hitObstacle && !this.hitTarget) {
+ let inputs = [this.position.x / width, this.position.y / height];
+ // Predicting the force to apply
+ const outputs = this.brain.predictSync(inputs);
+ let angle = outputs[0].value * TWO_PI;
+ let magnitude = outputs[1].value * this.maxforce;
+ let force = p5.Vector.fromAngle(angle).setMag(magnitude);
+ this.applyForce(force);
+ this.update();
+ // Check if rocket hits an obstacle
+ this.checkObstacles(obstacles);
+ }
+ this.show();
+ }
+
+ checkTarget() {
+ let distance = p5.Vector.dist(this.position, target.position);
+ //{!3} Check if the distance is closer than the “record” distance. If it is, set a new record.
+ if (distance < this.recordDistance) {
+ this.recordDistance = distance;
+ }
+ // If the object reaches the target, set a boolean flag to true.
+ if (target.contains(this.position) && !this.hitTarget) {
+ this.hitTarget = true;
+ // Otherwise, increase the finish counter
+ } else if (!this.hitTarget) {
+ this.finishCounter++;
+ }
+ }
+
+ // This new function lives in the Rocket class and checks if a rocket has
+ // hit an obstacle.
+ checkObstacles(obstacles) {
+ for (let obstacle of obstacles) {
+ if (obstacle.contains(this.position)) {
+ this.hitObstacle = true;
+ }
+ }
+ }
+
+ applyForce(force) {
+ this.acceleration.add(force);
+ }
+
+ update() {
+ this.velocity.limit(this.maxspeed);
+ this.velocity.add(this.acceleration);
+ this.position.add(this.velocity);
+ this.acceleration.mult(0);
+ }
+
+ show() {
+ let theta = this.velocity.heading() + PI / 2;
+ fill(200, 100);
+ stroke(0);
+ strokeWeight(1);
+ push();
+ translate(this.position.x, this.position.y);
+ rotate(theta);
+
+ // Thrusters
+ rectMode(CENTER);
+ fill(0);
+ rect(-this.r / 2, this.r * 2, this.r / 2, this.r);
+ rect(this.r / 2, this.r * 2, this.r / 2, this.r);
+
+ // Rocket body
+ fill(200);
+ beginShape(TRIANGLES);
+ vertex(0, -this.r * 2);
+ vertex(-this.r, this.r * 2);
+ vertex(this.r, this.r * 2);
+ endShape();
+
+ fill(0);
+ noStroke();
+ rotate(-theta);
+ //text(nf(this.fitness,2,1), 5, 5);
+ // text(nf(this.fitness, 1, 5), 15, 5);
+
+ pop();
+ }
+}
diff --git a/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/screenshot.png b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/screenshot.png
new file mode 100644
index 00000000..f4cb49a8
Binary files /dev/null and b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/screenshot.png differ
diff --git a/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/sketch.js b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/sketch.js
new file mode 100644
index 00000000..d2fd40df
--- /dev/null
+++ b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/sketch.js
@@ -0,0 +1,87 @@
+// The Nature of Code
+// Daniel Shiffman
+// http://natureofcode.com
+
+// Smart Rockets w/ Genetic Algorithms
+
+// Each Rocket's "brain" is a neural network that predicts a force
+// to apply for each frame of animation
+// Imagine a booster on the end of the rocket that can point in any direction
+// and fire at any strength every frame
+
+// The Rocket's fitness is a function of how close it gets to the target as well as how fast it gets there
+
+// This example is inspired by Jer Thorp's Smart Rockets
+// http://www.blprnt.com/smartrockets/
+
+let lifeSpan = 300; // How long should each generation live
+
+let population; // Population
+
+let lifeCounter = 0; // Timer for cycle of generation
+let recordTime; // Fastest time to target
+
+let target; // Target position
+
+//let diam = 24; // Size of target
+
+let obstacles = []; //an array list to keep track of all the obstacles!
+
+function setup() {
+ createCanvas(640, 240);
+ ml5.tf.setBackend("cpu");
+ // Initialize variables
+ recordTime = lifeSpan;
+
+ target = new Obstacle(width / 2 - 12, 24, 24, 24);
+
+ // Create a population with a mutation rate, and population max
+ population = new Population(0.01, 150);
+
+ // Create the obstacle course
+ obstacles = [];
+ obstacles.push(new Obstacle(width / 2 - 75, height / 2, 150, 10));
+}
+
+function draw() {
+ background(255);
+
+ // Draw the start and target positions
+ target.show();
+
+ // If the generation hasn't ended yet
+ if (lifeCounter < lifeSpan) {
+ population.live(obstacles);
+ if (population.targetReached() && lifeCounter < recordTime) {
+ recordTime = lifeCounter;
+ } else {
+ lifeCounter++;
+ }
+ // Otherwise a new generation
+ } else {
+ lifeCounter = 0;
+ population.calculateFitness();
+ population.selection();
+ population.reproduction();
+ }
+
+ // Draw the obstacles
+ for (let i = 0; i < obstacles.length; i++) {
+ obstacles[i].show();
+ }
+
+ // Display some info
+ fill(0);
+ noStroke();
+ text("Generation #: " + population.generations, 10, 18);
+ text("Cycles left: " + (lifeSpan - lifeCounter), 10, 36);
+ text("Record cycles: " + recordTime, 10, 54);
+}
+
+// Move the target if the mouse is pressed
+// System will adapt to new target
+function mousePressed() {
+ target.position.x = mouseX;
+ target.position.y = mouseY;
+ recordTime = lifeSpan;
+}
diff --git a/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/style.css b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/style.css
new file mode 100644
index 00000000..e78b7102
--- /dev/null
+++ b/content/examples/11_nn_ga/10_5_smart_rockets_neuro_evolution/style.css
@@ -0,0 +1,8 @@
+html,
+body {
+ margin: 0;
+ padding: 0;
+}
+canvas {
+ display: block;
+}
diff --git a/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/creature.js b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/creature.js
new file mode 100644
index 00000000..992b4b04
--- /dev/null
+++ b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/creature.js
@@ -0,0 +1,80 @@
+class Creature {
+ constructor(x, y, brain) {
+ this.position = createVector(x, y);
+ this.velocity = createVector(0, 0);
+ this.acceleration = createVector(0, 0);
+ this.r = 4;
+ this.maxspeed = 4;
+ this.fitness = 0;
+
+ if (brain) {
+ this.brain = brain;
+ } else {
+ this.brain = ml5.neuralNetwork({
+ inputs: 5,
+ outputs: 2,
+ task: "regression",
+ // neuroEvolution: true,
+ noTraining: true
+ });
+ }
+ }
+
+ seek(target) {
+ let v = p5.Vector.sub(target.position, this.position);
+ let distance = v.mag();
+ v.normalize();
+ let inputs = [
+ v.x,
+ v.y,
+ distance / width,
+ this.velocity.x / this.maxspeed,
+ this.velocity.y / this.maxspeed,
+ ];
+
+ // Predicting the force to apply
+ let outputs = this.brain.predictSync(inputs);
+ let angle = outputs[0].value * TWO_PI;
+ let magnitude = outputs[1].value;
+ let force = p5.Vector.fromAngle(angle).setMag(magnitude);
+ this.applyForce(force);
+ }
+
+ // Method to update location
+ update(target) {
+ // Update velocity
+ this.velocity.add(this.acceleration);
+ // Limit speed
+ this.velocity.limit(this.maxspeed);
+ this.position.add(this.velocity);
+ // Reset acceleration to 0 each cycle
+ this.acceleration.mult(0);
+
+ let d = p5.Vector.dist(this.position, target.position);
+ if (d < this.r + target.r) {
+ this.fitness++;
+ }
+ }
+
+ applyForce(force) {
+ // We could add mass here if we want A = F / M
+ this.acceleration.add(force);
+ }
+
+ show() {
+ //{!1} Vehicle is a triangle pointing in the direction of velocity
+ let angle = this.velocity.heading();
+ fill(127);
+ stroke(0);
+ strokeWeight(1);
+ push();
+ translate(this.position.x, this.position.y);
+ rotate(angle);
+ beginShape();
+ vertex(this.r * 2, 0);
+ vertex(-this.r * 2, -this.r);
+ vertex(-this.r * 2, this.r);
+ endShape(CLOSE);
+ pop();
+ }
+}
diff --git a/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/glow.js b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/glow.js
new file mode 100644
index 00000000..924cdb72
--- /dev/null
+++ b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/glow.js
@@ -0,0 +1,22 @@
+class Glow {
+ constructor() {
+ this.xoff = 0;
+ this.yoff = 1000;
+ this.position = createVector();
+ this.r = 24;
+ }
+
+ update() {
+ this.position.x = noise(this.xoff) * width;
+ this.position.y = noise(this.yoff) * height;
+ this.xoff += 0.01;
+ this.yoff += 0.01;
+ }
+
+ show() {
+ stroke(0);
+ strokeWeight(2);
+ fill(200);
+ circle(this.position.x, this.position.y, this.r * 2);
+ }
+}
diff --git a/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/index.html b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/index.html
new file mode 100644
index 00000000..fd2ff644
--- /dev/null
+++ b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/index.html
@@ -0,0 +1,15 @@
+
+
+
+
+
+
+ Nature of Code Example 10.6: Neuroevolution Steering
+
+
+
+
+
+
+
+
diff --git a/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/population.js b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/population.js
new file mode 100644
index 00000000..c32d9cf5
--- /dev/null
+++ b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/population.js
@@ -0,0 +1,91 @@
+// The Nature of Code
+// Daniel Shiffman
+// http://natureofcode.com
+
+// Pathfinding w/ Genetic Algorithms
+
+// A class to describe a population of "creatures"
+
+// Initialize the population
+class Population {
+ constructor(mutation, length) {
+ this.mutationRate = mutation; // Mutation rate
+ this.population = new Array(length); // Array to hold the current population
+ this.generations = 0; // Number of generations
+ // Make a new set of creatures
+ for (let i = 0; i < this.population.length; i++) {
+ this.population[i] = new Rocket(320, 220);
+ }
+ }
+
+ live(obstacles) {
+ // For every creature
+ for (let i = 0; i < this.population.length; i++) {
+ // If it finishes, mark it down as done!
+ this.population[i].checkTarget();
+ this.population[i].run(obstacles);
+ }
+ }
+
+ // Did anything finish?
+ targetReached() {
+ for (let i = 0; i < this.population.length; i++) {
+ if (this.population[i].hitTarget) return true;
+ }
+ return false;
+ }
+
+ // Calculate fitness for each creature
+ calculateFitness() {
+ for (let i = 0; i < this.population.length; i++) {
+ this.population[i].calculateFitness();
+ }
+ }
+
+ selection() {
+ // Sum all of the fitness values
+ let totalFitness = 0;
+ for (let i = 0; i < this.population.length; i++) {
+ totalFitness += this.population[i].fitness;
+ }
+ // Divide by the total to normalize the fitness values
+ for (let i = 0; i < this.population.length; i++) {
+ this.population[i].fitness /= totalFitness;
+ }
+ }
+
+ // Making the next generation
+ reproduction() {
+ let nextPopulation = [];
+ // Create the next population
+ for (let i = 0; i < this.population.length; i++) {
+ // Spin the wheel of fortune to pick two parents
+ let parentA = this.weightedSelection();
+ let parentB = this.weightedSelection();
+ let child = parentA.crossover(parentB);
+ // Mutate their genes
+ child.mutate(this.mutationRate);
+ nextPopulation[i] = new Rocket(320, 220, child);
+ }
+ // Replace the old population
+ this.population = nextPopulation;
+ this.generations++;
+ }
+
+ weightedSelection() {
+ // Start with the first element
+ let index = 0;
+ // Pick a starting point
+ let start = random(1);
+ // At the finish line?
+ while (start > 0) {
+ // Move a distance according to fitness
+ start = start - this.population[index].fitness;
+ // Next element
+ index++;
+ }
+ // Undo moving to the next element since the finish has been reached
+ index--;
+ return this.population[index].brain;
+ }
+}
diff --git a/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/screenshot.png b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/screenshot.png
new file mode 100644
index 00000000..a9bcf6f7
Binary files /dev/null and b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/screenshot.png differ
diff --git a/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/sketch.js b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/sketch.js
new file mode 100644
index 00000000..1da9e2d2
--- /dev/null
+++ b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/sketch.js
@@ -0,0 +1,80 @@
+let creatures = [];
+let timeSlider;
+let lifeSpan = 250; // How long should each generation live
+let lifeCounter = 0; // Timer for cycle of generation
+let glow;
+let generations = 0;
+
+function setup() {
+ createCanvas(640, 240);
+ ml5.tf.setBackend("cpu");
+ for (let i = 0; i < 50; i++) {
+ creatures[i] = new Creature(random(width), random(height));
+ }
+ glow = new Glow();
+ timeSlider = createSlider(1, 20, 1);
+ timeSlider.position(10, 220);
+}
+
+function draw() {
+ background(255);
+
+ glow.update();
+ glow.show();
+
+ for (let creature of creatures) {
+ creature.show();
+ }
+
+ for (let i = 0; i < timeSlider.value(); i++) {
+ for (let creature of creatures) {
+ creature.seek(glow);
+ creature.update(glow);
+ }
+ lifeCounter++;
+ }
+
+ if (lifeCounter > lifeSpan) {
+ normalizeFitness();
+ reproduction();
+ lifeCounter = 0;
+ generations++;
+ }
+ fill(0);
+ noStroke();
+ text("Generation #: " + generations, 10, 18);
+ text("Cycles left: " + (lifeSpan - lifeCounter), 10, 36);
+}
+
+function normalizeFitness() {
+ let sum = 0;
+ for (let creature of creatures) {
+ sum += creature.fitness;
+ }
+ for (let creature of creatures) {
+ creature.fitness = creature.fitness / sum;
+ }
+}
+
+function reproduction() {
+ let nextCreatures = [];
+ for (let i = 0; i < creatures.length; i++) {
+ let parentA = weightedSelection();
+ let parentB = weightedSelection();
+ let child = parentA.crossover(parentB);
+ child.mutate(0.1);
+ nextCreatures[i] = new Creature(random(width), random(height), child);
+ }
+ creatures = nextCreatures;
+}
+
+function weightedSelection() {
+ let index = 0;
+ let start = random(1);
+ while (start > 0) {
+ start = start - creatures[index].fitness;
+ index++;
+ }
+ index--;
+ return creatures[index].brain;
+}
diff --git a/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/style.css b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/style.css
new file mode 100644
index 00000000..e78b7102
--- /dev/null
+++ b/content/examples/11_nn_ga/10_6_neuro_evolution_steering_seek/style.css
@@ -0,0 +1,8 @@
+html,
+body {
+ margin: 0;
+ padding: 0;
+}
+canvas {
+ display: block;
+}
diff --git a/content/examples/11_nn_ga/10_7_creature_sensors/creature.js b/content/examples/11_nn_ga/10_7_creature_sensors/creature.js
new file mode 100644
index 00000000..64679001
--- /dev/null
+++ b/content/examples/11_nn_ga/10_7_creature_sensors/creature.js
@@ -0,0 +1,39 @@
+class Creature {
+ constructor(x, y) {
+ this.position = createVector(x, y);
+ this.r = 16;
+ this.sensors = [];
+
+ let totalSensors = 15;
+ for (let i = 0; i < totalSensors; i++) {
+ let a = map(i, 0, totalSensors, 0, TWO_PI);
+ let v = p5.Vector.fromAngle(a);
+ v.mult(this.r * 2);
+ this.sensors[i] = new Sensor(v);
+ }
+ }
+
+ sense(food) {
+ for (let i = 0; i < this.sensors.length; i++) {
+ this.sensors[i].sense(this.position, food);
+ }
+ }
+
+ show() {
+ push();
+ translate(this.position.x, this.position.y);
+ for (let sensor of this.sensors) {
+ stroke(0);
+ line(0, 0, sensor.v.x, sensor.v.y);
+ if (sensor.value > 0) {
+ fill(255, sensor.value*255);
+ stroke(0, 100)
+ circle(sensor.v.x, sensor.v.y, 8);
+ }
+ }
+ noStroke();
+ fill(0);
+ circle(0, 0, this.r * 2);
+ pop();
+ }
+}
diff --git a/content/examples/11_nn_ga/10_7_creature_sensors/food.js b/content/examples/11_nn_ga/10_7_creature_sensors/food.js
new file mode 100644
index 00000000..9197e4c6
--- /dev/null
+++ b/content/examples/11_nn_ga/10_7_creature_sensors/food.js
@@ -0,0 +1,12 @@
+class Food {
+ constructor() {
+ this.position = createVector(width / 2, height / 2);
+ this.r = 32;
+ }
+
+ show() {
+ noStroke();
+ fill(0, 100);
+ circle(this.position.x, this.position.y, this.r * 2);
+ }
+}
diff --git a/content/examples/11_nn_ga/10_7_creature_sensors/index.html b/content/examples/11_nn_ga/10_7_creature_sensors/index.html
new file mode 100644
index 00000000..ca5e350e
--- /dev/null
+++ b/content/examples/11_nn_ga/10_7_creature_sensors/index.html
@@ -0,0 +1,16 @@
+
+
+
+
+
+
+ Nature of Code Example 10.7: Creature Sensors
+
+
+
+
+
+
+
+
+
diff --git a/content/examples/11_nn_ga/10_7_creature_sensors/screenshot.png b/content/examples/11_nn_ga/10_7_creature_sensors/screenshot.png
new file mode 100644
index 00000000..0d4c8ebf
Binary files /dev/null and b/content/examples/11_nn_ga/10_7_creature_sensors/screenshot.png differ
diff --git a/content/examples/11_nn_ga/10_7_creature_sensors/sensor.js b/content/examples/11_nn_ga/10_7_creature_sensors/sensor.js
new file mode 100644
index 00000000..34ff8a48
--- /dev/null
+++ b/content/examples/11_nn_ga/10_7_creature_sensors/sensor.js
@@ -0,0 +1,20 @@
+class Sensor {
+ constructor(v) {
+ this.v = v.copy();
+ this.value = 0;
+ }
+
+ sense(position, food) {
+ //{!1} Find the "tip" (or endpoint) of the sensor by adding position
+ let end = p5.Vector.add(position, this.v);
+ //{!1} How far is it from the food center
+ let d = end.dist(food.position);
+ //{!1} If it is within the radius light up the sensor
+ if (d < food.r) {
+ // The further into the center the food, the more the sensor activates
+ this.value = map(d, 0, food.r, 1, 0);
+ } else {
+ this.value = 0;
+ }
+ }
+}
\ No newline at end of file
diff --git a/content/examples/11_nn_ga/10_7_creature_sensors/sketch.js b/content/examples/11_nn_ga/10_7_creature_sensors/sketch.js
new file mode 100644
index 00000000..75b468d7
--- /dev/null
+++ b/content/examples/11_nn_ga/10_7_creature_sensors/sketch.js
@@ -0,0 +1,17 @@
+let creature;
+let food;
+
+function setup() {
+ createCanvas(640, 240);
+ creature = new Creature(width / 2, height / 2);
+ food = new Food();
+}
+
+function draw() {
+ background(255);
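+ // The creature follows the mouse, making it easy to test the sensors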
+ creature.position.x = mouseX;
+ creature.position.y = mouseY;
+ food.show();
+ creature.sense(food);
+ creature.show();
+}
diff --git a/content/examples/11_nn_ga/10_7_creature_sensors/style.css b/content/examples/11_nn_ga/10_7_creature_sensors/style.css
new file mode 100644
index 00000000..e78b7102
--- /dev/null
+++ b/content/examples/11_nn_ga/10_7_creature_sensors/style.css
@@ -0,0 +1,8 @@
+html,
+body {
+ margin: 0;
+ padding: 0;
+}
+canvas {
+ display: block;
+}
diff --git a/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/creature.js b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/creature.js
new file mode 100644
index 00000000..ab9f822f
--- /dev/null
+++ b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/creature.js
@@ -0,0 +1,116 @@
+class Creature {
+ constructor(x, y, brain) {
+ this.position = createVector(x, y);
+ this.velocity = createVector(0, 0);
+ this.acceleration = createVector(0, 0);
+ this.fullSize = 12;
+ this.r = this.fullSize;
+ this.maxspeed = 2;
+ this.sensors = [];
+ this.health = 100;
+
+ let totalSensors = 15;
+ for (let i = 0; i < totalSensors; i++) {
+ let a = map(i, 0, totalSensors, 0, TWO_PI);
+ let v = p5.Vector.fromAngle(a);
+ v.mult(this.fullSize * 1.5);
+ this.sensors[i] = new Sensor(v);
+ }
+
+ if (brain) {
+ this.brain = brain;
+ } else {
+ this.brain = ml5.neuralNetwork({
+ inputs: this.sensors.length,
+ outputs: 2,
+ task: "regression",
+ noTraining: true,
+ // neuroEvolution: true,
+ });
+ }
+ }
+
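+ // Asexual reproduction: copy this creature's brain, mutate the copy, and spawn a child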
+ reproduce() {
+ let brain = this.brain.copy();
+ brain.mutate(0.1);
+ return new Creature(this.position.x, this.position.y, brain);
+ }
+
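+ // Eating any overlapping food adds health and shrinks the food; depleted food is replaced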
+ eat() {
+ for (let i = 0; i < food.length; i++) {
+ let d = p5.Vector.dist(this.position, food[i].position);
+ if (d < this.r + food[i].r) {
+ this.health += 0.5;
+ food[i].r -= 0.05;
+ if (food[i].r < 20) {
+ food[i] = new Food();
+ }
+ }
+ }
+ }
+
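+ // Reset the sensors, read every food item, then ask the network for a steering force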
+ think() {
+ for (let i = 0; i < this.sensors.length; i++) {
+ this.sensors[i].value = 0;
+ for (let j = 0; j < food.length; j++) {
+ this.sensors[i].sense(this.position, food[j]);
+ }
+ }
+ let inputs = [];
+ for (let i = 0; i < this.sensors.length; i++) {
+ inputs[i] = this.sensors[i].value;
+ }
+
+ // Predicting the force to apply
+ const outputs = this.brain.predictSync(inputs);
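+ // Map the two outputs (each 0 to 1) to a heading angle and a force magnitude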
+ let angle = outputs[0].value * TWO_PI;
+ let magnitude = outputs[1].value;
+ let force = p5.Vector.fromAngle(angle).setMag(magnitude);
+ this.applyForce(force);
+ }
+
+ // Method to update location
+ update() {
+ // Update velocity
+ this.velocity.add(this.acceleration);
+ // Limit speed
+ this.velocity.limit(this.maxspeed);
+ this.position.add(this.velocity);
+ // Reset acceleration to 0 each cycle
+ this.acceleration.mult(0);
+ this.health -= 0.25;
+ }
+
+ // Wraparound
+ borders() {
+ if (this.position.x < -this.r) this.position.x = width + this.r;
+ if (this.position.y < -this.r) this.position.y = height + this.r;
+ if (this.position.x > width + this.r) this.position.x = -this.r;
+ if (this.position.y > height + this.r) this.position.y = -this.r;
+ }
+
+ applyForce(force) {
+ // We could add mass here if we want A = F / M
+ this.acceleration.add(force);
+ }
+
+ show() {
+ push();
+ translate(this.position.x, this.position.y);
+ for (let sensor of this.sensors) {
+ stroke(0, this.health * 2);
+ line(0, 0, sensor.v.x, sensor.v.y);
+ if (sensor.value > 0) {
+ fill(255, sensor.value * 255);
+ stroke(0, 100);
+ circle(sensor.v.x, sensor.v.y, 4);
+ }
+ }
+ noStroke();
+ fill(0, this.health * 2);
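+ // Size tracks health; constrain() keeps the radius in range when eating pushes health past 100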
+ this.r = map(this.health, 0, 100, 2, this.fullSize);
+ this.r = constrain(this.r, 2, this.fullSize);
+ circle(0, 0, this.r * 2);
+ pop();
+ }
+}
diff --git a/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/food.js b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/food.js
new file mode 100644
index 00000000..ca0dc032
--- /dev/null
+++ b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/food.js
@@ -0,0 +1,12 @@
+class Food {
+ constructor() {
+ this.position = createVector(random(width), random(height));
+ this.r = 50;
+ }
+
+ show() {
+ noStroke();
+ fill(0, 100);
+ circle(this.position.x, this.position.y, this.r * 2);
+ }
+}
diff --git a/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/index.html b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/index.html
new file mode 100644
index 00000000..ca5e350e
--- /dev/null
+++ b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/index.html
@@ -0,0 +1,16 @@
+
+
+
+
+
+
+ Nature of Code Example 10.8: Neuroevolution Ecosystem
+
+
+
+
+
+
+
+
+
diff --git a/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/screenshot.png b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/screenshot.png
new file mode 100644
index 00000000..b627fcbb
Binary files /dev/null and b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/screenshot.png differ
diff --git a/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/sensor.js b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/sensor.js
new file mode 100644
index 00000000..9654ae43
--- /dev/null
+++ b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/sensor.js
@@ -0,0 +1,20 @@
+class Sensor {
+ constructor(v) {
+ this.v = v.copy();
+ this.value = 0;
+ }
+
+ sense(position, food) {
+ //{!1} Find the "tip" (or endpoint) of the sensor by adding position
+ let end = p5.Vector.add(position, this.v);
+ //{!1} How far is it from the food center
+ let d = end.dist(food.position);
+ //{!1} If it is within the radius light up the sensor
+ if (d < food.r) {
+ // The further into the center the food, the more the sensor activates
+ this.value = 1;
+ } else {
+ // this.value = 0;
+ }
+ }
+}
\ No newline at end of file
diff --git a/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/sketch.js b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/sketch.js
new file mode 100644
index 00000000..f2150384
--- /dev/null
+++ b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/sketch.js
@@ -0,0 +1,40 @@
+let bloops = [];
+let timeSlider;
+let food = [];
+
+function setup() {
+ createCanvas(640, 240);
+ ml5.tf.setBackend("cpu");
+ for (let i = 0; i < 20; i++) {
+ bloops[i] = new Creature(random(width), random(height));
+ }
+ for (let i = 0; i < 8; i++) {
+ food[i] = new Food();
+ }
+ timeSlider = createSlider(1, 20, 1);
+ timeSlider.position(10, 220);
+}
+
+function draw() {
+ background(255);
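+ // The slider sets how many simulation steps run per drawn frame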
+ for (let n = 0; n < timeSlider.value(); n++) {
+ for (let i = bloops.length - 1; i >= 0; i--) {
+ bloops[i].think();
+ bloops[i].eat();
+ bloops[i].update();
+ bloops[i].borders();
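+ // Remove dead bloops; survivors have a small chance of reproducing each step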
+ if (bloops[i].health < 0) {
+ bloops.splice(i, 1);
+ } else if (random(1) < 0.001) {
+ let child = bloops[i].reproduce();
+ bloops.push(child);
+ }
+ }
+ }
+ for (let treat of food) {
+ treat.show();
+ }
+ for (let bloop of bloops) {
+ bloop.show();
+ }
+}
diff --git a/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/style.css b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/style.css
new file mode 100644
index 00000000..e78b7102
--- /dev/null
+++ b/content/examples/11_nn_ga/10_8_neuroevolution_ecosystem/style.css
@@ -0,0 +1,8 @@
+html,
+body {
+ margin: 0;
+ padding: 0;
+}
+canvas {
+ display: block;
+}
diff --git a/content/images/10_nn/10_nn_15.png b/content/images/10_nn/10_nn_15.png
new file mode 100644
index 00000000..9a696ce3
Binary files /dev/null and b/content/images/10_nn/10_nn_15.png differ
diff --git a/content/images/10_nn/10_nn_16.jpg b/content/images/10_nn/10_nn_16.jpg
new file mode 100644
index 00000000..955ea786
Binary files /dev/null and b/content/images/10_nn/10_nn_16.jpg differ
diff --git a/content/images/10_nn/10_nn_17.jpg b/content/images/10_nn/10_nn_17.jpg
index e32ef52d..68bd381f 100644
Binary files a/content/images/10_nn/10_nn_17.jpg and b/content/images/10_nn/10_nn_17.jpg differ
diff --git a/content/images/10_nn/10_nn_18.jpg b/content/images/10_nn/10_nn_18.jpg
index 97062add..e32ef52d 100644
Binary files a/content/images/10_nn/10_nn_18.jpg and b/content/images/10_nn/10_nn_18.jpg differ
diff --git a/content/images/10_nn/10_nn_19.jpg b/content/images/10_nn/10_nn_19.jpg
new file mode 100644
index 00000000..97062add
Binary files /dev/null and b/content/images/10_nn/10_nn_19.jpg differ
diff --git a/content/images/10_nn/10_nn_20.png b/content/images/10_nn/10_nn_20.png
index cf24da58..62aae538 100644
Binary files a/content/images/10_nn/10_nn_20.png and b/content/images/10_nn/10_nn_20.png differ
diff --git a/content/images/10_nn/10_nn_4.jpg b/content/images/10_nn/10_nn_4.jpg
new file mode 100644
index 00000000..f85c4b5e
Binary files /dev/null and b/content/images/10_nn/10_nn_4.jpg differ
diff --git a/content/images/11_nn_ga/11_nn_ga_1.png b/content/images/11_nn_ga/11_nn_ga_1.png
new file mode 100644
index 00000000..cf24da58
Binary files /dev/null and b/content/images/11_nn_ga/11_nn_ga_1.png differ
diff --git a/content/images/11_nn_ga/11_nn_ga_2.jpg b/content/images/11_nn_ga/11_nn_ga_2.jpg
new file mode 100644
index 00000000..d5e0d4dc
Binary files /dev/null and b/content/images/11_nn_ga/11_nn_ga_2.jpg differ
diff --git a/content/images/11_nn_ga/11_nn_ga_3.png b/content/images/11_nn_ga/11_nn_ga_3.png
new file mode 100644
index 00000000..00dafcbf
Binary files /dev/null and b/content/images/11_nn_ga/11_nn_ga_3.png differ
diff --git a/content/images/11_nn_ga/11_nn_ga_4.jpg b/content/images/11_nn_ga/11_nn_ga_4.jpg
new file mode 100644
index 00000000..5faaf782
Binary files /dev/null and b/content/images/11_nn_ga/11_nn_ga_4.jpg differ
diff --git a/content/images/11_nn_ga/11_nn_ga_5.jpg b/content/images/11_nn_ga/11_nn_ga_5.jpg
new file mode 100644
index 00000000..8e8516ee
Binary files /dev/null and b/content/images/11_nn_ga/11_nn_ga_5.jpg differ