Here we are: the beginning. If it’s been a while since you’ve programmed in JavaScript (or done any math, for that matter), this chapter will reacquaint your mind with computational thinking. To start your coding-of-nature journey, I’ll introduce you to some foundational tools for programming simulations: random numbers, random distributions, and noise. Think of this as the first (zeroth!) element of the array that makes up this book—a refresher and a gateway to the possibilities that lie ahead.
In Chapter 1, I’m going to talk about the concept of a vector and how it will serve as the building block for simulating motion throughout this book. But before I take that step, let’s think about what it means for something to move around a digital canvas. I’ll begin with one of the best-known and simplest simulations of motion: the random walk.
Random Walks
Example 0.1: A Traditional Random Walk
There are a couple of adjustments I could make to the random walker. For one, this Walker object’s steps are limited to four options: up, down, left, and right. But any given pixel in the canvas can be considered to have eight possible neighbors, including diagonals (see Figure 0.1). A ninth possibility, staying in the same place, could also be an option.
To implement a Walker object that can step to any neighboring pixel (or stay put), I could pick a number between 0 and 8 (nine possible choices). However, another way to write the code would be to pick from three possible steps along the x-axis (-1, 0, or 1) and three possible steps along the y-axis.
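As a rough sketch of that second approach in plain JavaScript (outside p5.js, so Math.random() stands in for p5’s random(); randomStep is an illustrative name, not the book’s code):

```javascript
// Pick a step of -1, 0, or 1 independently for each axis.
// Math.floor(Math.random() * 3) yields 0, 1, or 2; subtracting 1 shifts
// the result to -1, 0, or 1.
function randomStep() {
  const xstep = Math.floor(Math.random() * 3) - 1;
  const ystep = Math.floor(Math.random() * 3) - 1;
  return { xstep, ystep };
}
```

Since the two axes are chosen independently, all nine combinations (including staying put at 0,0) are equally likely.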
A Custom Distribution of Random Numbers
However, this reduces the probabilities to a fixed number of options: 99 percent of the time, a small step; 1 percent of the time, a large step. What if you instead wanted to make a more general rule: the higher a number, the more likely it is to be picked? For example, 0.8791 would be more likely to be picked than 0.8532, even if that likelihood is only a tiny bit greater. In other words, if x is the random number, the likelihood of it being picked could be mapped to the y-axis with the function y = x (Figure 0.3).
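One common way to implement such a rule is accept-reject (Monte Carlo) sampling. Here’s a minimal plain-JavaScript sketch; acceptReject is an illustrative name:

```javascript
// Accept-reject sampling: pick a candidate r1, then accept it with
// probability equal to r1 itself. Higher candidates are accepted more
// often, so the result follows the y = x likelihood rule.
function acceptReject() {
  while (true) {
    const r1 = Math.random(); // candidate value
    const r2 = Math.random(); // qualifying value
    if (r2 < r1) return r1;   // accept with probability r1
  }
}
```

Over many samples, the mean tends toward 2/3 rather than 1/2, reflecting the bias toward higher values.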
Perlin Noise (A Smoother Approach)
I’ve chosen to increment t by 0.01, but using a different increment value will affect the smoothness of the noise. Larger jumps in time that skip ahead through the noise space produce values that are less smooth and more random (Figure 0.5).
In the upcoming code examples that utilize Perlin noise, pay attention to how the animation changes with varying values of t.
Noise Ranges
Once you have noise values that range between 0 and 1, it’s up to you to map that range to whatever size suits your purpose. The easiest way to do this is with p5’s map() function (Figure 0.6). It takes five arguments. First is the value you want to map, in this case n. This is followed by the value’s current range (minimum and maximum), followed by the desired range.
In this case, while noise has a range between 0 and 1, I’d like to draw a circle with an x-position ranging between 0 and the canvas’s width.
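The arithmetic behind map() is plain linear interpolation. Here’s a stand-alone JavaScript version for illustration (p5’s real map() also accepts an optional sixth argument that clamps the output to the target range):

```javascript
// A plain JavaScript stand-in for p5's map(): re-maps a value from one
// range to another with linear interpolation.
function map(value, start1, stop1, start2, stop2) {
  const fraction = (value - start1) / (stop1 - start1);
  return start2 + fraction * (stop2 - start2);
}

// A noise value n between 0 and 1 mapped to an x-position on a
// 640-pixel-wide canvas:
const x = map(0.25, 0, 1, 0, 640); // 160
```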
Example 0.6: A Perlin Noise Walker
Notice how this example requires a new pair of variables: tx and ty. This is because we need to keep track of two time variables, one for the x-position of the Walker object and one for the y-position. But there’s something a bit odd about these variables. Why does tx start at 0 and ty at 10,000? While these numbers are arbitrary choices, I’ve intentionally initialized the two time variables this way because the noise function is deterministic: it gives you the same result for a specific time t each and every time. If I asked for the noise value at the same time t for both x and y, then x and y would always be equal, meaning that the Walker object would only move along a diagonal. Instead, I use two different parts of the noise space, starting at 0 for x and 10,000 for y, so that x and y appear to act independently of each other (Figure 0.7).
In truth, there’s no actual concept of time at play here. It’s a useful metaphor to help describe how the noise function works, but really what you have is space, rather than time. The graph in Figure 0.7 depicts a linear sequence of noise values in a one-dimensional space—that is, arranged along a line. Values are retrieved at a specific x-position, which is why you’ll often see a variable named xoff in examples to indicate the x-offset along the noise graph, rather than t for time.
Exercise 0.7
Two-Dimensional Noise
Having explored the concept of noise values in one dimension, let's consider how they can also exist in a two-dimensional space. With one-dimensional noise, there’s a sequence of values in which any given value is similar to its neighbor. Imagine a piece of graph paper (or a spreadsheet!) with the values for 1D noise written across a single row, one value per cell. Because these values live in one dimension, each has only two neighbors: a value that comes before it (to the left) and one that comes after it (to the right), as shown on the left in Figure 0.8.
Two-dimensional noise works exactly the same way conceptually. The difference, of course, is that the values aren’t just written in a linear path along one row of the graph paper, but rather fill the whole grid. A given value will be similar to all of its neighbors: above, below, to the right, to the left, and along any diagonal, as in the right half of Figure 0.8.
This book is all about looking at the world around us and developing ways to simulate it with code. In this first part of the book, I’ll start by looking at basic physics: how an apple falls from a tree, how a pendulum swings in the air, how Earth revolves around the sun, and so on. Absolutely everything contained within the book’s first five chapters requires the use of the most basic building block for programming motion, the vector. And so that’s where I’ll begin the story.
The word vector can mean a lot of different things. It’s the name of a New Wave rock band formed in Sacramento, California, in the early 1980s, and the name of a breakfast cereal manufactured by Kellogg’s Canada. In the field of epidemiology, a vector is an organism that transmits infection from one host to another. In the C++ programming language, a vector (std::vector) is an implementation of a dynamically resizable array data structure.
While all these definitions are worth exploring, they’re not the focus here. Instead, this chapter dives into the Euclidean vector (named for the Greek mathematician Euclid), also known as the geometric vector. When you see the term vector in this book, you can assume it refers to a Euclidean vector, defined as an entity that has both magnitude and direction.
Example 1.1: Bouncing Ball with No Vectors!
Vectors in p5.js
Think of a vector as the difference between two points, or as instructions for walking from one point to another. For example, Figure 1.2 shows some vectors and possible interpretations of them.
These vectors could be thought of in the following way:
You’ve probably already thought this way when programming motion. For every frame of animation (a single cycle through p5’s draw() loop), you instruct each object to reposition itself to a new spot a certain number of pixels away horizontally and a certain number of pixels away vertically. This instruction is essentially a vector, as in Figure 1.3; it has both magnitude (how far away did you travel?) and direction (which way did you go?).
The vector sets the object’s velocity, defined as the rate of change of the object’s position with respect to time. In other words, the velocity vector determines the object’s new position for every frame of the animation, according to this basic algorithm for motion: the new position is equal to the result of applying the velocity to the current position.
If velocity is a vector (the difference between two points), what about position? Is it a vector too? Technically, you could argue that position is not a vector, since it’s not describing how to move from one point to another—it’s describing a single point in space. Nevertheless, another way to describe a position is as the path taken from the origin—point (0,0)—to the current point. When you think of position in this way, it becomes a vector, just like velocity, as in Figure 1.4.
In Figure 1.4, the vectors are placed on a computer graphics canvas. Unlike in Figure 1.2, the origin point (0,0) isn’t the center, it’s the top-left corner. And instead of north, south, east, and west, there are positive and negative directions along the x- and y-axes (with y pointing down in the positive direction).
Vector Addition
Let’s say I have the two vectors shown in Figure 1.5.
Each vector has two components, an x and a y. To add two vectors together, add their x components and their y components to create a new vector, as in Figure 1.6.
In other words, \vec{w} = \vec{u} + \vec{v} can be written as:
w_x = u_x + v_x
w_y = u_y + v_y
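A minimal sketch of how a vector class could implement this componentwise addition (plain JavaScript for illustration, not p5’s actual p5.Vector source):

```javascript
// Each vector stores an x and a y component; add() sums them componentwise.
class Vector {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  add(v) {
    this.x += v.x;
    this.y += v.y;
  }
}

const u = new Vector(3, 4);
u.add(new Vector(2, 1)); // u is now (5, 5)
```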
More Vector Math
Vector Subtraction
\vec{u} - \vec{v} = \vec{u} + -\vec{v}
Just as vectors are added by placing them “tip to tail”—that is, aligning the tip (or end point) of one vector with the tail (or start point) of the next—vectors are subtracted by reversing the direction of the second vector and placing it at the end of the first, as in Figure 1.8.
To actually solve the subtraction, take the difference of the vectors’ components. That is, \vec{w} = \vec{u} - \vec{v} can be written as:
w_x = u_x - v_x
w_y = u_y - v_y
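The same componentwise idea, sketched as a stand-alone function (illustrative, not p5’s implementation):

```javascript
// Subtract v from u componentwise, returning a new vector.
function sub(u, v) {
  return { x: u.x - v.x, y: u.y - v.y };
}

const w = sub({ x: 3, y: 4 }, { x: 1, y: 6 }); // (2, -2)
```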
Vector Multiplication and Division
Moving on to multiplication, you have to think a bit differently. Multiplying a vector typically refers to the process of scaling a vector. If I want to scale a vector to twice its size or one-third of its size, while leaving its direction the same, I would say: “Multiply the vector by 2” or “Multiply the vector by 1/3.” Unlike with addition and subtraction, I’m multiplying the vector by a scalar (a single number), not by another vector. Figure 1.9 illustrates how to scale a vector by a factor of 3.
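Sketched in plain JavaScript, scaling multiplies each component by the scalar (again illustrative, not p5’s source):

```javascript
// Scale a vector by a scalar n: each component is multiplied by n.
// Dividing by 2 is equivalent to multiplying by 0.5.
function mult(v, n) {
  return { x: v.x * n, y: v.y * n };
}

const scaled = mult({ x: -3, y: 7 }, 3); // (-9, 21)
```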
Example 1.4: Multiplying a Vector
The resulting vector is half its original size. Rather than multiplying the vector by 0.5, I could also achieve the same effect by dividing the vector by 2, as in Figure 1.10.
Vector division, then, works just like vector multiplication—just replace the multiplication sign (*) with the division sign (/). Here’s how the p5.Vector class implements the div() function:
More Number Properties with Vectors
Vector Magnitude
Multiplication and division, as just described, alter the length of a vector without affecting its direction. Perhaps you’re wondering: “OK, so how do I know what the length of a vector is? I know the vector’s components (x and y), but how long (in pixels) is the actual arrow?” Understanding how to calculate the length of a vector, also known as its magnitude, is incredibly useful and important.
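The magnitude comes from the Pythagorean theorem: the vector’s components form the legs of a right triangle, and the vector itself is the hypotenuse. A quick sketch:

```javascript
// ||v|| = sqrt(x^2 + y^2), by the Pythagorean theorem.
function mag(v) {
  return Math.sqrt(v.x * v.x + v.y * v.y);
}

const length = mag({ x: 3, y: 4 }); // 5 (the classic 3-4-5 triangle)
```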
Example 1.5: Vector Magnitude
Normalizing Vectors
\hat{u} = \frac{\vec{u}}{||\vec{u}||}
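In code, that formula amounts to dividing each component by the magnitude. A sketch, with a guard for the zero vector (which has no direction to normalize):

```javascript
// Divide each component by the magnitude to get a unit vector (length 1)
// pointing in the same direction. The zero vector is returned unchanged.
function normalize(v) {
  const m = Math.sqrt(v.x * v.x + v.y * v.y);
  if (m === 0) return { x: 0, y: 0 };
  return { x: v.x / m, y: v.y / m };
}
```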
Interactive Motion (Acceler
To finish out this chapter, let’s try something a bit more complex and a great deal more useful. I’ll dynamically calculate an object’s acceleration according to the rule stated in Acceleration Algorithm #3: the object accelerates toward the mouse.
Anytime you want to calculate a vector based on a rule or a formula, you need to compute two things: magnitude and direction. I’ll start with direction. I know the acceleration vector should point from the object’s position toward the mouse position (Figure 1.15). Let’s say the object is located at the position vector (x, y) and the mouse at (mouseX, mouseY).
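Putting the two parts together, here’s a hedged sketch of the calculation in plain JavaScript; accelerationToward and strength are illustrative names, not the book’s code:

```javascript
// Direction: the vector from the object's position to the mouse, normalized.
// Magnitude: a chosen strength value.
function accelerationToward(position, mouse, strength) {
  const dx = mouse.x - position.x;
  const dy = mouse.y - position.y;
  const d = Math.sqrt(dx * dx + dy * dy);
  if (d === 0) return { x: 0, y: 0 }; // already at the mouse position
  return { x: (dx / d) * strength, y: (dy / d) * strength };
}

const acc = accelerationToward({ x: 0, y: 0 }, { x: 3, y: 4 }, 10);
// acc points toward (3, 4) with magnitude 10
```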
The Ecosystem Project
Develop a set of rules for simulating the real-world behavior of a creature, such as a nervous fly, swimming fish, hopping bunny, slithering snake, and so on. Can you control the object’s motion by only manipulating the acceleration vector? Try to give the creature a personality through its behavior (rather than through its visual design, although that is, of course, worth exploring as well).
Here's an illustration to help you generate ideas about how to build an ecosystem based on the topics covered in this book. Watch how the illustration evolves as new concepts and techniques are introduced with each subsequent chapter. The goal of this book is to demonstrate algorithms and behaviors, so my examples will almost always only include a single primitive shape, such as a circle. However, I fully expect that there are creative sparks within you, and encourage you to challenge yourself with the designs of the elements you draw on the canvas. If drawing with code is new to you, the book's illustrator, Zannah Marsh, has written a helpful guide that you can find in [TBD].
In the final example of Chapter 1, I demonstrated how to calculate a dynamic acceleration based on a vector pointing from a circle on the canvas to the mouse position. The resulting motion resembled a magnetic attraction between shape and mouse, as if some force were pulling the circle toward the mouse. In this chapter, I’ll detail the concept of a force and its relationship to acceleration. The goal, by the end of this chapter, is to build a simple physics engine and understand how objects move around a canvas in response to a variety of environmental forces.
A physics engine is a computer program (or code library) that simulates the behavior of objects in a physical environment. With a p5.js sketch, the objects are two-dimensional shapes, and the environment is a rectangular canvas. Physics engines can be developed to be highly precise (requiring high-performance computing) or real-time (using simple and fast algorithms). This chapter will focus on building a rudimentary physics engine, with a focus on speed and ease of understanding.
Forces and Newton’s Laws of Motion
Newton’s First Law
When Newton came along, the prevailing theory of motion—formulated by Aristotle—was nearly 2,000 years old. It stated that if an object is moving, some sort of force is required to keep it moving. Unless that moving thing is being pushed or pulled, it will slow down or stop. This theory was borne out through observation of the world. For example, if you toss a ball, it falls to the ground and eventually stops moving, seemingly because the force of the toss is no longer being applied.
This older theory, of course, isn’t true. As Newton established, no force is required to keep an object moving. When an object (such as the aforementioned ball) is tossed in Earth’s atmosphere, its velocity changes because of unseen forces such as air resistance and gravity. An object’s velocity will only remain constant in the absence of any forces or if the forces that act on it cancel each other out, meaning the net force adds up to zero. This is often referred to as equilibrium (see Figure 2.1). The falling ball will reach a terminal velocity (that stays constant) once the force of air resistance equals the force of gravity.
Considering a p5.js canvas, I could restate Newton’s first law as follows: an object’s velocity will remain constant unless a nonzero net force acts on it.
Newton’s Third Law
Consider pushing on a stationary truck. Although the truck is far more massive than you, a stationary truck (unlike a moving one) will never overpower you and send you flying backward. The force your hands exert on the truck is equal and opposite to the force exerted by the truck on your hands. The outcome depends on a variety of other factors. If the truck is a small truck on an icy street, you’ll probably be able to get it to move. On the other hand, if it’s a very large truck on a dirt road and you push hard enough (maybe even take a running start), you could injure your hand.
And what if, as in Figure 2.2, you are wearing roller skates when you push on that truck?
You’ll accelerate away from the truck, sliding along the road while the truck stays put. Why do you slide but not the truck? For one, the truck has a much larger mass (which I’ll get into with Newton’s second law). There are other forces at work too, namely the friction of the truck’s tires and your roller skates against the road.
Friction
Whenever two surfaces come into contact, they experience friction. Friction is a dissipative force, meaning it causes the kinetic energy of an object to be converted into another form, giving the impression of loss or dissipation. Let’s say you’re driving a car. When you press your foot down on the brake pedal, the car’s brakes use friction to slow down the motion of the tires. Kinetic energy (motion) is converted into thermal energy (heat). A complete model of friction would include separate cases for static friction (a body at rest against a surface) and kinetic friction (a body in motion against a surface), but for simplicity here, I’m only going to look at the kinetic case.
Figure 2.3 shows the formula for friction.
Since friction is a vector, let me separate this formula into two components that determine the direction of friction as well as its magnitude. Figure 2.3 indicates that friction points in the opposite direction of velocity. In fact, that’s the part of the formula that says -1 * \hat{v}, or –1 times the velocity unit vector. In p5.js, this would mean taking an object’s velocity vector and multiplying it by -1.
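Assuming the standard magnitude μN (the coefficient of friction times the normal force, as the rest of the formula describes), the whole calculation can be sketched in plain JavaScript; all names here are illustrative:

```javascript
// Friction force sketch: magnitude mu * N, direction -1 times the
// velocity unit vector.
function friction(velocity, mu, normal) {
  const speed = Math.sqrt(velocity.x * velocity.x + velocity.y * velocity.y);
  if (speed === 0) return { x: 0, y: 0 }; // no motion, no kinetic friction
  const magnitude = mu * normal;
  return {
    x: (-velocity.x / speed) * magnitude,
    y: (-velocity.y / speed) * magnitude,
  };
}
```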
Exercise 2.7
Air and Fluid Resistance
Friction also occurs when a body passes through a liquid or gas. The resulting force has many different names, all really meaning the same thing: viscous force, drag force, air resistance, or fluid resistance (see Figure 2.4).
The effect of a drag force is ultimately the same as the effect in our previous friction examples: the object slows down. The exact behavior and calculation of a drag force is a bit different, however. Here’s the formula:
F_{drag} = -\frac{1}{2} \rho v^2 A C_d \hat{v}
Now that I’ve analyzed each of these components and determined what’s actually needed for my simulation, I can reduce the formula, as shown in Figure 2.5.
While I’ve written the simplified formula with C_d as the lone constant representing the “coefficient of drag”, I can also think of it as all of the constants combined ( -1/2, \rho, A). A more sophisticated simulation might treat these constants separately—you could try factoring them in as an exercise.
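Under those simplifications, the drag calculation reduces to a magnitude of C_d times speed squared, pointed opposite the velocity. A sketch with illustrative names:

```javascript
// Simplified drag: magnitude cd * ||v||^2, direction -1 times the
// velocity unit vector.
function drag(velocity, cd) {
  const speed = Math.sqrt(velocity.x * velocity.x + velocity.y * velocity.y);
  if (speed === 0) return { x: 0, y: 0 };
  const magnitude = cd * speed * speed;
  return {
    x: (-velocity.x / speed) * magnitude,
    y: (-velocity.y / speed) * magnitude,
  };
}
```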
Exercise 2.10
Gravitational Attraction
Given these assumptions, I want to compute a vector, the force of gravity. I’ll do it in two parts. First, I’ll compute the direction of the force (\hat{r} in the formula). Second, I’ll calculate the strength of the force according to the masses and distance.
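Those two parts combine into a sketch like the following (plain JavaScript; attract and G are illustrative, and a real sketch would use p5.Vector methods):

```javascript
// Newtonian attraction: F = G * m1 * m2 / d^2, along the unit vector
// pointing from body a toward body b.
function attract(a, b, G) {
  const dx = b.pos.x - a.pos.x;
  const dy = b.pos.y - a.pos.y;
  const d = Math.sqrt(dx * dx + dy * dy);
  const strength = (G * a.mass * b.mass) / (d * d);
  return { x: (dx / d) * strength, y: (dy / d) * strength };
}

const force = attract(
  { pos: { x: 0, y: 0 }, mass: 1 },
  { pos: { x: 3, y: 4 }, mass: 2 },
  1
);
```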
Now that I’ve worked out the math and code for calculating an attractive force (emulating gravitational attraction), let’s turn our attention to applying this technique in the context of an actual p5.js sketch. I’ll continue to use the Mover class as a starting point—a template for making objects with position, velocity, and acceleration vectors, as well as an applyForce() method. I’ll take this class and put it in a sketch with:
The n-Body Problem
To begin, while it’s so far been helpful to have separate Mover and Attractor classes, this distinction is actually a bit misleading. After all, according to Newton's third law, all forces occur in pairs: if an attractor attracts a mover, then that mover should also attract the attractor. Instead of two different classes here, what I really want is a single type of thing—called, for example, a Body—with every body attracting every other body.
The scenario being described here is commonly referred to as the n-body problem. It involves solving for the motion of a group of objects that interact via gravitational forces. The two-body problem is a famously “solved” problem, meaning the motions can be precisely computed with mathematical equations when only two bodies are involved. However, adding one more body turns the two-body problem into a three-body problem, and suddenly no formal solution exists.
Although less accurate than using precise equations of motion, the examples built in this chapter can model both the two-body and three-body problems. To begin, I’ll move the attract() method from the Attractor class into the Mover class (which I will now call Body).
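A minimal sketch of that restructuring, with every body attracting every other (a plain-JavaScript stand-in for the chapter’s Body class; the implementation details are illustrative):

```javascript
// Each body has a position, mass, and accumulated acceleration.
class Body {
  constructor(x, y, mass) {
    this.pos = { x, y };
    this.mass = mass;
    this.acc = { x: 0, y: 0 };
  }
  applyForce(f) {
    // Newton's second law: a = F / m, accumulated over the frame.
    this.acc.x += f.x / this.mass;
    this.acc.y += f.y / this.mass;
  }
  attract(other, G = 1) {
    // Force this body exerts on `other`, pointing from other toward this.
    const dx = this.pos.x - other.pos.x;
    const dy = this.pos.y - other.pos.y;
    const d = Math.sqrt(dx * dx + dy * dy);
    const strength = (G * this.mass * other.mass) / (d * d);
    return { x: (dx / d) * strength, y: (dy / d) * strength };
  }
}

// Every body attracts every other body (never itself).
function applyMutualAttraction(bodies) {
  for (const a of bodies) {
    for (const b of bodies) {
      if (a !== b) a.applyForce(b.attract(a));
    }
  }
}
```

By Newton’s third law, the forces on any pair of bodies come out equal and opposite.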
The Ecosystem Project
Step 2 Exercise:
Incorporate the concept of forces into your ecosystem. How might other environmental factors (for example, water versus mud, or the current of a river) affect how a character moves through an ecosystem? Try introducing other elements into the environment (food, a predator) for the creature to interact with. Does the creature experience attraction or repulsion to things in its world? Can you think more abstractly and design forces based on the creature’s desires or goals?
Bridget Riley, a celebrated British artist, was a driving force behind the Op Art movement of the 1960s. Her work features geometric patterns that challenge the viewer's perceptions and evoke feelings of movement or vibration. Her 1974 piece, "Gala," showcases a series of curvilinear forms that ripple across the canvas, evoking the natural rhythm of the sine wave.
In Chapters 1 and 2, I carefully worked out an object-oriented structure to animate a shape in a p5.js canvas, using the concept of a vector to represent position, velocity, and acceleration driven by forces in the environment. I could move straight from here into topics such as particle systems, steering forces, group behaviors, and more. However, doing so would mean skipping a fundamental aspect of motion in the natural world: oscillation, or the back-and-forth movement of an object around a central point or position. In order to model oscillation, you’ll need to understand a little bit about trigonometry.
Trigonometry is the mathematics of triangles, specifically right triangles. Learning some trig will give you new tools to generate patterns and create new motion behaviors in a p5.js sketch. You’ll learn to harness angular velocity and acceleration to spin objects as they move. You’ll be able to use the sine and cosine functions to model nice ease-in, ease-out wave patterns. You’ll also learn to calculate the more complex forces at play in situations that involve angles, such as a pendulum swinging or a box sliding down an incline.
I’ll start the chapter with the basics of working with angles in p5.js, then cover several aspects of trigonometry. In the end, I’ll connect trigonometry with what you learned about forces in Chapter 2. What I cover here will pave the way for more sophisticated examples that require trig later in this book.
Angles
Before going any further, I need to make sure you understand how angles are described in p5.js. If you have experience with p5.js, you’ve undoubtedly worked with angles while using the rotate() function to rotate and spin objects.
You’re most likely familiar with the concept of an angle as measured in degrees (see Figure 3.1). A full rotation goes from 0 to 360 degrees, and 90 degrees (a right angle) is one-quarter of 360, shown in Figure 3.1 as two perpendicular lines.
Angles are commonly used in computer graphics to specify a rotation for a shape. For example, the square in Figure 3.2 is rotated 45 degrees around its center.
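One wrinkle worth remembering: by default, p5.js’s rotate() expects the angle in radians, not degrees. The conversion is radians = 2π × (degrees ÷ 360), which p5’s radians() and degrees() helpers perform; here’s the arithmetic in plain JavaScript:

```javascript
// Convert between degrees and radians: a full rotation is 360 degrees
// or 2 * PI radians.
function radians(deg) {
  return (2 * Math.PI * deg) / 360;
}
function degrees(rad) {
  return (360 * rad) / (2 * Math.PI);
}

const quarterTurn = radians(90); // PI / 2
```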
Exercise 3.3
Trigonometry Functions
I think I’m ready to reveal the secret of trigonometry. I’ve discussed angles, I’ve spun a baton. Now it’s time for … wait for it … sohcahtoa. Yes, sohcahtoa! This seemingly nonsensical word is actually the foundation for much of computer graphics work. A basic understanding of trigonometry is essential if you want to calculate angles, figure out distances between points, and work with circles, arcs, or lines. And sohcahtoa is a mnemonic device (albeit a somewhat absurd one) for remembering what the trigonometric functions sine, cosine, and tangent mean. It references the different sides of a right triangle, as shown in Figure 3.4.
Take one of the non-right angles in the triangle. The adjacent side is the one touching that angle, the opposite side is the one not touching that angle, and the hypotenuse is the side opposite the right angle. Sohcahtoa tells you how to calculate the angle’s trigonometric functions in terms of the lengths of these sides:
soh: sine = opposite / hypotenuse
cah: cosine = adjacent / hypotenuse
toa: tangent = opposite / adjacent
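These ratios can be checked with JavaScript’s Math functions, using the classic 3-4-5 right triangle as an example (a quick sketch, not code from the book):

```javascript
// For the angle whose opposite side is 3 and adjacent side is 4
// (hypotenuse 5), recover the angle from "toa", then confirm "soh" and "cah".
const opposite = 3;
const adjacent = 4;
const hypotenuse = 5;
const angle = Math.atan(opposite / adjacent); // tangent = opposite / adjacent
const sine = Math.sin(angle);   // equals opposite / hypotenuse (0.6)
const cosine = Math.cos(angle); // equals adjacent / hypotenuse (0.8)
```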
Take a look at Figure 3.4 again. You don’t need to memorize it, but see if you feel comfortable with it. Try redrawing it yourself. Next, let’s look at it in a slightly different way (see Figure 3.5).
Pointing in the Direction of Movement
You might notice that almost all of the shapes I’ve been drawing so far have been circles. This is convenient for a number of reasons, one of which is that it allowed me to avoid the question of rotation. Rotate a circle and, well, it looks exactly the same. Nevertheless, there comes a time in all motion programmers’ lives when they want to move something around on the screen that isn’t shaped like a circle. Perhaps it’s an ant, or a car, or a spaceship. To look realistic, that object should point in its direction of movement.
When I say “point in its direction of movement,” what I really mean is “rotate according to its velocity vector.” Velocity is a vector, with an x and a y component, but to rotate in p5.js you need one number, an angle. Let’s look at the trigonometry diagram once more, this time focused on an object’s velocity vector (Figure 3.6).
The vector’s x and y components are related to its angle through the tangent function. Using the toa in sohcahtoa, I can write the relationship as follows:
\tan(\theta) = \frac{v_y}{v_x}
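Solving for the angle means taking an arctangent. A sketch in plain JavaScript, using Math.atan2() (which, unlike a plain arctangent, keeps track of the signs of both components and so works in all four quadrants):

```javascript
// The angle of a velocity vector, measured from the positive x-axis.
function heading(velocity) {
  return Math.atan2(velocity.y, velocity.x);
}

const angle = heading({ x: 1, y: 1 }); // PI / 4 (45 degrees)
```

Note the argument order: atan2 takes y first, then x.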
Polar vs. Cartesian Coordinates
Another useful coordinate system known as polar coordinates describes a point in space as a distance from the origin (like the radius of a circle) and an angle of rotation around the origin (usually called \theta, the Greek letter theta). Thinking in terms of vectors, a Cartesian coordinate describes a vector’s x and y components, whereas a polar coordinate describes a vector’s magnitude (length) and direction (angle).
When working in p5.js, you may find it more convenient to think in polar coordinates, especially when creating sketches that involve rotational or circular movements. However, p5.js’s drawing functions only understand xy Cartesian coordinates. Happily for you, trigonometry holds the key to converting back and forth between polar and Cartesian (see Figure 3.8). This allows you to design with whatever coordinate system you have in mind, while always drawing using Cartesian coordinates.
For example, given a polar coordinate with a radius of 75 pixels and an angle (\theta) of 45 degrees (or \pi/4 radians), the Cartesian x and y can be computed as follows:
x = r \cos(\theta) = 75 \cos(\pi/4) \approx 53.03
y = r \sin(\theta) = 75 \sin(\pi/4) \approx 53.03
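That conversion in plain JavaScript (Math.cos() and Math.sin() expect radians):

```javascript
// Polar (r, theta) to Cartesian (x, y): x = r * cos(theta), y = r * sin(theta).
const r = 75;
const theta = Math.PI / 4; // 45 degrees in radians
const px = r * Math.cos(theta); // roughly 53.03
const py = r * Math.sin(theta); // roughly 53.03
```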
Exercise 3.5
Properties of Oscillation
Take a look at the graph of the sine function in Figure 3.9, where y = \sin(x).
The output of the sine function is a smooth curve alternating between −1 and 1, also known as a sine wave. This behavior, a periodic movement between two points, is the oscillation I mentioned at the start of the chapter. Plucking a guitar string, swinging a pendulum, bouncing on a pogo stick—these are all examples of oscillating motion, and they can all be modeled using the sine function.
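Mapping that −1 to 1 output onto pixel coordinates is all it takes to animate an oscillation. A sketch, assuming a hypothetical canvas width passed in as a parameter:

```javascript
// Map sin()'s output (-1 to 1) to an x-position (0 to width), producing a
// smooth back-and-forth motion as the angle grows over time.
function oscillateX(angle, width) {
  const n = Math.sin(angle);    // between -1 and 1
  return ((n + 1) / 2) * width; // between 0 and width
}

const center = oscillateX(0, 400); // 200: sin(0) is 0, the middle of the range
```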
Spring Forces
It’s been lovely exploring the mathematics of triangles and waves, but perhaps you’re starting to miss Newton’s laws of motion and vectors. After all, the core of this book is about simulating the physics of moving bodies. In the “Properties of Oscillation” section, I modeled simple harmonic motion by mapping a sine wave to a range of pixels on a canvas. Exercise 3.6 asked you to use this technique to create a simulation of a bob hanging from a spring with the sin() function. While such a solution is a quick-and-dirty, one-line-of-code way to achieve the result, it won’t do if what you really want is a bob hanging from a spring that responds to other forces in the environment (wind, gravity, and so on).
To accomplish a simulation like that, you need to model the force of a spring using vectors. To do so, I’ll consider the spring as a connection between a “bob” and an “anchor” (see Figure 3.14).
The force of a spring is calculated according to Hooke’s law, named for Robert Hooke, a British physicist who developed the formula in 1660. Hooke originally stated the law in Latin: “Ut tensio, sic vis,” or “As the extension, so the force.” Think of it this way:
The force of the spring is directly proportional to the extension of the spring.
Now that I’ve sorted out the elements necessary for the magnitude of the force (-kx), I need to figure out the direction, a unit vector pointing in the direction of the force. The good news is that I already have this vector. Right? Just a moment ago I asked the question “How can I calculate that distance?” and I answered “How about the magnitude of a vector that points from the anchor to the bob?” Well, that vector describes the direction of the force!
Figure 3.17 shows that if you stretch the spring beyond its rest length, there should be a force pulling it back towards the anchor. And if the spring shrinks below its rest length, the force should push it away from the anchor. The Hooke’s law formula accounts for this reversal of direction with the –1.
All I need to do now is set the magnitude of the vector used for the distance calculation. Let’s take a look at the code and rename that vector variable as force.
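Putting magnitude and direction together, the whole spring-force calculation can be sketched as follows (plain JavaScript with illustrative names; a p5.js version would lean on p5.Vector methods instead):

```javascript
// Hooke's law: force magnitude is -k times the stretch (current length
// minus rest length); direction is the unit vector from anchor to bob.
function springForce(anchor, bob, k, restLength) {
  const dx = bob.x - anchor.x;
  const dy = bob.y - anchor.y;
  const currentLength = Math.sqrt(dx * dx + dy * dy);
  const stretch = currentLength - restLength;
  const magnitude = -1 * k * stretch;
  return {
    x: (dx / currentLength) * magnitude,
    y: (dy / currentLength) * magnitude,
  };
}

// A bob 10 pixels below the anchor on a spring with rest length 5 is
// pulled back up toward the anchor:
const f = springForce({ x: 0, y: 0 }, { x: 0, y: 10 }, 0.1, 5); // (0, -0.5)
```

The -1 in the magnitude is what flips the force when the spring is compressed instead of stretched.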
One option would be to write all of the spring force code in the main draw() loop. But thinking ahead to when you might have multiple bob-and-spring connections, it would be wise to create an additional class, a Spring class. As shown in Figure 3.18, the Bob class keeps track of the movements of the bob; the Spring class keeps track of the spring’s anchor position and rest length, and calculates the spring force on the bob.
This allows me to write a lovely sketch as follows:
The Pendulum
You might have noticed that in the spring forces code I never once used sine or cosine! But before you write off all this trigonometry stuff as a tangent, allow me to show an example of how it all fits together. Imagine a bob hanging from an anchor connected by a spring with a fully rigid connection that can neither be compressed nor extended. This idealized scenario describes a pendulum and provides an excellent opportunity to practice combining all that you have learned about forces and trigonometry.
A pendulum is a weight, or bob, suspended by an arm from a pivot (what was previously called the “anchor” in the spring). When it’s at rest, the pendulum hangs straight down, as in Figure 3.14. If you lift the pendulum up at an angle from its resting state and then release it, however, it starts to swing back and forth, tracing the shape of an arc. A real-world pendulum would live in a three-dimensional space, but I’m going to look at a simpler scenario: a pendulum in the two-dimensional space of a p5.js canvas. Let’s look at it in a non-resting position and add the forces at play: gravity and tension.
Exercise 5.9
Simple Path Following
Figure 5.19 depicts all the ingredients of the path following behavior. There are a lot of components at play here beyond just a vehicle and target, so take some time to review the full diagram. I’ll then slowly unpack the algorithm piece by piece.
Figure 5.19: Path following includes a path, a vehicle, a future position, a “normal” to the path, and a target.
First, what do I mean by a path? There are many techniques for implementing a path, but one simple way is to define a path as a series of connected points, as in Figure 5.20.
Figure 5.20: A path is a sequence of connected points.
The simplest version of this path would just be a line between two points (Figure 5.21).
Figure 5.21: A path with a start, end, and radius.
I’m also going to consider a path to have a radius. If the path is a road, the radius is the road’s width. With a smaller radius, vehicles have to follow the path more closely; a wider radius allows them to stray a bit more to either side of the path.
Now, assume there’s a vehicle outside the path’s radius, moving with a velocity, as in Figure 5.22.
Figure 5.22: Adding a vehicle moving off and away from the path
The first thing to do is predict (assuming a constant velocity) where that vehicle will be in the future.
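A minimal sketch of that prediction, using plain `{x, y}` objects instead of `p5.Vector` (the lookahead distance is an arbitrary choice, commonly 25 pixels in this chapter’s examples):

```javascript
// Predict where the vehicle will be, assuming constant velocity:
// scale the velocity to a fixed lookahead distance and add it to
// the current position.
function predictFuturePosition(position, velocity, lookahead) {
  let mag = Math.hypot(velocity.x, velocity.y);
  return {
    x: position.x + (velocity.x / mag) * lookahead,
    y: position.y + (velocity.y / mag) * lookahead,
  };
}
```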
Once I have that position, it’s time to determine the distance from that predicted position to the path. If it’s very far away, the vehicle has strayed from the path and needs to steer back toward it. If it’s on the path, all is well and the vehicle can continue on its way.
Essentially, I need to calculate the distance between a point (the future position) and a line (path). That distance is defined as the length of the normal, a vector that extends from the point to the line and is perpendicular to the line (Figure 5.23).
Figure 5.23: The normal is a vector that extends from the future position to the path and is perpendicular to the path.
How do I find the normal? First, I can define a vector (call it \vec{A}) that extends from the path’s starting point to the vehicle’s future position.
let b = p5.Vector.sub(path.end, path.start);
Now, with a little trigonometry (the cah in sohcahtoa), I can calculate the distance from the path’s start to the normal point. As shown in Figure 5.24, it’s ||\vec{A}|| \times \cos(\theta).
Figure 5.24: The distance from the start of the path to the normal is ||\vec{A}|| \times \cos(\theta).
If I only knew \theta, I could find that normal point with the following code:
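A hedged reconstruction of that idea in plain JavaScript (here \theta is recovered from the two vectors directly, rather than assumed to be known):

```javascript
// Find the normal point using theta: the distance from the path's
// start to the normal point is ||A|| * cos(theta), so scale B to
// that length and add it to the path's start.
function normalPointViaTheta(start, a, b) {
  // a: start -> future position, b: start -> path end
  let aMag = Math.hypot(a.x, a.y);
  let bMag = Math.hypot(b.x, b.y);
  let cosTheta = (a.x * b.x + a.y * b.y) / (aMag * bMag);
  let d = aMag * cosTheta; // ||A|| * cos(theta)
  return { x: start.x + (b.x / bMag) * d, y: start.y + (b.y / bMag) * d };
}
```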
let normalPoint = p5.Vector.add(path.start, b);
This process of scaling \vec{B} according to the normal point is commonly known as scalar projection. We say that ||\vec{A}||\times\cos(\theta) is the scalar projection of \vec{A} onto \vec{B}, as in Figure 5.25.
Figure 5.25: The scalar projection of \vec{A} onto \vec{B} is equal to ||\vec{A}||\times\cos(\theta).
Once I have the normal point along the path, the next step is to decide whether and how the vehicle should steer toward the path. Reynolds’s algorithm states that the vehicle should only steer toward the path if it’s in danger of straying beyond the path—that is, if the distance between the normal point and the predicted future position is greater than the path’s radius. This is illustrated in Figure 5.26.
Figure 5.26: A vehicle with a future position on the path and one that’s outside the path.
I can encode that logic with a simple if statement, and use my earlier seek() method to steer the vehicle when necessary.
But what’s the target that the path follower is seeking? Reynolds’s algorithm involves picking a point ahead of the normal on the path. Since I know the vector that defines the path (\vec{B}), I can implement this “point ahead” by adding a vector that points in \vec{B}’s direction to the vector representing the normal point, as in Figure 5.27.
Figure 5.27: The target is 25 pixels (an arbitrary choice) ahead of the normal point along the path.
I’ll arbitrarily say the target should be 25 pixels ahead of the normal.
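Putting the if statement and the target together, here’s one possible sketch in plain JavaScript (the returned target is what would be handed to the `seek()` method from earlier in the chapter; `null` here stands in for “no steering needed”):

```javascript
// Decide whether to steer: only produce a seek target when the
// predicted position has strayed beyond the path's radius.
function pathFollowTarget(futurePos, normalPoint, path) {
  let distance = Math.hypot(
    futurePos.x - normalPoint.x,
    futurePos.y - normalPoint.y
  );
  if (distance > path.radius) {
    // Unit vector along the path, scaled to 25 pixels (an arbitrary
    // "point ahead" distance) and added to the normal point
    let bx = path.end.x - path.start.x;
    let by = path.end.y - path.start.y;
    let bMag = Math.hypot(bx, by);
    return {
      x: normalPoint.x + (bx / bMag) * 25,
      y: normalPoint.y + (by / bMag) * 25,
    };
  }
  return null; // on the path: all is well, keep going
}
```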
Example 5.5: Simple Path Following
Figure 5.28: The elements of the getNormalPoint() function: position, a, and b.
Notice that instead of using all that dot product and scalar projection code to find the normal point, I instead call a function: getNormalPoint(). In cases like this, it’s useful to break out the code that performs a specific task (finding a normal point) into a function that can be called when required. The function takes three vector arguments (see Figure 5.28): the first defines a point p in Cartesian space (the vehicle’s future position), and the second and third define a line segment between two points a and b (the path).
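One possible implementation of `getNormalPoint()`, written here with plain `{x, y}` objects and the dot product in place of the p5.Vector methods:

```javascript
// Find the normal point on the line segment a-b closest to position,
// via scalar projection: project the vector (a -> position) onto the
// unit vector along (a -> b).
function getNormalPoint(position, a, b) {
  let vectorA = { x: position.x - a.x, y: position.y - a.y };
  let vectorB = { x: b.x - a.x, y: b.y - a.y };
  let bMag = Math.hypot(vectorB.x, vectorB.y);
  // Scalar projection of A onto B: (A . B-hat)
  let sp = (vectorA.x * vectorB.x + vectorA.y * vectorB.y) / bMag;
  // Scale B-hat by the scalar projection and add it to a
  return { x: a.x + (vectorB.x / bMag) * sp, y: a.y + (vectorB.y / bMag) * sp };
}
```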
Path Following with Multiple Segments
What if I want a vehicle to follow a more complex path than just a single straight line? Perhaps a curved path that moves in a variety of directions, as in Figure 5.29?
Figure 5.29: A more complex path
Maybe I’m being a little too ambitious. I could investigate algorithms for following a curved path, but I’m much less likely to end up needing a cool compress on my forehead if I stick with straight line segments, like in Figure 5.30. I could always still draw the path as a curve, but it’s best to approximate it behind the scenes with simplified geometric forms for the necessary calculations.
Figure 5.30: The same curved path, but approximated as connected line segments
If I made path following work with one line segment, how do I make it work with a series of connected line segments? The key is in how I find the target point along the path.
To find the target with just one line segment, I had to compute the normal to that line segment. Now that there’s a series of line segments, there's also a series of normal points to be computed—one for each segment (see Figure 5.31). Which one does the vehicle choose? The solution Reynolds proposed is to pick the normal point that is (a) closest and (b) on the path itself.
Figure 5.31: Finding the closest normal point along a series of connected line segments
If you have a point and an infinitely long line, you’ll always have a normal point that touches the line. But if you have a point and a finite line segment, you won’t necessarily find a normal that’s on the line segment itself. If this happens for any of the segments, I can disqualify those normals. Once I’m left with just those normals that are on the path itself (only two in Figure 5.31), I pick the one that’s shortest.
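That selection process can be sketched as follows, assuming a path stored as an array of points (the projection fraction `t` makes it easy to disqualify normals that fall outside a segment):

```javascript
// For each segment, project the future position onto the segment.
// Disqualify projections that land outside the segment (t < 0 or
// t > 1), then keep the closest remaining normal point.
function closestNormal(points, futurePos) {
  let record = Infinity;
  let best = null;
  for (let i = 0; i < points.length - 1; i++) {
    let a = points[i];
    let b = points[i + 1];
    let bx = b.x - a.x;
    let by = b.y - a.y;
    let segLenSq = bx * bx + by * by;
    // Scalar projection expressed as a fraction of the segment's length
    let t = ((futurePos.x - a.x) * bx + (futurePos.y - a.y) * by) / segLenSq;
    if (t < 0 || t > 1) continue; // normal isn't on this segment
    let normal = { x: a.x + bx * t, y: a.y + by * t };
    let d = Math.hypot(futurePos.x - normal.x, futurePos.y - normal.y);
    if (d < record) {
      record = d;
      best = normal;
    }
  }
  return best;
}
```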
Figure 5.32: The desired velocity for “separation” (equivalent to “fleeing”) is a vector that points in the opposite direction of a target.
Of course, this is just the beginning. The real work happens inside the separate() method itself. Reynolds defines the separation behavior as, “Steer to avoid crowding.” In other words, if a given vehicle is too close to you, steer away from that vehicle. Sound familiar? Remember the seek behavior, where a vehicle steers toward a target? Reverse that force and you have the flee behavior, which is what should be applied here to achieve separation (see Figure 5.32).
Figure 5.33: Separation from multiple vehicles is the average of all desired fleeing velocities
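A sketch of that averaging logic in plain JavaScript. This returns only the averaged desired direction; the full `separate()` method would scale the result to maximum speed and subtract the current velocity to get a steering force. The separation threshold in the usage below is an example value:

```javascript
// For every other vehicle closer than desiredSeparation, compute a
// unit vector fleeing from it, then average all of those vectors.
function separationDesired(self, others, desiredSeparation) {
  let sum = { x: 0, y: 0 };
  let count = 0;
  for (let other of others) {
    let dx = self.x - other.x;
    let dy = self.y - other.y;
    let d = Math.hypot(dx, dy);
    if (d > 0 && d < desiredSeparation) {
      // A unit vector pointing away from the neighbor
      sum.x += dx / d;
      sum.y += dy / d;
      count++;
    }
  }
  if (count > 0) {
    sum.x /= count;
    sum.y /= count;
  }
  return sum;
}
```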
Flocking
These rules are illustrated in Figure 5.34.
Figure 5.34: The three rules of flocking: separation, alignment, and cohesion. The example vehicle and desired velocity are bold.
Just as with Example 5.8, where I combined separation and seeking, I’ll want the Boid objects to have a single method that manages all three behaviors. I’ll call it flock().
This is pretty good, but it’s missing one rather crucial detail. One of the key principles behind complex systems like flocking is that the elements (in this case, boids) have short-range relationships. Thinking about ants again, it’s pretty easy to imagine an ant being able to sense its immediate environment, but less so an ant having an awareness of what another ant is doing hundreds of feet away. Indeed, the fact that the ants manifest such complex collective behavior from only these neighboring relationships is what makes them so exciting in the first place.
In the align() method, I’m currently taking the average velocity of all the boids, whereas I should really only be looking at the boids within a certain distance (see Figure 5.35). That distance threshold can be variable, of course. You could design boids that can see only 20 pixels away or boids that can see 100 pixels away.
Figure 5.35: The example vehicle (bold) only interacts with the vehicles within its neighborhood (the circle).
I already applied similar logic when I implemented separation, calculating a force based only on other vehicles within a certain distance. Now I want to do the same for alignment (and eventually, cohesion).
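Here’s a sketch of distance-limited alignment with plain `{x, y}` objects. It returns only the average neighbor velocity; the full `align()` method would then scale that average to maximum speed and subtract the boid’s current velocity:

```javascript
// Average the velocities of only those boids within neighborDistance
// of this boid; everyone else is outside its perceptual range.
function alignmentDesired(self, boids, neighborDistance) {
  let sum = { x: 0, y: 0 };
  let count = 0;
  for (let other of boids) {
    if (other === self) continue;
    let d = Math.hypot(
      self.position.x - other.position.x,
      self.position.y - other.position.y
    );
    if (d < neighborDistance) {
      sum.x += other.velocity.x;
      sum.y += other.velocity.y;
      count++;
    }
  }
  if (count > 0) {
    sum.x /= count;
    sum.y /= count;
  }
  return sum;
}
```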
Exercise 5.15
Can you rewrite the align() method so that boids only see other boids that fall within a direct line of sight?
Exercise 5.16
Exercise 5.17
In his book The Computational Beauty of Nature (MIT Press, 2000), Gary Flake describes a fourth rule for flocking, view: “Move laterally away from any boid that blocks the view.” Have your boids follow this rule.
Spatial Subdivisions
In his 2000 paper “Interaction with Groups of Autonomous Characters,” Craig Reynolds (surprise, surprise) suggests a technique known as bin-lattice spatial subdivision (often called “binning” for short) for optimizing flocking algorithms and other group behaviors. This technique hinges around dividing the simulation space into a grid of smaller cells (or “bins”).
To demonstrate, imagine the canvas is divided into a grid of 10 rows and 10 columns, for a total of 100 cells (10 \times 10 = 100). And let’s say you have 2,000 boids—a number small enough to realistically want, but large enough that the simulation runs too slowly (\text{2,000} \times \text{2,000} = \text{4,000,000} cycles). At any given moment, each boid falls within a cell in the grid, as shown in Figure 5.36. With 2,000 boids and 100 cells, on average there will be approximately 20 boids per cell (\text{2,000} \div 100 = 20).
Figure 5.36: A square canvas full of vehicles, subdivided into a grid of square cells
Now say that in order to apply the flocking rules to a given boid, you only need to look at the other boids that are in that boid’s cell. With an average of 20 boids per cell, each cell would require 400 cycles (20 \times 20 = 400), and with 100 cells, that’s 40,000 cycles total (400 \times 100 = \text{40,000}). That’s a massive savings over 4,000,000!
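The core of the binning step can be sketched in a few lines of plain JavaScript. (A fuller version would also check the adjacent cells, since a neighbor can sit just across a cell boundary.)

```javascript
// Place each boid into a grid cell by integer-dividing its position
// by the cell size. Flocking checks can then look up the boid's cell
// instead of scanning every other boid.
function binBoids(boids, canvasSize, cols) {
  let cellSize = canvasSize / cols;
  let grid = new Map();
  for (let boid of boids) {
    let col = Math.min(cols - 1, Math.floor(boid.x / cellSize));
    let row = Math.min(cols - 1, Math.floor(boid.y / cellSize));
    let key = col + "," + row;
    if (!grid.has(key)) grid.set(key, []);
    grid.get(key).push(boid);
  }
  return grid;
}
```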
“A library implies an act of faith / Which generations still in darkness hid / Sign in their night in witness of the dawn.”
— Victor Hugo
TITLE

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

credit / url
Think about what you’ve accomplished so far in this book. You’ve:
Learned about concepts from the world of physics. (What is a vector? What is a force? What is a wave?)
Why Use a Physics Library?
If you’re working with simple geometric shapes, question #1 isn’t too tough. In fact, perhaps you’ve encountered it before. With two circles, for instance, you know they’re intersecting if the distance between their centers is less than the sum of their radii (see Figure 6.1).
Figure 6.1: Two circles with radii r_1 and r_2 are colliding if the distance between them is less than r_1 + r_2.
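As a sketch, the test from Figure 6.1 is only a couple of lines:

```javascript
// Two circles intersect when the distance between their centers is
// less than the sum of their radii (Figure 6.1).
function circlesColliding(c1, c2) {
  let d = Math.hypot(c2.x - c1.x, c2.y - c1.y);
  return d < c1.r + c2.r;
}
```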
That’s easy enough, but how about calculating the circles’ velocities after the collision? This is where I’m going to stop the discussion. Why, you ask? It’s not that understanding the math behind collisions isn’t important or valuable. (In fact, I’m including additional examples on the website related to collisions without a physics library.) The reason for stopping is that life is short! (Let this also be a reason for you to consider going outside and frolicking for a bit before sitting down to write your next sketch.) You can’t expect to master every detail of physics simulation. And while you might enjoy learning about collision resolution for circles, it’s only going to make you want to work with rectangles next. And then with strangely shaped polygons. And then curved surfaces. And then swinging pendulums colliding with springy springs. And then, and then, and then . . .
Other Physics Libraries
Another notable library is p5play, a project initiated by Paolo Pedercini and currently led by Quinton Ashley that was specifically designed for game development. It simplifies the creation of visual objects—known as “sprites”—and manages their interactions, namely collisions and overlaps. As you may have guessed from the name, p5play is tailored to work seamlessly with p5.js. It uses Box2D under the hood for physics simulation.
Importing the Matter.js Library
In a moment, I’ll turn to working with Matter.js, created by Liam [Last Name] in 2014. But before you can use an external JavaScript library in a p5.js project, you need to import it into your sketch. As you’re already quite aware, I’m using the official p5.js web editor for developing and sharing this book’s code examples. The easiest way to add a library is to edit the index.html file that’s part of every new p5.js sketch created in the editor.
To do that, first expand the file navigation bar on the lefthand side of the editor and select index.html, as shown in Figure 6.X.
Figure 6.X: Accessing a sketch’s index.html file
The file includes a series of <script> tags inside the HTML tags <head> and </head>. This is how JavaScript libraries are referenced in a p5.js sketch. It’s no different than including sketch.js or particle.js in the page’s <body>, only here, instead of keeping and editing a copy of the JavaScript code itself, the library is referenced with a URL of a content delivery network (CDN). This is a type of server for hosting files. For JavaScript libraries that are used across hundreds of thousands of web pages that millions upon millions of users access, CDNs need to be pretty good at their job of serving up these libraries.
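As a rough sketch of what the addition might look like (the CDN path here is a placeholder assumption, not taken from the book; check the Matter.js site for the current release and URL before copying it):

```html
<head>
  <!-- ...the editor's existing p5.js <script> tags... -->
  <!-- Hypothetical CDN reference for Matter.js -->
  <script src="https://cdn.jsdelivr.net/npm/matter-js/build/matter.min.js"></script>
</head>
```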
Engine
// An "alias" for the Matter.js Engine class
let Engine = Matter.Engine;
//{!1} A reference to the Matter physics engine
let engine;
function setup() {
// {!1} Note the use of aliases for all of the Matter.js classes needed for this sketch.
const { Engine, Bodies, Composite, Body, Vector, Render } = Matter;
function setup() {
// Store a reference to the canvas
Example 6.1: Matter.js De
engine,
options: { width, height },
});
  Render.run(render);
// Create a box with custom friction and restitution
let options = {
Matter.js with p5.js
Matter.js keeps a list of all bodies that exist in the world, and as you’ve just seen, it can handle drawing and animating them with the Render and Runner objects. (That list, incidentally, is stored in engine.world.bodies.) What I’d like to show you now, however, is a technique for keeping your own list(s) of Matter.js bodies, so you can draw them with p5. Yes, this approach may add redundancy and sacrifice a small amount of efficiency, but it more than makes up for that with ease of use and customization. With this methodology, you’ll be able to code like you’re accustomed to in p5.js, keeping track of which bodies are which and drawing them appropriately. Consider the file structure of the sketch shown in Figure 6.x.
Figure 6.x: The file structure of a typical p5.js sketch
Structurally, this looks like just another p5.js sketch. There’s a main sketch.js file, as well as box.js. This sort of extra file is where I’d typically declare a class needed for the sketch—in this case, a Box class describing a rectangular body in the world.
Example 6.3: Falling Boxes
Polygons and Groups of Shapes
Now that I’ve demonstrated how easy it is to use a primitive shape like a rectangle or circle with Matter.js, let’s imagine that you want to have a more interesting body, such as the abstract character in Figure 6.2.
Figure 6.2: A “compound” body made up of multiple shapes
There are two strategies for making such complex forms. Beyond the four sides of a rectangle, there’s a generic Bodies.polygon() method for creating any regular polygon (pentagon, hexagon, and so on). Additionally, there’s Bodies.trapezoid() for making a quadrilateral with at least one pair of parallel sides.
Example 6.4: Polygon Shapes
When creating a custom polygon in Matter.js, you must remember two important details. First, the vertices must be specified in clockwise order. For instance, Figure 6.3 shows the five vertices used to create the bodies in Example 6.4. Notice how the example added them to the vertices array in clockwise order from the top-left.
Figure 6.3: Vertices on a custom polygon oriented in clockwise order
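One way to sanity-check vertex order before handing a vertices array to Matter.js is the shoelace formula; this helper is my own addition, not part of the Matter.js API:

```javascript
// Compute the polygon's signed area via the shoelace formula. With
// the canvas's y-axis pointing down, a visually clockwise vertex
// order yields a positive value.
function isClockwise(vertices) {
  let sum = 0;
  for (let i = 0; i < vertices.length; i++) {
    let { x: x1, y: y1 } = vertices[i];
    let { x: x2, y: y2 } = vertices[(i + 1) % vertices.length];
    sum += x1 * y2 - x2 * y1;
  }
  return sum > 0;
}
```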
Second, each shape must be convex, not concave. As shown in Figure 6.4, a concave shape is one where the surface curves inward, whereas convex is the opposite. Every internal angle in a convex shape must be 180 degrees or less. Matter.js can in fact work with concave shapes, but you need to build them out of multiple convex shapes. (More about that in a moment.)
Figure 6.4: A concave shape can be drawn with multiple convex shapes.
Since the shape is built out of custom vertices, you can use p5’s beginShape(), endShape(), and vertex() functions when it comes time to actually draw the body. The CustomShape class could include an array to store the vertices’ pixel positions, relative to (0, 0), for drawing purposes. However, it’s best to query Matter.js for the positions instead. This way there’s no need to use translate() or rotate(), since the Matter.js body stores its vertices as absolute “world” positions.
Exercise 6.3
Using Bodies.fromVertices(), create your own polygon design (remember, it must be convex). Some possibilities are shown below.
Composite.add(engine.world, body);
While this does create a compound body by combining two shapes, the code isn’t quite right. If you run it, you’ll see that both shapes are centered around the same (x, y) position, as in Figure 6.5.
Figure 6.5: A rectangle and a circle with the same (x, y) reference point.
Instead, I need to offset the center of the circle horizontally from the center of the rectangle, as in Figure 6.6.
Figure 6.6: A circle placed relative to a rectangle with a horizontal offset
I’ll use half the width of the rectangle as the offset, so the circle is centered around the edge of the rectangle.
Feeling Attached: Matter.js Const
Distance Constraints
Figure 6.8: A constraint is a connection between two bodies at an anchor point for each body.
Exercise 6.5
Revolute Constraints
Figure 6.9: A revolute constraint is a connection between two bodies at a single anchor point or hinge.
Example 6.7: Spinning Windmill
Exercise 6.6
Create a vehicle that has revolute joints for its wheels. Consider the size and positioning of the wheels. How does changing the stiffness property affect their movement?
A Brief Interlude: Integration Me
This methodology is known as Euler integration (named for the mathematician Leonhard Euler, pronounced “Oiler”), or the Euler method. It’s essentially the simplest form of integration, and it’s very easy to implement in code—just two lines! However, while it's computationally simple, it is by no means the most accurate or stable choice for certain types of simulations.
Why is Euler inaccurate? Think about it this way: when you bounce down a sidewalk on a pogo stick, does the pogo stick sit in one position at time equals 1 second, then disappear and suddenly reappear in a new position at time equals 2 seconds, and do the same thing for 3 seconds, and 4, and 5? No, of course not. The pogo stick moves continuously through time. But what’s happening in a p5.js sketch? A circle is at one position at frame 0, another at frame 1, another at frame 2, and so on. Sure, at 30 frames per second, you see the illusion of motion. But a new position is only computed every N units of time, whereas the real world is perfectly continuous. This results in some inaccuracies, as shown in Figure 6.10.
Figure 6.10: The Euler approximation of a curve
The “real world” is the smooth curve; the Euler simulation is the series of straight line segments. One option to improve on Euler is to use smaller time steps—instead of once per frame, you could recalculate an object’s position 20 times per frame. But this isn’t practical; the sketch might then run too slowly.
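The two-line integrator and the effect of step size can be demonstrated with a case that has an exact answer. Under constant acceleration a, the true position after time T is \frac{1}{2}aT^2; this sketch (velocity updated first, then position, as in the book’s motion code) shows the coarse result overshooting and the finer step landing closer:

```javascript
// Euler integration: two lines per step. Simulates constant
// acceleration from rest so the result can be compared against the
// exact solution x = (1/2) * a * t^2.
function eulerSimulate(acceleration, dt, steps) {
  let position = 0;
  let velocity = 0;
  for (let i = 0; i < steps; i++) {
    velocity += acceleration * dt; // v = v + a * dt
    position += velocity * dt;     // x = x + v * dt
  }
  return position;
}
```

With a = 1 over 10 seconds, the exact answer is 50. One step per second gives 55; one hundred steps of 0.1 seconds give 50.5: smaller time steps track the smooth curve more closely, at the price of more computation.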
Soft Body Simulations
One of the first popular examples of soft body physics was SodaConstructor, a game created in the early 2000s. Players could construct and animate custom two-dimensional creatures built out of masses and springs. Other examples over the years have included games like LocoRoco, World of Goo, and more recently, JellyCar.
The basic building blocks of soft body simulations are particles connected by springs—just like the pair of particles in the last example. Figure 6.11 shows how to configure a network of particle-spring connections to make various forms.
Figure 6.11: Soft body simulation designs
As the figure shows, a string can be simulated by connecting a line of particles with springs; a blanket can be simulated by connecting a grid of particles with springs; and a cute, cuddly, squishy cartoon character can be simulated with a custom layout of particles connected with springs. It’s not much of a leap from one to another.
A String
let particles = [];
Now, let’s say I want to have 20 particles, all spaced 10 pixels apart, as in Figure 6.12.
I can loop from i equals 0 all the way up to total, creating new particles and setting each one’s y position to i * 10. This way the first particle is at (0, 0), the second at (0, 10), the third at (0, 20), and so on.
Even though it’s redundant, I’m adding the particles to both the toxiclibs.js physics world and to the particles array. This will help me manage the sketch (especially for the case where there might be more than one string of particles).
Now for the fun part: it’s time to connect all the particles. Particle index 0 will be connected to particle 1, particle 1 to particle 2, 2 to 3, 3 to 4, and so on (see Figure 6.13).
Figure 6.13: Each particle is connected to the next particle in the array.
In other words, particle i needs to be connected to particle i+1 (except for when i represents the last element of the array).
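Both loops can be sketched together in plain JavaScript; the particle and spring objects here are simplified stand-ins for the toxiclibs.js versions used in the chapter:

```javascript
// Build a "string": particles spaced evenly along the y-axis, then
// each particle i connected by a spring to particle i + 1.
function buildString(total, spacing) {
  let particles = [];
  for (let i = 0; i < total; i++) {
    particles.push({ x: 0, y: i * spacing });
  }
  let springs = [];
  // Stop before the last element, which has no "next" neighbor
  for (let i = 0; i < particles.length - 1; i++) {
    springs.push([particles[i], particles[i + 1]]);
  }
  return { particles, springs };
}
```

Note that 20 particles yield only 19 springs, since the last particle has no neighbor after it.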
Exercise 6.10
A Soft Body Character
Now that I’ve built a simple connected system—a single string of particles—I’ll expand on this idea to create a squishy, cute friend in p5.js, otherwise known as a soft body character. The first step is to design a “skeleton” of connected particles. I’ll begin with a very simple design with only six vertices, as shown in Figure 6.XX. Each vertex (drawn as a dot) represents a Particle object, and each connection (drawn as a line) represents a Spring object.
Figure 6.X: A skeleton for a soft body character. The vertices are numbered according to their positions in an array.
Creating the particles is the easy part; it’s exactly the same as before! I’d like to make one change, though. Rather than having the setup() function add the particles and springs to the physics world, I’ll instead incorporate this responsibility into the Particle constructor itself.
The beauty of this system is that you can easily expand it to create your own design by adding more particles and springs! However, there’s one major issue here: I’ve only made connections around the perimeter of the character. If I were to apply a force (like gravity) to the body, it would instantly collapse onto itself. This is where additional “internal” springs come into play, as shown in Figure 6.XX. They keep the character’s structure stable while still allowing it to move and squish in a realistic manner.
Figure 6.X: Internal springs keep the structure from collapsing. This is just one possible design. Try others!
The final example incorporates the additional springs from Figure 6.X, a gravity force, and mouse interaction.
A Force-Directed Graph
Have you ever had the following thought? “I have a whole bunch of stuff I want to draw, and I want all that stuff to be spaced out evenly in a nice, neat, organized manner. Otherwise, I’ll have trouble sleeping at night.”
This isn’t an uncommon problem in computational design. One solution is a force-directed graph, a visualization of elements—let’s call them “nodes”—in which the positions of those nodes aren’t manually assigned. Instead, the nodes arrange themselves according to a set of forces. While any forces can be used, a classic method involves spring forces: each node is connected to every other node with a spring, such that when the springs reach equilibrium, the nodes are evenly spaced (see Figure 6.14). Sounds like a job for toxiclibs.js!
Figure 6.14: An example of a “force-directed graph”: clusters of particles connected by spring forces.
To create a force-directed graph, I’ll first need a class to describe an individual node in the system. Because the term “node” is associated with the JavaScript framework Node.js, I’ll stick with the term “particle” to avoid any confusion, and I’ll continue using my Particle class from the earlier soft body examples.
Let’s assume the Cluster class also has a show() method to draw all the particles in the cluster, and that I’ll create a new Cluster object in setup() and render it in draw(). If I ran the sketch as is, nothing would happen. Why? Because I have yet to implement the whole force-directed graph part! I need to connect every single node to every other node with a spring. This is somewhat similar to creating a soft body character, but rather than hand-craft a skeleton, I want to write an algorithm to make all the connections automatically.
What exactly do I mean by that? Say there are four Particle objects: 0, 1, 2 and 3. Here are the connections:
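A nested loop handles this automatically; starting the inner index at i + 1 avoids self-connections and duplicates. For four particles, that produces the pairs 0-1, 0-2, 0-3, 1-2, 1-3, and 2-3:

```javascript
// Connect every particle to every other particle exactly once.
function allPairs(particles) {
  let connections = [];
  for (let i = 0; i < particles.length; i++) {
    for (let j = i + 1; j < particles.length; j++) {
      connections.push([particles[i], particles[j]]);
    }
  }
  return connections;
}
```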
“To play life you must have a fairly large checkerboard and a plentiful supply of flat counters of two colors. It is possible to work with pencil and graph paper but it is much easier, particularly for beginners, to use counters and a board.”

— Martin Gardner, Scientific American (October 1970)

TITLE

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

credit / url
In Chapter 5, I defined a complex system as a network of elements with short-range relationships, operating in parallel, that exhibit emergent behavior. I illustrated this definition by creating a flocking simulation and demonstrated how a complex system adds up to more than the sum of its parts. In this chapter, I’m going to turn to developing other complex systems known as cellular automata.
In some respects, this shift may seem like a step backward. No longer will the individual elements of my systems be members of a physics world, driven by forces and vectors to move around the canvas. Instead, I’ll build systems out of the simplest digital element possible: a single bit. This bit is called a cell, and its value (0 or 1) is called its state. Working with such simple elements will help illustrate the details behind how complex systems operate, and it will offer an opportunity to elaborate on some programming techniques that apply to code-based projects. Building cellular automata will also set the stage for the rest of the book, where I’ll increasingly focus on systems and algorithms rather than vectors and motion—albeit systems and algorithms that I can and will apply to moving bodies.
What Is a Cellular Automaton?
Figure 7.1 illustrates these various CA characteristics.
Figure 7.1: A 2D grid of cells, each with a state of “on” or “off.” A neighborhood is a subsection of the large grid, usually consisting of all the cells adjacent to a given cell (circled).
The idea that an object’s state can vary over time is an important development. So far in this book, the objects (movers, particles, vehicles, boids, bodies) have generally existed in only one state. They might have moved with sophisticated behaviors and physics, but ultimately they remained the same type of object over the course of their digital lifetime. I’ve alluded to the possibility that these entities can change over time (for example, the weights of steering “desires” can vary), but I haven’t fully put this into practice. Now, with cellular automata, you’ll see how an object’s state can change based on a system of rules.
Elementary Cellular Automata
What’s the simplest cellular automaton you can imagine? For Wolfram, an elementary CA has three key elements.
1) Grid. The simplest grid would be one-dimensional: a line of cells (Figure 7.2).
Figure 7.2: A one-dimensional line of cells
2) States. The simplest set of states (beyond having only one state) would be two states: 0 or 1 (Figure 7.3). Perhaps the initial states are set randomly.
Figure 7.3: A one-dimensional line of cells marked with states 0 or 1. What familiar programming data structure could represent this sequence?
3) Neighborhood. The simplest neighborhood in one dimension for any given cell would be the cell itself and its two adjacent neighbors: one to the left and one to the right (Figure 7.4). I’ll have to decide what I want to do with the cells on the left and right edges, since those only have one neighbor each, but this is something I can sort out later.
Figure 7.4: A neighborhood in one dimension is three cells.
I have a line of cells, each with an initial state, and each with two neighbors. The exciting thing is, even with this simplest CA imaginable, the properties of complex systems can emerge. But I haven’t yet discussed perhaps the most important detail of how cellular automata work: change over time.
I’m not really talking about real-world time here, but rather about the CA developing across a series of discrete time steps, which could also be called generations. In the case of a CA in p5.js, time will likely be tied to the frame count of the animation. The question, as depicted in Figure 7.5, is this: given the states of the cells at time equals 0 (or generation 0), how do I compute the states for all cells at generation 1? And then how do I get from generation 1 to generation 2? And so on and so forth.
Figure 7.5: The states for generation 1 are calculated using the states of the cells from generation 0.
Let’s say there’s an individual cell in the CA called \text{cell}. The formula for calculating the cell’s state at any given time t (\text{cell}_t) is as follows:
\text{cell}_t = f(\text{cell neighborhood}_{t-1})
In other words, a cell’s new state is a function of all the states in the cell’s neighborhood at the previous generation (time t-1). A new state value is calculated by looking at the previous generation’s neighbor states (Figure 7.6).
Figure 7.6: The state of a cell at generation 1 is a function of the previous generation’s neighborhood.
There are many ways to compute a cell’s state from its neighbors’ states. Consider blurring an image. (Guess what? Image processing works with CA-like rules!) A pixel’s new state (its color) is the average of its neighbors’ colors. Similarly, a cell’s new state could be the sum of all of its neighbors’ states. However, in Wolfram’s elementary CA, the process takes a different approach: instead of mathematical operations, new states are determined by predefined rules that account for every possible configuration of a cell and its neighbors. These rules are known collectively as a ruleset.
This approach might seem ridiculous at first—wouldn’t there be way too many possibilities for it to be practical? Well, let’s give it a try. A neighborhood consists of three cells, each with a state of 0 or 1. How many possible ways can the states in a neighborhood be configured? A quick way to figure this out is to think of each neighborhood configuration as a binary number. Binary numbers use “base 2,” meaning they’re represented with only two possible digits (0 and 1). In this case, each neighborhood configuration corresponds to a 3-bit number, and how many values can you represent with 3 bits? Eight, from 0 (000) up to 7 (111). Figure 7.7 shows how.
Figure 7.7: Counting with 3 bits in binary, or the eight possible configurations of a three-cell neighborhood
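You can verify this count with a few lines of JavaScript, enumerating every 3-bit value:

```javascript
// List every 3-bit binary number: the eight possible configurations
// of a three-cell neighborhood.
const configs = [];
for (let i = 0; i < 8; i++) {
  configs.push(i.toString(2).padStart(3, "0"));
}
console.log(configs);
// ["000", "001", "010", "011", "100", "101", "110", "111"]
```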
Once all the possible neighborhood configurations are defined, an outcome (new state value: 0 or 1) is specified for each configuration. In Wolfram's original notation and other common references, these configurations are written in descending order. Figure 7.8 follows this convention, starting with 111 and counting down to 000.
Figure 7.8: A ruleset shows the outcome for each possible configuration of three cells.
Keep in mind that unlike the sum or averaging methods, the rulesets in elementary CA don’t follow any arithmetic logic—they’re just arbitrary mappings of inputs to outputs. The input is the current configuration of the neighborhood (one of eight possibilities), and the output is the next state of the middle cell in the neighborhood (0 or 1—it’s up to you to define the rule).
Once you have a ruleset, you can set the cellular automaton in motion. The standard Wolfram model is to start generation 0 with all cells having a state of 0 except for the middle cell, which should have a state of 1. You can do this with any size (length) grid, but for clarity, I’ll use a one-dimensional CA of nine cells so that the middle is easy to pick out.
Figure 7.9: Generation 0 in a Wolfram CA, with the center cell set to 1
Based on the ruleset in Figure 7.8, how do the cells change from generation 0 to generation 1? Figure 7.10 shows how the center cell, with a neighborhood of 010, switches from a 1 to a 0. Try applying the ruleset to the remaining cells to fill in the rest of the generation 1 states.
Figure 7.10: Determining a state for generation 1 using the CA rule set
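Applying the ruleset to every cell can also be sketched in code. In this minimal version, the function name and the choice to index the ruleset in ascending order (000 up to 111) are my own, and the two edge cells are simply left unchanged; the particular array shown encodes the same Sierpiński-producing ruleset as the figures:

```javascript
// Ruleset indexed by the neighborhood read as a 3-bit number:
// index 0 answers 000, index 7 answers 111. This array encodes rule 90.
const ruleset = [0, 1, 0, 1, 1, 0, 1, 0];

function step(cells) {
  const next = cells.slice(); // edge cells keep their previous state
  for (let i = 1; i < cells.length - 1; i++) {
    const index = cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1];
    next[i] = ruleset[index];
  }
  return next;
}

// Generation 0: nine cells, all 0 except the center.
const gen0 = [0, 0, 0, 0, 1, 0, 0, 0, 0];
console.log(step(gen0)); // [0, 0, 0, 1, 0, 1, 0, 0, 0]
```

Note how the center cell (neighborhood 010) switches from 1 to 0, exactly as in Figure 7.10, while its two off-center neighbors switch on.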
Now for a slight change: instead of representing the cells’ states with 0s and 1s, I’ll indicate them with visual cues—white for 0 and black for 1 (see Figure 7.11). Although this might seem counterintuitive, as 0 usually signifies black in computer graphics, I’m using this convention because the examples in this book have a white background, so “turning on” a cell corresponds to switching its color from white to black.
Figure 7.11: A white cell indicates 0, and a black cell indicates 1.
With this switch from numerical representations into visual forms, the fascinating dynamics and patterns of cellular automata will come into view! To see them even more clearly, instead of drawing one generation at a time, I’ll also start stacking the generations, with each new generation appearing below the previous one, as shown in Figure 7.12.
Figure 7.12: Translating a grid of 0s and 1s to white and black squares
The low-resolution shape that emerges in Figure 7.12 is the Sierpiński triangle. Named after the Polish mathematician Wacław Sierpiński, it’s a famous example of a fractal. I’ll examine fractals more closely in the next chapter, but briefly, they’re patterns where the same shapes repeat themselves at different scales. To give you a better sense of this, Figure 7.13 shows the CA over several more generations, and with a wider grid size.
Figure 7.13: Wolfram elementary CA, rule 90
And Figure 7.14 shows the CA again, this time with cells that are just a single pixel wide so the resolution is much higher.
Figure 7.14: Wolfram elementary CA, rule 90, at higher resolution
Take a moment to let the enormity of what you’ve just seen sink in. Using an incredibly simple system of 0s and 1s, with little neighborhoods of three cells, I was able to generate a shape as sophisticated and detailed as the Sierpiński triangle. This is the beauty of complex systems.
Defining Rulesets
Take a look back at Figure 7.7 and notice again how there are eight possible neighborhood configurations, from 000 to 111. These are a ruleset’s inputs, and they remain constant from ruleset to ruleset. It’s only the outputs that vary from one ruleset to another—the individual 0 or 1 paired with each neighborhood configuration. Figure 7.8 represented a ruleset entirely with 0s and 1s. Now Figure 7.15 shows the same ruleset visualized with white and black squares.
Figure 7.15: Representing the same ruleset (from Figure 7.8) with white and black squares
Since the eight possible inputs are the same no matter what, a potential shorthand for indicating a ruleset is to specify just the outputs, writing them as a sequence of eight 0s or 1s—in other words, an 8-bit binary number. For example, the ruleset in Figure 7.15 could be written as 01011010. The 0 on the right corresponds to input configuration 000, the 1 next to it corresponds to input 001, and so on. On Wolfram’s website, CA rules are illustrated using a combination of this binary shorthand and the black-and-white square representation, yielding depictions like Figure 7.16.
Figure 7.16: How the Wolfram website represents a ruleset
I’ve said that each ruleset can essentially be boiled down to an 8-bit number, and how many combinations of eight 0s and 1s are there? Exactly 2^8, or 256. You might remember this from when you first learned about RGB color in p5.js. When you write background(r, g, b), each color component (red, green, and blue) is represented by an 8-bit number ranging from 0 to 255 in decimal, or 00000000 to 11111111 in binary.
The ruleset in Figure 7.16 could be called “Rule 01011010,” but Wolfram instead refers to it as “Rule 90.” Where does 90 come from? To make ruleset naming even more concise, Wolfram uses decimal (or “base 10”) representations rather than binary. To name a rule, you convert its 8-bit binary number to its decimal counterpart. The binary number 01011010 translates to the decimal number 90, and therefore it’s named “Rule 90.”
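This binary-to-decimal naming is easy to double-check in JavaScript:

```javascript
// A ruleset's 8-bit binary shorthand converts directly to its Wolfram
// rule number, and a rule number expands back to its binary shorthand.
console.log(parseInt("01011010", 2)); // 90, hence "Rule 90"
console.log((90).toString(2).padStart(8, "0")); // "01011010"
```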
Since there are 256 possible combinations of eight 0s and 1s, there are also 256 unique rulesets. Let’s check out another one. How about rule 11011110, more commonly known as rule 222? Figure 7.17 shows how it looks.
Figure 7.17: Wolfram elementary CA, rule 222
Figure 7.18: A textile cone snail (Conus textile), Cod Hole, Great Barrier Reef, Australia, 7 August 2005. Photographer: Richard Ling richard@research.canon.com.au
This array corresponds to the row of cells shown in Figure 7.19.
Figure 7.19: One generation of a 1D cellular automaton
To show that array, I check if each element is a 0 or a 1, choose a fill color accordingly, and draw a rectangle.
Programming an Elementary CA
function rules (a, b, c) { return _______ }
There are many ways to write this function, but I’d like to start with a long-winded one that will hopefully provide a clear illustration of what's happening. How shall I store the ruleset? Remember that a ruleset is a series of 8 bits (0 or 1) that define the outcome for every possible neighborhood configuration. If you need a refresher, Figure 7.20 shows the Wolfram notation for the Sierpiński triangle ruleset, along with the corresponding 0s and 1s listed in order. This should give you a hint as to the data structure I have in mind!
Figure 7.20: A visual representation of a Wolfram ruleset with numeric encoding
I can store this ruleset in an array.
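One way that array and the long-winded rules() function might look is sketched below. Note the indexing choice: I’m listing the outputs in Wolfram’s descending order, so ruleset[0] answers configuration 111 and ruleset[7] answers 000, and the exact listing may differ from the book’s examples:

```javascript
// The ruleset's outputs in Wolfram's descending order:
// ruleset[0] -> 111, ruleset[1] -> 110, ..., ruleset[7] -> 000.
let ruleset = [0, 1, 0, 1, 1, 0, 1, 0]; // rule 90

// Long-winded version: spell out every neighborhood configuration.
function rules(a, b, c) {
  if (a == 1 && b == 1 && c == 1) return ruleset[0];
  else if (a == 1 && b == 1 && c == 0) return ruleset[1];
  else if (a == 1 && b == 0 && c == 1) return ruleset[2];
  else if (a == 1 && b == 0 && c == 0) return ruleset[3];
  else if (a == 0 && b == 1 && c == 1) return ruleset[4];
  else if (a == 0 && b == 1 && c == 0) return ruleset[5];
  else if (a == 0 && b == 0 && c == 1) return ruleset[6];
  else return ruleset[7];
}

console.log(rules(0, 1, 0)); // 0: a neighborhood of 010 turns the cell off
```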
Drawing an Elementary CA
The standard technique for drawing an elementary CA is to stack the generations one on top of the other, and to draw each cell as a square that’s black (for state 1) or white (for state 0), as in Figure 7.21. Before implementing this particular visualization, however, I’d like to point out two things.
Figure 7.21: Ruleset 90 visualized as a stack of generations
First, this visual interpretation of the data is completely literal. It’s useful for demonstrating the algorithms and results of Wolfram’s elementary CA, but it shouldn’t necessarily drive your own personal work. It’s rather unlikely that you’re building a project that needs precisely this algorithm with this visual style. So while learning to draw a CA in this way will help you understand and implement CA systems, this skill should exist only as a foundation.
Wolfram Classification
Now that you have a sketch for visualizing an elementary CA, you can supply it whatever ruleset you want and see the results. What kind of outcomes can you expect? As I noted earlier, the vast majority of elementary CA rulesets produce visually uninspiring results, while some result in wondrously complex patterns like those found in nature. Wolfram himself has divided up the range of outcomes into four classes.
Class 1: Uniformity. Class 1 CAs end up, after some number of generations, with every cell constant. This isn’t terribly exciting to watch. Rule 222 (see Figure 7.22) is a class 1 CA; if you run it for enough generations, every cell will eventually become and remain black.
Figure 7.22: Rule 222
Class 2: Repetition. Like class 1 CAs, class 2 CAs remain stable, but the cell states aren’t constant. Instead, they oscillate in some repeating pattern of 0s and 1s. In rule 190 (Figure 7.23), each cell follows the sequence 11101110111011101110.
Figure 7.23: Rule 190
Class 3: Random. Class 3 CAs appear random and have no easily discernible pattern. In fact, rule 30 (Figure 7.24) is used as a random number generator in Wolfram’s Mathematica software. Again, this is a moment where you can feel amazed that such a simple system with simple rules can descend into a chaotic and random pattern.
Figure 7.24: Rule 30
Class 4: Complexity. Class 4 CAs can be thought of as a mix between class 2 and class 3. You can find repetitive, oscillating patterns inside the CA, but where and when these patterns appear is unpredictable and seemingly random. Class 4 CAs exhibit the properties of complex systems described earlier in this chapter and in Chapter 5. If a class 3 CA wowed you, then a class 4 like Rule 110 (Figure 7.25) should really blow your mind!
Figure 7.25: Rule 110
The Rules of the Game
Let’s look at how the Game of Life works. It won’t take up too much time or space, since I can build on everything from Wolfram’s elementary CA. First, instead of a line of cells, there’s now a two-dimensional matrix of cells. As with the elementary CA, the possible states are 0 or 1. In this case, however, since the system is all about “life,” 0 means “dead” and 1 means “alive.”
Since the Game of Life is two-dimensional, each cell’s neighborhood has now expanded. If a neighbor is an adjacent cell, a neighborhood is now nine cells instead of three, as shown in Figure 7.26.
Figure 7.26: A two-dimensional CA showing the neighborhood of 9 cells.
With three cells, a 3-bit number had eight possible configurations. With nine cells, there are 9 bits, or 512 possible neighborhoods. In most cases, it would be impractical to define an outcome for every single possibility. The Game of Life gets around this problem by defining a set of rules according to general characteristics of the neighborhood: is the neighborhood overpopulated with life, surrounded by death, or just right? Here are the rules of life:
Figure 7.27 shows a few examples of these rules. Focus on what happens to the center cell.
Figure 7.27: Example scenarios for “death” and “birth” in the Game of Life
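The conventional statement of those rules (death by loneliness or overpopulation, birth with exactly three live neighbors) reduces to a tiny function. The function name here is my own:

```javascript
// Compute a cell's next state from its current state and its count of
// live neighbors: birth with exactly 3, survival with 2 or 3, death otherwise.
function nextState(current, liveNeighbors) {
  if (current === 1) {
    return liveNeighbors === 2 || liveNeighbors === 3 ? 1 : 0;
  }
  return liveNeighbors === 3 ? 1 : 0;
}

console.log(nextState(1, 1)); // 0: a live cell with one neighbor dies
console.log(nextState(0, 3)); // 1: a dead cell with three neighbors is born
```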
With the elementary CA, I visualized many generations at once, stacked as rows in a 2D grid. With the Game of Life, however, the CA itself is in two dimensions. I could try to create an elaborate 3D visualization of the results and stack all the generations in a cube structure (and in fact, you might want to try this as an exercise), but a more typical way to visualize the Game of Life is to treat each generation as a single frame in an animation. This way, instead of viewing all the generations at once, you see them one at a time, and the result resembles rapidly developing bacteria in a Petri dish.
One of the exciting aspects of the Game of Life is that there are known initial patterns that yield intriguing results. For example, the patterns shown in Figure 7.28 remain static and never change.
Figure 7.28: Initial configurations of cells that remain stable
The patterns in Figure 7.29 oscillate back and forth between two states.
Figure 7.29: Initial configurations of cells that oscillate between two states
And the patterns in Figure 7.30 appear to move about the grid from generation to generation. The cells themselves don’t actually move; the illusion of motion results from adjacent cells turning on and off.
Figure 7.30: Initial configurations of cells that appear to move
If you’re interested in these patterns, there are several good “out of the box” Game of Life demonstrations online that allow you to configure the CA’s initial state and watch it run at varying speeds. Two examples are:
The Implementation
}
Figure 7.31: The index values for the neighborhood of cells.
“Pathological monsters! cried the terrified mathematician
Every one of them a splinter in my eye
I hate the Peano Space and the Koch Curve
I fear the Cantor Ternary Set
The Sierpinski Gasket makes me wanna cry
And a million miles away a butterfly flapped its wings
On a cold November day a man named Benoit Mandelbrot was born”
— Jonathan Coulton, lyrics from “Mandelbrot Set”
TITLE
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
credit / url
Once upon a time, I took a course in high school called “Geometry.” Perhaps you took such a course too, where you learned about classic shapes in one, two, and maybe even three dimensions. What’s the circumference of a circle? The area of a rectangle? The distance between a point and a line? This sort of geometry is generally referred to as Euclidean geometry, after the Greek mathematician Euclid, and come to think of it, it’s a subject I’ve been covering all along in this book. Whenever I used vectors to describe the motion of bodies in Cartesian space, that was Euclidean geometry.
For us nature coders, however, I would suggest the question: can our world really be described with Euclidean geometry? The laptop screen I’m staring at right now sure looks like a rectangle. And the plum I ate this morning was spherical. But what if I were to look further, and consider the trees that line the street, the leaves that hang off those trees, the lightning from last night’s thunderstorm, the cauliflower I ate for dinner, the blood vessels in my body, and the mountains and coastlines that define a landscape? As Figure 8.1 shows, most of the stuff you find in nature looks quite different from the idealized geometrical forms of Euclidean geometry.
Figure 8.1: Comparing idealized Euclidean geometry to shapes found in nature
If you want to start building computational designs with patterns that move beyond basic shapes like circle(), square(), and line(), it’s time to learn about a different kind of geometry, the “geometry of nature”: fractals. This chapter will explore the concepts behind fractals and programming techniques for simulating fractal geometry.
What Is a Fractal?
The term fractal (from the Latin fractus, meaning “broken”) was coined by the mathematician Benoit Mandelbrot in 1975. In his seminal work The Fractal Geometry of Nature, he defines a fractal as “a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole.”
I’ll illustrate this definition with two simple examples. First, think about the branching structure of a tree, as shown in Figure 8.3. (Later in the chapter, I’ll show you how to write the code to draw this tree.)
Figure 8.3: A branching fractal tree
Notice how the tree has a single trunk with branches connected at its end. Each one of those branches has branches at its end, and those branches have branches, and so on and so forth. And what if you were to pluck one branch from the tree and examine it more closely on its own, as in Figure 8.4?
Figure 8.4: Zooming in on one branch of the fractal tree
The zoomed-in branch is an exact replica of the whole, just as Mandelbrot describes. Not all fractals have to be perfectly self-similar like this tree, however. For example, take a look at the two illustrations of the coastline of Greenland, or Kalaallit Nunaat in the indigenous Kalaallisut language, in Figure 8.5.
Figure 8.5: Two coastlines of Greenland
The absence of a scale in these illustrations is no accident. Am I showing the entire coastline or just a small portion of it? There’s no way for you to know without a scale reference, because coastlines, as fractals, look essentially the same at any scale. (Incidentally, coastline A shows [TBD], and B zooms into a tiny part of coastline A, showing [TBD]. I’ve added the scales in Figure 8.6.)
Figure 8.6: Two coastlines of Greenland, with scale
A coastline is an example of a stochastic fractal, meaning it’s built out of probabilities and randomness. Unlike the deterministic (or predictable) tree-branching structure, a stochastic fractal is statistically self-similar. This means that even if a pattern isn’t precisely the same at every size, the general quality of the shape and its overall feel stay the same no matter how much you zoom in or out. The examples in this chapter will explore both deterministic and stochastic techniques for generating fractal patterns.
The Mandelbrot Set
One of the most well-known and recognizable fractal patterns is named for Benoit Mandelbrot himself. Generating the Mandelbrot set involves testing the properties of complex numbers after they’re passed through an iterative function. Do they tend to infinity? Do they stay bounded? While a fascinating mathematical discussion, this “escape-time” algorithm is a less practical method for generating fractals than the recursive techniques we’ll examine in this chapter. However, code for generating the Mandelbrot set is included in the online examples.
Recursion
Beyond self-similarity, another fundamental component of fractal geometry is recursion: the process of repeatedly applying a rule, known as a production rule, where the outcome of one iteration becomes the starting point for the next. Recursion has been in the picture since the first appearance of fractals in modern mathematics, when German mathematician Georg Cantor developed some simple rules for generating an infinite set of numbers in 1883. Cantor’s production rules are illustrated in Figure 8.7.
Figure 8.7: Recursive instructions for generating the Cantor set fractal
There is a feedback loop at work in Cantor’s rules. Take a single line and break it into two. Then return to those two lines and apply the same rule, breaking each line into two. Now you have four. Return to those four lines and apply the rule. Now you have eight. And so on. That’s how recursion works: the output of a process is fed back into the process itself, over and over again. In this case, the result is known as the Cantor set. Like the fractal tree from Figure 8.3, it’s an example of a deterministic, entirely predictable fractal, where each part is a precise replica of the whole.
Implementing Recursive Functions
The factorial() function calls itself within its own definition. It may look a bit odd at first, but it works, as long as there’s a stopping condition (in this case, n == 0) so the function doesn’t get stuck calling itself forever. (This implementation, however, assumes you’re passing a non-negative number as an argument and should probably include additional error checking.)
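For reference, a minimal version of that recursive factorial might look like this (assuming, as noted, a non-negative integer argument):

```javascript
// Recursive factorial: the function calls itself until the stopping
// condition n === 0 is reached.
function factorial(n) {
  if (n === 0) {
    return 1; // stopping condition
  }
  return n * factorial(n - 1);
}

console.log(factorial(4)); // 24
```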
Figure 8.8 illustrates the steps that unfold when factorial(4) is called.
Figure 8.8: Visualizing the process of recursive factorial. (This diagram uses an exit condition of 1 instead of 0 to save a step!)
The function keeps calling itself, descending deeper and deeper down a rabbit hole of nested function calls until it reaches the stopping condition. Then it works its way up out of the hole, returning values until it arrives back home at the original call of factorial(4).
Drawing the Cantor Set with Recursion
cantor(10, 20, width - 20);
You’d see something like Figure 8.9.
Figure 8.9: The visual result of a single call to cantor() is a single line.
Figure 8.10: The next iteration of lines in the Cantor set are one-third the length of the previous line.
Figure 8.11 shows the result.
Figure 8.11: Two generations of lines drawn with the Cantor set rules.
This works over two generations, but continuing to call line() manually will get unwieldy quite quickly. For the succeeding generations, I'd need four, then eight, then sixteen calls to line(). A for loop is the usual way around such a problem, but give that a try and you’ll see that working out the math for each iteration quickly proves inordinately complicated. Don’t despair, however: here’s where recursion comes to the rescue!
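To see why recursion fits so naturally, here’s a sketch of a recursive cantor() in the spirit of the function described in this section. My variant records each line segment in an array rather than calling p5.js’s line(), so the doubling per generation is easy to count:

```javascript
// Recursive Cantor set: record a line, then recurse on its left and right
// thirds, moving down 20 pixels per generation. Stop below 1 pixel.
const segments = [];
function cantor(x, y, len) {
  if (len >= 1) {
    segments.push([x, y, x + len, y]); // stand-in for line(x, y, x + len, y)
    y += 20;
    cantor(x, y, len / 3); // left third
    cantor(x + (2 * len) / 3, y, len / 3); // right third
  }
}

cantor(0, 0, 81);
// Generations of length 81, 27, 9, 3, 1: that's 1 + 2 + 4 + 8 + 16 segments.
console.log(segments.length); // 31
```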
Exercise 8.1
The Koch Curve
I’ll now turn to another famous fractal pattern, the Koch curve, discovered in 1904 by Swedish mathematician Helge von Koch. Figure 8.12 outlines the production rules for drawing this fractal. Notice that the rules start the same way as the Cantor set, with a single line that’s then divided into three equal parts.
Figure 8.12: The rules for drawing the Koch curve
Figure 8.13 shows how the fractal develops over several repetitions of these steps.
Figure 8.13: The evolution of the Koch curve
I could proceed in the same manner as I did with the Cantor set, and write a recursive function that iteratively applies the Koch rules over and over. Instead, I’m going to tackle this problem in a different manner by treating each segment of the Koch curve as an individual object. This will open up some exciting design possibilities. For example, if each segment is an object, it could move independently from its original position and participate in a physics simulation. In addition, the visual appearance of each segment could vary if the object includes customizable properties for color, line thickness, and so on.
The “Monster” Curve
This is my foundation for the sketch. I have a KochLine class that keeps track of a line from point start to point end, and I have an array that keeps track of all the KochLine objects. Given these elements, how and where should I apply the Koch rules and the principles of recursion?
Remember the Game of Life cellular automaton from Chapter 7? In that simulation, I always kept track of two generations: “current” and “next.” When I was finished calculating the next generation, “next” became “current,” and I moved on to computing the new next generation. I’m going to apply a similar technique here. I have a segments array listing the current set of line segments (at the start of the program, there’s only one). Now I need a second array (I’ll call it next) where I can place all the new KochLine objects generated from applying the Koch rules. For every single KochLine in the current array, four new line segments will be added to next. When I’m done, the next array becomes the new segments (see Figure 8.14).
Figure 8.14: The next generation of the fractal is calculated from the current generation. Then next becomes the new current in the transition from one generation to another.
Here’s how the code looks:
By calling generate() over and over, the Koch curve rules will be recursively applied to the existing set of KochLine segments. But of course, I’ve skipped over the real “work” of the function: how do I actually break one line segment into four as described by the rules? I need a way to calculate the start and end points of each line.
Because the KochLine class uses p5.Vector objects to store the start and end points, this is a wonderful opportunity to practice all that vector math from Chapter 1, along with some trigonometry from Chapter 3. First, I should establish the scope of the problem: how many points do I need to compute for each KochLine object? Figure 8.15 shows the answer.
Figure 8.15: Two points become five points.
As the figure illustrates, I need to turn the two points (\text{start}, \text{end}) into five (a, b, c, d, e) to generate the four new line segments (a→b, b→c, c→d, d→e).
How about points b and d? Point b is one-third of the way along the line segment, and d is two-thirds of the way along. As Figure 8.16 shows, if I create a vector \vec{v} that points from the original \text{start} to \text{end}, I can find the new points by scaling its magnitude to one-third for the new b and two-thirds for the new d.
Figure 8.16: The original line expressed as a vector \vec{v} can be divided by 3 to find the positions of the points for the next generation.
Here’s how that looks in code:
}
Figure 8.17: The vector \vec{v} is rotated by 60° to find the third point.
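Putting Figures 8.15 through 8.17 together, here’s a plain-JavaScript sketch of the five-point computation. I’m using bare {x, y} objects and explicit trigonometry instead of p5.Vector, so the helper name and structure are my own; rotating the one-third vector by -60 degrees lifts point c upward on p5.js’s inverted y-axis:

```javascript
// Turn two points (start, end) into the five Koch points a, b, c, d, e.
function kochPoints(start, end) {
  const vx = (end.x - start.x) / 3; // one-third of the start-to-end vector
  const vy = (end.y - start.y) / 3;
  const a = { x: start.x, y: start.y };
  const b = { x: start.x + vx, y: start.y + vy }; // one-third along
  const d = { x: start.x + 2 * vx, y: start.y + 2 * vy }; // two-thirds along
  const theta = -Math.PI / 3; // rotate the one-third vector by -60 degrees
  const c = {
    x: b.x + vx * Math.cos(theta) - vy * Math.sin(theta),
    y: b.y + vx * Math.sin(theta) + vy * Math.cos(theta),
  };
  const e = { x: end.x, y: end.y };
  return [a, b, c, d, e];
}

// For a horizontal segment from (0, 0) to (300, 0), the "bump" point c
// lands at (150, about -86.6): the apex of an equilateral triangle.
console.log(kochPoints({ x: 0, y: 0 }, { x: 300, y: 0 }));
```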
Exercise 8.2
Exercise 8.4
Exercise 8.5
Draw the Sierpiński triangle (as seen in last chapter’s Wolfram elementary CA) using recursion.
Trees
The Deterministic Version
Figure 8.18 outlines a deterministic set of production rules for drawing a fractal tree.
Figure 8.18: Each generation of a fractal tree, following the given production rules. The final tree is several generations later.
Once again, I have a nice fractal with a recursive definition: a branch is a line with two branches connected to it. What makes this fractal a bit more difficult than the previous ones is the use of the word rotate in the fractal’s rules. Each new branch must rotate relative to the previous branch, which is rotated relative to all its previous branches. Luckily, p5.js has a mechanism to keep track of rotations: transformations.
line(0, 0, 0, -100);
Once I’ve drawn the line, I must translate to the end of that line and rotate in order to draw the next branch, as demonstrated in Figure 8.19. (Eventually, I’m going to need to package up what I’m doing right now into a recursive function, but I’ll sort out the individual steps first.)
Figure 8.19: The process of drawing a line, translating to the end of the line, and rotating by an angle.
Here’s the code for the process illustrated in Figure 8.19. I’m using an angle of 30 degrees, or \pi/6 radians.
Now that I have a branch going to the right, I need one going to the left (see Figure 8.20). For that, I should have called push() to save the transformation state before rotating and drawing the right branch. Then I can call pop() after drawing the right branch to restore that state, putting me back in the right position to rotate and draw the left branch.
Figure 8.20: After “popping” back, a new branch is rotated to the left.
Exercise 8.6
L-systems
Figure 8.21: And so on and so forth...
Example 8.8: Simple L-system
Armed with this translation, I can treat each generation’s sentence as instructions for drawing. Figure 8.22 shows the result.
Figure 8.22: The Cantor set as expressed with the alphabet of an L-system
Look familiar? This L-system generated the Cantor set!
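A sentence-rewriting sketch makes this concrete. Using the standard Cantor L-system alphabet (A means draw a line, B means move forward without drawing), the production rules are A → ABA and B → BBB:

```javascript
// Apply L-system production rules to every character of the sentence.
function generate(sentence, productionRules) {
  let next = "";
  for (const ch of sentence) {
    next += productionRules[ch] ?? ch; // characters without a rule pass through
  }
  return next;
}

const cantorRules = { A: "ABA", B: "BBB" };
let sentence = "A";
sentence = generate(sentence, cantorRules); // "ABA"
sentence = generate(sentence, cantorRules); // "ABABBBABA"
console.log(sentence);
```

Read each A as a drawn third and each B as a skipped third, and the second generation spells out exactly the Cantor pattern: line, gap, line.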
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
credit / url
Take a moment to think back to a simpler time, when you wrote your first p5.js sketches and life was free and easy. What was a fundamental programming concept that you likely used in those first sketches and continue to use over and over again to this day? Variables. Variables allow you to save data and reuse it while a program runs.
Of course, this is nothing new. In this book, you’ve moved far beyond sketches with just one or two simple variables, working up to sketches organized around more complex data structures: variables holding custom objects that include both data and functionality. You’ve used these complex data structures—classes—to build your own little worlds of movers and particles and vehicles and cells and trees. But there’s been a catch: in each and every example in this book, you’ve had to worry about initializing the properties of these objects. Perhaps you made a whole set of particles with random colors and sizes, or a list of vehicles all starting at the same x,y position.
What if, instead of acting as “intelligent designers,” assigning the properties of the objects through randomness or thoughtful consideration, you could let a process found in nature—evolution—decide the values for you? Can you think of the variables of a JavaScript object as the object’s DNA? Can objects give birth to other objects and pass down their DNA to a new generation? Can a p5.js sketch evolve?
Why Use Genetic Algorithms?
To help illustrate the utility of the traditional genetic algorithm, I’m going to start with cats. No, not just your everyday feline friends. I’m going to start with some purr-fect cats that paw-sess a talent for typing, with the goal of producing the complete works of Shakespeare (Figure 9.1).
Figure 9.1: Infinite cats typing at infinite keyboards
This is my meow-velous twist on the infinite monkey theorem, which is stated as follows: a monkey hitting keys randomly on a typewriter will eventually type the complete works of Shakespeare, given an infinite amount of time. It’s only a theory because in practice the number of possible combinations of letters and words makes the likelihood of the monkey actually typing Shakespeare minuscule. To put it in perspective, even if the monkey had started typing at the beginning of the universe, the probability that by now it would have produced just Hamlet, to say nothing of the entire works of Shakespeare, is still absurdly low.
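For a rough sense of the odds, here’s a back-of-the-napkin sketch, assuming a simplified typewriter with 27 equally likely keys (26 letters plus a space); the function name is mine.

```javascript
// The probability of typing a target phrase in a single random
// attempt, with each of the 27 keys equally likely at every keystroke.
function typingProbability(phrase, alphabetSize = 27) {
  return Math.pow(1 / alphabetSize, phrase.length);
}

const p = typingProbability("to be or not to be");
// p is roughly 1.7e-26: vanishingly small, even for one short phrase
```

Each additional character divides the odds by 27 again, which is why the full works of Shakespeare are hopeless by pure chance.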
Step 2: Selection
Now it’s time for the wheel of fortune, shown in Figure 9.2.
Figure 9.2: A “wheel of fortune” where each slice of the wheel is sized according to a fitness value
Step 3: Reproduction
The task at hand is now to create a child phrase from these two. Perhaps the most obvious way (call it the “50/50 method”) would be to take the first three characters from A and the second three from B, as shown in Figure 9.3.
Figure 9.3: A 50/50 crossover
A variation of this technique is to pick a random midpoint. In other words, I don’t always have to pick exactly half of the characters from each parent. I could also use a combination of 1 and 5, or 2 and 4. This is preferable to the 50/50 approach, since it increases the variety of possibilities for the next generation (see Figure 9.4).
Figure 9.4: Two examples of crossover from a random midpoint
Another possibility is to randomly select a parent for each character in the child string, as in Figure 9.5. You can think of this as flipping a coin six times: heads, take a character from parent A; tails, from parent B. This yields even more possible outcomes: “codurg,” “natine,” “notune,” “cadune,” and so on.
Figure 9.5: Crossover with a “coin-flipping” approach
This strategy won’t significantly change the outcome from the random midpoint method; however, if the order of the genetic information plays some role in expressing the phenotype, you may prefer one solution over the other. Here, order does matter because the goal is to build an intelligible phrase. Other problems may benefit more from the randomness introduced by the coin-flipping approach.
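Both crossover strategies can be sketched in plain JavaScript. These are minimal illustrations rather than the chapter's final code; the function names are mine, and the coin() argument stands in for p5.js's random() so the behavior can be pinned down.

```javascript
// Midpoint crossover: characters before the midpoint come from
// parent A, the rest from parent B.
function midpointCrossover(parentA, parentB, midpoint) {
  return parentA.slice(0, midpoint) + parentB.slice(midpoint);
}

// Coin-flip crossover: each character independently comes from A or B.
function coinFlipCrossover(parentA, parentB, coin = Math.random) {
  let child = "";
  for (let i = 0; i < parentA.length; i++) {
    child += coin() < 0.5 ? parentA[i] : parentB[i];
  }
  return child;
}

midpointCrossover("coding", "nature", 3); // "codure"
```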
Once the child DNA has been created via crossover, an extra, optional process can be applied before adding the child to the next generation: mutation. This second reproduction stage is unnecessary in some cases, but it exists to further uphold the Darwinian principle of variation. The initial population was created randomly, ensuring some variety of elements at the outset. However, this variation is limited by the size of the population, and the variation narrows over time by virtue of selection. Mutation introduces additional variety throughout the evolutionary process.
Figure 9.6: Mutating the child phrase
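Mutation can be sketched similarly. Again, this is a minimal illustration with names of my own; the rand and randomChar arguments stand in for p5.js's random() calls so the result is testable.

```javascript
// Mutate a child phrase: each character has a mutationRate chance
// (from 0 to 1) of being replaced by a new random character.
function mutate(phrase, mutationRate, rand = Math.random,
    randomChar = () => String.fromCharCode(97 + Math.floor(Math.random() * 26))) {
  let mutated = "";
  for (const character of phrase) {
    mutated += rand() < mutationRate ? randomChar() : character;
  }
  return mutated;
}

// With a rate of 0.01, roughly 1 in 100 characters will change.
```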
In Chapter 0, I covered the basics of probability and generating a custom distribution of random numbers. I’m going to use the same techniques here to assign a probability to each member of the population, picking parents by spinning the “wheel of fortune.” Revisiting Figure 9.2, your mind might immediately go back to Chapter 3 and contemplate coding a simulation of an actual spinning wheel. As fun as this might be (and you should make one!), it’s quite unnecessary.
Figure 9.7: A bucket full of letters A, B, C, D, and E. The higher the fitness, the more instances of the letter in the bucket.
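The "bucket of letters" idea in Figure 9.7 might be sketched like this. It's a simplified version, assuming fitness values normalized between 0 and 1; the names are my own.

```javascript
// Build a mating pool: each member is added a number of times
// proportional to its fitness.
function buildMatingPool(population, fitnessOf) {
  const pool = [];
  for (const member of population) {
    // A fitness of 0.8 puts 80 copies of the member in the pool.
    const copies = Math.floor(fitnessOf(member) * 100);
    for (let i = 0; i < copies; i++) {
      pool.push(member);
    }
  }
  return pool;
}

const pool = buildMatingPool(["A", "B"], (member) => (member === "A" ? 0.8 : 0.2));
// Picking at random from pool now selects "A" 80% of the time.
```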
Key #2: The Fitness Function
There are a couple of problems here. First, I’m adding elements to the mating pool N times, where N equals fitness multiplied by 100. But objects can only be added to an array a whole number of times, so A and B will both be added 80 times, giving them an equal probability of being selected. Even with an improved solution that takes floating point probabilities into account, 80.1 percent is only a teeny tiny bit higher than 80 percent. But getting 801 characters right is a whole lot better than 800 in the evolutionary scenario. I really want to make that additional character count. I want the fitness score for 801 characters to be substantially better than the score for 800.
To put it another way, Figure 9.8 shows graphs of two possible fitness functions.
Figure 9.8: On the left, a fitness graph of y = x; on the right, y = x^2
On the left is a linear graph; as the number of characters goes up, so does the fitness score. By contrast, in the graph on the right, as the number of characters goes up, the fitness score goes way up. That is, the fitness increases at an accelerating rate as the number of correct characters increases.
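A quick bit of arithmetic shows the difference between the two fitness functions for the 800-versus-801 scenario (the function names here are mine):

```javascript
// Two candidate fitness functions for the typing-cats scenario:
// award one point per correct character (linear), or square the
// count so each additional correct character is worth more.
const linearFitness = (correctCharacters) => correctCharacters;
const squaredFitness = (correctCharacters) => correctCharacters * correctCharacters;

// Going from 800 to 801 correct characters:
linearFitness(801) - linearFitness(800);   // 1 point gained
squaredFitness(801) - squaredFitness(800); // 1601 points gained
```

With the squared version, that one extra correct character is rewarded far more heavily, exactly the behavior I'm after.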
Evolving Forces: Smart Rockets
I mentioned rockets for a specific reason: in 2009, Jer Thorp released a genetic algorithms example on his blog entitled “Smart Rockets.” Thorp pointed out that NASA uses evolutionary computing techniques to solve all sorts of problems, from satellite antenna design to rocket firing patterns. This inspired him to create a Flash demonstration of evolving rockets.
Here’s the scenario: a population of rockets launches from the bottom of the screen with the goal of hitting a target at the top of the screen. There are obstacles blocking a straight-line path to the target (see Figure 9.9).
Figure 9.9: A population of smart rockets seeking a delicious strawberry planet
Each rocket is equipped with five thrusters of variable strength and direction (Figure 9.10). The thrusters don’t fire all at once and continuously; rather, they fire one at a time in a custom sequence.
Figure 9.10: A single smart rocket with five thrusters, carrying Clawdius the astronaut
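One way to sketch a rocket's genetic makeup is as a sequence of thrust vectors, one per firing. This is an assumption-laden sketch: plain {x, y} objects stand in for p5.Vector, and the function names are my own.

```javascript
// A random thrust gene: a random direction with a random magnitude
// up to maxForce. The rand argument stands in for p5.js's random().
function randomGene(maxForce, rand = Math.random) {
  const angle = rand() * Math.PI * 2;
  const magnitude = rand() * maxForce;
  return { x: magnitude * Math.cos(angle), y: magnitude * Math.sin(angle) };
}

// A rocket's DNA: one thrust vector for each moment of its lifetime.
function randomDNA(lifetime, maxForce) {
  const genes = [];
  for (let i = 0; i < lifetime; i++) {
    genes.push(randomGene(maxForce));
  }
  return genes;
}
```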
Developing the Rockets
Figure 9.11: On the left, vectors created with random x and y values. On the right, using p5.Vector.random2D().
Interactive Selection
The innovation here isn’t the use of the genetic algorithm itself, but rather the strategy behind the fitness function. In front of each monitor is a sensor on the floor that can detect the presence of a visitor viewing the screen. The fitness of an image is tied to the length of time that viewers look at the image. This is known as interactive selection, a genetic algorithm with fitness values assigned by people.
Far from being confined to art installations, interactive selection is quite prevalent in the digital age of user-generated ratings and reviews. Could you imagine evolving the perfect song based on your Spotify ratings? Or the ideal book according to Goodreads reviews? In keeping with the book’s nature theme, however, I’ll illustrate how interactive selection works using a population of digital flowers like the ones in Figure 9.13.
Figure 9.13: Flower design for interactive selection
Each flower will have a set of properties: petal color, petal size, petal count, center color, center size, stem length, and stem color. A flower’s DNA (genotype) is an array of floating point numbers between 0 and 1, with a single value for each property.
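A sketch of how such a genotype might map to a phenotype follows. The property ranges here are illustrative assumptions of mine, not values from the book's example.

```javascript
// Linearly map a value between 0 and 1 into the range [min, max].
function map01(value, min, max) {
  return min + value * (max - min);
}

// Map a flower's genotype (floats between 0 and 1) to visible properties.
function phenotype(dna) {
  return {
    petalCount: Math.floor(map01(dna[0], 3, 13)), // 3 to 12 petals
    petalSize: map01(dna[1], 5, 40),              // pixels
    stemLength: map01(dna[2], 20, 150),           // pixels
    petalHue: map01(dna[3], 0, 360),              // degrees
  };
}

phenotype([0.5, 0.5, 0.5, 0.5]);
// => { petalCount: 8, petalSize: 22.5, stemLength: 85, petalHue: 180 }
```

Because every gene lives in the same 0 to 1 range, crossover and mutation never have to know what each value means.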
Exercise 9.12
Another of Karl Sims’s seminal works in the field of genetic algorithms is “Evolved Virtual Creatures.” In this project, a population of digital creatures in a simulated physics environment is evaluated for their ability to perform tasks, such as swimming, running, jumping, following, and competing for a green cube. The project uses a “node-based” genotype. In other words, the creature’s DNA isn’t a linear list of vectors or numbers, but a map of nodes (much like the soft body simulation in Chapter 6). The phenotype is the creature’s body itself, a network of limbs connected with muscles.
Can you design the DNA for a flower, plant, or creature as a “network” of parts? One idea is to use interactive selection to evolve the design. Alternatively, you could incorporate spring forces, perhaps with toxiclibs.js or Matter.js, to create a simplified 2D version of Sims’s creatures. What if they were to evolve according to a fitness function associated with a specific goal? For more about Sims’s techniques, you can read his 1994 paper and watch the “Evolved Virtual Creatures” video on YouTube.
Ecosystem Simulation
Genotype and Phenotype
Figure 9.14: Small and big “bloop” creatures. The example will use simple circles, but you should try being more creative!
“The human brain has 100 billion neurons, each neuron connected to 10 thousand other neurons. Sitting on your shoulders is the most complicated object in the known universe.”
— Michio Kaku
TITLE
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
credit / url
I began with inanimate objects living in a world of forces, and gave them desires, autonomy, and the ability to take action according to a system of rules. Next, I allowed those objects, now called creatures, to live in a population and evolve over time. Now I’d like to ask: What is each creature’s decision-making process? How can it adjust its choices by learning over time? Can a computational entity process its environment and generate a decision?
The human brain can be described as a biological neural network—an interconnected web of neurons transmitting elaborate patterns of electrical signals. Dendrites receive input signals and, based on those inputs, fire an output signal via an axon. Or something like that. How the human brain actually works is an elaborate and complex mystery, one that I certainly am not going to attempt to tackle in rigorous detail in this chapter.
Figure 10.1: An illustration of a neuron with dendrites and an axon connected to another neuron.
The good news is that developing engaging animated systems with code does not require scientific rigor or accuracy, as you've learned throughout this book. You can simply be inspired by the idea of brain function.
In this chapter, I'll begin with a conceptual overview of the properties and features of neural networks and build the simplest possible example of one (a network that consists of a single neuron). I’ll then introduce you to more complex neural networks using the ml5.js library. This will serve as a foundation for Chapter 11, the grand finale of this book: combining genetic algorithms with neural networks for physics simulation. I will demonstrate a technique called "neuroevolution" and evolve a "Brain" object in the Vehicle class to optimize steering.
Artificial Neural Networks: Introduction and Application
Computer scientists have long been inspired by the human brain. In 1943, Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, developed the first conceptual model of an artificial neural network. In their paper, "A logical calculus of the ideas immanent in nervous activity,” they describe the concept of a neuron, a single cell living in a network of cells that receives inputs, processes those inputs, and generates an output.
Their work, and the work of many scientists and researchers that followed, was not meant to accurately describe how the biological brain works. Rather, an artificial neural network (hereafter referred to as a “neural network”) was designed as a computational model based on the brain to solve certain kinds of problems.
It’s probably pretty obvious to you that there are problems that are incredibly simple for a computer to solve, but difficult for you. Take the square root of 964,324, for example. A quick line of code produces the value 982, a number your computer computed in less than a millisecond. There are, on the other hand, problems that are incredibly simple for you or me to solve, but not so easy for a computer. Show any toddler a picture of a kitten or puppy and they’ll be able to tell you very quickly which one is which. Listen to a conversation in a noisy café while focusing on just one person’s voice, and you can effortlessly comprehend their words. But need a machine to perform one of these tasks? Scientists have already spent entire careers researching and implementing complex solutions.
The most prevalent use of neural networks in computing today involves these “easy-for-a-human, difficult-for-a-machine” tasks, known collectively as pattern recognition. These encompass a wide variety of problem areas, where the aim is to detect, interpret, and classify data. This includes everything from identifying objects in images and recognizing spoken words to understanding and generating human-like text, as well as more complex tasks such as predicting your next favorite song or movie, teaching a machine to win at complex games, and detecting unusual cyber activities.
Figure 10.2: A neural network is a system of neurons and connections.
Unsupervised Learning —Required when there isn’t an example data set with known answers. Imagine searching for a hidden pattern in a data set. An application of this is clustering, i.e. dividing a set of elements into groups according to some unknown pattern. I won’t be showing any examples of unsupervised learning in this chapter, as this strategy is less relevant for the examples in this book.
Reinforcement Learning —A strategy built on observation. Think of a little mouse running through a maze. If it turns left, it gets a piece of cheese; if it turns right, it receives a little shock. (Don’t worry, this is just a pretend mouse.) Presumably, the mouse will learn over time to turn left. Its neural network makes a decision with an outcome (turn left or right) and observes its environment (yum or ouch). If the observation is negative, the network can adjust its weights in order to make a different decision the next time. Reinforcement learning is common in robotics. At time t, the robot performs a task and observes the results. Did it crash into a wall or fall off a table? Or is it unharmed? I'll showcase how reinforcement learning works in the context of our simulated steering vehicles.
Reinforcement learning comes in many variants and styles. In this chapter, while I will lay the groundwork of neural networks using supervised learning, my primary goal is to get to Chapter 11, where I will demonstrate a technique related to reinforcement learning known as neuroevolution. This method builds upon the code from Chapter 9 and "evolves" the weights (and in some cases, the structure itself) of a neural network over generations of "trial and error" learning. It is especially effective in environments where the learning rules are not precisely defined or the task is complex with numerous potential solutions. And yes, it can indeed be applied to simulated steering vehicles!
A neural network itself is a “connectionist” computational system. The computational systems I have been writing in this book are procedural; a program starts at the first line of code, executes it, and goes on to the next, following instructions in a linear fashion. A true neural network does not follow a linear path. Rather, information is processed collectively, in parallel throughout a network of nodes (the nodes, in this case, being neurons).
Here I am showing yet another example of a complex system, much like the ones seen throughout this book. Remember how the individual boids in a flocking system, following only three rules (separation, alignment, and cohesion), created complex behaviors? The individual elements of a neural network are equally simple to understand. They read an input, a number, process it, and generate an output, another number. A network of many neurons, however, can exhibit incredibly rich and intelligent behaviors, echoing the complex dynamics seen in a flock of boids.
This ability of a neural network to learn, to make adjustments to its structure over time, is what makes it so useful in the field of artificial intelligence. Here are some standard uses of neural networks in software today.
The Perceptron
Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory, a perceptron is the simplest neural network possible: a computational model of a single neuron. A perceptron consists of one or more inputs, a processor, and a single output.
Figure 10.3: A simple perceptron with two inputs and one output.
A perceptron follows the “feed-forward” model, meaning inputs are sent into the neuron, are processed, and result in an output. In the diagram above, this means the network (one neuron) reads from left to right: inputs come in, output goes out.
Now that I have explained the computational process of a perceptron, let's take a look at an example of one in action. As I mentioned earlier, neural networks are commonly used for pattern recognition applications, such as facial recognition. Even simple perceptrons can demonstrate the fundamentals of classification. Let’s demonstrate with the following scenario.
Imagine you have a dataset of plants and you want to classify them into two categories: “xerophytes” (plants that have evolved to survive in an environment with little water and lots of sunlight, like the desert) and “hydrophytes” (plants that have adapted to living submerged in water, with reduced light). On the x-axis, you plot the amount of daily sunlight received by the plant and on the y-axis, the amount of water.
Figure 10.4: A collection of points in two dimensional space divided by a line, representing plant categories according to their water and sunlight intake.
While this is an oversimplified scenario and real-world data would have more messiness to it, you can see how the plants can be classified according to whether they fall on one side of a line or the other. Classifying a new plant plotted in this space doesn’t require a neural network (which side of the line a point lies on can be determined with some simple algebra). However, I can use this scenario as the basis to show how a perceptron can be trained to categorize points according to dimensional data.
Here the perceptron will have two inputs: the x- and y-coordinates of a point, representing the amounts of sunlight and water, respectively. When using a sign activation function, the output will be either -1 or 1. The input data are classified according to the sign of the output, the weighted sum of the inputs. In the diagram above, you can see how each point is either below the line (-1) or above it (+1). I can use this to signify hydrophyte (+1, above the line) or xerophyte (-1, below the line).
The perceptron itself can be diagrammed as follows. In machine learning x’s are typically the notation for inputs and y is typically the notation for an output. To keep this convention I’ll note in the diagram the inputs as x_0 and x_1. x_0 will correspond to the x-coordinate (sunlight) and x_1 to the y (water). I name the output simply “\text{output}”.
Figure 10.5 A perceptron with two inputs (x_0 and x_1), a weight for each input (\text{weight}_0 and \text{weight}_1) as well as a processing neuron that generates the output.
There is a pretty significant problem in Figure 10.5, however. Let’s consider the input data point: (0,0). What if I send this point into the perceptron as its input: x_0 = 0 and x_1=0? What will the sum of the weighted inputs be? No matter what the weights are, the sum will always be 0! But this can’t be right—after all, the point (0,0) could certainly be above or below various lines in this two-dimensional world.
To avoid this dilemma, the perceptron requires a third input, typically referred to as a bias input. A bias input always has the value of 1 and is also weighted. Here is the perceptron with the addition of the bias:
Figure 10.6: Adding a “bias” input, along with its weight, to the perceptron.
Let’s go back to the point (0,0).
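Spelling out the weighted sum for (0,0) with the bias input makes the fix concrete. The weight values below are arbitrary, chosen only for illustration.

```javascript
// The weighted sum for the input point (0,0), with a third "bias"
// input fixed at 1. The weight values are arbitrary.
const inputs = [0, 0, 1];          // x0, x1, bias
const weights = [0.5, -0.2, 0.3];  // weight0, weight1, bias weight

let sum = 0;
for (let i = 0; i < inputs.length; i++) {
  sum += inputs[i] * weights[i];
}
// sum equals the bias weight (0.3): with both coordinates at 0,
// the bias alone determines the output.
```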
Coding the Perceptron
class Perceptron {
  constructor() {
    this.weights = [];
  }
...
The constructor could receive an argument indicating the number of inputs (in this case three: x_0, x_1, and a bias) and size the array accordingly.
...
  // The argument "n" determines the number of inputs (including the bias)
  constructor(n) {
    this.weights = [];
    for (let i = 0; i < n; i++) {
      //{!1} The weights are picked randomly to start.
      this.weights[i] = random(-1, 1);
    }
  }
...
A perceptron’s job is to receive inputs and produce an output. These requirements can be packaged together in a feedForward() function. In this example, the perceptron’s inputs are an array (which should be the same length as the array of weights), and the output is a number, +1 or -1, depending on the sign returned by the activation function.
...
  feedForward(inputs) {
    let sum = 0;
    for (let i = 0; i < this.weights.length; i++) {
      sum += inputs[i] * this.weights[i];
    }
    // Here the perceptron is making a guess.
    // Is it on one side of the line or the other?
    return this.activate(sum);
  }
...
I’ll note that the name of the function “feed forward” in this context comes from a commonly used term in neural networks to describe the process of data passing through the network. This name relates to the way the data feeds directly forward through the network, read from left to right in a neural network diagram.
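For completeness, here is a standalone sketch of one way the activate() method called inside feedForward() could work, a simple sign function. (This is my assumption of the implementation; note that here a sum of exactly 0 maps to -1.)

```javascript
// A sign activation function: +1 if the weighted sum is positive,
// -1 otherwise.
function activate(sum) {
  return sum > 0 ? 1 : -1;
}
```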
Presumably, I could now create a Perceptron object and ask it to make a guess for any given point.
Figure 10.7: An xy coordinate from the two-dimensional space is the input to the perceptron.
// Create the Perceptron.
// The answer!
let guess = perceptron.feedForward(inputs);
Did the perceptron get it right? At this point, the perceptron has no better than a 50/50 chance of arriving at the right answer. Remember, when I created it, I gave each weight a random value. A neural network is not a magic tool that can guess things correctly on its own. I need to teach it how to do so!
To train a neural network to answer correctly, I will use the method of supervised learning, which I described in section 10.1. In this method, the network is provided with inputs for which there is a known answer. This enables the network to determine if it has made a correct guess. If it is incorrect, the network can learn from its mistake and adjust its weights. The process is as follows:
Provide the perceptron with inputs for which there is a known answer.
This is also a calculation of an error! The current velocity serves as a guess, and the error (the steering force) indicates how to adjust the velocity in the correct direction. In a moment, you will see how adjusting a vehicle's velocity to follow a target is similar to adjusting the weights of a neural network towards the correct answer.
In the case of the perceptron, the output has only two possible values: +1 or -1. This means there are only three possible errors.
If the perceptron guesses the correct answer, then the guess equals the desired output and the error is 0. If the correct answer is -1 and it guessed +1, then the error is -2. If the correct answer is +1 and it guessed -1, then the error is +2.
With steering, however, I had an additional variable that controlled the vehicle’s ability to steer: the maximum force. A high maximum force allowed the vehicle to accelerate and turn quickly, while a lower force resulted in a slower velocity adjustment. The neural network will use a similar strategy with a variable called the "learning constant."
Note that a high learning constant causes the weight to change more drastically. This may help the perceptron arrive at a solution more quickly, but it also increases the risk of overshooting the optimal weights. A small learning constant, however, will adjust the weights slowly and require more training time, but allow the network to make small adjustments that could improve overall accuracy.
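A single weight update can be spelled out numerically. All values below are arbitrary, chosen only for illustration.

```javascript
// One pass of the perceptron's weight adjustment:
// new weight = weight + error * input * learning constant
const learningConstant = 0.01;
const input = 0.5;   // one input value
const error = 2;     // desired answer (+1) minus the guess (-1)
let weight = -0.3;   // the starting weight for this input

weight += error * input * learningConstant;
// weight nudges from -0.3 to roughly -0.29, a small step toward
// producing the desired answer
```

With a learning constant of 0.1 instead, the same error would move the weight ten times as far, faster, but more prone to overshooting.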
Assuming the addition of a learningConstant property to the Perceptron class, I can now write a training function for the perceptron following the above steps.
...
  // Step 1: Provide the inputs and known answer.
  // These are passed in as arguments to train().
  train(inputs, desired) {
    // Step 2: Guess according to those inputs.
    let guess = this.feedForward(inputs);

    // Step 3: Compute the error (difference between desired and guess).
    let error = desired - guess;

    //{!3} Step 4: Adjust all the weights according to the error and learning constant.
    for (let i = 0; i < this.weights.length; i++) {
      this.weights[i] += error * inputs[i] * this.learningConstant;
    }
  }
...
Here’s the Perceptron class as a whole.
class Perceptron {
- constructor(n) {
+ constructor(totalInputs, learningRate) {
    //{!2} The Perceptron stores its weights and learning constant.
    this.weights = [];
    this.learningConstant = learningRate;
//{!3} Weights start off random.
- for (let i = 0; i < n; i++) {
- this.weights[i] = random(-1,1);
+ for (let i = 0; i < totalInputs; i++) {
+ this.weights[i] = random(-1, 1);
}
}
@@ -351,7 +366,8 @@
Coding the Perceptron
}
}
}
-
To train the perceptron, I need a set of inputs with a known answer. Now the question becomes, how do I pick a point and know whether it is above or below a line? Let’s start with the formula for a line, where y is calculated as a function of x:
+
To train the perceptron, I need a set of inputs with a known answer. However, I don’t happen to have a real-world dataset (or time to research and collect one) for the xerophytes and hydrophytes scenario. I'll instead demonstrate the training process with what's known as synthetic data. Synthetic data is generated data, often used in machine learning to create controlled scenarios for training and testing. In this case, my synthetic data will consist of a set of input points, each with a known answer indicating whether the point is above or below a line. To define the line and generate the data, I'll use simple algebra. This approach allows me to clearly demonstrate the training process and how the perceptron learns.
+
Now the question becomes, how do I pick a point and know whether it is above or below a line? Let’s start with the formula for a line, where y is calculated as a function of x:
y = f(x)
In generic terms, a line can be described as:
y = ax + b
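Expressed in code, labeling a synthetic point against such a line might look like the following sketch. The values of a and b here are arbitrary choices for illustration, and the labeling convention matches the example's: a point with y less than f(x) gets -1.

```javascript
// A line y = ax + b, with arbitrary example values for a and b.
const a = 0.5;
const b = 1;
const f = (x) => a * x + b;

// Label a point relative to the line: -1 if y < f(x), otherwise +1.
// (On a p5.js canvas, a smaller y value means the point is higher up.)
function label(x, y) {
  return y < f(x) ? -1 : 1;
}

console.log(label(2, 1)); // -1, since f(2) is 2 and 1 < 2
console.log(label(2, 3)); // 1
```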
@@ -370,7 +386,7 @@
Coding the Perceptron
let yline = f(x);
If the y value I am examining is above the line, it will be less than y_\text{line}.
-
+
Figure 10.8: If y is less than y_\text{line}, then the point is above the line. Note that this is only true for a p5.js canvas, where the positive y-axis points down.
// Start with the value of +1
@@ -395,9 +411,9 @@
Example 10.1: The Perceptron
// The Perceptron
let perceptron;
-//{!1} 2,000 training points
+//{!1} An array for training data
let training = [];
-// A counter to track training points one by one
+// A counter to track training data points one by one
let count = 0;
//{!3} The formula for a line
@@ -407,32 +423,28 @@
Example 10.1: The Perceptron
function setup() {
createCanvas(640, 240);
-
+
// Perceptron has 3 inputs (including bias) and learning rate of 0.01
perceptron = new Perceptron(3, 0.01);
- //{!1} Make 1,000 training points.
+ //{!1} Make 2,000 training data points.
for (let i = 0; i < 2000; i++) {
- let x = random(-width / 2,width / 2);
- let y = random(-height / 2,height / 2);
+ let x = random(-width / 2, width / 2);
+ let y = random(-height / 2, height / 2);
//{!2} Is the correct answer 1 or -1?
let desired = 1;
if (y < f(x)) {
desired = -1;
}
- training[i] = {
- input: [x, y, 1],
- output: desired
- };
+ training[i] = { input: [x, y, 1], output: desired };
}
}
-
function draw() {
background(255);
- translate(width/2, height/2);
+ translate(width / 2, height / 2);
- ptron.train(training[count].inputs, training[count].answer);
+ perceptron.train(training[count].input, training[count].output);
//{!1} For animation, we are training one point at a time.
count = (count + 1) % training.length;
@@ -440,42 +452,134 @@
Example 10.1: The Perceptron
stroke(0);
let guess = perceptron.feedforward(training[i].input);
//{!2} Show the classification—no fill for -1, black for +1.
- if (guess > 0) noFill();
- else fill(0);
- ellipse(training[i].inputs[0], training[i].inputs[1], 8, 8);
+ if (guess > 0) {
+ noFill();
+ } else {
+ fill(0);
+ }
+ circle(training[i].input[0], training[i].input[1], 8);
}
}
-
+
In practical machine learning applications, real-world datasets often feature diverse and dynamic ranges of input values. In this simplified scenario, the range of possible values for x is larger than that for y due to the canvas size of 640x240. Despite this, the example still works; after all, the sign activation function doesn't rely on specific input ranges, and this is such a straightforward binary classification task. However, real-world data often has much greater complexity in terms of input ranges. To this end, data normalization is a critical step in machine learning. Normalizing data involves mapping the training data to ensure that all inputs (and outputs) conform to a uniform range. This process can improve training efficiency and prevent individual inputs from dominating the learning process. In the next section, using the ml5.js library, I will build data normalization into the process.
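As a quick sketch of what that could look like for this example, the canvas-centered coordinates can be divided by half the canvas dimensions so that both x and y fall within -1 and 1. This is my own illustrative helper, assuming the 640x240 canvas from the example; it is not part of the Perceptron class.

```javascript
// Hedged sketch: map canvas-centered coordinates into the range -1..1.
// Assumes the 640x240 canvas from the example; the bias input stays 1.
function normalizePoint(x, y, w = 640, h = 240) {
  return [x / (w / 2), y / (h / 2), 1];
}

console.log(normalizePoint(320, -120)); // [ 1, -1, 1 ]
```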
Exercise 10.1
Instead of using the supervised learning model above, can you train the neural network to find the right weights by using a genetic algorithm?
Exercise 10.2
-
Visualize the perceptron itself. Draw the inputs, the processing node, and the output.
+
+
+
Exercise 10.3
+
Incorporate data normalization into the example. Does this improve the learning efficiency?
It’s a “Network,” Remember?
-
Yes, a perceptron can have multiple inputs, but it is still a lonely neuron. The power of neural networks comes in the networking itself. Perceptrons are, sadly, incredibly limited in their abilities. If you read an AI textbook, it will say that a perceptron can only solve linearly separable problems. What’s a linearly separable problem? Let’s take a look at the first example, which determined whether points were on one side of a line or the other.
+
Yes, a perceptron can have multiple inputs, but it is still a lonely neuron. The power of neural networks comes in the networking itself. Perceptrons are, sadly, incredibly limited in their abilities. If you read an AI textbook, it will say that a perceptron can only solve linearly separable problems. What’s a linearly separable problem?
-
+
Figure 10.9: On the left, a collection of points that is linearly separable. On the right, nonlinearly separable data, where a curve is required to separate the points.
-
On the left of Figure 10.9, is an example of classic linearly separable data, like the simplified plant classification of xerophytes and hydrophytes. Graph all of the possibilities; if you can classify the data with a straight line, then it is linearly separable. On the right, however, is non-linearly separable data. Imagine you are classifying plants according to soil acidity (x-axis) and temperature (y-axis). Some plants might thrive in acidic soils at a specific temperature range, while other plants prefer less acidic soils but tolerate a broader range of temperatures. There is a more complex relationship between the two variables, and a straight line cannot be drawn to separate the two categories of plants—"acidophilic" and "alkaliphilic.”
+
On the left of Figure 10.9 is an example of classic linearly separable data, like the simplified plant classification of xerophytes and hydrophytes. Graph all of the possibilities; if you can divide the categories of the data with a straight line, then it is linearly separable. On the right, however, is nonlinearly separable data. Imagine you are classifying plants according to soil acidity (x-axis) and temperature (y-axis). Some plants might thrive in acidic soils at a specific temperature range, while other plants prefer less acidic soils but tolerate a broader range of temperatures. There is a more complex relationship between the two variables, and a straight line cannot be drawn to separate the two categories of plants—"acidophilic" and "alkaliphilic.” (A caveat here: I’m making up these scenarios. If you’re a botanist reading this book, please let me know if I’m anywhere close to reality!)
One of the simplest examples of a non-linearly separable problem is XOR, or “exclusive or.” I’m guessing, as someone who works with coding and p5.js, you are familiar with a logical \text{AND}. For A \text{ AND } B to be true, both A and B must be true. With \text{OR}, either A or B can be true for A \text{ OR } B to evaluate as true. These are both linearly separable problems. Let’s look at the solution space, a “truth table.”
-
+
Figure 10.10: Truth tables for the AND and OR logical operators; the true and false outputs can be separated by a line.

AND   | true  | false
true  | true  | false
false | false | false

OR    | true  | false
true  | true  | true
false | true  | false

See how you can draw a line to separate the true outputs from the false ones?
-
\text{XOR} (”exclusive” or) is the equivalent \text{OR} and \text{NOT AND}. In other words, A \text{ XOR } B only evaluates to true if one of them is true. If both are false or both are true, then we get false. Take a look at the following truth table.
+
\text{XOR} (“exclusive or”) is the equivalent of \text{OR} combined with \text{NOT AND}. In other words, A \text{ XOR } B only evaluates to true if exactly one of them is true. If both are false or both are true, then it evaluates to false. Let’s say you are having pizza for dinner. You love pineapple on pizza, and you love mushrooms on pizza. But put them together, yech! And plain pizza, that’s no good! Here’s a table to describe that scenario and whether you want to eat the pizza or not.

      | 🍍   | no 🍍
🍄    | 🤢   | 😋
no 🍄 | 😋   | 🤢

The truth table version of this is as follows:
-
+
Figure 10.11: Truth table for XOR (“exclusive or”); the true and false outputs cannot be separated by a single line.

XOR   | true  | false
true  | false | true
false | true  | false

This is not linearly separable. Try to draw a straight line to separate the true outputs from the false ones—you can’t!
So perceptrons can’t even solve something as simple as \text{XOR}. But what if we made a network out of two perceptrons? If one perceptron can solve \text{OR} and one perceptron can solve \text{NOT AND}, then two perceptrons combined can solve \text{XOR}.
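That decomposition can be checked directly in code, with XOR written as the combination of the two linearly separable operations:

```javascript
// A XOR B = (A OR B) AND NOT (A AND B)
function xor(a, b) {
  return (a || b) && !(a && b);
}

console.log(xor(true, false)); // true
console.log(xor(true, true));  // false
```

This mirrors the network structure: one neuron for OR, one for NOT AND, and a final combination of the two.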
-
+
Figure 10.12: A multi-layered perceptron, with the same inputs and output as the simple perceptron but now including a hidden layer of neurons.
The above diagram is known as a multi-layered perceptron, a network of many neurons. Some are input neurons and receive the inputs, some are part of what’s called a “hidden” layer (as they are connected to neither the inputs nor the outputs of the network directly), and then there are the output neurons, from which the results are read.
@@ -486,23 +590,23 @@
Machine Learning with ml5.js
That friend is ml5.js. Inspired by the philosophy of p5.js, ml5.js is a JavaScript library that aims to make machine learning accessible to a wide range of artists, creative coders, and students. It is built on top of TensorFlow.js, Google's open-source library that runs machine learning models directly in the browser without the need to install or configure complex environments. TensorFlow.js's low-level operations and highly technical API, however, can be intimidating to beginners. That's where ml5.js comes in, providing a friendly entry point for those who are new to machine learning and neural networks.
Before I get to my goal of adding a "neural network" brain to a steering agent and tying ml5.js back into the story of the book, I would like to demonstrate step-by-step how to train a neural network model with "supervised learning." There are several key terms and concepts important to cover, namely “classification”, “regression”, “inputs”, and “outputs”. By walking through the full process of a supervised learning scenario, I hope to define these terms, explore other foundational concepts, introduce the syntax of the ml5.js library, and provide the tools to train your first machine learning model with your own data.
Classification and Regression
+
The majority of machine learning tasks fall into one of two categories: classification and regression. Classification is probably the easier of the two to understand at the start. It involves predicting a “label” (or “category” or “class”) for a piece of data. For example, an image classifier might try to guess if a photo is of a cat or a dog and assign the corresponding label.
-
+
Figure 10.13: CAT OR DOG OR BIRD OR MONKEY OR ILLUSTRATIONS ASSIGNED A LABEL???
This doesn’t happen by magic, however. The model must first be shown many examples of dogs and cats with the correct labels in order to properly configure the weights of all the connections. This is the supervised learning training process.
The classic “Hello, World” demonstration of machine learning and supervised learning is known as “MNIST”. MNIST, short for “Modified National Institute of Standards and Technology,” is a dataset that was collected and processed by Yann LeCun and Corinna Cortes (AT&T Labs) and Christopher J.C. Burges (Microsoft Research). It is widely used for training and testing in the field of machine learning and consists of 70,000 handwritten digits from 0 to 9, with each one being a 28x28 pixel grayscale image.
While I won't be building a complete MNIST model with ml5.js (you could if you wanted to!), it serves as a canonical example of a training dataset for image classification: 70,000 images each assigned one of 10 possible labels. This idea of a “label” is fundamental to classification, where the output of a model involves a fixed number of discrete options. There are only 10 possible digits that the model can guess, no more and no less. After the data is used to train the model, the goal is to classify new images and assign the appropriate label.
Regression, on the other hand, is a machine learning task where the prediction is a continuous value, typically a floating-point number. A regression problem can involve multiple outputs, but when beginning, it’s often simpler to think of it as just one. Consider a machine learning model that predicts the daily electricity usage of a house based on any number of factors, such as the number of occupants, the size of the house, and the temperature outside. Here, rather than picking from a discrete set of options, it makes more sense for the neural network to guess a number. Will the house use 30.5 kilowatt-hours of energy that day? 48.7 kWh? 100.2 kWh? The output is therefore a continuous value that the model attempts to predict.
Inputs and Outputs
-
Once the task has been determined, the next step is to finalize the configuration of inputs and outputs of the neural network. Instead of MNIST which involves using an image as the input to a neural network, let's use another classic “Hello, World” example in the field of data science and machine learning: Iris Flower classification. This dataset can be found in the University of California Irvine Machine Learning Repository and originated from the work of American botanist Edgar Anderson. Anderson embarked on a data collection endeavor over many years that encompassed multiple regions of the United States and Canada. After carefully analyzing the collected data, he built a table to classify Iris flowers into three distinct species: Iris setosa, Iris virginica and Iris versicolor.
+
Once the task has been determined, the next step is to finalize the configuration of inputs and outputs of the neural network. Rather than using MNIST, which adds complexity due to the image input to a neural network, let's use another classic "Hello, World" example in the field of data science and machine learning: Iris flower classification. This dataset can be found in the University of California Irvine Machine Learning Repository and originated from the work of American botanist Edgar Anderson. Anderson collected flower data over many years across multiple regions of the United States and Canada. After carefully analyzing the data, he built a table to classify Iris flowers into three distinct species: Iris setosa, Iris virginica and Iris versicolor.
Anderson included four numeric attributes for each flower: sepal length, sepal width, petal length, and petal width, all measured in centimeters. (He also recorded color information but that data appears to have been lost.) Each record is then paired with its Iris categorization.
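In JavaScript, a single record from such a table might be represented as an object like the one below. The property names are my own choices, and the numbers are illustrative placeholders rather than a guaranteed row from Anderson's actual data.

```javascript
// One illustrative Iris record: four measurements in centimeters,
// paired with the species label.
let flower = {
  sepalLength: 5.1,
  sepalWidth: 3.5,
  petalLength: 1.4,
  petalWidth: 0.2,
  species: "Iris-setosa",
};
```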
@@ -561,19 +665,19 @@
Inputs and Outputs
-
In this dataset, the first four columns (sepal length, sepal width, petal length, petal width) serve as inputs to the neural network. The output classification is provided in the fourth column on the right. Figure 10.9 depicts a possible architecture for a neural network that can be trained on this data.
+
In this dataset, the first four columns (sepal length, sepal width, petal length, petal width) serve as inputs to the neural network. The output classification is provided in the fifth column on the right. Figure 10.9 depicts a possible architecture for a neural network that can be trained on this data. (I’m leaving out the bias, as it will be handled by ml5.js behind the scenes.)
-
+
Figure 10.9: Possible network architecture for flower classification scenario.
On the left of Figure 10.9, you can see the four inputs to the network, which correspond to the first four columns of the data table. On the right, there are three possible outputs, each representing one of the Iris species labels. The neural network's goal is to “activate” the correct output for the input data, much like how the Perceptron would output a +1 or -1 for its single binary classification. In this case, the output values are like signals that help the network decide which Iris species label to assign. The highest computed value “activates” to signify the correct classification for the input data.
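A hedged sketch of that “activation” step: given three hypothetical output values (numbers made up for illustration, not from a real model), the label paired with the highest value wins.

```javascript
// Hypothetical output values for the three Iris labels.
const labels = ["Iris-setosa", "Iris-virginica", "Iris-versicolor"];
const outputs = [0.1, 0.2, 0.7];

// Find the index of the highest output: that label "activates."
let winner = 0;
for (let i = 1; i < outputs.length; i++) {
  if (outputs[i] > outputs[winner]) {
    winner = i;
  }
}
console.log(labels[winner]); // "Iris-versicolor"
```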
-
In the diagram, you'll also notice the inclusion of a hidden layer. Unlike input and output neurons, the nodes in this “hidden” layer are not directly connected to the network's inputs or outputs. The layer introduces an added layer of complexity to the network's architecture, necessary as I have established for more complex, non-linearly separable data. The number of nodes depicted, in this case, five nodes, is arbitrary. Neural network architectures can vary greatly, and the number of hidden nodes is often determined through experimentation and optimization. In the context of this book, I'm relying on ml5.js to automatically configure the architecture based on the input and output data, simplifying the implementation process.
+
In the diagram, you'll also notice the inclusion of a hidden layer. Unlike input and output neurons, the nodes in this “hidden” layer are not directly connected to the network's inputs or outputs. The layer introduces complexity to the network's architecture, necessary as I have shown for non-linearly separable data! The number of nodes depicted here, five nodes, is arbitrary. Neural network architectures can vary greatly, and the number of hidden nodes is often determined through trial and error or other educated guessing methods (aka “heuristics”). In the context of this book, I'm going to rely on ml5.js to automatically configure the architecture based on the input and output data for me.
Now, let’s move onto a regression scenario.
-
+
Figure 10.10: A depiction of different kinds of houses and weather and electricity usage???
-
Figure 10.10 shows a variety of homes and weather conditions. Let use scenario propose early of a regression predicting the electricity usage of a house. Here’s, I’ll use a “made-up” dataset.
+
Figure 10.10 shows a variety of homes and weather conditions. Considering the scenario proposed earlier of a regression predicting the electricity usage of a house, let’s create a “made-up” dataset. (This is much like a synthetic dataset, given that it’s not data collected for a real-world scenario, but instead of being automated, I’m manually inputting numbers from my own imagination.)
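One row of such a made-up dataset might look like the following object. Every number and property name here is invented, purely to show the shape of the data.

```javascript
// A single invented record for the energy-usage regression scenario.
let record = {
  occupants: 4,
  size: 150,       // size of the house (an invented unit choice)
  temperature: 12, // temperature outside
  usage: 30.5,     // kilowatt-hours for the day: the value to predict
};
```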
@@ -628,17 +732,17 @@
Inputs and Outputs
Just as before, the inputs to the neural network are the first three columns (occupants, size, temperature). The fourth column on the right is what the neural network is expected to guess, or the output. The network architecture follows suit in Figure 10.10, also with an arbitrary choice of four nodes for the hidden layer.
-
+
Figure 10.10: Possible network architecture for three inputs and one regression output.
-
Unlike the Iris classification, since there is just one number to be predicted (rather than a choice between three labels), this neural network as only one output. I’ll note, however, that this is not a requirement of a regression. A machine learning model can perform a regression that predicts multiple continuous values.
+
Unlike the Iris classification, since there is just one number to be predicted (rather than a choice between three labels), this neural network has only one output. I’ll note, however, that this is not a requirement of a regression. A machine learning model can perform a regression that predicts multiple continuous values.
Setting up the Neural Network with ml5.js
-
In a typical machine learning scenario, the next step after establishing the inputs and outputs is to configure the architecture of the neural network. This involves specifying the number of hidden layers between the inputs and outputs, the number of neurons in each layer, which activation functions to use, and more! While all of this is possible with ml5.js, it will make its best guess and design a model for you based on the task and data.
+
In a typical machine learning scenario, the next step after establishing the inputs and outputs is to configure the architecture of the neural network. This involves specifying the number of hidden layers between the inputs and outputs, the number of neurons in each layer, which activation functions to use, and more! While all of this is possible with ml5.js, I can skip these decisions as the library will make its best guess and design the model architecture based on the task and data.
As demonstrated with Matter.js and toxiclibs.js in chapter 6, you can import the ml5.js library into your index.html file.
The ml5.js library is a collection of machine learning models that can be accessed using the syntax ml5.functionName(). For example, to use a pre-trained model that detects hands, you can use ml5.handpose(). For classifying images, you can use ml5.imageClassifier(). While I encourage exploring all that ml5.js has to offer (I will reference some of these pre-trained models in upcoming exercise ideas), for this chapter, I will focus on only one function in ml5.js: ml5.neuralNetwork(), which creates an empty neural network for you to train.
To create a neural network, you must first create a JavaScript object that will configure the model. While there are many properties that you can set, most of them are optional, as the network will use default values. Let’s begin by specifying the "task" that you intend the model to perform: "regression" or "classification.”
-
let options = { task: "classification" }
+
let options = { task: "classification" };
let classifier = ml5.neuralNetwork(options);
This, however, gives ml5.js very little to go on in terms of designing the network architecture. Adding the inputs and outputs will complete the rest of the puzzle for it. In the case of Iris Flower classification, there are 4 inputs and 3 possible output labels. This can be configured in ml5.js with a single integer for the number of inputs and an array of strings for the list of output labels.
let options = {
@@ -654,15 +758,15 @@
Setting up the Neural Network
task: "regression",
};
let energyPredictor = ml5.neuralNetwork(options);
-
While the Iris flower and energy predictor scenarios are useful starting points for understanding how machine learning works, it's important to note that they are simplified versions of what you might encounter in a “real-world” machine learning application. Depending on the problem, there could be significantly higher levels of complexity both in terms of the network architecture and the scale and preparation of data. Instead of a neatly packaged dataset, you might be dealing with enormous amounts of messy data. This data might need to be processed and refined before it can be effectively used. You can think of it like organizing, washing, and chopping ingredients before you can start cooking with them.
+
While the Iris flower and energy predictor scenarios are useful starting points for understanding how machine learning works, they are simplified versions of what you might encounter in a “real-world” machine learning application. Depending on the problem, there could be significantly higher levels of complexity both in terms of the network architecture and the scale and preparation of data. Instead of a neatly packaged dataset, you might be dealing with enormous amounts of messy data. This data might need to be processed and refined before it can be effectively used. You can think of it like organizing, washing, and chopping ingredients before you can start cooking with them.
The “lifecycle” of a machine learning model is typically broken down into seven steps.
-
Data Collection: Data forms the foundation of any machine learning task. This stage might involve running experiments, manually inputting values, sourcing public data, or a myriad of other methods.
-
Data Preparation: Raw data often isn't in a format suitable for machine learning algorithms. It might also have duplicate or missing values, or contain outliers that skew the data. Such inconsistencies may need to be manually adjusted. Additionally, neural networks work best with “normalized” data. While this term might remind you of normalizing vectors, it's important to understand that it carries a slightly different meaning in the context of data preparation. A “normalized” vector’s length is set to a fixed value, usually 1, with the direction intact. However, data normalized for machine learning involves adjusting the values so that they fit within a specific range, generally between 0 and 1 or -1 and 1. Another key part of preparing data is separating it into distinct sets: training, validation, and testing. The training data is used to teach the model (Step 5). On the other hand, the validation and testing data (the distinction is subtle, more on this later) are set aside and reserved for evaluating the model's performance (Step 6).
+
Data Collection: Data forms the foundation of any machine learning task. This stage might involve running experiments, manually inputting values, sourcing public data, or a myriad of other methods (like generating synthetic data!).
+
Data Preparation: Raw data often isn't in a format suitable for machine learning algorithms. It might also have duplicate or missing values, or contain outliers that skew the data. Such inconsistencies may need to be manually adjusted. Additionally, as I mentioned earlier, neural networks work best with “normalized” data. While this term might remind you of normalizing vectors, it's important to understand that it carries a slightly different meaning in the context of data preparation. A “normalized” vector’s length is set to a fixed value, usually 1, with the direction intact. Data normalized for machine learning, however, involves adjusting the values so that they fit within a specific range, generally between 0 and 1 or -1 and 1. Another key part of preparing data is separating it into distinct sets: training, validation, and testing. The training data is used to teach the model (step 4). The validation and testing data, on the other hand (the distinction is subtle, more on this later), are set aside and reserved for evaluating the model's performance (step 5).
Choosing a Model: This step involves designing the architecture of the neural network. Different models are more suitable for certain types of data and outputs.
Training: This step involves feeding the training data through the model, allowing the model to adjust the weights of the neural network based on its errors. This process is known as “optimization”: the model tunes the weights to minimize its errors.
Evaluation: Remember the “testing” data that was set aside in step 2? Since that data wasn’t used in training, it provides a means to evaluate how well the model performs on new, unseen data.
-
Parameter Tuning: The training process is influenced by a set of parameters (often called “hyperparameters”), such as the "learning rate," which dictates how much the model should adjust its weights based on errors in prediction. By fine-tuning these parameters and revisiting steps 5 (Training), 4 (Choosing a Model), or even 3 (Data Preparation), you can often improve the model's performance.
+
Parameter Tuning: The training process is influenced by a set of parameters (often called “hyperparameters”), such as the "learning rate," which dictates how much the model should adjust its weights based on errors in prediction. (I called this the learningConstant earlier in the perceptron example.) By fine-tuning these parameters and revisiting steps 4 (Training), 3 (Choosing a Model), or even 2 (Data Preparation), you can often improve the model's performance.
Deployment: Once the model is trained and its performance is evaluated satisfactorily, it’s time to actually use the model out in the real world with new data!
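As a small sketch of the normalization mentioned in the data preparation step, a common approach is “min-max” scaling, which maps a raw value into the range 0 to 1 given the minimum and maximum values in the dataset. The function name and sample values here are my own, for illustration.

```javascript
// Min-max normalization: map value from the range [min, max] into [0, 1].
function normalize(value, min, max) {
  return (value - min) / (max - min);
}

// For example, a temperature of 12 in a dataset spanning -10 to 40:
console.log(normalize(12, -10, 40)); // 0.44
```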
Building a Gesture Classifier
@@ -673,13 +777,13 @@
Building a Gesture Classifier
After all, how are you supposed to collect your data without knowing what you are even trying to do? Are you predicting a number? A category? A sequence? Is it a binary choice, or are there multiple options? These considerations about your inputs (the data fed into the model) and outputs (the predictions) are critical for every other step of the machine learning journey.
Let’s take a crack at step 0 for an example problem of training your first machine learning model with ml5.js and p5.js. Imagine for a moment that you’re working on an interactive application that responds to a gesture. Maybe that gesture is ultimately meant to be classified via body tracking, but you want to start with something much simpler: one single stroke of the mouse.
-
- [POSSIBLE ILLUSTRATION OF A SINGLE MOUSE SWIPE AS A GESTURE: basically can the paragraph below be made into a drawing?]
+
+ Figure 10.11: [ILLUSTRATION OF A SINGLE MOUSE SWIPE AS A GESTURE: basically can the paragraph below be made into a drawing?]
Each gesture could be recorded as a vector (extending from the start to the end points of a mouse movement) and the model’s task could be to predict one of four options: “up”, “down”, “left”, or “right.” Perfect! I’ve now got the objective and boiled it down into inputs and outputs!
Data Collection and Preparation
-
Next, I’ve got steps 1 and 2: data collection and preparation. Here, I’d like to take the approach of ordering a machine learning “meal-kit,” where the ingredients (data) comes pre-portioned and prepared. This way, I’ll get straight to the cooking itself, the process of training the model. After all, this is really just an appetizer for what will be the ultimate meal later in this chapter when I get to applying neural networks to steering agents.
-
For this step, I’ll hard-code that data itself and manually keep it normalized within a range of -1 and 1. Here it is directly written into the code, rather than loaded from a separate file. It is organized into an array of objects, pairing the x,y components of a vector with a string label.
+
Next, I’ve got steps 1 and 2: data collection and preparation. Here, I’d like to take the approach of ordering a machine learning “meal kit,” where the ingredients (data) come pre-portioned and prepared. This way, I’ll get straight to the cooking itself, the process of training the model. After all, this is really just an appetizer for what will be the ultimate meal in the next chapter, when I get to applying neural networks to steering agents.
+
+ For this step, I’ll hard-code the data itself and manually keep it normalized within a range of -1 and 1. Here it is written directly into the code, rather than loaded from a separate file. It is organized into an array of objects, pairing the x,y components of a vector with a string label. I’m picking values that I feel clearly point in a specific direction and assigning the appropriate label.
In truth, it would likely be better to collect example data by asking users to perform specific gestures and recording their inputs, or by creating synthetic data that represents the idealized versions of the gestures I want the model to recognize. In either case, the key is to collect a diverse set of examples that adequately represent the variations in how the gestures might be performed. But let’s see how it goes with just a few servings of data.
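As a sketch, such an array might look like the following (these particular numbers are illustrative stand-ins). Note that in p5.js the y-axis points down, so a positive y component means “down”:

```javascript
// Hypothetical hand-coded training data: each record pairs the x,y
// components of a direction vector (already within -1 to 1) with a
// string label. In p5.js, y increases downward, so positive y = "down".
let data = [
  { x: 0.99, y: 0.02, label: "right" },
  { x: 0.76, y: -0.1, label: "right" },
  { x: -1.0, y: 0.12, label: "left" },
  { x: -0.9, y: -0.1, label: "left" },
  { x: 0.02, y: 0.98, label: "down" },
  { x: -0.2, y: 0.75, label: "down" },
  { x: 0.01, y: -0.9, label: "up" },
  { x: -0.1, y: -0.8, label: "up" },
];
```

Each record could then be passed to the model one at a time, keeping the inputs (the two numbers) separate from the output (the label).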
-
Exercise 10.3
+
Exercise 10.3
Create a p5.js sketch that collects gesture data from users and saves it to a JSON file. You can use mousePressed() and mouseReleased() to mark the start and end of each gesture and saveJSON() to download the data into a file.
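As a hint, the core bookkeeping—turning a start and end point into a normalized training sample—can be sketched without any p5.js calls (gestureToSample() is a hypothetical helper name, not part of p5.js or ml5.js):

```javascript
// Convert a gesture's start and end points into a model-ready sample.
// The x,y deltas are divided by the gesture's own magnitude so the
// components land in the range -1 to 1, matching the hand-coded data.
function gestureToSample(start, end, label) {
  let dx = end.x - start.x;
  let dy = end.y - start.y;
  let mag = Math.sqrt(dx * dx + dy * dy);
  if (mag === 0) return null; // no movement, no gesture
  return { x: dx / mag, y: dy / mag, label };
}

// In a p5.js sketch, mousePressed() would store the start point,
// mouseReleased() would call gestureToSample() and push the result
// into this array, and saveJSON(samples, "gestures.json") would save it.
let samples = [];
samples.push(gestureToSample({ x: 10, y: 100 }, { x: 210, y: 100 }, "right"));
```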
JSON (JavaScript Object Notation) and CSV (Comma-Separated Values) are two popular formats for storing and loading data. JSON stores data in key-value pairs and follows the exact same format as JavaScript objects. CSV is a file format that stores “tabular” data (like a spreadsheet). There are numerous other data formats you could use depending on your needs and the programming environment you are working with.
-
I’ll also note that, much like some of the genetic algorithm demonstrations in chapter 9, I am selecting a problem here that has a known solution and could have been solved more easily and efficiently without a neural network. The direction of a vector can be classified with the heading2D() function and a series of if statements! However, by using this seemingly trivial scenario, I hope to explain the process of training a machine learning model in an understandable and friendly way. Additionally, it will make it easy to check if the code is working as expected! When I’m done I’ll provide some ideas about how to expand the classifier to a scenario where if statements would not apply.
+
I’ll also note that, much like some of the genetic algorithm demonstrations in chapter 9, I am selecting a problem here that has a known solution and could have been solved more easily and efficiently without a neural network. The direction of a vector can be classified with the heading() function and a series of if statements! However, by using this seemingly trivial scenario, I hope to explain the process of training a machine learning model in an understandable and friendly way. Additionally, it will make it easy to check if the code is working as expected! When I’m done I’ll provide some ideas about how to expand the classifier to a scenario where if statements would not apply.
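To make the comparison concrete, here is roughly what that version without a neural network could look like. Since p5.js’s heading() is essentially Math.atan2(y, x) under the hood, plain JavaScript is enough (the exact angle boundaries here are my own choice):

```javascript
// Classify a direction vector with arithmetic instead of a neural net.
// p5's heading() returns Math.atan2(y, x): the vector's angle in radians.
function classifyDirection(x, y) {
  let angle = Math.atan2(y, x); // in the range [-PI, PI]
  if (angle > -Math.PI / 4 && angle <= Math.PI / 4) return "right";
  // In p5.js the y-axis points down, so a positive angle means "down".
  if (angle > Math.PI / 4 && angle <= (3 * Math.PI) / 4) return "down";
  if (angle > -(3 * Math.PI) / 4 && angle <= -Math.PI / 4) return "up";
  return "left";
}
```

Four if statements, no training required—which is exactly why this scenario makes it easy to sanity-check the neural network’s predictions.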
Choosing a Model
This is where I am going to let ml5.js do the heavy lifting for me. To create the model with ml5.js, all I need to do is specify the task, the inputs, and the outputs!
let options = {
@@ -730,7 +839,7 @@
Training
After passing the data into the classifier, ml5.js provides a helper function to normalize it.
// Normalize the data
classifier.normalizeData();
-
As I’ve mentioned, normalizing data (adjusting the scale to a standard range) is a critical step in the machine learning process. However, if you recall during the data collection process, the hand-coded data was written with values that already range between -1 and 1. So, while calling normalizeData() here is likely redundant, it's important to demonstrate. Normalizing your data as part of the pre-processing step will absolutely work, but the auto-normalization feature of ml5.js is a quite convenient alternative.
+
As I’ve mentioned, normalizing data (adjusting the scale to a standard range) is a critical step in the machine learning process. However, during the data collection process, the hand-coded data was written with values that already range between -1 and 1. So, while calling normalizeData() here is likely redundant, it's important to demonstrate the feature. Normalizing your data as part of the pre-processing step will absolutely work, but the auto-normalization feature of ml5.js is a big help!
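Conceptually, normalization rescales each input feature to a standard range. A minimal min-max version of the idea looks like this (a sketch of the concept only; ml5.js’s internals are more involved):

```javascript
// Min-max normalization: map each value of one input feature into the
// range -1 to 1, based on the minimum and maximum found in the data.
function normalize(values) {
  let min = Math.min(...values);
  let max = Math.max(...values);
  // Map min -> -1 and max -> 1 (assumes max > min).
  return values.map((v) => ((v - min) / (max - min)) * 2 - 1);
}

let xs = [0, 5, 10];
console.log(normalize(xs)); // → [-1, 0, 1]
```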
Ok, this subsection is called training. So now it’s time to train! Here’s the code:
// The "train" method initiates the training process
classifier.train(finishedTraining);
@@ -740,26 +849,25 @@
Training
console.log("Training complete!");
}
Yes, that’s it! After all, the hard work has already been completed! The data was collected, prepared, and fed into the model. However, if I were to run the above code and then test the model, the results would probably be inadequate. Here is where it’s important to introduce another key term in machine learning: epoch. The train() method tells the neural network to start the learning process. But how long should it train for? You can think of an epoch as one round of practice, one cycle of using the entire dataset to update the weights of the neural network. Generally speaking, the longer you train, the better the network will perform, but at a certain point there are diminishing returns. The number of epochs can be set by passing an options object into train().
-
-//{!1} Setting the number of epochs for training
+
//{!1} Setting the number of epochs for training
let options = { epochs: 200 };
classifier.train(options, finishedTraining);
There are other "hyperparameters" that you can set in the options variable (learning rate is one example!), but I'm going to stick with the defaults. You can read more about customization options in the ml5.js reference.
-
The second argument, finishedTraining(), is optional, but it's good to include because it's a callback that runs when the training process is complete. This is useful for knowing when you can proceed to the next steps in your code. There is even another optional callback, which I usually name whileTraining(), that is triggered after each epoch. However, for my purposes, knowing when the training is done is plenty!
Callbacks
If you've worked with p5.js, you're already familiar with the concept of a callback even if you don't know it by that name. Think of the mousePressed() function. You define what should happen inside it, and p5.js takes care of calling it at the right moment, when the mouse is pressed.
A callback function in JavaScript operates on a similar principle. It's a function that you provide as an argument to another function, intending for it to be “called back” at a later time. They are needed for “asynchronous” operations, where you want your code to continue along with animating or doing other things while waiting for another task to finish. A classic example of this in p5.js is loading data into a sketch with loadJSON().
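Stripped of p5.js, the pattern looks something like this (loadSomething() is a made-up stand-in for a function like loadJSON()):

```javascript
// A stand-in for an asynchronous loader: instead of returning a value,
// it accepts a callback and invokes it once the "data" is ready.
function loadSomething(callback) {
  let data = { ok: true };
  // A real loader would call back later, after a network request
  // finishes; here the call happens immediately for simplicity.
  callback(data);
}

// The function passed in is "called back" with the result.
loadSomething((data) => {
  console.log("called back with", data);
});
```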
In JavaScript, there's also a more recent approach for handling asynchronous operations known as "Promises." With Promises, you can use keywords like async and await to make your asynchronous code look more like traditional synchronous code. While ml5.js also supports this style, I’ll stick to using callbacks to stay aligned with p5.js style.
+
The second argument, finishedTraining(), is optional, but it's good to include because it's a callback that runs when the training process is complete. This is useful for knowing when you can proceed to the next steps in your code. There is even another optional callback, which I usually name whileTraining(), that is triggered after each epoch. However, for my purposes, knowing when the training is done is plenty!
Evaluation
If debug is set to true in the initial call to ml5.neuralNetwork(), once train() is called, a visual interface appears covering most of the p5.js page and canvas.
-
+
Figure 10.19: The TensorFlow.js “visor” with a graph of the loss function and model details.
This panel, called the "Visor," represents the evaluation step, as shown in Figure 10.19. The Visor is part of TensorFlow.js and includes a graph that provides real-time feedback on the progress of the training. Let’s take a moment to focus on the "loss" plotted on the y-axis against the number of epochs along the x-axis.
-
So, what exactly is this "loss"? Loss is a measure of how far off the model's predictions are from the “correct” outputs provided by the training data. It quantifies the model’s total error. When training begins, it's common for the loss to be high because the model has yet to learn anything. As the model trains through more epochs, it should, ideally, get better at its predictions, and the loss should decrease. If the graph goes down as the epochs increase, this is a good sign!
+
So, what exactly is this "loss"? Loss is a measure of how far off the model's predictions are from the “correct” outputs provided by the training data. It quantifies the model’s total error. When training begins, it's common for the loss to be high because the model has yet to learn anything. As the model trains through more epochs, it should, ideally, get better at its predictions, and the loss should decrease. If the graph goes down as the epochs increase, this is a good sign!
Running the training for 200 epochs might strike you as a bit excessive. In a real-world scenario with more extensive data, I would probably use fewer epochs. However, because the dataset here is so tiny, the higher number of epochs helps the model get enough "practice" with the data. Remember, this is a "toy" example, aiming to make the concepts clear rather than to produce a sophisticated machine learning model.
Below the graph, you will find a "model summary" table that provides details on the lower-level TensorFlow.js model architecture created behind the scenes. The summary includes layer names, neuron counts per layer, and a "parameters" count, which is the total number of weights, one for each connection between two neurons.
Now, before moving on, I’d like to refer back to the data preparation step. There I mentioned the idea of splitting the data between “training,” “validation,” and “testing.”
@@ -768,10 +876,10 @@
Evaluation
validation: subset of data used to check the model during training
testing: additional untouched data, never considered during the training process, used to determine the model’s final performance.
-
With ml5.js, while it’s possible to incorporate all three categories of data. However, I’m simplifying things here and focusing only on the training dataset. After all, my dataset only has 8 records, it’s much too small to divide three different sets! Using such a small dataset risks the model “overfitting” the data. Overfitting is a term that describes when a machine learning model has learned the training data too well. In this case, it’s become so “tuned” to the specific peculiarities of the training data, that is is much less effective when working with new, unseen data. The best way to combat overfitting, is to use validation data during the training process! If it performs well on the training data but poorly on the validation data, it's a strong indicator that overfitting might be occurring.
+
With ml5.js, it’s possible to incorporate all three categories of data. However, I’m simplifying things here and focusing only on the training dataset. After all, my dataset has only 8 records; it’s much too small to divide into three different sets! Using such a small dataset risks the model “overfitting” the data. Overfitting is a term that describes when a machine learning model has learned the training data too well. An overfitted model is so “tuned” to the specific peculiarities of the training data that it is much less effective when working with new, unseen data. The best way to combat overfitting is to use validation data during the training process! If the model performs well on the training data but poorly on the validation data, it's a strong indicator that overfitting might be occurring.
ml5.js provides some automatic features to employ validation data. If you are inclined to go further, you can explore the full set of neural network examples at ml5js.org.
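If you do have enough data, the split itself is simple bookkeeping. Here’s a minimal sketch (the 80/10/10 proportions and the splitData() name are my own, not part of ml5.js):

```javascript
// Split a dataset into training, validation, and testing subsets.
// In practice, shuffle the data first so each subset is representative!
function splitData(data, trainFrac = 0.8, valFrac = 0.1) {
  let trainEnd = Math.floor(data.length * trainFrac);
  let valEnd = trainEnd + Math.floor(data.length * valFrac);
  return {
    training: data.slice(0, trainEnd),
    validation: data.slice(trainEnd, valEnd),
    testing: data.slice(valEnd),
  };
}

// 100 records split 80 / 10 / 10
let records = Array.from({ length: 100 }, (_, i) => i);
let { training, validation, testing } = splitData(records);
```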
Parameter Tuning
-
After the evaluation step, there is typically an iterative process of adjusting "hyperparameters" to achieve the best performance from the model. The ml5.js library is designed to provide a higher-level, user-friendly interface to machine learning. So while it does offer some capabilities for parameter tuning (which you can explore in the reference), it is not as geared towards low-level, fine-grained adjustments as some other frameworks might be. Using TensorFlow.js directly might be your best bet since it offers a broader suite of tools and allows for lower-level control over the training process. For this demonstration—seeing a loss all the way down to 0.1 on the evaluation graph—I am satisfied with the result and happy to move onto deployment!
+
After the evaluation step, there is typically an iterative process of adjusting "hyperparameters" to achieve the best performance from the model. As I keep saying, the ml5.js library is designed to provide a higher-level, user-friendly interface to machine learning. So while it does offer some capabilities for parameter tuning (which you can explore in the reference), it is not as geared toward low-level, fine-grained adjustments as some other frameworks might be. Using TensorFlow.js directly might be your best bet since it offers a broader suite of tools and allows for lower-level control over the training process. For this demonstration, seeing a loss all the way down to 0.1 on the evaluation graph, I am satisfied with the result and happy to move on to deployment!
Deployment
This is it, all that hard work has paid off! Now it’s time to deploy the model. This typically involves integrating it into a separate application to make predictions or decisions based on new, unseen data. For this, ml5.js offers the convenience of a save() and load() function. After all, there’s no reason to re-train a model every single time you use it! You can download the model to a file in one sketch and then load it for use in a completely different one. However, for simplicity, I’m going to demonstrate deploying and utilizing the model in the same sketch where it was trained.
Once the training process is complete, the resulting model is saved in the classifier variable and is, in essence, deployed. You can detect the completion of the training process using the finishedTraining() callback and use a boolean variable or other logic to initiate the prediction stage of the code. For this example, I’ll include a global variable status to track the training process and ultimately display the predicted label on the canvas.
@@ -789,14 +897,14 @@
Deployment
function finishedTraining() {
status = "ready";
}
-
Once the model is trained, the classify() method can be called to send new data into the model for prediction. The format of the data sent to classify() should match the format of the data used in training, in this case two floating point numbers, representing the x and y components of a direction vector.
+
After training, the classify() method can be called to send new data into the model for prediction. The format of the data sent to classify() should match the format of the data used in training, in this case two floating point numbers, representing the x and y components of a direction vector.
// Manually creating a vector
let direction = createVector(1, 0);
// Converting the x and y components into an input array
let inputs = [direction.x, direction.y];
// Asking the model to classify the inputs
classifier.classify(inputs, gotResults);
-
The second argument of the classify() function is a callback. Although it would be more convenient to receive the results immediately and move on to the next line of code, the results are returned later through a separate callback event (just as with model loading and training).
+
The second argument of the classify() function is also a callback. Although it would be more convenient to receive the results immediately and move on to the next line of code, the results are returned later through a separate event (just as with model loading and training).
function gotResults(results) {
console.log(results);
}
@@ -819,7 +927,7 @@
Deployment
"confidence": 0.00029277068097144365
}
]
-
In the example output here, the model is highly confident (approximately 96.7%) that the correct label is "right," while it has minimal confidence in the "left" label, 0.03%. The confidence values are normalized and add up to 100%.
+
In the example output here, the model is highly confident (approximately 96.7%) that the correct label is "right," while it has minimal confidence in the "left" label, approximately 0.03%. The confidence values are normalized and add up to 100%.
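That “adds up to 100%” property typically comes from a softmax function applied to the model’s output layer. Here’s a minimal version of the idea (a sketch of the concept, not ml5.js’s actual code):

```javascript
// Softmax: turn a vector of raw output scores into confidences
// that are all positive and sum to 1.
function softmax(scores) {
  // Subtract the maximum score first for numerical stability.
  let max = Math.max(...scores);
  let exps = scores.map((s) => Math.exp(s - max));
  let sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// The highest raw score yields the highest confidence.
let confidences = softmax([2.0, 0.5, -1.0, -1.5]);
```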
Example 10.2: Gesture Classifier
@@ -827,8 +935,7 @@
Example 10.2: Gesture Classifier
-
-// Storing the start of a gesture when the mouse is pressed
+
// Storing the start of a gesture when the mouse is pressed
function mousePressed() {
start = createVector(mouseX, mouseY);
}
@@ -863,12 +970,20 @@
Exercise 10.5
Exercise 10.6
-
[Exercise around hand pose classifier?]
+
One of the pre-trained models in ml5.js is called “handpose.” The input of the model is an image and the prediction is a list of 21 keypoints (x,y positions, also known as “landmarks”) that describe a hand.
+
+
+
+
+
Can you use the output of the ml5.handpose() model as the inputs to an ml5.neuralNetwork() and classify different hand gestures (like a thumbs-up or thumbs-down)? For hints, you can watch my video tutorial that walks you through this process for body poses in the machine learning track on thecodingtrain.com.
-
+
The Ecosystem Project
Step 10 Exercise:
-
???????????????
-
+
Incorporate machine learning into your ecosystem to enhance the behavior of creatures. How could classification or regression be applied?
+
+
Can you classify the creatures of your ecosystem into different categories? What if you use an initial population as a training dataset, and as new creatures are born, the system classifies them according to their features? What are the inputs and outputs for your system?
+
Can you use a regression to predict the lifespan of a creature based on its properties? Think about the bloops: could you then analyze how well the regression model’s predictions align with the actual outcomes?
+
\ No newline at end of file
diff --git a/content/11_nn_ga.html b/content/11_nn_ga.html
index 9613009b..85a909bc 100644
--- a/content/11_nn_ga.html
+++ b/content/11_nn_ga.html
@@ -1,11 +1,24 @@
Chapter 11. NeuroEvolution
+
+
“quote”
+
— name
+
+
+
+
+
+
+
TITLE
+
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
+
credit / url
+
There is so much more to working with data, machine learning, ml5.js, and beyond. I’ve only scratched the surface. As I close out this book, my goal is to tie the foundational machine learning concepts I’ve covered back into animated, interactive p5.js sketches that simulate physics and complex systems. Let’s see if I can bring as many concepts from the entire book back together for one last hurrah!
Reinforcement Learning
Towards the start of this chapter, I referenced an approach to incorporating machine learning into a simulated environment called “reinforcement learning.” Imagine embedding a neural network into any of the example objects (walker, mover, particle, vehicle) and calculating a force or some other action. The neural network could receive inputs related to the environment (such as distance to an obstacle) and produce a decision that requires a choice from a set of discrete options (e.g., move “left” or “right”) or a set of continuous values (e.g., magnitude and direction of a steering force). This is starting to sound familiar: it’s a neural network that receives inputs and performs classification or regression!
Here is where things take a turn, however. To better illustrate the concept, let’s start with a hopefully easy to understand and possibly familiar scenario, the game “Flappy Bird.” The game is deceptively simple. You control a small bird that continually moves horizontally across the screen. With each tap or click, the bird flaps its wings and rises upward. The challenge? A series of vertical pipes spaced apart at irregular intervals emerges from the right. The pipes have gaps, and your primary objective is to navigate the bird safely through these gaps. If you hit one, it’s game over. As you progress, the game’s speed increases, and the more pipes you navigate, the higher your score.
Suppose you wanted to automate the gameplay so that, instead of a human tapping, a neural network makes the decision as to whether to “flap” or not. Could machine learning work here? Skipping over the “data” steps for a moment, let’s think about “choosing a model.” What are the inputs and outputs of the neural network?
These are the inputs to the neural network. But what about the outputs? Is the problem a "classification" or "regression" one? This may seem like an odd question to ask in the context of a game like Flappy Bird, but it's actually incredibly important and relates to how the game is controlled. Tapping the screen, pressing a button, or using keyboard controls are all examples of classification. After all, there is only a discrete set of choices: tap or not, press 'w', 'a', 's', or 'd' on the keyboard. On the other hand, using an analog controller like a joystick leans towards regression. A joystick can be tilted in varying degrees in any direction, translating to continuous output values for both its horizontal and vertical axes.
@@ -29,7 +42,7 @@
Reinforcement Learning
don’t flap
-
+
Figure 10.22: The neural network as ml5.js might design it
This gives me the information needed to choose the model and I can let ml5.js build it.
@@ -604,7 +617,7 @@
Neuroevolution Ecosystem
A common approach in reinforcement learning simulations is to attach sensors to an agent. For example, consider a simulated mouse in a maze searching for cheese in the dark. Its whiskers might act as proximity sensors to detect walls and turns. The mouse can’t see the entire maze, only its immediate surroundings. Another example is a bat using echolocation to navigate, or a car on a winding road that can only see what is projected in front of its headlights.
I’d like to build on this idea of the whiskers (or more formally the “vibrissae”) found in mice, cats, and other mammals. In the real world, animals use their vibrissae to navigate and detect nearby objects, especially in dark or obscured environments.
I’ll keep the generic class name Creature but think of them now as the circular “bloops” of chapter 9, enhanced with whisker-like sensors that emanate from their center in all directions.
@@ -643,7 +656,7 @@
Neuroevolution Ecosystem
A Food object is a circle drawn according to a position and radius. I’ll assume the creature in my simulation has no vision and relies on sensors to detect if there is food nearby. This raises the question: how can I determine if a sensor is touching the food? One approach is to use a technique called “raycasting.” This method is commonly employed in computer graphics to project rays (often representing light) from an origin point in a scene to determine what objects they intersect with. Raycasting is useful for visibility and collision checks, exactly what I am doing here!
Although raycasting is a robust solution, it requires more involved mathematics than I'd like to delve into here. For those interested, an explanation and implementation are available in Coding Challenge #145 on thecodingtrain.com. For the example now, I will opt for a more straightforward approach and check whether the endpoint of a sensor lies inside the food circle.
-
+
Figure 10.x: Endpoint of sensor is inside or outside of the food based on distance to center of food.
As I want each sensor to store a sensed value along with the sensing algorithm itself, it makes sense to encapsulate these elements into a Sensor class.
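Here’s a minimal sketch of what such a class might contain, with the p5.js drawing code left out (the property and method names here are placeholders of my own, not final code):

```javascript
// A sensor is a line segment extending from the creature's center.
// Its "value" records whether its endpoint currently touches food.
class Sensor {
  constructor(angle, length) {
    this.angle = angle;   // direction the sensor points
    this.length = length; // how far the sensor reaches
    this.value = 0;       // 1 when sensing food, 0 otherwise
  }

  // position: the creature's center; food: { x, y, r }
  sense(position, food) {
    // Endpoint of the sensor in world coordinates
    let ex = position.x + Math.cos(this.angle) * this.length;
    let ey = position.y + Math.sin(this.angle) * this.length;
    // The endpoint is "inside" the food if its distance to the
    // food's center is less than the food's radius.
    let d = Math.hypot(ex - food.x, ey - food.y);
    this.value = d < food.r ? 1 : 0;
    return this.value;
  }
}
```

The array of sensor values can then serve double duty: drawn to the canvas for debugging, and fed to the creature’s neural network as inputs.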
diff --git a/content/examples/04_particles/4_4_emitters_1/screenshot.png b/content/examples/04_particles/4_4_emitters_1/screenshot.png
index 7246b0fa..71efc48e 100644
Binary files a/content/examples/04_particles/4_4_emitters_1/screenshot.png and b/content/examples/04_particles/4_4_emitters_1/screenshot.png differ
diff --git a/content/examples/06_libraries/6_1_default_matter_js/sketch.js b/content/examples/06_libraries/6_1_default_matter_js/sketch.js
index a786d6d1..6a33265d 100644
--- a/content/examples/06_libraries/6_1_default_matter_js/sketch.js
+++ b/content/examples/06_libraries/6_1_default_matter_js/sketch.js
@@ -3,7 +3,7 @@
// http://natureofcode.com
// Aliases
-const { Engine, Bodies, Composite, Body, Vector } = Matter;
+const { Engine, Bodies, Composite, Body, Vector, Render } = Matter;
function setup() {
let canvas = createCanvas(640, 240);
@@ -17,7 +17,7 @@ function setup() {
engine,
options: { width, height },
});
- Matter.Render.run(render);
+ Render.run(render);
// Create the box
let options = {
diff --git a/content/images/00_randomness/00_randomness_1.png b/content/images/00_randomness/00_randomness_1.png
index 8c30dc0f..1b80a7a6 100644
Binary files a/content/images/00_randomness/00_randomness_1.png and b/content/images/00_randomness/00_randomness_1.png differ
diff --git a/content/images/00_randomness/00_randomness_2.png b/content/images/00_randomness/00_randomness_2.png
index 43a4a53b..8c30dc0f 100644
Binary files a/content/images/00_randomness/00_randomness_2.png and b/content/images/00_randomness/00_randomness_2.png differ
diff --git a/content/images/00_randomness/00_randomness_3.png b/content/images/00_randomness/00_randomness_3.png
index 33339d69..43a4a53b 100644
Binary files a/content/images/00_randomness/00_randomness_3.png and b/content/images/00_randomness/00_randomness_3.png differ
diff --git a/content/images/00_randomness/00_randomness_4.png b/content/images/00_randomness/00_randomness_4.png
index 354b4b35..33339d69 100644
Binary files a/content/images/00_randomness/00_randomness_4.png and b/content/images/00_randomness/00_randomness_4.png differ
diff --git a/content/images/00_randomness/00_randomness_5.png b/content/images/00_randomness/00_randomness_5.png
index eca1e06c..354b4b35 100644
Binary files a/content/images/00_randomness/00_randomness_5.png and b/content/images/00_randomness/00_randomness_5.png differ
diff --git a/content/images/00_randomness/00_randomness_6.png b/content/images/00_randomness/00_randomness_6.png
index 505ddcb6..eca1e06c 100644
Binary files a/content/images/00_randomness/00_randomness_6.png and b/content/images/00_randomness/00_randomness_6.png differ
diff --git a/content/images/00_randomness/00_randomness_7.png b/content/images/00_randomness/00_randomness_7.png
index be233e33..505ddcb6 100644
Binary files a/content/images/00_randomness/00_randomness_7.png and b/content/images/00_randomness/00_randomness_7.png differ
diff --git a/content/images/00_randomness/00_randomness_8.png b/content/images/00_randomness/00_randomness_8.png
index b63c849a..be233e33 100644
Binary files a/content/images/00_randomness/00_randomness_8.png and b/content/images/00_randomness/00_randomness_8.png differ
diff --git a/content/images/01_vectors/01_vectors_1.png b/content/images/01_vectors/01_vectors_1.png
index f27d4879..1b80a7a6 100644
Binary files a/content/images/01_vectors/01_vectors_1.png and b/content/images/01_vectors/01_vectors_1.png differ
diff --git a/content/images/01_vectors/01_vectors_10.png b/content/images/01_vectors/01_vectors_10.png
index e68c347d..eb2a5410 100644
Binary files a/content/images/01_vectors/01_vectors_10.png and b/content/images/01_vectors/01_vectors_10.png differ
diff --git a/content/images/01_vectors/01_vectors_11.png b/content/images/01_vectors/01_vectors_11.png
index 1c84b0b7..e68c347d 100644
Binary files a/content/images/01_vectors/01_vectors_11.png and b/content/images/01_vectors/01_vectors_11.png differ
diff --git a/content/images/01_vectors/01_vectors_12.png b/content/images/01_vectors/01_vectors_12.png
index dbe74fd8..1c84b0b7 100644
Binary files a/content/images/01_vectors/01_vectors_12.png and b/content/images/01_vectors/01_vectors_12.png differ
diff --git a/content/images/01_vectors/01_vectors_13.png b/content/images/01_vectors/01_vectors_13.png
index 78cfe530..dbe74fd8 100644
Binary files a/content/images/01_vectors/01_vectors_13.png and b/content/images/01_vectors/01_vectors_13.png differ
diff --git a/content/images/01_vectors/01_vectors_14.png b/content/images/01_vectors/01_vectors_14.png
index 115ea777..78cfe530 100644
Binary files a/content/images/01_vectors/01_vectors_14.png and b/content/images/01_vectors/01_vectors_14.png differ
diff --git a/content/images/01_vectors/01_vectors_15.png b/content/images/01_vectors/01_vectors_15.png
index b54fc1d8..115ea777 100644
Binary files a/content/images/01_vectors/01_vectors_15.png and b/content/images/01_vectors/01_vectors_15.png differ
diff --git a/content/images/01_vectors/01_vectors_16.png b/content/images/01_vectors/01_vectors_16.png
index 0731f6cb..b54fc1d8 100644
Binary files a/content/images/01_vectors/01_vectors_16.png and b/content/images/01_vectors/01_vectors_16.png differ
diff --git a/content/images/01_vectors/01_vectors_17.png b/content/images/01_vectors/01_vectors_17.png
index 83de1dd8..0731f6cb 100644
Binary files a/content/images/01_vectors/01_vectors_17.png and b/content/images/01_vectors/01_vectors_17.png differ
diff --git a/content/images/01_vectors/01_vectors_2.png b/content/images/01_vectors/01_vectors_2.png
index b272c456..f27d4879 100644
Binary files a/content/images/01_vectors/01_vectors_2.png and b/content/images/01_vectors/01_vectors_2.png differ
diff --git a/content/images/01_vectors/01_vectors_3.png b/content/images/01_vectors/01_vectors_3.png
index 64e7d1df..b272c456 100644
Binary files a/content/images/01_vectors/01_vectors_3.png and b/content/images/01_vectors/01_vectors_3.png differ
diff --git a/content/images/01_vectors/01_vectors_4.png b/content/images/01_vectors/01_vectors_4.png
index 64f983ce..64e7d1df 100644
Binary files a/content/images/01_vectors/01_vectors_4.png and b/content/images/01_vectors/01_vectors_4.png differ
diff --git a/content/images/01_vectors/01_vectors_5.png b/content/images/01_vectors/01_vectors_5.png
index 3aee4855..64f983ce 100644
Binary files a/content/images/01_vectors/01_vectors_5.png and b/content/images/01_vectors/01_vectors_5.png differ
diff --git a/content/images/01_vectors/01_vectors_6.png b/content/images/01_vectors/01_vectors_6.png
index ae49c4c4..3aee4855 100644
Binary files a/content/images/01_vectors/01_vectors_6.png and b/content/images/01_vectors/01_vectors_6.png differ
diff --git a/content/images/01_vectors/01_vectors_7.png b/content/images/01_vectors/01_vectors_7.png
index 4b43a67e..ae49c4c4 100644
Binary files a/content/images/01_vectors/01_vectors_7.png and b/content/images/01_vectors/01_vectors_7.png differ
diff --git a/content/images/01_vectors/01_vectors_8.png b/content/images/01_vectors/01_vectors_8.png
index e750c1e1..4b43a67e 100644
Binary files a/content/images/01_vectors/01_vectors_8.png and b/content/images/01_vectors/01_vectors_8.png differ
diff --git a/content/images/01_vectors/01_vectors_9.png b/content/images/01_vectors/01_vectors_9.png
index eb2a5410..e750c1e1 100644
Binary files a/content/images/01_vectors/01_vectors_9.png and b/content/images/01_vectors/01_vectors_9.png differ
diff --git a/content/images/02_forces/02_forces_1.png b/content/images/02_forces/02_forces_1.png
index 069de90f..1b80a7a6 100644
Binary files a/content/images/02_forces/02_forces_1.png and b/content/images/02_forces/02_forces_1.png differ
diff --git a/content/images/02_forces/02_forces_10.png b/content/images/02_forces/02_forces_10.png
index dc900787..9fed533c 100644
Binary files a/content/images/02_forces/02_forces_10.png and b/content/images/02_forces/02_forces_10.png differ
diff --git a/content/images/02_forces/02_forces_11.jpg b/content/images/02_forces/02_forces_11.jpg
new file mode 100644
index 00000000..445edbe2
Binary files /dev/null and b/content/images/02_forces/02_forces_11.jpg differ
diff --git a/content/images/02_forces/02_forces_12.png b/content/images/02_forces/02_forces_12.png
new file mode 100644
index 00000000..dc900787
Binary files /dev/null and b/content/images/02_forces/02_forces_12.png differ
diff --git a/content/images/02_forces/02_forces_2.png b/content/images/02_forces/02_forces_2.png
index fb65c6cc..069de90f 100644
Binary files a/content/images/02_forces/02_forces_2.png and b/content/images/02_forces/02_forces_2.png differ
diff --git a/content/images/02_forces/02_forces_3.png b/content/images/02_forces/02_forces_3.png
index 629e79b1..fb65c6cc 100644
Binary files a/content/images/02_forces/02_forces_3.png and b/content/images/02_forces/02_forces_3.png differ
diff --git a/content/images/02_forces/02_forces_4.png b/content/images/02_forces/02_forces_4.png
index a4a1bc2a..629e79b1 100644
Binary files a/content/images/02_forces/02_forces_4.png and b/content/images/02_forces/02_forces_4.png differ
diff --git a/content/images/02_forces/02_forces_5.png b/content/images/02_forces/02_forces_5.png
index ba121aac..a4a1bc2a 100644
Binary files a/content/images/02_forces/02_forces_5.png and b/content/images/02_forces/02_forces_5.png differ
diff --git a/content/images/02_forces/02_forces_6.png b/content/images/02_forces/02_forces_6.png
index 08eee66b..ba121aac 100644
Binary files a/content/images/02_forces/02_forces_6.png and b/content/images/02_forces/02_forces_6.png differ
diff --git a/content/images/02_forces/02_forces_7.png b/content/images/02_forces/02_forces_7.png
index 950955dd..08eee66b 100644
Binary files a/content/images/02_forces/02_forces_7.png and b/content/images/02_forces/02_forces_7.png differ
diff --git a/content/images/02_forces/02_forces_8.png b/content/images/02_forces/02_forces_8.png
index c920f38b..950955dd 100644
Binary files a/content/images/02_forces/02_forces_8.png and b/content/images/02_forces/02_forces_8.png differ
diff --git a/content/images/02_forces/02_forces_9.png b/content/images/02_forces/02_forces_9.png
index 9fed533c..c920f38b 100644
Binary files a/content/images/02_forces/02_forces_9.png and b/content/images/02_forces/02_forces_9.png differ
diff --git a/content/images/03_oscillation/03_oscillation_1.png b/content/images/03_oscillation/03_oscillation_1.png
index edd3349f..1b80a7a6 100644
Binary files a/content/images/03_oscillation/03_oscillation_1.png and b/content/images/03_oscillation/03_oscillation_1.png differ
diff --git a/content/images/03_oscillation/03_oscillation_10.png b/content/images/03_oscillation/03_oscillation_10.png
index e5eb9275..f3f8325d 100644
Binary files a/content/images/03_oscillation/03_oscillation_10.png and b/content/images/03_oscillation/03_oscillation_10.png differ
diff --git a/content/images/03_oscillation/03_oscillation_11.png b/content/images/03_oscillation/03_oscillation_11.png
index 2e2d8222..e5eb9275 100644
Binary files a/content/images/03_oscillation/03_oscillation_11.png and b/content/images/03_oscillation/03_oscillation_11.png differ
diff --git a/content/images/03_oscillation/03_oscillation_12.png b/content/images/03_oscillation/03_oscillation_12.png
index 8736c21e..2e2d8222 100644
Binary files a/content/images/03_oscillation/03_oscillation_12.png and b/content/images/03_oscillation/03_oscillation_12.png differ
diff --git a/content/images/03_oscillation/03_oscillation_13.png b/content/images/03_oscillation/03_oscillation_13.png
index cf4d11bd..8736c21e 100644
Binary files a/content/images/03_oscillation/03_oscillation_13.png and b/content/images/03_oscillation/03_oscillation_13.png differ
diff --git a/content/images/03_oscillation/03_oscillation_14.png b/content/images/03_oscillation/03_oscillation_14.png
index 01c88578..cf4d11bd 100644
Binary files a/content/images/03_oscillation/03_oscillation_14.png and b/content/images/03_oscillation/03_oscillation_14.png differ
diff --git a/content/images/03_oscillation/03_oscillation_15.png b/content/images/03_oscillation/03_oscillation_15.png
index 366abb5f..01c88578 100644
Binary files a/content/images/03_oscillation/03_oscillation_15.png and b/content/images/03_oscillation/03_oscillation_15.png differ
diff --git a/content/images/03_oscillation/03_oscillation_16.png b/content/images/03_oscillation/03_oscillation_16.png
index 1f9a9c55..366abb5f 100644
Binary files a/content/images/03_oscillation/03_oscillation_16.png and b/content/images/03_oscillation/03_oscillation_16.png differ
diff --git a/content/images/03_oscillation/03_oscillation_17.png b/content/images/03_oscillation/03_oscillation_17.png
index 9302bddf..1f9a9c55 100644
Binary files a/content/images/03_oscillation/03_oscillation_17.png and b/content/images/03_oscillation/03_oscillation_17.png differ
diff --git a/content/images/03_oscillation/03_oscillation_18.png b/content/images/03_oscillation/03_oscillation_18.png
index 681fb022..9302bddf 100644
Binary files a/content/images/03_oscillation/03_oscillation_18.png and b/content/images/03_oscillation/03_oscillation_18.png differ
diff --git a/content/images/03_oscillation/03_oscillation_19.png b/content/images/03_oscillation/03_oscillation_19.png
index eaff8ee2..681fb022 100644
Binary files a/content/images/03_oscillation/03_oscillation_19.png and b/content/images/03_oscillation/03_oscillation_19.png differ
diff --git a/content/images/03_oscillation/03_oscillation_2.png b/content/images/03_oscillation/03_oscillation_2.png
index c0fc52e9..40efc004 100644
Binary files a/content/images/03_oscillation/03_oscillation_2.png and b/content/images/03_oscillation/03_oscillation_2.png differ
diff --git a/content/images/03_oscillation/03_oscillation_20.png b/content/images/03_oscillation/03_oscillation_20.png
index c86fb22d..eaff8ee2 100644
Binary files a/content/images/03_oscillation/03_oscillation_20.png and b/content/images/03_oscillation/03_oscillation_20.png differ
diff --git a/content/images/03_oscillation/03_oscillation_21.png b/content/images/03_oscillation/03_oscillation_21.png
index 91153c90..c86fb22d 100644
Binary files a/content/images/03_oscillation/03_oscillation_21.png and b/content/images/03_oscillation/03_oscillation_21.png differ
diff --git a/content/images/03_oscillation/03_oscillation_22.png b/content/images/03_oscillation/03_oscillation_22.png
new file mode 100644
index 00000000..91153c90
Binary files /dev/null and b/content/images/03_oscillation/03_oscillation_22.png differ
diff --git a/content/images/03_oscillation/03_oscillation_3.png b/content/images/03_oscillation/03_oscillation_3.png
index b0a7c82a..c0fc52e9 100644
Binary files a/content/images/03_oscillation/03_oscillation_3.png and b/content/images/03_oscillation/03_oscillation_3.png differ
diff --git a/content/images/03_oscillation/03_oscillation_4.png b/content/images/03_oscillation/03_oscillation_4.png
index 4253a5c9..b0a7c82a 100644
Binary files a/content/images/03_oscillation/03_oscillation_4.png and b/content/images/03_oscillation/03_oscillation_4.png differ
diff --git a/content/images/03_oscillation/03_oscillation_5.png b/content/images/03_oscillation/03_oscillation_5.png
index 8375bfa2..4253a5c9 100644
Binary files a/content/images/03_oscillation/03_oscillation_5.png and b/content/images/03_oscillation/03_oscillation_5.png differ
diff --git a/content/images/03_oscillation/03_oscillation_6.png b/content/images/03_oscillation/03_oscillation_6.png
index 96eb94e0..8375bfa2 100644
Binary files a/content/images/03_oscillation/03_oscillation_6.png and b/content/images/03_oscillation/03_oscillation_6.png differ
diff --git a/content/images/03_oscillation/03_oscillation_7.png b/content/images/03_oscillation/03_oscillation_7.png
index 20ff282f..96eb94e0 100644
Binary files a/content/images/03_oscillation/03_oscillation_7.png and b/content/images/03_oscillation/03_oscillation_7.png differ
diff --git a/content/images/03_oscillation/03_oscillation_8.png b/content/images/03_oscillation/03_oscillation_8.png
index 1c5c18e5..20ff282f 100644
Binary files a/content/images/03_oscillation/03_oscillation_8.png and b/content/images/03_oscillation/03_oscillation_8.png differ
diff --git a/content/images/03_oscillation/03_oscillation_9.png b/content/images/03_oscillation/03_oscillation_9.png
index f3f8325d..1c5c18e5 100644
Binary files a/content/images/03_oscillation/03_oscillation_9.png and b/content/images/03_oscillation/03_oscillation_9.png differ
diff --git a/content/images/04_particles/04_particles_1.png b/content/images/04_particles/04_particles_1.png
index 279a9505..1b80a7a6 100644
Binary files a/content/images/04_particles/04_particles_1.png and b/content/images/04_particles/04_particles_1.png differ
diff --git a/content/images/04_particles/04_particles_2.png b/content/images/04_particles/04_particles_2.png
index 0ef9971b..279a9505 100644
Binary files a/content/images/04_particles/04_particles_2.png and b/content/images/04_particles/04_particles_2.png differ
diff --git a/content/images/04_particles/04_particles_3.png b/content/images/04_particles/04_particles_3.png
index d4fecb95..0ef9971b 100644
Binary files a/content/images/04_particles/04_particles_3.png and b/content/images/04_particles/04_particles_3.png differ
diff --git a/content/images/04_particles/04_particles_4.png b/content/images/04_particles/04_particles_4.png
index 3449fe72..d4fecb95 100644
Binary files a/content/images/04_particles/04_particles_4.png and b/content/images/04_particles/04_particles_4.png differ
diff --git a/content/images/04_particles/04_particles_5.png b/content/images/04_particles/04_particles_5.png
index 776a6cbe..3449fe72 100644
Binary files a/content/images/04_particles/04_particles_5.png and b/content/images/04_particles/04_particles_5.png differ
diff --git a/content/images/04_particles/04_particles_6.png b/content/images/04_particles/04_particles_6.png
index bc2e8f3f..776a6cbe 100644
Binary files a/content/images/04_particles/04_particles_6.png and b/content/images/04_particles/04_particles_6.png differ
diff --git a/content/images/05_steering/05_steering_1.png b/content/images/05_steering/05_steering_1.png
index 14b8d029..1b80a7a6 100644
Binary files a/content/images/05_steering/05_steering_1.png and b/content/images/05_steering/05_steering_1.png differ
diff --git a/content/images/05_steering/05_steering_10.png b/content/images/05_steering/05_steering_10.png
index a04d16ac..29198fc1 100644
Binary files a/content/images/05_steering/05_steering_10.png and b/content/images/05_steering/05_steering_10.png differ
diff --git a/content/images/05_steering/05_steering_11.png b/content/images/05_steering/05_steering_11.png
index 72c23ffb..a04d16ac 100644
Binary files a/content/images/05_steering/05_steering_11.png and b/content/images/05_steering/05_steering_11.png differ
diff --git a/content/images/05_steering/05_steering_12.png b/content/images/05_steering/05_steering_12.png
index 7f0fbe39..72c23ffb 100644
Binary files a/content/images/05_steering/05_steering_12.png and b/content/images/05_steering/05_steering_12.png differ
diff --git a/content/images/05_steering/05_steering_13.png b/content/images/05_steering/05_steering_13.png
index d78127da..7f0fbe39 100644
Binary files a/content/images/05_steering/05_steering_13.png and b/content/images/05_steering/05_steering_13.png differ
diff --git a/content/images/05_steering/05_steering_14.png b/content/images/05_steering/05_steering_14.png
index ed4646dc..d78127da 100644
Binary files a/content/images/05_steering/05_steering_14.png and b/content/images/05_steering/05_steering_14.png differ
diff --git a/content/images/05_steering/05_steering_15.png b/content/images/05_steering/05_steering_15.png
index 99f548bf..ed4646dc 100644
Binary files a/content/images/05_steering/05_steering_15.png and b/content/images/05_steering/05_steering_15.png differ
diff --git a/content/images/05_steering/05_steering_16.png b/content/images/05_steering/05_steering_16.png
index ebcb5db5..99f548bf 100644
Binary files a/content/images/05_steering/05_steering_16.png and b/content/images/05_steering/05_steering_16.png differ
diff --git a/content/images/05_steering/05_steering_17.png b/content/images/05_steering/05_steering_17.png
index 05ae45e5..ebcb5db5 100644
Binary files a/content/images/05_steering/05_steering_17.png and b/content/images/05_steering/05_steering_17.png differ
diff --git a/content/images/05_steering/05_steering_18.png b/content/images/05_steering/05_steering_18.png
index 3d7e02f4..05ae45e5 100644
Binary files a/content/images/05_steering/05_steering_18.png and b/content/images/05_steering/05_steering_18.png differ
diff --git a/content/images/05_steering/05_steering_19.png b/content/images/05_steering/05_steering_19.png
index 6dfd625e..3d7e02f4 100644
Binary files a/content/images/05_steering/05_steering_19.png and b/content/images/05_steering/05_steering_19.png differ
diff --git a/content/images/05_steering/05_steering_2.png b/content/images/05_steering/05_steering_2.png
index e382150e..14b8d029 100644
Binary files a/content/images/05_steering/05_steering_2.png and b/content/images/05_steering/05_steering_2.png differ
diff --git a/content/images/05_steering/05_steering_20.png b/content/images/05_steering/05_steering_20.png
index 91153c90..6dfd625e 100644
Binary files a/content/images/05_steering/05_steering_20.png and b/content/images/05_steering/05_steering_20.png differ
diff --git a/content/images/05_steering/05_steering_21.png b/content/images/05_steering/05_steering_21.png
index 6205aa97..91153c90 100644
Binary files a/content/images/05_steering/05_steering_21.png and b/content/images/05_steering/05_steering_21.png differ
diff --git a/content/images/05_steering/05_steering_22.png b/content/images/05_steering/05_steering_22.png
index 87b3c69d..6205aa97 100644
Binary files a/content/images/05_steering/05_steering_22.png and b/content/images/05_steering/05_steering_22.png differ
diff --git a/content/images/05_steering/05_steering_23.png b/content/images/05_steering/05_steering_23.png
index 21ffdfc2..87b3c69d 100644
Binary files a/content/images/05_steering/05_steering_23.png and b/content/images/05_steering/05_steering_23.png differ
diff --git a/content/images/05_steering/05_steering_24.png b/content/images/05_steering/05_steering_24.png
index e3a8b404..21ffdfc2 100644
Binary files a/content/images/05_steering/05_steering_24.png and b/content/images/05_steering/05_steering_24.png differ
diff --git a/content/images/05_steering/05_steering_25.png b/content/images/05_steering/05_steering_25.png
index dbea5481..e3a8b404 100644
Binary files a/content/images/05_steering/05_steering_25.png and b/content/images/05_steering/05_steering_25.png differ
diff --git a/content/images/05_steering/05_steering_26.png b/content/images/05_steering/05_steering_26.png
index f1f4d1a2..dbea5481 100644
Binary files a/content/images/05_steering/05_steering_26.png and b/content/images/05_steering/05_steering_26.png differ
diff --git a/content/images/05_steering/05_steering_27.png b/content/images/05_steering/05_steering_27.png
index af874679..f1f4d1a2 100644
Binary files a/content/images/05_steering/05_steering_27.png and b/content/images/05_steering/05_steering_27.png differ
diff --git a/content/images/05_steering/05_steering_28.png b/content/images/05_steering/05_steering_28.png
index 293a923c..af874679 100644
Binary files a/content/images/05_steering/05_steering_28.png and b/content/images/05_steering/05_steering_28.png differ
diff --git a/content/images/05_steering/05_steering_29.png b/content/images/05_steering/05_steering_29.png
index d2c3bd6b..293a923c 100644
Binary files a/content/images/05_steering/05_steering_29.png and b/content/images/05_steering/05_steering_29.png differ
diff --git a/content/images/05_steering/05_steering_3.png b/content/images/05_steering/05_steering_3.png
index 3590c785..e382150e 100644
Binary files a/content/images/05_steering/05_steering_3.png and b/content/images/05_steering/05_steering_3.png differ
diff --git a/content/images/05_steering/05_steering_30.png b/content/images/05_steering/05_steering_30.png
index 36bcf2f6..d2c3bd6b 100644
Binary files a/content/images/05_steering/05_steering_30.png and b/content/images/05_steering/05_steering_30.png differ
diff --git a/content/images/05_steering/05_steering_31.png b/content/images/05_steering/05_steering_31.png
index 4f835c41..36bcf2f6 100644
Binary files a/content/images/05_steering/05_steering_31.png and b/content/images/05_steering/05_steering_31.png differ
diff --git a/content/images/05_steering/05_steering_32.png b/content/images/05_steering/05_steering_32.png
index 2e678cff..4f835c41 100644
Binary files a/content/images/05_steering/05_steering_32.png and b/content/images/05_steering/05_steering_32.png differ
diff --git a/content/images/05_steering/05_steering_33.png b/content/images/05_steering/05_steering_33.png
index 7787fbfc..2e678cff 100644
Binary files a/content/images/05_steering/05_steering_33.png and b/content/images/05_steering/05_steering_33.png differ
diff --git a/content/images/05_steering/05_steering_34.png b/content/images/05_steering/05_steering_34.png
index 16516da3..7787fbfc 100644
Binary files a/content/images/05_steering/05_steering_34.png and b/content/images/05_steering/05_steering_34.png differ
diff --git a/content/images/05_steering/05_steering_35.png b/content/images/05_steering/05_steering_35.png
index 00d07f27..16516da3 100644
Binary files a/content/images/05_steering/05_steering_35.png and b/content/images/05_steering/05_steering_35.png differ
diff --git a/content/images/05_steering/05_steering_36.png b/content/images/05_steering/05_steering_36.png
index 1d4d1bc0..00d07f27 100644
Binary files a/content/images/05_steering/05_steering_36.png and b/content/images/05_steering/05_steering_36.png differ
diff --git a/content/images/05_steering/05_steering_37.png b/content/images/05_steering/05_steering_37.png
index 4e8cd702..1d4d1bc0 100644
Binary files a/content/images/05_steering/05_steering_37.png and b/content/images/05_steering/05_steering_37.png differ
diff --git a/content/images/05_steering/05_steering_38.png b/content/images/05_steering/05_steering_38.png
index 71688600..4e8cd702 100644
Binary files a/content/images/05_steering/05_steering_38.png and b/content/images/05_steering/05_steering_38.png differ
diff --git a/content/images/05_steering/05_steering_39.png b/content/images/05_steering/05_steering_39.png
index 91153c90..71688600 100644
Binary files a/content/images/05_steering/05_steering_39.png and b/content/images/05_steering/05_steering_39.png differ
diff --git a/content/images/05_steering/05_steering_4.png b/content/images/05_steering/05_steering_4.png
index e7d0f8ef..3590c785 100644
Binary files a/content/images/05_steering/05_steering_4.png and b/content/images/05_steering/05_steering_4.png differ
diff --git a/content/images/05_steering/05_steering_40.png b/content/images/05_steering/05_steering_40.png
index 86091b7c..91153c90 100644
Binary files a/content/images/05_steering/05_steering_40.png and b/content/images/05_steering/05_steering_40.png differ
diff --git a/content/images/05_steering/05_steering_41.png b/content/images/05_steering/05_steering_41.png
index 91153c90..86091b7c 100644
Binary files a/content/images/05_steering/05_steering_41.png and b/content/images/05_steering/05_steering_41.png differ
diff --git a/content/images/05_steering/05_steering_42.png b/content/images/05_steering/05_steering_42.png
index d166ceb5..91153c90 100644
Binary files a/content/images/05_steering/05_steering_42.png and b/content/images/05_steering/05_steering_42.png differ
diff --git a/content/images/05_steering/05_steering_43.png b/content/images/05_steering/05_steering_43.png
index 91153c90..d166ceb5 100644
Binary files a/content/images/05_steering/05_steering_43.png and b/content/images/05_steering/05_steering_43.png differ
diff --git a/content/images/05_steering/05_steering_5.png b/content/images/05_steering/05_steering_5.png
index f7ba512c..e7d0f8ef 100644
Binary files a/content/images/05_steering/05_steering_5.png and b/content/images/05_steering/05_steering_5.png differ
diff --git a/content/images/05_steering/05_steering_6.png b/content/images/05_steering/05_steering_6.png
index e4101a58..f7ba512c 100644
Binary files a/content/images/05_steering/05_steering_6.png and b/content/images/05_steering/05_steering_6.png differ
diff --git a/content/images/05_steering/05_steering_7.png b/content/images/05_steering/05_steering_7.png
index b8521e5d..e4101a58 100644
Binary files a/content/images/05_steering/05_steering_7.png and b/content/images/05_steering/05_steering_7.png differ
diff --git a/content/images/05_steering/05_steering_8.png b/content/images/05_steering/05_steering_8.png
index 1f03fa08..b8521e5d 100644
Binary files a/content/images/05_steering/05_steering_8.png and b/content/images/05_steering/05_steering_8.png differ
diff --git a/content/images/05_steering/05_steering_9.png b/content/images/05_steering/05_steering_9.png
index 29198fc1..1f03fa08 100644
Binary files a/content/images/05_steering/05_steering_9.png and b/content/images/05_steering/05_steering_9.png differ
diff --git a/content/images/06_libraries/06_libraries_1.png b/content/images/06_libraries/06_libraries_1.png
index 0c3db4bb..1b80a7a6 100644
Binary files a/content/images/06_libraries/06_libraries_1.png and b/content/images/06_libraries/06_libraries_1.png differ
diff --git a/content/images/06_libraries/06_libraries_10.png b/content/images/06_libraries/06_libraries_10.png
index 36d3f0b8..c56e7bd8 100644
Binary files a/content/images/06_libraries/06_libraries_10.png and b/content/images/06_libraries/06_libraries_10.png differ
diff --git a/content/images/06_libraries/06_libraries_11.png b/content/images/06_libraries/06_libraries_11.png
index 4d72d934..36d3f0b8 100644
Binary files a/content/images/06_libraries/06_libraries_11.png and b/content/images/06_libraries/06_libraries_11.png differ
diff --git a/content/images/06_libraries/06_libraries_12.png b/content/images/06_libraries/06_libraries_12.png
index dee2bbd9..4d72d934 100644
Binary files a/content/images/06_libraries/06_libraries_12.png and b/content/images/06_libraries/06_libraries_12.png differ
diff --git a/content/images/06_libraries/06_libraries_13.png b/content/images/06_libraries/06_libraries_13.png
index 91153c90..dee2bbd9 100644
Binary files a/content/images/06_libraries/06_libraries_13.png and b/content/images/06_libraries/06_libraries_13.png differ
diff --git a/content/images/06_libraries/06_libraries_14.png b/content/images/06_libraries/06_libraries_14.png
index 50d2489c..91153c90 100644
Binary files a/content/images/06_libraries/06_libraries_14.png and b/content/images/06_libraries/06_libraries_14.png differ
diff --git a/content/images/06_libraries/06_libraries_15.png b/content/images/06_libraries/06_libraries_15.png
index d6af47dc..50d2489c 100644
Binary files a/content/images/06_libraries/06_libraries_15.png and b/content/images/06_libraries/06_libraries_15.png differ
diff --git a/content/images/06_libraries/06_libraries_16.png b/content/images/06_libraries/06_libraries_16.png
index cd5e32a7..d6af47dc 100644
Binary files a/content/images/06_libraries/06_libraries_16.png and b/content/images/06_libraries/06_libraries_16.png differ
diff --git a/content/images/06_libraries/06_libraries_17.png b/content/images/06_libraries/06_libraries_17.png
index 343447d9..cd5e32a7 100644
Binary files a/content/images/06_libraries/06_libraries_17.png and b/content/images/06_libraries/06_libraries_17.png differ
diff --git a/content/images/06_libraries/06_libraries_18.png b/content/images/06_libraries/06_libraries_18.png
index 80115afe..343447d9 100644
Binary files a/content/images/06_libraries/06_libraries_18.png and b/content/images/06_libraries/06_libraries_18.png differ
diff --git a/content/images/06_libraries/06_libraries_19.png b/content/images/06_libraries/06_libraries_19.png
index c60d35bc..80115afe 100644
Binary files a/content/images/06_libraries/06_libraries_19.png and b/content/images/06_libraries/06_libraries_19.png differ
diff --git a/content/images/06_libraries/06_libraries_2.png b/content/images/06_libraries/06_libraries_2.png
index ab1cce88..0c3db4bb 100644
Binary files a/content/images/06_libraries/06_libraries_2.png and b/content/images/06_libraries/06_libraries_2.png differ
diff --git a/content/images/06_libraries/06_libraries_20.png b/content/images/06_libraries/06_libraries_20.png
index 42d237b3..c60d35bc 100644
Binary files a/content/images/06_libraries/06_libraries_20.png and b/content/images/06_libraries/06_libraries_20.png differ
diff --git a/content/images/06_libraries/06_libraries_21.png b/content/images/06_libraries/06_libraries_21.png
index f98bcb3e..42d237b3 100644
Binary files a/content/images/06_libraries/06_libraries_21.png and b/content/images/06_libraries/06_libraries_21.png differ
diff --git a/content/images/06_libraries/06_libraries_22.png b/content/images/06_libraries/06_libraries_22.png
index 6ecbafbd..f98bcb3e 100644
Binary files a/content/images/06_libraries/06_libraries_22.png and b/content/images/06_libraries/06_libraries_22.png differ
diff --git a/content/images/06_libraries/06_libraries_3.png b/content/images/06_libraries/06_libraries_3.png
index f86256a4..ab1cce88 100644
Binary files a/content/images/06_libraries/06_libraries_3.png and b/content/images/06_libraries/06_libraries_3.png differ
diff --git a/content/images/06_libraries/06_libraries_4.png b/content/images/06_libraries/06_libraries_4.png
index ff76a222..f86256a4 100644
Binary files a/content/images/06_libraries/06_libraries_4.png and b/content/images/06_libraries/06_libraries_4.png differ
diff --git a/content/images/06_libraries/06_libraries_5.png b/content/images/06_libraries/06_libraries_5.png
index fb91fbb3..ff76a222 100644
Binary files a/content/images/06_libraries/06_libraries_5.png and b/content/images/06_libraries/06_libraries_5.png differ
diff --git a/content/images/06_libraries/06_libraries_6.png b/content/images/06_libraries/06_libraries_6.png
index 11b6ded0..fb91fbb3 100644
Binary files a/content/images/06_libraries/06_libraries_6.png and b/content/images/06_libraries/06_libraries_6.png differ
diff --git a/content/images/06_libraries/06_libraries_7.png b/content/images/06_libraries/06_libraries_7.png
index 45c5c8f1..11b6ded0 100644
Binary files a/content/images/06_libraries/06_libraries_7.png and b/content/images/06_libraries/06_libraries_7.png differ
diff --git a/content/images/06_libraries/06_libraries_8.png b/content/images/06_libraries/06_libraries_8.png
index 7e9f2dfd..45c5c8f1 100644
Binary files a/content/images/06_libraries/06_libraries_8.png and b/content/images/06_libraries/06_libraries_8.png differ
diff --git a/content/images/06_libraries/06_libraries_9.png b/content/images/06_libraries/06_libraries_9.png
index c56e7bd8..7e9f2dfd 100644
Binary files a/content/images/06_libraries/06_libraries_9.png and b/content/images/06_libraries/06_libraries_9.png differ
diff --git a/content/images/07_ca/07_ca_1.png b/content/images/07_ca/07_ca_1.png
index 9bf2c844..1b80a7a6 100644
Binary files a/content/images/07_ca/07_ca_1.png and b/content/images/07_ca/07_ca_1.png differ
diff --git a/content/images/07_ca/07_ca_10.png b/content/images/07_ca/07_ca_10.png
index e7c696ae..fff97b21 100644
Binary files a/content/images/07_ca/07_ca_10.png and b/content/images/07_ca/07_ca_10.png differ
diff --git a/content/images/07_ca/07_ca_11.png b/content/images/07_ca/07_ca_11.png
index ab57a9de..e7c696ae 100644
Binary files a/content/images/07_ca/07_ca_11.png and b/content/images/07_ca/07_ca_11.png differ
diff --git a/content/images/07_ca/07_ca_12.png b/content/images/07_ca/07_ca_12.png
index 31622b7a..ab57a9de 100644
Binary files a/content/images/07_ca/07_ca_12.png and b/content/images/07_ca/07_ca_12.png differ
diff --git a/content/images/07_ca/07_ca_13.png b/content/images/07_ca/07_ca_13.png
index e0ee4ea5..31622b7a 100644
Binary files a/content/images/07_ca/07_ca_13.png and b/content/images/07_ca/07_ca_13.png differ
diff --git a/content/images/07_ca/07_ca_14.png b/content/images/07_ca/07_ca_14.png
index db5c377f..e0ee4ea5 100644
Binary files a/content/images/07_ca/07_ca_14.png and b/content/images/07_ca/07_ca_14.png differ
diff --git a/content/images/07_ca/07_ca_15.png b/content/images/07_ca/07_ca_15.png
index 602aeaa3..db5c377f 100644
Binary files a/content/images/07_ca/07_ca_15.png and b/content/images/07_ca/07_ca_15.png differ
diff --git a/content/images/07_ca/07_ca_16.png b/content/images/07_ca/07_ca_16.png
index 8d684397..602aeaa3 100644
Binary files a/content/images/07_ca/07_ca_16.png and b/content/images/07_ca/07_ca_16.png differ
diff --git a/content/images/07_ca/07_ca_17.png b/content/images/07_ca/07_ca_17.png
index 6a3ccb02..8d684397 100644
Binary files a/content/images/07_ca/07_ca_17.png and b/content/images/07_ca/07_ca_17.png differ
diff --git a/content/images/07_ca/07_ca_2.png b/content/images/07_ca/07_ca_2.png
index 8c145892..9bf2c844 100644
Binary files a/content/images/07_ca/07_ca_2.png and b/content/images/07_ca/07_ca_2.png differ
diff --git a/content/images/07_ca/07_ca_20.png b/content/images/07_ca/07_ca_20.png
index d37b799c..ae8dc521 100644
Binary files a/content/images/07_ca/07_ca_20.png and b/content/images/07_ca/07_ca_20.png differ
diff --git a/content/images/07_ca/07_ca_21.png b/content/images/07_ca/07_ca_21.png
index e0ee4ea5..d37b799c 100644
Binary files a/content/images/07_ca/07_ca_21.png and b/content/images/07_ca/07_ca_21.png differ
diff --git a/content/images/07_ca/07_ca_22.png b/content/images/07_ca/07_ca_22.png
index e0aea8ff..e0ee4ea5 100644
Binary files a/content/images/07_ca/07_ca_22.png and b/content/images/07_ca/07_ca_22.png differ
diff --git a/content/images/07_ca/07_ca_23.png b/content/images/07_ca/07_ca_23.png
index 62d3557c..e0aea8ff 100644
Binary files a/content/images/07_ca/07_ca_23.png and b/content/images/07_ca/07_ca_23.png differ
diff --git a/content/images/07_ca/07_ca_24.png b/content/images/07_ca/07_ca_24.png
index 1f8d78d6..62d3557c 100644
Binary files a/content/images/07_ca/07_ca_24.png and b/content/images/07_ca/07_ca_24.png differ
diff --git a/content/images/07_ca/07_ca_25.png b/content/images/07_ca/07_ca_25.png
index 035a8694..1f8d78d6 100644
Binary files a/content/images/07_ca/07_ca_25.png and b/content/images/07_ca/07_ca_25.png differ
diff --git a/content/images/07_ca/07_ca_26.png b/content/images/07_ca/07_ca_26.png
index 6151c6d5..035a8694 100644
Binary files a/content/images/07_ca/07_ca_26.png and b/content/images/07_ca/07_ca_26.png differ
diff --git a/content/images/07_ca/07_ca_27.png b/content/images/07_ca/07_ca_27.png
index 4b3e6ef3..6151c6d5 100644
Binary files a/content/images/07_ca/07_ca_27.png and b/content/images/07_ca/07_ca_27.png differ
diff --git a/content/images/07_ca/07_ca_28.png b/content/images/07_ca/07_ca_28.png
index 8d790d21..4b3e6ef3 100644
Binary files a/content/images/07_ca/07_ca_28.png and b/content/images/07_ca/07_ca_28.png differ
diff --git a/content/images/07_ca/07_ca_29.png b/content/images/07_ca/07_ca_29.png
index 4a8a3e8b..8d790d21 100644
Binary files a/content/images/07_ca/07_ca_29.png and b/content/images/07_ca/07_ca_29.png differ
diff --git a/content/images/07_ca/07_ca_3.png b/content/images/07_ca/07_ca_3.png
index 449718d5..8c145892 100644
Binary files a/content/images/07_ca/07_ca_3.png and b/content/images/07_ca/07_ca_3.png differ
diff --git a/content/images/07_ca/07_ca_30.png b/content/images/07_ca/07_ca_30.png
index 8a743d62..4a8a3e8b 100644
Binary files a/content/images/07_ca/07_ca_30.png and b/content/images/07_ca/07_ca_30.png differ
diff --git a/content/images/07_ca/07_ca_31.png b/content/images/07_ca/07_ca_31.png
index 32d39cb1..8a743d62 100644
Binary files a/content/images/07_ca/07_ca_31.png and b/content/images/07_ca/07_ca_31.png differ
diff --git a/content/images/07_ca/07_ca_32.png b/content/images/07_ca/07_ca_32.png
index 5dc9fff4..32d39cb1 100644
Binary files a/content/images/07_ca/07_ca_32.png and b/content/images/07_ca/07_ca_32.png differ
diff --git a/content/images/07_ca/07_ca_4.png b/content/images/07_ca/07_ca_4.png
index 30da9150..449718d5 100644
Binary files a/content/images/07_ca/07_ca_4.png and b/content/images/07_ca/07_ca_4.png differ
diff --git a/content/images/07_ca/07_ca_5.png b/content/images/07_ca/07_ca_5.png
index 5b59517b..30da9150 100644
Binary files a/content/images/07_ca/07_ca_5.png and b/content/images/07_ca/07_ca_5.png differ
diff --git a/content/images/07_ca/07_ca_6.png b/content/images/07_ca/07_ca_6.png
index e72a8704..5b59517b 100644
Binary files a/content/images/07_ca/07_ca_6.png and b/content/images/07_ca/07_ca_6.png differ
diff --git a/content/images/07_ca/07_ca_7.png b/content/images/07_ca/07_ca_7.png
index ae58c73c..e72a8704 100644
Binary files a/content/images/07_ca/07_ca_7.png and b/content/images/07_ca/07_ca_7.png differ
diff --git a/content/images/07_ca/07_ca_8.png b/content/images/07_ca/07_ca_8.png
index eedaa351..ae58c73c 100644
Binary files a/content/images/07_ca/07_ca_8.png and b/content/images/07_ca/07_ca_8.png differ
diff --git a/content/images/07_ca/07_ca_9.png b/content/images/07_ca/07_ca_9.png
index fff97b21..eedaa351 100644
Binary files a/content/images/07_ca/07_ca_9.png and b/content/images/07_ca/07_ca_9.png differ
diff --git a/content/images/08_fractals/08_fractals_1.png b/content/images/08_fractals/08_fractals_1.png
index af1f62b3..1b80a7a6 100644
Binary files a/content/images/08_fractals/08_fractals_1.png and b/content/images/08_fractals/08_fractals_1.png differ
diff --git a/content/images/08_fractals/08_fractals_10.png b/content/images/08_fractals/08_fractals_10.png
index f5f6ac78..205a5608 100644
Binary files a/content/images/08_fractals/08_fractals_10.png and b/content/images/08_fractals/08_fractals_10.png differ
diff --git a/content/images/08_fractals/08_fractals_11.png b/content/images/08_fractals/08_fractals_11.png
index d661136b..f5f6ac78 100644
Binary files a/content/images/08_fractals/08_fractals_11.png and b/content/images/08_fractals/08_fractals_11.png differ
diff --git a/content/images/08_fractals/08_fractals_12.png b/content/images/08_fractals/08_fractals_12.png
index f43b4682..d661136b 100644
Binary files a/content/images/08_fractals/08_fractals_12.png and b/content/images/08_fractals/08_fractals_12.png differ
diff --git a/content/images/08_fractals/08_fractals_13.png b/content/images/08_fractals/08_fractals_13.png
index 702694fd..f43b4682 100644
Binary files a/content/images/08_fractals/08_fractals_13.png and b/content/images/08_fractals/08_fractals_13.png differ
diff --git a/content/images/08_fractals/08_fractals_14.png b/content/images/08_fractals/08_fractals_14.png
index ee680ee4..702694fd 100644
Binary files a/content/images/08_fractals/08_fractals_14.png and b/content/images/08_fractals/08_fractals_14.png differ
diff --git a/content/images/08_fractals/08_fractals_15.png b/content/images/08_fractals/08_fractals_15.png
index a2a5e9f8..ee680ee4 100644
Binary files a/content/images/08_fractals/08_fractals_15.png and b/content/images/08_fractals/08_fractals_15.png differ
diff --git a/content/images/08_fractals/08_fractals_16.png b/content/images/08_fractals/08_fractals_16.png
index 80c2bb13..a2a5e9f8 100644
Binary files a/content/images/08_fractals/08_fractals_16.png and b/content/images/08_fractals/08_fractals_16.png differ
diff --git a/content/images/08_fractals/08_fractals_17.png b/content/images/08_fractals/08_fractals_17.png
index f042f993..80c2bb13 100644
Binary files a/content/images/08_fractals/08_fractals_17.png and b/content/images/08_fractals/08_fractals_17.png differ
diff --git a/content/images/08_fractals/08_fractals_18.png b/content/images/08_fractals/08_fractals_18.png
index 91153c90..f042f993 100644
Binary files a/content/images/08_fractals/08_fractals_18.png and b/content/images/08_fractals/08_fractals_18.png differ
diff --git a/content/images/08_fractals/08_fractals_19.png b/content/images/08_fractals/08_fractals_19.png
index cb039666..91153c90 100644
Binary files a/content/images/08_fractals/08_fractals_19.png and b/content/images/08_fractals/08_fractals_19.png differ
diff --git a/content/images/08_fractals/08_fractals_2.png b/content/images/08_fractals/08_fractals_2.png
index 716e59ed..1d1f8b29 100644
Binary files a/content/images/08_fractals/08_fractals_2.png and b/content/images/08_fractals/08_fractals_2.png differ
diff --git a/content/images/08_fractals/08_fractals_20.png b/content/images/08_fractals/08_fractals_20.png
index b63e5815..cb039666 100644
Binary files a/content/images/08_fractals/08_fractals_20.png and b/content/images/08_fractals/08_fractals_20.png differ
diff --git a/content/images/08_fractals/08_fractals_21.png b/content/images/08_fractals/08_fractals_21.png
index ea4bbe5d..b63e5815 100644
Binary files a/content/images/08_fractals/08_fractals_21.png and b/content/images/08_fractals/08_fractals_21.png differ
diff --git a/content/images/08_fractals/08_fractals_22.png b/content/images/08_fractals/08_fractals_22.png
index 93d2e38e..ea4bbe5d 100644
Binary files a/content/images/08_fractals/08_fractals_22.png and b/content/images/08_fractals/08_fractals_22.png differ
diff --git a/content/images/08_fractals/08_fractals_23.png b/content/images/08_fractals/08_fractals_23.png
index d51def86..93d2e38e 100644
Binary files a/content/images/08_fractals/08_fractals_23.png and b/content/images/08_fractals/08_fractals_23.png differ
diff --git a/content/images/08_fractals/08_fractals_24.png b/content/images/08_fractals/08_fractals_24.png
index ec986774..d51def86 100644
Binary files a/content/images/08_fractals/08_fractals_24.png and b/content/images/08_fractals/08_fractals_24.png differ
diff --git a/content/images/08_fractals/08_fractals_25.png b/content/images/08_fractals/08_fractals_25.png
index 36bd4787..ec986774 100644
Binary files a/content/images/08_fractals/08_fractals_25.png and b/content/images/08_fractals/08_fractals_25.png differ
diff --git a/content/images/08_fractals/08_fractals_3.png b/content/images/08_fractals/08_fractals_3.png
index ed4b15d0..716e59ed 100644
Binary files a/content/images/08_fractals/08_fractals_3.png and b/content/images/08_fractals/08_fractals_3.png differ
diff --git a/content/images/08_fractals/08_fractals_4.png b/content/images/08_fractals/08_fractals_4.png
index 0e417062..ed4b15d0 100644
Binary files a/content/images/08_fractals/08_fractals_4.png and b/content/images/08_fractals/08_fractals_4.png differ
diff --git a/content/images/08_fractals/08_fractals_5.png b/content/images/08_fractals/08_fractals_5.png
index 65e9e59f..0e417062 100644
Binary files a/content/images/08_fractals/08_fractals_5.png and b/content/images/08_fractals/08_fractals_5.png differ
diff --git a/content/images/08_fractals/08_fractals_6.png b/content/images/08_fractals/08_fractals_6.png
index fa61f18f..65e9e59f 100644
Binary files a/content/images/08_fractals/08_fractals_6.png and b/content/images/08_fractals/08_fractals_6.png differ
diff --git a/content/images/08_fractals/08_fractals_7.png b/content/images/08_fractals/08_fractals_7.png
index 7a24b737..fa61f18f 100644
Binary files a/content/images/08_fractals/08_fractals_7.png and b/content/images/08_fractals/08_fractals_7.png differ
diff --git a/content/images/08_fractals/08_fractals_8.png b/content/images/08_fractals/08_fractals_8.png
index d92ce923..7a24b737 100644
Binary files a/content/images/08_fractals/08_fractals_8.png and b/content/images/08_fractals/08_fractals_8.png differ
diff --git a/content/images/08_fractals/08_fractals_9.png b/content/images/08_fractals/08_fractals_9.png
index 205a5608..d92ce923 100644
Binary files a/content/images/08_fractals/08_fractals_9.png and b/content/images/08_fractals/08_fractals_9.png differ
diff --git a/content/images/09_ga/09_ga_1.png b/content/images/09_ga/09_ga_1.png
index ea8d053d..1b80a7a6 100644
Binary files a/content/images/09_ga/09_ga_1.png and b/content/images/09_ga/09_ga_1.png differ
diff --git a/content/images/09_ga/09_ga_10.png b/content/images/09_ga/09_ga_10.png
index 8df52eca..d8651c1d 100644
Binary files a/content/images/09_ga/09_ga_10.png and b/content/images/09_ga/09_ga_10.png differ
diff --git a/content/images/09_ga/09_ga_11.png b/content/images/09_ga/09_ga_11.png
index 3ced2759..8df52eca 100644
Binary files a/content/images/09_ga/09_ga_11.png and b/content/images/09_ga/09_ga_11.png differ
diff --git a/content/images/09_ga/09_ga_12.png b/content/images/09_ga/09_ga_12.png
index 54f35d36..3ced2759 100644
Binary files a/content/images/09_ga/09_ga_12.png and b/content/images/09_ga/09_ga_12.png differ
diff --git a/content/images/09_ga/09_ga_13.png b/content/images/09_ga/09_ga_13.png
index aebc3bf8..54f35d36 100644
Binary files a/content/images/09_ga/09_ga_13.png and b/content/images/09_ga/09_ga_13.png differ
diff --git a/content/images/09_ga/09_ga_14.png b/content/images/09_ga/09_ga_14.png
index 6748abda..aebc3bf8 100644
Binary files a/content/images/09_ga/09_ga_14.png and b/content/images/09_ga/09_ga_14.png differ
diff --git a/content/images/09_ga/09_ga_15.png b/content/images/09_ga/09_ga_15.png
index 91153c90..6748abda 100644
Binary files a/content/images/09_ga/09_ga_15.png and b/content/images/09_ga/09_ga_15.png differ
diff --git a/content/images/09_ga/09_ga_16.png b/content/images/09_ga/09_ga_16.png
index 72acca0a..91153c90 100644
Binary files a/content/images/09_ga/09_ga_16.png and b/content/images/09_ga/09_ga_16.png differ
diff --git a/content/images/09_ga/09_ga_17.png b/content/images/09_ga/09_ga_17.png
index 65c80277..72acca0a 100644
Binary files a/content/images/09_ga/09_ga_17.png and b/content/images/09_ga/09_ga_17.png differ
diff --git a/content/images/09_ga/09_ga_2.png b/content/images/09_ga/09_ga_2.png
index b5694cee..ea8d053d 100644
Binary files a/content/images/09_ga/09_ga_2.png and b/content/images/09_ga/09_ga_2.png differ
diff --git a/content/images/09_ga/09_ga_3.png b/content/images/09_ga/09_ga_3.png
index 99a49d10..b5694cee 100644
Binary files a/content/images/09_ga/09_ga_3.png and b/content/images/09_ga/09_ga_3.png differ
diff --git a/content/images/09_ga/09_ga_4.png b/content/images/09_ga/09_ga_4.png
index ef35bfe9..99a49d10 100644
Binary files a/content/images/09_ga/09_ga_4.png and b/content/images/09_ga/09_ga_4.png differ
diff --git a/content/images/09_ga/09_ga_5.png b/content/images/09_ga/09_ga_5.png
index 38f2460a..ef35bfe9 100644
Binary files a/content/images/09_ga/09_ga_5.png and b/content/images/09_ga/09_ga_5.png differ
diff --git a/content/images/09_ga/09_ga_6.png b/content/images/09_ga/09_ga_6.png
index 216efa7b..38f2460a 100644
Binary files a/content/images/09_ga/09_ga_6.png and b/content/images/09_ga/09_ga_6.png differ
diff --git a/content/images/09_ga/09_ga_7.png b/content/images/09_ga/09_ga_7.png
index 6e98c000..216efa7b 100644
Binary files a/content/images/09_ga/09_ga_7.png and b/content/images/09_ga/09_ga_7.png differ
diff --git a/content/images/09_ga/09_ga_8.png b/content/images/09_ga/09_ga_8.png
index bc2c9075..6e98c000 100644
Binary files a/content/images/09_ga/09_ga_8.png and b/content/images/09_ga/09_ga_8.png differ
diff --git a/content/images/09_ga/09_ga_9.png b/content/images/09_ga/09_ga_9.png
index d8651c1d..bc2c9075 100644
Binary files a/content/images/09_ga/09_ga_9.png and b/content/images/09_ga/09_ga_9.png differ
diff --git a/content/images/10_nn/10_nn_1.png b/content/images/10_nn/10_nn_1.png
index 938dd5d6..1b80a7a6 100644
Binary files a/content/images/10_nn/10_nn_1.png and b/content/images/10_nn/10_nn_1.png differ
diff --git a/content/images/10_nn/10_nn_10.png b/content/images/10_nn/10_nn_10.png
index a141237e..53b6307c 100644
Binary files a/content/images/10_nn/10_nn_10.png and b/content/images/10_nn/10_nn_10.png differ
diff --git a/content/images/10_nn/10_nn_11.png b/content/images/10_nn/10_nn_11.png
index ba6d23b3..a141237e 100644
Binary files a/content/images/10_nn/10_nn_11.png and b/content/images/10_nn/10_nn_11.png differ
diff --git a/content/images/10_nn/10_nn_12.png b/content/images/10_nn/10_nn_12.png
index bed66cf3..ba6d23b3 100644
Binary files a/content/images/10_nn/10_nn_12.png and b/content/images/10_nn/10_nn_12.png differ
diff --git a/content/images/10_nn/10_nn_13.png b/content/images/10_nn/10_nn_13.png
new file mode 100644
index 00000000..bed66cf3
Binary files /dev/null and b/content/images/10_nn/10_nn_13.png differ
diff --git a/content/images/10_nn/10_nn_14.jpg b/content/images/10_nn/10_nn_14.jpg
new file mode 100644
index 00000000..d5670e32
Binary files /dev/null and b/content/images/10_nn/10_nn_14.jpg differ
diff --git a/content/images/10_nn/10_nn_15.png b/content/images/10_nn/10_nn_15.png
index 9a696ce3..b5c526fe 100644
Binary files a/content/images/10_nn/10_nn_15.png and b/content/images/10_nn/10_nn_15.png differ
diff --git a/content/images/10_nn/10_nn_16.png b/content/images/10_nn/10_nn_16.png
index 46ced636..9a696ce3 100644
Binary files a/content/images/10_nn/10_nn_16.png and b/content/images/10_nn/10_nn_16.png differ
diff --git a/content/images/10_nn/10_nn_17.jpg b/content/images/10_nn/10_nn_17.jpg
index 68bd381f..955ea786 100644
Binary files a/content/images/10_nn/10_nn_17.jpg and b/content/images/10_nn/10_nn_17.jpg differ
diff --git a/content/images/10_nn/10_nn_18.jpg b/content/images/10_nn/10_nn_18.jpg
index e32ef52d..68bd381f 100644
Binary files a/content/images/10_nn/10_nn_18.jpg and b/content/images/10_nn/10_nn_18.jpg differ
diff --git a/content/images/10_nn/10_nn_19.jpg b/content/images/10_nn/10_nn_19.jpg
index 97062add..e32ef52d 100644
Binary files a/content/images/10_nn/10_nn_19.jpg and b/content/images/10_nn/10_nn_19.jpg differ
diff --git a/content/images/10_nn/10_nn_2.png b/content/images/10_nn/10_nn_2.png
index a41dc667..938dd5d6 100644
Binary files a/content/images/10_nn/10_nn_2.png and b/content/images/10_nn/10_nn_2.png differ
diff --git a/content/images/10_nn/10_nn_20.jpg b/content/images/10_nn/10_nn_20.jpg
new file mode 100644
index 00000000..97062add
Binary files /dev/null and b/content/images/10_nn/10_nn_20.jpg differ
diff --git a/content/images/10_nn/10_nn_21.png b/content/images/10_nn/10_nn_21.png
new file mode 100644
index 00000000..5d63c7b6
Binary files /dev/null and b/content/images/10_nn/10_nn_21.png differ
diff --git a/content/images/10_nn/10_nn_22.png b/content/images/10_nn/10_nn_22.png
index 00dafcbf..62aae538 100644
Binary files a/content/images/10_nn/10_nn_22.png and b/content/images/10_nn/10_nn_22.png differ
diff --git a/content/images/10_nn/10_nn_23.jpg b/content/images/10_nn/10_nn_23.jpg
index 5faaf782..40690afe 100644
Binary files a/content/images/10_nn/10_nn_23.jpg and b/content/images/10_nn/10_nn_23.jpg differ
diff --git a/content/images/10_nn/10_nn_3.png b/content/images/10_nn/10_nn_3.png
index 9feea2b3..a41dc667 100644
Binary files a/content/images/10_nn/10_nn_3.png and b/content/images/10_nn/10_nn_3.png differ
diff --git a/content/images/10_nn/10_nn_4.png b/content/images/10_nn/10_nn_4.png
index 102d8d44..9feea2b3 100644
Binary files a/content/images/10_nn/10_nn_4.png and b/content/images/10_nn/10_nn_4.png differ
diff --git a/content/images/10_nn/10_nn_5.png b/content/images/10_nn/10_nn_5.png
index 3fd1e33f..92fb14c7 100644
Binary files a/content/images/10_nn/10_nn_5.png and b/content/images/10_nn/10_nn_5.png differ
diff --git a/content/images/10_nn/10_nn_6.png b/content/images/10_nn/10_nn_6.png
index 0176e7a1..3fd1e33f 100644
Binary files a/content/images/10_nn/10_nn_6.png and b/content/images/10_nn/10_nn_6.png differ
diff --git a/content/images/10_nn/10_nn_7.png b/content/images/10_nn/10_nn_7.png
index 7c3641cc..0176e7a1 100644
Binary files a/content/images/10_nn/10_nn_7.png and b/content/images/10_nn/10_nn_7.png differ
diff --git a/content/images/10_nn/10_nn_8.png b/content/images/10_nn/10_nn_8.png
index b1fee1af..7c3641cc 100644
Binary files a/content/images/10_nn/10_nn_8.png and b/content/images/10_nn/10_nn_8.png differ
diff --git a/content/images/10_nn/10_nn_9.png b/content/images/10_nn/10_nn_9.png
index 53b6307c..b1fee1af 100644
Binary files a/content/images/10_nn/10_nn_9.png and b/content/images/10_nn/10_nn_9.png differ
diff --git a/content/images/11_nn_ga/11_nn_ga_1.png b/content/images/11_nn_ga/11_nn_ga_1.png
index cf24da58..1b80a7a6 100644
Binary files a/content/images/11_nn_ga/11_nn_ga_1.png and b/content/images/11_nn_ga/11_nn_ga_1.png differ
diff --git a/content/images/11_nn_ga/11_nn_ga_2.png b/content/images/11_nn_ga/11_nn_ga_2.png
new file mode 100644
index 00000000..cf24da58
Binary files /dev/null and b/content/images/11_nn_ga/11_nn_ga_2.png differ
diff --git a/content/images/11_nn_ga/11_nn_ga_3.jpg b/content/images/11_nn_ga/11_nn_ga_3.jpg
new file mode 100644
index 00000000..d5e0d4dc
Binary files /dev/null and b/content/images/11_nn_ga/11_nn_ga_3.jpg differ
diff --git a/content/images/11_nn_ga/11_nn_ga_4.png b/content/images/11_nn_ga/11_nn_ga_4.png
new file mode 100644
index 00000000..00dafcbf
Binary files /dev/null and b/content/images/11_nn_ga/11_nn_ga_4.png differ
diff --git a/content/images/11_nn_ga/11_nn_ga_5.jpg b/content/images/11_nn_ga/11_nn_ga_5.jpg
index 8e8516ee..5faaf782 100644
Binary files a/content/images/11_nn_ga/11_nn_ga_5.jpg and b/content/images/11_nn_ga/11_nn_ga_5.jpg differ
diff --git a/content/images/11_nn_ga/11_nn_ga_6.jpg b/content/images/11_nn_ga/11_nn_ga_6.jpg
new file mode 100644
index 00000000..8e8516ee
Binary files /dev/null and b/content/images/11_nn_ga/11_nn_ga_6.jpg differ