
Commit 0b4cca2

Add project rationale
1 parent eae0773 commit 0b4cca2


9 files changed: +93 -1 lines changed


README.md

Lines changed: 1 addition & 0 deletions
@@ -9,6 +9,7 @@ The RuleBook Compiler (`RLC`) is an MLIR-based compiler for a domain-specific la
 The elevator pitch description of the `RL` is:
 > **A language that turns an easy-to-write procedural description of a simulation into an easy-to-use and easy-to-reuse efficient library**.
+Read the project rationale [here](./docs/where_we_are_going.md)
 Read the language rationale [here](./docs/rationale.md)
 
 At the moment `RLC` is a proof of concept, and is released to gather feedback on the features of the language. Until version 1.0 syntax and semantics may change at any point.

docs/machine_learning.dot

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
digraph loop {
    node[shape=box];
    "initial state" -> "machine learning" [label="state"];
    "machine learning" -> "game rules" [label="\naction"];
    "game rules" -> "machine learning" [label="next state"];
}

docs/rationale.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 The `Rulebook` (or `rl` for short) is a domain-specific language that tries to reduce the complexity of writing simulations, such as those used in reinforcement learning, game programming, and other similar domains. This document explains the rationale behind it.
 
-If you want to jump directly into the code, try it out as described on the main page instead. If you want to see a quick description of what the `rlc` tool is capable of, instead of the `rl` language, you can watch a video [here](https://www.youtube.com/watch?v=tMnBo3TGIbU). A less practical and more rambly philosophical description of why `rl` is useful can be found [here](./philosophy.md)
+If you want to jump directly into the code, try it out as described on the main page instead. If you want to read about the project rationale, read it [here](./where_we_are_going.md). If you want to see a quick description of what the `rlc` tool is capable of, instead of the `rl` language, you can watch a video [here](https://www.youtube.com/watch?v=tMnBo3TGIbU). A less practical and more rambly philosophical description of why `rl` is useful can be found [here](./philosophy.md)
 
 ### The Complexity of writing games and rule-heavy simulations.
 
docs/where_we_are_going.md

Lines changed: 85 additions & 0 deletions
@@ -0,0 +1,85 @@
# Where we are going and what RL will do once we get there.

Machine learning is being unleashed onto the world with tremendous force. Year by year, machines solve more and more problems once considered intractable and achieve super-human results.

![ML performances](../imgs/ml_progress.jpeg)

Reasoning and general game-playing still stand unsolved. We have no doubt that algorithmic improvements will soon yield super-human performance in those categories too.
Here is a picture taken from https://proceedings.mlr.press/v202/schwarzer23a/schwarzer23a.pdf.
The X axis is the average competence at 26 Atari games achieved by various algorithms, where 1 is human-level ability, while the Y axis is how many A100 GPU hours it took on average to get there.

![Reinforcement learning game progress](../imgs/rl_game_progress.webp)

Training a network to play an Atari game takes (or will soon take) less than 32 A100 GPU hours. As of 2023, an A100 GPU hour costs between 1 and 5 dollars. https://gpus.llm-utils.org/a100-gpu-cloud-availability-and-pricing/

So, at the moment of writing, a training run for an Atari game costs in the ballpark of 30-150 dollars. If humanity can achieve a further 100x cost reduction, it will become possible for game designers to validate the quality of their designs.
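
Spelling out the arithmetic behind that ballpark, using the figures quoted above:

$$
32\ \text{A100 GPU hours} \times (\$1 \text{ to } \$5)\ \text{per hour} \approx \$32 \text{ to } \$160\ \text{per training run}
$$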

Thankfully, it seems that we will not have to wait that long to obtain that cost reduction. https://www.redsharknews.com/nvidia-wants-to-increase-computing-power-by-a-factor-of-1-million

Still, 150 dollars seems a low enough cost that we would expect game companies to start adopting machine-learning techniques to at least validate their designs before release.

However, this is not happening. We can observe that most of the machine learning datasets about games are either:
* Games solved by directly executing the engine and inspecting the framebuffer, such as Atari games.
* Games with simple rules but large combinatorial complexity, such as chess and Go, which can easily be implemented even multiple times for different purposes (interacting with engines, machine learning, ...)

Games and simulations with sprawling amounts of rules that interact with both game engines and machine learning tools are almost non-existent.
This lack of games using machine learning has nothing to do with issues on the machine-learning side. The issue is that game rules, as currently implemented in video games, are so deeply mixed with the game engine code that they are not extractable, and thus anyone wishing to use machine learning techniques would need to run the game engine too, making the whole process too complex or too slow.

For the rest of this document, we will explain why this is the case, and how RL will solve it.

### Machine Learning

Machine learning is a resource-consuming process that devours all the compute and all the datasets you can provide it. The more data and compute you feed to a GPU, the better results you will get, and the best way to collect datasets about games is by running them.

Here is a platonic description of how a machine-learning algorithm may interact with the game, either in training or inference mode. The machine learning engine receives the visible state of the game (for example, the framebuffer, the serialized state of the game, or a textual description). The machine learning engine returns the action it has selected to execute, the game rules (however implemented) advance the simulation to the next state, and then the loop begins again.

![machine learning in games](../imgs/machine_learning_rules.png)
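
A minimal sketch of that loop, in Python; `select_action` (standing in for the machine learning engine) and `apply_action` (standing in for the game rules) are placeholder names, not part of any real API:

```python
def run_episode(initial_state, select_action, apply_action, max_steps=1000):
    """Drive the state -> action -> next state loop from the diagram above."""
    state = initial_state
    trajectory = []
    for _ in range(max_steps):
        # The machine learning engine observes the visible state and picks an action.
        action = select_action(state)
        # The game rules (however implemented) advance the simulation.
        state, done = apply_action(state, action)
        trajectory.append((action, state))
        if done:
            break
    return trajectory
```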

This setup implies that the overall execution time is a function of both the execution time of the neural network and that of the game rules implementation. The faster the game rules run, the more samples of the game you can collect, and the better the results machine learning will yield.

The fastest rules you will be able to get are those that
* are implemented in a low-level language,
* and do not require running any piece of code except for the rules themselves. (That is: rules written independently from graphics engines, network protocols, or other similar mechanisms.)

This requirement is essential since Moore's law for CPU cores is dead, and single CPU core speed is no longer increasing. Some games may be parallelizable and run on GPUs, but, in general, game rules are intrinsically imperative, and they must run on traditional architectures. Since we cannot assume that in the future we will get faster hardware on which to run them, we must design our solutions to extract every last drop of compute from single CPU cores.

![Moore's law for single-core CPUs](../imgs/moore_law.jpg)

Game rules, as implemented today, are designed with intents opposite to those just expressed, and in the next section we will see why.

### Games

To deliver 30 or 60 frames per second, games and engines must prioritize the rendering pipeline's performance over anything else. While game rules often drive content into the scene, they are conceptually separate from the rendering pipeline's algorithms and data structures. Game rules are akin to the business logic of a "buy" button on a website: the business logic tells you what the user wishes to do, but 99% of the complexity lies in forwarding that information to the various actors involved in delivering the product to you and taking care of the payments. Game rules are thus the least critical aspect of game engine design. It is acceptable for them to be slow or harder to understand as long as the rendering pipeline is the fastest it can be.
This leads to a development loop where the engine and graphics programmers work on the internals of the engine, while designers hack together game mechanics through high-level features such as Godot's scripting language, Unreal Engine blueprints, Unity C# classes, and so on.

![Unity script](../imgs/mono_behaviour.webp)

Here is a Unity C# script where a designer implements the game rules. Those rules are expressed in terms of updates driven by the rendering pipeline's main loop. Already, the game logic has been intrinsically tied to the game engine and cannot be extracted.

On top of that, if a piece of code written by a designer ends up being too slow, either because it was poorly written, because the programming language itself was too slow, or because the problem it solves is intrinsically slow, programmers will rewrite it with performance at the forefront. Often this process extracts parts of the logic and moves them deeper down into the engine, until parts of the game logic are no longer executable without executing the whole engine.

Rules are therefore:
* slow
* hard to run without the engine, making them even slower
* hard to refactor

All of these issues impede machine learning adoption in the game programming space. In the next section, we will see what solution we propose.

### Unifying games and machine learning.

From the previous sections we have seen that the requirements imposed by machine learning and game programming are very different; let us recap them here (a minimal sketch illustrating the first two requirements follows the list). Game rules must:
* **be inspectable data structures**: the state of the game is of interest to the machine learning engine, so it must be inspectable, serializable, and copyable.
* **have enumerable actions**: the machine learning component will interact with the simulation by deciding which actions must be executed, so they must all be known.
* **be implemented in a low-level language**: so that single-core CPU performance is not left on the table.
* **be interoperable with C**: so that the language can interact with machine learning tools and game programming tools, as well as allow programmers to write optimized C code if they wish.
* **be engine independent**: so that samples can be generated without running the engine.
* **be easily writable**: so that game designers can still write and refactor them. We should assume that in the future automatic code editing through LLMs and similar techniques will be very common. It is important to create a language that is as easy as possible for machines to edit as well.
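
As a rough Python sketch of the first two requirements (the game, the type, and the function names below are invented for illustration): the state is a plain, copyable, serializable data structure, and every legal action can be enumerated without touching any engine.

```python
from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class NimState:
    """Toy game state: plain data, trivially inspectable, serializable, copyable."""
    stones: int = 10
    current_player: int = 0

def legal_actions(state: NimState) -> List[int]:
    # Every action the machine learning component could pick is enumerable.
    return [take for take in (1, 2, 3) if take <= state.stones]

def apply_action(state: NimState, take: int) -> NimState:
    # Pure rule application: no engine, no rendering, just the rules.
    return replace(state,
                   stones=state.stones - take,
                   current_player=1 - state.current_player)
```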

To meet those requirements we have designed RL with the following characteristics:
* **LLVM-based compiled language**: LLVM is a compiler framework used by many compilers (such as rustc, clang, and so on). Reinventing the wheel would lead to worse performance.
* **Statically typed language**: All possible analyses on the code, such as discovering all the possible actions as functions that can advance the game state, should be performed during compilation. Thus, we should design the language so the compiler can get as much insight as possible into the program.
* **Same ABI as C**: By laying out our structs in memory the same way C does and by following the same calling conventions, we obtain trivial compatibility with C programs and, transitively, with every application compatible with C (for example, with Python through ctypes; see the sketch after this list).
* **Imperative language**: Game rules are conceptualized by designers as procedures players must follow. Since we are trying to produce a simple language, we should strive to keep the difference between how one thinks of games and how one implements them as small as possible. Thus, RL must be an imperative language.
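
As a sketch of what the C ABI buys us: because the compiled rules expose plain C structs and functions, a Python script can drive them through ctypes. The struct layout, library name, and function below are hypothetical placeholders, not the actual output of the compiler.

```python
import ctypes

class GameState(ctypes.Structure):
    # Field layout must mirror the struct emitted by the compiler (hypothetical here).
    _fields_ = [("stones", ctypes.c_int64),
                ("current_player", ctypes.c_int64)]

# Placeholder library and symbol names.
rules = ctypes.CDLL("./librules.so")
rules.apply_action.argtypes = [ctypes.POINTER(GameState), ctypes.c_int64]
rules.apply_action.restype = None

state = GameState(stones=10, current_player=0)
rules.apply_action(ctypes.byref(state), 2)  # advance the simulation by one action
print(state.stones, state.current_player)
```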

Finally, the most important feature is **Engine independence**, which we explain how to achieve [here](./rationale.md).

imgs/machine_learning_rules.png

11.4 KB

imgs/ml_progress.jpeg

81.9 KB

imgs/mono_behaviour.webp

16.7 KB

imgs/moore_law.jpg

78.4 KB

imgs/rl_game_progress.webp

4.87 KB
