diff --git a/README.md b/README.md
index fe4c7c0..30c92f5 100644
--- a/README.md
+++ b/README.md
@@ -22,6 +22,11 @@ The inspiration for STRX was that of Redux and its origin via the FLUX design pa
 * [Unified Turing Machine](https://github.com/Phuire-Research/STRX/blob/main/The-Unified-Turing-Machine.md) - The governing concept for this entire framework.
 
 ## The Halting Problem
+* [Video Citation: The requirement to stop within a behavior tree. Artificial Intelligence Summit @GDC 2016](https://youtube.com/clip/UgkxtZlIbvaMv0OUCJ5kJFiaUCjmEQCBD0C6?si=tkrAkvbpqByq096U)
+
+Note that in the clip above, the speaker is working with behavior trees and uses the term "stopping." Within STRX, a behavior tree would be an ActionStrategy that is dispatched via a staged "Plan." What separates STRX from the approach above is that it uses the finite state machine pattern to avoid an infinitely looping check of some observed value. In addition, we refer to this behavior as halting.
+
+## How STRX Solves This Problem
 Further, the Unified Turing Machine also accomplishes what has been considered an impossible-to-solve problem of the original theoretical Turing Machine: the halting problem. This is achieved via the finite state machine pattern in conjunction with the new ActionStrategy pattern. The new pattern is capable of representing any calculation, but must be designed with a conclusion. Thus the finite state machine of STRX can perform any calculation and halts upon its conclusion. Note that we are using logic to solve this, together with a set of specified requirements. The primary requirement is that the main runtime of a program must be a recursive function. The ActionStrategy pattern satisfies the next requirement by being a specific set of instructions that concludes, yet is capable of branching behavior that affords error correction.
 
 The specific interest in presenting this solution at this time is to demonstrate a method of safety that disallows a runaway effect from an Artificial Intelligence or Neural Network, as this pattern of halting is designed to be an analog to the inner workings of some graph network that eventually produces an output. Note that prior to 2023, one of the major problems behind LLMs was whether some input would produce an output at all. If we compare the runtime of an ActionStrategy to a Neural Network, a strategy that fails to halt would represent a series of weighted sums that, in aggregate, bears no output or only a repeating output. This likewise demonstrates a method of proving the safe functionality of any new AI system through its ability to halt. If we task some AI with creating paperclips, how would we analyze its strategies to demonstrate that it would not paperclip the entire universe? Its strategies should be proven to halt once some condition is met.
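
To make the shape of the pattern described in the diff concrete, here is a minimal TypeScript sketch. It is not the STRX API: the `ActionNode` shape, the `run` function, and the `succeeded` callback are hypothetical stand-ins for the framework's actual types. It only illustrates the two requirements named above: the main runtime is a recursive function, and an ActionStrategy is a finite set of instructions whose success and failure branches both lead to a conclusion, so dispatching it always halts.

```ts
// Hypothetical sketch (not the actual STRX API): an ActionStrategy is a finite,
// linked set of steps. Every step names a successor for success and for failure,
// and every path terminates in a concluding step (null), so the strategy halts.
type ActionNode = {
  actionType: string;
  // Branching on failure is what affords error correction.
  onSuccess: ActionNode | null;
  onFailure: ActionNode | null;
};

type ActionStrategy = {
  topic: string;
  currentNode: ActionNode;
};

// A recursive runtime: each call consumes one step and recurses on the chosen
// branch. Because the node graph is finite and every branch eventually reaches
// null, the recursion bottoms out and the machine halts.
function run(strategy: ActionStrategy, succeeded: (type: string) => boolean): string {
  const node = strategy.currentNode;
  const next = succeeded(node.actionType) ? node.onSuccess : node.onFailure;
  if (next === null) {
    return `${strategy.topic} concluded`;
  }
  return run({ ...strategy, currentNode: next }, succeeded);
}

// Example: a two-step strategy with an error-correction branch.
const retryStep: ActionNode = { actionType: 'retry', onSuccess: null, onFailure: null };
const fetchStep: ActionNode = { actionType: 'fetch', onSuccess: null, onFailure: retryStep };
console.log(run({ topic: 'demo', currentNode: fetchStep }, (t) => t !== 'fetch'));
// -> "demo concluded"
```

In this sketch the failure branch plays the role of error correction: a failed step routes to a corrective node instead of looping on an observed value, and because every branch terminates, the strategy is guaranteed to conclude.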