diff --git a/README.md b/README.md
index 834876c..af990c7 100644
--- a/README.md
+++ b/README.md
@@ -33,7 +33,7 @@ As the ability to halt within an intelligent system is the demonstration of its
 
 ## How STRX Solves this Problem
 
 Further, the Unified Turing Machine also accomplishes what has been considered an impossible-to-solve problem of the original theoretical Turing Machine: the halting problem. This is accomplished via the finite state machine pattern in conjunction with the new ActionStrategy pattern. This new pattern is capable of representing any calculation, but must be designed with a conclusion. Thus the finite state machine of STRX can perform any calculation and halts upon its conclusion. Note that here we are using logic to solve this, alongside a set of specified requirements that make the solution possible. The primary requirement is that the main runtime of a program must be a recursive function. The ActionStrategy pattern satisfies the next requirement by being a specific set of instructions that concludes, yet is capable of branching behavior that affords error correction. The specific interest in presenting this solution at this time is to demonstrate a method of safety that disallows some runaway effect from an Artificial Intelligence or Neural Network.
-This pattern of halting is designed to be an analog to the inner workings of some graph network that eventually produces an output. Note that prior to 2023, one of the major problems behind LLMs was whether they would produce an output for a given input. Comparing the runtime of an ActionStrategy to a Neural Network, a failure to halt would represent a series of weighted sums that fails to conclude in aggregate and bears no output, or a repeating output. This likewise demonstrates a method of proving the safe functionality of any new AI system through its ability to halt. If we task some AI to create paperclips, how would we analyze its strategies to demonstrate that it would not paperclip the entire universe? Its strategies should be proven to halt once some condition is met.
+This pattern of halting is designed to be an analog to the inner workings of some graph network that eventually produces an output. Note that prior to 2023, one of the major problems behind LLMs was whether they would produce an output for a given input. [Video Citation: QLoRA is all you need @sentdex](https://youtube.com/clip/Ugkx47h3s4gtOSrKxF-CdqsnTrPTWwnwwha8?si=VLQJSBoZDw0dYsCF) Comparing the runtime of an ActionStrategy to a Neural Network, a failure to halt would represent a series of weighted sums that fails to conclude in aggregate and bears no output, or a repeating output. This likewise demonstrates a method of proving the safe functionality of any new AI system through its ability to halt. If we task some AI to create paperclips, how would we analyze its strategies to demonstrate that it would not paperclip the entire universe? Its strategies should be proven to halt once some condition is met. Likewise, the unfortunate truth of a Unified Turing Machine, due to its recursive functionality, is that the ability to halt is a hard requirement for it to function. Otherwise the developer will run into unexpected behavior in their applications, due to strategies and/or the supporting framework being halting incomplete and experiencing action overflow.
 
 Our general good-enough computers and their branch prediction will generate ghost actions and other unexpected behaviors during this condition, such as thrashing the application's memory and the inability to receive some output, akin to an unresponsive Neural Network. So, by strange effect, the solution to the halting problem was a method of programming that went beyond classic data entry: utilizing logic over mathematics to create the scope of this framework, affording the dynamic functionality of data transformation versus data entry.
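
To make the halting mechanism concrete, below is a minimal TypeScript sketch of the idea described in the changed section. It is illustrative only and does not use the actual STRX API; the `StrategyStep` type, the `run` function, and the example step names are hypothetical. It shows a strategy as a series of steps whose success and failure branches each lead toward a conclusion, driven by a recursive runtime that therefore halts.

```typescript
// Hypothetical sketch, not the actual STRX API. Illustrates a strategy as a
// linked series of steps whose branches all terminate in a conclusion, driven
// by a recursive runtime that therefore always halts.

type Outcome = 'success' | 'failure';

// A step performs some work and reports success or failure; each branch either
// points at another step or at null, which denotes the strategy's conclusion.
interface StrategyStep {
  topic: string;
  perform: () => Outcome;
  successNode: StrategyStep | null; // null => conclude (halt)
  failureNode: StrategyStep | null; // null => conclude (halt)
}

// Recursive runtime: follows the chosen branch until a conclusion is reached.
// Because every branch must eventually reach null, the recursion is finite.
const run = (step: StrategyStep | null, trail: string[] = []): string[] => {
  if (step === null) {
    return [...trail, 'conclude'];
  }
  const outcome = step.perform();
  const next = outcome === 'success' ? step.successNode : step.failureNode;
  return run(next, [...trail, `${step.topic}:${outcome}`]);
};

// Example strategy: attempt a step, fall back to a corrective step on failure,
// and conclude in every case.
const correct: StrategyStep = {
  topic: 'applyCorrection',
  perform: () => 'success',
  successNode: null,
  failureNode: null,
};

const attempt: StrategyStep = {
  topic: 'attemptWork',
  perform: () => (Math.random() > 0.5 ? 'success' : 'failure'),
  successNode: null,
  failureNode: correct, // branching affords error correction, then halts
};

console.log(run(attempt));
// e.g. [ 'attemptWork:failure', 'applyCorrection:success', 'conclude' ]
```

The design choice in this sketch, making `null` the only terminal branch, is what forces every path through the strategy, including its error-correction branch, to reach a conclusion and halt.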