From 43a3ec714c71462df90eeb986d7def85a6c826b4 Mon Sep 17 00:00:00 2001 From: REllEK-IO Date: Wed, 4 Oct 2023 06:58:18 -0700 Subject: [PATCH] Refinement --- ActionStrategy.md | 10 +++++----- The-Unified-Turing-Machine.md | 22 +++++++++++----------- 2 files changed, 16 insertions(+), 16 deletions(-) diff --git a/ActionStrategy.md b/ActionStrategy.md index 5830877..fc8295f 100644 --- a/ActionStrategy.md +++ b/ActionStrategy.md @@ -1,16 +1,16 @@ # Action Strategy ### Abstract -An alternative name for this pattern that would be more literal would be: Action Tree Strategy. But even within that scope the composition of this data structure includes the ability to trace back to previous nodes and likewise via an additional **decisionNode** parameter, the Ability to further expand the data pattern to describe an N-Tree or graph. +An alternative name for this pattern that would be more literal would be: Action Tree Strategy. But even within that scope the composition of this data structure includes the ability to trace back to previous nodes and likewise via an additional **decisionNode** parameter, the ability to further expand the data pattern to describe an N-Tree or graph. It is finite in operation and concludes. -The staked effect of this data structure is one that is capable of mapping the internal structure of some Neural Network. But the origination of this design was for the utilization of programmers to describe their own decision making process. As the scope of this pattern in combination with a stored ActionList is a Array that can be Flattened into a sequential series of steps or a paragraph. But was originally intended as a Method of troubleshooting this pattern in a complex computation environment. That it happens to match the composition of a paragraph was an accidental discovery at the time of its creation some five years ago. +The stacked effect of this data structure is one that is capable of mapping the internal structure of some Neural Network.
But the origination of this design was for the utilization of programmers to describe their own decision making process. As the scope of this pattern in combination with a stored ActionList is an Array that can be flattened into a sequential series of steps or a paragraph. But it was originally intended as a method of troubleshooting this pattern in a complex computation environment. That it happens to match the composition of a paragraph was an accidental discovery at the time of its creation some five years ago. -As the original pursuit of this data structure was to utilize the data parameter to formalize a transformation of said data over a period of steps. This chain like pattern of design is represented as a separate concept within this framework. As the framework is designed to be wholly responsible for itself. As the main issue would be the change of data that the chained behavior may be dependent upon. To allow for that data to transformation across any number of nodes on a graph, we introduced the spatial ownership design pattern. Where the final Destination for such could be the original axium, the client's screen, or even the database of some server. This is a formalization of a greater than the sums approach to programming by way of composition, decomposition, and recomposition of sets of concepts, contained within some axium that interacts with other axiums. +As the original pursuit of this data structure was to utilize the data parameter to formalize a transformation of said data over a period of steps. To map the exact transformation of data through an application, versus a series of factories. This chain like pattern of design is represented as a separate concept within this framework. As the framework is designed to be wholly responsible for itself. As the main issue would be the change of data that the chained behavior may be dependent upon.
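To make the shape described above concrete, here is a minimal sketch of an action node and the flattening of its executed trail into a sequential ActionList. The names (`ActionNode`, `flattenToList`, the `actionType` strings) are illustrative assumptions for this document, not the framework's actual API.

```typescript
// Hypothetical sketch of an ActionNode: each step names its action in plain
// language and links to a success path, a failure path, and an optional
// decisionNode map that expands the binary tree into an N-Tree or graph.
type ActionNode = {
  actionType: string;                          // plain-language description of the step
  successNode: ActionNode | null;              // next step when the action succeeds
  failureNode: ActionNode | null;              // next step when the action fails
  decisionNodes?: Record<string, ActionNode>;  // assumed N-Tree/graph expansion
};

// A stored ActionList flattened from the success path reads as a
// sequential series of steps, or a paragraph.
const flattenToList = (head: ActionNode): string[] => {
  const list: string[] = [];
  let node: ActionNode | null = head;
  while (node !== null) {
    list.push(node.actionType);
    node = node.successNode; // follow the success path to its conclusion
  }
  return list;
};

const close: ActionNode = { actionType: 'Conclude strategy.', successNode: null, failureNode: null };
const open: ActionNode = { actionType: 'Open the document.', successNode: close, failureNode: null };
console.log(flattenToList(open)); // ['Open the document.', 'Conclude strategy.']
```

Because each node carries its type as plain text, the flattened list doubles as a troubleshooting log, matching the accidental paragraph-composition discovery noted above.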
To allow for that data to transform across any number of nodes on a graph, we introduced the spatial ownership design pattern. Where the final destination for such could be the original axium, the client's screen, or even the database of some server. This is a formalization of a greater than the sums approach to programming by way of composition, decomposition, and recomposition of sets of concepts, contained within some axium that interacts with other axiums. #### The Exponential/Higher Order Complexity of a Binary Tree ![The Exponential Higher Order Complexity of a Binary Tree](https://github.com/Phuire-Research/STRX/blob/main/TreeExponetial.png?raw=true) -The Reality of what the ActionStrategy Pattern represents despite deceiving simplicity, is the direct mapping of higher orders of logic. As if we examine the increasing levels of complexity of any given ActionStrategy. The complexity by default is squared or doubling. As each step in combination with its dynamic nature has the possibility of failure. If the axium has the ownership concept loaded and a value of state has a lock, or via some other test that can be supplied within the governing method. It is interesting to Note how this relationship is obfuscated via mathematics, but plain in conceptual logic. That of a mechanical greater than the sums relationship. "As if you attempt to square one, you get one. And if you square the Number of branches, you get four, but at the one that is the head of the tree. And represents a doubling, but likewise ignores the possibility of additional branches beyond the two." +The reality of what the ActionStrategy Pattern represents, despite its deceptive simplicity, is the direct mapping of higher orders of logic. As if we examine the increasing levels of complexity of any given ActionStrategy. The complexity by default is squared or doubling. As each step in combination with its dynamic nature has the possibility of failure.
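A minimal illustration of the doubling the paragraph describes: since each step can succeed or fail, the number of distinct outcome paths through a binary ActionStrategy doubles with every step. This is a sketch of the arithmetic only, not framework code.

```typescript
// Each step branches into success or failure, so the count of distinct
// outcome paths through a binary ActionStrategy doubles per step.
const outcomePaths = (steps: number): number => 2 ** steps;

console.log(outcomePaths(1)); // 2
console.log(outcomePaths(2)); // 4 — the "squared" doubling noted above
console.log(outcomePaths(10)); // 1024
```

The decisionNode expansion beyond two branches only grows this count faster, which is what motivates the additional level of control discussed next.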
If the axium has the ownership concept loaded and a value of state has a lock, or via some other test that can be supplied within the governing method. It is interesting to note how this relationship is obfuscated via mathematics, but plain in conceptual logic. That of a mechanical greater than the sums relationship. "As if you attempt to square one, you get one. And if you square the number of branches, you get four, but at the one that is the head of the tree. And represents a doubling, but likewise ignores the possibility of additional branches beyond the two." -This demonstrates the need for an additional level of control to handle this higher order/exponential quality. And the reason for the spatial ownership pattern to supply some baseline coherency for the possibility of failure of each action. This is due to how a strategy run alongside in sequence with other actions/strategies that can transform values that the strategy is dependent upon. These other actions can be independently dispatched from some other subscription via an observation into the action stream. This is Especially likely when performing transformations off premise in a network of axiums. +This demonstrates the need for an additional level of control to handle this higher order/exponential quality. And the reason for the spatial ownership pattern to supply some baseline coherency for the possibility of failure of each action. This is due to how a strategy can run alongside in sequence with other actions/strategies that can transform values that the strategy is dependent upon. These other actions can be independently dispatched from some other subscription via an observation into the action stream. This is especially likely when performing transformations off premise in a network of axiums. "Would be the equivalent of attempting to build some car in a factory. But someone who was building another car, had taken the door you were intending on attaching to that car.
Thus the failure mode of the strategy would be to find that next car door if not present, otherwise the car is left on the assembly line, or thankfully in the scope of programming, just ceases to exist and all previously locked data or parts are freed to be used via other processes." diff --git a/The-Unified-Turing-Machine.md b/The-Unified-Turing-Machine.md index 9cb7fc2..77a6088 100644 --- a/The-Unified-Turing-Machine.md +++ b/The-Unified-Turing-Machine.md @@ -4,28 +4,28 @@ The original Unified Turing Machine was made possible by this author's ActionStr The entire step scope of a Unified Turing Machine is treated as a recursive unified function. As a function is allowed to be composed of functions. The mode specifically is the point of recursion. This allows for each step of the Unified Turing Machine and its deciding functionality to be a written equivalent of that of a universal function within the bounds of a graph and its decisions. As a universal function is some graph in isolation that a machine learning algorithm fits to some input and performs a weighted sum as the deciding factor as to what node would be run after the previous universal function layer within the Neural Network. There is a gap of understanding here. And to fit into the current paradigm without expansion, the Unified Turing Machine could be referred to as a non-deterministic deterministic Turing Machine counter intuitively. But for the sake of the rediscovery of Unified Science and its usage of logic and concepts as its formalized format to unify all fields of study. We shall name it the Unified Turing Machine. -In addition to these behaviors the Unified Turing Machine is capable in another way that Neural Networks are currently not. In that their Functionality may be Continuous and Halt pending some close signal. Where as modern LLMs receive some input and give some output via some black box graph of universal functions.
A Neural Network that would be equivalent to that of a Unified Turing Machine would have a constant coherency in time, while still being able to accept input and output. +In addition to these behaviors the Unified Turing Machine is capable in another way that Neural Networks are currently not. In that their functionality may be continuous and halt pending the conclusion of its strategies, or even a close signal. Whereas modern LLMs receive some input and give some output via some black box graph of universal functions. A Neural Network that would be equivalent to that of a Unified Turing Machine would have a constant coherency in time, while still being able to accept input and output. Noting that prior to 2023 and even in the midst of fine tuning open source LLMs, developers can run into occasions where the LLM fails to return an output. The comparison to the Unified Turing Machine, would be a Neural Network that was unable to halt given some input. -Further because of the configuration of the Unified Turing Machine, its functionality may also expand or reduce itself depending on its current state. Thus a sufficient mirror of the machine within a Neural Network paradigm. Would be a model that is capable of running continuously and able to modify its composition and size based on its the inputs. +Further because of the configuration of the Unified Turing Machine, its functionality may also expand or reduce itself depending on its current state. Thus a sufficient mirror of the machine within a Neural Network paradigm would be a model that is capable of running continuously and able to modify its composition and size based on its inputs. And can be networked alongside other Neural Networks that support the same functionality. -And with the spatial ownership paradigm, these Neural Networks would be capable of being aggregated together coherently and allow for specialization in a similar way as the human mind.
That one part may have some set of concepts in its axium and the other part a different specialization. While being able to reference one another and able to mutate the state of the other without creating a race condition within either network. +As with the spatial ownership paradigm, these Neural Networks would be capable of being aggregated together coherently and allow for specialization in a similar way as the human mind. That one part may have some set of concepts in its axium and the other part a different specialization. While being able to reference one another and able to mutate the state of the other without creating a race condition within either network. -The benefit of the Unified Turing Machine over that of a Neural Network, is that such is written in plain language by way of the action types. And the strategies demonstrate what would be considered to be probabilistic changes in the head, but mechanical in choice. The difficulty of such would be the complexity of managing such a machine. But each step in the machine may also carry some test to its ability to halt. This is to not replace Neural Networks, but to classify machines built using this methodology as aut intelligence or baseline automatic intelligence. Written in plain text in the spirit of the open internet. As aut is merely the origin of the letter "A" and originally meant that of ox. Would be a tool between both man and machine that can be refined by way of cooperation. +The benefit of the Unified Turing Machine over that of a Neural Network, is that such is written in plain language by way of the action types in the spirit of the open internet. And the strategies demonstrate what would traditionally be considered to be probabilistic changes in the head, but now mechanical in choice. The difficulty of such would be the complexity of managing such a machine. But each step in the machine may also carry some test to its ability to halt.
This is to not replace Neural Networks, but to classify machines built using this methodology as aut intelligence or baseline automatic intelligence that can safely be deployed. Written in plain text in the spirit of the open internet. As aut is merely the origin of the letter "A" and originally meant that of ox. Would be a tool between both man and machine that can be refined by way of cooperation and reactively function only when given some input. -The use case for these types of machines have several primary purposes. First is the utilization of Neural Networks to map their own universal functions using a format that would be organized conceptually and explained logically. explaining the opaque nature of universal functions that facilitate some dialog. The second use case would be a form of embodying current Neural Networks to allow the same form and coherency that is similar to its inner workings, while allowing for the transparent interpretation over that of their opaque collection of universal graphed functions and their interaction with a plain text environment. As these plain functions can be logically determined and subsequently limited to what is safe. +The use case for these types of machines has several primary purposes. First is the utilization of Neural Networks to map their own universal functions using a format that would be organized conceptually and explained logically. Explaining the opaque nature of universal functions that facilitate some dialog. The second use case would be a form of embodying current Neural Networks to allow the same form and coherency that is similar to its inner workings, while allowing for the transparent interpretation over that of their opaque collection of universal graphed functions and their interaction with a plain text environment. As these plain functions can be logically determined and subsequently limited to what is safe.
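Because action types are plain text, limiting such functions to what is safe can be as simple as a whitelist over the types a Neural Network is permitted to dispatch. The following is an assumed sketch of that idea; the action type strings and function names are hypothetical, not part of the framework.

```typescript
// Hypothetical safe-action whitelist: an axium embodying a Neural Network
// only dispatches action types that have been logically determined as safe.
const safeActionTypes = new Set<string>([
  'Counter Add',
  'Counter Subtract',
]);

// Any proposed action type outside the whitelist is rejected before dispatch.
const isSafe = (actionType: string): boolean => safeActionTypes.has(actionType);

console.log(isSafe('Counter Add'));      // true
console.log(isSafe('Delete All Files')); // false
```

The point of the sketch is that the check operates on readable text rather than on opaque weights, which is what makes the transparent interpretation described above possible.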
-But likewise one could also train a Neural Network on the basis of this machine to have the addition qualities described here while maintaining the opaque nature of Neural Networks. This is outside of the scope of of now. As the central focus is that of the safety of expandability that ActionStrategies bring to the table by way of explanation of those mysterious universal functions and their relations in a graph. And would be the beginning of a new field of study of that of Unified Conceptual Science. Or simply the study of all fields and how they may be utilized with a Unified Turing Machine. +But likewise one could also train a Neural Network on the basis of this machine to have the additional qualities described here while maintaining the opaque nature of Neural Networks. This is outside of the scope for now. As the central focus is that of the safety of expandability that ActionStrategies bring to the table by way of explanation of those mysterious universal functions and their relations in a graph. And would be the beginning of a new field of study of that of Unified Conceptual Science. Or simply the study of all fields to discover their shared concepts and how they may be utilized with a Unified Turing Machine. So here is the third option to the P equals or not equals NP postulate. A different set of organization entirely thanks to that of conceptually testable logic over that of symbolic mathematics that currently informs the modern paradigm of computer science. To add to, not take away. While providing a form of merit to those who are already acquainted with computer science and their cooperation with other fields. -The paths from here are truly unlimited and to imagine that we will be done in the scope of the orders and scales of complexity of such an explainable intelligent system, is to find coherency. As just because a system is highly chaotic and intelligent by consequence, does not mean that it is sane, or coherent.
As classically within the annuls of history we have known intelligence to be followed suite by madness. That the higher orders of complexity also bare the burden of having to maintain some amount of predictability in ones environment. And as machines like our thoughts exist within a simulation of some data. It is our actions in a physical environment that we may test our ability to understand the environment and if what we are predicting is sane. The need to find some logical implementation of some nebulas idea simply. Is the same difference between that of writing fantasy over that of writing a hard science fiction novel. As fantasy may be logically consistent, but only operate within a reality that allows for magic in the first place, like a video game. That concepts in contrast to 100 year old classical Conceptualism, are in fact testable in reality. This is the very formalization of Logical Conceptualism and the proposed format of a new Unified Conceptual Science. +The paths from here are truly unlimited, and to imagine what we can accomplish, in the scope of the orders and scales of complexity of such explainable intelligent systems, is to find coherency in ever rising productivity. As just because a system is highly chaotic and intelligent by consequence, does not mean that it is sane, or coherent. As classically within the annals of history we have known intelligence to be followed suit by madness. That the higher orders of complexity also bear the burden of having to maintain some amount of predictability in one's environment. And as machines like our thoughts exist within a simulation of some data. It is our actions in a physical environment that we may test our ability to understand the environment and if what we are predicting is sane. The need to find some logical implementation of some nebulous idea simply is the same difference between that of writing fantasy over that of writing a hard science fiction novel.
As fantasy may be logically consistent, but only operate within a reality that allows for magic in the first place, like a video game. That concepts in contrast to 100 year old classical Conceptualism, are in fact testable in reality due to our technology. This is the very formalization of Logical Conceptualism and the proposed format of a new Unified Conceptual Science. ## Specification of a Unified Turing Machine 0. Extends a base Turing Machine or built from the ground up. 1. Restricts its symbol selection to a set of concepts to be loaded into the axium via their qualities. 2. Has a quality of completeness in its ability to halt in a complex state arrangement by way of the loaded concepts and their own completeness towards halting. 3. Rather than a looping machine, the Unified Turing Machine is a function that indirectly recalls its functionality by way of a mode function. -4. Utilizes two tapes where one is a sequence of values modified by a second tape that is represent via a tree/graph structure that has logically determined set of symbols. +4. Utilizes two tapes where one is a sequence of values modified by a second tape that is represented via a tree/graph structure that has a logically determined set of symbols, concludes, and is finite. 5. During each call the Unified Turing Machine performs the traditional Turing operations of add, copy, move, and delete on the first tape based on the current symbol loaded on the second tape. 6. That symbols represented on the second tape may be of value, other machines, or even another Unified Turing Machine. 7. Besides the initial creator function, can be readily decomposed into the sum of its parts. @@ -44,13 +44,13 @@ The paths from here are truly unlimited and to imagine that we will be done in t * Reducer - The function that restricts memory manipulation based on symbol selection. * Construct - A generalized construction that cannot be decomposed to its parts.
* Semaphore - A symbol flagging system that is the symbol selection of actions at runtime. -* Spatial Ownership aka, Ownership - Blocks transformation of values via a ticketing system and assembles actions to be dispatched into the system via their Most recent values from the ownership state. +* Spatial Ownership aka, Ownership - Blocks transformation of values via a ticketing system and assembles actions to be dispatched into the system as determined by their ticket's line placement from the ownership state. ## Clarifying Terminology -Noting that chain, or a chain of action, does not meet the definition requirements for a Unified Turing Machine, as it is not a complete system of reasoning. Where a system of reasoning is capable of error correction. As it represents a reduced set of instructions that allows for said machine to behave automatically would be a flattened order of logic. +Noting that chain, or a chain of action, does not meet the definition requirements for a Unified Turing Machine, as it is not a complete system of reasoning, despite being finite. Where a system of reasoning is capable of error correction. As it represents a reduced set of instructions that allows for said machine to behave automatically, it would be a flattened presentation of higher orders of logic. And likewise the Action Tree Strategy pattern, referred to as ActionStrategy still affords for the functionality of the chained dynamic via a dumb set of ActionNodes that only supply one potential action for its outcome and is represented by a default success consuming function supplied within this framework. Which is why here we move to strike tree or chain from the concept's expression as we are defining ActionStrategy as a unified set of concepts that balances the deficiencies in an action chain as well as encompassing all the possible tree variations.
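The ticketing described for Spatial Ownership above can be sketched as a line of tickets per owned value, where only the head of the line may transform it. All names here (`Ticket`, `ledger`, `requestLock`, `mayTransform`, the `'car.door'` key) are hypothetical illustrations, not the framework's API.

```typescript
// Hypothetical ticket: a claim on an owned value by some action.
type Ticket = { id: number; actionType: string };

// Each stateful value under ownership holds a line of tickets.
const ledger = new Map<string, Ticket[]>();

// Joining the line places the ticket at the back.
const requestLock = (key: string, ticket: Ticket): void => {
  const line = ledger.get(key) ?? [];
  line.push(ticket);
  ledger.set(key, line);
};

// Only the ticket at the head of the line may transform the value;
// later tickets wait their turn, per their line placement.
const mayTransform = (key: string, ticketId: number): boolean =>
  ledger.get(key)?.[0]?.id === ticketId;

requestLock('car.door', { id: 1, actionType: 'Attach door.' });
requestLock('car.door', { id: 2, actionType: 'Paint door.' });
console.log(mayTransform('car.door', 1)); // true
console.log(mayTransform('car.door', 2)); // false — waits its turn in line
```

This is the same discipline as the car-door analogy in ActionStrategy.md: the second strategy cannot take the door until the first strategy's ticket is released.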
With that, Action Binary Tree Strategy, Action N Tree Strategy, or even Action Graph Strategy, while exact in definition can be noted from examining the parts of the ActionStrategy as an additional quality. -Noting that a Action Graph Strategy is merely a tree strategy where some node is connected to a to leaf. That creates in effect a looping mechanism that is capable of halting due to some mechanism that prevents that leaf from actualizing the loop again. Would be the machine receiving a set of instructions to run over a period of time till and still exit. Likewise strategies may be atomic and the need for some grand strategy to guarantee coherency in time, may not be the most efficient route. And instead it would be the utilization of ActionStrategies in a composable manner alongside some testing mechanisms. But likewise these tests would also have to take into account the total complexity of the entire application at order of scales. As even though each part can be tested, all parts together form a greater than the sums relationship and that whole in the higher orders of complexity by way of configuration would require additional tests. As the greater than the sums relationship dictates some emergent properties that classical statistical determinism is unable to quantify beyond a scale of complexity. This relays to the natural law of thermal dynamics in all systems and highlights the bifurcation of systems. Where at different scales the rules of the system reorder themselves to better handle a throughout put. That there is a difference between the quantum and daily physics of life. "Try as I may, my head would sooner break through the wall, than merge with it." +Noting that an Action Graph Strategy is merely a tree strategy where some node is connected to a leaf. That creates in effect a looping mechanism that is capable of halting due to some mechanism that prevents that leaf from actualizing the loop again.
Would be the machine receiving a set of instructions to run over a period of time and still exit. Likewise strategies may be atomic and the need for some grand strategy to guarantee coherency in time, may not be the most efficient route. And instead it would be the utilization of ActionStrategies in a composable manner alongside some testing mechanisms. But likewise these tests would also have to take into account the total complexity of the entire application at orders of scale. As even though each part can be tested, all parts together form a greater than the sums relationship and that whole in the higher orders of complexity by way of configuration would require additional tests. As the greater than the sums relationship dictates some emergent properties that classical statistical determinism is unable to quantify beyond a scale of complexity. This relays to the natural law of thermodynamics in all systems and highlights the effect of bifurcating systems. Where at different scales the rules of systems reorganize themselves to better handle the increased energetic throughput. That there is a difference between the quantum and daily physics of life. "Try as I may, my head would sooner break through the wall, than jump through it."
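The looping-yet-halting graph strategy described above can be sketched as a node that reconnects to itself, with a guard that prevents the leaf from actualizing the loop again. The names (`GraphNode`, `run`, the guard on remaining runs) are illustrative assumptions, not framework code.

```typescript
// Hypothetical graph node: `next` decides the following node at runtime,
// which is what lets a node reconnect to an earlier point in the tree.
type GraphNode = {
  actionType: string;
  next: (remaining: number) => GraphNode | null;
};

// A node that loops back to itself until the halting guard trips.
const work: GraphNode = {
  actionType: 'Perform step.',
  next: (remaining) => (remaining > 0 ? work : null), // guard prevents re-looping
};

// Run the graph strategy for a bounded number of steps, then exit.
const run = (head: GraphNode, runs: number): string[] => {
  const trail: string[] = [];
  let node: GraphNode | null = head;
  let remaining = runs;
  while (node !== null) {
    trail.push(node.actionType);
    remaining -= 1;
    node = node.next(remaining);
  }
  return trail;
};

console.log(run(work, 3).length); // 3 — runs for a period of time, then still exits
```

The guard is the "mechanism that prevents that leaf from actualizing the loop again": the machine receives instructions to run for a period and still exits, preserving the ability to halt.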