Evolutionary Logical Learning

conjunctive relations are generally used to represent groups of objects that are constrained in time or space, so that observed patterns of objects are not random: certain combinations are “favored”, i.e. more likely to occur than others.
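as a rough sketch (the class and names here are my own, not something defined above), a conjunctive relation can be treated as a count over co-occurring groups, so that frequently observed combinations stand out as “favored”:

```python
from collections import Counter
from itertools import combinations

# hypothetical sketch: track how often groups of objects co-occur,
# so "favored" combinations accumulate higher counts than chance pairings
class ConjunctiveRelations:
    def __init__(self, group_size=2):
        self.group_size = group_size
        self.counts = Counter()

    def observe(self, objects):
        # each observed scene/window contributes all of its object groupings
        for group in combinations(sorted(objects), self.group_size):
            self.counts[group] += 1

    def favored(self, n=5):
        # the most frequently co-occurring groups are the "favored" ones
        return self.counts.most_common(n)

rel = ConjunctiveRelations()
rel.observe(["door", "handle", "hinge"])
rel.observe(["door", "handle", "lock"])
print(rel.favored())   # ("door", "handle") appears in both observations
```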

sequential relations define the ways in which the objects or elements of a pattern differ from one another. they describe deltas in space or time that indicate connections between elements, and their form depends on the nature of the input space. by this i mean that depending on the type of data given as input and the kinds of measurements computed over that data, the structural properties of relations can change drastically from one input space to the next.


the distances/angles between points in space are analogous to the elapsed time between points in a sequence. spatial deltas naturally require a vector representation, whereas a temporal space uses scalar values to represent its one-dimensional deltas.
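a minimal sketch of that distinction (the function names are mine, assumed for illustration): a temporal delta is a single scalar, while a spatial delta carries both a magnitude and a direction:

```python
import math

# hypothetical sketch: temporal deltas are scalars, spatial deltas are vectors
def temporal_delta(t1, t2):
    # one-dimensional: elapsed time is just a signed scalar
    return t2 - t1

def spatial_delta(p1, p2):
    # two-dimensional: the delta is a vector, summarized here by
    # distance (magnitude) and angle (direction)
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return {"vector": (dx, dy),
            "distance": math.hypot(dx, dy),
            "angle": math.atan2(dy, dx)}

print(temporal_delta(1.0, 3.5))        # 2.5
print(spatial_delta((0, 0), (3, 4)))   # distance 5.0
```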

this means that the complexity of a given pattern is determined by the dimensionality of its input space, since additional structure is required to model each dimension of the input data. it's therefore efficient to use a process in which patterns adapt naturally to whatever form of data is given as input.

for example, a spatial pattern would develop further structure to account for the second dimension, whereas a temporal pattern only requires that its one-dimensional deltas be represented accurately enough that predictions based on them are useful toward achieving some goal.
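one way to picture that adaptation, again purely as a sketch under my own assumptions: a pattern stores whatever kind of delta its input space produces, so its internal structure follows the dimensionality of the data rather than being fixed in advance:

```python
# hypothetical sketch: a pattern whose stored structure follows the
# dimensionality of its input space
class Pattern:
    def __init__(self):
        self.deltas = []

    def observe(self, prev, curr):
        if isinstance(prev, (int, float)):
            # temporal input: one-dimensional, scalar delta
            self.deltas.append(curr - prev)
        else:
            # spatial input: one delta component per dimension
            self.deltas.append(tuple(c - p for p, c in zip(prev, curr)))

temporal = Pattern()
temporal.observe(0.0, 0.4)          # stores a scalar

spatial = Pattern()
spatial.observe((0, 0), (3, 4))     # stores a 2-d vector
print(temporal.deltas, spatial.deltas)
```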

certain rules and behaviors are known a priori and reflect the nature of the input space being operated on. this is what kickstarts the process of making decisions, observing results, and taking action to correct the predicted faults in one's own behavior. the initial stage of development is primarily guided by these built-in rules.

as patterns develop and become more complex, the task of measuring equivalencies, or for that matter performing any computation that deals with the comparison of two or more structures, quickly becomes a much more nuanced and ambiguous process that costs more to carry out.

the problem of deciding when to perform a computation (i.e. take measurements, make decisions, etc.) therefore becomes exponentially more complex. the cost of carrying out any given action depends on the subject, that is, on the patterns over which the computation is performed.
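a toy version of that trade-off (the cost model is invented for illustration): estimate the cost of a comparison from the sizes of the structures involved, and only carry it out when it fits within a budget:

```python
# hypothetical sketch: gate a comparison on its estimated cost
def estimated_cost(pattern_a, pattern_b):
    # assume pairwise comparison of elements, so cost grows with the
    # product of the two structure sizes
    return len(pattern_a) * len(pattern_b)

def maybe_compare(pattern_a, pattern_b, budget):
    if estimated_cost(pattern_a, pattern_b) > budget:
        return None                      # too expensive right now, defer
    # cheap enough: count shared elements as a crude similarity measure
    return len(set(pattern_a) & set(pattern_b))

print(maybe_compare(["a", "b", "c"], ["b", "c", "d"], budget=20))    # 2
print(maybe_compare(list(range(100)), list(range(100)), budget=20))  # None
```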

this means that the early stages of the system, before it has had time to develop beyond simple structures, provide a fault-tolerant environment in which to perform computations. this is a perfect opportunity to learn simple rules about constructing templates, performing comparisons, and everything else that will soon become far too costly if the inefficient actions expected in the beginning stages are not corrected before things get out of hand.

trial-and-error is the dominant form of learning throughout the system, the idea being that each problem faced as time goes on will be greater both in complexity and in the potential cost of making a wrong or inefficient decision. this resembles elements of reinforcement learning, although the utility function is based on an intuitive understanding that computational output should be minimized while still maintaining the integrity of stored patterns. failing to do the latter results in an unreliable system that performs poorly, and therefore indirectly causes the overall efficiency of the system to plummet.
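the utility described here might look something like the following sketch, where the weights and the integrity measure are assumptions of mine rather than anything specified above:

```python
# hypothetical sketch: score an action by how much computation it spends
# versus how well stored patterns survive it
def utility(compute_spent, pattern_integrity, w_cost=1.0, w_integrity=10.0):
    # integrity is weighted more heavily: a cheap action that corrupts
    # stored patterns still scores badly
    return w_integrity * pattern_integrity - w_cost * compute_spent

# trial-and-error: among candidate actions, keep the one that scored best
candidates = [
    {"name": "skip comparison", "compute": 0.0, "integrity": 0.5},
    {"name": "full comparison", "compute": 3.0, "integrity": 0.95},
]
best = max(candidates, key=lambda a: utility(a["compute"], a["integrity"]))
print(best["name"])   # "full comparison": the integrity gain outweighs the cost
```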

the drive to reduce net energy output is built into the system. by using templates along with certain inference rules (also built-in), this drive becomes linked to specific patterns and exerts an excitatory or inhibitory influence on their state. it is therefore a motivational resource by which higher-level actions are driven and carried out.
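as a sketch of how such a drive could be linked to patterns (the activation values and budget are my own invention):

```python
# hypothetical sketch: a built-in drive nudges pattern activations up or down
def apply_drive(patterns, energy_budget):
    for p in patterns:
        if p["expected_cost"] <= energy_budget:
            p["activation"] += 0.1   # excitatory: cheap enough to act on
        else:
            p["activation"] -= 0.1   # inhibitory: too expensive right now
    return patterns

patterns = [
    {"name": "simple template", "expected_cost": 1.0, "activation": 0.5},
    {"name": "deep comparison", "expected_cost": 8.0, "activation": 0.5},
]
print(apply_drive(patterns, energy_budget=2.0))
```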

without this ability, there would be no way to escape a local maximum caused by the misguided assumption that one should never expend any computational power on performing actions, simply because energy output is supposed to be minimized.

this is a simple case of scope, where that assumption is made with little knowledge of situations that yield a long-term payoff but require an initial computation, and therefore a short-term cost, in order to work. these are often more efficient over time, but require a more complex procedure of analysis to be recognized.
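a minimal expected-value check along these lines (the discount factor and numbers are purely illustrative) shows why the “never spend anything” policy is only a local maximum:

```python
# hypothetical sketch: compare doing nothing against paying a short-term
# cost that yields a recurring long-term saving
def long_term_value(upfront_cost, saving_per_step, steps, discount=0.95):
    total = -upfront_cost
    for t in range(steps):
        total += saving_per_step * (discount ** t)
    return total

do_nothing = 0.0
invest = long_term_value(upfront_cost=5.0, saving_per_step=1.0, steps=20)
print(invest > do_nothing)   # True: the initial cost pays for itself
```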
