Friday, February 13, 2015

Quantum Information and Quantum Of Information - III


Review from last post:

In working toward what a Quantum of Information is, I discussed some notions around ontology synthesis, which is at the core of understanding any domain: the ability to conceptualize the world and the domains within it.

I then went on to say that when a hypothesized theory T2 contains more information than a competing theory T1, the quantum probability density matrix that defines the ontology states can be discovered and refined with increasing precision by using the knowledge contained in T2 over that contained in T1.

The strategy used is called Quantum Inspired Semantic Tomography (QIST), drawn from Quantum State Tomography but adapted for information science.

High-quality decision making depends on high-utility knowledge, which in turn rests on identifying the structure and the probability distribution of variants, whose importance can only be lifted from the raw data itself or from evidence about the data.
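
As a purely illustrative sketch (not the QIST procedure itself, and with state vectors and weights of my own invention), the idea of an ontology state held as a density matrix, refined as the evidence sharpens, can be played with in a few lines of Python: a weighted mixture of "ontology state" vectors is folded into a density matrix, and the von Neumann entropy falls as the weighting supplied by a more informative theory (T2) concentrates the mixture relative to a less informative one (T1).

```python
import numpy as np

def density_matrix(vectors, weights):
    """Mix weighted outer products of unit state vectors into a density matrix rho."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    rho = sum(w * np.outer(v, v) for w, v in zip(weights, vectors))
    return rho / np.trace(rho)          # normalize so that trace(rho) == 1

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lambda_i log2(lambda_i) over the eigenvalues of rho."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]  # drop numerical zeros
    return float(-np.sum(eigvals * np.log2(eigvals)))

# Hypothetical "ontology states": unit vectors in a tiny 3-dimensional semantic space.
states = [np.array(v, dtype=float) / np.linalg.norm(v)
          for v in ([1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1])]

# Theory T1: nearly uniform evidence over the states.
# Theory T2: sharper evidence about the same states (it "contains more information").
rho_t1 = density_matrix(states, weights=[1, 1, 1, 1])
rho_t2 = density_matrix(states, weights=[8, 1, 1, 0.5])

# Lower von Neumann entropy = a more refined, more informative ontology state.
print("S(T1) =", von_neumann_entropy(rho_t1))
print("S(T2) =", von_neumann_entropy(rho_t2))
```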

Finally, the subject at hand: The Quantum Of Information

Infon: A Quantum Of Information

 

So, and here is the critical thing: the traditional model of thinking in AI in general is based on the Symbol Hypothesis, much as physics was originally conceived in terms of physical particles when, in fact, particles are just the observable and measurable phenomena of fields. In other words, particles emerge as the kinks of a field, such as the electromagnetic field. The Physical Symbol System Hypothesis was first put forth by Allen Newell and Herbert A. Simon nearly four decades ago in their Turing Award lecture, Computer Science as Empirical Inquiry: Symbols and Search. The theory, rooted firmly in the foundations of Atomism, has been the cornerstone of intelligent systems.

For a contemporary review of the status of the symbol hypothesis, see, for example, the Stanford paper by Nils J. Nilsson, The Physical Symbol System Hypothesis: Status and Prospects.

I am not going to go further into this except to state, categorically, that whatever symbol system emerges must be ontologically grounded and must bear some relation of fidelity, accuracy, and precision to the concepts it seeks to express.

The central question for Quantum AI, therefore, is not the validity or invalidity of the symbol hypothesis, just as atoms, electrons, protons, and all the other more exotic particles of the atomistic tradition are not invalid. Rather, it is how symbols themselves can be created such that they, as Peirce's semiotic elements, are fit to the purposes of their environment (which includes the entities that utilize them as well as their significance with respect to the environment in which those entities exist).

The concept is the Quantum Of Information: how is a symbol to exist rationally, with utility, preference, and objective value in semiosis?

My hypothesis is that the concept of the Infon has most of the character of the quantum of information but does not explain how to create such an infon without the presence of a human author.

Our goal is that the machine is the author of concepts and teaches us about them.

But in order to achieve this, we must identify, just as physicists have done, the raw mathematical materials to use in developing models that produce the observable and usable infons (like the Higgs field that gives mass to particles, and the quarks and gluons that give cohesion to the nuclei of atoms).

If indeed we are to continue to adopt and use the atomistic model, we must also acknowledge the possibility that a deeper structure underlies it.

It is this deeper structure that I believe Quantum-inspired, pseudo-Quantum, and alternative thinking can model.

Implementing Infons 

 

In the next post, I will formally introduce a field model out of which such a structure could possibly be defined. As MacLennan states: "A field computer processes fields, that is, spatially continuous arrays of continuous value, or discrete arrays of data that are sufficiently large that they may be treated mathematically as though they are spatially continuous. Field computers can operate in discrete time, like conventional digital computers, or in continuous time like analog computers."
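
To make that concrete, here is a minimal sketch, in Python, of a field computer in roughly this sense: the "field" is just a discrete array large enough to be treated as if it were spatially continuous, and the computation is a field-to-field operator (here, local diffusion followed by a pointwise nonlinearity) iterated in discrete time. The functions and parameters are my own illustrative choices, not drawn from MacLennan's formalism.

```python
import numpy as np

def diffuse(field, rate=0.2):
    """One discrete-time application of a field-to-field operator:
    each cell moves toward the average of its four neighbours (periodic edges)."""
    neighbours = (np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0) +
                  np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1))
    return (1 - rate) * field + rate * neighbours / 4.0

def run_field_computer(field, steps=50):
    """Iterate the operator in discrete time; on a fine enough grid the array
    behaves, mathematically, as though it were a spatially continuous field."""
    for _ in range(steps):
        field = np.tanh(diffuse(field))    # diffusion plus a pointwise nonlinearity
    return field

# A "field": a large discrete array, seeded here with a single localized bump.
grid = np.zeros((256, 256))
grid[128, 128] = 50.0

result = run_field_computer(grid)
print(result.shape, float(result.max()))
```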

In recent news, new engineering prototypes such as the metasurface analog computation substrate may provide a bridge to the quantum-like, analog-based computing needed to address many of the ideas in these blogs.

Computer scientists believe quantum computers can solve problems that are intractable for conventional computers because they work according to principles that conventional computers cannot feasibly exploit. No one is advocating that the Church-Turing thesis is violated, but rather that the linear, sequential Turing machine of the classic kind is a model that itself limits what is possible. The Turing machine comes in many different flavors, and in another blog I will address these issues, as I have so far never been happy with the dispersed fragments on the subject that I have had to collect.

In fact, Michael Nielsen writes that many people ask for a simple, concrete explanation of a quantum computer. Feynman himself was once asked by a newsman for a simple, concrete explanation of why he won the Nobel prize.

Nielsen's answer: "The right answer to such requests is that quantum computers cannot be explained in simple concrete terms; if they could be, quantum computers could be directly simulated on conventional computers, and quantum computing would offer no advantage over such computers. In fact, what is truly interesting about quantum computers is understanding the nature of this gap between our ability to give a simple concrete explanation and what’s really going on."

Feynman's answer to his question: "Hell, if I could explain it to the average person, it wouldn't have been worth the Nobel prize."

An Analogical Paradigm of Representation


I assume that the brain uses an analog model with an analogical pattern strategy as a means to representation, the whole resting on a foundation of quantum mechanics: in other words, and in the light of Peircean semiotics, that what is perceived emerges as the result of interactions between primordial analogs for patterns.
The implication is that objects are represented in the brain by ephemeral infons that are spatial analogs of them, synthesized into mimicking models of those objects. It seems to us that these are the objects themselves, and not their proxies, only because we have never seen those objects in their raw form, inside our own minds, but only through our perceptual representations of them.
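
A toy sketch may help fix the intuition (the shapes, sizes, and correlation measure below are my own illustrative choices, not a claim about how the brain actually does it): an "object" is held as a 2-D activation field, a spatial analog, rather than as a discrete symbol, and resemblance is measured by correlating fields instead of testing symbolic identity.

```python
import numpy as np

def spatial_analog(shape_fn, size=64):
    """Render an 'object' as a 2-D activation field (its spatial analog),
    rather than as a discrete symbol."""
    ys, xs = np.mgrid[0:size, 0:size]
    return shape_fn(xs / size, ys / size).astype(float)

def resemblance(field_a, field_b):
    """Analogical matching: normalized correlation between two fields,
    in place of a symbolic identity test."""
    a = field_a - field_a.mean()
    b = field_b - field_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical percepts: two concentric squares and an off-centre disc.
square     = spatial_analog(lambda x, y: (abs(x - 0.5) < 0.2) & (abs(y - 0.5) < 0.2))
big_square = spatial_analog(lambda x, y: (abs(x - 0.5) < 0.3) & (abs(y - 0.5) < 0.3))
disc       = spatial_analog(lambda x, y: (x - 0.25) ** 2 + (y - 0.25) ** 2 < 0.02)

# The two squares resemble each other far more than either resembles the disc.
print(resemblance(square, big_square), resemblance(square, disc))
```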

Perception is usually overlooked in reasoning because the illusion is that the objects of reasoning exist, when in fact they are interpretants in the mind of the beholder, like the virtual particles in quantum physical computations.

The world of perception appears so much like the real world of which it is merely a copy that we forget that there is actually a significant distinction.

In other words, an abstract symbolic code, as suggested in the symbol hypothesis paradigm, is not used to represent anything in the brain; nor are representations encoded by the activation of individual cells or groups of cells that register particular features detected in the scene, as suggested in the neural network or feature detection paradigm. Rather, these abstractions exist as structures on the substrate of the physico-bio-electrical structures that sustain their existence.

The reason why the brain expresses perceptual experience in explicit spatial form originates in evolution, where a brain capable of processing that spatial information provided higher survivability (e.g. to jump and climb into trees). In fact, the nature of those spatial algorithms is itself open to investigation.

These spatial and sensori-motor structures produce the schemes which can support analogical representation and reasoning.



