## Sunday, March 1, 2015

### Towards Quantum Gravity

From page 82 onward (click the book on the right) the analysis turns its attention to the General Relativistic description of gravity. The very first paragraphs allude to the fact that in conventional physics, Relativity Theory is not compatible with Quantum Mechanics. The subtle problems between Special Relativity and Quantum Mechanics are sometimes dismissed as negligible semantic issues, but the closer you get to relativistic gravity, the more concrete those problems become.

In the attempts to quantize General Relativity, there are significant conceptual problems in the way time is conceived as a dynamic metric of space-time. The notion of time is embedded in the dynamic space-time geometry, and in quantizing that geometry, the notion of time becomes self-referential: it refers to the dynamics of the very geometry that describes time in the first place.

Click here for some commentary on the issue. Notice how typical attempts to quantize gravity involve quantizing general relativistic space-time itself, and notice how that leads to an ill-defined background for quantum fluctuations. If it is space-time that fluctuates, what is it fluctuating in relation to?

“Since time is essentially a geometrical concept [in General Relativity], its definition must be in terms of the metric. But the metric is also the dynamical variable, so the flow of time becomes intertwined with the flow of the dynamics of the system”

If we step back to Special Relativity for a bit, it's worth noting that space-time is usually described with the metric:

$$ds^2= dx^2 + dy^2 + dz^2 - (cdt)^2$$

The time component $t$ is taken as a coordinate axis, and its signature means the interval $s$ is exactly 0 whenever $dx^2 + dy^2 + dz^2 = (cdt)^2$.
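As a quick sanity check on the sign convention above, here is a minimal numerical sketch; the displacements chosen are purely illustrative, but they confirm that a light-like displacement yields exactly a zero interval while a slower-than-light one gives a negative (time-like) $ds^2$:

```python
# Sanity check of the Minkowski interval ds^2 = dx^2 + dy^2 + dz^2 - (c dt)^2
C = 299_792_458.0  # speed of light, m/s

def interval_squared(dx, dy, dz, dt, c=C):
    """Squared space-time interval with the (+, +, +, -) signature used above."""
    return dx**2 + dy**2 + dz**2 - (c * dt)**2

# A photon travelling for 1 second along x covers exactly c metres:
ds2_light = interval_squared(C, 0.0, 0.0, 1.0)

# Something moving at half the speed of light gives a negative ds^2:
ds2_timelike = interval_squared(0.5 * C, 0.0, 0.0, 1.0)

print(ds2_light)     # 0.0
print(ds2_timelike)  # negative
```

Anything on a light-like world line thus has zero interval between any two of its events, which is the point developed next.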

Meaning, anything moving at the speed of light is described as having a 0-length interval. In terms of its own coordinate system, it takes the same coordinates at Earth as at Alpha Centauri. Or to be more accurate, such a coordinate system is ill-defined; "light" is conceptually time-dilated to 0, it "doesn't see time", and thus it doesn't see the $t$-coordinate, insofar as one wishes to describe time as geometry. On the flip side of the same coin, the spatial distance between the events is also exactly 0; to say it another way, there's no way to describe time evolution from the perspective of light.

Note that this convention is instantly thrown out the window when one wishes to move to General Relativity. (See the quote from "Gravitation".)

However, typical attempts to quantize gravity still tend to conceptualize time as a geometrical axis of the coordinate system in which the data is displayed.

Despite these kinds of conventional difficulties, Richard's analysis at hand reproduces both the quantum mechanical and the relativistic relationships from the same exact underlying framework, and in doing so it does in fact give a quantized representation of relativistic gravity without employing the concept of space-time at all. In other words, while the presented framework is not really a theory about reality per se, it does effectively give a fully general representation of any possible unifying theory, or so-called "Theory of Everything".

Naturally, unifying both relationships under the same logical framework raises the question: what exactly is the critical difference between Richard's framework and the conventional form of these theories?

While many definitions picked up from the analysis correspond almost identically to concepts also defined in conventional physics - e.g., objects, "energy", "momentum", "rest mass" (p. 56-58) - there is one very crucial difference in the way time is conceptualized. Think carefully about the following:

The parameter $t$, or "time", is not defined as a coordinate axis. It is defined explicitly as an ordering parameter; elements associated with the same $t$ are taken to belong to the same "circumstance". And if you follow the development of the general rules (p. 14-40), you can see that under this notation, elements must be seen as obeying contact interactions; only the elements that come to occupy the same position at the same "time" can interact. Notice how that statement about "time" is not consistent with the idea that clocks measure "time"; what clocks measure is a different concept, and great care must be taken not to tacitly confuse the two at any point of the analysis.

However, $\tau$, or tau, was defined as a coordinate axis, originally having nothing to do with time. It was named tau from the get-go because in the derivation of Special Relativity, displacement in $\tau$ turns out to correspond exactly to what any natural clock must measure; closely related to the relativistic concept of proper time, which is typically symbolized with $\tau$. It is important to understand the difference though; in Special Relativity, $\tau$ is NOT a coordinate axis! Under the specific paradigm that we call Special Relativity, it cannot be generalized as a coordinate axis, and the associated definitions of the entire framework need to be built accordingly.

Remember, in the analysis, $\tau$ was an "imaginary" coordinate from the get-go; an object's position in $\tau$ has no impact on the evaluation of the final probabilities, so its value is to be seen as completely unknown. On the other hand, an object's momentum along $\tau$ corresponds to its mass (Eq. 3.22), which is essentially a fixed quantity, and thus treated as completely known.

Which simply means that the $\tau$ position cannot play a role in the contact interactions of the elements. You may conceptualize this as if every object were infinitely long along $\tau$, if you wish. Either way, projecting the $\tau$ position out of object interactions is the notion that is typically missing in most attempts to describe relativity with Euclidean geometries, and without it you tend to be unable to define interactions meaningfully.

Note how this corresponds exactly to the fact that the fundamental equation we are working with is a wave equation. Since it is a wave equation, it is also governed by the uncertainty principle from the bottom up.

Meaning, mass $m$ and $\tau$ must be related by $\sigma_m \sigma_{\tau} \geq \frac{\hbar}{2}$. Since the momentum along $\tau$ (defined as mass) is completely known, the $\tau$ position must be completely unknown.
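The wave-mechanical origin of that trade-off can be illustrated numerically. The sketch below is a generic Fourier-pair check (with $\hbar = 1$, nothing specific to the book's own equations): it builds a Gaussian wave packet, computes its spread in position and in the conjugate wavenumber via an FFT, and confirms the product sits at the lower bound of $\tfrac{1}{2}$:

```python
import numpy as np

# Discretized Gaussian wave packet psi(x) ~ exp(-x^2 / (4 sigma^2))
N, L, sigma = 4096, 100.0, 2.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))

# Position-space probability density and spread
px = np.abs(psi)**2
px /= px.sum()
sigma_x = np.sqrt(np.sum(px * x**2))

# Conjugate-space (angular wavenumber k) spread via FFT
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
pk = np.abs(np.fft.fft(psi))**2
pk /= pk.sum()
sigma_k = np.sqrt(np.sum(pk * k**2))

# A Gaussian saturates the bound: sigma_x * sigma_k = 1/2 (hbar = 1)
print(sigma_x * sigma_k)  # ~0.5
```

Narrowing the packet in one variable necessarily widens it in the conjugate one; with mass defined as the momentum conjugate to $\tau$, a completely known mass forces a completely unknown $\tau$ position by exactly this mechanism.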

Note that under the conventional description of Relativity, while $t$ is a coordinate axis against which all data is plotted, there is no such thing as a clock that measures $t$; each clock measures its own proper time, $\tau$. We just infer $t$ from $\tau$ measurements. Insofar as we are willing to approximate our clocks as stationary (which they never are), and to approximate away the effects of gravity and its unpredictable fluctuations, that inference is rather trivial. But when you involve gravity, you give curving and wiggling world lines to all the elements, including the time reference devices, and the relationship between $\tau$ and the coordinate axis you wish to plot your data with ($t$) becomes much harder to handle. It seems quite rational to investigate a paradigm where $\tau$ is, by its very definition, a coordinate axis.

I won't repeat the derivation in the book, but please post a comment if anything in the derivation seems unclear, so it can be clarified by the author.

A few comments are in order though. Note that the final result of the deduction (Eq. 4.23) is not completely identical to the Schwarzschild solution; it contains an additional term whose impact is extremely small. This is rather interesting, because it implies a possibility for experimental verification. On the other hand, it may also simply be an error by the author. I think this part requires that enough experienced people walk through the derivation and see if they can find errors. With so many people looking for a way to describe quantum gravity, I would think there are interested parties out there.

## Wednesday, January 14, 2015

### Creating an Absolutely Universal Representation

It has become quite obvious to me that practically no one seems to comprehend what I have put forth in my book. I recently attended a Sigma Xi conference in Glendale, Arizona, where I spoke to several people about my logical presentation. From their reactions, I think I have a somewhat better comprehension of the difficulty they perceive. The opening chapters of the book seem to emphasize the wrong issues.

The first two chapters seem to overcomplicate a rather simple issue. I suggest one might consider the following post to be a simpler replacement of those opening chapters.

The underlying issue I was presenting is the fact that our knowledge, from which we deduce our beliefs, constitutes a finite body of facts. This is an issue modern scientists usually have little interest in thinking about (see Zeno's paradox). My analysis can be seen as effectively uncovering some deep implications of the fact that our knowledge is built on a finite basis.

The same issue applies to human communication. Note that every language spoken by mankind to express beliefs is likewise a construct based on a finite number of concepts. What is important here is that the number of languages spoken by mankind is greater than one. This fact also has implications far beyond the common perception.

Normal scientific analysis of any problem invariably ignores some issues of learning (and understanding) the language in which the problem is expressed. Large collections of concepts are presumed to be understood by intuition or implicit meanings. One should comprehend that one cannot even begin to discuss the logic of such an analysis with a newborn baby. In fact, I suspect a newborn cannot even have any meaningful thoughts before some concepts have been created to identify their experiences. Any concepts we use to understand any problem had to be mentally constructed. The fact that multiple languages exist implies that the creation of those concepts arises from early experiences and that the representation itself is, to some degree, an arbitrary construct.

The central issue of my deduction is the fact that once one has come up with a theoretical explanation of some phenomena (that is, created a mental model of their experiences), the number of concepts they use to think is finite (maybe quite large, but nonetheless finite). It follows that, being a finite collection, a list of the relevant concepts can be created. (Think about the creation of libraries, museums and other intellectual properties together with an inventory log.)

Once one has that inventory log, numerical labels may be given to each and every log entry. Using those numerical labels, absolutely every conceivable circumstance which can be discussed may be represented by the notation $(x_1,x_2,\cdots,x_n)$. Note that learning a language is exactly the process of establishing the meaning of such a collection from your experiences, expressed with specific collections of such circumstances: i.e., if you have at your disposal all of the circumstances you have experienced, expressed in the form $(x_1,x_2,\cdots,x_n)$, you can use that data to reconstruct the meaning of each $x_i$, as that is actually the central issue of learning itself.
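As a concrete illustration of such an inventory log, one might assign arbitrary integer labels to concepts and represent a circumstance as a tuple of those labels. The concept names and label values below are invented purely for demonstration:

```python
# Hypothetical inventory log: each concept gets an arbitrary numeric label.
inventory = {"sky": 17, "is": 4, "blue": 42, "grass": 8, "green": 23}

def circumstance(*concepts):
    """Represent a circumstance as a tuple (x1, x2, ..., xn) of labels."""
    return tuple(inventory[c] for c in concepts)

c1 = circumstance("sky", "is", "blue")
print(c1)  # (17, 4, 42)

# Adding the same constant to every label changes nothing about which
# circumstances are the same and which differ -- the labels are arbitrary.
shifted = {name: label + 100 for name, label in inventory.items()}
```

The closing comment is worth keeping in mind: nothing about the represented experiences depends on the particular numbers chosen, only on the patterns among them.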

I would like to point out that just because people think they are speaking the same language does not mean their concepts are semantically identical. Each of them possesses what they think is the meaning of each specified concept. What is important here is that "what they think those meanings are" was deduced from their experiences with communications; i.e., what they know is the sum total of their experiences (that finite body of facts referred to above).

But back to my book. The above circumstance leads to one very basic and undeniable fact. If one has solved the problem (created a mental model of their beliefs), then they can express those beliefs in a very simple form: $P(x_1,x_2,\cdots,x_n)$, which can be defined to be the probability that they believe the specific circumstance represented by $(x_1,x_2,\cdots,x_n)$ is true. In essence, if they had an opinion as to the truth of the represented circumstance, $P(x_1,x_2,\cdots,x_n)$ could be thought of as representing their explanation of the relevant circumstance $(x_1,x_2,\cdots,x_n)$.

It is at this point that a single, most significant, observation can be made. Those labels, $x_i$, are absolutely arbitrary. If any specific number is added to each and every numerical label $x_i$ in the entire defined log, nothing changes in the patterns of experiences from which the solution was deduced. In other words, the following expression is absolutely valid for any possible solution representing any possible explanation (what is ordinarily referred to as one's belief in the nature of reality itself); i.e.,

$$\lim_{\Delta a \rightarrow 0}\frac{P(x_1+a+\Delta a,x_2+a+\Delta a,\cdots,x_n+a+\Delta a)-P(x_1+a,x_2+a,\cdots,x_n+a)}{\Delta a}\equiv 0.$$

What is important here is that, if this were a mathematical expression, it would be exactly the definition of the derivative of $P(x_1+a,x_2+a,\cdots,x_n+a)$ with respect to $a$.

If $P(x_1,x_2,\cdots,x_n)$ were a mathematical expression, the above derivative would lead directly to the constraint that $$\sum_{i=1}^n\frac{\partial\;}{\partial x_i}P(x_1,x_2,\cdots,x_n)\equiv 0.$$ However, it should be evident to anyone trained in mathematics that the expression defined above does not satisfy the definition of a mathematical expression, for a number of reasons.
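For readers who want to see that constraint in action, here is a small numerical sketch. It treats the labels as if they were real variables (exactly the idealization discussed above), picks an arbitrary shift-invariant function as a stand-in for $P$, and checks that the sum of its partial derivatives vanishes; the particular function is invented for illustration only:

```python
import math

def P(xs):
    """A stand-in 'explanation': it depends only on differences between
    labels, so it is invariant under a uniform shift x_i -> x_i + a."""
    n = len(xs)
    s = sum((xs[i] - xs[j])**2 for i in range(n) for j in range(i + 1, n))
    return math.exp(-s / 10.0)

def sum_of_partials(f, xs, h=1e-6):
    """Central-difference estimate of sum_i dP/dx_i at the point xs."""
    total = 0.0
    for i in range(len(xs)):
        up = list(xs); up[i] += h
        dn = list(xs); dn[i] -= h
        total += (f(up) - f(dn)) / (2 * h)
    return total

point = [1.0, 2.5, -0.5, 4.0]
print(sum_of_partials(P, point))  # ~0, by shift invariance
```

Any function invariant under a uniform shift of all its arguments satisfies this identity, which is the mathematical content of the arbitrariness of the labels.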

The reader should comprehend that there are two very significant issues before even continuing the deduction. First, the numerical labels $x_i$ are not variables (they are fixed numerical labels), and second, the actual number of concepts labeled by those $x_i$ required to represent a specific circumstance of interest is not fixed in any way. (Consider representing a description of some circumstance in some language; the number of words required to express that circumstance cannot be a fixed number for all possible circumstances.)

The remainder of chapter 2 is devoted to handling all the issues related to transforming the above derivative representation into a valid mathematical function. Any attempt to handle the two issues above will bring up additional issues which must be handled very carefully. The single most important point in that extension of the analysis is making sure that no possible explanation is omitted from the final representation: i.e., if there exist explanations which cannot be represented by the transformed representation, the representation is erroneous.

There is another important aspect of such a representation. Though the number of experiences standing behind the proposed expression $P(x_1,x_2,\cdots,x_n)$ is finite, the number of possibilities to be represented by the explanation must be infinite (the probability of truth must be representable for all conceivable circumstances $(x_1,x_2,\cdots,x_n)$).

I take care of the first issue by changing the representation from a list of numerical labels to a representation by patterns of points in a geometry. This would be quite analogous to representation via printed words or geometric objects standing for the relevant concepts. I handle the second issue by introducing the concept of hypothetical objects, a very common idea in any scientific explanation of most anything.

At this point another very serious issue arises. If the geometric representation is to represent all possible collections of concepts, that geometry must be Euclidean. This is required by the fact that all "non-Euclidean" geometries introduce constraints defining relationships between the represented variables. Only Euclidean geometry imposes absolutely no constraints on the relationships between the represented variables. This is an issue many theorists omit from their consideration.

I look forward to any issues which the reader considers to be errors in this presentation.