In **part II** of this series it was discussed and demonstrated -with numbers- that the features fit a *curved space* better than the traditional flat one when they are assumed to live in one. This perspective was elaborated in **part IV**, together with an argument supporting a *positive* curvature as the best fit. However, since other approaches have been combined with the current standpoint, this time *eigenvectors* will be used, as in the **Principal Component Analysis** (PCA) method.

These will be used on the **Markov chains** predicted for the first five states -as there are five classes-. The latter is chosen because the *states' probabilities* change more in the first periods than in the ones close to the *steady state*; therefore, there is *more information* about the *significance* of each class in the state vector. In fact, this concept is also shared by the *eigenvectors* that will be used in this new approach, which are defined as a description of the *intrinsic properties* of a *linear transformation*.
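As a quick illustration of that defining property, NumPy can confirm that a linear transformation only rescales its eigenvectors (a minimal sketch with an arbitrary matrix, not the series' data):

```python
import numpy as np

# Any square matrix works as the linear transformation A.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column of `eigenvectors` satisfies A @ v = lambda * v:
# the transformation only rescales it, never rotates it.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
    print(f"lambda = {lam:.1f}: A @ v = lambda * v holds")
```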

Consequently, as has been said throughout this series, one of the reasons *stochastic* processes look random could be the fact that we *assume* a *flat space* every time, although other fields -such as physics- have *mathematically* proved that we live in a *hyperdimensional and curved reality*. Thus, the output Markov chain for each instance is taken as a matrix:

The matrices are modified to match the 5x5 shape *required* to calculate the eigenvectors: in case the number of rows is lower than five, more steady states are added, and in the opposite case only the first five states are taken. Hence, using the *linear algebra library* of NumPy, the resulting matrix of eigenvectors is as follows:
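A sketch of that step: pad or truncate each predicted chain to five states and feed it to `numpy.linalg.eig` (the matrix values and variable names here are made up for illustration; the article's real chains come from its own predictions):

```python
import numpy as np

def eigvecs_from_chain(chain: np.ndarray) -> np.ndarray:
    """Shape a predicted Markov chain to 5x5 and return its eigenvectors.

    `chain` holds one state vector (five class probabilities) per row.
    If fewer than five rows were predicted, the last row (the one closest
    to the steady state) is repeated; otherwise only the first five are kept.
    """
    n = chain.shape[0]
    if n < 5:
        pad = np.repeat(chain[-1:], 5 - n, axis=0)  # add more steady states
        chain = np.vstack([chain, pad])
    else:
        chain = chain[:5]                           # keep the first five states
    _, eigenvectors = np.linalg.eig(chain)
    return eigenvectors                             # may be complex-valued

# Toy 3-state chain for five classes; each row sums to 1.
toy = np.array([[0.40, 0.30, 0.10, 0.10, 0.10],
                [0.30, 0.30, 0.20, 0.10, 0.10],
                [0.25, 0.30, 0.20, 0.15, 0.10]])
V = eigvecs_from_chain(toy)
print(V.shape)   # (5, 5)
```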

As the reader can see, some eigenvectors have a real and an *imaginary part*. Why is that? An eigenvector is a *non-zero* vector that, when put through a *linear transformation A*, does *not change* direction and is only multiplied by a *scalar* *λ* -lambda- called the *eigenvalue*. Then, the equation below is true:

*A·v = λ·v*

Expressed as a **polynomial**, the above equation can be written as:

*p( λ ) = det( λI − A )*

This is called the **characteristic polynomial** of the matrix A. Pay attention to the fact that this time *λI − A* was written instead of *A − λI*; since *v* is *non-zero*, as mentioned before, the former and the latter yield the *same* roots. Consequently, as the fifth-degree polynomial would be too long to be analyzed here, the quadratic form is shown:

*λ² − tr( A )·λ + det( A ) = 0*
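For a 2×2 matrix the quadratic form above can be checked directly: its roots are exactly the eigenvalues NumPy returns (an illustrative stochastic matrix, not the article's data):

```python
import numpy as np

A = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Characteristic polynomial of a 2x2 matrix:
# p(lambda) = lambda^2 - tr(A)*lambda + det(A)
tr, det = np.trace(A), np.linalg.det(A)
roots = np.roots([1.0, -tr, det])

eigenvalues = np.linalg.eigvals(A)
assert np.allclose(sorted(roots), sorted(eigenvalues))
print(sorted(roots))   # ≈ [0.3, 1.0]
```

Note the root at 1.0: every row-stochastic matrix, like a Markov transition matrix, has it.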

It means that if the **discriminant** of the polynomial:

*tr( A )² − 4·det( A )*

is *negative*, the eigenvectors will have *real* and *complex* components. Therefore, there is nothing wrong with the calculations so far, as the latter can be perfectly expected. Now, the current series proposes *four types* of distances; however, this time the eigenvectors are subjected to *transformations* to find out which class has the *shortest* distance to the *hyperplane* when a positive *"+k"* or a negative *"-k"* curvature is selected.

Hence, for this section **Non-Euclidean metrics** must be used, as the ones we know *don't behave* the way we are used to:

Therefore, based on the **theory** for a *3D surface* of a *4D sphere*:

*R² = x² + y² + z² + w²*

Where:

*R² = r² + w²  ∧  r² = x² + y² + z²*

And:

*sin( X ) = sin( D / R )  ∧  D = R · arcsin( r / R )*

**"D"** is known as the *radial distance* and is what will be used as a proxy for the distance to the hyperplane, as this is the *arc* formed by the *projection* of the eigenvector on the *curved space*. Similarly, as *"r"* is unknown for a *5D sphere* -five classes in the current problem- the *modulus r* from the polar form *r∠θ* will be used as an *estimate* of "r".

The visual representation of how the flat space is curved as a function of X can be seen below:
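A minimal sketch of that computation, assuming R is fixed to 1 and the modulus of each (possibly complex) eigenvector component stands in for "r"; the helper name and the clipping step are my assumptions, not necessarily the article's exact implementation:

```python
import numpy as np

def radial_distances(eigenvectors: np.ndarray, R: float = 1.0) -> np.ndarray:
    """Map each eigenvector component to an arc length D = R * arcsin(r / R).

    The modulus r of each component (as in the polar form r-angle-theta)
    is used as the estimate of the unknown "r" of the 5D sphere.
    """
    r = np.abs(eigenvectors)        # modulus handles complex components
    r = np.clip(r / R, 0.0, 1.0)    # keep arcsin's argument in [0, 1]
    return R * np.arcsin(r)

# Toy eigenvector matrix with one complex component.
V = np.array([[0.5 + 0.5j, 0.2],
              [0.1,        0.9]])
D = radial_distances(V)
print(D.shape)   # (2, 2)
```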

An example of the "D" matrix is as follows:

Notice that both "D" matrices are *very similar*, and this holds *true* for *all other ETFs*. What does it mean? No matter the ETF, the *probability* of each *class* will always end up at the *same radial distance* from the curved surface; thus, at every *state* it can be known in advance what the selected class will be for a *determined period*.

The graph below helps to identify how the eigenvector components -classes- have been transformed to a surface with positive curvature:

Notice that the vertical and horizontal axes are not both *straight* anymore. Why? To access the transformation graphs for all ETFs, click __here__. On the other hand, now that the radial distances are known, they are stacked into one single array for running the machine learning model:

And the classification report and confusion matrix are:
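The stacking and reporting step might look like the sketch below; the classifier choice, the labels `y`, and the random placeholder features are mine -the article's actual model and data live in its source code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder features: one row of radial distances per instance,
# five classes -> five distances. Real values come from the "D" matrices.
X = rng.random((200, 5))
y = rng.integers(0, 5, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

print(classification_report(y_te, pred, zero_division=0))
print(confusion_matrix(y_te, pred))
```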

What the results suggest is that a positive curvature may not be the best fit for the data; let's compare with what flat spaces recorded:

As can be seen, a **positive curvature** assumption *outperforms* both the *Euclidean* and *Manhattan* distance approaches. The results with negative curvature can be explored by using the source code. Download it fully here: