Engineer's Book

————— PHYSICS – SPACE – ALGORITHMICS —————

Maximum Entropy Method – 1 – Notions

Written By: Jean-Paul Cipria – May 28, 2016

How can we use the Maximum Entropy Method (MEM) ?

Skeleton and DNA – Picture Enhancements on MATLAB


Created : 2016-05-28 20:13:46 – Modified : 2017-12-29 13:52:48.

Very Difficult ! PhD Level.


Done … A coffee ?

How do we display the « right » information from a mathematical « point of view » ? Are we sure that the « projection » onto a particular « space », « space-time », « frequency-space », etc. is the one with the maximum information « contained » in a set of signals ?

Picture enhancements on Matlab

Nowadays it is difficult to find a bad X-ray photograph on the internet or from a medical instrument, but we found one, « Le corps entier » (« Whole human body » in English), with bad intensity and contrast. Modern measurement devices embed digital signal-processing algorithms.

So in this Matlab example we process a « Skeleton X-ray » picture to show how to enhance an X-ray photograph. After the Matlab digital signal operations we can see the « skin shape » around the bones. The other example is the DNA diffraction map : we reduce the intensity dynamics to show particular diffraction spots directly on the map.
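The article's processing is done in Matlab ; as a hedged Python sketch of the same idea (the function name, the percentile values and the synthetic image are illustrative assumptions, not the article's actual code), a percentile-based contrast stretch spreads a squeezed intensity range over the full gray scale :

```python
import numpy as np

def stretch_contrast(img, low_pct=1, high_pct=99):
    """Linear contrast stretch: map the [low, high] percentile
    range of intensities onto the full [0, 255] gray scale."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(float) - lo) / (hi - lo)   # normalise to [0, 1]
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

# A synthetic low-contrast "X-ray": values squeezed into [100, 140].
rng = np.random.default_rng(0)
xray = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)
enhanced = stretch_contrast(xray)   # now spans the full 0..255 range
```

The percentile clipping is what reveals faint structures such as the « skin shape » : a few extreme pixels no longer dominate the gray scale.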

Skeleton and DNA – Picture Enhancements on MATLAB

Is there new information in the pictures ?

There is no new information in the skeleton pictures : the « set of information » is simply not correctly displayed by the mathematical representation used for the first picture. That is, we have to choose the « right transformation » so that the second picture carries the maximum set of information interpretable by the physicist, the engineer or the physician.

Old DNA Diffraction Map Example

Imagine the « real » DNA geometrical structure described in the figure 4 drawing of the [CIPRIA-2016] document, page 7/71. We « light » this molecule with an X-ray electromagnetic wave. An interferometer plots the map of illustration 6 : a diffraction map. Then what is the correct DNA pattern, shape or geometrical structure ? In the 1960s we could not compute this.

We « infer » the correct solution from a set of matching maps. For example we find in illustration 5 that the diffraction of a metallic spring matches the real DNA diffraction of illustration 6. But it is a long and complex method, because we need a big set of diffraction maps to compare.

Picture Entropy Examples

The pictures at the front of this document describe three systems. First we display the pictures in gray scale with Matlab signal-information analysis. Second, we calculate the local entropy of the pictures over a 9×9-pixel area. Third, we display the picture histograms.
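As a hedged illustration of the second step (assumptions: Shannon entropy in bits over a sliding 9×9 window with reflected borders ; Matlab's entropyfilt computes something similar, but this Python sketch is not the article's code) :

```python
import numpy as np

def local_entropy(img, win=9, levels=256):
    """Shannon entropy (bits) of the gray-level histogram inside a
    win x win window centred on each pixel (borders reflected)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            counts = np.bincount(patch.ravel(), minlength=levels)
            p = counts[counts > 0] / patch.size
            out[i, j] = -np.sum(p * np.log2(p))
    return out

# Flat regions give entropy 0; textured regions give higher values.
img = np.zeros((32, 32), dtype=np.uint8)
img[16:, :] = np.arange(32, dtype=np.uint8)  # a textured lower half
ent = local_entropy(img)
```

The resulting map is exactly the kind of « where is the entropy » display discussed below : uniform zones score zero, detailed zones score high.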

Picture Entropy

The global entropy (displayed in red) is calculated from the histogram values $p_i$ with the following formula :

• $Entropy=\sum_i{p_i.log_{2}(\frac{1}{p_i})}$

This operation only shows where the picture entropy lies. We still need to know whether we should use the Fourier transform, the wavelet transform or another space projection to obtain the maximum information from a single picture.

Maximum Entropy Method (MEM)

Is the spin density map the right one ?

This method describes how to find the most probable « vision » we can have of a system. For example, to obtain a picture from a diffraction map of spin distributions, as described by Pascal Roussel from CNRS (ROUSSEL-2009), we need to transform it with Fourier or wavelet integrals. We then obtain a map of the spin density distribution as a picture.

But are we sure this method is the right one to display the picture correctly ? The Fourier transform projects the diffraction space onto a (spatial) frequency space. Is that the right way to extract the maximum information from the diffraction transform ?

Do we use map entropy ?

Maximum Entropy Method – (ROUSSEL-2009)

Yes, we do. We need to calculate the most probable display, the one which contains the maximum information, to obtain the best picture. Then we need to practise the Maximum Entropy Method (MEM).

We explain it as follows : in the left picture the authors calculated the spin density by a Fourier transform, a spatial integral. With this mathematical representation of reality we only have the coefficients of a « cosine », i.e. a spatial-frequency distribution, displayed on the map. When we change the basis-function shape or pattern, for example to wavelets, the coefficients match more quickly from another point of view. Here we see in the right picture that the « stationary » curves of the left Fourier map are no longer displayed between atoms, and that the displayed density gradients are tighter. The spin densities then seem more concentrated around the atoms.

Wavelets carry more information in their « shape » : they « rise » and « sink » not like a simple sine but like an exponential. This explains, in part, why density gradients are better represented in the right picture. Scale effects, small details compared to big ones, are « included » in some wavelets with the same weight as the others. Thus, as with « logarithmic » representations, we get compression of big signals and dilation of small details.
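To make the scale argument concrete, here is a hedged Python sketch (not from the article) comparing a Fourier transform and a hand-rolled Haar wavelet transform on a signal with one sharp detail : the Fourier energy spreads over every coefficient, while the Haar energy stays on a handful of them :

```python
import numpy as np

def haar_step(x):
    """One level of the (orthonormal) Haar transform:
    pairwise averages (approximation) and differences (detail)."""
    x = x.reshape(-1, 2)
    s = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    d = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return s, d

def haar_transform(x):
    """Full multi-level Haar decomposition of a length-2^n signal."""
    coeffs = []
    while len(x) > 1:
        x, d = haar_step(x)
        coeffs.append(d)
    coeffs.append(x)
    return np.concatenate(coeffs[::-1])

# A single sharp detail in an otherwise flat signal.
sig = np.zeros(64)
sig[20] = 1.0

fourier = np.fft.fft(sig)      # energy spread over ALL 64 frequencies
haar = haar_transform(sig)     # energy on only a few coefficients

n_fourier = int(np.sum(np.abs(fourier) > 1e-9))  # 64 non-zero terms
n_haar = int(np.sum(np.abs(haar) > 1e-9))        # log2(64) + 1 = 7 terms
```

This is the « same weight at every scale » property in action : the localised detail costs only one coefficient per scale in the wavelet basis.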

In conclusion, these two points of view serve two physical aims : the left picture shows us atoms in « stationary » states, as we are used to viewing them through spatial frequencies, the famous « k representation ». The right picture matches two other constraints : following the variation of the equipotential lines, and displaying the small details with more accuracy.

Shape variables description ?

I assume that we can apply the same method to a grid of neuronal links. In the brain, all information is coded by schematic links from neuron to neuron via the synapses. Another thing to understand is the brain structure : we have to determine the 3D brain structure to evaluate the global entropy. As with the very simple picture above, we can evaluate the local entropy zone by zone ; in the example it was a 9×9-pixel area. In a 3D object we can localise n molecules, m links and l structural forms in a volume zone and, obviously, the k correct variables to describe the REAL information (and not a gray-scale intensity like the example pictures above).

Discriminant variables ?

Maybe the number of discriminant variables is not $n+m+l+k$ but $l+k$, just as in the Maxwell-Boltzmann method ? As for a gas, the number of molecules and links may reduce to a simple proportional formula (i.e. $n$ in $PV=n.RT$). Do we need to find extensive variables like V and n and intensive variables like P and T ? In that case the physicist assumes that $E_{Kinetic}=E_{Statistic}$, i.e. $=n.K_B.T$ ? We don't think so, or… do we mean to determine the right minimum information $K_{bit}$ and the statistical « temperature » $T_{bit}$ ?

Stationary Mind Principle ?

But now the physics problems are different !

Physicists and biologists don't only need the energy-conservation law but also the stationary-action principle ? I.e. Feynman (1945) and De Broglie (1925) used it in quantum dynamics and constant phase waves, as we saw in another article ? In this case we determine the functional integral A.

• $A=\int_T{L.dt}=C^{te}$ is a constant in a time description.

or can we determine :

• $A=\int_V{L.d\tau}=C^{te}$ is a constant in a space description ?

We assume that this action A is constant, and we then derive the maximum or minimum differential equation from the L(Shape, link, signal) description, or trajectory, just like the Lagrangian in physics ? Then we would understand what we called « quasi-stationary thought » in neuroscience ?
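For reference, the standard result behind this reasoning (classical mechanics, not specific to this article) is that the stationary condition $\delta A=0$ on $A=\int_T{L(q,\dot{q},t).dt}$ gives the Euler-Lagrange equation :

• $\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right)-\frac{\partial L}{\partial q}=0$

A hypothetical $L(Shape,link,signal)$ would have to satisfy the same kind of equation in its own variables.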

If we don't know L(Shape, link, signal), like the Lagrangian in classical or quantum physics, then we ignore the right statistical description of the phenomenon in terms of information quantity !

Is it a good idea to use general variables $(\dot{Shape}, Shape,\tau)$, like $(\dot{q}, q, x)$ in applied mathematical physics ?

With:

• $\dot{Shape}=\frac{dShape}{d\tau}$

Information or Boltzmann entropy ?

The classical formula discovered by Shannon in 1948 is incomplete, or has to be adapted to a geographical description. Maybe we need to change it, for example for the description of the genius mathematician CRAMER (1990) in theoretical statistics ?

• $E=\sum_i{p_i.log_2{(\frac{1}{p_i})}}$ : Shannon.
• $E=k_B.ln{(\Omega)}$ : Boltzmann.
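As a hedged numeric check (a Python sketch, not from the article) that the two formulas agree for $\Omega$ equiprobable microstates : there $p_i=1/\Omega$, so Shannon gives $log_2(\Omega)$ bits while Boltzmann gives $k_B.ln(\Omega)$, the same quantity up to the conversion factor $k_B.ln(2)$ :

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

omega = 8                      # number of equiprobable microstates
p = np.full(omega, 1 / omega)  # uniform distribution

shannon = np.sum(p * np.log2(1 / p))   # bits: log2(8) = 3
boltzmann = K_B * np.log(omega)        # J/K: k_B * ln(8)

# The two entropies differ only by the factor k_B * ln(2):
ratio = boltzmann / shannon
```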

Bayesian Inference

Do 12-month-old babies use a small information set ?

New studies (TENENBAUM-2011) demonstrate that the brain accommodates information and constructs general concepts with the Bayesian inference method. A 12-month-old baby can already discriminate the right model from a small set of information, with some coloured balls in an urn …
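A hedged Python sketch of that urn experiment (the two hypotheses, their probabilities and the draws are illustrative assumptions, not Tenenbaum's actual protocol) : with only five draws, Bayes' rule already concentrates the posterior on the right urn model :

```python
# Two candidate "urn" hypotheses the observer might hold:
#   mostly_red:   P(draw red) = 0.8
#   mostly_white: P(draw red) = 0.2
hypotheses = {"mostly_red": 0.8, "mostly_white": 0.2}
prior = {"mostly_red": 0.5, "mostly_white": 0.5}

# A small set of observations: 4 red balls, 1 white ball.
draws = ["red", "red", "white", "red", "red"]

def posterior(hypotheses, prior, draws):
    """Bayes' rule: P(H|D) is proportional to P(D|H) * P(H)."""
    post = {}
    for h, p_red in hypotheses.items():
        like = 1.0
        for d in draws:                      # independent draws
            like *= p_red if d == "red" else 1 - p_red
        post[h] = like * prior[h]
    total = sum(post.values())               # normalise
    return {h: v / total for h, v in post.items()}

post = posterior(hypotheses, prior, draws)
# Even with only five draws, "mostly_red" dominates the posterior.
```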

Constructing mind concepts systematically ?

Tenenbaum found that young students over 6 years old can infer a hierarchical structure, for example (and other statistical structures !), as a theoretical model in their minds when they have some partial information about animals. See the figure below :

Abstract to Data (TENENBAUM-2011)

Conclusions

We find a relationship between the physicist's Maximum Entropy, where we calculate the most probable maximum-information set, and the biologist's Bayesian inference, where we distinguish the most probable concept matching the information set. This is a remarkable link between the two points of view.

References

Scientifics

• (CIPRIA-2016) : Jean-Paul Cipria – De l’entropie thermodynamique et des signaux à la quantité d’informations des structures complexes – Essai en expérimentations de physique et simulations sous Matlab – 2016
• (GONZALVEZ-2009) : Gonzalvez – Histogram Processing and Function Plotting – p. 93.
• (ROUSSEL-2009) : Pascal Roussel – Méthode de l’Entropie Maximale – 2009. UCCS – Equipe Chimie du Solide CNRS UMR 8181 ENSC Lille – UST Lille
• (TENENBAUM-2011) : Joshua B. Tenenbaum et al. – How to Grow a Mind : Statistics, Structure, and Abstraction – Science 331, 1279 (2011)