Archive for the ‘Information-Theory’ Category

Vigenère Cipher And Breaking it

Friday, September 11th, 2015

It was originally thought to be unbreakable. You use a key such as PASSWORD. Each letter in your message is encoded by writing the key repeatedly above the message:

PASSWORDPASSWORDPASSWORD

THISISATESTOFTHECODE

Each letter is encoded by choosing the row of the Vigenère square given by the key letter above your message letter.  The obvious failing is that after a bit you run out of key and start repeating, so every eighth letter (for an eight-letter key) gets the same shift.  This makes the cipher subject to frequency analysis.  Of course, if your key is a phrase of some length you might be able to avoid that.
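Here is a minimal sketch of the encoding in Python (uppercase A-Z only; the function name and the strings are mine, purely for illustration):

    def vigenere_encrypt(plaintext: str, key: str) -> str:
        out = []
        for i, ch in enumerate(plaintext):
            # The key repeats every len(key) letters -- this is the weakness.
            shift = ord(key[i % len(key)]) - ord('A')
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        return ''.join(out)

    print(vigenere_encrypt("THISISATESTOFTHECODE", "PASSWORD"))  # IHAKEGRW...

Because the shift pattern repeats, collecting every len(key)-th ciphertext letter yields a plain Caesar cipher, which falls to ordinary letter-frequency analysis; this is the heart of the Kasiski attack.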

It is interesting that only about 150 years ago this would have been considered unbreakable.

[Image: Vigenère square]

Research Links

3 Track Magnetic Stripe Card Reader

Wednesday, July 3rd, 2013

Video: Ultraconservation and living fossils – Mysteries of the Human Genome

Tuesday, March 16th, 2010

Genetic code is reused in much the same manner human programmers reuse code.  Once some random string of genetic coding is found useful, there is a process that preserves it from change.  That preserving process is survival.  If it is a random string with no purpose, it is swept from the system by error, and error is death.  Thus if you sequence the human genome you can compare useful portions with those of widely separated animals such as mice and find almost identical code.

Gill Bejerano holds a BSc, summa cum laude, in Mathematics, Physics, and Computer Science, and a PhD in Computer Science from the Hebrew University of Jerusalem, Israel. He is a two-time recipient of the RECOMB best paper by a young scientist award, and a former Eshkol pre-doctoral scholar and HHMI postdoc. As co-discoverer of ultraconserved elements, his research focuses on deciphering the function and evolution of the non-coding regions of the Human Genome. Gill is currently a postdoc with David Haussler at UC Santa Cruz, and in early 2007 he will join Stanford University as an Assistant Professor in the Department of Developmental Biology and the Department of Computer…

Quantum Entanglement of Photons demonstrated in a relatively low-cost setup

Sunday, August 23rd, 2009
  • Louis de Broglie does not get the credit he deserves for original thinking in quantum mechanics – de Broglie thesis paper – he won the Nobel Prize in 1929 for very good reasons, which you will see if you read his thesis paper.
  • I am looking for the single photon counter used in this experimental setup


PowerPoint presentation of the experiment. It uses an SPCM-APD (Single Photon Counting Module – Avalanche Photodiode).

One of the experimenters

The relatively simple setup uses spontaneous parametric down-conversion to split one photon into two entangled photons.  These are then sent to two single-photon detectors.  If you have any of the parts or pieces of this setup for sale, I would be interested in buying.

Memristor-based nano computing promises to revolutionize the way we compute

Saturday, January 24th, 2009


When memory elements become nonvolatile and can be configured as processing elements, computing is going to start using very different approaches.  Behavioral evolution approaches will become dominant.

Professor Chua gives a more mathematical presentation on the memristor.


Derivation of the Normal Gaussian distribution from physical principles

Monday, August 25th, 2008

In many physical systems the question arises: what probability distribution, over the interval from -∞ to +∞, describes a system with a given expected energy E?  Again you will use the maximum entropy principle to determine this.

The constraints are as follows:

  •  \sum_{i=1}^{N} P(x_i) = 1       …. the probabilities must sum to 1
  •  \sum_{i=1}^{N} P(x_i)\,x_i = \mu     …. a given average value, AKA the "mean"
  •  \sum_{i=1}^{N} P(x_i)\,x_i^2 = E     ….. a given "energy" (the second moment; with zero mean this is the variance)

The Lagrangian is formed as follows:

 L = \sum_{i=1}^{N} -P(x_i)\,\log_2 P(x_i) + \lambda_0\Big(1 - \sum_{i=1}^{N} P(x_i)\Big) + \lambda_1\Big(\mu - \sum_{i=1}^{N} P(x_i)\,x_i\Big) + \lambda_2\Big(E - \sum_{i=1}^{N} P(x_i)\,x_i^2\Big)

 \frac{\partial L}{\partial P(x_i)} = -\log_2 P(x_i) - 1 - \lambda_0 - \lambda_1 x_i - \lambda_2 x_i^2 = 0    …. setting equal to zero to find the extremum (the exact constant from differentiating P\log_2 P is absorbed into \lambda_0)
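Solving this stationarity condition for P(x_i) before any symmetry argument makes the functional form explicit (a step worth spelling out; here A and the primed multipliers simply absorb the constants and the change of logarithm base):

    P(x_i) = 2^{-1-\lambda_0-\lambda_1 x_i-\lambda_2 x_i^2} = A\,e^{-\lambda_1' x_i - \lambda_2' x_i^2}

So the maximum entropy distribution must be the exponential of a quadratic; all that remains is fixing the coefficients.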

Now the problem is to solve for the lambda coefficients.  I use a trick: I assume the curve is centered on the Y axis (zero mean, so the \lambda_1 term drops out).  The curve must have the same amount of entropy on the left side of the Y axis as on the right, or entropy will not be maximized.  Because the remaining polynomial is of even degree, the probability curve must be "even", that is, symmetric about the Y axis.  That reduces the previous Lagrangian condition to:

 \frac{\partial L}{\partial P(x_i)} = -\log_2 P(x_i) - \lambda_2 x_i^2 = 0

 \log_2 P(x_i) = -\lambda_2 x_i^2

   P(x_i) = e^{-\lambda_2 x_i^2}          ….. which you will recognize as the Gaussian distribution (switching from base 2 to base e merely rescales \lambda_2)

Solve for the coefficient using the identity for the integral of the normal curve from -∞ to +∞.

Using the identity and setting the total probability to 1:

\int_{-\infty}^{+\infty} e^{-x^2/(2\sigma^2)}\,dx = \sigma\sqrt{2\pi} = 1     yields  \sigma^2 = 1/(2\pi), and since \lambda_2 = 1/(2\sigma^2):   \lambda_2 = \pi

         The resultant distribution is:      P(x) = e^{-\pi x^2}     which is the normal distribution with \sigma^2 = 1/(2\pi).

 Now for the other coefficients, the job is made easier by observing that the distribution can only retain this even-about-the-mean form if the polynomial is of the form (x-\mu)^2: this form can propagate along the X axis without the distribution changing shape.  A Gaussian wave packet cannot change shape because it is already at maximum entropy.  If it changed shape it would decrease entropy, which requires force and increases its energy.  But that would be a change in state, which we are assuming is not happening.
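Here is a quick numeric sanity check in Python (the grid, the Riemann-sum integration, and the uniform comparison density are my choices for illustration, not part of the derivation):

    import numpy as np

    x = np.linspace(-5, 5, 20001)
    dx = x[1] - x[0]
    p = np.exp(-np.pi * x**2)

    print((p * dx).sum())                 # ~1.0 : e^(-pi x^2) is normalized
    var = (p * x**2 * dx).sum()
    print(var, 1 / (2 * np.pi))           # both ~0.159 : variance = 1/(2 pi)

    def entropy_bits(q):
        q = np.clip(q, 1e-300, None)      # avoid log(0)
        return -(q * np.log2(q) * dx).sum()

    # A uniform density on [-a, a] has variance a^2/3; match the variance.
    a = np.sqrt(3 * var)
    u = np.where(np.abs(x) <= a, 1 / (2 * a), 0.0)
    print(entropy_bits(p), entropy_bits(u))   # the Gaussian entropy is larger

Any other density with the same variance (the uniform here is just one example) comes out with lower entropy, which is the maximum entropy principle at work.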


Observations

  • The base state of the quantum harmonic oscillator is Gaussian: it is maximum entropy.
  • A Gaussian wave packet cannot become misshapen because it is already at maximum entropy.  If it changes shape it decreases entropy, which requires force.
  • Is the Gaussian the base state of the wave packet?  Is it possible to have forms of the higher energy states?


The Maximum Entropy Principle – The distribution with the maximum entropy is the distribution nature chooses

Sunday, August 17th, 2008

In a previous article entropy was defined as the expected number of bits in a binary number required to enumerate all the outcomes.  This was expressed as follows:

entropy = H(X) = \sum_{i=1}^{N} \big[ -P(x_i)\,\log_2 P(x_i) \big]

In physics (nature) it is found that the probability distribution that represents a physical process is the one that has the maximum entropy given the constraints on the physical system.   What are constraints?  An example of a probabilistic system is a die with 6 sides.  For now, pretend you do not know that it is equally likely to show any 1 of the 6 faces when you roll it.  Assume only that it is balanced.

In the case of a die the above summation is equivalent to the following sort of computation:

  • Initial assumption: a set of 6 probabilities that sum to 1  … this is a given, as the die has to show at least one of the 6 faces unless it stands on edge Twilight Zone style.  Let's assume P(x_i) = 0.05, 0.05, 0.05, 0.05, 0.05, 0.75  …. you know instinctively this is not correct, but it demonstrates the maximum entropy principle

The total entropy given these probabilities = (0.05)(4.322)(5) + (0.75)(0.415) = 1.0805 + 0.311 = 1.39 bits, where 4.322 = -\log_2 0.05 and 0.415 = -\log_2 0.75.

Let us use our common sense now.  We know there are 6 equally probable states that can roll up, so it's easy to calculate the number of bits required.

  • Bits required = \log_2 6 = 2.585 bits
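Both numbers are easy to verify in Python (a throwaway check; the function name is mine):

    import math

    def entropy_bits(probs):
        # H = sum over outcomes of -P(x) * log2 P(x)
        return sum(-p * math.log2(p) for p in probs if p > 0)

    print(entropy_bits([0.05] * 5 + [0.75]))  # ~1.39 bits
    print(entropy_bits([1/6] * 6))            # ~2.585 bits = log2(6)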

Thus we can see our initial assumption of probabilities yields an entropy number less than we would expect from common sense.   How do we find the maximum entropy possible? 

  • Use the Lagrangian maximization method. 
  • Maximize the entropy expression with the constraint that  

          \sum_{i=1}^{N} P(x_i) = 1       …. the probabilities must sum to 1

The Lagrangian is formed as follows:

     L = \sum_{i=1}^{N} \big[ -P(x_i)\,\log_2 P(x_i) \big] + \lambda\Big(1 - \sum_{i=1}^{N} P(x_i)\Big)

Now, differentiating the Lagrangian and setting the derivative to 0, we can find the maximal entropy probabilities:

     \frac{\partial L}{\partial P(x_i)} = -\log_2 P(x_i) - 1 - \lambda = 0

     -\log_2 P(x_i) = 1 + \lambda     solving for P(x_i) yields

     P(x_i) = 2^{-(1+\lambda)}   All the P(x_i) equal the same constant, and with the probabilities summing to 1, P(x_i) = 1/6 since N = 6

While this is a lot of work to derive the obvious, there is a purpose: in more complicated situations, where the probability distribution is not obvious, this method works.  For example, in the case of the black body emission curve of Planck: given just the quantization of energy levels, you can derive the black body curve!!  This principle is woven all through nature.  Learn it because it will serve you well. 
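As a sketch of that claim (my notation; a full Planck derivation also needs the density of modes): maximizing entropy over quantized levels E_n = n\epsilon with the mean energy fixed gives

    p_n \propto e^{-\beta n \epsilon} \quad\Rightarrow\quad \bar{E} = \frac{\epsilon}{e^{\beta\epsilon} - 1}

which is exactly the Planck form for the mean energy per mode.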

Some interesting Notes to myself — myself? I meant me.

What is Entropy?

Thursday, August 14th, 2008

There are many mathematical definitions of entropy.  The mental picture I find most useful is to imagine the following:  

  • You are put in a room and your job is to label everything in the room with a Sharpie indelible marker and masking tape.  
  • You are asked to label everything in the room using the binary numbering system.  This binary number will be that particular object's ID.

As you go about this you may want to number the objects you most commonly refer to with the shorter binary numbers.  That way, since you mention "FORK" much more often than "NUMBER 6 SCREW", you will end up having to write fewer digits.
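This "frequent objects get shorter labels" idea is exactly what Huffman coding formalizes. A minimal sketch in Python (the object names and counts are made up for illustration):

    import heapq

    def huffman_code_lengths(freqs):
        # Repeatedly merge the two least frequent groups; each merge adds
        # one bit to the label of everything inside the merged groups.
        heap = [(f, i, [name]) for i, (name, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        lengths = {name: 0 for name in freqs}
        counter = len(heap)  # tie-breaker so the lists are never compared
        while len(heap) > 1:
            fa, _, a = heapq.heappop(heap)
            fb, _, b = heapq.heappop(heap)
            for name in a + b:
                lengths[name] += 1
            heapq.heappush(heap, (fa + fb, counter, a + b))
            counter += 1
        return lengths

    # FORK ends up with a 1-bit label; the rarely used names get 2 bits.
    print(huffman_code_lengths({"FORK": 50, "SPOON": 30, "NUMBER 6 SCREW": 2}))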

The measure of entropy in this room is the number of binary digits required to number all the objects.  This is entropy.  The formula for that statement is:

             Entropy \approx \log_2 N    where N is the number of different types of objects in the room

Now, in a probabilistic situation with outcomes x_1, x_2, …, x_N, with P(x_i) = probability of x_i:

            entropy = H(X) = \sum_{i=1}^{N} \big[ -P(x_i)\,\log_2 P(x_i) \big]      This formula calculates the expectation of the number of digits required to enumerate the outcomes.

Now let us apply this to a concrete situation in the form of the good old fashioned coin flip, with

  • P(heads) = 1/2 
  • P(tails) = 1/2

Entropy = -(1/2)(-1) - (1/2)(-1) = 1 bit, since \log_2(1/2) = -1

Thus in order to enumerate all the states of a coin you require 1 bit: you just call heads bit = 1 and tails bit = 0, and 1 bit is all you need.

If you have a strange coin that always comes up heads, that is to say P(heads) = 1, then:

Entropy = -(1)(0) = 0 bits, since \log_2 1 = 0

Thus for your weirdo coin that only flips out heads you need no bits to enumerate its states.  There is no entropy.   You always get heads sucker!  Or heads I win tails you lose!
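Both coins take one line each to check in Python (the same throwaway entropy function as in the die example above):

    import math

    def entropy_bits(probs):
        # H = sum of -P(x) * log2 P(x); outcomes with P = 0 contribute nothing
        return sum(-p * math.log2(p) for p in probs if p > 0)

    print(entropy_bits([0.5, 0.5]))  # fair coin  -> 1.0 bit
    print(entropy_bits([1.0]))       # heads-only -> 0.0 bits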


Use of Maximum Entropy to explain the form of Energy States of an Electron in a Potential Well

Friday, May 23rd, 2008

The base state of an electron in an infinite potential well has the most "space" for the electron state.  Thus it has the maximum entropy.  Take that same state and imagine pinching the electron's existence to nil in the middle of the trough.  Now you have state 2.  The electron now exists in a lower-entropy state and, guess what, it now contains exploitable energy.  This is like a compressed spring: the electron can decompress and exert force / expend energy.  For example, in an interaction with another atom, possibly a recoil could occur.  In a crystal lattice an electron can transfer its energy to the atom next door and in effect yield conduction.  All these are preliminary suppositions subject to more scrutiny.

[Image: electron in an infinite potential well]

As mentioned before, since the electron exists in this potential well in the form of free fall, it cannot have any acceleration.  Thus its distribution must thoroughly avoid the edges of the well, where it would indeed experience accelerations by bouncing and recoiling off of the walls.