Saturday, September 24, 2011

Subatomic particle traveling faster than the speed of light



Jack Dikian
September 2011

Today I awoke to the shocking news that CERN scientists have measured neutrinos traveling faster than the speed of light, although they are treading carefully around the claim.

Those who do this type of work will know that the neutrino ("little neutral one" in Italian) was first postulated by Wolfgang Pauli in order to preserve the conservation of energy, momentum, and angular momentum in beta decay, in which a neutron decays into a proton, an electron and an antineutrino.

Pauli theorised that an undetected particle was carrying away the observed difference between the energy, momentum, and angular momentum of the initial and final particles. The neutrino, as we now understand it, is an electrically neutral, weakly interacting elementary subatomic particle with a small but non-zero mass.

Now, it had long been assumed that neutrinos travel at, or vanishingly close to, the speed of light. Relativity requires massless particles to travel at the speed of light. But a particle traveling faster than the speed of light undermines Einstein's 1905 special theory of relativity, one of the most important pillars of modern physics. And of course the expansion of the fabric of space-time doesn't count here, nor do "apparent" or "effective" faster-than-light effects in unusually distorted regions of space-time.

The early reaction from most scientists has been disbelief. For example, University of Maryland physics department chairman Drew Baden called it "a flying carpet," something too fantastic to be believable. Indeed, CERN is asking others to independently verify the measurements before claiming an actual discovery. They are inviting the broader physics community to look at what they've done and scrutinize it in great detail, and ideally for someone elsewhere in the world to repeat the measurements.

Fermilab (the other big accelerator laboratory, near Chicago) announced similar faster-than-light results in 2007, but those came with a margin of error that undercut their scientific significance. Fermilab would, however, be capable of re-running the tests, according to Stavros Katsanevas, the deputy director of France's National Institute for Nuclear and Particle Physics Research. The institute collaborated with Italy's Gran Sasso National Laboratory on the experiment with CERN.
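To get a feel for the numbers involved, here is a small back-of-the-envelope sketch in Python. It assumes the widely reported CERN to Gran Sasso baseline of roughly 730 km and the roughly 60 nanosecond early arrival quoted in press coverage; both figures are used purely for illustration.

# Rough orders of magnitude for the OPERA-style time-of-flight claim.
# The 730 km baseline and 60 ns early arrival are taken from press reports
# and used here for illustration only.
C = 299_792_458.0            # speed of light, m/s
BASELINE_M = 730e3           # approximate CERN to Gran Sasso distance, m
EARLY_NS = 60.0              # reported early arrival, ns

light_time_s = BASELINE_M / C                          # time light needs for the trip
fractional_excess = (EARLY_NS * 1e-9) / light_time_s   # roughly (v - c) / c

print(f"light travel time: {light_time_s * 1e3:.3f} ms")
print(f"implied (v - c)/c: {fractional_excess:.2e}")

The implied excess comes out at a few parts in a hundred thousand, which is exactly why nailing down every source of timing error matters so much.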

My immediate thought after reading the CERN press release this morning was special relativity. The second was EPR and entanglement, what Einstein called spooky action at a distance, hidden variables and all. On reflection, though, while entanglement seems to imply instantaneous communication, no actual information is transmitted when the entangled particles affect each other, so entanglement doesn't imply faster-than-light communication. We can't choose the state a particle collapses into, even though it doesn't 'decide' on its state until it is observed.

Monday, September 5, 2011

The Remarkable Theorem


Jack Dikian
September 2011

Ever since I read Flatland: A Romance of Many Dimensions by the English schoolmaster Edwin Abbott, my mind has turned to the idea of higher dimensions, and to whether we humans have the capacity to visualize the fourth dimension. I don't mean using time as a fourth dimension, vis-à-vis special relativity – rather, trying to imagine the existence of a 4-dimensional being looking back at us and our world.

Abbott wrote his 1884 satirical novella pseudonymously as "A Square", using the fictional two-dimensional world of Flatland to offer pointed observations on the social hierarchy of Victorian culture. A 3-dimensional being, of course, could see everything in that world, and all at once.

In the same way, a 4-dimensional being looking back at us could look inside our stomach and remove, if it wanted to, the lunch we just had without cutting through our skin, just as we can remove a dot from inside a circle (in Flatland) by lifting it into the third dimension, perpendicular to the circle, without breaking the circle.

Then, years later, I learned about Carl Friedrich Gauss and his Theorema Egregium ("remarkable theorem" in Latin). How, for example, can an ant (say, a 2-dimensional being) stuck on the surface of our curved world, unable to stand back and see the curvature of our planet, ever realize that the surface is curved?

The theorem says that the curvature of a surface can be determined entirely by measuring distances along paths on the surface. That is, curvature does not depend on how the surface might be embedded in 3-dimensional space. An absolutely amazing insight! This, however, only applies to curved surfaces, which are 2-dimensional.
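To make the idea concrete, here is a toy sketch in Python of what an intrinsic measurement looks like. A surface-bound observer (Gauss's ant) measures the circumference of smaller and smaller geodesic circles and recovers the curvature from the deficit between 2πr and the measured circumference, using the classical limit formula K = lim 3(2πr − C(r))/(πr³). The sphere's radius appears only to simulate the ant's measurements; the ant itself never uses it.

# Intrinsic curvature, in the spirit of Gauss's Theorema Egregium.
# An ant on a sphere of (unknown to it) radius R measures the circumference
# C(r) of small geodesic circles and recovers K = 1/R**2 without ever
# leaving the surface, via K = lim_{r->0} 3*(2*pi*r - C(r)) / (pi * r**3).
import math

R = 6371.0  # the "planet" radius in km; only used to simulate the measurements

def geodesic_circle_circumference(r, R=R):
    # On a sphere, a geodesic circle of intrinsic radius r has
    # circumference 2*pi*R*sin(r/R); the ant gets this by walking and measuring.
    return 2 * math.pi * R * math.sin(r / R)

for r in (500.0, 100.0, 10.0):          # smaller and smaller circles, in km
    C = geodesic_circle_circumference(r)
    K_est = 3 * (2 * math.pi * r - C) / (math.pi * r**3)
    print(f"r = {r:6.1f} km  estimated K = {K_est:.6e}  (true 1/R^2 = {1 / R**2:.6e})")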

It would take a brilliant student of Gauss, Bernhard Riemann, at the age of just 26, to extend Gauss's theory to higher-dimensional spaces called manifolds, in a way that still allows distances and angles to be measured and the notion of curvature to be defined intrinsically, without reference to any embedding in a higher-dimensional space. In other words, he generalised Gauss's work to describe the curvature of space in any dimension. Again, how do we non-mathematicians visualize a curved 3-dimensional space? What encapsulates it? The genius of Riemann was to show that we don't need to step into the fourth dimension to tell whether space is curved. We can do it from the inside.

Albert Einstein, as we know, came along and used the theory of Riemannian manifolds to develop his General Theory of Relativity. In particular, his equations for gravitation are restrictions on the curvature of space. He took the mathematics of Gauss and Riemann and used it to develop a revolutionary picture of our physical world, showing that we live in the curved worlds of Gauss and Riemann.

So we finally arrive at the view that gravity is not a pull downwards; rather, a falling object follows the simplest path through bent space. Of course, Einstein didn't stop there, and showed that it is the presence of mass that bends space.

Friday, August 19, 2011

2001: A Space Odyssey & An Explanation Of What Happens In A Vacuum


Jack Dikian
August 2011

I first watched Stanley Kubrick's 2001: A Space Odyssey in the mid-90s and I remember being struck by the power of its visual imagery. In the intervening years I probably saw the film another two or three times. Yesterday evening I watched it again, this time thinking about how and why this film appealed so much to one of the greatest mathematicians of the twentieth century, Paul Adrien Maurice Dirac.

Paul Adrien Maurice Dirac held the Lucasian Chair of Mathematics at the University of Cambridge, and shared the 1933 Nobel Prize in Physics with Erwin Schrödinger "for the discovery of new productive forms of atomic theory."

Reading about Dirac, one learns that he hardly spoke unnecessarily. In fact, people who knew Dirac well coined the unit "a dirac": amusingly, the smallest number of words one can speak in an hour and still be taking part in a conversation. Interestingly, it takes almost 25 minutes before a word is spoken in 2001.

In the late 1920s Dirac unified special relativity with Planck's quantum theory, unravelled the vacuum and explained what is really taking place in empty space. This is regarded by many as one of the greatest achievements in mathematics and physics of the 20th century, leading to a new picture of [nothing].
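For the record, the equation Dirac arrived at, written in modern notation and in units where ħ = c = 1, is

(iγ^μ ∂_μ − m)ψ = 0

where the γ^μ are 4×4 matrices and ψ is a four-component spinor. It is this equation, marrying special relativity to quantum mechanics, that predicted antimatter and recast the vacuum as something far richer than empty space.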

2001’s appeal for Dirac helps give us an insight into how he managed this achievement. As Kubrick himself said, 2001 demonstrates that a really good film can be made without many words, through the power of visual imagery. Dirac had a very strong sense of what his equations were telling him visually.


Wednesday, August 10, 2011

When Black Holes Waltz




Jack Dikian
August 2011

Looking at galaxies from when the universe was half its current age, approximately 30 out of 100 not only host a supermassive black hole (a million to a billion times more massive than our Sun) but host two black holes orbiting around each other – dancing the waltz.

What’s interesting is that almost anywhere we look we find supermassive black holes waltzing at the centres of their respective galaxies. As these black holes orbit around each other, they distort the very fabric of spacetime and send ripples across the universe. This wobble might be heard here on Earth even when we can’t see the black holes themselves.

The orbits of paired black holes look nothing like the typical elliptical paths we see in our local neighbourhood of the universe. Instead, supermassive paired black holes follow a path that resembles more a three-leaf clover.

The shocking thing here is that the way some of the heaviest objects in the universe orbit around each other closely resembles the way the lightest objects in the universe, i.e. protons and electrons, orbit around each other.

The smallest objects in the universe behave the same way as the largest.

Thursday, June 30, 2011

Shattering Naive Expectations Of Humans

Jack Dikian
June 2011


Recently I came across a review that I had written of a book that had been a bane of my life. I’m talking about Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter. I have reposted that review elsewhere amongst my blog interests.

Reading it again, I was reminded of Gödel’s extraordinary theorems. Accepted by mathematicians, they have not only modernised mathematics, showing that mathematical truth is more than logic and computation, but also changed how we might interpret the universe itself. Someone once asked, “Does Gödel's work imply that someone or something transcends the universe?”

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an "effective procedure" is capable of proving all facts about the natural numbers. For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem shows that if such a system is also capable of proving certain basic facts about the natural numbers, then one particular arithmetic truth the system cannot prove is the consistency of the system itself.

For over 2000 years Euclidean geometry passed the test of time. Euclid's method consists in assuming a small set of intuitively appealing axioms and deducing many other theorems from these. An example is the idea that you can add one to any number and get a bigger number. If Euclid's work had a weak link, it was his fifth axiom, the axiom about parallel lines. In its familiar form, it says that given a straight line, you can draw only one other straight line parallel to it through a given point outside it.

However, in the 19th century Riemann and others created new geometries in which there could be more than one parallel line through the outside point, or no parallel lines at all. It turned out that Riemann's geometry is better at describing the curvature of space than Euclid's, and in the early 1900s Einstein incorporated Riemann's ideas into relativity theory.

Now not only had Riemann created a system of geometry which stood commonsense notions on their head, but the philosopher-mathematician Bertrand Russell had bumped into a serious paradox in set theory. Russell did not feel that this paradox was insurmountable and believed he could create a single self-consistent, self-contained mathematical system. He and Whitehead produced the three-volume Principia Mathematica. However, even before it was complete, Russell's expectations were dashed.

Gödel’s paper was "On Formally Undecidable Propositions of Principia Mathematica and Related Systems." In it he showed that a statement in such a system could be made to refer to itself in such a way that it asserted its own unprovability, and in doing so he shattered naive expectations that human thinking could be reduced to algorithms or that our thoughts are a mechanical process.
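The engine behind that self-reference is Gödel numbering: formulas are coded as numbers, so that statements about numbers can, indirectly, be statements about formulas. Here is a toy version in Python of one standard textbook encoding (not Gödel's exact scheme); the sympy library is assumed for primes and factorisation.

# Toy Godel numbering: pack a sequence of symbol codes into one integer by
# using successive primes as positions, code = p1^s1 * p2^s2 * ...
# Unique factorisation guarantees the sequence can be recovered, which is
# what lets arithmetic statements talk about (numbers standing for) statements.
from sympy import prime, factorint

def godel_number(symbol_codes):
    n = 1
    for i, s in enumerate(symbol_codes, start=1):
        n *= prime(i) ** s           # the i-th prime carries the i-th symbol
    return n

def decode(n):
    # assumes every symbol code is at least 1
    factors = factorint(n)           # {prime: exponent}
    return [factors[p] for p in sorted(factors)]

codes = [3, 1, 4, 1, 5]              # stand-ins for the symbols of some formula
g = godel_number(codes)
print(g, decode(g) == codes)         # the round trip is lossless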

Christians, for example, see man as a spiritual being with understanding that springs not just from the physical organ of the mind but also from soul and spirit.

Tuesday, June 28, 2011

$6 Million still on offer...


Jack Dikian
June 2011

In order to celebrate mathematics in the new millennium, The Clay Mathematics Institute of Cambridge, Massachusetts established seven Prize Problems. As of June 2011, six of the problems remain unsolved. A correct solution to any of the problems results in a US$1,000,000 prize being awarded by the institute. Grigori Perelman solved the Poincaré conjecture and was awarded the first Clay Millennium Prize in 2010 – he turned down the prize. The seven Millennium Prize Problems are:

1. Birch and Swinnerton-Dyer Conjecture

This conjecture relates the number of points on an elliptic curve mod p to the rank of the group of rational points. Elliptic curves, defined by cubic equations in two variables, are fundamental mathematical objects that arise in many areas.
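A sense of what "the number of points on an elliptic curve mod p" means can be had with a few lines of Python. The curve and the primes below are arbitrary illustrative choices; the conjecture is about how these counts, taken across all primes, encode the rank of the group of rational points.

# Brute-force count of points on y^2 = x^3 + a*x + b over the integers mod p,
# plus the point at infinity.  Curve and primes are illustrative choices only.
def count_points(a, b, p):
    count = 1                                   # the point at infinity
    squares = {}
    for y in range(p):
        squares.setdefault(y * y % p, []).append(y)
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        count += len(squares.get(rhs, []))      # y-values with y^2 = rhs (mod p)
    return count

a, b = -1, 1                                    # the curve y^2 = x^3 - x + 1
for p in (5, 7, 11, 13, 17, 19):
    print(p, count_points(a, b, p))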

2. Hodge Conjecture

Over the years mathematicians discovered powerful ways to investigate the shapes of complicated objects. The idea is to ask to what extent we can approximate the shape of a given object by gluing together simple geometric building blocks of increasing dimension. The Hodge conjecture asserts that for particularly nice types of spaces called projective algebraic varieties, the pieces called Hodge cycles are actually (rational linear) combinations of geometric pieces called algebraic cycles.

3. Navier-Stokes Equation

Mathematicians have long believed that an explanation for the way turbulent air and water currents follow moving planes and boats can be found through an understanding of solutions to the Navier-Stokes equations. The challenge here is to make substantial progress toward a mathematical theory which will unlock the secrets hidden in the Navier-Stokes equations.
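For reference, the incompressible Navier-Stokes equations in question are (with u the velocity field, p the pressure, ρ the density, ν the viscosity and f the external force):

∂u/∂t + (u · ∇)u = −(1/ρ)∇p + ν∇²u + f,   together with   ∇ · u = 0.

The Clay problem asks, roughly, for a proof that smooth solutions always exist in three dimensions, or for a counterexample.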

4. P vs NP Problem

One of my interest areas - Suppose that you are organizing housing accommodations for a group of four hundred university students. Space is limited and only one hundred of the students will receive places in the dormitory. To complicate matters, the Dean has provided you with a list of pairs of incompatible students, and requested that no pair from this list appear in your final choice.

This is an example of what computer scientists call an NP-problem: it is easy to check whether a given choice of one hundred students proposed by a co-worker is satisfactory, but the task of generating such a list from scratch seems to be so hard as to be completely impractical. Indeed, the total number of ways of choosing one hundred students from the four hundred applicants is greater than the number of atoms in the known universe.

Thus no future civilization could ever hope to build a supercomputer capable of solving the problem by brute force; that is, by checking every possible combination of 100 students. However, this apparent difficulty may only reflect the lack of ingenuity of your programmer. In fact, one of the outstanding problems in computer science is determining whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve by any direct procedure.
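The asymmetry between checking and finding is easy to demonstrate. The sketch below (Python, with made-up illustrative data) verifies a proposed list of one hundred students against the Dean's incompatible pairs in a handful of set lookups, and also confirms that the brute-force search space really does dwarf the number of atoms in the known universe.

# "Easy to check, hard to find" in the dormitory example.
# Verifying a proposal takes time proportional to the number of incompatible
# pairs; naive search would have to look at C(400, 100) candidate lists.
from math import comb

def is_acceptable(chosen, incompatible_pairs):
    chosen = set(chosen)
    return all(not (a in chosen and b in chosen) for a, b in incompatible_pairs)

# Illustrative data: 400 students, an arbitrary list of incompatible pairs.
incompatible = [(1, 2), (10, 250), (42, 399)]
proposal = list(range(100, 200))                # some co-worker's suggested 100 students

print(is_acceptable(proposal, incompatible))    # fast to verify
print(comb(400, 100) > 10**80)                  # search space vs. atoms in the universe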

5. Poincaré Conjecture (Solved)

If we stretch a rubber band around the surface of an apple, then we can shrink it down to a point by moving it slowly, without tearing it and without allowing it to leave the surface. On the other hand, if we imagine that the same rubber band has somehow been stretched in the appropriate direction around a doughnut, then there is no way of shrinking it to a point without breaking either the rubber band or the doughnut. We say the surface of the apple is "simply connected," but that the surface of the doughnut is not. Poincaré, almost a hundred years ago, knew that a two dimensional sphere is essentially characterized by this property of simple connectivity, and asked the corresponding question for the three dimensional sphere (the set of points in four dimensional space at unit distance from the origin).

6. Riemann Hypothesis

Some numbers have the special property that they cannot be expressed as the product of two smaller numbers, e.g., 2, 3, 5, 7, etc. Such numbers are called prime numbers, and they play an important role, both in pure mathematics and its applications. The distribution of such prime numbers among all natural numbers does not follow any regular pattern, however the German mathematician G.F.B. Riemann observed that the frequency of prime numbers is very closely related to the behavior of an elaborate function:

ζ(s) = 1 + 1/2^s + 1/3^s + 1/4^s + ...

called the Riemann Zeta function. The Riemann hypothesis asserts that all interesting solutions of the equation

ζ(s) = 0

lie on a certain vertical straight line. This has been checked for the first 1,500,000,000 solutions. A proof that it is true for every interesting solution would shed light on many of the mysteries surrounding the distribution of prime numbers.
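Those first solutions can be inspected numerically. The short sketch below uses the mpmath library (an assumed dependency) to print the first few non-trivial zeros: each sits on the critical line with real part 1/2, which is precisely what the hypothesis asserts for all of them.

# Numerical peek at the zeta function using mpmath (an assumed dependency).
from mpmath import zeta, zetazero

for n in range(1, 4):
    rho = zetazero(n)                 # the n-th non-trivial zero, as a complex number
    print(n, rho, abs(zeta(rho)))     # real part 0.5, and zeta really is ~0 there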

7. Yang-Mills and Mass Gap

The laws of quantum physics stand to the world of elementary particles in the way that Newton's laws of classical mechanics stand to the macroscopic world. Almost half a century ago, Yang and Mills introduced a remarkable new framework to describe elementary particles using structures that also occur in geometry.

Quantum Yang-Mills theory is now the foundation of most of elementary particle theory, and its predictions have been tested at many experimental laboratories, but its mathematical foundation is still unclear. The successful use of Yang-Mills theory to describe the strong interactions of elementary particles depends on a subtle quantum mechanical property called the "mass gap:" the quantum particles have positive masses, even though the classical waves travel at the speed of light.

This property has been discovered by physicists from experiment and confirmed by computer simulations, but it still has not been understood from a theoretical point of view. Progress in establishing the existence of the Yang-Mills theory and a mass gap will require the introduction of fundamental new ideas both in physics and in mathematics.


Tuesday, June 21, 2011

Alice, Bob and Eve - Quantum Cryptography

Jack Dikian
June 2011

Quantum cryptography is the use of quantum systems to perform cryptographic tasks. A key application of quantum cryptography is quantum key distribution (QKD). QKD describes the process of using quantum communication to establish a shared key between two parties, Alice and Bob, without a third party, Eve, learning anything about that key, even if Eve can eavesdrop on all communication between Alice and Bob.

This is because measuring a quantum system in general disturbs the system. A third party trying to eavesdrop on the key must in some way measure it, thus introducing detectable anomalies.

In contrast, traditional cryptography often relies on the assumed difficulty of certain mathematical problems, an assumption that may be vulnerable to attack by adversaries armed with increasingly powerful computers.

But even though QKD is theoretically secure, researchers have long been aware that loopholes may arise when QKD is used in real-world applications.

Now a team of researchers at the Centre for Quantum Technologies at the National University of Singapore, the Norwegian University of Science and Technology and the University Graduate Center in Norway has created a perfect eavesdropper for QKD, allowing them to obtain an entire shared secret key without alerting either of the legitimate parties that there had been a security breach.
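To see why an eavesdropper is normally expected to give herself away, here is a toy, entirely classical simulation of a BB84-style exchange in Python. It is a sketch of the textbook idea only; it says nothing about the hardware loopholes the real attack exploited, which is exactly the point of that result.

# Toy classical simulation of a BB84-style key exchange with an
# intercept-and-resend Eve.  Illustrative sketch only.
import random

N = 2000

def random_bits(n):  return [random.randint(0, 1) for _ in range(n)]
def random_bases(n): return [random.choice("+x") for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    # Same basis: the bit comes through intact.  Wrong basis: the outcome is random.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

alice_bits, alice_bases = random_bits(N), random_bases(N)
eve_bases, bob_bases = random_bases(N), random_bases(N)

# Eve intercepts every photon, measures it, and resends what she saw.
eve_bits = [measure(b, ab, eb) for b, ab, eb in zip(alice_bits, alice_bases, eve_bases)]
bob_bits = [measure(b, eb, bb) for b, eb, bb in zip(eve_bits, eve_bases, bob_bases)]

# Alice and Bob keep only the positions where their bases matched...
sifted = [(a, b) for a, ab, b, bb in zip(alice_bits, alice_bases, bob_bits, bob_bases)
          if ab == bb]
# ...and compare a sample of the sifted key.
errors = sum(a != b for a, b in sifted) / len(sifted)
print(f"sifted key length: {len(sifted)}, observed error rate: {errors:.2%}")

With Eve intercepting and resending every bit, roughly a quarter of the sifted key disagrees, which is the anomaly the legitimate parties would normally detect.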

Monday, May 23, 2011

The Role of Fractal Geometry In Evolution


Jack Dikian
May 2011

A fractal is "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole," a property called self-similarity (wikipedia).

James Gleick writes that a decade after Mandelbrot published his physiological speculations, some theoretical biologists began to find fractal organization controlling structures all through the body. The standard exponential description of bronchial branching proved to be quite wrong; a fractal description turned out to fit the data.

In the view of the Darwinists, the endlessly exquisite designs of nature are the result of an interplay of two factors - random genetic mutation and Natural Selection. Genetic mutation proposes, Natural Selection disposes. Some maintain that mutations usually detract from the viability of an organism and that Darwin's premise that genetic mutations are random is wrong. That is, that nature selects the strongest does not hold true.

The question of "design" in nature was one that troubled Charles Darwin. In the year following the publication of the Origin, he wrote, "I am conscious that I am in an utterly hopeless muddle. I cannot think that the world, as we see it, is the result of chance; and yet I cannot look at each separate thing as the result of design."

Design according to [Darwin] is Random Mutation + Natural Selection + Time. An awful lot of time. Regardless of the species, the changes over time have to come from changes in the Deoxyribonucleic acid (DNA). The entire plan for any organism is contained in its DNA, a molecule with 4 nucleobases. A strand of DNA can have anywhere from 500 thousand bases (in the case of the smallest known parasite) to about 3 billion for man and large animals.

So Darwinism says that random mutations, or copying errors in the DNA, produce a modified organism, and that natural selection weeds out the less adaptable ones. What [remains] are new innovations in an organism's design.

Of course, one of the difficulties with Darwinism is that it takes millions of years and many billions of organisms to produce significant change, making it difficult to demonstrate Darwinism empirically within the short lifetime of a human being.

In a totally different scientific discipline, mathematics, Benoit Mandelbrot‘s interest in irregular and seemingly chaotic patterns led him to develop fractal geometry in the mid-1970s. While a number of geometric systems have been developed since Euclidean geometry (c. 300 BC), fractals explore infinitely complex shapes and self-similarity across scale, featuring recursion – patterns seen everywhere in nature, economics, and cell biology (and here is the rub).

An example of this is that a photograph of a section of coastline taken from a plane will show the same ragged contours as a photograph of the whole coast taken from a space station. A photograph of a one-foot-long section of the same coast will also show the same contours. The various coastlines are "self-similar": each similar to the others in shape, but different in magnitude.
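The coastline observation has a neat numerical counterpart. The Python sketch below "measures" an idealised fractal coastline (the Koch curve) with finer and finer rulers: the measured length keeps growing, and the self-similarity dimension comes out as log 4 / log 3, about 1.26.

# Measuring a Koch-curve "coastline" with finer and finer rulers.
# Each refinement shrinks the ruler by 3 and needs 4 times as many segments,
# so the measured length grows without bound.
import math

for level in range(6):
    ruler = (1 / 3) ** level          # ruler length at this level of detail
    segments = 4 ** level             # segments needed to trace the curve
    print(f"ruler {ruler:.5f}  measured length {ruler * segments:.3f}")

print("fractal dimension:", math.log(4) / math.log(3))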

Biologists began to find fractal organization controlling structures throughout the body. The standard exponential description of bronchial branching, for example, proved to be wrong; a fractal description turned out to fit the data. Other such systems include the urinary collecting system, the biliary ducts in the liver, and the network of special fibres in the heart that carry pulses of electric current to the contracting muscles.


Sunday, May 22, 2011

Ham Sandwich and Pancakes - A Theorem In Topology


Jack Dikian
May 2011

Biting into a ham sandwich today, it occurred to me that I’d almost forgotten the taste of ham. I hadn’t eaten it for years. The other thing it did was remind me of fond memories of university days, a time when I first read about the ham sandwich theorem. You can depend on topologists to make otherwise abstract ideas more appealing. The theorem goes like this:

Regardless of how the ham is distributed in a sandwich, a single planar cut can divide the sandwich into two parts containing equal amounts of bread and equal amounts of ham.

More formally, the ham sandwich theorem, also called the Stone–Tukey theorem after Arthur H. Stone and John Tukey, states that given n measurable "objects" in n-dimensional space, it is possible to divide all of them in half with a single (n − 1)-dimensional hyperplane.

For those who enjoy pancakes, the two-dimensional case is known as the pancake theorem: two infinitesimally thin pancakes on a plate can each be cut in half with a single straight cut.
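The two-dimensional case is also easy to play with numerically. The Python sketch below generates two "pancakes" as random point clouds and scans over cutting directions: for each direction it takes the line through the median of the first cloud (which halves it by construction) and looks for a direction where the second cloud is split evenly too. It is a coarse illustration of the intermediate-value argument behind the theorem, not a proof-grade algorithm, and the data is made up.

# Find one straight line that (approximately) halves two point clouds at once.
import math, random

random.seed(1)
pancake_a = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(201)]
pancake_b = [(random.gauss(3, 2), random.gauss(-1, 1)) for _ in range(200)]

def imbalance(theta):
    # theta parameterises the cutting line by the direction of its normal
    nx, ny = math.cos(theta), math.sin(theta)
    proj_a = sorted(nx * x + ny * y for x, y in pancake_a)
    cut = proj_a[len(proj_a) // 2]            # line through A's median point: halves A
    proj_b = [nx * x + ny * y for x, y in pancake_b]
    above = sum(p > cut for p in proj_b)
    return above - (len(pancake_b) - above)   # how unevenly pancake B is split

# Scan angles: the imbalance changes sign somewhere, so a bisecting line exists.
best = min((abs(imbalance(t)), t) for t in [k * math.pi / 1000 for k in range(1000)])
print("best angle:", best[1], "remaining imbalance for B:", best[0])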


Monday, January 3, 2011

7 Bridges


Jack Dikian
January 2011

While writing about Knot Theory last night, the subject and its associated problems in Topology brought back memories reminiscent of a child experiencing the color and movement of a circus for the first time.

Topology, and in particular Graph Theory, is often not part of high school math curricula, and thus for many it sounds strange and intimidating. However, there are some readily graspable ideas at the base of topology that are interesting, fun, and highly applicable to all sorts of situations. One of these is the topology of networks, first developed by Euler in the 1730s, inspired by the Seven Bridges of Konigsberg.

Konigsberg was the capital of East Prussia and is now known as Kaliningrad, in Russia. The city is built around the River Pregel at the point where it joins another river. An island named Kneiphof sits in the middle where the two rivers join. Seven bridges joined the different parts of the city on both sides of the rivers and the island.

The people of the city wondered whether one could walk around the city in a way that involved crossing each bridge exactly once. No one was able to find a way to do it, and the problem came to the attention of the mathematician Euler.

In 1735, Euler presented the solution to the problem before the Russian Academy, explaining why crossing all seven bridges without crossing any bridge twice was impossible.

Euler simplified the bridge problem by representing each land mass as a point and each bridge as a line. He reasoned that anyone passing through a land mass in the course of the walk must enter it by one bridge and leave it by another, so every land mass except possibly the start and end of the walk would need an even number of bridges. But in Konigsberg each of the four land masses had an odd number of bridges, which is why all seven bridges could not be crossed without crossing at least one of them twice.
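Euler's counting argument fits in a few lines of code. The sketch below builds the Konigsberg multigraph (the island Kneiphof plus the three other land masses, with the historical seven bridges) and applies the degree test: a walk crossing every bridge exactly once exists only if at most two land masses touch an odd number of bridges.

# Euler's degree argument for the Konigsberg bridges.
from collections import Counter

# Land masses: N(orth bank), S(outh bank), K(neiphof island), E(astern land)
bridges = [("N", "K"), ("N", "K"), ("S", "K"), ("S", "K"),
           ("N", "E"), ("S", "E"), ("K", "E")]

degree = Counter()
for a, b in bridges:
    degree[a] += 1
    degree[b] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print(degree)                                   # K touches 5 bridges, the others 3 each
print("odd-degree land masses:", odd)
print("walk crossing every bridge once possible:", len(odd) in (0, 2))

In Konigsberg all four land masses have odd degree, so no such walk exists - exactly Euler's conclusion.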

Euler's recognition that the key information was the number of bridges and the list of their endpoints (rather than their exact positions) presaged the development of topology. The difference between the actual layout and the graph schematic is a good example of the idea that topology is not concerned with the rigid shape of objects.

Sunday, January 2, 2011

Untangling Knots



Jack Dikian

January 2011

Recently, over the Christmas and New Year holidays, I found myself getting increasingly irritable with the number of times I needed to untangle my iPod’s headphones. The angrier I became, and the harder I tried to untangle the mess, the tighter the knots seemed to get. My young niece, on the other hand, approached the problem very differently. For her it was a breeze - calm, and almost mechanical – little fingers might have been a benefit.

What did interest me, however, was the idea of whether a heuristic approach could be taken to untangling a knot. Two things just changed in my language: first, I used the word “knot” in the singular; and second, implicitly, the ability to identify the knot type and follow a set of steps to unwind, to untangle, the knot.

We all know from experience how to create knots. We do this all the time, often, like me, unwittingly. In Pure Maths, we learn about knots whose ends are glued together; their classification forms the subject of a branch of Topology known as Knot Theory.

The thread goes back almost 150 years, to the nineteenth century, when physicists were speculating about the underlying principles of atoms. Lord Kelvin put forward a comprehensive theory of atoms which seemed to explain several of the essential qualities of the chemical elements - Kelvin's theory conjectured that atoms were knotted tubes of ether.

Kelvin's theory of atoms was taken seriously for many years. Maxwell, for example, thought that "it satisfies more of the conditions than any atom hitherto considered". The topological stability and the variety of knots were thought to mirror the stability of matter and the variety of chemical elements.

This theory inspired the physicist Peter Tait to undertake an extensive study and tabulation of knots in an attempt to understand when two knots were "different". His intuitive understanding of "different" and "same" is a useful notion even today: two knots are isotopic if one can be continuously manipulated in 3-space, with no self-intersections allowed, until it looks like the other.

Mathematicians, then and now, continue to pose the same question. That is, how do we know when two knots are isotopically the same. This failed atomic theory also left in its wake the riches of Tait's tabulation---163 knot projections---and a rudimentary understanding of isotopic sameness in terms of how one projection could be continuously manipulated to look like another. This understanding of projection manipulation was summarized in a set of conjectures for knot projections, the famous Tait Conjectures.