Equation of the Day #10: Golden Pentagrams

Ah, the pentagram, a shape associated with a variety of different ideas, some holy, some less savory. But to me, it’s a golden figure, and not just because of how I chose to render it here. The pentagram has a connection with a number known as the golden ratio, which is defined as

\begin{aligned}   \phi &= \dfrac{a}{b} = \dfrac{a+b}{a} \text{ for } a>b\\[8pt]    &= \dfrac{1+\sqrt{5}}{2} \approx 1.618\ldots \end{aligned}

This number is tied to the Fibonacci sequence and the Lucas numbers and seems to crop up a lot in nature (although how much it crops up is disputed). It turns out that the various line segments present in the pentagram are in golden ratio with one another.

In the image above, the ratio of red:green = green:blue = blue:black is the golden ratio. The reason for this is not immediately obvious and requires a bit of digging, but the proof is fairly straightforward and boils down to a simple statement.

First, let’s consider the pentagon at the center of the pentagram. What is the angle at each corner of a pentagon? There’s a clever way to deduce this. It’s not quite clear what the interior angle is (that is, the angle on the inside of the shape at an individual corner), but it’s quite easy to get the exterior angle.

The exterior angle of the pentagon (which is the angle of the base of the triangles that form the points of the pentagram) is equal to 1/5 of a complete revolution around the circle, or 72°. For the moment, let’s call this angle 2θ. To get the angle that forms the points of the pentagram, we need to invoke the fact that the sum of all angles in a triangle must equal 180°. Thus, the angle at the top is 180° – 72° – 72° = 36°. This angle I will call θ. While I’m at it, I’m going to label the sides of the triangle x and s (the blue and black line segments from earlier, respectively).

We’re nearly there! We just have one more angle to determine, and that’s the first angle I mentioned – the interior angle of the pentagon. Well, we know that the interior angle added to the exterior angle must be 180°, since the angles both lie on a straight line, so the interior angle is 180° – 72° = 108° = 3θ. Combining the pentagon and the triangle, we obtain the following picture.

Now you can probably tell why I labeled the angles the way I did; they are all multiples of 36°. What we want to show is that the ratio x/s is the golden ratio. By invoking the law of sines on the two isosceles triangles in the image above, we can show that

\dfrac{x}{s} = \dfrac{\sin 2\theta}{\sin\theta} = \dfrac{\sin 3\theta}{\sin\theta}

This equation just simplifies to sin 2θ = sin 3θ. With some useful trigonometric identities, we get a quadratic equation which we can solve for cos θ.

4\cos^2\theta - 2\cos\theta - 1 =0

Solving this equation with the quadratic formula yields

2\cos\theta = \dfrac{\sin 2\theta}{\sin\theta} = \phi,

which, when taken together with the equation for x/s, shows that x/s is indeed the golden ratio! Huzzah!
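For the skeptical, a few lines of Python confirm both the law-of-sines ratio and the root of the quadratic numerically (a quick sanity check, not part of the proof):

```python
import math

theta = math.radians(36)  # the pentagram point angle from the derivation above

# Ratio of side lengths from the law of sines: x/s = sin(2θ)/sin(θ).
ratio = math.sin(2 * theta) / math.sin(theta)

# The golden ratio, φ = (1 + √5)/2.
phi = (1 + math.sqrt(5)) / 2
print(ratio, phi)  # both ≈ 1.6180339887...

# cos(θ) should satisfy the quadratic 4cos²θ − 2cosθ − 1 = 0.
c = math.cos(theta)
print(4 * c**2 - 2 * c - 1)  # ≈ 0
```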

The reason the pentagram and pentagon are so closely tied to the golden ratio has to do with the fact that the angles they contain are multiples of the same angle, 36°, or one-tenth of a full rotation of the circle. Additionally, since the regular dodecahedron (d12) and regular icosahedron (d20) contain pentagons, the golden ratio abounds in them as well.

As a fun bonus fact, the two isosceles triangles are known as the golden triangle (all acute angles) and the golden gnomon (obtuse triangle), and are the two unique isosceles triangles whose sides are in golden ratio with one another.

So the next time you see the star on a Christmas tree, the rank of a military officer, or the geocentric orbit of Venus, think of the number that lurks within those five-pointed shapes.


Equation of the Day #9: The Uncertainty Principle

The Uncertainty Principle is one of the trickiest concepts for people learning quantum physics to wrap their heads around. In words, the Uncertainty Principle says “you cannot simultaneously measure the position and the momentum of a particle to arbitrary precision.” In equation form, it looks like this:

\Delta x \Delta p \ge \dfrac{\hbar}{2},

where \Delta x is the uncertainty of a measurement of a particle’s position, \Delta p is the uncertainty associated with its measured momentum, and ħ is the reduced Planck constant. What this equation says is that the product of these two uncertainties has to be greater than some constant. This has nothing to do with the tools with which we measure particles; this is a fundamental statement about the way our universe behaves. Fortunately, this uncertainty product is very small, since

\hbar \approx 0.000000000000000000000000000000000105457 \text{ J s}.
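To get a feel for the scale, here is a back-of-the-envelope calculation in Python: an electron confined to an atom-sized region (Δx ≈ 0.1 nm, an illustrative value) picks up a surprisingly large velocity uncertainty:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J·s
m_e = 9.1093837e-31     # electron mass, kg

dx = 1e-10                # position uncertainty: roughly one atomic radius, m
dp_min = hbar / (2 * dx)  # smallest allowed momentum uncertainty, kg·m/s
dv_min = dp_min / m_e     # corresponding velocity uncertainty, m/s

print(dp_min)  # ≈ 5.3e-25 kg·m/s
print(dv_min)  # ≈ 5.8e5 m/s — over 500 km/s for an atom-bound electron!
```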

The real question to ask is, “Why do particles have this uncertainty associated with them in the first place? Where does it come from?” Interestingly, it comes from wave theory.

Take the two waves above. The one on top is very localized, meaning its position is well-defined. But what is its wavelength? For photons and other quantum objects, wavelength (λ) determines momentum,

p = \dfrac{h}{\lambda},

so here we see a localized wave doesn’t really have a well-defined wavelength, and thus an ill-defined momentum. In fact, the wavelength of this pulse is smeared over a continuous spectrum of momenta (much like how the “color” of white light is smeared over the colors of the rainbow). The second wave has a pretty well-defined wavelength, but where is it? It’s not really localized, so you could say it lies smeared over a set of points, but it isn’t really in one place. This is the heart of the uncertainty principle.
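The momentum relation above is easy to evaluate; for instance, taking an arbitrary green-laser wavelength of 532 nm:

```python
h = 6.62607015e-34  # Planck constant, J·s

wavelength = 532e-9  # 532 nm, a common green laser pointer wavelength
p = h / wavelength   # photon momentum, kg·m/s
print(p)  # ≈ 1.25e-27 kg·m/s — tiny, which is why light doesn't shove us around
```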

So why does this apply to particles? After all, particles aren’t waves. However, at the quantum level, objects no longer fit into either category. So particles have wavelike properties and waves have particle-like properties. In fact, from here on I will refer to “particles” as “quantum objects” to rid ourselves of this confusing nomenclature. So, because waves exhibit this phenomenon – and quantum objects have wavelike properties – quantum objects also have an uncertainty principle associated with them.

However, this is arguably not the most bizarre thing about the uncertainty principle. There is another facet of the uncertainty principle that says that the shorter the lifetime of a quantum object (how long the object exists before it decays), the less you can know about its energy. Since mass and energy are equivalent via Einstein’s E = mc^2, this means that objects that exist for very short times don’t have a well-defined mass. It also means that, if you pulse a laser over a short enough time, the light that comes out will not have a well-defined energy, which means that it will have a spread of colors (our eyes can’t see this spread, of course, but it matters a great deal when you want to use very precise wavelengths of light in your experiment and short pulses at the same time). In my graduate research, we used this so-called “energy-time” uncertainty to determine whether certain configurations of the hydrogen molecule, H2, are long-lived or short-lived; the longer-lived states exhibit sharper spectral lines, indicating a more well-defined energy, and the short-lived states exhibit wider spectral lines, indicating a less well-defined energy.

So while we can’t simultaneously measure the position and momentum of an object to arbitrary certainty, we can definitely still use the uncertainty principle to glean information about the world of the very, very small.


Equation of the Day #8: Absolute Zero and Negative Temperatures

If you’re reading this indoors, the room you are currently sitting in is probably around 20°C, or 68°F (within reasonable error, since different people like their rooms warmer or colder or have no control over the temperature of the room they’re reading this entry in). But what does it mean to be at a certain temperature? Well, we often define temperature as an average of the movement of an ensemble of constituent particles – usually atoms or molecules. For instance, the temperature of a gas in a room is given as a relation to the gas’ rms molecular speed:

v_{\rm rms} = \sqrt{\langle v^2\rangle} = \sqrt{\dfrac{3kT}{m}},

where T is the absolute temperature (e.g. Kelvin scale), m is the mass per particle making up the gas, and k is the Boltzmann constant, and the angular brackets mean “take the average of the enclosed quantity.” For reference, room temperature nitrogen (which makes up 78% of the atmosphere) has an rms speed of half a kilometer (one third of a mile) per second. But this definition is a specific case. In general, we need a more encompassing definition. There is a quantity that arises in thermodynamics known as entropy, which basically quantifies the disorder of a system. It is related to the number of ways to arrange the elements of a system without changing the energy.
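The nitrogen figure quoted above is easy to reproduce (a quick check in Python, taking room temperature to be 293 K):

```python
import math

k = 1.380649e-23           # Boltzmann constant, J/K
T = 293.0                  # room temperature, K
m = 28.0 * 1.66053907e-27  # mass of an N2 molecule (28 atomic mass units), kg

v_rms = math.sqrt(3 * k * T / m)
print(v_rms)  # ≈ 511 m/s, about half a kilometer per second
```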

For instance, there are a lot of ways of having a messy room. You can have clothes on the floor, you can track mud into it, you can leave dishes and food everywhere. But there are very few ways to have an immaculately clean room, where everything is tidy and put in its proper place. Thus, the messy room has a larger entropy, while the clean room has very low entropy. It is this quantity that helps to define temperature generally. Denoting entropy as S, the more robust definition is

T \equiv \left( \dfrac{\partial E}{\partial S} \right)_V = \left( \dfrac{\partial H}{\partial S} \right)_P,

or, in words, temperature is defined as the change in energy divided by the corresponding change in entropy of something with fixed volume, which is equivalent to the change in enthalpy (heat content) divided by the change in entropy at a fixed pressure. Thus, if you increase the energy of an object and find that it becomes more disordered, the temperature is positive. This is what we are used to. When you heat up air, it becomes more disorderly because the particles making it up are moving faster and more randomly, so it makes sense that the temperature must be positive. If you cool air, the particles making it up slow down and it tends to become more orderly, so the temperature is still positive, but decreasing. What happens when you can’t pull any more energy out of the air? That means the temperature has gone to zero and all motion has stopped, leaving the gas in a very ordered state whose entropy is no longer changing. This point, where the speed of the gas particles is zero, is called absolute zero.

It is impossible to reach absolute zero temperature, but it isn’t intuitive as to why at first. The main reason is due to quantum mechanics. If all atomic motion of an object stopped, its momentum would be known exactly, and this violates the Uncertainty Principle. But there is also another reason. In thermodynamics, there is a quantity related to temperature that is defined as

\beta = \dfrac{1}{kT}.

Since k is just a constant, β can be thought of as inverse temperature. This sends absolute zero to β being infinity! Now, this makes much more sense as to why achieving absolute zero is impossible – it means we have to make a quantity go to infinity! It turns out that β is the more fundamental quantity to deal with in thermodynamics for this reason (among others).

Now, you’re probably thinking, “Well, that’s all well and good, but, are you saying that this means that you can get to infinite temperature?” In actuality, you can, but you need a special system to be able to do it. To get temperature to infinity, you need β to go to zero. How do we do that? Well, once you cross zero, you end up with a negative quantity, so if we could somehow get a negative temperature, then we would have to cross β equals zero. But how do we get a negative temperature, and what would that be like? Well, we would need entropy to decrease when energy is added to our system.

It turns out that an ensemble of magnets in an external magnetic field would do the trick. See, when a compass is placed in a magnetic field, it wants to align with the field (call that direction north). But if I put some energy into the system (i.e. I push the needle), I can get the needle of the compass to point in the opposite direction (south). When less than half of the compasses are pointing opposite the external field, each time I flip a compass needle I’m increasing entropy (since the perfect order of all the compasses pointing north has been tampered with). But once more than half of those compasses are pointing south, I am decreasing the disorder of the system when I flip another magnet south! This means that the temperature must be negative! In practice, the compasses are actually molecules with an electric dipole moment or electrons with a certain spin (which act like magnets), but the same principles apply. So β equals zero exactly when half of the compasses are pointing north and the other half are pointing south; that is where T is infinite, and it is at this infinity that the sign of T swaps.
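The compass picture can be made quantitative with the two-level Boltzmann relation, n_south/n_north = e^(−ε/kT); inverting it for T shows the sign flip directly. Here is a sketch with a made-up energy gap ε:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K
eps = 1e-21       # hypothetical energy gap between south and north, J

def temperature(n_north, n_south):
    """Temperature of the two-level system from its populations
    (undefined at exactly 50/50, where beta = 0 and T is infinite)."""
    beta = math.log(n_north / n_south) / eps
    return 1.0 / (k * beta)

print(temperature(75, 25))  # mostly north: a positive temperature
print(temperature(25, 75))  # population inversion: a negative temperature!
```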

Lasers are a more realistic physical system that employs negative temperatures. For lasers to work, atoms or electrons are excited to a higher energy state. When the higher energy state is populated by more than half of the atoms or electrons in the system, a population inversion occurs, which puts the system at a negative temperature in the same way as the compass needles described above.

It’s interesting to note that negative temperatures are actually hotter than any positive temperature, since you have to add energy to get to negative temperature. One could define a quantity as –β, so that plotting it on a line would be a more intuitive way to see that the smaller the quantity, the colder the object is, while preserving the infinities of absolute zero and “absolute hot.”


Equation of the Day #7: E=mc^2 and the Pythagorean Theorem

I doubt I’m the first person to introduce you to either of these equations, but if I am, then you’re in for a treat! The first equation is courtesy of Albert Einstein,

E = mc^2.
Just bask in that simplicity. The constant c is the speed of light, which is a rather large number. In fact, light takes just over a second to travel to Earth from the moon, while the Apollo missions took three days. The fastest we’ve ever sent anything toward the moon was the New Horizons mission to Pluto, and it passed the moon after a little over eight-and-a-half hours. So, light is pretty fast, and therefore c^2 is a gigantic number. The E and m in this equation are energy and mass, respectively. If we ignore the c^2, which acts as a conversion rate, E=mc^2 says that energy and mass are equivalent, and that things with mass have energy as a result of that mass, regardless of what they’re doing or where they are in the universe. This equation is at the heart of radioactive decay, matter/antimatter annihilation, and the processes occurring at the center of the sun. Now, that all may sound foreign, but it’s at work constantly, and we take advantage of it. For instance, positron emission tomography scans, or PET scans, are used to image the inside of the body using a radioactive substance that emits positrons (the antimatter counterpart to the electron). These positrons annihilate with the electrons in your body and release light, which is then detected by a special camera. This information is then used to reconstruct an image of your insides.

As numerous as the phenomena are that E=mc^2 covers, it’s actually not the full story. After all, objects have energy that isn’t a result of their mass; they can have energy of motion (kinetic energy) and energy due to where they are in the universe (potential energy). Ignoring potential energy for the moment, you may be wondering how to include the kinetic energy in our simple equation. As it turns out, special relativity says that the energy contributions obey the Pythagorean theorem,

a^2 + b^2 = h^2,

which relates the three sides of a triangle whose largest angle is 90°.

(I’ve called the longest side h instead of c to avoid confusion with the speed of light.) In our example, the total energy E is the longest side, and the “legs” are the rest energy (mc^2) and the energy contribution due to momentum, written as pc.

In equation form,

E^2 = (pc)^2 + (mc^2)^2.

This equation applies to a free object (one whose potential energy we can ignore), but the geometric relationship between the total energy and its contributions is very simple to grasp when shown as a right triangle. In fact, this isn’t the only place in special relativity where Pythagorean relations pop up! The formulas for time dilation and length contraction, two other relativistic phenomena, are governed by a similar equation. For instance, time dilation follows the triangle

where t_{\rm mov} is the time elapsed according to a moving clock, t_{\rm rest} is the time read by a clock at rest, and s is the distance that the moving clock has covered over the elapsed time. How can two clocks read different times? I’ll save that question for another day.
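The energy triangle above is straightforward to check numerically; here is a sketch for an electron whose momentum happens to equal mc, which makes the triangle a 45° one:

```python
import math

c = 2.99792458e8    # speed of light, m/s
m = 9.1093837e-31   # electron mass, kg (an example particle)

p = m * c                                   # chosen so both legs are equal
rest_energy = m * c**2
total_energy = math.hypot(p * c, m * c**2)  # hypotenuse of the energy triangle

print(total_energy / rest_energy)  # √2 ≈ 1.414..., as expected for a 45° triangle
```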


Equation of the Day #6: Newton’s Second Law

Born on Christmas Day, 1642, Isaac Newton was one of the most brilliant minds the world has ever seen. He invented calculus, discovered that white light is composed of all the colors of the rainbow, formulated the law of universal gravitation (which we now use to send people and robots into orbit around Earth and other objects in the Solar System), and devised his famous laws of motion (among a host of other achievements). Today I’m focusing on his Second Law of Motion, which you’ve probably seen written as

{\bf F}=m {\bf a},

where F is force, m is mass, and a is acceleration. The boldface formatting indicates a vector, which is a mathematical entity that has both magnitude (or “strength”) and direction (up/down, left/right, forward/backward). What this means is that an object whose speed (the magnitude of velocity) or direction of travel is changing must be subject to an unbalanced force. This ties in to Newton’s First Law of Motion, which states that the natural state of an object is to move in a straight line at constant speed. However, the above equation is actually not fully general – it only applies when the accelerating object has constant mass. This works fine for projectile motion of a pitcher hurling a baseball or a cannonball launched at an enemy fortress, but it fails for, say, a rocket delivering a payload to the International Space Station, since fuel is being burned and ejected from the rocket. In general, Newton’s Second Law is given by

{\bf F}=\dfrac{d{\bf p}}{dt},

where p is the total momentum of the system. Momentum is a measurement of the “oomph” that an object has due to its motion. Explicitly, it is the product of mass and velocity, or {\bf p} = m{\bf v} . The expression d/dt indicates the rate of change of a quantity, in this case momentum, over time. So Newton’s Second Law states that a change in an object’s motion is due to an unbalanced force, which sounds like what I said for the equation {\bf F}=m {\bf a}, but this takes into account a change in mass as well.

Conceptually, this was a big breakthrough at the time and is something that students in introductory physics classes struggle with today. (Note: Newton figured all this stuff out and more by the time he was 26. Let that sink in.) On a quantitative level, this law allows us to predict the motions of objects that are subjected to certain kinds of forces. If we know the nature of the force an object is subjected to, we can make predictions of what path the object will travel along. Similarly, if we know the trajectory of an object, we can predict the behavior of the force acting upon it. This was how Newton figured out the nature of gravity. Observations of his time showed that, from a sun-centered view of the Solar System, the planets orbit in ellipses, not circles. Newton surmised that this meant that the force of gravity must diminish with the square of the separation between two objects.

But how? The presence of acceleration in Newton’s Second Law indicates a differential equation. Acceleration is the rate of change of velocity, which itself is the rate of change of position (which I will denote as r). So, taking that into account, we get the equation

\dfrac{d^2{\bf r}}{dt^2} = \dfrac{{\bf F}}{m}.

This can be solved for an expression of position as a function of time when we know the form that F takes. If the force is constant in space (like gravity near Earth’s surface), r takes on the form of parabolic motion. If the force obeys Hooke’s Law, F = -k \Delta r , then the solution is simple harmonic oscillation. And if the force obeys an inverse square law, the position takes the functional form of conic sections, which are ellipses for the planets bound to the Sun in our solar system.
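This differential equation is also easy to explore numerically. The sketch below integrates the inverse-square case with the Euler–Cromer method in units where GM = 1 (my own toy setup, not anything Newton did), and checks that the conserved orbital energy stays negative, i.e. the orbit is bound:

```python
import math

# Toy two-body problem in units where GM = 1.
x, y = 1.0, 0.0      # initial position
vx, vy = 0.0, 1.2    # tangential speed below the escape speed sqrt(2): bound orbit
dt = 1e-4

for _ in range(200_000):  # integrate for 20 time units (roughly 1.3 orbits)
    r3 = (x * x + y * y) ** 1.5
    vx -= x / r3 * dt     # acceleration from the inverse-square force
    vy -= y / r3 * dt
    x += vx * dt          # Euler-Cromer: update position with the new velocity
    y += vy * dt

# Total energy per unit mass, E = v²/2 − 1/r, is conserved and negative (bound).
E = 0.5 * (vx**2 + vy**2) - 1 / math.hypot(x, y)
print(E)  # ≈ −0.28, the initial value 0.5·1.2² − 1
```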

Conic sections: 1. Parabola, 2. Circle (bottom) and ellipse (top), 3. Hyperbola. (Image: Wikimedia Commons)

This is the model we use when we send objects into orbit or when we want to model a baseball being hit out of the park (among many other things), and it has been extended to explain phenomena that Newton himself could not have imagined. When scientists or mathematicians describe equations as beautiful, this pops into my head. It has such a simple form, but explains a wealth of real phenomena.

Equation of the Day #5: The Wave Equation

Waves are ubiquitous throughout nature, from water waves to sound waves to light and even matter itself. For this reason, one of my favorite equations is the wave equation,

\Large\dfrac{1}{c^2} \dfrac{\partial^2 \psi}{\partial t^2} = \nabla^2 \psi.

This is a differential equation, where ∂/∂t notation indicates the derivative operator with respect to the variable t, which I’ll get back to in a moment. In this equation, \psi = \psi(x,y,z,t) represents the deformation the wave is making at a given point in space (x,y,z) at a given time t, and c is the speed of the wave. The right hand side is known as the Laplacian of \psi . Physically, this is a measure of how much \psi(x,y,z,t) deviates from its average local value. If \psi is at a minimum, \nabla^2\psi is positive; it is negative if \psi is at a maximum.

A derivative is the rate of change of a function, like the deformation \psi . So, \partial\psi/\partial t is the rate at which the deformation changes over time. The second derivative, which is in the wave equation, is the rate of change of this rate of change, like how acceleration is the rate of change of velocity, which is itself the rate of change of the position of an object. Thus, the left hand side indicates how the variation in \psi with respect to time changes in time at a given point in space. These spatial averages and temporal changes feed each other in a constant mutual feedback, resulting in the wave taking on a certain shape. All wave phenomena are governed by this equation (or a slight modification of it). In one dimension, the wave equation simplifies to

\dfrac{1}{c^2} \dfrac{\partial^2 \psi}{\partial t^2} = \dfrac{\partial^2 \psi}{\partial x^2},

which, for pure frequencies/wavelengths has the solutions

\psi(x,t) = A \sin(kx-\omega t) + B \cos(kx - \omega t),

where A and B are determined by appropriate boundary conditions (like if a string is fixed at both ends, or free at one end and fixed at the other), \omega is the angular temporal frequency of the wave, and k is the angular spatial frequency of the wave.

(Image: Wikimedia Commons)

This equation can be rewritten in terms of the wavelength \lambda (shown above) and period T of the wave,

\psi(x,t) = A \sin\left[\tau \left(\dfrac{x}{\lambda}-\dfrac{t}{T}\right)\right] + B \cos\left[\tau \left(\dfrac{x}{\lambda}-\dfrac{t}{T}\right)\right],

where \tau is the circle constant. These quantities are related via the speed of the wave, c = \lambda/T = \omega/k . These solutions govern things like vibrations of a string, sound made by an air column in a pipe (like that of an organ, trumpet, or didgeridoo), or even waves created by playing with a slinky. They also govern the resonances of certain optical cavities, such as lasers or etalons. Adding up a bunch of waves with pure tones can create waves of almost any imaginable shape, such as this:

(Image: Wikimedia Commons)

Or this:

(Image: Wikimedia Commons)

Waves do not have to be one dimensional, of course, but I’ll save the two- and three-dimensional cases for another entry.
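That superposition idea is easy to demonstrate in a few lines of Python: summing odd harmonics with amplitudes 1/(2n+1) (the Fourier series of a square wave) flattens a pure sine into a square-ish shape:

```python
import math

def square_approx(x, n_terms=50):
    """Partial Fourier series of a square wave: (4/π) Σ sin((2n+1)x)/(2n+1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * n + 1) * x) / (2 * n + 1) for n in range(n_terms)
    )

print(square_approx(math.pi / 2))      # ≈ +1, the flat top of the square wave
print(square_approx(3 * math.pi / 2))  # ≈ −1, the flat bottom
```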


Equation of the Day #4: The Pentagram of Venus

The above image is known as the Pentagram of Venus; it is the shape of Venus’ orbit as viewed from a geocentric perspective. This animation shows the orbit unfold, while this one shows the same process from a heliocentric perspective. There are five places in Venus’ orbit where it comes closest to the Earth (known as perigee), and this is due to the coincidence that

\dfrac{1\text{ Venusian Year}}{1\text{ Earth Year}} = \dfrac{224.701\ \text{days}}{365.256\ \text{days}} =0.6151877 \approx \dfrac{8}{13} =0.6153846

When two orbital periods can be expressed as a ratio of integers it is known as an orbital resonance (similar to how a string has resonances equal to integer multiples of its fundamental frequency). The reason that there are five lobes in Venus’ geocentric orbit is that 13-8=5 . Coincidentally, these numbers are all part of the Fibonacci sequence, and as a result many people associate the Earth-Venus resonance with the golden ratio. (Indeed, pentagrams themselves harbor the golden ratio in spades.) However, Venus and Earth do not exhibit a true resonance, as the ratio of their orbital periods is about 0.032\% off of the nice fraction 8/13 . This causes the above pattern to precess, or drift in alignment. Using the slightly more accurate fraction of orbital periods, 243/395 , we can see this precession.

This is the precession after five cycles (40 Earth years). As you can see, the pattern slowly slides around without the curve closing itself, but the original 13:8 resonance pattern is still visible. If we assume that 243/395 is indeed the perfect relationship between Venus and Earth’s orbital periods (it’s not; it precesses 0.8^\circ per cycle), the resulting pattern after one full cycle (1944 years) is

Which is beautiful. The parametric formulas I used to plot these beauties are

\begin{aligned} x(t) &= \sin t + r^{2/3} \sin\left(\frac{t}{r} \right)\\ y(t) &= \cos t + r^{2/3} \cos\left(\frac{t}{r} \right) \end{aligned}

where t is time in years and r is the ratio of orbital periods (less than one).
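If you want to reproduce the plots, the formulas drop straight into Python. (Here I take t in radians, with 2π corresponding to one Earth year; with the idealized ratio r = 8/13, the curve should close on itself after 8 Earth years, i.e. at t = 16π.)

```python
import math

r = 8 / 13  # idealized Venus:Earth orbital period ratio

def position(t):
    x = math.sin(t) + r ** (2 / 3) * math.sin(t / r)
    y = math.cos(t) + r ** (2 / 3) * math.cos(t / r)
    return x, y

# The idealized pentagram closes after 8 Earth years (t = 16π).
print(position(0))
print(position(16 * math.pi))  # the same point, up to floating-point error
```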


Equation of the Day #3: Triangular Numbers

I like triangles. I like numbers. So what could possibly be better than having BOTH AT THE SAME TIME?! The answer is nothing!

The triangular numbers are the numbers of objects one can use to form a triangle.

Anyone up for billiards? Or bowling? (Image: Wikimedia Commons)

Pretty straightforward, right? To get the number, we just add up the total number of things, which is equal to adding up the number of objects in each row. For a triangle with n rows, this is equivalent to

\displaystyle T_n = 1+2+3+ \ldots + n = \sum_{k=1}^n k

This means that the triangular numbers are just sums from 1 to some number n . This gives us a good definition, but is rather impractical for a quick calculation. How do we get a nice, shorthand formula? Well, let’s first add sequential triangular numbers together. If we add the first two triangular numbers together, we get 1 + 3 = 4 . The next two triangular numbers are 3 + 6 = 9. The next pair is 6 + 10 = 16 . Do you see the pattern? These sums are all square numbers. We can see this visually using our triangles of objects.

(Image: Wikimedia Commons)

You can do this for any two sequential triangular numbers. This gives us the formula

T_n + T_{n-1} = n^2

We also know that two sequential triangular numbers differ by a new row, or n . Using this information, we get that

\begin{aligned} n^2 &= T_n + (T_n - n) \\ 2 T_n & = n^2 + n = n(n+1) \\ T_n &= \frac{n(n+1)}{2} = \binom{n+1}{2} \end{aligned}

Now we finally have an equation to quickly calculate any triangular number. The far right of the final line is known as a binomial coefficient, read “n plus one choose two.” It is defined as the number of ways to pick two objects out of a group of n + 1 objects.

For example, what is the 100^{\rm th} triangular number? Well, we just plug in n = 100 .

T_{100} = \dfrac{(100)(101)}{2} = \dfrac{10100}{2} = 5050
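In code, the closed form and the brute-force sum agree (a quick sanity check):

```python
def triangular(n):
    """The nth triangular number, via the closed form n(n+1)/2."""
    return n * (n + 1) // 2

print(triangular(100))     # 5050
print(sum(range(1, 101)))  # 5050, the brute-force check
```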

We just summed up all the numbers from 1 to 100 without breaking a sweat. You may be thinking, “Well, that’s cool and all, but are there any applications of this?” Well, yes, there are. The triangular numbers give us a way of figuring out how many elements are in each row of the periodic table. Each row is determined by what is called the principal quantum number, n, which can be any integer from 1 to \infty . The energy corresponding to n has n angular momentum values (l = 0, 1,\ldots, n-1) which the electron can possess, each of which has a total of 2l + 1 orbitals for an electron to inhabit, and two electrons can inhabit a given orbital. Summing up all the places an electron can be for a given n involves summing up all these possible orbitals, which takes on the form of a triangular number.

\begin{aligned} \sum_{l=0}^{n-1}(2l +1) &= 2 \sum_{l=0}^{n-1} l + n \\ &= 2T_{n-1} + n \\ &= n(n-1) + n \\ &= n^2 \end{aligned}

The end result of this calculation is that there are n^2 orbitals for a given n , and two electrons can occupy each orbital; this leads to each row of the periodic table having 2\lceil (n+1)/2 \rceil^2 elements in the n^{\rm th} row, where \lceil x \rceil is the ceiling function. (This complication is due to the Aufbau principle, which dictates how energy levels fill up.) They also crop up in quantum mechanics again in the quantization of angular momentum for a spherically symmetric potential (a potential that is determined only by the distance between two objects). The total angular momentum for such a particle is given by

L^2 = \hbar^2 l(l+1) = 2 \hbar^2 T_l

What I find fascinating is that this connection is almost never mentioned in physics courses on quantum mechanics, and I find that kind of sad. The mathematical significance of the triangular numbers in quantum mechanics is, at the very least, cute, and I wish it would just be mentioned in passing for those of us who enjoy these little hidden mathematical gems.
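Both counting results above check out numerically; the sketch below sums the orbitals to get n² and applies the ceiling formula for row lengths (using Python's math.ceil):

```python
import math

def orbitals(n):
    """Number of orbitals in shell n: the sum of 2l+1 over l = 0..n-1."""
    return sum(2 * l + 1 for l in range(n))

print([orbitals(n) for n in (1, 2, 3, 4)])  # [1, 4, 9, 16] — perfect squares

def row_length(n):
    """Elements in the nth row of the periodic table: 2*ceil((n+1)/2)**2."""
    return 2 * math.ceil((n + 1) / 2) ** 2

print([row_length(n) for n in range(1, 8)])  # [2, 8, 8, 18, 18, 32, 32]
```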

There are more cool properties of triangular numbers, which I encourage you to read about, and other so-called “figurate numbers,” like hexagonal numbers, tetrahedral numbers, pyramidal numbers, and so on, which have really cool properties as well.


Equation of the Day #2: Cardioid

I made the above figure in Inkscape. A cardioid is the envelope formed by a set of circles whose centers lie on a circle and which pass through one common point in space. This image shows the circle on which the centers of the circles in the above image lie. A cardioid is also the path traced by a point on a circle which is rolling along the surface of another circle when both circles have the same radius (here is a cool animation of that).

What is the cardioid’s significance? Well, it looks like a heart, which is kind of cool. It’s also the (2D) pickup pattern of certain microphones (I have a USB cardioid microphone). If a sound is produced at a given point in space, the pickup pattern shows an equal intensity curve. So, if I place a microphone at the intersection point of all those circles, the outside boundary is where a speaker producing, say, a 440 Hz tone would have to be in order to be heard at a given intensity. The best place for that speaker, then, would be on the side where the curve is most rounded (the bottom in this picture) without being too far away from the microphone.

Another interesting fact about the cardioid is that it is the reflection of a parabola through the unit circle (r = 1) . Here’s what I mean; in polar coordinates, the equation of the above cardioid is given by

r = 2a(1-\sin\theta)

where a is a scaling factor and \theta is the angle relative to the positive x-axis. The origin is at the intersection of the circles. The equation of a parabola opening upwards and whose focus is at the origin in polar coordinates is just

r = [2a(1-\sin\theta)]^{-1}

which is an inversion of the cardioid equation through r = 1 , or the unit circle.
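This inversion is easy to verify numerically: at every angle \theta, the product of the two radii should equal 1, the square of the unit circle's radius. A minimal Python check, with the scale factor a chosen arbitrarily:

```python
import math

a = 1.5  # arbitrary scale factor

def r_cardioid(theta):
    return 2 * a * (1 - math.sin(theta))

def r_parabola(theta):
    return 1 / (2 * a * (1 - math.sin(theta)))

# Inversion through the unit circle: r_cardioid * r_parabola == 1
for k in range(1, 100):
    theta = k * math.tau / 100
    if math.isclose(theta, math.pi / 2):
        continue  # at theta = pi/2 the cardioid radius is 0 and the parabola blows up
    assert math.isclose(r_cardioid(theta) * r_parabola(theta), 1.0)
```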


Equation of the Day #1: Complex Numbers

Most of us are used to the real numbers. Real numbers consist of the whole numbers (0, 1, 2, 3, 4, \ldots) , the negative numbers (-1, -2, -3, \ldots), the rational numbers (1/2, 2/3, 3/4, 44/7, \ldots) , and the irrational numbers (numbers that cannot be represented by fractions of integers, such as the golden ratio, \sqrt{2} , or \tau ). All of these can be written in decimal format, even though they may have infinite decimal places. But, when we use this number system, there are some numbers we can’t write. For instance, what is the square root of -1 ? In math class, you may have been told that you can’t take the square root of a negative number. That’s only half true, as you can’t take the square root of a negative number and write it as a real number. This is because the square root of a negative number is not part of the set of real numbers.

This is where the complex numbers come in. Suppose I define a new number, let’s call it i , where

i^2 = -1.

We’ve now “invented” a value for the square root of -1 . Now, what are its properties? If I take i^3 , I get -i , since i^3 = i\cdot i^2 . If I take i^4 , then I get i^2\cdot i^2 = +1 . If I multiply this by i again, I get i . So the powers of i are cyclic through i, -1, -i, and 1 .
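Python has complex numbers built in (it writes the imaginary unit as 1j), so this four-step cycle is easy to see. I use repeated multiplication rather than the ** operator so the results stay exact:

```python
i = 1j  # Python's imaginary unit

powers = [i]
for _ in range(3):
    powers.append(powers[-1] * i)   # multiply by i repeatedly

# The powers of i cycle through i, -1, -i, 1
assert powers == [1j, -1, -1j, 1]
```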

This is interesting, but what is the magnitude of i , i.e. how far is i from zero? Well, the way we take the absolute value in the real number system is by squaring the number and taking the positive square root. This won’t work for i , though: squaring i gives -1 , and taking the square root just gets us back to i . Instead, let’s redefine the absolute value by taking what’s called the complex conjugate of i , multiplying the two together, and then taking the positive square root. The complex conjugate of i is obtained by negating the imaginary part of i . Since i is purely imaginary (there is no real part), its complex conjugate is -i . Multiply them together, and you get -i\cdot i = -1\cdot i^2 = 1 , and the positive square root of 1 is simply 1 . Therefore, the number i has a magnitude of 1 . It is for this reason that i is known as the imaginary unit!
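Python's complex type makes this recipe concrete: .conjugate() gives the complex conjugate, and the built-in abs() computes exactly the magnitude defined above.

```python
import math

i = 1j

# |i|^2 = conjugate(i) * i: the conjugate of i is -i, and (-i)(i) = 1
mag_squared = (i.conjugate() * i).real
assert mag_squared == 1.0
assert math.sqrt(mag_squared) == 1.0

# The built-in absolute value agrees: |i| = 1
assert abs(i) == 1.0
```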

Now that we have defined this new unit, i , we can now create a new set of numbers called the complex numbers, which take the form z = a + bi , where a and b are real numbers. We can now take the square root of any real number, e.g. the square root of -4 can be written as \pm 2i , and we can make complex numbers with real and imaginary parts, like 3 + 4i .

How do we plot complex numbers? Well, complex numbers have a real part and an imaginary part, so the best way to do this is to create a graph where the abscissa (x-value) is the real part of the number and the ordinate (y-value) is the imaginary part. This is known as the complex plane. For instance, 3 + 4i would have coordinates (3,4) in this coordinate system.

What is the magnitude of this complex number? Well, it would be the square root of itself multiplied by its complex conjugate, or the square root of (3 + 4i)(3 - 4i) = 9 + 12i - 12i +16 = 25 . The positive square root of 25 is 5 , so the magnitude of 3 + 4i is 5 .
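The same arithmetic in Python, for 3 + 4i:

```python
import math

z = 3 + 4j

# z times its conjugate is real: (3 + 4i)(3 - 4i) = 25
zz_bar = z * z.conjugate()
assert zz_bar == 25

# The magnitude is the positive square root of 25
assert math.sqrt(zz_bar.real) == 5.0
assert abs(z) == 5.0
```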

We can think of points on the complex plane as represented by a vector pointing from the origin to the point in question. The magnitude of this vector is given by the absolute value of the point, which we can denote as r . The real part of this vector is the magnitude multiplied by the cosine of the angle the vector makes with the positive real axis; this angle we can denote as \phi . The imaginary part of the vector is the magnitude multiplied by the sine of \phi , times the imaginary unit i . So, we get that our complex number, z , can be written as z = r(\cos\phi + i\sin\phi) . The Swiss mathematician Leonhard Euler discovered a special identity relating to this expression, known now as Euler’s Formula, which reads as follows:

e^{i\phi} = \cos \phi + i\sin\phi

where e is the base of the natural logarithm. So, we can then write our complex number as z = re^{i\phi} . What is the significance of this? Well, for one, you can derive one of the most beautiful equations in mathematics, known as Euler’s Identity:

e^{i\tau} = 1 + 0

This equation contains the most important constants in mathematics: e , Euler’s number, the base of the natural logarithm; i , the imaginary unit which I’ve spent this whole time blabbing about; \tau , the irrational ratio of a circle’s circumference to its radius, which appears all over the place in trigonometry; 1 , the real unit and multiplicative identity; and 0 , the additive identity.
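Python's cmath module lets us check Euler's formula, the identity, and the polar form numerically (up to floating-point error):

```python
import cmath
import math

# Euler's formula: e^{i*phi} = cos(phi) + i*sin(phi), checked at several angles
for k in range(8):
    phi = k * math.tau / 8
    lhs = cmath.exp(1j * phi)
    rhs = complex(math.cos(phi), math.sin(phi))
    assert cmath.isclose(lhs, rhs)

# Euler's identity in tau form: e^{i*tau} = 1
assert cmath.isclose(cmath.exp(1j * math.tau), 1)

# Polar form: z = r * e^{i*phi} recovers 3 + 4i from its magnitude and angle
r, phi = cmath.polar(3 + 4j)
assert cmath.isclose(r * cmath.exp(1j * phi), 3 + 4j)
```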

So, what bearing does this have on real life? A lot. Imaginary and complex numbers are used to solve many differential equations that model real physical situations, such as waves propagating through a medium, wave functions in quantum mechanics, and electromagnetic phenomena, and they also underlie fractals, which in and of themselves have a wide range of real-life applications.