Category Archives: Physics

Climate Models and the Carbon Cycle

The carbon cycle:

To tell the story from the beginning, consider the carbon atom – 6 protons, 4 valence electrons, the 4th most abundant element in the universe, and the basis of all life on earth. It’s locked up in rocks and plants, dissolved into our oceans, and mixed with other gases in our atmosphere. In rock, it shows up as coal, limestone, or graphite, for instance. In rivers and oceans, it’s mainly carbonic acid. In the atmosphere, carbon dioxide and methane.

While on the whole, restorative chemical processes [1] keep the relative distribution of carbon among these reservoirs fairly stable, each individual atom of carbon is in motion, traveling between the various phases, from gas to liquid to solid, between atmosphere and oceans and rocks and living matter. This is what’s known as the carbon cycle.


Carbon cycle diagram by Courtney Kesinger

The carbon cycle has various loops, none of which is completely closed. For instance, in the fast carbon cycle, which is traversed on the time scale of a human life, carbon is taken up from the atmosphere by plants through photosynthesis, stored as sugars, then released back into the atmosphere when it is burned for energy, either by the plant itself, or by something that has consumed the plant, such as an animal, a microbe, or a fire.

But this loop is not closed. Dead plant matter that is buried before it has time to decompose does not release its carbon back into the atmosphere as part of this fast cycle. Instead, it becomes coal, oil, or natural gas, locked up for millions of years beneath the earth’s surface.

Before the industrial revolution, carbon stored in fossil fuels found its way into the atmosphere mainly through volcanic eruptions, as part of the slow carbon cycle, so called because a round trip takes roughly 100 million years. In this leisurely cycle, rain dissolves atmospheric carbon dioxide, forming a weak acid – carbonic acid – which it then deposits into lakes, rivers, and oceans. There, the dissolved carbonate ions are collected by living organisms and built into shells. Carbon, now in solid form, settles to the sea floor when these organisms die and builds up sedimentary rock layer by layer. Finally, the earth’s heat melts these rocks, and volcanoes and hot spots return the carbon (including that contained in fossil fuels) to the atmosphere.

A key point about these natural processes is that they are roughly in balance. For instance, the rate of carbon release into the atmosphere, by respiration or volcanic activity, is matched by the rate of carbon absorption into plants and oceans. And this system is held in approximate equilibrium by various restoring forces. A sudden, small increase in the concentration of carbon in the atmosphere, absent other factors, leads to increased plant growth [2], more rain [1], and more direct absorption at the surfaces of oceans [3]. In other words, the oceans acidify to deplete this extra carbon.

But how much carbon can our oceans take up? When, if ever, would the climate then return to its pre-perturbed state? What would the earth look like in the interim, in the far term?

By unearthing and burning fossil fuels, in our cars, factories, and electrical plants, we are harnessing energy by shortcutting a process that naturally occurs on geological time scales. About 30 billion tons of carbon dioxide are now added to the atmosphere each year directly by the burning of fossil fuels [5], a rate 100 times greater than that of volcanic emissions. As a result, atmospheric carbon is at its highest level in the 800,000 years spanned by ice-core records [4].

Climate models:

We can use physical models to predict how the earth’s climate system might respond to different stimuli. To understand climate models, consider how a physical model can be used to predict the orbital motion of the planets. Given a set of parameters describing the system (the positions, masses, and velocities of the planets and sun), the physical laws that govern it (Newtonian physics or, more accurately, general relativity), and a set of simplifying assumptions (a planet’s interaction with another planet is insignificant compared to its interaction with the sun), what emerges is the “future” of these original parameters. Some, such as the masses of the bodies, won’t change, but others – the positions and velocities – will trace out a trajectory.
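The orbital example can be made concrete in a few lines of code. The sketch below is purely illustrative (not drawn from any actual climate or ephemeris code): a two-body Earth-Sun system integrated with the velocity-Verlet method, in units where GM_sun = 4π² (distances in AU, times in years), so a circular orbit at 1 AU has speed 2π AU/yr.

```python
import numpy as np

# Illustrative two-body integrator: Sun fixed at the origin, one planet.
# Units: AU, years, solar masses, so GM_sun = 4*pi^2.
GM = 4 * np.pi**2

def accel(r):
    """Gravitational acceleration toward the origin."""
    return -GM * r / np.linalg.norm(r)**3

def orbit(r0, v0, dt, steps):
    """Velocity-Verlet integration; returns the trajectory of positions."""
    r, v = np.array(r0, float), np.array(v0, float)
    a = accel(r)
    traj = [r.copy()]
    for _ in range(steps):
        v += 0.5 * dt * a          # half kick
        r += dt * v                # drift
        a = accel(r)
        v += 0.5 * dt * a          # half kick
        traj.append(r.copy())
    return np.array(traj)

# One year of a circular orbit at 1 AU: the masses stay fixed while the
# position and velocity trace out the trajectory, just as described above.
traj = orbit([1.0, 0.0], [0.0, 2 * np.pi], dt=1e-3, steps=1000)
```

After 1000 steps of dt = 0.001 yr the planet has completed one orbit and returned close to its starting point; the symplectic integrator keeps the orbital radius essentially constant.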

Similarly, climate models aim to plot a trajectory for earth.


Black body radiation of the sun and earth after traversing the atmosphere.

How well such a model performs depends crucially on the validity of its assumptions and the completeness of its knowledge – our knowledge. After all, a model knows only what we know. We know, for instance, that earth exchanges energy with outer space through radiation, or light. We know that carbon dioxide and methane strongly absorb and re-emit certain IR frequencies of light while remaining largely transparent to visible frequencies. When incoming radiation is visible light (sunlight) and outgoing radiation is IR, we expect an increase in greenhouse gases to create an imbalance favoring energy influx over outflux. And, as Dr. Scott Denning stated in an earlier post: “When you add heat to things, they change their temperature.”

Tiki the Penguin

A deeper question is where the extra energy will go. To that end, we model the earth’s land, oceans, ice sheets, and atmosphere, allow them to absorb energy as a whole and exchange heat with each other through various thermodynamic processes. We track their temperatures, their compositions, and their relative extent. In this way, we can get a rough idea of the global response to a given amount of energy imbalance, called “forcing”.
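As a toy version of this energy bookkeeping, here is the standard zero-dimensional energy-balance estimate. It is a textbook sketch, not output from any climate model; the effective emissivity standing in for the greenhouse effect is an illustrative round number.

```python
# Zero-dimensional energy balance: absorbed sunlight = emitted IR.
# Absorbed: S0 * (1 - albedo) / 4, averaged over the whole sphere.
# Emitted: emissivity * sigma * T^4 (Stefan-Boltzmann law).

S0 = 1361.0       # solar constant, W/m^2
ALBEDO = 0.30     # fraction of sunlight reflected
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temp(emissivity):
    """Temperature at which outgoing IR balances absorbed sunlight."""
    absorbed = S0 * (1 - ALBEDO) / 4
    return (absorbed / (emissivity * SIGMA)) ** 0.25

T_bare = equilibrium_temp(1.0)    # no greenhouse effect: about 255 K
T_green = equilibrium_temp(0.61)  # reduced effective emissivity: about 288 K
```

Lowering the effective emissivity (a crude stand-in for adding greenhouse gases) forces the balance point to a higher temperature; this shift of the balance point is the response to "forcing" that the real models track in far greater detail.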

But it gets more complicated.

The response itself may alter the amount of external forcing. The loss of ice sheets decreases the earth’s reflectivity, increasing the planet’s energy absorption [7]. The thawing of permafrost and prevalence of hotter air are likely to elevate, respectively, levels of methane [9,10] and water vapor [12]–two additional greenhouse gases–in the atmosphere. These are examples of known feedback mechanisms.

If the planet’s response to an energy flow imbalance is to increase this imbalance, the feedback is positive: climate change accelerates. On the other hand, negative feedback slows further climate change by re-balancing the earth’s energy flux. Changes in the carbon cycle, as in the ocean’s acidification by CO2 uptake, are one example of negative feedback [1]. So far, about half of our CO2 emissions have found their way into our oceans [13].
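The arithmetic of feedback can be sketched in one line. In the conventional linearized treatment, a no-feedback response dT0 is amplified or damped by a net feedback fraction f as dT = dT0 / (1 - f); the numbers below are illustrative, not measured values.

```python
def feedback_response(dT0, f):
    """Linearized feedback: net response given the no-feedback response
    dT0 and a net feedback fraction f (f < 1; f = 1 would mean runaway)."""
    if f >= 1:
        raise ValueError("f >= 1 implies runaway feedback")
    return dT0 / (1 - f)

dT0 = 1.2                                # illustrative no-feedback warming, K
amplified = feedback_response(dT0, 0.5)  # net positive feedback: 2.4 K
damped = feedback_response(dT0, -0.5)    # net negative feedback: 0.8 K
```

The same forcing can thus produce very different trajectories depending on which side wins the tug-of-war between feedbacks.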

It’s in this tug-of-war between positive and negative feedback mechanisms that the trajectory of earth’s future climate is drawn [8]. Ultimately, thermodynamics guarantees that the earth’s climate will find stability [11]. But we shouldn’t confuse a planet with a balanced energy budget with a necessarily healthy or habitable planet. Venus, for instance, has a balanced energy budget and a bulk composition very similar to earth’s. In other words, the question is not whether, but where.

The crucial role that climate models play in all this is that they help us catalogue and combine these separate pieces of knowledge. The more complete the information, the more accurate their predictions. Right now, improving the accuracy of climate models depends heavily upon getting a good grasp of climate feedback mechanisms. As we slowly step toward a more complete understanding of our climate system, it’s important we continue to receive new science in context, reminding ourselves that each new study is a welcome refinement of our knowledge, one that neither proves nor disproves global warming – it simply moves us forward.



Newly Discovered Super Earth, GJ 1214b, is Likely a Water World

In a collaborative effort between members of five institutions, scientists have discovered the most promising Earth-like exoplanet to date. The nearly Earth-sized planet, projected to lie within the habitable zone of its dim, red parent star and to be composed of 75% ice or liquid water enveloped in a substantial atmosphere, may be the first known water world. GJ 1214b, as it is called, is the product of an ongoing survey project poised to yield further groundbreaking results in the search for life.


A physical reality was born out of the stuff of imagination when, a little less than fifteen years ago, the first planet was discovered outside of our solar system – an “exoplanet”, so it was termed, short for “extrasolar planet”, suspended in deep space and gravitationally bound to a faraway star. It was the spark that touched off a modern explosion of progress in an age-old quest. The same impetus which had once set us afloat on unknown open seas has now trained our eyes on the sky. We’re looking for movement: a tiny flicker of a star which might give away the existence of another world like ours.

The biggest and closest planets were the first to be discovered. This is natural given our current methods of detection, which are largely indirect and depend on recognizing gravitational or visual aberrations due to the presence of these dark bodies. “Hot Jupiters”, they were called – huge, gaseous planets which orbit so near to their host stars that a complete revolution is made every few days. A fascinating find, but with surface temperatures of a few thousand degrees Celsius, these planets cannot support life as we understand it.

As we tuned our instruments, painstakingly improving their precision, more planets came into view. This next generation of exoplanets were termed “Super Earths”. They were smaller and denser, with radii and masses exceeding those of Earth by only a few fold. Their surfaces could be solid or liquid. They inched ever closer to the all-important habitable zone, the narrow region around a star where water, the solvent for all carbon-based lifeforms, can exist in its liquid form.

GJ 1214b belongs in this category of exoplanets. A few times larger than Earth, in the constellation Ophiuchus, it emerges as the most promising candidate yet in the ongoing search for life.

Earlier, using similar detection and analysis techniques, two such Super Earths had made headlines: Gliese 581d, which orbits a dim, red star, and CoRoT-7b, which belongs to a yellow star much like our Sun. The former is estimated to be gaseous and, despite its close proximity to its star, to reside inside a habitable zone, thanks to the low temperature of slow-burning red dwarf stars. The latter is the smallest exoplanet yet discovered, with a radius only 1.7 times that of Earth and a density which hints at a mostly iron composition. But the search for life on these two planets is hindered by several factors. Gliese 581d’s plane of orbit is inclined relative to our line of sight, which not only hinders precise determination of its mass and size but also precludes direct or indirect study of its composition and atmosphere. CoRoT-7b, on the other hand, due to its close proximity to its bright sun, is conjectured to be essentially a big ball of lava.

The new kid on the block, the planet GJ 1214b, with a mass 6.5 times that of Earth and a radius 2.7 times Earth’s, belongs to another dim, red star. Current data predicts an average density comparable to that of water (a little lighter, in fact), possibly indicative of a substantial gaseous atmosphere. On top of that, the planet, though tightly bound to its sun with an orbital period of only a day and a half, is potentially habitable. It is not inconceivable that on this planet, under a huge red sun, some or all of that water may be in its liquid form.

GJ 1214b was discovered by the transit method of exoplanet detection. This is made possible only by GJ 1214b’s special orbit, which takes it periodically in front of its parent star as seen from the Earth. Each time it crosses our line of sight to the star, the small planet occults a portion of the star’s light. This is perceived by the instruments as a slight dip in the star’s intensity. Measured over several orbits, these flutters, as shallow as 1% in depth, betray the presence of a dark orbiting body. Follow-up measurements that detect very slight wobbles in the position of the parent star, caused by the gravitational pull of the orbiting planet, then confirm its existence. The magnitude of the first effect is proportional to the planet’s cross-sectional area; the magnitude of the second, to its mass. Given the parent star’s estimated size and mass, the planet’s measurements can be computed to appropriate uncertainties.
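The geometry behind the transit method is simple enough to check by hand: the fractional dip in starlight equals the ratio of the planet’s disk area to the star’s, (Rp/Rs)². Using the planet’s published radius of 2.7 Earth radii and an approximate radius for its small red host star (about 0.21 solar radii, a figure assumed here rather than taken from this article), the expected depth comes out near the 1% level quoted above.

```python
# Transit depth = (planet radius / star radius)^2
R_SUN = 6.957e8     # meters
R_EARTH = 6.371e6   # meters

def transit_depth(r_planet, r_star):
    """Fraction of starlight blocked while the planet crosses the disk."""
    return (r_planet / r_star) ** 2

# GJ 1214b (2.7 R_Earth) against its red dwarf host, assuming ~0.21 R_sun
# for the star (an approximate literature value, used for illustration)
depth = transit_depth(2.7 * R_EARTH, 0.21 * R_SUN)   # roughly 0.014, ~1.4%
```

A dip this size against a small, dim star is far easier to measure than the same planet transiting a sun-like star, which is part of why surveys of red dwarfs are so productive.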

But that’s not all. The light that reaches our telescopes, having reflected off or passed through the gaseous envelope of GJ 1214b, can be analyzed to reveal the content of its atmosphere. Each element, when present on the planet, is responsible for emitting and absorbing a signature set of wavelengths of light. Is the air thick with carbon dioxide, like Venus? Full of nitrogen, like Earth? Mostly water? Or something else altogether?

This promising research continues. Even now, the MEarth Project is scanning the skies for other transiting planets around dim, red stars, as our most profound expectations are still waiting to be met. We push ever closer to the final generation of exoplanets: Earths.

An old piece: from 2010

Ring of Charges

A single charged particle moving in uniform circular motion undergoes centripetal acceleration and radiates light. The phenomenon is well-understood. A charged particle spit out in a jet by a supermassive black hole glows with this light. The Advanced Light Source at UC Berkeley exploits this light. And this light was missing from the atom, invalidating the Bohr theory of electron orbits.

But it’s also well-known that a conducting ring sustaining a constant current, such as a superconducting coil below the transition temperature, does not radiate, though it’s just a superposition of many of these single charges, arranged in a symmetric way. This latter case can be understood as a statics problem. If the ring is modeled as a continuous charge density, its configuration at any point in time cannot be distinguished from any other point in time, therefore, the fields also cannot change. Static fields do not radiate.

I wanted to understand how the transition happens, how, as we add more particles, the light turns off.


Using the equation for the electric field derived from the Liénard-Wiechert potential for a radiating point charge in the non-relativistic limit (Jackson E&M), I plotted the field at a considerable distance, (x, y) = (5, 10), from the sources. As I was only interested in the dynamic component of the field, the plotted values have their means subtracted.

Here’s the field for one charge:

Two charges, evenly spaced:

Three charges:

Four charges:

Five charges (here we’re starting to run into MATLAB’s floating point number precision limit):

The single charge case can be understood as a rotating dipole, confirmed by the radiation pattern. Add a second charge and we’ve canceled the net dipole moment, only a quadrupole remains. The magnitude of the field is reduced by some factor involving v/c. The next order in the multipole expansion is an octupole, with the magnitude of the field further reduced by the same factor. We see this geometric progression as we add more charges:


Another thing to notice is that a ring of N charges has N-fold rotational symmetry and the period of the emitted radiation is reduced by a factor of N in each case. As N approaches infinity, the ring approaches a continuous charge density and both the amplitude and period of the emitted radiation go to zero.
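The trend is easy to reproduce in a few lines. The sketch below is not the original MATLAB code; it is a simplified Python stand-in that sums the instantaneous Coulomb fields of N charges on the rotating ring at the same observation point, (5, 10). Ignoring retardation means it says nothing about true radiation, but the dynamic part of the field still shows both effects: the amplitude falls roughly geometrically with N, and the period shortens by a factor of N.

```python
import numpy as np

def field_amplitude(N, steps=2000, R=1.0, omega=1.0, obs=(5.0, 10.0)):
    """Peak-to-peak amplitude of the x-component of the summed Coulomb
    fields of N unit charges evenly spaced on a rotating ring of radius R,
    evaluated at a fixed observation point over one full rotation."""
    t = np.linspace(0.0, 2 * np.pi / omega, steps)
    obs = np.asarray(obs)
    Ex = np.zeros_like(t)
    for k in range(N):
        phase = omega * t + 2 * np.pi * k / N
        pos = np.stack([R * np.cos(phase), R * np.sin(phase)], axis=1)
        sep = obs - pos                    # charge-to-observer vectors
        d = np.linalg.norm(sep, axis=1)
        Ex += sep[:, 0] / d**3             # Coulomb field, q = 1
    return Ex.max() - Ex.min()             # dynamic component only

amps = [field_amplitude(N) for N in range(1, 6)]
# each added charge suppresses the varying part of the field by roughly
# another factor of (ring radius / observation distance)
```

In the N → ∞ limit the varying component vanishes entirely, recovering the statics argument above.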

MATLAB Virtual Optics Bench

We had a need to simulate beam propagation on our optics bench as a part of designing a new vortex trap, so I took to coding some of that up this weekend. Then, when I was most of the way through, I thought, why not build a UI and then other people could use it too?


The bundled-up EXE file [V2.0] is here; running it will extract the files to the current folder. But to then run the actual program you need the MATLAB Compiler Runtime (MCR), so here’s a version with it packaged in (caution: 300MB) if you don’t have that on your computer. If anyone wants the MATLAB code that went into it, here’s the link to that, also. The compiled code works on Windows 64-bit or 32-bit machines, and the pure MATLAB code is likely to run in any recent version of MATLAB.

This is just V1.0, and it’s very limited, because I only implemented the optical elements that we needed: lenses, axicons, wave plates, apertures, etc. Also, I’ve done almost no optimization of the code, so the calculations are always done on an N x N grid with edge length D (N and D are set by the user). This software can’t really handle microscope applications, since a 100x reduction in spot size requires resizing and resampling. Though I’ve added a resample feature, you have to be fairly sparing about using it: the interpolation scheme works poorly when phase oscillations are rapid, so you’ll sometimes see artifacts. When I make improvements to the code, and I will before I release it to the group, I’ll update here too.

Update 1: I made wavelength an input parameter. Also I added tools for zooming in and out on the beam. [V1.1]
Update 2: Added Setup panel for user-specified resolution and edge length. Fixed the lag on calculating vortex wave plate element. Fixed some other bugs. [V1.2]
Update 3: Made window resizeable and open up smaller. Added image resampling and resizing. [V2.0]


The idea is pretty simple: the center panel always displays the beam profile at the plane of reference. The plane of reference is automatically set by the last optical element you added plus whatever propagation distance you specified behind it. You can undo your last step with the undo button, or you can back the simulation all the way up to the initial beam by hitting start over. The history window displays all the manipulations done to the beam; it’s there so you can visualize the setup you’ve made so far.

  1. Enter the Setup information: Wavelength, Range, and Resolution. Range defines the box size for analysis and is the length of one of the sides of the square; the default is 6mm. You want to choose a range that is large enough to accommodate any intermediate spot size in your optical path. Resolution is the number of pixels on a side; the default is 2048. Larger resolutions take up more RAM and take longer to compute.
  2. To initialize the beam, create a Gaussian beam. Gaussian beams are defined by 2 parameters. Here, I’m choosing the parameters to be beam waist (w0) and distance to waist (z0). I’m assuming you’re working with a laser here, so the beam waist is ~1mm, and the distance to waist is the distance from your reference plane to the narrowest point of the beam (the focus). E.g., say you want to make a (nearly) collimated Gaussian beam with a radius of 1.5mm: enter 1.5 and 0. Say you want to make a converging beam: choose a positive waist distance, so you can try .4 and 500. Diverging beams have negative waist distance.
  3. Choose an optical element or propagate the beam forward to the next reference plane.
    • Lens: focal length of lens (fl), distance to propagate after lens (z).
    • Aperture: usually you want to add an aperture if you’re not certain that your lens or objective can catch all the rays from the beam, this way diffraction from an aperture is taken into account. Input: radius of aperture (r0), and distance to propagate after aperture (z).
    • Axicon: creates a Bessel beam. Input: opening angle (g), distance to propagate after axicon (z).
    • Wave Plate: creates optical vortex. Input: topological charge (l), distance to propagate after waveplate (z).
    • Propagation: this just propagates the beam by z. In other words, it only moves the reference plane forward or backward; z can be positive or negative.
  4. Add more optical elements, apply “undo” liberally, examine the beam profile by moving reference frame forward and back. When you are happy with the beam you’ve created, copy down the elements you’ve added in the History panel.
  5. Any time you change the fields in the Setup panel, you want to hit the Start Over button to avoid errors.
  6. If your beam is becoming too large for the program’s field of view, resample and increase the range. Decrease the range if there is a lot of black space and you are losing detail. You can also add resolution, but most processors can’t handle much more than 4096 points on a side. Caution: resampling introduces artifacts, only resample if absolutely necessary and if the beam is not too divergent.
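For step 2, the bookkeeping that relates the two input parameters to the beam you actually see is the standard Gaussian-beam formula w(z) = w0·sqrt(1 + (z/zR)²), with Rayleigh range zR = π·w0²/λ. A quick sketch (the 633nm HeNe wavelength is my assumption here, not a program default):

```python
import math

def beam_radius(w0, z, lam):
    """Gaussian beam radius a distance z from a waist of size w0."""
    zR = math.pi * w0**2 / lam          # Rayleigh range
    return w0 * math.sqrt(1 + (z / zR)**2)

lam = 633e-9                            # assumed HeNe wavelength, m
# the converging-beam example from step 2: w0 = 0.4 mm, 500 mm to the waist
w_ref = beam_radius(0.4e-3, 0.5, lam)   # beam radius at the reference plane
```

At the reference plane this beam is a bit under 0.5 mm in radius and converging toward its 0.4 mm waist, which is why the range you pick in step 1 must cover the largest spot anywhere along the path.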


This is an example of creating a Bessel beam (using an axicon with g = 0.5 degrees) and then reducing its size by half with a pair of lenses.


This is what it looks like when modeled by my application (V1.0).


Math notes:
Paraxial approximation throughout. The beam is propagated using the Fourier transform of the Fresnel kernel. If your beam exceeds the size of the computation region (that is, it does not go to zero at the boundaries), weird FT artifacts with periodic repeating structure will start showing up. If you see that, reduce your beam size so it fits within the calculated area. There’s no ray tracing in any of this. I don’t really know how to do ray tracing…
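For anyone curious what “propagated using the Fourier transform of the Fresnel kernel” amounts to, here is a minimal stand-alone sketch, written in Python rather than MATLAB; the grid size, wavelength, and distances are arbitrary illustrative values, and this is not the application’s actual code.

```python
import numpy as np

def propagate(field, D, lam, z):
    """Paraxial propagation of a complex field sampled on an N x N grid of
    physical edge length D: multiply its Fourier transform by the Fresnel
    transfer function exp(-i*pi*lam*z*(fx^2 + fy^2)) and transform back."""
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=D / N)             # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * lam * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# a collimated Gaussian beam on a 512-point, 6 mm grid
N, D, lam = 512, 6e-3, 633e-9
x = (np.arange(N) - N // 2) * (D / N)
X, Y = np.meshgrid(x, x)
w0 = 1e-3
beam = np.exp(-(X**2 + Y**2) / w0**2)

out = propagate(beam, D, lam, 0.5)              # propagate 0.5 m
```

Because the transfer function has unit magnitude, total power is conserved exactly; the wrap-around artifacts described above appear when the propagated field reaches the edges of the grid.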

How Common are the Magellanic Clouds?

This story has been a long time coming. This is not the final version, but I like it enough and I drank too much coffee to be able to sleep tonight so I’m gonna update my blog with it. Then, I’ll play some Dota. And then, maybe the sun will come up.

How Common are the Magellanic Clouds?

Deep in the southern sky are two hazy blobs of light. Early explorers adrift on dark seas saw them night after night and called them clouds. Though we now know that they’re galaxies, satellites of the Milky Way, the name stuck. The Magellanic Clouds, the larger of which is only one hundredth the size of our galaxy, are among the farthest objects the naked eye can see.

Now they’re clues to a great cosmological mystery: just what is our universe made of? The best guess, following decades of data, is a mixture of ordinary matter, cold dark matter, radiation, and dark energy. So to test this, scientists, with the aid of supercomputers, made universes.

The procedure is straightforward enough: Load these ingredients into an early universe soup, in their correct proportions. Give them a set of rules to obey, namely our known physical laws. Then, let it go.

For the most part, what they get is what we see, a universe with stars and gas and galaxies and clusters of galaxies. But some things weren’t quite right. For one, the Magellanic Clouds weren’t there.

Or rather, they usually just weren’t there. Of all the Milky Ways that formed in a typical simulation, only a few percent had satellites as large and as close as these. “So what?” one might ask. It sounds like a minor thing. But it is by minor things, historically, that theories have come undone. If the models had made no prediction about large, close satellites, that would be one thing, but they did. They said, most likely, there should be none, and we had two.

Comb the sky, was one scientist’s idea. Risa Wechsler leads an astrophysics research group at Stanford University. She and her team saw an opportunity to test the standard model of cosmology; all they needed was more Milky Ways. And they would find them – 22,000 galaxies which matched the brightness and size of our own – among the cataloged objects of the Sloan Digital Sky Survey.

Atop a mountain in New Mexico, the 2.5 meter telescope dedicated to the Sloan Survey has been sweeping the sky for the past decade. Its massive catalog of deep space objects has nearly a billion entries, but most objects, especially dim ones, are without redshift information.

Redshift is what astronomers use to calculate distance. As space expands, light, a wave – a series of peaks and troughs in space – stretches out with it and becomes redder. The reddest light, therefore, has traveled the longest and so comes from the farthest galaxies. Astronomers, by reconstructing an object’s original colors, can calculate its redshift, as long as it gives off enough light to work with. But most objects don’t.

Without distance data the cosmos is a mat on which stars and galaxies are pinned one on top of another. How do you tell a satellite from a background galaxy a billion light years away?

The answer from Wechsler’s team: you don’t, not for a single object at least. Which galaxy has which satellites, that’s information they didn’t need. Their question was “how common are the Magellanic Clouds?”, and its answer was there, somewhere in the data, they just had to dig.

Around each Milky Way they drew a small circle. Instead of sizing up the objects in it one by one, they counted everything–foreground, background, satellites and all. They compared this number to what they counted in a random section of the sky of the same size. The second number was, on average, slightly smaller than the first.
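The counting trick is easy to simulate. The toy model below is entirely made up for illustration (the background level and its spread are arbitrary); it only demonstrates how subtracting random-field counts from host-centered counts recovers the average number of satellites, here about 0.17 per host, without ever identifying which object is a satellite.

```python
import random

random.seed(0)

def aperture_count(n_satellites):
    """Objects seen in one aperture: satellites plus a noisy background."""
    return n_satellites + random.randint(30, 50)

# toy hosts: ~80% with no large satellite, 11% with one, 2 for 3%
# (roughly the reported fractions), repeated many times for statistics
hosts = ([0] * 86 + [1] * 11 + [2] * 3) * 1000
host_counts = [aperture_count(n) for n in hosts]
blank_counts = [aperture_count(0) for _ in range(len(hosts))]

excess = sum(host_counts) / len(hosts) - sum(blank_counts) / len(hosts)
# `excess` should land near 0.17, the true mean satellites per host
```

The satellite signal is tiny compared to the background in any one aperture; it only emerges in the average over many thousands of hosts, which is why the 22,000-galaxy sample mattered.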

The difference, of course, was the satellites, and the difference was very small. Eighty percent of Milky Ways had no large satellite at all. Eleven percent had one, they found, and only three percent, like us, had two. Some scientists, based on this result and others, have started to question whether the Magellanic Clouds are even bound to us, or if, in some cosmic close encounter of sorts, they’re merely passing through.

Wechsler’s team is happy to leave that to the astronomers. As for them, what their simulations predicted exactly matched what they saw. A victory for the current cosmological model, no doubt, but a message closer to home as well: we are an oddity after all.

Here’s the final version. Thanks John for the edits!