Category Archives: Physics

Pinholes

The most hyped solar eclipse of the century passed over the U.S. mainland today. I viewed it through a homemade pinhole camera. Pinhole cameras, made of a hole, a length of empty space, and an imaging surface, have only two adjustable parameters: the diameter of the hole (D) and the distance between the hole and the imaging surface (f), which acts as a focal length of sorts.  How does tuning these two parameters affect the image?

Image Size

The angle subtended by the sun is about 0.5 degrees in the sky.  It’s simple to see that the size of the image of the sun (L) does not depend on the diameter of the pinhole, but rather only on the distance f in the image below:


Figure 1. Pinhole geometry

Since L ≈ f·tan(0.5°) ≈ 0.0087·f, a focal length of 1 m gives L of about 9 mm.

Angular Resolution

A more interesting question is how the image resolution might be affected by tuning these parameters.  It might seem that a smaller pinhole should create a sharper image (Figure 2), but there is a limit.


Figure 2. Ray optics picture.  A larger aperture yields a blurrier image, since it allows rays from different parts of the object to be mapped onto the same spot on the image.

The smallest spot that can be formed on the imaging plane is ultimately limited by diffraction.  If the aperture is made too small, diffraction rapidly blurs the resulting image.  That effect is illustrated below.


Figure 3. Diffraction introduces uncertainty into the measurement of the ray’s initial direction.

The resolution of our system is determined by the more dominant of these two effects, so,

θ ≈ max( D/f , 1.22 λ/D )

which has the following dependence on D and f:

 


Figure 4a.  Making the hole diameter too small is much more detrimental to image quality than making it too large.


Figure 4b.  Resolution improves with focal length increase.

A contour map showing the resolution as a function of D and f gives a clearer picture:


Figure 5. Contour map of resolution as a function of D and f.  Blue is better angular discrimination.  Plotted as log(theta in degrees).

Based on the calculations above, the pinhole should have a diameter of around one millimeter, and the focal length should be made as large as allowable.
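As a sanity check (my own sketch, not part of the original post), the blur expression above can be evaluated in a few lines of MATLAB; the 550 nm wavelength is an assumed mid-visible value, and f = 1 m matches the box used below.

% Angular blur of a pinhole: geometric term D/f vs. diffraction term 1.22*lambda/D.
lambda = 550e-9;                      % assumed mid-visible wavelength (m)
f      = 1.0;                         % focal length (m)
D      = logspace(-4.5, -2, 500);     % pinhole diameters, ~30 um to 10 mm

theta = max(D/f, 1.22*lambda./D);     % dominant of the two blur effects (rad)
[theta_best, idx] = min(theta);
fprintf('Best D ~ %.2f mm, blur ~ %.3f deg\n', D(idx)*1e3, rad2deg(theta_best));
% prints a best diameter of ~0.8 mm, consistent with the millimeter-scale hole above

loglog(D*1e3, rad2deg(theta));
xlabel('pinhole diameter D (mm)'); ylabel('angular blur (deg)');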

Eclipse Test

I placed 3 pinholes, approximately 0.8 mm, 0.2 mm, and 0.4 mm in diameter, in that order, in a sheet of aluminum foil.  I then light-sealed one of the moving boxes (f ≈ 1 m) left over from our recent move.  Two cut-outs for the eyes allowed me to look inside (and take photos).

 

The different light levels for the full sun images can be seen in the right panel.

As the eclipse started I whipped out a slightly better camera to document its progress.


Figure 6.  Mirror image of the sun beginning its eclipse.

Zoomed-in series of images of the eclipse’s progress in Boston, MA.  The maximum coverage we had was around 65%.

 

 

There are several design improvements I would make on the next iteration.  First, I underestimated the importance of being able to get very close to the projected image.  I would need to place the eye holes closer or baffle in a different way so that the viewer can approach the imaging surface.  Relatedly, since pinhole cameras suffer from minimal tilt distortion I would tilt the focal plane so that the image can be viewed more head-on.

See you in 7 years.

 


Climate Models and the Carbon Cycle

The carbon cycle:

To tell the story from the beginning, consider the carbon atom – 6 protons, 4 valence electrons, 4th most abundant element in the universe – basis of all life on earth. It’s locked up in rocks and plants, dissolved into our oceans, and mixed up with other gases in our atmosphere. As rocks it shows up as coal, limestone, or graphite for instance. In the rivers and oceans it’s mainly carbonic acid. In the atmosphere, carbon dioxide and methane gas.

While on the whole, restorative chemical processes [1] keep the relative distribution of carbon among these reservoirs fairly stable, each individual atom of carbon is in motion, traveling between the various phases, from gas to liquid to solid, between atmosphere and oceans and rocks and living matter. This is what’s known as the carbon cycle.


A carbon cycle diagram, drawn by Courtney Kesinger.

The carbon cycle has various loops, none of which is completely closed. For instance, in the fast carbon cycle, which is traversed on the time scale of a human life, carbon is taken up from the atmosphere by plants through photosynthesis, stored as sugars, then released back into the atmosphere when it is burned for energy, either by the plant itself, or by something that has consumed the plant, such as an animal, a microbe, or a fire.

But this loop is not closed. Dead plant matter that is buried before it has time to decompose does not release its carbon back into the atmosphere as part of this fast cycle. Instead, it becomes coal, or oil, or natural gas, and is locked up for millions of years beneath the earth’s surface.

Before the industrial revolution, carbon stored in fossil fuels found its way into the atmosphere mainly through volcanic eruptions, as part of the slow carbon cycle – so called because a round trip takes roughly 100 million years. In this leisurely cycle, rain dissolves atmospheric carbon dioxide, forming a weak acid – carbonic acid – which it then carries into lakes, rivers, and oceans. The dissolved carbonate ions are collected undersea by living organisms and built into shells. Carbon, now in solid form, settles to the sea floor when these organisms die, and builds up sedimentary rock layer by layer. Finally, the earth’s heat melts these rocks, and volcanoes and hot spots return the carbon (including that contained in fossil fuels) to the atmosphere.

A key point about these natural processes is that they are roughly in balance. For instance, the rate of carbon release into the atmosphere, by respiration or volcanic activity, is matched by the rate of carbon absorption into plants and oceans. And this system is held in approximate equilibrium by various restoring forces. A sudden, small increase in the concentration of carbon in the atmosphere, absent other factors, leads to increased plant growth [2], more rain [1], and more direct absorption at the surfaces of oceans [3]. In other words, the oceans acidify to deplete this extra carbon.

But how much carbon can our oceans take up? When, if ever, would the climate then return to its pre-perturbed state? What would the earth look like in the interim, in the far term?

By unearthing and burning fossil fuels, in our cars, factories, and electrical plants, we are harnessing energy by shortcutting a process which naturally occurs on geological time scales. About 30 billion tons of carbon dioxide are now added to the atmosphere each year directly by the burning of fossil fuels [5] – a rate roughly 100 times greater than that of volcanic emissions. As a result, atmospheric carbon dioxide is at its highest level in the 800,000 years covered by ice-core records [4].

 
Climate models:

We can use physical models to predict how the earth’s climate system might respond to different stimuli. To understand climate models, consider how a physical model can be used to predict the orbital motion of the planets. Given a set of parameters which describe the system (the positions, masses, and velocities of the planets and sun), the physical laws which govern the system (Newtonian physics or, more accurately, General Relativity), and a set of simplifying assumptions (a planet’s interaction with another planet is insignificant compared to its interaction with the sun), what emerges is the “future” of these original parameters. Some, such as the masses of the bodies, won’t change, but others – the positions and velocities – will trace out a trajectory.

Similarly, climate models aim to plot a trajectory for earth.
 


Black body radiation of the sun and earth after traversing the atmosphere.


How well such a model performs depends crucially on the validity of its assumptions and the completeness of its knowledge – our knowledge. After all, a model knows only what we know. We know, for instance, that earth exchanges energy with outer space through radiation, or light. We know that carbon dioxide and methane strongly absorb and re-emit certain IR frequencies of light while remaining largely transparent to visible frequencies. Since incoming radiation is mostly visible light (sunlight) and outgoing radiation is IR, we expect an increase in greenhouse gases to create an imbalance favoring energy influx over outflux. And, as Dr. Scott Denning stated in an earlier post: “When you add heat to things, they change their temperature.”
 

Tiki the Penguin

A deeper question is where the extra energy will go. To that end, we model the earth’s land, oceans, ice sheets, and atmosphere, allow them to absorb energy as a whole and exchange heat with each other through various thermodynamic processes. We track their temperatures, their compositions, and their relative extent. In this way, we can get a rough idea of the global response to a given amount of energy imbalance, called “forcing”.
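As a toy illustration of this bookkeeping (mine, not taken from any actual climate model), here is a zero-dimensional energy-balance sketch in MATLAB: absorbed sunlight is balanced against Stefan–Boltzmann emission, and an imposed forcing shifts the equilibrium temperature. The solar constant, albedo, and effective emissivity are assumed round numbers.

% Zero-dimensional energy balance: absorbed solar power = emitted IR power.
S0      = 1361;      % solar constant (W/m^2), assumed
albedo  = 0.30;      % planetary reflectivity, assumed
eps_eff = 0.61;      % effective emissivity, a crude stand-in for the greenhouse effect
sigma   = 5.67e-8;   % Stefan-Boltzmann constant (W m^-2 K^-4)

absorbed = S0*(1 - albedo)/4;                         % incoming power averaged over the sphere
T0 = (absorbed/(eps_eff*sigma))^(1/4);                % equilibrium temperature, ~288 K

forcing = 3.7;                                        % often-quoted CO2-doubling forcing (W/m^2)
T1 = ((absorbed + forcing)/(eps_eff*sigma))^(1/4);    % new equilibrium with the forcing applied

fprintf('T = %.1f K; with %.1f W/m^2 forcing: %.1f K (+%.2f K, no feedbacks)\n', ...
        T0, forcing, T1, T1 - T0);

This single-number version obviously ignores all of the feedbacks discussed below.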

But it gets more complicated.

The response itself may alter the amount of external forcing. The loss of ice sheets decreases the earth’s reflectivity, increasing the planet’s energy absorption [7]. The thawing of permafrost and prevalence of hotter air are likely to elevate, respectively, levels of methane [9,10] and water vapor [12]–two additional greenhouse gases–in the atmosphere. These are examples of known feedback mechanisms.

If the planet’s response to an energy flow imbalance is to increase this imbalance, the feedback is positive: climate change accelerates. On the other hand, negative feedback slows further climate change by re-balancing the earth’s energy flux. Changes in the carbon cycle, such as the ocean’s acidification through CO2 uptake, are one example of negative feedback [1]. So far, about half of our CO2 emissions have found their way into our oceans [13].
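A standard way to make this tug-of-war quantitative (my addition, not from the original post) is the linearized feedback relation

ΔT ≈ ΔT0 / (1 − f)

where ΔT0 is the warming a forcing would produce with no feedbacks and f is the sum of the individual feedback factors: f > 0 amplifies the no-feedback response, f < 0 damps it, and f approaching 1 marks a runaway.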

It’s in this tug-of-war between positive and negative feedback mechanisms that the trajectory of earth’s future climate is drawn [8]. Ultimately, thermodynamics guarantees that the earth’s climate will find stability [11]. But we shouldn’t confuse a planet with a balanced energy budget with a necessarily healthy or habitable planet. Venus, for instance, has a balanced energy budget and a bulk size and composition very similar to Earth’s. In other words, the question is not whether, but where.

The crucial role that climate models play in all this is that they help us catalogue and combine these separate pieces of knowledge. The more complete the information, the more accurate the predictions. Right now, improving the future accuracy of climate models depends heavily on getting a good grasp of climate feedback mechanisms. As we slowly step toward a more complete understanding of our climate system, it’s important we continue to receive new science in context, reminding ourselves that each new study is a welcome refinement of our knowledge – one that neither proves nor disproves global warming, but simply moves us forward.

 


[1] http://earthobservatory.nasa.gov/Features/CarbonCycle/
[2] http://www.climatecentral.org/news/study-finds-plant-growth-surges-as-co2-levels-rise-16094
[3] http://www.sciencemag.org/content/290/5490/291
[4] http://cdiac.ornl.gov/trends/co2/ice_core_co2.html
[5] http://www.eia.gov/totalenergy/data/annual/index.cfm
[6] http://www.skepticalscience.com/graphics.php?g=12
[7] http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442%282000%29013%3C0617%3AASIVIT%3E2.0.CO%3B2
[8] http://occri.net/climate-science/climate-modeling/sources-of-uncertainty
[9] http://www.sciencedaily.com/releases/2014/03/140327111724.htm
[10] http://www.sciencemag.org/content/312/5780/1612
[11] http://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law#Thermodynamic_derivation
[12] http://geotest.tamu.edu/userfiles/216/dessler09.pdf
[13] http://www.sciencemag.org/content/305/5682/367.short

Newly Discovered Super Earth, GJ 1214b, is Likely a Water World

In a collaborative effort between members of five institutions, scientists have discovered the most promising Earth-like exoplanet to date. The nearly Earth-sized planet, projected to lie within the habitable zone of its dim, red parent star and to be composed of roughly 75% ice or liquid water enveloped in a substantial atmosphere, may be the first known water world. GJ 1214b, as it is called, is the product of an ongoing survey project poised to yield further groundbreaking results in the search for life.

————-

A physical reality was born out of the stuff of imagination when, a little less than fifteen years ago, the first planet was discovered outside of our solar system—an “exoplanet”, as it was termed, short for “extrasolar planet”, suspended in deep space and gravitationally bound to a faraway star. It was the spark that touched off a modern explosion of progress in an age-old quest. The same impetus which had once set us afloat on unknown open seas has now trained our eyes on the sky. We’re looking for movement: a tiny flicker of a star which might give away the existence of another world like ours.

The biggest and closest planets were the first to be discovered. This is natural given our current methods of detection, largely indirect, which depend on recognizing gravitational or visual aberrations due to the presence of these dark bodies. “Hot Jupiters”, they were called – huge, gaseous planets which orbit so near to their host stars that they complete a revolution every few days. A fascinating find, but with surface temperatures well over a thousand degrees Celsius, these planets cannot support life as we understand it.

As we tuned our instruments, painstakingly improving their precision, more planets came into view. This next generation of exoplanets was termed “Super Earths”. They were smaller and denser, with radii and masses exceeding Earth’s by only a few fold. Some had solid or fluid surfaces. They inched ever closer to the all-important habitable zone, the narrow region around a star where water, the solvent for all carbon-based lifeforms, can exist in its liquid form.

GJ 1214b belongs in this category of exoplanets. Slightly larger than Earth, in the constellation Ophiuchus, it emerges as the most promising candidate yet in the ongoing search for life.

Earlier, using similar detection and analysis techniques, two such Super Earths had made headlines: Gliese 581D, which orbits a dim red star, and Corot-7b, which belongs to a yellow star much like our Sun. The first is estimated to be gaseous and to reside, despite its close proximity, inside a habitable zone, thanks to the low temperature of slow-burning red dwarf stars. The latter is the smallest exoplanet yet discovered, with a radius only 1.7 times that of Earth and a density which hints at a mostly iron composition. But the search for life on these two planets is hindered by several factors. Gliese 581D’s plane of orbit is inclined relative to our line of sight, which not only hinders precise determinations of its mass and size but also precludes direct or indirect study of its composition and atmosphere. Corot-7b, on the other hand, owing to its close proximity to its bright sun, is conjectured to be essentially a big ball of lava.

The new kid on the block, the planet GJ 1214b, with a mass 6.5 times the mass of Earth and a radius 2.7 times Earth’s radius, belongs to another dim, red star. Current data predicts an average density comparable to that of water, a little lighter, possibly indicative of a substantial gaseous atmosphere. On top of that, the planet, though tightly bound to its sun with an orbital period of only a day and a half, is potentially habitable. It is not inconceivable that on this planet under a huge red sun, some or all of that water may be in its liquid form.

GJ 1214b was discovered by the transit method of exoplanet detection. This is made possible only by GJ 1214b’s special orbit, which takes it periodically in front of its parent star as seen from the Earth. Each time it crosses our line of sight to the star, the small planet occults a portion of the star’s light. This is perceived by the instruments as a slight dip in the star’s intensity. Measured over several orbits, these flutters, roughly a percent deep, point to the presence of a dark orbiting body. Follow-up measurements that detect very slight wobbles in the position of the parent star, due to the gravitational effects of the orbiting planet, then confirm its existence. The magnitude of the first effect scales with the planet’s cross-sectional area (the square of its radius); that of the second, with its mass. Given the parent star’s estimated size and mass, the planet’s measurements can be computed to appropriate uncertainties.
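For a rough feel of the numbers, here is a short MATLAB estimate of my own; the host star’s radius, taken as 0.21 solar radii, is an assumed value not quoted in the text.

% Transit depth is roughly (planet radius / star radius)^2.
R_earth = 6.371e6;           % m
R_sun   = 6.957e8;           % m
Rp = 2.7*R_earth;            % GJ 1214b radius, from the text
Rs = 0.21*R_sun;             % assumed radius of the M-dwarf host star

depth = (Rp/Rs)^2;           % fraction of starlight blocked mid-transit
fprintf('Transit depth ~ %.2f%% of the star''s light\n', 100*depth);
% ~1.4%, consistent with the roughly percent-level dips described above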

But that’s not all. The light that reaches our telescopes, having reflected off or passed through the gaseous envelope of GJ 1214b, can be analyzed to reveal the content of its atmosphere. Each element, when present on the planet, is responsible for emitting and absorbing a signature set of wavelengths of light. Is the air thick with carbon dioxide, like Venus? Full of nitrogen, like Earth? Mostly water? Or something else altogether?

This promising research continues on. Even now, the MEarth Project is scanning the skies for other transiting planets around dim, red stars, as our most profound expectations are still waiting to be met. We push ever closer to the final generation of exoplanets: Earths.


An old piece: from 2010

Ring of Charges

A single charged particle moving in uniform circular motion undergoes centripetal acceleration and radiates light. The phenomenon is well understood. A charged particle spit out in a jet by a supermassive black hole glows with this light. The Advanced Light Source at UC Berkeley exploits this light. And the absence of this light from the atom was one of the failures of the classical picture of electron orbits – the problem Bohr’s postulates were invented to sidestep.

But it’s also well-known that a conducting ring sustaining a constant current, such as a superconducting coil below the transition temperature, does not radiate, though it’s just a superposition of many of these single charges, arranged in a symmetric way. This latter case can be understood as a statics problem. If the ring is modeled as a continuous charge density, its configuration at any point in time cannot be distinguished from any other point in time, therefore, the fields also cannot change. Static fields do not radiate.

I wanted to understand how the transition happens, how, as we add more particles, the light turns off.

E(x, t) = q/(4πε₀) [ n̂/R² + n̂ × (n̂ × a)/(c²R) ], evaluated at the retarded time t − R/c

Using the equation for the electric field derived from the Lienard-Wiechert potential for a radiating point charge in the non-relativistic limit (Jackson E&M), I plotted the field at the point (x, y) = (5, 10), a considerable distance from the sources. As I was only interested in the dynamic component of the field, the plotted values have their means subtracted.
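The original script isn’t reproduced here, but a minimal version of the same calculation looks roughly like the sketch below: the ring radius, angular frequency, and value of c are illustrative choices, only the far-field (acceleration) term is kept, and q/(4πε₀) is set to 1.

% Field at a fixed observation point from N charges in uniform circular motion,
% using the non-relativistic Lienard-Wiechert acceleration term at the retarded time.
N     = 3;             % number of charges on the ring
a     = 1;             % ring radius (arbitrary units)
omega = 1;             % angular frequency
c     = 10;            % "speed of light", so v/c = a*omega/c = 0.1
robs  = [5; 10; 0];    % observation point, as in the post
t     = linspace(0, 4*pi/omega, 2000);
E     = zeros(3, numel(t));

for k = 1:N
    phi0 = 2*pi*(k-1)/N;                               % charges equally spaced on the ring
    pos  = @(tau) [a*cos(omega*tau + phi0); a*sin(omega*tau + phi0); 0];
    acc  = @(tau) -omega^2*pos(tau);                    % centripetal acceleration
    for j = 1:numel(t)
        tr = fzero(@(s) s + norm(robs - pos(s))/c - t(j), t(j) - norm(robs)/c);  % retarded time
        R  = robs - pos(tr);
        n  = R/norm(R);
        E(:, j) = E(:, j) + cross(n, cross(n, acc(tr)))/(c^2*norm(R));           % radiation term
    end
end

E = E - mean(E, 2);                                     % keep only the dynamic part (R2016b+)
plot(t, E(1, :), t, E(2, :)); xlabel('t'); ylabel('E_x, E_y (arb. units)');

Sweeping N in this sketch reproduces the qualitative behavior described next: the amplitude drops sharply and the period shortens as charges are added.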

Here’s the field for one charge:


Two charges, evenly spaced:


Three charges:


Four charges:


Five charges (here we’re starting to run into MATLAB’s floating point number precision limit):


The single-charge case can be understood as a rotating dipole, confirmed by the radiation pattern. Add a second charge and we’ve canceled the net dipole moment; only a quadrupole remains. The magnitude of the field is reduced by some factor involving v/c. The next order in the multipole expansion is the octupole, with the magnitude of the field further reduced by the same factor. We see this geometric progression as we add more charges:


Another thing to notice is that a ring of N charges has N-fold rotational symmetry and the period of the emitted radiation is reduced by a factor of N in each case. As N approaches infinity, the ring approaches a continuous charge density and both the amplitude and period of the emitted radiation go to zero.

MATLAB Virtual Optics Bench

We had a need to simulate beam propagation on our optics bench as a part of designing a new vortex trap, so I took to coding some of that up this weekend. Then, when I was most of the way through, I thought, why not build a UI and then other people could use it too?


The bundled-up EXE file [V2.0] is here; running it will extract the files to the current folder. To run the actual program you also need the MATLAB Compiler Runtime (MCR), so here’s a version with it packaged in (caution: 300 MB) if you don’t have that on your computer. If anyone wants the MATLAB code that went into it, here’s the link to that as well. The compiled code works on Windows 64-bit or 32-bit machines, and the pure MATLAB code should run in any recent version of MATLAB.

This is just V1.0, and it’s fairly limited because I only implemented the optical elements we needed – lenses, axicons, wave plates, apertures, etc. I’ve also done almost no optimization of the code, so the calculations are always done on an N x N grid with edge length D (N and D are set by the user). The software can’t really handle microscope applications, since a 100x reduction in spot size requires resizing and resampling; although I’ve added a resample feature, you have to be fairly sparing with it, because the interpolation scheme works poorly when phase oscillations are rapid, so you’ll sometimes see artifacts. When I make improvements to the code, and I will before I release it to the group, I’ll update here too.

Update 1: I made wavelength an input parameter. Also I added tools for zooming in and out on the beam. [V1.1]
Update 2: Added Setup panel for user-specified resolution and edge length. Fixed the lag on calculating vortex wave plate element. Fixed some other bugs. [V1.2]
Update 3: Made window resizeable and open up smaller. Added image resampling and resizing. [V2.0]


Instructions:

The idea is pretty simple: the center panel always displays the beam profile at the plane of reference. The plane of reference is automatically set by the last optical element you added plus whatever propagation distance you specified behind it. You can undo your last step with the Undo button, or you can back the simulation all the way up to the initial beam by hitting Start Over. The History window displays all the manipulations done to the beam; it’s there so you can visualize the setup you’ve made so far.

  1. Enter the Setup information: Wavelength, Range, and Resolution. Range defines the box size for the analysis and is the length of one side of the square; the default is 6 mm. Choose a range large enough to accommodate any intermediate spot size in your optical path. Resolution is the number of pixels on a side; the default is 2048. Larger resolutions take up more RAM and take longer to compute.
  2. To initiate the beam, create a Gaussian beam. Gaussian beams are defined by two parameters; here I’m choosing them to be the beam waist (w0) and the distance to the waist (z0). I’m assuming you’re working with a laser, so the beam waist is ~1 mm, and the distance to the waist is the distance from your reference plane to the narrowest point of the beam (the focus). E.g., to make a (nearly) collimated Gaussian beam with a radius of 1.5 mm, enter 1.5 and 0. To make a converging beam, choose a positive waist distance, e.g. 0.4 and 500. Diverging beams have a negative waist distance.
  3. Choose an optical element or propagate the beam forward to the next reference plane.
    • Lens: focal length of lens (fl), distance to propagate after lens (z).
    • Aperture: usually you want to add an aperture if you’re not certain that your lens or objective can catch all the rays from the beam; this way diffraction from the aperture is taken into account. Input: radius of aperture (r0), and distance to propagate after the aperture (z).
    • Axicon: creates a Bessel beam. Input: opening angle (g), distance to propagate after the axicon (z).
    • Wave Plate: creates optical vortex. Input: topological charge (l), distance to propagate after waveplate (z).
    • Propagation: this just propagates the beam by z; in other words, it only moves the reference plane forward or backward. z can be positive or negative.
  4. Add more optical elements, apply “undo” liberally, examine the beam profile by moving reference frame forward and back. When you are happy with the beam you’ve created, copy down the elements you’ve added in the History panel.
  5. Any time you change the fields in the Setup panel, you want to hit the Start Over button to avoid errors.
  6. If your beam is becoming too large for the program’s field of view, resample and increase the range. Decrease the range if there is a lot of black space and you are losing detail. You can also add resolution, but most processors can’t handle much more than 4096 points on a side. Caution: resampling introduces artifacts; only resample if absolutely necessary and if the beam is not too divergent.

Example:

This is an example of creating a Bessel beam (using an axicon with g = 0.5 degrees) and then reducing its size by half with a pair of lenses.


This is what it looks like when modeled by my application (V1.0).



Math notes:
The paraxial approximation is used throughout. The beam is propagated using the Fourier transform of the Fresnel kernel. If your beam exceeds the size of the computation region (that is, it does not go to zero at the boundaries), weird FT artifacts with a periodic, repeating structure will start showing up. If you see that, reduce your beam size so it fits within the calculated area. There’s no ray tracing in any of this. I don’t really know how to do ray tracing…
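The released code isn’t shown in this post, but the core propagation step amounts to something like the following MATLAB sketch (the wavelength, window size, grid, and beam waist here are illustrative values, not the program’s defaults):

% Fresnel (paraxial) propagation via the Fourier transform of the Fresnel kernel.
lambda = 633e-9;    % wavelength (m), illustrative
Dside  = 6e-3;      % edge length of the computation window (m)
N      = 1024;      % grid points per side
z      = 0.5;       % propagation distance (m)
w0     = 1e-3;      % input Gaussian beam radius (m)

x = (-N/2 : N/2-1)*(Dside/N);
[X, Y] = meshgrid(x, x);
U0 = exp(-(X.^2 + Y.^2)/w0^2);                    % collimated Gaussian at the input plane

fx = (-N/2 : N/2-1)/Dside;                        % spatial frequencies (1/m)
[FX, FY] = meshgrid(fx, fx);
H = exp(-1i*pi*lambda*z*(FX.^2 + FY.^2));         % Fresnel transfer function (global phase dropped)

S  = fftshift(fft2(ifftshift(U0)));               % centered angular spectrum of the field
Uz = fftshift(ifft2(ifftshift(H.*S)));            % field after propagating a distance z

imagesc(x*1e3, x*1e3, abs(Uz).^2); axis image;
xlabel('x (mm)'); ylabel('y (mm)');

If the propagated beam outgrows the window, energy wraps around the edges of the grid; that is the periodic artifact mentioned in the note above.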