Category Archives: Science

Pinholes

The most hyped solar eclipse of the century passed over the U.S. mainland today. I viewed it through a homemade pinhole camera. Pinhole cameras, made of a hole, a length of empty space, and an imaging surface, have only two adjustable parameters: size of the hole, and distance between the hole and the surface, which is a focal length of sorts.  How does tuning these two parameters affect our image?

Image Size

The sun subtends an angle of about 0.5 degrees in the sky. It's simple to see that the size of the image of the sun (L) does not depend on the diameter of the pinhole, but only on the distance f in the image below:


Figure 1. Pinhole geometry

For a focal length of 1 m, L should be about 9 mm.
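Working the small-angle arithmetic out explicitly:

$$L \approx f \tan(0.5^\circ) = (1\ \mathrm{m}) \times 0.0087 \approx 9\ \mathrm{mm}.$$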

Angular Resolution

A more interesting question is how the image resolution might be affected by tuning these parameters.  It might seem that a smaller pinhole should create a sharper image (Figure 2), but there is a limit.


Figure 2. Ray optics picture.  A larger aperture yields a blurrier image, since it allows rays from different parts of the object to be mapped onto the same spot on the image.

The smallest spot that can be formed on the imaging plane is ultimately limited by diffraction. If the aperture is made too small, diffraction rapidly blurs the resulting image. That effect is illustrated below.


Figure 3. Diffraction introduces uncertainty into the measurement of the ray's initial direction.

The resolution of our system is set by whichever of these two effects dominates: geometric blur, of angular size D/f, or diffraction blur, of angular size roughly λ/D, where λ is the wavelength of the light. So,

$$\theta(D, f) \approx \max\!\left(\frac{D}{f},\ \frac{\lambda}{D}\right),$$

which has the following dependence on D and f:

 


Figure 4a.  Making the hole diameter too small is much more detrimental to image quality than making it too large.


Figure 4b.  Resolution improves as the focal length increases.

A contour map showing the resolution as a function of D and f gives a clearer picture:


Figure 5. Contour map of resolution as a function of D and f.  Blue indicates better angular discrimination.  Plotted as log(θ in degrees).

Based on the calculations above, the pinhole should have a diameter of around one millimeter, and the focal length should be made as large as allowable.
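A minimal numerical check of that recommendation, assuming mid-visible light at λ = 550 nm (my assumption; any visible wavelength gives a similar answer) and the max-of-the-two-blurs model above:

```python
import numpy as np

WAVELENGTH = 550e-9  # mid-visible light, in meters (assumed)

def blur_angle(D, f, wavelength=WAVELENGTH):
    """Angular blur (radians) of a pinhole camera: the larger of the
    geometric blur D/f and the diffraction blur ~ wavelength/D."""
    return np.maximum(D / f, wavelength / D)

# The two blurs are equal, and the total blur smallest, when
# D = sqrt(wavelength * f).
f = 1.0  # focal length in meters, as in the box camera below
D_opt = np.sqrt(WAVELENGTH * f)
print(f"optimal pinhole diameter at f = {f} m: {D_opt * 1e3:.2f} mm")
print(f"blur angle at the optimum: {np.degrees(blur_angle(D_opt, f)):.3f} degrees")
# -> about 0.74 mm and ~0.04 degrees, consistent with the
#    "around one millimeter" recommendation above.
```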

Eclipse Test

I placed three pinholes, approximately 0.8 mm, 0.2 mm, and 0.4 mm in diameter, in that order, on a sheet of aluminum foil, then light-sealed one of the moving boxes (f ~ 1 m) from our recent move.  Two hole cut-outs for the eyes allowed me to look inside (and take photos).

 

The different light levels for the full sun images can be seen in the right panel.

As the eclipse started I whipped out a slightly better camera to document its progress.


Figure 6.  Mirror image of the sun beginning its eclipse.

A zoomed-in series of images of the eclipse's progress in Boston, MA.  The maximum coverage we had was around 65%.

 

 

There are several design improvements I would make in the next iteration.  First, I underestimated the importance of being able to get very close to the projected image.  I would need to place the eye holes closer, or baffle the box in a different way, so that the viewer can approach the imaging surface.  Relatedly, since pinhole cameras suffer minimal tilt distortion, I would tilt the focal plane so that the image can be viewed more head-on.

See you in 7 years.

 


An Optimization Problem

John sent me an amazing website called Embroidery Troubleshooting Guide.

It has a hilarious, possibly unintentional(?) HTML bug: because of missing closing tags, all the text blocks are nested inside one another, each rendered 17% larger than its parent. By the bottom of the page, the text is much too large to fit on a screen, and all you see is contrasting blocks of color, sharp peaks and broad curves. It was, in a way, awe-inspiring: a tiny world worth exploring.
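A quick sketch of why the text blows up so fast (the 17% figure comes from the page; the base font size and depths are my assumptions):

```python
# Each unclosed block renders its text 17% larger than its parent, so
# font size grows geometrically with nesting depth.
base_size_px = 16  # a typical browser default

for depth in range(0, 60, 10):
    size_px = base_size_px * 1.17 ** depth
    print(f"nesting depth {depth:2d}: {size_px:12,.0f} px")
# By around 40 levels the text is taller than any screen, and all that
# remains visible is the insides of the letterforms.
```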

[Image: zoomed-in text blocks from the bottom of the page]

John has long been interested in this idea of semi-randomly generated design elements. In the past he has played with circles, with radius R and center (x, y) determined by some weighted random algorithm. It looked alright, but the experience didn't vary much from iteration to iteration.

He also had some bands of color on the side of a page whose widths and hues were determined randomly. That also looked pleasant, but indistinct. It’s a much more difficult problem to vary the topology of your design.

Recently, a researcher in the CS department came to our group to talk about a method he developed for the optimization of geometry. These kinds of problems are often encountered when one has a specific function for a component to perform, such as a bracket to support as much load as possible, but there are constraints to work within, such as the total amount of available material. If the geometry is simplified (say, approximated as a ring), the computation and optimization are simple. But what about an arbitrary geometry?

How do we choose our variables to smoothly vary? Specifically, how would our algorithm test for topological changes? The opening of a new hole, for instance, or the merging of a hole with a boundary, is not smoothly reachable by simply varying the lengths of existing arcs and the areas of existing embedded shapes.

He solved this problem by embedding the 2D geometry in a 3D space. A 3-dimensional function is stored, whose cross-sections represent the desired 2D geometrical object. The function’s parameters can be smoothly varied, which, along with the placement of the cross-section, will determine the 2D structure’s shape and topology.
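Here is a toy sketch of that trick (my own illustration, not the researcher's code): hold a smooth function over the plane fixed, and let the 2D design be everything above some level c. Smoothly varying c (or the function's parameters) can merge blobs, split them, or open holes, exactly the topological changes that are awkward to reach by perturbing boundaries directly.

```python
import numpy as np
from scipy import ndimage

def f(x, y):
    """A fixed, smooth 'height' function over the plane (chosen arbitrarily)."""
    return np.sin(3 * x) * np.cos(3 * y)

# Sample the plane and slice the function at different levels.
xs, ys = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
values = f(xs, ys)

for c in (-0.5, 0.25, 0.75):
    region = values >= c                  # the 2D design at level c
    num_blobs = ndimage.label(region)[1]  # count connected components
    print(f"level c = {c:+.2f}: {num_blobs} connected component(s)")
```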

[Image: part of a zoomed-in letter "e"]

I bring up this example because, looking at this zoomed-in text as a geometrical object, it's amazing how many different shapes can be found in different parts of the letter "e", for instance. In a way, John's problem is an optimization problem, just without the optimization bit. You want to explore as much of the configuration space as possible, but in a tractable way. And still, the most elegant way of invoking topological changes is to excerpt from some larger, well-defined function.

Super Resolution Microscopy

I heard it said a lot when I lived on the west coast. When it comes to business, ideas are worth nothing— it’s the execution that matters. More than I ever expected, that theme is echoed in science. And elsewhere. Anywhere I look, the things most worth doing are hard enough to do that, while it takes some decent smarts to think of the idea, it takes real vision to actually try it.


I met Eric Betzig the day he came to give a talk at Harvard, shortly before he won the Nobel Prize. He and my advisor were old friends, so a group of us students got to have lunch with the guy. To be tapped to host a speaker is kind of an honor for a new student.  He was a successful scientist in my field; he had invented some microscopy technique I'd never heard of. I wore nicer clothes than usual.  I tried to make small talk. Anyways, over salad I described my project to him: the setup, the design, the challenges I was facing, and the goal, which was to measure the repulsive Casimir force, a quantum electrodynamical force, using frustrated total internal reflection.

“Why?” he said, after a few seconds.

I didn’t realize quickly enough that he was challenging me.   I said something automatic– about fundamental limits.  I repeated the words from the research proposal: new materials, friction-less systems, the challenge of… He was shaking his head.

My face burned.  He had a reputation of being eccentric and brilliant. In his eyes I could see my ordinariness, my fear.

I should try to do something of actual value, he said. That I believe in.


The idea behind PALM (photo-activated localization microscopy) is very simple, really. It's the difference between the error on a single measurement and the error on the average of a bunch of measurements. Resolution in imaging is the limit of distinguishability of two nearby sources of light. The photons from each source land in a distribution, with some uncertainty in position on the detector determined by the width of this distribution. So each source appears not as a point, but as a spread-out blob of light.

[Image: the overlapping blobs of two nearby sources]

Two sources emitting at the same time will, as they get closer and closer together, at some point appear indistinguishable. This limit, related to the width of each blob, is normally set by the size of the light-focusing element: an objective, a dish, or a lens. Incidentally, this is why we have things like this:

[Image: a giant radio telescope dish]

But, imagine if we covered up all the emitters but one. We’d still get a blob of light on our detector, sure, with the same width as before, but now we can find with ease the center of that spot.

[Image: localizing the center of a single emitter's blob]

The uncertainty on the peak, or mean, position is many times smaller than the width of the distribution. How many times smaller depends on how much light you have collected.
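Numerically, the scaling is easy to see (a toy sketch with made-up numbers): the spread of the fitted center shrinks like the blob width divided by the square root of the photon count.

```python
import numpy as np

rng = np.random.default_rng(0)

SIGMA = 1.0        # width of the diffraction blob (arbitrary units)
TRUE_CENTER = 0.0  # actual position of the lone emitter

for n_photons in (10, 100, 10_000):
    # Localize the emitter many times to estimate the uncertainty
    # of the fitted center (here, simply the mean photon position).
    centers = [rng.normal(TRUE_CENTER, SIGMA, n_photons).mean()
               for _ in range(2000)]
    print(f"N = {n_photons:6d} photons: "
          f"measured spread {np.std(centers):.4f}, "
          f"sigma/sqrt(N) = {SIGMA / np.sqrt(n_photons):.4f}")
```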

It turns out, you don’t need to do this one emitter at a time. As long as the data is sparse, that is, the bright points are separated by adequate dark space, a large number of fits can be done all at once.


Later that day, on the way out of Betzig's talk, which had been packed (I sat on the stairs), someone in my group said he didn't see what the big deal was. Someone else said, yeah, the ideas are basic, almost brute force, even.


Turning emitters on and off is not a trivial business. It's not like we have little wires and little switches attached to each molecule, each fluorophore, each star, or whatever we're imaging. We can imagine trying to use selective illumination to excite emitters, but our excitation light is subject to the same diffraction limits as the light that forms the image.

But some fluorescent molecules were discovered to be activatable. They can be switched from a "dark", non-emitting state to an "active" state by exposure to UV light. Then they can be turned off again through the process of photo-bleaching. There is a rate associated with the activation and deactivation of fluorophores. This does not mean that each molecule goes gradually from active to inactive; it means that first some molecules are activated, and then some more, until finally the whole batch is lit up.

So, send a relatively weak pulse of light, over the whole sample, enough to activate, in some randomly chosen manner, about 1% of the fluorophores. Image these until they go dark. Find the centers of this sparse set of emitters. Send another pulse. Image those. And again. Stack the resolved images together.
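Here is a toy one-dimensional simulation of that loop (entirely my own sketch, with made-up numbers; not the actual reconstruction pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

true_positions = rng.uniform(0, 100, size=50)  # hypothetical fluorophores
PSF_SIGMA = 2.0             # diffraction blob width (same units)
PHOTONS_PER_EMITTER = 500   # photons collected before photo-bleaching

localizations = []
remaining = list(true_positions)
while remaining:
    # Weak UV pulse: activate a small random subset of what's left.
    rng.shuffle(remaining)
    k = max(1, len(remaining) // 10)
    active, remaining = remaining[:k], remaining[k:]
    # Image the sparse active set until it photo-bleaches dark; if the
    # active emitters are well separated, each blob is fit on its own.
    for center in active:
        photons = rng.normal(center, PSF_SIGMA, PHOTONS_PER_EMITTER)
        localizations.append(photons.mean())   # fitted blob center

# Stack all the rounds together: positions come back far sharper than
# the blob width would allow in a single exposure.
errors = np.abs(np.sort(localizations) - np.sort(true_positions))
print(f"median localization error: {np.median(errors):.3f} "
      f"(blob width: {PSF_SIGMA})")
```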

[Image: schematic of the activation, localization, and bleaching cycle in photo-activated localization microscopy (PALM)]


The first time they took an image with this technique, it took 12 hours. Seven years later, they’d moved on, were now showing videos of live cells at 100 frames per second. Cancer cells crawling through a collagen matrix. T-cells attacking their targets. Not a biologist, I didn’t know the technical significance of what I saw. Still, my hair stood on end. Still, it haunted me— like something I was not supposed to have seen.

I wasn’t sure that I’d gone to the same talk as my colleagues.


[Image: conventional microscopy vs. iPALM]



I wish I could tell you a happy ending to my story. That I followed my passions, worked hard on some closely held idea and saw success, however moderate; that I’m on my way to not just graduation but contentment. But that wasn’t the way, and maybe that’s not the nature of the thing. I got rid of that original project, at least, but I’m still looking for something truly original, of value. That I believe in.

Climate Models and the Carbon Cycle

The carbon cycle:

To tell the story from the beginning, consider the carbon atom – 6 protons, 4 valence electrons, 4th most abundant element in the universe – basis of all life on earth. It’s locked up in rocks and plants, dissolved into our oceans, and mixed up with other gases in our atmosphere. As rocks it shows up as coal, limestone, or graphite for instance. In the rivers and oceans it’s mainly carbonic acid. In the atmosphere, carbon dioxide and methane gas.

While on the whole, restorative chemical processes [1] keep the relative distribution of carbon among these reservoirs fairly stable, each individual atom of carbon is in motion, traveling between the various phases, from gas to liquid to solid, between atmosphere and oceans and rocks and living matter. This is what’s known as the carbon cycle.


A carbon cycle diagram, drawn by Courtney Kesinger.

The carbon cycle has various loops, none of which is completely closed. For instance, in the fast carbon cycle, which is traversed on the time scale of a human life, carbon is taken up from the atmosphere by plants through photosynthesis, stored as sugars, then released back into the atmosphere when it is burned for energy, either by the plant itself, or by something that has consumed the plant, such as an animal, a microbe, or a fire.

But this loop is not closed. Dead plant matter that is buried before it has time to decompose does not release its carbon back into the atmosphere as part of this fast cycle. Instead, it becomes coal, or oil, or natural gas, and is locked up for millions of years beneath the earth's surface.

Before the industrial revolution, carbon stored in fossil fuels found its way into the atmosphere mainly through volcanic eruptions, as part of the slow carbon cycle, so called because a round trip takes roughly 100 million years. In this leisurely cycle, rain dissolves atmospheric carbon, forming a weak acid (carbonic acid) that it then deposits into lakes, rivers, and oceans. The resulting carbonate ions are collected undersea by living organisms and built into shells. Carbon, now in solid form, settles to the sea floor when these organisms die and builds up sedimentary rock layer by layer. Finally, the earth's heat melts these rocks, and volcanoes and hot spots return carbon (including that contained in fossil fuels) to the atmosphere.

A key point about these natural processes is that they are roughly in balance. For instance, the rate of carbon release into the atmosphere, by respiration or volcanic activity, is matched by the rate of carbon absorption into plants and oceans. And this system is held in approximate equilibrium by various restoring forces. A sudden, small increase in the concentration of carbon in the atmosphere, absent other factors, leads to increased plant growth [2], more rain [1], and more direct absorption at the surfaces of oceans [3]. In other words, the oceans acidify to deplete this extra carbon.

But how much carbon can our oceans take up? When, if ever, would the climate then return to its pre-perturbed state? What would the earth look like in the interim, in the far term?

By unearthing and burning fossil fuels, in our cars, factories, and electrical plants, we are harnessing energy by shortcutting a process which naturally occurs on geological time scales. About 30 billion tons of carbon dioxide are now added to the atmosphere each year directly by the burning of fossil fuels [5], a rate 100 times greater than that of volcanic emissions. As a result, atmospheric carbon is at its highest level in the ice-core record, which goes back 800,000 years [4].

 
Climate models:

We can use physical models to predict how the earth's climate system might respond to different stimuli. To understand climate models, consider how a physical model can be used to predict the orbital motion of the planets. Given a set of parameters which describe the system (the positions, masses, and velocities of the planets and sun), the physical laws which govern the system (Newtonian physics or, more accurately, General Relativity), and a certain set of simplifying assumptions (a planet's interaction with another planet is insignificant compared to its interaction with the sun), what emerges is the "future" of these original parameters. Some won't change, such as the masses of the bodies, but others, their positions and velocities, will trace out a trajectory.
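As a concrete miniature of that recipe (a toy integration of an earth-like orbit, not a real ephemeris code):

```python
import numpy as np

# Parameters (position, velocity, masses), a law (Newtonian gravity),
# and a simplifying assumption (only the sun pulls on the planet).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the sun, kg
AU = 1.496e11      # astronomical unit, m

def accel(p):
    """Acceleration toward the sun at position p."""
    r = np.linalg.norm(p)
    return -G * M_SUN * p / r**3

pos = np.array([AU, 0.0])        # start one AU from the sun
vel = np.array([0.0, 29_780.0])  # earth's mean orbital speed, m/s
start = pos.copy()
dt = 3600.0                      # one-hour time steps

for _ in range(int(365.25 * 24)):   # march forward one year
    a = accel(pos)                  # velocity Verlet: stable for orbits
    pos = pos + vel * dt + 0.5 * a * dt**2
    vel = vel + 0.5 * (a + accel(pos)) * dt

print(f"after one year: {np.linalg.norm(pos - start) / AU:.4f} AU from the start")
```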

Similarly, climate models aim to plot a trajectory for earth.
 


Black body radiation of the sun and earth after traversing the atmosphere.


How well such a model performs depends crucially on the validity of its assumptions and the completeness of its knowledge, which is to say, our knowledge. After all, models know only what we know. We know, for instance, that earth exchanges energy with outer space through radiation, or light. We know that carbon dioxide and methane strongly absorb and re-emit certain IR frequencies of light while remaining largely transparent to visible frequencies. Since incoming radiation is visible light (sunlight) and outgoing radiation is IR, we expect that an increase in greenhouse gases leads to an imbalance favoring energy influx over outflux. And, as Dr. Scott Denning stated in an earlier post: "When you add heat to things, they change their temperature."
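The simplest possible version of that accounting is a zero-dimensional energy balance (my own toy sketch; the "greenhouse" knob crudely stands in for IR trapping, and real climate models resolve the atmosphere, oceans, and ice in three dimensions):

```python
# Zero-dimensional energy balance: absorbed sunlight in, thermal IR out.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # fraction of sunlight reflected back to space

def equilibrium_temp(greenhouse=0.0):
    """Temperature (K) at which outflux balances influx.

    `greenhouse` is a hypothetical knob: the fraction of outgoing IR
    re-trapped by the atmosphere.
    """
    influx = S0 * (1 - ALBEDO) / 4           # averaged over the sphere
    # Balance condition: (1 - greenhouse) * SIGMA * T**4 == influx
    return (influx / ((1 - greenhouse) * SIGMA)) ** 0.25

print(f"no greenhouse trapping: {equilibrium_temp(0.0):.0f} K")  # ~255 K, frozen
print(f"40% IR trapped:         {equilibrium_temp(0.4):.0f} K")  # ~289 K, earth-like
```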
 

Tiki the Penguin

A deeper question is where the extra energy will go. To that end, we model the earth’s land, oceans, ice sheets, and atmosphere, allow them to absorb energy as a whole and exchange heat with each other through various thermodynamic processes. We track their temperatures, their compositions, and their relative extent. In this way, we can get a rough idea of the global response to a given amount of energy imbalance, called “forcing”.

But it gets more complicated.

The response itself may alter the amount of external forcing. The loss of ice sheets decreases the earth’s reflectivity, increasing the planet’s energy absorption [7]. The thawing of permafrost and prevalence of hotter air are likely to elevate, respectively, levels of methane [9,10] and water vapor [12]–two additional greenhouse gases–in the atmosphere. These are examples of known feedback mechanisms.

If the planet's response to an energy flow imbalance is to increase this imbalance, the feedback is positive: climate change accelerates. On the other hand, negative feedback slows further climate change by re-balancing the earth's energy flux. A change in the carbon cycle, as in the ocean's acidification by CO2 uptake, is one example of negative feedback [1]. So far, about half of our CO2 emissions have found their way into our oceans [13].
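A textbook way to formalize the strength of these feedbacks (a standard linear-feedback relation; the notation is mine, not drawn from the references below): if a forcing alone would produce a temperature change $\Delta T_0$, and the feedbacks collectively return a gain $g$ of any warming as further forcing, the equilibrium response is

$$\Delta T = \frac{\Delta T_0}{1 - g}, \qquad g = \sum_i g_i,$$

so positive terms (ice-albedo, methane, water vapor) push $g$ toward 1 and amplify the response, while negative terms (such as ocean carbon uptake) pull it back down.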

It’s in this tug-of-war between positive and negative feedback mechanisms that the trajectory of earth’s future climate is drawn [8]. Ultimately, thermodynamics guarantees that the earth’s climate will find stability [11]. But we shouldn’t confuse a planet with a balanced energy budget with a necessarily healthy or habitable planet. Venus, for instance, has a balanced energy budget, and a composition very similar to earth. In other words, the question is not whether, but where.

The crucial role that climate models play in all this is that they help us catalogue and combine these separate pieces of knowledge. The more complete the information, the more accurate the predictions. Right now, improving the future accuracy of climate models depends heavily upon getting a good grasp of climate feedback mechanisms. As we slowly step toward a more complete understanding of our climate system, it's important we continue to receive new science in context, reminding ourselves that each new study is a welcome refinement of our knowledge, one that neither proves nor disproves global warming, but simply moves us forward.

 


[1] http://earthobservatory.nasa.gov/Features/CarbonCycle/
[2] http://www.climatecentral.org/news/study-finds-plant-growth-surges-as-co2-levels-rise-16094
[3] http://www.sciencemag.org/content/290/5490/291
[4] http://cdiac.ornl.gov/trends/co2/ice_core_co2.html
[5] http://www.eia.gov/totalenergy/data/annual/index.cfm
[6] http://www.skepticalscience.com/graphics.php?g=12
[7] http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442%282000%29013%3C0617%3AASIVIT%3E2.0.CO%3B2
[8] http://occri.net/climate-science/climate-modeling/sources-of-uncertainty
[9] http://www.sciencedaily.com/releases/2014/03/140327111724.htm
[10] http://www.sciencemag.org/content/312/5780/1612
[11] http://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law#Thermodynamic_derivation
[12] http://geotest.tamu.edu/userfiles/216/dessler09.pdf
[13] http://www.sciencemag.org/content/305/5682/367.short

Mind the Gap

My opinion piece finally got published in the Tech Review. They asked for a picture of me next to some science. This was all I could come up with.


Mind the Gap – The science communication problem

By Lulu Liu on June 18, 2013

Ask what a scientist’s job is and the answer might be that it’s to run a lab, to write grants, or even—if the scientist is feeling cheerful—to uncover knowledge. Ask a journalist the same question and you’ll probably hear delivering news, generating awareness, or, most cynically, selling papers.

Whose job is it to tell the story of science—to act as its representative in the public and policy spheres? You might think science journalists are the ambassadors. In the summer of 2010, I was one. I’m here to tell you: they’re not.

After finishing my bachelor’s in physics at MIT, I became an AAAS Mass Media Fellow at a newspaper in California. I arrived in early summer—as black oil was pouring out of the Macondo well in the Gulf of Mexico following the Deepwater Horizon explosion. Although the around-the-clock news coverage of the oil spill had abated somewhat, opinions abounded, and a local university professor became entangled in a public debate about whether oiled birds could be saved.

The professor, a wildlife expert who had devoted most of his life to the rescue and recovery of oiled animals, believed rescue efforts should continue. But his opponents said those efforts did no good.

In researching the story, I combed existing news. I found that a typical article on this subject began by informing me that there was no consensus. Then I got fed some quotes: Scientist A said no, while Scientist B said yes. Somehow both sides had studies that seemed to support them, yet none of it got me any closer to understanding the truth. How, I wondered, can rational people disagree about facts?

The story took me a week to write. The first time I called the professor, he pointed me to studies, regurgitated the same old points, and even had some colorful language for the other scientists. But a few days later, when I got him on the phone again to say what I’d learned about why the studies disagreed, he listened: I pointed out the different species of birds the different researchers sampled, the decades that separated them, their disparate methods of measurement, and more. After a pause, he said that yes, the rescued birds used to die—no matter what you did to help them—within a few days or weeks. Sometimes they would all die.

I learned that survival depended on a bird’s species, size, and habits; how much oil it had ingested; whether its home was destroyed. I learned that in recent years the scientist’s university had poured millions into improving rescue methods, and there was evidence that this effort had paid off in decreased mortality rates.

Now he had to defend his work, he said, as he feared a public outcry would spell the end of yet another thing that did good in the world. When I called the other scientists with what I’d learned, they agreed. It was not their intention to halt these efforts, they said; they had simply felt that the effectiveness of the programs had been misrepresented in the media.

Looking back, I understood why the scientists on both sides had exaggerated, bent the truth a little. They had come to regard the media with suspicion, as a threat to the viability of their research programs. I also understood why journalists were happy to fan the flames of controversy. There was immense public interest. But as I watched polarized readers hurl words like “heartless” and “wasteful” at each other in the comments, I realized that no understanding had been passed on.

That summer I learned that language matters. I learned that context matters, that a truthful narrative supported by facts is compelling on its own. And I learned that our inability to tell the story of science—of its goodness, its vision, its relentless truth-seeking—is eroding the public’s trust.

Science is not quick or glamorous, and we don’t need to make it look that way. It’s the piecemeal assembly of reality, fact by painstaking fact, and that is beautiful enough. Every time the incremental is reported as revolutionary, a disservice is done. To sensationalize scientific progress is to misrepresent it.

I think good judgment, and the will to exercise it, is the best quality a science writer can have. Because in a disagreement, I can trust such a writer to stand not squarely in the middle but as near as possible to the truth. And in science, there are no two versions of it.


Lulu Liu ‘09 is a second-year PhD student in applied physics at Harvard.