
Do the wave


by Nicole Gaynor

The SCEC PressOn project is leading the charge in more physically realistic, wave-based earthquake simulations. In addition to the computing power of the Blue Waters supercomputer, more efficient workflows have proven essential to the project’s success.

Earthquake risk is a thorny subject scientifically and in society.

In 2012, an Italian judge sentenced six scientists and engineers and one government official to six years in prison for downplaying the risk of an impending earthquake in L’Aquila, Italy, in 2009. The charge was manslaughter. The judge held the scientists responsible for 29 of the 309 deaths because, he said, they failed to properly analyze and explain the earthquake threat in the days leading up to the 6.3-magnitude quake.

Quakes are not only dangerous to people and property for the structural damage they can cause, but also for their potential to trigger landslides and tsunamis and cause fires along broken power and fuel lines. These beasts are notoriously difficult to predict because they occur on such a massive scale and often originate deep below the surface of the Earth.

“It’s incredible that scientists trying to do their job under the direction of a government agency have been convicted for criminal manslaughter,” said geoscientist Tom Jordan to Science magazine when the verdict was handed down. “We know that the system for communicating risk before the L’Aquila earthquake was flawed, but this verdict will cast a pall over any attempt to improve it. I’m afraid that many scientists are learning to keep their mouths shut.”

Jordan is not one of those quiet scientists. At the Southern California Earthquake Center (SCEC) where Jordan is the lead scientist, the SCEC PressOn project aims to improve long-term earthquake prediction (on the decadal time scale) and modeling as a step toward more specific and accurate earthquake hazard assessments.

Improve the model, improve prediction

On November 10, 2014, a panel of three judges acquitted the scientists and engineers of the manslaughter charges.

“The L’Aquila trial was a false prosecution of individuals for what was a failure of a national risk communication system,” says Jordan. “Judicial sanity prevailed, and the scientific community is greatly relieved.”

Despite the relief of acquittal, those who were cleared underwent more than two years of not knowing if they would be free or imprisoned. The seventh person, a government official, still faces a two-year suspended jail sentence pending further appeal. The prosecution and lawyers for victims’ families may pursue the case to the Court of Cassation, Italy’s highest court of appeal, according to Science magazine.

Discussion surrounding the case also brought to the public eye that earthquake prediction is a developing science—one in which Jordan is a leader.

Jordan’s PressOn project relies on a physics-based earthquake model called CyberShake. Rather than estimating how earthquake waves weaken as they propagate away from the fault using ground characteristics and observation-based algorithms, inventively called ground motion prediction equations (GMPEs), CyberShake explicitly calculates how earthquake waves ripple through a 3D model of ground structure.

Jordan and colleague Feng Wang developed a method by which they can directly compare the outcomes of GMPEs to CyberShake. The study, published in the Bulletin of the Seismological Society of America in 2014, found that progress in physics-based simulations may improve prediction and understanding of earthquakes by converting aleatory variability, or the random variability related to the limitations of the model itself, to epistemic uncertainty, which stems from lack of information to feed into the model. More observations and measurements of the Earth can then reduce epistemic uncertainties.
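
In rough terms, the Wang and Jordan comparison can be pictured as a variance decomposition; the notation below illustrates the concept and is not taken from the paper:

    σ²(total) = σ²(aleatory) + σ²(epistemic)

A purely empirical model lumps everything it cannot explain into the aleatory term, which no amount of new data can shrink. A physics-based model moves some of that variance into the epistemic term, where better observations of the Earth can chip away at it.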

Earthquake basics

To see clearly why a physics-based model is a step forward, one must understand a little about how earthquakes work.

The Earth’s outer shell is broken into several tectonic plates that drift slowly atop the hot, slowly flowing mantle beneath them. Most earthquakes result from the plates moving relative to one another, a process called plate tectonics.

The edges of the plates are rough and get stuck on each other while the rest of the plate keeps moving, storing up energy, kind of like stretching a rubber band. When the plate edges finally unstick (or you let go of one end of the rubber band), all that pent-up energy is released and the plate jerks into place. Aftershocks happen when the plate overshoots its equilibrium point and continues to readjust over the coming days to years.

These sudden shifts propagate as three types of waves: P waves, S waves, and surface waves.

  • primary (P) waves: compress and stretch the ground along the direction the wave travels. If you hold a slinky stretched between two hands and push one end toward the other, the pulse that races along the coils is like a P wave. P waves propagate through both solids and liquids.
  • secondary (S) waves: shear the ground up and down or side to side, perpendicular to the direction of travel. Shake one end of the slinky up and down and you make S waves. S waves propagate only through solid materials and are slower than P waves (the sketch after this list shows why).
  • surface waves: move along the surface of the earth rather than through the deeper layers. These are the slowest and cause the most destruction. Imagine the ground acting like the surface of the ocean.
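
The speed difference between P and S waves, and the reason S waves stop at liquid layers, falls out of the standard elastic wave speed formulas from textbook seismology. The sketch below applies those formulas with rough, illustrative material properties; it is not part of CyberShake:

    import math

    def wave_speeds(K, mu, rho):
        """Standard elastic wave speeds.

        P waves travel at sqrt((K + 4*mu/3) / rho); S waves at sqrt(mu / rho).
        K = bulk modulus (Pa), mu = shear modulus (Pa), rho = density (kg/m^3).
        """
        vp = math.sqrt((K + 4.0 * mu / 3.0) / rho)
        vs = math.sqrt(mu / rho)
        return vp, vs

    # Rough illustrative values for crustal rock (order of magnitude only)
    vp, vs = wave_speeds(K=50e9, mu=30e9, rho=2700.0)
    print(f"rock:  P ~ {vp:.0f} m/s, S ~ {vs:.0f} m/s")    # P arrives first

    # A fluid has zero shear modulus, so S waves cannot propagate through it
    vp_w, vs_w = wave_speeds(K=2.2e9, mu=0.0, rho=1000.0)
    print(f"water: P ~ {vp_w:.0f} m/s, S ~ {vs_w:.0f} m/s")  # S speed is 0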

The characteristics, timing, and damage pattern of these waves differ by distance from the origin of the earthquake and the type of rock or dirt at a given location. This is why high-quality earth structure data, in addition to wave propagation models, are crucial to accurate earthquake simulation.

Modeling a temblor

Though it may seem like an obvious approach to earthquake modeling, simulating how an earthquake actually works rather than approximating the shaking based on past observations, advances in computing power have made it possible only recently. The standard approach when computing power is limited is to use an empirical GMPE, which estimates shaking intensity from factors like fault orientation and distance from the origin and is far less computationally intensive. The US Geological Survey currently uses GMPEs for its hazard assessments.
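
To make the contrast concrete, here is a toy equation in the general shape GMPEs take; the coefficients are invented for illustration and do not come from any published model:

    import math

    def toy_gmpe(magnitude, distance_km):
        """Toy ground motion prediction equation (illustrative only).

        Real GMPEs are fit to thousands of recorded earthquakes and include
        many more terms (fault type, soil class, depth, ...), but the
        general shape is the same: intensity grows with magnitude and
        decays with distance from the fault.
        """
        a, b, c = -1.0, 0.5, 1.3                 # made-up coefficients
        ln_pga = a + b * magnitude - c * math.log(distance_km)
        return math.exp(ln_pga)                  # peak ground acceleration, in g

    # One arithmetic formula per site: cheap, but blind to 3D ground structure
    print(f"M6.5 at 10 km: ~{toy_gmpe(6.5, 10.0):.2f} g")
    print(f"M6.5 at 50 km: ~{toy_gmpe(6.5, 50.0):.2f} g")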

The sidebar image shows two hazard maps for the CyberShake test bed: the Los Angeles basin, where the Pacific and North American plates grind past each other along the San Andreas Fault. Crustal deformation around this large fault creates many smaller faults, and a sudden shift on any of them can cause an earthquake.

In seismology, shaking is often expressed as a fraction of gravitational acceleration. For example, 0.2 g corresponds to 20% of the acceleration that holds us to Earth’s surface; think of the g-forces that push you into your seat when an airplane takes off. The maps show the shaking level that, according to the model, has a 2% chance of being exceeded over 50 years.
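
Under the common assumption that large quakes arrive roughly at random in time (a Poisson process; the article does not spell this out), a 2% chance of exceedance in 50 years works out to a return period of about 2,475 years:

    P = 1 - exp(-t/T)  =>  T = -t / ln(1 - P) = -50 / ln(0.98) ≈ 2,475 years

In other words, the maps depict shaking roughly as strong as a site should expect once every two and a half millennia.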

The CyberShake approach only improves on GMPEs if it can draw on accurate Earth structure data, and such models are difficult to build because scientists have few direct observations of the Earth at great depths. CyberShake ingests Earth structure data from the surface down to about 100 km. The panel on the left used a 1D ground structure model, which varies vertically only. The panel on the right used a 3D model that changes both vertically and horizontally; it’s like comparing the texture of a layer cake to an apple cobbler with walnuts. The 3D model shows much more heterogeneity and more closely resembles the actual ground structure. Otherwise the panels are identical: both show results from the physics-based CyberShake.
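
The practical difference between the two panels comes down to a data structure. The sketch below contrasts a depth-only lookup with a full 3D grid; the layer values and the random perturbation are stand-ins for a real community velocity model, not SCEC’s actual data:

    import numpy as np

    # 1D model: wave speed depends on depth only, like the layers of a cake
    depths_km  = np.array([0.0, 1.0, 5.0, 15.0, 35.0])
    vs_1d_km_s = np.array([0.8, 2.0, 3.2, 3.6, 4.5])   # illustrative values

    def vs_1d(z_km):
        """Look up shear-wave speed from depth alone."""
        return np.interp(z_km, depths_km, vs_1d_km_s)

    # 3D model: speed also varies laterally (basins, sediments, hard rock).
    # A random perturbation stands in for real geological heterogeneity.
    rng = np.random.default_rng(0)
    grid = vs_1d(np.linspace(0.0, 35.0, 36))[None, None, :] * \
           (1.0 + 0.1 * rng.standard_normal((50, 50, 36)))

    def vs_3d(ix, iy, iz):
        """Look up shear-wave speed at a grid cell; depends on x, y, and z."""
        return grid[ix, iy, iz]

    print(vs_1d(3.0))        # the same answer anywhere at 3 km depth
    print(vs_3d(10, 20, 3))  # differs cell to cell at the same depth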

On the right panel, numerals 2 and 3 indicate areas of increased shaking intensity in near-fault sedimentary basins and the Los Angeles basin.

“These basins act as wave guides,” said Jordan at the 2014 Blue Waters Symposium. “They act as essentially big bowls of jelly that shake during earthquakes and therefore very much affect the motion.”

Numerals 1 and 4 indicate areas where shaking intensity was lower in the 3D model, thanks to wave scattering and hard rock. Wave scattering reduced the high shaking amplitudes near the San Andreas Fault (numeral 1) much more than Jordan and his team expected. In general, mapping variations in ground composition allows the model to more realistically simulate the way waves pass through the ground at each location, sometimes in unexpected ways.

Philip Maechling, SCEC’s information technology architect, thinks a model like CyberShake could become the basis of public seismic hazard estimates. With further research and development, it may help improve both long-term seismic hazard maps and short-term seismic hazard assessments for events like L’Aquila.

“GMPE formulas calculate a peak intensity measure at a given location for a given earthquake,” says Maechling. “CyberShake provides the full ground motion time series at a given location for a given earthquake. CyberShake provides extra information, such as duration of shaking, that GMPEs typically do not provide.”
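
To see what that extra information looks like, the sketch below recovers both a peak measure and a shaking duration from a synthetic record. The 0.05 g threshold is one common way to define a bracketed duration; the record itself is invented for illustration:

    import numpy as np

    def summarize(accel_g, dt):
        """From a full ground-motion time series, recover both the peak
        (all a GMPE gives you) and a simple shaking duration (the kind of
        extra information a time series provides). Duration here is the
        time between the first and last excursion above 0.05 g, a
        simplified 'bracketed duration'.
        """
        pga = np.max(np.abs(accel_g))
        strong = np.flatnonzero(np.abs(accel_g) > 0.05)
        duration = (strong[-1] - strong[0]) * dt if strong.size else 0.0
        return pga, duration

    # Synthetic record: a decaying burst of 1.5 Hz shaking (illustration only)
    dt = 0.01                                    # 100 samples per second
    t = np.arange(0.0, 30.0, dt)
    accel = 0.3 * np.exp(-t / 5.0) * np.sin(2 * np.pi * 1.5 * t)

    pga, dur = summarize(accel, dt)
    print(f"PGA ~ {pga:.2f} g, strong shaking for ~ {dur:.1f} s")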

