Sorry it’s been a little quiet around here over the last month, but I think I had a good reason. I was getting ready for my first European radar conference: 7th European Conference on Radar in Meteorology and Hydrology in Toulouse. I’d never set foot in France before, and wanted to put my best one forward!
At the Météo France conference center, I presented a talk (about GBVTD analysis of W-band data we collected during VORTEX2) and two posters (both on EnKF assimilation of mobile radar data in supercells). I reconnected with domestic and international colleagues and made some new acquaintances. Between sessions, we had receptions and banquets at several Toulouse landmarks, including City Hall and the 800-year-old Hotel Dieu.
I got to see updated versions of some research presented at last fall’s AMS Radar Meteorology Conference in Pittsburgh, as well as some intriguing new work from my European contemporaries. (There weren’t many tornado talks, but there aren’t as many tornadoes in Europe, after all!) On the final day, there were a couple of talks about radar-based aeroecology (detection and characterization of birds, bats, insects, etc.). Fascinating stuff. Biologists are finding gold in the data that we usually ditch in QC!
Outside the conference, downtown Toulouse was visually pleasing and gastronomically amazing. I took relaxed strolls through the streets and gardens in the evenings, admiring the wrought iron balconies and old churches, nibbling cheese, and sipping wine. Oh, and taking in Euro Cup matches with the locals, too! The people were, by and large, friendly, and most of the waitstaff at restaurants spoke enough English to get our orders right. I visited 13th-century cathedrals, open-air markets, stunning museums, historic hotels, and verdant gardens.
I figured out early in my stay that I couldn’t possibly pack in all the activities I wanted to do in one week. It’s just as well, because I kept getting lost! And of all the cities I’ve visited, Toulouse was by far the best city to get lost in, slow down, and enjoy.
I’ve returned to find summer baking Oklahoma in earnest. It may not be too long before we dust off the dust devil chasing gear again!
I am a co-author on a paper in this month’s issue of Monthly Weather Review, entitled “Impact of the Environmental Low-Level Wind Profile on Ensemble Forecasts of the 4 May 2007 Greensburg, Kansas, Tornadic Storm and Associated Mesocyclones.” Dan ran a set of NCOMMAS ensemble forecasts of the Greensburg storm, assimilating reflectivity and velocity data from the Dodge City, Kansas WSR-88D (KDDC). He both varied the 0-3 km AGL velocity profile in the initial model environment to reflect the onset of a low-level jet, and launched forecasts at different times to see how the vorticity swaths changed. Not surprisingly, the forecasts improve as the lead time decreases. However, there are still issues with the simulated Greensburg storm moving too quickly toward the east, possibly as a consequence of cold pool buildup.
We consider this paper a proof-of-concept study in support of the Warn-on-Forecast project. It demonstrates probabilistic forecasting of a tornadic mesocyclone’s track using operationally available data, albeit not in a real-time framework.
My contributions to this study were the dealiased KDDC data, the low-level VAD wind profiles, some of the graphics, and of course, help with the writing. (Fortunately, Dan is a good writer and didn’t need much help!)
One minor erratum – I just noticed that my listed affiliation is partly incorrect. I do work for CAPS, but my previous affiliation was CIMMS, not NWC. That’s an “oops” that I should have caught during the editing process. Fortunately, it doesn’t impact the science!
This past week, I was privileged to participate in the third annual Atmospheric Science Collaborations and Enriching Networks (ASCENT) workshop in Steamboat Springs, Colorado.* This workshop brings together female atmospheric scientists at different stages of their respective academic careers, about half of them recent Ph.D. recipients (junior scientists). Throughout the three-day workshop, the senior scientists shared their career stories and life lessons, while the junior scientists discussed their work via poster session and sought out collaborators. There was even a film crew – a media budget was written into the ASCENT grant – who documented the workshop, interviewed us, and shot tons of images and video that will soon be on the web for the world to see.
A room full of candid and intelligent women is a sight to behold. Everyone was so open, frank, and honest with one another. The vast majority of the attendees were atmospheric chemists. Their work on aerosols and pollution has implications that can potentially benefit millions of people, many generations into the future. I learned a great deal from them, and I hope they learned something from me, the resident tornado geek, as well. I couldn’t help feeling like an odd woman out in the room sometimes. But I found a kindred spirit in Elissa E., a researcher from Los Alamos, who fires wired rockets into thunderstorms to trigger lightning flashes. People say that putting a radar in front of a tornado takes guts, but what she does is even more hardcore, in my opinion! Very modestly, she assures me that she launches the rockets from the safety of an underground bunker, and only after having been given the “go” by several assistants.
We also got to visit Storm Peak Laboratory, headed by Dr. Gannet Hallar (lead PI on ASCENT). After passing a sign that read “four wheel drive required,” and a tooth-chipping, 20-minute drive up a gravel road, we arrived on top of Mt. Werner to find the lab nestled among the ski lifts. The lab is about the size of a 3-bedroom house and accessible only by Snowcat for several months of the year. The link above goes to a great picture of the lab encrusted in snow and ice. They receive 500″ of snow annually, and researchers sometimes choose to spend weeks at a time at the lab in the dead of winter babysitting their instruments.
While the mountain vistas from the lab rooftop are breathtaking, and the lab has a full kitchen and numerous bunk beds, the researchers who work there are not vacationers. They are actively conducting experiments, installing and de-installing instruments, taking measurements and samples, and maintaining equipment year-round. They have documented the changing chemistry and aerosol content of the local atmospheric environment, giving the rest of us much-needed information about CCN concentrations and characteristics. I’m accustomed to dealing with cloud processes in terms of bulk microphysical parameterizations in NWP models; Storm Peak Lab actually gathers data that informs those parameterizations.
Doubtless there has never been a better time to be a female atmospheric scientist. Most of the overt barriers to women in science have been removed, thanks to laws (such as the Civil Rights Act and the Family and Medical Leave Act) that have been informed by science. I am happy to report that I have never experienced overt discrimination or harassment in my career – at least, not that I have been aware of.
However, when I walked across the stage at the OU School of Meteorology graduation ceremony this spring to receive my doctoral hood, I was the only female Ph.D. recipient out of 10. Why was I there, while my other female classmates chose to stop at the B.S. and M.S. levels? I’ve chatted with some of them informally; the familiar refrain is that they worry they will not be able to sustain the energy level and workload required of an academic researcher. We see our professors come in late at night to slave away on grant proposals and papers. I must admit that the “lifestyle” doesn’t look all that appealing. Literature with titles like Where are All the Women Geoscience Professors?, Why So Slow?, and Why So Few? abounds. I was saddened to learn that meteorology suffers from the lowest rate of female professorship among all the geosciences – in 2010, a scant 12% of meteorology professors were women.
During the workshop, we shared strategies for coping with workplace issues that disproportionately affect women. There were plenty of horror stories from women who had suffered active discrimination, who were denied credit for work they did, who were rejected for positions on account of motherhood, and who had endured resentment, harassment, or even assault by colleagues. But that was then; don’t we live in more enlightened times now? Not according to the statistics. It appears that many of the barriers left for us now are actually unconscious ones, either in our own minds or the minds of others. While many of us will swear to rejecting stereotypes of female scientists, our actions betray our unconscious biases. There’s the “bitch” dilemma – how does a woman assert herself without coming off as a bitch? (Consensus answer: “Be persistently pleasant.”) There are studies showing that women are held to higher standards of competence than men, are less likely to negotiate for fear of appearing pushy, are pressed to do more service than men (“token woman syndrome”), and are more likely to have their credentials overlooked or questioned. We learned strategies for saying “no,” for compartmentalizing our time, for leveraging our institutions’ policies during demanding family times, for supporting other women (something we admitted we don’t always do well), and for gently reminding others of our need for space and respect.
Not all the strategies were abstract or hypothetical. For example, those of us who had not yet written grant proposals were invited by a participant from NSF to submit our names as potential proposal reviewers (thereby learning by reviewing what works and what doesn’t). I did not know that opportunity existed, because I assumed I had to submit a proposal before I would be asked to review, as in academic journals. (Major lesson: What you assume can hurt you! Always ask!) We were asked to participate in real-world research projects, select mentors, and continue correspondence after the end of the workshop. And of course, being a good science project, ASCENT included lengthy evaluation metrics and assurances that we will be checked up on periodically in the future to assess the impacts of the workshop.
As much as I enjoyed ASCENT, and as much as I can see the merits of gathering women in an all-female setting to share their strategies, I cannot help feeling that the very concept of “women’s issues” is still a major impediment. These are men’s issues, too. Men work and live with women. What good does it do women to gather and discuss ways to deal with the male-centric framework of scientific research, when it’s the framework itself that needs changing, and will require the involvement of men to change it? It’s not enough for a male scientist to simply say, “I’m not sexist, so I’m not part of the problem.” I once pointed out to my doctoral adviser that he now has a vested interest in ensuring a level playing field for me after graduation, because he has invested a great deal of time and money in my professional development. (To his credit, he has always let me have first authorship on papers I have written myself, and allowed me to present my own work whenever possible. I am shocked to hear that, even today, this is not always the case!)
My male colleagues should recognize that support for their female colleagues is not an accommodation that dilutes science, but a strategy for synergy and increased productivity throughout the whole of science. When the potential of half the scientist population is not being fully realized, that dilutes science. Happier, healthier, more productive colleagues (both male and female) will benefit everyone in the long run, and ultimately make our nation’s science stronger.
I’ve added a skill to my scientific skill tree recently. A skill that, in hindsight, seems intuitively obvious, but really wasn’t until I put it into practice.
A common quip in academia is, “Publish or perish.” Successful scientists publish. Prominent scientists publish a lot. Refereed journal articles narrate the maturation of our field, and prolific writers can exert a powerful influence on its direction, as well as keep the bean counters happy.
As a postdoc, I’m expected to publish a minimum of two refereed journal articles per year. At 7500 words apiece, that works out to an average of 58 words per work day. Of course, that’s not how we generally write. We tend to write in thousand-word spurts, just before a major deadline. The weeks leading up to a major conference, when we produce extended abstracts, abound with bloodshot eyes in front of LCD screens late at night.
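(For the curious, here’s a quick back-of-the-envelope check of that figure; the ~260 work days per year is my own assumption, not anyone’s official count.)

```python
# Back-of-the-envelope check of the writing pace quoted above.
# The ~260 work days/year (52 weeks x 5 days) is an assumption.
papers_per_year = 2
words_per_paper = 7500
work_days_per_year = 260

words_per_day = papers_per_year * words_per_paper / work_days_per_year
print(f"{words_per_day:.0f} words per work day")  # -> 58
```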
On the recommendation of someone on the ESWN listserv, I recently acquired a slender, 150-page book with the intriguing title How to Write A Lot by Dr. Paul Silvia. He rails against what he refers to as “binge writing” (which I describe above). He approaches the problem of writing from the standpoint of a psychologist, and deconstructs some of the “specious barriers” that academics often cite as their justification for not writing more.
Dr. Silvia’s main message is this: Make a writing schedule, and stick to it. Think of the writing schedule like an exercise regimen, or a class to learn a new skill. Set aside a block of time each day, close your door, and focus only on writing. Be defensive; don’t schedule other appointments during that block of time. The writing schedule will become an ingrained habit, and soon you will never have to “find” the time to write.
My gut response to this message was, “Well, duh, that makes perfect sense!” Repetition and practice are crucial, because, much like a muscle, unused writing skills diminish over time. I honestly think the only reason that this approach never occurred to me was that no one ever told it to me explicitly. Or perhaps all my mentors are themselves binge writers. (That would be easy to change!)
I resolved to test Dr. Silvia’s approach. For the past three weeks, I’ve set aside a two-hour block each morning to write, keeping my office door closed and my e-mail logged out. I stick a “Writing time: Do not disturb” sign to my white board (mostly so that my bosses know I am actually in the office), and it has attracted some comments. But the proof is in the pudding: During those three weeks, I’ve generated about two-thirds of a manuscript, and I’m feeling pretty good about it!
The size of a book is no indication of the utility of its contents. This slender volume has had an immediate impact on my approach to writing, hopefully for the better. I may not write exactly 58 words each day, but I’d like to think I’m getting closer to a more even, temperate pace.
I received many thoughtful and passionate responses to my previous post regarding the upgrade of the El Reno / Piedmont / Guthrie tornado in Oklahoma to EF-5, based in part on radar observations of wind speeds at 60 m AGL. As I noted there, the EF scale, like the F-scale before it, is a damage scale, not a wind speed scale. Some have argued that, for this reason, actual wind speed measurements should have no bearing on the EF-scale rating, while others have argued that we should try to incorporate wind speed measurements in EF-scale ratings whenever they are available.
Let’s climb into our “way back machine” and go back to 1971. (Granted, this precedes my own birth by nearly a decade, but I digress.) Dr. Ted Fujita was motivated by the question, “How fast are tornado winds?” Doppler weather radar was still in its infancy, photogrammetry was only possible with high-quality, well-documented film, and in situ measurements of the winds were, logistically, all but impossible to collect (despite valiant attempts to do so). The way I see it, Dr. Fujita asked, “What evidence for wind speeds do tornadoes most consistently leave behind?” His answer: Damage. In 1971, in a paper proposing the Fujita scale, he writes,
“…one may be able to make extremely rough estimates of wind-speed ranges through on-the-spot inspection of storm damage. For instance, the patterns of damage caused by 50 mph and 250-mph winds are so different that even a casual observer can recognize the differences immediately. The logic involved is that the higher the estimate accuracy the longer the time required to make the estimate. Thus a few weeks of time necessary for an estimate with 5-mph accuracy can be reduced drastically to a few seconds if only a 100 mph accuracy is permissible in order to obtain a large number of estimates with considerably less accuracies… high wind-speed ranges result in characteristic damage patterns which can be distinguished by trained individuals with the help of damage specifications…”
Fujita clearly spells out his rationale for the scale; his strategy was to use damage as a proxy for wind speeds in the absence of near-surface wind speed measurements. Forty years later, thanks to innovations like miniaturized, in situ probes and mobile Doppler radar, obtaining near-surface wind speeds in tornadoes is not so far-fetched. Because only a handful of such instruments exist, and deployments are challenging (the presence of a mere tree or building between the radar and tornado can compromise the measurements), we are still not collecting near-surface wind speed measurements in tornadoes with any consistency. And, we are finding that the wind speed bins don’t always match up with the damage indicators in the EF scale.
In my opinion, this means the scale needs to be made more flexible, or possibly supplemented by a wind measurement-based alternative (i.e., two ratings, one damage-based and one measurement-based). One could envision expanding the EF scale into a second dimension (an “EF matrix”), with the second dimension populated only when reliable wind measurements (M) are available, and collapsed when they are not. The El Reno / Piedmont / Guthrie tornado would, for example, be rated EF-4 based on its damage, but M-5 based on the RaXPol wind measurements extrapolated to the surface via an objective method.
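To make my half-baked idea slightly more concrete, here is a minimal sketch of how such a two-track rating might be encoded. The wind bins are the EF scale’s estimated 3-second-gust ranges (in mph); everything else – the `m_rating` function and the convention of returning nothing when no reliable measurement exists – is my own hypothetical construction, not any operational procedure.

```python
# Hypothetical "EF matrix" rating: a damage-based EF rating plus an optional
# measurement-based M rating. The wind bins below are the EF scale's
# estimated 3-second-gust ranges (mph); the rest is my own invention.
EF_WIND_BINS_MPH = [(0, 65), (1, 86), (2, 111), (3, 136), (4, 166), (5, 201)]

def m_rating(measured_wind_mph):
    """Map a reliable near-surface wind measurement to an M rating.

    Returns None if no measurement is available, collapsing the matrix
    back to the familiar one-dimensional, damage-based EF rating.
    """
    if measured_wind_mph is None:
        return None
    rating = None
    for level, lower_bound in EF_WIND_BINS_MPH:
        if measured_wind_mph >= lower_bound:
            rating = level
    return rating

# El Reno / Piedmont / Guthrie example: EF-4 damage, M-5 from radar winds.
print(f"Damage rating: EF-4, measurement rating: M-{m_rating(210)}")  # -> M-5
```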
What I outline above is merely my own half-baked idea, and I am eager to hear other suggestions from people closer to the subject area. I am not a tornado damage expert; I am an observationalist. Keeping the damage-based scale certainly has its merit, primarily in the interest of maintaining consistency with the last 40 years of records (fraught with uncertainty though it may be; see Doswell and Burgess 1988). However, a blanket disregard for reliable remote or in situ wind measurements seems unwise, when obtaining tornado wind speeds was precisely Dr. Fujita’s objective.
PRELIMINARY DATA...
EVENT DATE: MAY 24, 2011
EVENT TYPE: TORNADO
EF RATING: EF-5
ESTIMATED PEAK WINDS (MPH): GREATER THAN 210 MPH
INJURIES/FATALITIES: UNKNOWN/9
EVENT START LOCATION AND TIME: 8 WNW BINGER 3:30 PM CDT
EVENT END LOCATION AND TIME: 4 NE GUTHRIE 5:35 PM CDT
DAMAGE PATH LENGTH (IN MILES): 75 MILES
DAMAGE WIDTH: UNKNOWN
NOTE: RATING BASED ON UNIVERSITY OF OKLAHOMA MOBILE DOPPLER RADAR MEASUREMENTS.
I’m not certain if this is the first time mobile radar data have been used to upgrade a tornado rating, but it’s certainly an unusual occurrence. (If you know of such an instance, please post a comment!) EF-5 tornadoes are extremely rare events, mobile radar data collection in them is even rarer, and crucial near-surface wind measurements are rarer still. The Doppler velocities in the upgraded EF-5 tornado were collected at 60 m AGL, according to my former officemate and Ph.D. candidate Jeff Snyder. Since RaXPol is such a new radar, he and other members of Howie’s team have been double- and triple-checking their measurements throughout the past week. So far, I’m told, the data are of reliable quality. But they will still have to be subjected to the scientific peer-review process in more formal studies yet to be composed.
For comparison, on 3 May 1999, a DOW measured winds over 300 mph in the Moore/Bridge Creek, OK F-5 tornado. In a 2002 paper about that data set, it was noted that lofted/centrifuged debris could actually contaminate the velocity measurements near the surface. In the Greensburg, KS, EF-5 tornado, which I studied as part of my dissertation research, Doppler velocities exceeded 180 mph, but only well above the surface. (We deployed too far away from the Greensburg tornado to collect data in that crucial near-surface layer – see the figure at right.)
Remember that the EF scale is not a wind scale. The wind speeds are estimates based on damage (which is the only evidence tornadoes consistently leave behind for us to study), rather than the other way around. For this reason, there may be forthcoming disagreements as to whether Doppler radar measurements can even be used to make an EF-scale determination. Stay tuned…
* An explanation of the EF scale (and how it differs from the original Fujita scale) can be found here.
Correction: The Mulhall, OK tornado was F-4, not F-5, and the 300+ mph measurement was in the Moore/Bridge Creek, OK tornado. Thanks to Roger Edwards and Mike Coniglio for the corrections!
When I was an undergraduate research assistant at the University of Wisconsin – Madison about a decade ago, I examined similar infrared satellite imagery for indications of drought. As we all learned in biology class, leaves maintain a crucial range of moisture levels and temperatures in their interiors largely by controlling water flux through their outermost few layers of cells. Tornadoes batter vegetation and shred leaves into tiny pieces, exposing their moist interiors. (I never saw the rain-wrapped 10 May 2010 Tecumseh, OK tornado pass by a few miles to my south, but shredded leaves rained down out of the sky for several minutes after it passed. I have only seen that kind of vegetation lofting in the immediate aftermath of a tornado.) If the plant isn’t killed outright, it becomes “stressed”, changing its reflectance characteristics at different wavelengths.
Healthy vegetation reflects strongly in the near-infrared wavelengths (around 0.9 microns), but stressed vegetation reflects weakly. In addition, stressed vegetation lights up in a band centered around 1.9 microns, as this diagram from NASA shows:
A logical “signature” for stressed vegetation, therefore, might be R(0.9 microns) – R(1.9 microns). When this value is positive, the vegetation can be inferred to be healthy. When it’s negative, indications are that the vegetation is stressed. This is likely not the exact formula used by the ASTER researchers; I just made that up on the spot. A robust index would be based on examination of many different wavelengths, different vegetation types, and over many seasons. (Obviously, this is not my area of expertise any longer!) One might be able to determine an optimal stressed vegetation indicator using a statistical technique like principal component analysis (PCA).
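Just to illustrate, here is what computing that made-up index over a pair of co-registered reflectance images might look like. The arrays, wavelengths, and zero threshold are stand-ins of my own devising, emphatically not the ASTER team’s actual method.

```python
import numpy as np

# Toy version of the made-up stress index above: R(0.9 um) - R(1.9 um).
# The reflectance arrays are random stand-ins for co-registered satellite
# bands; this is NOT the ASTER researchers' actual formula or threshold.
rng = np.random.default_rng(0)
r_09um = rng.uniform(0.0, 0.6, size=(100, 100))  # near-IR reflectance
r_19um = rng.uniform(0.0, 0.6, size=(100, 100))  # ~1.9-micron reflectance

stress_index = r_09um - r_19um
stressed = stress_index < 0  # negative -> inferred stressed vegetation

print(f"Fraction of pixels flagged as stressed: {stressed.mean():.2f}")
```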
I remember seeing similar satellite images of the 3 May 1999 tornado damage swath through Moore. This paper, from Dr. May Yuan of the OU Department of Geography, contains prime examples. Neat stuff.
I was looking for inspiration on NSF Fastlane the other day, reading abstracts of recently-approved tornado and severe storm-related grants, when one titled “VORTEX8” caught my eye. Of course I clicked. On closer scrutiny, I noted that the date of the proposal’s initial amendment was “1 April 2111.” I was reading a grant abstract from 100 years in the future! In my astonishment, I refreshed the page and it immediately disappeared, presumably back into the rift in the space-time continuum from which it had emerged. I will attempt to re-create what I read, but the resulting piece is likely a woefully incomplete shadow of the real thing. Read on…
VORTEX8: For real this time!
Abstract: Despite the best efforts of research and forecast computing systems, tornadoes still manage to terminate a handful of taxpaying, backwater yahoos every year. The field phase of VORTEX8 will last from February to June of 2113. This time period has been selected to coincide with the changed climatological maxima for X-tremely severe thunderstorms and tornadoes across the Southern (February and March) and Northern (April and May) Great Plains regions.
Unlike previous field research efforts to study tornadoes (particularly VORTEX5, which was a spectacular waste of everyone’s time that we won’t mention further), the VORTEX8 instrumentation will be entirely automated and no human (or grad student) participation will actually be required. The instrument ‘swarm’ will consist of an assemblage of approximately 10,000,000,000 nanoradars, nanonets, and nanocams (drones), all controlled by an exa-scale uber-computer (queen) that will detect vorticity maxima through real-time assimilation of the swarm members’ trajectories and measurements, and automatically migrate the swarm to promising tornadoes (XEEF-3 or greater) via the “SASSInet.” The drones, powered by onboard fusion pico-reactors, will automatically disperse to designated locations in relation to the tornado, collecting data throughout its depth (both interior and exterior). For the duration of this experiment, the PIs have promised to disengage their respective drones’ battle protocols and thus the loss of drones due to ‘hostile scientific competition’ is expected to be negligible.
Data from the swarm will be assimilated in real time into micrometer grid scale computer models and disseminated to the public at large via the universal interjack.
Intellectual Merit: Thanks in large part to the seminal publication “The six degrees of tornadogenesis” in 2055, tornado warning lead time is now an unprecedented 5.2 hours. Nano-instrumentation has enabled tremendous strides in elucidating the dynamics, kinematics, and behavior of tornadoes, suction vortices, micro-vortices, and all manner of lesser whorls. However, the exact mechanism(s) by which tornadoes loft and transport individual, millimeter-scale particles and pieces of debris, such as individual aerosol particles and strands of animal fur, represent fertile territory that remains to be explored in a comprehensive observational and numerical study. Advances in the understanding of molecular-scale interactions and quantum entanglement are expected as a direct benefit of this experiment.
Plus, let’s face it, tornadoes are still the shiznit, and old scientists just can’t seem to let them go.
Broader Impacts: Although the usage of the automated sensor swarm will preclude the need for direct student participation in the field phase, it is estimated that approximately 140,000 graduate students will need to be employed to manually dealias the data from billions of nanoradars. In addition, at least fifteen Martian and four Titanian Ph.D. candidates will observe the experiment. It is expected that these students will later apply some of the nano-swarm technology to studies of other vortices in the solar system, including but not limited to Martian dust devils and the Saturnian polar hexagon, respectively.
Okay, I’ve stretched that premise well past the breaking point. APRIL FOOLS’! Seriously, though, I yearn to know how tornado and severe storms field research will evolve in the next 100 years. Will there ever be another Project VORTEX? And what questions will we seek to answer in those efforts?
Yesterday saw the arrival in Norman of Howie Bluestein’s new Rapid-Scan, X-band, Polarimetric mobile Doppler radar (RaXPol for short). It’s a radar primarily intended for tornado research, but it also has a myriad of other potential applications.
Yes, we’ve had radars mounted on trucks since the mid-1990s. What makes RaXPol special? Watch this:
That video clip is not sped up; that really is an 8-foot dish rotating at 180 degrees per second! By gradually changing the elevation angle, it can potentially collect a full atmospheric volume of polarimetric data in less than 30 seconds. Why is that important? Tornadoes can change drastically on time scales of only a few seconds, so the faster scientists can collect volumes, the more information we’ll have about those rapid changes. The polarimetric capability will allow researchers to distinguish different types of hydrometeors, debris, and other scatterers in supercells and tornadoes.
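A little arithmetic shows why a sub-30-second volume is plausible at that rotation rate. The elevation count below is my own assumed number for illustration, not RaXPol’s actual scanning strategy.

```python
# Rough volume-scan timing for a rapidly rotating antenna.
# The number of elevation angles is an assumed value, for illustration only.
rotation_rate_deg_per_s = 180.0
sweep_time_s = 360.0 / rotation_rate_deg_per_s   # one full azimuth sweep: 2 s
n_elevation_angles = 14                          # assumed, for illustration

volume_time_s = sweep_time_s * n_elevation_angles
print(f"Approximate volume time: {volume_time_s:.0f} s")  # -> 28 s
```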
One might expect the entire truck to wobble with a giant antenna swinging around on its bed. The engineers addressed that issue from the design stages. As can be seen in the video clip, the entire truck remains surprisingly static, even without the hydraulic levelers deployed. Seasick crew members will not be an issue.
And as for the problem of “beam-smearing” (insufficient dwell time) that might result from such a rapidly rotating antenna, the engineers implemented a multi-frequency Tx/Rx system. Conventional Doppler radar transmits pulses at a single frequency, then “listens” for the echo of the transmitted signal. Imagine someone striking a single piano key, then listening for the echo of that note. In contrast, RaXPol transmits consecutive pulses at slightly different frequencies, then listens for the returned signals from all of them simultaneously. In the piano analogy, instead of striking only one key, you would sweep your fingers over several keys, then listen for the combined echoes of all the different notes. Dr. Andy Pazmany explains in this presentation how this “frequency hopping” technique works.
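Here is a toy numerical illustration of why the frequency diversity helps – my own sketch, not RaXPol’s actual signal processing. Echoes at sufficiently separated frequencies are statistically independent, so averaging N of them shrinks the scatter of a power estimate by roughly a factor of sqrt(N), buying back the accuracy lost to a short dwell.

```python
import numpy as np

# Toy illustration (not RaXPol's actual processing): returns at well-separated
# frequencies decorrelate, so averaging N of them reduces the variance of a
# power estimate by ~1/N relative to a single sample.
rng = np.random.default_rng(1)
n_trials, n_freqs = 100_000, 8

# Complex Gaussian echoes with unit mean power, independent across frequencies.
echoes = (rng.normal(size=(n_trials, n_freqs)) +
          1j * rng.normal(size=(n_trials, n_freqs))) / np.sqrt(2)
power_samples = np.abs(echoes) ** 2

single_freq_std = power_samples[:, 0].std()    # one frequency: noisy estimate
hopped_std = power_samples.mean(axis=1).std()  # 8-frequency average: tighter

print(f"std, single frequency: {single_freq_std:.3f}")
print(f"std, 8-frequency average: {hopped_std:.3f}")  # ~ single / sqrt(8)
```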
I feel a little silly blogging about this radar, because I’m not going to be using it. (That job belongs to Howie’s current crop of grad students.) But, I’ve been hearing about this radar for three years, ever since Howie’s first “woof” when he heard that the grant proposal had been funded, and I’ve never been ashamed to geek out over a shiny new instrument! I can’t wait to see what data the students end up collecting.
* In the abstract, Howie mentions two female Ph.D. students. I was one of them!