The following is a “weathertorial” concerning the northeast “blizzard of Jan. 26-28, 2015.” It was written a few days after the event and posted at a different blog site at the time. I am republishing it here (slightly edited) because the impending storm bears far too many similarities to that one, and forecasters and other media do not seem focused on explaining snowfall gradients.
Also, earlier today, the New York News published a story that attacked meteorologists for a lack of forecasting skills. I hope that someone at the News reads this blog post and learns some things about weather forecasting.
Rather than jumping to conclusions about the January 2015 snow event, as many have done, I took some time to research the information and look at the data in some new ways. Hopefully, this will put the storm and its forecasts into a much-needed, realistic perspective.
The opinions expressed are mine alone, unless otherwise noted. DISCLAIMER: I worked for the National Oceanic and Atmospheric Administration from 1971 to 1995 (including the National Weather Service [NWS] from 1971 to 1985). As part of my overall involvement with the weather community, in recent years I have interacted with NWS offices and TV weathercasters often.
During the afternoon of Jan. 26, 2015, Steve Tracton, a past colleague at the National Weather Service (NWS), posted a treatise on forecasting, overforecasting and underforecasting on his Facebook page (posted below for reference). This was done prior to the onset of the “blizzard of 2015” (and thus did NOT involve any 20-20 hindcasting).
Having spent many years in a forecasting chair (forecasting is also something I do for my wife daily – definitely a high-risk assignment), I can relate to everything Tracton noted in his posting. It’s also important that the public appreciates what Tracton had to say. That’s because most folks only seem to know that “we (meteorologists) can’t ever get it right” and “I wish I had a job where I could be wrong ALL the time and still get paid.”
There are many days when I wish the Congressional Budget Office, economists and stock market pundits were subject to the same scrutiny. I’ll take being a meteorologist any day and I’ll remain proud that I routinely use solid science and solid thinking to make the best forecast possible. Rest assured that most meteorologists and even weather broadcasters (some of whom may not have the same levels of meteorological training) feel the same way! We try our best because we care about the people out there who use our work products every day.
When forecasting snowstorms, such as the multiday, “potentially historic” event of Jan. 26-28, 2015, there are many factors to address. These include, but are not limited to:
(1) will a storm even form?
(2) where will it form?
(3) when will it form?
(4) what type of weather will it bring (e.g., snow, rain, wind, wind chill, blizzard conditions)?
(5) when will the weather event start and stop?
(6) how much of each type of weather will occur, and how intense or significant might it be? and
(7) possibly the most important in this case, what will be the areal extent and gradient of the event (i.e., where will it stop geographically)?
This storm event and the sheer magnitude of the expected snowfall (two to three feet of snow) appeared on the forecasting horizon three to five days in advance. The timing was almost right on for most locales and the storm’s central location was only about 100 miles or so from the forecast position (Fig. 1). Further, the storm (which hadn’t even formed yet when the first snow forecasts were issued) did undergo rapid cyclogenesis (deepening), verifying the “meteorological bomb” forecast. Snowfall reached the three-foot depth in some locations. Blizzard conditions (forecast days in advance) occurred.
All in all, this was a superb forecast. Such a forecast, with this degree of overall accuracy, would not have been possible 30 to 40 years ago. Better data sets (including satellite imagery), better models and improved scientific understanding are among the factors that played a role here.
What went wrong (and this was about the only thing that went wrong) was the snowfall gradient (change in snowfall amount over distance) on the storm’s western edge. Here, while written county-by-county forecasts showed the expected snowfall gradient, the gradient concept was not as well noted and/or highlighted in weather statements, briefings and other dissemination modes. Hence, when New York City was pegged as being in the target zone for two feet of snow (a potentially historic event), that forecast was not tempered by the “50-mile rule.” This rule notes that a small change (not necessarily 50 miles) in storm track and/or the effect of another influence can cause the location of heavy snow to miss its mark by 50 miles. This rule is a paramount consideration in explaining large-scale winter storm snowfall (Fig. 2). There are other local factors that affect snowfall within the larger storm setting (e.g., banding, gravity waves, convection and the location of the coastal front). The gradient was well depicted in this forecast precipitation graphic (Fig. 3). This graphic (just one piece of information in a much, much larger suite of information) suggests that New York City snowfall would only be about one foot and that areas to the east of the Big Apple would bear the brunt of the storm.
In a similar vein, Boston was on the northern edge of the expected heavy snow area several days before the snow actually fell. Its observed snowfall exceeded the original forecast values.
Meanwhile, observed snowfall across eastern Long Island, southeastern Massachusetts and Rhode Island matched expected snowfall numbers quite closely.
Snowfall gradients were dramatic. Consider these examples (Fig. 4 and Fig. 5):
– Central Park to Islip (central Long Island): 1 inch every 3.4 miles.
– In Worcester County, MA (west of Boston): 1 inch every 0.82 miles.
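The gradient arithmetic behind figures like these is simple division: the difference in storm totals between two stations divided by the distance between them. A minimal sketch follows; the station totals used here are illustrative placeholders, not the official observations, and only the resulting “miles per inch” figure mirrors the Central Park–Islip number above.

```python
def snowfall_gradient(snow_a_in, snow_b_in, distance_mi):
    """Return (inches of snow per mile, miles per one-inch change)
    between two stations separated by distance_mi miles."""
    diff = abs(snow_a_in - snow_b_in)
    if diff == 0:
        raise ValueError("no gradient: both totals are equal")
    return diff / distance_mi, distance_mi / diff

# Hypothetical example: a 15-inch difference in storm totals over 51 miles
inches_per_mile, miles_per_inch = snowfall_gradient(10.0, 25.0, 51.0)
print(round(miles_per_inch, 1))  # -> 3.4, i.e., 1 inch every 3.4 miles
```

Expressed this way, the Worcester County figure (1 inch every 0.82 miles) is roughly four times as sharp a gradient as the Central Park–Islip one, which is why small storm-track shifts matter so much on a storm’s western edge.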
Thus, had the 50- to 100-mile shift in storm center position not occurred, New York City would have been buried in snow. Had the storm shifted an additional 50 miles to the east, and had the heavy band accompanied it, Boston could have received less than a foot of snow.
There are many ways to express the uncertainty involved with the myriad of factors mentioned earlier. Graphics with text overlays and more descriptive weather statements can help. However, the thinking of the NWS, as evidenced in a briefing on Jan. 27, 2015, at which Dr. Louis Uccellini, the NWS Director, spoke, seems to be to add more probabilistic information.
Similarly, Jason Samenow, writing for the Capital Weather Gang (Washington Post), suggested that forecasts needed to involve a range of probabilities, rather than the worst-case scenario.
It is here that I have to disagree.
Airlines, school systems, agencies responsible for snow removal and others need to make plans in advance. Forecasters must convey their best assessment of the situation. A few words, like “…the expected snowfall gradient on the western side of the storm will be very intense. Small shifts in storm movement can cause significant changes to the forecast in these areas,” would be preferable to “there is a 20% chance of more than 18 inches of snow, but an 80% chance we’ll get at least 6 inches.” The statistical measure may be fine for emergency managers and scientists. For John and Jane Q. Public, and many others, however, I believe that the outcry on a future storm would likely be deafening, because no one would truly understand the math/statistics.
This is because the public knows little about probabilities (let alone many levels of grouped probabilities), and they seem to lack an understanding of basic weather principles, as well. One reason for this could be that weather is not often taught in schools nationwide past fifth grade. TV meteorologists, NWS web sites and credible online blogs can help, but not overcome, this shortfall.
Instead, what we need to do is bring weather and climate back into school curricula so that kids can again educate their parents about what we all face daily. We can easily use weather to teach physics, chemistry, decision-making, math, statistics, communication and, my favorite – “thinking.”
Then, as a society, we can turn from condemnation (ah, so easy) to helping correct situations and procedures, improve learning and more.
This January 2015 snowstorm and its fallout will remain newsworthy for a while. Hopefully, a meaningful dialogue and some solid ideas for improving public understanding and making forecasts more informative will ensue.
Meanwhile, the snowstorm finally wound down across eastern Massachusetts, Rhode Island and Long Island late on Jan. 27, 2015, and over Maine the next day. In these areas, there actually were reports of “historic” snowfalls (Fig. 6 and Fig. 7). Some of these included:
Worcester, MA – (storm total) 34.5 inches versus 33.0 inches (1997)
Providence, RI – (storm total) 19.1 inches, the fourth greatest snowfall on record
Boston, MA – (storm total) 24.6 inches, the sixth greatest snowfall on record
Portland, ME – (storm total) 23.8 inches, the fourth greatest snowfall on record
Note that records for Worcester have been kept since at least 1883 and for Portland, ME since 1882. Thus, this snowfall broke records that dated back more than 130 years!
Worcester, MA – record daily snowfall (Jan. 27) 31.9 inches versus 11.0 inches (2011)
Boston, MA – record daily snowfall (Jan. 27) 24.4 inches versus 8.8 inches (2011); also snowiest January day on record
Providence, RI – record daily snowfall (Jan. 27) 16.0 inches versus 6.7 inches (2011)
Bangor, ME – record daily snowfall (Jan. 27) 13.3 inches versus 10.8 inches (1963)
Islip, NY – record daily snowfall (Jan. 27) 7.5 inches versus 4.5 inches (1987)
JFK Airport, NY – record daily snowfall (Jan. 27) 5.6 inches versus 4.3 inches (2011)
In addition, Boston, Hyannis and Nantucket, MA (and nearby areas) had between 9 and 13 hours of blizzard conditions (near-zero visibility due to snow and blowing snow, and high winds).
All of the above showcase a rare and historic event for the area.
There’s much more to be said about all of the aspects of the “Blizzard of 2015” and its forecasting fallout. However, apologies by NWS and other forecasters were not needed. Forecasters did a very credible job overall and had but one minor miscue. Yes, New York City, with its millions of residents, got less snow than was advertised. However, people (and the media) would still have found fault with something, even if the forecast had been “perfect.”
So, I’d like to close by telling a story about Joseph Strub, a meteorologist-in-charge of the Minneapolis NWS office prior to 1980.
One day, Minneapolis awoke to 6 inches of “partly cloudy.” The news media came into the office with microphones and video cameras at the ready. “Can you explain why you screwed up so badly?” one reporter asked.
Strub replied, “We missed it. But, my forecasters and I are working on the next storm system heading our way. Do you have any other questions?”
The media left the office quickly and quietly. There was no cover-up, just the truth.
Years later, while working in the Fort Worth, TX NWS forecast office, I had a similar experience. The Dallas Morning News called and wanted to know why our forecasts were so bad for the month. I volunteered to look into the matter and get back to the reporter. He was surprised that I would be willing to do so, but accepted the fact that I would call him back within the hour (he obviously had my phone number).
When we spoke an hour later, I admitted that our errors were large (but small compared to the computer guidance values). After I described the forecast process to him and how data was limited in parts of Texas, he went back to his desk and reported, not about the large temperature errors, but rather, about the problems involved in forecasting in a region with high temperature and moisture variability and less than needed data sources.
The NWS Boston forecast office followed Strub’s lead on the morning of Jan. 28, 2015 (Fig. 8). Their Facebook post talked about the next weather system heading toward the Boston area. In fact, longer-term computer models and human forecasts suggest a series of storms (not as strong as this one) en route to the Washington, DC – Boston, MA corridor in the ensuing 10 days.
Anyone, meteorologists included, can always improve upon what he/she does. And lessons learned have gotten meteorology to where it is today. But throwing stones and complaining about forecast errors is not the way to move forward.
Rather, I suggest New Yorkers, some in the media and others get their hands on a well-written basic weather book and learn something about what forecasting involves. And if the book doesn’t answer the questions, then I (and I know others who) would be willing to help improve the state of weather literacy to anyone who asks. Readers can contact me here or by posting comments online at any of my social media pages.
From Steve Tracton’s Facebook posting on Jan. 26, 2015
“Imminent blizzard of historic proportions predicted with seemingly total (100%) certainty to bury cities from Philadelphia, New York, Boston, Portland, etc. Rarely does one hear forecasts of snowstorms described with complete confidence being of historic, disastrous, life threatening, unprecedented, massive, etc. proportions even when only 24-36 hours in advance – not even with “historic…” preceded by likely, probably, potentially, etc.
I have no reason outright (with one caveat, below) to question the predictions other than there is long history of forecasts with comparable levels of hype – even at short ranges – becoming historic busts with forecasters eating “humble pie” and blaming it on the models. Just as we’ve had (far too) many “surprise snowstorms”, i.e., not (or grossly under) predicted storms, I’ve referred to the busts as surprise “no snowstorms”.
We’ll soon know whether the current predictions are on the mark or not. I’m hoping for this being a “big one” – even though DC is missing out – if for no other reason it marks the tremendous improvements made over the last few years in computer models/strategies, as well as the skill, expertise, and judgment of professional meteorologists within and beyond the National Weather Service (NWS).
The caveat mentioned above is the forecasts are predicated upon redevelopment of a “clipper” system over the Midwest with this secondary storm undergoing “bombogenesis” (rapid intensification) with an abundant moisture supply. Relatively small differences (errors) in the position and track of the low can be critical with the actual amounts and geographical distribution of snow (snow bands, for example) and winds contributing to drifting. I raise this as just one possibility but one that reduces the level of confidence (uncertainty) to something less than 100%. As I’ve often said, “the only certainty in weather predictions is uncertainty”, which varies from one cast to the next.”
© 2015, 2016 H. Michael Mogil
Originally posted 1/29/15; reposted 1/22/16