Welcome

Welcome to the RTMA/URMA VLab community!

The purpose of this community is to facilitate feedback and discussion on the RTMA/URMA system. 

Meeting notes are available under the Google Drive Folder linked above.

To learn more about our next upgrade, see the asset publication below.

Use the System Overview to learn more about the system in general.

Use the forum to ask questions about the system and join the discussion with other users and the development team. 

Note that there are two forums: one for precipitation issues and one for all other variables.

You can post to the precip issues forum by sending an email to qpe.rtma.urma.feedback.vlab@noaa.gov.  For all other issues, you can post by sending an email to rtma.feedback.vlab@noaa.gov.  Please note that you must have a user account to post to the forum.  If you do not have an account, please contact matthew.t.morris@noaa.gov.

We recently added the ability for NWS Regional or WFO personnel to request that stations be removed from the analysis.  To access this, click on the "Station Reject Lists and Requests" tab.

There has been recent interest in knowing exact station locations, especially those of METAR sites.  Our METAR information table is under the "METAR Location Info" tab.

Users may also be interested in the National Blend of Models VLab community.

We appreciate any feedback on how this page or community could be improved.  You can submit such feedback via the email addresses above or the forum.

 

What's New

December 2017 Implementation Summary

Document

Overview of upgrade scheduled for December 2017. Note that this was originally scheduled for October 2017, but has been pushed back due to technical issues.

Forums


Analysis Uncertainty

Ryan Leach, modified 7 years ago.

Analysis Uncertainty

Youngling | Posts: 18 | Join Date: 1/19/16

Background

I'm posting just to try to start a discussion about uncertainty fields in the URMA. We're still using the Obs database, but look forward to switching to URMA since it is much less work for us to maintain. Many forecasters in the office object to the URMA when they see the big differences between the Obs database and the URMA. And then if they find the uncertainty grids in URMA, they balk at it. How can temperature uncertainty be 2-3 degrees (as it ALWAYS is) when the observation and grid differ by 10 degrees at times? Of course I agree completely with that point. But I also point out that we have no idea what the uncertainty of the Obs database analysis is, since it doesn't have this field at all.

 

I think good uncertainty fields would be a big help in selling the URMA to the rest of the forecast staff, and I think it would make grid verification more useful. For example, the current de facto standard for temperature forecasts in BOIVerify is to be within 3 degrees, but in reality, several analysis grid points may have much larger uncertainty. I would argue that a forecast error near or below the analysis uncertainty is a hit, even if that error is 7 degrees. Along those lines, I started thinking about uncertainty in our grids, and I came up with a few sources of uncertainty:

  • distance from observations that affect that point
  • sensor accuracy, roughly ±2 degrees for ASOS temperatures, I think
  • terrain variation within a grid cell of the analysis

As I've looked in depth at terrain variation in my CWA (MSO) before, I thought I would take a closer look at that aspect of uncertainty alone.

Methods

First I downloaded the 30m elevation data from the USGS, then converted it to a text format that I could read in Python. I wrote a procedure to read the elevation for each 30m topo data point and assign it to one of my GFE grid points. From that I was able to calculate the maximum, minimum, range, average, and standard deviation of hi-res topography values that correspond to each GFE grid point in my CWA.
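For anyone curious, here is a rough Python sketch of that binning step (my own illustration, not the actual GFE procedure; `latlon_to_grid_index` is a placeholder for whatever maps a lat/lon pair onto a 2.5 km GFE grid cell):

```python
import numpy as np

def grid_cell_stats(dem_lat, dem_lon, dem_elev, latlon_to_grid_index):
    """Bin 30m DEM points into GFE grid cells and compute per-cell elevation stats."""
    cells = {}  # (row, col) -> list of 30m elevations falling in that cell
    for lat, lon, elev in zip(dem_lat, dem_lon, dem_elev):
        idx = latlon_to_grid_index(lat, lon)  # placeholder mapping function
        if idx is not None:                   # None means the point falls outside the CWA
            cells.setdefault(idx, []).append(elev)

    stats = {}
    for idx, elevs in cells.items():
        e = np.asarray(elevs, dtype=float)
        stats[idx] = {"max": e.max(), "min": e.min(), "range": e.max() - e.min(),
                      "mean": e.mean(), "std": e.std()}
    return stats
```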

There were roughly 8,400 30m elevation points per 2.5 x 2.5 km grid cell in GFE. The count wasn't the same for each cell due to differences in the map projections. It took well over 4 hours to ingest and analyze all the data points in GFE.

Next, I assumed an average lapse rate (-3.5F / 1000ft), a well-mixed lapse rate (-5.5F / 1000ft), and a strong inversion (+10F / 1000ft) to calculate the variation of temperature across a grid point due solely to the varied elevation across it.
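The lapse-rate step is just a scaling of each cell's elevation spread; a minimal sketch (rates in degrees F per 1,000 ft as above, elevation stats assumed to be in feet):

```python
LAPSE_RATES_F_PER_KFT = {"average": -3.5, "well_mixed": -5.5, "strong_inversion": +10.0}

def temperature_spread(elev_range_ft, elev_std_ft, lapse_rate_f_per_kft):
    """Temperature range and standard deviation implied by a cell's terrain spread."""
    t_range = abs(lapse_rate_f_per_kft) * elev_range_ft / 1000.0
    t_std = abs(lapse_rate_f_per_kft) * elev_std_ft / 1000.0
    return t_range, t_std

# Example: 2,000 ft of relief under the well-mixed rate implies ~11 F of range.
print(temperature_spread(2000.0, 450.0, LAPSE_RATES_F_PER_KFT["well_mixed"]))
```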

Results

Topography

The first image below is the range (max-min) of elevations for each grid point in the MSO CWA. The next image (using the same color scale) is the standard deviation of elevation values for each grid point.

Range

Standard Deviation

 

The graphic above shows the percent of grid points with an elevation range greater than or equal to the x-axis values. So 79% of grid cells in the MSO CWA (not the whole image, just the MSO CWA portion) cover an area with an elevation range greater than 1,000 feet! Over a third of the area has grid cells with an elevation range over 2,000 feet.
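For reference, that curve is just an exceedance percentage over the per-cell elevation ranges; a quick sketch, with `ranges_ft` standing in for the values computed above:

```python
import numpy as np

def exceedance_percent(ranges_ft, thresholds_ft):
    """Percent of grid cells whose elevation range meets or exceeds each threshold."""
    r = np.asarray(ranges_ft, dtype=float)
    return [100.0 * np.mean(r >= t) for t in thresholds_ft]

# e.g. exceedance_percent(ranges_ft, [500, 1000, 1500, 2000, 2500])
```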

Average lapse rate

Assuming an average lapse rate of -3.5F per 1,000 feet, the range (and standard deviation) of temperature values by grid cell is shown below. The pixels in yellow have a range of 5-10 degrees of variation across a grid cell, but note that the standard deviation is much less. Gray areas have less than 3 degrees of variation, and green areas have 3-5 degrees.

Range

Standard Deviation

Well mixed, dry adiabatic lapse rate

Assuming a lapse rate of -5.5F per 1,000 feet, the range (and standard deviation) of temperature values by grid cell is shown below. The pixels in yellow have a range of 5-10 degrees of variation across a grid cell, and the red pixels have more than 10 degrees of variation across the grid cell.

Range

Standard Deviation

Strong inversion

Assuming a lapse rate of +10F per 1,000 feet, the range (and standard deviation) of temperature values by grid cell is shown below. Strong inversions like this frequently occur in mountain cold pool areas of the west, especially in the winter.

Range

Standard Deviation

Discussion

So this only looked at one aspect of uncertainty; adding the other sources mentioned above would make the uncertainty greater, as would considering the uncertainty associated with the lapse rates used in any such analysis. This was a pretty simplistic analysis, and I'm hoping someone with much better statistics can comment as well.

The uncertainty associated with the terrain variations across a grid point could be considered a source of uncertainty that is intrinsic to the analysis grid. In this case, I wonder if this could be utilized during the analysis process as one consideration for how strongly to weight an observation compared to the background. For instance, most of our ASOS stations are associated with grid points that have elevation ranges such that the standard deviation of temperature across the grid cell for all cases above is 0-2 degrees. I would hope that this, combined with the quality of the ASOS data, would cause them to be weighted very heavily. On the other hand, SNOTEL and RAWS weather stations are often located on slopes or in very hilly areas and might be weighted less.
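To make that idea concrete, here is a toy single-point weighting example. This is only my illustration of the concept, not how the RTMA/URMA actually assimilates observations, and the numbers are made up: a representativeness term derived from the sub-grid terrain spread inflates the observation error, so stations in smooth cells pull the analysis harder than stations in rough cells.

```python
def analysis_increment(background, obs, sigma_b, sigma_o_instrument, sigma_repr):
    """Scalar update where a terrain representativeness term inflates the obs error."""
    sigma_o2 = sigma_o_instrument**2 + sigma_repr**2   # inflated observation error variance
    weight = sigma_b**2 / (sigma_b**2 + sigma_o2)      # gain between 0 and 1
    return background + weight * (obs - background), weight

# Smooth ASOS-like cell (repr ~0.5 F) vs rough RAWS-like cell (repr ~4 F):
print(analysis_increment(40.0, 50.0, 3.0, 2.0, 0.5))  # ob pulls the analysis strongly
print(analysis_increment(40.0, 50.0, 3.0, 2.0, 4.0))  # ob is weighted much less
```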

 

Thanks for reading,

 

-Ryan

Ryan Leach, modified 7 years ago.

RE: Analysis Uncertainty

Youngling | Posts: 18 | Join Date: 1/19/16
Images actually show up (for me at least) if you log into VLAB, but they aren't working in my e-mail.
 
Ryan NLeach
Senior Meteorologist, IMET
National Weather Service, Missoula
Jacob Carley, modified 7 years ago.

RE: Analysis Uncertainty

Youngling | Posts: 69 | Join Date: 12/17/14
Hi Ryan,

Thanks very much for starting this discussion!

The analysis uncertainty field is a challenge to derive.  We are operating from a standpoint of never knowing the true state, but we do our best to estimate what that true state is (i.e., the analysis).  So assessing the errors in the analysis is inherently difficult, since there is no truth against which to compare.  However, we can provide an estimate.  I'll give a general overview of some important points here:
  • The RTMA/URMA is a 2DVar analysis algorithm and uses climatologically based background error variances.  These errors do not change with the flow.  The background error will be the same for a land-falling tropical storm as it will be under a prevailing ridge.  In other words, the background error is static.
  • To actually estimate the analysis error we use byproducts of the 2DVar analysis algorithm - a process leveraging quite a bit of linear algebra.  The analysis uncertainty also has the background error as an upper limit.
    • The estimate of the analysis uncertainty depicts patterns in the reduction of error quite well, but the magnitude of this reduction is often much too small (i.e. the analysis uncertainty is often too large).
  • Since the background error is static, the analysis error field reflects observation density.  Where we have observations we will have a reduction in the analysis uncertainty (i.e. a more certain analysis).
  • In data sparse or data void regions the analysis error is determined by the background error.
So - why does the analysis uncertainty field generally look the same or similar from one time to the next? Well, the background error is static (first bullet point) and the observations are typically stationary.  As a result, the analysis uncertainty fields are usually quite similar from one cycle to the next.
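A toy scalar analog of those bullets (not the operational 2DVar code, just the idea): with a static background error, the analysis error collapses toward the observation error where an observation exists and stays at the background error where there is none, which is why the field mirrors observation density.

```python
def analysis_error_std(sigma_b, sigma_o=None):
    """Scalar analysis error std dev; sigma_o=None represents a data void."""
    if sigma_o is None:
        return sigma_b                                   # data void: background error only
    var_a = 1.0 / (1.0 / sigma_b**2 + 1.0 / sigma_o**2)  # error variances combine inversely
    return var_a**0.5

print(analysis_error_std(3.0))       # no ob  -> 3.0 (static background error)
print(analysis_error_std(3.0, 2.0))  # one ob -> ~1.7, a more certain analysis
```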

What's next? We need to improve the background error covariance.  One promising way to do this is through ensemble data assimilation methods where we can use the ensemble to estimate the background error.  The background error will evolve along with the ensemble forecast, thus adding flow dependence in the analysis.  We'll also have an ensemble of analyses available to us that we can use to provide a new estimate of the analysis uncertainty.
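As a rough sketch of what that ensemble estimate looks like in principle (my simplification, not the planned implementation), the flow-dependent background error at each grid point can be taken from the spread of an ensemble of first-guess fields:

```python
import numpy as np

def ensemble_background_error(members):
    """Per-gridpoint background error std dev estimated from ensemble spread.

    `members` is assumed to be shaped (n_members, ny, nx), e.g. an ensemble of
    2 m temperature first-guess fields valid at the analysis time.
    """
    return np.std(members, axis=0, ddof=1)

# Demo with a fake 20-member ensemble on a small grid:
rng = np.random.default_rng(0)
fake_members = 40.0 + rng.normal(scale=2.0, size=(20, 4, 4))
print(ensemble_background_error(fake_members).round(2))
```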

Getting to this point will take a few years, but this is the planned path forward as we advance the system (esp. towards three dimensions).

In the meantime, we have begun developing methods to build in some flow-dependence that should manifest as small, but meaningful improvements in the analysis uncertainty.  You should see this in complex terrain with the temperature field in the v2.7 RTMA/URMA system currently under science evaluation.

Thanks!
Jacob



Ryan Leach, modified 7 years ago.

RE: Analysis Uncertainty

Youngling | Posts: 18 | Join Date: 1/19/16

Jacob,

Thanks for that explanation. I do think we are confounding two different ideas. One is error, and the other is uncertainty.  The error is what you discussed above, and I'm glad to hear it is an area of active development.

But there is more to the uncertainty than just the error. The representativeness of the 2.5km grid also introduces some uncertainty in the analysis/forecast, especially in areas with a lot of variable terrain. That is mostly what I was discussing above.

I would hope the end result of the analysis would take both into consideration. The current analysis makes no sense when the "uncertainty" or "error" grid is 2-3 degrees but the analysis differs from the observations by 2 or 3 times the specified error of the sensors. 

Also, if the difference between the observations and first guess is high (cold pools come to mind), I think the static error underestimates the actual error. I think that is what you were talking about with the background error covariance.

Thanks,

Ying Lin, modified 7 years ago.

RE: Analysis Uncertainty

Youngling | Posts: 48 | Join Date: 3/27/17
Hi Ryan,

In re "images showing up when logged into VLAB, but not in email":  are you using Thunderbird mail in Linux?  I have this set-up, and cannot view images in Thunderbird mail.  I asked VLAB helpdesk about this back in Jan and they suggested to 1) make sure set up has View>Message Body As>Original HTML and 2) having the latest Thunderbird.  Neither solved my problem, but I found two work-arounds, in addition to logging into VLAB, as you've found: 1) view the images through email on phone/tablet (works in iOS, at least) 2) view the images through web interface of gmail, rather than Thunderbird mail.

Ying



-- 
Ying Lin
NCEP/EMC/Verification, Post-processing and Product Generation Branch
NCWCP Cubicle No. 2015
Ying.Lin@noaa.gov


Jeffrey Craven, modified 7 years ago.

RE: Analysis Uncertainty

Youngling | Posts: 90 | Join Date: 9/24/12
Ryan, an interesting comment.

I had a little trouble following along because of the images being in later e-mails, but I wanted to follow up to see what your end-game recommendation is.

Do you feel, after going through this exercise, that having an SD of 10 degrees or higher in complex terrain is valid?

What are the next steps here?

Thanks for taking the time to post about this.

JPC

Jeff Craven
Chief, Statistical Modeling Branch
National Weather Service, W/STI-12
Meteorological Development Laboratory (MDL)
Room 10410, SSMC2
Silver Spring, MD 20910
(301) 427-9475 office
(816) 506-9783 cell/text
@jpcstorm


Ryan Leach, modified 7 years ago.

RE: Analysis Uncertainty

Youngling | Posts: 18 | Join Date: 1/19/16

> I had a little trouble following along because of the images being in later e-mails...

Yes, the forum software has been a challenge!

 

> Do you feel after going through this exercise that having SD of 10 degrees or higher in complex terrain is valid?  

Sometimes, yes.  My gut feeling is that 5-10 degrees in complex terrain far from an observation under steep lapse rates (inverted or not) would be reasonable. Part of the uncertainty is the odds of finding an observation anywhere in that grid cell that differs from the analysis by a given amount. If we had good lapse rate information, we could reduce the uncertainty by comparing the elevation of the grid cell and the elevation of the actual observation. But we don't have that lapse rate information in the analysis, and if we did it would have its own uncertainty.
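Something like this sketch is what I have in mind (purely hypothetical; the lapse rate and its uncertainty are made-up numbers):

```python
def adjust_ob_to_cell(ob_temp_f, ob_elev_ft, cell_elev_ft,
                      lapse_f_per_kft=-3.5, lapse_uncert_f_per_kft=2.0):
    """Shift an ob to the cell's mean elevation; return the value and the added uncertainty."""
    dz_kft = (cell_elev_ft - ob_elev_ft) / 1000.0
    adjusted = ob_temp_f + lapse_f_per_kft * dz_kft
    extra_uncert = abs(lapse_uncert_f_per_kft * dz_kft)  # grows with the elevation gap
    return adjusted, extra_uncert

# Ob at 3,200 ft, cell mean elevation 5,200 ft:
print(adjust_ob_to_cell(55.0, 3200.0, 5200.0))  # -> (48.0, 4.0)
```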

 

> What are the next steps here?

I really don't know. I think it will take people a lot smarter than me to figure this out. I understand it won't ever be perfect; I'm just looking for something more reasonable.

Bill Schneider, modified 7 years ago.

RE: Analysis Uncertainty

Youngling | Posts: 8 | Join Date: 9/24/12

Ryan,

Very interesting post. I have always "known" about the problems of representing the analysis and forecast on a 2.5 km grid, but this is some good quantification of the issues. The fact is that a 2.5 km grid is just not fine enough to accurately forecast or analyze the weather in complex terrain over much of the west. A solution would be to redevelop the GFE/RTMA/URMA/NBM to use an irregular grid, much as is done in the marine modeling community. With an irregular grid you can represent complex terrain at much higher resolution where it is needed and at much lower resolution in flat valleys where the variation in parameters is typically much less.

One of the reasons we are able to get away with the coarse 2.5 km grid in the west is that the grid boxes where the terrain is so steep that there is a 3,000 ft variation in elevation are also too steep for habitation, so they just don't receive the same scrutiny that, say, the city of Missoula would.  However, these issues of terrain variation and accuracy do get raised and are important, for example, where people use the point-and-click forecast for a ski area: because of the elevation difference in the grid box there is no way to accurately represent either the analysis or the forecast, and we get lots of calls about why it is snowing at the ski area when the point-and-click forecast says rain.

One thing I would like to see is MDL concentrating on improving the quality of the RTMA/URMA/NBM products and addressing issues such as this vs adding more and more parameters. I know MDL is working both ends of the problem, and I know there is probably pressure to continue to add every parameter that is important to someone, but quality is more important than quantity at this phase of the game. 

Bill

Steven Levine, modified 7 years ago.

RE: Analysis Uncertainty

Youngling | Posts: 174 | Join Date: 11/13/14
Bill,

I completely agree on the issue you mention of more fields vs. better fields.  We have a list of recommendations that came from our SOO-DOH working group (names are on the front VLab page), and we try to use that to drive development.  I also point out to Ryan that one of the items is an improved uncertainty analysis.

The upgrade we are going through now (thankfully) focuses more on improvements than additions.  In particular, with this upgrade there is a change with temperature that should help out quite a bit in the terrain you all have in the west.

The irregular grid idea is an interesting one, and I think would work in concept/theory.  There are also areas in the eastern US (Appalachians) where a finer grid would be helpful.  However, we are to some extent 'stuck' with the current NDFD grid for the time being.  

It is always worth knowing that the public thinks almost exclusively in point forecasts, not across grids; and frankly I don't think there is any level of education or outreach that can change that.  We try to stay aware of that as we continue development.  

Steve



Ryan Leach, modified 7 years ago.

RE: Analysis Uncertainty

Youngling | Posts: 18 | Join Date: 1/19/16

Thanks for responding Bill. 

I'm thinking about it more from the verification perspective. What does it mean if we have an MAE of 2 degrees? I think if our error compared to the analysis is on par with the uncertainty, we're doing good! I would rather look at a BOIVerify grid of mean z-score ((fcst - analysis) / std) than just MAE. I think it puts things into the context of what is possible. 
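Roughly what I mean, as a sketch (not BOIVerify internals; the field names are placeholders):

```python
import numpy as np

def mean_abs_zscore(forecast, analysis, analysis_uncert):
    """Mean |(fcst - analysis) / uncertainty| over the grid."""
    z = (forecast - analysis) / np.maximum(analysis_uncert, 0.1)  # guard against zero uncertainty
    return float(np.mean(np.abs(z)))

# A 7-degree miss where the uncertainty is 8 degrees (z ~ 0.9) looks better than
# a 4-degree miss where the uncertainty is 2 degrees (z = 2.0).
```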

You have a good point about steep terrain not being habitable. But we have a lot of RAWS stations in steep terrain because the fire and land managers do care about the weather out there. Roads and rail also go through a lot of very steep places.

Brian Miretzky, modified 7 years ago.

RE: Analysis Uncertainty

Youngling | Posts: 47 | Join Date: 1/8/13
Hi Ryan and all,

There is a misunderstanding of what Jacob (RTMA/URMA Lead) wrote. The uncertainty field is meant to include all possible sources of error. It is a work in progress, and I encourage all to take a look at the VLab community and wiki for more description. The RTMA Good Enough group also made recommendations on improvements to the uncertainty analysis. As far as next steps, it will be the RTMA/URMA developers and STI/AFS who will continue to work on this, as they already have been. This type of feedback drives the process and is great to have and very useful.

Thanks,

Brian Miretzky
ERH SSD


Jacob Carley, modified 7 years ago.

RE: Analysis Uncertainty

Youngling | Posts: 69 | Join Date: 12/17/14
Hi Brian,

> There is a misunderstanding of what Jacob (RTMA/URMA Lead) wrote. The uncertainty field is meant to include all possible sources of error.

Well put - and thanks very much for clarifying this.

> This type of feedback drives this process and is great to have and very useful.

Absolutely!  This feedback is incredibly useful for us RTMA/URMA developers.

Ryan, to your point here:

> I think if our error compared to the analysis is on par with the uncertainty, we're doing good!

Yes - absolutely.  There is great potential for exactly this sort of application.  And we (the RTMA/URMA team) certainly understand there is healthy room for improvement in how we estimate the analysis uncertainty.  The outlook for such improvements is optimistic over the next couple of years through the incorporation of ensemble information and further refinements to the background error.

Thanks,
Jacob


Bookmarks
  • 2011 RTMA Paper (Weather and Forecasting)

    The most recent peer-reviewed paper on the RTMA. Published in Weather and Forecasting in 2011.
    7 Visits
  • Public RTMA/URMA Viewer

    Another viewer of the current RTMA/URMA, with an archive going back 24 hours. This version is open to the public, but does not contain information about the (many) restricted obs used.
    54 Visits
  • RAP downscaling conference preprint (23rd IIPS)

    This link is to a presentation from the (then) RUC group on how the downscaling process works. Although we now use the RAP, HRRR, and NAM, the logic of the downscaling code is mostly unchanged from this point.
    2 Visits