MCHB Conference Webcasts
2005 EMSC Annual Grantee Meeting
April 12-13, 2005
SHARRIE McINTOSH: Good morning, everybody.
UNIDENTIFIED SPEAKER: Good morning.
SHARRIE McINTOSH: Thank you for that. I just want to invite the three grantees up to the stage who served as the beta test grantees for the performance measures. If you all could make your way to the stage here. So while they do that, I'll go ahead and introduce them. We have Mike Merrill, who is the EMSC program director in Colorado. And we don't have Melody?
Okay. And then we have, next to Mike, we have Evelyn Lyons, EMSC manager from the Illinois EMSC program; and finally, last but not least, Janet Houston, the director for the EMSC project in New Hampshire.
And so just to again reintroduce myself, Sharrie McIntosh from Luwen, the Luwen Group, and we've been working with the Resource Center and with HRSA over the past year and a half to help develop these performance measures. And I just want to say that it has been a really collaborative process. Both with the center, the Resource Center, with HRSA, and especially with the grantees who were involved in the meeting back in March, and then with the beta test grantees that are on the stage.
The driving force behind the performance measures was really to provide HRSA as well as the resource center a way to tell your story about what you've been able to accomplish and achieve with the EMSC program. So Tina has been talking about how it's all about demonstrating results and accountability. So performance measures are indeed an important way to get you there.
And finally, performance measures are important as a way to look at the impact that your program is having on improving emergency care for pediatric populations. So it really is part of telling your story, engaging different stakeholders and letting them know what your impact has been, and then showing that you've been able to demonstrate results. So it's all key to showing improvements.
So what I'd like to do is just see if I can get this off the screen here. What do I do?
UNIDENTIFIED SPEAKER: Escape.
SHARRIE McINTOSH: Escape. When in doubt, press escape. So what I'd like to do is just go over the measures. Tina alluded to them a little bit and talked about the subcomponents of the three measures. So if you look in your packet of materials, you actually have a handout that includes the listing of the three measures with the various subcomponents, and that's a one‑pager, and you have attached to that some of the information that we got from the site visits with the three grantees. I just want to talk a little bit in detail about the performance measures before we actually dive into the presentation.
So the first performance measure, does everybody have that handout in front of them? Okay. So the first performance measure as Tina said gets at operational capacity. So the ability of the state or territory to ensure operational capacity. And so when we were working with the center and HRSA and the grantees about what operational capacity means, we actually came up with four different components.
So I just want to go through the components, because this will help you kind of understand what we did during the site visit and some of the results we got from the site visits. So one of the components is looking at off line and on line medical direction so that prehospital agencies and providers receive on line and off line medical direction. So that's one element of operational capacity.
Then we're also looking at prehospital providers having essential equipment and supplies on their vehicle, on their ambulance. The third component, again looking at operational capacity, is that there's some kind of system in place for designating facilities that are equipped to stabilize and manage pediatric populations. So again, another indicator of operational capacity.
And, finally, looking at whether hospitals and other providers have interfacility transfer agreements in place. So those four elements are what we are calling operational capacity. So just to give you a sense of what we're actually measuring or asking you to provide information on.
The second measure doesn't have any subcomponents, and it's the measure that looks at whether the recertification process for paramedics has some training or course work around pediatric emergency care built into it. So that's the second measure.
And the third measure is looking at the state's ability to establish permanence of the EMSC program within the EMS system. And again this is looking at the stability of the program, the longevity of the program. So that's what we mean by permanence. And for this measure we looked at, I think, five different subcomponents, it's kind of grown and shrunk as we've refined these measures but we're now at five different subcomponents. One of the subcomponents looks at the establishment of an EMSC advisory committee so that's one way to get yourself entrenched within the larger system.
The other component looks at pediatric representation on various EMSC committees or boards. So making sure that you have a voice at that table.
The other component looks at FTEs and whether you have FTEs that are partially or completely funded to support the EMSC program.
The second-to-last component looks at legislation and mandates: whether there's a process in place that you develop to integrate pediatric issues within existing statutes, legislation, and regulation.
And finally is looking at the ability of the state to collect information or data on emergency care provided to pediatric populations. All right. So we say there are three measures, and technically there are. But there are lots of subcomponents to each of the measures. I want to make sure I point that out to you before I launch into the presentation.
What I want to focus on during this presentation is the site visits that we did to the three beta test grantees. So once we developed these measures, an important next step was to figure out how to move forward in terms of implementing these measures. So the three grantees here, happily, can I say happily, accepted our invitation to serve as beta test grantees, and I just want to give you an overview of the beta test process, what was covered, what things we tried to get out of those site visits, and then talk about some of the top-level findings from the beta test site visits, and again your handout has more of the specific detail. And then we also want to have a panel discussion with the grantees here. So to give you an opportunity to ask them questions, ask us questions, ask HRSA, the resource center, everybody that's represented here. But give them an opportunity to talk about their experience as a beta test site and actually thinking about implementing these performance measures.
All right. So the purpose of the beta test. As I mentioned, the main goal of the beta test was to understand the process for collecting, analyzing and reporting data for each of the measures. So it was really important for us to understand now that we have these measures how are they going to work in the real world. What's the feasibility of actually gathering information for these measures and then reporting that information up to HRSA.
The other important goal of the site visit, of the beta test, was to develop some kind of implementation manual. One of the things I think Mike and Janet are going to be talking with you about later on is about some of the TA activities that they'll be assisting you on. One of the activities is this implementation manual that will clearly articulate what are some of the important steps that you would take to implement the measures and then highlight some potential data sources that we have learned through our site visits with the three grantees.
And then also to identify particular technical assistance needs. So we heard a lot from the three grantees about what NEDARC, what NRC, what HRSA, what Luwen could do to help facilitate the performance measures. That was obviously useful information for us to hear.
All right. So we conducted the three site visits to the three grantees. And we spent a little bit under a day and a half at each of the sites. And we tried to, obviously, meet with the EMSC leadership there, but also the EMS and trauma leadership, different partner agencies who would be involved in gathering information around some of the measures. Family reps when they were available. I think we had a family rep at the Colorado site visit.
And finally individuals who would be responsible for some of the IT data collection. Obviously that's you know an important part of collecting information on these performance measures.
In selecting the grantees, we tried to select grantees that were different in terms of geographical location, obviously, but also in terms of where they were housed within a larger system. So where the EMSC program resided. We thought it was important to have some variation in that element. And also variation in terms of the kinds of data systems that they had in place or planning on developing, just to get a sense of, you know, grantees, different stages along developing a data collection system.
And also during the site visits we were lucky to have staff from NRC and NEDARC and Tina joined us for one of our site visits as well so they could hear some information and feedback from the grantees as well.
All right. So what was covered during the site visits: We tried to get clarification of the performance measures. So to help really flesh out the terminology and the wording of each of the measures, to make sure that things were clear, concise when possible, and there wasn't any possibility for misinterpretation of the measure. So the grantees really provided some useful feedback on just wordsmithing some of the measures.
Another thing we tried to cover during the site visits was the data collection process. And obviously this was an important thing to address during the site visits. So for those grantees who were set up in terms of having a system in place to report on the measures, we had them walk us through kind of what that process was. Say who was involved, what data sources did you tap into, what staff was involved for collecting, reporting, analyzing the information. And also, a lot of the measures as you look through them may require you to get information from some external agencies, some partner agencies. So it was important to learn about whether good relationships existed with those partner agencies that would facilitate them sharing information with you. So that was something else we tried to discuss during the site visits.
Data analysis and reporting. So again, understanding what staff would be responsible for that, what were some of the limitations of the information that was collected. Grantees talked about, for some of the measures, maybe they're getting information on a sample of prehospital providers, not the full universe. That may be a limitation. To understand what some of the contextual factors were that existed that may have an impact on the information that's reported. And we also asked the grantees just to really offer up some thoughts on different best practices in terms of data collection strategies. So this obviously will be informative to the rest of the grantees and will be information that we feed into the implementation manual.
And we also talked about potential challenges to consider. I think we spent, oh, maybe a few minutes on this, during the site visits. Just kidding. So in terms of what are some potential challenges to collecting information for each measure. It may be that there's a data infrastructure that needs to be kind of developed to actually collect and report information on this measure. It may be that, again, in terms of those external agencies, you have to do some relationship building with some of those partners, especially at the local county level. And also, there are just unique challenges that a particular state faces depending on their political climate, resources, what have you. It's also important for us to hear about some of those potential challenges. And finally, technical assistance needs that would help them respond to the measures.
All right. So in terms of the findings from the beta test site visits, again you have some information in your packet, but I just want to kind of highlight some of the top level findings that we got from the three grantees. In general, I think we walked away feeling as though that the data really was currently available or could be identified to address each of the performance measures. So the grantees that we spoke with did have some way of gathering this information for the three performance measures. So in terms of feasibility, it did seem that these measures were feasible.
And I just want to highlight a few examples. For example, performance measure number one, which gets at operational capacity, two of the sub components, the one that looks at on and off line medical direction, the other one that looks at essential equipment and supplies. For some of the states they have mandates in place around these two items and some of your states may as well. So if there was a mandate in place obviously that helped in terms of having information available to report on that performance measure.
For I think it was performance measure number two that looks at recertification of paramedics and making sure that emergency pediatric education is built into that, again, some states had requirements or mandates in place that got at this measure. So, again, it would be easier for them to collect information on that measure.
One of the points I do want to make, though, is that while grantees did have some of this information, some of it may be paper‑based. So it may be a question of having the information, but then needing to translate it, transfer it into some data collection system, some electronic database, and so that's something that we talked through. So in general the measures we felt were feasible in terms of reporting, gathering the information.
Grantees also identified various strategies for collecting information. So you'll notice in your handout, for each one of those measures and submeasures, grantees identified potential data collection sources.
So, for example, prehospital run reports. Prehospital data collection systems. For those first two components I talked about, off‑line and on‑line medical direction and essential equipment and supplies, some of the states capture that information through their prehospital forms. So there were fields that indicated if a protocol was used. There was a field around treatment authorization and who provided that authorization. So, again, there were some tools in place to capture information about this measure.
Ambulance inspection. The grantees talked about this being a process that they could use again for those first two measures that I just mentioned. So obviously during an initial ambulance inspection, that would be an opportunity to capture whether that ambulance is being compliant with these first two elements. And then when you do your follow‑up site visits it would be another opportunity to verify that the ambulance has the equipment and supplies on board and that they have those protocols in place.
The grantees also talked about, again, as another data collection strategy actually doing site visits. And we talked about ways to leverage existing site visits to gather information around the performance measures. So when the regional coordinators go out to do their site visits, that might be another opportunity to collect information on some of these measures.
And then surveys. Grantees did acknowledge that developing a survey, for example of prehospital agencies or prehospital providers or hospitals, would be a challenge. Surveying is always a challenge in terms of response rate and so on. But it was an option. And they talked about maybe needing some technical assistance from the resource center or NEDARC in this area.
Another I guess data collection strategy related to the facility recognition system. And this is again under performance measure number one, there's a subcomponent that looks at designating facilities for handling pediatric emergency populations. And so some of the states did have, you know, a facility recognition system in place. I think Illinois, they have the three‑tiered system, and this information is captured in that system and tracked. And so for them, you know, they do have this information readily available to report on this measure.
Some other states, they may not have had a designation system for medical, but they did have it for trauma. And as part of that trauma designation system, they could also capture information on pediatric populations. So, again, there were some existing structures in place to help gather information for this particular measure.
We also brainstormed about other potential data collection sources. So JCAHO, for example, as part of their accreditation process, they look at facility recognition and designation systems. They definitely look at interfacility transfer agreements, so that would be another place to get this information.
We talked about other in‑state accreditation or licensing agencies, and leveraging those relationships to gather that information as well. So, again, there's lots of different strategies for getting information for some of these measures. And then the last bullet here relates to performance measure two the education measure. The grantees talked about potentially using the EMT registry, so paramedics obviously are registered with that system. And there is the ability to capture information on whether an individual met their pediatric education requirements.
So, again, just another opportunity, another potential source of data collection.
Another finding from the beta test was to get recommendations from grantees for improving the clarity and utility of the performance measures. And you'll notice, especially for performance measure number three, the grantees talked a lot about making those into scales. Instead of it just being a yes/no response, setting it up in scale format so you can show progression towards the ultimate outcome. So, for example, the facility recognition component, that subcomponent in performance measure number one, that's not an easy thing to do. You know, Illinois kind of shared with us their experience over the years trying to get to the point where they are now with their EDAP system.
So understanding that and acknowledging that, we thought it was important to have a measure that showed what were the first steps, the interim steps that lead up to that ultimate outcome. So then the grantee would be able to indicate, okay, I don't have the facility recognition system in place, but I've set up a task force who's tasked with having this as an important agenda item on the EMSC committee. I've met with the different stakeholders and partner groups who I need to bring to the table around this issue. So there are things that you can do to show progress towards that ultimate performance measure. And again, I think HRSA, the resource center, NEDARC, you know, want this to be not a punitive system, but something that shows progression. We want to show what you're doing and what you're leading up to.
In terms of using a scale, we also talked about, again, for some of the subcomponents in performance measure number three, there's a subcomponent that looks at pediatric representation on EMS committees or boards. So in terms of scale, you look at, okay, you have those folks at the table, but are they a voting member or not? That's important to know. That would show progression.
For the subcomponent that looks at establishing statutes and regulations around pediatric issues, again, the feedback we got from the grantees was that, you know, it just doesn't happen overnight. Just to say whether you have statutes and regs, yes or no, is not useful information. So again, if that could be turned into a scale to show, okay, well, you know, the first step we have accomplished. We've actually, again, met with a task force who is going to try and lobby with whomever they need to lobby with to get these issues addressed, and then you've drafted a bill, and you have these mandates in place. So again, showing some of those precursor activities that would lead up to the ultimate outcome.
Another piece of useful feedback from the grantees was around definitions. We heard time and time again from the grantees: you really need to be clear about how you're defining certain terms. And so that's obviously important. So we got some feedback from them about, for example, defining on-line pediatric medical direction. What does that mean? Does it have to be a pediatrician? Can it be an ER physician, can it be a nurse? So what we decided in that scenario, instead of placing a definition on the grantees, was to have the grantees tell us how they are defining it, because there's so much variation, unique issues especially for rural states.
So, anyway, that was important feedback to get from the grantees.
The next bullet here: we actually added a measure. The subcomponent measure I talked about, establishing an EMSC advisory committee, was something that we added, again, after hearing from the grantees that that was important. We did actually delete some measures, too. So that was always nice. And we did consolidate some measures. So we rolled up and combined a couple measures that kind of got at the same theme. And you'll see that again in your packet.
And finally, we heard back from the grantees about specific technical assistance needs. So they talked about that implementation manual, that's going to be great. Having some information in the grant guidance that really gives some instructions around these measures will be useful. Actually working with NEDARC, as I mentioned before, to help develop some surveys will be important. And another thing they mentioned was to really have some kind of venue for grantees to share best practices around data collection. So that would be useful, just in terms of being responsive to these performance measures.
So what I'd like to do now is to kind of open the dialog here for the three grantees to share their experiences being a beta test site, and also, in terms of potential strategies for data collection, to talk a little bit about that, challenges, and the last bullet, how to achieve buy‑in from different staff and partners on the importance of performance measures. So I think they're each going to take a few minutes; is that right? Why don't we take a few minutes to kind of share, could be related to these bullets or not, but just to share your experience with implementing the measures, and then we're going to try and open it up for Q and A. All right? And that mic, it should be turned on so you can pass it.