MCHB/EPI Miami Conference — December 7 - 9, 2005
Innovative Research on Teen and Unintended Pregnancy Prevention — Transcript
RUTH PETERSON: I'm very impressed you're all here on Friday when the beach is so close. So thank you for being here. And I'm very excited to be here myself to talk about a contraceptive counseling intervention that uses an adaptation of motivational interviewing. We have been working on a randomized controlled trial that we just finished, so we'd like to share some preliminary results with you. Before I go any further, I'd like to point out my project manager, down here faithfully in the second row. Jennifer Albright is with me today, and if there are any questions about the actual nuts and bolts of how we did this, Jennifer will be able to answer them.
Background-wise, we were really motivated to look into a contraceptive counseling intervention using motivational interviewing because we felt there was a missed opportunity for healthcare providers in discussing contraception. I feel that providers are in a unique position to address contraceptive counseling and contraceptive needs, but they often don't do this to an adequate degree.
The literature cites barriers many times: reimbursement, time, and providers being uncomfortable with the topic. More generally, though, a recent meta-analysis that Merry-K Moos did showed that we lack a standardized approach and a proven intervention for delivering contraceptive counseling in clinical settings.
So our objective was to evaluate what we refer to as the WRAP study, the Women's Reproductive Assessment Program. It's a behaviorally based contraceptive counseling intervention, administered by health educators in a primary healthcare setting. The WRAP intervention uses one-on-one counseling between the health educator and the participant, and the focus is really on the effectiveness and consistency of contraceptive use. Overall, you're working with women to reduce their sexual risk-taking behaviors.
Motivational interviewing, and actually Beth will talk some about this because she's using it in her intervention too, is a technique that was developed in substance abuse treatment to address individuals' risk-taking behaviors. We basically adapted it to contraceptive counseling. In our intervention, one of the key things was that we wanted women to work on addressing their barriers to changing their behaviors. We concluded each counseling session with a negotiation of risk reduction steps.
We refer to our adaptation of motivational interviewing as the ESP model, which you'll see here in this slide. The E is for exploring the discrepancies between pregnancy intention and contraceptive use, and also between the risk of STIs and condom use. And this sounds so simple, duh, why doesn't everybody do this? But this really isn't happening in many primary care settings, where providers are not actually asking women about their pregnancy intentions. And it's not as if people want to walk around and get sexually transmitted infections. So if you can establish that there's a discrepancy between what women want and the behaviors they have, I think you've moved a long way down the road toward addressing contraceptive use.
The S, the second phase of our adaptation of motivational interviewing, was sharing information with our participants. And the P, the third component, was promoting risk-reducing behaviors in a negotiation with them. Like I said before, that culminated in the risk reduction steps that women thought about trying to adopt.
The WRAP study itself had an intervention arm and a control arm, and as you see here, this is just a general summary of what happened over the year of follow-up, so people understand when we got our data and when we actually did the counseling sessions. The intervention arm was the only group that received the contraceptive counseling intervention. You'll see that they received it at baseline, and then they had a booster session at two months.
The control arm had contact with the study personnel only for general preventive health counseling; no contraceptive counseling was mentioned. And you'll see we got data from folks at enrollment and then at the two-, eight- and 12-month surveys.
We enrolled women at three primary care settings from March 2003 to September 2005. And you'll see that we had to screen over 4,000 women to find enough eligible women to answer our research question. We had 1,330 refuse, and we had 2,000 women who were not eligible: they needed to be within the age range of 16 to 40, and they needed to be at risk of unintended pregnancy. So we ended up with 737 women enrolled, and you'll see 365 went into the intervention arm and 372 into the control arm.
Our age breakdown, as you see here, is that 41 percent were between the ages of 16 and 25. Our racial and ethnic breakdown is relatively similar to the population of North Carolina, except our Latina recruitment was a little below the North Carolina percentage. And I think that's actually reasonable given the fact that we were not administering the intervention in Spanish. Seventy percent of participants reported sexual activity in the 30 days prior to enrollment, and this becomes important as we move into the results.
One of the screening criteria, obviously, was that you needed to be at risk of unintended pregnancy, so we then tried to characterize pregnancy attitudes among the women enrolled in the study. Consistent with national data elsewhere, 64 percent of women were clear they did not want to be pregnant in the near future, 15 percent were clear they never wanted to be pregnant again, and 21 percent were relatively ambivalent: they really didn't know whether or not they wanted to be pregnant. This 21 percent is an interesting group, because we had to decide whether we were going to include them in the study. It was an area of great debate with lots of people involved, and we decided to include them because we thought ambivalence was important to capture as a group.
At baseline, and this again is no great surprise, we looked at what contraceptive use women reported. I just put this up here so you can get a sense that there are no great differences from what you would find in the national literature. Thirty-seven percent of all participants said they used oral contraceptives and 36 percent said they used condoms. I do want to point out the no-method category, which is very interesting. If you ask women in a survey whether they use a method and all of a sudden 22 percent of your population says "I don't use a method," you could be totally panicked, unless you also address whether they have been sexually active in the past 30 days. So I wanted to point out that when you take the population down to the 517 women who reported sexual activity, only five percent of women said they used no method. But it's an important thing for providers to understand too.
We then looked at the effectiveness of contraceptive use. This is a combined variable that takes into account the effectiveness of the primary method and also the consistency with which they use that method. For example, if somebody's on OCs you might put them in the highly effective group, but if you actually ask them how often they take their oral contraceptives and they say, "I only miss four or five a month," they're not actually in the highly effective category anymore, and we drop them into less effective use. But I do think this is interesting, again. I think it shows we have a lot of room for improvement, in that only 59 percent of all participants reported highly effective use. Granted, the sexually active women were better users, and 71 percent of them reported highly effective use.
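To make the logic of a combined variable like this concrete, here is a minimal sketch of how such a classification could be coded. The method tiers and the consistency rule below are illustrative assumptions, not the WRAP study's actual coding scheme:

```python
# Sketch of a combined "contraceptive effectiveness" variable that
# downgrades an effective method when it is used inconsistently.
# Method tiers and the consistency rule are assumptions for illustration,
# not the WRAP study's actual coding.

HIGHLY_EFFECTIVE_METHODS = {"oral contraceptives", "iud", "injectable", "implant"}

def effectiveness_category(method: str, used_consistently: bool) -> str:
    """Classify a participant's contraceptive use.

    method: the primary method reported (lowercase).
    used_consistently: whether follow-up questions indicate consistent use.
    """
    if method == "none":
        return "no method"
    if method in HIGHLY_EFFECTIVE_METHODS and used_consistently:
        return "highly effective use"
    return "less effective use"

# A pill user who misses four or five pills a month is not a consistent
# user, so she drops out of the highly effective category.
print(effectiveness_category("oral contraceptives", False))  # less effective use
print(effectiveness_category("oral contraceptives", True))   # highly effective use
```

The point of the combined variable, as described in the talk, is that method type alone overstates protection; the consistency follow-up question is what moves a participant between categories.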
This is what we see right now when we look at the outcome: how many women in the study continued as highly effective users or improved their contraceptive effectiveness over the course of the study. You'll see here that the red line is the control arm and the blue one is the intervention arm, and we have the four data points that I mentioned: enrollment, the two-month survey, the eight-month survey and the 12-month survey. What you see is that we did have improvement in the proportion of women who were highly effective users or who improved their contraceptive use between baseline and the two-month survey. That's when we had the booster session, remember? And then it drops off after there's no more contact with the health educators.
One other interesting point is that we collected our data from participants at the two-, eight- and 12-month visits either in person, by mail or over the Web. And this was one of the interesting twists in this project: we had the opportunity to collect the information over the Web. I was totally surprised by how many women in our study population chose to respond to our survey over the Web. What you see here, and this is somewhat confusing given my I's and C's at the bottom, is that the first two bars represent the enrollment time period, the next two bars the two-month follow-up, the next two the eight-month, and the last two the 12-month. The dark blue is the Web surveys that came back to us, and the lighter color is the paper versions completed either in person or via the mail. And the I's and C's at the bottom are intervention versus control.
The enrollment visit had to be face-to-face because we needed to get informed consent, so that is all paper. At the two-month visit we wanted to see the intervention women back because we wanted to administer the booster session, which is why there's so much more paper for the intervention arm at two months. But otherwise participants could use the Web, and they did. We were thrilled with that, because with our data tracking system all the data came flying back to our little Web-based system and was there available for us. At 12 months women received an incentive for their participation in the study, which again usually required a clinic visit and a face-to-face visit, so that's why we had so many paper versions. Women also gave us urine samples at enrollment and at the 12-month visit.
Conclusions: I think this study shows, again, that many women receiving primary care are indeed at risk of unintended pregnancy, and that they are at risk due to a lack of effective contraceptive use. Many women told us they were on a highly effective method, yet when we asked follow-up questions about the consistency with which they used their methods, those women were not protected. I think busy providers often fail to ask those follow-up questions. We feel this study shows that repeated counseling is needed to address women's pregnancy intentions and the effectiveness of their contraceptive use. We were thrilled to see the changes at two months and absolutely depressed to see things fall off again at eight and 12 months. I just think it shows, again and again, that repeated counseling is important.
In our study, we ended up using health educators because providers didn't actually want to spend the time doing the study with us. But our hope was that if the intervention was shown to be effective, it could be disseminated more broadly to providers, and that providers would actually be excited to have a standardized counseling intervention. And I do think that's true. The providers we've worked with have seen that the health educators were helpful for their patients, and we learned all sorts of things from these participants that the providers didn't. So I think we should have health educators in every single clinic doing contraceptive counseling all the time. Thank you. Are there any questions?
Oh, let me please acknowledge that this study was funded by CDC, using a mechanism similar to Roy's, where the Association of Teachers of Preventive Medicine served as the funding stream.