Free Resources



NPTE Articles

Learn more about the exam

This section includes general articles about the National Physical Therapy Exam (NPTE). In addition, we’ve added articles on the issue of cheating in academic settings.

In January 2013, FSBPT will make two important changes that will affect licensure candidates taking the NPTE PT and PTA exams. First, the test content will change slightly from the exam content that has been used since 2008. Many readers may recall FSBPT announcing the updated content outlines for 2013 earlier this year. Second, FSBPT will update the standard required to pass the exams. In other words, we will introduce a new passing score.

Setting a new standard is important for two reasons. One reason is that we want to make sure the standard we use is consistent with the new content outlines: since we’ve changed the content, we need to reevaluate the standard. Another reason is that we need to reconsider the standard periodically to keep pace with requirements in practice. Given these two important reasons, FSBPT began the process of revising the standards for NPTE PT and PTA early this year.

The process began by recruiting PTs and PTAs to serve on the NPTE Standard Setting Task Force. We recruited volunteers between February and April through multiple channels. We invited nominations from FSBPT jurisdiction board members and administrators, associate and honorary members, committee members, educators, the APTA Academic Council, and APTA staff. Additionally, we reached out to APTA section presidents, APTA state chapter presidents, and APTA special interest group chairs. From across these groups, individuals nominated 225 professionals, 96 of whom indicated availability for the PT task force and 55 of whom were available for the PTA task force.

From each of these groups, we chose 15 panelists for each task force. These panelists were selected at random, subject to the constraint that there would be at least one panelist from each of the following areas of expertise: Orthopedics; Sports Medicine; Neurology; Pediatrics; Cardiovascular/Pulmonary; Academics/Education; Acute Care; and Geriatrics.

Furthermore, each of the following practice settings was to be represented by at least one panelist: Academic program administrator; Clinical faculty; Academic faculty; Private practice; Hospital/acute care; Pediatric (school setting or pediatric clinics); Rehabilitation; Geriatrics; Home health; Outpatient facility (orthopedic). An effort was made to identify at least two panelists with responsibility for hiring, supervising, and developing new therapists. Geographic representation was attained by identifying at least one panelist from each region (Northeast; Southeast; North Midwest; South Midwest; Rocky Mountain; Northwest; Southwest). To be included in a particular region, panelists had to reside and/or be licensed in that region. Finally, the Federation sought to achieve an approximate female-to-male ratio of 3:2.

To maintain comparability with the current standard, we decided to keep the process as similar as possible to the 2007 standard setting meetings. We used the “Angoff” process for setting standards, which requires panelists to evaluate every item on a typical form and rate the difficulty of the item for a minimally competent entry-level candidate. We focus on a minimally competent entry-level candidate (one who is safe and effective) because the NPTE standards are the minimum necessary to obtain licensure: we are not trying to identify the superstars of the field, nor are we interested in candidates who are not competent. The Angoff task ratings generally take the form of “proportion correct,” and the cut score is calculated by adding all of the ratings to create a total test score. The Angoff method is widely reputed to be a difficult process, but it remains the most commonly used method for setting standards for certification and licensure.
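The arithmetic behind the Angoff calculation described above can be sketched in a few lines. This is an illustrative simplification only, not FSBPT's actual scoring code; the function name and all rating values below are invented for the example.

```python
# Illustrative Angoff cut-score calculation (all data invented).
# Each rating is a panelist's estimate of the proportion of minimally
# competent entry-level candidates who would answer the item correctly.

def angoff_cut_score(ratings_by_panelist):
    """Compute a panel-recommended raw cut score from Angoff ratings.

    ratings_by_panelist: list of lists; each inner list holds one
    panelist's proportion-correct ratings, one rating per item.
    Each panelist's cut score is the sum of their item ratings; the
    panel's recommendation is the average of those individual cut scores.
    """
    panelist_cuts = [sum(ratings) for ratings in ratings_by_panelist]
    return sum(panelist_cuts) / len(panelist_cuts)

# Three panelists rating a (tiny) five-item form:
ratings = [
    [0.60, 0.75, 0.55, 0.80, 0.70],  # panelist 1
    [0.65, 0.70, 0.50, 0.85, 0.75],  # panelist 2
    [0.55, 0.80, 0.60, 0.75, 0.65],  # panelist 3
]
print(angoff_cut_score(ratings))  # panel-recommended raw cut score
```

On a real 200-item form the same logic applies: each panelist's ratings sum to a total test score, and averaging across the panel yields the recommended standard.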

Setting performance standards for a high-stakes test, especially one involving licensure, is always an arduous task. Often, panelists have very different perspectives on what is important in practice, what competent professionals know when they complete their education, and what is reasonable to learn on the job. Conversations can become heated and turn into arguments, and the process fails if the group can’t reach a happy medium where everyone feels comfortable (we never expect total consensus).

As such, we wanted to do everything we could to prepare the panelists for the standard setting process. We accomplished this through a series of training exercises described below.

  • Prior to the meetings, we sent out a pre-recorded webcast and excerpts from our technical documentation describing the lengths we go to in developing items. The purpose of this step was to show the rigor with which we develop exam forms, and to introduce the idea that their task was not to critique items, but to evaluate their difficulty.
  • Next, we created a streamlined data entry process which checked panelists’ ratings for errors, and allowed the FSBPT assessment team more time to review the ratings and discuss the results with the panelists.
  • When the panelists arrived, we provided an introduction with “ground rules” for the discussion, to let them know that it was normal to engage in discussion and debate, that we had purposefully represented a wide range of perspectives, and that there were no right or wrong answers. Also, we wanted to let them know that the deliberations were theirs, and that FSBPT staff and invited observers would not participate in discussions of item difficulty.
  • Before completing any ratings, we asked panelists to complete a half-length test, so they would have a better appreciation for how difficult items are when taken under exam-like conditions.
  • We also engaged in a lengthy discussion of what it means to be minimally competent. We organized this discussion by giving them a worksheet on which they could record activities that are typical of all candidates, minimally competent candidates, and highly competent candidates. This process helped the panelists ground their judgments in concrete ideas.
  • As a final step, we gave the panelists 15 items to rate as a practice task, and held a group discussion of the process. This step helped to clear up any remaining questions the panelists had.

Following the extensive training, we conducted three rounds of the Angoff process. After the first round, we engaged the group in a discussion of the items with the widest range of ratings (i.e., where the panelists disagreed the most), and any other items the panelists wanted to discuss. After that discussion, we provided the panelists with their personal passing score and the “conditional probability” for each item, or the likelihood that someone whose NPTE score is right at the current passing score would get the item right. We explained that these data were to serve as a “reality check” only, and that panelists were free to conclude that current examinees were not performing at the level required by the new standard of minimal competence.
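The “conditional probability” feedback described above can be illustrated with a small sketch: for each item, it is the proportion correct among examinees whose total score sits at (or within a narrow band of) the current passing score. This is a hypothetical simplification; the function name, data, and score-window parameter are invented, and FSBPT's actual analysis may differ.

```python
# Illustrative conditional-probability calculation (all data invented).

def conditional_probabilities(responses, total_scores, cut_score, band=0):
    """Per-item proportion correct among examinees at the cut score.

    responses: list of per-examinee lists of 0/1 item scores.
    total_scores: list of per-examinee total test scores.
    cut_score: the current passing score.
    band: half-width of the score window treated as 'at the cut score'.
    Returns a per-item list of proportions (None entries if no examinee
    falls in the window).
    """
    at_cut = [resp for resp, score in zip(responses, total_scores)
              if abs(score - cut_score) <= band]
    n_items = len(responses[0])
    if not at_cut:
        return [None] * n_items
    n = len(at_cut)
    return [sum(resp[i] for resp in at_cut) / n for i in range(n_items)]

# Four examinees, three items; suppose the current cut score is 2:
responses = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
totals = [sum(r) for r in responses]  # [2, 2, 2, 3]
print(conditional_probabilities(responses, totals, cut_score=2))
```

An item whose conditional probability is well below a panelist's Angoff rating suggests that borderline examinees find it harder than the panelist estimated, which is exactly the kind of reality check the panels were given.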

After the second round, we followed the same process, and then presented the panelists with their passing scores and the impact on examinees based on 2011 data. The panelists then had an opportunity to revise their ratings one final time. The average passing score at the end of the third round was the recommendation carried forward to the FSBPT Board of Directors for approval. The recommended standards were very close to the current standard, with both levels of the NPTE becoming slightly more difficult, affecting about 1% of examinees at the PT level and 2.5% at the PTA level.

As a wrap-up, we asked panelists to do two things. First, we asked them to provide a rationale for the new cut scores, given that they were a little more demanding. At both levels, panelists noted the increase in medically complex cases: an aging population, and higher incidences of diabetes, high blood pressure, and obesity. Similarly, both panels noted insurers requiring more rigorous documentation. At the PT level, panelists noted many items that might require a higher standard because of the growing prevalence of direct access, and the related need to make differential diagnoses in the absence of the patient seeing a physician. At the PTA level, panelists noted the same issue, as well as the fact that many PTAs are being supervised remotely or are practicing in patients’ homes. Lastly, both panels noted the increased need for evidence-based practice: at the PT level, understanding research, and at the PTA level, being good consumers of research.

Second, we asked them to evaluate the standard setting workshop. Both panels reported extremely positive evaluations. Notably, all panelists reported an increased understanding of the judgment task across rounds. For the PT panel, all panelists were comfortable with the recommended standard; for the PTA panel, the majority were comfortable with the recommendation, and those who felt the standard should be a little higher or lower were about evenly split. Everyone reported that the discussions and feedback were useful, and all PT panelists and 13 of the PTA panelists were either confident or very confident that they had recommended an appropriate standard.

The FSBPT Board of Directors viewed the standard setting panels as effective, and approved the standards on July 19, 2012 for implementation in January 2013.

FSBPT would like to thank the PTs and PTAs who gave up their time to participate in this important process. The panelists were: Ukonnaya Bigelow, Alex Thompson, Andrea Levkowitz, Debora France, E. Christine DeCaro, Heather Bone, Janice Haas, Jason Brumitt, Jean Irion, Jessica Solberg, Joan Brassfield, Joseph Swinfen, Julie Ronnebaum, Kelly Terry, Keshia Patterson, Kevin Van Wart, Kris Ohlendorf, Laura Rauh, Lisa McCann, Lucinda Bouillon, Mandy Keefe, Marcus Sorenson, Mark Brown, Mary Wehde, Mary Ann Simon, Matthew Vraa, Nora Riley, Norman Johnson, Tania Tablinsky and William DiLeonard.
