FSBPT pursues renewed Buros accreditation in 2014
https://www.fsbpt.org/Free-Resources/NPTE-Articles/articleType/ArticleView/articleId/34/FSBPT-pursues-renewed-Buros-accreditation-in-2014

Lorin Mueller, PhD, FSBPT Managing Director of Assessment, and Anja Römhild, Buros Center for Testing, University of Nebraska – Lincoln

In 2014, FSBPT will submit information to the Buros Center for Testing to be evaluated for accreditation of the National Physical Therapy Examinations (NPTE). This process requires a substantial documentation effort reflecting NPTE development processes and analysis of test data. The feedback FSBPT receives through the accreditation process helps ensure that the NPTE program adheres to best practices in measuring the knowledge and skills necessary to perform effective physical therapy.

Buros Accreditation Program

The Buros accreditation program, offered by the Buros Center for Testing (www.buros.org), was created in response to a growing need for quality assurance in the proprietary testing sector. As an independent, not-for-profit organization whose name has been associated with the critical evaluation of tests and testing programs since 1938, Buros provides psychometric expertise and independent professional judgment to help testing organizations meet current quality standards of the testing community and to promote valid uses and interpretations of test scores.
Buros accreditation is based on the Buros Standards for Proprietary Testing, which were developed from guidelines and standards issued by the testing community, including the Standards for Educational and Psychological Testing, the Guidelines for Computer Based Testing, the Test Adaptation Guidelines, and the International Guidelines on Computer-based and Internet Delivered Testing.

The accreditation process is conducted in two stages, beginning with a general audit of the testing program's processes and procedures (Stage 1), followed by an optional Stage 2 review of specific tests and test forms. The Stage 1 audit focuses on the testing program's organizational structure and capacity, its general processes for test development and validation, and its policies and procedures for test administration, maintenance, and security. The Stage 2 review focuses on psychometric indicators of quality for specific tests and test forms. At the completion of each stage, Buros provides the testing program with evidence of its adherence to the Buros Standards and suggests ways in which policies or procedures could be modified or improved to meet the highest expectations of the professional community. Testing programs that meet the Buros accreditation standards are able to communicate to the public a strong commitment to quality procedures and competence in testing, and to assure test users that the accredited tests and test forms meet psychometric standards of quality.

Value to FSBPT

The NPTE is one of FSBPT's most important tools in fulfilling our mission to protect the public. One potential threat to accomplishing that end stems from the need to operate the examination in a secure environment: many testing programs operate in a vacuum, without adequate feedback from their stakeholders and peers.
Without this feedback, even the best testing programs might develop gaps in their development and analysis processes that reduce the ability of the examinations to produce the most valid pass/fail decisions possible. FSBPT last achieved Stage 1 accreditation in 2009 and has maintained Stage 2 accreditation each year since. This year, FSBPT will again pursue Stage 1 accreditation. The feedback we have received over the last few years has led to several important initiatives, including substantial improvements to NPTE security, more complete documentation of the examination development processes, a review of item fairness, and a better understanding of the meaning of scores for candidates educated outside the United States. These efforts substantially enhance the quality and defensibility of the NPTE.

Lorin Mueller, PhD, FSBPT Managing Director of Assessment

Lorin joined the FSBPT's Assessment Department in November 2011. Prior to joining FSBPT, Lorin spent 10 years as Principal Research Scientist at the American Institutes for Research in Washington, D.C. He has contributed his expertise in statistics, research design, and measurement to projects in a wide variety of areas, including high-stakes test development, work disability assessment, K-12 assessment, assessing students with cognitive disabilities, teacher knowledge, teacher performance evaluation, and school climate. He is a nationally recognized expert in the field of setting standards for occupational assessments and has published or presented in nearly all of the areas in which he has worked. Lorin received his PhD in Industrial and Organizational Psychology with a specialization in statistics and measurement in 2002 from the University of Houston.

Anja Römhild, Buros Center for Testing, University of Nebraska – Lincoln

Anja Römhild is project coordinator in the Psychometric Consulting unit at the Buros Center for Testing.
She leads the work for the Buros accreditation program and has conducted numerous evaluations and reviews of tests and testing programs. In addition to her work in test accreditation, Ms. Römhild has worked on consulting projects in the areas of test validation, standard setting, alignment, and score equating and calibration. She is currently completing a doctorate in the Quantitative, Qualitative, and Psychometric Methods program at the University of Nebraska.

Published: Fri, 09 May 2014

The NPTE: Fixed date testing and more
https://www.fsbpt.org/Free-Resources/NPTE-Articles/articleType/ArticleView/articleId/36/The-NPTE-Fixed-date-testing-and-more

Susan Layton, FSBPT COO, and Lorin Mueller, FSBPT Managing Director of Assessment

Note: The following article was developed from an educational session at the 2012 FSBPT annual meeting.

Alarming data on security breaches (item harvesting) warranted a quick response. Last year, PT NPTE testing transitioned to fixed-date events, and this year, PTA NPTE testing did the same. But while we could immediately move the exams to fixed-date testing, administrative items remain to be addressed.

Communication

Our website has a fixed-date testing page, and visitors are encouraged to link to that page. Another page lists the information for each date, including when the jurisdiction must approve the registration, when reservations for seats must be made, and when scores will be available.
We recently added new language explaining what occurs if you are unable to test through no fault of your own. For instance, when we tested on July 2, 2012, 90 examinees were displaced due to power outages at Prometric centers. We explained how we would reschedule the test as soon as possible, and most of the July 2 examinees tested within two weeks.

Scheduling reports are updated weekly. Unfortunately, they do not contain a complete list of available seats, and the partial list is removed from the website 21 days before the exam, as that is when the seats are released. (In high-density areas, seats are hard to come by.) This report is what we were able to produce quickly, but we know we can do better. To that end, we will be updating these reports in 2013.

Here are some other improvements we have made. ATT (Authorization to Test) letters, which are still being mailed, now contain a color insert urging candidates to schedule as soon as the letter is received. We improved our online candidate handbook to make it easier to search, and we have added webinars. We have also improved our candidate satisfaction surveys. In the past, we asked questions concerning processing, security procedures, and the testing environment. Now we also ask candidates how far they had to travel to test, their thoughts on our customer service, and whether they have read the candidate handbook.

We have enhanced the online system for jurisdictions. The move from continuous to fixed-date testing has been very challenging from an administrative perspective. Now that people register six months in advance, we need a way to tell administrators which testing day a candidate has chosen. Candidates can also now withdraw their testing request online. We are also working on a system that lets candidates withdraw and re-register, or switch dates, without going through the whole process again.

Why have deadlines at all?
Deadlines maximize the likelihood that all candidates who are approved to test have a seat, and they allow FSBPT, jurisdiction licensing boards, and Prometric time to process all candidates' records. They reduce the need to guess how many people will test, and they allow adjustments at Prometric centers to accommodate more candidates when possible. They also ensure that forms are assigned correctly and help make certain that ADA accommodations are ready.

Score reporting

We now have what we call "almost immediate" score reporting, meaning that reporting occurs within five business days after testing. To understand the delay, you must first understand the process. We first receive test records from the Prometric centers, which now arrive in the thousands rather than the hundreds because of the reduced number of testing dates. Then we reconcile and verify the records and confirm the scores. The most difficult task is identifying and investigating any testing anomalies, such as missing records, potential security breaches, and missing biometric data. All of this requires intense, active investigation to meet the reporting window. The consequences of making errors during this phase would be extremely problematic for our stakeholders and FSBPT, so at this point it is not likely that this process will move any quicker.

As for testing dates, we are still trying to determine when candidates want to test: before they graduate, just after, or later. FSBPT now allows candidates to register for an examination within 90 days of graduation, a policy that 20 states accept. In 2013, there will be four PT and four PTA test dates, one in each quarter. By far, July appears to be the most popular choice for testing. Our goal is to use an item just once, but we are not there yet. When we reach that goal, and are able to create more items, we may increase the number of tests per year. As a footnote, candidates can test three times in a one-year period.

A fixed-date disadvantage?
We are attempting to determine the impact of fixed-date testing, and, of course, the data are somewhat limited. Of the 827 candidates who took the test prior to their graduation date, 89.4% passed. Of the 7,817 who took the test on or after their graduation date, 88.1% passed. The difference is minimal. About 90% of examinees take the test within 90 days of graduation. We found slightly lower scores for those who wait more than 120 days to take the test, and scores decrease the longer the candidate is removed from the graduation date. The probability of failing increases significantly for those who take the test more than 200 days after graduation. That also helps explain why foreign-educated candidates tend to do worse on the test; they are taking it many months, if not years, after graduation.

New guidelines

New content guidelines have been developed, and FSBPT is offering new services to examinees as well, including updates to the Practice Exam and Assessment Tool (PEAT) to reflect those new guidelines. The updates include availability extended to 60 days and revised performance feedback reports that offer scores by section, raw scores, percent of correct answers, and scale scores. The revised reports came about because of significant feedback asking us to improve them. We will also make individual score reports available for 30 days free of charge. Behind the scenes, we are improving item fairness review procedures for U.S. and non-U.S. examinees, developing enhanced item-writing tools and templates, researching new security analyses such as detection of item harvesting and collusion, and conducting research on eligibility criteria.

Susan Layton, Chief Operating Officer, Federation of State Boards of Physical Therapy

Susan Layton is the Chief Operating Officer for the Federation of State Boards of Physical Therapy. Her areas of responsibility include Assessment, Continuing Competence, Exam Services, Information Systems, and Meeting Planning.
Susan has a Master of Science in Management from the London Business School, where she was a Sloan Fellow, and a Bachelor of Science in Business Administration from the University of Mary Washington.

Lorin Mueller, PhD, Managing Director of Assessment, Federation of State Boards of Physical Therapy (see biography above).
Published: Mon, 17 Dec 2012

January 2013 Brings a New Year, New Content, New Standards, and New Scoring
https://www.fsbpt.org/Free-Resources/NPTE-Articles/articleType/ArticleView/articleId/35/January-2013-Brings-a-New-Year-New-Content-New-Standards-and-New-Scoring

Lorin Mueller, PhD, David Relling, PT, PhD, and Richard Woolf, PT, DPT, CSCS

Beginning with the first testing date in January 2013, the NPTE-PT and NPTE-PTA examinations will be based on new content outlines. These changes are necessary to keep pace with the changing practice requirements for entry-level physical therapists and physical therapist assistants. This article briefly describes the changes to the content specifications for the NPTE examinations, the revision of the passing standard to match the new content outlines, and the changes to the NPTE scaled scores that will be introduced in 2013.

New Content Outlines

The NPTE examinations are based on content outlines that specify the number of items on each examination form that must relate to each topic in a given practice area. These content outlines, also called "test specifications" or "test blueprints," are necessary to ensure that each NPTE examination form represents an appropriate combination of topics and is equivalent in difficulty to all other forms. The new 2013 content outlines represent a refocusing of the examinations on core physical therapy principles rather than a major change in examination content.
Information about the development of the new content outlines, along with a detailed description of the major content areas, can be found on the FSBPT website: https://www.fsbpt.org/ForCandidatesAndLicensees/NPTE/ExamDevelopment/. The changes that examinees will see in the new content outlines are summarized here. They represent the input of thousands of physical therapy practitioners who responded to surveys about current practice and dozens of experts who served on committees to help FSBPT interpret the survey data.

For the NPTE-PT examination, items relating to Physical Therapy Examination of the Metabolic & Endocrine Systems were eliminated from the content outline. Committee members suggested that although these topics are important, the clinical examination activities related to these systems are increasingly performed by other healthcare providers. The content areas of Clinical Application of Foundational Sciences within the various systems were eliminated as separate categories, and the items were reclassified into the remaining content areas in each body system. Similarly, items relating to Teaching and Learning were reassessed and moved to the content areas and body systems most directly matched to the knowledge they measure; this redistribution led to discontinuing Teaching and Learning as a topic area. In addition, some items previously classified under Research and Evidence-Based Practice were reallocated to the content areas and body systems that are the focus of the research scenario or data described in the item, and the items remaining in Research and Evidence-Based Practice are more focused on research methods. A new area in the content outline is Physical Therapy Examination related to the Genitourinary System; one or two items on this topic will be included on each form starting in 2013.
Lastly, the content area Safety, Protection, & Professional Roles was split into two areas: Safety & Protection and Professional Responsibilities.

For the NPTE-PTA examination, many of the changes follow a similar theme. The area of Physical Therapy Data Collection of the Metabolic & Endocrine Systems was eliminated from the content outline because committee members found that these activities are typically performed by other healthcare providers. Items relating to Teaching and Learning were reclassified into the Interventions content area of the corresponding body systems. Items classified under Clinical Application of Foundational Sciences were moved into the appropriate areas of Interventions, Physical Therapy Data Collection, or a new content area called Diseases/Conditions that Impact Effective Treatment. New areas in the content outline include Interventions related to the Gastrointestinal System and the Genitourinary System; in 2013, there will be one or two items on each of these topics on each examination form. Lastly, the topic Safety, Protection, & Professional Roles was split into two topics: Safety & Protection and Professional Responsibilities.

Tables 1 and 2 below show the number of items relating to each topic, organized by body system. As the tables show, the body systems gaining the most new items on both the PT and PTA exams are Cardiovascular/Pulmonary & Lymphatic Systems, Musculoskeletal System, and Neuromuscular & Nervous Systems.

New Standards for Minimal Competence

Any time we revise the content outlines, we must consider the impact of the new outlines on the standards used to determine minimal competence. In practice, the standard for minimal competence on the NPTE is the cut score for each examination form. Introduction of a new content outline may make an examination more difficult or easier relative to the existing standard for minimal competence.
At the same time, it is useful to determine whether the changes in practice that influenced the new content outlines necessitate a change in how we define the standard for minimal competence. That is, are PTs and PTAs asked to do more or less at entry level than they were when we set the previous standard? To examine this question, FSBPT hosted expert panels in May and June to help us determine the standard for minimal competence at entry level for PTs and PTAs. These panels were representative of PTs and PTAs in practice with respect to national region, practice setting, area of expertise, and demographics. Each panel was approximately evenly split among educators, entry-level practitioners, and experienced practitioners who supervise entry-level practitioners. Our goal was to assemble panels that could discuss the topics on the examination with the widest variety of perspectives represented.

Each panel reviewed a simulated examination form constructed to exact FSBPT standards through a highly structured process. First, the panelists took a half-length version of the examination, as if they were examinees, to remind them of the difficulty of answering questions under exam-like circumstances. Next, we held a discussion of what constitutes minimal competence at entry level, using the content outlines to guide the discussion. Following that discussion, each panel reviewed the full form of the examination used in the simulation, rated each item in terms of its difficulty for entry-level practitioners, and then discussed those ratings. Each panel went through three rounds of rating and discussion. At the end of the process, both panels made slight adjustments to the entry-level competence standards for the NPTE.
To ensure that we were not raising the standard just for the sake of having higher standards, we engaged the panels in a discussion in which they provided specific examples of topics where expectations for entry-level performance have risen. For the PT examination, the panel noted that the increased prevalence of direct access means that PTs need to be more cognizant of factors that might affect treatment. Similarly, societal demographic changes are resulting in an increase in medically complex cases. These two factors also relate to the increasing importance of differential diagnosis in PT practice. The panel also noted that research and evidence-based practice is important, and as research findings become more easily available through electronic access, PTs must be increasingly aware of how to utilize the new information. Lastly, the PT panel noted that insurance reimbursement is driving a great deal of change, requiring a greater emphasis on outcomes and documentation.

For the PTA examination, many similar themes emerged. Increasing demand for PT and PTA services has made remote or limited supervision arrangements more common, such as when a PTA is called upon to provide in-home care. It is becoming increasingly important for PTAs to recognize "red flags" or contraindications and respond accordingly by stopping treatment and contacting the appropriate healthcare providers as necessary. The PTA panel also noted the increase in medically complex cases and the need for PTAs to be effective consumers of research. The PTA panel further noted the influence of insurance reimbursement in changing the standard for entry-level PTAs, specifically by requiring PTAs to be familiar with a broader range of medical terminology used to document treatment and outcomes.

The impacts of these changes on the passing standard for the NPTE are estimated to be minimal.
In most cases, panelists wanted PT and PTA licensure candidates to perform a little better on a few items. In real terms, we expect the rising standard to affect less than 1% of PT candidates and less than 3% of PTA candidates. So while the standards are changing, they are not changing drastically.

New Scaling Procedures

With the introduction of the new content outlines and new minimal competence standards, FSBPT also took the opportunity to make one more change to the NPTE examinations: minor refinements to the way we calculate scale scores. Prior to 2013, we used an examinee's raw score to calculate his or her scale score. The minimum passing score was set to a scale score of 600, and the top score was set to 800. This meant that scale scores increased in a straight line, like a simple linear equation. For all forms in a given year, this equation was the same or very similar, since we do a great deal to ensure that examination cut scores are very close across forms. However, the process we use to actually build examination forms is much more complex than a simple linear equation. We use an approach known as Item Response Theory that provides very precise form equating and acknowledges that the difficulty of improving one's score in subsequent administrations is very different for people who answer very few questions correctly (who are guessing), people whose number of correct answers puts them right around the cut score, and people who answer almost all of the questions correctly. Starting in January 2013, we will begin scaling the NPTE forms in the same way that we construct the examinations. Making this change required a significant change in our scoring software and procedures, but we believe the new scores will be more informative to examinees, especially those who may need feedback on their performance after failing the examination.
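The pre-2013 linear scaling described above can be sketched as a simple line through two anchor points: the raw cut score maps to 600, and a perfect raw score maps to 800. The raw cut score and maximum (140 and 200 here) are hypothetical illustration values, not actual NPTE parameters.

```python
def linear_scale(raw: float, raw_cut: float, raw_max: float) -> float:
    """Pre-2013-style linear scaling sketch: the raw cut score maps to a
    scale score of 600 and a perfect raw score maps to 800; scores in
    between fall on a straight line. raw_cut and raw_max are
    hypothetical form-specific values, not actual NPTE figures."""
    return 600.0 + 200.0 * (raw - raw_cut) / (raw_max - raw_cut)

# With a hypothetical raw cut of 140 out of 200 scored items:
at_cut = linear_scale(140, 140, 200)   # 600.0, the minimum passing score
perfect = linear_scale(200, 140, 200)  # 800.0, the top score
midway = linear_scale(170, 140, 200)   # 700.0, halfway between the anchors
```

Under this scheme every additional correct answer is worth the same number of scale points, which is exactly the property the IRT-based 2013 scaling relaxes.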
The advantages of the new scaling procedures are listed below.

• The calculation of subscale scores for each content area and body system on standard score reports, Performance Feedback Reports, and School Reports will be slightly more accurate.
• Some scale scores, particularly those above 750, were almost never used because candidates did not score high enough to obtain them. The new process uses the entire range between 600 and 800, improving score differentiation for schools with high-scoring candidates.
• We can report the extent to which scores are expected to vary across test forms, which may help examinees who have failed determine how far they were from passing. This information will be available in the new Performance Feedback Reports, available for all test administrations in January 2013 and later.

We were also able to accomplish the rescaling in a way that makes scores maximally comparable for the examinees who need scoring information the most: those who failed the examination. Figures 1 and 2 compare 2012 scores and 2013 scores plotted on the "standardized knowledge scale" that we use to equate test forms. As the figures show, for examinees scoring below the cut score, total scale scores will remain very close. For examinees scoring above the cut score, scale scores will be somewhat higher in 2013 than they were in 2012. FSBPT anticipates that some schools may encounter difficulty if they intend to compare scores from 2012 to scores from 2013, especially schools with a high proportion of students who score well above 600. We will produce a simple score transformation table in Excel to assist schools in making these comparisons and will post it on the FSBPT website under the Exam Development section.

Looking Forward

FSBPT is constantly looking for new ways to improve the NPTE examination and the services we offer in conjunction with the examination.
Some readers may have already taken advantage of the new Practice Exam & Assessment Tool (PEAT). The new Performance Feedback Reports will be available to candidates beginning with the January 2013 administration. We believe that the new content outlines, new standards, and new scaling procedures all contribute to ensuring that PTs and PTAs entering practice are competent professionals who will provide effective treatment.

Table 1: NPTE-PT Topic Areas (# Items 2008-2012 / # Items 2013-2017)

Cardiovascular/Pulmonary & Lymphatic Systems: 23 / 33
Musculoskeletal System: 36 / 61
Neuromuscular & Nervous Systems: 34 / 50
Integumentary System: 14 / 10
Metabolic & Endocrine Systems: 8 / 7
  Physical Therapy Examination (removed in 2013): 1 / --
Gastrointestinal System: 4 / 3
Genitourinary System: 4 / 4
  Physical Therapy Examination (new in 2013): 0 / 1
System Interactions (Multi-System in 2008-2012): 16 / 7
Equipment & Devices: 10 / 5
Therapeutic Modalities: 12 / 7
Safety & Protection: 15 / 5
Professional Responsibilities: 15 / 4
Teaching & Learning (questions redistributed, topic removed in 2013): 11 / --
Research & Evidence-Based Practice: 13 / 4

Table 2: NPTE-PTA Topic Areas (# Items 2008-2012 / # Items 2013-2017)

Cardiovascular/Pulmonary & Lymphatic Systems: 19 / 25
Musculoskeletal System: 32 / 39
Neuromuscular & Nervous Systems: 30 / 33
Integumentary System: 9 / 7
Metabolic & Endocrine Systems: 6 / 6
  Physical Therapy Examination (removed in 2013): 1 / --
Gastrointestinal System: 2 / 2
  Diseases/Conditions that Impact Effective Treatment (new in 2013): -- / 1
  Interventions (new in 2013): -- / 1
Genitourinary System: -- / 2
  Diseases/Conditions that Impact Effective Treatment (new in 2013): -- / 1
  Interventions (new in 2013): -- / 1
System Interactions (Multi-System in 2008-2012): 11 / 5
Equipment & Devices: 9 / 10
Therapeutic Modalities: 13 / 12
Safety & Protection: 12 / 4
Professional Responsibilities: 12 / 3
Teaching & Learning: 4 / --
Research & Evidence-Based Practice: 3 / 2

Figure 1: 2012 and 2013 Scale Score Comparison for the PT Examination
Figure 2: 2012 and 2013 Scale Score Comparison for the PTA Examination

Lorin Mueller, PhD, Managing Director of Assessment, Federation of State Boards of Physical Therapy (see biography above).

David Relling, PT, PhD

David Relling is an Associate Professor in the Department of Physical Therapy at the University of North Dakota in Grand Forks, ND. He has practiced in a variety of settings, including acute care, orthopedics, and long-term care. Dave began his involvement with the Federation in 2005 as an item writer and has served as a member and then co-chair of the NPTE Exam Development Committee for physical therapists. He was inducted into the Academy of Advanced Item Writers in 2010. Dave is a current member of the FSBPT Board of Directors and the North Dakota Board of Physical Therapy.

Richard D. Woolf, PT, DPT, CSCS, FSBPT Assessment Content Manager

Richard Woolf is the Assessment Content Manager at the Federation of State Boards of Physical Therapy (FSBPT). He joined FSBPT in 2008. He is also a Certified Strength and Conditioning Specialist with the National Strength and Conditioning Association.
He received his Master of Physical Therapy from Northern Arizona University and his Doctor of Physical Therapy from A.T. Still University – Arizona School of Health Sciences.

Published: Mon, 17 Dec 2012

New NPTE Standards for 2013
https://www.fsbpt.org/Free-Resources/NPTE-Articles/articleType/ArticleView/articleId/37/New-NPTE-Standards-for-2013

In January 2013, FSBPT will make two important changes that will affect licensure candidates taking the NPTE PT and PTA exams. First, the test content will change slightly from the exam content that has been used since 2008. Many readers may recall FSBPT announcing the updated content outlines for 2013 earlier this year. Second, FSBPT will update the standard required to pass the exams; in other words, we will introduce a new passing score.

Setting a new standard is important for two reasons. One is that we want to make sure the standard is consistent with the new content outlines: since we have changed the content, we need to reevaluate the standard. The other is that we need to reconsider the standard periodically to keep pace with requirements in practice. Given these two important reasons, FSBPT began the process of revising the standards for the NPTE PT and PTA early this year.

The process began by recruiting PTs and PTAs to serve on the NPTE Standard Setting Task Force. We recruited volunteers between February and April through multiple channels.
We invited nominations from FSBPT jurisdiction board members and administrators, associate and honorary members, committee members, educators, the APTA Academic Council, and APTA staff. Additionally, we reached out to APTA section presidents, APTA state chapter presidents, and APTA special interest group chairs. From across these groups, individuals nominated 225 professionals, 96 of whom indicated availability for the PT task force and 55 of whom were available for the PTA task force. From each of these pools, we chose 15 panelists for each task force. These panelists were selected at random, subject to the constraint that there would be at least one panelist from each of the following areas of expertise: Orthopedics; Sports Medicine; Neurology; Pediatrics; Cardiovascular/Pulmonary; Academics/Education; Acute Care; and Geriatrics. Furthermore, each of the following practice settings was to be represented by at least one panelist: academic program administrator; clinical faculty; academic faculty; private practice; hospital/acute care; pediatric (school setting or pediatric clinics); rehabilitation; geriatrics; home health; and outpatient facility (orthopedic). An effort was made to identify at least two panelists with responsibility for hiring, supervising, and developing new therapists. Geographic representation was attained by identifying at least one panelist from each region (Northeast; Southeast; North Midwest; South Midwest; Rocky Mountain; Northwest; Southwest). To be included in a particular region, panelists had to be residing and/or licensed in that region. Finally, the Federation sought to achieve an approximate female-to-male ratio of 3:2. In order to maintain comparability to the current standard, we decided to keep the process as similar as possible to the 2007 standard setting meetings.
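The selection procedure described above, a random draw subject to coverage constraints, can be sketched in code. This is an illustrative sketch only: the volunteer pool, area labels, and function name are hypothetical, and simple rejection sampling is just one way to satisfy such constraints.

```python
import random

def draw_panel(volunteers, size, required_areas, seed=None):
    # Redraw random panels until every required expertise area
    # is covered by at least one panelist (rejection sampling).
    rng = random.Random(seed)
    while True:
        panel = rng.sample(volunteers, size)
        covered = {area for p in panel for area in p["areas"]}
        if required_areas <= covered:
            return panel

# Hypothetical pool of 40 volunteers, each with one area of expertise.
areas = ["Orthopedics", "Neurology", "Pediatrics", "Geriatrics"]
volunteers = [{"name": f"v{i}", "areas": [areas[i % 4]]} for i in range(40)]
panel = draw_panel(volunteers, 15, set(areas), seed=2012)
```

With 15 draws from a reasonably balanced pool, the coverage constraint is usually met within a few attempts; a real selection would add the practice-setting, hiring-role, regional, and gender constraints the article lists.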
We used the “Angoff” process for setting standards, which requires panelists to evaluate every item on a typical form and rate the difficulty of the item for a minimally competent entry-level candidate. We focus on a minimally competent entry-level candidate (one who is safe and effective) because the NPTE standards are the minimum necessary to obtain licensure: we are not trying to identify the superstars of the field, nor are we interested in candidates who are not competent. The Angoff task ratings generally take the form of “proportion correct,” and the cut score is calculated by adding all of the ratings to create a total test score. The Angoff method is widely reputed to be a difficult process, but remains the most commonly used method for setting standards for certification and licensure. Setting performance standards for a high-stakes test, especially one involving licensure, is always an arduous task. Often, panelists have very different perspectives on what is important in practice, what competent professionals know when they complete their education, and what is reasonable to learn on the job. Conversations can become heated and turn into arguments, and the process fails if the group can’t reach a happy medium where everyone feels comfortable (we never expect total consensus). As such, we wanted to do everything we could to prepare the panelists for the standard setting process. We accomplished this through a series of training exercises described below. Prior to the meetings, we sent out a pre-recorded webcast and excerpts from our technical documentation describing the lengths we go to in developing items. The purpose of this step was to show the rigor with which we develop exam forms, and to introduce the idea that their task was not to critique items, but to evaluate their difficulty.
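The arithmetic behind an Angoff cut score is simple to show: each panelist estimates, for every item, the proportion of minimally competent candidates who would answer it correctly; a panelist’s cut score is the sum of their ratings, and the panel’s recommendation is typically the mean across panelists. A minimal sketch with hypothetical ratings (the data and function name are mine, not FSBPT’s):

```python
def angoff_cut_score(ratings):
    # ratings: one list of proportion-correct estimates per panelist,
    # one entry per item. Sum each panelist's ratings to get their
    # cut score, then average across the panel.
    per_panelist = [sum(r) for r in ratings]
    return sum(per_panelist) / len(per_panelist)

# Three hypothetical panelists rating a four-item test.
ratings = [
    [0.6, 0.7, 0.8, 0.5],   # panelist 1
    [0.5, 0.8, 0.9, 0.6],   # panelist 2
    [0.7, 0.6, 0.7, 0.4],   # panelist 3
]
cut = angoff_cut_score(ratings)  # about 2.6 correct answers out of 4
```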
Next, we created a streamlined data entry process that checked panelists’ ratings for errors and allowed the FSBPT assessment team more time to review the ratings and discuss the results with the panelists. When the panelists arrived, we provided an introduction with “ground rules” for the discussion, to let them know that it was normal to engage in discussion and debate, that we had purposefully represented a wide range of perspectives, and that there were no right or wrong answers. Also, we wanted to let them know that the deliberations were theirs, and that FSBPT staff and invited observers would not participate in discussions of item difficulty. Before completing any ratings, we asked panelists to complete a half-length test, so they would have a better appreciation for how difficult items are when taken under exam-like conditions. We also engaged in a lengthy discussion of what it means to be minimally competent. We organized this discussion by giving them a worksheet on which they could record activities that are typical of all candidates, minimally competent candidates, and highly competent candidates. This process helped the panelists to ground their judgments in concrete ideas. As a final step, we gave the panelists 15 items to rate as a practice task, and held a group discussion of the process. This step helped to clear up any remaining questions the panelists had. Following the extensive training, we conducted three rounds of the Angoff process. After the first round, we engaged the group in a discussion of the items with the widest range of ratings (i.e., where the panelists disagreed the most), and any other items the panelists wanted to discuss. After that discussion, we provided the panelists with their personal passing scores and the “conditional probability” for each item, or the likelihood that someone whose NPTE score is right at the current passing score would get the item right.
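The “conditional probability” feedback can be computed empirically from response data: for each item, take the examinees whose total score sits at (or near) the current passing score and compute the share of them who answered that item correctly. A minimal sketch with hypothetical data (function and variable names are mine, not FSBPT’s):

```python
def conditional_p_values(responses, scores, cut, band=0):
    # For each item, the proportion answering it correctly among
    # examinees whose total score is within `band` points of the cut.
    near = [r for r, s in zip(responses, scores) if abs(s - cut) <= band]
    n_items = len(responses[0])
    return [sum(r[i] for r in near) / len(near) for i in range(n_items)]

# Four hypothetical examinees, three scored items (1 = correct).
responses = [[1, 0, 1], [1, 1, 0], [0, 1, 1], [1, 1, 1]]
scores = [sum(r) for r in responses]   # total scores: 2, 2, 2, 3
cond_p = conditional_p_values(responses, scores, cut=2)
```

Here the three examinees scoring exactly at the cut of 2 each answered two of the three items correctly, so every item’s conditional probability comes out to 2/3.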
We explained that these data were to serve as a “reality check” only, and that panelists were free to conclude that current examinees were not performing consistently with the new standard of minimal competence. After the second round, we followed the same process, and then presented the panelists with their passing scores and the impact on examinees based on 2011 data. The panelists then had an opportunity to revise their ratings one final time. The average passing score at the end of the third round was the recommendation carried forward to the FSBPT Board of Directors for approval. The recommended standards were very close to the current standard, with both levels of the NPTE becoming slightly more difficult, affecting about 1% of examinees at the PT level and 2.5% at the PTA level. As a wrap-up, we asked panelists to do two things. First, we asked them to provide a rationale for the new cut scores, given that they were a little more demanding. At both levels, panelists noted the increase in medically complex cases: an aging population and higher incidences of diabetes, high blood pressure, and obesity. Similarly, both panels noted insurers requiring more rigorous documentation. At the PT level, panelists noted many items that might require a higher standard because of the growing prevalence of direct access, and the related need to make differential diagnoses in the absence of the patient seeing a physician. At the PTA level, panelists noted the same issue, as well as the fact that many PTAs are being supervised remotely or are practicing in patients’ homes. Lastly, both panels noted the increased need for evidence-based practice: at the PT level, understanding research, and at the PTA level, being good consumers of research. Second, we asked them to evaluate the standard setting workshop. Both panels reported extremely positive evaluations of the standard setting workshop.
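The impact figures quoted above (about 1% of PT and 2.5% of PTA examinees) are the share of candidates who would pass under the old cut score but fail under the slightly higher new one. A hedged sketch of that calculation, with made-up scores:

```python
def impact(scores, old_cut, new_cut):
    # Fraction of examinees who pass at old_cut but would
    # fail at a slightly higher new_cut.
    affected = sum(old_cut <= s < new_cut for s in scores)
    return affected / len(scores)

# Hypothetical total scores for five examinees.
scores = [140, 147, 150, 152, 160]
share = impact(scores, old_cut=147, new_cut=150)  # only the 147 is affected
```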
Notably, all panelists reported an increasing understanding of the judgment task across rounds. For the PT panel, all panelists were comfortable with the recommended standard, and for the PTA panel, the majority were comfortable with the recommendation and those who felt the standard should be a little higher or lower were about evenly split. Everyone reported the discussions and feedback were useful, and all PT panelists and 13 of the PTA panelists were either confident or very confident that they had recommended an appropriate standard. The FSBPT Board of Directors viewed the standard setting panels as effective, and approved the standards on July 19, 2012 for implementation in January 2013. FSBPT would like to thank the PTs and PTAs who gave up their time to participate in this important process. The panelists were: Ukonnaya Bigelow, Alex Thompson, Andrea Levkowitz, Debora France, E. Christine DeCaro, Heather Bone, Janice Haas, Jason Brumitt, Jean Irion, Jessica Solberg, Joan Brassfield, Joseph Swinfen, Julie Ronnebaum, Kelly Terry, Keshia Patterson, Kevin Van Wart, Kris Ohlendorf, Laura Rauh, Lisa McCann, Lucinda Bouillon, Mandy Keefe, Marcus Sorenson, Mark Brown, Mary Wehde, Mary Ann Simon, Matthew Vraa, Nora Riley, Norman Johnson, Tania Tablinsky and William DiLeonard. 
Published September 10, 2012

The Development of Sanctioning Reference Points for Use in Board Disciplinary Decisions

By Lisa R. Hahn and George C. Maihafer, Forum, Spring 2011. This article was developed from a presentation at the Federation’s 2010 Annual Meeting in Denver, Colorado.

The Virginia Department of Health Professions (DHP), an umbrella agency for 13 health regulatory boards, started a comprehensive sanctioning reference study in 2001. The Board of Medicine and Board of Nursing were the first boards to go through the process. All sorts of data were examined. DHP found, for instance, that an older female physician was more likely to have her license suspended than a younger male. This was true when controlling for a variety of other factors, including prior board history and seriousness of the offense. It was important to pull those biases out of the data. The DHP wanted solid recommendations in the form of sanctioning reference points, so researchers created a model based on the sentencing guidelines system used by Virginia’s felony-level judges. The judicial guidelines system was successful because it was developed with complete judicial oversight, with data analysis assistance from social science researchers.
Because board members change often, it was important to have consistent guidelines that stress accountability and transparency in what we do. We try to make sure that our boards have relatively predictable and valid case sanctions over a period of time. The Virginia Board of Physical Therapy was formed in 2000 because physical therapists were the largest group of licensed medical providers in the state without a board. We had always been under the Board of Medicine as an advisory committee, and the Board of Medicine was simply not able to address our issues. Our board now addresses our issues in a much more appropriate manner. It’s important to provide an educational tool for new board members. When board members are actually faced with a violation and don’t have any frame of reference or any past history, they may come up short. Virginia has a relatively stable board with respect to assignments, but we have had three new board members since 2006. Every one of them has the same questions when dealing with sanctions: “What should I do? How do I handle this?” That’s why it’s important to provide educational tools for board members. The sanction reference points developed by the Board of Physical Therapy, which cost about $20,000, relied on data covering 10 years of sanctioned cases. We read and coded every violation and looked at all factors important in making sanctioning decisions. Case categories were developed for abuse, fraud, standard of care and business practice, and each category was assigned points. If a case contained both abuse and standard of care issues, the one with the higher point value was scored. Other case and licensee factors, such as patient injury, were assigned points, and the total number of points was then used to determine a total worksheet score, which was then translated into a sanction recommendation.
The Sanction Reference Point Threshold Table contains more detailed sanctions that fit within the broader sanctioning recommendations assigned to the point values found on a completed worksheet. For instance, the available sanctions for 45 to 60 points are reprimand, monetary penalty or corrective action, and there are even more specific sanctions that fall within that range of points:

Sanctioning Reference Points Threshold Table

Worksheet score 0-40. Available sanctions: Reprimand; Stayed monetary penalty; Monetary penalty.

Worksheet score 45-60. Available sanctions: Reprimand; Monetary penalty; Stayed monetary penalty; Stayed suspension; Probation, with terms that may include CE, CE audit, continuing in therapy, employer reports, HPIP (Health Care Practitioner Intervention Program), psych evaluation, supervision, and shall not seek/accept employment allowing contact with patients.

Worksheet score 65-110. Available sanctions: Corrective action; Stayed suspension; Probation (terms same as above).

Worksheet score 115 or more. Available sanctions: Suspension; Revocation/accept surrender; Recommend formal hearing.

The board turns in a Reference Point Cover Sheet, which contains the case number and type, the respondent’s name, the licensing number, and the imposed sanction. Because this is a voluntary tool to guide and help the board, it does not have to abide by what is recommended if it feels there are mitigating or aggravating circumstances. If the board wants to deviate from the threshold table, however, it is asked to record a reason for departure. This information goes back to researchers in order for the worksheets to be updated to reflect current board sanctioning culture. The Sanctioning Reference Points include all case types and provide continued assistance to the boards during the difficult process of assigning sanctions to case violations. Additional circumstances that may influence the board’s decision include: prior history; dishonesty; motivation; remorse; restitution; multiple offenses; and whether it was an isolated incident.
The system also provides more exact definitions of cases that come before the board:

Sanctioning Reference Point Case Type Table

Abuse/Impairment/Inappropriate Relationship (40 points). Included case categories: any sexual mistreatment of a patient; impairment (alcohol, illegal substances, prescription drugs); physical/mental/medical incapacitation; boundary issues.

Fraud (20 points). Included case categories: unwarranted/unjust services; falsification of records; improper patient billing; falsifying license/renewal.

Standard of Care (15 points). Included case categories: improper diagnosis/treatment; treatment with no license; failure to obtain/document CE.

Business Practice Issues/Other (10 points). Included case categories: records, inspection, audit; required report not filed.

We also want to make certain the sanction reference points don’t take into account factors such as race or ethnicity. Because only those factors deemed to be consistently important in sanctioning are included on the worksheet, it is hoped that any unwarranted biases that may influence sanctioning will be neutralized. Using the system also helps to predict future caseloads and the need for probation services. With respect to methodology, the fundamental question when developing the Sanction Reference Point System is deciding whether the supporting analysis should be descriptive in nature (based on quantitative analysis of historical sanctioning practice) or more prescriptive in nature (qualitatively based, on what sanctions should be in the future). The Virginia Board of Physical Therapy decided to merge both, so we used both qualitative and quantitative methods.
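The worksheet logic the article describes (the highest-valued case category counts, other factors add points, and the total maps to a sanction band) can be sketched as follows. The point values come from the tables above; the band labels are abbreviated and the function names are hypothetical:

```python
# Points per case category, from the Case Type Table above.
CASE_POINTS = {"abuse": 40, "fraud": 20, "standard_of_care": 15,
               "business_practice": 10}

def worksheet_score(categories, factor_points):
    # When a case spans several categories, only the highest-valued
    # one is scored; other case/licensee factors add their points.
    return max(CASE_POINTS[c] for c in categories) + sum(factor_points)

def recommended_band(score):
    # Abbreviated bands from the Threshold Table above.
    if score <= 40:
        return "reprimand / monetary penalty"
    if score <= 60:
        return "reprimand / penalty / stayed suspension / probation"
    if score <= 110:
        return "corrective action / stayed suspension / probation"
    return "suspension / revocation / formal hearing"

# Hypothetical case: abuse plus standard-of-care issues, 30 factor points.
score = worksheet_score(["abuse", "standard_of_care"], [30])  # 40 + 30
band = recommended_band(score)
```

As the article notes, the board may still depart from the recommended band for mitigating or aggravating circumstances; the mapping is a guide, not a rule.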
The qualitative analysis primarily involved interviews with all past and present board members, past and current chairs, and Assistant Attorneys General who’ve been involved in cases over the last 10 years. The quantitative analysis was based on reviewing disciplinary cases, forming the sanction worksheet, identifying offense factors, and attempting to exclude factors that should not come into play when making these decisions. Sanction Reference Points weigh all the circumstances associated with a disciplinary violation. In order to validate our analysis, researchers were able to use the reference points to correctly predict 85% of all past case sanctions handed down. In essence, 15% of the sanctions over the past 10 years fell above or below what the worksheets recommended. In those instances, the board gave sanctions that were either harsher or more lenient. Those cases could have had justifiable extenuating circumstances, for example very serious patient harm, or could have been due to other disparities that are not as easily explained. Lisa R. Hahn, MPA, currently serves as Executive Director of the Virginia Boards of Physical Therapy, Long-Term Care Administrators, and Funeral Directors and Embalmers. She has been active with the FSBPT for four years. George C. Maihafer is currently Graduate Program Director of the PhD program in Health Services Research for the School of Physical Therapy at Old Dominion University. He has served in various leadership roles in the Virginia Physical Therapy Association, including Chapter President, Chairperson of the Political Action Committee, and Chief Delegate to the APTA House of Delegates.
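The 85% validation figure is an agreement rate: the share of historical cases where the worksheet’s recommendation matches the sanction actually imposed. A minimal sketch with hypothetical sanction labels (the data and function name are illustrative, not the study’s):

```python
def agreement_rate(recommended, actual):
    # Share of past cases where the worksheet recommendation
    # matched the sanction the board actually handed down.
    matches = sum(r == a for r, a in zip(recommended, actual))
    return matches / len(actual)

recommended = ["reprimand", "probation", "reprimand", "suspension"]
actual      = ["reprimand", "probation", "probation", "suspension"]
rate = agreement_rate(recommended, actual)  # 3 of 4 cases match
```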
Published May 19, 2011

Data Forensics

By John Fremer, President, Caveon Test Security. This article was developed from a presentation given at the Federation’s 2010 annual meeting in Denver, Colorado.

Caveon Test Security didn’t invent data forensics, which is a statistical methodology widely used in law enforcement and other investigations. However, in 2003, Caveon introduced it into the testing field, and now there are people working for other companies called data forensic analysts; there were no such positions until Caveon came along in 2003. Millions of people who have never met me hate me because of my past 35 years with various education testing services: I have had a hand in many major exams that they have endured. For instance, I led the team that revised the Scholastic Aptitude Test (SAT). We met all of our schedules and all of the criteria, and we were all better friends when it was completed than we were when it began. It’s probably one of my proudest accomplishments. I have been at Caveon, which is employee-owned and financed, for seven years. In addition to data forensics, we do security audits and investigations. Data forensics, though, is our best-known service. Data forensics is a special class of data analysis looking at actual responses to individual test questions, which we call items.
As opposed to mean scores or summaries, we can obtain much better information from individual responses from every test taker to every question on every single occasion. From that information, we create models which indicate normal responses to an item or items. It takes more time to answer a question if there is a lot of content on the screen, but sometimes that’s not what you find. Sometimes you find people answer questions so quickly that they could not have read them, and sometimes you find that they spend amazing amounts of time on what you would think would be very easy-to-read questions. We then look at the patterns of the distribution of time and establish a normal range, but there are many people outside that range. For certain people taking a test, many questions are answered very quickly, but other questions take an enormous amount of time, and there is nothing in between. That’s not what you see if you look at 99.9% of test-takers. Why is it like that? There can be various unusual reasons why something happens that isn’t cheating, but when there are flags on multiple indicators, we sometimes get bizarrely unlikely outcomes. The results in a project for the Atlanta public schools, for instance, were one in 10 to the 52nd power. That would be similar to flipping two coins with the first one landing on its edge and the second landing on the edge of the first coin and staying there. Perhaps the first indicator of possible cheating is extremely high agreement among pairs or groups of test-takers. The group always says, “Well, we study together, we had the same book and we had the same teacher.” But if you just look at all people who had the same book and same teacher, you don’t get results like theirs. Not only did they happen to get the same questions right and the same questions wrong, but when they were wrong they chose the same answers. How could that be possible on question after question after question?
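Two of the signals described above, implausible response times and identical wrong answers between test-takers, are straightforward to compute from item-level data. This is a hedged sketch only; the thresholds, data, and function names are illustrative and not Caveon’s actual models:

```python
from statistics import mean, stdev

def flag_latencies(times, too_fast=5.0, slow_z=3.0):
    # Flag items answered faster than `too_fast` seconds, or more
    # than `slow_z` standard deviations above the mean time.
    m, s = mean(times), stdev(times)
    return [i for i, t in enumerate(times)
            if t < too_fast or (s > 0 and (t - m) / s > slow_z)]

def identical_wrong_answers(a, b, key):
    # Count items two examinees both missed with the SAME chosen
    # option -- the agreement signal described above.
    return sum(1 for x, y, k in zip(a, b, key) if x == y != k)

# Hypothetical per-item response times (seconds) for one examinee.
flags = flag_latencies([30, 2, 40, 35, 28, 33])   # item 1 answered in 2 s
# Two examinees' answer strings against the key.
shared = identical_wrong_answers(list("CBAD"), list("CBAA"), list("ABCD"))
```

As the article stresses, a single flag proves nothing; it is the combination of independent indicators that produces the astronomically unlikely outcomes described above.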
If you ask people if they did something wrong, they always say “No,” whether it’s an individual, a program or a school, so that’s not useful information. If they said “Yes, we did it, we were wrong and we are sorry,” that would be meaningful. Another odd pattern is when there are multiple occasions of substantial gains or losses from one occasion to another. We only look for really amazing, extraordinary changes completely unlike what normal test-takers get, and we still find them. Some test-takers’ decisions are completely inappropriate. FSBPT asks the toughest questions of any of our clients. It wants to know exactly what something means, why we reached a certain conclusion, how we do the analysis, whether there are other interpretations and whether we could run our data again. But once they are satisfied with a variety of different types of data, they want to take action. They use the data. They make it clear that there is no tolerance for cheating. That’s not true for all of our clients. In some public education environments in which Caveon works, we bring unmistakable data and they don’t want to take action. They don’t want to deal with any of the interest groups that will protest. That means inappropriate behavior continues. Other organizations, like FSBPT, use the information to good effect. The American Board of Internal Medicine used data forensics as part of an investigation and sanctioned 139 test-takers. That’s not a world in which you want to be sanctioned. The GMAT program is another that’s very active in protecting its tests. The association revoked 76 scores on the GMAT, the Graduate Management Admission Test, which is the door to accredited business schools in the U.S. They banned 58 testers and notified 100 schools around the world of their actions. That’s a good thing. We have to create the sense that cheating is not going to work and that you are going to be sorry if you cheat.
Caveon searches the media worldwide, and we put out something called “Cheating in the News” every two weeks. At one level, it’s depressing, but it’s also informative because it explains how technology is being used to cheat. It helps point out why data forensics is so important. Many high-stakes testing programs in licensing and credentialing are now using data forensics. It’s essential to act on the results. Don’t just find problems; get to the root of them, take action and tell the public what you did. You must act on evidence of misbehavior to really uphold the fairness and validity of your exams. It’s costly, but it’s something you need to do to protect your programs. John Fremer, PhD, is the President of Caveon Test Security. He has 40 years of experience in the field of test publishing and test program development and revision, including management-level positions at Educational Testing Service and The Psychological Corporation/Harcourt. In his 35-year career at Educational Testing Service, Fremer led the ETS component of the team that carried out a major revision of the SAT. Fremer also served as Director of Exercise Development for the National Assessment of Educational Progress, and was Director of Test Development for School, Professional, and Higher Education Programs. During 2000-2003, Fremer designed and delivered measurement training programs to international audiences for the ETS Global Institute. Fremer is a Past President of the National Council on Measurement in Education (NCME) and a former editor of the NCME journal Educational Measurement: Issues and Practice. Fremer also served as President of the Association of Test Publishers (ATP) and the Association for Assessment in Counseling (AAC). He was co-chair of the Joint Committee on Testing Practices (JCTP) and of the JCTP work group that developed the testing-industry-wide Code of Fair Testing Practices in Education, one of the most frequently cited documents in the field of educational measurement.
Fremer is a co-editor of Computer-Based Testing: Building the Foundations for Future Assessments (Erlbaum, 2002) and author of “Why use tests and assessments?” in the 2004 book Measuring Up: Assessment Issues for Teachers, Counselors, and Administrators. John has a B.A. from Brooklyn College, City University of New York, where he graduated Phi Beta Kappa and magna cum laude, and a Ph.D. from Teachers College, Columbia University, where he studied with Robert L. Thorndike and Walter MacGinitie.

Published May 19, 2011

Pharmacy Security Breach

By Carmen Catizone

The primary goal of the National Association of Boards of Pharmacy (NABP) is to protect public health by regulating the practice of pharmacy. Members include all U.S. states, the District of Columbia, Guam, Puerto Rico, and the Virgin Islands, nine Canadian provinces, two Australian states, and New Zealand. When we had examination breaches, our recruiters, colleges and employers put significant pressure on our board of pharmacy members. They said, “You can’t do this because it’s going to cut down access”; “People won’t have vital services”; and “Pharmacy services are needed, so you need to back off and rethink your decision.” We have looked at security breaches from all perspectives, and we are doing what’s in the best interest of the patient in a fair and unbiased manner.
Many of the candidates who were involved in these compromises and breaches are hyper-performing candidates who will pass no matter what. Other candidates may not be that well prepared or that knowledgeable; they are going to fail the test no matter what. Suppose a family member is involved in a serious accident and is rushed to the hospital for emergency surgery, and the surgeon is one who compromised his examination or was on the borderline and shouldn’t have passed. Now he is going to perform surgery on a member of your family. Perhaps people don’t value physical therapists or pharmacists to the same level as surgeons, but it’s the same concept and the same responsibility. We have had to designate certain countries and certain schools that are no longer qualified to take our examinations because of breaches, compromise and lack of security. NABP accredits durable medical equipment suppliers and pharmacies. We accredited more than 30,000 pharmacies last year. We accredit internet pharmacies and wholesalers on behalf of the states. We have surveyors and inspectors doing inspections in all pharmacies that we accredit throughout the United States. We also manage the Association of State Boards as well as accreditation programs for the states. So the scope of our association extends beyond testing. We have all disciplinary information on all pharmacists, and we share that information with the states. Our organization has been around since 1904, and we know we have exam programs that have been compromised. Any organization that has an exam program and doesn’t believe it has been compromised is not seeing reality. It is an ongoing battle with a small percentage of people in all of our professions and occupations, and no profession or occupation is exempt, including the FBI. Our foreign exam has been compromised. We went with paper and pencil, but there’s really no way to maintain security anymore.
We had a candidate who took the examination recently, spent five seconds on every question and answered C for every question, so we suspect the candidate had some sort of copying device in his glasses. He was simply taking pictures of the items on the computer screen. We went to a store to ask about copying devices. Without hesitation, the clerk said there were two items I could buy. The first one would work in almost any testing environment: what looked like a cigarette lighter was actually a camera. The other was a regular-looking pair of eyeglasses that had a camera in the upper right-hand corner and a controller device with which one could take a photograph of every screen or every page. The clerk would have been perfectly happy to sell me as many of these devices as I wanted. So we have to look at having new items on every test, which is going to drive up the exam cost and limit the times we can administer the exam. At the moment, we are involved in litigation against the Board of Regents of the University of Georgia in regard to faculty members organizing efforts with students to compromise our examinations. We first became aware of these faculty members’ efforts in 1994/1995. We worked with the University of Georgia to shut that down. They (individual faculty members and the university) signed a contract with us not to do this again. In 2006, the activity was once again widespread. We obtained a court order to go into a professor’s home and office to confiscate all his files, computers and any materials related to this operation. The compromise was so significant we had to shut down the national licensing exam for a number of months. The individual was regarded as a deity for his review program. He was being paid $1,000 a day for this service, which compromised more than 600 of our items. We now have litigation against the University of Georgia for copyright and trademark infringement.
We have also gathered a list of students whose email addresses were used in connection with the data, and we are going to take action against them either civilly or criminally, whichever is relevant. Our board of directors is composed of 10 people elected by our members. We told them we could not administer a valid examination because of the compromise, and the board voted to shut down the examination. We informed the states, which said, in essence, “Please do what you need to do, because we want to make sure it is a valid exam.” We also had situations where we noted some problems with particular schools in other countries. We did the forensic analysis and were able to identify certain schools in certain countries. Once again, our members stood behind us in support. We never received a single lawsuit from any individual candidate or any recruiter. We do not accept any scores for any tests taken outside the United States, for two reasons. We cannot guarantee the security in those other countries despite the assurances that are given to us, and we want to be able to take action against candidates who cheat. We can then go after those individuals or those institutions in the United States without having to go through a very rigorous, unfulfilling and expensive process to try to deal with another government. We have also invalidated scores of students from the U.S. in situations where the exam was compromised. We did the forensic data analysis. Only candidates for whom there was no evidence to warrant cancellation were allowed to keep their scores. If there was any question about a score, we invalidated it. Is it possible that some students not involved had scores invalidated? I am sure it was possible, but our choice was to err on the side of the patient and the protection of public health. Our stand is that the integrity of the exam and the integrity of the process is more important than some of the pressure that we received to do otherwise.
Carmen Catizone, MS, RPh, DPh, is Executive Director/Secretary of the National Association of Boards of Pharmacy.

Optical Collusion
By Sandra Neustel

With just 1,400 students in every graduating class, optometry is a very, very small community. There are 20 schools across the United States, of which three are so new they do not yet have a graduating class. Individual class size ranges from 26 to 150 students. In 1951, the National Board of Examiners in Optometry (NBO) was founded by the regulatory state boards as well as by schools and colleges of optometry. It is focused entirely on creating exams. Of the eight board members, four are from the regulatory boards, three from the schools and colleges and one from the public. There are also 10 staff members. Until 2010, there were no known incidents of cheating. Similar to the medical field, the optometry examination has a three-part structure, with two paper-and-pencil written exams and a performance exam. Part one is a 500-item, multiple-choice exam administered in four sessions, with 125 items in each session. It is available to be taken twice a year. The 17 testing sites are in the cities of the schools, either in a gymnasium or a hotel ballroom. NBO hires its own proctors and does its own testing. Part two is a multiple-choice, paper-and-pencil exam, administered in two sessions in one day.
It is presented as a medical record based on 60 patient cases, with five or six items related to each case, including use of images and photographs. Part three is a performance evaluation, administered by 500 volunteer examiners who rate the critical skills performance of candidates as they do a refraction and various exams on volunteer patients. Our pass rate in 2009 was between 80% and 90% for each part of the exam. There is also a fourth, separate, breakout score that focuses on disease treatment and management. It is administered within one of our other exams. In 2009, 76% of the class passed all three exams without repeating any of the exams. Ultimately, 92% pass all exam parts before graduation; the ultimate pass rates for each individual exam were also reasonably high. Our cheating incident focused on the March 2010 administration of part one of the exam. One school alerted us of a piracy event involving one student. There had been a similar incident in 2009, but it was rectified before the exam. The 2010 incident, however, came to light after the fact. We noticed and were concerned about a shift in performance for this particular group. There was a 5% differential from 2009 to 2010, which could have meant the exam was a bit easier, but the shift was mysterious: some items became incredibly easy while other items became more difficult. Ultimately, we got a crash course in security and in cheating. We had to consider whether there was pre-knowledge of our items or some other kind of breach. Our preliminary fact finding determined that something indeed had happened at this particular school, so we decided to hold the scores for all candidates. The vast majority of students taking this test were still in school; we didn't prevent anyone from practicing. NBO hired a private investigation firm to deal with the piracy, and it came back with three different sets of interviews.
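A shift like the one described above is typically surfaced by comparing per-item proportion-correct values (p-values) across administrations: a uniform shift suggests an easier form, while a few items swinging sharply suggests pre-knowledge. A minimal sketch with hypothetical p-values; the item identifiers and threshold are assumptions for illustration only:

```python
def item_drift(p_prev, p_curr, threshold=0.15):
    """Compare per-item proportion-correct (p-values) across two
    administrations and report items whose difficulty shifted by more
    than the threshold, with the signed change."""
    drifted = {}
    for item_id, p_old in p_prev.items():
        p_new = p_curr.get(item_id)
        if p_new is not None and abs(p_new - p_old) > threshold:
            drifted[item_id] = round(p_new - p_old, 2)
    return drifted

# Hypothetical p-values for two successive administrations.
p_2009 = {"item-101": 0.62, "item-102": 0.55, "item-103": 0.71}
p_2010 = {"item-101": 0.88, "item-102": 0.57, "item-103": 0.49}
print(item_drift(p_2009, p_2010))  # → {'item-101': 0.26, 'item-103': -0.22}
```

Items that became much easier (positive drift) are the candidates for exposure; items that became harder may simply reflect a different cohort, so both directions warrant review rather than automatic action.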
Caveon compiled some data for NBO and did not find evidence of a wider breach; it couldn't even find a statistical difference among the group that had the piracy scheme. It did find a pocketed, isolated set of collusions: people who copied. It identified four candidates in multiple sessions who had very similar patterns to their neighbors, and based on our previous exam agreement and policy, we canceled the scores of those candidates. The four candidates came forth and were extremely cooperative. The excuse was, "Everybody is doing it across the country. We have to do it in order to stay competitive." The students were permitted to retake the test. What was really disappointing was that the students told us a faculty member was orchestrating the cheating, which was later confirmed. The staff member who may have been involved is still working. The naive question is whether money is an issue. Optometry schools do compete. We finally released the student scores, but told them the scores were provisional. Since then, we have not found links to other institutions in the optometry field. However, the NBO board decided to invalidate scores for all students at that particular school, which has never been publicly named. Students were offered a free re-take and most did very well. NBO believes it lost 165 total items from its item bank to piracy. Twenty-six students were distinctively identified and banned from taking part one for two years. Effectively, they will be prevented from beginning active practice for one year after graduation day. The students may still appeal to our judicial board, however. We are going to continue with the data forensics, as it is a very important step for protecting our exams. We are more actively monitoring test preparation courses. NBO is also increasing communication with institutions, emphasizing the application statement and making it a multi-step process.
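The "very similar patterns to their neighbors" finding is the core of answer-copying detection. Commercial forensics (such as Caveon's) model the agreement expected by chance given ability and item difficulty; a raw pairwise match-rate screen, shown here with hypothetical answer strings and seat labels, only illustrates the idea:

```python
from itertools import combinations

def similar_pairs(answer_sheets, min_match=0.9):
    """Flag candidate pairs whose answer strings agree on an unusually
    high share of items. A real analysis would model chance agreement
    (candidate ability, item difficulty); this raw match rate is only
    a first-pass screen."""
    pairs = []
    for (id_a, ans_a), (id_b, ans_b) in combinations(answer_sheets.items(), 2):
        matches = sum(a == b for a, b in zip(ans_a, ans_b))
        if matches / len(ans_a) >= min_match:
            pairs.append((id_a, id_b))
    return pairs

# Hypothetical answer strings for four seat-adjacent candidates.
sheets = {
    "seat-14": "ACBDACBDAC",
    "seat-15": "ACBDACBDAB",   # differs from seat-14 on one item
    "seat-16": "BDACBDCABD",
    "seat-17": "CABDBCADBA",
}
print(similar_pairs(sheets))  # → [('seat-14', 'seat-15')]
```

As with any statistical flag, a high match rate justifies investigation and corroborating evidence (seating charts, proctor reports), not cancellation by itself.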
We are viewing this incident as an opportunity for increasing communication within the optometric field. Additionally, NBO is introducing new item types, specifically questions where students may have to select three or four correct responses rather than a single one. We are also systematically re-examining the standard on our cut score for the part one exam to make certain we have the bar set appropriately. We would like very much to go to a computer-based test, but have not done so for several reasons. The main reason is that the test is so long; we would have to get our testing time down. The other reason is that our candidates are so localized. We have more than 120 candidates who would need to test within a reasonably short window in Memphis, Tennessee, and there are not enough seats in Memphis to do that. Any organization that says it doesn't have any security breaches may have its head in the sand. We all need to be vigilant. Cheating may not be rampant or widespread, but it occurs in many different shapes and forms. We must protect our product and the integrity of our scores.

Sandra Neustel, PhD, is Director of Psychometrics and Research for the National Board of Examiners in Optometry.

When PTs Take the PTA Exam
By Jeanne DeKrey and Dargan Ervin
Forum, Spring 2010

This article was developed from a presentation at the 2009 FSBPT annual meeting in San Diego, California.
Through laws or regulations, most jurisdictions prohibit someone who is educated as a physical therapist from taking the PTA exam. The major assumption or hypothesis we have heard from these jurisdictions is that individuals trained as PTs who practice as PTAs will practice beyond their scope of practice as compared to individuals who are educated, licensed and working as PTAs. Still, there are currently at least five jurisdictions that do not prohibit the practice. What are the advantages, the disadvantages, and their assumptions? Are we ensuring public protection by taking either of these positions?

From a practice standpoint, one might think that an individual who is licensed or authorized to practice at a lower level than their educational level might have a tendency to practice outside their regulated scope.

- One might even question whether it is appropriate for a physician to work under a physician assistant.
- There may also be a supervision issue where someone is not being supervised appropriately.

On the other side, there are certainly PTAs who are very smart and who would make wonderful PTs, but they may not have had the opportunity to attend a PT school or chose not to go. There can be many assumptions about this practice.

There are no published studies or literature that support one position over the other on allowing PTs to sit for the PTA exam. So instead, we first reviewed what we know about the differences between PT and PTA education and mined that database for supporting information. Is there content in PTA programs that differs from, or is presented in more depth than, the content in a PT program? If one reviews the PTA Coursework Tool, all of the content within PTA education is also within the PT education. Truthfully, we would have been surprised to have found anything different.
Nonetheless, CAPTE criteria for PT programs also include the direction of the PTA; consideration of the needs of the patient/client; the assistant's ability; jurisdictional law, practice guidelines, policies and codes of ethics; and facility policies. The PTA in all jurisdictions is still subject to supervision by the physical therapist, so the PT needs to know how supervisory relationships work. The normal model of education for a PT includes a lot of information about rules, direction and supervision.

Do American-educated PTAs receive more in-depth education than the PT in some areas? For example, do PTs receive as much education about the role of PTAs as PTAs receive? On occasion, some PTAs have to check on PTs to let them know what's appropriate. There are not many specifics as to the depth of content, but it keeps coming back to the point that supervising physical therapists have the overriding responsibility.

It certainly makes sense to hypothesize that PTAs with a PT education might have a harder time with scope-of-practice boundaries. The three big states (California, New York and Texas) all allow PT-educated candidates to sit for the PTA exam and apparently have statutory authority to do that. They all participate in the Federation's Exam, Licensure & Disciplinary Database (ELDD), so we feel we captured all of those populations. Just 2.6% of disciplinary actions taken against PTA licensees involved those who had been educated as a PT, and there were no violations for exceeding scope of practice among PT-educated PTAs. It is a pretty clear answer: there doesn't seem to be any evidence that PT-educated individuals working as PTAs are having that issue.

Another hypothesis is that a person would choose to work as a PTA after being educated as a PT only because he had multiple failures of the test. That's not necessarily true according to data from the ELDD. And that brings us back to the philosophical question.
Does over-qualification mean that these folks shouldn't practice, or does the focus really need to go to the supervisor? We would love to have a larger sample so we could go back to those individuals and have a better understanding of why they chose that path.

In New Hampshire, the board still had some concerns relative to an individual's ability to understand the scope and the role of the PTA. It contacted a PTA educator and asked if he would spend some time with the individual, interview him and file a report. The program director said it appeared, based on the interview and a review of the candidate's history, that this person understood the differences between practice as a PT and the role of the PTA. The PTA program director supported allowing this person to sit for the exam, and the board subsequently allowed the individual to sit for the NPTE.

It's certainly reasonable that a board makes sure it is doing all it can before this person has access to one of the determinants of entry-level competence.