Alternative to PANRE Performance Guide
Pilot Passing Score
To pass the Alternative to PANRE Pilot, an examinee must reach or exceed the passing score of 1150, which is determined through a process called standard setting. Standard setting requires a panel of Physician Assistant (PA) content experts with first-hand understanding of the core medical knowledge, skills, and abilities a recertifying PA should possess for safe and effective practice. The panelists are chosen to be representative of the entire PA profession across a set of criteria including practice setting, practice focus, experience in the field, and other demographic variables.
These content experts engage in a standard setting process known as the Angoff method, which requires panelists to review individual test questions and decide whether someone who is just barely qualified to practice would be able to answer the question correctly. The question-level ratings provided by the content experts are aggregated to define a recommended passing score. The recommended passing score then goes to the NCCPA Board of Directors for consideration and final approval.
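As a rough illustration of how question-level ratings might be aggregated under a yes/no Angoff variant, each panelist's ratings can be averaged per question and the per-question averages summed into a recommended raw cut score. The data and the aggregation rule below are invented for illustration; they are not NCCPA's actual panel data or procedure.

```python
# Hypothetical Angoff-style aggregation (illustrative data, not NCCPA's).
# Each panelist rates each question 1 (a minimally qualified PA would answer
# it correctly) or 0 (would not). Ratings are averaged across panelists per
# question, then summed to yield a recommended raw passing score.

def angoff_cut_score(ratings):
    """ratings: one list per panelist, one 0/1 rating per question."""
    n_panelists = len(ratings)
    n_questions = len(ratings[0])
    per_question = [
        sum(panelist[q] for panelist in ratings) / n_panelists
        for q in range(n_questions)
    ]
    return sum(per_question)

# Three hypothetical panelists rating five questions:
panel = [
    [1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 1, 1, 1],
]
print(round(angoff_cut_score(panel), 2))  # 3.67 out of 5 raw points
```

In practice the recommended cut score would then be mapped onto the reporting scale and forwarded to the Board for approval, as described above.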
Pilot Reference Group
Reference groups are provided to give examinees a sense of how they compare to their peers. The reference group presented in this score report comprises all PAs who completed the Pilot exam.
Pilot Score Scale
Performance on the Pilot is reported on a scale-score metric. Scale scores place all scores on a unified scale regardless of the difficulty of any specific test form, so examinees with the same level of ability will achieve the same score no matter which exam form they take. The Pilot scale has a minimum score of 1000, a maximum score of 1500, and a passing score of 1150.
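A minimal sketch of what placing a score on a bounded reporting scale can look like, assuming a simple linear transformation of an underlying ability estimate clamped to the published endpoints. The slope and intercept here are invented for illustration; NCCPA's actual scaling transformation is not described in this document.

```python
# Illustrative only: mapping an ability estimate (theta) onto a 1000-1500
# reporting scale. The constants are assumptions, not NCCPA's actual values.

SCALE_MIN, SCALE_MAX = 1000, 1500

def to_scale_score(theta, slope=100.0, intercept=1250.0):
    """Linearly transform theta to the reporting scale, clamped to the
    published minimum and maximum scale scores."""
    raw = intercept + slope * theta
    return max(SCALE_MIN, min(SCALE_MAX, round(raw)))

print(to_scale_score(0.0))   # 1250
print(to_scale_score(-1.0))  # 1150 (the passing score under these constants)
print(to_scale_score(-4.0))  # 1000 (clamped to the scale minimum)
```

Because the transformation is applied after form difficulty is accounted for, two examinees of equal ability receive the same scale score even if they took forms of different difficulty.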
The standard error of measurement (SEM) is used to create a range within which an examinee's score would be expected to fall if that examinee took the test repeatedly with no additional preparation or remediation. The SEM does not affect pass/fail decisions.
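The score range described above is simply the scale score plus or minus the SEM. A minimal sketch (the example score and SEM value below are hypothetical):

```python
# Illustrative only: turning a scale score and its SEM into a score range.
# Under a normal error model, roughly two-thirds of hypothetical retest
# scores would fall within one SEM of the observed score.

def score_band(scale_score, sem):
    """Return the (low, high) range: score plus or minus one SEM."""
    return (scale_score - sem, scale_score + sem)

print(score_band(1180, 20))  # (1160, 1200)
```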
Pilot Test Standardization
NCCPA built each Pilot exam to be equivalent in terms of the number of questions from each content blueprint category and from each of the three assessment levels (see the Pilot Blueprint for more details on the levels). Before the test forms were finalized, two independent committees of PAs reviewed each form to make sure the content covered by the questions was relevant and reflected current standards of care. Questions that were out of date were removed and replaced with questions covering similar content at similar difficulty. After each quarter, NCCPA collected comments submitted by PAs about the test questions they encountered. Each test question brought to NCCPA's attention was reviewed by an independent group of PAs, and any question found to be flawed or out of date was removed from use and excluded from the calculation of final scores.
The content blueprint is a set of specifications that describes the relative proportion of questions from each content category and task area that should appear on a test form. The blueprint is the output of a practice analysis, a survey of the diseases and disorders PAs encounter and the skills they use in practice. The last practice analysis was conducted in 2015. In addition, a thorough study was conducted to evaluate the content that is Core to the PA profession. A committee of PAs used these data to set the blueprint that went live in 2019. This blueprint can be found here.
Content area subscores provide examinees with information about their relative strengths and weaknesses based on their performance on the test. The subscores displayed in this score report provide the test-taker with information about performance in each content area along with an indication of the standard error of measurement (SEM) for each content area. Large content areas (e.g., Cardiovascular) have more questions, which results in smaller SEM values and narrower SEM bars. Smaller content areas (e.g., Hematology) have fewer questions, which leads to larger SEM values and wider SEM bars.
To provide the most reliable estimates, subscores and the associated SEMs are estimated using statistical methods. Subscores should be interpreted with caution due to the number of questions in each area: a different set of questions, particularly in the smaller content blueprint areas, could result in a different subscore interpretation.
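The relationship between the number of questions in a content area and the width of its SEM bar can be sketched with a simple classical binomial model; this is an illustration of why fewer questions means a larger SEM, not NCCPA's actual estimation method, and the question counts below are invented.

```python
import math

# Illustrative only: SEM of a percent-correct subscore under a simple
# binomial model. Halving the question count does not halve the SEM;
# the SEM shrinks only with the square root of the number of questions.

def subscore_sem(p_correct, n_questions):
    """Approximate SEM (in percentage points) for a subscore based on
    n_questions items, given a proportion-correct of p_correct."""
    return 100 * math.sqrt(p_correct * (1 - p_correct) / n_questions)

# Hypothetical counts: a large area vs. a small one, same proportion correct.
print(round(subscore_sem(0.75, 60), 1))  # 5.6  (large area, narrow bar)
print(round(subscore_sem(0.75, 10), 1))  # 13.7 (small area, wide bar)
```

This is why the caution above matters: in a small content area, answering just one or two different questions correctly could noticeably move the subscore.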