Pain is a very difficult thing to measure. There's no lab test that can put it into an absolute number like a white blood cell count. Yet with 50 million chronic pain sufferers in the United States alone, there's got to be a better way to measure pain than the visual analog scale (VAS). Using this scale, patients assign a number from zero to 10 to rate their pain (zero is no pain, 10 is the worst pain). The scale is so subjective that even patients can't tell whether a rating of three today is better or worse than yesterday's three.
Pain researchers are working to develop an interactive, intuitive computer program that will help quantify (put into numbers) the variables describing and defining pain (e.g., location, intensity, duration). There is also a need for a chronic pain assessment tool that can measure improvement in pain levels. Being able to measure improvement would help researchers identify which treatment approaches are working best.
In this study, researchers from the National Research Centre for the Working Environment teamed up with QualityMetric Incorporated to develop a computerized prototype of an adaptive test for chronic pain. They conducted the research in two steps and report the results of both studies here.
In the first step, they selected a variety of test items from several other pain scoring tools (e.g., SF-36, Nottingham Health Profile, Oswestry Low Back Pain, Brief Pain Inventory, Fibromyalgia Impact Questionnaire) and put them all together in what they called a Chronic Pain Item Bank. There were 45 items covering everything from the frequency and intensity of pain to the effects pain has on function and sense of well-being. The scoring was set up for the test bank so that low scores would indicate a low level of function and high scores would mean the patient had a higher level of function. This type of scoring helps show the impact pain has on function.
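To make the scoring convention concrete, here is a minimal sketch of how such an item bank might be represented. The item texts, the zero-to-four response scale, and the function names are hypothetical illustrations, not the study's actual software; the one detail taken from the study is the scoring direction (higher score = higher function).

```python
# Hypothetical sketch of a chronic pain item bank. Item texts and the
# 0-4 response scale are illustrative; only the scoring direction
# (higher score means higher function) comes from the study.

ITEM_BANK = [
    {"id": 1, "area": "intensity",
     "text": "How much pain have you had during the past four weeks?"},
    {"id": 2, "area": "impact",
     "text": "How much did pain interfere with your daily activities?"},
    # ... up to 45 items across location, intensity, duration, impact
]

def score_response(raw_pain_rating, max_rating=4):
    """Reverse-score a raw pain rating so that a higher result means
    a higher level of function (i.e., less pain impact)."""
    return max_rating - raw_pain_rating
```

Under this convention, a patient reporting the worst pain (a raw rating of 4) scores 0 for function, while a pain-free patient scores the maximum of 4.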
The computer program was set up so that the computer selected the next question for each person taking the survey based on the answer (and score) given to the previous question. The first question was always, "How much pain have you had during the past four weeks?"
The test had a cut-off point so that patients didn't end up answering all 45 questions in the bank. The cut-off point was different for each person but was based on a concept called a precision level or precision rule. The program was also set up to make sure each person was asked questions in all four content areas (pain location, intensity, duration, and impact), a method called content balancing. With the precision rule and content balancing, most people taking the test answered between two and seven questions; when a longer assessment was needed, the computer simply presented more questions.
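The precision rule and content balancing described above can be sketched as a simple loop. This is only an illustrative toy under stated assumptions: the item bank, the precision estimate (which here simply tightens with every answer), and the stopping threshold are all stand-ins, not the researchers' actual algorithm.

```python
import random

# Toy sketch of an adaptive test loop: keep asking until a precision
# threshold is met (precision rule), while making sure every content
# area gets at least one question (content balancing). The precision
# formula below is a stand-in, not the study's statistical model.

AREAS = ["location", "intensity", "duration", "impact"]

# Hypothetical bank: ~45 questions spread across the four areas.
ITEM_BANK = {area: [f"{area} question {i}" for i in range(1, 12)]
             for area in AREAS}

def run_adaptive_test(answer_fn, precision_target=0.9):
    asked, answers = [], []
    covered = set()
    while True:
        # Content balancing: prefer an area not yet covered.
        pending = [a for a in AREAS if a not in covered]
        area = pending[0] if pending else random.choice(AREAS)
        question = ITEM_BANK[area][len(asked) % len(ITEM_BANK[area])]
        asked.append(question)
        answers.append(answer_fn(question))  # record the response
        covered.add(area)
        # Toy precision estimate: each answer tightens it further.
        precision = 1 - 0.5 ** len(answers)
        # Precision rule: stop once the estimate is precise enough
        # and all four content areas have been touched.
        if precision >= precision_target and len(covered) == len(AREAS):
            return asked, answers
```

With these toy settings, calling `run_adaptive_test(lambda question: 3)` stops after one question per content area; a stricter `precision_target` would make the loop present more questions, mirroring how the real program lengthened the assessment when needed.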
Once the survey was pilot tested and ready to go, the second study was carried out. One hundred adults from the Dartmouth-Hitchcock Medical Center Pain Clinic were recruited to take the test. The people in the study completed both the full 45-item test bank (a static test) and the computerized adaptive survey (a dynamic test). The scores were analyzed and compared to find out how the two methods compare in measuring and describing pain and its impact on daily function.
They also looked at how long it took each person to complete both types of surveys. The computerized adaptive survey took an average of one and a half minutes to complete, whereas the full test bank took 10 minutes. The dynamic survey was just as accurate as the static full survey despite using fewer items and taking less time. The scores for all four content areas (pain location, intensity, duration, and impact) were equivalent between the two tests, showing that the faster, shorter computerized method works just as well as the longer, more cumbersome full test procedure.
Most of the test participants reported no difficulty completing the computer program using a tablet personal computer (PC) and stylus (a handheld tool used to touch the screen to answer questions). And they said the results were an accurate reflection of their pain experience. The participants were given a chance to make suggestions for improving the test procedures. Their comments helped the researchers see that left-handed people, anyone with a visual impairment, and those using a wheelchair had some trouble using the tablet PC.
All in all, this computerized prototype for standardizing data collected on chronic pain has some merit. Fast and easy to use, this dynamic adaptive computer program could replace the longer surveys currently in use. Since this was a feasibility study with a small sample size, further testing with more people will be required before the tool can be released for general use. For now, it's two thumbs up for this dynamic pain assessment system that yields an accurate picture of pain patterns and the impact of pain on function.