Some management techniques take advantage of basic receivables-system information to monitor charge capture and forecast future exam caseloads. The models shown here use real data; a complex hospital-based practice was purposely chosen, with a trend line of January 2008 through August 2009. To understand the power of today’s receivables systems, consider that it took approximately two minutes to write the extraction query for these models, and the compilation took 30 seconds. This covers individual days spanning 19 months. A larger date range would, theoretically, build a more normalized population, but this practice made a significant change in coverage that affected hospital volumes after January 1, 2008.
No strong math background is needed to follow basic statistical techniques. The database extraction was exported to a standard spreadsheet program for modification to produce tables. The receivables-database program can produce statistics; however, in the absence of this feature, the raw data can be exported either to a spreadsheet or to a database program that has statistical functions to produce the same results.
The exam patterns seen for different days of the week can be used by the practice or its receivables vendor to monitor exam capture quickly. The same data can be used in forecasting.
Table 1 illustrates what the total extraction looks like. A month with a national holiday was chosen to meet a requirement of the analysis.
|Table 1: Extraction Example: July 2009|
Fridays are highlighted because Independence Day for 2009 fell on a Saturday, but was observed on the preceding Friday. Note how different the volume can be on a day that is a national or regional holiday. It more closely mirrors weekend volumes, which principally come from emergency-department patients. July 2009 was also chosen because it is outside the population reflected in the monitoring statistics (January 2008 through June 2009). There is an important reason for this, stemming from the inherent delay between the date of service for an exam and the date that it is posted to the receivables system. It can take up to 90 days to capture all exams fully for a given day. Therefore, July 2009 will offer a good opportunity to test actual capture against prior averages. Table 2 illustrates the statistics pertaining to July 2009 matched against those for the full range of dates.
|Table 2: Basic statistics on Table 1 data, versus entire population (1/2008 to 6/2009)|
The 31 days of July 2009 are organized by day of the week, with a special designation for a holiday falling on a weekday. The average is obtained by summing the exams per day and then dividing by the number of days. The SD, or standard deviation, is the square root of the variance, and the variance is a measure of spread: the average of the squared distances from the mean. These dispersion values help test the credibility of future results.
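The per-weekday statistics described above can be sketched in a few lines of code. This is a minimal illustration, assuming the extraction was exported as (date of service, exam count) pairs; the daily counts used here are hypothetical placeholders, not the practice's actual figures.

```python
# Per-weekday average and SD, assuming an export of (date, exam count) pairs.
# The daily volumes below are hypothetical, not the practice's actual data.
from datetime import date
from statistics import mean, pstdev  # pstdev = population standard deviation

# Hypothetical daily exam counts keyed by date of service.
daily_volumes = {
    date(2009, 7, 1): 812, date(2009, 7, 2): 794, date(2009, 7, 6): 835,
    date(2009, 7, 7): 821, date(2009, 7, 8): 808, date(2009, 7, 13): 840,
}

# Group counts by day of the week (0 = Monday ... 6 = Sunday).
by_weekday = {}
for d, count in daily_volumes.items():
    by_weekday.setdefault(d.weekday(), []).append(count)

for wd, counts in sorted(by_weekday.items()):
    avg = mean(counts)    # sum of exams per day, divided by number of days
    sd = pstdev(counts)   # square root of the variance
    print(wd, round(avg, 1), round(sd, 1))
```

The same grouping can be done in a spreadsheet with AVERAGE and STDEVP functions, as the article notes; the code form simply scales to a 19-month extraction without manual work.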
The three columns at the far right depict the universe of January 2008 through June 2009. The difference between the SDs of July 2009 and those of the prior 18 months suggests an important point about these data: seasonal variation. The Tuesday and Friday SDs of the July 2009 universe are the only ones higher than those of the prior universe, suggesting that the volumes for a day of the week, for a given month, might be better matched against those of the same month in a different year.
|Table 3: July 2008 statistics, versus entire population (universe: 1/2008 to 6/2009)|
Table 3, in some cases, shows even smaller SDs. Statistically speaking, the smaller the SD, the more reliable the population universe used as a benchmark (Tables 4 and 5).
|Table 4: Use of the "Z" statistic with the universe: 1/2008 to 6/2009|
Consider whether the daily volumes per day of the week are random enough to infer a normalized population. Nearly everyone has heard the term bell-shaped curve. It is derived from a perfectly symmetrical graph, with the mean at the center and half of the area under the curve on either side of it. Thanks to the mathematical prowess of our predecessors, the cumulative area under the curve has been reduced to a table of standard normal probabilities.
The first step in use of the table is computation of a z score (standard score), which is determined by subtracting the population mean from the observation and then dividing the result by the SD. Negative values naturally pertain to observations below the mean. These are the ones that should be of most concern; however, just because an observation is above the mean, even by a large amount, that does not imply that all exams were captured for that date of service. Note the column E values in red. This model is set to isolate all observed volumes that are less than one SD below the mean. The advantage of a symmetrical population is its uniformity. Of the observations for each day of the week, 68% fall within one SD of the mean and 95% are within two SDs. This implies that 34% of the observations are within one SD below the mean. Therefore, we would be concerned with any column E value below 0.5 minus 0.34, which is 0.16. These dates should be more thoroughly examined for missing exams.
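The z-score test described above can be sketched as follows. The benchmark means and SDs per weekday, and the observed volumes, are hypothetical placeholders; the threshold of one SD below the mean matches the model in Table 4.

```python
# Z-score screen for low-volume dates, per the one-SD-below-the-mean rule.
# Benchmarks and observations are hypothetical illustrations.
def z_score(observed, mean, sd):
    """Standard score: (observation - population mean) / SD."""
    return (observed - mean) / sd

def flag_low_days(observations, benchmarks, threshold=-1.0):
    """Return day labels whose volume falls more than |threshold| SDs
    below the benchmark mean for that day of the week."""
    flagged = []
    for day_name, volume in observations:
        m, sd = benchmarks[day_name]
        if z_score(volume, m, sd) < threshold:
            flagged.append(day_name)  # candidate for missing-exam review
    return flagged

# Hypothetical benchmarks: weekday -> (mean, SD) from the 1/2008-6/2009 universe.
benchmarks = {"Mon": (830.0, 25.0), "Tue": (815.0, 30.0)}
observations = [("Mon", 790), ("Tue", 820)]
print(flag_low_days(observations, benchmarks))
```

Here the Monday volume sits 1.6 SDs below its benchmark mean, so it would be flagged for a documentation review, while the Tuesday volume is slightly above its mean and passes.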
|Table 5: Use of the "Z" statistic with the universe: July 2008|
In Table 5, both universes flag the same days of the week, but the peer month additionally flags some Saturdays. This is likely to be a false positive because emergency-department visits are very random. The analysis suggests that using the statistics from the largest population is just as accurate a gauge as using a peer month. This practice, however, is located in a very large metropolitan area where there is not likely to be significant seasonal variation. If a practice has clear evidence that some periods in a year are universally busier than others, then multiple universes should be built to measure future capture.
Why use this monitoring technique? In terms of transaction detail, radiology receivables dwarf the next-largest type of receivables represented in a hospital system. There is a great deal of work necessary to merge demographic and clinical information to create a patient account. Mistakes can happen, especially with systems that still use a paper version of the interpretation and employ manual checking of these documents against a department log. This statistical technique is a relatively quick way to identify potential dates of service where there might be a capture problem that needs more attention.
If a practice feels that one SD is too rigorous a benchmark, resulting in false alarms, then it should move out further from the mean. I would suggest that two SDs is excessive. Tables 4 and 5 show, especially using the peer month, that there were specific dates that should have set off alarms because some volumes were outside two SDs, and others were well past one SD.
The use of statistics to monitor charge capture will eventually be supplanted by a technique already in the marketplace. It takes advanced IT capability to build the monitoring system, but it will enable a practice to gain real-time confirmation that all exams have been accounted for, whether for a day or for a month. It requires the capture of the accession number that is assigned in the RIS for all exam orders.
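The accession-number reconciliation described above amounts to a set difference: every accession number assigned in the RIS that never appears in the receivables system is an uncaptured exam. A minimal sketch, assuming exports of accession numbers from both systems (all identifiers here are hypothetical):

```python
# Accession-number reconciliation sketch: compare RIS orders against
# posted charges. All accession numbers are hypothetical examples.
ris_accessions = {"A1001", "A1002", "A1003", "A1004"}    # every exam ordered
billed_accessions = {"A1001", "A1003", "A1004"}          # every exam posted

# Any accession ordered in the RIS but never posted is a capture gap.
missing = sorted(ris_accessions - billed_accessions)
print(missing)
```

Unlike the statistical screen, which only points to suspicious dates, this match identifies the specific missing exams, which is why it can confirm capture for a day or a month in something close to real time.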
One might assume that production of averages by day of the week provides a basis for estimating future volumes. Table 6 provides the evidence.
|Table 6: Forecast of 2009, using 2008 averages|
|VOLUMES PER MONTH|
| ||JAN||FEB||MAR||APR||MAY||JUN||JUL|
|Forecast: 2009 avgs.||22,555||20,816||23,492||23,062||22,517||23,097||23,494|
|Variance %: 2008 avgs.||-3.00%||-5.00%||-3.00%||-5.00%||-4.00%||-4.00%||1.00%|
|Variance %: 2009 avgs.||0.00%||-2.00%||0.00%||-2.00%||-1.00%||-1.00%||5.00%|
This assumption can be tested by using the 2008 averages to predict the 2009 monthly volumes, and then measuring these predictions against actual volumes. The predictions cover seven months; August was omitted because the extraction was performed at the close of the August billing cycle, and significant August volume was not yet posted. Every monthly prediction falls short of the actual volume. This suggests growth in the averages by day of the week.
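The mechanics of the forecast are simple: count how many times each day of the week occurs in the target month and multiply by the prior year's average for that weekday. A sketch, using hypothetical weekday averages rather than the practice's actual 2008 figures:

```python
# Forecast a month's volume from day-of-week averages. The averages
# below are hypothetical placeholders, not the practice's 2008 data.
import calendar

# Hypothetical 2008 averages by weekday (0 = Monday ... 6 = Sunday);
# weekend averages are lower, reflecting emergency-department volume.
avg_2008 = {0: 840, 1: 825, 2: 830, 3: 820, 4: 810, 5: 280, 6: 260}

def forecast_month(year, month, weekday_avgs):
    """Sum the weekday average over every calendar day in the month."""
    total = 0.0
    for week in calendar.monthcalendar(year, month):
        for wd, day in enumerate(week):
            if day:  # zero marks days belonging to adjacent months
                total += weekday_avgs[wd]
    return round(total)

print(forecast_month(2009, 1, avg_2008))
```

Because the weekday mix shifts from month to month (January 2009, for example, contains five Thursdays and Fridays but only four Mondays), this calendar-aware sum is more accurate than multiplying a single daily average by the number of days in the month.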
|Table 7: Comparison of 2008 with 2009 averages.|
|WEEKDAY||2008||2009||Jan-09||09 vs. 08|
Table 7 shows a definite increase for every day of the week. The increases are generally linked to the variance between the predicted and actual volumes. The 2009 averages were also used to map the result against the actual volume. There is a close fit, except for July. Therefore, production of the averages per day, combined with an educated guess concerning next year’s growth (or decline), provides a useful way to begin preparing a budget.
I must emphasize that the forecast of volumes by month of service is the first step. There can be a delay of up to 10 days between the date of service and the date of posting for any date, with additional delays caused by unexpected complications. The prediction model then has to simulate what is actually posted to the receivables system each month by distributing the exams over two or three billing cycles.
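The second step — simulating what actually posts each month — can be sketched by spreading each month of service across billing cycles with assumed posting percentages. The 70/25/5 split below is a hypothetical lag pattern chosen for illustration, not a figure from the article.

```python
# Distribute each month's service volume across billing cycles.
# The 70/25/5 lag split is a hypothetical assumption.
LAG_SPLIT = (0.70, 0.25, 0.05)  # same month, next month, month after

def simulate_postings(service_volumes):
    """service_volumes: exam counts by month of service.
    Returns the volume expected to post to receivables each month."""
    posted = [0.0] * (len(service_volumes) + len(LAG_SPLIT) - 1)
    for i, volume in enumerate(service_volumes):
        for lag, share in enumerate(LAG_SPLIT):
            posted[i + lag] += volume * share
    return [round(p) for p in posted]

print(simulate_postings([22555, 20816, 23492]))
```

Note that the first simulated month understates postings (there is no prior-month tail flowing in), and the final lag months capture the runoff after service stops; in practice the split itself should be estimated from the practice's own historical posting delays.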
Over time, there are definite patterns of exam volumes, by day of the week, that generally stay consistent for very busy practices. Today’s receivables systems can rapidly compile exam statistics by date of service to enable a practice to establish expected trends; these can then be used as benchmarks to test exam capture and predict future revenue streams.
Be careful not to run tests too early; allow sufficient time for the receivables system to post the exams. If the test keeps flagging dates but no missing exams are found (because there are none), then loosen it by moving beyond one SD. We are only concerned with volumes below the mean, not above it. Falling more than one SD below the mean for a given day of the week has a 16% probability of happening. This, in my view, is sufficient to warrant specific examination of all documentation for that date of service. If you disagree, then use a less aggressive test.
Practices in areas with known seasonal referral patterns can account for this by building peer-to-peer monthly comparisons. The z score, however, already accounts for variability through the SD, so peer-to-peer timeframes may add only marginal value.
James A. Kieffer, MBA is president of Proforma Financial Group Inc, Nashua, New Hampshire; James.Kieffer@comcast.net.