Using Adjustment Coding to Manage Practice Compliance

The October 2008 ImagingBiz.com article "Keep Payors Honest With the Practice Receivable System" concentrated on advanced techniques for monitoring the insurance companies that compensate radiologists for their clinical services. This article moves the telescope a little farther away to illustrate quick ways to check how the revenue system is functioning and to monitor how compliant the practice is within its rules-based environment. The key to this process is categorizing the noncash credits that indicate avoidable losses. After establishing codes for these credits, it is important to build a report that visualizes patterns of loss, helping a practice understand both the economic implications and their timing.

A practice has been chosen that functions in a very complex clinical environment with a relatively difficult payor mix (one with a great many compliance requirements). It is not the purpose of this article to describe the noncash credit categories in great detail; they have been purposely disguised, as the billing company considers this proprietary information. They are all influenced either by payor rules or by the clinical/demographic environment in which the practice operates. Table 1 is an example of how extensive adjustment coding can be in a complex environment, and it implies the challenges facing medical practices in securing payment for clinical services. The code designations have been retained; their descriptions are generic. The codes are organized in groups to facilitate their use in a readable trend report. The grouping is a key administrative strategy because every practice will differ in the priorities placed on transactions that cost it income.
[Image: facsimile of report (Table 1)]
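Because the actual code designations are proprietary, the grouping strategy can be sketched with a hypothetical mapping from adjustment codes to report groups. The code values below ("R01," "F01," and so on) are illustrative stand-ins, not the practice's real codes; only the group names come from the article.

```python
# Hypothetical mapping of noncash-credit adjustment codes to report groups.
# Real code designations are proprietary; these values are illustrative only.
ADJUSTMENT_GROUPS = {
    "Refunds": ["R01", "R02"],
    "Contract adjustment": ["C01"],
    "File limit": ["F01", "F02"],
    "Credentialing": ["D01"],
    "Compliance": ["P01", "P02", "P03", "P04"],
    "Medical-assistance write-offs": ["M01", "M02"],
    "Courtesy": ["Y01"],
    "Bad debt": ["B01", "B02", "B03"],
}

# Invert to a code -> group lookup for classifying posted transactions.
CODE_TO_GROUP = {
    code: group
    for group, codes in ADJUSTMENT_GROUPS.items()
    for code in codes
}

def group_credits(transactions):
    """Sum credit amounts by report group from (code, amount) pairs."""
    totals = {}
    for code, amount in transactions:
        group = CODE_TO_GROUP.get(code, "Unclassified")
        totals[group] = totals.get(group, 0.0) + amount
    return totals
```

The inverted lookup is what makes the trend report readable: however many individual codes a complex environment accumulates, each posted credit rolls up to one of a handful of groups chosen to match the practice's own loss priorities.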
The listing has been both color coded and numbered to indicate the grouping of the 60 codes.

Refunds: These are debit entries (cash being the credit), aggregated against gross receipts in the report.

Contract adjustment: This is generally the largest noncash credit. It is the difference between the practice fee and the payor allowance. There is nothing a practice can do about this credit, except at contract-negotiation time.

File limit: This credit depicts an avoidable loss and is particularly significant for this practice, due both to the number of large payors with short claim-submission windows and to a chronic problem within the practice in amending flawed dictations that are sent back by the billing company for addenda.

Credentialing: This is a single-code item quite significant to this large practice. Newly hired radiologists must be approved by the payors, and the paperwork is extensive. While most billing companies offer to handle the procedural issues, they still require the practice and radiologist to supply critical licensure information far enough in advance of the radiologist's start date to ensure that payors will accept claims. This has been a chronic and costly problem for this group.

Compliance: This grouping has the largest number of individual codes. Some of the losses are avoidable, especially where they concern justifying why an exam was performed. Practices are at the mercy of the referring physicians and hospitals, both of which may fail to comply with the rules for precertification. If a practice performs a large number of cases in its own freestanding centers, it can control the process by requiring the referring physicians to document properly why their patients need the exams.

Medical-assistance write-offs: The size of the state medical-assistance population using this hospital system necessitates isolating credits that might otherwise fall in the compliance grouping.
There is also a file-limit code that, in this case, I have elected to keep in the file-limit grouping. This is a particularly difficult payor to manage because neither the state programs nor the patients readily respond to inquiries for necessary information.

Courtesy: It is important to segregate free care because radiology practices do not have access to the same types of state pools that hospitals have. If the amount continues to climb, it could be useful to track these credits as a tool for negotiation with the hospital. The next report will show the level of credits for a single division of this large practice.

Bad debt: These codes identify losses on self-pay balances. The largest will be accounts that are transferred to a separate collection agency unrelated to the billing company. Other accounts fall below the agency's minimums and are written off directly. There are also system-driven codes for cases where a small balance remains on the books but is too small to pursue cost effectively; the system is programmed to credit these accounts after one or two statements.

Accounts-receivable trends can be organized in numerous ways. The two illustrated here are organized by month of service and by month of posting. These particular models cover all services within this practice division since January 1, 2006, showing information posted to the receivable system through November 30, 2008. The month-of-service model is shown in Table 2.
[Image: facsimile of report (Table 2)]
The information on each line is interrelated. All the data on the first line, for example, pertain to exams performed in January 2006. These 21,746 exams generated 19,784 work RVUs. The gross charges (at the practice fee) were $2,707,820, and cash attributed to these charges (net of refunds) reached $757,609 by the close of the November 2008 billing cycle. The contract adjustments help approximate the average payor fee: $757,609 / ($757,609 + $1,665,110) = 31.27% (column N). Why? Because a contract adjustment is generally not posted unless there is a cash payment; segregating the contract adjustments therefore yields a reliable benchmark for overall payor fees.

The credits in columns G through K are predominantly total losses. Some of the bad-debt credits are attributed to the self-pay balances remaining after payor payments, but most are from accounts that are strictly self-pay (no health insurance). Column G is the loss from missing the deadline for filing the payor claim. Based upon the average payor fee schedule for the month, the cash loss is, at most, $27,235 × 0.3127 = $8,500 (the loss for all of 2006: $408,819 × 0.314 = $128,000). The column-H credentialing loss could reflect an ongoing administrative problem, where some of the radiologists are not credentialed by a small-market payor. Columns I and J are similar types of losses, where J is specific to the state's medical-assistance program. Some of the losses, perhaps 25% to 35%, are avoidable because they stem from preauthorization failures or from failure to offer enough evidence of clinical justification. The remaining losses are judged to be cases that never should have been performed, or for which there was no funding in the patient's contract. The column-K and column-L credits exist for all hospital-based practices; there is no standard for what the write-offs should be. Nationally, collections on self-pay accounts average 5% to 8%, meaning that the bad debt on self-pay accounts is 92% to 95%.
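The payor-fee approximation and the loss estimate above can be sketched as two small helpers, checked against the January 2006 figures quoted from Table 2 (the function names are mine, not the report's):

```python
def payor_fee_ratio(cash, contract_adjustment):
    """Approximate the average payor fee as a fraction of the practice fee.
    Works because a contract adjustment is generally posted only alongside
    a cash payment, so cash / (cash + adjustment) mirrors the fee schedule."""
    return cash / (cash + contract_adjustment)

def estimated_cash_loss(credit_amount, fee_ratio):
    """Upper bound on cash lost from a noncash credit, priced at the
    average payor fee rather than the (higher) practice fee."""
    return credit_amount * fee_ratio

# January 2006 figures from Table 2
ratio = payor_fee_ratio(757_609, 1_665_110)   # ≈ 0.3127 (column N)
loss = estimated_cash_loss(27_235, ratio)     # ≈ $8,500 file-limit loss
```

Pricing the lost charges at the payor-fee ratio, rather than at the gross charge, is what keeps the estimate honest: a $27,235 file-limit credit never represented $27,235 of collectible cash.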
There is also a correlation with the volume of cases performed for emergency-department patients. Many areas of the United States have experienced large increases in the use of emergency departments for primary health care, triggering new levels of exam volume not covered by any insurance plan. For all practical purposes, the courtesy and bad-debt credits are the same; the only real difference is that courtesy write-offs occur almost immediately, while bad-debt credits are posted after some attempts have been made to obtain payment.

The column-M balance is the difference between the column-D charges and the sum of all the credits (columns E through L). The benchmark ratios are facilitated by tracking receivables by month of service, something not possible using a receivable-reconciliation model based upon posted transactions per month. Column N has already been described as the approximation of the payor fee schedules. Column O is the resolution rate, derived strictly from the cash and noncash credits: column E / sum(columns E through L). Column P is the conventional gross ratio: column E / column D. When the receivable balance for a given month-of-service universe reaches zero, the ratios in columns O and P will be identical. Column O is a valuable benchmark in the early stages of resolving a month-of-service universe because it is a predictor of the final gross ratio.

Columns Q and R portray the final accounting, in the forms of cash per exam and cash per work RVU. They are easy measures to produce, and they quickly show whether the system is maintaining constant exam income. The RVU data are useful because the average per exam can change, based upon exam mix. Therefore, tracking RVUs per exam (column S) alongside income per exam and per RVU answers many questions. There clearly was erosion in 2007: the RVU per exam was identical to 2006, yet income per exam and per RVU declined slightly.
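The month-of-service metrics just described reduce to three one-line formulas. A minimal sketch, using the column letters from Table 2 (the numeric test values below are hypothetical, except the gross ratio, which is checked against the January 2006 line):

```python
def month_of_service_balance(charges, cash, noncash_credits):
    """Column M: charges (column D) minus all posted credits (columns E-L)."""
    return charges - cash - sum(noncash_credits)

def resolution_rate(cash, noncash_credits):
    """Column O: cash as a share of all resolved (cash + noncash) credits.
    An early predictor of the final gross ratio."""
    return cash / (cash + sum(noncash_credits))

def gross_ratio(cash, charges):
    """Column P: the conventional gross collection ratio, column E / column D."""
    return cash / charges
```

When the column-M balance reaches zero, every charge has been resolved as either cash or a noncash credit, so the denominators of columns O and P coincide and the two ratios converge, exactly as the text notes.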
The comparison is valid because most of the 2007 cycles have been liquidated; certainly, 2006 is complete. There is a clear change in the exam mix in 2008 that will drop the income per exam, even if the payment ratios match those of 2007. The early months of 2008 show ratios similar to those of 2007.

This model is also an aged receivable report, which helps management in various ways. For example, the June 2006 month of service seems to have an unusually high balance, compared with the other 2006 months of service. Note the benchmarks: the column-N payor ratio is low, implying a relatively poorer payor mix for that month's population, yet the achieved cash per exam and per RVU are average. While it might seem to be a problem that there are even balances remaining in the 2006 months of service, it is not uncommon to have litigation cases (auto/industrial accidents) that drag on for years. Some patients with very large balances are also on installment plans for which payoffs can take this long. The higher balances of 2008 are due to those accounts having been worked for fewer periods; the majority of income attributed to a month of service is collected in the first four months, and collections then continue, at a decreasing rate, for the next 12 to 18 months.

The values in the November 2008 line also reveal another aspect of system dynamics. At any point in time, there are 10 to 15 days' worth of accounts for which the billing company is coding the transactions and validating payor coverage. The November 2008 billing cycle therefore reflects only about 60% of the actual exams performed in that month; the remainder will be posted in the December 2008 and January 2009 billing cycles. The last line of the table shows the sum of the receivable balance, as posted through the end of November 2008. This, plus the nonposted exams, makes up the total practice accounts receivable.
Table 3 is another way of portraying the collection results attributed to dates of service beginning January 1, 2006.
[Image: facsimile of report (Table 3)]
Note the receivable balance at the close of the November 2008 billing cycle; it is identical to the sum of the month-of-service balances in Table 2. This is a standard production report that illustrates the debits and credits posted to the receivable system within each monthly billing cycle. The information in the line for January 2006 is the flip side of that described for the November 2008 line of Table 2: the posted exams, RVUs, and charges are only 60% of those for exams with a January 2006 month of service. The values on this first line are not all the transactions posted within this cycle, nor, for that matter, are the 2006 lines all the transactions posted that year. The benefit of a database is the ability to isolate discrete populations. The billing company's history with this practice predates 2006, so my construction of this database involved an extraction query that limited the transaction detail to exams with a date of service after December 31, 2005. All transaction detail pertaining to earlier periods that normally would have been posted in 2006 has been excluded.

The column-F contract adjustments are triggered by the column-E cash. The remaining noncash credit codes follow differing operational dynamics. The filing-limit credits are completely absent for six months, and then are used continuously. There may have been a buildup of accounts with payor rejections that triggered the first large group of these credits in the December 2006 billing cycle. The 2007 cycles had some months, such as May and August, when large blocks of accounts were cleaned out of the system. Jumping back to Table 2, you can see that there were significant filing-limit rejections from January 2006 forward, but they were not cleared out until 2007. The column-H credentialing credits pertaining to the new hires in mid-2007 and January 2008 were booked in June and August 2008.
The column-I and column-J compliance credits were also delayed in 2006; note the amounts in Table 2 for 2006, compared with those in Table 3. The column-K and column-L credit patterns confirm that courtesy credits occur almost immediately, while the bad-debt credits are posted after numerous statements and letters to the patient. The column-M balance expands and contracts according to the relative level of posted transactions in each billing cycle; its value is based on this formula: prior-month balance + column D - sum(columns E to L).

The derived statistics for the month-of-posting model cannot contain the same metrics as the month-of-service model because the credits posted in a month have nothing to do with that month's exam/RVU volume and charges. Column N is still similar because the contract adjustments are generally posted at the same time as cash. The column-O ratio is not as credible because the noncash credits are not as tightly linked to cash as in the month-of-service model. For example, the 2006 total ratio is higher than both the Table 2 figure and the 2007 month-of-posting calculation; this stems from the inherent delay in posting large amounts of 2006 bad-debt credits in 2007. The other useful measurement from the month-of-posting patterns is the work RVU per exam; it tracks reasonably closely with the month-of-service model. The 2008 figure is somewhat higher because 2007 dates of service were posted in the first quarter of 2008.

Comparison lines are shown in this model, measuring 2007 against 2006 and measuring the first 11 months of 2008 against the same 11 months of 2007. The 2007-versus-2006 variances are attributed to the 10 to 15 days' inventory of nonposted transactions. The 11-month comparison is a valid representation of transactions posted in the same number of cycles in 2007 and 2008. The Table 2 comparison of 2006 versus 2007 is very accurate because more than 95% of the 2007 cash had been collected by the close of the November 2008 billing cycle.
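The month-of-posting balance formula above is a simple running total. A minimal sketch, assuming each billing cycle is summarized as its column-D charges plus a list of its posted credits (columns E to L); the cycle figures in the test are hypothetical, not taken from Table 3:

```python
def rolling_balances(cycles, opening_balance=0.0):
    """Receivable balance per billing cycle, per the month-of-posting model:
    prior-month balance + charges (column D) - all credits (columns E to L).

    `cycles` is an iterable of (charges, credits_list) per billing cycle,
    in chronological order; returns the column-M balance after each cycle."""
    balances = []
    balance = opening_balance
    for charges, credits in cycles:
        balance += charges - sum(credits)
        balances.append(balance)
    return balances
```

Because each cycle's balance carries the prior cycle forward, the final entry matches the total receivable at the close of the last posted cycle, which is why the Table 3 closing balance must equal the sum of the Table 2 month-of-service balances.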
The comparison of 2008 with 2007 will not be useful until late 2009.