Artificial intelligence is the new challenging frontier in the administration of workers’ compensation benefits. While cost savings will be achieved through deep learning and machine intervention, serious ethical concerns are also coming to the forefront as this new technology evolves.
Deep machine learning is complicated, and it is an ambitious goal. The result may afford a good prediction, but in many instances it lacks an explanation of "why."
The potential to decrease costs in both the administration and payment of workers’ compensation benefits has been alluded to in software manufacturers’ advertisements. Vendors are already claiming reductions, including a 5% reduction in claims cost, a 50% decrease in the cost of medical-only claims, and a 25%-60% reduction in attorney involvement.
AI programs transfer the decision-making role from a human being onto a computer algorithm. In other words, the plight of the injured worker is not left to the decision-making capacity of an adjuster, but rather to a computer utilizing algorithms.
These algorithms can embed biases based on stereotypes, whether racial, demographic, genetic, gender, economic, and/or religious. AI can be utilized to admit or deny claims, restrict temporary disability benefits, and direct medical care.
The erosion of privacy in the vast amount of data flowing into machine learning programs continues unabated. Some of the data is distributed without consent and without transparency. How much of this data is used by AI programs remains unknown, and the process is unexplainable.
Computer-based learning systems have available vast amounts of data, from unknown sources, that form a gold mine of information. Insurance carriers and employers can use this data to reduce claims costs and ultimate payouts. The data grows daily from multiple sources, both within the workers’ compensation community and from collateral information resources.
The deployment of artificial intelligence programs that involve deep machine learning raises significant questions as to the explainability of the decision-making process. Explainable artificial intelligence (XAI), which seeks to make the algorithms employed in the decision-making process intelligible, remains problematic. An overriding question is who is responsible for the potential harm, since an individual cannot sue a computer.
The ethical dilemma is that algorithms are difficult to regulate. The federal government has taken the lead on this challenging issue. The Defense Advanced Research Projects Agency (DARPA) assesses how the components of AI can be explained and applied in a responsible manner. The components include: how rich, complex, and subtle information is perceived; how the machine learns the information within an environment; how the information is abstracted to create new meanings; and how artificial intelligence can reason to plan and decide.
The integrity of workers’ compensation is being challenged by AI systems that lack explainability. The goal of employers and/or insurance companies in utilizing AI for claims-cost reduction is noble. But the playing field must remain balanced, and the right to due process in workers’ compensation programs must be preserved. Oversight through governance, policy, and rules concerning XAI should be employed to maintain the integrity of workers’ compensation programs.
Claimants' attorney Jon L. Gelman is the author of "New Jersey Workers’ Compensation Law" and co-author of the national treatise "Modern Workers’ Compensation Law." He is based in Wayne, New Jersey. This blog post is republished with permission.