HUMAN HEALTH RISK ASSESSMENT AND ENVIRONMENTAL LAWYERS

Posted on October 29, 2012 by Angus Macbeth

The aim of this post is to encourage environmental lawyers to pay more attention to issues and developments in human health risk assessment.

Remedial cleanups under Superfund and RCRA are very largely driven by human health risk assessments carried out under EPA's Integrated Risk Information System (IRIS) as applied to chemicals on the site. Health-protective regulations under the Clean Air Act are likewise typically the product of statutorily mandated human health risk assessments. Mass tort cases seeking medical monitoring and personal injury damages are often based on such assessments. Just as the costs of cleanup and CAA compliance are driven by these assessments, so too are numerous corporate decisions about which chemicals to use in manufacturing and commercial activity.

Despite its centrality to so many important activities, IRIS is cordoned off from most of the legal system. It is not rooted in or governed by any statute. Its results are not reviewable except in the context of their application to a particular site – and if that site is governed by Superfund, review, as a practical matter, is available only at the end of the remedial process. Perhaps because of this structure and because human health risk assessments are an intensely scientific undertaking, the presence of lawyers is very little felt.

Nonetheless, environmental lawyers should be aware of some ongoing efforts aimed at examining and reforming IRIS and similar systems.

First, the Administrative Conference of the United States commissioned Prof. Wendy Wagner of the University of Texas School of Law to undertake a study entitled "Science in the Administrative Process: A Study of Agency Decisionmaking Approaches." In 80 pages, Prof. Wagner details how EPA (including IRIS), the Fish and Wildlife Service (in endangered and threatened species listings) and the Nuclear Regulatory Commission use science in regulatory decision-making. These useful guides are followed by almost 40 pages of recommendations and suggested best practices on issues such as the role of OMB in reviewing proposed agency actions with a major scientific component and the right of staff scientists to dissent from agency actions. Not surprisingly, given Prof. Wagner's professional background, most of the topics on which she focuses are readily accessible to lawyers.

On September 10, 2012, the Administrative Conference held a workshop, open to the public, on many of Prof. Wagner's ideas and proposals. It did not appear to me that very many environmental lawyers were on the stage or in the audience, despite the fact that the issues and reforms discussed were central to their professional lives.

Second, in 2009, the National Academies published "Science and Decisions: Advancing Risk Assessment." The volume focuses on EPA and IRIS. It is a thorough review of the issues and challenges of risk assessment from scientists who are, from time to time, called on to review EPA's handiwork. Although some of the advice is merely editorial – be succinct and to the point; one chart or figure can be worth a thousand words – the authors address many of the major scientific issues in risk assessment: for example, the selection of default values for extrapolating from the known sensitivity of a lab animal to a chemical to the probable sensitivity of humans, which has to be "calculated," or how to treat cumulative risks where there is exposure to two or more chemicals. A simplified sketch of that default-value arithmetic follows.
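To make the default-value point concrete, here is a minimal sketch of the standard form of the calculation, assuming the conventional tenfold uncertainty factors; the NOAEL below is hypothetical, and actual IRIS assessments may apply additional factors:

    # Simplified sketch of the default-factor arithmetic: deriving a reference
    # dose (RfD) from an animal no-observed-adverse-effect level (NOAEL) using
    # conventional tenfold uncertainty factors. All values are hypothetical.

    noael_animal = 5.0     # mg/kg-day, from a hypothetical rodent study
    uf_interspecies = 10   # default: humans may be more sensitive than animals
    uf_intraspecies = 10   # default: variability among humans

    rfd = noael_animal / (uf_interspecies * uf_intraspecies)
    print(f"RfD = {rfd:.2f} mg/kg-day")  # 0.05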

EPA is now working on implementing many of the suggestions set out in "Science and Decisions." In September 2012, the comment period closed on the draft of EPA's "Framework for Human Health Risk Assessment to Inform Decision Making." This document responds in large part to "Science and Decisions," addressing "the recommendation that EPA formalize and implement planning, scoping, and problem formulation in the risk assessment process and that the agency adopt a framework for risk-based decision making." EPA is not done absorbing "Science and Decisions," and the National Research Council is not done with EPA: the Council will continue to review how EPA implements IRIS, with an emphasis on EPA's weight-of-evidence analyses and recommended approaches for weighing scientific evidence for chemical hazard and dose-response assessments. See Review of the IRIS Process, National Academies Current Projects.

The ongoing initiatives will provide the structure and the process for human health risk assessments in the future. The work of environmental lawyers will be shaped by what the scientists decide. Environmental lawyers should be engaged in these debates and arguments now.

REFORMING EPA’S HUMAN HEALTH RISK ASSESSMENTS

Posted on March 14, 2012 by Angus Macbeth

Risk assessments carried out under EPA's IRIS program have been the subject of critical notice in recent months. The human health risk assessments that EPA performs across a range of programs merit attention, given their broad practical impacts; for instance, they form the basis for Superfund cleanups and RCRA corrective actions. But because they constitute guidance, they are not subject to judicial review at the time they are published and have received little scrutiny from lawyers. Here are four aspects of how EPA typically conducts human health risk assessments that deserve attention and reform:

1. Publication Bias. In conducting a human health risk assessment, EPA starts by conducting a literature search and assembling the scientific papers that report a chemical's effects, or lack of effects, on humans and relevant animal species. This appears to be a fair way to review the scientific understanding of the chemical's possible effects, but it fails to take account of publication bias. This well-known phenomenon favors publication of studies finding "positive" results – an association between the chemical and a biological effect – over those that do not. In risk assessments, the determination of a dose below which there is no observable effect is very important, and reviewing only the published literature can be highly misleading on that central issue. See, e.g., Sena et al., "Publication Bias in Reports of Animal Stroke Studies Leads to Major Overstatement of Efficacy," PLoS Biol 8(3): e1000344 (2010) ("published results of interventions in animal models of stroke overstate their efficacy by around one third."). The simulation sketched below illustrates the mechanism. EPA needs to capture the results of research showing, at given doses, that a chemical has no effect on human or animal biological systems. A start in that direction would be to require researchers who receive government support to report such results.
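To see how a filter on statistical significance inflates reported effects, here is a minimal simulation; the true effect size, standard error, and significance rule are all illustrative assumptions:

    import random
    import statistics

    # Simulate many small studies of the same true effect, "publish" only the
    # statistically significant ones, and compare the published average with
    # the truth. All parameters are illustrative assumptions.

    random.seed(1)
    TRUE_EFFECT = 0.3   # true standardized mean difference
    SE = 0.32           # standard error of each study's estimate

    all_estimates, published = [], []
    for _ in range(10_000):
        estimate = random.gauss(TRUE_EFFECT, SE)
        all_estimates.append(estimate)
        if abs(estimate) / SE > 1.96:   # "significant" at the 5% level
            published.append(estimate)

    print(f"true effect:            {TRUE_EFFECT:.2f}")
    print(f"mean of all studies:    {statistics.mean(all_estimates):.2f}")
    print(f"mean of published only: {statistics.mean(published):.2f}")  # inflated

The published-only average comes out well above the true effect, for exactly the reason the Sena paper documents: the studies that happened to draw small or null estimates never enter the record.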

2. Multiple Comparisons. A researcher on, say, the neurodevelopmental effects of a chemical on children or rats can have the treated subjects perform 20 different tests; suppose that, at a 95% confidence level, the researcher finds one association, which is written up and published without any report of the other tests that showed nothing. Having made 20 comparisons at the 95% confidence level, the researcher is more likely than not to turn up at least one spurious association – the result of random chance (the arithmetic is sketched below). But if one does not know how many tests or comparisons were made, there is no basis for making a fair judgment as to the strength or value to give to the reported positive result. No requirement in law or custom directs researchers to report the number of comparisons they made, and publication bias discourages the ambitious academic from reporting a large number of comparisons, which would lead sober analysts to put less weight on the positive results reported. EPA needs to know how many comparisons a researcher made and what the results were. This could be achieved in large measure by requiring that government-supported researchers report such data; in addition, EPA could simply ask researchers to provide this information before relying on the published results in a weight-of-the-evidence review.
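The arithmetic behind that point, assuming the 20 tests are independent:

    # Probability of at least one false positive across 20 independent tests,
    # each run at the 5% significance level.
    alpha, n_tests = 0.05, 20

    p_any_false_positive = 1 - (1 - alpha) ** n_tests
    print(f"P(at least one spurious association) = {p_any_false_positive:.2f}")  # ~0.64

    # A standard correction (Bonferroni) tests each comparison at alpha/n instead:
    print(f"Bonferroni per-test threshold: {alpha / n_tests:.4f}")  # 0.0025

Under those assumptions there is roughly a 64% chance that a study running 20 tests reports at least one "finding" that is pure noise.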

3. Meta-analysis. In a weight-of-the-evidence review, replication of results carries great weight in persuading the reviewer that the results are sound; conversely, failure to replicate detracts markedly from the weight a study will be given. Being able to tell whether results have been replicated depends on the studies having used common metrics – e.g., administering the same dose under the same conditions at the same age. This is very rarely done, which erects barriers to an accurate determination of the weight that experimental results should be given. See, e.g., Goodman et al., "Using Systematic Reviews and Meta-Analyses to Support Regulatory Decision Making for Neurotoxicants: Lessons Learned from a Case Study of PCBs," 118 Env. Health Perspectives 728 (2010). Again, the federal agencies that support research financially should require that experiments be conducted and reported with sufficient common metrics to allow effective meta-analysis, as in the pooling sketched below. Of course, this would not preclude measuring and reporting whatever else the authors chose.
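For readers unfamiliar with the mechanics, here is a minimal fixed-effect (inverse-variance) meta-analysis – the kind of pooling that is possible only when studies report a common metric. The effect estimates and standard errors are made up for illustration:

    import math

    # Each study reports (effect estimate, standard error) on a shared scale –
    # the "common metric" argued for above. Values are illustrative.
    studies = [
        (0.40, 0.15),
        (0.25, 0.20),
        (0.35, 0.10),
    ]

    # Inverse-variance weighting: more precise studies count for more.
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"pooled effect: {pooled:.3f}")
    print(f"95% CI: ({pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f})")

If each study instead reported a different endpoint at a different dose, there would be nothing coherent to pool – precisely the barrier the Goodman paper describes.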

4. Review of data relied on in critical studies. EPA typically relies on one or a few "critical studies" in performing its analysis and reaching conclusions as to the risks to human health presented by a chemical. EPA carefully reviews the printed reports found in peer-reviewed journals, but it very rarely asks to see the underlying data. To a lawyer, this seems perverse – a bias against examining the actual data said to support the Agency's conclusion. Even without any falsification, there are a number of ways to present data that affect its ultimate implications; statistical treatment is the most obvious example, as the sketch below illustrates. Human health risk assessments are of major importance to public health and frequently result in many millions of dollars of expenditure by companies guarding against the risks that EPA identifies. It is clearly important to make these judgments as accurate as possible. In these circumstances, at least for the critical studies, the Agency should routinely ask that the data underlying the printed article be produced, examine those data, and rely on the reported results only where the data fully support them.
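As one illustration of how statistical treatment can drive the headline, consider two equally "honest" summaries of the same made-up treated-group data:

    import statistics

    # Fabricated data for illustration only: one extreme animal drives the mean.
    control = [1.0, 1.1, 0.9, 1.0, 1.0]
    treated = [0.9, 1.0, 1.1, 0.8, 3.5]

    print(f"mean:   control {statistics.mean(control):.2f}, "
          f"treated {statistics.mean(treated):.2f}")    # reads as a 46% increase
    print(f"median: control {statistics.median(control):.2f}, "
          f"treated {statistics.median(treated):.2f}")  # reads as no change

A paper reporting only the mean tells one story; one reporting only the median tells another. Only the underlying data reveal that a single animal is doing all the work.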

Dealing with these four issues should contribute significantly to producing human health risk assessments that would command the respect of the knowledgeable public.

IRIS NEEDS A MAKEOVER

Posted on March 2, 2012 by Michael Hardy

Attorneys, environmental professionals and regulators understand the importance of the Integrated Risk Information System, known as IRIS. In rule-making, permitting, or remediation, IRIS provides the EPA's assessment of the health effects that may result from exposure to chemicals in the environment. Whether one is trying to determine a hazard index, reference dose, cancer slope factor, or other critical toxicological endpoint, the IRIS assessment of a specific chemical is an important first step; a sketch of how those values feed a typical risk calculation follows. Currently, the EPA has completed risk assessments of approximately 550 chemicals in IRIS, and reports that another 55 are in progress.
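For readers who have not worked with these numbers, here is a minimal sketch of how IRIS values typically enter a screening-level risk calculation; the intake, reference dose, and slope factor below are hypothetical, not actual IRIS values for any chemical:

    # Screening-level use of IRIS toxicity values. All numbers are hypothetical.
    intake = 0.002   # chronic daily intake, mg/kg-day (from an exposure model)
    rfd = 0.005      # reference dose, mg/kg-day (a hypothetical IRIS value)
    csf = 0.1        # cancer slope factor, (mg/kg-day)^-1 (hypothetical)

    hazard_quotient = intake / rfd   # non-cancer: values above 1 flag concern
    cancer_risk = intake * csf       # excess lifetime cancer risk

    print(f"hazard quotient:             {hazard_quotient:.2f}")  # 0.40
    print(f"excess lifetime cancer risk: {cancer_risk:.0e}")      # 2e-04

Summing hazard quotients across the chemicals at a site yields the hazard index, and cancer risks are commonly compared against the familiar one-in-a-million to one-in-ten-thousand range.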

But there have been numerous, long-standing and wide-ranging criticisms of the IRIS process. For example, the National Academy of Sciences criticized the EPA's IRIS assessment for formaldehyde because it failed to explain its criteria for identifying epidemiologic and experimental evidence, assessing the weight of the evidence, and characterizing uncertainty and variability. The NAS noted that these criticisms applied with equal force to other IRIS chemical assessments as well.

More recently, in December 2011, the U.S. Government Accountability Office issued a report to a House subcommittee crediting EPA with making some improvements in the process since the GAO's earlier criticisms in 2008, but noting that recurring and new issues remain. The GAO had previously warned that the IRIS database faced a serious risk of becoming obsolete because EPA could not keep up with the pace of needed assessments. Even now, the GAO reported, IRIS continues to suffer from problems with timeliness and productivity and "issues of clarity and transparency." The GAO called on EPA to develop a better system to apprise stakeholders of the status of IRIS assessments; as an example, it suggested a minimum of two years' notice of intent to assess a specific chemical, coupled with annual Federal Register reports on the status of ongoing and proposed assessments.

To improve the credibility of the risk assessments, the GAO recommended that the agency heed the recommendations of the National Academies, which have proposed improvements such as standardized approaches to evaluating and describing study strengths and weaknesses and the weight of the evidence. Additionally, to restore scientific and technical credibility, the National Academies suggested that the agency involve independent expertise, such as the EPA's Board of Scientific Counselors.

The GAO reports that EPA has been receptive to its constructive criticisms and suggestions. But the GAO and the trade press observe that it remains unclear how the EPA will actually implement the various suggestions it has received from the GAO and the regulated community.