Are we there yet, at the precipice, that is?

Posted on April 22, 2014 by Michael Rodburg

Apart from a relatively mild editorial in the New York Times, the April 13, 2014 report of the Intergovernmental Panel on Climate Change (IPCC), warning that despite global efforts greenhouse gas emissions actually grew more quickly in the first decade of the 21st century than in each of the three previous decades, was greeted, let us say, rather tepidly. In essence, the IPCC report declared that meeting the consensus goal of limiting global warming to two degrees Celsius by mid-century would require mitigation measures on an enormous scale which, if not begun within the next decade, would become prohibitively expensive thereafter. As the New York Times put it, this is “the world’s last best chance to get a grip on a problem that . . . could spin out of control.”

Humankind’s track record for global cooperation on any scale is not good. When was the last time world peace broke out, or global poverty became a worldwide priority? The 2008 re-make of the 1951 classic film, The Day the Earth Stood Still, illustrates the problem. In the original movie, the alien civilization sent police robots to stop human aggression and nuclear weapons from spreading beyond Earth; in the re-make, the alien civilization decided that our species would have to be eliminated lest it destroy one of the rare planets in the universe capable of enormous biodiversity. In pleading with the alien for another chance, Professor Barnhardt says, “But it’s only on the brink that people find the will to change.  Only at the precipice do we evolve.” And, of course, eventually and after a pretty flashy show of power and destruction, the alien rescinds the death sentence, agreeing with the Professor that at the precipice, humans can change.

Are we there yet? At the precipice? Hard to know. As Seth Jaffe pointed out in his April 14, 2014 post, global giant ExxonMobil has recognized the reality of climate change, but doubts there is sufficient global will to do much about it. On the other hand, the American Physical Society warmed the hearts of climate change skeptics by appointing three like-minded scientists to its panel on public affairs. I tend to agree with that great fictional academic, Professor Barnhardt; it will take something that all humankind recognizes as the clear and unmistakable hallmark of the precipice before we collectively put on the brakes. In the meantime, we muddle through to the next opportunity, the 21st Conference of the Parties in Paris in December 2015, perhaps the most consequential such summit on climate change since Rio in 1992.

Arsenic and Apple Juice: Are We Now Safe?

Posted on July 30, 2013 by Michael Rodburg

Earlier this month the FDA proposed an “action level” of 10 ppb for inorganic arsenic in apple juice (down from 23 ppb), bringing it to the same level as EPA’s drinking water MCL. One may view this action as the culmination of a campaign of sorts initiated by a 2011 Consumer Reports article whose cause was taken up by Dr. Oz. Yet, the FDA has been monitoring arsenic levels for many years and has never viewed the data as any cause for concern.  Should we now believe that the FDA has made us completely safe by adopting a drinking water standard for juice?  In a practical sense, yes, but in EPA-Superfund speak, not really; and that is the point of this post.

The poisonous propensities of arsenic have been the stuff of history and literature for centuries; the Poison of Kings and the King of Poisons. Remember elderberry wine from Arsenic and Old Lace? But arsenic is, after all, not only naturally occurring but rather ubiquitous. The human race has managed to live with some level of arsenic for a few millennia now without evident consequence. Indeed, because of naturally occurring arsenic in groundwater in the western United States, the MCL is actually set “considering cost, benefits and the ability of public water systems to detect and remove contaminants using suitable treatment technologies.” If, in contrast, one turns to the gold standard of “safe,” the one-in-a-million excess cancer risk level, the required drinking water standard is 0.02 ppb; that’s right, folks, 500 times lower than the current MCL and FDA’s proposed new juice level.
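That last comparison is simple unit arithmetic. As a minimal check, using only the concentrations quoted in this post (not independently sourced values):

```python
# Arithmetic behind the arsenic comparison above.
# Figures are those quoted in the post, not authoritative values.
mcl_ppb = 10.0          # EPA drinking water MCL (and FDA's proposed juice level), ppb
risk_based_ppb = 0.02   # one-in-a-million excess cancer risk level, ppb

ratio = mcl_ppb / risk_based_ppb
print(ratio)  # 500.0 -- the MCL is 500 times the risk-based level
```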

What does it mean?  I think it points out that the ultra-conservatism of the “10 to the minus six” environmental risk standard leads to absurd results and hugely unnecessary costs.  I still recall with a smile a quite notorious Superfund site (which shall remain nameless to protect a client) that had literally dozens and dozens of polysyllabic chemicals at high levels in soils, groundwater and waste disposal units throughout several hundred acres.  In the baseline risk assessment, the only risk to exceed the 10-6 level was that from naturally occurring arsenic in the soil!

The more we know about the genetic basis and causes of cancer, the more we realize how poorly both our animal models and in vitro experiments perform in predicting cancerous effects. (See E. Topol, The Creative Destruction of Medicine (Basic Books 2012) for a good discussion of the limitations and frustrations of our current methods and models for finding cancer-fighting drugs.) While we are a long way from tossing EPA’s current approach to carcinogenic risk, we should perhaps take into account far more than we do now the inherent limits of our understanding, and give greater weight to the practical necessity for “cost, benefits and the ability” to “remove contaminants using suitable treatment technologies.” And yes, my grandchildren will continue to drink their apple juice.

Sandy’s Aftermath: A First Thought

Posted on November 26, 2012 by Michael Rodburg

Perhaps the most surprising aspect of Superstorm Sandy’s destruction of the Jersey Shore is that some people were taken by surprise.  For decades, a central focus of coastal zone management and waterfront development restrictions has been to protect the fragile and shifting barrier islands, wetlands, and estuaries along New Jersey’s 130 miles of coastline at the intersection of land and ocean.  New Jersey’s Coastal Areas Facilities Review Act and its Waterfront Development Act contain some of the toughest limitations in the nation to control growth and development and protect an environmentally sensitive ecosystem.  Over the decades, thousands and thousands of decisions have been made by legions of bureaucrats on projects big and small regarding application of land use regulations and the terms of permits and other approvals intended to preserve dunes, reduce beach erosion, prevent flooding and avoid loss of life and property as well as protect the environment.  Sandy seems to have made a mockery of the effort in the blink of an eye.

Sandy was not a black swan event—something heretofore not even contemplated and hence unforeseeable.  The USGS modelers and their European counterparts had it right almost from the beginning.  Scientists have modeled not only storm tracking itself, with better and better forecasts and therefore more warning, but even the severity and effects of storm events.  These models have predicted the height and location of storm surges and the resulting erosion and flooding with reasonable accuracy.  Plug in the real-time coordinates and other data, and the models told us that the waves would attack the dunes and erode them back into the sea; that the storm surge would carry the sand inland; and that inundation would occur once the beach and dunes had surrendered to the sea and storm.

In Sandy’s immediate aftermath, two related themes have emerged to justify rebuilding in place.  Many have advocated continuing business as usual; after all, if this was the storm of the millennium, we have a thousand years before we have to worry about a similar event occurring again.  Others have suggested that by undertaking protective measures, we humans are still capable of living anywhere we choose. We just need bigger and better sea walls, flood gates, and other barriers; let the engineers figure it all out.  Eventually, however, these views will inform a more deliberate discussion about our ability to adapt to changing climate conditions—how and where shall we choose to confront Nature, and how and where will we let her do as she is wont to do.  With billions of dollars at stake, this debate will get contentious, to be sure.  Climate change and weather volatility will not be easily accommodated.  The role of government in the process—as regulator, facilitator, first responder and insurer of last resort—will come under review.  The two-character Chinese word for “crisis” is often said to combine the characters for “danger” and “opportunity.”  The crisis that is Sandy should remind us that we should not squander the opportunity to rethink our priorities and arrive at a better way to confront this danger in the future.

Will we ever have a national energy policy?

Posted on April 18, 2012 by Michael Rodburg

USEPA continues its program of death by a thousand cuts to the coal industry, but do the agency’s actions reflect a coherent national energy policy? On March 27, 2012 the EPA issued its new source performance standards for new power plants, limiting CO2 emissions per megawatt-hour of produced electricity to roughly the level achievable by state-of-the-art, combined-cycle, gas-fired power plants. Importantly, industry observers claim that the level is far below what the best coal-fired power plants can achieve, at least without commercially unavailable and quite expensive carbon capture technology.  While certain exceptions within the rule preclude stating that EPA has banned the use of coal in new plants, it comes pretty close.  That reminds me of an often-repeated statement of an old client of mine back in the 1970s, whose recycled-solvent fuel business and the EPA just didn’t get along that well—he would remark that “if coal were discovered today, EPA would never allow it to be burned.”  He appears to have been ahead of his time.

Of course one winner in this is natural gas.  With new sources of natural gas from shale and fracking having driven natural gas prices downward relative to coal and oil, old King Coal has been facing a distinct price disadvantage for years.  EPA had further disadvantaged coal and oil as a result of last year’s cross-state air pollution rule.  Last December, EPA’s MATS rule (mercury and air toxics standards) for power plants further adversely affected coal. Is EPA’s latest effort merely the coup de grace?

Don’t get me wrong.  I’m not a coal apologist.  One need not be a fan or sworn enemy of either natural gas or coal, of free markets or environmental regulation, to realize that something is going on that is important to our national energy situation with no one particularly in charge.  After all, coal mining, transportation and existing uses drive tens of thousands of jobs and the economy of such disadvantaged states as West Virginia.  Presidents and presidential candidates have decried our lack of a national energy policy for 30 years with meager results. 

My point is otherwise: What does the overall national interest—economic, energy and environment—have to say about the relative use of coal vs. natural gas vs. petroleum vs. nuclear power?  Should EPA’s rule, based on concerns for global warming and not immediate health and safety, trump everything else?  Should we increase our reliance on natural gas at the expense of coal?  Should we be at the mercy of market forces without regard to our long term, sustainable future?  Should we simply use a bumper sticker (“Drill, baby, drill”) instead of reasoned policy? 

What passes as policy is a series of regulatory silos, each with its own raison d’être—FERC, NRC, EPA, DOE. And, of course, Congress, some of whose members can’t wait to kill alternative energy policies (solar) and decry subsidization for renewables, while rejecting as nearly immoral any attempt to eliminate out-of-date tax subsidies for oil and gas (subsidies at today’s prices?  Give me a break!). EPA’s new rule, in isolation from everything else, is merely another example of our lack of a coherent national policy on energy.  It may be a good environmental rule, but is it good for the country?

USEPA Dioxin Toxicity Reassessment Again Delayed

Posted on October 10, 2011 by Michael Rodburg

Dioxins, a class of chemicals whose most notorious denizen is 2,3,7,8-tetrachlorodibenzo-p-dioxin, a/k/a TCDD, have been of public concern since the 1970s, but their pathway to regulatory consensus has been a series of twists and turns, potholes and dead ends ever since.  Once branded the most potent animal carcinogen ever tested, TCDD's human carcinogenicity remains controversial today.  On August 29, 2011, following swiftly on the heels of a Science Advisory Board (SAB) review critical of several aspects of USEPA’s May 2010 reanalysis of key issues related to dioxin toxicity, USEPA announced that it would delay the cancer risk portion of its final Integrated Risk Information System (IRIS) assessment and move only to a final non-cancer assessment by the end of January 2012.  The USEPA reanalysis was in response to a 2006 critique by the National Academy of Sciences (NAS).

TCDD gained notoriety in the 1970s as a contaminant in Agent Orange, the defoliant of choice used during the Vietnam War between 1962 and 1971.  It is a chemical that is not commercially produced; rather it is the inadvertent by-product of numerous processes, including the manufacture of some chemicals, pulp and paper, and most combustion processes, including the burning of household waste.  Because of the ubiquity of the sources from which dioxins are produced, the public may be exposed through eating beef, dairy products, pork or fish, or by living near municipal waste incineration. 

USEPA's first risk assessment of dioxins was issued in 1984; seven years later it began a reassessment in a process that is ongoing.  USEPA's 1994 draft reassessment went through SAB review in 1995, which resulted in a revised reassessment in 2000, a second SAB review in 2000-2001, a second revised draft reassessment in 2003, a NAS review in 2006, a USEPA response to NAS' comments in 2010, and the August 26, 2011 SAB review of USEPA's response to the NAS report.  The beat goes on.

Dioxin levels in the environment, mostly in soil, sediments and biota, have been declining regularly since the early seventies as pollution control efforts have ratcheted down inadvertent production and emissions.  USEPA's reassessment mostly affects whether and to what extent a site requires clean-up.  A significantly lowered USEPA cleanup target for dioxin in soils raises the specter of reopening hundreds of sites that were remediated under current guidance to a 1 part per billion target for residential soils and a 5-20 ppb target for non-residential soils.  USEPA estimates that 104 CERCLA sites may need to be re-evaluated if it adopts a substantially lowered target.  Even without a cancer risk assessment, USEPA's announcement that it would move forward with its non-cancer risk assessment is likely to result in final guidance that sets a cleanup target for dioxin in residential soil at 72 parts per trillion, a 92.8% reduction from the current target, and a commensurate lowering for non-residential soils to 0.95 ppb. 
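The percentage quoted above follows directly from the two residential targets. As a quick, hypothetical check using only the figures in this post (recall that 1 ppb equals 1,000 ppt):

```python
# Quick check of the dioxin residential-soil figures quoted in the post.
current_ppt = 1 * 1000   # current target: 1 ppb, expressed in parts per trillion
proposed_ppt = 72        # anticipated target: 72 ppt

reduction = (current_ppt - proposed_ppt) / current_ppt
print(f"{reduction:.1%}")  # 92.8% -- the reduction cited in the post
```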

USEPA's decision to split the cancer and non-cancer assessments likely pleased no one, including USEPA Administrator Lisa Jackson, who stated in 2009 that the Agency would complete the assessment by December 2010.  Environmentalists have pushed hard on USEPA for years and are likely not pleased that the cancer analysis has again been derailed by scientific critique.  Many in industry have resisted lowered cleanup levels for years, echoing many of the criticisms of USEPA's cancer risk analysis by the NAS and SAB.  SAB's 84-page report issued on August 26, 2011 generally lauded USEPA's efforts in its May 2010 report responding to the 2006 NAS Report. 

Nonetheless, SAB provided additional recommendations "to further enhance the transparency, clarity, and scientific integrity" of the Report.  Two critical elements of TCDD assessment were singled out as deficiencies by SAB: "(1) nonlinear dose-response for TCDD carcinogenicity, and (2) uncertainty analysis of TCDD toxicity."  With everything else going on within and outside USEPA in the legislative, political and regulatory arena, it will be interesting to see if USEPA can or will meet its self-imposed deadline of end of January 2012 for the non-cancer risk assessment; surely the cancer assessment is not now likely to proceed with much haste.



Some Comments on CERCLA Contribution

Posted on April 5, 2011 by Michael Rodburg

CERCLA liability under section 107 is often characterized as strict, joint and several unmitigated by considerations of causation, fault or fairness. Contribution is different, however. Congress, in section 113(f)(1), specifically authorized the courts to allocate costs “using such equitable factors as the court determines are appropriate.” Illustrative of this fundamental difference is the fight over who shall pay what for the massive PCB cleanup of the Lower Fox River.

NCR is incurring the bulk of the costs based on discharges of PCBs incident to the manufacture of carbonless paper at a facility on the River. It sued numerous paper mills along the River based on their discharges of PCB-containing wastewater incident to the recycling of trim and waste carbonless paper. In late December 2009, Judge Griesbach of the Eastern District of Wisconsin dismissed NCR’s suit for contribution against the paper mills, based on NCR's knowledge, as manufacturer and developer of the product, of the content and risks associated with PCB-containing carbonless paper, as compared to the recycling paper mills.

Framed thus—in old-fashioned terms about knowledge of dangers and avoidance of risk—it was no contest. NCR was denied contribution because of its knowledge, learned gradually over time, about the toxic nature of PCBs, as against those who merely, and without access to NCR’s superior knowledge of the product, processed it for recycling. The Court’s analysis, it said, “is governed by traditional principles of equity, such as the relative fault of the parties, any contracts between the parties bearing on the allocation of cleanup costs, and the so called ‘Gore factors.’” The lengthy recitation of the largely undisputed facts was nothing less than a moral indictment of NCR’s actions and reactions as the knowledge about PCB toxicity and its threat to the environment came to be documented and disseminated; in short, nothing less than a fault-based conclusion.

The flip side of this case came down in February 2011. Judge Griesbach decided that the paper mills, which had incurred expenses related to various EPA and Wisconsin DNR orders and settlements, monitoring and investigation, were entitled to contribution from NCR for those portions of the River where both recycling and manufacturing PCB contamination occurred. This time around the Court was satisfied that its singular use of NCR’s “fault” as the sole determinant to deny NCR contribution in 2009 was likewise sufficient to grant the paper mills a right of contribution against NCR. In other words, fault or culpability can become the overriding factor and permit the court to eschew consideration of any other equitable factors, including Gore factors. One sees in the Court’s emphasis on charging the financial cost on those “responsible” for creating the hazardous conditions a tone and direction quite at variance with the rather automatic analysis of liability under section 107. Hence, although approximately half of the PCBs originated with the paper mills and not NCR’s manufacturing, the Court, on culpability grounds, was prepared to impose the entire cost on NCR exclusive only of amounts reimbursed to the defendants by insurance.

Product or Pollutant? You be the judge.

Posted on September 15, 2010 by Michael Rodburg

The unending war--or so it seems sometimes--between policyholders and insurers regarding coverage for "pollution" never ceases to reveal new ways of looking at the facts of American life. In the latest salvo, we find that what's good for rice farmers is bad for cotton farmers, and therefore bad for those who help rice farmers.


In Scottsdale Insurance Co. v. Universal Crop Protection Alliance LLC, (8th Cir., No. 09-1774, September 8, 2010) the Eighth Circuit decided that a pollution exclusion clause in defendant's insurance policy barred coverage for its liability to cotton farmers adversely affected by a herbicide applied to rice farmers' fields.


The underlying suit was brought by a group of Arkansas cotton farmers against Universal Crop Protection Alliance LLC ("UCPA"), a member-owned cooperative and major purchaser, formulator and distributor of agricultural chemicals. A herbicide containing 2,4-dichlorophenoxyacetic acid (2,4-D) is beneficial in rice production and routinely applied to rice fields by spraying. Unfortunately, it was alleged, that herbicide destroys or seriously damages cotton crops. In Arkansas, the two crops are often grown in close proximity. In a suit commenced in federal court in the Eastern District of Arkansas in May 2007, a group of 80 Arkansas cotton farmers alleged that UCPA and four other herbicide manufacturers had allowed the rice field herbicide to drift off-target or, as later alleged, to re-loft from the fields to which it was applied and drift onto their cotton fields, thereby causing damage to and destruction of their cotton crops. UCPA tendered the defense of the suit to its insurer, Scottsdale Insurance Co. The policy was a one-year claims-made policy that covered "physical injury to tangible property." The policy contained an exclusion from coverage for property damage that would not have occurred but for "the actual, alleged or threatened discharge, dispersal, seepage, migration, release or escape of pollutants." "Pollutants" was defined as including "any solid, liquid, gaseous or thermal . . . contaminant, including . . . chemicals." Scottsdale brought a declaratory judgment action seeking a declaration that it did not owe defense or indemnity for the underlying suit by the cotton farmers against UCPA.


In March 2009, the district court granted the insurer's motion for summary judgment finding that the pollution exclusion clause barred coverage. On appeal, decided September 8, 2010, the Eighth Circuit affirmed, finding the pollution exclusion clause broad and unambiguous in the context of the case. Under either an off-target application or the later pleaded "relofting" theory, the insurer was relieved of coverage for the claim: "Neither theory 'arguably' falls outside the scope of exclusion."


The nearly metaphysical question which turns cases such as this one way or the other is: when does a product become a pollutant? Would UCPA have been covered if a rice farmer also had cotton on the same farm? Or if the "customer" farmer claimed damage from the product to livestock that were inadvertently sprayed while grazing on the intended target field, or that ingested the herbicide while grazing nearby? Or was the fact that the product "escaped" from its intended field of application to another's property enough to make it a pollutant once it went astray? Surprisingly, the insurance coverage question arises more frequently than one might expect, especially since the inception of the so-called "total" or "absolute" pollution exclusion clause. The Eighth Circuit opinion offers little guidance and less reasoning. Adopting a mechanical reading, the Court concluded that since 2,4-D was a toxic chemical and had "migrated," it was a pollutant and coverage was not available.


A far more satisfying approach--at least from a policyholder's perspective--is represented by the New Jersey Supreme Court's decision some five years ago in Nav-Its, Inc. v. Selective Insurance Co. of America, 869 A.2d 929 (N.J. 2005). There, a contractor was hired to paint and perform floor coating and sealing work in an office building. A building tenant claimed personal injury from exposure to the fumes. The insurer argued, similarly to Scottsdale, that the pollution exclusion clause barred coverage, as the claimed injury was the result of the release of and consequent exposure to "pollutants," i.e., fumes. In holding for the insured, the New Jersey Supreme Court viewed its role as determining the underlying purpose of the exclusion, and concluded that product exposure of the type faced by the contractor was not "traditional" pollution. Painting and sealing fumes were a necessary consequence of handling the products, and the damage they caused was within the coverage for products liability and completed operations.


Without belaboring the distinctions in the facts of these two cases, the point to be made is quite simple: All tangible products are composed of chemicals; they cause damage only when they come in contact with property or persons in a manner not intended by the original purpose for which they were made or used. If any such exposure automatically renders the product a pollutant, then coverage is illusory for a broad array of circumstances that are not "traditional" pollution in any sense of the word. Conversely, if the courts are inclined to examine policies for their "purpose" and "intent" from the perspective of the insured, they are far more likely to find coverage when the resulting exposure and harm is not "traditional" pollution.

Nanotechnology - Health and Risk Management Concerns

Posted on March 9, 2010 by Michael Rodburg

In June of last year, insurance giant Zurich issued a report on the work of its Emerging Risks Group, a study begun in 2006. The report stated that the risks with the greatest potential to affect Zurich and its customers are those associated with nanotechnology.

Similarly, an alphabet soup of regulators—foreign and domestic—is wrestling with largely unknown and largely theoretical risks. The human health and environmental alarms have been sounded by numerous commentators, without, as yet, meaningful, documented empirical observation or controlled studies of human health and safety issues or environmental concerns. Regulation in a factual vacuum is potentially counterproductive and can stifle one of the 21st century’s most promising new technologies. But no one wants “another asbestos,” or to have stood by, silent in the spring, while nanobots consumed an ecosystem. This blog will skim the surface of an increasingly deeper and broader pond.

What is Nanotechnology?

Nanotechnology involves the manipulation of matter at the near-atomic, or nanometer, scale--a nanometer is one billionth of a meter; a standard sheet of paper is 100,000 nanometers thick. Materials, devices and systems with components at the nanometer scale represent fundamentally new molecular organizations, with highly different and potentially unpredictable properties and functions compared to their macro-scale cousins. The technology has found uses in a wide variety of commercial products including wound dressings, pregnancy tests, toothpastes, lubricants, paints, nonstick coatings, tennis racquets, air filters and many other products. In each of these products, the nano scale materials exhibit dramatically different characteristics than would be true of those materials at normal scale.  For example, gold is an excellent conductor of heat and electricity but simply reflects ordinary light. Properly structured gold nano particles absorb light and can actually convert light into heat (which, in turn, can be used for cutting purposes in thermal scalpels).  Nano sized particles of titanium dioxide provide UV protection while remaining transparent. Nano scale materials in thin films applied to eyeglasses, computer displays and cameras make them water repellant, anti-reflective or give them other useful physical characteristics.
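The scale comparison in the first sentence is easy to verify. A trivial sketch of the unit conversion, using the figures quoted above:

```python
# Unit-conversion check for the nanometer figures quoted above.
NM_PER_M = 1e9                 # one billion nanometers per meter
paper_thickness_nm = 100_000   # a standard sheet of paper, per the text

paper_thickness_m = paper_thickness_nm / NM_PER_M
print(paper_thickness_m)  # 0.0001 -- i.e., a tenth of a millimeter
```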


Potential Health Issues

The primary human health concern for the extremely small size of nano materials is that they may be introduced into and affect the body in ways completely different than their bulkier macro cousins. See, e.g., Special Report, Nanotechnology: Benefits vs Toxic Risks, Functional Foods And Nutraceuticals (Feb. 2007) ("nanosized particles were found to traverse through lung tissue in unexpected ways, gaining access to blood and lymphatic systems"). 

The potential for different human health-related characteristics such as enhanced adhesion, reactivity and absorption means that current methodologies for risk assessment simply are not applicable, and safety data drawn from non-nano counterpart materials may be irrelevant.  See Fischer, Nanotechnology -- Scientific and Regulatory Challenges, 19 Villanova Envtl. L.J. 315 (2008). For example, when inhaled, nano particles are deposited more efficiently and deeply into the respiratory tract than non-nano materials, and these nano materials may evade human body defense mechanisms that trap larger particles. In addition, nano materials themselves have sometimes been the subject of problematic animal studies. See Lynn, Size Matters: Regulating Nanotechnology, 31 Harv. Envtl. L. Rev. 349 (2007).

Moreover, ordinary risk management tools may also simply “not work” in the presence of nano materials. For example, the use of facial masks designed for non-nano aerosols may not be effective for nano sized particles.

Nanotechnology concerns have been heightened by an article published in the European Respiratory Journal in which researchers reported that seven young women suffered permanent lung damage following months of unprotected exposure to fumes and smoke containing nano particles in spray-painting operations in China. The researchers concluded that the patients' illnesses appeared to be a "nanomaterial-related disease." While the results of this study have been questioned, the legitimacy of concerns with respect to high-level environmental exposures to these materials remains. 

Regulatory Focus

An intense regulatory focus on developing an appropriate scientific basis for ensuring that nano materials do not present unreasonable human health concerns is underway. See e.g., Dept. of Health and Human Services, Approaches To Safe Nanotechnology - Managing The Health And Safety Concerns Associated With Engineered Nanomaterials (March 2009).  Giving further impetus to these concerns is the fact that there is a high concentration of nanotechnology applications in pharmaceutical, food and cosmetics applications, industry segments with direct and immediate human interactions. Every agency with jurisdiction over human and environmental health and safety has found or certainly will find reason to explore regulation. The USEPA has begun to issue rules about handling of and exposure to nano forms of alumina, silica and silver; the California Department of Toxic Substances is considering controls on carbon nanotubes. We can expect initiatives over time from the FDA and OSHA.

Insurance Company Reaction

For its part, the insurance industry has focused on product liability concerns. Insurance industry studies have expressed significant reservations about liability issues associated with nano materials. See Lloyd's of London Emerging Risks Team Report, Nanotechnology - Recent Developments, Risks and Opportunities (2007).  Indeed, one insurance carrier (Continental Western Insurance Group) has gone so far as to impose nanotechnology exclusions in its standard CGL policies - notwithstanding the fact that no such claims have yet been presented. 


It is clear that nanotechnology offers tremendous scientific and commercial opportunities in the future. These opportunities, however, are likely to be accompanied by health and safety based product liability and environmental risks, and those risks need to be taken into account in the development and exploitation of these products.

This blog is based in part on a more expansive article: Michael Dore, Nanotechnology - Evaluate The Products Liability Risks, 198 N.J.L.J. 866 (December 14, 2009).