The Reasonable Robot
AI and people do not compete on a level playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being.
Opportunities, Constraints and Critical Supports for Achieving Sustainable Local Pharmaceutical Manufacturing in Africa: With a Focus on the Role of Finance
At the request of the Open Society Foundations Public Health Program (OSF-PHP), a team of researchers assembled by Nova Worldwide Consulting undertook to study whether and to what extent gaps in the availability of financing are constraining the development of pharmaceutical manufacturing in Africa, especially to address COVID-19. In this context, pharmaceuticals are understood to include diagnostics, vaccines and treatments (DVT), as well as personal protective equipment (PPE). The study also asked: assuming that gaps in the availability of or access to financing are acting as a constraint on local production, what steps or measures might be advocated to address those gaps?
The research team — Frederick Abbott, Ryan Abbott, Joseph Fortunak, Padmashree Gehl Sampath and David Walwyn — represents a variety of disciplines and experience, including legal, economic and scientific/technical. The methodology of research for this study entailed preparation of an inception report, desk research, interviews of stakeholders, a small group learning session with a group of experts, preparation and distribution of a questionnaire at the firm level, discussion with civil society advocacy group representatives, as well as reliance on the experience of team members.
Punishing Artificial Intelligence: Legal Fiction or Science Fiction
Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind.
Treating the health care crisis
The Patient Protection and Affordable Care Act (PPACA) intends to take American health care in a new direction by focusing on preventive medicine and wellness-based treatment. But, in doing so, it does not adequately take into account the potential contribution of complementary and alternative medicine (CAM). CAM is already used by a large and growing number of individuals in the United States, although to date there is limited scientific evidence to support the efficacy of most CAM treatments. This article proposes statutory reforms to PPACA to encourage CAM research and development (R&D), and the use of demonstrably effective CAM treatments. A hybrid system of limited intellectual property protection and government prizes based on regulatory approval may be the best option for incentivizing R&D on CAM, along with increased funding for research through the National Institutes of Health. PPACA should require health insurance plans to reimburse for evidence-based CAM and empower an existing government agency (NCCAM) to regulate CAM standards and to recommend evidence-based CAM services. Together these policy and funding mechanisms should help reduce U.S. healthcare costs and improve quality of life.
Big Data and Pharmacovigilance
Data on individual patients collected through state and federal health information exchanges has the potential to usher in a new era of drug regulation. These exchanges, produced by recent health care reform legislation, will amass an unprecedented amount of clinical information on drug usage, demographic variables, and patient outcomes. This information could aid the Food and Drug Administration with post-market drug surveillance because it more accurately reflects clinical practice outcomes than the trials relied upon for drug approval. However, even with this data available, there is a weak market-driven impetus to use it to police drugs. This is fixable; the post-market drug regulatory process needs new incentives to boost third party participation. While this could be achieved with a variety of mechanisms, the best option for generating robust results may be an administrative bounty proceeding that will allow third parties to submit evidence to the Food and Drug Administration to contest the claimed safety and efficacy profiles of drugs already on the market. The case study of Merck’s former blockbuster drug Vioxx demonstrates how this system might work. In creating a new incentive that counters the powerful financial motivation of drug manufacturers to obscure or misrepresent safety profiles, this regime could lead to an improved balance of the risks and benefits of drugs used by the American public. More broadly, this article illustrates how the private sector can be incentivized to supplement regulatory activity in a complex field.
Documenting Medical Knowledge
Traditional medical knowledge is experiencing increased attention worldwide in light of global health care demand and the significant role of traditional medicine in meeting the public health needs of developing countries. Traditional medicines already comprise a multibillion dollar, international industry, and the biomedical sector is increasingly investigating the potential of genetic resources and traditional knowledge. Documenting and protecting these medicines is becoming a greater priority. Hopefully, this text will help traditional knowledge holders better understand the issues related to traditional medicine and intellectual property and make informed decisions about the best use of their knowledge.
Should Robots Pay Taxes?
Existing technologies can already automate most work functions, and the cost of these technologies is decreasing at a time when human labor costs are increasing. This, combined with ongoing advances in computing, artificial intelligence, and robotics, has led experts to predict that automation will lead to significant job losses and worsening income inequality. Policy makers are actively debating how to deal with these problems, with most proposals focusing on investing in education to train workers in new job types, or investing in social benefits to distribute the gains of automation. The importance of tax policy has been neglected in this debate, which is unfortunate because such policies are critically important. The tax system incentivizes automation even in cases where it is not otherwise efficient. This is because the vast majority of tax revenues are now derived from labor income, so firms avoid taxes by eliminating employees. Also, when a machine replaces a person, the government loses a substantial amount of tax revenue—potentially hundreds of billions of dollars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once the labor is capital. Robots are not good taxpayers. We argue that existing tax policies must be changed. The system should be at least “neutral” as between robot and human workers, and automation should not be allowed to reduce tax revenue. This could be achieved through some combination of disallowing corporate tax deductions for automated workers, creating an “automation tax” which mirrors existing unemployment schemes, granting offsetting tax preferences for human workers, levying a corporate self-employment tax, and increasing the corporate tax rate.
I Think, Therefore I Invent
Artificial intelligence has been generating inventive output for decades, and now the continued and exponential growth in computing power is poised to take creative machines from novelties to major drivers of economic growth. In some cases, a computer’s output constitutes patentable subject matter, and the computer rather than a person meets the requirements for inventorship. Despite this, and despite the fact that the Patent Office has already granted patents for inventions by computers, the issue of computer inventorship has never been explicitly considered by the courts, Congress, or the Patent Office. Drawing on dynamic principles of statutory interpretation and taking analogies from the copyright context, this Article argues that creative computers should be considered inventors under the Patent and Copyright Clause of the Constitution. Treating nonhumans as inventors would incentivize the creation of intellectual property by encouraging the development of creative computers. This Article also addresses a host of challenges that would result from computer inventorship, including the ownership of computer-based inventions, the displacement of human inventors, and the need for consumer protection policies. This analysis applies broadly to nonhuman creators of intellectual property, and explains why the Copyright Office came to the wrong conclusion with its Human Authorship Requirement. Finally, this Article addresses how computer inventorship provides insight into other areas of patent law. For instance, computers could replace the hypothetical skilled person that courts use to judge inventiveness. Creative computers may require a rethinking of the baseline standard for inventiveness, and potentially of the entire patent system.
The Reasonable Computer
Artificial intelligence is part of our daily lives. Whether working as chauffeurs, accountants, or police, computers are taking over a growing number of tasks once performed by people. As this occurs, computers will also cause the injuries inevitably associated with these activities. Accidents happen, and now computer-generated accidents happen. The recent fatality involving Tesla’s autonomous driving software is just one example in a long series of “computer-generated torts.” Yet hysteria over such injuries is misplaced. In fact, machines are, or at least have the potential to be, substantially safer than people. Self-driving cars will cause accidents, but they will cause fewer accidents than human drivers. Because automation will result in substantial safety benefits, tort law should encourage its adoption as a means of accident prevention. Under current legal frameworks, suppliers of computer tortfeasors are likely strictly responsible for their harms. This Article argues that where a supplier can show that an autonomous computer, robot, or machine is safer than a reasonable person, the supplier should be liable in negligence rather than strict liability. The negligence test would focus on the computer’s act instead of its design, and in a sense, it would treat a computer tortfeasor as a person rather than a product. Negligence-based liability would incentivize automation when doing so would reduce accidents, and it would continue to reward suppliers for improving safety. More importantly, principles of harm avoidance suggest that once computers become safer than people, human tortfeasors should no longer be measured against the standard of the hypothetical reasonable person that has been employed for hundreds of years. Rather, individuals should be judged against computers. To appropriate the immortal words of Justice Holmes, we are all “hasty and awkward” compared to the reasonable computer.
Punishing Artificial Intelligence
Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.
Artificial Intelligence, Big Data and Intellectual Property: Protecting Computer-Generated Works in the United Kingdom
Big data and its use by artificial intelligence (AI) are changing the way intellectual property is developed and granted. For decades, machines have been autonomously generating works which have traditionally been eligible for copyright and patent protection. Now, the growing sophistication of AI and the prevalence of big data are positioned to transform computer-generated works (CGWs) into major contributors to the creative and inventive economies. However, intellectual property law is poorly prepared for this eventuality. The UK is one of the few nations, and perhaps the only EU member state, to explicitly provide copyright protection for CGWs. It is silent on patent protection for CGWs.
This chapter makes several contributions to the literature. First, it provides an up-to-date review of UK, EU and international law. Second, it argues that patentability of CGWs is a matter of first impression in the UK, but that CGWs should be eligible for patent protection as a matter of policy. Finally, it argues that the definition of CGWs should be amended to reflect the fact that a computer can be an author or inventor in a joint work with a person.
Hal the Inventor
Big data and its use by artificial intelligence are disrupting innovation and creating new legal challenges. For example, computers engaging in what IBM terms “computational creativity” are able to use big data to innovate in ways historically entitled to patent protection. This can occur under circumstances in which an artificial intelligence, rather than a person, meets the requirements to qualify as a patent inventor (a phenomenon I refer to as “computational invention”).
Yet it is unclear whether a computer can legally be a patent inventor, and it is even unclear whether a computational invention is patentable. There is no law, court opinion, or government policy that directly addresses computational invention, and language in the Patent Act requiring inventors to be individuals and judicial characterizations of invention as a “mental act” may present barriers to computer inventorship. Definitively resolving these issues requires deciding whether a computer qualifies as an “inventor” under the Patent and Copyright Clause of the Constitution: “The Congress shall have the power…to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.”
Whether computers can legally be inventors is of critical importance for the computer and technology industries and, more broadly, will affect how future innovation occurs. Computational invention is already happening, and it is only a matter of time until it is happening routinely. In fact, it may be only a matter of time until computers are responsible for the majority of innovation, potentially displacing human inventors. This chapter argues that a dynamic interpretation of the Patent and Copyright Clause permits computer inventors. This would incentivize the development of creative artificial intelligence and result in more innovation for society as a whole. However, even if computers cannot be legal inventors, it should still be possible to patent computational inventions. This is because recognition of inventive subject matter can qualify as inventive activity. Thus, individuals who subsequently “discover” computational inventions may qualify as inventors. Yet as this chapter will discuss, this approach may be inefficient, unfair, and logistically challenging.
The Sentinel Initiative as a Knowledge Commons
The Sentinel System (Sentinel) is a national electronic safety-monitoring system for post-market evaluation of drugs and devices created by the US Food and Drug Administration (“FDA” or the “Agency”). Sentinel now has the ability to access data on more than 178 million individuals by tapping into existing databases maintained largely by private health care insurers and providers (Health Affairs 2015: 3). Unlike other post-market surveillance tools that primarily rely on third parties to submit reports of adverse events to FDA, Sentinel allows FDA to proactively monitor for safety issues in near real time. Sentinel is one of many initiatives designed to use electronic health records (EHRs) for purposes other than patient care (secondary use), and it may be the most successful domestic example of secondary use (Anonymous interviewee). Janet Woodcock, the director of the FDA’s Center for Drug Evaluation and Research (CDER), has argued that Sentinel could “revolutionize” product safety (Abbott 2013: 239).
Ryan Abbott, Helen Lavretsky, Donald Chang, and Harris Eyre, The Roles of Tai Chi, Qi Gong, and Mind-Body Practices in the Treatment and Prevention of Psychiatric Disorders, In Complementary and Integrative Treatments in Psychiatric Practice (Patricia L. Gerbarg, Phillip R. Muskin, and Richard P. Brown, eds., 2018)
- Winner of the Gold Medal Nautilus Book Award as the 2017 best book in the Psychology category.
Ryan Abbott and Helen Lavretsky, The Use of Tai Chi and Qi Gong for Treatment and Prevention of Mental Disorders, In Integrative Psychiatry, 36(1) Psychiatr Clin North Am 109-19 (Phillip R. Muskin, Patricia L. Gerbarg, and Richard P. Brown eds., 2013)
Michael Cohen, Susan Natbony, and Ryan Abbott, Complementary and Alternative Medicine in Child and Adolescent Psychiatry: Legal Considerations, 22(3) Child Adolesc Psychiatr Clin N Am. 493-507 (2013)
Richard Epstein and Ryan Abbott, FDA Involvement in Off-Label Use: Debate Between Richard Epstein and Ryan Abbott, 44 Sw. L. Rev. 1 (2014)
Peer Reviewed Publications
Frederick M. Abbott, Ryan B. Abbott, Joseph Fortunak, Padmashree Gehl Sampath & David Walwyn, Opportunities, Constraints and Critical Supports for Achieving Sustainable Local Pharmaceutical Manufacturing in Africa: With a Focus on the Role of Finance, Nova Worldwide Consulting (2021) <https://nova-worldwide.com/OSF-PHP_report>
Harris A. Eyre, Andrew Robb, Ryan Abbott and Malcolm Hopwood, Mental Health Innovation Diplomacy: An Underrecognised Soft Power, 53(5) Australian and New Zealand Journal of Psychiatry (ANZJP) (2019) [Commentary]
Ryan Abbott, Inventive Machines: Rethinking Invention and Patentability, In Intellectual Property and Digital Trade in the Age of Artificial Intelligence and Big Data 113–119 (Xavier Seuba, Christophe Geiger and Julien Penin eds., 2018) [Published remarks]