Artificial Intelligence and the Professions

I recently came across a welcome bit of good news (which many of us crave these days!). Through a vehicle intriguingly titled the “Ethics and Governance of Artificial Intelligence Fund,” several philanthropies are providing generous support for an important and challenging undertaking: an investigation of the ethical and moral considerations of advances in AI. The beneficiaries are two highly appropriate and complementary university centers: the Media Lab at the Massachusetts Institute of Technology and the Berkman Klein Center for Internet and Society at Harvard University.

This philanthropic support could not be more timely. After a number of false (or overly hyped) starts in the last half-century, the field of artificial intelligence is coming of age. As computational devices get ever smaller, the question-answering and problem-solving capacities of our technologies increase steadily. We can still argue about whether “artificial intelligence” is truly intelligent—that is, intelligent in the way that we humans think we are intelligent; and it is clear that much of artificial intelligence still depends on “brute force” processing of tons of information rather than the kinds of elegant heuristics that we human beings allegedly employ.

Until recently, I had thought that one arena of human life was unlikely to be affected by artificial intelligence—the practice of the learned professions. Of course, I knew that almost all workers make use of technological aids, and, as a (self-proclaimed) professional, I have for decades used computer programs that help me to array and analyze data, write and edit easily, and—indeed (perhaps especially)—organize my life. But I thought of these as mere adjuncts to my “real” work of thinking, advising, planning, executing, and professing.

Thanks to recent attention in the press—and Richard and Daniel Susskind’s book The Future of the Professions—I now realize that I was naïve. The subtitle of the Susskinds’ book is telling: How Technology Will Transform the Work of Human Experts. Large parts of professional work are now carried out far more rapidly—and in many cases, more accurately—by AI programs and devices than by even the most skilled and speedy human beings. It’s become an open question whether, and to what extent, we will need flesh-and-blood accountants to handle and authenticate our books; live physicians to commission and interpret our MRIs; animated teachers who stand and deliver in front of us rather than well-designed lessons online.

But for the most part, discussions of these trends have ignored or minimized what is at the core, the proverbial “elephant in the room”: the responsibility of professionals to make complex judgments, and notably ethical ones, under conditions of uncertainty. The auditor has to decide which items to include or exclude, how to categorize them, what recommendations to give to the client, when to report questionable practices, to whom, and in what format. The medical practitioner has to decide which tests to commission, which findings to emphasize, and how to explain the possible courses of a disease process to patients and families who differ widely in background, knowledge, and curiosity. The teacher has to decide which topics are most important, what to emphasize (or minimize) in the current context (including time constraints, snow days, and epochal world events), which kinds of feedback are useful to specific students in specific contexts, and which kinds are better kept under wraps for now.

“To be sure,” you might respond. But these kinds of knowledge and “moves” can be, and are being, built into AI. We can and should consider varying contexts; we can have different responses for different clients, even for the same clients on different days or under different circumstances; we can tweak programs based on successes and failures according to specified standards; and, anyway, we cannot be confident that human practitioners—even ones with the best of motives—necessarily handle such challenges very well.

Since I am (I hope) less naïve than I once was, I won’t attempt to bat down or ignore these rejoinders. But I raise the following considerations:

1. Over the decades, professionals have developed understandings of what is proper professional behavior and what is not. To be sure, these consensus judgments are sometimes honored as much in the breach as in the observance; but at least they are standards, typically explicit ones. (One journalistic code of ethics—that of The New York Times—runs to over fifty pages.) Any artificial intelligence program that takes on professional (as opposed to purely technical) competence needs to be explicit about the ethical assumptions built into it. Such discussions have commenced—for example, with reference to the norms governing driverless automobiles that get into traffic jams or accidents.

2. It is illusory to think that there will be one best approach to any professional challenge—be it how to audit accounts, interpret radiological information, or fashion a lesson. Indeed, different approaches will have different ethical orientations—implicit or explicit. Far better that the assumptions be made explicit and that they have to contend with one another publicly on the ethical playing field… as happens now in discussions among philosophers, cognitive scientists, and neuroscientists.

3. Among competing artificial intelligence approaches to professional quandaries, how do we decide which to employ? We could create AI “meta-programs” to make these decisions—but for now, I’d rather let human professionals make these discernments. As the Romans famously asked, “Quis custodiet ipsos custodes?” (“Who will guard the guardians?”)

4. What happens if, for some reason, AI breaks down (for example, if the “hackers of ethics” have their day)? (More than a few gifted hackers pass through the portals of the two Cambridge institutions that have been generously funded.) In such post-diluvian (“after the flood”) cases, we will desperately need well-educated human beings who themselves have come to embody professional expertise and judgment.

5. A personal point: As I write these lines, I am dealing with a medical situation that will take months to resolve. I am fortunate—indeed, more than fortunate—to have a skilled medical team that is helping me to deal with these challenges. No doubt each member makes use of all of the “computational intelligence” at his or her disposal. But I also have conversations—in person, on the telephone, or online—with these physicians frequently (in one case, on a daily basis). These personal interactions with live, empathic human beings have an enormous positive impact on my well-being. When I have recovered, I expect to write about the sense of professional calling that still exists, at least in the prototypical profession of medicine.

6. Maybe my grandchildren or great-grandchildren will be equally satisfied having “conversations” with AI programs, although I can’t conceive of a situation where I would be. And this is in large part because my physicians are human beings. In some ways, as a fellow human being, I know where they are “coming from”—and in some ways, they also know where I am “coming from”; where I am going; and how the many pieces fit (or don’t fit) together. As fellow human beings, we share both an evolutionary background and a common fate.

And so, as the researchers and practitioners commence their important work on the ethics of AI, I hope that they will keep in mind those capacities and potentials that represent the “better angels” of human nature—those civilized and professional virtues and values that took centuries to develop but can so easily be scuttled and forgotten.