The Professional Ethicist

Michel de Montaigne: An Unexpected Lens on Professions in the 16th Century and in the 21st Century

While I am a student of the professions, I have not studied their history systematically. Of course, I realize that there were educators and physicians in the classical Greek era; the Romans created many pivotal political and military roles and produced highly skilled practitioners in engineering and architecture (and that’s not to mention what has been wryly dubbed “the oldest profession”). Still, in my “mental model” of the professions, I have conceived of them as a modern phenomenon—distinctly different from medieval guilds and trades—closely tied to the creation of formal educational institutions, legal requirements, ethical codes, and the possibility of losing one’s license.

Stimulated by Sarah Bakewell’s remarkable book How to Live: Or a Life of Montaigne in One Question and Twenty Attempts at an Answer, I have been reading through an old translation of the essays of Michel de Montaigne. Montaigne lived in France in the 16th century (1533-1592)—a time so different from ours. Life was short and dangerous, most children did not survive the first days or years of life, war was constant, and cruelty towards enemies was accepted and even encouraged. Royalty had tremendous power but was also vulnerable to upheavals, typically sudden and violent; members of the upper social classes, particularly men, were accustomed to being protected and served around the clock by members of the lower classes. Over the centuries, Montaigne has been widely read and widely cherished (though for two centuries, he was on the Catholic Church’s Index of Forbidden Books). He wrote about his own life with unprecedented directness, candor, and wit. And he did so in scores of short pieces in which he poured out his thoughts in a stream-of-consciousness manner. For that reason, he is considered to have invented the literary form called the essay.

While I was reading the essays and (to be frank) daydreaming, I was quite surprised—and awakened!—to encounter the following passage:

“In reading histories, which is everybody’s subject, I use(d) to consider what kind of men are the authors; if they be persons that profess (NOTE THE WORD!) nothing but mere letters, I, in and from them, principally observe and learn style and language; if physicians, I then rather incline to credit what they report of the temperature of the air, of the health and complexions of princes, of wounds and diseases; if lawyers, we are from them to take notice of the controversies of right and wrong, the establishment of laws and civil government, and the like; if divines, the affairs of the Church, ecclesiastical censures, marriages and dispensations; if courtiers, manners and ceremonies; if soldiers, the things that properly belong to their trade, and principally, the accounts of the actions and enterprises wherein they were personally engaged; if ambassadors, we are to observe negotiations, intelligences, and practices, and the manner how they are to be carried on” (p. 13-14, Essays of Montaigne, Xist Classics).

The wording of the era may seem exotic, but the list of the professions, and what Montaigne expected to obtain from their respective practitioners, is quite familiar—lawyers dealing with controversies in the law, physicians focused on wounds, diseases, and general health. In a subsequent moment of daydreaming, my thoughts leapt to a ceremony—dating back almost to Montaigne’s time—that I witness each year. I refer to the Commencement (graduation) ceremonies held in late spring in Harvard Yard. The President, and other leaders of the University, confer degrees on individuals from a dozen different faculties, and in each case, note the privileges and obligations attendant to those who will practice those respective professions. And the list is quite like Montaigne’s—scholars (Arts and Sciences); physicians (Medical School); lawyers (Law School); religious leaders (Divinity School); ambassadors (School of Government).

Taking off from Montaigne’s musings, if we were to undertake a schematic analysis of the sweep of the professions over five centuries, what might some of the similarities and differences be? Here’s my stab:

Similarities:

-Professions address fundamental human needs—tending to sickness, resolving disputes, educating the young, protecting citizens from harm.

-Often, professions tackle complex issues that are not readily resolved.

-Professions take advantage of the latest knowledge and lore; sometimes this is kept under wraps (privileged knowledge).

-Certain individuals are recognized as practitioners, and perhaps masters of that lore; others are made fun of (e.g. in the plays of Molière or Shakespeare—“Let’s kill all the lawyers” [Henry VI, Part 2]).

-Apprentices seek to identify and learn from masters of the specified professions.

Differences:

-New professions arise, others fade away. Barbers are no longer seen as professionals; journalists are aspiring professionals; going forward, those who design the “rules of the internet” are likely to be considered professionals.

-There are now formal educational institutions and requirements. In the United States, following the publication and dissemination of the Flexner Report (1910), fly-by-night medical training institutions were phased out, and a far more rigorous set of criteria was applied to institutions that could award medical degrees (such highly regulated institutions are less the rule in some other countries, and that’s why students who are not admitted to medical school in the U.S. often acquire degrees in other nations). Relatedly, medical curricula are now scrutinized (for example, by the Association of American Medical Colleges).

-Numerous ethical codes are now published; to take one profession, physicians are expected to adhere to them and, at least in principle, one can lose one’s medical license (even if in practice, physicians are rarely expelled from the profession unless they are convicted of crimes).

Taking a perspective that stretches back to Montaigne’s time, while also looking ahead, what trends might we expect?

In the traditional professions (e.g. law, medicine, engineering, university teaching), there will be continuing efforts to establish and monitor training and to maintain and even increase the status of these professions. I don’t think that these efforts will be successful. So many occupations strive to have the status of professions; various educational interventions, many disreputable, are once again springing up. Expertise is not held in high regard (unless your own health or well-being is at stake). Unless the traditional professions can demonstrate unequivocally that their graduates can perform in a way that others do not, it’ll be difficult to maintain the hallowed status of, say, a degree from a flagship law school.

Also, there will continue to be a proliferation of paraprofessionals, who carry out more specific tasks, and these often well-trained experts are likely to blur further the line between the traditional professional and her close colleagues. Finally, as more tasks traditionally associated with the professions are carried out by computational algorithms and devices, the unique contribution of “the professional” will be more difficult to discern.

As readers of this blog will know, I am not sanguine about these trends—I continue to hope that the special status of the professional will endure. If it is to endure, I think it will have less to do with the professional’s technical knowledge and years of schooling. Rather, the survival of “the professional” will come to be associated with the way she comports herself—in terms of relations to colleagues and clients, ability to communicate effectively, monitoring of relevant trends (positive and troubling) in the broader society, and, most important, being able to give clear, objective, and disinterested advice, knowledgeable syntheses of what is known and what is unclear, and wise recommendations within her sphere of competence.

Interestingly, these desirable human traits go way back in history—even into pre-history. We can see them in Plato’s descriptions of wise rulers and in the Biblical portrayal of judges. By implication, we can also see them in Restoration comedies—when professionals are ridiculed, it is because they do not live up to the expectations of excellence that we hope for. And they are also discernible, in a positive sense, in Montaigne’s writings.

In the best of both worlds, we will continue to have individuals who possess high levels of knowledge as well as acute judgment and shafts of wisdom, and who merit the comment, “She is a true professional.”

Artificial Intelligence and the Professions

I recently came across a welcome bit of good news (which many of us crave these days!). Through a vehicle intriguingly titled the “Ethics and Governance of Artificial Intelligence Fund,” several philanthropies are providing generous support for an important and challenging undertaking: an investigation of the ethical and moral considerations of advances in AI. The beneficiaries are two highly appropriate and complementary university centers: the Media Lab at the Massachusetts Institute of Technology and the Berkman Klein Center for Internet and Society at Harvard University.

This philanthropic support could not be more timely. After a number of false (or overly-hyped) starts in the last half-century, the field of artificial intelligence is coming of age. As the size of computational devices gets ever smaller, the question-answering and problem-solving capacities of our technologies increase steadily. We can still argue about whether “artificial intelligence” is truly intelligent—that is, intelligent in the way that we humans think we are intelligent; and it is clear that much of artificial intelligence still depends on “brute force” processing of tons of information rather than the kinds of elegant heuristics that we human beings allegedly employ.

Until recently, I had thought that one arena of human life was unlikely to be affected by artificial intelligence—the practice of the learned professions. Of course, I knew that almost all workers make use of technological aids, and, as a (self-proclaimed) professional, I have for decades used computer programs that help me to array and analyze data, write and edit easily, and—indeed (perhaps especially)—organize my life. But I thought of these as mere adjuncts to my “real” work of thinking, advising, planning, executing, and professing.

Thanks to recent attention in the press—and Richard and Daniel Susskind’s book The Future of the Professions—I now realize that I was naïve. The subtitle of the Susskinds’ book is telling: How Technology Will Transform the Work of Human Experts. Large parts of professions are now carried out far more rapidly—and in many cases, more accurately—by AI programs and devices than by even the most skilled and speediest human beings. It’s become an open question whether, and to what extent, we will need flesh-and-blood accountants to handle and authenticate our books; live physicians to commission and interpret our MRIs; animated teachers who stand and deliver in front of us rather than well-designed lessons online.

But for the most part, discussions of these trends have ignored or minimized what is at the core, the proverbial “elephant in the room”: the responsibility of professionals to make complex judgments, and notably ethical ones, under conditions of uncertainty. The auditor has to decide which items to include or exclude, how to categorize them, what recommendations to give to the client, when to report questionable practices, to whom, and in what format. The medical practitioner has to decide which tests to commission, which findings to emphasize, and how to explain the possible courses of a disease process to patients and families who differ widely in background, knowledge, and curiosity. The teacher has to decide which topics are most important, what to emphasize (or minimize) in the current context (including time constraints, snow days, and epochal world events), which kinds of feedback are useful to specific students in specific contexts, and which kinds are better kept under wraps for now.

“To be sure,” you might respond. But these kinds of knowledge and “moves” can be, and are being, built into AI. We can and should consider varying contexts; we can have different responses for different clients, even for the same clients on different days or under different circumstances; we can tweak programs based on successes and failures according to specified standards; and, anyway, we cannot be confident that human practitioners—even ones with the best of motives—necessarily handle such challenges very well.

Since I am (hopefully) less naïve than at earlier times, I won’t attempt to bat down or ignore these rejoinders. But I raise the following considerations:

1. Over the decades, professionals have developed understandings of what is proper professional behavior and what is not. To be sure, sometimes these consensus judgments are honored as much in the breach as in the observance; but at least they are standards, typically explicit ones. (One journalism ethical code—that of The New York Times—runs for over fifty pages.) Any artificial intelligence program that takes on professional (as opposed to purely technical) competence needs to be explicit about the ethical assumptions built into it. Such discussions have commenced—for example, with reference to the norms governing driverless automobiles that get into traffic jams or accidents.

2. It is illusory to think that there will be one best approach to any professional challenge—be it how to audit accounts, interpret radiological information, or fashion a lesson. Indeed, different approaches will have different ethical orientations—implicit or explicit. Far better that the assumptions be made explicit and that they have to contend with one another publicly on the ethical playing field… as happens now in discussions among philosophers, cognitive scientists, and neuroscientists.

3. Among competing artificial intelligence approaches to professional quandaries, how do we decide which to employ? We could create AI “meta-programs” that make these decisions—but for now, I’d rather let human professionals make these discernments. As the Romans famously queried, “Quis custodiet ipsos custodes?” (“Who guards the guardians?”)

4. What happens if, for some reason, AI breaks down (for example, if the “hackers of ethics” have their day)? (More than a few gifted hackers pass through the portals of the two Cambridge institutions that have been generously funded.) In such post-diluvian, “after the flood” cases, we will desperately need well-educated human beings who themselves have come to embody professional expertise and judgment.

5. A personal point: As I write these lines, I am dealing with a medical situation that will take months to resolve. I am fortunate—indeed, more than fortunate—to have a skilled medical team that is helping me to deal with these challenges. No doubt each member makes use of all of the “computational intelligence” at his or her disposal. But I also have conversations—in person, on the telephone, or online—with these physicians frequently (in one case, on a daily basis). These personal interactions with live, empathic human beings have enormous positive impact on my well-being. When I have recovered, I expect to write about the sense of professional calling which still exists, at least in the prototypical profession of medicine.

6. Maybe my grandchildren or great-grandchildren will be equally satisfied having “conversations” with AI programs, although I can’t conceive of a situation where I would be. And this is in large part because my physicians are human beings. In some ways, as a fellow human being, I know where they are “coming from”—and in some ways, they also know where I am “coming from”; where I am going; and how the many pieces do (or don’t) fit together. As fellow human beings, we share both an evolutionary background and a common fate.

And so, as the researchers and practitioners commence their important work on the ethics of AI, I hope that they will keep in mind those capacities and potentials that represent the “better angels” of human nature—those civilized and professional virtues and values that took centuries to develop but can so easily be scuttled and forgotten.

The Letter of Recommendation: Professional Judgment Under Siege

As a veteran professional, considered to have expertise in education and social science, I am often asked for advice. The requests run the gamut from where to study, to what to study, to how to succeed in one or another competitive arena. I do my best to be helpful—which often includes the admission that I don’t know enough to offer help.

Among the areas for which my professional judgment is most often sought is the letter of recommendation. I am asked to write a variety of letters. These range from recommending a young person for admission to a secondary school or college to recommending a senior colleague for a prize or a month-long residency at a picturesque conference site. In the former case, I’m buoyed by the knowledge that there are many good places where the candidate can study. In the latter case, there is often already a lot of public knowledge about the candidate and so my support is probably symbolic rather than substantive.

The most challenging letters: those requested by young scholars who are applying for full-time tenure-track teaching jobs. (Sometimes I have been the chief doctoral adviser for the scholar; at other times, one of her teachers or a member of her dissertation committee.) These jobs are highly competitive, with dozens or even hundreds of qualified candidates for each coveted position. Not infrequently, I (as well as other colleagues) will be asked to write letters for more than one candidate for the same job!

I am always suspicious of claims that “things used to be easier” or “more straightforward” in the past—and in reading C. P. Snow’s novels about academe in England, I learned that intrigue has always hovered over coveted appointments. But things certainly used to be different.

In the first half of the 20th century, nearly all appointments at selective institutions (in the United States, the United Kingdom, and other countries) came about through personal recommendations—in writing, in person, or by phone. The operating principle was the “old boys’ network”—and literally boys, since almost no “girls” were part of the network. When there was an opening at an institution, or when a senior scholar had a promising student, relevant “old boys” would get in touch with one another and have a presumably frank discussion of strengths and weaknesses. (Having read some correspondence from that era, I have been impressed by how candid the letters were—critiques were at least as prominent as raves.) In that sense, one can say that these recommendations were truthful.

But it’s equally important to point out that scholars had their favorites—and the fact that a candidate carried on your work, agreed with your view of the field, or had been personally helpful to you undoubtedly put a finger on the scale of a positive recommendation.

The “old boys’ network” needed to be exploded, and in the last several decades, it clearly has been. To begin with, while sexism has hardly disappeared, the range and variety of candidates are much greater, with women and minorities at least in the pool even when there has not been special encouragement for their candidacy. All jobs must be publicly advertised. Further, in many places there are “sunshine” rules, such that either letters are made public or—more typically—the letter writer is warned that confidentiality cannot be guaranteed.

Efforts have been undertaken to make such letters more objective. One ploy is to ask the letter writer to compare the candidate to other candidates in her cohort—either others who are explicitly named or ones whom the writer himself is asked to nominate. Another ploy, common for admission to a highly competitive program, is to ask the letter writer to rate the candidate in terms of her percentile rank with respect to properties like originality, expression in writing, oral expression, etc. An example: “In oral expression, as compared to other candidates, is this candidate in the upper 1%, the upper 5%, the upper 10%, etc.?”

A complicating factor—especially salient in the United States—is what I’d term “letter inflation.” We are all familiar with grade inflation—the tendency over the decades to give students ever higher grades (in many institutions of higher learning, the Gentleman C has been promoted to the Gentleman A-minus). With respect to letters, I’ve observed the same trend in the United States—letters often compete with one another for superlatives. Indeed, of the many letter writers whom I know personally or “on paper,” only one of them is relatively candid about the flaws in a candidate.

So, in the light of all of these obstacles, what is left, if anything, of professional judgment? Faced with other letters that are likely to be laden with superlatives, as well as the prospect of public exposure of critical remarks (not to mention the possibility of a lawsuit filed by an unsuccessful job candidate!), are there any principles to which a letter writer should adhere in order to convey his or her professional judgment in a reliable way?

Here is what I would recommend:

1. When asked by a job candidate for a letter of recommendation, be prepared to say “no” and to give reasons that are candid, though not, of course, gratuitously nasty. I often explain that I don’t know the candidate well enough to be helpful or that I have already agreed to write for someone else or that I don’t think that the candidate is appropriate for the job. Better to be tough at the beginning than to find yourself in a quagmire.

2. Refuse to do rank orderings or checklists. Here’s the standard boilerplate that I use: “As a matter of personal policy, I do not complete ratings questionnaires as a portion of recommendations.” Why this refusal? One almost never sees checklists that are not completely skewed to the positive—so much so that checking off “Top 10%,” rather than “Top 1%,” can be the kiss of death.

3. Be purely descriptive whenever possible. For example, when it comes to a discussion of the candidate’s research, put it in your own words and be explicit about its contribution as well as its limitations.

4. State in a positive way the candidate’s strong features—letter readers will be interested in how you see her strengths.

5. If possible, touch on the candidate’s less strong features—or indicate areas where you don’t feel competent to comment (for example, if you know the candidate’s research but not her teaching, it is fine to state that).

6. If, for whatever reason, you cannot be explicit about a candidate’s weaknesses, be silent. Leave it to the readers of the letter to make inferences about what is not discussed. To avoid unintentionally harming the candidate, I always have a last line that reads, “Please let me know if I can provide any further information.” If there is indeed a follow-up, you are free to say, “I am not comfortable commenting on that issue.” Don’t lie!

As you can probably tell, this state of affairs does not please me. I’d much rather be completely candid and have others be equally candid with me. (In that sense, despite its obvious flaws, I have sympathy for the normative behavior in earlier times.) But that is not the world in which we live, and it would be wrong to treat a job candidate in a way that unfairly jeopardizes her chance for a livelihood. And so I come to the reluctant conclusion that, at least in the United States, letters of recommendation are not a site where one can expect candid professional judgment.

Since these issues will not go away, and since they likely affect all who read this piece, I’d be eager to hear others’ ideas about the professional judgment involved in letters of recommendation and how to exercise one’s professional judgment in a responsible way. Feel free to write your own recommendations below!

Truth and Goodness: Taking a Page from Ronald Reagan!

In a previous blog, I lamented the emergence of a “post-truth,” “false news” public space—one where there is essentially no belief in truth, nor even in the possibility of establishing it. Given my interest in ethical behavior, I wondered whether it is possible to offer visions of “the good” when there is no longer a belief in the search for—indeed, even the possibility of ever establishing—truth.

I rejected two options: 1) surrendering to postmodern skepticism about the possibility of rendering judgments of truth; and 2) clinging to the Olympian view that truth may ultimately be established but is not a viable goal for ordinary mortals in ordinary times.

While searching for a plausible alternative in real time, I suddenly remembered the words uttered by President Ronald Reagan as he laid out his stance toward the then still formidable Soviet Union. Reflecting on the possibility of mutually reducing or even eliminating nuclear weapons, the 40th president said, “Trust, but verify.”

(There are various wordings and translations of this phrase, which may date back to classical times—for my purposes, it’s the two key terms that are instructive.)

Turning first to verification, of course anyone can make any kind of assertion at any time. Those who encounter the assertion need to determine on what basis it has been made. And so, if, for example, the Soviet Union (or the United States) claimed to have reduced its stockpile of weapons, there needed to be surveillance methods whereby the accuracy of the claim could be ascertained.

The scholarly disciplines and forms of technical expertise that humans have developed over the centuries have embedded in them ways, methods, and algorithms on the basis of which claims can be judged. Sometimes, of course, the methods of verification are controversial, as is the realm of their proper application. Yet, within, say, economics or psychology or astronomy or civil engineering or neurosurgery, certain methods are widely accepted; only a cynic or an ignoramus would ignore or bypass them completely. Why re-invent the disciplinary wheel?

Experts frequently agree when the evidence is inconclusive; in such cases, they are challenged to indicate the conditions under which claims might be supported with greater confidence.

Each of us is better off if we can judge claims and methods ourselves or in discussion with other knowledgeable peers. But that state of affairs demands that we have achieved significant expertise; and life is far too short to allow any individual to attain expertise in more than a few, usually quite closely related, areas. No more polymaths in the tradition of Leonardo da Vinci!

Enter the second arrow in the Reagan quiver: that of trust. Only a fool trusts all claims blindly; only a skeptic does not trust anyone under any conditions.

And so the challenge for all of us is to determine who(m) to trust, and under what circumstances. In my own case, there are certain publications and certain websites that I have come to trust because they are disinterested in the best sense of that word. Rather than seeking to find evidence to support a position to which they are already committed, these publications carry out fresh investigations, are careful in their reporting, and—importantly—are quick to point out errors and to correct course. In cases of doubt, I’ll turn to The New York Times, National Public Radio, The Economist, and their respective websites (not, of course, to their opinion pages and columns).

Depending on the area of expertise, there are also certain individuals whose judgments, opinions, and conclusions I have come to trust. (Out of respect for their privacy, I am not going to name them, but they know who they are!) What these individuals will claim or conclude with respect to a particular case cannot be anticipated; rather, these knowledgeable individuals weigh each case on its merits, come to the best conclusion that they can, and freely admit when the case remains unclear or indeterminate. And, as in the case of the publications to which I have just made reference, these trustworthy individuals do not hesitate to indicate when an earlier conclusion or claim was off base.

With respect to trust, there is one potential source about which I am particularly skeptical: one’s own intuitions. Intuitions are sometimes well-founded; but when it comes to issues of import, especially as they affect others, evidence, argument, and consideration of counterclaims need to be given pride of place. I recall an old saw: “No one ever went so wrong as the person who relied primarily on his own judgment.” (If this makes you think of a current political figure, you and I are thinking along similar lines.)

Bottom Line: If we are to continue to believe in the possibility of ascertaining what is true, we have two primary allies: 1) the methods of verification of the several fields of knowledge and practice; and 2) the existence of persons, publications, and institutions whose track record merits trust. It’s best if we can continue to draw on both of these allies, with the relative importance of each depending on the particular issue and its ramifications.

So while Ronald Reagan was contemplating reductions in the arsenal of nuclear weapons, his pithy phrase helps us to think about the validity of the various claims that we encounter—claims that are essential to consider if there is to be any progress in judging and achieving the good.

The Good: Can We Have It in the Absence of Truth?

For over half a century, I’ve been obsessed with the nature of truth, beauty, and goodness. I see them as central in education and, indeed, in life—I would not want to live in a world where human beings could not distinguish truth from falsity; did not value beauty; and did not seek what is good and desist from what is bad.

In the last quarter century, I have argued that a principal reason—perhaps the principal reason—for education is to help young people understand (and act upon) this trio of virtues. These are the themes of my book The Disciplined Mind and of its update, Truth, Beauty, and Goodness Reframed. This past term, I taught a course on the topic—I jokingly dubbed it “Truth, Beauty, and Goodness Reframed Reframed.” And in an ongoing study of education, I speak about the space between LIteracies (the goal of the first years of school) and LIvelihoods (the attainment of reasonable employment toward the end of adolescence) as the LIberal Arts and Sciences—the study, appreciation, and realization of these three virtues.

But any thought that I had cracked the secret of the virtues has been exploded during the past year by the political events in the United States. Voters in America had the choice between one presidential candidate who approached issues of truth with the hair-splitting logic of a lawyer; and another candidate who baldly lied and then lied about his lies. As if to complete the funeral of truth, we have an electorate, many of whom do not seem to care about rampant lying; and the creation of a new category—fake or false news: news which is simply made up for propaganda purposes and is then circulated as if it had been carefully researched and validated.

How does this newly emerging state-of-affairs relate to the virtues? Until 2016, I had assumed that truth was a widely accepted goal—we might even say a widely accepted good—even though, of course, it is not always achieved. And so we could turn our attention to what I consider the heartland of goodness: the relations that obtain among human beings, those to whom we are close as well as those with whom we have only a distant, transactional relationship.

But I have had to come face-to-face with an uncomfortable, if not untenable situation: if we don’t agree about what is true, and if we don’t even care about what is true, then how can we even turn our attention to what is good, let alone care about what is good, and what is not? (In thinking about this issue, I’ve been aided by the excellent discussions with my students at the Harvard Graduate School of Education.)

So here’s my current thinking:

Option #1. A Post Post-Modern View: If we throw out the possibility of ascertaining truth, or even caring about truth, then goodness must be scuttled. If P and Not P are equally valid (or equally invalid), there is no possibility of making an ethical or moral judgment. All are good, all are bad, flip a coin.

Option #2. An Olympian View of Goodness: For the sake of argument, let’s concede that we ordinary humans are not able during our lifetimes to make judgments of what is true and what is not true and hence are stymied in our evaluation of “the good.” There might still be judgments of goodness which are based on some absolute standard: standards of justice (that exist in some document, be it a constitution or the Bible); standards of the good (that are made by God or by the gods); standards of posterity (that are made by historians many years hence); or standards of philosophers (what Plato or Kant or Rawls might deem to be good).

I certainly favor Option #2 over Option #1. But I propose another way of thinking of this issue.

If there is any view of good that can be put forth as universal, or close to universal, it is that one should not kill innocent people (the Sixth Commandment—Thou shalt not kill; the Golden Rule: Do unto others…). So let us stipulate that principle as a “Given Good.” In making a judgment about the relation among human beings, we can therefore conclude that one who kills one or more innocent persons is a bad person and/or has committed a bad act. (By extension, one could then say that individuals who save innocent persons or who penalize killers of innocent persons are good persons.)

Following this line of argument, we need now to determine the truth of the matter: whether a killing took place, who carried out the killing and why, what is the status of the person who was killed, and what, if anything, should be done with the identified killer.

Allegation: John killed Joe.

In what I have termed “neighborly morality,” these questions can usually be answered without too much difficulty. People who live in a neighborhood know one another, they see what is going on and why, and nowadays they can record (and replay) happenings instantly on various recording devices. If Joe’s murder is observed by other individuals, and/or recorded for posterity, then only a crazy person will deny that it has happened.

Of course, determining the motive of the killer and the status of the killed can be more challenging. But again, in a neighborhood, individuals will generally be well known to those whom they see each day, and the planned or accidental nature of the killing will be apparent, as well as the behavior of the killer in the aftermath of the deed.

And so, in brief, within the neighborhood, establishing what happened—what is true—is relatively straightforward, and judgments of good and bad can be validly made… except by extreme post-modernists or by those who are crazy.

But now let’s consider killing that occurs outside the neighborhood, often of a large number of persons, and often by agents whose motivation and activities are far more difficult to ascertain.

Allegation: Serb leader Radovan Karadzic killed thousands of innocent Bosnians and Croats.

Allegation: Syrian leader Bashar Al Assad is killing thousands of innocent Syrians.

Allegation: Russian leader Vladimir Putin poisoned several of his political opponents.

In these latter cases, the norms of neighborly morality do not apply. The alleged killers are not known personally by most of the victims and observers. Nor do the alleged killers directly carry out the killings—the lines of authority, and the details of the killing, are much more difficult to ascertain. Indeed, in the absence of such personal culpability and of documentation of the circumstances of murder, the killings can almost seem like crimes that did not happen or perpetrator-free crimes: As Josef Stalin cynically quipped, “A single death is a tragedy; a million deaths is a statistic.”

In the second decade of the twenty-first century, such heinous crimes do not always go unpunished. Using the precedent of the Nuremberg Trials in post-World War II Europe, we now have an International Criminal Court. And at least occasionally, a leader like Karadzic can be held accountable for mass deaths—in his case, he was found guilty of genocide, war crimes, and crimes against humanity. But for this result to occur, one needs to have a massive amount of evidence, the power to arrest and extradite, and the decision of a court that proceeds according to international law. No wonder that more distant forms of killing typically go unpunished.

Even in the case of the conviction of Karadzic, consensus about the crime and punishment can remain elusive. The charge of genocide is very difficult to sustain; indeed, over a century after the killing of one million Armenians, Turkish leaders refuse to discuss or even use the term genocide. Militant Serbs believe that they are in a justifiable struggle to vindicate their own history and sustain their own culture, a struggle dating back to the battle of Kosovo in 1389! Paradoxically, for many Serbs, the actions of the late 20th century were a retaliation against neighbors whom they have loathed over the centuries.

So if truth is so difficult to establish, where is the dry land? Once we leave the neighborhood, on what bases can we render judgments of what is good and what is not, especially when cases are less clear-cut than the Syrian or the Serbian cases?

I find two sources of hope:

  1. Understanding the means, the methods, and the evidence on which assertions are made. If one is dealing with contemporary or historical political events, one needs to know how to make sense of journalism, eyewitness reports, historical documents, and other putative sources of evidence. This approach applies equally well to science, medicine, art, and indeed any way of marshalling and evaluating evidence.

  2. Identifying individuals and sources who are trustworthy. Even the most polymathic among us cannot be expected to evaluate every argument and every piece of evidence on our own. And so it is especially important to identify those persons (known personally or known through the media) and those sources of information that we find to be regularly accurate and reliable. This does not mean that such persons or sources are always right. None can pass that test! Rather it means that when they are wrong, they acknowledge it. It also means that their judgments are not always predictable; rather, they evaluate each case on its merits.

In my next blog, I’ll turn my attention to the ethics of roles. I’ll pursue how, on the basis of these two promising sources, we can establish—or, perhaps, more precisely RE-establish—a firmer link between truth and goodness.