
Legitimate Lies: Omission, Commission, and Cheating


My name is Andrea Pittarello, and I am an Assistant Professor in the Psychology Department at the University of Groningen (The Netherlands). I am mainly interested in behavioral ethics (e.g., cheating) and I seek to understand what leads people from all walks of life to bend the rules and serve their self-interest.

In a recent paper with Enrico Rubaltelli (University of Padova) and Daphna Motro (University of Arizona), we asked whether people are more likely to lie by withholding the truth (i.e., a lie of omission) or by actively breaking the rules (i.e., a lie of commission). Imagine that you are selling your car and the engine is on its last legs. A lie of commission would be telling a potential customer that the engine works perfectly, whereas a lie of omission would be failing to mention the problem and letting the customer find out about it on their own. From a utilitarian point of view, the two lies should be equivalent: the outcome is the same, and the way the deception is brought about should not affect our judgments. However, philosophers and psychologists have found that the two lies are judged differently by observers and treated differently by the law. To date, most of the work on omission and commission has focused on moral judgment, and we know very little about how this distinction is reflected in actual cheating behavior.

To answer this question, we devised a simple “Heads or Tails” game. We told participants that the computer would flip a coin and determine whether they won or lost the game. Participants would win a monetary prize only if the coin landed on Heads; unbeknownst to them, however, the coin always landed on Tails. Participants learned that the software had some bugs and might not always be accurate in determining whether they had won or lost. Participants in the commission condition received correct feedback that they did not win the game. Participants in the omission condition saw the coin land on the losing side but were incorrectly informed that they had won. Next, we asked all participants whether they had won or not. Since they had been informed about the possible bugs, participants in the commission condition could overtly lie and report that the computer was wrong and that they had actually won. By contrast, participants in the omission condition could simply fail to report the error and collect the prize.
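To make the two conditions concrete, here is a minimal sketch of the task logic in Python. The function names, the reporting rule, and every other detail are illustrative assumptions for exposition, not the actual experimental software used in the paper.

```python
# Illustrative sketch of the rigged "Heads or Tails" task described above.
# All names and details are assumptions for exposition, not the authors' software.

def run_trial(condition: str) -> dict:
    """Simulate one trial; the coin is rigged to always land on Tails (a loss)."""
    coin = "Tails"  # unbeknownst to the participant, the outcome is always a loss

    if condition == "commission":
        feedback_won = False  # correct feedback: the participant is told they lost
    elif condition == "omission":
        feedback_won = True   # incorrect feedback: the participant is told they won
    else:
        raise ValueError("condition must be 'commission' or 'omission'")

    return {"coin": coin, "feedback_won": feedback_won}


def participant_report(trial: dict, honest: bool) -> bool:
    """Return the participant's claim of having won.

    Cheating takes a different form in each condition:
    - commission: overtly claiming the correct losing feedback was a bug
    - omission: passively accepting the erroneous winning feedback
    """
    if honest:
        return False  # the coin always lands on Tails, so an honest report is a loss
    return True


if __name__ == "__main__":
    for condition in ("commission", "omission"):
        trial = run_trial(condition)
        claim = participant_report(trial, honest=False)
        print(condition, trial, "claims win:", claim)
```

In this sketch, the dishonest outcome is the same in both conditions (claiming a win after a rigged loss); what differs is whether reaching it requires an overt false statement or merely staying silent.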
Across four studies, we consistently found that participants were far more likely to lie by withholding the truth (omission) than by overtly reporting that the computer was wrong (commission). In addition, we found that lies of omission were deemed more acceptable, legitimate, and justifiable than lies of commission.

Our findings rely on a simple cheating task, whereas real-world ethical dilemmas can be far more complex. Even so, they tell us a bit more about when and why people are likely to bend the rules and cheat. Within organizations, for instance, employees can act dishonestly by not reporting ethical violations, failing to speak up, or colluding with corrupt managers, especially when doing so yields personal benefits. Similarly, sellers can “forget” to mention a hidden problem when selling a product. Increasing the transparency of business practices and limiting situations in which people can passively, or easily, profit from unethical behavior seems a promising avenue for fostering compliance with ethical rules.
