University of Edinburgh
School of Economics
31 Buccleuch Place, Office 2.14
Edinburgh, EH8 9JT, UK
Email: ziegler{AT}ed.ac.uk
Abstract: Interactive decision-making relies on strategic reasoning. Two prominent frameworks are (1) models of bounded reasoning, exemplified by level-k models, which keep reasoning implicit, and (2) epistemic game theory, which makes reasoning explicit. We connect these approaches by "lifting" static complete-information games into incomplete-information settings where payoff types reflect players' reasoning depths as in level-k models. We introduce downward rationalizability, defined via minimal belief restrictions capturing the basic idea common to level-k models, to provide robust and well-founded predictions in games where bounded reasoning matters. We then refine these belief restrictions to analyze the foundations of two seminal models of bounded reasoning: the classic level-k model and the cognitive hierarchy model. Our findings shed light on the distinction between hard cognitive bounds on reasoning and beliefs about co-players' types. Furthermore, they offer insights into robustness issues relevant for market design. Thus, our approach unifies key level-k models, building on clear foundations of strategic reasoning that stem from epistemic game theory.
Working Paper: arXiv
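For readers unfamiliar with the two benchmark models referenced in the abstract above, the sketch below illustrates level-k and Poisson cognitive hierarchy reasoning in a standard p-beauty contest. The anchor of 50 and the Poisson rate are illustrative assumptions, and the example is a textbook illustration of the reasoning models, not the paper's lifted incomplete-information construction.

```python
# Illustrative sketch (assumed example, not from the paper): level-k and Poisson
# cognitive hierarchy reasoning in a standard p-beauty contest where players guess
# in [0, 100] and the target is P times the average guess. The anchor of 50 and the
# Poisson rate TAU are illustrative assumptions.
import math

P = 2 / 3          # target fraction of the average
LEVEL0_GUESS = 50  # common anchor: level-0 guesses the midpoint of [0, 100]
TAU = 1.5          # Poisson rate of the cognitive hierarchy model (hypothetical)

def level_k_guess(k):
    """Level-k best responds to the belief that all others are level k-1
    (best reply approx. P times their guess, ignoring one's own effect on the mean)."""
    return LEVEL0_GUESS * P ** k

def cognitive_hierarchy_guess(k, tau=TAU):
    """Level-k best responds to a truncated Poisson(tau) mixture over levels 0..k-1."""
    if k == 0:
        return LEVEL0_GUESS
    weights = [math.exp(-tau) * tau ** h / math.factorial(h) for h in range(k)]
    total = sum(weights)
    mean_lower_guess = sum(
        (w / total) * cognitive_hierarchy_guess(h, tau) for h, w in enumerate(weights)
    )
    return P * mean_lower_guess

for k in range(4):
    print(k, round(level_k_guess(k), 2), round(cognitive_hierarchy_guess(k), 2))
```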
Abstract: We introduce and formalize misalignment, a phenomenon of interactive environments perceived from an analyst's perspective where an agent holds beliefs about another agent's beliefs that do not correspond to the actual beliefs of the latter. We demonstrate that standard frameworks, such as type structures, fail to capture misalignment, necessitating new tools to analyze this phenomenon. To this end, we characterize misalignment through non-belief-closed state spaces and introduce agent-dependent type structures, which provide a flexible tool to understand the varying degrees of misalignment. Furthermore, we establish that appropriately adapted modal operators on agent-dependent type structures behave consistently with standard properties, enabling us to explore the implications of misalignment for interactive reasoning. Finally, we show how speculative trade can arise under misalignment, even when imposing the corresponding assumptions that rule out such trades in standard environments.
Working Paper: arXiv
Abstract: The Deferred Acceptance (DA) algorithm is stable and strategy-proof, but can produce outcomes that are Pareto-inefficient for students, and thus several alternative mechanisms have been proposed to correct this inefficiency. However, we show that these mechanisms cannot correct DA's rank-inefficiency and inequality, because these shortcomings can arise even in cases where DA is Pareto-efficient.
We also examine students' segregation in settings with advantaged and marginalized students. We prove that the demographic composition of every school is perfectly preserved under any Pareto-efficient mechanism that dominates DA, and consequently fully segregated schools under DA maintain their extreme homogeneity.
Working Paper: arXiv
Abstract: The Deferred Acceptance (DA) mechanism can generate inefficient placements. Although Pareto-dominant mechanisms exist, it remains unclear which and how many students could improve. We characterize the set of unimprovable students and show that it includes those unassigned or matched with their least preferred schools. Nevertheless, by proving that DA's envy digraph contains a unique giant strongly connected component, we establish that almost all students are improvable, and furthermore, they can benefit simultaneously via disjoint trading cycles. Our findings reveal the pervasiveness of DA's inefficiency and the remarkable effectiveness of Pareto-dominant mechanisms in addressing it, regardless of the specific mechanism chosen.
Working Paper: arXiv
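As a rough illustration of the envy-digraph idea in the abstract above, the sketch below builds a simplified envy digraph from hypothetical DA assignments and preferences and flags students in non-singleton strongly connected components as candidates for improvement via trading cycles. The construction and the data are simplified assumptions, not the paper's formal definitions.

```python
# Illustrative sketch (simplified assumption, not the paper's formal construction):
# build an "envy digraph" in which student i points to student j whenever i prefers
# j's assigned school to her own, and flag students in non-singleton strongly
# connected components as candidates for improvement via trading cycles.
import networkx as nx

# Hypothetical DA outcome: student -> assigned school
assignment = {"s1": "A", "s2": "B", "s3": "C", "s4": "A"}

# Hypothetical preferences: student -> schools ranked from best to worst
preferences = {
    "s1": ["B", "A", "C"],
    "s2": ["C", "B", "A"],
    "s3": ["A", "C", "B"],
    "s4": ["A", "B", "C"],
}

def prefers(student, school_x, school_y):
    """True if `student` ranks school_x strictly above school_y."""
    ranking = preferences[student]
    return ranking.index(school_x) < ranking.index(school_y)

# Build the envy digraph over students.
G = nx.DiGraph()
G.add_nodes_from(assignment)
for i in assignment:
    for j in assignment:
        if i != j and prefers(i, assignment[j], assignment[i]):
            G.add_edge(i, j)

# Students on a cycle of envy (equivalently, in a non-singleton strongly connected
# component) can all be made better off by trading assignments along that cycle.
improvable = set()
for component in nx.strongly_connected_components(G):
    if len(component) > 1:
        improvable |= component

print("potentially improvable students:", sorted(improvable))
```

In this toy example, s1, s2, and s3 envy each other along a cycle and would all gain by exchanging their schools, while s4 already holds her top choice.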
Abstract: Performance of binary classifiers is often measured against a reference classifier when the true class is not readily observable. Reference classifiers are themselves commonly imperfect. A prominent example is a diagnostic test whose performance is measured against an established reference test. An imperfect reference generally leads to partial identification of relevant performance measures, which induces ambiguity in the interpretation of a predicted class. Seidenfeld and Wasserman (1993) define dilation as an extreme notion of non-informativeness when information is ambiguous. This paper characterizes precise conditions under which dilation arises in the context of binary classifiers when the reference classifier is imperfect and develops procedures for statistical inference based on methods for subvector inference in moment inequality models. We illustrate the usefulness of our approach in two distinct case studies: radiologists' assessment of chest CT scans for COVID-19 in Ai et al. (2020), and the use of artificial intelligence for the detection of COVID-19 in Mei et al. (2020).
Working Paper: Link
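To make the notion of dilation used above concrete, the sketch below checks Seidenfeld and Wasserman's (1993) definition on a small, hypothetical set of joint distributions over the true class and a classifier's prediction. It is not the paper's characterization or its inference procedure; the numbers are made up for illustration.

```python
# Illustrative sketch (hypothetical numbers, not the paper's characterization or
# inference procedure): checking Seidenfeld and Wasserman's (1993) definition of
# dilation for a small credal set of joint distributions over the true class Y and
# a classifier's prediction Yhat.

def interval(values):
    return min(values), max(values)

def conditional(joint, yhat):
    """P(Y=1 | Yhat=yhat) for a joint pmf given as {(y, yhat): prob}."""
    denom = joint[(0, yhat)] + joint[(1, yhat)]
    return joint[(1, yhat)] / denom

# Hypothetical credal set: each element is a joint pmf over (Y, Yhat).
credal_set = [
    {(1, 1): 0.30, (1, 0): 0.20, (0, 1): 0.20, (0, 0): 0.30},
    {(1, 1): 0.10, (1, 0): 0.40, (0, 1): 0.40, (0, 0): 0.10},
]

prior_lo, prior_hi = interval([j[(1, 1)] + j[(1, 0)] for j in credal_set])

# Dilation: for *every* realization of the prediction, the interval of posterior
# probabilities strictly contains the interval of prior probabilities.
dilates = True
for yhat in (0, 1):
    post_lo, post_hi = interval([conditional(j, yhat) for j in credal_set])
    if not (post_lo < prior_lo and post_hi > prior_hi):
        dilates = False

print("prediction dilates the prior:", dilates)
```

In this example the prior probability of Y = 1 equals 0.5 under every distribution in the set, yet conditioning on either prediction leaves only the wider intervals [0.2, 0.6] and [0.4, 0.8], so the prediction dilates the prior.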
Abstract: Information provision is a significant component of business-to-business interaction. Furthermore, the provision of information is often conducted bilaterally, which precludes commitment to a grand information structure when there are multiple receivers. Consequently, in a strategic situation, each receiver needs to reason about what information other receivers get. Since the information provider does not know this reasoning process, a robustness requirement arises naturally: the provider seeks an information structure that performs well no matter how the receivers actually reason. In this paper, I provide a general method to study how to optimally provide information under these constraints. The main result is a representation theorem, which makes the problem tractable by reformulating it as a linear program in belief space. In addition, I provide novel bounds on the dependence among receivers' beliefs, which yield even more tractability in some special cases.
Games and Economic Behavior 136 (2022), pp. 559–585
Abstract: We study players interacting under the veil of ignorance, who have coarse beliefs represented as subsets of opponents' actions. We analyze when these players follow maxmin or maxmax decision criteria, which we identify with pessimistic or optimistic attitudes, respectively. Explicitly formalizing these attitudes and how players reason interactively under ignorance, we characterize the behavioral implications related to common belief in these events: while optimism is related to Point Rationalizability, a new algorithm—Wald Rationalizability—captures pessimism. Our characterizations allow us to uncover novel results: (i) regarding optimism, we relate it to wishful thinking à la Yildiz (2007) and prove that dropping the (implicit) "belief-implies-truth" assumption reverses an existence failure described therein; (ii) we shed light on the notion of rationality in ordinal games; (iii) we clarify the conceptual underpinnings behind a discontinuity in Rationalizability hinted at in the analysis of Weinstein (2016).
Paper: Published version or arXiv preprint
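As a minimal illustration of the decision criteria in the abstract above, the sketch below contrasts maxmin (pessimistic) and maxmax (optimistic) choices when a belief is a coarse set of opponent actions. The payoff matrix, actions, and belief set are hypothetical and not taken from the paper.

```python
# Illustrative sketch (hypothetical payoffs and belief set): maxmin ("pessimistic")
# and maxmax ("optimistic") choices when a player's belief is a coarse set of the
# opponent's actions.

payoff = {  # payoff[(own action, opponent action)]
    ("a", "x"): 3, ("a", "y"): 0,
    ("b", "x"): 2, ("b", "y"): 2,
}
own_actions = {"a", "b"}
belief_set = {"x", "y"}  # the player only knows the opponent plays something in this set

maxmin_choice = max(own_actions, key=lambda a: min(payoff[(a, s)] for s in belief_set))
maxmax_choice = max(own_actions, key=lambda a: max(payoff[(a, s)] for s in belief_set))

print("pessimistic (maxmin) choice:", maxmin_choice)  # b: guarantees a payoff of 2
print("optimistic (maxmax) choice:", maxmax_choice)   # a: best case yields 3
```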
Games and Economic Behavior 132 (2022), pp. 592–597
Abstract: In this note, I explore the implications of informational robustness under the assumption of common belief in rationality, that is, predictions for incomplete-information games that are valid across all possible information structures. First, I address this question from a global perspective and then generalize the analysis to allow for localized informational robustness.
Paper: Published version or arXiv preprint
Games and Economic Behavior 119 (2020), pp. 197–215
Abstract: Economic predictions often hinge on two intuitive premises: agents rule out the possibility of others choosing unreasonable strategies ('strategic reasoning'), and prefer strategies that hedge against unexpected behavior ('cautiousness'). These two premises conflict and this undermines the compatibility of usual economic predictions with reasoning-based foundations. This paper proposes a new take on this classical tension by interpreting cautiousness as robustness to ambiguity. We formalize this via a model of incomplete preferences, where (i) each player's strategic uncertainty is represented by a possibly non-singleton set of beliefs and (ii) a rational player chooses a strategy that is a best-reply to every belief in this set. We show that the interplay between these two features precludes the conflict between strategic reasoning and cautiousness and therefore solves the inclusion-exclusion problem raised by Samuelson (1992). Notably, our approach provides a simple foundation for the iterated elimination of weakly dominated strategies.
Paper: Published version or Preprint
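Since the abstract above mentions iterated elimination of weakly dominated strategies, the sketch below runs one common round-by-round version of that procedure on a hypothetical two-player game. It illustrates the solution concept only, not the paper's incomplete-preference model, and the payoff matrix is made up for illustration.

```python
# Illustrative sketch (hypothetical game, one common elimination order): iterated
# elimination of weakly dominated strategies for a two-player game. In each round,
# every strategy that is weakly dominated given the opponent's surviving strategies
# is removed; the order of elimination can matter in general.

# Hypothetical payoffs: payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("T", "L"): (1, 1), ("T", "R"): (1, 0),
    ("B", "L"): (0, 1), ("B", "R"): (2, 0),
}
rows = {"T", "B"}
cols = {"L", "R"}

def weakly_dominated(own, others, payoff_of):
    """Strategies in `own` weakly dominated by some other strategy in `own`."""
    dominated = set()
    for a in own:
        for b in own - {a}:
            weakly_better = all(payoff_of(b, s) >= payoff_of(a, s) for s in others)
            strictly_somewhere = any(payoff_of(b, s) > payoff_of(a, s) for s in others)
            if weakly_better and strictly_somewhere:
                dominated.add(a)
                break
    return dominated

while True:
    dead_rows = weakly_dominated(rows, cols, lambda r, c: payoffs[(r, c)][0])
    dead_cols = weakly_dominated(cols, rows, lambda c, r: payoffs[(r, c)][1])
    if not dead_rows and not dead_cols:
        break
    rows -= dead_rows
    cols -= dead_cols

print("surviving strategies:", sorted(rows), sorted(cols))
```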
Abstract: We study the behavioral implications of Rationality and Common Strong Belief in Rationality (RCSBR) with contextual assumptions allowing players to entertain misaligned beliefs, i.e., players can hold beliefs concerning their opponents’ beliefs where there is no opponent holding those very beliefs. Taking the analysts’ perspective, we distinguish the infinite hierarchies of beliefs actually held by players (“real types”) from those that are a byproduct of players’ hierarchies (“imaginary types”) by introducing the notion of separating type structure. We characterize the behavioral implications of RCSBR for the real types across all separating type structures via a family of subsets of Full Strong Best-Reply Sets of Battigalli & Friedenberg (2012). By allowing misalignment, in dynamic games we can obtain behavioral predictions inconsistent with RCSBR (in the standard framework), contrary to the case of belief-based analyses for static games—a difference due to the dichotomy “non-monotonic vs. monotonic” reasoning.
Working Paper: arXiv
Covid Economics 66 (2021), pp. 1–36
Superseded by Binary Classifiers as Dilations (see above).
Abstract: New binary classification tests are often evaluated relative to another, established test. For example, rapid Antigen tests for the detection of SARS-CoV-2 are assessed relative to more established PCR tests. In this paper, I argue that the new test (i.e. the Antigen test) provides ambiguous information and therefore allows for a phenomenon called dilation, an extreme form of non-informativeness. As an example, I present hypothetical test data satisfying the WHO's minimum quality requirement for rapid Antigen tests which nevertheless lead to dilation. The ambiguity in the information arises from a missing data problem due to the imperfection of the established test: the joint distribution of true infection status and test results is not observed. Using results from copula theory, I construct the (usually non-singleton) set of all possible joint distributions, which allows me to assess the new test's informativeness. This analysis leads to a simple sufficient condition ensuring that a new test is not a dilation. Finally, I illustrate my approach with applications to actual test data. Rapid Antigen tests easily satisfy my sufficient condition and are therefore informative. Other approaches, such as chest CT scans, may be more problematic.
Working Paper: arXiv or Covid Economics
Abstract: Using recent data from voluntary mass testing, I provide credible bounds on the prevalence of SARS-CoV-2 in Austrian counties in early December 2020. When estimating prevalence, a natural missing data problem arises: no test results are generated for non-tested people. In addition, tests are not perfectly predictive of the underlying infection. This is particularly relevant for mass SARS-CoV-2 testing, as such campaigns are conducted with rapid Antigen tests, which are known to be somewhat imprecise. Using insights from the literature on partial identification, I propose a framework addressing both issues at once. I use the framework to study differing selection assumptions for the Austrian data. Whereas weak monotone selection assumptions provide limited identification power, reasonably stronger assumptions reduce the uncertainty about prevalence significantly.
Working Paper: arXiv
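To convey the flavor of the partial-identification logic described above, the sketch below computes worst-case prevalence bounds when only part of the population is tested and the test's sensitivity and specificity are only known to lie in ranges. All numbers are hypothetical and the steps are a simplification, not the paper's exact framework.

```python
# Illustrative sketch (hypothetical numbers, simplified steps; not the paper's exact
# framework): worst-case bounds on prevalence when only part of the population is
# tested and the test's sensitivity/specificity are only known to lie in ranges.

def tested_prevalence_bounds(pos_rate, sens_range, spec_range):
    """Bounds on prevalence among the tested, inverting
    P(test+) = sens * p + (1 - spec) * (1 - p).
    Evaluating at the endpoints of the ranges suffices here because the inverted
    prevalence is monotone in sensitivity and specificity over these ranges."""
    candidates = []
    for sens in sens_range:
        for spec in spec_range:
            denom = sens + spec - 1.0
            if denom <= 0:
                continue  # degenerate (coin-flip) test; not reached for the ranges below
            p = (pos_rate + spec - 1.0) / denom
            candidates.append(min(max(p, 0.0), 1.0))
    return min(candidates), max(candidates)

# Hypothetical inputs: 60% of a county got tested and 1.2% of those tests were positive.
tested_share = 0.60
pos_rate = 0.012
sens_range = (0.80, 0.95)   # assumed sensitivity range of the rapid test
spec_range = (0.97, 0.995)  # assumed specificity range of the rapid test

p_tested_lo, p_tested_hi = tested_prevalence_bounds(pos_rate, sens_range, spec_range)

# Without any selection assumption, prevalence among the untested can be anywhere in [0, 1].
p_lo = tested_share * p_tested_lo
p_hi = tested_share * p_tested_hi + (1.0 - tested_share)

print(f"prevalence among tested in [{p_tested_lo:.3f}, {p_tested_hi:.3f}]")
print(f"overall prevalence in [{p_lo:.3f}, {p_hi:.3f}]")
```

Selection assumptions of the kind studied in the paper would replace the trivial [0, 1] interval for the untested with something tighter, which is where most of the identification power comes from.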