My research uses game-theoretic and information-economic models to study strategic interaction under uncertainty. Current projects focus on innovation races, patent strategy, and the competitive dynamics of AI development.
Journal of the European Economic Association, 2025, 23(4), 1309–1349
In a model inspired by neuroscience, we study choice between lotteries as a process of encoding and decoding noisy perceptual signals. The behavioral implications of this process depend on the decision-maker's understanding of risk. When the aggregation of perceptual signals is coarse, encoding and decoding generate behavioral risk attitudes even as perceptual noise vanishes. We show that the optimal encoding of lottery rewards is S-shaped and that low-probability events are optimally oversampled. Taken together, these mechanisms can explain adaptive risk attitudes and probability weighting, as in prospect theory. The model further predicts that risk attitudes are shaped by the anticipation of risk, time pressure, experience, salience, and availability heuristics.
Evidence suggests that consumers do not optimize perfectly, contrary to a critical assumption of classical consumer theory. We propose a model in which consumer types can vary in both their preferences and their choice behavior. Given data on demand and the distribution of prices, we identify the set of possible values of consumer surplus under minimal rationality conditions: every type of consumer must be no worse off than if they had either always bought the good or never bought it. We develop a procedure that narrows the identified set of surplus values when richer data are available and provide bounds on counterfactual demands.
This article investigates the role of private information in patent races. Although prior work typically assumes that firms observe their rivals' progress, in practice R&D is often conducted in secrecy. We analyze how race dynamics change when progress is private and examine whether voluntary disclosure can be strategically beneficial even when it has no direct payoff consequences. We show that a firm may disclose a breakthrough to discourage a rival's R&D effort, but only when the rival has not yet disclosed and R&D efficiency is sufficiently low. The unique equilibrium takes one of three forms: no revelation, instant revelation, or mixed revelation.
This study examines the trade-off between patenting and secrecy in innovation races, using a model in which two firms simultaneously compete to develop two products that can be substitutes or complements. Patenting secures a claim on the product but discloses information to rivals, while secrecy may delay immediate profits in exchange for future technology leadership. We find that firms have stronger incentives to patent when they are impatient and when technological spillovers are insignificant. When firms are patient and technological spillovers are moderate, they are more likely to patent products that are perfect complements than products that are perfect substitutes. These findings are in line with the empirical evidence of Cohen, Nelson, and Walsh (2000), who argue that firms are more likely to keep innovations secret in "simple" industries, where goods have many potential substitutes, than in "complex" industries, where a new product involves many complementary components.
Frontier AI Lab Competition and Model Release Decisions
This project studies when a firm that develops a state-of-the-art AI model might choose to use it only internally rather than release it to the public (e.g., via an API). As a product, AI inference has a distinctive feature: it serves as an input into further R&D (for instance, by augmenting human labor in programming and other tasks), and the firm selling access benefits from observing how users employ the model. I study the competitive conditions under which a frontier AI lab would keep its most powerful model behind closed doors. The question has important policy implications: if leading labs withhold their most capable models, policymakers and the public may underestimate the true state of AI capabilities and the associated risks, leaving critical infrastructure unprepared for threats such as an accidental leak or theft of a powerful model's parameters.