Teaching

Where possible I design my courses from the ground up, drawing on The Economy by the CORE Project for undergraduate economics and on Ariel Rubinstein's open-access materials for game theory. I have taught at several universities and welcome visiting teaching opportunities.

Selected Courses

Game Theory
B.A. — SAS, University of Tyumen, 2020
Based on Osborne's An Introduction to Game Theory.
Applied Economics
B.A. — SAS, University of Tyumen, 2022
Built on CORE’s The Economy, covering Units 1–12 and 19–21.
Intermediate Microeconomics
B.A. — American University of Armenia, 2025
Co-taught with Caio Lorecchio (I taught Part I). Varian for theory, CORE for applications.
Advanced Industrial Organization
M.A. — U. Duisburg–Essen, 2024
Based on Belleflamme & Peitz's Industrial Organization: Markets and Strategies. Covers oligopoly models and product differentiation.
Preparatory Mathematics for PhD Students
Ph.D. — RGS Econ / U. Duisburg–Essen, 2024
Analysis and linear algebra for incoming doctoral students.

AI Safety Education

Mentored Projects (SAIGE)

Convergence or Divergence? The Future of Frontier AI Capabilities and Implications for Catastrophic Risk
Before ChatGPT, many expected Google to dominate AI through its unmatched data assets. Instead, OpenAI leapfrogged the incumbents, only for competitors such as Anthropic, Google, and Meta to close the gap within months. Open-source models now trail the frontier by roughly six months to a year. This pattern raises a fundamental question for AI safety: will the capabilities of frontier AI models continue to converge, or will they diverge, and what does each scenario imply for catastrophic risk?

Existing work has either documented capability convergence empirically (e.g., Stanford AI Index, AISI Frontier Trends Report, Epoch AI) or modeled AI races game-theoretically with a focus on the safety-versus-speed tradeoff (e.g., Han et al., 2022; Armstrong et al., 2016). Industrial organization analyses of AI market structure (Vipra & Korinek, 2024; Gans, 2024) largely set aside the safety implications of their findings. This project aims to bridge that gap by asking: given the strategic interaction between frontier labs, which market structure is more likely to emerge, and what does this mean for governance?

The project will examine three key drivers of convergence and divergence (training data, algorithmic advances, and compute costs), with particular attention to the role of data. Several forces may sustain convergence: shared access to public training corpora, rapid diffusion of algorithmic innovations, and falling costs of replicating frontier performance. Other forces may drive divergence: escalating training costs, proprietary synthetic data pipelines, and potential first-mover advantages from self-improving AI systems.

The project will combine qualitative analysis of the AI industry landscape with game-theoretic modeling, ranging from simple strategic-form games (a minimal example is sketched below) to innovation race models, with the formal ambition matched to mentees' skills. The intended outputs are a research blog post accessible to the AI safety community and an accompanying formal analysis. Strong mentee contributions during the program could lead to coauthorship on a subsequent economics research paper.
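To fix ideas, here is a minimal sketch of the simplest object the modeling could start from: a two-lab strategic-form game with a brute-force search for pure-strategy Nash equilibria. The payoff numbers and the action labels are illustrative placeholders of my own choosing, not estimates from the project.

```python
# Hypothetical illustration: two frontier labs each choose whether to push the
# frontier (costly training runs) or fast-follow (free-ride on diffusion).
# Payoff numbers are placeholders, not calibrated estimates.

import itertools

ACTIONS = ["push_frontier", "fast_follow"]

# PAYOFFS[(row_action, col_action)] = (row_payoff, col_payoff)
PAYOFFS = {
    ("push_frontier", "push_frontier"): (1, 1),  # racing dissipates rents
    ("push_frontier", "fast_follow"):   (3, 2),  # leader profits, follower free-rides
    ("fast_follow",   "push_frontier"): (2, 3),
    ("fast_follow",   "fast_follow"):   (0, 0),  # the frontier stalls
}

def pure_nash_equilibria(payoffs, actions):
    """Return profiles where neither lab gains from a unilateral deviation."""
    equilibria = []
    for a_row, a_col in itertools.product(actions, repeat=2):
        u_row, u_col = payoffs[(a_row, a_col)]
        row_ok = all(payoffs[(d, a_col)][0] <= u_row for d in actions)
        col_ok = all(payoffs[(a_row, d)][1] <= u_col for d in actions)
        if row_ok and col_ok:
            equilibria.append((a_row, a_col))
    return equilibria

print(pure_nash_equilibria(PAYOFFS, ACTIONS))
# [('push_frontier', 'fast_follow'), ('fast_follow', 'push_frontier')]
```

Under these placeholder payoffs the game is an anti-coordination game: both pure equilibria are asymmetric, with one lab at the frontier and the other close behind. Which such equilibrium-selection stories survive in richer innovation-race dynamics is the kind of question the project would take up.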

Economic Decision-Making Under Deep Uncertainty About AI's Trajectory
Co-mentored with Wim Howson Creutzberg
The future of AI could unfold in very different ways. In one scenario, AI automates most cognitive work and the economy grows explosively. In another, AI brings steady but modest productivity gains, much like earlier waves of IT adoption. These futures have radically different implications for how much people should save, what they should invest in, and which skills will retain their value. Yet the question of how to make such decisions when you genuinely do not know which future is coming has received almost no formal attention.

On the modeling side, Trammell & Korinek (2023) lay out a useful taxonomy of transformative AI (TAI) growth scenarios, and other important contributions (Aghion, Jones & Jones, 2018; Acemoglu, 2024; Benzell & Ye, 2024) work out the economic consequences of specific AI futures. On the empirical side, Andrews & Farboodi (2025) study what financial markets currently believe about TAI. What is missing is the normative question: given genuine uncertainty over which scenario will materialize, how should a forward-looking decision-maker allocate resources? That is the gap this project aims to fill.

Mentees will build a tractable model in which an investor faces uncertainty over whether AI leads to moderate or explosive growth, and chooses how much to save and how to split wealth across assets (broad equity, AI-intensive capital, human-capital-linked claims, and a safe asset) whose payoffs depend on which future arrives. The project will study how optimal choices shift with the perceived likelihood of explosive TAI, with risk aversion, and with ambiguity aversion (discomfort with poorly defined probabilities). The approach combines analytical work on a stylized model with numerical illustrations; a minimal numerical sketch appears below. The intended outputs are a research blog post and a technical working paper, with potential for coauthorship on a subsequent publication.
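To make the comparative statics concrete, here is a deliberately stylized sketch under assumptions of my own choosing, not the project's: one period, two deterministic growth scenarios, a single AI-intensive asset versus a safe asset, CRRA utility, and a smooth-ambiguity transform in the spirit of Klibanoff, Marinacci & Mukerji (2005). All numbers are placeholders.

```python
# Stylized, hypothetical illustration: one period, two deterministic growth
# scenarios, CRRA utility, and a smooth-ambiguity objective in the spirit of
# Klibanoff, Marinacci & Mukerji (2005). All numbers are placeholders.

import numpy as np

R_SAFE = 1.02                                 # gross return on the safe asset
R_AI = {"moderate": 0.95, "explosive": 1.60}  # AI-intensive asset, by scenario
P_EXPLOSIVE = 0.2                             # perceived probability of explosive TAI

def crra(wealth, gamma=3.0):
    """CRRA utility over terminal wealth."""
    if gamma == 1.0:
        return np.log(wealth)
    return (wealth ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

def value(share, ambiguity=0.0):
    """Smooth-ambiguity value of putting `share` of wealth in the AI asset.

    ambiguity = 0 recovers plain expected utility; larger values apply an
    increasingly concave transform to scenario-level utility, penalizing
    exposure to the poorly pinned-down scenario probabilities.
    """
    total = 0.0
    for scenario, prob in [("moderate", 1 - P_EXPLOSIVE), ("explosive", P_EXPLOSIVE)]:
        wealth = share * R_AI[scenario] + (1 - share) * R_SAFE
        u = crra(wealth)
        total += prob * (u if ambiguity == 0 else -np.exp(-ambiguity * u))
    return total

# Grid search over the AI share for increasing ambiguity aversion.
shares = np.linspace(0.0, 1.0, 501)
for amb in (0.0, 2.0, 8.0):
    best = shares[np.argmax([value(s, amb) for s in shares])]
    print(f"ambiguity aversion {amb}: optimal AI share about {best:.2f}")
```

With these placeholder numbers the optimal AI share falls as ambiguity aversion rises, since the concave transform penalizes payoffs that depend on the poorly understood scenario probabilities. Characterizing when and why such comparative statics hold is what the project's analytical work would do.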