Horizon Project SEAL
Strategic rEasoning for sociALly good mechanisms
The project, funded by the Marie Skłodowska-Curie Actions programme of Horizon Europe, aims to design a logical framework based on Strategy Logic for formally verifying and designing mechanisms for social choice.
Leading researcher: Munyque Mittelmann
Project coordinator: Aniello Murano
Starting date: August 2023
Duration: 24 months
Project webpage at CORDIS
Main achievements
Verification of Bayesian Mechanisms
We have proposed an approach for formally verifying Bayesian mechanisms with Probabilistic Strategy Logic (PSL). We have shown how to use it to encode classic notions from Mechanism Design, including Bayesian-Nash equilibrium and incentive compatibility. Unlike previous proposals, this approach covers a wide range of settings (e.g. randomized, indirect, and Bayesian mechanisms), enabling the automatic verification of a broad class of Bayesian mechanisms and motivating further research on logic-based approaches to Automated Mechanism Design (AMD). Moreover, thanks to the great expressiveness of the specification language, the ex-ante, interim, and ex-post verification of complex solution concepts and properties is fully automated through model checking of logical formulas.
Results were published at SPIRIT@AIxIA 2023 and presented at the workshop AASG@AAMAS 2024.
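As a simplified illustration (not the exact encoding from the paper; utility terms and type distributions are abstracted away), agent i's incentive-compatibility constraint can be sketched as a PSL-style formula comparing the expected utility of any unilateral deviation with that of truth-telling:

\[
\varphi_{\mathrm{IC}}^{\,i} \;\equiv\; \forall \vec{y}\,\Big( (\mathit{Ag}, \vec{y})\; \mathbb{E}[u_i] \;\leq\; (\mathit{Ag}_{-i}, \vec{y}_{-i})\,(i, \tau_i)\; \mathbb{E}[u_i] \Big),
\]

where \(\tau_i\) denotes agent i's truthful strategy, \((\mathit{Ag}, \vec{y})\) binds each agent to the corresponding strategy variable, and \(\mathbb{E}[u_i]\) abbreviates an expected-utility term built from PSL probability operators over outcome propositions; the actual encoding further distinguishes the ex-ante, interim, and ex-post variants.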
Incentive Design
We introduced Incentive Design, a new class of problems for equilibrium verification in multi-agent systems. In our model, agents attempt to maximize their utility functions, which are expressed as formulae in temporal logic with quantitative semantics. We assume agents are rational in the sense that they adopt strategies consistent with game-theoretic solution concepts such as Nash equilibrium. For each solution concept we consider, we analyze two problems: verifying whether a given incentive scheme achieves a societal objective (be it social welfare or any other aggregate measure of collective well-being), and finding an incentive scheme that does so. We study both static and dynamic incentive schemes, showing that the latter are more powerful than the former.
Results were published at KR 2024.
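Schematically (simplified with respect to the formal definitions in the paper, and writing \(G[\kappa]\) for the game whose utilities are modified by an incentive scheme \(\kappa\)), the two problems for the Nash-equilibrium case read:

\[
\textsc{Verify}(G, \kappa, \Phi):\;\; \forall \vec{\sigma} \in \mathrm{NE}(G[\kappa]).\;\; G[\kappa], \vec{\sigma} \models \Phi
\qquad
\textsc{Design}(G, \Phi):\;\; \exists \kappa.\;\; \textsc{Verify}(G, \kappa, \Phi),
\]

where \(\Phi\) is the societal objective and \(\mathrm{NE}(G[\kappa])\) is the set of Nash equilibria of the modified game; analogous formulations apply to the other solution concepts we consider.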
Probabilistic Verification under Imperfect Information
We have investigated the strategy logics PATL and PATL* (fragments of PSL) under imperfect information. Since the model-checking problem is undecidable in general in this setting, we restrict our attention to agents with memoryless (positional) strategies. We present novel decidability and complexity results for the case in which the model transitions are stochastic and agents play uniform strategies. As the main result, we show that, in stochastic multi-agent systems under imperfect information, model checking PATL is in EXPTIME when agents play probabilistic strategies. We also show that model checking PATL* is PSPACE-complete when the proponent coalition is restricted to deterministic strategies; under the same assumption, model checking PATL lies at the second level of the polynomial hierarchy. We illustrate the usefulness of this setting with meaningful examples based on online approval-based elections, probabilistic social choice, and security games.
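For instance (a purely illustrative formula; the atomic proposition \(\mathit{elected}_c\) is hypothetical), in the approval-based election example one can state in PATL that a coalition \(V\) of voters has uniform memoryless strategies ensuring that candidate \(c\) is eventually elected with probability at least 0.9, whatever the remaining agents do:

\[
\langle\!\langle V \rangle\!\rangle^{\geq 0.9}\, \mathbf{F}\, \mathit{elected}_c .
\]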
Probabilistic Natural Strategies
We have proposed Behavioral Natural Strategies, a probabilistic extension of Natural Strategies, together with variants of the probabilistic temporal logics PATL and PATL* that handle such strategies (the resulting logics are called NatPATL and NatPATL*, respectively). As the main result, we show that, in stochastic multi-agent systems, NatPATL model checking is NP-complete when the active coalition is restricted to deterministic strategies. We also give a 2NEXPTIME complexity result for NatPATL* under the same restriction. In the unrestricted case, we give an EXPSPACE complexity result for NatPATL and a 3EXPSPACE complexity result for NatPATL*. We provide examples based on secure voting.
Results were published at AAAI 2024.
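As a toy illustration (all names are hypothetical and the notation is simplified with respect to the paper), a behavioral natural strategy is an ordered list of guarded rules mapping conditions on the agent's observations to probability distributions over actions; for a voter in a secure-voting scenario it might look like:

\[
s_v \;=\; \big(\, \mathit{ballot\_open} \mapsto \{\mathsf{vote}_a \mapsto 0.5,\ \mathsf{vote}_b \mapsto 0.5\};\;\; \top \mapsto \{\mathsf{wait} \mapsto 1\} \,\big),
\]

where the first rule whose guard holds is applied, and the complexity of the strategy, measured on the size of its guards, is the resource bounded by the natural-strategy modalities.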
Quantitative Module Checking
We have investigated the problem of quantitative module checking in Multi-Agent Systems, which formalizes the verification of (possibly multi-agent) systems that must adapt their behavior to the input received from the environment, viewed here as an authority. The quantitative setting enables the verification of different levels of satisfaction. We consider specifications given in quantitative Alternating-time Temporal Logics and investigate complexity and expressivity results. We illustrate the approach with an example based on a weighted voting game.
Results were published at AAMAS 2024.
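Informally (and modulo the precise definitions in the paper), where classical module checking asks whether a specification holds on every computation tree the environment can induce, the quantitative variant asks for the worst-case satisfaction value over these trees:

\[
\mathrm{val}_r(M, \varphi) \;=\; \inf_{T \in \mathrm{exec}(M)} \mathrm{val}(T, \varphi),
\]

where \(\mathrm{exec}(M)\) ranges over the trees obtained from the module \(M\) under the possible behaviors of the environment (the authority), and \(\mathrm{val}(T, \varphi)\) is the satisfaction value of the quantitative specification \(\varphi\) on \(T\).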
Verification of Games with Imperfect Information
To enable reasoning about the strategic behavior of agents with partial observability in general game playing, we have presented a formal translation from games with imperfect information specified in GDL-II to models of Epistemic Strategy Logic (SLK). We prove the correctness of this translation and show how crucial properties of general games, including playability and the existence of Nash equilibria, can be expressed as SLK formulas. Finally, we demonstrate the application of an existing model-checking system to verify properties of GDL-II games.
Results were published at KR 2024.
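For example (a simplified Boolean sketch; the proposition \(\mathit{wins}_i\) is hypothetical, and the paper handles GDL-II goal values and knowledge explicitly), the existence of a Nash equilibrium can be expressed in SLK along the following standard lines:

\[
\varphi_{\mathrm{NE}} \;\equiv\; \exists x_1 \ldots \exists x_n\, (a_1, x_1) \cdots (a_n, x_n) \bigwedge_{i=1}^{n} \Big( \big( \exists y\, (a_i, y)\, \mathbf{F}\, \mathit{wins}_i \big) \rightarrow \mathbf{F}\, \mathit{wins}_i \Big),
\]

stating that there is a strategy profile under which any player who could win by unilaterally deviating already wins under the profile.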
Publications
★ M. Mittelmann, B. Maubert, A. Murano, L. Perrussel. "Formal Verification and Synthesis of Mechanisms for Social Choice". Artificial Intelligence, 339:104272, 2025.
★ Y. Li, E. Lorini, M. Mittelmann. "Rational Capability in Concurrent Games". To appear at AAMAS 2025. ArXiv.
★ R. Galimullin, M. Gladyshev, M. Mittelmann, N. Motamed. "Changing the Rules of the Game: Reasoning about Dynamic Phenomena in Multi-Agent Systems". To appear at AAMAS 2025. ArXiv.
R. Berthon, J.-P. Katoen, M. Mittelmann, A. Murano. "Robust Strategies for Stochastic Multi-Agent Systems" (extended abstract). To appear at AAMAS 2025.
★ D. Hyland, M. Mittelmann, A. Murano, G. Perelli, M. Wooldridge. "Incentive Design for Rational Agents". KR 2024.
★ Y. He, M. Mittelmann, A. Murano, A. Saffidine, M. Thielscher. "Verification of General Games with Imperfect Information using Strategy Logic". KR 2024.
★ W. Jamroga, M. Mittelmann, A. Murano, G. Perelli. "Playing Quantitative Games Against an Authority: On the Module Checking Problem". AAMAS 2024.
★ F. Belardinelli, W. Jamroga, M. Mittelmann, A. Murano. "Verification of Stochastic Multi-Agent Systems with Forgetful Strategies". AAMAS 2024.
★ R. Berthon, J.-P. Katoen, M. Mittelmann, A. Murano. "Natural Strategic Ability in Stochastic Multi-Agent Systems". AAAI 2024. ArXiv.
F. Belardinelli, W. Jamroga, M. Mittelmann, A. Murano. "Strategic Abilities of Forgetful Agents in Stochastic Environments" (short paper). KR 2023. ArXiv.
M. Mittelmann, B. Maubert, A. Murano, L. Perrussel. "Verification of Bayesian Mechanisms with Strategy Logic" (extended abstract). SPIRIT@AIxIA 2023.
Other activities
Organization of workshops:
Organization of the workshop SPIRIT @ AIxIA 24.
Organization of the workshop LAMAS&SR 24.
Organization of the PhD day at EUMAS 23.
Outreach activities:
Researchers' Night (Toulouse, 2024): workshop "Enchères et stratégies : l'IA à votre service" ("Auctions and strategies: AI at your service").
Tutorials and short courses:
Tutorial "Strategic Reasoning and Strategy Representation" @ AAMAS 2025.
Course "Formal Aspects of Strategic Reasoning and Game Playing " @ ESSAI & ACAI 24.
Tutorial "Formal Aspects of Strategic Reasoning in Multi-Agent Systems" @ KR 24.
Tutorial "Strategic Reasoning in Automated Mechanism Design" @ KR 23.
Workshop presentations:
Presentation "Incentive Design for Rational Agents" accepted at the workshop GAIW@AAMAS 25.
Presentation "Formal Verification for Fostering the Design of Mechanisms for Social Good" at the workshop AASG@AAMAS 24.
Presentation "Playing Quantitative Games Against an Authority: On the Module Checking Problem" at the workshop AAPEI@ECAI 24.
Project Poster

Poster presented at the Researchers' Night (Toulouse, 2024).