
TULIPS – The Utrecht Logic in Progress Series


Here we archive past talks (abstracts, handouts, links, …).


Classicality and the ST approach to paradoxes

This talk provides an overview of the ST approach to paradoxes detailed and defended in CERvR (2012) and CERvR (2013). The ST approach has a remarkable feature: it gives you (in a certain sense) classical logic for a classical vocabulary, but it doesn’t lead you to absurdities in the presence of tolerance principles or a truth-predicate. It follows that ST cannot be fully classical. Now explaining exactly what ‘full classicality’ means is much trickier than it might appear at first glance (cf. Barrio, Pailos and Szmuc (2020) and related papers). The non-classicality of ST can be investigated by looking at the validity of metainferences. Alternatively, it can be investigated by looking at inferences, but in relation to properties other than validity. There is a correspondence between these two approaches (this is stated, or at least indicated, in Cobreros, La Rosa and Tranchini (2020)).
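To make the metainference point concrete, the failure of local metainferential Cut in propositional ST can be checked by brute force over Strong Kleene valuations. The following sketch is an illustrative encoding, not taken from the papers; it uses 1 for true, 0.5 for neither and 0 for false:

```python
from itertools import product

# ST-validity, locally: if every premise is strictly true (value 1),
# the conclusion must be at least tolerantly true (value >= 0.5).
def st_satisfies(premise_values, conclusion_value):
    return not all(v == 1 for v in premise_values) or conclusion_value >= 0.5

# Metainferential Cut: from p |= q and q |= r, infer p |= r.
# Search for a valuation that satisfies both premise inferences
# but falsifies the conclusion inference.
counterexamples = [
    (p, q, r)
    for p, q, r in product([0, 0.5, 1], repeat=3)
    if st_satisfies([p], q)
    and st_satisfies([q], r)
    and not st_satisfies([p], r)
]
print(counterexamples)  # [(1, 0.5, 0)]
```

The unique counterexample gives the cut formula the intermediate value: each premise inference is then locally satisfied (the second one vacuously), while the conclusion inference fails.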


Barrio, E. A., Pailos, F. and Szmuc, D. (2020), A hierarchy of classical and paraconsistent logics, JPL 49(1): 93-120.
CERvR [Cobreros, P., Egré, P., Ripley, D. and van Rooij, R.] (2012), Tolerant, Classical, Strict, JPL 41(2): 347-385.
CERvR [Cobreros, P., Egré, P., Ripley, D. and van Rooij, R.] (2013), Reaching transparent truth, Mind 122: 841-866.
Cobreros, P., La Rosa, E. and Tranchini, L. (2020), (I Can’t Get No) Antisatisfaction, Synthese. https://rdcu.be/ciJmi


Time: 16.00 – 17.15

Location: Online


Complete Proof Systems for Parikh’s Game Logic

Game logic was introduced by Rohit Parikh in the 1980s as a generalisation of propositional dynamic logic (PDL) for reasoning about outcomes that players can force in determined 2-player games. Semantically, the generalisation from programs to games is mirrored by moving from Kripke models to monotone neighbourhood models. Parikh proposed a natural PDL-style Hilbert system which was easily proved to be sound, but its completeness remained an open problem.

In our LICS 2019 paper, we introduce a cut-free sequent calculus for game logic, and two cut-free sequent calculi that manipulate annotated formulas, one for game logic and one for the monotone μ-calculus, the variant of the polymodal μ-calculus where the semantics is given by monotone neighbourhood models instead of Kripke structures. We show these systems are sound and complete, and that completeness of Parikh’s axiomatization follows. Our approach builds on ideas and results by Afshari & Leigh (LICS 2017) in that we obtain completeness via a sequence of proof transformations between the systems. A crucial ingredient is a validity-preserving translation from game logic to the monotone μ-calculus.

Joint work with Sebastian Enqvist, Johannes Marti, Clemens Kupke and Yde Venema.

Time: 16.00 – 17.15

Location: Online


Linear bi-intuitionistic calculus

In this talk I will give an overview of the logic bi-LC, obtained by adding the linearity (Gödel-Dummett) axiom to the bi-intuitionistic calculus bi-IPC. I will discuss some important properties of this logic and highlight the many differences with the Gödel-Dummett intermediate logic LC. I will also mention some open problems about bi-LC and related systems.

Time: 16.00 – 17.15

Location: Online


Canonical Formulas for the Lax Logic

In this talk, we will investigate the method of canonical formulas for propositional lax logic. We construct “lax canonical formulas” using the local finiteness of the variety of nuclear implicative semilattices. These formulas provide a uniform axiomatization method for lax logics. We then extend generalized Esakia duality to account for nuclear implicative semilattice homomorphisms between nuclear Heyting algebras, and explore the feasibility of lax analogues of subframe logics. This leads to the introduction of steady logics — logics characterized by generalized lax canonical formulas that only partially encode the modal structure of nuclear Heyting algebras. Finally, we use these generalized lax canonical formulas to obtain a number of preservation theorems; for example, we will prove that for each Kripke-complete intermediate logic, the least lax logic containing it is also Kripke-complete.

Time: 16.00 – 17.15

Location: Online


Recapture Results and Classical Logic

The discussion of whether non-classical approaches to the semantic paradoxes can recover classical reasoning for non-problematic statements (what is sometimes referred to in the literature as ‘classical recapture’) is of the utmost importance for non-classical logicians. To show that classical logic can be recovered in certain contexts—and thus bolster the case for the suitability of their favored logic—these theorists sometimes establish so-called ‘recapture results’.

However, in a recent paper Jack Woods has argued that there is a problem with this strategy. In proving recapture results, non-classical theorists typically employ classical principles that they themselves reject. In view of this, one can conclude that these theorists are not entitled to these results, given that, presumably, the results are not available unless classical logic is used. What non-classical logicians need to show, according to Woods, is that classical recapture is possible by the standards of their own logic. In this talk I will try to resist Woods’ objection. In particular, I will suggest that there is an ambiguity in his line of argument, one that can be made visible only if one focuses on the specific instances of the classical principles that are needed to obtain recapture results. Once the ambiguity is removed, one can see that there is no problem with the recapture strategy.

Time: 16.00 – 17.15

Location: Online


Disjunction properties in modal proof complexity

When studying the proof complexity of classical logic, one of the basic tools is feasible interpolation, which gives a way to transform circuit lower bounds into length-of-proof lower bounds. A similar role in the proof complexity of modal logics is played by the feasible disjunction property, which is at the core of known lower bounds on modal Frege and extended Frege (EF) proof systems.

In this talk, we will discuss the feasibility of variants of the disjunction property (and of more general extension rules) in some modal logics, and their application to length-of-proof separations of EF from SF (substitution Frege) proof systems for modal logics of unbounded branching. Finally, we will have a look at the complexity of the disjunction property in basic logics of bounded branching.

Time: 16.00 – 17.15

Location: Online


Explanatory proofs (or grounding proofs): philosophical framework, core ideas and results

In this talk we explore the concept of explanatory proofs, namely proofs that not only show that a theorem is true, but also provide reasons why it is true. After a philosophical introduction to this kind of proof, we provide a characterization and a formalization. We conclude the talk by offering a plethora of results and directions for future research to which explanatory proofs lead.

Time: 16.00 – 17.15

Location: Online


  • January 12, Raheleh Jalali (Utrecht)

On Hard Theorems

Given a proof system, how can we specify the “hardness” of its theorems? One way to tackle this problem is to take the lengths of proofs as the corresponding hardness measure. Following this route, we call a theorem hard when even its shortest proof in the system is “long” in a certain formal sense. Finding hard theorems in proof systems for classical logic has been an open problem for a long time. However, in recent years, as significant progress, many super-intuitionistic and modal logics have been shown to have hard theorems.

In this talk, we will extend the aforementioned result to also cover a variety of weaker logics in the substructural realm. We show that there are theorems in the usual calculi for substructural logics that are hard even for intuitionistic systems. More precisely, we provide a sequence of substructural tautologies {A_n}_{n=1}^{\infty} such that even their shortest intuitionistic proofs are exponentially long in the lengths of the A_n.

Time: 16.00 – 17.15

Location: Online.


  • December 22, Edi Pavlovic (Helsinki) [joint work with Norbert Gratzl]

A More Unified Approach to Free Logics

Free logics are a family of first-order logics that came about as a result of examining the existence assumptions of classical logic. What those assumptions are varies, but the central ones are that (i) the domain of interpretation is not empty, (ii) every name denotes exactly one object in the domain, and (iii) the quantifiers have existential import. Free logics usually reject the claim in (ii) that names need to denote. Of the systems considered in this paper, positive free logic concedes that some atomic formulas containing non-denoting names (including self-identity) are true, while negative free logic rejects even this claim.

These logics have complex and varied axiomatizations and semantics, and the goal of the present work is to offer an orderly examination of the various systems and their mutual relations. This is done by first offering a formalization, using sequent calculi which possess all the desired structural properties of a good proof system, including admissibility of contraction and cut, while streamlining free logics in a way no other approach has. We then present a simple and unified system of generalized semantics, which allows for a straightforward demonstration of the meta-theoretical properties, while also offering insights into the relationship between different logics (free and classical). Finally, we extend the system with modalities by using a labeled sequent calculus, and here we are again able to map out the different approaches and their mutual relations using the same framework.

Time: 16.00 – 17.15

Location: Online.


  • December 8, Bogdan Dicher (Lisbon) [joint work with Francesco Paoli]

Metainferential harmony: Harmony without Identity and Cut

Proof-theoretic semantics aims to explain the meaning of the logical constants in terms of the inference rules that govern their behaviour in proofs. One of its central concepts is that of harmony: roughly, the match between the rules stipulating the conditions for introducing a logical constant in a proof and those for eliminating it from a proof. There are many accounts of harmony, most of them developed against a background that assumes the rules of Identity and Cut, taken to codify the reflexivity and transitivity of logical consequence. We have argued elsewhere that the proof-theoretic project should be approached relative to a logic, i.e., relative to a consequence relation, and that the consequence relation relevant for proof-theoretic semantics is the one given by the sequent-to-sequent derivability relation in Gentzen systems. This relation is always reflexive, monotonic, and transitive, but it being so does not depend on the availability of sequent rules codifying these properties. In this talk we investigate the prospects for an account of harmony adequate for logics that lack the structural rules of Identity and Cut.

Time: 16.00 – 17.15

Location: Online.


Proof Theory for Modal Tense Logic and Symmetric Modal Logic via Linear Nested Sequents

Modal logics offering the possibility of travelling backwards along the accessibility relation, such as modal tense logic Kt or symmetric modal logic KB, have often proved difficult to capture satisfactorily from a proof-theoretic point of view. In particular, while ordinary sequent systems exist, e.g., for KB, they rely on an analytic cut rule for completeness. Both tense logic Kt and symmetric modal logic KB have been given cut-free calculi in much more expressive frameworks, arguably the simplest being that of nested sequent calculi. However, the machinery necessary for nested sequents, as well as the immediate connection to the semantics, raises the question whether these logics can also be captured in a simpler framework.

As a possible contender, recently there has been a resurgence of work in the linear nested sequent framework, an intermediate framework between ordinary and full nested sequents. What has been lacking so far, however, are convincing examples of the added expressivity of this framework over ordinary sequents that do not depend on the linear structure of the models for the logics in question.

In this talk I will present joint work with Rajeev Gore showing that modal tense logic Kt and symmetric modal logic KB can indeed be captured in the linear nested sequent framework, thus providing positive answers to both of the above questions.

Time: 16.00 – 17.15

Location: Online


  • October 13, Lorenzo Rossi (Munich) [joint work with Michael Glanzberg]

Truth and Quantification

Theories of self-applicable truth have been motivated in two main ways. First, if truth-conditions provide the meaning of (several kinds of) natural language expressions, then self-applicable truth is instrumental to develop the semantics of natural languages. Second, a self-applicable truth predicate is required to express generalizations that would not be expressible in natural languages without it. In order to fulfill their semantic and expressive roles, we argue, the truth predicate has to be studied in its interaction with linguistic constructs that are actually found in natural languages and extend beyond first-order logic — modals, indicative conditionals, arbitrary quantifiers, and more. Here, we focus on truth and quantification. We develop a Kripkean theory of self-applicable truth for the language of Generalized Quantifier Theory. More precisely, we show how to interpret a self-applicable truth predicate for the full class of type ⟨1, 1⟩ (and type ⟨1⟩) quantifiers to be found in natural languages. As a result, we can model sentences such as ‘Most of what Jane said is true’ or ‘Infinitely many theorems of T are untrue’, and several others, thus expanding the scope of existing approaches to truth, both as a semantic and as an expressive device.

Time: 16.00 – 17.15

Location: Online


A Note on Synonymy in Proof-theoretic Semantics

The topic of identity of proofs was put on the agenda of general (or structural) proof theory at an early stage. The relevant question is: When are the differences between two distinct proofs (understood as linguistic entities, proof figures) of one and the same formula so inessential that it is justified to identify the two proofs? The paper addresses another question: When are the differences between two distinct formulas so inessential that these formulas admit of identical proofs? The question appears to be especially natural if the idea of working with more than one kind of derivations is taken seriously. If a distinction is drawn between proofs and disproofs (or refutations) as primitive entities, it is quite conceivable that a proof of one formula amounts to a disproof of another formula, and vice versa. The paper develops this idea.

Time: 16.00 – 17.15

Location: Online


Validity in Context. Logical Pluralism and 2D-Semantics

Logical pluralism – the view that there is more than one correct logic – is typically connected to claims about the semantics of the logical vocabulary. A number of authors have claimed that the alleged plurality of correct logical systems can be traced back to the context-sensitivity of the expression “valid” in natural language. In the talk, I explore in what ways the most prominent account of context-sensitivity – Kaplan’s logic LD – can be employed to support this claim. I develop three possible readings of the Kaplanian character of “valid” and assess their weaknesses and strengths.

Time: 16.00 – 17.15

Location: Online.


  • June 30, Giuliano Rosella (Torino) [joint work with Vita Saitta & Matteo Plebani]

Truthmakers, Incompatibility and Modality

We present an extension of Fine’s truthmaker semantics, obtained by adding an incompatibility relation to Fine’s state space. The resulting framework allows us to improve the semantic clauses for negation and define important notions such as those of (im)possible state and possible world in a new and natural way. We prove some interesting results concerning the interaction between incompatibility, negation and possible worlds, e.g. that possible worlds behave classically with respect to negation; we also give a semantics for the Logic of Exact Entailment, FDE and S5. At the end, we explore various attempts to use the resources of the framework to provide truthmaking clauses for modal sentences.

Time: 16.00 – 17.00

Location: Online


Reasoning with words, types and tensors

Computational linguists attempt to make a computer reason about natural language. For this, words and sentences need to be encoded in a suitable numerical form. This is where vector-based semantics comes in: by embedding words in a space in such a way that words similar in meaning are measurably similar, we allow for efficient representations that encode (some sort of) relationships between words. This idea and its use in deep neural networks have led to successful applications in the domain of natural language processing; anno 2020 we have access to several sentence encoders (e.g. Skip-Thoughts of Kiros 2015, InferSent of Conneau et al. 2017, Universal Sentence Encoder of Cer et al. 2018) and contextualised word embeddings (ELMo of Peters et al. 2018, BERT of Devlin et al. 2019). These neural vector approaches are able to map arbitrary text to some vectorial encoding, using state-of-the-art deep learning techniques.
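As a toy illustration of the embedding idea (the three-dimensional vectors below are made up; real embeddings have hundreds of dimensions and are learned from corpora), “measurably similar” is typically cashed out as cosine similarity:

```python
import math

def cosine(u, v):
    # Cosine similarity: the cosine of the angle between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(a * a for a in w))
    return dot / (norm(u) * norm(v))

# Hypothetical toy embeddings: 'cat' and 'dog' point in similar directions.
emb = {
    "cat":   [0.9, 0.8, 0.1],
    "dog":   [0.8, 0.9, 0.2],
    "chase": [0.1, 0.2, 0.9],
}
print(cosine(emb["cat"], emb["dog"]) > cosine(emb["cat"], emb["chase"]))  # True
```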

In contrast to purely neural approaches are formal compositional approaches to semantics, which take the meaning of a sentence to be built up from the meanings of individual words and the way those are combined (Montague 1970). In a type-driven approach, words are assigned types that reflect their grammatical role in a sentence or text, and composition of words is driven by a logical system that assigns a function-argument structure to a sequence of words (Moortgat 2010). Such an approach can be neatly integrated with vector-based semantics of meaning, where individual word meaning is given by the way words are distributed in a large text corpus (Coecke et al. 2010, 2013, Moortgat & Wijnholds 2017).

Earlier talks in this series (Moortgat, Greco) discussed theoretical aspects of compositional distributional models; in this talk I will take the point of view of representation learning: how to learn representations for words and sentences in a way that aligns with some of the key ideas of typelogical grammar?

Compositional tensor-based distributional models assume that individual words are represented by tensors whose order is determined by their grammatical type; such tensors represent multilinear maps, and composition is effectuated by function application. Learning such representations has been done before, but often resulted in a large number of parameters to be learnt. We discuss some of these approaches (Grefenstette 2013, Polajnar 2014) as well as a recent new proposal (work in progress) that builds on the adjective skipgram model of Maillard & Clark (2016).

I end with remarks on the feasibility of tensor-based representation learning, and some suggestions of the options that we are currently investigating, to create wide-coverage type-driven embeddings for Dutch (and beyond!).

Time: 16.00 – 17.00

Location: Online


Gibbardian Collapse and Trivalent Conditionals

This paper discusses the scope and significance of a triviality result stated by Allan Gibbard for indicative conditionals, showing that any indicative conditional that satisfies the Law of Import-Export, is supraclassical, and stronger than the material conditional, must in fact collapse to the material conditional. Gibbard’s result is taken to pose a dilemma for a truth-functional account of conditionals: give up Import-Export, or embrace the two-valued analysis. We show that this dilemma can be averted in trivalent logics of the conditional based on Reichenbach and de Finetti’s idea that a conditional with a false antecedent is undefined. Import-Export and truth-functionality hold without triviality in such logics. We unravel some implicit assumptions in Gibbard’s proof, and discuss a recent generalization of Gibbard’s result due to Branden Fitelson.
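The trivalent escape route can be checked mechanically. The sketch below is my own encoding, not the paper’s: it assumes the de Finetti table for the conditional and Strong Kleene conjunction (minimum), with 1 for true, 0.5 for undefined and 0 for false, and verifies Import-Export by brute force:

```python
from itertools import product

VALUES = (0, 0.5, 1)  # false, undefined, true

def arrow(a, b):
    # de Finetti conditional: takes the value of the consequent when the
    # antecedent is true, and is undefined otherwise.
    return b if a == 1 else 0.5

def conj(a, b):
    # Strong Kleene conjunction.
    return min(a, b)

# Import-Export: (A and B) -> C is equivalent to A -> (B -> C),
# checked at every trivalent valuation of A, B, C.
import_export = all(
    arrow(conj(a, b), c) == arrow(a, arrow(b, c))
    for a, b, c in product(VALUES, repeat=3)
)
print(import_export)  # True
```

Both sides take the value of C exactly when A and B are both true, and are undefined otherwise, which is why the equivalence holds valuation by valuation.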

Time: 16.00 – 17.00

Location: Online.


Describable Nuclea, Negative Translations and Extension Stability

What do soundness/completeness of negative translations of intuitionistic modal logics, extension stability of preservativity/provability logics, and the use of nuclea on Heyting Algebra Expansions (HAEs) to form other HAEs have in common? As it turns out, in all these cases one appears to deal with a certain kind of subframe property for a given logic, i.e., the question whether the logic in question survives a transition to at least some types of substructures of its Kripke-style or neighbourhood-style frames. The nucleic perspective on subframe logics was introduced by Silvio Ghilardi and Guram Bezhanishvili (APAL 2007) for the purely superintuitionistic syntax (without modalities or other additional connectives). It has not been used so much in the modal setting, mostly because the focus in the field tends to be on modal logics with a classical propositional base, and nuclea are a rather trivial notion in the boolean setting. However, things are very different intuitionistically. Since the 1970s, nuclea have been studied in the context of point-free topologies (lattice-complete Heyting algebras), sheaves and Grothendieck topologies on toposes, and finally arbitrary Heyting algebras (Macnab 1981). Other communities may know them as “lax modalities” or (a somewhat degenerate case of) “strong monads”.

We marry the nuclei view on subframe properties with the framework of “describable operations” introduced to study subframe logics in Frank Wolter’s PhD thesis (1993). Wolter’s original setting was restricted to classical modal logics, but with minimal care his setup can be made to work intuitionistically, and nuclea provide the missing ingredient to make it fruitful. From this perspective, we revisit the FSCD 2017 study of soundness and completeness of negative translations in modal logic (joint work with Miriam Polzer and Ulrich Rabenstein) and our present study of extension stability for preservativity logics based on the constructive strict implication (joint work with Albert Visser). Various characterization and completeness results can be obtained in a generic way.

Time: 14.00 – 15.00

Location: Online.

This was a joint event with A|C, the Algebra|Coalgebra Seminar of the ILLC, Amsterdam. (http://events.illc.uva.nl/alg-coalg/)


Vector spaces as Kripke frames

The compositional distributional approach in computational linguistics has opened the way for an integration of the lexical aspects of meaning into Lambek’s type-logical grammar program. This approach is based on the observation that a sound semantics for the associative, commutative and unital Lambek calculus can be based on vector spaces, by interpreting fusion as the tensor product of vector spaces. We build on this observation and extend it to a ‘vector space semantics’ for the general modal Lambek calculus, based on modal algebras over a field K, i.e. vector spaces endowed with a bilinear binary product and a linear binary relation. We regard K-algebras as ‘Kripke frames’ whose complex algebras are complete residuated lattices. This perspective makes it possible to establish a systematic connection between vector space semantics and the standard Routley-Meyer semantics of modal substructural logics.

Time: 16.00 – 17.00

Location: Online.


  • March 10, Dominik Klein (Bayreuth/Bamberg), joint work with Soroush Rafiee Rad and Ondrej Majer.

Four-Valued Probabilities

First degree entailment (fde) is a four-valued propositional logic that complements the classical truth values True and False with two non-classical truth values, Neither and Both. The latter two account for the possibility that the available information is incomplete or contains contradictory evidence about some proposition. In this talk, we present a probabilistic extension of fde, i.e. we allow agents to have probabilistic beliefs about the truth and falsity of a proposition.

More specifically, we present two probabilistic versions of fde, one related to the bi-relational and the other to the four-valued semantics of first degree entailment. We provide sound and complete axiomatizations for both and identify a translation function between the two frameworks. Moreover, we identify policies for conditionalization and aggregation on the probability functions defined. In particular, we analyze four-valued equivalents of Bayes’ and Jeffrey updating and also provide policies for aggregating information from different sources.
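As a toy illustration of the underlying idea (the encoding and the numbers are hypothetical, not the authors’), a four-valued probabilistic state distributes probability mass over the four fde values; degrees of truth and falsity then overlap on Both, jointly miss Neither, and so need not sum to 1:

```python
# A four-valued probabilistic state: one probability mass per fde value.
p = {"True": 0.5, "False": 0.2, "Neither": 0.2, "Both": 0.1}
assert abs(sum(p.values()) - 1.0) < 1e-9

# A proposition counts as true if its value is True or Both, and as
# false if its value is False or Both - so the two degrees can overlap.
truth = p["True"] + p["Both"]
falsity = p["False"] + p["Both"]
print(round(truth, 10), round(falsity, 10))  # 0.6 0.3
```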

Time: 16.00 – 17.30

Location: 101, Drift 25



Words as tensors; proofs as multilinear maps

In this talk, I report on a current NWO project that aims to combine the ‘proofs-as-programs’ view on derivational semantics as one finds it in typelogical grammars with a vector-based, distributional modelling of lexical semantics.

Typelogical grammars derive from the ‘Syntactic Calculus’ introduced by Lambek in 1958/1961. The Lambek calculus is a precursor of the ‘parsing as deduction’ method in computational linguistics: the conventional linguistic categories are turned into formulas of a grammar logic; the judgment whether a phrase is syntactically well-formed is the outcome of a process of deduction. Present-day categorial grammars address the expressive limitations of the original syntactic calculus by extending the formula language with control operations that allow for restricted forms of reordering and restructuring. These control operations play a role analogous to the exponentials/modalities of linear logic; they situate typelogical grammars in the broader family of substructural logics.

Compositional interpretation, in a typelogical grammar, takes the form of a structure-preserving map (a homomorphism) that sends types and proofs of the syntax to their counterparts in an appropriate semantic algebra. In the formal semantics approach in the tradition of Montague grammar, the target models are based on a set-theoretic universe of individuals, truth values, and function spaces over them. Semantics of open class lexical items (nouns, verbs, …) is ignored: these items are treated as unanalysed non-logical constants.

From a Natural Language Processing perspective, this agnostic attitude towards lexical semantics is disappointing. In fact, a core ingredient of the highly successful NLP applications we see today is the modelling of word meanings as vectors in a high-dimensional semantic space representing a word’s contexts.

To obtain a vector-based compositional interpretation for typelogical grammars, the key idea is simple: a word with an atomic type, say a noun ‘cat’, is interpreted as a vector in Noun space; a word with an n-place functional type, say a transitive verb ‘chase’ requiring a subject and an object, is interpreted as a tensor of rank n+1; the derivation of a phrase ‘dogs chase cats’ then is interpreted as a multilinear map acting on the meanings of the constituting words to produce a vector in Sentence space, where the syntactic combination of the verb with its subject and object is modelled by tensor contraction.
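The composition just described can be sketched in a few lines (toy dimensions and made-up tensor entries; real models learn these from data):

```python
# Toy setup: 2-d noun space and 2-d sentence space (real models are larger).
dogs = [1.0, 0.0]  # vector in Noun space
cats = [0.0, 1.0]  # vector in Noun space

# 'chase' as a rank-3 tensor chase[i][s][j]:
# subject index i, sentence index s, object index j.
chase = [[[0.1, 0.2], [0.3, 0.4]],
         [[0.5, 0.6], [0.7, 0.8]]]

# 'dogs chase cats': contract the verb tensor with the subject and object
# vectors, yielding a vector in Sentence space.
sentence = [
    sum(dogs[i] * chase[i][s][j] * cats[j] for i in range(2) for j in range(2))
    for s in range(2)
]
print(sentence)  # [0.2, 0.4]
```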

I discuss some challenges for semantic modelling in terms of finite-dimensional vector spaces and (multi)linear maps arising from tensions between the demands of lexical and derivational aspects of meaning.

Time: 16.00 – 18.00

Location: 203, Drift 25


Loeb constructively meets μ

A famous result of Solovay identifies the classical modal Loeb system (GL) as the provability logic of Peano Arithmetic. Under this perspective, Goedel’s Second Incompleteness Theorem and Loeb’s Theorem correspond to explicit definability of fixpoints of modalized formulas in GL. Van Benthem has observed that this definability of fixpoints entails that adding standard fixpoint operators of positive formulas does not increase the expressive power of GL. Visser has analyzed this observation syntactically and extended it to a proof that GL is a retract of the modal μ-calculus in a suitably defined category of interpretations. A fully polished theory of such embeddings and translations involving the fixpoint operator and similar binders is yet to be developed; our current efforts focus on utilizing the theory of higher-order rewrite systems (HRS).

Furthermore, we face some interesting problems when studying intuitionistic analogs of such results. First, while the fixpoint theorem for modalized formulas works in intuitionistic GL, the propositional reasoning underlying the derivations of van Benthem and Visser is essentially classical. Deep results about the intuitionistic propositional calculus (IPC) established by Pitts and Ruitenburg can be used to overcome this limitation.

Second, over IPC (and sometimes even classically) it is for several reasons natural to look at a more general syntactic setup replacing the box with Lewis arrow, which arithmetically is standardly interpreted in terms of preservativity (although it does admit many other interpretations). As it turns out, there are two major incompatible ways of computing explicit fixpoints for formulas whose principal connective is the strict implication, each requiring a different base logic. Our approach here can be described as the first study of reverse mathematics of explicit fixpoints.

Time: 16.00 – 18.00

Location: 103, Drift 25


Unifying Two Views of Conditionals

In this talk, my goal is to draw together two traditions that often share similar motives, but reach different views about the logic of conditionals. Lewis and Stalnaker developed a semantics for conditionals based on selection functions, sometimes called the ‘variably-strict’ conditional. Routley and Meyer developed a semantics for conditionals based on a ternary relation, sometimes called an ‘enthymematic-relevant’ conditional. These traditions agree that the material and strict conditionals are inadequate, and for similar reasons, but traditionally these camps also disagree on some important details.

My approach will blend the idea that antecedent selections play a role in evaluating all conditionals together with the idea that the logic of conditionals is enthymematic. Much of the work is accomplished by shifting attention from the selection function at individual points of models to a ‘type-raised’ selection function that ‘holds at whole propositions’. A conditional proposition [If A, B] can then be seen as the proposition that grounds the maximally extensive type-raised selection function on proposition [A] that embeds into proposition [B]. This perspective allows us to capture all different kinds of conditionals—both counterfactuals and implications—by imposing frame conditions on type-raised selection functions.

Time: 16.00 – 18.00

Location: 103, Drift 25


  • June 4, Aybüke Özgün (ILLC), joint work with Francesco Berto (University of St Andrews)

Dynamic Hyperintensional Belief Revision

We propose a dynamic hyperintensional logic of belief revision for non-omniscient agents, reducing the logical omniscience phenomena affecting standard doxastic/epistemic logic as well as AGM belief revision theory. Our agents don’t know all a priori truths; their belief states are not closed under classical logical consequence; and their belief update policies can be subject to ‘framing effects’: logically or necessarily equivalent contents can lead to different revisions. We model both plain and conditional belief, then focus on dynamic belief revision. The key idea we exploit to achieve non-omniscience focuses on topic- or subject-matter-sensitivity: a feature of belief states which is gaining growing attention in the recent literature.
Time: 16:00 – 17:30

Location: 117, Janskerkhof 2-3


  • May 7, Amir Tabatabai (Utrecht University)

Steenen voor Brood: A Classical Interpretation of the Intuitionistic Logic

In 1933, Gödel introduced a provability interpretation of intuitionistic propositional logic, IPC, to formalize the BHK interpretation, reading intuitionistic constructions as the usual classical proofs [1]. However, instead of using any concrete notion of proof, he used the modal system S4 as a formalization of the intuitive concept of provability, and then translated IPC into S4 in a sound and complete manner. His work thus raised the problem of finding the missing concrete provability interpretation of the modal logic S4, which would complete his formalization of the BHK interpretation via classical proofs.
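For concreteness, one standard presentation of Gödel's translation of IPC into S4 (often called the Gödel-McKinsey-Tarski translation; variants differ in exactly where the boxes are inserted) runs as follows:

```latex
\begin{align*}
p^{\Box} &= \Box p & \bot^{\Box} &= \bot\\
(A \land B)^{\Box} &= A^{\Box} \land B^{\Box} &
(A \lor B)^{\Box} &= A^{\Box} \lor B^{\Box}\\
(A \to B)^{\Box} &= \Box\,(A^{\Box} \to B^{\Box})
\end{align*}
% Soundness and faithfulness:
% \mathsf{IPC} \vdash A \iff \mathsf{S4} \vdash A^{\Box}
```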

In this talk, we will develop a framework for such provability interpretations. We will first generalize Solovay’s seminal provability interpretation of the modal logic GL to capture other modal logics such as K4, KD4 and S4. The main idea is to introduce a hierarchy of arithmetical theories representing the informal hierarchy of meta-theories, and then to interpret the nested modalities in the language as the provability predicates of the different layers of this hierarchy. We will then combine this provability interpretation with Gödel’s translation to propose a classical formalization of the BHK interpretation. The formalization suggests that the BHK interpretation is nothing but a plural name for different provability interpretations of different propositional logics, based on the different ontological commitments we accept. These logics include intuitionistic logic, minimal logic and Visser-Ruitenburg’s basic logic. Finally, as a negative result, we will show that there is no provability interpretation for any extension of the logic KD45 and hence, as expected, no BHK interpretation for classical propositional logic.

[1] K. Gödel, Eine Interpretation des intuitionistischen Aussagenkalküls, Ergebnisse eines mathematischen Kolloquiums, vol. 4 (1933), pp. 39-40.

Time: 16:00 – 17:30

Location: Room 105, Drift 25


Monadic Second Order Logic as a Model Companion

I will talk about a connection between monadic second order logic and more traditional first order model theory, which has emerged in my recent joint work with Ghilardi [1,2].
Monadic second order (MSO) logic, when interpreted in discrete structures, is closely related to certain formal models of computation. For example, the MSO-definable sets of finite colored linear orders (“words”) are exactly the regular languages from automata theory. MSO logic and its connection to automata have also been studied on many other structures, including omega-indexed words and binary trees.
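To illustrate the MSO-regular correspondence with a toy example of my own (not from the talk): the set of words over {a, b} containing an even number of a's is MSO-definable, and equivalently it is recognized by a two-state deterministic finite automaton.

```python
# A two-state DFA over the alphabet {a, b} accepting exactly the words
# with an even number of a's -- a regular language, hence MSO-definable.
DELTA = {
    ("even", "a"): "odd",
    ("even", "b"): "even",
    ("odd", "a"): "even",
    ("odd", "b"): "odd",
}

def accepts(word: str) -> bool:
    """Run the DFA from the initial state; accept iff the run ends
    in the accepting state 'even'."""
    state = "even"
    for letter in word:
        state = DELTA[(state, letter)]
    return state == "even"

print(accepts("abba"))  # True: two a's
print(accepts("ab"))    # False: one a
print(accepts(""))      # True: zero a's
```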
In more traditional model theory, one typically studies first order logic as a generalization of certain constructions in mathematics. A fundamental insight there was that the theory of algebraically closed fields can be generalized to a purely logical notion of “existentially closed model”. The syntactic counterpart of this notion is called the “model companion” of a first order theory.
We prove that MSO logic, both on omega-words [1] and on binary trees [2], can be viewed as the model companion of a finitely axiomatized universal first order theory. In each case, this universal theory is closely connected to well-known modal fixpoint logics on such structures.
[1] S. Ghilardi and S. J. van Gool, A model-theoretic characterization of monadic second-order logic on infinite words, Journal of Symbolic Logic, vol. 82, no. 1, 62-76 (2017). https://arxiv.org/pdf/1503.08936.pdf
[2] S. Ghilardi and S. J. van Gool, Monadic second order logic as the model companion of temporal logic, Proc. LICS 2016, 417-426 (2016). https://arxiv.org/pdf/1605.01003.pdf

Time: 16:00 – 17:30

Location: Room 302, Drift 25


Analyzing the Behavior of Transition Systems

We develop a novel way to analyze the possible long-term behavior of transition systems. These are discrete, possibly non-deterministic dynamical systems; examples range from computer programs through neural and social networks to ‘discretized’ physical systems. We define when two trajectories in a system (i.e., possible sequences of states) exhibit the same type of behavior (e.g., agreeing on equilibria and oscillations). To understand the system, we thus have to analyze the partial order of ‘types of behavior’: equivalence classes of trajectories ordered by extension.

We identify subsystems whose absence guarantees that this behavior poset is an algebraic domain, and we characterize these behavior domains. Next, we investigate the natural topology on the types of behavior, and we comment on connections to logic. Finally, motivated by the ‘black box problem’ of neural networks, we mention ways to transform a system into a simple system with essentially the same behavior.
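The basic setting can be sketched in a few lines (my own toy illustration; the talk's formal notions of behavior are more refined): a non-deterministic transition system as a successor relation, its finite trajectories, and a crude classification of a trajectory's eventual behavior as an equilibrium or an oscillation once a state repeats.

```python
# A toy non-deterministic transition system: each state maps to its
# set of possible successor states.
SYSTEM = {
    0: {1},
    1: {2, 3},
    2: {2},   # a fixed point: equilibrium behavior
    3: {4},
    4: {3},   # a 2-cycle: oscillating behavior
}

def trajectories(system, start, length):
    """All trajectories (state sequences) of the given length from start."""
    if length == 1:
        return [[start]]
    out = []
    for successor in sorted(system[start]):
        for tail in trajectories(system, successor, length - 1):
            out.append([start] + tail)
    return out

def classify(trajectory):
    """Crudely classify a trajectory that has entered a cycle:
    'equilibrium' for a repeated fixed point, 'oscillation' for a
    longer cycle, None if no state has repeated yet."""
    seen = {}
    for i, state in enumerate(trajectory):
        if state in seen:
            period = i - seen[state]
            return "equilibrium" if period == 1 else "oscillation"
        seen[state] = i
    return None

for t in trajectories(SYSTEM, 0, 5):
    print(t, classify(t))
```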

Time: 16:00 – 17:30

Location: Room 302, Drift 25


Finite Axiomatizability and its Kin

In my talk I will first briefly discuss some background concerning finite axiomatizability. I will illustrate that some theories, like Peano Arithmetic and Zermelo-Fraenkel Set Theory, fail to be finitely axiomatizable in particularly tenacious ways.

There are two notions that are quite close to finite axiomatizability. The first is axiomatizability by a finite scheme; the second is the existence of a finitely axiomatized extension of the given theory, in an expanded language, that is conservative over the original theory.

Vaught’s theorem tells us that a wide class of recursively enumerable theories is axiomatizable by a finite scheme. I will briefly discuss this result.

I will present results by Kleene and by Craig & Vaught showing that, for a wide class of recursively enumerable theories, a finitely axiomatized extension in an expanded language exists that is conservative over the original theory. I will also discuss work by Pakhomov & Visser which tells us that, under reasonable assumptions, there is no best such extension.

The paper by Pakhomov & Visser can be found here: https://arxiv.org/abs/1712.01713

Time: 16:00 – 17:30

Location: Room 302, Drift 25


The De Jongh Property for Bounded Constructive Zermelo-Fraenkel Set Theory

The theory BCZF is obtained from constructive Zermelo-Fraenkel set theory CZF by restricting the collection schemes to bounded formulas. We prove that BCZF has the de Jongh property with respect to every intermediate logic that is characterised by a class of Kripke frames. For this proof, we will combine Kripke semantics for subtheories of CZF (due to Iemhoff) and the set-theoretic forcing technique.

The paper is available for download under https://eprints.illc.uva.nl/1662/1/djp-illcpp.pdf


  • February 19, Allard Tamminga (University of Groningen and Utrecht University)

An Impossibility Result on Methodological Individualism

We prove that the statement that a group of individual agents performs a deontically admissible group action cannot be expressed in a well-established multi-modal deontic logic of agency that models every conceivable combination of actions, omissions, abilities, obligations, prohibitions, and permissions of finitely many individual agents. Our formal result has repercussions for methodological individualism, the methodological precept that any social phenomenon is to be explained ultimately in terms of the actions and interactions of individuals. We show that by making the relevant social and individualistic languages fully explicit and mathematically precise, we can precisely determine the situations in which methodological individualism is tenable. (Joint work with Hein Duijf and Frederik Van De Putte)


How to adopt a logic

What is commonly referred to as the Adoption Problem is a challenge to the idea that the principles of logic can be rationally revised. The argument is based on a reconstruction of unpublished work by Saul Kripke. On this reconstruction, Kripke essentially extends the scope of Willard Van Orman Quine’s regress argument against conventionalism to the possibility of adopting new logical principles. In this paper, we discuss the scope of this challenge. Are all revisions of logic subject to the regress problem? If not, are there interesting cases of logical revisions that are subject to the regress problem? We will argue that both questions should be answered negatively.

Paper: https://carlonicolai.github.io/Adop4.pdf