
TULIPS – The Utrecht Logic in Progress Series

Archive

Here we archive past talks (abstracts, handouts, links, …).

2020

Reasoning with words, types and tensors

Computational linguists attempt to make a computer reason about natural language. For this, words and sentences need to be encoded in a suitable numerical form. This is where vector-based semantics comes in: by embedding words in a space in such a way that words similar in meaning will be measurably similar, we allow for efficient representations that encode (some sort of) relationships between words. This idea and its use in deep neural networks have led to successful applications in the domain of natural language processing; as of 2020 we have access to several sentence encoders (e.g. Skip-Thoughts of Kiros et al. 2015, InferSent of Conneau et al. 2017, Universal Sentence Encoder of Cer et al. 2018) and contextualised word embeddings (ELMo of Peters et al. 2018, BERT of Devlin et al. 2019). These neural vector approaches are able to map arbitrary text to some vectorial encoding, using state-of-the-art deep learning techniques.
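The core idea can be illustrated with toy vectors (the numbers below are made up for illustration; real embeddings have hundreds of dimensions and are learnt from corpora):

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-dimensional "embeddings" (made-up numbers).
cat = np.array([0.9, 0.1, 0.0])
dog = np.array([0.8, 0.2, 0.1])
car = np.array([0.0, 0.1, 0.9])

# Words similar in meaning come out measurably more similar:
print(cosine(cat, dog) > cosine(cat, car))  # True
```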

In contrast to purely neural approaches are formal compositional approaches to semantics, which take the meaning of a sentence to be built up from the meanings of individual words and the way those are combined (Montague 1970). In a type-driven approach, words are assigned types that reflect their grammatical role in a sentence or text, and composition of words is driven by a logical system that assigns a function-argument structure to a sequence of words (Moortgat 2010). Such an approach can be neatly integrated with vector-based semantics of meaning, where individual word meaning is given by the way words are distributed in a large text corpus (Coecke et al. 2010, 2013, Moortgat & Wijnholds 2017).

Earlier talks in this series (Moortgat, Greco) discussed theoretical aspects of compositional distributional models; in this talk I will take the point of view of representation learning: how to learn representations for words and sentences in a way that aligns with some of the key ideas of typelogical grammar?

Compositional tensor-based distributional models assume that individual words are represented by tensors whose order is determined by their grammatical type; such tensors represent multilinear maps, and composition is effectuated by function application. Learning such representations has been attempted before, but often at the cost of a large number of parameters to be learnt. We discuss some of these approaches (Grefenstette 2013, Polajnar 2014) as well as a recent new proposal (work in progress) that builds on the adjective skipgram model of Maillard & Clark (2016).

I end with remarks on the feasibility of tensor-based representation learning, and some of the options we are currently investigating to create wide-coverage type-driven embeddings for Dutch (and beyond!).

Time: 16.00 – 17.00

Location: Online

 

Gibbardian Collapse and Trivalent Conditionals

This paper discusses the scope and significance of a triviality result stated by Allan Gibbard for indicative conditionals, showing that any indicative conditional that satisfies the Law of Import-Export, is supraclassical, and is stronger than the material conditional must in fact collapse to the material conditional. Gibbard’s result is taken to pose a dilemma for a truth-functional account of conditionals: give up Import-Export, or embrace the two-valued analysis. We show that this dilemma can be averted in trivalent logics of the conditional based on Reichenbach and de Finetti’s idea that a conditional with a false antecedent is undefined. Import-Export and truth-functionality hold without triviality in such logics. We unravel some implicit assumptions in Gibbard’s proof, and discuss a recent generalization of Gibbard’s result due to Branden Fitelson.
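The trivalent escape route can be checked mechanically. A minimal sketch of the de Finetti conditional (undefined on a false antecedent; encoding choices are mine): a brute-force check shows Import-Export holds even though the conditional does not coincide with the material one:

```python
from itertools import product

T, F, N = True, False, None   # N: undefined (the "gap" value)

def df(a, b):
    # De Finetti / Reichenbach conditional: takes the value of the
    # consequent when the antecedent is true, undefined otherwise.
    return b if a is T else N

# Import-Export: A -> (B -> C) and (A & B) -> C always agree.
assert all(df(a, df(b, c)) == df(a and b, c)
           for a, b, c in product([T, F], repeat=3))

# Yet no collapse to the material conditional: on a false antecedent
# the material conditional is true, the de Finetti conditional undefined.
print(df(F, F), (not F) or F)  # None True
```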

Time: 16.00 – 17.00

Location: Online.

 

Describable Nuclea, Negative Translations and Extension Stability

What do soundness/completeness of negative translations of intuitionistic modal logics, extension stability of preservativity/provability logics and the use of nuclea on Heyting Algebra Expansions (HAEs) to form other HAEs have in common? As it turns out, in all those cases one appears to deal with a certain kind of subframe property for a given logic, i.e., the question whether the logic in question survives a transition to at least some types of substructures of its Kripke-style or neighbourhood-style frames. The nucleic perspective on subframe logics has been introduced by Silvio Ghilardi and Guram Bezhanishvili (APAL 2007) for the purely superintuitionistic syntax (without modalities or other additional connectives). It has not been used so much in the modal setting, mostly because the focus in the field tends to be on modal logics with a classical propositional base, and nuclea are a rather trivial notion in the Boolean setting. However, things are very different intuitionistically. Since the 1970s, nuclea have been studied in the context of point-free topologies (lattice-complete Heyting algebras), sheaves and Grothendieck topologies on toposes, and finally arbitrary Heyting algebras (Macnab 1981). Other communities may know them as “lax modalities” or (a somewhat degenerate case of) “strong monads”.

We marry the nuclei view on subframe properties with the framework of “describable operations” introduced to study subframe logics in Frank Wolter’s PhD Thesis (1993). Wolter’s original setting was restricted to classical modal logics, but with minimal care his setup can be made to work intuitionistically, and nuclea provide the missing ingredient to make it fruitful. From this perspective, we revisit our FSCD 2017 study of soundness and completeness of negative translations in modal logic (joint work with Miriam Polzer and Ulrich Rabenstein) and our present study of extension stability for preservativity logics based on the constructive strict implication (joint work with Albert Visser). Various characterization and completeness results can be obtained in a generic way.

Time: 14.00 – 15.00

Location: Online.

This was a joint event with A|C, the Algebra|Coalgebra Seminar of the ILLC, Amsterdam. (http://events.illc.uva.nl/alg-coalg/)

 

Vector spaces as Kripke frames

The compositional distributional approach in computational linguistics has opened the way for an integration of the lexical aspects of meaning into Lambek’s type-logical grammar program. This approach is based on the observation that a sound semantics for the associative, commutative and unital Lambek calculus can be based on vector spaces by interpreting fusion as the tensor product of vector spaces. We build on this observation and extend it to a ‘vector space semantics’ for the general modal Lambek calculus, based on modal algebras over a field K, i.e. vector spaces endowed with a bilinear binary product and a linear binary relation. We regard K-algebras as ‘Kripke frames’ the complex algebras of which are complete residuated lattices. This perspective makes it possible to establish a systematic connection between vector space semantics and the standard Routley-Meyer semantics of modal substructural logics.

Time: 16.00 – 17.00

Location: Online.

 

  • March 10, Dominik Klein (Bayreuth/Bamberg), joint work with Soroush Rafiee Rad and Ondrej Majer.

Four-Valued Probabilities

First degree entailment (fde) is a four-valued propositional logic that complements the classical truth values True and False with two non-classical truth values, Neither and Both. The latter two account for the possibility that the available information is incomplete or contains contradictory evidence about some proposition. In this talk, we present a probabilistic extension of fde, i.e. we allow agents to have probabilistic beliefs about the truth and falsity of a proposition.

More specifically, we present two probabilistic versions of fde, one related to the bi-relational, the other to the four-valued semantics of first degree entailment. We provide sound and complete axiomatizations for both and identify a translation function between the two frameworks. Moreover, we identify policies for conditionalization and aggregation on the probability functions defined. More specifically, we analyze four-valued equivalents of Bayes’ and Jeffrey updating and also provide policies for aggregating information from different sources.
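One way to picture such probabilistic beliefs (an illustrative reading, not the authors' official definitions): probability mass is distributed over the four fde truth values, and the resulting degrees of belief in truth and in falsity need not sum to one:

```python
# Toy four-valued belief state: probability mass over the fde truth
# values True, False, Neither, Both (the derived "degrees" below are
# an illustrative reading, not the paper's official definitions).
belief = {'T': 0.5, 'F': 0.2, 'N': 0.2, 'B': 0.1}
assert abs(sum(belief.values()) - 1.0) < 1e-9

# Degree to which p is believed true: mass of states supporting truth.
deg_true = belief['T'] + belief['B']
# Degree to which p is believed false: mass of states supporting falsity.
deg_false = belief['F'] + belief['B']

# The degrees need not sum to 1: evidence may be incomplete (N)
# or contradictory (B).
print(round(deg_true, 2), round(deg_false, 2))  # 0.6 0.3
```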

Time: 16.00 – 17.30

Location: 101, Drift 25

 

2019

Words as tensors; proofs as multilinear maps

In this talk, I report on a current NWO project that aims to combine the ‘proofs-as-programs’ view on derivational semantics as one finds it in typelogical grammars with a vector-based, distributional modelling of lexical semantics.

Typelogical grammars derive from the ‘Syntactic Calculus’ introduced by Lambek in 1958/1961. The Lambek calculus is a precursor of the ‘parsing as deduction’ method in computational linguistics: the conventional linguistic categories are turned into formulas of a grammar logic; the judgment whether a phrase is syntactically well-formed is the outcome of a process of deduction. Present-day categorial grammars address the expressive limitations of the original syntactic calculus by extending the formula language with control operations that allow for restricted forms of reordering and restructuring. These control operations play a role analogous to the exponentials/modalities of linear logic; they situate typelogical grammars in the broader family of substructural logics.

Compositional interpretation, in a typelogical grammar, takes the form of a structure-preserving map (a homomorphism) that sends types and proofs of the syntax to their counterparts in an appropriate semantic algebra. In the formal semantics approach in the tradition of Montague grammar, the target models are based on a set-theoretic universe of individuals, truth values, and function spaces over them. The semantics of open-class lexical items (nouns, verbs, …) is ignored: these items are treated as unanalysed non-logical constants.

From a Natural Language Processing perspective, this agnostic attitude with respect to lexical semantics is disappointing. In fact, a core ingredient of the highly successful NLP applications we see today is the modelling of a word’s meaning as a vector in a high-dimensional semantic space representing the word’s contexts.

To obtain a vector-based compositional interpretation for typelogical grammars, the key idea is simple: a word with an atomic type, say a noun ‘cat’, is interpreted as a vector in Noun space; a word with an n-place functional type, say a transitive verb ‘chase’ requiring a subject and an object, is interpreted as a tensor of rank n+1; the derivation of a phrase ‘dogs chase cats’ then is interpreted as a multilinear map acting on the meanings of the constituting words to produce a vector in Sentence space, where the syntactic combination of the verb with its subject and object is modelled by tensor contraction.
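In a toy implementation, this composition is a single tensor contraction (the dimensions and random values below are arbitrary placeholders, not learnt representations):

```python
import numpy as np

rng = np.random.default_rng(0)
dim_n, dim_s = 4, 3   # toy dimensions for Noun and Sentence space

dogs = rng.normal(size=dim_n)                   # atomic type: vector in Noun space
cats = rng.normal(size=dim_n)
chase = rng.normal(size=(dim_n, dim_s, dim_n))  # transitive verb: rank-3 tensor

# The derivation of 'dogs chase cats' acts as a multilinear map:
# contract the verb tensor with its subject and object vectors
# to obtain a vector in Sentence space.
sentence = np.einsum('i,isj,j->s', dogs, chase, cats)
print(sentence.shape)  # (3,)
```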

I discuss some challenges for semantic modelling in terms of finite-dimensional vector spaces and (multi)linear maps arising from tensions between the demands of lexical and derivational aspects of meaning.

Time: 16.00 – 18.00

Location: 203, Drift 25

 

Loeb constructively meets μ

A famous result of Solovay identifies the classical modal Löb system (GL) as the provability logic of Peano Arithmetic. Under this perspective, Gödel’s Second Incompleteness Theorem and Löb’s Theorem correspond to explicit definability of fixpoints of modalized formulas in GL. Van Benthem has observed that this definability of fixpoints entails that adding standard fixpoint operators for positive formulas does not increase the expressive power of GL. Visser has analyzed this observation syntactically and extended it to a proof that GL is a retract of the modal μ-calculus in a suitably defined category of interpretations. A fully polished theory of such embeddings and translations involving the fixpoint operator and similar binders is yet to be developed; our current efforts focus on utilizing the theory of higher-order rewrite systems (HRS).
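For concreteness, the explicit definability of fixpoints referred to here is the de Jongh–Sambin fixpoint theorem; a standard instance can be stated as follows:

```latex
% Löb's axiom, the characteristic axiom of GL:
\Box(\Box p \to p) \to \Box p
% De Jongh–Sambin fixpoint theorem: for A(p) with p occurring
% only under a box, there is a p-free formula D such that
\mathrm{GL} \vdash D \leftrightarrow A(D)
% Standard instance: the fixpoint of A(p) = \neg\Box p is the
% consistency statement \neg\Box\bot, i.e.
\mathrm{GL} \vdash \neg\Box\bot \;\leftrightarrow\; \neg\Box\neg\Box\bot
```

The right-to-left direction of the instance is exactly Löb's axiom with p instantiated to ⊥.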

Furthermore, we face some interesting problems when studying intuitionistic analogs of such results. First, while the fixpoint theorem for modalized formulas works in intuitionistic GL, the propositional reasoning underlying the derivations of van Benthem and Visser is essentially classical. Deep results about the intuitionistic propositional calculus (IPC) established by Pitts and Ruitenburg can be used to overcome this limitation.

Second, over IPC (and sometimes even classically) it is for several reasons natural to look at a more general syntactic setup replacing the box with the Lewis arrow, which arithmetically is standardly interpreted in terms of preservativity (although it admits many other interpretations). As it turns out, there are two major incompatible ways of computing explicit fixpoints for formulas whose principal connective is the strict implication, each requiring a different base logic. Our approach here can be described as the first study of the reverse mathematics of explicit fixpoints.

Time: 16.00 – 18.00

Location: 103, Drift 25

 

Unifying Two Views of Conditionals

In this talk, my goal is to draw together two traditions that often share similar motives but reach different views about the logic of conditionals. Lewis and Stalnaker developed a semantics for conditionals based on selection functions, sometimes called the ‘variably-strict’ conditional. Routley and Meyer developed a semantics for conditionals based on a ternary relation, sometimes called an ‘enthymematic-relevant’ conditional. These traditions agree that the material and strict conditionals are inadequate, and for similar reasons, but traditionally these camps also disagree on some important details.

My approach blends the idea that antecedent selections play a role in evaluating all conditionals with the idea that the logic of conditionals is enthymematic. Much of the work is accomplished by shifting attention from the selection function at individual points of models to a ‘type-raised’ selection function that ‘holds at whole propositions’. A conditional proposition [If A, B] can then be seen as the proposition that grounds the maximally extensive type-raised selection function on proposition [A] that embeds into proposition [B]. This perspective allows us to capture all the different kinds of conditionals, both counterfactuals and implications, by imposing frame conditions on type-raised selection functions.

Time: 16.00 – 18.00

Location: 103, Drift 25

 

  • June 4, Aybüke Özgün (ILLC), joint work with Francesco Berto (University of St Andrews)

Dynamic Hyperintensional Belief Revision

We propose a dynamic hyperintensional logic of belief revision for non-omniscient agents, reducing the logical omniscience phenomena affecting standard doxastic/epistemic logic as well as AGM belief revision theory. Our agents don’t know all a priori truths; their belief states are not closed under classical logical consequence; and their belief update policies can be subject to ‘framing effects’: logically or necessarily equivalent contents can lead to different revisions. We model both plain and conditional belief, then focus on dynamic belief revision. The key idea we exploit to achieve non-omniscience focuses on topic- or subject-matter-sensitivity: a feature of belief states that is attracting growing attention in the recent literature.
Time: 16:00 – 17:30

Location: 117, Janskerkhof 2-3

 

  • May 7, Amir Tabatabai (Utrecht University)

Steenen voor Brood: A Classical Interpretation of the Intuitionistic Logic

In 1933, Gödel introduced a provability interpretation for the intuitionistic propositional logic, IPC, to establish a formalization of the BHK interpretation, reading intuitionistic constructions as the usual classical proofs [1]. However, instead of using any concrete notion of a proof, he used the modal system S4 as a formalization of the intuitive concept of provability, and then translated IPC into S4 in a sound and complete manner. His work thus suggested the problem of finding the missing concrete provability interpretation of the modal logic S4, which would complete his formalization of the BHK interpretation via classical proofs.

In this talk, we will develop a framework for such provability interpretations. We will first generalize Solovay’s seminal provability interpretation of the modal logic GL to capture other modal logics such as K4, KD4 and S4. The main idea is to introduce a hierarchy of arithmetical theories representing the informal hierarchy of meta-theories, and then to interpret the nested modalities in the language as the provability predicates of the different layers of this hierarchy. Later, we will combine this provability interpretation with Gödel’s translation to propose a classical formalization of the BHK interpretation. The formalization suggests that the BHK interpretation is nothing but a plural name for different provability interpretations of different propositional logics, based on the different ontological commitments that we believe in. These include intuitionistic logic, minimal logic and Visser–Ruitenburg’s basic logic. Finally, as a negative result, we will show that there is no provability interpretation for any extension of the logic KD45 and, as expected, no BHK interpretation for the classical propositional logic.

[1] K. Gödel, Eine Interpretation des intuitionistischen Aussagenkalküls, Ergebnisse eines mathematischen Kolloquiums, vol. 4 (1933), pp. 39–40.

Time: 16:00 – 17:30

Location: Room 105, Drift 25

 

Monadic Second Order Logic as a Model Companion

I will talk about a connection between monadic second order logic and more traditional first order model theory, which has emerged in my recent joint work with Ghilardi [1,2].
Monadic second order (MSO) logic, when interpreted in discrete structures, is closely related to certain formal models of computation. For example, the MSO-definable sets of finite colored linear orders (“words”) are exactly the regular languages from automata theory. MSO logic and its connection to automata has also been studied on many more structures, including omega-indexed words and binary trees.
In more traditional model theory, one typically studies first order logic as a generalization of certain constructions in mathematics. A fundamental insight there was that the theory of algebraically closed fields can be generalized to a purely logical notion of “existentially closed model”. The syntactic counterpart of this notion is called the “model companion” of a first order theory.
We prove that MSO logic, both on omega-words [1] and on binary trees [2], can be viewed as the model companion of a finitely axiomatized universal first order theory. In each case, this universal theory is closely connected to well-known modal fixpoint logics on such structures.
[1] S. Ghilardi and S. J. van Gool, A model-theoretic characterization of monadic second-order logic on infinite words, Journal of Symbolic Logic, vol. 82, no. 1, 62-76 (2017). https://arxiv.org/pdf/1503.08936.pdf
[2] S. Ghilardi and S. J. van Gool, Monadic second order logic as the model companion of temporal logic, Proc. LICS 2016, 417-426 (2016). https://arxiv.org/pdf/1605.01003.pdf

Time: 16:00 – 17:30

Location: Room 302, Drift 25

 

Analyzing the Behavior of Transition Systems

We develop a novel way to analyze the possible long-term behavior of transition systems. These are discrete, possibly non-deterministic dynamical systems. Examples range from computer programs, over neural and social networks, to ‘discretized’ physical systems. We define when two trajectories in a system (i.e., possible sequences of states) exhibit the same type of behavior (e.g., agreeing on equilibria and oscillations). To understand the system we thus have to analyze the partial order of ‘types of behavior’: equivalence classes of trajectories ordered by extension.
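A minimal sketch of such a system (states, transitions, and the trajectory enumeration below are invented for illustration):

```python
# A small non-deterministic transition system: each state maps to its
# set of possible successor states (names are made up).
succ = {'a': {'b'}, 'b': {'b', 'c'}, 'c': {'c'}}

def trajectories(state, length):
    """All possible state sequences of the given length starting in `state`."""
    if length == 1:
        return [[state]]
    return [[state] + rest
            for nxt in succ[state]
            for rest in trajectories(nxt, length - 1)]

# From 'a' the system either lingers in the cycle at 'b' or falls
# into the absorbing equilibrium 'c':
for t in sorted(''.join(p) for p in trajectories('a', 4)):
    print(t)  # abbb, abbc, abcc
```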

We identify subsystems whose absence makes this behavior poset an algebraic domain, and we characterize these behavior domains. Next, we investigate the natural topology on the types of behavior, and we comment on connections to logic. Finally, motivated by the ‘black box problem’ of neural networks, we mention ways to transform a system into a simple system with essentially the same behavior.

Time: 16:00 – 17:30

Location: Room 302, Drift 25

 

Finite Axiomatizability and its Kin

In my talk I will first, briefly, discuss some background concerning finite axiomatizability. I will illustrate that some theories, like Peano Arithmetic and Zermelo Fraenkel Set Theory, are not finitely axiomatizable in particularly tenacious ways.

There are two notions that are quite close to finite axiomatizability. The first notion is axiomatizability by a finite scheme and the second notion is *finding a finitely axiomatized extension of the given theory in an expanded language such that the extension is conservative over the original theory*.

Vaught’s theorem tells us that a wide class of recursively enumerable theories is axiomatizable by a finite scheme. I will briefly discuss this result.

I present results by Kleene and Craig & Vaught that show that, for a wide class of recursively enumerable theories, a finitely axiomatized extension in an expanded language exists, such that the extension is conservative over the original theory. I discuss work by Pakhomov & Visser that tells us that, under reasonable assumptions, there is no best such extension.

The paper by Pakhomov & Visser can be found here: https://arxiv.org/abs/1712.01713

Time: 16:00 – 17:30

Location: Room 302, Drift 25

 

The De Jongh Property for Bounded Constructive Zermelo-Fraenkel Set Theory

The theory BCZF is obtained from constructive Zermelo-Fraenkel set theory CZF by restricting the collection schemes to bounded formulas. We prove that BCZF has the de Jongh property with respect to every intermediate logic that is characterised by a class of Kripke frames. For this proof, we will combine Kripke semantics for subtheories of CZF (due to Iemhoff) and the set-theoretic forcing technique.

The paper is available for download under https://eprints.illc.uva.nl/1662/1/djp-illcpp.pdf

 

  • February 19, Allard Tamminga (University of Groningen and Utrecht University)

An Impossibility Result on Methodological Individualism

We prove that the statement that a group of individual agents performs a deontically admissible group action cannot be expressed in a well-established multi-modal deontic logic of agency that models every conceivable combination of actions, omissions, abilities, obligations, prohibitions, and permissions of finitely many individual agents. Our formal result has repercussions for methodological individualism, the methodological precept that any social phenomenon is to be explained ultimately in terms of the actions and interactions of individuals. We show that by making the relevant social and individualistic languages fully explicit and mathematically precise, we can precisely determine the situations in which methodological individualism is tenable. (Joint work with Hein Duijf and Frederik Van De Putte)

 

How to adopt a logic

What is commonly referred to as the Adoption Problem is a challenge to the idea that the principles of logic can be rationally revised. The argument is based on a reconstruction of unpublished work by Saul Kripke. As the reconstruction has it, Kripke essentially extends the scope of Willard Van Orman Quine’s regress argument against conventionalism to the possibility of adopting new logical principles. In this paper, we discuss the scope of this challenge. Are all revisions of logic subject to the regress problem? If not, are there interesting cases of logical revisions that are subject to the regress problem? We will argue that both questions should be answered negatively.

Paper: https://carlonicolai.github.io/Adop4.pdf