The Department of Linguistics is delighted to announce that Kaili Vesik successfully defended her doctoral dissertation, titled Gradual Error-Driven Learning and Typology of Finnic Vowel Interactions, on June 26, 2025.
Supervised by Anne-Michelle Tessier, with committee members Gunnar Hansson and Kathleen Currie Hall, Dr. Vesik’s dissertation investigates the acquisition of complex vowel patterns in Finnic languages using a Gradual Learning Algorithm within the Optimality Theory framework. Her work explores learning biases that enable successful grammar acquisition from positive data, using a typologically informed, constraint-rich approach. Through extensive modeling across three Finnic languages, her study contributes significant insights into phonological learning, typology, and the interplay between markedness and faithfulness in constraint-based theories.
Please join us in congratulating Dr. Vesik on this remarkable achievement!
Abstract:
Gradual Error-Driven Learning and Typology of Finnic Vowel Interactions
The goal of this dissertation is to determine the kinds of learning biases necessary for a Gradual Learning Algorithm-type learner (Boersma & Hayes, 2001) to acquire target grammars for Finnic vowel patterns from positive data, with a typologically informed constraint set. I begin by describing the relevant vowel patterns, including varying degrees of participation in inventory gaps, positional restrictions, and progressive back-front vowel harmony. I propose Optimality Theory analyses for those patterns using a single large constraint set. This set includes 72 markedness constraints of two types: inventory bans based on stringency relations (de Lacy, 2002) and harmony-driving no-disagreement constraints (Pulleyblank, 2002). It also includes two faithfulness constraints: one general (Ident(Back)) and one specific to the first syllable (Ident-σ1(Back)).
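To make the constraint-set description more concrete, here is a minimal sketch (in Python, not taken from the dissertation) of how the two constraint families and the two faithfulness constraints could be encoded as violation-counting functions over vowel strings. The vowel inventory, constraint names, and definitions are illustrative simplifications only.

```python
# Illustrative only: a toy encoding of the constraint types named in the abstract,
# as violation-counting functions over (input, output) vowel strings.
# Inventory and definitions are simplified placeholders, not the actual constraint set.

BACK = {"a": True, "o": True, "u": True,        # back vowels
        "ä": False, "ö": False, "y": False,     # front vowels
        "e": False, "i": False}                 # front (often neutral) vowels

def ban(vowel):
    """Inventory ban: one violation per occurrence of `vowel` in the output."""
    return lambda inp, out: out.count(vowel)

def no_disagree_back(inp, out):
    """Harmony-driving 'no-disagreement' constraint: penalize adjacent output
    vowels that disagree in backness."""
    return sum(1 for v1, v2 in zip(out, out[1:]) if BACK[v1] != BACK[v2])

def ident_back(inp, out):
    """Ident(Back): penalize any vowel whose backness differs between input and output."""
    return sum(1 for vi, vo in zip(inp, out) if BACK[vi] != BACK[vo])

def ident_back_sigma1(inp, out):
    """Ident-σ1(Back): like Ident(Back), but only for the initial-syllable vowel."""
    return int(bool(inp) and bool(out) and BACK[inp[0]] != BACK[out[0]])

# A miniature constraint set in the spirit of the abstract:
# markedness bans + a harmony driver + general and positional faithfulness.
CONSTRAINTS = {
    "*ö": ban("ö"),
    "NoDisagree(Back)": no_disagree_back,
    "Ident(Back)": ident_back,
    "Ident-σ1(Back)": ident_back_sigma1,
}

# Example: input vowels "äo" mapped to output vowels "ao".
violations = {name: c("äo", "ao") for name, c in CONSTRAINTS.items()}
```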
I explore challenges that arise when modeling the acquisition of Finnic vowel patterns using a constraint set with overlapping violation profiles. The first is a lack of space for markedness constraints to be ranked distinctly from both members of a surrounding pair of faithfulness constraints. The second is that oscillation of antagonistic pairs of constraints prevents them from moving downward, out of the way of those that should be ranked highest. The third is that specific, coincidentally unviolated constraints can result in the mischaracterization of general patterns (bans against particular vowels) as specific ones (applying only in a particular context).
I present three novel implementations of learning biases to alleviate these challenges: (1) the adaptation of Hayes’s (2004) Favour Specificity principle to online learning; (2) variations on Magri’s (2012) modified promotion rate to more conservatively promote winner-preferring constraints (cf. the Credit Problem; Dresher, 1999); and (3) an adaptation of Albright and Hayes’s (2006) bias favouring more general markedness constraints, calculated based on observed inputs. Models with 1080 different combinations of settings were tested on their acquisition of three sample languages (North Estonian, Finnish, and North Seto). These biases produced several very successful learners, acquiring grammars that account for input patterns with 99.99% success. The broad experimental landscape allows for considerable confidence in the success of the biases, while also providing a foundation for future research into interactions between biases.
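As a rough illustration of the error-driven learning procedure the abstract builds on, the sketch below implements a Gradual Learning Algorithm-style update in the spirit of Boersma & Hayes (2001): constraints carry numeric ranking values, evaluation adds noise, and on an error, loser-preferring constraints are demoted while winner-preferring constraints are promoted. The calibrated promotion rate shown is only a stand-in for the kind of conservative promotion Magri (2012) discusses; the dissertation’s actual update rules and bias implementations differ.

```python
import random

def noisy_ranking(ranking, noise=2.0):
    """Stochastic evaluation values: each constraint's ranking value plus Gaussian noise."""
    return {c: v + random.gauss(0.0, noise) for c, v in ranking.items()}

def gla_update(ranking, winner_viols, loser_viols,
               plasticity=1.0, calibrated_promotion=True):
    """One GLA-style error-driven update (illustrative, not the dissertation's learner).

    ranking:      constraint name -> ranking value (mutated in place)
    winner_viols: violations incurred by the observed (target) form
    loser_viols:  violations incurred by the learner's erroneous output
    """
    loser_preferring = [c for c in ranking if winner_viols[c] > loser_viols[c]]
    winner_preferring = [c for c in ranking if loser_viols[c] > winner_viols[c]]

    # A Magri-style conservative promotion: scale promotion down when many
    # constraints prefer the winner, so credit is not over-assigned (cf. the Credit Problem).
    if calibrated_promotion and winner_preferring:
        promotion = plasticity * len(loser_preferring) / (len(winner_preferring) + 1)
    else:
        promotion = plasticity

    for c in loser_preferring:
        ranking[c] -= plasticity      # demote constraints that favour the error
    for c in winner_preferring:
        ranking[c] += promotion       # promote constraints that favour the target

# Toy usage: one update after an error on a single learning datum.
ranking = {"*ö": 100.0, "NoDisagree(Back)": 100.0,
           "Ident(Back)": 100.0, "Ident-σ1(Back)": 100.0}
gla_update(
    ranking,
    winner_viols={"*ö": 0, "NoDisagree(Back)": 0, "Ident(Back)": 1, "Ident-σ1(Back)": 0},
    loser_viols={"*ö": 0, "NoDisagree(Back)": 1, "Ident(Back)": 0, "Ident-σ1(Back)": 0},
)
```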