In this course we will cover the basics of tree automata and tree transducers and their applications in the areas of parsing and statistical machine translation.

In the first third of the course, we will recall tree automata and tree transducers in full detail, with a strong emphasis on examples. We will nevertheless establish the basic theorems on the expressive power of these devices and on the properties of the recognizable tree languages, i.e., the tree languages that tree automata recognize. This part provides the foundation for the following two parts.
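
To make the central device concrete, here is a minimal illustrative sketch (not taken from the course materials) of a deterministic bottom-up tree automaton over boolean expression trees. Its states are 0 and 1, a node's state is computed from the states of its children, and the final state 1 means the tree denotes true:

```python
def run(tree):
    """Return the state the automaton reaches at the root of `tree`.

    A tree is either a leaf string 'true'/'false' or a tuple
    (symbol, child, ...) with symbol 'and', 'or', or 'not'.
    """
    if tree == 'true':
        return 1
    if tree == 'false':
        return 0
    sym, *children = tree
    states = [run(c) for c in children]  # bottom-up: children first
    if sym == 'and':
        return states[0] & states[1]
    if sym == 'or':
        return states[0] | states[1]
    if sym == 'not':
        return 1 - states[0]
    raise ValueError(f'unknown symbol: {sym}')

def accepts(tree):
    # 1 is the only final state, so the automaton recognizes exactly
    # the expression trees that evaluate to true.
    return run(tree) == 1

t = ('and', ('or', 'false', 'true'), ('not', 'false'))
print(accepts(t))  # → True
```

The set of accepted trees is a recognizable tree language in the sense used above: membership is decided by a single bottom-up pass with finitely many states.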

In the second part, we will discuss the convergence of context-free parsing models toward tree automata, or equivalently regular tree grammars (known in the parsing community as context-free grammars with latent variables). These models are currently the best-performing grammar-based models in statistical parsing for an important subset of natural languages, including English. In addition, we will investigate the main algorithms used in statistical parsing, aiming for an intuitive understanding of the mechanics at work. We conclude this part by showcasing an actual parser and demonstrating its training and decoding steps in preparation for the final part.
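
As a taste of those algorithms, the following sketch implements Viterbi CKY for a toy probabilistic context-free grammar in Chomsky normal form. The grammar and its probabilities are invented for the example; real statistical parsers use grammars with many thousands of (latent-annotated) rules, but the chart recursion is the same:

```python
from collections import defaultdict

# Toy binary rules: parent -> (left, right) with probability (made up).
binary = {
    'S':  [(('NP', 'VP'), 1.0)],
    'NP': [(('Det', 'N'), 0.6)],
    'VP': [(('V', 'NP'), 1.0)],
}
# Toy lexical rules: parent -> word with probability (made up).
lexical = {
    'Det': {'the': 1.0},
    'N':   {'dog': 0.5, 'cat': 0.5},
    'V':   {'saw': 1.0},
}

def viterbi_cky(words):
    """Return the probability of the best parse of `words` rooted in 'S'."""
    n = len(words)
    # chart[(i, j)] maps a nonterminal to the best probability of
    # deriving words[i:j] from it.
    chart = defaultdict(dict)
    for i, w in enumerate(words):  # fill in the lexical level
        for nt, dist in lexical.items():
            if w in dist:
                chart[(i, i + 1)][nt] = dist[w]
    for span in range(2, n + 1):   # combine adjacent spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for parent, rules in binary.items():
                    for (l, r), p in rules:
                        if l in chart[(i, k)] and r in chart[(k, j)]:
                            prob = p * chart[(i, k)][l] * chart[(k, j)][r]
                            if prob > chart[(i, j)].get(parent, 0.0):
                                chart[(i, j)][parent] = prob
    return chart[(0, n)].get('S', 0.0)

print(viterbi_cky(['the', 'dog', 'saw', 'the', 'cat']))
# best parse probability of the toy sentence (0.6 * 0.5 squared)
```

Keeping the best probability per chart cell yields the Viterbi parse; summing instead of maximizing gives the inside probability used during training.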

In the last part, we will switch to statistical machine translation. More precisely, we will mostly cover syntax-based machine translation, in which the translation is based (at least in part) on syntactic trees like those produced by a parser. Syntax-based systems are dominant for a few, but important, language pairs such as Chinese-English and Arabic-English. These systems are typically based on some form of tree transducer (or synchronous grammar). We will present the prominent models, show how to obtain the rules for those models, and show how typical decoders use (approximations of) the obtained model to perform the translation. We will argue that the theoretical constructions offer a good alternative to the standard decoding approaches and substantiate this claim with practical evaluations.
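
To illustrate the kind of local reordering that syntax-based translation rules encode, here is a minimal sketch (again invented for illustration, not a model from the course) of a top-down tree transducer with a single reordering rule that swaps a verb and its object, as one might when translating into a verb-final language:

```python
def transduce(tree):
    """Apply the transducer top-down.

    Trees are leaf strings or tuples (label, child, ...).
    """
    if isinstance(tree, str):
        return tree
    label, *children = tree
    children = [transduce(c) for c in children]
    # Rule: q(VP(x1, x2)) -> VP(q(x2), q(x1)), i.e. swap verb and object.
    if label == 'VP' and len(children) == 2:
        children = [children[1], children[0]]
    return (label, *children)

src = ('S', ('NP', 'Mary'), ('VP', ('V', 'saw'), ('NP', 'John')))
print(transduce(src))
# → ('S', ('NP', 'Mary'), ('VP', ('NP', 'John'), ('V', 'saw')))
```

Real systems attach weights to such rules, extract them automatically from parallel data, and let the decoder search for the best-scoring combination of rule applications.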