Prof Anya Belz
Academic biography
Prof Anya Belz is best known for her work in Natural Language Generation and the evaluation of Natural Language Processing systems. Anya joined DCU’s School of Computing as Professor of Computer Science (Natural Language Processing), and the SFI ADAPT Research Centre as Science Lead in the Digital Content Transformation Strand, in May 2021.
Anya’s research lies at the intersection of machine learning and natural language processing (NLP), and her projects and publications span automatic language generation, text analysis, evaluation and image description. She has chaired two large-scale international research networks on computer vision and language processing, a series of summer schools on neural machine learning, and a series of international benchmarking exercises in language generation.
Her work has produced some of the earliest methods for statistical language generation (powering robust real-world systems, e.g. for weather forecast generation and image description for the blind) and introduced comparative evaluation methods in language generation (enabling effective assessment of the relative benefit of methodological alternatives). It has also yielded integrated approaches to vision and language processing (facilitating better methods for searching, annotating and managing image and language data) and, most recently, formal methods for assessing and quantifying reproducibility in NLP.
Her research has been funded by 16 grants from the Engineering and Physical Sciences Research Council (EPSRC), the British Academy and the European Commission (EC). It has been recognised by two best paper awards and an NLG nomination for the inaugural NAACL Test of Time Award, as well as by advisory roles and senior reviewing appointments for funding bodies such as the EPSRC and the EC.
Anya’s main current research project is the EPSRC-funded ReproHum project on Testing and Quantifying the Reproducibility of Human Evaluations in Natural Language Processing. ReproHum is partnering with 20 leading NLP labs worldwide to carry out the first multi-test, multi-lab study of reproducibility in NLP.