Cognitive Modeling and Computational Linguistics (CMCL) 2024

CMCL 2024 features both oral presentations and poster presentations. The complete programme is given below.

Programme

Thursday, August 15th, 2024, Bangkok (UTC+07:00)

08:45 - 09:00 Opening Remarks

09:00 - 09:40 Invited Speaker 1: Sandro Pezzelle (chair: Tatsuki Kuribayashi)

09:40 - 10:40 Session 1 (Oral Presentations) (chair: Giulia Rambelli)

  • 11 Hierarchical syntactic structure in human-like language models. Michael Wolfman, Donald Dunagan, Jonathan Brennan, John T. Hale (archival)
  • 8 Do large language models resemble humans in language use? Zhenguang G. Cai, Xufeng Duan, David A. Haslett, Shuqi Wang, Martin J. Pickering (archival)
  • 4 Evaluating Vision-Language Models on Bistable Images. Artemis Panagopoulou, Coby Melkin, Chris Callison-Burch (archival)

10:40 - 11:00 Coffee Break

11:00 - 12:20 Session 2 (Poster Session)

  • 3 BAMBINO-LM: (Bilingual-)Human-Inspired Continual Pre-training of BabyLM. Zhewen Shen, Aditya Joshi, Ruey-Cheng Chen (archival)
  • 9 The Curious Case of Representational Alignment: Unravelling Visio-Linguistic Tasks in Emergent Communication. Tom Kouwenhoven, Max Peeperkorn, Bram Van Dijk, Tessa Verhoef (archival)
  • 10 Modeling Overregularization in Children with Small Language Models. Akari Haga, Saku Sugawara, Akiyo Fukatsu, Miyu Oba, Hiroki Ouchi, Taro Watanabe, Yohei Oseki (Findings of ACL 2024)
  • 14 Language models’ probability distributions are calibrated to cognitive profiles: An investigation of the predictive power of surprisal and entropy. Patrick Haller, Lena Sophia Bolliger, Lena Ann Jäger (Findings of ACL 2024)
  • 16 What Makes Language Models Good-enough? Daiki Asami, Saku Sugawara (Findings of ACL 2024)
  • 17 Do LLMs Agree with Humans on Emotional Associations to Nonsense Words? Yui Miyakawa, Chihaya Matsuhira, Hirotaka Kato, Takatsugu Hirayama, Takahiro Komamizu, Ichiro Ide (archival)
  • 20 Predict but Also Integrate: an Analysis of Sentence Processing Models for English and Hindi. Nina Delcaro, Luca Onnis, Raquel G. Alhama (archival)
  • 21 Transformer Attention vs Human Attention in Anaphora Resolution. Anastasia Kozlova, Albina Akhmetgareeva, Aigul Khanova, Semen Kudriavtsev, Alena Fenogenova (archival)
  • 22 Tree-Planted Transformers: Unidirectional Transformer Language Models with Implicit Syntactic Supervision. Ryo Yoshida, Taiga Someya, Yohei Oseki (Findings of ACL 2024)
  • 26 How Much Does Non-verbal Communication Conform to Entropy Rate Constancy?: A Case Study on Listener Gaze in Interaction. Yu Wang, Yang Xu, Gabriel Skantze, Hendrik Buschmeier (Findings of ACL 2024)
  • 27 Daily auditory environments in French-speaking infants: A longitudinal dataset. Estelle Hervé, Clément François, Laurent Prevot (archival)
  • 28 VerbCLIP: Improving Verb Understanding in Vision-Language Models with Compositional Structures. Hadi Wazni, Kin Ian Lo, Mehrnoosh Sadrzadeh (non-archival)
  • 40 What does Kiki look like? Cross-modal associations between speech sounds and visual shapes in vision-and-language models. Tessa Verhoef, Kiana Shahrasbi, Tom Kouwenhoven (archival)
  • 46 How Useful is Context, Actually? Comparing LLMs and Humans on Discourse Marker Prediction. Emily Sadlier-Brown, Millie Lou, Miikka Silfverberg, Carla L. Hudson Kam (archival)

12:20 - 14:00 Lunch

14:00 - 15:00 Session 3 (Oral Presentations) (chair: Giulia Rambelli)

  • 18 Large language models fail to derive atypicality inferences in a human-like manner. Charlotte Kurch, Margarita Ryzhova, Vera Demberg (archival)
  • 45 Diachronic change in verb usage statistics predicts differences in sentence processing across the lifespan. Ellis Cain, Rachel Ryskin (archival)
  • 32 How can large language models become more human?. Daphne Wang, Mehrnoosh Sadrzadeh, Miloš Stanojević, Wing-Yee Chow, Richard Breheny (archival)

15:00 - 15:40 Invited Speaker 2: Frank Keller (chair: Yohei Oseki)

15:40 - 16:00 Coffee Break

16:00 - 17:20 Session 4 (Poster Session)

  • 7 Locally Biased Transformers Better Align with Human Reading Times. Andrea Gregor de Varda, Marco Marelli (archival)
  • 19 Structural Similarities Between Language Models and Neural Response Measurements. Antonia Karamolegkou, Jiaang Li, Yova Kementchedjhieva, Mostafa Abdou, Sune Lehmann, Anders Søgaard (NeurIPS 2023 NeurReps Workshop)
  • 24 Exploring Spatial Schema Intuitions in Large Language and Vision Models. Philipp Wicke, Lennart Wachowiak (Findings of ACL 2024)
  • 25 Evaluating Lexical Aspect with Large Language Models. Bolei Ma (archival)
  • 29 Analysing and Validating Language Complexity Metrics Across South American Indigenous Languages. Felipe Ribas Serras, Miguel de Mello Carpi, Matheus Castello Branco, Marcelo Finger (archival)
  • 30 Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment. William Merrill, Zhaofeng Wu, Norihito Naka, Yoon Kim, Tal Linzen (Findings of ACL 2024)
  • 33 So many design choices: Improving and interpreting neural agent communication in signaling games. Timothée Bernard, Timothee Mickus (Findings of ACL 2023)
  • 34 The Emergence of High-Level Semantics in a Signaling Game. Timothée Bernard, Timothee Mickus, Hiroya Takamura (*SEM 2024)
  • 35 Morphology Matters: Probing the Cross-linguistic Morphological Generalization Abilities of Large Language Models through a Wug Test. Dang Thi Thao Anh, Limor Raviv, Lukas Galke (archival)
  • 37 Evaluating Grammatical Well-Formedness in Large Language Models: A Comparative Study with Human Judgments. Zhuang Qiu, Xufeng Duan, Zhenguang Cai (archival)
  • 42 Evaluating Semantic Relations in Predicting Textual Labels for Images of Abstract and Concrete Concepts. Tarun Tater, Sabine Schulte im Walde, Diego Frassinelli (archival)
  • 49 LLMs’ morphological analyses of complex FST-generated Finnish words. Anssi Moisio, Mathias Creutz, Mikko Kurimo (archival)
  • 53 PUB: A Pragmatics Understanding Benchmark for Assessing LLMs’ Pragmatics Capabilities. Settaluri Lakshmi Sravanthi, Meet Doshi, Pavan Kalyan Tankala, Rudra Murthy, Raj Dabre, Pushpak Bhattacharyya (Findings of ACL 2024)
  • ARR4 An Eye Opener Regarding Task-Based Text Gradient Saliency. Guojun Wu, Lena Sophia Bolliger, David Robert Reich, Lena Ann Jäger (archival)
  • ARR6 Improving Language Models for Emotion Analysis: Insights from Cognitive Science. Constant Bonard, Gustave Cortal (archival)

17:20 - 18:00 Invited Speaker 3: Aida Nematzadeh (chair: Jixing Li)

18:00 - 18:10 Closing Remarks