Publications pertinent to multimodal input and mathematics include:

- *Semi-synchronous Speech and Pen Input* by Yasushi Watanabe, Kenji Iwata, Ryuta Nakagawa, Koichi Shinoda and Sadaoki Furui
- *Hamex – A Handwritten and Audio Dataset of Mathematical Expressions* by Solen Quiniou, Harold Mouchère, Sebastián Peña Saldarriaga, Christian Viard-Gaudin, Emmanuel Morin, Simon Petitrenaud and Sofiane Medjkoune
- *Multimodal Mathematical Expressions Recognition: Case of Speech and Handwriting* by Sofiane Medjkoune, Harold Mouchère, Simon Petitrenaud and Christian Viard-Gaudin
- *Multimodal Interfaces That Process What Comes Naturally* by Sharon Oviatt and Philip Cohen
- *Developing Handwriting-based Intelligent Tutors to Enhance Mathematics Learning* by Lisa Anthony
- *Analysis of Mixed Natural and Symbolic Language Input in Mathematical Dialogs* by Magdalena Wolska and Ivana Kruijff-Korbayová
- *Interpretation of Mixed Language Input in a Mathematics Tutoring System* by Helmut Horacek and Magdalena Wolska

Publications pertinent to mathematical sketches and diagrams include:

- *Mathematical Sketching: An Approach to Making Dynamic Illustrations* by Joseph J. LaViola Jr.
- *A Sketch-based System for Teaching Geometry* by Gennaro Costagliola, Salvatore Cuomo, Vittorio Fuccella and Aniello Murano
- *Intelligent Understanding of Handwritten Geometry Theorem Proving* by Yingying Jiang, Feng Tian, Hongan Wang, Xiaolong Zhang, Xugang Wang and Guozhong Dai
- *Hierarchical Parsing and Recognition of Hand-sketched Diagrams* by Levent Burak Kara and Thomas F. Stahovich
- *Combining Geometry and Domain Knowledge to Interpret Hand-drawn Diagrams* by Leslie Gennari, Levent Burak Kara, Thomas F. Stahovich and Kenji Shimada
- *Multi-domain Sketch Understanding* by Christine Alvarado

Publications pertinent to multimodal input, note-taking and context include:

- *Speech Pen: Predictive Handwriting based on Ambient Multimodal Recognition* by Kazutaka Kurihara, Masataka Goto, Jun Ogata and Takeo Igarashi
- *Development of Note-taking Support System with Speech Interface* by Kohei Ota, Hiromitsu Nishizaki and Yoshihiro Sekiguchi
- *Unsupervised Vocabulary Selection for Real-time Speech Recognition of Lectures* by Paul Maergner, Alex Waibel and Ian Lane
- *Dynamic Language Model Adaptation Using Presentation Slides for Lecture Speech Recognition* by Hiroki Yamazaki, Koji Iwano, Koichi Shinoda, Sadaoki Furui and Haruo Yokota
- *Rhetorical Structure Modeling for Lecture Speech Summarization* by Pascale Fung, Justin Jian Zhang, Ricky Ho Yin Chan and Shilei Huang
- *Topic Segmentation and Retrieval System for Lecture Videos based on Spontaneous Speech Recognition* by Natsuo Yamamoto, Jun Ogata and Yasuo Ariki