Adarsh Pyarelal

  • Assistant Professor, School of Information
  • Member of the Graduate Faculty
Contact
  • adarsh@arizona.edu

Degrees

  • Ph.D. Physics
    • University of Arizona, Tucson, Arizona, United States
    • Hidden Higgses and Dark Matter at Current and Future Colliders
  • B.A. Physics
    • Reed College, Portland, Oregon, United States
    • Contribution of the neutral pion Regge trajectory to the exclusive central production of η(548) mesons in high energy proton/proton collisions

Related Links

Personal Website

Interests

No activities entered.

Courses

2025-26 Courses

  • Machine Learning
    INFO 521 (Spring 2026)

2024-25 Courses

  • Directed Research
    INFO 692 (Spring 2025)
  • Intro to Machine Learning
    INFO 521 (Spring 2025)
  • Intro to Machine Learning
    INFO 521 (Fall 2024)

2023-24 Courses

  • Intro to Machine Learning
    INFO 521 (Spring 2024)
  • Intro to Machine Learning
    ISTA 421 (Spring 2024)
  • Directed Research
    INFO 692 (Fall 2023)
  • Intro to Machine Learning
    INFO 521 (Fall 2023)
  • Intro to Machine Learning
    ISTA 421 (Fall 2023)

2022-23 Courses

  • Capstone
    INFO 698 (Spring 2023)
  • Independent Study
    INFO 699 (Spring 2023)

2016-17 Courses

  • Intro to Scientific Computing
    PHYS 105A (Spring 2017)

2015-16 Courses

  • Methods in Experimental Physics I
    PHYS 381 (Spring 2016)
  • Methods in Experimental Physics II
    PHYS 382 (Spring 2016)
  • Methods in Experimental Physics IV
    PHYS 483 (Spring 2016)

Related Links

UA Course Catalog

Scholarly Contributions

Journals/Publications

  • Erikson, J. A., Alt, M., Pyarelal, A., & Kapa, L. (2024). Science Vocabulary and Science Achievement in Children with Developmental Language Disorder and Typical Language Development. Language, Speech, and Hearing Services in Schools.
  • Kling, F., Pyarelal, A., & Su, S. (2015). Light Charged Higgs Bosons to AW/HW via Top Decay. Journal of High Energy Physics, 11, 051.

Proceedings Publications

  • Liu, C., Noriega-Atala, E., Pyarelal, A., Morrison, C. T., & Cafarella, M. (2025). Variable Extraction for Model Recovery in Scientific Literature. In AI and Scientific Discovery Workshop @ NAACL 2025, 1–12.
    Due to the increasing productivity in the scientific community, it is difficult to keep up with the literature without the assistance of AI methods. This paper evaluates various methods for extracting mathematical model variables from epidemiological studies, such as "infection rate (α)," "recovery rate (γ)," and "mortality rate (μ)." Variable extraction appears to be a basic task, but plays a pivotal role in recovering models from scientific literature. Once extracted, we can use these variables for automatic mathematical modeling, simulation, and replication of published results. We also introduce a benchmark dataset comprising manually-annotated variable descriptions and variable values extracted from scientific papers. Our analysis shows that LLM-based solutions perform the best. Despite the incremental benefits of combining rule-based extraction outputs with LLMs, the leap in performance attributed to the transfer-learning and instruction-tuning capabilities of LLMs themselves is far more significant. This investigation demonstrates the potential of LLMs to enhance automatic comprehension of scientific artifacts and for automatic model recovery and simulation.
  • Pyarelal, A., Culnan, J. M., Qamar, A., Krishnaswamy, M., Wang, Y., Jeong, C., Chen, C., Miah, M., Hormozi, S., Tong, J., et al. (2025). MultiCAT: Multimodal Communication Annotations for Teams. In Findings of the Association for Computational Linguistics: NAACL 2025, 1077–1111.
    Successful teamwork requires team members to understand each other and communicate effectively, managing multiple linguistic and paralinguistic tasks at once. Because of the potential for interrelatedness of these tasks, it is important to have the ability to make multiple types of predictions on the same dataset. Here, we introduce Multimodal Communication Annotations for Teams (MultiCAT), a speech- and text-based dataset consisting of audio recordings, automated and hand-corrected transcriptions. MultiCAT builds upon data from teams working collaboratively to save victims in a simulated search and rescue mission, and consists of annotations and benchmark results for the following tasks: (1) dialog act classification, (2) adjacency pair detection, (3) sentiment and emotion recognition, (4) closed-loop communication detection, and (5) vocal (phonetic) entrainment detection. We also present exploratory analyses on the relationship between our annotations and team outcomes. We posit that additional work on these tasks and their intersection will further improve understanding of team communication and its relation to team performance. Code & data: https://doi.org/10.5281/zenodo.14834835
  • Zhang, L., Lieffers, J., & Pyarelal, A. (2025). Enhancing Interpretability in Deep Reinforcement Learning through Semantic Clustering. In The Thirty-ninth Annual Conference on Neural Information Processing Systems.
  • Noriega-Atala, E., Vacareanu, R., Ashton, S. T., Pyarelal, A., Morrison, C. T., & Surdeanu, M. (2024). When and Where Did it Happen? An Encoder-Decoder Model to Identify Scenario Context. In Findings of the Association for Computational Linguistics: EMNLP 2024.
    We introduce a neural architecture finetuned for the task of scenario context generation: the relevant location and time of an event or entity mentioned in text. Contextualizing information extraction helps to scope the validity of automated findings when aggregating them as knowledge graphs. Our approach uses a high-quality curated dataset of time and location annotations in a corpus of epidemiology papers to train an encoder-decoder architecture. We also explored the use of data augmentation techniques during training. Our findings suggest that a relatively small fine-tuned encoder-decoder model performs better than out-of-the-box LLMs and semantic role labeling parsers to accurately predict the relevant scenario information of a particular entity or event.
  • Soares, P., Pyarelal, A., Krishnaswamy, M., Butler, E., & Barnard, K. (2024). Probabilistic Modeling of Interpersonal Coordination Processes. In Forty-first International Conference on Machine Learning (ICML 2024).
  • Zhang, L., Lieffers, J., Shivanna, P., & Pyarelal, A. (2024). Deep Reinforcement Learning with Vector Quantized Encoding. In RLC Workshop on Interpretable Policies in Reinforcement Learning (InterpPol) 2024.
  • "Miah, M., Pyarelal, A., & Huang, R. (2023, dec). Hierarchical Fusion for Online Multimodal Dialog Act Classification. In Findings of the Association for Computational Linguistics: EMNLP 2023.
  • "Qamar, A., Pyarelal, A., & Huang, R. (2023, dec). Who is Speaking? Speaker-Aware Multiparty Dialogue Act Classification. In Findings of the Association for Computational Linguistics: EMNLP 2023.
  • Pyarelal, A., Duong, E., Shibu, C. J., Soares, P., Boyd, S., Khosla, P., Pfeifer, V., Zhang, D., Andrews, E. S., Champlin, R., Raymond, V. P., Krishnaswamy, M., Morrison, C., Butler, E., & Barnard, K. (2023). The ToMCAT Dataset. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
  • Surdeanu, M., Morrison, C. T., Barnard, J. J., Bethard, S. J., Paul, M., Luo, F., Lent, H., Tang, Z., Bachman, J. A., Yadav, V., Nagesh, A., Valenzuela-Escárcega, M. A., Laparra, E., Alcock, K., Gyori, B. M., Pyarelal, A., & Sharp, R. (2019). Eidos & Delphi: From Free Text to Executable Causal Models. In Modeling the World’s Systems, 2019.

Profiles With Related Publications

  • Mihai Surdeanu
  • Clayton T Morrison
  • Jacobus J Barnard
  • Steven Bethard
