
Adarsh Pyarelal

  • Assistant Professor, School of Information
  • Member of the Graduate Faculty
Contact
  • adarsh@arizona.edu

Degrees

  • Ph.D. Physics
    • University of Arizona, Tucson, Arizona, United States
    • Hidden Higgses and Dark Matter at Current and Future Colliders
  • B.A. Physics
    • Reed College, Portland, Oregon, United States
    • Contribution of the neutral pion Regge trajectory to the exclusive central production of η(548) mesons in high energy proton/proton collisions

Related Links

Personal Website


Courses

2025-26 Courses

  • Machine Learning
    INFO 521 (Spring 2026)

2024-25 Courses

  • Directed Research
    INFO 692 (Spring 2025)
  • Intro to Machine Learning
    INFO 521 (Spring 2025)
  • Intro to Machine Learning
    INFO 521 (Fall 2024)

2023-24 Courses

  • Intro to Machine Learning
    INFO 521 (Spring 2024)
  • Intro to Machine Learning
    ISTA 421 (Spring 2024)
  • Directed Research
    INFO 692 (Fall 2023)
  • Intro to Machine Learning
    INFO 521 (Fall 2023)
  • Intro to Machine Learning
    ISTA 421 (Fall 2023)

2022-23 Courses

  • Capstone
    INFO 698 (Spring 2023)
  • Independent Study
    INFO 699 (Spring 2023)

2016-17 Courses

  • Intro to Scientific Computing
    PHYS 105A (Spring 2017)

2015-16 Courses

  • Meth Exper Physics I
    PHYS 381 (Spring 2016)
  • Meth Exper Physics II
    PHYS 382 (Spring 2016)
  • Meth Exper Physics IV
    PHYS 483 (Spring 2016)

Related Links

UA Course Catalog

Scholarly Contributions

Journals/Publications

  • Erikson, J., Alt, M., Pyarelal, A., & Kapa, L. (2025). Science Vocabulary and Science Achievement of Children With Language/Literacy Disorders and Typical Language Development. Language, Speech, and Hearing Services in Schools, 56(1). doi:10.1044/2024_LSHSS-24-00025
    More info
    Purpose: This study examined science achievement; science vocabulary knowledge; and the relationship between science vocabulary, language skills, and science achievement in school-age children with language/literacy disorders (LLDs) and typical language development (TD). Method: Thirty-nine sixth graders (11 with LLDs) completed standardized assessments and researcher-designed science vocabulary measures over Zoom. Scores for the AIMS Science, a standardized science assessment administered to all fourth-grade public-school students in Arizona, served as the outcome measure for science achievement. Linear regression analyses were performed to examine the relationships among science achievement, general language skills, and science vocabulary knowledge. Group comparisons (TD vs. LLD) were also completed for science achievement and science vocabulary measures. Results: General language skills, science vocabulary breadth, and science vocabulary definition scores uniquely predicted science achievement, as measured by AIMS Science scores. General language skills predicted performance on the science vocabulary breadth and definition tasks. Participants with LLDs scored significantly lower on science achievement and vocabulary measures relative to their peers with TD. Conclusions: Students with LLDs demonstrated poorer science achievement outcomes and more limited knowledge of science vocabulary breadth and semantic depth. Greater science vocabulary knowledge was associated with higher science test scores for children with LLDs and TD. These findings indicate that increasing science vocabulary knowledge may improve science achievement outcomes for students with LLDs or TD.
  • Erikson, J. A., Alt, M., Pyarelal, A., & Kapa, L. (2024). Science Vocabulary and Science Achievement in Children With Developmental Language Disorder and Typical Language Development. Language, Speech, and Hearing Services in Schools.
  • Pyarelal, A., & Su, S. (2020). Higgs Assisted Razor Search for Higgsinos at a 100 TeV pp Collider. Science China-physics Mechanics & Astronomy, 63(10). doi:10.1007/s11433-019-1517-5
    More info
    A 100 TeV proton-proton collider will be an extremely effective way to probe the electroweak sector of the minimal supersymmetric standard model (MSSM). In this paper, we describe a search strategy for discovering pair-produced Higgsino-like next-to-lightest supersymmetric particles (NLSPs) at a 100 TeV hadron collider that decay to Bino-like lightest supersymmetric particles (LSPs) via intermediate Z and SM Higgs bosons that in turn decay to a pair of leptons and a pair of b-quarks respectively: $$\widetilde\chi _2^0\widetilde\chi _3^0 \to ({\rm{Z}}\widetilde\chi _1^0)(h\widetilde\chi _1^0) \to bb\;\ell \ell + \widetilde\chi _1^0\widetilde\chi _1^0$$ In addition, we examine the potential for machine learning techniques to boost the power of our searches. Using this analysis, Higgsinos up to 1.4 TeV can be discovered at the 5σ level for Binos with mass of about 0.9 TeV using 3000 fb−1 of data. Additionally, Higgsinos up to 1.8 TeV can be excluded at 95% C.L. for Binos with mass of about 1.4 TeV. This search channel extends the multi-lepton search limits, especially in the region where the mass difference between the Higgsino NLSPs and the Bino LSP is small.
  • Kling, F., Li, H., Pyarelal, A., Song, H., & Su, S. (2019). Exotic Higgs decays in Type-II 2HDMs at the LHC and future 100 TeV hadron colliders. Journal of High Energy Physics, 2019(6). doi:10.1007/JHEP06(2019)031
    More info
    The exotic decay modes of non-Standard Model (SM) Higgses in models with extended Higgs sectors have the potential to serve as powerful search channels to explore the space of Two-Higgs Doublet Models (2HDMs). Once kinematically allowed, heavy Higgses could decay into pairs of light non-SM Higgses, or a non-SM Higgs and a SM gauge boson, with branching fractions that quickly dominate those of the conventional decay modes to SM particles. In this study, we focus on the prospects of probing Type-II 2HDMs at the LHC and a future 100 TeV pp collider via exotic decay channels. We study the three prominent exotic decay channels: A → HZ, A → H±W∓ and H± → HW±, and find that a 100-TeV pp collider can probe most of the region of the Type-II 2HDM parameter space that survives current theoretical and experimental constraints with sizable exotic decay branching fraction through these channels, making them complementary to the conventional decay channels for heavy non-SM Higgses.
  • Kling, F., Pyarelal, A., & Su, S. (2015). Light Charged Higgs Bosons to AW/HW via Top Decay. Journal of High Energy Physics, 11, 051.

Proceedings Publications

  • Liu, C., Noriega-Atala, E., Pyarelal, A., Morrison, C. T., & Cafarella, M. (2025). Variable Extraction for Model Recovery in Scientific Literature. In AI and Scientific Discovery Workshop @ NAACL 2025, 1--12.
    More info
    Due to the increasing productivity in the scientific community, it is difficult to keep up with the literature without the assistance of AI methods. This paper evaluates various methods for extracting mathematical model variables from epidemiological studies, such as "infection rate (α)," "recovery rate (γ)," and "mortality rate (μ)." Variable extraction appears to be a basic task, but plays a pivotal role in recovering models from scientific literature. Once extracted, we can use these variables for automatic mathematical modeling, simulation, and replication of published results. We also introduce a benchmark dataset comprising manually-annotated variable descriptions and variable values extracted from scientific papers. Our analysis shows that LLM-based solutions perform the best. Despite the incremental benefits of combining rule-based extraction outputs with LLMs, the leap in performance attributed to the transfer-learning and instruction-tuning capabilities of LLMs themselves is far more significant. This investigation demonstrates the potential of LLMs to enhance automatic comprehension of scientific artifacts and for automatic model recovery and simulation.
  • Pyarelal, A., Culnan, J. M., Qamar, A., Krishnaswamy, M., Wang, Y., Jeong, C., Chen, C., Miah, M., Hormozi, S., Tong, J., et al. (2025). MultiCAT: Multimodal Communication Annotations for Teams. In Findings of the Association for Computational Linguistics: NAACL 2025, 1077--1111.
    More info
    Successful teamwork requires team members to understand each other and communicate effectively, managing multiple linguistic and paralinguistic tasks at once. Because of the potential for interrelatedness of these tasks, it is important to have the ability to make multiple types of predictions on the same dataset. Here, we introduce Multimodal Communication Annotations for Teams (MultiCAT), a speech- and text-based dataset consisting of audio recordings, automated and hand-corrected transcriptions. MultiCAT builds upon data from teams working collaboratively to save victims in a simulated search and rescue mission, and consists of annotations and benchmark results for the following tasks: (1) dialog act classification, (2) adjacency pair detection, (3) sentiment and emotion recognition, (4) closed-loop communication detection, and (5) vocal (phonetic) entrainment detection. We also present exploratory analyses on the relationship between our annotations and team outcomes. We posit that additional work on these tasks and their intersection will further improve understanding of team communication and its relation to team performance. Code & data: https://doi.org/10.5281/zenodo.14834835
  • Zhang, L., Lieffers, J., & Pyarelal, A. (2025). Enhancing Interpretability in Deep Reinforcement Learning through Semantic Clustering. In The Thirty-ninth Annual Conference on Neural Information Processing Systems.
  • Soares, P., Pyarelal, A., Krishnaswamy, M., Butler, E., & Barnard, K. (2024). Probabilistic Modeling of Interpersonal Coordination Processes. In Forty-first International Conference on Machine Learning (ICML 2024).
  • Surdeanu, M., Morrison, C. T., Pyarelal, A., Torres Ashton, S., Vacareanu, R., & Noriega-Atala, E. (2024). When and Where Did it Happen? An Encoder-Decoder Model to Identify Scenario Context. In Findings of the Association for Computational Linguistics: EMNLP 2024.
    More info
    We introduce a neural architecture finetuned for the task of scenario context generation: the relevant location and time of an event or entity mentioned in text. Contextualizing information extraction helps to scope the validity of automated findings when aggregating them as knowledge graphs. Our approach uses a high-quality curated dataset of time and location annotations in a corpus of epidemiology papers to train an encoder-decoder architecture. We also explored the use of data augmentation techniques during training. Our findings suggest that a relatively small fine-tuned encoder-decoder model performs better than out-of-the-box LLMs and semantic role labeling parsers to accurately predict the relevant scenario information of a particular entity or event.
  • Zhang, L., Lieffers, J., Shivanna, P., & Pyarelal, A. (2024). Deep Reinforcement Learning with Vector Quantized Encoding. In RLC Workshop on Interpretable Policies in Reinforcement Learning (InterpPol) 2024.
  • "Miah, M., Pyarelal, A., & Huang, R. (2023, dec). Hierarchical Fusion for Online Multimodal Dialog Act Classification. In Findings of the Association for Computational Linguistics: EMNLP 2023.
  • "Qamar, A., Pyarelal, A., & Huang, R. (2023, dec). Who is Speaking? Speaker-Aware Multiparty Dialogue Act Classification. In Findings of the Association for Computational Linguistics: EMNLP 2023.
  • Pyarelal, A., Duong, E., Shibu, C. J., Soares, P., Boyd, S., Khosla, P., Pfeifer, V., Zhang, D., Andrews, E. S., Champlin, R., Raymond, V. P., Krishnaswamy, M., Morrison, C., Butler, E., & Barnard, K. (2023). The ToMCAT Dataset. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
  • "Pyarelal, A., Banerjee, A., & Barnard, K. (2022).

    Modular Procedural Generation for Voxel Maps

    . In Computational Theory of Mind for Human-Machine Teams, 13775.
  • "Zhang, L., Lieffers, J., & Pyarelal, A. (2022).

    Using Features at Multiple Temporal and Spatial Resolutions to Predict Human Behavior in Real Time

    . In Computational Theory of Mind for Human-Machine Teams, 13775.
  • Alexeeva, M., Kadowaki, J., Morrison, C. T., Pyarelal, A., Sharp, R., & Valenzuela-Escárcega, M. A. (2020). MathAlign: Linking Formula Identifiers to their Contextual Natural Language Descriptions. In 12th International Conference on Language Resources and Evaluation, LREC 2020, 2204-2212.
    More info
    Extending machine reading approaches to extract mathematical concepts and their descriptions is useful for a variety of tasks, ranging from mathematical information retrieval to increasing accessibility of scientific documents for the visually impaired. This entails segmenting mathematical formulae into identifiers and linking them to their natural language descriptions. We propose a rule-based approach for this task, which extracts LaTeX representations of formula identifiers and links them to their in-text descriptions, given only the original PDF and the location of the formula of interest. We also present a novel evaluation dataset for this task, as well as the tool used to create it.
  • Surdeanu, M., Morrison, C. T., Barnard, J. J., Bethard, S. J., Paul, M., Luo, F., Lent, H., Tang, Z., Bachman, J. A., Yadav, V., Nagesh, A., Valenzuela-Escárcega, M. A., Laparra, E., Alcock, K., Gyori, B. M., Pyarelal, A., & Sharp, R. (2019). Eidos & Delphi: From Free Text to Executable Causal Models. In Modeling the World’s Systems, 2019.

Profiles With Related Publications

  • Shufang Su
  • Clayton T Morrison
  • Mihai Surdeanu
  • Steven Bethard
  • Jacobus J Barnard

© 2026 The Arizona Board of Regents on behalf of The University of Arizona.