
Dalal N Alharthi

  • Assistant Professor
  • Member of the Graduate Faculty
  • Assistant Professor, BIO5 Institute
Contact
  • dalharthi@arizona.edu

Biography

Assistant Professor at the University of Arizona with a Ph.D. in Computer Science from the University of California, Irvine. Equipped with work experience in both academia and industry. Strong engineering and architecture skills in Cloud Computing (AWS, Azure, and GCP); Cloud Security; Container Security; Automation; Network Security; Violent Python; Palo Alto Networks; Active Directory; Web Development/Security; Pentesting/Ethical Hacking; Digital Forensics and Incident Response (DFIR); Cybersecurity Strategy, Standards, Policies, and Controls; Awareness Training Programs; and more.

Prior to joining the University of Arizona, Dr. Alharthi worked as a Cloud Security Engineer at Farmers Insurance, a Resident Engineer at Palo Alto Networks, and a Prisma Cloud Consultant at Dell. She was awarded the Division of Teaching Excellence and Innovation (DTEI) Fellowship by the University of California, Irvine, and obtained both CompTIA Security+ and AWS Solutions Architect certifications.

Dr. Alharthi’s research interests include Cloud Security; Container Security; Penetration Testing; Digital Forensics and Incident Response (DFIR); Human-Computer Interaction (HCI); Privacy; Cybersecurity Education; and Machine Learning. She is also interested in research at the intersection of Cybersecurity and Public Administration; Cybersecurity and Business Administration; and Cybersecurity and Education.

Degrees

  • Ph.D. Computer Science
    • University of California Irvine

Work Experience

  • Palo Alto Networks (PAN) and Dell Inc. (2020 - 2021)
  • Farmers Insurance (2019 - 2020)

Awards

  • Senior Fellow for the Mathematics of Intelligences program
    • The Institute for Pure and Applied Mathematics (IPAM) at UCLA, Spring 2025
  • National-level recognition for Exceptional Contributions to the Cybersecurity Community
    • Women in Cybersecurity (WiCyS) 2024, Spring 2024
  • Cybersecurity Focus Area Distinguished Paper Award
    • ISCAP/EDSIG Conference, Fall 2023
  • Mentoring Future Scholars Award
    • University of Arizona, Fall 2023 (Award Nominee)
  • Capacity Building Award
    • UArizona RII RLI Program, Summer 2023

Licensure & Certification

  • Division of Teaching Excellence and Innovation (DTEI) Fellow, University of California Irvine (2020)
  • Cybersecurity Boot Camp (6-month program), University of California Irvine (2019)
  • Entelligence Certified IT Professional, Entelligence (2020)
  • AWS Certified Solutions Architect, Amazon Web Services (AWS) (2020)
  • CompTIA Security+, CompTIA (2020)


Interests

Research

Cybersecurity in general, Cloud Security, Cloud Penetration Testing, Penetration Testing, Transportation Systems Engineering, Network Security, Human-Computer Interaction (HCI), Social Engineering, Usable Security and Security Policies/Procedures, Digital Forensics and Incident Response (DFIR), Cryptography, Automation, Intelligent Vehicles, Management Information Systems (MIS), Leadership

Courses

2025-26 Courses

  • Cloud Security
    CYBV 579 (Spring 2026)
  • Violent Python
    CYBV 473 (Spring 2026)
  • Cloud Security
    CYBV 579 (Fall 2025)
  • Violent Python
    CYBV 473 (Fall 2025)

2024-25 Courses

  • Intro to Security Scripting
    CYBV 312 (Summer I 2025)
  • Cloud Security
    CYBV 579 (Spring 2025)
  • Independent Study
    CYBV 599 (Spring 2025)
  • Violent Python
    CYBV 473 (Spring 2025)
  • Cloud Security
    CYBV 579 (Fall 2024)
  • Violent Python
    CYBV 473 (Fall 2024)

2023-24 Courses

  • Intro to Security Scripting
    CYBV 312 (Summer I 2024)
  • Intro Amazon Web Services
    NETV 381 (Spring 2024)
  • Violent Python
    CYBV 473 (Spring 2024)
  • Violent Python
    CYBV 473 (Fall 2023)

2022-23 Courses

  • Intro to Security Scripting
    CYBV 312 (Summer I 2023)
  • Capstone in Cyber Operations
    CYBV 498 (Spring 2023)
  • Capstone in Cyber Operations
    CYBV 498 (Fall 2022)
  • Violent Python
    CYBV 473 (Fall 2022)

2021-22 Courses

  • Cyber Warfare
    CYBV 480 (Summer I 2022)
  • Capstone in Cyber Operations
    CYBV 498 (Spring 2022)
  • Violent Python
    CYBV 473 (Spring 2022)
  • Cyber Warfare
    CYBV 480 (Fall 2021)
  • Violent Python
    CYBV 473 (Fall 2021)


Scholarly Contributions

Journals/Publications

  • Shahen Shah, A. F., Karabulut, M. A., Kamruzzaman, A., Alharthi, D. N., & Bradford, P. G. (2025). A Survey on Artificial Intelligence and Blockchain Clustering for Enhanced Security in 6G Wireless Networks. Computers, Materials & Continua (CMC).
    More info
    The advent of 6G wireless technology, which offers previously unattainable data rates, very low latency, and compatibility with a wide range of communication devices, promises to transform the networking environment completely. The 6G wireless proposals aim to expand wireless communication’s capabilities well beyond current levels. This technology is expected to revolutionize how we communicate, connect, and use the power of the digital world. However, maintaining secure and efficient data management becomes crucial as 6G networks grow in size and complexity. This study investigates blockchain clustering and artificial intelligence (AI) approaches to ensure reliable and trustworthy communication in 6G. First, the mechanisms and protocols of blockchain clustering that provide a trusted and effective communication infrastructure for 6G networks are presented. Then, AI techniques for network security in 6G are studied. The integration of AI and blockchain to ensure energy efficiency in 6G networks is addressed. Next, this paper presents how 6G’s speed and bandwidth enable AI and the easy management of virtualized systems. Terahertz connections are sufficient for virtualized systems to move compute environments as well as data. For instance, a computing environment can follow potential security violations while leveraging AI. Such virtual machines can store their findings in blockchains. In 6G scenarios, case studies and real-world applications of AI-powered secure blockchain clustering are given. Moreover, challenges and promising future research opportunities are highlighted. These challenges and opportunities provide insights from the most recent developments and point to areas where AI and blockchain further ensure security and efficiency in 6G networks.
  • Wagner, P. E., & Alharthi, D. N. (2024). Comprehensive Cybersecurity Programs: Case-Study Analysis of a Four-Year Cybersecurity Program at a Secondary Education Institution. Cybersecurity Pedagogy and Practice Journal.
  • Wagner, P. E., & Alharthi, D. N. (2023). Leveraging VR/AR/MR/XR Technologies to Improve Cybersecurity Education, Training, and Operations. Journal of Cybersecurity Education, Research and Practice (JCERP).
  • Alharthi, D. N., & Regan, A. C. (2021). A Literature Survey and Analysis on Social Engineering Defense Mechanisms and Infosec Policies. International Journal of Network Security & Its Applications, 13(2), 41-61. doi:10.5121/ijnsa.2021.13204
    More info
    Social engineering attacks can be severe and hard to detect. Therefore, to prevent such attacks, organizations should be aware of social engineering defense mechanisms and security policies. To that end, the authors developed a taxonomy of social engineering defense mechanisms, designed a survey to measure employee awareness of these mechanisms, proposed a model of Social Engineering InfoSec Policies (SE-IPs), and designed a survey to measure the incorporation level of these SE-IPs. After analyzing the data from the first survey, the authors found that more than half of employees are not aware of social engineering attacks. The paper also analyzed a second set of survey data, which found that on average, organizations incorporated just over fifty percent of the identified formal SE-IPs. Such worrisome results show that organizations are vulnerable to social engineering attacks, and serious steps need to be taken to elevate awareness against these emerging security threats.

Proceedings Publications

  • Alharthi, D., & Yasaei, R. (2025). LLM-Powered Automated Cloud Forensics: From Log Analysis to Investigation. In 18th IEEE International Conference on Cloud Computing, CLOUD 2025.
    More info
    Cloud forensics is a crucial yet challenging field, as traditional forensic techniques struggle to handle the large-scale, dynamic nature of cloud environments. Manual forensic analysis is time-consuming, error-prone, and often fails to detect evolving cyber threats. This paper presents a novel tool leveraging Large Language Models (LLMs) to fully automate cloud forensic investigations. Our approach utilizes few-shot learning to classify log data, extract forensic intelligence, and reconstruct attack timelines. We evaluate LLM-based automation against traditional machine learning models, including Random Forest, XGBoost, and Gradient Boosting, using cloud forensic log datasets. Experimental results demonstrate that LLMs improve forensic accuracy, precision, and recall while reducing the need for extensive feature engineering. However, challenges such as hallucination risks, adversarial manipulation, and forensic explainability must be addressed to ensure the reliability of AI-driven investigations. To mitigate these risks, we explore Retrieval-Augmented Generation (RAG) for context-aware forensic intelligence and propose hybrid AI models integrating rule-based forensic validation. Our findings highlight the potential of LLM-driven forensic automation to enhance cloud security operations while outlining key areas for future research, including adversarial robustness, forensic transparency, and multi-cloud scalability.
  • Musa, Y., Tantawi, K., Mikhail, M., Ma, J., & Alharthi, D. N. (2025). Semiconductor Manufacturing Industry: Assessment, Challenges, and Future Trends. In The 2nd International Conference on Advanced Innovations in Smart Cities (ICAISC25).
    More info
    In this work we evaluate the state of the semiconductor manufacturing industry and its challenges and trends. Future trends in the industry are analyzed from three perspectives: the evolution of Industry 4.0, advances in semiconductor materials, and the impact of the COVID-19 pandemic. The semiconductor manufacturing industry witnessed an acute decline in the United States and other regions in the two decades prior to the pandemic; the decline was only uncovered by the chip shortage of 2021 that resulted from severe supply chain disruption. As a result of the evolution of the fourth generation of industry (Industry 4.0), trends in semiconductor manufacturing include robotization, which has made the industry the largest market for industrial robotics since 2020, and an all-time peak in globalization. The semiconductor industry is highly globalized, with corporations from different parts of the world taking part in the production of the final product. Although some materials such as carbon and gallium nitride show promising trends toward replacing silicon as the material of choice, it will likely be two or three decades before a semiconductor material is able to replace silicon. Challenges for the industry include the shortage of a trained workforce and added inter-country restrictions that may hinder the globalization of the industry.
  • Alharthi, D. N. (2024). Cloud Incident Response Framework. In IEEE 15th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON24).
  • Alharthi, D. N., & Abbas, M. (2024). A Zero-Trust Reinforcement Learning Policy for Mitigating Cyberattacks on Emergency Vehicle Preemption Systems. In IEEE 15th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON24).
  • Alharthi, D. N. (2023). Secure Cloud Migration Strategy (SCMS): A Safe Journey to the Cloud. In the 18th International Conference on Cyber Warfare and Security.
    More info
    The paper proposes a comprehensive Secure Cloud Migration Strategy (SCMS) that organizations can adopt to secure their cloud environments. The proposed SCMS consists of three main repeatable phases/processes: preparation; readiness and adoption; and testing. Within these phases, the author addresses tasks/projects from the perspectives of the three cybersecurity teams: the blue team (defenders), the red team (attackers), and the yellow team (developers). The Cloud Center of Excellence (CCoE) can use this as a checklist that covers defending the cloud; attacking and abusing the cloud; and applying security shift-left concepts. In addition, the paper addresses the necessary cloud security documents/runbooks that should be developed and automated, such as an incident response runbook, disaster recovery planning, a risk assessment methodology, and cloud security controls. The ultimate goal is to support the development of a proper security layer for an efficient cloud computing system, to help harden organizations’ cloud infrastructures and raise the level of cloud security awareness, which is significant to national security. Furthermore, practitioners and researchers can use the proposed solutions to replicate and/or extend the work.
  • Collier, H., MORTON, C., Alharthi, D. N., & Kleiner, J. (2023). Cultural Influences and Information Security. In ECCWS 22nd European Conference on Cyber Warfare and Security.
    More info
    The end goal of this research is to use culture, along with behaviour and social media usage, as new metrics in measuring a person’s susceptibility to cybercrime. This information can then be used by information security teams to better prepare individuals to defend themselves against cyber threats. This paper is the start of the research process into how culture impacts a person’s susceptibility to cybercrime.
  • O'Mara, A., Alsamadi, I., Aleroud, A., & Alharthi, D. N. (2023). Phishing Detection Based on Webpage Content: Static and Dynamic Analysis. In the IEEE Third Intelligent Cybersecurity Conference (ICSC2023).
  • Straight, R. M., Alharthi, D. N., & Honomichl, R. J. (2023). Bridging Complexity and Distance: Designing an Online MS Program in Cyber and Information Operations. In the International Conference of Education, Research and Innovation (ICERI).
  • Wagner, P. E., & Alharthi, D. N. (2023). Comprehensive Cybersecurity Programs: Case-Study Analysis of a Four-Year Cybersecurity Program at a Secondary Education Institution. In The Computing Education and Information Systems Applied Research (ISCAP) Conference.
  • Alharthi, D. N., & Regan, A. C. (2021). Social Engineering Infosec Policies (SE-IPS). In Computer Science & Information Technology (CS & IT).
    More info
    The sudden increase in employees working primarily or even exclusively at home has generated unique societal and economic circumstances which makes the protection of information assets a major problem for organizations. The application of security policies is essential for mitigating the risk of social engineering attacks. However, incorporating and enforcing successful security policies in an organization is not a straightforward task. To that end, this paper develops a model of Social Engineering InfoSec Policies (SE-IPs) and investigates the incorporation of those SE-IPs in organizations. This paper proposes a customizable model of SE-IPs that can be adopted by a wide variety of organizations. The authors designed and distributed a survey to measure the incorporation level of formal SE-IPs in organizations. After collecting and analyzing the data which included over fifteen hundred responses, the authors found that on average, organizations incorporated just over fifty percent of the identified formal Social Engineering InfoSec Policies.
  • Alharthi, D., & Regan, A. (2020). Social Engineering Defense Mechanisms: A Taxonomy and a Survey of Employees’ Awareness Level. In Intelligent Computing: Proceedings of the 2020 Computing Conference.
    More info
    In the information security chain, humans have become the weakest point, and social engineers take advantage of that fact by psychologically manipulating people to persuade them to disclose sensitive information or execute malicious acts. Social engineering security attacks can be severe and hard to detect. Therefore, to prevent such attacks, organizations and their employees should be aware of the defense mechanisms that can mitigate the risk of these attacks. To that end, the authors (1) developed a taxonomy of social engineering defense mechanisms and also (2) designed and distributed a survey to measure employees’ level of awareness of these mechanisms. To develop the taxonomy, the authors reviewed the related literature and extracted the main defense mechanisms. To measure employees’ level of awareness of social engineering defense mechanisms, the authors designed and distributed a survey in which 791 employees participated. Finally, after collecting and analyzing the data, the authors found that more than half of the surveyed employees are not aware of social engineering attacks and their defense mechanisms. Such a worrisome result shows that employees and organizations are extremely vulnerable to such attacks, and serious steps need to be taken to elevate the employees’ awareness level against these emerging security threats.
  • Alharthi, D., Hammad, M., & Regan, A. (2020). A Taxonomy of Social Engineering Defense Mechanisms. In Future of Information and Communication Conference.
    More info
    Humans have become the weakest point in the information security chain, and social engineers take advantage of that fact. Social engineers manipulate people psychologically to convince them to divulge sensitive information or to perform malicious acts. Social engineering security attacks can be severe and difficult to detect. Therefore, to prevent these attacks, employees and their organizations should be aware of relevant defense mechanisms. This research develops a taxonomy of social engineering defense mechanisms that can be used to develop educational materials for use in various kinds of organizations. To develop the taxonomy, the authors conducted a systematic literature review of related research efforts and extracted the main target points of social engineers and the defense mechanisms regarding each target point.
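The few-shot log-classification step described in the CLOUD 2025 cloud-forensics abstract above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the labels, example log lines, and the `build_few_shot_prompt` helper are all hypothetical, and the assembled prompt would be handed to whatever LLM the investigation pipeline uses.

```python
# Hypothetical few-shot prompt assembly for cloud-log classification.
# The labels, log lines, and helper name are illustrative only, not
# taken from the paper's actual implementation.

FEW_SHOT_EXAMPLES = [
    ("ConsoleLogin failure from 203.0.113.7, 50 attempts in 60s", "brute_force"),
    ("PutBucketPolicy set Principal:* on s3://prod-data", "exfiltration_risk"),
    ("DescribeInstances from known admin role during business hours", "benign"),
]

def build_few_shot_prompt(log_entry: str) -> str:
    """Assemble a classification prompt from a few labeled example logs."""
    lines = [
        "Classify each cloud log entry as brute_force, exfiltration_risk, or benign.",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines += [f"Log: {text}", f"Label: {label}", ""]
    # The unlabeled entry goes last; the model completes the final label.
    lines += [f"Log: {log_entry}", "Label:"]
    return "\n".join(lines)
```

A real pipeline would send this prompt to the chosen model and parse the completed label, falling back to rule-based validation when the answer lies outside the label set.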

Presentations

  • Alharthi, D. N. (2025). Keynote talk: Cloud Security and Forensics in the GenAI Era: A New Frontier. The IEEE International Conference on Next Generation Communication & Information Processing.
  • Alharthi, D. N., & Rainbow, J. (2025). Presenting Current Research: WellCATS: A Faculty-Led Initiative for Enhancing Student Wellbeing at UA. UArizona Student Success Conference.
  • McAllister, K. S., & Alharthi, D. N. (2025). Responsible Conduct of Research Workshop: The Ethical Use of Artificial Intelligence in Research. UArizona Research and Partnership, Responsible Conduct of Research (RCR) Program; presented in person in the BIO5 building.
    More info
    This line of work began in Spring 2024 with the goal of supporting responsible and effective use of AI and large language models in research. It focuses on how to select appropriate LLMs for different research contexts, develop sound prompt engineering practices, and critically engage with the ethical use of AI in scholarly work. An initial iteration of this material was presented in Fall 2025, with revisions underway to incorporate feedback and evolving standards, and a subsequent presentation scheduled for Spring 2026.
  • Alharthi, D. N. (2024). A Collective Intelligence Framework for Cloud Security. UCLA IPAM Mathematics of Intelligences.
    More info
    The rapid transition from on-premises infrastructure to cloud environments has revolutionized how organizations manage data and operations. However, this shift introduces unique security challenges, such as real-time vulnerability assessment, incident response, and digital forensics in a highly dynamic and distributed ecosystem. This talk will present a novel framework that leverages collective intelligence to address these challenges in cloud security. By utilizing a multi-agent system, we propose an approach where agents collaborate, share insights, and make decentralized decisions to improve threat detection and response.
  • Alharthi, D. N. (2024). A Collective Intelligence Framework for Cloud Security. UCLA IPAM Workshop: Modeling Multi-Scale Collective Intelligences.
    More info
    The rapid transition from on-premises infrastructure to cloud environments has revolutionized how organizations manage data and operations. However, this shift introduces unique security challenges, such as real-time vulnerability assessment, incident response, and digital forensics in a highly dynamic and distributed ecosystem. This talk will present a novel framework that leverages collective intelligence to address these challenges in cloud security. By utilizing a multi-agent system, we propose an approach where agents collaborate, share insights, and make decentralized decisions to improve threat detection and response.
  • Alharthi, D. N. (2024). Navigating the Future of Cybersecurity in the Age of Cloud, Containers, and AI. Women in Data Science (WiDS) Conference.
  • Alharthi, D. N. (2024). Optimizing DFIR in Public Cloud: AWS, Azure, and GCP. Women in Cybersecurity (WiCyS) 2024.
  • Alharthi, D. N. (2024). Towards Secure Cloud Environments: Hands-on with AWS, Azure, and GCP. UArizona Women’s Hackathon.
  • Alharthi, D. N. (2023). Attacking and Defending Public Cloud Environments. Women in Cybersecurity (WiCyS) 2023. Denver, CO.
  • Galde, M. R., Wagner, P. E., & Alharthi, D. N. (2023). Who's Watching Who: Hacking IP Cameras. CactusCon11 2023. Mesa, AZ.

Others

  • Alharthi, D. N. (2025, February). NSF & UCLA IPAM White Paper: Mathematics of Intelligences (MOI). https://www.ipam.ucla.edu/reports/white-paper-mathematics-of-intelligences-2024/
    More info
    The quest to understand intelligence is one of the great scientific endeavors, on par with quests to understand the origins of life or the foundations of the physical world. Several scientific communities have made significant progress on this quest. Relevant fields like animal cognition, cognitive science, collective intelligence, and artificial intelligence (AI), as well as the social and behavioral sciences, have generated a wide variety of new experimental and observational data. They have also built mathematical and computational models of impressive sophistication and performance. Yet these communities remain largely disconnected; in no small part, this is because they lack a common framework and a shared (mathematical) language. The IPAM Long Program on the Mathematics of Intelligences (MOI) aimed to bring these communities together with mathematicians to work toward the mathematical foundations necessary for transformational advances in our understanding of natural and artificial intelligences. This white paper was drafted at the culminating retreat of the Long Program and synthesizes the view of the field developed by its core participants. That said, it is not meant to be a comprehensive account of everything that happened at the Long Program. Likewise, the views expressed here do not necessarily reflect the views of IPAM or all the authors. “Intelligence” is an ambiguous term. It can refer to an “intelligent system,” as when we speak of an “artificial intelligence.” It can also refer more broadly to a general capacity; roughly speaking, something that enables (intelligent) systems to solve problems more easily, whether they be organisms, collectives, or artificial agents. Although the former usage is returning to prominence in the age of Large Language Models (LLMs), we will generally refer to “intelligent systems” rather than “intelligences” to avoid confusion. We also (attempt to) avoid the anthropocentric bias that makes humans the paradigm of intelligence. MOI explored intelligence as a multifaceted, multiscale phenomenon; it is for this reason that the Long Program was called the Mathematics of Intelligences, with the plural form embracing this multiplicity and variety.
  • Alharthi, D. N. (2025, July). Impact of AI on workers in the United States. https://fundforhumanity.org/national-science-foundation-ai-worker-impact-report/
    More info
    NSF-affiliated research report examining the implications of AI and intelligent systems, developed to inform research and workforce discussions.
  • Alharthi, D., & Garcia, I. R. (2025, Fall). A Call to Action for a Secure-by-Design Generative AI Paradigm. arXiv. https://arxiv.org/abs/2510.00451v1
    More info
    Large language models have gained widespread prominence, yet their vulnerability to prompt injection and other adversarial attacks remains a critical concern. This paper argues for a security-by-design AI paradigm that proactively mitigates LLM vulnerabilities while enhancing performance. To achieve this, we introduce PromptShield, an ontology-driven framework that ensures deterministic and secure prompt interactions. It standardizes user inputs through semantic validation, eliminating ambiguity and mitigating adversarial manipulation. To assess PromptShield's security and performance capabilities, we conducted an experiment on an agent-based system to analyze cloud logs within Amazon Web Services (AWS), containing 493 distinct events related to malicious activities and anomalies. By simulating prompt injection attacks and assessing the impact of deploying PromptShield, our results demonstrate a significant improvement in model security and performance, achieving precision, recall, and F1 scores of approximately 94%. Notably, the ontology-based framework not only mitigates adversarial threats but also enhances the overall performance and reliability of the system. Furthermore, PromptShield's modular and adaptable design ensures its applicability beyond cloud security, making it a robust solution for safeguarding generative AI applications across various domains. By laying the groundwork for AI safety standards and informing future policy development, this work stimulates a crucial dialogue on the pivotal role of deterministic prompt engineering and ontology-based validation in ensuring the safe and responsible deployment of LLMs in high-stakes environments. 
  • Alharthi, D., & Garcia, I. R. (2025, Fall). Cloud Investigation Automation Framework (CIAF): An AI-Driven Approach to Cloud Forensics. arXiv. https://arxiv.org/abs/2510.00452v1
    More info
    Large Language Models (LLMs) have gained prominence in domains including cloud security and forensics. Yet cloud forensic investigations still rely on manual analysis, making them time-consuming and error-prone. LLMs can mimic human reasoning, offering a pathway to automating cloud log analysis. To address this, we introduce the Cloud Investigation Automation Framework (CIAF), an ontology-driven framework that systematically investigates cloud forensic logs while improving efficiency and accuracy. CIAF standardizes user inputs through semantic validation, eliminating ambiguity and ensuring consistency in log interpretation. This not only enhances data quality but also provides investigators with reliable, standardized information for decision-making. To evaluate security and performance, we analyzed Microsoft Azure logs containing ransomware-related events. By simulating attacks and assessing CIAF's impact, results showed significant improvement in ransomware detection, achieving precision, recall, and F1 scores of 93 percent. CIAF's modular, adaptable design extends beyond ransomware, making it a robust solution for diverse cyberattacks. By laying the foundation for standardized forensic methodologies and informing future AI-driven automation, this work underscores the role of deterministic prompt engineering and ontology-based validation in enhancing cloud forensic investigations. These advancements improve cloud security while paving the way for efficient, automated forensic workflows. 
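The ontology-driven semantic validation that the PromptShield and CIAF abstracts describe can be illustrated with a minimal sketch. The allow-listed (action, resource) pairs and the `validate_request` helper below are invented for illustration and do not reflect either paper's actual ontology; the idea is simply that a request is rejected unless it maps to a known ontology entry, which blocks injected instructions falling outside the expected vocabulary.

```python
# Illustrative ontology-style validation of prompt requests.
# The allowed (action, resource) pairs are invented for this sketch,
# not taken from PromptShield or CIAF.

ALLOWED_REQUESTS = {
    ("summarize", "cloudtrail_log"),
    ("count", "failed_logins"),
    ("list", "iam_changes"),
}

def validate_request(action: str, resource: str) -> bool:
    """Accept a request only if it maps onto a known ontology entry."""
    key = (action.lower().strip(), resource.lower().strip())
    return key in ALLOWED_REQUESTS
```

Because validation happens before any text reaches the model, a prompt-injection payload such as "ignore previous instructions" is simply an unknown action and is refused deterministically.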

Profiles With Related Publications

  • Paul E Wagner
  • Robert J Honomichl
  • Ryan M Straight
  • Jessica Rainbow
  • Kenneth S McAllister
  • Michael R Galde

© 2026 The Arizona Board of Regents on behalf of The University of Arizona.