Artificial intelligence presents meaningful opportunities to enhance teaching, learning, and research across higher education. As these technologies become more integrated into academic environments, it is essential to approach their use with care, clarity, and a commitment to shared values. This page provides guidance on ethical decision-making, institutional policies, and responsible practices that align with Arizona State University's Principled Innovation framework.

AI ethics, policy, and digital trust
ASU's AI Digital Trust Guidelines
The ASU AI Digital Trust Guidelines provide a foundation for ethical and transparent use of generative AI technologies across the university. Developed in collaboration with the AI Digital Trust Community of Practice, these guidelines outline key considerations for AI integration in alignment with ASU values. The recommendations reflect input from multiple university partners and have been reviewed and approved by Enterprise Technology Digital Trust, Cybersecurity, AI Acceleration, Learning Experience, and the Office of General Counsel. This resource supports individuals and teams in making informed, responsible decisions about AI use in academic, research, and operational settings.
Resources
AI Syllabus Guidelines
The AI Syllabus Guidelines were created by the Office of the University Provost to support faculty in clearly communicating expectations around the use of generative AI in their courses. This resource provides three adaptable sample statements that reflect different instructional approaches: permitted use, conditional use, and no use. These models help instructors align AI policy with their course goals while promoting academic integrity, transparency, and consistency across the learning environment.
FAQs
What does ethical use of AI in education look like?
Ethical use of AI in education involves transparency, fairness, privacy, and accountability. Instructors and staff should consider how AI tools affect decision-making, student agency, and data privacy. It is important to disclose AI use, align with institutional policies, and avoid over-reliance on AI for tasks that require human judgment or empathy.
How should instructors communicate AI expectations to students?
Clearly communicating AI expectations in the syllabus and in class discussions helps create a shared understanding. The AI Syllabus Guidelines offer adaptable statements for different levels of AI integration, from full use to limited or prohibited use. Instructors should explain when and how AI is permitted, what constitutes misuse, and how attribution should be handled.
Are the AI tools listed on ai.asu.edu approved for use with student data?
Yes. AI tools listed on ai.asu.edu/ai-tools have undergone the VITRA (Vendor IT Risk Assessment) process and are approved for use with low-risk data, including FERPA-protected student information. When selecting new tools, it is important to verify their privacy policies and ensure they align with ASU's digital trust and data security standards.
How does ASU's Principled Innovation framework apply to AI?
ASU's Principled Innovation framework encourages decisions that are reflective, inclusive, and value-driven. In the context of AI, this means thoughtfully considering the human and societal impacts of technology use, fostering trust, and centering learner well-being. Educators are encouraged to model ethical AI engagement and help students critically evaluate these tools.
What should I do before using an AI tool that is not listed on ai.asu.edu?
Before using an unlisted AI tool, review its data practices, terms of service, and accessibility. If the tool may process student data or be integrated into academic workflows, it should go through the VITRA process. You can begin this process through ASU's Get Protected site, or contact your unit's IT representative for guidance.