
Ethical Use of Artificial Intelligence

                      

The Arkansas State University Guidelines and Considerations for the Ethical Use of Artificial Intelligence (AI) were developed in the spring and summer of 2024 by the campus Artificial Intelligence Policy Committee, a group of faculty and administrators from across campus. The Guidelines were informed by commonalities among policies at a variety of other institutions, and they incorporate existing A-State tools and platforms. The intent is that these guidelines accommodate a diverse spectrum of opinions regarding the use of AI, and that faculty retain the authority to determine the scope of acceptable AI usage in their courses.

Faculty are advised to include their AI policies in their syllabi and are encouraged to use the provided Guidelines to determine what those policies are and how to state them clearly.

Arkansas State University Guidelines and Considerations for the Ethical Use of Artificial Intelligence

Overview and Rationale
Within the current educational landscape, the integration of Artificial Intelligence (AI) technologies holds the potential to enhance all aspects of the teaching and learning process and may streamline various administrative processes as well. To operate effectively and competitively as an institution of higher education, Arkansas State University must maximize opportunities to benefit from these rapidly evolving technologies. At the same time, however, A-State must conscientiously address ethical considerations to ensure the responsible and fair use of these tools, in alignment with our institution's fundamental values and principles.

Human-Centered Approach
Institutions of higher education, including Arkansas State University, were founded and designed to enhance human potential, and human intellect and capabilities clearly remain central to the mission of this institution. While artificial intelligence (AI) has made significant advancements in replicating certain human abilities, it lacks most aspects of human cognition, emotion, creativity, and consciousness. Consequently, A-State's approach to AI should likewise be human-centered, giving precedence to AI systems designed around human needs. Such systems should be discussed openly, and their use crafted to enhance human intelligence and expertise rather than to supplant them.

Ideal human-centered practices regarding AI use include, but are not limited to: 

  • Gathering input from students, faculty, and various stakeholders to guide the enhancement and adjustment of AI systems in accordance with human-centered principles and preferences.
  • Guaranteeing that interventions driven by AI uphold and foster individual autonomy, offering options for flexibility, decision-making, and authority.
  • Acknowledging and upholding the independence of faculty and educational institutions in determining the integration of AI technologies into educational practices and administrative functions.
  • Empowering colleges, departments, programs, and faculty members with the freedom and assistance necessary to personalize and match requirements and policies regarding AI use to their course and program outcomes and to the needs of their students and circumstances of specific instructional settings.
  • Supporting faculty members and students who conduct research into improving AI, or who use AI as a tool to enhance their discoveries and innovations, since AI is an important element of the current and future research enterprise in many fields of study, from art to engineering to the life and materials sciences. Relatedly, engaging students in AI-focused or AI-assisted research projects will prepare them for the current and future workforce.

 

Communicating Diverse Approaches and Needs
Amid the evolving contexts of higher education and the dynamic AI market, the adoption of standardized, universally applicable AI policies proves impractical in the long run. Such policies may fail to accommodate the diverse perspectives instructors hold regarding AI integration in their teaching practices. Given the diverse array of roles and potential applications for AI within an institution of higher education, it is imperative to accommodate a variety of approaches that encompass the needs and requirements of all involved in the teaching and learning process.

Establishing transparency and clearly communicating expectations will foster an atmosphere of trust within a fair and inclusive approach. It is the responsibility of each college, department, and individual faculty member to establish clear channels of communication with students regarding the use of generative AI in their programs and courses. Explicitly communicating the extent to which AI may or may not be used by students in each course or program will provide concrete guidelines regarding the ethical use of AI by students for academic purposes. 

Policy Considerations
As colleges, departments, and faculty members develop AI policies and syllabus statements, their reasoning should be rooted in the intellectual content and expectations of their courses and disciplines. They should contemplate questions such as: What might students gain or lose through the integration of generative AI in the course? What aspects of AI and intellectual development do they aim to convey to students? Instructors are encouraged to articulate the rationale behind their policies to their students and to initiate discussions about AI use with them. Such a process offers opportunities to engage students in discussion and to help them expand their AI literacy.

AI Literacy
Generative AI is rapidly evolving as a novel information resource, and the guidelines for citation and its overall utilization are still in flux. Instructing students on when and how to use AI tools and how to cite content generated from AI sources alleviates the cognitive burden on students who may be uncertain about adhering to the policy regarding Academic Misconduct as it appears in the A-State Student Handbook.

Discussing AI use with students also gives faculty members the opportunity to point out its limitations. In addition to lacking consciousness and emotion, and exhibiting creativity that falls short of a human's, current generative AI tools produce output that may include incorrect citations, various types of bias, and other inaccuracies. As students become cognizant of these limitations, they are empowered to adopt AI tools proactively in a manner that is ethically sound, efficient, and responsible.

Policy Statements Defining Acceptable Use
There are several possible approaches for addressing AI use at the program or course level. Under each approach, statements appearing in syllabi or within specific assignment guidelines should fully explain the circumstances under which students may use generative AI. Furthermore, if AI tools are restricted in any capacity, it is crucial to engage in discussions with enrolled students regarding the scope and conditions of their use within the course or program. 

Common Language for Policy Statements
To better resolve any potential conflicts and facilitate communication about the complex topic of AI and its use, it is beneficial for all syllabi and assignment guidelines to use the same definitions for AI and its derived technologies. 

  • Artificial Intelligence (AI) - a computer program or algorithm designed to imitate human intelligence, whether that intelligence is static (pre-programmed) or dynamic (learning from input data)

  • Machine Learning (ML) - an artificial intelligence designed to learn from data that is provided to it

  • Generative AI - a machine learning algorithm that, once trained on a given set of data, can produce synthetic data (text, images, sounds, videos, or other synthetic data points) that most casual observers would agree approximates the character and form of the training data with high fidelity, without explicitly copying it

  • Large Language Model (LLM) - a generative AI that is capable of functionally understanding and producing human language such that it can respond to a variety of prompts in a manner that is relevant to the given prompt

 

Approaches for Syllabus Policies on AI Use
Colleges, departments, and individual faculty members should weigh the potential benefits of four basic approaches to the use of generative AI by students and craft statements reflecting the most effective policies for their syllabi and assignment guidelines. The University of Delaware’s Center for Teaching and Assessment of Learning fully describes these approaches on its website.

 

  1. Use Prohibited: In courses where students work independently without collaborating or using external tools, faculty may entirely prohibit the use of AI. Clearly informing students of this policy and explaining why collaboration and tool usage are not permitted is necessary to foster understanding of such a policy. Honest discussions about the importance of independent work in the class can help students become aware of the academic, professional, or disciplinary norms underlying the course or program.

  2. Use with Permission: In select courses, it might be suitable to permit students to use AI tools in specific instances or assignments but not in others. In such cases, instructors should clearly communicate with students about when and how they can or cannot utilize these tools and how they can receive permission to do so. It would also be beneficial to explain the reasoning behind permitting these tools in certain scenarios while prohibiting them in others.

  3. Use with Acknowledgement: In courses and programs where collaboration and the use of advanced AI tools are permitted or encouraged, it might be appropriate to authorize students to utilize these tools throughout the course, provided they clearly cite or acknowledge their use. In such cases, it is important to explicitly specify the type and format of the citation or acknowledgment methods students should employ.

  4. Use Freely Permitted: In courses or programs where students often collaborate and use advanced AI tools, it could be suitable to permit students to use these tools without an explicit requirement to cite or acknowledge their use. However, it remains critical for students in such circumstances to grasp the boundaries, risks, and ethical applications of those tools.

      AI Detection Software
      Arkansas State is well positioned to monitor the use of AI in courses through AI detection tools such as Turnitin and the AI Detection Platform by K-16. It is critical for instructors to understand the limitations of AI detection software and to treat its results as a guide rather than as confirmation that a student used AI. In addition to AI detection tools, instructors who wish to detect the use of AI could consider software that tracks keystroke input and records the editing process (e.g., Google Workspace's version history effectively maps how a document has been formed, and Draftback is a Google Chrome extension that lets users play back the revision history of any Google Doc they have edit access to).

      Courses using Turnitin to review submitted assignments will have all content reviewed by a detector specialized for student writing, which is highly proficient in distinguishing between AI-generated and human-written content. The K-16 AI Detection Platform is integrated as a Learning Tools Interoperability (LTI) tool within A-State's Canvas LMS platform in all courses to check assignments, quizzes, and discussion boards. Instructors will have access to the administrative platforms of both the Turnitin and K-16 tools to help ensure students are upholding academic integrity.

      As outlined throughout this document, AI is an evolving technology, and colleges, departments, and faculty will need to continue to evolve their approaches to monitoring AI use. The advantages of AI detection software include the following:

      1. Efficiency and Scalability: AI tools can analyze large volumes of text quickly, making it feasible for instructors to check for academic dishonesty in a way that would be impossible manually. This is particularly valuable in online courses with large enrollments, creating a resource that can be used in conjunction with human review.

      2. Objectivity: AI tools can provide an objective assessment based on the data they are trained on, potentially reducing biases that might influence human judgment.

      3. Consistency: Using standardized tools across courses or departments can ensure consistency in how academic dishonesty is identified and addressed. It is recommended that all instructors adopt the AI Detection Platform by K-16; instructors who already use Turnitin can employ the two tools in conjunction.

       

      AI detection software is also an evolving technology. By using a variety of technologies and methodologies to distinguish between human-generated and machine-generated content, these platforms provide statistically based reviews of the likelihood that AI was used in the analyzed content. The AI Detection Platform by K-16, for example, uses the same OpenAI artificial intelligence model as ChatGPT and other leading, readily available AI-driven platforms.

      Guidelines for Use of AI Detection

      It is important to note that no AI detection platform should be used as the sole means of identifying academic dishonesty involving AI use. While AI tools can be valuable assets in identifying academic dishonesty, their use should be carefully considered and balanced with ethical considerations, transparency, and the educational mission of fostering an environment of trust and learning. Instructors should adopt the following best practices when utilizing AI detection software:

      1. Transparent Use: Clearly communicate to students that AI tools will be used, how they work, and what the expectations are regarding academic integrity.

      2. Comprehensive Review: Use AI tools as one part of a comprehensive approach to academic integrity that includes education, discussion, and the development of critical thinking and ethical reasoning skills. Use human judgment when reviewing discussion boards, quizzes, and assignments, as AI tools can generate false positives (incorrectly identifying honest work as dishonest) and false negatives (failing to catch dishonesty).

      3. Training and Awareness: Instructors should be trained not only in how to use AI tools but also in understanding their limitations and interpreting their outputs critically.

      4. Academic Misconduct Process: Students should have the opportunity to respond to any accusations of academic dishonesty, including those identified by AI tools, ensuring fairness and accountability. Instructors should follow the A-State academic misconduct reporting process by submitting a report via the icon in my.Astate.

       

      (Last updated May 2024)