Generative AI @ WCU -
A Guiding Framework

Introducing the Framework

The purpose of this framework is to provide the West Chester University community of educators with guidance on the responsible and ethical use of generative artificial intelligence, a subset of artificial intelligence, in academic, research, and operational settings. These recommendations are intended to apply to all members of the university community, including students, faculty, and staff. The framework aligns uses of artificial intelligence with institutional goals and strategies. As such, students, faculty, and staff are encouraged to continue to promote “intellectual curiosity, critical thinking, application of concepts” as they make informed decisions about integrating artificial intelligence responsibly into ongoing university tasks and activities. The framework is intended to help individuals understand ways to use AI to enhance learning, research, and administrative tasks while maintaining academic integrity, respecting data privacy, and fostering an inclusive environment.

This framework is a living document, not a university policy. It contains a series of recommendations intended to work within and support existing WCU policies (see Recommendation 1: Follow Institutional Policies for Academic Integrity and Acceptable Use). University policies and the process for their creation are outlined on the University Policies website. Because artificial intelligence technologies and their implications are changing rapidly, while the university policy process is deliberate, collaborative, and takes time, the university is using this framework approach so that AI guidance can remain flexible. The guidance offered in this document will be periodically reviewed and updated to reflect emerging trends, ethical considerations, and regulatory developments.

The framework is the result of a collaborative process led by IS&T and the Teaching and Learning Center (TLC), which created an initial outline reviewed by students, faculty, staff, deans, and governance groups. A cross-university task force then developed the complete framework. As a living document, this framework will evolve with our understanding of generative AI. Comments, feedback, and suggested changes can be submitted via aiframework@wcupa.edu.

Artificial Intelligence and Generative AI

Artificial Intelligence (AI) refers to the ability of a computer system to perform tasks that typically require human intelligence. Bowen and Watson (2024) define artificial intelligence as “the ability of computer systems to mimic human intelligence and also to the development of such systems” (Teaching with AI: A Practical Guide to a New Era of Human Learning, Johns Hopkins University Press, p. 16). AI is all around us, often without our realizing it. Digital assistants like Alexa and Siri are examples of AI tools that many of us use daily. Features such as the grammar and language editor in MS Word that help us with editing tasks are also based on AI. Our university’s IS&T division leverages AI to expand service desk capabilities through chatbots.

The types and uses of AI vary widely. This framework focuses primarily on generative artificial intelligence. Generative AI, a subset of AI, uses large models to create new content, such as text, images, music, and sound, based on patterns in its training data (Bowen & Watson, 2024). ChatGPT is one of many such tools that are now easily accessible. Read ‘What is Generative AI’ to learn more about this technology and how it works.

Generative AI is transforming how we learn, teach, and work by offering powerful tools to create new content, assist with problem-solving, and improve efficiency. The framework offers six key recommendations, summarized below, to guide the ethical and responsible use of AI tools while upholding institutional values and professional standards. Following these recommendations enables us to leverage the potential of AI while maintaining integrity, protecting privacy, and ensuring accountability.

It is important to note that the AI tool landscape is changing very quickly. This framework provides general advice that can be adapted to specific contexts. We believe this process of seeking understanding and adapting to generative AI makes us stronger at WCU and offers the WCU community of educators an opportunity to live out several elements of our mission and values statement:

  • Develop graduates to succeed personally and professionally and contribute to the common good.
  • Understand the ethical implications of decisions and the world in which we live.
  • Collaborate with others to solve problems and address societal needs.
  • Demonstrate critical thinking.

As other technological transitions have demonstrated, policies will evolve as more is learned about AI’s impacts. Patience is needed from all stakeholders as the WCU community processes conflicting advice and information.

A Note about Sustainability

WCU’s current Strategic Plan states that the university “promotes the value and importance of creating a sustainable society.” As the United Nations Environment Programme makes clear, data centers that house AI servers produce high volumes of electronic waste, consume vast quantities of water, rely on scarce minerals, and consume extraordinary amounts of electricity on an already rapidly warming planet. Choices to use generative AI have an environmental impact, and as an institution committed to a sustainable society, members of the WCU community should commit to choices that limit our negative impact on the environment and climate.

Key Recommendations for Using Generative AI Ethically and Responsibly

  1. Know and comply with relevant institutional policies before using AI tools for any academic and work-related tasks.
  2. Protect sensitive information by carefully evaluating data inputs and understanding AI providers' data-handling practices.
  3. Ensure equitable access and outcomes by considering how AI use might impact different groups, addressing barriers to access, and actively working to prevent discriminatory effects.
  4. Take ownership of AI outputs by verifying accuracy and addressing potential biases before using them.
  5. Be transparent about AI usage when it has contributed to your work product; transparency helps maintain academic standards and supports using AI to enhance, not replace, original thinking.
  6. Stay informed about AI best practices by utilizing available training and resources.

 


Recommendation 1: Follow Institutional Policies for Academic Integrity and Acceptable Use

Use institutional policies to guide what data you input into generative AI tools and how to use outputs from generative AI. 

West Chester University faculty, staff, and students are required to follow existing university policies when using generative AI tools. These policies safeguard our university’s values and foster a responsible environment for educational excellence. We recommend reviewing the policies summarized in this framework and considering their application to the use of generative AI. 

Acceptable Use Policy

WCU's Acceptable Use Policy (AUP) emphasizes that the University’s network and resources are intended to support the academic mission, facilitate information-sharing, and manage the University’s administrative and service operations. When using generative AI tools, it is important to adhere to the established principles of academic integrity and professional responsibility. This includes using these tools ethically and responsibly to support academic and administrative activities without compromising the integrity of the University's systems and data.

Data Classification Policy

 The university’s data classification policy provides direction regarding the privacy, security, and integrity of WCU data and outlines the responsibilities of institutional units and individuals for such data. To protect the confidentiality and integrity of sensitive data and prevent unauthorized disclosure or use, do not input any sensitive data into generative AI tools.

Academic Integrity Policies

To uphold the ethical standards outlined in the Undergraduate Student Academic Integrity Policy and the Graduate School Academic Integrity Policy, it is important to use generative AI tools responsibly and ethically. Avoid violating the following academic integrity standards from each policy:

  1. Plagiarism: Do not present someone else's work as your own. Output from generative AI tools is based on the existing works of others, including words, ideas, data, visual representations, audio, video, and digital materials. In keeping with Recommendation 4 (verify accuracy), confirm that any AI output you use is not a direct copy of others’ work. This guidance is based on the University of Florida’s recommendation regarding plagiarism and generative AI.
  2. Fabrication: Do not present fabricated or unverified results, information, citations, or other findings.
  3. Cheating: Do not misrepresent your mastery of information or skills for assessments. This includes, but is not limited to, using or attempting to use unauthorized materials, information, or study aids in any academic exercise. 

Generative AI tools can be used ethically and responsibly to assist or enhance content development without compromising or violating academic integrity.

Other Policies

  • The Family Educational Rights and Privacy Act (FERPA) protects the privacy of students by maintaining the confidentiality of their education records. When using generative AI tools, do not input any student records, grades, class rosters, or other educational information protected by FERPA.
  • In accordance with the Use of Human Subjects in Research Policy, faculty, students, and staff conducting research should ensure that they do not input into generative AI tools any information collected for specific limited purposes, such as research data or information where a person’s identity is or could easily be connected to the data. 

 


Recommendation 2: Protect Sensitive Data

Protect sensitive information by carefully evaluating data inputs and understanding AI providers' data-handling practices.

AI systems produce results by learning from data. AI systems often record and store the data provided to them. If private data is shared with an AI tool, it may later be revealed in the results the tool produces in response to any user’s prompts.

The university classifies data into three categories:

Confidential Data

Confidential data requires the highest level of protection to prevent unauthorized disclosure or use. This includes data the University must keep private under federal, state, or local laws. Examples include personally identifiable information (PII) such as social security numbers and protected health information (PHI) such as health records.

Entering any personally identifiable information (PII), such as names, email addresses, birthdates, or ID numbers, into an AI platform could cause the tool to repeat this information in later interactions with other users.
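
To make the risk concrete, here is a minimal Python sketch, under the assumption that a user wants to scrub obvious identifiers from text before pasting it into an AI tool. The regex patterns and the scrub_pii helper are hypothetical illustrations, not a university-provided utility, and simple pattern matching cannot catch everything; treat it as a reminder to review inputs, not as a guarantee of safety.

    import re

    # Hypothetical, minimal scrubber: replace a few obvious identifier
    # patterns before text is pasted into an AI tool. Simple regexes
    # cannot catch everything (personal names, for instance, slip through).
    PII_PATTERNS = {
        "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def scrub_pii(text: str) -> str:
        """Return text with matched identifiers replaced by placeholders."""
        for placeholder, pattern in PII_PATTERNS.items():
            text = pattern.sub(placeholder, text)
        return text

    draft = "Summarize this: Jane Doe (jdoe@wcupa.edu, 610-555-1234) asked about..."
    print(scrub_pii(draft))
    # -> "Summarize this: Jane Doe ([EMAIL], [PHONE]) asked about..."
    # Note that "Jane Doe" survives the scrub: human review is still required.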

Sensitive Data

Sensitive data is generally private to West Chester University, and access is limited to members of the university community on a need-to-know basis. These data are not typically available to external parties. Examples include certain research records, library and archive circulation and order transactions, and university partner or sponsor information.

Any sensitive information, personally identifiable information (PII), or text that is not already public knowledge is at risk of being exposed if it is entered into an AI tool. Take extra caution with information regarding hiring, admissions, or evaluation of students or employees, as it is often combined with other detailed personal and confidential information.

Public Data

Public data has no legal or other restrictions on access or usage and is generally open to the WCU community and external parties. Examples include information already available to the general public on WCU’s website, approved official meeting minutes, published research, policies, and posted press releases.

 

How to Protect Data Used with AI Tools

Based on these data classifications, we offer the following recommendations:

  • You may share with AI tools public data that is already accessible to the general public.
  • Do NOT provide data to AI tools if any part of that data should not be included in future results provided by that system.
  • Do NOT share Confidential data or Sensitive data with AI tools unless that sharing has been specifically authorized.
  • Do NOT put confidential research data into AI tools, as doing so may be considered a public disclosure for the purposes of intellectual property rights. Most common AI tools are not compliant with the Family Educational Rights and Privacy Act (FERPA), the Health Insurance Portability and Accountability Act (HIPAA), or the European Union’s General Data Protection Regulation (GDPR). Intellectual property, copyright, and contract law around AI is an evolving area, and the University’s understanding will develop as new policies, regulations, and case law become settled. For now, members of the WCU community should adhere to the general guidelines based on data classifications.
  • Do NOT submit others’ intellectual property to an AI tool without their permission. Many journals and publishing agencies have specific policies regarding AI use by authors and reviewers. It is your responsibility to know and follow those policies. Where a journal has not articulated a policy, authors should consider the risk to their own or others’ intellectual property rights if unpublished research is publicly exposed through an AI platform, whether by a reviewer using the platform to summarize a submitted paper or by an author using AI to write or edit.
  • Do NOT submit students’ work to an AI tool without their express permission. Students own the copyright to the work they produce for classes and, as such, control how it is used.

If you are using a tool that has not been vetted by the university, it is your responsibility to determine how the tool handles data. The university will not be responsible for users’ negligence with unsanctioned tools.

 


Recommendation 3: Consider Equity in Access and Skills

Ensure equitable access and outcomes by considering how AI use might impact different groups, addressing barriers to access, and actively working to prevent discriminatory effects.

Uneven access to AI resources and training can widen educational and professional gaps. In the context of the classroom, students who can afford paid versions of generative AI tools have a clear advantage over students who can only access the free versions.

Information literacy is a key aspect of navigating today’s vast information landscape, which now includes AI. Inequities and disparities in information literacy exist at every level of the curriculum, including access to resources such as school libraries, certified school librarians, information itself (information privilege is a real thing!), computers, WiFi, and much more. We recommend supporting the development of information literacy skills to effectively engage with AI.

 


Recommendation 4: Verify Accuracy

AI Inaccuracy

Generative AI tools are based on mathematical models that predict likely outputs for a given prompt, or input. These predictions reflect statistical patterns in the tool’s training data. In some cases, these patterns lead the model to give factually accurate answers. For example, if asked for the capital city of Germany, most current models will accurately answer “Berlin,” because the model has learned patterns linking words like “capital,” “Germany,” and “Berlin.”
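
For intuition about “patterns in training data,” the toy Python sketch below counts which word follows which in a tiny, invented corpus and then predicts the most frequent follower. Real generative AI models are neural networks trained on vastly more data, so this is a caricature of the principle, not a description of any production tool.

    from collections import Counter, defaultdict

    # Toy "language model": count which word follows each word in a tiny,
    # invented training corpus, then predict the most frequent follower.
    training_text = (
        "the capital of germany is berlin . "
        "the capital of france is paris . "
        "the capital of germany is berlin ."
    ).split()

    follower_counts = defaultdict(Counter)
    for current_word, next_word in zip(training_text, training_text[1:]):
        follower_counts[current_word][next_word] += 1

    def predict_next(word: str) -> str:
        """Return the most frequently observed word after `word`."""
        counts = follower_counts[word]
        return counts.most_common(1)[0][0] if counts else "<no pattern>"

    print(predict_next("is"))       # -> "berlin" (seen twice, vs. "paris" once)
    print(predict_next("quantum"))  # -> "<no pattern>": nothing learned, nothing grounded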

However, models may also provide inaccurate information. This may occur for two reasons:

  1. In some cases, the model’s ability to predict words may lead it to make statements that make grammatical “sense” but do not reflect any underlying reality. This happens especially when the model is asked for unusual information that was not frequently repeated in its training data. For example, asked for an overview of the Alaskan salmon harvest, ChatGPT will provide a generally accurate summary. However, asked for the number of salmon harvested in Alaska in any particular year, ChatGPT will generate a plausible, but incorrect, number. This failure mode is sometimes called hallucination. All known generative AI tools are prone to this kind of failure, to varying degrees.
  2. In other cases, the model may faithfully report an inaccurate piece of information in an underlying source it’s been asked to summarize or work with. This is particularly acute in models that use what’s called Retrieval Augmented Generation (RAG) as a way to limit hallucination by tying a model to human-authored sources (a simplified sketch of this retrieval pattern follows below). In one famous example, Google’s AI search summary told users to add glue to pizza sauce based on a joke post on Reddit that turned up in its search results.
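
The deliberately simplified Python sketch below illustrates the retrieval pattern described in point 2. The keyword-overlap scoring, the invented sources, and the build_prompt helper are hypothetical stand-ins; real RAG systems use vector embeddings and an actual language model. What the sketch makes visible is the failure mode above: if a retrieved source is wrong (or a joke), an answer “grounded” in it will be wrong too.

    import re

    # Deliberately simplified RAG: score sources by word overlap with the
    # question, then build a prompt that asks a model to answer from them.
    # Real systems use vector embeddings and an actual language model.
    SOURCES = [
        "Berlin has been the capital of Germany since reunification in 1990.",
        "Cheese sliding off pizza can be reduced by letting the pizza cool.",
        "You can add glue to pizza sauce to make cheese stick.",  # the joke source
    ]

    def words(text: str) -> set[str]:
        return set(re.findall(r"[a-z]+", text.lower()))

    def retrieve(question: str, sources: list[str], k: int = 2) -> list[str]:
        """Return the k sources sharing the most words with the question."""
        q = words(question)
        return sorted(sources, key=lambda s: len(q & words(s)), reverse=True)[:k]

    def build_prompt(question: str, context: list[str]) -> str:
        joined = "\n".join(f"- {s}" for s in context)
        return f"Answer using ONLY these sources:\n{joined}\nQuestion: {question}"

    question = "How do I keep cheese on my pizza?"
    print(build_prompt(question, retrieve(question, SOURCES)))
    # The joke source scores highly, so a model "grounded" in it may still
    # repeat the bad advice: retrieval limits hallucination, but it cannot
    # fix a wrong source.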

Because of these potential sources of inaccuracy, we suggest that users of AI tools do the following:

  1. Do NOT rely on AI for answers in fields you do not know well. You are less likely to catch factual errors in unfamiliar material.
  2. Verify any factual statements in AI output. Confident-sounding language does not mean the output is accurate; AI can introduce inaccuracies with a great deal of confidence. Always check any factual statement in a final document to ensure its accuracy. Do not take the AI’s word for it.
  3. Do NOT use AI as an information search tool (like Google) if the tool does not include a list of underlying sources; by default, it will simply summarize. Some tools will provide a list of “sources” if asked, but that list may not be accurate, and in some cases the sources “cited” may not even exist. Find and evaluate the underlying sources yourself; a first-pass check like the sketch below can flag citations that do not resolve at all.
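
As a first pass on point 3, the hypothetical Python sketch below checks whether AI-cited URLs even resolve. A live URL is necessary but not sufficient: you still must read the source and confirm it supports the claim. The example URLs are illustrative only.

    import urllib.request
    import urllib.error

    # Hypothetical first-pass check: does each AI-cited URL even resolve?
    # A live URL is necessary but not sufficient -- you still must read the
    # source and confirm it actually supports the claim. Some servers also
    # reject HEAD requests, so treat failures as "needs a manual look."
    def url_resolves(url: str, timeout: float = 5.0) -> bool:
        request = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return response.status < 400
        except (urllib.error.URLError, ValueError):
            return False

    ai_cited_sources = [
        "https://www.wcupa.edu/",                      # a real page
        "https://example.com/made-up-study-2023.pdf",  # plausible-looking, invented
    ]
    for url in ai_cited_sources:
        status = "resolves" if url_resolves(url) else "broken or unreachable"
        print(f"{url} -> {status}")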

AI Bias

In addition to inaccuracy, users should be aware that AI tools can introduce bias. This bias is a result of underlying biases in the information the AI tools were trained on. Since we rarely know the full scope of the training data or how it was processed and selected, it can be difficult to say for sure what kinds of bias any particular tool might produce.

However, we know from prior research that AI tools will often reproduce biased associations based on gender, race, or similar markers. For example, multiple studies have shown that tools like ChatGPT tend to associate the role of “doctor” with men and “nurse” with women. Image-producing tools may reproduce similar forms of bias, depicting people in positions of power as white and male more frequently. Cultural stereotypes are also more likely to appear in AI-generated content; for example, image generators asked to depict “poor” people often conflate poverty with homelessness.

Because of these potential sources of bias, we recommend that users of AI tools do the following:

  1. Confirm all claims with known, verified sources. Be especially skeptical of AI claims about identity-based categories like race and gender.
  2. Carefully read any copy or images produced by AI for potential bias, keeping in mind your own implicit bias as you do so.
  3. Use caution when prompting AI, since accidental or implicit bias in prompts may produce biased results and reinforce bias in the underlying models.

The bottom line is that you are accountable for all output based on the prompts you enter. As the user, you (and not the tool) will be held responsible for any inaccuracies or biases in the content created. Consider the following scenarios:

  • A student uses Generative AI to write an essay in which the facts are wrong and/or the sources are made up. This presents a lack of rigor that the student is accountable for.
  • A faculty member uses Generative AI to draft an email containing false information and sends it without review. The faculty member is responsible for sending out false information. As a matter of good practice, review anything a third party has written before you hit send.
  • A staff member uses Generative AI to generate an image for an upcoming career fair. Some peers and students find the image offensive. The staff member is accountable for the content of their work.
  • A staff member uses Generative AI to generate an image for an upcoming career fair. The image recreates the signature style of a well-known photographer. The staff member would risk being accused of appropriating someone else’s intellectual property. If AI-generated art is needed, the staff member should provide their own creative direction rather than mimicking an existing artist.

 


Recommendation 5: Be Transparent

Cambridge Dictionary defines transparency as “a situation where activities are done in an open way without secrets, so that people can trust that they are fair and honest.”

When deciding whether to be transparent about AI use, ask yourself: “Would a reasonable person expect to know that AI was used to create this item?”

The Importance of Transparency

Being transparent about your use of AI is important because it:

  • develops and maintains trust and credibility for all stakeholders
  • maintains the academic tradition of showing your work by citing sources of ideas, crediting those who created artistic elements, and conveying information in ways others can replicate
  • protects individuals’ privacy and copyrighted work
  • allows others to engage with an item knowing that potential AI mistakes or biases could be present

In addition to recommending transparency about AI use, this framework recommends that those in positions of authority (faculty in a classroom, administrators) be transparent about their expectations for AI use in the settings for which they are responsible, to prevent misunderstanding or improper use. For example, faculty should be transparent about their expectations for AI use in the classroom so that students do not get into academic trouble by using AI inappropriately. See sample generative AI syllabus statements for examples.

What Transparency Looks Like at West Chester University

Here is what AI transparency might look like for:

  • Students
    • Students communicate the ways they engage with and use AI tools in support of their learning process and any products they create.
    • Students are aware of how and for what purpose AI might be used with their information maintained by the institution, such as advising records and transcripts.
  • Faculty
    • Teaching
      • Transparency about the justification for AI use in a course: a clear, easy-to-understand syllabus statement on whether and when students may use AI in their coursework.
      • Transparency in how faculty use AI to support the work of teaching. Do faculty indicate where AI was used to generate questions, assist with grading and feedback, or influence teaching decisions?
    • Research
      • Clear statement by the IRB on how AI may be used and/or disclosed when conducting research.
      • Clarity on how AI may be used or disclosed in the promotion and publication of research.
      • Faculty are open about their use of AI in support of research and manuscript development.
    • Service
      • Transparency on how AI may or may not be used in completing tasks.
  • Staff
    • Transparency in how staff use AI tools and functionality to complete the work of the institution. For example, AI use in relation to employee data, for communication, for monitoring, or for managing support requests.
    • Transparency in how administrators will use AI tools and functionality at the institutional level to track, monitor, and analyze data and services.

It is your decision (except where a course or job requires it) whether to disclose your use of AI. For faculty who do use AI, we recommend the following:

  • Communicate expectations and the reasons behind decisions for transparency. For example:
    • A faculty member explains to students that they will use AI to create content for class while prohibiting students from using AI in coursework, on the grounds that the faculty member has credentialed expertise in the field whereas students are still working toward that goal.
    • A faculty member might create classroom rules about how and when the technology can be used. Those rules are communicated frequently and clearly. There is also an opportunity to co-create these rules with students.
    • A faculty member may prohibit AI in the classroom completely: the instructor will not use it, and neither will the students. Faculty may articulate that environmental concerns justify no one using the tools.
  • Communicate, where appropriate, encouragement and patience, as students have received mixed messages about AI use. Since a solid, agreed-upon K-12 AI literacy foundation has yet to be established, these messages might come from parents, former teachers, society, friends, the tech industry, and so on. Our students did not ask to be college students during this transition.
  • Avoid blanket statements about AI use such as “AI is cheating,” “AI is great and transformative,” or “AI is bad.” The world is facing mixed messages about AI, and blanket statements risk shutting down conversations when we need to keep the lines of communication between students, faculty, and staff wide open. Many aspects of AI are incredibly nuanced and deserve thoughtful consideration.
  • Avoid comparisons with AI use in other disciplines to justify AI policies or usage. Statements such as “AI might be a good fit for this department/discipline but not for us” muddy the waters during an already complex time. Instead, we encourage faculty to select a strategy they feel confident in and communicate it loudly and proudly. Giving students more chances to encounter differences in decision-making and strategy makes us stronger. Yes, we will have to “experience” increased student complaints and frustration…but we can do it.

 


Recommendation 6: Keep Informed

West Chester University has not yet established a dedicated AI Center. Various West Chester University offices are gathering resources to support different constituency groups using AI. Below are some of the resources available:

The Teaching & Learning Center (TLC) has created a collection of generative artificial intelligence (AI) resources to guide faculty in teaching in the era of generative AI. Resources include Generative AI Syllabus Considerations, Using Generative AI for Active Learning, Assessment Strategies with Artificial Intelligence Tools, a series of generative AI blog posts, recorded webinars, and more. Faculty can work with instructional designers in the TLC to develop strategies for engaging with generative AI in specific teaching contexts.

IS&T and the Teaching and Learning Center developed an Introduction to Generative Artificial Intelligence module in the D2L site Navigating Digital Learning. The module helps students learn what generative AI is, including its benefits and pitfalls, and how the tools might be used at West Chester University.


Appendix

Examples of How to Communicate AI Usage Transparently

 

For each type of AI use below, the transparency action template is followed by a concrete example.

  • Brainstorming ideas
    • Transparency action: Name of tool(s) used on date to brainstorm ideas for name of item.
    • Example: ChatGPT used on 10/30/24 to brainstorm ideas for this blog post.
  • Outlining a structure
    • Transparency action: Ideas outlined by Name of tool(s) on date.
    • Example: The outline of this document was generated by Claude.AI on 10/11/24.
  • Editing assistance
    • Transparency action: Edited using Name of tool(s) on date.
    • Example: Edited for clarity using Grammarly on 10/12/24.
  • Writing assistance
    • Transparency action: Approximately % created by Name of tool(s).
    • Example: Approximately 33% written by CoPilot and ChatGPT.
  • Presentation creation
    • Transparency action: The presentation was created using Name of tool(s) on date.
    • Example: Presentation slides created using Canva AI on November 4, 2024.
  • AI image
    • Transparency action: Created using Name of tool(s) on date.
    • Example: Created using Adobe Firefly on October 15, 2024.
  • AI audio or video
    • Transparency action: This is an AI-generated name of item created by AUTHOR using Name of tool(s) on date.
    • Example: This is an AI-generated recording created by Rammy using Hugging Face Model 3t on 10/21/24.
  • Translating data
    • Transparency action: This translation of language to language was created using Name of tool(s) on date.
    • Example: This translation of English to Spanish was created using DeepL on 12/24/24.
  • Summarizing data
    • Transparency action: This summary was generated using Name of tool(s) on date from name of original data set.
    • Example: This summary was generated using the Zoom AI assistant from the transcript of the 11/6/24 meeting of the Faculty Senate.