Responsible and Ethical Use of GenAI Framework & Toolkit

Step Two

Define your tasks and name potential risks and harms

The first step in this framework was suitable for anyone building any AI tool in higher education. For those creating an AI coach or advisor to guide students through higher education, the next set of steps starts with careful consideration of your users’ potential experiences in interacting with your program and the AI tool you create. This thought exercise will help you surface not only the benefits but also the risks that using AI may pose, as well as the specific harms those risks might introduce. This step should also include consideration of different types of student experiences and of rare or complex “edge cases” that may require special treatment.

Substeps

  1. List all the tasks that you expect the AI coach to offer.
  2. Discuss the risks (things that can go wrong) for each task, as well as the harm and impact each risk might have on your users.
  3. List risks that stem from common limitations of the AI model that could affect each task.
  4. Consider how different student experiences might interact with AI coaching processes.
  5. Identify additional risks that come from potential “edge cases” and complexities in all coaching activities.
  6. Take a deeper look at the tasks and risks that may lead to rare but disproportionate and/or irreversible harm.

a) List all the tasks that you expect the AI coach to offer.

Identifying the specific tasks you expect the AI coach to perform is crucial for implementing targeted responsible AI practices. Different coaching tasks – such as setting goals, tracking progress, or providing emotional support – come with unique ethical considerations and potential risks that need to be addressed individually. Tasks that are common, require significant amounts of complex information, and/or involve minimal judgment are often appropriate for AI to handle.

Examples:

  • Helping a student navigate their options for colleges, majors, or careers
  • Supporting a student who has lost their housing and needs help finding temporary shelter
  • Assisting a student with an upcoming FAFSA deadline
  • Assisting a student in balancing their coursework, part-time job, and social life

Questions to Discuss:

  • What are the coaching/advising tasks or processes that you currently offer? Which of those are you considering assigning to the AI tool?
  • What does success look like for each of these tasks?
  • What are the most time-consuming parts of these tasks?
  • What parts of these tasks are repeatable, require heavy research, and/or involve little judgment? (These are all potentially a fit for AI.)

b) Discuss the risks (things that can go wrong) for each task, as well as the harm and impact each risk might have on your users.

A good starting point is to list risks inherent within these coaching tasks in general. Consider referring to your internal training guidelines about what coaches should not do.

Examples:

  • Students become overly dependent on advice
  • Advice is culturally insensitive
  • No clear process exists for a student to seek additional help
  • Confidentiality between sessions is breached
  • AI coach can't handle emergency situations properly
  • AI coach fails to adapt to individual situations

Questions to Discuss:

  • What things can go wrong within each coaching task that you might assign to an AI coach?
  • What is the potential harm and impact of each of these risks?
  • Who is notified if something goes wrong with your current process? Who should be notified if something goes wrong with an AI tool that becomes a part of your process?

c) List risks that stem from common limitations of the AI model that could affect each task.

Even advanced AI systems operate within boundaries defined by their training and design. By recognizing these common limitations and issues early, you and your team can design safeguards, set realistic expectations, and educate users.

Examples:

  • Difficulty detecting emotional or non-verbal cues, especially in culturally sensitive ways
  • Failure to correctly recognize situations that require human intervention
  • Low response accuracy (generative AI tools can produce “hallucinations,” false information presented as true)
  • Training data that is outdated, leaving the model without current information
  • Bias in training data reflected in coach responses, such as gender bias or discrimination based on race, class, or ethnicity
  • Unintentionally leaking sensitive information about students to other students, coaches, or third parties

Questions to Discuss:

  • Does the training data set reflect the demographics of your users? Does it include current information about your subject area?
  • How should the system (the AI tool and the humans who use it) recognize and raise an alert when the AI provides incorrect or incomplete information?
  • How could users be informed about how the tool generated its recommendations?
  • How should the tool evolve as the underlying data and best practices evolve?
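One practical safeguard for the accuracy questions above is to require the tool to ground each answer in an approved knowledge base and flag unsupported responses for human review. The sketch below is a minimal, illustrative prototype: the source documents, the word-overlap scoring, and the threshold are all assumptions for demonstration, not a production-grade hallucination check (real systems would use retrieval with citations or trained verifiers).

```python
# Minimal sketch: flag AI answers that are not grounded in an approved
# knowledge base so a human can review them before they reach a student.
# The sources, scoring method, and threshold below are illustrative only.

APPROVED_SOURCES = {
    "fafsa_guide": "The FAFSA priority deadline for returning students is March 1.",
    "housing_policy": "Emergency housing requests go through the Dean of Students office.",
}

def grounding_score(answer: str, sources: dict[str, str]) -> float:
    """Crude overlap score: fraction of answer words found in any approved source."""
    answer_words = set(answer.lower().split())
    source_words: set[str] = set()
    for text in sources.values():
        source_words.update(text.lower().split())
    if not answer_words:
        return 0.0
    return len(answer_words & source_words) / len(answer_words)

def needs_human_review(answer: str, threshold: float = 0.5) -> bool:
    """Flag answers whose overlap with approved sources falls below the threshold."""
    return grounding_score(answer, APPROVED_SOURCES) < threshold
```

A flagged answer would be held back and routed to a coach rather than sent to the student, which also creates the alert trail asked about in the questions above.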

d) Consider how different student experiences might interact with AI coaching processes.

Different student backgrounds and circumstances can lead to unique risks in AI coaching. By considering various student experiences, you and your team can uncover potential issues that might not be apparent when thinking about a “typical” student. This approach helps identify risks that could disproportionately affect certain groups or arise in specific situations.

Examples:

  • Students with disabilities could face accessibility barriers in the interface
  • First-generation students could receive advice that assumes prior family experience with higher education
  • Working students might struggle with time management advice that assumes a traditional schedule
  • Advice might not account for the resource constraints faced by students from different economic backgrounds
  • Students with mental health issues might receive inappropriately generalized wellness advice
  • International students might interpret advice differently based on their culture or background
  • Non-native speakers may have a different understanding of nuanced language such as adverbs, metaphors, and analogies in coaching feedback

Questions to Discuss:

  • What are the relevant student backgrounds, circumstances, and characteristics that you are considering?
  • How might these backgrounds, circumstances, and characteristics increase or decrease the risks or potential harm/impact of each of these tasks?

e) Identify additional risks that come from potential “edge cases” and complexities in all coaching activities.

Seemingly benign coaching tasks can cause unexpected complexities and risks. The real challenge often lies not in the question itself, but in the sensitive context surrounding it. Recognizing these situations is crucial, as they may require more nuanced handling or human intervention to ensure appropriate support for the student. Identifying atypical but important “edge cases” should be an ongoing process, as you may not be able to anticipate every scenario from the outset.

Examples:

  • "How do I tell my professor I need extra time on assignments? There is a health issue going on with a family member and I'm struggling to cope."
  • "How can I keep up with my classes while helping at home? My little brother got in an accident and now needs constant care."
  • "I feel tired all the time. What's the best way to handle my coursework when I'm always exhausted?"
  • "I'm thinking about changing my major, but I'm worried about disappointing my parents."
  • "I can't seem to manage my time well. Is there something wrong with me?"
  • "I hate group projects. My teammates never listen to my ideas."
  • "What career should I pursue? I feel like I'll never be good enough for any job."
  • "Is there any financial aid left? I can't afford textbooks this semester."
  • "Are there any clubs for people like me? I haven't made any friends yet."
  • "How do I network at career fairs? Just thinking about it makes me anxious."

Questions to Discuss:

  • What signals could indicate that a coach needs to lean into emotional intelligence, or that a student may be experiencing deeper issues? These might be emotional cues, observable behaviors, financial or health status, etc.
  • How will you document and incorporate newly discovered edge cases into your process?
  • What mechanisms can you use to track whether your approach to edge cases is successful in providing appropriate support, and how will you respond to any identified gaps?
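Documenting newly discovered edge cases, as the questions above suggest, can start as simply as a shared log that coaches append to and review periodically. The sketch below is a minimal illustration; the field names and harm levels are assumptions, not a prescribed schema.

```python
import datetime

# Minimal sketch of an edge-case log: coaches record situations the AI
# handled poorly so the team can review them and update its guidance.
# Field names and harm levels are illustrative, not a prescribed schema.

edge_case_log: list[dict] = []

def record_edge_case(summary: str, task: str, harm_level: str,
                     handled_well: bool) -> dict:
    """Append one edge case; return the entry for confirmation."""
    entry = {
        "recorded_at": datetime.datetime.now().isoformat(timespec="seconds"),
        "task": task,                  # e.g. "FAFSA help", "time management"
        "summary": summary,            # what happened, without student identifiers
        "harm_level": harm_level,      # e.g. "low", "moderate", "severe"
        "handled_well": handled_well,  # did the AI/escalation work as intended?
    }
    edge_case_log.append(entry)
    return entry

def cases_needing_follow_up() -> list[dict]:
    """Cases the team should discuss: badly handled or high potential harm."""
    return [e for e in edge_case_log
            if not e["handled_well"] or e["harm_level"] == "severe"]
```

Reviewing the follow-up list on a regular cadence gives the team the tracking mechanism the last question asks for, and keeping summaries free of student identifiers respects the confidentiality risks noted earlier.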

f) Take a deeper look at the tasks and risks that may lead to rare but disproportionate and/or irreversible harm.

Rare risks in coaching can lead to disproportionate harm: although these situations may be uncommon, their potential impact can be severe. Identifying these tasks and risks is the first step toward developing appropriate actions to mitigate that harm.

Examples:

  • AI coach fails to recognize thoughts of self-harm in a student's messages.
  • AI coach provides dangerously incorrect medical advice to a student.
  • AI coaching advice exacerbates a student's mental health issues.
  • AI coach fails to recognize signs of abuse or harassment in a student's situation.

Questions to Discuss:

  • Are there key words or sentiments that should trigger the AI coach to raise an alert and call for human support before proceeding?
  • How will progress between the AI coach and student be monitored over time to ensure that learning and support are happening as intended?
  • How will coaches be able to intervene if the AI coach fails to recognize signs of harm?