Responsible and Ethical Use of GenAI Framework & Toolkit

Step Four

Design interactions and user interface that empower student agency and maintain privacy/security

The user interface (UI) is the primary point of interaction between students and the AI coach. Thoughtful design of interface elements can enhance transparency, set appropriate expectations, encourage responsible use, and provide safeguards. Provide clear guidance within the UI to help users understand the AI coach's capabilities and limitations, and balance students’ own agency with appropriate access to human coaches.

Substeps

  1. Provide necessary information about the state of the AI coach and make terminology easily accessible, but encourage students to ask for clarification.
  2. Design clear mechanisms to indicate the confidence or uncertainty of the AI coach.
  3. Consider how to make ongoing interactions safe, secure, and transparent.
  4. Design clear mechanisms to balance student access to human coaches.
  5. Create mechanisms for students to set their own goals, preferences, and boundaries within the AI coach.
  6. Consider creating a range of AI communication styles for different coaching contexts and allowing students to set preferences for conversation tones and communication styles.
  7. Consider the varying degrees of anthropomorphism your AI coach may have, from minimal human-like attributes to highly realistic human characteristics.

a) Provide necessary information about the state of the AI coach and make terminology easily accessible, but encourage students to ask for clarification.

Transparency is crucial, but too much information upfront can be overwhelming. Finding the right balance between offering explanations and prompting students to ask questions helps prevent misunderstandings and promotes active engagement.

Students may encounter unfamiliar terminology during AI coaching sessions, potentially hindering their understanding if immediate clarification is not available. Make definitions of key terms readily available to students as those terms appear in conversations with the AI coach, and allow students to dig deeper if they want to. A tiered system lets students access basic details easily while giving them the option to explore more in-depth information if they choose.
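The tiered-definition idea above can be sketched as a simple lookup that returns a short definition by default and a deeper one on request. This is a minimal illustration only; the term, definitions, and function names are hypothetical placeholders, not part of any real product.

```python
# Hypothetical tiered glossary: short definition by default, deeper on request.
GLOSSARY = {
    "prerequisite": {
        "basic": "A course you must complete before enrolling in another course.",
        "detailed": (
            "A course you must complete before enrolling in another course. "
            "Prerequisites are checked at registration; ask your advisor about waivers."
        ),
    },
}

def define(term: str, depth: str = "basic") -> str:
    """Return a brief definition by default; an in-depth one when asked."""
    entry = GLOSSARY.get(term.lower())
    if entry is None:
        # Unknown term: encourage the student to ask the coach for clarification.
        return f"I don't have a stored definition for '{term}', but feel free to ask me about it."
    return entry.get(depth, entry["basic"])
```

A UI might show the "basic" text in a tooltip and expose the "detailed" text behind a "learn more" link, matching the tiered access described above.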

Examples:

  • Clear introductory screens that explain the AI coach's capabilities, limitations, and scope of use before the first interaction
  • An FAQ section that addresses common misconceptions and provides guidance on how to use the AI coach effectively and responsibly
  • Short video explanations that educate students on how AI coaching works and its appropriate use cases
  • Ability for the AI coach to provide accurate and detailed responses to user questions about its capabilities, limitations, scope, and usage guidelines

Questions to Discuss:

  • What product-specific terms might users need to have defined for them quickly as they interact with the AI tool?
  • What definitions can you make available upon request?
  • Where should the definitions appear? (in the tool, on a website, in an app, elsewhere?)

Learn More:

b) Design clear mechanisms to indicate the confidence or uncertainty of the AI coach.

Acknowledging uncertainty when it exists is crucial for transparent AI coaching. Design ways to represent uncertainty, such as color-coded confidence levels or verbal disclaimers in AI responses.

Examples:

  • Provide visual cues, such as color coding or icons, to denote confidence levels of the information being shared.
  • Use language to differentiate between general advice and more critical or sensitive information that may require further verification
    • General language: “Responses by this AI coach might sometimes be inaccurate; always consider this possibility.” or “This advice is general. Your specific situation might require additional considerations.”
    • Warnings or disclaimers to indicate uncertainty on specific responses: “My information on this policy might be outdated.” or “I'm about 70% certain, but please verify this deadline on the official website.”
    • Suggestions to double-check: “I'm not fully confident about this answer. Can you double-check with your professor?” or “This advice is based on available information, but please verify with a human advisor.”
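One simple mechanism is to append a disclaimer whose strength reflects a confidence score. The sketch below is illustrative only: the thresholds and wording are assumptions, and a real system would need a calibrated confidence estimate rather than a raw model score.

```python
# Illustrative only: thresholds and disclaimer wording are assumptions.
def add_uncertainty_notice(answer: str, confidence: float) -> str:
    """Append a disclaimer whose strength reflects the coach's confidence."""
    if confidence < 0.5:
        return answer + " I'm not fully confident about this answer. Can you double-check with a human advisor?"
    if confidence < 0.8:
        return answer + " I'm fairly confident, but please verify this with the official source."
    return answer  # high confidence: no disclaimer appended
```

The same confidence value could drive a visual cue (a color or icon next to the response) instead of, or in addition to, the textual disclaimer.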

Questions to Discuss:

  • How will you measure the accuracy/confidence or potential inaccuracy/uncertainty of your AI coach’s responses?
  • Under what conditions would you want to share that information with users? Would you do so using a specific metric upon request, voluntarily if below a certain threshold, etc.?
  • How might you use visual cues such as colors or icons in addition to (or instead of) text?

Learn More:

c) Consider how to make ongoing interactions safe, secure, and transparent.

Students' willingness to discuss sensitive topics with your AI coach may be influenced by their perception of AI, and the trust underlying that perception is built over time. Moreover, coaching frequently involves repeated interactions that cannot be completed in a single session, requiring data to be stored. To foster and maintain trust, be transparent whenever data from student-AI interactions are being saved and used, and provide necessary assurances to students, through education, interface design, or explanations, that their saved data are secure and will not be shared.

Examples:

  • Allow students to define their comfort levels for discussing different sensitive topics.
  • Inform the student when their data or conversations are being saved.
  • Prompt the student, asking permission to save data or conversations.
  • Allow students to delete parts of their conversation.
  • Provide a link to a space where all saved data are visible to the student, and can readily be deleted or updated.
  • Provide a data stewardship statement and be clear about when and how student data are saved within or shared outside of the system. Sample language includes:
    • “We don’t claim ownership of your data: it remains subject to your control.”
    • “We will delete your data, correct it, or transfer it somewhere else if you ask.”
    • “We will protect and steward your data and comply with applicable privacy laws, but you may have privacy obligations as well.”
    • “If you allow us to conduct research with your data, we will follow best practices around the anonymization of personal data, and published research results will be made available to you for free." (https://bd4d.org/resources/24_04-a_better_deal_for_data.pdf)
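The consent, visibility, and deletion mechanisms above can be sketched as a consent-first store: nothing is persisted without permission, everything saved is visible to the student, and any saved message can be deleted. All names here are hypothetical; this is a sketch of the pattern, not a production design.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ConversationStore:
    """Consent-first storage sketch: save only with permission, show
    everything saved, and let the student delete any message."""
    messages: Dict[int, str] = field(default_factory=dict)
    _next_id: int = 0

    def save(self, text: str, consent_given: bool) -> Optional[int]:
        if not consent_given:
            return None  # no consent: nothing is persisted
        self._next_id += 1
        self.messages[self._next_id] = text
        return self._next_id

    def delete(self, message_id: int) -> bool:
        """Student-initiated deletion; True if something was removed."""
        return self.messages.pop(message_id, None) is not None

    def export(self) -> Dict[int, str]:
        """Make all saved data visible to the student on request."""
        return dict(self.messages)
```

A real system would add encryption, retention limits, and audit logging, but the interface contract (consent before save, visibility, deletion on demand) mirrors the stewardship commitments quoted above.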

Questions to Discuss:

  • What data will you be gathering? Is there any data that you can remove from that list and still offer effective support?
  • How can you give users some transparency into (or control over) what is saved and what is deleted?
  • What data must be saved on an ongoing basis to make coaching effective?

d) Design clear mechanisms to balance student access to human coaches.

Some coaching tasks are better suited to human contact than to AI coaching. For example, a student might gain more by talking directly with a human coach who just went through the college experience and knows what it's like to feel homesick or out of place on a new campus. Refer to coaching best practices to determine whether a student would benefit from contact with human coaches, and implement clear mechanisms to ensure that students can easily access human coaches when necessary.

Examples:

  • A prominent “Help,” “Get Human Assistance,” or “Request Human Coach” button available at all times, allowing students to easily escalate to a human advisor when needed
  • A feedback form for students to express their need for human interaction
  • Real-time notifications or alerts if the AI detects a situation that requires immediate human intervention, such as potential mental health concerns
  • User preference setting for the frequency of human coach check-ins
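The escalation logic above can be sketched as a check that always honors an explicit student request and also watches for urgent signals. The keyword list below is a deliberately crude placeholder: a production system would use a vetted classifier developed with counseling staff, never a hard-coded word list.

```python
# Hypothetical escalation check; URGENT_SIGNALS is a crude placeholder for
# a properly vetted detection model developed with counseling staff.
URGENT_SIGNALS = {"hopeless", "crisis", "unsafe"}

def needs_human_coach(message: str, user_requested: bool = False) -> bool:
    """Escalate when the student asks, or when urgent signals appear."""
    if user_requested:  # the "Request Human Coach" button always escalates
        return True
    words = set(message.lower().split())
    return bool(words & URGENT_SIGNALS)
```

Note that the explicit-request path is unconditional, which keeps the student in control regardless of what the detection logic decides.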

Questions to Discuss:

  • How will you ensure that students receive fair access to coaches?
  • How will you ensure that students who need coaches have access to them?
  • How will you ensure students are building the social capital, relationships, and networks they will need offline even while gathering information online?

Learn More:

e) Create mechanisms for students to set their own goals, preferences, and boundaries within the AI coach.

Consider the importance of preserving and enhancing student agency, and explore how the tool may offer features that support this.

Examples:

  • Frequency of check-ins or reminders
  • Topics they're comfortable discussing with AI coach vs. with human coaches
  • How early to start reminding about due dates
  • Level of detail in explanations
  • Cultural or religious considerations in advice and examples
  • Privacy settings for sharing their data
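The settings listed above could live in a single preference object under the student's control. The field names and defaults below are assumptions chosen to match the example list; they are a sketch, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class StudentPreferences:
    """Illustrative preference object; names and defaults are assumptions."""
    checkin_frequency_days: int = 7          # how often the coach checks in
    reminder_lead_days: int = 3              # how early to remind about due dates
    detail_level: str = "standard"           # "brief" | "standard" | "in-depth"
    restricted_topics: Set[str] = field(default_factory=set)  # human-coach-only topics
    share_data: bool = False                 # privacy setting, off by default

    def is_restricted(self, topic: str) -> bool:
        """True if the student reserved this topic for human coaches."""
        return topic.lower() in {t.lower() for t in self.restricted_topics}
```

Keeping `share_data` off by default reflects the consent-first stance from the earlier substep: the student opts in rather than opting out.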

Questions to Discuss:

  • How much control do you want to give users over how they use the tool?
  • How can you build those controls into the initial interaction or check on them during ongoing interactions?

f) Consider creating a range of AI communication styles for different coaching contexts and allowing students to set preferences for conversation tones and communication styles.

The most effective tone for AI coaching may vary by subject or task. Evaluate whether distinct tones, or tones that adapt, are useful strategies across different demographic groups or contexts. The formality of communication can affect a student's comfort and receptiveness, and some tones, such as humor, may be inappropriate for certain people, contexts, or tasks.

Examples:

  • Referring to coaching best practices to automatically adopt a tone for a specific task
  • Enabling students to set their preferred communication tone (formal, casual, motivational, etc.) from a list

(If assumptions are made based on existing data, the system should transparently communicate those assumptions.)
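A preference-driven tone setting can be sketched as a small map of presets with an explicit default. The tone labels and instructions below are placeholders; the key design choice shown is falling back to a declared default rather than silently inferring a tone from existing data.

```python
from typing import Optional

# Hypothetical tone presets; labels and instructions are placeholders.
TONES = {
    "formal": "Use a professional register and complete sentences.",
    "casual": "Use plain, conversational everyday language.",
    "motivational": "Be encouraging and emphasize the student's progress.",
}
DEFAULT_TONE = "formal"

def tone_instruction(preference: Optional[str]) -> str:
    """Resolve the student's stated preference, falling back to the declared
    default rather than guessing one from user data."""
    return TONES.get(preference or DEFAULT_TONE, TONES[DEFAULT_TONE])
```

If a system did infer a tone from existing data instead of using a default, that inference is exactly the kind of assumption the note above says should be communicated transparently.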

Questions to Discuss:

  • What types of communication styles and tones might feel appropriate for different coaching contexts or users?
  • To what extent do you want to allow users to set the communication style or tone to their preference?

Learn More:

g) Consider the varying degrees of anthropomorphism your AI coach may have, from minimal human-like attributes to highly realistic human characteristics.

Different degrees of anthropomorphism in an AI coach can affect student engagement and trust. Be aware that anthropomorphic bots might create the illusion of a real human if you are not careful to be transparent about their design.

Examples:

  • A direct, to-the-point bot that provides factual feedback in a straightforward manner without emotional nuance
  • A friendly but professional bot that uses basic empathy and some informal language in its responses, and clearly declares itself a chatbot
  • A bot that uses rich emotional language, humor, and personal anecdotes, and can adapt its tone to match the user's mood
  • A bot that does not clearly identify itself as an AI and refers to shared “memories” of working together with the student

Questions to Discuss:

  • How human-like do you want your AI coach to seem? How might more anthropomorphic characteristics support or hinder your goals?

Learn More:

Google DeepMind, The Ethics of Advanced AI Assistants, Chapter 10: Anthropomorphism
