Responsible and Ethical Use of GenAI Framework & Toolkit

Step Five

Build user literacy that supports responsible and informed use

In addition to building a user interface that explicitly considers responsible use, consider how you will foster user literacy – ensuring that students have the knowledge and skills to navigate the system.

Substeps

  1. Consider students' preconceptions about AI before engagement.
  2. Conduct a mandatory training session before first use and evaluate the necessity of human facilitation during AI coach onboarding.
  3. Create user education materials that clearly communicate the capabilities and limitations of the AI coaching system.
  4. Encourage students to critically examine each AI-provided response.

a) Consider students' preconceptions about AI before engagement.

Students’ existing attitudes toward AI vary and can shape their trust. Understanding these preconceptions helps you tailor the introduction and use of AI coaching to individual needs, whether that means strategies for students who might over-rely on AI or tactics for those who are naturally skeptical of or averse to it. Consider assessing a wide range of users’ AI perceptions – including their perceived knowledge of, attitudes about, and willingness to use AI, as well as the social norms and usage patterns of their peers, professors, support staff, and others in their context – so you can be prepared to address them.

Questions to Discuss:

  • What do different groups of users already know and believe about AI? If you don’t know the answer, how might you ask?
  • How can you adapt the introduction and use of AI to accommodate different levels of familiarity and comfort?

b) Conduct a mandatory training session before first use and evaluate the necessity of human facilitation during AI coach onboarding.

First impressions matter. A well-designed onboarding process, potentially involving human guidance, can significantly impact how students perceive and use the AI coach throughout their academic journey by establishing trust, setting clear expectations, and addressing any initial concerns. Periodic “refresher” training may also be useful to ensure that users remain engaged and aware of the AI coach’s capabilities and limitations.

Questions to Discuss:

  • How will you onboard users into using the AI tool?
  • Will a human coach or other facilitator be needed? How might that help or hinder usage?
  • What training would this coach or facilitator need to conduct the onboarding?
  • How often might re-training be offered? To everyone, or only to users who fall below a certain usage threshold or meet other criteria?

c) Create user education materials that clearly communicate the capabilities and limitations of the AI coaching system.

Users should be aware of both the AI coach’s capabilities and its limitations. Clear, transparent communication about the system helps set realistic expectations.

Examples:

  • Clear introductory screens that explain the AI coach's capabilities, limitations, and scope of use before the first interaction
  • In-app tutorials or tips explaining how to interpret AI responses, including understanding uncertainty and when to seek further help
  • Pop-up reminders about key limitations during interactions, especially for critical tasks where human oversight is essential
  • An FAQ section that addresses common misconceptions and provides guidance on how to use the AI coach effectively and responsibly
  • Short video explanations embedded within the interface to educate students on how AI coaching works and its appropriate use cases
  • Peer-led discussion forums to share experiences and insights

Questions to Discuss:

  • What types of information and context are important for users to have upfront? What about in different situations?
  • What types of formats are appropriate for different users and types of information? Where might users respond better to pop-up text versus a short video versus a peer discussion, for example?

d) Encourage students to critically examine each AI-provided response.

Current AI models can produce inaccurate information while sounding confident, leading students to accept AI coach suggestions as absolute truth without proper context. Explain that AI advice should be treated as guidance rather than definitive answers, and that personal judgment should always be applied. Offer guidelines on when it is appropriate to seek human advice or additional verification, and encourage students to cross-check AI-provided information against reliable sources for critical decisions.

Examples:

  • Teach students to look for verbal or visual cues that indicate confidence or uncertainty levels.
  • Provide scenarios where students can practice identifying and interpreting uncertainty in AI responses.
  • Explain or reinforce any uncertainty indicators you have built into the interface, such as colors, icons, or language.

Questions to Discuss:

  • How might you help students balance trust in AI-provided information with a habit of questioning and verifying it?