Responsible and Ethical Use of GenAI Framework & Toolkit

Step Seven

Continuously refine the AI coach in response to user feedback

Engage a group of representative stakeholders who can help you review, refine, and even periodically challenge your guidelines for responsible AI coaching, the real and potential risks for harm to users, and the way the tool itself operates.

Substeps

  1. Develop an ethics advisory board that includes educators, AI experts, ethicists, and student representatives.
  2. Convene this board for regular reviews.
  3. Regularly monitor and update the risks for your tasks.

a) Develop an ethics advisory board that includes educators, AI experts, ethicists, and student representatives.

Assumptions about responsible practices might unintentionally prioritize certain communication styles or outcomes over others. An independent or external auditing board provides valuable oversight and risk assessment.

Questions to Discuss:

  • Which of the original advisors/stakeholders are willing to continue testing and refining the system over time?
  • What new constituents might you add to this ongoing review body?
  • How will users provide ongoing feedback?
  • How might users and the community help monitor and improve outcomes?

b) Convene this board for regular reviews.

Continuous auditing and monitoring – where core assumptions about responsible AI coaching practices are systematically examined and updated based on the latest research and expert opinions – is crucial for identifying emerging risks in AI coaching systems. This can lead to new prevention and mitigation actions.

Questions to Discuss:

  • How will you test the system's accuracy? What about its fairness and alignment with your original values, goals, and priorities?
  • What data can you gather to ensure the system is leading to equally positive outcomes for all users?
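One way to act on the last question is to compare positive-outcome rates across user groups. The sketch below is a minimal, hypothetical illustration: the group labels, outcome records, and the parity threshold are illustrative assumptions, not part of this framework.

```python
# Hypothetical sketch: checking whether the AI coach produces comparable
# outcomes across user groups. Group names, records, and the 80% parity
# threshold are assumed for illustration only.

def outcome_rates(records):
    """Compute the positive-outcome rate for each user group."""
    totals, positives = {}, {}
    for group, positive in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if positive else 0)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gaps(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` of the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Example: each record is (user_group, had_positive_outcome)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = outcome_rates(records)
flagged = parity_gaps(rates)   # groups the board should investigate
```

A flagged group is a prompt for the advisory board to investigate, not an automatic verdict; the right metric and threshold depend on your context and values.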

c) Regularly monitor and update the risks for your tasks.

Notions of responsible AI coaching can evolve rapidly with technological advancements and changing educational paradigms, so responsible AI policies must keep pace with newly identified risks. Implement an ongoing monitoring schedule that regularly assesses the AI coaching system's performance.
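A scheduled monitoring pass can be as simple as comparing current metrics against agreed baselines and flagging drift for the review board. This is a minimal sketch under stated assumptions: the metric names, baseline values, and 10% tolerance are illustrative, not prescribed by the framework.

```python
# Hypothetical sketch: a periodic monitoring check that flags metrics
# drifting below their agreed baselines. Metric names, baselines, and
# the 10% tolerance are assumed for illustration only.

def check_drift(baseline, current, tolerance=0.10):
    """Return (metric, baseline, current) for metrics that have dropped
    more than `tolerance` below their baseline value."""
    alerts = []
    for metric, base in baseline.items():
        value = current.get(metric)
        if value is not None and value < base * (1 - tolerance):
            alerts.append((metric, base, value))
    return alerts

baseline = {"accuracy": 0.90, "user_satisfaction": 0.85}
current = {"accuracy": 0.78, "user_satisfaction": 0.84}
alerts = check_drift(baseline, current)
# "accuracy" is flagged (below 0.81); "user_satisfaction" is within tolerance
```

Running such a check on a regular cadence gives the advisory board a concrete agenda item for each review, rather than relying on ad hoc impressions of system performance.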

Questions to Discuss:

  • What new risks have emerged since you launched the system? Are there new types of users or new user behaviors to consider?
  • How might you address those emerging risks?