A toolkit and framework to guide thoughtful development of AI tools for college success.

This toolkit was created to help higher education leaders and practitioners design and adopt AI tools that truly serve first-generation and under-resourced students. Developed by Beyond 12, it includes guiding principles and a step-by-step framework to support the ethical and responsible use of AI for college success.
GenAI offers new tools to students who’ve historically lacked access to educational support.
Used strategically, AI can help reduce longstanding disparities for first-generation and underrepresented students.
AI is powerful, but it cannot replace the care, empathy, and insight of human advisors and coaches.
Ethical, responsible design ensures AI tools work in service of learners and educators—not instead of them.
AI solutions must be built with—and for—the people they’re intended to support.
This toolkit was created for institutions and individuals that believe we can do better.
To develop this toolkit for use by higher education leaders, program managers, technology developers, and others who are making decisions and taking action related to AI, we engaged consultants Dr. Alireza Karduni, Atefeh Mahdavi Goloujeh, and Afua Bruce to build upon Beyond 12’s own experience with AI development.
Karduni and Mahdavi Goloujeh conducted extensive research into AI in college coaching and advising as well as adjacent sectors – including conversational AI and education – reviewing more than 75 research papers, articles, and frameworks. They also interviewed experts in higher education, K-12, and philanthropy. Bruce conducted a review of the AI development process and identified key actions that strengthened Beyond 12’s approach to building a responsible AI tool.
Finally, these consultants conducted interviews and workshops with Beyond 12 staff, coaches, and advisors, including our student advisory board – a group of 10 current college students from across the country who help inform Beyond 12’s products, services, and strategic projects, including the use, testing, and promotion of MyCoach AI.

To ensure our AI-powered tools are reflective of our core values and our commitment to do right by students, we created the following organizational guidelines for using AI responsibly and ethically.
These principles must not only be discussed and agreed upon at the outset of the AI development process; they must also be revisited and considered carefully at each step along the way.
Enhance the profound capacity and unique capabilities of humans first and foremost. Help administrators and educators do less repetitive work so they can focus on the tasks best performed by humans – tasks that benefit from human intuition, judgment, and empathy. Build both wisdom and skills in students and coaches so that they feel equipped and ready to take action.
Treat students with respect, with the foundational belief that they are creative, resourceful, and whole – not broken. Never speak down to them. Use language that is organized, supportive, culturally inclusive, warm, practical, and anchored in their strengths rather than their deficits.
For Beyond 12, “asset-based” refers to our coaching curriculum, which emphasizes resilience, growth mindset, and the Co-Active Coaching Model that frames coachees as “creative, resourceful, and whole.” In your own context, “asset-based” should reflect your organization’s key assets and competencies.
Be transparent about the use of AI and explain its processes wherever possible. Make systems inspectable and explainable – enabling people to understand how an AI system is developed, trained, operates, and is deployed – as well as overridable when necessary.
“These systems sometimes are seen as a black box kind of a situation where predictions are made based on lots of data. But what we need is to have a clear view—to clearly show how those recommendations or those interactions are made and what evidence is used or what data is used to be able to make those recommendations … having open learning environments or inspectable learner models or applications where the stakeholders can understand how these systems make decisions or recommendations is going to be an important aspect in the future of teaching and learning.”
Diego Zapata-Rivera, ETS Research Institute, in Artificial Intelligence and the Future of Teaching and Learning from the US Department of Education
Share accurate, timely, verifiable, and up-to-date information with students.
Take action throughout the development process to ensure AI represents and addresses the rich and full range of user perspectives, particularly those of users who have historically been marginalized or under-represented or who might be especially vulnerable to harm. Make every effort to build inclusive systems by consistently testing models for harms such as reinforcing racial or gender stereotypes, under- or over-representing particular characteristics, or limiting the variety of available options.
“Bias is intrinsic to how AI algorithms are developed using historical data, and it can be difficult to anticipate all impacts of biased data and algorithms during system design. The Department holds that biases in AI algorithms must be addressed when they introduce or sustain unjust discriminatory practices in education. For example, in postsecondary education, algorithms that make enrollment decisions, identify students for early intervention, or flag possible student cheating on exams must be interrogated for evidence of unfair discriminatory bias—and not only when systems are designed, but also later, as systems become widely used.”
Artificial Intelligence and the Future of Teaching and Learning, US Department of Education
Protect student safety by avoiding toxicity and potential harm. Reduce the risk of harm by carefully stewarding sensitive and personally identifiable student data. Do not share information with third parties unless you are confident that your same high standards for student privacy will be upheld. Conduct rigorous testing and evaluation to identify and resolve any harmful outputs before deploying the technology.