Bringing together users is a critical first step: it surfaces what success should look like, clarifies the role AI will play in the work, and establishes which principles for responsible and ethical AI you will abide by throughout the process.
Substeps
- Convene a representative sample of users/stakeholders who will help anchor responsible AI development in the needs of students.
- Identify and explore your overall objectives as a team/organization and the potential role of AI relative to your other activities and resources.
- With your advisory group/team, review the Guiding Principles for responsible/ethical AI and add or modify any that are relevant to your context.
a) Convene a representative sample of users/stakeholders who will help anchor responsible AI development in the needs of students. Suggested groups include:
- Students: Students can provide valuable feedback on the AI tool's usability, relevance, and effectiveness, ensuring that the next steps address their unique needs, preferences, and expectations.
- Coaches/Advisors: Coaches or student advisors can offer valuable insights from their direct experience with students, helping to shape AI tasks and identify potential risks or pitfalls for each AI task based on their real-world interactions.
- Higher Education Administrators: Higher education administrators – including individuals operating programs and centers within the educational institution – can offer insights into institutional needs, regulatory requirements, and student support priorities, helping tailor the AI tool to meet specific campus demands and compliance standards.
- Nonprofit Executives: Similar to administrators in higher education institutions, executives running college access and success nonprofits provide strategic direction, allocate necessary resources, and ensure alignment between the AI tool and the organization's (or program’s) overall mission and goals.
- AI Developers: AI developers can provide technical insights on the AI system's capabilities, limitations, and potential risks, ensuring that the next steps are technologically feasible and aligned with responsible AI practices.
- UX Designers and Researchers: User experience (or “UX”) designers and researchers can contribute user-centered perspectives, ensuring that the AI tool prioritizes student needs, preferences, and potential usability concerns.
- Professors: Professors can offer pedagogical insights as to what works well with students and can speak to what skills students need to have when they enter the classroom – and when they complete their degree or program.
Questions to Discuss:
- What are the relevant student groups you might consider?
- Who interacts with students that you might include in this group – educators, coaches, tutors, counselors, etc.?
- Who leads and operates programs that interact and overlap with the goals and tasks you are considering using AI to address?
- Who approves requirements for the tech team, or who approves procurement requests?
- Who else would you add to this list?
How We Did It:
Beyond 12 consulted the inputs in its organizational logic model when determining which individuals and institutions should help guide the development of its MyCoach AI tool. The chosen representatives spanned the following groups:
- Students, including first-generation students, students from racial/ethnic groups historically underrepresented in higher education, and students from low-income communities
- Beyond 12 board and staff, including leadership, support staff, and full-time near-peer coaches
- Partners, including administrators and staff members from degree-granting colleges and universities (both 2-year public institutions and 4-year private institutions), as well as leaders from citywide initiatives and college success organizations in key geographic areas
Beyond 12 engaged this group regularly, including upfront brainstorming conversations, ongoing workshops to develop success criteria and explore potential risks and tradeoffs, feedback sessions on prototypes of the tool, and evaluative tests to determine whether the AI tool was achieving its intended goals without causing harm.
b) Identify and explore your overall objectives as a team/organization and the potential role of AI relative to your other activities and resources.
AI not only needs to be implemented thoughtfully, with attention to its impact on users; it also needs to be considered in the broader context of your organizational and programmatic goals. Consider the expertise you already have: what you know about your aims and progress, the practices you already know lead to success with human approaches and would want to build into an AI tool, and the ways AI might free up people’s time, turbocharge their productivity, or scale up their reach.
Also consider how to include input from your customers and community, not just through your user advisory group (see previous sub-step), but also from the wider group. “Those deciding how to use AI should first ask the affected community members: 'What are your needs?' and 'What are your dreams?'” says author Afua Bruce, who co-wrote The Tech That Comes Next: How Changemakers, Philanthropists, and Technologists Can Build an Equitable World. “The answers to these questions should drive constraints for developers to implement, and should drive the decision about whether and how to use AI.”
Questions to Discuss:
- What are you trying to accomplish? What would success look like? What outcomes would be unacceptable?
- How might AI help you in achieving your goals or solving problems?
- Which tasks in your process could benefit from the machine precision and accuracy of AI tools?
- How might people in your organization benefit from using AI? What could AI free up or build upon in those people’s capabilities?
- Which repetitive tasks that people in your organization are performing could be delegated to AI?
- In place of those repetitive tasks, what could AI technology allow individuals in your organization to focus their energy, empathy, skills, and potential on?
- How might you make the best use of humans’ judgment, flexibility, and creativity?
- Which activities should AI not be used for?
- How will you determine if the AI tool outperforms your current process?
How We Did It:
In Beyond 12’s coaching model, AI (and in particular, machine learning) was already a core part of our technology. For several years, we had offered holistic advising that married personalized support from human, near-peer coaches with AI-powered nudges and analytics. GenAI presented an opportunity to allocate our team members’ time and expertise more effectively, improving the quality of coaching services provided to students while optimizing operational costs.
In 2023, we partnered with IDEO to organize a half-day session for our team members to learn about the latest developments in GenAI and potential areas of opportunity for Beyond 12. We used insights from this session to identify, scope, and launch pilot projects that would explore how generative AI could help us scale our mission. These included:
- Automated Transcript Processor: We built a tool that offloads the manual transcript logging process from our college coaches, reducing their administrative burden so they can focus on interpreting transcript data to tailor their coaching approach.
- Automated Text Logging: We piloted a strategy to use generative AI to extract the topics and challenges students raise in text exchanges with their coaches. By leveraging generative AI to draw insights from the messages and using machine learning to label discrete conversations, we can now identify new topics and challenges, improve logging consistency, and significantly reduce coach workload.
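To make the text-logging pilot concrete, here is a minimal sketch of what such a pipeline could look like. This is not Beyond 12's actual implementation: the `Message` structure, the time-gap rule for splitting conversations, and the keyword-based topic rules are all illustrative assumptions. In production, the topic-extraction step would call a generative model rather than keyword matching, and the conversation segmentation would use a trained labeler.

```python
# Hypothetical sketch of an automated text-logging pipeline.
# All names, topics, and rules here are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Message:
    sender: str          # "coach" or "student"
    sent_at: datetime
    text: str

def segment_conversations(messages, gap=timedelta(hours=12)):
    """Group a message stream into discrete conversations.

    A simple stand-in for a machine-learning labeler: a new
    conversation starts after a long gap in the exchange.
    """
    conversations, current = [], []
    for msg in sorted(messages, key=lambda m: m.sent_at):
        if current and msg.sent_at - current[-1].sent_at > gap:
            conversations.append(current)
            current = []
        current.append(msg)
    if current:
        conversations.append(current)
    return conversations

# Keyword rules stand in for the generative model that would
# normally extract topics; in production this would be an LLM call.
TOPIC_KEYWORDS = {
    "financial aid": ["fafsa", "tuition", "scholarship"],
    "registration": ["register", "enroll", "class schedule"],
    "wellbeing": ["stressed", "overwhelmed", "anxious"],
}

def label_topics(conversation):
    """Return the sorted list of topics detected in a conversation."""
    text = " ".join(m.text.lower() for m in conversation)
    return sorted({topic for topic, words in TOPIC_KEYWORDS.items()
                   if any(w in text for w in words)})
```

The design choice worth noting is the separation of concerns: segmentation (which messages belong together) is independent of labeling (what a conversation is about), so either component can be upgraded without changing the other.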
c) With your advisory group/team, review the Guiding Principles for responsible/ethical AI and add or modify any that are relevant to your context:
- Empowering
- Asset-Based
- Transparent
- Accurate
- Inclusive
- Safe and Secure
Questions to Discuss:
- Which principles does your organization agree are a high priority?
- What do these principles mean to the different users/stakeholders?
- What should these principles look like in practice with the new AI tool? (Think about how users will access the tool, how users will make decisions with the tool, and the tone of the output, for example.)
- Does your team have any principles to add to this list?