Originally published in MIT Technology Review, Feb 26, 2021.
Counselors volunteering at the Trevor Project need to be prepared for their first conversation with an LGBTQ teen who may be thinking about suicide. So first, they practice. One of the ways they do it is by talking to fictional personas like “Riley,” a 16-year-old from North Carolina who is feeling a bit down and depressed. With a team member playing Riley’s part, trainees can drill into what’s happening: they can uncover that the teen is anxious about coming out to family, recently came out to friends and it didn’t go well, and has experienced suicidal thoughts before, if not at the moment.
Now, though, Riley isn’t being played by a Trevor Project employee but is instead powered by AI.
Just like the original persona, this version of Riley—trained on thousands of past transcripts of role-plays between counselors and the organization’s staff—still needs to be coaxed a bit to open up, laying out a situation that can test what trainees have learned about the best ways to help LGBTQ teens.
Counselors aren’t supposed to pressure Riley to come out. The goal, instead, is to validate Riley’s feelings and, if needed, help develop a plan for staying safe.