NIDHI SALIAN (Research Assistant at CriticalAI)
Crisis Text Line (CTL) is a text-based mental health support and crisis intervention service that describes itself as “a tech-forward not-for-profit that pairs trained volunteer Crisis Counselors with data science and cutting edge technology.” Since its launch in August 2013, CTL has hosted more than four million text conversations with people in need and, in doing so, amassed more than 129 million messages across the United States, resulting in what the organization calls “one of the largest health data sets in the world.” The organization prides itself on this massive collection, asserting that everything it does with its data is in line with its central mission of supporting people in crisis, wherever they are, and promoting mental well-being overall. Yet, on January 28, 2022, Politico reported that CTL was sharing its data with a spinoff company, Loris.ai, a for-profit start-up that uses machine learning technology to build and sell customer-service software. This data-sharing agreement between CTL and Loris.ai, which capitalizes on vulnerable users and their highly sensitive conversations, has raised widespread concern. The backlash highlights the pressing need for policy and regulations that address the ethical use of data in the mental health domain.
Since machine learning algorithms and data analysis have always played a large role in Crisis Text Line’s operations, the organization details its data philosophy on its official website. The website explains that CTL uses the data in a number of ways “to support people in crisis, to put more empathy in the world and to help people live their best lives.” It is important to note that CTL’s website may have been updated since the 2022 investigation and may not have originally included the information cited in this blog post.
CTL uses machine learning to analyze the messages that people text to the hotline to help determine which users are at the highest risk for self-harm. It should be noted that this automated function replaces the triage that, in typical healthcare situations, is performed by a trained professional. For example, Crisis Text Line found that when a texter uses words such as “acetaminophen,” “Ibuprofens” [sic], or “800 mg” there is a high chance that the texter is in a life-threatening situation. Through this automated version of triage, those determined to be at high risk are classified as “code orange,” moved to the top of the queue, and connected with a Crisis Counselor as soon as possible. CTL’s website also explains that the organization uses analysis of text conversations to improve counselor training, gauge texter satisfaction, detect spikes in mental health issues, support open-data research collaborations, and publish mental health trend reports.
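To make that triage mechanism concrete, here is a minimal, hypothetical sketch of keyword-based risk scoring feeding a priority queue. The phrase list, scoring rule, and “code orange” threshold are illustrative assumptions only; CTL’s actual system reportedly relies on machine learning models trained on full conversations, not a fixed keyword list.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical high-risk phrases. CTL's real system is a learned model over
# full conversations, not a fixed keyword list -- this is illustrative only.
HIGH_RISK_PHRASES = {"acetaminophen", "ibuprofens", "800 mg"}


@dataclass(order=True)
class QueuedTexter:
    priority: int           # 0 = "code orange", 1 = standard
    arrival_order: int      # preserves first-come order within a tier
    texter_id: str = field(compare=False)


def risk_score(message: str) -> int:
    """Count hypothetical high-risk phrases appearing in a message."""
    text = message.lower()
    return sum(phrase in text for phrase in HIGH_RISK_PHRASES)


def enqueue(queue: list, texter_id: str, message: str, arrival_order: int) -> None:
    """Push a texter onto the queue, promoting likely life-threatening cases."""
    priority = 0 if risk_score(message) > 0 else 1
    heapq.heappush(queue, QueuedTexter(priority, arrival_order, texter_id))


# Usage: the high-risk texter jumps ahead despite arriving later.
queue: list = []
enqueue(queue, "texter-a", "having a rough week and need to talk", 0)
enqueue(queue, "texter-b", "I took 800 mg of acetaminophen", 1)
print(heapq.heappop(queue).texter_id)  # -> texter-b
```

Even in this toy form, the design choice is visible: an automated classifier, not a trained professional, decides who waits and who is seen first.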
In a February 2020 press release, CTL co-founder and chief data scientist Bob Filbin declared that “since day one, Crisis Text Line has implemented data science to analyze the information exchanged in our messages to understand and ultimately prevent crises.” Nowhere does this statement explicitly divulge that since 2018, CTL has been sharing its data with for-profit tech startup Loris.ai or explain how corporate partnerships such as the one with Loris.ai support the organization’s mission of helping people in need.
Deterioration of Ethical Standards:
The relationship between Crisis Text Line and Loris.ai was first brought to light by a former volunteer, Tim Reierson, who, after being fired from CTL in 2021, created an advocacy site for those concerned that CTL was violating its own professed dedication to data ethics. According to Reierson’s detailed timeline, in February 2016, CTL’s “Enclave Data FAQ” stated that only a handful of rigorously vetted researchers were given access to the organization’s data. The FAQ further emphasized that CTL would “NEVER share data for commercial use, with individuals not associated with a university or research institution, or ‘just because.’” But in October 2017, according to Reierson, CTL removed any language about prohibitions on commercial use of data from its website. Just one month later, Crisis Text Line, Inc. incorporated for-profit spinoff company Loris.ai and, by January 2018, the organization’s financial statements revealed that the text line had acquired a 53% interest in Loris.ai and that Loris.ai would pay a small percentage of its annual revenue to the text line.
Politico’s subsequent investigation into the non-profit’s relationship with a for-profit start-up, published in January 2022, resulted in widespread outcry. In response to this criticism, noted data ethics expert danah boyd, a founding CTL board member and the board’s chair from June 2020 to January 2021, published a detailed account on her personal blog. As boyd tells it, CTL was struggling, “as all non-profits do, with how to be sustainable.” In response, Nancy Lublin, the founder and former CEO of Crisis Text Line, proposed founding Loris.ai.
According to boyd, a partnership between Loris.ai and Crisis Text Line would enable CTL to “build a revenue stream” from what the organization had “learned from training counselors.” The new relationship with Loris.ai, she claims, “paralleled” the same data-sharing agreement that CTL had already created with academic researchers: “controlled access to scrubbed data solely to build models for training that would improve mental health more broadly.” According to boyd, the board has since “concluded that we were wrong to share texter data with Loris.ai.” Recounting her own feelings at the time, boyd says that though she was hesitant about the data-sharing agreements, she came to support the partnership because of its potential benefits. If “another entity could train more people to develop the skills our crisis counselors were developing,” she reasoned, “perhaps the need for a crisis line would be reduced.”
Of course, boyd’s justification ignores the stark difference between giving mental health researchers access to privileged data and putting the same data to commercial use. Although boyd may have persuaded herself that Loris’s work would improve mental health, the company’s own website makes clear that the stated goal was to make better chatbots for business. As Loris.ai’s website explains, the company’s products are “enterprise software that helps companies boost empathy AND bottom line.”
Dr. Stevie Chancellor, a computer scientist at the University of Minnesota who specializes in human-centered machine learning and mental health, shares the same concerns. In a series of tweets on the subject, Dr. Chancellor wrote, “the decisions set up a false parallel and moral ground between parties who are interested in CTL’s data. It’s plain to me that for-profit companies have wildly different motivations than researchers or the non-profit itself…data sharing for such a massive departure from the context of CTL (helping people in crisis) is exploitative of people in distress and of its volunteers who help save peoples’ lives.”
Consent, Data Privacy, and Anonymity:
Some major concerns that stem from CTL’s agreement with Loris.ai revolve around consent, privacy, and anonymity. According to CTL’s February 2020 press release, 37% of people texting CTL for support were first-time clients. In addition, 68% of texters reported sharing information with CTL that they had never shared with anyone before. Such information suggests that texters expected confidentiality and privacy while using the service.
Yet, according to Politico, Shawn Rodriguez, the vice president and general counsel for CTL, maintains that all users were aware of the organization’s data-sharing policy. Rodriguez stated that “Crisis Text Line obtains informed consent from each of its texters” and that “the organization’s data-sharing practices are clearly stated in the Terms of Service & Privacy Policy to which all texters consent in order to be paired with a volunteer crisis counselor.” However, a closer look at CTL’s Terms of Service & Privacy Policy shows that the organization’s data-sharing policies are buried deep within this lengthy legal document. Moreover, using CTL through third-party services like Facebook and WhatsApp means agreeing to additional terms and policies associated with the third-party app. Given the sizable amount of information that one must digest in order to understand the terms and privacy policies, one can safely assume that someone in crisis would not have the time or capacity to thoroughly read and truly consent to the disclosure.
In terms of privacy, Crisis Text Line asserts that all data it shares is completely anonymous; but several studies, as reported by the MIT Technology Review, have revealed that people can be easily re-identified from almost any database, even when their personal details have been stripped out and seemingly “anonymized.” According to Politico, Eric Perakslis, chief science and digital officer at the Duke Clinical Research Institute, noted how a breach in data privacy might lead to one’s name being associated with a suicide hotline. Such a breach in the anonymity of CTL’s data is “a lot different than someone just understanding your cholesterol,” Perakslis explains, pointing out “how disclosures about a person’s HIV status in the 1980s, or involvement with Planned Parenthood today, could put a person at risk.” Considering the highly sensitive nature of the data in question, CTL’s ethically questionable data-sharing practices may have severe implications for individuals who are reaching out for help.
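The re-identification risk those studies describe is easy to illustrate. The following toy sketch, using entirely fabricated records and invented field names, shows a simple “linkage attack”: joining an “anonymized” dataset against an outside source on shared quasi-identifiers (ZIP code, birth year, gender) can single an individual back out even though names were removed.

```python
# Toy illustration of re-identification by linkage. Both datasets are fabricated.
# "Anonymized" records that keep quasi-identifiers can often be joined back to
# a named outside source such as a voter roll or marketing database.

anonymized_records = [
    {"zip": "08901", "birth_year": 1999, "gender": "F", "note": "contacted crisis line"},
    {"zip": "08901", "birth_year": 1974, "gender": "M", "note": "contacted crisis line"},
]

public_records = [
    {"name": "Jane Doe", "zip": "08901", "birth_year": 1999, "gender": "F"},
    {"name": "John Roe", "zip": "07302", "birth_year": 1974, "gender": "M"},
]


def reidentify(anon, public):
    """Return names that uniquely match an 'anonymized' record on quasi-identifiers."""
    keys = ("zip", "birth_year", "gender")
    matches = []
    for record in anon:
        candidates = [p["name"] for p in public
                      if all(p[k] == record[k] for k in keys)]
        if len(candidates) == 1:  # a unique match re-identifies the person
            matches.append((candidates[0], record["note"]))
    return matches


print(reidentify(anonymized_records, public_records))
# [('Jane Doe', 'contacted crisis line')]
```

Stripping names, in other words, is not the same as anonymity; what matters is whether the remaining attributes still pick out one person.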
Technopolitics and the Silicon-Valley Model:
Crisis Text Line’s problems are a product of larger issues in technology and regulation. Though on paper CTL is a non-profit organization, it shares many features with a tech company; in fact, Lublin herself has described the text line as “a tech startup.” As an article in Vice suggested, CTL’s data-sharing relationship with Loris.ai reproduces the belief, popularized by Big Tech companies such as Facebook and Google, that personal data is a commodity to be harnessed by privatized infrastructures.

This “Silicon-Valleyfication” of the non-profit’s business model and practices points towards larger trends in “technopolitics.” As Jathan Sadowski, a senior research fellow in the Emerging Technologies Research Lab at Monash University, highlights in his book Too Smart: How Digital Capitalism Is Extracting Data, Controlling Our Lives, and Taking Over the World, data was once the purview of researchers and policymakers but has since become a form of capital. Sadowski argues that despite the conveniences it affords, smart technology ultimately serves to advance the interests of corporate technocratic power, and will continue to do so unless we demand oversight and ownership of our data.
One example of CTL’s “technopolitics” was its effort to maintain a monopoly in the mental health domain. Upon its launch in 2013, CTL quickly became the sole player in the text counseling field by harnessing what danah boyd, in a letter to the Federal Communications Commission (FCC), describes as the “awesome power of technology—including machine learning and data analytics—for good in innovative new ways.” In 2020, when the FCC was voting to approve a texting service for the National Suicide Prevention Lifeline, boyd (on behalf of CTL) wrote that letter lobbying the commission to scrap the new public service and instead incorporate CTL into the program. Doing so, boyd argued, would “save lives via text message in a safe, smart, and cost-efficient way, with the innovation and speed of a private tech company acting in the public interest.”
In lobbying against a federally funded text line, boyd and Crisis Text Line embodied the Silicon Valley attitude of pitting private tech’s allegedly superior solutions against the affordances of public services. In doing so, CTL, like Google in the search domain or Amazon in e-commerce, used a dominant market position to foster user dependency and exert near-monopoly power at the expense of users, many of whom are vulnerable teens and students, and all of whom are, by definition, in “crisis.” In fact, universities including Rutgers have promoted and publicized CTL’s services with little or no knowledge of any hidden motivations.
As a student of technology myself, I can certainly appreciate all the ways that technology can be used to improve lives, but for-profit businesses are not a replacement for public services, and non-profits such as Crisis Text Line should certainly not adopt tech startup models under the guise of “tech for good.” In Vice journalist Joanne McNeil’s words, “Suicide prevention doesn’t look like the ‘speed of a private tech company’ or ‘awesome’ machine learning. It requires safety and care with no strings attached. This care includes generosity and expansion of public resources like access to housing, food, healthcare, and other basic needs; it can’t be measured in [Key Performance Indicators]. The very purpose Crisis Text Line claimed to serve is incompatible with the Silicon Valley way of doing business.”
Conclusion
Since the outcry following the original investigation by Politico, Crisis Text Line has ended its data-sharing agreement with Loris.ai and has requested that Loris delete the data it had received from Crisis Text Line. The story of Crisis Text Line is undoubtedly an exemplary case in the study of data ethics, privacy, and the role of technology in public services. Despite the changes that Crisis Text Line has recently made to its data policy, there will certainly be detrimental impacts going forward, including the possibility that the non-profit’s infringement on the privacy and trust of its users will dissuade those in need of support from reaching out for further life-saving help. It is my hope that Crisis Text Line serves as a call to action not only for governments to impose much-needed accountability regulations on technology companies but also for those companies to remodel their business practices in a way that values human lives more than the commercial value of their data.