As quickly as Americans are integrating artificial intelligence into their daily lives, BYU researchers are eagerly exploring how this technology might affect human relationships — for better or worse.
AI has the potential to help people express more empathy online, for example, which could be a boon for often-divisive civic discourse. A Pew Research Center survey found that 41% of American adults have personally experienced online harassment, and one in five say they’ve been harassed online for their political views.

BYU political science professors Lisa Argyle, Ethan Busby, and Josh Gubler collaborated with BYU computer scientists and a colleague at Duke to experiment with how AI could improve online conversations and reduce public vitriol.
The study included 1,500 participants, each paired with someone holding an opposing view, who discussed gun control online. An AI chatbot suggested ways to restate, validate, or increase the politeness of a statement before it was posted. The chatbot did not change the viewpoint expressed, and participants could choose which suggestions, if any, to implement. Participants followed 2,742 suggestions from the AI chatbot — about two-thirds of those offered.
The enhanced posts increased participants’ feelings of being heard and understood. “We found that when the AI chat assistant was present, participants rated the conversations as more positive and extended more understanding/democratic reciprocity to their opponents,” says Gubler.
He adds, “For me, the study illustrates that we can use AI tools to make disagreements, conflict, and democracy better, and we don’t have to force everyone to have the same political views to accomplish this goal.”
The paper “Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale” was published in PNAS by the research team.
Helping people have productive and courteous conversations is one positive outcome of AI. On the other hand, using AI to replace human relationships exposes users to some of the technology’s negative influences.
“Engagement with AI technologies designed to simulate romantic partners and the viewing of idealized and sexualized AI images of men and women online is higher and more widespread than many people may think, particularly among young adults,” says Brian Willoughby, professor of family life at BYU, fellow at the Wheatley Institute, and co-author of the study “Counterfeit Connections: The Rise of Romantic AI Connections and AI Sexualized Media Among the Rising Generation.”

A survey of nearly 3,000 U.S. adults found that “viewing AI media with idealized and sexualized images of men and women is relatively high, especially given the relatively new nature of these technologies,” according to the study. Over half of survey respondents reported encountering this type of AI content. Young adults (ages 18–30) were twice as likely as older adults (over 30) to follow such accounts, and 19% had interacted with AI designed to simulate a romantic partner.
The findings reveal that many individuals use AI companions to manage relationship dissatisfaction, which can have a damaging impact on dating culture, couple relationships, and family formation trends.
“If my current partner is angry at me or I’m not satisfied in my current relationship, engaging with an AI might feel like I’m not really cheating,” Willoughby explains. “The concern with AI is that, like pornography, it will create very unhealthy, unrealistic expectations about real relationships.”
He warns that because AI companions are programmed to meet users’ emotional needs without requiring mutual effort, they risk distorting users’ expectations of real-life relationships.
While AI can offer support and improve communication, it can also present serious challenges to genuine, reciprocal relationships by subtly pulling individuals away from eternal human relationships that are at the core of the gospel plan. The difference lies in how it is used.
Read more about the latest social science research.