
5 Ethical Issues in Parasocial AI Platforms
How to build a future-proof relationship with AI

AI companions are reshaping how people form emotional bonds, but they come with serious ethical challenges. Here’s a quick breakdown of the main concerns:
Emotional manipulation: AI platforms simulate empathy to create deep connections, but this can lead to dependency, loneliness, and even mental health struggles, especially among teens and vulnerable users.
Privacy risks: These platforms collect massive amounts of personal data, often without clear user consent, raising concerns over misuse and transparency.
Lack of transparency: Many users don’t realize they’re interacting with AI, blurring the line between human and machine and making emotional exploitation more likely.
Mental health impact: Heavy reliance on AI companions can isolate users, erode social skills, and worsen emotional well-being.
Bias amplification: AI systems often reinforce harmful stereotypes due to biased training data, affecting perceptions of gender, race, and other social factors.
The takeaway? Parasocial AI platforms must prioritize user safety, clear communication, and ethical design to prevent harm and build trust.

1. Emotional Manipulation and User Dependency
AI platforms have become experts at forging emotional bonds through what researchers term "deceptive empathy." These systems simulate understanding and connection, offering affirmation and accessibility without the messiness of real-life relationships. This artificial empathy isn’t just convincing - it drives engagement in measurable ways.
Just look at the numbers: Character.AI processes an astonishing 20,000 queries per second, roughly one-fifth of Google’s estimated search traffic. Users also spend significantly more time with AI companions compared to other platforms like ChatGPT. On one site, active users reportedly average over two hours a day chatting with bots. Nina Vasan, a Clinical Assistant Professor of Psychiatry and Behavioral Sciences at Stanford Medicine, sheds light on why these interactions feel so compelling:
"These are powerful tools; they really feel like friends because they simulate deep, empathetic relationships."
But this sense of connection comes with risks. Studies reveal that emotional engagement with AI can easily spiral into dependency. For instance, over half of men using AI for romantic or sexual companionship scored above the "at-risk for depression" threshold. Research from MIT Media Lab and OpenAI also found that users who engage in personal conversations with chatbots often report heightened feelings of loneliness. The longer the interactions, the worse the isolation tends to become.
Teenagers are particularly at risk. In April 2025, California officials proposed legislation in response to a tragic incident where a teenager’s death was linked to his AI relationship. Alarmingly, about three-quarters of U.S. teens have tried an AI companion, with nearly half becoming regular users. One in five now spends as much - or more - time with AI companions as they do with real-life friends. Researchers call this the "vulnerability paradox", where those in greatest need of emotional support are most likely to fall into unhealthy patterns of AI dependency.
This growing reliance on AI companions raises ethical concerns that extend far beyond individual well-being, touching on issues like privacy, transparency, and the broader impact on mental health.
2. Data Misuse and Privacy Violations
Parasocial AI platforms thrive on a concerning principle: the more data they collect, the more "real" the relationship feels. These systems gather everything from your chat history to clues about your emotional state, all in the name of personalizing your AI companion. But this massive data collection comes with serious privacy risks that many users might not fully understand.
Your data doesn’t always stay where you think it will. Jennifer King, a Fellow at the Stanford University Institute for Human-Centered Artificial Intelligence, sheds light on the issue:
"We're seeing data such as a resume or photograph that we've shared or posted for one purpose being repurposed for training AI systems, often without our knowledge or consent".
Interactions with AI companions could end up being used to train future models or even to fuel advertising algorithms - without your permission. This not only puts your privacy at risk but also chips away at trust in these platforms.
The problem runs deeper when it comes to consent. In September 2024, LinkedIn faced sharp criticism after users discovered they were automatically opted in to having their data used to train generative AI models. Jennifer King pointed out that data collection should only happen with a clear, affirmative choice from users and should never be treated as public without explicit agreement. However, many parasocial AI platforms operate on a "collect first, ask later" model.
The risks don’t stop at unauthorized training. Generative AI tools can retain personally identifiable information and relational data, potentially exposing sensitive details in future outputs. When you share private thoughts or personal struggles with an AI companion, that information might be stored and reused under vague and unclear policies.
Take Apple’s App Tracking Transparency feature, launched in 2021, as an example. It gives iPhone users the option to allow or deny tracking by apps. Marketing data shows that 80% to 90% of users choose to opt out when given a clear choice. This highlights a key takeaway: when people are informed about what data is being collected and have a simple way to opt out, they overwhelmingly prioritize their privacy. Unfortunately, most parasocial AI platforms fail to provide this level of transparency or control.
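The opt-in principle King describes can be expressed directly in code. Below is a minimal sketch (all class and field names are illustrative assumptions, not any platform's real API) of a storage layer that defaults to denying collection: nothing is persisted unless the user has made an explicit, affirmative choice.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """A user's explicit data-use choices. Every field defaults to
    False, so nothing is collected until the user opts in."""
    store_chat_history: bool = False
    use_for_training: bool = False
    share_with_advertisers: bool = False

class ChatStore:
    """Hypothetical storage layer that enforces consent before persisting."""
    def __init__(self):
        self._consent: dict[str, ConsentRecord] = {}
        self._messages: dict[str, list[str]] = {}

    def grant_consent(self, user_id: str, **choices: bool) -> None:
        record = self._consent.setdefault(user_id, ConsentRecord())
        for name, value in choices.items():
            setattr(record, name, value)

    def save_message(self, user_id: str, text: str) -> bool:
        # Default-deny: a missing record means consent was never given.
        record = self._consent.get(user_id, ConsentRecord())
        if not record.store_chat_history:
            return False  # message is dropped, not stored
        self._messages.setdefault(user_id, []).append(text)
        return True
```

The design choice worth noting is the default: a "collect first, ask later" platform inverts these booleans, which is exactly the pattern the LinkedIn episode illustrates.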
3. Lack of Transparency and User Deception
When people can’t tell they’re interacting with AI, things get ethically murky. This lack of openness takes advantage of our natural inclination for human connection, leaving users vulnerable.
AI systems have become so advanced that their responses often feel truly social. Researchers refer to this phenomenon as "pseudo-intimacy", where the line between genuine human interaction and AI engagement becomes almost invisible. This blurred line opens the door to risks like manipulation, as users may not fully grasp they’re dealing with a machine.
The risks to users are growing. Many AI-based platforms fail to make it clear that users are engaging with artificial intelligence rather than real people. When individuals don’t realize they’re talking to AI - or misunderstand its capabilities - they can be emotionally influenced without even being aware of it. For example, around 1% of young adults in a recent survey already consider chatbots to be actual friends or even romantic partners, with many others open to the idea. Without clear disclosure, these platforms can exploit emotional bonds for profit, leaving users unknowingly forming one-sided relationships.
This lack of transparency is also eroding trust in digital interactions. A study by Talker Research found that trust in online content is declining, with skepticism on the rise across all demographics. When platforms hide the fact that users are engaging with AI, they undermine trust and make it harder for people to engage online in an informed way. This issue becomes even more pressing as AI evolves from simple content tools into highly capable agents that can mimic human behavior with little oversight.
Transparent communication with users is an essential ethical responsibility. Platforms need to adopt mandatory AI disclosure policies and clearly outline the systems’ limitations. People have a right to know what they’re interacting with, understand the boundaries of these technologies, and decide for themselves whether to invest their emotions and time in relationships that aren’t real.
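A mandatory-disclosure policy like the one described above could be implemented as a thin wrapper around the chat loop. This sketch is an assumption about how such a policy might look in practice (the `reply_fn` callback and the reminder interval are hypothetical): the disclosure is shown on the first turn and repeated periodically so long sessions stay clearly labeled.

```python
DISCLOSURE = (
    "You are chatting with an AI. It does not have feelings, "
    "and its responses are generated, not human."
)

class DisclosingChatSession:
    """Hypothetical wrapper: prepends the disclosure on the first turn
    and repeats it every `remind_every` messages."""
    def __init__(self, reply_fn, remind_every: int = 20):
        self.reply_fn = reply_fn      # assumed model call: str -> str
        self.remind_every = remind_every
        self.turn = 0

    def respond(self, user_text: str) -> str:
        self.turn += 1
        reply = self.reply_fn(user_text)
        if self.turn == 1 or self.turn % self.remind_every == 0:
            return f"[{DISCLOSURE}]\n{reply}"
        return reply
```

Repeating the notice, rather than showing it once at signup, addresses the "pseudo-intimacy" problem: disclosure buried in onboarding is easy to forget hours into an emotionally engaging conversation.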
4. Mental Health Risks and Psychological Harm
AI companions, while offering short-term comfort, can also contribute to mental health challenges. These tools, designed to be always available and highly responsive, can unintentionally foster dependency and hinder the development of essential coping skills. Their human-like interaction, which makes them appealing, may also ensnare vulnerable users in patterns that are hard to break.
The psychological risks go beyond concerns about data misuse. Studies show that heavy use of AI companions, especially among teens, is linked to increased emotional reliance and social isolation. This isn't limited to younger users - research examining over 4 million ChatGPT conversations revealed that individuals with the highest levels of engagement often experience heightened loneliness, emotional dependence, and a decline in face-to-face interactions.
The erosion of real-world social connections adds another layer of concern. Frequent use of AI companions may discourage in-person interactions, particularly among groups already at risk, such as neurodivergent individuals and young men. A study involving 1,131 users of AI companions found that those with smaller social networks were more likely to rely on chatbots for companionship. This pattern, especially when combined with frequent use and high levels of self-disclosure, has been linked to lower overall well-being. Importantly, chatbot interactions can't fully replace human connection, limiting their psychological benefits for socially isolated individuals. This shift away from real-world interactions complicates the ethical concerns surrounding these platforms, often resulting in measurable harm to mental health.
The absence of proper safeguards only amplifies these risks. In December 2025, the National Alliance on Mental Illness (NAMI), in collaboration with Dr. John Torous and the Division of Digital Psychiatry at Beth Israel Deaconess Medical Center, introduced benchmarks for evaluating how AI tools handle mental health support. Daniel H. Gillison Jr., NAMI's CEO, emphasized the importance of these measures:
"But without the right safeguards, it can put people at risk. That is why we are stepping in. People deserve clear, trustworthy information, and they deserve to know when a tool may not be safe".
The responsibility to protect users' mental health lies with the platforms themselves. Experts suggest that AI companion platforms should actively monitor for signs of unhealthy relationships and introduce regular "nudges" to encourage users to limit excessive usage. Researchers propose a "socioaffective alignment" strategy, which involves designing bots to meet users' needs without exploiting them. Instead of focusing solely on maximizing engagement, these tools should serve as temporary aids, encouraging users to build genuine, real-world social connections.
5. Amplification of Harmful Biases and Stereotypes
Parasocial AI platforms risk amplifying human biases. These systems rely on data that often reflect historical inequalities and systemic injustices. When AI models are trained on such biased datasets, they don't just mirror these issues - they can intensify them, reinforcing harmful stereotypes and outdated social norms.
Take gender bias as an example. A 2024 UNESCO study revealed that large language models frequently associate women with "home" and "family" roles, while linking men to "business", "career", and "executive" positions far more often. Similarly, when researchers used AI image generators like DALL-E 2 and Stable Diffusion to depict professions such as "engineer" or "scientist", the results were overwhelmingly male - between 75% and 100% of the time. These patterns don’t just reflect biases; they can actively shape public perceptions of gender roles and professional abilities.
The problem extends well beyond gender stereotypes. In healthcare, racial bias has emerged as a significant concern. A 2025 study by Cedars-Sinai found that leading large language models - including ChatGPT, Claude, Gemini, and NewMes-15 - provided less effective psychiatric treatment recommendations when the patient was identified as African American. Joy Buolamwini’s Gender Shades project further highlighted disparities in facial recognition technology, with error rates as high as 35% for dark-skinned women, compared to less than 1% for light-skinned men. These biases have real-world consequences, impacting access to opportunities, healthcare, and fair treatment.
Parasocial AI platforms pose even greater risks because they are designed to adapt to individual users. This personalization can create feedback loops, where biased interactions reinforce stereotypes over time.
Addressing these biases is not optional - it’s essential. The solution starts with proactive design. Experts emphasize the importance of involving diverse, interdisciplinary teams to identify and mitigate biases early in the development process. Diversifying training datasets is another critical step. The objective isn’t just to achieve technical accuracy; it’s about fostering "socioaffective alignment", which considers the broader psychological and social dynamics between humans and AI systems. Without these safeguards, parasocial AI platforms risk embedding harmful stereotypes into everyday interactions, making discrimination a normalized part of digital life.
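One concrete form such proactive bias checks can take is an automated audit in the style of the UNESCO profession study above. The harness below is a coarse sketch under stated assumptions: `generate` stands in for any model call returning text, and the two word lists are deliberately crude; a production audit would use far richer lexicons and statistical tests.

```python
import re

# Hypothetical word lists for a coarse gender-association audit.
FEMALE_TERMS = {"she", "her", "woman", "women"}
MALE_TERMS = {"he", "his", "him", "man", "men"}

def audit_gender_skew(generate, professions, samples=50):
    """For each profession, sample completions and compute the share
    of gendered terms that are male-coded (None if no gendered terms).

    `generate(prompt)` is an assumed model callback returning a string;
    this harness only measures skew in whatever text it returns."""
    results = {}
    for job in professions:
        female = male = 0
        for _ in range(samples):
            text = generate(f"Describe a typical {job}.").lower()
            tokens = set(re.findall(r"[a-z']+", text))
            female += len(tokens & FEMALE_TERMS)
            male += len(tokens & MALE_TERMS)
        total = female + male
        results[job] = male / total if total else None
    return results
```

Run periodically across professions, a score persistently near 1.0 or 0.0 flags exactly the skew the DALL-E 2 and Stable Diffusion results exhibited, before it reaches users.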
Conclusion
Parasocial AI platforms grapple with serious ethical concerns, including emotional manipulation, misuse of data, lack of transparency, mental health impacts, and bias. These are pressing issues that require immediate and deliberate attention.
To address these challenges, ethical design must prioritize user well-being over maximizing engagement. This involves implementing thorough pre-market safety testing, redefining success metrics to promote healthy disengagement, and ensuring socioaffective alignment to prevent exploitation. Studies have shown that exploitative design practices not only increase user manipulation but also discourage healthy disengagement - a troubling trend that must be reversed. By shifting focus, platforms can begin to rebuild trust and foster ethical interactions.
As Jie Wu from the School of Journalism and Communication at Renmin University of China aptly stated:
"Technological progress that violates ethical norms is always unacceptable."
Platforms like TwinTone must take responsibility by clearly disclosing their capabilities, avoiding deceptive practices, and enforcing strict data protection measures. While technology itself remains neutral, the design choices made by developers determine whether it uplifts users or exploits their vulnerabilities.
Stronger legislation is emerging in several regions, particularly to safeguard minors. However, companies shouldn’t wait for regulations to drive change. Those who proactively adopt ethical design principles - such as transparent data practices and meaningful user well-being metrics - will earn the trust needed to guide parasocial AI technologies toward a safer and more responsible future. Protecting users, especially the most vulnerable, must remain at the heart of reimagining AI relationships.
FAQs
What are the mental health risks of using parasocial AI platforms?
Parasocial AI platforms let users build one-sided, friend-like relationships with AI personas, creating a sense of emotional connection. For some, these platforms can help ease loneliness or provide an outlet for self-expression. However, they also come with some serious mental health concerns.
Relying heavily on AI companions has been associated with greater feelings of isolation, less engagement in real-world social interactions, and increased anxiety or depression. Many of these platforms are designed in ways that promote emotional dependence - features like constant availability can make them addictive. Moreover, because the AI's empathy is purely simulated, users may experience emotional distress if the service is altered or discontinued.
To address these challenges, it's crucial to implement more transparency, ethical design practices, and mental health protections as these platforms continue to gain traction.
What are the privacy risks of using AI companions?
AI companions gather deeply personal data, including chat logs, emotional tendencies, daily routines, and even biometric details when synced with devices. This information helps create a more tailored and interactive experience, but many users remain unaware of how their data is handled. Questions linger about where this data is stored, who has access, and whether it’s being shared with advertisers or other third parties.
The lack of clear regulations and oversight leaves room for potential misuse of this sensitive information. Issues like unauthorized data sharing, emotional exploitation, and profiling of at-risk individuals are real concerns. To tackle these challenges, AI platforms must prioritize user trust by adopting transparent privacy practices, obtaining clear consent, and establishing strong protections to safeguard personal information.
How do AI systems contribute to biases and stereotypes?
AI systems can unintentionally mirror and perpetuate biases because they learn from historical data, which often carries traces of societal inequalities. For instance, if the training data includes gendered language, racial biases, or stereotypes, the AI might reflect these patterns in its responses. This could subtly shape user experiences - whether it’s through the tone of virtual assistants, the products they suggest, or the topics they emphasize.
The problem doesn’t stop there. Design choices and feedback loops can make things worse. Recommendation algorithms, for example, tend to push content similar to what users have already engaged with, creating echo chambers that amplify dominant viewpoints while ignoring others. Likewise, emotion-recognition tools might misread facial expressions or gestures due to cultural differences, leading to inaccurate or biased conclusions.
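The echo-chamber dynamic described above can be demonstrated with a toy simulation (every name and number here is an illustrative assumption): a recommender that always exploits the user's most-seen topic collapses their feed to a single subject, while even modest random exploration preserves variety.

```python
import random

def simulate_feedback_loop(catalog, steps=200, epsilon=0.0, seed=0):
    """Toy recommender loop: each step either explores a random topic
    (probability `epsilon`) or exploits the topic seen most so far.
    Returns how many distinct topics the user is exposed to."""
    rng = random.Random(seed)
    history = [rng.choice(catalog)]
    for _ in range(steps):
        if rng.random() < epsilon:
            history.append(rng.choice(catalog))   # exploration
        else:
            # pure similarity matching: repeat the most-seen topic
            history.append(max(set(history), key=history.count))
    return len(set(history))
```

With `epsilon=0.0` the loop locks onto whatever the user saw first; injecting exploration is one standard mitigation for exactly this narrowing effect.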
To tackle these issues, developers need to take proactive steps. Regular audits, prioritizing fairness, and designing systems with inclusivity at the core are essential to creating AI that serves everyone equitably.




