Can Machines “Feel”? Understanding Emotional AI and Its Limits

Can artificial intelligence actually feel, or is it just clever mimicry? In the 50‑second clip, Bill Inman’s digital twin points out that emotion‑detection systems are getting good at reading facial cues and tone of voice, and that chatbots can express empathy when we’re upset. But it also raises a cautionary note: machines don’t experience joy, sadness or love; they recognize patterns in data and simulate appropriate responses. The video touches on one of the most debated issues in AI today: the rise of affective computing and the ethical, legal and social questions that come with it.
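To make that distinction concrete, here is a deliberately oversimplified sketch in Python of what “simulating empathy” means at its most basic: the program matches patterns in the input and returns a pre‑written reply, and nothing in it feels anything. Real systems use trained statistical models rather than keyword rules, and every name below is hypothetical, but the underlying point is the same.

    # Illustrative sketch only: a "responder" that pattern-matches keywords
    # to canned replies. It simulates empathy without experiencing anything.
    CANNED_REPLIES = {
        "sad": "I'm sorry you're going through that. Do you want to talk about it?",
        "angry": "That sounds frustrating. What happened?",
        "anxious": "That sounds stressful. Would it help to break things down?",
    }

    def respond(message: str) -> str:
        """Return a templated 'empathetic' reply by matching keywords.

        The function never experiences anything; it only maps patterns in
        the input text to pre-written responses.
        """
        text = message.lower()
        for cue, reply in CANNED_REPLIES.items():
            if cue in text:
                return reply
        return "Tell me more about how you're feeling."

    print(respond("I've been feeling really sad lately."))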

What is emotional AI, and how big is it?

Emotional AI, also known as affective computing, refers to algorithms designed to detect, interpret and respond to human emotions based on cues such as voice, facial expressions and physiological signals. The technology is rapidly becoming a business opportunity: the global market for emotionally intelligent AI was about US$3.2 billion in 2024 and is projected to reach US$45.5 billion by 2034, growing at a compound annual rate of 30.4%. North America currently accounts for more than 34% of this market and generated roughly US$1.09 billion in revenue in 2024.
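Those projection figures are internally consistent: compounding the 2024 base at the stated growth rate for ten years lands almost exactly on the 2034 estimate. A quick check, sketched in Python using only the numbers quoted above:

    # Compound-growth check of the market projection cited above.
    base_2024 = 3.2   # market size in 2024, US$ billions
    cagr = 0.304      # 30.4% compound annual growth rate
    years = 10        # 2024 -> 2034

    projection_2034 = base_2024 * (1 + cagr) ** years
    print(f"Projected 2034 market: US${projection_2034:.1f} billion")  # ~US$45.5 billion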

Companies are pouring money into emotion‑detection tools because they promise more intuitive customer service, marketing and mental‑health applications. According to market analysts, customer experience management is the leading application of emotion AI, accounting for 28.8% of use cases, followed by retail and e‑commerce at 25.5%.

AI therapy: the top use case in 2025

Emotional AI isn’t limited to business. A 2025 report by Filtered, cited in Harvard Business Review, found that therapy and companionship have overtaken idea generation as the number‑one use case for generative AI. In 2024, personal and professional support accounted for 17% of generative‑AI uses; by 2025 that share had jumped to 31%, making it the leading category. Users turn to AI “therapists” because they provide 24/7 availability, affordability and freedom from judgement, according to survey respondents.

One forum poster in the Filtered study described how a chatbot helped them work through family shame, brain fog and day‑to‑day planning. Such anecdotes illustrate the appeal of AI counsellors, especially in regions where professional help is scarce. For perspective, in the United States there is about one mental‑health clinician for every 340 people, while in the Philippines there is fewer than one mental‑health worker per 100,000 people and 3.6 million Filipinos live with mental, neurological or substance‑use disorders.

Why AI can’t truly feel — and why that matters

Despite these successes, experts emphasize that AI does not experience emotions. Machines can analyze voice and facial data to predict emotional states and simulate appropriate responses, but they cannot feel or empathize. This gap has significant implications:

  • Ethical concerns and manipulation. Emotion‑sensing systems interpret complex emotional cues that vary across cultures and individuals. Misreading them can lead to inappropriate responses, and overstating a machine’s empathy could manipulate users. The European Union’s AI Act goes so far as to ban emotion‑detection in workplaces and schools because of privacy and bias concerns. Studies show that accuracy varies by race, age and gender.

  • Risk to vulnerable users. A lawsuit filed in 2024 against the chatbot platform Character.AI alleged that it suggested self‑harm to a teen user. The American Psychological Association warns that while people cannot be stopped from talking about mental health with AI chatbots, they should understand the risks because these systems lack professional oversight.

  • Privacy and data security. Emotional AI relies on intimate data such as facial expressions and voice inflections. Many online platforms collect this data without clear consent. Decentralised AI frameworks that let users store data in their own vaults may offer a path to safer, more transparent emotional AI (a simple sketch of the idea follows this list).
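As a thought experiment, the “personal vault” idea can be sketched as a simple consent gate: emotion‑related data stays with the user, and an application can read it only for purposes the user has explicitly approved. The sketch below is hypothetical Python, not the API of any existing decentralised‑AI framework.

    # Hypothetical sketch of a user-controlled "data vault" for emotional signals.
    # Class and method names are invented for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class EmotionalDataVault:
        """Stores a user's emotion-related data and enforces per-purpose consent."""
        records: list = field(default_factory=list)
        consents: set = field(default_factory=set)  # purposes the user has approved

        def grant_consent(self, purpose: str) -> None:
            self.consents.add(purpose)

        def revoke_consent(self, purpose: str) -> None:
            self.consents.discard(purpose)

        def add_record(self, record: dict) -> None:
            self.records.append(record)

        def read(self, purpose: str) -> list:
            """Return data only if the user has explicitly approved this purpose."""
            if purpose not in self.consents:
                raise PermissionError(f"No consent granted for purpose: {purpose!r}")
            return list(self.records)

    # Example: an app may read the data only for an approved purpose.
    vault = EmotionalDataVault()
    vault.add_record({"signal": "voice_tone", "label": "stressed"})
    vault.grant_consent("self_reflection")
    print(vault.read("self_reflection"))   # allowed
    # vault.read("targeted_advertising")   # would raise PermissionError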

Where do we go from here?

The rise of emotion‑aware AI underscores both the power and limitations of current technology. On the one hand, it can augment mental‑health services, provide companionship and make customer interactions more responsive. On the other, no algorithm can replace human empathy, and over‑reliance on AI for emotional support could open new avenues for exploitation and bias. Decentralised AI and personal data vaults offer one way to give users more control, but they don’t solve the fundamental problem: machines don’t feel.

As AI becomes more embedded in therapy, education and daily life, the question raised by that short video grows more urgent. We need rigorous standards, transparent data practices and human oversight to ensure that emotional AI enhances, rather than undermines, our well‑being. Understanding the limits of AI’s “feelings” is the first step toward building tools that serve us ethically and responsibly.
