In brief
This inquiry isn't just theoretical. Many people naturally believe they need a human expert to listen, provide perspective and help steer them through their most critical moments and decisions. A recent ServiceNow study reveals that 70% of UK consumers expect humans to perceive their emotions, while only 42% expect the same from artificial intelligence (AI) chatbots, a gap that suggests consumers hold humans and AI to different standards of emotional understanding. As AI technology evolves and leaders balance competing forces, they must also consider the emerging need for AI to resonate with human emotions and experiences if it is to earn acceptance and trust in this new landscape.
Across multiple areas of life and work, people's openness to AI is higher than their current level of engagement. The EY Global 2025 AI Sentiment Study finds this adoption gap is particularly pronounced in finance: while 84% of respondents expressed openness to using AI for finance-related needs, only 43% actually use it for such purposes. Our own research explores these adoption dynamics further, revealing that while 60% of UK consumers express openness to using conversational AI tools for various tasks, adoption varies across demographics.
Notably, younger generations show a greater willingness to engage with AI, with 70% of Gen Z and Millennials comfortable using conversational AI for tasks, including sensitive scenarios, compared to just 40% of those at state pension age[1] and beyond. This gap presents a significant opportunity, but one that can’t be captured solely by increasing access to technology: instead, AI experiences should be designed with these varying preferences and expectations in mind. By understanding and addressing the specific needs and expectations of different customer groups, organizations can create more effective and engaging AI solutions that resonate with users and bridge the adoption gap.
AI is already revolutionizing how we experience services across a spectrum of critical, emotionally charged needs. Interaction models are evolving, moving the human-machine interface into new territories of experience design and usability. With the large-scale adoption of generative AI (GenAI), consumer behavior is changing, yet many organizations still focus primarily on usability and technology rather than the emotional dynamics influencing consumer choices. To differentiate themselves, organizations must address these human dynamics in high-stakes moments, rather than merely optimizing transactions.
When researching what AI might mean during the hard corners of life, simply asking individuals how they might feel about using future technologies can yield unreliable data. People struggle to accurately predict their needs, and how technology could meet those needs, in emotionally charged situations. To address this limitation head-on, the EY Studio+ UK team developed a speculative AI experience (AX) research concept: a conversational AI agent prototype designed to assist people at complex intersections of money, life and decision-making. Positioned as a future service offered by banks, the research concept allowed us to delve into various aspects of the banking experience, uncovering attitudes towards GenAI in financial services and exploring consumers’ acceptance of GenAI across a range of scenarios.
Consumers are ready for AI that works for them – under certain conditions.
Traditionally, interactions with banks have focused on obtaining value, whether through accessing products and services or resolving issues. In many cases, AI can effectively handle these scenarios without compromising customer satisfaction. However, when emotions are high and the risk of failure looms, consumers naturally expect human resolution. This stems from the belief that only humans possess the skills, knowledge, empathy and adaptability necessary to navigate critical situations effectively.
In exploring whether customers would accept GenAI in complex, high-emotion moments, the EY Studio+ UK team identified three areas of insight:
Empathy demand refers to customers’ desire to feel recognized, respected and responded to at an appropriate emotional level, specific to their unique circumstances and influenced by the technological context of the interaction.
As an explanatory tool, the prototype illustrates that GenAI can provide solutions that not only meet practical needs but also offer emotionally appropriate, non-judgmental support. Interestingly, the prototype revealed that in some situations, particularly those involving feelings of shame or anxiety, customers may be better served without a human representative. While certain emotional states will always necessitate human intervention, it is valuable to acknowledge that the objectivity and neutrality of AI can be a benefit.
Our report, From AI to AX: Empathy demand in the next generation of AI powered services (pdf), delves into these insights, which, when applied, enable AI systems to recognize and respond to emotional contexts. By doing so, these systems can assist customers without compromising empathy; in some scenarios, they can even enhance it.
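As a thought experiment, the recognize-and-respond behavior described above can be framed as a routing decision: routine, low-emotion requests stay with the AI agent, emotional states the research associates with a preference for non-judgmental objectivity get a supportive AI register, and states that necessitate human intervention are escalated. The sketch below is a minimal, hypothetical illustration assuming an upstream emotion classifier; the emotion labels, threshold and names are our own assumptions, not drawn from the research or any real system.

```python
from dataclasses import dataclass

# Hypothetical emotion labels; a production system would rely on a
# calibrated emotion/intent classifier rather than fixed sets.
HUMAN_REQUIRED = {"grief", "panic", "acute_distress"}  # assumed to always need a person
AI_PREFERRED = {"shame", "anxiety"}  # participants valued AI's neutrality here

@dataclass
class Interaction:
    message: str
    emotion: str       # label from the assumed upstream classifier
    confidence: float  # classifier confidence, 0.0 to 1.0

def route(interaction: Interaction) -> str:
    """Decide whether the AI agent or a human handles this turn."""
    if interaction.confidence < 0.6:
        return "human"  # low confidence: fail safe to a person rather than guess
    if interaction.emotion in HUMAN_REQUIRED:
        return "human"
    if interaction.emotion in AI_PREFERRED:
        return "ai_supportive"    # empathetic, non-judgmental register
    return "ai_transactional"     # factual, direct register

print(route(Interaction("I've missed two loan payments.", "shame", 0.9)))
# -> ai_supportive
```

The point of the sketch is the fail-safe ordering: when the system cannot read the emotional context reliably, it defaults to a human rather than risking a tone-deaf automated response.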
I would want to have a conversation with the AI agent, to understand what would happen if I was in a tricky financial situation. I would ask for recommendations, with the agent drafting my responses for the bank, but also guiding me through what is best and most affordable for me. That's the sort of interaction I would expect. It's advisory, it's empathetic, it's there to benefit me.
Through interaction with the prototype, the research participants articulated how nuanced and carefully calibrated empathy demand must be to address their needs in high-emotion contexts. Throughout the discussions, however, customers also consistently raised practical, pragmatic concerns about GenAI technology, including privacy, data security and third-party use of data.
The EY Global 2025 AI Sentiment Study also captures skepticism about whether organizations will manage AI with consumers' best interests in mind: 30% of respondents do not trust that financial services companies, such as banks and insurers, offer AI solutions that align with their best interests.
While there is excitement about GenAI’s potential to enhance financial wellbeing, the enthusiasm is tempered by persistent concerns about how organizations will manage AI. To foster acceptance and adoption of AI in financial services, it is essential to design systems that address these fears and communicate the practical realities of how AI operates.
I’m more cynical about the sharing and using of data, but if it’s data that is already shared with your bank, and they’re helping you with the information they have, to make the right choices, that would be beneficial.
Our findings highlight three key concerns that must be addressed for advanced GenAI to gain full acceptance among customers:
As we explore consumers’ attitudes towards AI tools, it becomes clear that preferences vary across demographics. Notably, the quantitative part of this research, conducted online among a nationally representative sample of 2,000 UK consumers, finds that younger consumers demonstrate a greater openness to engaging with conversational AI tools in financial settings. This generational divide is crucial for understanding how to design AI experiences that align with consumer expectations and preferences.
While 60% of total participants expressed a willingness to use conversational AI tools for various tasks, the openness is particularly pronounced among Gen Z and Millennials, 70% of whom would be comfortable using conversational AI to apply for financial products, compared to 40% of those at state pension age and beyond. This trend suggests that as younger consumers grow more accustomed to these technologies, their comfort level will likely increase, paving the way for broader acceptance.
In terms of the form and interaction style of conversational AI, our research reveals that 58% of consumers prefer to engage with conversational AI in a shapeless form rather than a humanlike one, a preference even more pronounced among Gen Z and Millennials (61%). Moreover, 67% of consumers expressed a desire for conversational AI to be factual and direct rather than emotionally engaging. This aligns with findings from our qualitative research, which indicated that consumers often prefer straightforward assistance rather than an emotional check-in.
These insights underscore the importance of designing AI experiences that resonate with users' needs and expectations, particularly in high-stakes situations where emotional context is paramount. By understanding these preferences, organizations can create AI solutions that not only meet practical needs but also align with the emotional dynamics of their users.
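To make these preferences tangible, here is a minimal configuration sketch of how survey-informed defaults might be encoded while still letting individual users override them; the field names, cohort labels and values are hypothetical illustrations, not an actual product specification.

```python
from dataclasses import dataclass, field

@dataclass
class AgentStyle:
    # Defaults loosely mirror the survey findings cited above (illustrative only).
    embodiment: str = "shapeless"      # 58% preferred no humanlike form
    tone: str = "factual_direct"       # 67% wanted factual, direct interaction
    emotional_checkins: bool = False   # straightforward help over emotional check-ins

@dataclass
class UserProfile:
    cohort: str                        # e.g. "gen_z", "millennial", "pension_age"
    overrides: dict = field(default_factory=dict)

def style_for(user: UserProfile) -> AgentStyle:
    """Start from survey-informed defaults, then honor the individual's choices."""
    style = AgentStyle()
    for key, value in user.overrides.items():
        setattr(style, key, value)     # an explicit user preference beats any default
    return style

print(style_for(UserProfile("pension_age", {"tone": "guided_stepwise"})))
```

The design choice worth noting is that population-level statistics only set starting points; the explicit preferences of each user always take precedence.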
As we move towards a future where AI plays a pivotal role in financial services, it is essential to establish a framework that guides the design of conversational and agentic AI technologies. Based on our research, we’ve developed a set of nine principles that serve as a roadmap for creating AI experiences that resonate with users' needs and expectations. These principles are particularly crucial in highly emotional and complex interactions, where the stakes are high and the demand for empathy is greatest.
These principles can be categorized into three key design themes that should guide the development of AI experiences:
As organizations navigate the evolving landscape of AI-powered services, it is clear that empathy demand will play a pivotal role in shaping customer experiences. Organizations that prioritize emotional intelligence, transparency and user preferences in their AI design will not only enhance customer satisfaction but also gain a competitive edge in the marketplace. By understanding and addressing the nuanced needs of consumers, particularly in high-stakes situations, businesses can create AI solutions that resonate deeply with users, ultimately transforming the way we interact with technology.