The rise of Artificial Intelligence (AI)-powered technologies has significantly transformed how individuals interact with digital systems, particularly through Artificial Intelligence Voice Assistants (AIVAs) and generative AI applications such as ChatGPT. These technologies provide ideal contexts for exploring how task demands, emotions, and adaptive behaviors influence users’ adoption and satisfaction, yet existing studies rarely integrate task context and usage behavior into models of AI adoption and satisfaction.
In this context, Dr. John Vara Prasad Ravi completed his doctoral thesis, “Understanding Consumers’ Adoption and Usage Behavior with AI-Powered Tools for Different Tasks and Contexts”, under the supervision of Dr. Ramon Palau Saumell and Dr. Jan Hinrich Meyer, in the CONHATIVE – Consumer Behavior Perspectives research group at IQS School of Management. Its main objective was to explore users’ engagement with these technologies by examining behavioral intentions, satisfaction, and adaptive behaviors across different AI interaction experiences. Three empirical studies identify key psychological and contextual factors that shape individuals’ adoption of, and satisfaction with, AI-driven interactions, focusing on AIVAs and Large Language Models (LLMs).
Adoption of AIVAs
The first study investigated how task complexity influences users’ behavioral intentions to adopt AIVAs, using an online experiment and survey grounded in the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model, with two contrasting AIVA usage contexts: simple and complex tasks. The analysis showed that AIVA adoption depends on task context, with cognitive evaluations shifting according to task complexity. AIVAs were viewed as functional tools, especially by adult users, who seek efficiency in complex tasks and enjoyment in simple ones.
Emotions and AIVAs’ satisfaction
The second study in Dr. Ravi’s research explored how user-expressed emotions predict satisfaction across different AIVA modalities (text and voice) in different contexts. Through a mixed-methods approach combining multiple laboratory studies with survey data, the study confirmed that users’ satisfaction was influenced not only by functionality but also by user-expressed emotions conveyed through voice tone and speech content, underscoring the role of affective processes in human-AI interaction. However, it remained unclear in which situations each emotional cue becomes relevant, and why.
ChatGPT interactions
Finally, the third study examined how users adapt their behavior during ChatGPT-based search interactions, focusing on exploitative and explorative adaptation strategies and their impact on behavioral intentions to rely on ChatGPT for product and service searches. Using a quantitative approach with an online survey of ChatGPT users, the study offered several noteworthy findings that support the validity of the extended ASTI model in understanding ChatGPT users. One highlight was the indirect effect of exploitative technology adaptation, which influenced behavioral intentions via exploitative task adaptation and perceived diagnosticity, in contrast to the explorative pathway operating through perceived serendipity.
In conclusion, Dr. Ravi’s doctoral thesis provides a unified understanding of individuals’ interactions with AI-powered technologies such as AIVAs and LLMs, addressing research topics such as task complexity, emotional engagement, and adaptive search behaviors. Furthermore, the findings offer both theoretical contributions to the AI adoption literature and practical implications for the design and development of AI systems. The dissertation ultimately presents a holistic perspective on how users engage with, adapt to, and integrate AI technologies into their everyday lives.
As a part of his research, Dr. Ravi had the opportunity to collaborate with DEXLab at Maastricht University.
Related paper
Ravi, J.V.P., Meyer, J.H., Palau Saumell, R., & Seernani, D. (2025). It’s not only what is said, but how: how user-expressed emotions predict satisfaction with voice assistants in different contexts. Journal of Service Management, 1–32.