Abstract: The widespread adoption of generative artificial intelligence (GAI) tools has opened new avenues for exploring human trust in these technologies, particularly given their distinctive features. Grounded in the person-situation trust framework, this study employed a cross-sectional online survey (N = 1,496) to examine users’ trust in GAI across various situational topics, alongside potential antecedents. Latent class analysis (LCA) revealed four distinct trust profiles: dissenters, value-laden skeptics, optimists, and empathic agnostics. The findings show that task types and ethical concerns about GAI shape these trust profiles differently, and that critical thinking moderates trust formation. Theoretically, this study deepens our understanding of human-AI interaction by adopting a person-centered lens to identify distinct user trust profiles and by conceptualizing trust in GAI as a context-dependent, multifaceted construct. Practically, the findings offer nuanced insights into user heterogeneity, informing more responsive and targeted strategies for designing and deploying GAI-powered applications.