Understanding the Role of Training for Trust in AI-Based Cybersecurity
InSITE 2025, pp. 31
Aim/Purpose
This paper investigates the impact of training on trust in AI-based cybersecurity solutions, addressing challenges related to skills development and trust dynamics.
Background
Implementing AI in cybersecurity has proven effective, enhancing threat identification, management, and prevention capabilities. Proper training and education facilitate comprehension of AI solutions and concepts, helping cybersecurity professionals make effective use of the technology. Experts' willingness to use AI-based cybersecurity systems depends on their trust in those systems.
Methodology
A structured survey was conducted with 100 cybersecurity experts. Data were analyzed using multiple regression and structural equation modeling to explore the relationships between training, skills, perceived effectiveness, and trust.
Contribution
This study provides insights into how training influences trust through skill-building and perceived effectiveness. These insights can inform the design of more effective training programs and help foster trust in AI-driven cybersecurity solutions.
Findings
Training significantly enhances AI-related skills, but its effect on trust is indirect: improved skills raise perceived effectiveness, which in turn builds trust. Paradoxically, advanced skills may reduce trust, as highly skilled practitioners become more aware of system limitations and develop skepticism toward AI systems.
Recommendations for Practitioners
Training programs should integrate hands-on experiences and explainable AI techniques to balance skill-building with trust-enhancing strategies.
Recommendations for Researchers
Further investigation is needed into the trust-skills paradox and the cultural or contextual factors influencing trust in AI systems.
Impact on Society
Enhancing trust in AI-based cybersecurity systems promotes broader adoption, contributing to improved cybersecurity resilience.
Future Research
Future studies should focus on objective performance assessments, diverse user groups, and cultural factors affecting trust dynamics.
Keywords
AI-based cybersecurity, trust dynamics, training, skill development, perceived effectiveness