D6.4 Stakeholders’ Feedback and Evaluation of the AI4Gov Use Cases V1 was developed in the context of WP6 “Use Case Implementation, Validation, and Evaluation” and, more specifically, is connected to T6.5 “Stakeholders’ Feedback and Evaluation”. The report presents the pilot methodology, the evaluation methodology, and the results of the first validation phase of the AI4Gov piloting activities. It is the first of two versions of the report on feedback and evaluation, covering the piloting activities up to M24. The results of this first validation phase will feed into WP3 and WP4, helping the technical partners improve the AI4Gov tools and release their final versions by M27, when all technical tasks conclude.
The results from the first phase indicate that the AI4Gov tools are progressing well toward achieving their objectives. Stakeholders recognized the tools’ potential to improve operational efficiency, enhance decision-making processes, and introduce innovative solutions for policy optimization. The user experience evaluations demonstrated a generally positive perception of both the pragmatic qualities (e.g., usability, clarity) and hedonic qualities (e.g., interest, inventiveness) of the tools. While participants appreciated the tools’ modern and supportive features, they also highlighted areas for improvement, particularly efficiency and responsiveness.
Key findings include:
- Positive Stakeholder Perceptions: Stakeholders across all use cases found the tools valuable for optimizing processes, providing actionable insights, and addressing domain-specific challenges such as water management, waste reduction, and policy visualization.
- Trust and Security Concerns: Trust in the AI tools remains conditional, shaped by data reliability, transparency, and perceived fairness. Concerns about bias, data security, and the explainability of results highlight the need for continuous improvement in these areas.
- Engagement and Participation: Stakeholder engagement was strong, particularly in workshops where participants actively tested the tools and provided feedback. However, response rates varied across use cases, indicating the need for stronger engagement strategies in remote or asynchronous evaluations.
- Challenges Identified: Technical limitations such as incomplete data integration, variability in performance, and usability barriers emerged as areas requiring targeted refinements before the second validation phase.