FAIR & AI Symposium at TU Graz, 27 November 2025

FAIR & AI Symposium Highlights: A Community Shaping the Future of Trustworthy Research Data and AI

The FAIR & AI Symposium, organised under Cluster Forschungsdaten, brought together a vibrant community of researchers, infrastructure specialists, data stewards, and policy experts to explore one of today’s most pressing intersections: the evolving relationship between FAIR research data and artificial intelligence (AI).

Held in the magnificent Aula of TU Graz's historic main building, the rich program sparked lively discussions on how data management and AI development can advance together in a responsible and sustainable way.

Key themes and insights: FAIR and AI complement each other, but privacy, transparency, and bias remain key challenges

Throughout the symposium, participants reflected on the pivotal question:

Are the FAIR Guiding Principles still sufficient for data stewardship in an era where AI increasingly guides how we create, manage, and reuse data?

The sessions made clear that while FAIR remains a strong foundation, AI brings both powerful opportunities and new responsibilities. Automated metadata generation, semantic enrichment, and improved data discoverability were highlighted as significant enablers for FAIRification. At the same time, speakers emphasized that challenges such as transparency, bias detection, accountability, and ethical decision-making cannot be delegated to machines alone.

A program that sparked dialogue and collaboration

Participants engaged in a dynamic mix of keynote addresses, expert presentations, lightning talks, and hands-on discussions. Breakout groups explored concrete use cases at the intersection of FAIR data and AI, while debates created space for critical reflection on Austria’s and Europe’s evolving research data and AI infrastructures.

Ilire Hasani-Mavriqi, head of the RDM Team at TU Graz and organizer of the event, welcomed the attendees, highlighted the event’s timeliness, and introduced the key questions to be addressed: Are the FAIR principles sufficient to ensure trustworthiness in AI? How can AI support FAIR and reproducible research? Which aspects require human supervision?

In her institutional welcome, TU Graz's Vice-Rector for Research Andrea Höglinger emphasized that FAIR and AI mark the future of the research ecosystem, and pointed to the emerging close cooperation at the international level under the auspices of the EOSC. Sabine Neff-Kolassa (TU Wien) stressed the role of Cluster Forschungsdaten in promoting collaboration and networking among Austrian universities. The event was moderated by Suvini Lai and Livia Beck (both from TU Wien).

Keynotes: Research data management is key but data quality is (still) a challenge

In the first keynote, Jana Lasser (University of Graz, IDea_Lab) shared her experience of working with sensitive data ("tales from the trenches", as she put it), presenting three case studies on the privacy implications of FAIR data management. She stressed that the current extremely challenging and rapidly evolving landscape requires skills, such as techniques for anonymising unstructured data, that go beyond the capabilities of individual researchers, which is why data professionals such as Data Stewards are highly sought after. In this landscape, professional, efficient, and secure data management practices constitute a real competitive advantage for universities and other research performing organisations.

Daniel Garijo (Polytechnic University of Madrid) delivered the second keynote, focused on quality in heterogeneous digital objects. He stressed that FAIR should be understood as a means towards "higher ends", not an end in itself: FAIR should be employed to improve scientific credit, acknowledge datasets as key research outputs, and improve reproducibility. He concluded that while FAIR (metadata, interoperability, search, provenance) is key for AI, data quality is (still) an open issue.

The keynotes were followed by three lightning talks by Markus Stöhr (AI Factory Austria, Austrian Scientific Computing) on high-performance computing (HPC), Jeannette Gorzala (Act.AI.now) on AI governance and literacy, and Emily Kate (University of Vienna) on data stewardship and RDM. Markus Stöhr provided an overview of the possibilities offered by national and European HPC infrastructures and explained how researchers can effectively access and use these resources. Jeannette Gorzala highlighted key legal considerations for AI and data-driven research, outlining practical ‘dos and don’ts’ in light of current regulations and governance frameworks. Finally, Emily Kate discussed the evolving role of data stewards in supporting RDM and trustworthy AI, illustrating this through the University of Vienna’s ongoing journey towards more robust data stewardship practices.

The symposium concluded with two parallel breakout sessions tackling two complementary questions: How can AI support the FAIRification of research data? And how can FAIR data practices support AI applications? Participants in both sessions engaged in lively discussion, concluding that FAIRification applies at different levels of data aggregation, which calls for standardized tools and official certification, and that AI will most likely support FAIRification by automating repetitive tasks.

A community moving forward

One of the most resounding takeaways from the symposium was the shared recognition that trustworthy AI and FAIR data must evolve together. Ensuring that AI-driven research workflows remain transparent, reusable, and ethically grounded will require ongoing interdisciplinary cooperation — across research domains, institutions, and infrastructures.

We thank all speakers, contributors, and participants for their valuable input and strong engagement. The conversations sparked at this symposium mark an important step toward shaping a responsible, FAIR, and AI-ready research landscape.