
Pathways from Geneva: Our ‘Field Notes’ on Interoperability

15–17 September 2025, Geneva. From the sponsor booth to hallway conversations, OSFAIR 2025 gave us space to test, stress-check, and present how Plan–Track–Assess pathways work in practice.

All about Open Science. OSFAIR brought together 352 attendees across research support teams, funders, infrastructure and service providers, and communities around five tracks, from Digital Backbone to Research Assessment and Skills & Community. We were there to listen and compare notes with colleagues and experts from Europe, Asia, and the US, engaging at the booth with universities, national research institutes and academies, international infrastructures, and community networks. Our presentation, “OSTrails: Connecting Tools and Communities for a Federated Open Science Ecosystem”, gave us the opportunity to explain our modus operandi and engage in conversation with the audience in a lively Q&A.


Below, we share the insights and actions we’re taking forward regarding how systems connect, who is responsible for what, and the steps different professionals follow, presented in the form of field notes.

Plan–Track–Assess Pathways

Our open sponsor-area booth featured an OSTrails “board game” consisting of:
a. service icons (DMPs, Evaluators, Repositories, FAIR-check tools, SKGs)
b. moveable arrows
c. colour-coded post-its per role (data policy officers, IT staff, librarians & data stewards, ethics reviewers, researchers)


Reflections from participants

• Less retyping. Let tools pass a small set of facts automatically.
• Clear roles. Make it obvious who owns what and who confirms it.
• Start simple. A basic setup that works everywhere, with optional upgrades later.

Useful Resources: D1.1

Interoperability-focused presentation

The presentation covered the Plan–Track–Assess framework as a structured approach to improving research data management and open science practices. It highlighted gaps across Data Management Plans (DMPs), Semantic Knowledge Graphs (SKGs), and FAIR principles, emphasizing the need for better alignment and integration. The session also introduced the OSTrails toolkit and its growing community of pilot projects, showcasing practical implementations that bridge these gaps. Finally, it outlined the training and mentorship roadmap, designed to build capacity, foster collaboration, and support researchers and institutions in adopting sustainable, FAIR-aligned data practices.

 



Useful Resources: D1.4

Key takeaways:
  1. Keep things simple. Use a few core links so people enter information once and reuse it elsewhere.
  2. Offer core examples that work now. Publish a straightforward, end-to-end set of service interactions with examples of per-stakeholder actions.
  3. Build reviews on facts. Agree on a small, shared set of assessments, send them automatically to reviewers, and loop results back.
  4. Make it easy to adopt and adapt. Spell out who does what, keep the rules light, and provide short how-to guides.
  5. Improve by measuring. Track the number of links, how often information is reused, and how widely this is adopted, then iterate.
One step closer to our vision!

These steps move us from architecture to practice, towards a federated, FAIR-aligned, machine-actionable ecosystem where researchers do less duplicate work, services interoperate by default, and assessments rely on verifiable tests and documented processes.

Thanks to everyone who leaned in at OSFAIR!

We’d love to continue our discussions, so feel free to reach us by email.


Rethinking DMP Evaluation: Funders’ Perspectives, Practices, and Input for Action

On 12 June, the OSTrails Horizon Europe project launched the first in its series of maDMP Evaluation Webinars, aiming to improve data management practices through the adoption of machine-actionable Data Management Plans (maDMPs). Attended by more than 80 participants, the first webinar focused on inviting funders to explore new potentials, discuss evaluation challenges, and directly shape the OSTrails maDMP Evaluation Framework.

Rethinking DMP Evaluation 1

Key Takeaways 

  • Funders want practical tools that help evaluate DMPs based on clarity, feasibility, and policy alignment, not just completeness.
  • Machine-actionable DMPs are powerful precisely because they make it easier to link planning with implementation, reuse, and assessment, but they must be designed and evaluated with care.
  • The evaluation framework fills a gap by translating high-level guidance (e.g. Science Europe) into structured, testable dimensions. This can help align DMPs with funder goals and research needs.
  • Community feedback is essential to ensure the framework supports real workflows and diverse policy contexts. Rather than enforcing universal openness, Open Science infrastructures should support pluralistic and ethical openness that is tailored to different contexts.
  • There is strong interest in using the evaluation service once it moves beyond beta.

The central message was that DMPs should no longer be treated as static documents but rather as active and shared components of research workflows.

The session was led by the OSTrails Horizon Europe pilot, which has been working with funders and research communities to redesign how DMPs are created, reviewed, and assessed. It introduced the three core areas of OSTrails’ work with DMPs:

  1. Conceptual: Defining the structure, content, and principles of DMPs, including the evaluation framework and the underlying policy and community requirements. Rethinking DMP Evaluation 2
  2. Technical: Developing the specifications, standards, and interoperability mechanisms that enable DMPs to be machine-actionable and integrated with other research services. Rethinking DMP Evaluation 3
  3. Operational: Implementing and testing DMP concepts and technical solutions through platforms, pilots, and real-world use cases, ensuring they work in practice and can be adopted by different stakeholders. Rethinking DMP Evaluation 4

As a concrete example, the pilot presented a proposed two-layer Horizon Europe DMP model separating declarative content (e.g. intent, policy) from implementation details (e.g. storage, preservation, access), to better support assessment and improve clarity of roles and responsibilities.
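The two-layer idea can be pictured as splitting one DMP record into an intent layer and an implementation layer. The sketch below is purely illustrative: the field names and the `split_dmp` helper are our own assumptions, not the proposed Horizon Europe model.

```python
# Illustrative sketch of a two-layer DMP record. Field names and the
# declarative/implementation split are invented for illustration only.

# Keys we treat as declarative (intent, policy) vs. implementation
# (storage, preservation, access) -- an assumption, not the real model.
DECLARATIVE_KEYS = {"purpose", "data_sharing_policy", "licence_intent"}
IMPLEMENTATION_KEYS = {"storage_location", "preservation_plan", "access_url"}

def split_dmp(flat_dmp: dict) -> dict:
    """Separate a flat DMP dict into declarative and implementation layers."""
    return {
        "declarative": {k: v for k, v in flat_dmp.items() if k in DECLARATIVE_KEYS},
        "implementation": {k: v for k, v in flat_dmp.items() if k in IMPLEMENTATION_KEYS},
    }

example = {
    "purpose": "Share survey data for reuse",
    "data_sharing_policy": "open where possible",
    "storage_location": "institutional repository",
    "preservation_plan": "10-year retention",
}
layers = split_dmp(example)
print(sorted(layers["declarative"]))     # intent and policy statements
print(sorted(layers["implementation"]))  # concrete implementation details
```

Separating the layers this way would let an assessor review intent and implementation independently, which is the clarity of roles the proposal aims for.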

Building on this foundation, participants were introduced to the draft DMP Evaluation Framework, designed to support semi-automated assessments that are flexible enough to adapt to different funder templates and emerging needs. The framework consists of dimensions and metrics that go beyond completeness to address aspects like feasibility and clarity of data practices. This represents a significant shift towards tracking the implementation of data practices over time, rather than merely recording intentions. For OSTrails, this work is fundamental.

The webinar also featured a collaboration with the TIER2 project, which focuses on enhancing the reproducibility aspects available in DMPs. Together, we shared community insights and demonstrated the importance of engaging with stakeholders early on and often.

The session invited funders to contribute feedback through a short survey, reinforcing the participatory ethos that underpins the OSTrails approach. Moreover, it demonstrated that the OSTrails approach (combining machine-actionable formats, shared APIs, and layered planning) is technically feasible and ready to be integrated into funder workflows and DMP platforms. This confirmation is essential as OSTrails prepares for broader adoption across EOSC-related infrastructures used in its 24 pilots.

Participating research funders found the session highly relevant and timely. Many expressed strong interest in the evaluation approach and welcomed the opportunity to shape its development. There was clear demand for the DMP evaluation service, with several attendees noting they were eager to use it once it moves from beta to full production.

As OSTrails continues its work, this webinar series will provide an ongoing opportunity to include more voices in the conversation, including not only funders, but also researchers, institutions and community organisations.

We thank all attendees and invite continued engagement through upcoming sessions and surveys!

Webinar


OSTrails Hackathon: Advancing Assessment and Knowledge Sharing in Research

From June 11th to 12th 2025, the European Synchrotron Radiation Facility (ESRF) in Grenoble hosted the OSTrails Hackathon: Building Assessment‑IF & SKG‑IF Solutions Together, a two-day collaborative event that brought together OSTrails’ tool developers, domain experts, and open science advocates, as well as experts from the GraspOS project. The aim was to co-develop and refine the core components of the OSTrails Interoperability Reference Architecture, focusing on the Assessment Interoperability Framework (Assessment‑IF, previously FAIR‑IF) and the Scientific Knowledge Graph Interoperability Framework (SKG‑IF).

Key takeaways

  • A common REST API specification for FAIR assessment operations was refined
  • Benchmarks were defined in terms of authorship, scope, and execution
  • The distinction between Benchmarks and Algorithms was formalised
  • The basic metadata needed to describe a Benchmark and an Algorithm was discussed
  • A mock DMP Evaluation walk-through showed how even staged scenarios align with Assessment‑IF
  • Developers contributed real-world dataset examples and tested the SKG‑IF API
  • The SKG-IF extension process was refined, and documentation improvements were proposed
  • Collaboration with GraspOS and Athena RC ensured alignment with community needs and RDA SKG‑IF WG

Sharpening the Assessment‑IF

For the Assessment‑IF, the hackathon centred on two of its essential elements: API and Benchmarks. 

API: the meeting described a first version of the Assessment-IF REST API, refining the operations for exchanging metadata and results produced by FAIR assessment tools.

Benchmarks: These represent community expectations for how digital objects should behave and serve as the foundation for assessing conformance and quality. Until now, Benchmarks were acknowledged as necessary, but their practical definition, authorship, and implementation had not been formalised. This event addressed that by:

  • Defining how Benchmarks are authored and tied to user or community expectations
  • Clarifying how Benchmarks are executed and how user feedback is produced
  • Distinguishing Benchmarks from the Scoring Algorithms that implement them
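One way to picture the Benchmark/Algorithm distinction above is as data versus code: a Benchmark records what a community expects, while a Scoring Algorithm is one concrete way of computing conformance against it. The sketch below is our own illustration of that separation, not actual Assessment‑IF code; all names and metrics are invented.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical illustration of the Benchmark vs. Scoring Algorithm
# distinction discussed at the hackathon -- not Assessment-IF classes.

@dataclass
class Benchmark:
    """A community's expectations: who authored it and which metrics it covers."""
    name: str
    author_community: str
    metrics: list = field(default_factory=list)  # metric identifiers

@dataclass
class ScoringAlgorithm:
    """One concrete implementation that scores results against a Benchmark."""
    benchmark: Benchmark
    score: Callable[[dict], float]  # maps per-metric pass/fail to a score

fair_data = Benchmark(
    name="fair-data-core",
    author_community="example community",
    metrics=["has_pid", "has_licence", "has_rich_metadata"],
)

# A simple pass-rate algorithm; many algorithms can share one benchmark,
# which is exactly why the two concepts need to stay distinct.
pass_rate = ScoringAlgorithm(
    benchmark=fair_data,
    score=lambda results: sum(results.values()) / len(fair_data.metrics),
)

print(pass_rate.score({"has_pid": True, "has_licence": True, "has_rich_metadata": False}))
```

Because the benchmark carries authorship and scope while the algorithm carries only the scoring logic, a second algorithm (say, a weighted score) can be swapped in without changing what the community agreed to measure.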

A key example was the mock-up of the DMP Evaluation Service, showcasing how real-world scenarios for evaluating machine-actionable Data Management Plans (maDMPs) fully align with the structure and semantics of the Assessment‑IF and are compliant with the DMP Common Standard (DCS). Specifically, the example demonstrated how the evaluation can be applied across different stages of a DMP’s lifecycle, confirming the framework’s robustness and flexibility.
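For readers unfamiliar with the DCS: a machine-actionable DMP following the RDA DMP Common Standard is a JSON document rooted at a `dmp` object. The fragment below is a rough illustration with invented values and only a handful of the standard's fields, together with a toy check of the kind an evaluation service might run; it is not the actual mock-up shown at the hackathon.

```python
# A minimal maDMP shaped after the RDA DMP Common Standard (DCS).
# Values are invented; only a few of the standard's fields are shown.
madmp = {
    "dmp": {
        "title": "Example project DMP",
        "created": "2025-01-15T09:00:00Z",
        "modified": "2025-06-01T12:00:00Z",
        "dmp_id": {"identifier": "https://doi.org/10.0000/example", "type": "doi"},
        "dataset": [
            {
                "title": "Survey responses",
                "distribution": [
                    {"title": "CSV export", "data_access": "open"}
                ],
            }
        ],
    }
}

def list_open_datasets(doc: dict) -> list:
    """Toy check: titles of datasets with at least one open distribution."""
    return [
        ds["title"]
        for ds in doc["dmp"].get("dataset", [])
        if any(d.get("data_access") == "open" for d in ds.get("distribution", []))
    ]

print(list_open_datasets(madmp))  # -> ['Survey responses']
```

Because the structure is standardised, the same check can run unchanged at any stage of the DMP's lifecycle, which is what makes staged evaluation scenarios like the mock-up possible.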

Advancing the SKG‑IF

The SKG‑IF provides a shared framework for exchanging structured metadata across a wide array of research entities, including publications, datasets, software, organisations, and more. Sessions focused on advancing the SKG-IF through standardisation of a common API and refinement of the model’s extension mechanism. These sessions involved close collaboration with the GraspOS project and colleagues from the Athena Research Center, reinforcing OSTrails' alignment with current implementations and the RDA SKG‑IF Working Group. With the first public release of the SKG‑IF API specification scheduled for late 2025, this hackathon served as a critical milestone in its development.

Key outcomes included:

  • Real-world testing of the SKG‑IF API in pilot implementations.
  • Review and refinement of the SKG‑IF extension process, leading to planned improvements in documentation and onboarding materials.
  • Strengthened collaboration with GraspOS and Athena RC to ensure SKG‑IF remains aligned with evolving community and infrastructure needs

Importance for OSTrails
The hackathon was a key checkpoint for validating and refining core elements of the OSTrails Interoperability Reference Architecture. For Assessment‑IF, it confirmed that the architecture is generic and flexible enough to support multiple assessments across different domains. Clarifying the roles of Benchmarks and Algorithms also improved internal coherence and communication across teams.

On the SKG‑IF side, the event enabled early pilot testing of the API specification and strengthened the approach for community-driven extensions. Collaboration with GraspOS and Athena RC helped ensure the work remains grounded in real-world needs and existing international efforts.

Why it matters for the broader Open Science community 
Across the research landscape, tools for FAIR assessment and metadata exchange often lack common structures, terms, or protocols. This fragmentation limits interoperability and reduces transparency.

The harmonisation work advanced during this hackathon, including shared terminology, standardised APIs, and modular design, lays the foundation for:

  • Greater compatibility between assessment tools and their results
  • Seamless metadata integration across infrastructures
  • Stronger alignment between technical systems and researcher needs

Together, these efforts support a more open, interoperable, and scalable research ecosystem.

Impressions 
The in-person format created space for rapid iteration, immediate feedback, and deep technical exchange. From architecture discussions to hands-on testing, the setting proved highly productive.

Hackathon Grenoble 2

 

“Sometimes a few hours of discussion are more important than a few days of coding.”

- Renaud Duyme, ESRF

Looking ahead 
The hackathons solidified core design decisions for both Assessment‑IF and SKG‑IF. For Assessment-IF, the API now provides a consistent way to discover and run assessments, and the benchmark framework has been formally defined, with pilot data to be added before implementation across use cases. For SKG-IF, the API has been tested in real-world scenarios, and its extension mechanism is progressing toward wider adoption.

Hackathon


OSTrails at Open Repositories 2025: Bridging Research Workflows with Interoperability APIs

At Open Repositories 2025, held from June 15th to 18th in Chicago, Illinois, Maximilian Moser (TU Wien) presented OSTrails and its ongoing work to make FAIR and reproducible science more interoperable across research services. Under the conference theme “Twenty Years of Progress, a Future of Possibilities”, the OSTrails poster highlighted the project's efforts to connect isolated systems into cohesive workflows that reduce researcher workload and support long-term sustainability.

“OSTrails is making it easier for different research services to work together by providing standardised APIs. This reduces manual effort for researchers and helps repository teams build more sustainable, vendor-neutral infrastructures.”

- Maximilian Moser, TU Wien, at OR2025

The Poster at a Glance

TU Wien, the technical lead for OSTrails and one of the project's 24 national and thematic pilot institutions, showcased a poster titled “TU Wien & OSTrails: Connecting services”. It focused on how service-agnostic APIs and interoperability specifications developed within the project are enabling more seamless interactions between tools for Data Management Plans (DMPs), Scientific Knowledge Graphs (SKGs), and FAIR assessment.

The poster featured a working pilot: an integration of TU Wien’s InvenioRDM-based research data repository (TUWRD) with the DAMAP Data Management Planning tool, implemented as part of the OSTrails Austrian pilot. This allows researchers to report dataset reuse in their DMPs directly from the repository interface, replacing a previously manual and fragmented workflow.
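We don't know the exact wire format of the TUWRD–DAMAP integration, but the idea can be sketched as the repository pushing a small "dataset reuse" event to the DMP platform's API. Everything below (endpoint path, payload fields, identifiers) is hypothetical, invented for illustration.

```python
import json
from urllib.request import Request

# Hypothetical sketch of a repository notifying a DMP platform that a
# dataset was reused. Endpoint path and field names are invented and
# are NOT the actual TUWRD/DAMAP integration.

def build_reuse_notification(dmp_id: str, dataset_doi: str, repository: str) -> Request:
    """Build (but do not send) a POST request describing a dataset-reuse event."""
    payload = {
        "event": "dataset_reused",
        "dataset_id": dataset_doi,
        "source_repository": repository,
    }
    return Request(
        url=f"https://dmp.example.org/api/dmps/{dmp_id}/datasets",  # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_reuse_notification("dmp-123", "10.0000/example", "TUWRD")
print(req.get_method(), req.full_url)
```

The point of a standardised, service-agnostic API is that any repository could emit the same event to any DMP platform, which is what replaces the manual reporting step described above.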

A limited proof of concept for integrating a DMP platform with a research data repository was presented at OR2023 and completed at TU Wien in 2024. Similar tests are now being scaled across different services and research settings to achieve more standardised proofs of concept across the project’s pilots.

Why It Matters

The OSTrails Interoperability Frameworks aim to overcome fragmentation by defining modular, vendor-neutral APIs that can be implemented across various platforms. This approach:

  • Simplifies how researchers interact with multiple tools
  • Reduces reliance on specific service providers
  • Supports a more sustainable and collaborative infrastructure ecosystem

Infrastructure providers and developers can reuse these standardised building blocks. This lowers integration costs and improves metadata quality, traceability, and FAIRness.

Audience Engagement

The poster was designed to engage both repository managers and developers. The presentation aligned particularly well with the OR2025 sub-themes of “Sustainability and Preservation” and “Community”, highlighting how open and flexible technical approaches can foster inclusive, future-ready research infrastructure.

Visitors particularly appreciated the connected services (DMP Platform, SKGs, FAIR Assessment Tool) and were interested in future use cases pertaining to their own repositories. Moreover, they enjoyed Maximilian’s tablet-drawn illustrations on the poster!

Visitors engage with the poster as Suvini Lai (TU Wien) offers project-specific insights. Photo by Maximilian Moser.

Looking Ahead

OSTrails offers practical, standards-based solutions to long-standing interoperability challenges, helping to turn open science principles into everyday practice. Rather than creating new platforms, it builds bridges between existing ones, empowering institutions to incrementally improve their workflows.

Once the OSTrails interoperability framework is finalised, the Austrian OSTrails pilot will implement it in its participating institutions.

- Written by Suvini Lai (TU Wien)

More about OSTrails pilots
InvenioRDM | Damap DMP Tool
Integration preview on Zenodo


OSTrails Interoperability Webinar Series: Making Research Tools EOSC-Ready Through Common Standards

Over the past semester, OSTrails launched the first part of its Interoperability Webinar Series spotlighting the development of Interoperability Frameworks (IFs) under the OSTrails Reference Architecture.

Key takeaways 

  • Early access to FAIR-IF, DMP-IF, and SKG-IF specifications to prepare tools and workflows for integration.
  • Practical examples of embedding frameworks into widely used platforms for EOSC alignment.
  • Networking with EOSC, RDA, and related initiatives to align with interoperability standards.
  • Exchange of insights on APIs, benchmarks, and extension mechanisms for real-world application.
  • Direct input into framework development through use cases and feedback.

Nearly 300 participants joined the sessions on the Assessment-IF (previously FAIR‑IF), the Data Management Plan IF (DMP-IF), and the Scientific Knowledge Graphs IF (SKG-IF), representing a diverse mix of data stewards, librarians, repository managers, software developers, researchers, and open science coordinators from universities, research infrastructures, and international organisations. They came to learn how these efforts are enabling seamless collaboration and information exchange across research data management (RDM) tools and communities.

Main message

In the context of the Open Science agenda, interoperability of research practices, tools, and infrastructures is no longer optional. It is essential for research to function at scale across domains and communities. Achieving this, however, requires more than technical fixes: it demands alignment, shared standards, and collective effort.

OSTrails is working with the community to align practices, clarify responsibilities, and define common interfaces between tools and services. The immediate goal is technical compatibility, ultimately supporting a research ecosystem where infrastructures can interact by design, policies are easier to implement, and knowledge flows more openly across systems.

Session Highlights

Assessment-IF (previously FAIR-IF): Mark Wilkinson (UPM), WP3 lead (Assessment tools & services), introduced the Assessment-IF, developed to address the lack of harmonisation across assessment tools and the inconsistent interpretation of FAIR principles. The framework supports interoperability and is designed to be embedded in DMP platforms to provide real-time, standardised feedback.

Components of the FAIR Assessment landscape, as presented during the webinar.

“We want to design and publish metrics and tests for a wider range of digital objects beyond data, including domain-specific assessments.”

- Mark Wilkinson

In the discussion, participants showed strong interest in the upcoming API specifications that will enable tools to interact and harmonise outputs, as well as in approaches for describing algorithms as community judgments over benchmarks. There was also significant engagement around governance models involving diverse stakeholders — including domain experts, coders, and FAIR specialists — in the validation of metrics and tests.

DMP-IF: Marek Suchánek (CTU), WP2 (Plan–Track–Assess Alignment Implementation) lead, presented DMP-IF as a solution to fragmentation and lack of machine-actionability in current data management plans. Building on the RDA DMP Common Standard, the framework adds an application profile and a common API. This enables platforms to integrate with FAIR assessment tools, repositories, and SKGs in a uniform way while adapting the standard to European research needs. In doing so, it supports policy compliance and accommodates diverse research outputs.

Insights from the DMP-IF presentation.

“We are building on top of the DMP Common Standard from RDA by creating an application profile and API specification for DMP platforms, enabling interoperability, extensibility, and alignment with real-world needs such as policy compliance and support for diverse research outputs.”

- Marek Suchánek

During the discussion, participants highlighted how the framework can connect data repositories and DMP platforms in both directions — either fetching DMPs or pushing updates, such as dataset publication events, via the planned API. They emphasised its tool-agnostic design and its potential to reduce duplication through automation, ensure policy compliance, and support diverse outputs, including software management plans. Questions also covered reusing example DMPs, integrating with OSTrails DMP platforms, and ensuring compatibility across systems.

SKG-IF: Andrea Mannocci (CNR), co-chair of the SKG-IF RDA WG, introduced the framework and its role in enabling cross-disciplinary research through semantic and technical interoperability. Building on the RDA model, SKG-IF provides a common data model, API specifications, and an extension mechanism to align and integrate knowledge graphs across systems, addressing the fragmentation caused by isolated, non-aligned graphs.

Overview of the OSTrails contribution to the SKG-IF core model.

“The SKG-IF isn’t magic. There’s no central implementation or governing body that operates a service or infrastructure to enable dialogue between different SKGs. That might sound reductive, but in essence, SKG-IF is a set of guidelines. Anyone who wants their SKG to be interoperable with others and compliant with SKG-IF must follow these guidelines.”

- Andrea Mannocci

In the discussion, participants explored how communities — both established and emerging — can contribute requirements to SKG-IF extensions, either through the RDA Working Group or dedicated extension projects. Questions focused on the level of technical expertise needed to develop extensions and the importance of practical guidance and real-world examples. Interest also centred on keeping the extension process accessible to non-experts, providing templates, and maintaining strong links between community-driven extensions and the core model.

Why was it important for OSTrails? 

The webinar series provided OSTrails with a high-impact opportunity to present its core technical developments and engage with the research community and EOSC landscape. The events helped validate the project’s direction, and practical feedback surfaced for several topics, ranging from inconsistencies in FAIR metrics to technical needs for DMP platform integration and SKG alignment. These insights are now shaping the ongoing refinement of each of the three frameworks to better support real-world use cases.

Why was it important for the Research Community in general?

The series enabled the research community to engage directly with the emerging frameworks and provide input early in their development. It also fostered alignment with ongoing international and EOSC-related initiatives, such as the RDA DMP Common Standard and SKG-IF Working Groups and the FAIR Metrics and Digital Objects Task Force, as well as complementary projects like FAIRCORE4EOSC and FAIR-IMPACT. These connections ensure that OSTrails developments build on widely adopted standards, support machine-actionable outputs, enable integration with established platforms, and address shared priorities such as policy compliance, persistent identifiers, and cross-platform interoperability.

Conclusions

The OSTrails webinars highlighted a shared need across the research landscape for more aligned, automated, and interoperable infrastructures. Each framework is helping to bridge gaps, whether in FAIR evaluation, data management automation, or SKG integration, and their development is being guided by community input. As they evolve, the IFs are poised to support a more open, trustworthy, and connected research ecosystem.

Further Resources

Explore the full webinar series and access resources (recordings and slides) here.

Discover the documentation for the OSTrails Interoperability Architecture and its three Interoperability Frameworks here.

Webinar
