
OSTrails Hackathon: Advancing Assessment and Knowledge Sharing in Research

08 August 2025

From June 11th to 12th 2025, the European Synchrotron Radiation Facility (ESRF) in Grenoble hosted the OSTrails Hackathon: Building Assessment‑IF & SKG‑IF Solutions Together, a two-day collaborative event that brought together OSTrails’ tool developers, domain experts, and open science advocates, as well as experts from the GraspOS project. The aim was to co-develop and refine the core components of the OSTrails Interoperability Reference Architecture, focusing on the Assessment Interoperability Framework (Assessment‑IF, previously FAIR‑IF) and the Scientific Knowledge Graph Interoperability Framework (SKG‑IF).

Key takeaways

  • A common REST API specification for FAIR assessment operations was refined
  • Benchmarks were defined in terms of authorship, scope, and execution
  • The distinction between Benchmarks and Algorithms was formalised
  • The basic metadata needed to describe a Benchmark and an Algorithm was discussed
  • A mock DMP Evaluation walk-through showed how even staged scenarios align with Assessment‑IF
  • Developers contributed real-world dataset examples and tested the SKG‑IF API
  • The SKG-IF extension process was refined, and documentation improvements were proposed
  • Collaboration with GraspOS and Athena RC ensured alignment with community needs and RDA SKG‑IF WG

Sharpening the Assessment‑IF

For the Assessment‑IF, the hackathon centred on two of its essential elements: the API and Benchmarks.

API: the meeting described a first version of the Assessment-IF REST API, refining the operations for exchanging metadata and results produced by FAIR assessment tools.
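To make the idea of a common assessment API concrete, here is a minimal sketch of the kind of request/result shapes such an API might exchange. All names below (fields, identifiers, the averaging rule) are illustrative assumptions, not the published Assessment-IF specification.

```python
from dataclasses import dataclass, field

# Hypothetical shapes for an assessment exchange; field names are
# illustrative only, not the Assessment-IF spec.

@dataclass
class AssessmentRequest:
    subject_id: str    # PID of the digital object under assessment
    benchmark_id: str  # which Benchmark to evaluate against

@dataclass
class TestResult:
    test_id: str
    passed: bool
    log: str = ""

@dataclass
class AssessmentResult:
    request: AssessmentRequest
    results: list[TestResult] = field(default_factory=list)

    @property
    def score(self) -> float:
        """Fraction of tests passed (a simple illustrative scoring rule)."""
        if not self.results:
            return 0.0
        return sum(r.passed for r in self.results) / len(self.results)

req = AssessmentRequest("doi:10.1234/example", "benchmark:fair-minimal")
res = AssessmentResult(req, [
    TestResult("pid-resolves", True),
    TestResult("license-present", True),
    TestResult("rich-metadata", False, "no description field"),
])
print(f"{res.score:.2f}")  # 0.67
```

The point of a shared specification is that any assessment tool could emit results in one such agreed shape, making tool outputs comparable.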

Benchmarks: These represent community expectations for how digital objects should behave and serve as the foundation for assessing conformance and quality. Until now, Benchmarks were acknowledged as necessary, but their practical definition, authorship, and implementation had not been formalised. This event addressed that by:

  • Defining how Benchmarks are authored and tied to user or community expectations
  • Clarifying how Benchmarks are executed and how user feedback is produced
  • Distinguishing Benchmarks from the Scoring Algorithms that implement them
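The Benchmark/Algorithm separation described above can be sketched as: a Benchmark is a declarative statement of community expectations, while a Scoring Algorithm is the executable logic that turns raw metric outcomes into a verdict. The names and threshold rule here are hypothetical illustrations, not the formalised OSTrails definitions.

```python
# Benchmark: declarative community expectations (who authored it,
# which metrics matter, what counts as conformance). Illustrative only.
benchmark = {
    "id": "benchmark:community-x-datasets",
    "authored_by": "Community X",
    "metrics": ["pid-resolves", "license-present", "rich-metadata"],
    "pass_threshold": 2 / 3,  # an expectation, not an implementation
}

def threshold_algorithm(benchmark: dict, outcomes: dict[str, bool]) -> bool:
    """One possible Scoring Algorithm: pass if enough metrics pass.

    A different community could keep the same Benchmark but swap in a
    stricter algorithm (e.g. all metrics must pass) without changing
    the declared expectations.
    """
    relevant = [outcomes.get(m, False) for m in benchmark["metrics"]]
    return sum(relevant) / len(relevant) >= benchmark["pass_threshold"]

outcomes = {"pid-resolves": True, "license-present": True, "rich-metadata": False}
print(threshold_algorithm(benchmark, outcomes))  # True
```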

A key example was the mock-up of the DMP Evaluation Service, showcasing how real-world scenarios for evaluating machine-actionable Data Management Plans (maDMPs) fully align with the structure and semantics of the Assessment‑IF and are compliant with the DMP Common Standard (DCS). Specifically, the example demonstrated how the evaluation can be applied across different stages of a DMP’s lifecycle, confirming the framework’s robustness and flexibility.
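The stage-aware evaluation idea can be illustrated with a small sketch: which checks apply depends on where the DMP sits in its lifecycle. The `dmp`/`dataset`/`distribution`/`license` nesting loosely follows the DMP Common Standard layout, but the stage names and rules here are assumptions for illustration, not the mock service's actual logic.

```python
# Stage-aware checks on a machine-actionable DMP (maDMP) sketch.
# Field nesting loosely follows the DMP Common Standard (DCS);
# stage names and rules are illustrative only.

def check_title(dmp: dict) -> bool:
    return bool(dmp.get("title"))

def check_datasets_listed(dmp: dict) -> bool:
    return bool(dmp.get("dataset"))

def check_licenses(dmp: dict) -> bool:
    # Every dataset must have distributions, each carrying a license.
    return all(
        bool(d.get("distribution"))
        and all(dist.get("license") for dist in d["distribution"])
        for d in dmp.get("dataset", [])
    )

# Later lifecycle stages accumulate stricter expectations.
STAGE_CHECKS = {
    "draft": [check_title],
    "active": [check_title, check_datasets_listed],
    "final": [check_title, check_datasets_listed, check_licenses],
}

def evaluate(dmp: dict, stage: str) -> dict[str, bool]:
    return {check.__name__: check(dmp) for check in STAGE_CHECKS[stage]}

dmp = {"title": "Example project DMP", "dataset": [{"title": "survey data"}]}
print(evaluate(dmp, "active"))
```

Running the same DMP through the `final` stage would additionally flag the missing licenses, which is the kind of lifecycle-dependent feedback the mock-up demonstrated.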

Advancing the SKG‑IF

The SKG‑IF provides a shared framework for exchanging structured metadata across a wide array of research entities, including publications, datasets, software, organisations, and more. Sessions focused on advancing the SKG-IF through standardisation of a common API and refinement of the model’s extension mechanism. These sessions involved close collaboration with the GraspOS project and colleagues from the Athena Research Center, reinforcing OSTrails' alignment with current implementations and the RDA SKG‑IF Working Group. With the first public release of the SKG‑IF API specification scheduled for late 2025, this hackathon served as a critical milestone in its development.
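As a rough illustration of the kind of exchange SKG-IF standardises, the sketch below builds a minimal metadata record for a research product and round-trips it through JSON, as a consuming knowledge graph would. The field names are a plausible minimal shape assumed for illustration, not the official SKG-IF serialisation.

```python
import json

# A minimal, illustrative metadata record for a research product.
# Field names are assumptions, not the official SKG-IF serialisation.
record = {
    "local_identifier": "product-001",
    "entity_type": "research product",
    "product_type": "research data",
    "titles": {"en": "Example diffraction dataset"},
    "contributions": [
        {"role": "author", "person": "person-042"},
    ],
}

# Serialise for exchange between graphs, then parse it back: the
# round trip a consuming SKG would perform on a received payload.
payload = json.dumps(record)
received = json.loads(payload)
print(received["entity_type"])  # research product
```

A common interchange format means each graph maps its internal model to the shared shape once, instead of maintaining pairwise translations with every other infrastructure.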

Key outcomes included:

  • Real-world testing of the SKG‑IF API in pilot implementations.
  • Review and refinement of the SKG‑IF extension process, leading to planned improvements in documentation and onboarding materials.
  • Strengthened collaboration with GraspOS and Athena RC to ensure SKG‑IF remains aligned with evolving community and infrastructure needs.

Importance for OSTrails

The hackathon was a key checkpoint for validating and refining core elements of the OSTrails Interoperability Reference Architecture. For Assessment‑IF, it confirmed that the architecture is generic and flexible enough to support multiple assessments across different domains. Clarifying the roles of Benchmarks and Algorithms also improved internal coherence and communication across teams.

On the SKG‑IF side, the event enabled early pilot testing of the API specification and strengthened the approach for community-driven extensions. Collaboration with GraspOS and Athena RC helped ensure the work remains grounded in real-world needs and existing international efforts.

Why it matters for the broader Open Science community

Across the research landscape, tools for FAIR assessment and metadata exchange often lack common structures, terms, or protocols. This fragmentation limits interoperability and reduces transparency.

The harmonisation work advanced during this hackathon, including shared terminology, standardised APIs, and modular design, lays the foundation for:

  • Greater compatibility between assessment tools and their results
  • Seamless metadata integration across infrastructures
  • Stronger alignment between technical systems and researcher needs

Together, these efforts support a more open, interoperable, and scalable research ecosystem.

Impressions

The in-person format created space for rapid iteration, immediate feedback, and deep technical exchange. From architecture discussions to hands-on testing, the setting proved highly productive.

[Image: OSTrails Hackathon, Grenoble]

“Sometimes a few hours of discussion are more important than a few days of coding.”

-Renaud Duyme, ESRF

Looking ahead

The hackathon solidified core design decisions for both Assessment‑IF and SKG‑IF. For Assessment-IF, the API now provides a consistent way to discover and run assessments, and the benchmark framework has been formally defined, with pilot data to be added before implementation across use cases. For SKG-IF, the API has been tested in real-world scenarios, and its extension mechanism is progressing toward wider adoption.

Tassos Stavropoulos