AI WORKFLOWS
Scalable AI Workflow Orchestration Platform for Cloud, Edge, and HPC
The project enables data scientists and researchers to seamlessly design, prototype, and execute complex AI workflows across heterogeneous environments, ranging from public clouds to HPC centers to edge devices. By making Jupyter Notebooks the core user interface, our platform lets users develop code interactively and then deploy it, unchanged, to scalable production systems.

What the Platform Does
This orchestration system addresses a key challenge in modern AI: bridging the gap between quick prototyping and full-scale, multi-environment deployment. The platform allows users to:
- Develop and test AI code within Jupyter Notebooks
- Assign specific notebook cells to different compute targets (cloud, HPC, edge)
- Run workflows seamlessly across all of these infrastructures without rewriting code
- Track results, logs, and resource usage in a unified way
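One way to picture the per-step target assignment is a small decorator that records where each step should run. This is an illustrative sketch only: the decorator name `target` and the `TARGETS` registry are assumptions, not the platform's actual API.

```python
# Hypothetical sketch: tagging workflow steps with compute targets.
# `target` and TARGETS are illustrative names, not the platform's API.

TARGETS = {}  # step name -> compute target ("cloud", "hpc", "edge")

def target(backend):
    """Record which backend a step should run on."""
    def wrap(fn):
        TARGETS[fn.__name__] = backend
        return fn
    return wrap

@target("cloud")
def preprocess(data):
    # e.g. a data-cleaning cell destined for cloud workers
    return [x * 2 for x in data]

@target("hpc")
def train(features):
    # e.g. a compute-heavy cell destined for an HPC cluster
    return sum(features)

result = train(preprocess([1, 2, 3]))
print(TARGETS)  # {'preprocess': 'cloud', 'train': 'hpc'}
print(result)   # 12
```

In a real deployment the registry would drive job submission to the chosen backend; here it only demonstrates that the user code itself stays unchanged.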
Core Architecture
The solution comprises three tightly integrated modules:
- Environment Selector: A Jupyter Notebook extension that lets users pick the compute environment for each cell or workflow step.
- Compiler Decorators: Python decorators that automatically parallelize and containerize code, optimizing it for the chosen backend.
- Workflow Orchestrator: Tools to build and manage directed acyclic graphs (DAGs) of notebook cells, handling data dependencies and job scheduling.
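The orchestrator's core idea, executing notebook cells as a DAG in dependency order, can be sketched with the standard library. This is a minimal illustration under assumed names (`load`, `clean`, `stats`), not the platform's actual scheduler, which would additionally dispatch each step to its assigned backend.

```python
# Illustrative DAG of notebook-cell-like steps, run in dependency order.
from graphlib import TopologicalSorter

def load():    return [1, 2, 3]
def clean(xs): return [x for x in xs if x > 1]
def stats(xs): return {"n": len(xs), "sum": sum(xs)}

# step -> set of steps it depends on (data flows along these edges)
dag = {"load": set(), "clean": {"load"}, "stats": {"clean"}}
steps = {"load": load, "clean": clean, "stats": stats}

results = {}
for name in TopologicalSorter(dag).static_order():
    deps = [results[d] for d in sorted(dag[name])]  # gather upstream outputs
    results[name] = steps[name](*deps)

print(results["stats"])  # {'n': 2, 'sum': 5}
```

A production orchestrator would also handle scheduling, retries, and moving intermediate data between environments; the topological traversal shown here is the common backbone.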
User Experience & Collaboration
To ensure an accessible experience, the platform provides:
- A template-driven UI to create and share compute environment configurations
- User, admin, and group roles for controlled collaboration and reproducibility
- Integrated monitoring of execution status and resource usage
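A shareable compute environment template might look like the following. The schema and field names here are purely hypothetical, shown only to convey the kind of configuration such templates could capture.

```yaml
# Hypothetical environment template (illustrative field names,
# not the platform's actual schema)
name: hpc-gpu-training
target: hpc
image: ghcr.io/example/pytorch:2.3-cuda12   # container image for the step
resources:
  nodes: 1
  gpus_per_node: 4
  walltime: "02:00:00"
shared_with:
  - group: ml-team
    role: user
```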
Deployment Context and Impact
The platform is developed within the FLUENDO initiative, part of Italy's National Recovery and Resilience Plan and supported by the National HPC, Big Data, and Quantum Computing Center (ICSC).
Its adoption enables:
- Scalable, reproducible AI and data science across diverse infrastructures
- Reduction of manual workflow engineering time
- Bridging the gap between experimentation and production
- Future expansion towards new languages and embedded devices