An approach to provide serverless scientific pipelines within the context of SKA

by Carlos Ríos-Monje, et al.

Function-as-a-Service (FaaS) is a serverless computing model in which developers write and deploy code as individual functions triggered by specific events or requests. FaaS platforms automatically manage the underlying infrastructure, scaling it up or down as demand requires, which makes them highly scalable and cost-effective while offering a high level of abstraction. Prototypes under development within the SKA Regional Centre Network (SRCNet) are exploring models for data distribution, software delivery, and distributed computing, with the goal of moving computation to where the data resides. Since the SKA will be the largest data producer on the planet, its massive data volume must be distributed to the SRCNet nodes, which will serve as hubs for computing and analysis on the data closest to them. In this context, this work validates the feasibility of designing and deploying functions and applications commonly used in radio-interferometry workflows on a FaaS platform, demonstrating the value of this computing model as an alternative for data processing in the distributed nodes of the SRCNet. We analyzed several FaaS platforms and successfully deployed one of them, importing functions via two different methods: microfunctions from the CASA framework, written in Python, and highly specific native applications such as wsclean. On top of this, we designed a simple catalogue that can easily be scaled to provide all the key features of FaaS in highly distributed environments using orchestrators, and that can be integrated with workflows or APIs. This paper contributes to the ongoing discussion of the potential of FaaS models for scientific data processing, particularly in the context of large-scale, distributed projects such as the SKA.
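The two deployment methods mentioned in the abstract can be illustrated with a minimal, hypothetical sketch of FaaS handlers: one wrapping a pure-Python microfunction (CASA-style task) and one wrapping a packaged native binary (wsclean-style). The handler signature, the `statwt` task name, and the wsclean flags below are illustrative assumptions, not the authors' actual code or platform API.

```python
import json

def python_microfunction(event):
    """Method 1 (assumed sketch): a pure-Python microfunction,
    analogous to invoking a CASA task inside the function runtime."""
    params = json.loads(event["body"])
    # A real deployment would call the CASA task here (e.g. statwt);
    # this stub only echoes the received parameters back to the caller.
    return {"statusCode": 200,
            "body": json.dumps({"task": "statwt", "params": params})}

def native_application(event):
    """Method 2 (assumed sketch): delegating to a native binary such as
    wsclean packaged inside the function's container image."""
    params = json.loads(event["body"])
    cmd = ["wsclean",
           "-size", str(params["size"]), str(params["size"]),
           params["ms"]]
    # A real handler would run this via subprocess inside the container;
    # here we only build and return the command line for inspection.
    return {"statusCode": 200, "body": json.dumps({"cmd": cmd})}
```

Either handler could then be registered in a function catalogue and invoked through an HTTP trigger or an orchestrated workflow step, which is the integration pattern the abstract alludes to.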


