This is a website for an H2020 project which concluded in 2019 and established the core elements of EOSC. The project's results now live on through www.eosc-portal.eu and www.egi.eu


Workshop on Integrated Modelling of Protein-Protein Interactions

Training modules used during the EBI Hinxton - Joint Instruct-ERIC/CAPRI Workshop on Integrated Modelling of Protein-Protein Interactions

EGI Jupyter Notebooks tutorial

Materials for the tutorial that was given in Taipei on Apr 2, 2019. The tutorial consisted of three 90-minute sessions.

The webpage and abstract of the tutorial are available at https://indico4.twgrid.org/indico/event/8/session/9/?slotId=0#20190402

ECAS Training Repositories

Repository for training/demo materials

EOSC-hub Data Platforms for data processing and solutions for publishing and archiving scientific data - Part I

The main objective of this session is to show how EOSC services can be used for managing active research data (i.e. data transfer, storage, and sharing) and for preserving final research data (i.e. data archiving and publishing). During this training, we will give a brief overview of the EUDAT Services in the data life cycle and demonstrate how these services operate and integrate with each other to meet the data management requirements of research communities and comply with the FAIR principles, which require the data to be properly documented, annotated, archived, published and accessible to the wider community.
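
One of the EUDAT services in this area is B2SHARE, which is used for publishing and sharing research data and exposes a REST API. Below is a minimal sketch of querying it, assuming the public instance at b2share.eudat.eu; the endpoint path and response fields should be verified against the current API reference before use.

# Sketch: search published records on the public B2SHARE instance.
# The endpoint and response structure below are assumptions to check
# against the current B2SHARE REST API documentation.
import requests

B2SHARE = "https://b2share.eudat.eu"

resp = requests.get(f"{B2SHARE}/api/records/", params={"q": "climate", "size": 5})
resp.raise_for_status()

for hit in resp.json().get("hits", {}).get("hits", []):
    titles = hit.get("metadata", {}).get("titles") or [{}]
    print(titles[0].get("title", "<no title>"), "->", hit.get("links", {}).get("self"))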

This training track is relevant for researchers, IT support people, and service providers who operate services for Open Science.

Training on the INDIGO/DEEP/XDC Services

The DEEP-Hybrid-DataCloud project aims to promote the integration of specialized (and expensive) hardware under a hybrid cloud platform, so that it can be used on demand by researchers from different communities.

The XDC project addresses high-level topics ranging from the federation of storage resources with standard protocols, through policy-driven data management based on Quality of Service, data lifecycle management, metadata handling and manipulation, and data pre-processing and encryption during ingestion, to smart caching solutions among remote locations.

This training session will provide a practical overview of the solutions implemented at the IaaS, PaaS and SaaS levels within the INDIGO-DataCloud, eXtreme DataCloud (XDC) and DEEP-Hybrid-DataCloud projects.
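
As an illustration of how the PaaS layer is typically driven, the sketch below submits a TOSCA template to a PaaS Orchestrator instance over HTTP. The orchestrator URL, the /deployments endpoint, the payload fields, the access token and the template file are all assumptions made for illustration; replace them with the values documented for the deployment you actually use.

# Sketch: submit a TOSCA template to a PaaS Orchestrator REST endpoint.
# All names below (URL, endpoint, payload fields, token) are illustrative
# assumptions; consult the orchestrator's API documentation.
import requests

ORCHESTRATOR = "https://orchestrator.example.org/orchestrator"  # hypothetical URL
TOKEN = "..."  # OIDC access token obtained from your identity provider

with open("my_cluster.tosca.yaml") as f:  # hypothetical TOSCA template file
    template = f.read()

resp = requests.post(
    f"{ORCHESTRATOR}/deployments",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"template": template, "parameters": {}},
)
resp.raise_for_status()
print("Deployment created:", resp.json())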

EOSC-hub Data Platforms for data processing and solutions for publishing and archiving scientific data - Part II

The main objective of this session is to demonstrate how end users can perform data analysis on large volumes of data and produce reusable results following the FAIR principles. During this training track, the latest features of the EGI DataHub, including its interoperability with the EGI Jupyter Notebooks and the EUDAT B2Handle and B2Find services, will also be introduced.
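
As a small illustration of the discovery side, the sketch below queries the B2FIND metadata catalogue (a CKAN-based service) and resolves a persistent identifier through the public Handle proxy; the endpoint paths and response fields are assumptions to verify against the current service documentation, and the handle shown is a placeholder.

# Sketch: discover datasets in B2FIND and resolve a PID via the Handle proxy.
# Endpoint paths, response fields and the handle below are placeholders.
import requests

# 1. Full-text search in the B2FIND catalogue (CKAN-style API assumed).
search = requests.get(
    "https://b2find.eudat.eu/api/3/action/package_search",
    params={"q": "ocean temperature", "rows": 3},
)
search.raise_for_status()
for dataset in search.json()["result"]["results"]:
    print(dataset.get("title"))

# 2. Resolve a Handle PID (placeholder value) via the public Handle proxy.
handle = "12345/example-suffix"  # hypothetical PID
pid = requests.get(f"https://hdl.handle.net/api/handles/{handle}")
if pid.ok:
    for value in pid.json().get("values", []):
        if value.get("type") == "URL":
            print("Resolves to:", value["data"]["value"])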

This training track is relevant for researchers, IT support people, and service providers who operate services for Open Science.

Services to support FAIR data

In the series of workshops on FAIR services for research data that OpenAIRE is organising jointly with FAIRsFAIR, EOSC-hub and RDA-Europe, the second workshop took place as part of the larger event ‘Linking Open Science in Austria’. The series explores how data infrastructures can work together to meet the challenges of creating, managing, opening and archiving FAIR data.

The aim of the workshops is to:

  • Survey the landscape of data infrastructures that are seeking to integrate FAIR into their services
  • Set out initial recommendations and identify the current challenges and priorities

The interactive workshop "Services to Support FAIR Data" had a similar structure to the first workshop in the series of three, but was aimed at researchers and research support staff.

The full programme is available at https://linkingopenscience.univie.ac.at/agenda/

JupyterHub Deployment - hands-on training

Jupyter provides a powerful environment for expressing research ideas as notebooks, where code, text and visualizations are easily combined on an interactive web front-end. JupyterHub makes it possible to deploy a multi-user service where users can store and run their own notebooks without having to install anything on their computers. This is the technology behind the EGI Notebooks service and other similar Jupyter-based services for research.

In this training we will demonstrate how to deploy a JupyterHub instance for your users on top of Kubernetes and explore some of the possible customisations that can improve the service for your users, such as integration with authentication services or external storage systems. After this training, attendees will be able to deploy their own JupyterHub instance at their facilities.
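
As a rough sketch of the kind of deployment covered, and assuming a working Kubernetes cluster, a configured kubectl context and Helm 3 on the operator's machine, the JupyterHub Helm chart (zero-to-jupyterhub) can be installed along the following lines; the release name, namespace and the example customisation in the values file are placeholders.

# Sketch: install the JupyterHub Helm chart on an existing Kubernetes cluster.
# Assumes Helm 3 and a configured kubectl context; names are placeholders.
import pathlib
import subprocess

values = """\
# Minimal example customisation; real deployments usually also configure
# authentication and persistent storage (see the chart documentation).
singleuser:
  defaultUrl: "/lab"
"""
pathlib.Path("config.yaml").write_text(values)

subprocess.run(["helm", "repo", "add", "jupyterhub",
                "https://jupyterhub.github.io/helm-chart/"], check=True)
subprocess.run(["helm", "repo", "update"], check=True)
subprocess.run(["helm", "upgrade", "--install", "jhub", "jupyterhub/jupyterhub",
                "--namespace", "jhub", "--create-namespace",
                "--values", "config.yaml"], check=True)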

Target audience: Resource Center/e-Infrastructure operators willing to provide a Jupyter environment for their users.

Pre-requisites: basic knowledge of command-line interface on Linux.

Digital Forensics for SSC Solvers

In the hands-on session, participants will be provided with a VM infected with the 'malware' used in SSC-19.03. They will then be guided through the methods needed to solve the challenges built into the simulated attack.

The Elastic Cloud Computing Cluster (EC3)

Elastic Cloud Computing Cluster (EC3) is a tool to create elastic virtual clusters on top of Infrastructure as a Service (IaaS) providers, either public (such as Amazon Web Services, Google Cloud or Microsoft Azure) or on-premises (such as OpenNebula and OpenStack). We offer recipes to deploy TORQUE (optionally with MAUI), SLURM, SGE, HTCondor, Mesos, Nomad and Kubernetes clusters that can be self-managed with CLUES: the cluster starts with a single node, and working nodes are dynamically deployed and provisioned to fit the increasing load (the number of jobs at the LRMS); working nodes are undeployed when they become idle. This provides a cost-efficient approach to cluster-based computing.
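
A rough sketch of how such a cluster might be launched from the EC3 command-line client is shown below, wrapped in Python for consistency with the other examples; the recipe names and the contents of the authorization file are placeholders to be adapted from the EC3 documentation for your IaaS provider.

# Sketch: launch and later destroy an elastic SLURM cluster with the ec3 client.
# Recipe names and the credentials in auth.txt are placeholders; consult the
# EC3 documentation for the recipes and auth-file format of your provider.
import subprocess

# auth.txt holds the IaaS credentials in the Infrastructure Manager format
# (one line per provider) and must be prepared beforehand.
subprocess.run(
    ["ec3", "launch", "mycluster",   # name of the virtual cluster
     "slurm", "ubuntu16",            # example recipes: LRMS plus base OS
     "-a", "auth.txt"],              # authorization file with IaaS credentials
    check=True,
)

# When the cluster is no longer needed, undeploy all of its nodes.
subprocess.run(["ec3", "destroy", "mycluster"], check=True)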
