You can find my full CV here.

Research Interests

My research interests lie in the fields of Computer Systems, Virtualization, Operating Systems, High Performance Computing, and Interconnects.

My diploma thesis, conducted at the Computing Systems Laboratory under the supervision of Associate Professor Nectarios Koziris, focused on integrating HPC interconnect semantics into virtualized environments, using a simple, lightweight RDMA protocol over Ethernet and the Xen hypervisor's split driver model. [abstract]

While working on my diploma thesis, I also did debugging and optimization work on an Intel XScale (ARM-based) board running Linux, focusing on memory/DMA and network transactions.

I'm also very interested in the fields of Distributed Systems, Computer Architecture, OS Security, Networking in Virtualized/Cloud Environments, and Cloud Computing, especially in conjunction with HPC.

Publications
Work Experience

I worked for ~3 years as a (part-time) system administrator at the Computing Center of the Electrical and Computer Engineering Department of the National Technical University of Athens.

My duties involved maintaining, testing, and deploying various servers and services, including Web servers, DNS, Email, LDAP, Kerberos, and OpenVPN, as well as network maintenance (Cisco Catalyst switches, Cisco IOS, Wi-Fi, etc.).

I also worked as a system administrator at the DPG Web Development company. My duties involved the maintenance, scaling, and deployment of an infrastructure of Web and DB servers hosting high-traffic web portals.

Currently, I'm working at GRNET, specifically on its Cloud Computing software and infrastructure.

Education
Free Software Community Contributions

Diploma Thesis Abstract

The objective of this study is the analysis and evaluation of the behavior of modern HPC cluster interconnects in virtualized environments. This work is based on previous research conducted by the Computing Systems Laboratory: we evaluate a simple interconnect based on an RDMA mechanism over programmable 10GbE interfaces, as well as a modified implementation that integrates this interconnect into the Xen virtualization platform.

Both the native and the virtualized implementations of the protocol are thoroughly evaluated, in order to identify and eliminate possible bottlenecks both in hardware and in the protocol's implementation. To obtain further insight into the implications of software overheads, we port the virtualized implementation of the protocol to the host's kernel. Specifically, to profile and instrument the various phases of a network packet's lifecycle, we implement the interconnect's protocol using Xen's split driver model. This yields some interesting results: a significant amount of time is spent in the frontend-backend communication mechanism; moreover, for large messages, the time spent copying pages across domains is non-negligible. Using simple optimizations, we are able to amortize these overheads and thus reduce the total time spent in the software stack.
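
To illustrate where the frontend-backend overhead comes from, here is a minimal, hypothetical sketch (not the thesis code) of how a frontend driver typically posts a send request in Xen's split driver model: the data page is granted to the backend domain, a small descriptor is placed on a shared ring, and the backend is kicked over an event channel. The rdma_* structure and field names are assumptions for illustration; the grant-table and ring helpers shown are the standard Linux/Xen ones, whose exact signatures vary across kernel versions.

    /*
     * Hypothetical sketch of the frontend side of a send operation in Xen's
     * split driver model.  The request/response layout, the rdma_* names, and
     * the rdma_front_info fields are illustrative only, not the thesis code.
     */
    #include <linux/types.h>
    #include <xen/grant_table.h>
    #include <xen/events.h>
    #include <xen/interface/io/ring.h>

    /* Illustrative descriptors exchanged over the shared frontend-backend ring. */
    struct rdma_request {
        grant_ref_t gref;   /* grant reference of the data page        */
        uint32_t    len;    /* message length in bytes                 */
        uint16_t    id;     /* request id, echoed back in the response */
    };

    struct rdma_response {
        uint16_t id;
        int16_t  status;
    };

    /* Generates the rdma_front_ring / rdma_back_ring types and helpers. */
    DEFINE_RING_TYPES(rdma, struct rdma_request, struct rdma_response);

    struct rdma_front_info {
        struct rdma_front_ring ring;  /* ring shared with the backend */
        int irq;                      /* bound event channel          */
        domid_t backend_id;           /* usually dom0                 */
    };

    /*
     * Post one send request: grant the data page, queue a descriptor on the
     * shared ring, and notify the backend.  'gfn' is the guest frame number
     * of the data page (how it is derived from a struct page is
     * kernel-version specific, so it is simply passed in here).
     */
    static int rdma_front_post_send(struct rdma_front_info *info,
                                    unsigned long gfn, uint32_t len, uint16_t id)
    {
        struct rdma_request *req;
        int notify;
        int gref;

        /* "Register" the page: grant the backend read-only access, so it can
         * hand the data to the NIC without copying it into dom0 memory.     */
        gref = gnttab_grant_foreign_access(info->backend_id, gfn, 1 /* RO */);
        if (gref < 0)
            return gref;

        /* Fill in the next free request slot on the shared ring. */
        req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
        req->gref = gref;
        req->len  = len;
        req->id   = id;
        info->ring.req_prod_pvt++;

        /* Publish the request and kick the backend over the event channel
         * only if it is not already busy processing the ring.              */
        RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->ring, notify);
        if (notify)
            notify_remote_via_irq(info->irq);

        return 0;
    }

On the other side, the backend maps (or copies) the granted page, hands the data to the NIC, and answers over the same ring; per-message steps of this kind (grant, ring update, event-channel notification, mapping or copying in the backend) are roughly where the frontend-backend overhead discussed above accumulates.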

Compared to Xen's generic Ethernet interface, our approach is able to reduce the CPU overhead of protocol processing by transferring data directly from the VM's memory to the network. To achieve this, we register pages prior to communication, a common approach in HPC cluster interconnects. Preliminary results using simple micro-benchmarks show that the kernel-level implementation sustains 681 MiB/sec for large messages, while limiting the privileged guest's CPU utilization to 34%. In terms of latency, our approach achieves 28us, vs. 70us in the TCP/IP case.
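
The page registration mentioned above plays the same role as memory registration in InfiniBand-style interconnects: the cost of making guest memory reachable by the backend is paid once, outside the critical path. As a rough illustration only (the pool structure and names below are assumptions, not the thesis implementation, and the Xen grant-table call signatures vary across kernel versions), a frontend might grant a pool of send buffers to the backend at setup time and reuse the resulting grant references for every message:

    /*
     * Hypothetical sketch (not the thesis code): pre-registering a pool of
     * send buffers by granting them to the backend domain once, up front.
     * Later sends only reference the already-granted pages, so no grant
     * operation is needed on the per-message fast path.
     */
    #include <linux/types.h>
    #include <xen/grant_table.h>

    #define RDMA_POOL_SIZE 64   /* illustrative pool size */

    struct rdma_buf_pool {
        unsigned long gfns[RDMA_POOL_SIZE];  /* guest frame numbers of buffers */
        grant_ref_t   grefs[RDMA_POOL_SIZE]; /* grants handed to the backend   */
        domid_t       backend_id;            /* usually dom0                   */
    };

    /* Grant every buffer in the pool to the backend; called once at setup. */
    static int rdma_register_pool(struct rdma_buf_pool *pool)
    {
        int i;

        for (i = 0; i < RDMA_POOL_SIZE; i++) {
            int gref = gnttab_grant_foreign_access(pool->backend_id,
                                                   pool->gfns[i], 0 /* RW */);
            if (gref < 0) {
                /* Grants issued so far would have to be revoked here via the
                 * grant-table API (the exact call is kernel-version specific). */
                return gref;
            }
            pool->grefs[i] = gref;
        }
        return 0;
    }

With such a scheme, the per-message path only has to pick a buffer from the pool and place its pre-issued grant reference on the ring, which is one way to keep per-message CPU overhead low.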