Old Stuff
I was a Ph.D. student in the Sloop project-team (the former name of the Mascotte project-team) from December 1994 to December 1998; my advisors were Michel Syska and Jean-Claude Bermond. I then held a postdoctoral fellowship with CNES (the French space agency) from January 1999 to August 2000.
The USS-SIMGRID ANR Project (2010–2011)
Starting in September 2010, I am taking over Tasks 2.3 and 6.2 of WP6 of the USS-SIMGRID ANR project, following the departure of my colleague F. Lefessant (INRIA Saclay). Our contribution to this project will be twofold:
- Application Workload Characterization. The goal of this task is to capture the workload of an application through a set of well-defined events. Some of them (such as send and receive) are shared by all applications, but they are very low-level; higher-level events have to be specific to the application. For example, in a P2P DHT, join or lookup are classic events, while submit is often seen in a batch-scheduler context. This task has two goals. First, we aim at implementing an instrumentation tool able to capture the low-level events of any application (using system-level solutions such as LD_PRELOAD) and to record them in a generic event format (see the sketch after this list). Then, we want to provide a solution to generate the simulation code corresponding to a given event log. We do not plan on providing a tool to capture high-level traces, since they are too application-specific for us to devise a generic tool.
- Peer-to-peer backups: Simulation environments for large-scale distributed applications, such as peer-to-peer video-on-demand systems or generic peer-to-peer storage systems, are generally limited to the estimation of metrics such as the number of messages exchanged between the different peers and do not consider timing issues. In the particular case of peer-to-peer backup, being able to estimate the time needed to load or store a file chunk is crucial. We expect such a tool to provide a better understanding of the behavior of a working backup system and, in particular, to help compute some parameters that impact the performance of the system (sizes of volumes, sizes of chunks, failure detection delays) and are hard to guess from standard simulations, in which network characteristics are not taken into account well enough.
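To make the instrumentation idea above more concrete, here is a minimal sketch of what an LD_PRELOAD-based interposer for low-level events could look like. It is only an illustration under my own assumptions: the event format and the wrapped functions are placeholders, not the actual tool developed in the project.

```c
/* Minimal LD_PRELOAD interposer sketch (illustrative only, not the actual
 * USS-SimGrid instrumentation tool).  Build as a shared object:
 *   gcc -shared -fPIC -o libtrace.so trace.c -ldl
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Record one low-level event in a simple "timestamp event fd size" format. */
static void log_event(const char *event, int fd, ssize_t size)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    fprintf(stderr, "%ld.%06ld %s fd=%d size=%zd\n",
            (long)tv.tv_sec, (long)tv.tv_usec, event, fd, size);
}

/* Wrap send(): forward to the real libc symbol, then record the event. */
ssize_t send(int fd, const void *buf, size_t len, int flags)
{
    static ssize_t (*real_send)(int, const void *, size_t, int) = NULL;
    if (!real_send)
        real_send = dlsym(RTLD_NEXT, "send");
    ssize_t ret = real_send(fd, buf, len, flags);
    log_event("send", fd, ret);
    return ret;
}

/* Wrap recv() the same way. */
ssize_t recv(int fd, void *buf, size_t len, int flags)
{
    static ssize_t (*real_recv)(int, void *, size_t, int) = NULL;
    if (!real_recv)
        real_recv = dlsym(RTLD_NEXT, "recv");
    ssize_t ret = real_recv(fd, buf, len, flags);
    log_event("recv", fd, ret);
    return ret;
}
```

Preloading the resulting shared object (LD_PRELOAD=./libtrace.so ./my_app) makes every send() and recv() of the unmodified application go through the wrappers, which is the kind of system-level capture mentioned above.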
The Spiderman project (supported by INRIA since 2008)
Spiderman is a project initiated by my PhD student Juan-Carlos Maureira and our colleague Diego Dujovne (INRIA Planete project-team). It is a new system designed to provide network connectivity to in-motion communicating devices inside buses, trains, or subways moving at high speed. The system is made of two parts. The mobile part, called the Spiderman Device, is installed in the vehicles and provides a standard WiFi connection service to end users. The static part is made of multiple identical devices, called Wireless Switches, which are installed all along the path and provide the connection with the fixed network infrastructure. The connection between the mobile and fixed parts is maintained using a custom-made two-radio IEEE 802.11 handover procedure implemented within the Spiderman Device. This handover procedure is designed to ensure a continuous connection at the data-link layer for vehicles moving at speeds of up to 150 km/h and possibly higher. The system is currently under testing.
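For readers curious about the handover idea, here is a conceptual, self-contained sketch of a two-radio "make-before-break" loop as I understand the general principle. The actual Spiderman procedure is custom-made and works at the data-link layer, so every helper and threshold below is a hypothetical stub rather than the real implementation.

```c
/* Conceptual sketch of a two-radio, make-before-break handover step.
 * All helpers below are hypothetical stubs, not the real Spiderman code. */
#include <stdio.h>

#define SIGNAL_THRESHOLD (-75)   /* dBm; arbitrary value for the sketch */

struct radio { const char *name; int associated_switch; int rssi; };

/* Hypothetical stubs standing in for the real radio/bridging primitives. */
static int  scan_best_switch(struct radio *r)   { (void)r; return 42; }
static int  signal_level(struct radio *r)       { return r->rssi; }
static void associate(struct radio *r, int sw)  { r->associated_switch = sw; }
static void bridge_traffic_to(struct radio *r)  { printf("traffic now on %s\n", r->name); }

/* One handover step: the standby radio pre-associates with the next
 * Wireless Switch while the active radio still carries user traffic,
 * so the layer-2 connection never drops. */
static void handover_step(struct radio **active, struct radio **standby)
{
    int next = scan_best_switch(*standby);
    if (next >= 0 && signal_level(*active) < SIGNAL_THRESHOLD) {
        associate(*standby, next);
        bridge_traffic_to(*standby);     /* move layer-2 traffic over */
        struct radio *tmp = *active;     /* swap roles: old active now scans */
        *active = *standby;
        *standby = tmp;
    }
}

int main(void)
{
    struct radio a = { "radio A", 1, -80 }, b = { "radio B", -1, -60 };
    struct radio *active = &a, *standby = &b;
    handover_step(&active, &standby);    /* simulates one handover */
    return 0;
}
```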
The SPREADS ANR Project (2007–2010)
SPREADS is the acronym for Safe P2p-based REliable Architecture for Data Storage. It is a joint research project between UbiStorage, I3S/INRIA/Mascotte, Eurecom/NSTeam, LIP6/INRIA/REGAL, and LACL/SC. The project started in December 2007 with funding from the French National Research Agency (ANR) and additional sponsorship from the SCS competitiveness cluster. Many other people are working with me on this exciting project in the Mascotte team (at the time of writing, no fewer than 2 associate professors, 1 research associate, 1 postdoc, 1 engineer, and 3 PhD students!).
The BROCCOLI INRIA ARC Project (2008–2009)
The goal of the BROCCOLI ARC project is to design a platform for describing, deploying, executing, observing, administrating, and reconfiguring large-scale component-based software architectures, in particular for building discrete-event simulation applications. In addition to the Mascotte project-team (Judicael Ribault, Fabrice Peix, and myself), this project involves two other research groups:
- INRIA Futurs - ADAM Project-team (Philippe Merle and Lionel Seinturier)
- Telecom Sud-Paris ACMES team (Denis Conan and Sébastien Leriche).
The OSERA ANR project (2005–2008)
OSERA is a project funded by the ANR that aims at studying ambient networks in urban areas. I am working on this project with two other members of the Mascotte team, Hervé Rivano and David Coudert. Hervé and David mainly focus on the optimization and algorithmic aspects, while I focus on the simulation and discrete-event modeling ones. For this purpose, I initiated the design and development of a new open, component-based simulation platform called OSA. I work on this platform with the help of Cyrine Mrabet, who is an associate engineer in our team, and Judicael Ribault (M.Eng. student in computer engineering).
ASIMUT CNES project (1999–2004)
ASIMUT is a telecommunications network simulator. During my post-doc at the French Space Agency center in Toulouse, I participated in the design effort of a new simulation environment for (satellite) telecommunication networks, called ASIMUT. In short, ASIMUT innovates in the field of network simulation because it relies on a new hierarchical, component-based modeling concept. ASIMUT is a complete environment that provides support for network architecture design, simulation campaigns, experiment planning, data analysis and, to some extent, the development of model components (in C++).
“Exotic” File Systems for Unix/Linux
I started to focus on this topic during my PhD with the Multi-Point Communication File System (MPCFS). MPCFS is a kernel extension that allows Unix users to exchange data between Unix systems by simply reading or writing these data from/to special files. What is interesting in this approach is that once the MPCFS extension (a kernel module) is plugged into the operating system, users can benefit from its multi-point communication ability without any special tools or library. Sending data across the network is as easy as writing to a file, or redirecting the standard output of a process to a named pipe.
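As an illustration of this "plain file" interface, the fragment below shows what sending data to a group might look like from an application. The mount point and file name are hypothetical, chosen only to show that an ordinary open()/write() is all that is needed once the module is loaded.

```c
/* Illustrative sketch only: the "/mpcfs/<group>" layout is hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello, group\n";

    /* Open the (hypothetical) special file standing for a multi-point group. */
    int fd = open("/mpcfs/demo-group", O_WRONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Writing to the file is all it takes to send the data to every member
     * of the group; the kernel module handles the networking. */
    if (write(fd, msg, strlen(msg)) < 0)
        perror("write");

    close(fd);
    return 0;
}
```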
Unfortunately, the first prototype of MPCFS for Linux (1998) was too big and buggy to be of real use. Since the idea was still appealing, I decided to restart the project from the beginning (2001) with a much more modular approach: develop the file-system-based API on the one hand, and the multi-point communication protocols on the other. The first prototype of the API part was released in 2002 by Olivier Francoise. The protocol part is still under study…
If you want to learn more, you may have a look at the slides (PDF file, 682 KB) of the talk I gave at Sun Labs Europe (Grenoble, France) in November 2002…
… or at this newer version of the slides I used for the talk I gave at the SolutionsLinux conference in February 2003 (OpenOffice SXI or HTML formats).
Communications and dynamic load balancing for parallel and distributed architectures
I worked on this topic during my PhD thesis, especially on networks of workstations:
- In order to model the behavior (the performance level according to the workload level) of the various kinds of workstations available in a local area network, I developed the LoadBuilder environment, a distributed platform designed for the definition and management of distributed experiments. When complete, this platform should help in designing efficient information policies for multi-criteria dynamic load balancing algorithms.
- With the help of a few engineering trainees from the neighbouring School of Computer Engineering (ESSI), I initiated a project whose goal was to include in a UNIX kernel (Linux) all the functionalities allowing distributed parallel applications to transparently communicate through the file system. This project resulted in the development of a virtual file system driver for Linux: MPCFS.
- I was involved in setting up the first cluster of workstations installed at the INRIA Sophia Antipolis research center. I was especially interested in the performance evaluation of its communication network, and I was also in charge of installing and evaluating the performance of a small Myrinet network over these workstations.
- I regularly participated in the GRAPPES working group meetings, in which French teams interested in the various aspects of cluster computing presented their work and ongoing research.