Home Page

Welcome to my Wiki site!

Research Projects

My current research focus is on Modeling & Simulation and Peer-to-peer storage/backup systems.

Current

Safer

(Since 2015)

A new-generation NAS storage system that uses a Peer-to-peer backup infrastructure to reduce costs and improve reliability.

Funded Projects

INFRA-SONGS

(2012-2015)

The SONGS Project is a follow-up to the USS-SIMGRID ANR Project (see also here). The goal of the SONGS project is to extend the applicability of the SimGrid simulation framework from Grids and Peer-to-Peer systems to Clouds and High Performance Computing systems. Each type of large-scale computing system will be addressed through a set of use cases and led by researchers recognized as experts in this area. Any sound study of such systems through simulations relies on the following pillars of simulation methodology: an efficient simulation kernel; sound and validated models; simulation analysis tools; simulation campaign management. See also the WP8 page.

The EA DISSIMINET

(Associated Team) (2011–2013)

Since January 2011, the MASCOTTE project-team has been an associated team with the ARS Laboratory at Carleton University, Ottawa, ON (Canada). This Franco-Canadian team will advance research on the definition of new algorithms and techniques for component-based simulation using a web-services-based approach. On one hand, the use of web services is expected to solve the critical issues that currently stand in the way of simulating systems of unprecedented complexity, especially (but not exclusively) in studies involving large networks such as Peer-to-peer networks. Web-service-oriented approaches have numerous advantages, such as allowing the reuse of existing simulators, allowing non-computer experts to merge their respective knowledge, or seamlessly integrating complementary services (e.g. on-line storage and repositories, weather forecast, traffic, etc.). One important expected outcome of this approach is to significantly improve the simulation methodology in network studies, especially by enforcing the seamless reproducibility and traceability of simulation results. On the other hand, a net-centric approach to simulation based on web services comes at the cost of added complexity and requires new practices, both at the technical and methodological levels. The results of this common research will be integrated into both teams’ discrete-event distributed simulators: the CD++ simulator at Carleton University and the simulation middleware developed in the MASCOTTE EPI, called OSA, whose development is supported by an INRIA ADT (Development Action) named OSA, started in December 2011.

The OSA project

(Supported by INRIA between 2005 and 2012)

OSA stands for Open Simulation Architecture. This is a development project for a new discrete event simulation platform. The original elements of this new platform are:

  1. the integration, in the same tool, of a large number of Modeling & Simulation concerns (modeling, development, instrumentation, …)
  2. the extensive use of Component-Based Software Engineering (CBSE) techniques, and more particularly the Fractal component model (for example, in order to ease the reuse and replacement of parts of the platform AND of the models; cf. this paper)
  3. the use of Aspect Oriented Programming (AOP) techniques in order to separate concerns
  4. an open (Open Source) and modular architecture, easy to use (automatic dependency management based on a Maven repository), inspired by AND based on Eclipse
  5. a collaborative development model (forge, wiki …)

OSA v0.6 is available on the INRIA forge with a demo of Peer-to-peer storage simulation.
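
To make the discrete-event core of such a platform concrete, here is a minimal, self-contained sketch of a sequential event-driven engine: a time-ordered agenda of events processed in chronological order. This is an illustration only, not the OSA API; all class and method names below are hypothetical.

    import java.util.PriorityQueue;

    // Minimal discrete-event simulation kernel: NOT the OSA API, just an
    // illustration of the event-loop pattern such a platform wraps into
    // reusable components.
    public class MiniDes {

        // An event carries a timestamp and the action to execute at that time.
        static final class Event implements Comparable<Event> {
            final double time;
            final Runnable action;
            Event(double time, Runnable action) { this.time = time; this.action = action; }
            public int compareTo(Event other) { return Double.compare(time, other.time); }
        }

        private final PriorityQueue<Event> agenda = new PriorityQueue<>();
        private double now = 0.0;

        public double now() { return now; }

        // Schedule an action at an absolute simulated time.
        public void schedule(double time, Runnable action) {
            agenda.add(new Event(time, action));
        }

        // Process events in chronological order until the agenda is empty.
        public void run() {
            while (!agenda.isEmpty()) {
                Event e = agenda.poll();
                now = e.time;
                e.action.run();
            }
        }

        public static void main(String[] args) {
            MiniDes sim = new MiniDes();
            // Hypothetical peer-to-peer backup scenario: a peer stores a block,
            // then verifies it later in simulated time.
            sim.schedule(1.0, () -> System.out.println("t=" + sim.now() + " peer stores block"));
            sim.schedule(5.0, () -> System.out.println("t=" + sim.now() + " peer verifies block"));
            sim.run();
        }
    }

In OSA, this kernel concern is only one of several concerns (modeling, instrumentation, deployment, …) that the component-based architecture keeps separate.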

Other projects

Binding Layers

(Since Dec 2011)

Binding Layers is a new Component Architecture Model.

A software Component Architecture Model (CAM) describes a set of operating rules and mechanisms for building complex applications using a structured assembly of software components. Compared to a component model, e.g. J2EE, Spring, SCA or Fractal, a CAM does NOT specify the component model itself, but instead builds on top of existing Component Models (CMs). As a result, an important property sought in BL-CAM is genericity: BL-CAM is meant to be compliant with many Component Models.

Various approaches have been proposed so far to specify the structure of complex applications based on components, but the most popular are certainly the following:

  • Flat structures: all components live in a common container and interact directly with each other according to their dependencies;
  • Hierarchical structures: components can be grouped into bigger units, which can in turn be used to form even bigger units, and so on.

Both approaches have their pros and cons: flat structures avoid the complexity of hierarchy and therefore usually offer better performance, but at the cost of reduced reusability and control; on the contrary, hierarchical structures offer great means for reusing parts of an application, and the hierarchy provides a de facto means for building complex control and fine-tuned non-functional services. However, despite their popularity, both approaches fail to provide good means for the Separation of Concerns at the architectural level.

Binding Layers is an attempt to solve this issue by following a third, different approach. Like flat structures, BL does not suffer the performance cost of a many-level hierarchy, and yet, like hierarchical structures, it allows for sophisticated grouping strategies. For this purpose, BL relies extensively on two original features: component sharing and layering by extension.

Component sharing means that a single component instance can be found in many component assemblies. Therefore, assuming that component assemblies are formed according to some common concern, component sharing allows a component to be directly part of a concern, rather than having to reach for it, e.g. through a complex path in the component hierarchy. A usual idiom found in other component models is to shorten this path by placing non-functional concerns in, or beside, each component (e.g. in the membrane of Fractal components). However, this approach creates an artificial dichotomy among components, each of which ends up belonging to one of two dimensions: functional or non-functional. On the contrary, thanks to component sharing, Binding Layers supports an arbitrary number of dimensions (including functional and non-functional ones) seamlessly and uniformly.

Component groups formed in each dimension are called layers. Each layer has a flat structure. However, reuse is made easy: first, because the number of layers is not limited, each layer, typically in charge of one concern, can be reused independently to build new applications (e.g. a persistence layer can be reused across many applications). In addition, Binding Layers offers an extension mechanism, somewhat similar to the inheritance mechanism found in OO languages, that allows for incremental specialization of a given layer.

Status: work-in-progress.

See this presentation (PDF, 836 KiB) for more details.
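
To illustrate the component-sharing idea described above, here is a minimal Java sketch; the names and interfaces are hypothetical and are not the BL-CAM API. The same component instance is registered in two layers, one per concern, instead of being reached through a path in a component hierarchy.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical illustration of component sharing across layers: NOT the
    // BL-CAM API. A Layer is a flat group of components formed around one
    // concern; the same component instance may belong to several layers.
    public class BindingLayersSketch {

        static final class Layer {
            final String concern;
            final Map<String, Object> components = new LinkedHashMap<>();
            Layer(String concern) { this.concern = concern; }
            void add(String name, Object component) {
                components.put(name, component);
                System.out.println(concern + " layer now contains " + name);
            }
        }

        // A component that plays both a functional role and a persistence role.
        static final class CacheComponent {
            void get(String key) { System.out.println("functional: get " + key); }
            void flushToDisk()   { System.out.println("persistence: flush cache"); }
        }

        public static void main(String[] args) {
            CacheComponent cache = new CacheComponent();

            // The SAME instance is shared by two layers (two concerns).
            Layer functional  = new Layer("functional");
            Layer persistence = new Layer("persistence");
            functional.add("cache", cache);
            persistence.add("cache", cache);

            ((CacheComponent) functional.components.get("cache")).get("block-42");
            ((CacheComponent) persistence.components.get("cache")).flushToDisk();
        }
    }

Sharing the instance keeps both layers flat while still letting each concern see the component directly.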

Some Recent Talks

  • “Some questions about the relations between activity and time representations”, presented at the ACTIMS Workshop in Zurich, Jan 16–18 2014.
  • Binding Layers Level 0: An abstract multi-purpose component layer, Sophia Antipolis, SCADA meeting, Nov 28 2013.
    See project description above.
    (NB: This is the latest version of a talk first given at Carleton University, Ottawa, on Oct 13, 2013.)
  • “Using TM for high-performance Discrete-Event Simulation on multi-core architectures”. Presentation at the EuroTM’2013 Workshop on Transactional Memory, Prague, April 14th 2013.
    Abstract: I recently started to investigate how TM could be used to optimize the performance of a discrete-event simulation (DES) engine on a multi-core architecture. A DES engine needs to process events in chronological order. For this purpose, it needs an efficient data structure, typically abstracted as a heap or priority queue. My goal is therefore to design an optimized heap-like data structure supporting concurrent multi-threaded access patterns, such that multiple events can be processed in parallel by multiple threads. In DES, traditional parallelization techniques fall into two categories: conservative or optimistic. In the conservative approach, events are dequeued and processed in strict chronological order, which requires a synchronization protocol between the concurrent logical processes (LPs) to ensure consistency. In the optimistic approach, LPs are free to proceed and possibly violate the chronological order, but when such a violation happens, a roll-back mechanism is used to return to the last consistent state (which requires a snapshot).
    The solution I am currently investigating is based on a C++ library called TBoot.STM, which emulates TM in software. This library offers various transaction semantics, among which one, called invalidate-on-commit, allows a transaction to be invalidated by the process that “suffers” the violation rather than the one that originates it. In our case, assuming that a transaction is associated with the dequeuing and processing of an event, a transaction is deemed successful when it completes without any earlier event having been inserted into the heap and with no earlier event still pending. This is where building a solution on invalidate-on-commit and transaction composition seems promising: it is easier to detect a chronological violation when new events are inserted, and in that case all transactions that were mistakenly started too early can be invalidated. The library also provides a way of composing transactions, which could prove helpful as well: for example, an aggressively optimistic strategy could dequeue new events before the full completion of earlier events, in which case composition could be used to make the completion of later events depend on the completion of earlier ones.
    I am still at an early stage of this work, for which I have just started experiments and performance evaluations. (A minimal baseline sketch of the shared event queue is given after this list.)
  • Using Computer Simulations for Producing Scientific Results: Are We There Yet?
    Keynote presentation given at WNS3 2013, the 2013 Workshop on NS3, Cannes, France, March 5 2013
    Abstract: A rigorous scientific methodology has to follow a number of supposedly well-known principles. These principles go back as far as ancient Greece, where philosophers like Aristotle started to establish them; later notable contributions include principles set out by Descartes and, more recently, Karl Popper. All disciplines of modern Science manage to comply with those principles with quite some rigor. All… except maybe when it comes to computer-based Science.
    Computer-based Science should not be confused with the Computer Science discipline (a large part of which is not computer-based); it designates the corpus of scientific results obtained, in all disciplines, by means of computers, using in-silico experiments and in particular computer simulations. Issues and flaws in computer-based Science have been regularly pointed out in the scientific community over the last decade.
    In this talk, after a brief historical perspective, I will review some of these major issues and flaws, such as the reproducibility of results or the reusability and traceability of scientific software material and data. Finally, I will discuss a number of ideas and techniques that are currently being investigated or could serve as candidate solutions to those issues and flaws.
  • On Reproducibility and Traceability of Simulation Experiments (PDF) presented at WinterSim in Berlin, Dec. 2012.
    Abstract: Reproducibility of experiments is the pillar of a rigorous scientific approach. However, simulation-based experiments often fail to meet this fundamental requirement. In this paper, we first revisit the definition of reproducibility in the context of simulation. Then, we give a comprehensive review of issues that make this highly desirable feature so difficult to obtain. Given that experimental (in-silico) science is only one of the many applications of simulation, our analysis also explores the needs and benefits of providing the simulation reproducibility property for other kinds of applications. Coming back to scientific applications, we give a few examples of solutions proposed for solving the above issues. Finally, going one step beyond reproducibility, we also discuss in our conclusion the notion of traceability and its potential use in order to improve the simulation methodology.
  • My D.E.S. is Going to Be Better Than Yours (PDF) at an SFU seminar, Surrey Campus, Surrey, BC, May 28, 2012.
    Abstract: Although provocative, this claim is often made by those who are considering the perilous project of writing their own discrete-event simulator. In this talk, I will first review the pros and cons of writing a new simulator to demonstrate that there is no clear choice between writing a new simulator and reusing an existing one. Assuming that the decision to write a new simulator is eventually made, I will present a number of technical issues and some techniques that I have been using, over the last few years, to solve them. Most of these techniques involve advanced software engineering concepts, including Software Reuse, Aspect Oriented Programming, Separation of Concerns, Component Frameworks, and Architecture Description Languages.
    Then, I will introduce the Open Simulation Architecture (OSA) and its philosophy. OSA is a research project that I have been leading during the last few years, whose goal is to experiment with the various techniques described above in order to improve the simulation methodology. In response to the provocative title of this talk, I will show how OSA aims at offering a new simulator by attempting to integrate and reuse the best parts of other simulators. Finally, I will focus on the particular “layered” design used in OSA. While this concept sounds familiar, this layering is actually a unique feature that allows all of the previously mentioned concepts to fit together and serves the overall modeling and simulation methodology surprisingly well.
  • Some desired features for the DEVS ADL (PDF) at the DEVS/TMS Workshop, Boston, April 6th, 2011.
  • Invited presentation at the USS-SIMGRID workshop in Cargese (Corsica, FR), April 2010. (Some Methodology Issues and Methodology Experiments in the OSA Project - PDF slides)
  • Invited presentation at the ARS/SCS seminar at Carleton University (Ottawa, CA), August 2010 (Same slides as Cargese above).
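
As a companion to the EuroTM abstract above, here is a minimal Java sketch of the data structure at stake: a shared, time-ordered event queue drained by several worker threads. It deliberately uses a standard PriorityBlockingQueue and one coarse lock, i.e. the conservative baseline whose serialization a TM-based design (e.g. with invalidate-on-commit transactions) aims to remove; it is not the TBoot.STM-based solution, and all names are hypothetical.

    import java.util.concurrent.PriorityBlockingQueue;

    // Baseline (non-TM) sketch of the shared event queue discussed in the
    // EuroTM abstract. A TM-based engine would let several threads dequeue
    // and process events speculatively; here a single coarse-grained lock
    // keeps dequeue + processing atomic, so events are handled one at a
    // time, in chronological order.
    public class ConcurrentEventQueueSketch {

        static final class Event implements Comparable<Event> {
            final double time;
            final String payload;
            Event(double time, String payload) { this.time = time; this.payload = payload; }
            public int compareTo(Event other) { return Double.compare(time, other.time); }
        }

        private final PriorityBlockingQueue<Event> queue = new PriorityBlockingQueue<>();
        private final Object chronologicalOrder = new Object();

        public void schedule(Event e) { queue.add(e); }

        // Workers drain the queue; the lock serializes dequeue + processing,
        // which is precisely the bottleneck a TM-based design aims to remove.
        public void drainWith(int nThreads) throws InterruptedException {
            Runnable worker = () -> {
                while (true) {
                    synchronized (chronologicalOrder) {
                        Event e = queue.poll();
                        if (e == null) return;
                        System.out.println("t=" + e.time + " " + e.payload);
                    }
                }
            };
            Thread[] threads = new Thread[nThreads];
            for (int i = 0; i < nThreads; i++) { threads[i] = new Thread(worker); threads[i].start(); }
            for (Thread t : threads) t.join();
        }

        public static void main(String[] args) throws InterruptedException {
            ConcurrentEventQueueSketch q = new ConcurrentEventQueueSketch();
            q.schedule(new Event(2.0, "process packet"));
            q.schedule(new Event(1.0, "send packet"));
            q.drainWith(4);
        }
    }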

OldStuff

Miscellaneous activities

Conference & Workshop organizations

Conference Programme Committee Memberships

Some even older activities

Stays abroad

Miscellaneous tasks & memberships

  • Expert reviewer for the French Ministry of Higher Education & Research (MESR) on CIR (Crédit Impôt Recherche) applications (2010-)
  • Member of the Comité de Sélection (hiring committee) for a permanent faculty position at the University of Provence (Marseille) (2010)
  • Expert reviewer for ANR projects and other similar submissions (2008)
  • Member of the VerSim working group, where French-speaking researchers discuss theoretical aspects of simulation (I organized the latest meeting in Sophia Antipolis, June 6th, 2006)
  • I have been a member of the Commission de Spécialistes, 27th section, of the University of Nice (the computer science scientific committee) since 2001.
  • I am also a member of several committees in both of my research labs: the Commission Développements Logiciels (Software Development Committee) of INRIA Sophia Antipolis, the Commission Informatique (Computing technical committee) of I3S, …
  • I am a member of various societies (ACM SIGSIM, ICST, IEEE CS, SCS, …). (Unless I forgot to renew a membership…)

You will find more about me here.
