Research

DISSIMINET

DISSIMINET Associated Team

INRIA Sophia Antipolis MASCOTTE project-team

Since 2012, following Olivier Dalle's move to the OASIS team, the DISSIMINET Associated Team is a collaboration between the INRIA Sophia Antipolis OASIS project-team and the ARS Laboratory at Carleton University. This administrative change has no impact on the scientific activity and goals of the Associated Team.

Welcome to the DISSIMINET INRIA Associated Team web page.

About the Associated Team funding

An INRIA Associated Team provides financial support for exchanges between two research teams that already collaborate at a significant level. The INRIA funding is matched by similar funding from the partner institution.

Activity report

Summary of joint research activities

As further described below in the Research Goals section, our research follows the three directions below:

  1. Build a simulation middleware using Web services.
    Our work in this direction consists in providing a web-service middleware for simulation. It is a follow-up extension of the work done by Wainer and Moallemi on the RISE middleware. The initial version of the middleware was designed only to support the DEVS simulator developed at Carleton. The integration of the OSA simulator developed at INRIA Sophia Antipolis started in June 2012 while O. Dalle was visiting Carleton. This integration will allow combined simulations involving the two simulation platforms simultaneously. This work is still ongoing and has not yet been published.
  2. Provide tools to better support the simulation methodology.
    Our work in this direction consists in investigating and putting into practice a number of innovative ideas in the field of simulation:
    1. Improve the level of reuse [Dal11a, Dal11b]
    2. Provide strong workflow-based support of the methodology [RW12a]
    3. Provide (network-centric) tools to ensure reproducibility of simulation experiments [Dal12]
  3. Apply results to improve our existing simulation software.
    1. Investigate the use of web services for particular simulation applications, such as building simulations in which (intensive) computations are done in the cloud while visualization and rendering are offered through handheld devices [MWAD12, RW12b].
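To make the web-service direction above more concrete, here is a purely illustrative sketch of the REST-style interaction pattern that a middleware like RISE exposes: create an experiment as a resource, trigger a run, then fetch the results. The resource paths, class names, and fields below are all hypothetical, not the actual RISE API.

```python
# Toy in-memory stand-in for a RESTful simulation middleware.
# Resource names and payloads are illustrative only.

class SimulationService:
    """Maps REST-style verbs and paths to simulation-experiment actions."""

    def __init__(self):
        self._experiments = {}

    def put(self, path, payload):
        # PUT /experiments/<name> : create or update an experiment resource
        name = path.rsplit("/", 1)[-1]
        self._experiments[name] = {"config": payload, "results": None}
        return 201  # Created

    def post(self, path):
        # POST /experiments/<name>/simulation : start a simulation run
        name = path.split("/")[2]
        exp = self._experiments[name]
        exp["results"] = {"status": "done", "events": 42}  # placeholder run
        return 200

    def get(self, path):
        # GET /experiments/<name>/results : retrieve the results resource
        name = path.split("/")[2]
        return self._experiments[name]["results"]

service = SimulationService()
service.put("/experiments/p2p-study", {"model": "p2p-network", "stop_time": 100})
service.post("/experiments/p2p-study/simulation")
print(service.get("/experiments/p2p-study/results"))  # {'status': 'done', 'events': 42}
```

The appeal of this pattern for simulation interoperability is that a client only needs HTTP verbs and resource URIs, never the native API of the simulator behind the service.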

A new “cotutelle” PhD is starting in October 2012. The new student (Damian Vicino) will work on Open Science aspects, in the context of the ANR INFRA SONGS project: how to ensure reproducibility of simulations, how and where to put the material on-line, how to specify the experimental workflows, etc.

EA team contributors

  • Carleton University:
    • Gabriel Wainer (Full Professor)
    • Shafagh Jafer (PhD Student - graduated in Sept. 2011)
    • Mohammad Moallemi (PhD Student - graduated in Sept. 2011)
    • Sixuan Wang (PhD Student, Dept. of Systems and Computer Engineering, "Distributed Simulation and Architecture", since 2012)
    • Colin Timmons (M.A.Sc. Student, Systems and Computer Engineering, since 2011)
  • INRIA
    • Olivier Dalle (Maître de Conférences)
    • Françoise Baude (Professeur des Universités) (since Oct. 2012)
    • Emilio Mancini (Postdoc, ANR USS-SIMGRID contract) (Jan.-Nov. 2012)
    • Van Dan Nguyen (Expert engineer, INRIA funding - ADT OSA) (Jan. 2011-Sept. 2012)
    • Judicael Ribault (INRIA Postdoc, DGA funding): 1-year stay at Carleton (April 2011-March 2012)
    • Damian Vicino (PhD student, co-tutelle INRIA/Carleton) (since Oct. 2012)

Activities of the EA team members in reverse chronological order

  • Sep 13 2013 Early-draft presentation of Binding Layers Level 0, with some hints about upper levels, given at the ARS Lab seminar, at Carleton University:
    Binding Layers Level 0: an abstract, multi-purpose component model architecture API. (Olivier Dalle)
    Binding Layers (BL) is a work-in-progress project whose aim is to provide a generic means for the specification of complex and reusable software architectures based on components. Rather than compete with existing component models (CMs), BL is designed to build on top of most of them. In other words, given an existing CM named "A", BL provides a new generic API to describe an application architecture made of native "A" components without actually requiring the use of the native API of "A". This abstract representation of a component model and the corresponding generic API represents the first of the four levels of the BL API, called Level 0. In this presentation we will mainly focus on the design and implementation issues of this Level 0. However, some general principles about the upper levels of BL, and how they exploit lower levels to build the advanced features of BL incrementally on top of each other, will also be discussed briefly.
  • August 21–23, 2013, O. Dalle and Sixuan Wang (PhD student, Carleton) visit AutoDesk (R. Goldstein and S. Breslav) in Toronto. They discuss the issue of finding a compact, efficient and error-free representation of time in multi-scale simulations. A journal paper is in preparation.
  • August-Sept. 2013, O. Dalle visits Carleton (1 month)
    Olivier works on time representation (see the visit to AutoDesk above) and on the Binding Layers project. Binding Layers (BL) is a new component architecture and Architecture Description Language (ADL) for building hierarchical and dynamic software architectures. BL is not intended to be a self-contained component model, but rather to extend (and improve) existing component models by providing additional features. BL is specified in 4 levels, each building on top of the one below. The specification of the first three levels is well advanced and a first publication (as a Research Report) is planned for October.
  • June 2013, O. Dalle presents DISSIMINET activities at INRIA Journees Scientifiques
    Here are the slides of the presentation (PDF, 947 KB) given by O. Dalle and here is the video of the workflow demo (46.9 MB, WMV format).
  • June-July 2013, G. Wainer visits INRIA (3 weeks)
  • May 2013, D. Vicino stays at Carleton (2 weeks)
    Damian had a poster presentation (PDF, 108 KB) accepted at the ACM SIGSIM PADS Conference in Montreal and took the opportunity to combine it with a working stay at Carleton in the same trip.
  • January 2013, Damian Vicino joins the DISSIMINET Research Team as a joint PhD student between Univ. Nice and Carleton.
  • June 2012, Olivier Dalle visits Carleton University (1 month)
    Olivier works with Sixuan on the integration of OSA in the RISE web-service middleware. RISE is a dedicated middleware developed at Carleton for the execution of simulations through web services. This work aims at allowing two simulation sub-parts to interact transparently through the RISE middleware.
  • June 2012, Gabriel Wainer visits INRIA Sophia Antipolis (3 weeks)
    Gabriel works with Emilio.
  • March-April 2012, Olivier Dalle visits Carleton University (1 month)
    Olivier works with Judicael and Sixuan on simulation workflows.
  • September 2011, Emilio Mancini visits Carleton University (1 month)
    Emilio works on the modeling and simulation of large distributed systems with people from Carleton.
  • August 2011, Van Dan Nguyen visits Carleton University (1 month)
    Van Dan works on the OSA tutorial and the implementation of workflows in OSA with Judicael.
  • July 2011, O. Dalle visits Carleton University (1 month)
    Olivier works with Judicael on workflows. We study how to best describe simulation study workflows and how to support workflows that help end-users design new simulation studies or derive new studies from existing ones. Most existing works found in the literature describe workflows at a rather high level of abstraction, which makes them difficult to exploit in actual simulation tools. Judicael works on a practical case study derived from the OSA tutorial (File Transfers). New developments for OSA have started and a few ideas have emerged for writing a paper. This work is connected to some recent work by G. Wainer and one of his PhD students on workflows. Olivier and Judicael also worked a bit on the OSA tutorial, in preparation for the upcoming visit of Van Dan.
  • June 2011, 2 PhD students from Carleton visit Sophia Antipolis (1 month)
    Bad luck, both visits were cancelled! (Visa authorisations refused by the French authorities in Montreal at the last minute; plane tickets lost!)
    • The first student works on the simulation of embedded real-time systems. Simulation indeed makes it possible to study systems in situations that are difficult to obtain in reality. He relies on the DEVS modeling and simulation formalism, which is the specialty of his research team. Our team at Sophia Antipolis has only just started working on integrating this formalism into its own simulation platform. The student has developed an elaborate technique for combining, within the same simulation, simulated elements and real systems, and above all for replacing one with the other at will. Applying this technique to network simulation, our specialty at Sophia, would be a major contribution to our architecture. His visit would therefore allow him to collaborate with our researchers and engineers currently working on our OSA platform, in order to integrate this improvement.
    • The second student works on the parallelization of simulations designed with the DEVS formalism. This topic is at the heart of our current research work at INRIA Sophia Antipolis. The OSA platform we develop aims to simulate very large systems such as peer-to-peer networks. We are moreover involved in an ANR-funded project on this topic (ANR USS-SIMGRID). Simulating such systems can only be done using very large-scale computing resources such as the Grid5000 computing grid. The experience and techniques developed by this student would therefore be of great use to us.
  • January 2011, G. Wainer visits Sophia Antipolis (5 days)
    G. Wainer was invited as a jury member for the PhD defenses of two Mascotte “simulationists”: Juan-Carlos Maureira (January 20th) and Judicael Ribault (January 21st). Following his defense, Judicael will pursue his research in G. Wainer’s team at Carleton (April 2011-March 2012).
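One recurring technical topic in the activities above (notably the August 2013 discussions with AutoDesk) is finding a compact, efficient and error-free representation of time in multi-scale simulations. The sketch below only illustrates the underlying pitfall that motivates this work, namely floating-point drift when event times are accumulated; it makes no assumption about the representation the team eventually proposed.

```python
# Why exact time representation matters: accumulating a floating-point
# time step drifts, while exact rational arithmetic does not.
from fractions import Fraction

float_clock = 0.0
exact_clock = Fraction(0)
step = Fraction(1, 10)  # a 0.1-time-unit step, stored exactly as 1/10

for _ in range(1000):
    float_clock += 0.1    # 0.1 has no exact binary representation
    exact_clock += step   # exact rational addition

print(float_clock == 100.0)  # False: accumulated floating-point error
print(exact_clock == 100)    # True: exact arithmetic, no drift
```

In a multi-scale simulation, such drift can reorder events that should be simultaneous, which is why a simple float timestamp is not error-free.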

Associate Team Publications

[VDW13] Damian Vicino, Olivier Dalle, Gabriel A. Wainer. Using DEVS Models to Define a Fluid-Based uTP/LEDBAT Model. Poster presentation, ACM SIGSIM Conference on Principles of Advanced Discrete Simulation (PADS'13), Montreal, May 19–22, 2013.

[RW12a] Judicael Ribault, Gabriel A. Wainer. "Using Workflows Of Web Services To Manage Simulation Studies Into The Cloud". In Proceedings of the 2012 Spring Simulation Conference (SpringSim'12), DEVS/TMS Symposium, pages TBD, March 2012.

[RW12b] Judicael Ribault, Gabriel A. Wainer. Simulation Processes in the Cloud for Emergency Planning. In Proceedings of the 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2012), pages 886–891.

[MWAD12] Emilio P. Mancini, Gabriel Wainer, Khaldoon Al-Zoubi and Olivier Dalle (2012). Simulation in the Cloud Using Handheld Devices. In Workshop on Modeling and Simulation on Grid and Cloud Computing (MSGC@CCGRID 2012), Ottawa, Canada, May 2012. G. Wainer, D. Hill and S. Taylor (Eds.), IEEE, pages 867–872.

[Dal12] Olivier Dalle (2012). On Reproducibility and Traceability of Simulations. In Proceedings of the 2012 Winter Simulation Conference, December 2012. (C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose and A. M. Uhrmacher, Eds.). To appear.

[Dal11a] Olivier Dalle (2011) Should Simulation Products Use Software Engineering Techniques or Should They Reuse Products of Software Engineering? — Part 1. Modeling & Simulation Magazine. Online publication.

[Dal11b] Olivier Dalle (2011) Should Simulation Products Use Software Engineering Techniques or Should They Reuse Products of Software Engineering? — Part 2. Modeling & Simulation Magazine, 11(4). Online publication.

[WAD+11a] Gabriel A. Wainer, Khaldoon Al-Zoubi, Olivier Dalle, David R.C. Hill, S. Mittal, J.L. Risco Martín, Hessam Sarjoughian, L. Touraille, Mamadou K. Traoré and Bernard P. Zeigler (2011) Standardizing DEVS model representation. In Discrete-Event Modeling and Simulation: Theory and Applications, G. Wainer, P. Mosterman Eds., Taylor and Francis, pages 427–458.

[WAD+11b] Gabriel A. Wainer, Khaldoon Al-Zoubi, Olivier Dalle, David R.C. Hill, S. Mittal, J.L. Risco Martín, Hessam Sarjoughian, L. Touraille, Mamadou K. Traoré and Bernard P. Zeigler (2011) Standardizing DEVS Simulation Middleware. In Discrete-Event Modeling and Simulation: Theory and Applications, G. Wainer, P. Mosterman Eds., Taylor and Francis, pages 459–494.

Summary

Since January 2011, the INRIA Sophia Antipolis MASCOTTE project-team has been an associate team with the ARS Laboratory at Carleton University, Ottawa, ON (Canada). This Franco-Canadian team advances research on the definition of new algorithms and techniques for component-based simulation using a web-services-based approach. On the one hand, the use of web services is expected to solve the critical issues that pave the way toward the simulation of systems of unprecedented complexity, especially (but not exclusively) in studies involving large networks such as peer-to-peer networks. Web-service-oriented approaches have numerous advantages, such as allowing the reuse of existing simulators, allowing non-computer experts to merge their respective knowledge, or the seamless integration of complementary services (e.g. on-line storage and repositories, weather forecast, traffic, etc.). One important expected outcome of this approach is to significantly improve the simulation methodology in network studies, especially by enforcing the seamless reproducibility and traceability of simulation results. On the other hand, a net-centric approach to simulation based on web services comes at the cost of added complexity and incurs new practices, both at the technical and methodological levels. The results of this common research will be integrated into both teams' discrete-event distributed simulators: the CD++ simulator at Carleton University and the simulation middleware developed in the MASCOTTE EPI, called OSA, whose developments are supported by an INRIA ADT (Development Action) named OSA starting in December 2011.

Research Goals

In recent years, the use of Modeling and Simulation (M&S) has enabled the creation of new knowledge with an unprecedented level of detail, and it is now widely accepted that discovery and analysis are founded on three pillars: theory, experimentation, and simulation. The computational demands imposed by this model of research are continually pushing the envelope of the available technologies, as many sectors have growing needs to process, visualize, make readable, understand, and deploy complex models that use immense amounts of data. These players need to transform data into hypothesis building and critical decision-making, and to adapt or modify their models quickly in response to new discoveries and hypotheses. It is worth emphasizing that the studies carried out in various sectors using computer simulation techniques, e.g. on telecommunications and computer networks, on biological and environmental systems, on road networks and logistics, on emergency response plans or even on the organization of social events, have many similarities that result in a strong potential for reusing a common set of supporting software tools and computing resources. The ability to share these M&S resources is of the utmost importance. This involves two separate needs: executing distributed simulations (including simulator interoperation and their combination into larger experiments), and sharing M&S resources at different levels of resolution and visual assets between the participants. Based on our experience in using simulation in various scientific projects, we have also identified a number of methodological issues in the current common practices. A number of recent research studies have pointed out that a large number of publications using computer simulations in the networking community lack enough information to ensure reproducibility (and therefore credibility) of the published results. For example, in [2], Pawlikowski et al. surveyed over 2200 publications on telecommunication networks in proceedings of IEEE INFOCOM and such journals as the IEEE Transactions on Communications, the IEEE/ACM Transactions on Networking, and the Performance Evaluation Journal, and their conclusion was that "[…] the majority of recently published results of simulation studies do not satisfy the basic criteria of credibility." Similarly, in [7] Kurkowski et al. surveyed the results of MANET simulation studies published between 2000 and 2005 in the ACM MobiHoc Symposium: 75% of these papers used simulation, but less than 15% of the simulations were repeatable and only 7% addressed such important issues as initialization bias. Pointing out such important issues is a first step toward improving the practices, but it is far from sufficient, because these papers, despite being well known, do not provide technical means for solving the issues; they only give recommendations. Going one step further in their initiative, the authors of the previous papers started to contribute software tools to better support the simulation methodology, such as the AKAROA2 software developed by Pawlikowski et al. However, most of these tools only focus on a selected number of common flaws found in the simulation workflow methodology (e.g. AKAROA2 helps to address the initialization bias issue).
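The reproducibility problems surveyed above suggest that a published simulation result should ship with a machine-readable provenance record: exact parameters, the RNG seed, and the software environment. As a hedged illustration only (the field names and scope are hypothetical, not a tool produced by this team), a minimal sketch:

```python
# Minimal sketch of recording enough provenance to rerun a stochastic
# experiment and obtain the same result. Field names are illustrative.
import json
import platform
import random

def run_experiment(params, seed):
    rng = random.Random(seed)  # seeded RNG makes the draws repeatable
    samples = [rng.random() for _ in range(params["n"])]
    provenance = {
        "parameters": params,
        "seed": seed,
        "python_version": platform.python_version(),
    }
    return sum(samples) / len(samples), provenance

mean1, record = run_experiment({"n": 1000}, seed=42)

# A third party replays the experiment from the archived (serialized) record:
archived = json.loads(json.dumps(record))
mean2, _ = run_experiment(archived["parameters"], archived["seed"])
print(mean1 == mean2)  # True: the record alone reproduces the result
```

The point is exactly the one made by the surveys cited above: without such a record, a reader has no way to check that a reported 75%-style statistic, or any simulation output, can be regenerated.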

Methods to be employed

  1. Build a simulation middleware using Web services. Currently, most collaborative systems for M&S only focus on sharing data (in varied formats) and processing power. Existing simulation standards have had limited success. Instead, formal M&S techniques like DEVS (Discrete Event System Specification) promise better success by addressing these issues at a higher level of abstraction [1]. With DEVS, models, simulators and experiments are systematically created for interoperability. Nevertheless, distributed simulation is not enough: we also need to share models, experiments and visualizations. Service-Oriented Architecture (SOA) is a promising technology that can be used to achieve these goals (SOA is used to build interoperable systems for machine-to-machine interaction over a network), and varied simulation software is already using SOA-based technologies. However, building SOA-based simulations is still complex, as the services usually address the interoperability of simulation engines at a low level of abstraction.
  2. Provide tools to better support the simulation methodology. This Franco-Canadian collaboration intends to advance research one step further by providing means to support the methodology globally, during all the steps of the simulation study workflow. Indeed, credibility can only be achieved when all the elements of the methodology are considered together. Hence, a first obvious task is to find means for expressing such workflows. Some existing formalisms, like BPEL, have to be considered, but they might not be sufficient. Other initiatives, like the myExperiment project, seem to open very promising perspectives (http://www.myexperiment.org/). Furthermore, some goals like reproducibility and traceability can only be achieved at the global level of a full simulation workflow. Traceability, for example, is the property that allows a third party to retrieve all of the (software) components that were used to run a simulation experiment and publish the corresponding results, in the exact state of development they were in at the time of the study. Therefore, this property is impossible to achieve without some archival facility and version tracking system. Such tools are widely available (typical components of a software forge) but are still little used for archiving simulation software and experiments. Therefore, our proposed approach to deal with these important issues is to provide means of expressing the various workflows found in simulation-based studies. Once such workflows have been identified, we intend to use them to enforce traceability and reproducibility of simulation results by designing a set of specialized tools. For example, once a flaw has been found in a particular model used (and reused) in multiple simulations, the formal definition of the workflows used in these simulation studies can be used to trace automatically which parts of the studies are impacted.
  3. Apply results to improve our existing simulation software. We are thus investigating advanced algorithms and methods for combining DEVS, web services and SOA into a middleware for distributed simulation and collaboration [6]. This middleware will allow the interconnection of existing simulators, and in particular those developed in the two teams: the CD++ simulator at Carleton and OSA (Open Simulation Architecture) in the Mascotte EPI. However, this simulation middleware will also be open to other simulators, especially the other ones developed at INRIA like NS3 (part of the current OSA ADT goals), as well as complementary services by means of a mashup approach for sharing and reusing models and experiments (mashups are service-based applications that combine data and services provided by third parties). We also want to advance the current state of the art in the M&S field by using a more abstract solution (i.e., instead of dealing with the data and simulation levels, interoperability will be dealt with at the modeling and experimentation levels, improving reuse and providing better ways to mash up models, experiments and other services), based on recent advances in Software Engineering. Last but not least, we will define new methods to handle the large amounts of simulation data through instrumentation of scenarios, aggregation policies and dynamic adaptation of the simulation to varying computing conditions based on different policies, as we already started to investigate on the French side in the context of the INRIA ARC Broccoli project (2008–2009).
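For readers unfamiliar with DEVS, the structure invoked in point 1 above can be sketched very compactly: an atomic model is defined by its state, a time-advance function, an output function, and transition functions. The sketch below (a periodic job generator driven by a toy root coordinator) only illustrates the shape of the formalism; it is not the CD++ or OSA API, and it omits external transitions and coupled models.

```python
# Minimal DEVS-style atomic model: a generator that emits a 'job'
# every `period` time units, plus a toy simulation loop driven by
# the time-advance function. Illustrative only.

class Generator:
    """Atomic model whose state is the number of jobs emitted so far."""

    def __init__(self, period):
        self.period = period
        self.count = 0

    def time_advance(self):
        # ta(s): delay until the next internal event
        return self.period

    def output(self):
        # lambda(s): output produced just before the internal transition
        return ("job", self.count)

    def internal_transition(self):
        # delta_int(s): state change at the scheduled internal event
        self.count += 1

def simulate(model, until):
    """Toy root coordinator: advance the clock by ta(s) at each step."""
    t, trace = 0.0, []
    while t + model.time_advance() <= until:
        t += model.time_advance()
        trace.append((t, model.output()))
        model.internal_transition()
    return trace

trace = simulate(Generator(period=2.0), until=7.0)
print(trace)  # [(2.0, ('job', 0)), (4.0, ('job', 1)), (6.0, ('job', 2))]
```

Because the simulator only ever calls this small, uniform interface, interoperability can be addressed at the model level, which is precisely the higher level of abstraction the DEVS-plus-SOA approach above aims to exploit.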

[1] Bernard Zeigler (1976). Theory of Modeling and Simulation (first ed.). Wiley Interscience, New York.

[2] K. Pawlikowski, H.-D.J. Jeong and J.-S. Ruth Lee. On Credibility of Simulation Studies Of Telecommunication Networks (PDF, 129KB) (extended version). IEEE Communications Magazine, January 2002, 132–139.

[3] Jan Himmelspach, Olivier Dalle and Judicael Ribault. Design considerations for M&S software. In Proceedings of the Winter Simulation Conference (WSC'09). Austin, TX, December 13–16, 2009. (D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin and R. G. Ingalls, Eds.). Invited paper, to appear.

[4] Judicael Ribault and Olivier Dalle. Enabling advanced simulation scenarios with new software engineering techniques. In 20th European Modeling and Simulation Symposium (EMSS 2008). Briatico, Italy, September 2008.

[5] Rachid Chreyh and Gabriel A. Wainer. CD++ Repository: An Internet Based Searchable Database of DEVS Models and Their Experimental Frames. In proceedings of the Spring Simulation Conference, March 2009.

[6] Khaldoon Al-Zoubi and Gabriel A. Wainer. Performing Distributed Simulation with RESTful Web-Services Approach. In Proceedings of the Winter Simulation Conference (WSC'09). Austin, TX, December 13–16, 2009. (D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin and R. G. Ingalls, Eds.).

[7] S. Kurkowski, T. Camp, and M. Colagrosso. (2005) MANET Simulation Studies: The Incredibles. ACM’s Mobile Computing and Communications Review, 9(4), 50–61.
