Call for Papers
Agent-Based Simulations (ABSs) are an increasingly popular tool for research and management in many fields, such as ecology, economics, and sociology. In some fields, such as the social sciences, these models are seen as a key instrument of the generative approach, essential for understanding complex social phenomena. Their relevance and effectiveness have also recently been recognized in policy-making, biology, military simulations, the control of mobile robots, and economics.

The computer science community has responded to the need for platforms that support the development and testing of new models in specific fields by providing tools, libraries, and frameworks that speed up simulations and make massive-scale simulations possible. The key objective of this Fifth Workshop on Parallel and Distributed Agent-Based Simulations (PADABS) is to bring together researchers who are interested in getting more performance out of their simulations, whether through synchronized many-core simulations (e.g., on GPUs), strongly coupled parallel simulations (e.g., with MPI), loosely coupled distributed simulations (in distributed heterogeneous settings), or on-demand simulation (in cloud settings).

Several frameworks have recently been developed and are active in this field, ranging from GPU/many-core approaches to parallel and distributed simulation environments. In the first category is FLAME GPU, which allows even non-GPU specialists to harness GPU performance for real-time simulation and visualization. For tightly coupled, large computing clusters and supercomputers, a very popular framework is Repast for High Performance Computing (Repast HPC), a C++-based modeling system. On the distributed side, recent work on D-MASON (Distributed MASON) allows non-specialists to use heterogeneous hardware and software in local area networks to enlarge the size and speed up the simulation of complex ABS models.

The Program Committee includes the main developers of the three leading platforms in each of the distributed ABS fields (Repast HPC, FLAME and MASON/D-MASON) and experts in the field of ABSs.

The topics of interest for the Workshop include: 
  • Frameworks for parallel/distributed ABSs 
  • Case studies of ABSs in parallel/distributed settings, with an emphasis on the technical implementation, architectural choices and their impact on performances
  • Methods and techniques for applications of Deep Learning in parallel/distributed ABSs 
  • Data structures for accelerating communication in ABSs 
  • Benchmarking of parallel/distributed ABSs 
  • Debugging parallel/distributed ABSs 
  • Formal methods and algorithms for ABSs in parallel/distributed models 
  • Load Balancing algorithms, techniques and frameworks 
  • Management and deployment of parallel/distributed ABSs 
  • Visualization of parallel/distributed ABSs 
  • Parallel/distributed frameworks for Model Exploration and Simulation Optimization Process 
  • Open benchmark models contributing towards OpenAB initiative (www.openab.org)

The workshop will feature presentations of regular papers (25 + 5 minutes). Papers must be submitted in the regular format (10 pages, LNCS format) and will be reviewed anonymously by at least two members of the Program Committee. Acceptance as a regular paper will depend on scientific value and relevance to the workshop theme.

The workshop will also include: 

  • A panel session on the OpenAB initiative (chaired by Paul Richmond, University of Sheffield, UK)
  • An M.Sc./Ph.D. Forum (chaired by Jonathan Ozik, Argonne National Laboratory). The forum will provide an opportunity for M.Sc./Ph.D. students in the field of parallel/distributed ABSs to present their thesis/dissertation research, including work in progress within the topics of interest for PADABS 2017. To encourage interaction among the M.Sc./Ph.D. students, a full paper collecting the contributions of the forum will appear in the workshop post-proceedings. 
Submissions and reviewing will be through EasyChair (more details on the Submission page).


Deadlines

  • Workshop papers due: May 12, 2017
  • Workshop author notification: June 16, 2017