- A Broader View on Verification: From Static to Runtime and Back
Organizers: Wolfgang Ahrendt, Marieke Huisman, Giles Reger, Kristin Yvonne Rozier
- Evaluating Tools for Software Verification
Organizers: Markus Schordan, Dirk Beyer, Stephen F. Siegel
- Towards a Unified View of Modeling and Programming
Organizers: Manfred Broy, Klaus Havelund, Rahul Kumar, Bernhard Steffen
- RV-TheToP: Runtime Verification from Theory to Industry Practice
Organizers: Ezio Bartocci and Ylies Falcone
- Rigorous Engineering of Collective Adaptive Systems
Organizers: Rocco De Nicola, Stefan Jähnichen, Martin Wirsing
- Reliable Smart Contracts: State-of-the-art, Applications, Challenges and Future Directions
Organizers: Gerardo Schneider, Martin Leucker, César Sánchez
- Formal Methods in Industrial Practice – bridging the gap
Organizers: Michael Felderer, Dilian Gurov, Marieke Huisman, Björn Lisper, Rupert Schlick
- Science and Engineering for Distributed Ledger Systems of Future Practice
Organizers: Fritz Henglein, Michael Huth, Boris Düdder, Leif Lundbaek, Jesper Andersen
- X-by-Construction
Organizers: Maurice H. ter Beek, Loek Cleophas, Ina Schaefer, and Bruce W. Watson
- Statistical Model Checking
Organizers: Axel Legay and Kim Larsen
- Verification and Validation of Distributed Systems
Organizer: Cristina Seceleanu
- Cyber-Physical Systems Engineering
Organizers: J Paul Gibson, Marc Pantel, Peter Gorm Larsen, Jim Woodcock, John Fitzgerald
- ISoLA 2018 Doctoral Symposium
Organizer: Anna-Lena Lamprecht
A Broader View on Verification: From Static to Runtime and Back
Organisers
Wolfgang Ahrendt, Chalmers University of Technology
Marieke Huisman, University of Twente
Giles Reger, University of Manchester
Kristin Yvonne Rozier, Iowa State University
Recently, there have been multiple proposals to combine static and dynamic program analysis
techniques. Static analysis has been used to reduce the overhead of dynamic analysis, while
dynamic analysis is used where static analysis fails, or to learn information that can later be
used for static analysis (e.g., mining invariants and detecting data structures). Significant
progress has been made in this way, but the efforts also show that the communities of static and
dynamic analysis researchers are often quite disjoint. There is even less of a connection between
other early design-time verification techniques, such as model checking, the intermediate
techniques (e.g., code analysis), and the execution-time techniques (e.g., runtime monitoring)
that follow later in the system's development cycle.
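As a concrete illustration of the dynamic-analysis side of such combinations, the following is a
minimal sketch, in the spirit of Daikon-style invariant mining, of how candidate invariants can be
learned from execution traces and then handed to a static verifier as proof hypotheses. The
candidate invariants and variable names are purely illustrative assumptions, not the output of any
particular tool.

```python
# Minimal sketch of dynamic invariant mining (Daikon-style), as one example of
# dynamic analysis producing information that static analysis can consume later.
# The candidate invariants and variable names below are purely illustrative.

def mine_invariants(traces):
    """Given traces (lists of dicts mapping variable names to values),
    return the candidate invariants that hold in every observed state."""
    candidates = {
        "x >= 0":   lambda s: s["x"] >= 0,
        "x < len":  lambda s: s["x"] < s["len"],
        "len >= 0": lambda s: s["len"] >= 0,
    }
    surviving = set(candidates)
    for trace in traces:
        for state in trace:
            for name in list(surviving):
                if not candidates[name](state):
                    surviving.discard(name)  # falsified by this execution
    return surviving  # hypotheses to be confirmed (or refuted) statically

if __name__ == "__main__":
    observed = [
        [{"x": 0, "len": 3}, {"x": 1, "len": 3}, {"x": 2, "len": 3}],
        [{"x": 0, "len": 1}],
    ]
    print(mine_invariants(observed))  # e.g. {'x >= 0', 'x < len', 'len >= 0'}
```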
Within this track, we will investigate what can be achieved by joint efforts of the communities
working on verification techniques for different design stages, including dynamic and static
analysis, model checking, deductive verification, and runtime verification. This track is a
follow-up of the track Static and Runtime Verification: Competitors or Friends?, organised during
ISoLA 2016. There, we investigated in what different ways static and dynamic analysis could be
combined. During ISoLA 2018, we will address questions such as: which application areas could
benefit from combinations of static and dynamic analysis, which program properties can be
established using which technique, what artifacts can be carried forward through different
verification technologies, what is a good balance between guarantees and user effort in various
application areas, and how can we integrate the various techniques into the software and
hybrid/cyber-physical system development process?
This track will consist of three sessions of three 30-minute talks each, followed by a 90-minute
closing discussion. The closing discussion should lead to a roadmap for how to continue the work
on combining static and dynamic analysis.
Evaluating Tools for Software Verification
Organisers
Markus Schordan (Lawrence Livermore National Laboratory, CA, USA)
Dirk Beyer (LMU Munich, Germany)
Stephen F. Siegel (University of Delaware, DE, USA)
In the last few years, tools for program analysis and verification have become considerably more
mature, and there are now competitions that comparatively evaluate the implemented analyses on a
given set of benchmarks. The comparison of the analyses focuses either on the analysis results
themselves (verification of specified properties) or on the impact on a client analysis.
This track is concerned with evaluation methods for comparing analysis and verification
techniques, and with how verified program properties can be represented so that they remain
reproducible and reusable as intermediate results for other analyses, verification phases, and tools.
Topics of interest
• Reuse of verification results and combination of multiple verifiers using conditional model checking
• Analysis benchmarking and experimental evaluation of analysis accuracy
• Verification of parallel programs (e.g., data races in shared-memory parallel programs)
• Identification of undefined behavior in programs (e.g., C and C++)
• Generation and checking of verification witnesses
• Interoperability of analysis tools
• Combination of techniques (e.g., model checking with theorem proving)
• Approaches for evaluating interactive verification tools
Towards a Unified View of Modeling and Programming
Organisers
Manfred Broy, Technische Universität München, Germany
Klaus Havelund, Jet Propulsion Laboratory, NASA, USA
Rahul Kumar, Jet Propulsion Laboratory, NASA, USA
Bernhard Steffen, TU Dortmund, Germany
For several decades we have seen a tremendous amount of scientific work in the
fields of visual modeling formalisms (from here on, "modeling" refers to visual
modeling along the lines of, e.g., SysML and UML), formal specification languages
(usually textual, e.g. Z, VDM, Coq, PVS, ...), and programming languages
(usually textual). In spite of the very high volume of work, however, this effort has
been limited by the fact that these formalisms are not sufficiently integrated.
Visual formalisms appreciated by users are usually not connected to
formal specification languages, and usually both of these are disconnected from
programming languages. There do exist crossover solutions, such as Dafny and
Whiley, imperative programming languages born with verification in mind, and
Agda and Idris, functional programming languages born with verification in mind.
This situation poses several sub-challenges. First, a tighter integration
between visual modeling and textual specification languages might be
desirable. Second, a better understanding of the commonalities between
modeling and specification languages on the one hand, and programming
languages on the other, is desirable. Third, the role of Domain-Specific
Languages (DSLs) and meta-programming is closely related to these topics, as a
possibly synergistic element connecting them.
An additional difficulty is that, even within each of these fields, a rich
variety of different approaches exists. Scientifically, this variety is highly
necessary. For these methods to find their way into engineering practice, however,
the variety and the missing integration are serious obstacles. One may
note, though, that the OMG is attempting such standardization for modeling formalisms.
It is the goal of the meeting to discuss the relationship between modeling,
specification, and programming, with the possible objective of reaching an
agreement on what a unification of these concepts would mean at an abstract
level, and what benefits it would bring at the practical level. What are the
trends in the three domains: model-based engineering, formal specification and
verification, and programming, and what can be considered to unify these
domains?
RV-TheToP: Runtime Verification from Theory to Industry Practice
Organisers
Ezio Bartocci, TU Wien, Austria
Ylies Falcone, Univ. Grenoble Alpes, Inria, CNRS, Laboratoire d'Informatique de Grenoble, France
Runtime Verification (RV) has gained much attention from both
the research community and practitioners. Roughly speaking, RV combines a set of
theories, techniques, and tools aimed at efficiently analysing systems' executions
and guaranteeing their correctness using monitoring techniques. Major challenges in
RV include characterizing and formally expressing requirements that can be monitored,
proposing intuitive and concise specification formalisms, and monitoring specifications
efficiently (time- and memory-wise).
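To make the monitoring idea concrete, the sketch below shows a hand-written online monitor for a
simple resource-usage property over an event stream ("no close without a matching open, and every
open is eventually closed"). It is our own toy illustration; actual RV tools generate such
monitors automatically from specifications.

```python
# Minimal sketch of an online monitor for a simple property over an event
# stream: "close" must never occur without a preceding unmatched "open", and
# at the end of the run every "open" must have been closed. Illustrative only.

class OpenCloseMonitor:
    def __init__(self):
        self.open_count = 0
        self.violated = False

    def step(self, event):
        if self.violated:
            return "violation"
        if event == "open":
            self.open_count += 1
        elif event == "close":
            if self.open_count == 0:
                self.violated = True          # close without a matching open
            else:
                self.open_count -= 1
        return "violation" if self.violated else "inconclusive"

    def end_of_trace(self):
        # At the end of the run, all opens must have been matched by a close.
        ok = not self.violated and self.open_count == 0
        return "satisfied" if ok else "violation"

if __name__ == "__main__":
    monitor = OpenCloseMonitor()
    for event in ["open", "open", "close", "close"]:
        monitor.step(event)
    print(monitor.end_of_trace())  # satisfied
```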
Despite the major strides made in recent years, much effort is still needed to make RV
an attractive and viable methodology for industrial use. In addition, further studies are
needed to apply RV to wider application domains such as security, bio-health, power
micro-grids, and the Internet of Things.
A Springer LNCS tutorial volume on advanced research topics is about to be released
by the organisers of the track. The book will contain a selection of tutorials on
advanced topics from leading researchers in the domain of runtime verification.
The purpose of the RV-TOP track at ISoLA'18 is to bring
together experts on runtime verification and industry practitioners in order to i)
disseminate advanced research topics, ii) disseminate current industrial
challenges, and iii) make RV more attractive to industry and usable in additional application
domains. As such, the track will contain three sorts of presentations:
– presentation of advanced tutorials from the authors involved in the Springer LNCS
tutorial volume;
– presentation of current challenges in industry from industry practitioners;
– presentation of successful applications of RV to past industrial problems.
Rigorous Engineering of Collective Adaptive Systems
Organisers
Rocco De Nicola, IMT Lucca, Italy
Stefan Jähnichen, TU Berlin, Germany
Martin Wirsing, LMU München, Germany
Today's software systems are becoming increasingly distributed and decentralized and have to adapt autonomously to dynamically changing, open-ended environments. Often the nodes have their own individual properties and objectives; interactions with other nodes or with humans may lead to the emergence of unexpected phenomena. We call such systems collective adaptive systems. Examples of collective adaptive systems include robot swarms, smart cities, voluntary peer-to-peer clouds, as well as socio-technical systems and the Internet of Things.
This track is a follow-up of the successful tracks on "Rigorous Engineering of Autonomic Ensembles" at ISoLA 2014 and ISoLA 2016 and aims at bringing together researchers and practitioners to present and discuss rigorous methods for engineering collective adaptive systems. Topics of interest include (but are not limited to):
• Methods and theories for designing and analysing collective adaptive systems
• Techniques for programming and operating collective adaptive systems
• Methods and mechanisms for adaptation and dynamic self-expression
• Validation and verification of collective adaptive systems
• Security, trust and performance of collective adaptive systems
Reliable Smart Contracts: State-of-the-art, Applications, Challenges and Future Directions
Organisers
Gerardo Schneider, University of Gothenburg, Sweden
Martin Leucker, University of Lübeck, Germany
César Sánchez, Instituto IMDEA Software, Madrid, Spain
Blockchain is a global distributed ledger or database running on millions of devices where not
just information but anything of value (money, music, art, intellectual property, votes, etc.) can
be moved and stored securely and privately [1]. On the blockchain trust is established through
mass (distributed) collaboration. Blockchain has the potential to change in a fundamental way
how we deal not only with financial services but also with more general applications, improving
transparency and regulation. Many applications have been proposed, starting with bitcoin [1],
and smart contracts as introduced in Ethereum [2].
Smart contracts are software programs that self-execute complex instructions on blockchains.
The promise of smart contract technology is to diminish the costs of contracting, enforcing
contractual agreements, and making payments, while at the same time ensuring trust and
compliance in the absence of a central authority. It is not clear, however, whether this
promise can be delivered given the current state of the art and state of practice of smart
contracts.
In particular, the two most recent multi-million Ethereum bugs [4,5] are witness to what the
community was afraid of: it is not clear what the contracts mean or how to ensure that
they are reliable and bug-free. This calls for better programming languages with stronger
security and privacy guarantees, or for mechanisms for verification, security, and privacy to be
provided for existing ones.
In this track we want to encourage contributions for discussion on topics related, but not limited, to:
— Research on different languages for expressing smart contracts (e.g., Solidity [3]).
— Research on the use of formal methods for specifying and verifying smart contracts (both
statically and at runtime).
— Surveys and SoK about security and privacy issues related to smart contract
technologies.
— New applications based on smart contracts.
— Descriptions of challenges and research directions for the future development of better smart contracts.
Formal Methods in Industrial Practice – bridging the gap
Organisers
Michael Felderer, University of Innsbruck, Austria
Dilian Gurov, KTH Stockholm
Marieke Huisman, University of Twente
Björn Lisper, Mälardalen University
Rupert Schlick, AIT Austrian Institute of Technology
For many decades already, formal methods have been considered the way forward to help the
software industry make more reliable and trustworthy software. However, despite this strong
belief and many individual success stories, no real change in industrial software development
seems to be happening. In fact, the software industry itself is moving forward fast, and the gap
between what formal methods can achieve and daily software development practice does
not seem to be getting smaller (and might even be growing).
In the past, numerous recommendations have already been made on how to develop formal
methods research in order to close this gap. In this track, we plan to investigate why the gap
nevertheless still exists, and what can be done by the formal methods research community to
bridge it.
The track will consist of three sessions of three 30-minute talks each, followed by a 90-minute
closing discussion. We plan to invite speakers who have collaborated with industry, and
ask them about their experiences and recommendations on what should be done to close the gap.
In addition, we would also like to invite industrial speakers who have collaborated with
academia, so as to learn from their experiences. During the closing discussion, we will
investigate whether there are shared recommendations, and also how we can put these
recommendations into practice.
Science and Engineering for Distributed Ledger Systems of Future Practice
Organisers
Fritz Henglein, University of Copenhagen, Denmark
Michael Huth, Imperial College London, United Kingdom
Boris Düdder, University of Copenhagen, Denmark
Leif Lundbaek, Imperial College London, United Kingdom / XAIN AG, Berlin, Germany
Jesper Andersen, Deon Digital AG, Switzerland
The past ten years have seen considerable innovation in the foundations of state-replication and
state-machine replication systems. Bitcoin is one such example of creating trust in a network of
mutually distrustful nodes, so that its replicated state of financial transactions appears to be
resilient to manipulation, trustworthy, and effective. A Blockchain - also known as a Distributed
Ledger - is a cryptographic data structure used in a peer-to-peer network such as Bitcoin for
storing and replicating across that network a sequence of events in an immutable ledger. The
shift to blockchain systems is promising but also comes with multiple trade-offs, for example
between the desire for fast trustworthiness of new information on the blockchain and high
transaction volumes of information processing. Smart contracts are programs stored directly in
the blockchain that implement complex business logic; their replicated execution also
guarantees their immutability and consensus. However, the execution of such programs takes
place in a replicated virtual machine, and this interaction can generate vulnerabilities (e.g. due to
language constructs with complex semantics). Since economic assets can be stored in a
blockchain, e.g. Bitcoin or Ether, exploiting such vulnerabilities may lead to direct financial
loss (e.g. as witnessed in the DAO hack with US$ 50 million in June 2016).
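For illustration only, the sketch below shows the hash-chaining that makes such a ledger
tamper-evident; it is a deliberately simplified model of our own that omits consensus, peer-to-peer
replication, and smart-contract execution entirely.

```python
# Minimal sketch of the hash-chaining behind an append-only ledger. Consensus,
# networking and smart contracts are deliberately omitted; illustrative only.
import hashlib
import json

def block_hash(prev, events):
    """Hash of a block's contents, including the previous block's hash."""
    payload = json.dumps({"prev": prev, "events": events}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append_block(chain, events):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "events": events,
                  "hash": block_hash(prev, events)})

def verify_chain(chain):
    """Re-check every link; modifying any past block breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["events"]):
            return False
        prev = block["hash"]
    return True

if __name__ == "__main__":
    chain = []
    append_block(chain, ["alice pays bob 5"])
    append_block(chain, ["bob pays carol 2"])
    print(verify_chain(chain))               # True
    chain[0]["events"] = ["alice pays bob 500"]
    print(verify_chain(chain))               # False: tampering is detected
```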
Blockchains are novel (2007) and their technology is still too immature to support large-scale,
real-world applications; for example, Bitcoin cannot attain processing rates such as those needed by
Visa systems. Therefore, future blockchain systems are likely to look and behave very differently
from today's academic and industrial proofs of concept and products. The charting of the design
space for blockchain systems has just begun, and its regions still need to be explored. In
addition, various sweet spots in that design space, desirable from the perspective of an
industrial application, are hard to optimize - for example a sweet spot that makes optimal
trade-offs between scalability and security. Despite the novelty of blockchain technology,
computer science, cryptography, and other areas do offer a lot of relevant knowledge and
methods that are applicable to designing, implementing, and validating blockchain systems of future
practice. We envision that the use of such methods will lead to not one but an interoperable family
of blockchain methods and products, in part supported by international standards - similar to
how today's database management systems co-exist. Nevertheless, these products, methods,
and platforms will benefit from an ability to be tailored and customized to individual usages
within enterprises (much like product lines with particular features), for example from an ability
to choose application-specific consensus mechanisms such as full/partial consensus vs.
centralized consensus (notaries).
The workshop's intention is to bring researchers from computer science, business & enterprise
systems, cryptography, artificial intelligence, policy, and related fields together with practitioners,
developers, and business stakeholders to discuss R&D topics in this space that can help with
the adoption and maturation of this technology, as well as with realizing its full potential in
future practice.
We are interested in cross-fertilization of ideas in all areas of blockchain technology, including:
— Program analysis & verification
— Execution platforms, their operational semantics and validation
— Programming models and languages for Blockchain and Smart Contracts
— Automated Smart Contract generation and synthesis
— Blockchain System Design, Modelling, and Online Optimization
— Data Science and Artificial Intelligence for Adaptive Blockchain Systems
— Software and Cyber Security of Blockchain Systems
— Enterprise System Integration and Blockchain Bridges for Legacy Systems
— Future Applications and Case Studies
X-by-Construction
Organisers
Maurice H. ter Beek, Loek Cleophas, Ina Schaefer and Bruce W. Watson
Correctness-by-Construction (C-by-C) approaches the development of software (systems) as a true form of Engineering, with a capital 'E'. C-by-C advocates a step-wise refinement process from specification to code, ideally supported by C-by-C design tools that automatically generate error-free software (system) implementations from rigorous and unambiguous specifications of requirements. Afterwards, testing only serves the purpose of validating the C-by-C process rather than finding bugs. (Of course, bugs might still be present outside the boundaries of the verified system: in libraries, compilers, hardware, etc.)
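As a toy illustration of this style of development (our own sketch, not the output of any C-by-C design tool), the code below develops an integer square root around an explicit specification and loop invariant; the assertions document and validate the construction rather than hunt for bugs.

```python
# Toy illustration of the correctness-by-construction flavour: the program is
# developed around a stated specification and loop invariant, and the runtime
# assertions validate the construction rather than search for bugs.

def isqrt(n: int) -> int:
    """Specification: requires n >= 0; ensures r*r <= n < (r+1)*(r+1)."""
    assert n >= 0                              # precondition
    r = 0
    # Loop invariant: r*r <= n
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n                      # invariant holds in the body
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)      # postcondition
    return r

if __name__ == "__main__":
    print([isqrt(k) for k in range(10)])  # [0, 1, 1, 1, 2, 2, 2, 2, 2, 3]
```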
A lot of progress has been made in this domain, which suggests it is time to look beyond correctness and investigate a move from C-by-C to X-by-C, i.e. to also consider non-functional properties. X-by-C is thus concerned with a step-wise refinement process from specification to code that automatically generates software (system) implementations which, by construction, satisfy specific non-functional properties concerning security, dependability, reliability, resource/energy consumption, etc.
Building on the highly successful ISoLA 2016 track "Correctness-by-Construction and Post-hoc Verification: Friends or Foes?", which focussed on the combination of post-hoc verification with C-by-C, we propose "X-by-Construction" (X-by-C) as a track at ISoLA 2018, with the aim of bringing together researchers and practitioners who are interested in C-by-C and the promise of X-by-C.
Statistical Model Checking
Organisers
Axel Legay and Kim Larsen
Statistical Model Checking (SMC) has been proposed as an
alternative that avoids an exhaustive exploration of the state space of a
system under verification. The core idea of the approach is to run a
number of simulations of the system and then use results from
statistics in order to decide whether the system satisfies the
property with respect to a given probability. The answer is correct up
to some confidence. SMC is generally much faster than formal
verification techniques, and it is closer to current industry
practices. Moreover, the approach can be used to verify properties
that cannot be expressed in the classical temporal logics used in
formal verification, as well as to approximate undecidable model
checking problems. Impressive results have been obtained in various
areas such as energy-centric systems, automotive, or systems biology.
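As a minimal illustration of this core idea (our own sketch, not a particular SMC tool), the code
below estimates the probability that a randomly simulated run satisfies a bounded property, using
the Chernoff-Hoeffding bound to pick the number of simulations for a chosen precision and
confidence; the toy random-walk system and its parameters are assumptions made purely for the
example.

```python
# Minimal sketch of statistical model checking by Monte Carlo estimation.
# A "system" is a function that simulates one random run and returns True
# iff that run satisfies the (bounded) property of interest.
import math
import random

def num_samples(epsilon, delta):
    """Chernoff-Hoeffding bound: enough runs so that the estimate lies within
    +/- epsilon of the true probability with confidence at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def smc_estimate(simulate_run, epsilon=0.01, delta=0.05, seed=0):
    random.seed(seed)
    n = num_samples(epsilon, delta)
    successes = sum(1 for _ in range(n) if simulate_run())
    return successes / n, n

def example_run():
    # Toy system: a 10-step random walk; property = never drop below -3.
    position = 0
    for _ in range(10):
        position += random.choice([-1, 1])
        if position < -3:
            return False
    return True

if __name__ == "__main__":
    p_hat, n = smc_estimate(example_run)
    print(f"estimated probability {p_hat:.3f} from {n} runs")
```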
At ISoLA 2014 and ISoLA 2016, we organized the first two sessions on
SMC. The first session was on pure theory, while the second started to
encompass applications such as learning. The objective of this new
session is to build upon the discussions held during those sessions. We
are still expecting new theory contributions that reduce the cost of
simulation (rare events, Bayesian approaches, etc.) or tackle bigger
systems such as large case studies from systems biology. In addition,
we seek new applications. In particular, the session will explore
the following aspects: SMC and security (attack trees), SMC and
railway systems, SMC and non-determinism (especially learning-based
algorithms) to cope with CPS, and SMC and reconfigurable systems (product
lines, etc.). Finally, we plan to explore the impact of simulation on
SMC, the latter being done via runtime verification.
Verification and Validation of Distributed Systems
Organizer
Cristina Seceleanu
Theme
Leslie Lamport, known for his seminal work in distributed systems, famously said: “A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable” (quoted in Caitie McCaffrey: The Verification of a Distributed System - A practitioner’s guide to increasing confidence in system correctness). Given such a pessimistic outlook, how can one even start verifying distributed systems? This track focuses on providing answers to this question, by inviting submissions that propose models, techniques, and tools for the rigorous analysis of any sort of distributed system, that is, systems whose components are located on networked computers that communicate and coordinate their actions.
Topics of interest
Formal Modeling and Specification of Distributed Systems: networked systems, multicore/multiprocessor systems, grid/cloud systems, mobile systems, service-oriented systems, systems of systems, distributed databases, gaming systems etc.
Analysis Techniques for Distributed Systems: model-checking, statistical model checking, static analysis, compositional techniques, deductive techniques, model-based testing, unit and integration testing, monitoring, model extraction etc.
Tools and Case-studies: applications, experience reports and case studies of verifying distributed and cyber-physical systems (automotive systems, autonomous systems, robotic systems, AAL/healthcare systems etc.), tools and frameworks for distributed systems verification.
Position papers: current and future trends, challenges, open problems in modeling and ensuring the correctness of distributed systems.
Cyber-Physical Systems Engineering
Organisers
J Paul Gibson, Telecom Sud Paris, Evry, France
Marc Pantel, IRIT-INPT-ENSEEIHT, University of Toulouse, France
Peter Gorm Larsen, Aarhus University, Denmark
Jim Woodcock, University of York, UK
John Fitzgerald, Newcastle University, UK
Summary
Cyber-Physical Systems connect the real world to software systems through a network of sensors and actuators in which physical and logical components interact in complex ways.
There is a diverse range of application domains, including health, energy, transport, autonomous vehicles and robotics; and many of these include safety critical requirements.
Such systems are, by definition, characterised by both discrete and continuous components.
The development and verification processes must, therefore, incorporate and integrate discrete and continuous models.
The development of techniques and tools to handle
the correct design of cyber-physical systems has drawn the attention of many researchers.
Continuous modelling approaches are usually based on a formal mathematical expression
of the problem using dense reals and differential equations to model
the behaviour of the studied hybrid system. Then, models are simulated
in order to check required properties.
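As a small illustration of this simulate-and-check style (our own toy example, not a specific tool
or benchmark), the sketch below integrates a simple water-tank hybrid model with explicit Euler
steps and checks a required safety property along the resulting trajectory; all dynamics,
thresholds and bounds are assumed values chosen only for the example.

```python
# Minimal sketch of checking a property of a hybrid system by simulation:
# a water tank drains continuously (ODE) while a discrete controller opens
# the inlet valve at a low threshold. Property: the level stays within bounds.

def simulate_tank(t_end=50.0, dt=0.01):
    level, valve_open = 5.0, False
    trajectory = []
    t = 0.0
    while t < t_end:
        # Discrete controller: switch the valve at threshold levels.
        if level <= 2.0:
            valve_open = True
        elif level >= 8.0:
            valve_open = False
        # Continuous dynamics: d(level)/dt = inflow - outflow.
        inflow = 1.5 if valve_open else 0.0
        level += (inflow - 0.5) * dt          # explicit Euler step
        trajectory.append((t, level))
        t += dt
    return trajectory

def property_holds(trajectory, low=1.0, high=9.0):
    """Required property: the water level never leaves [low, high]."""
    return all(low <= level <= high for _, level in trajectory)

if __name__ == "__main__":
    print(property_holds(simulate_tank()))  # True for these parameters
```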
Discrete modelling approaches rely on formal methods, based on abstraction, model-checking and theorem proving.
There is much ongoing research concerned with how best to combine these approaches in a more coherent and pragmatic fashion, in order to support more rigorous and automated hybrid-design verification.
It is also possible to combine different discrete-event and continuous-time models using a technique called co-simulation. This has been supported by different tools, and the underlying foundations have been analysed. Thus, the track
will also look into these areas, as well as into the industrial usage of this kind of technology.
The track will consist of 4 sessions over a complete day:
1) Workshop-style presentations, with an emphasis on industrial case studies and tool demonstrations.
2) Theoretical challenges and ongoing research
3) Invited speakers - 1 from industry, 1 from academia
4) Collaborative drafting of a track position paper on current challenges and research directions
The ISoLA Symposium is a forum for developers, users, and researchers to discuss issues related to the adoption and use of rigorous tools for the specification, analysis, verification, certification, construction, test, and maintenance of systems from the point of view of their different application domains.
Academic Matters
Bernhard Steffen
Symposium Chair
Technical University of Dortmund, Germany
steffen[at]cs.uni-dortmund.de
Tiziana Margaria
Program Chair
LERO, The Irish Software Research Centre
tiziana.margaria[at]lero.ie
Conference Arrangements
For other matters, including reservations, registration, accommodation etc., contact Easy Conferences:
info@easyconferences.eu
www.easyconferences.eu
T: (+357) 22 591 900
F: (+357) 22 591 700