This is a list of COSTA-related papers, classified into categories and in
reverse chronological order within each
category. Please note the following copyright notice:
The documents distributed have been provided by the
contributing authors as a means to ensure timely
dissemination of technical work on a noncommercial
basis. Copyright and all rights therein are maintained
by the authors or by other copyright holders,
notwithstanding that they have offered their works
here electronically. It is understood that all persons
copying this information will adhere to the terms and
constraints invoked by each author's copyright. These
works may not be reposted without the explicit
permission of the copyright holder.
[1]
Shankara Pailoor, Yanju Chen, Franklyn Wang, Clara
Rodríguez-Núñez, Jacob Van Geffen, Jason Morton, Michael
Chu, Brian Gu, Yu Feng, and Isil Dillig.
Automated Detection of Under-Constrained Circuits in Zero-Knowledge
Proofs.
Proc. ACM Program. Lang., 7(PLDI):1510-1532, 2023.
[ bibtex |
abstract |
DOI |
http ]
As zero-knowledge proofs gain increasing adoption, the
cryptography community has designed domain-specific
languages (DSLs) that facilitate the construction of
zero-knowledge proofs (ZKPs). Many of these DSLs,
such as Circom, facilitate the construction of
arithmetic circuits, which are essentially
polynomial equations over a finite field. In
particular, given a program in a zero-knowledge
proof DSL, the compiler automatically produces the
corresponding arithmetic circuit. However, a common
and serious problem is that the generated circuit
may be underconstrained, either due to a bug in the
program or a bug in the compiler
itself. Underconstrained circuits admit multiple
witnesses for a given input, so a malicious party
can generate bogus witnesses, thereby causing the
verifier to accept a proof that it should
not. Because of the increasing prevalence of such
arithmetic circuits in blockchain applications,
several million dollars worth of cryptocurrency have
been stolen due to underconstrained arithmetic
circuits. Motivated by this problem, we propose a
new technique for finding ZKP bugs caused by
underconstrained polynomial equations over finite
fields. Our method performs semantic reasoning over
the finite field equations generated by the compiler
to prove whether or not each signal is uniquely
determined by the input. Our proposed approach
combines SMT solving with lightweight uniqueness
inference to effectively reason about
underconstrained circuits. We have implemented our
proposed approach in a tool called QED2 and evaluate
it on 163 Circom circuits. Our evaluation shows that
QED2 can successfully solve 70% of these
benchmarks, meaning that it either verifies the
uniqueness of the output signals or finds a pair of
witnesses that demonstrate non-uniqueness of the
circuit. Furthermore, QED2 has found 8 previously
unknown vulnerabilities in widely-used circuits.
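To make the uniqueness property described in the abstract concrete, here is a toy sketch (illustrative only; the paper's QED2 tool reasons symbolically with SMT solving, not by enumeration): over a small prime field we can brute-force, for every input, the set of outputs satisfying a circuit's polynomial constraints. More than one output for some input means the circuit is underconstrained. The constraints and field size are invented for this example.

```python
# Toy underconstrained-circuit check over a small prime field (invented
# example, not QED2): enumerate all outputs satisfying the constraints.

P = 7  # small prime field F_7, chosen so exhaustive search is feasible

def witnesses(constraints, inp):
    """All output values that satisfy every constraint for a fixed input."""
    return [out for out in range(P)
            if all(c(inp, out) % P == 0 for c in constraints)]

# Well-constrained circuit: out = in^2 (constraint: out - in^2 = 0).
square = [lambda i, o: o - i * i]

# Underconstrained variant: out^2 = in^2 pins out down only up to sign,
# so a malicious prover can pick either root as a bogus witness.
square_bug = [lambda i, o: o * o - i * i]

assert all(len(witnesses(square, i)) == 1 for i in range(P))
bad = [i for i in range(P) if len(witnesses(square_bug, i)) > 1]
print("inputs with multiple witnesses in square_bug:", bad)
```

For every nonzero input i, both i and -i satisfy the buggy constraint, which is exactly the "pair of witnesses demonstrating non-uniqueness" the tool searches for.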
@article{PailoorCWRVMCGFD23,
author = {Shankara Pailoor and
Yanju Chen and
Franklyn Wang and
Clara Rodr{\'{\i}}guez{-}N{\'{u}}{\~{n}}ez and
Jacob Van Geffen and
Jason Morton and
Michael Chu and
Brian Gu and
Yu Feng and
Isil Dillig},
title = {Automated Detection of Under-Constrained Circuits in Zero-Knowledge
Proofs},
journal = {Proc. {ACM} Program. Lang.},
volume = {7},
number = {{PLDI}},
pages = {1510--1532},
year = {2023},
url = {https://doi.org/10.1145/3591282},
doi = {10.1145/3591282},
rank:grinscie:class = {1}
}
[2]
Elvira Albert, Shelly Grossman, Noam Rinetzky, Clara
Rodríguez-Núñez, Albert Rubio, and Mooly Sagiv.
Relaxed Effective Callback Freedom: A Parametric Correctness
Condition for Sequential Modules With Callbacks.
IEEE Trans. Dependable Secur. Comput., 20(3):2256-2273,
2023.
[ bibtex |
abstract |
DOI |
http ]
Callbacks are an essential mechanism for event-driven
programming. Unfortunately, callbacks make reasoning
challenging because they introduce behaviors where
calls to the module are interleaved. We present a
parametric method that, from a particular invariant
of the program, reduces the problem of verifying
the invariant in the presence of callbacks to the
callback-free setting. Intuitively, we allow
callbacks to introduce behaviors that cannot be
produced by callback-free executions, as long as
they do not affect correctness. A chief insight is
that the user is aware of the potential effect of
the callbacks on the program state. To this end, we
present a parametric verification technique which
accepts this insight as a relation between callback
and callback-free executions. We implemented our
approach and
applied it successfully to a large set of real-world
programs.
@article{AlbertGRRRS23,
author = {Elvira Albert and
Shelly Grossman and
Noam Rinetzky and
Clara Rodr{\'{\i}}guez{-}N{\'{u}}{\~{n}}ez and
Albert Rubio and
Mooly Sagiv},
title = {Relaxed Effective Callback Freedom: {A} Parametric Correctness Condition
for Sequential Modules With Callbacks},
journal = {{IEEE} Trans. Dependable Secur. Comput.},
volume = {20},
number = {3},
pages = {2256--2273},
year = {2023},
url = {https://doi.org/10.1109/TDSC.2022.3178836},
doi = {10.1109/TDSC.2022.3178836},
rank:jcr:q = {1}
}
[3]
Elvira Albert, Maria Garcia de la Banda, Miguel Gómez-Zamalloa, Miguel
Isabel, and Peter J. Stuckey.
Optimal dynamic partial order reduction with context-sensitive
independence and observers.
Journal of Systems and Software, 202:111730, 2023.
[ bibtex |
abstract |
DOI |
http ]
Dynamic Partial Order Reduction (DPOR) algorithms are used in stateless model checking of concurrent programs to avoid the exploration of equivalent execution sequences. In order to detect equivalence, DPOR relies on the notion of independence between execution steps. As this notion must be approximated, it can lose precision and thus treat execution steps as interfering when they are not. Our work is inspired by recent progress in the area that has introduced more accurate ways to exploit conditional notions of independence: Context-Sensitive DPOR considers two steps p and t independent in the current state if the states obtained by executing p⋅t and t⋅p are the same; Optimal DPOR with Observers makes their dependency conditional to the existence of future events that observe their operations. This article introduces a new algorithm, Optimal Context-Sensitive DPOR with Observers, that combines these two notions of conditional independence, and goes beyond them by exploiting their synergies. The implementation of our algorithm has been undertaken within the Nidhugg model checking tool. Our experimental evaluation, using benchmarks from the previous works, shows that our algorithm is able to effectively combine the benefits of both context-sensitive and observers-based independence and that it can produce exponential reductions over both of them.
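The context-sensitive independence condition quoted above can be sketched directly: two steps p and t are independent in state s when executing p⋅t and t⋅p from s reaches the same state. The state encoding and step functions below are illustrative assumptions, not Nidhugg internals.

```python
# Sketch of context-sensitive (state-dependent) independence between two
# execution steps; states are dicts, steps are pure functions on states.

def run(state, steps):
    for step in steps:
        state = step(state)
    return state

def independent_in(state, p, t):
    """p and t are independent in `state` iff p.t and t.p agree from it."""
    return run(dict(state), [p, t]) == run(dict(state), [t, p])

# Two writers of the same variable are dependent in general...
w1 = lambda s: {**s, "x": 1}
w5 = lambda s: {**s, "x": 5}
assert not independent_in({"x": 0}, w1, w5)

# ...but the conditional write `if x == 0: x = 1` commutes with `x = 1`
# in this state, which an unconditional notion of independence misses.
cw = lambda s: {**s, "x": 1} if s["x"] == 0 else s
assert independent_in({"x": 0}, cw, w1)
print("context-sensitive independence holds in {'x': 0}")
```

A DPOR algorithm exploiting this state-dependent check can treat such interleavings as equivalent and prune one of them, which is the source of the reductions reported in the article.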
@article{AlbertGGIS23,
title = {Optimal dynamic partial order reduction with context-sensitive independence and observers},
journal = {Journal of Systems and Software},
volume = {202},
pages = {111730},
year = {2023},
issn = {0164-1212},
doi = {https://doi.org/10.1016/j.jss.2023.111730},
url = {https://www.sciencedirect.com/science/article/pii/S0164121223001255},
author = {Elvira Albert and Maria Garcia {de la Banda} and Miguel G\'omez-Zamalloa and Miguel Isabel and Peter J. Stuckey},
rank:jcr:q = {2}
}
[4]
Elvira Albert, Pablo Gordillo, Alejandro Hernández-Cerezo, Albert Rubio, and
Maria A. Schett.
Super-Optimization of Smart Contracts.
ACM Transactions on Software Engineering and Methodology,
31(4):1-29, 2022.
[ bibtex |
abstract |
DOI |
http ]
Smart contracts are programs deployed on a
blockchain. They are executed for a monetary fee
paid in gas, a clear optimization target for smart
contract compilers. Because smart contracts are a
young, fast-moving field without (manually)
fine-tuned compilers, they highly benefit from
automated and adaptable approaches, especially as
smart contracts are effectively immutable, and
as such need a high level of assurance. This makes
them an ideal domain for applying formal methods.
Super-optimization is a technique to find the best
translation of a block of instructions by trying all
possible sequences of instructions which produce the
same result. We present a framework for
super-optimizing smart contracts based on Max-SMT
with two main ingredients: (1) a stack functional
specification extracted from the basic blocks of a
smart contract, which is simplified using rules
capturing the semantics of arithmetic, bit-wise, and
relational operations, and (2) the synthesis of
optimized blocks, which finds, by means of an
efficient SMT encoding, basic blocks with minimal
gas cost whose stack functional specification is
equal (modulo commutativity) to the extracted one.
We implemented our framework in the tool
SYRUP. Through large-scale experiments on real-world
smart contracts we analyze performance improvements
for different SMT encodings, as well as trade-offs
between quality of optimizations and required
optimization time.
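As a minimal illustration of the super-optimization idea (the paper's SYRUP tool uses a Max-SMT encoding of the stack functional specification rather than enumeration), one can brute-force all instruction sequences over a toy stack machine and keep the cheapest one equivalent to the original block on some test stacks. The opcodes and gas costs below are simplified EVM-style assumptions.

```python
# Brute-force super-optimizer for a toy stack machine (invented opcodes
# and gas costs; real EVM super-optimization uses SMT, not test stacks).
from itertools import product

GAS = {"PUSH0": 2, "PUSH1": 3, "DUP1": 3, "ADD": 3, "MUL": 5}

def exec_block(block, stack):
    stack = list(stack)
    for op in block:
        if op == "PUSH0": stack.append(0)
        elif op == "PUSH1": stack.append(1)
        elif op == "DUP1": stack.append(stack[-1])
        elif op == "ADD": b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == "MUL": b, a = stack.pop(), stack.pop(); stack.append(a * b)
    return stack

def gas(block):
    return sum(GAS[op] for op in block)

def equivalent(b1, b2, tests):
    try:
        return all(exec_block(b1, s) == exec_block(b2, s) for s in tests)
    except IndexError:  # candidate pops from an empty stack: discard it
        return False

def superoptimize(block, max_len=3, tests=([3], [7], [2])):
    best = block
    for n in range(max_len + 1):
        for cand in product(GAS, repeat=n):
            if gas(cand) < gas(best) and equivalent(cand, block, tests):
                best = list(cand)
    return best

# x * 2 computed as PUSH1 PUSH1 ADD MUL (gas 14) becomes DUP1 ADD (gas 6).
print(superoptimize(["PUSH1", "PUSH1", "ADD", "MUL"]))
```

Replacing the enumeration and test-stack check with an SMT query over symbolic stacks is what makes the approach scale to real contract blocks.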
@article{AlbertGHRS22,
author = {Elvira Albert and
Pablo Gordillo and
Alejandro Hern\'andez-Cerezo and
Albert Rubio and
Maria A. Schett},
title = {{Super-Optimization of Smart Contracts}},
journal = {{ACM Transactions on Software Engineering and Methodology}},
year = {2022},
volume = {31 Issue 4},
number = {70},
pages = {1--29},
issn = {1049-331X},
doi = {10.1145/3506800},
url = {https://doi.org/10.1145/3506800},
rank:jcr:q = {1}
}
[5]
Elvira Albert, Jesús Correas, Pablo Gordillo, Guillermo Román-Díez, and
Albert Rubio.
Don't Run on Fumes - Parametric Gas Bounds for Smart Contracts.
Journal of Systems and Software, 176:110923, 2021.
[ bibtex |
abstract |
DOI |
PDF ]
Gas is a measurement unit of the computational effort
that it will take to execute every single replicated
operation that takes part in the Ethereum blockchain
platform. Each transaction executed by the Ethereum
Virtual Machine (EVM), has an associated gas
consumption specified by Ethereum. If a transaction
exceeds the amount of gas allotted by the user
(known as gas limit), an out-of-gas exception is
raised, interrupting the current execution. We
present, to the best of our knowledge, the first
static analysis that is able to infer parametric gas
bounds for smart contracts. The inferred bounds are
typically parametric on the sizes of the input
parameters for the functions, but also they are
often parametric on the contract state, and even
sometimes they will be parametric on blockchain
data. The analysis is implemented in a tool named
GASTAP, Gas-Aware Smart contracT Analysis Platform,
which takes as input a smart contract (either in
EVM, disassembled EVM, or in Solidity source code)
and automatically infers sound gas upper bounds for
its public functions. GASTAP has been applied over
the entire contents of the Ethereum blockchain; in
total we have been able to analyze 318,397
functions and succeeded in obtaining gas bounds for
91.02% of them.
@article{AlbertCGRR21,
author = {Elvira Albert and
Jes\'us Correas and
Pablo Gordillo and
Guillermo Rom\'an-D\'iez and
Albert Rubio},
title = {{Don't Run on Fumes --- Parametric Gas Bounds for Smart Contracts}},
journal = {{J}ournal of {S}ystems and {S}oftware},
volume = {176},
pages = {110923},
issn = {0164-1212},
doi = {https://doi.org/10.1016/j.jss.2021.110923},
year = {2021},
rank:jcr:q = {2}
}
[6]
Jesús Correas, Pablo Gordillo, and Guillermo Román-Díez.
Static Profiling and Optimization of Ethereum Smart Contracts Using
Resource Analysis.
IEEE Access, 9:25495-25507, 2021.
[ bibtex |
abstract |
DOI |
http ]
@article{DBLP:journals/access/CorreasGR21,
author = {Jes{\'{u}}s Correas and
Pablo Gordillo and
Guillermo Rom{\'{a}}n{-}D{\'{\i}}ez},
title = {Static Profiling and Optimization of Ethereum Smart Contracts Using
Resource Analysis},
journal = {{IEEE} Access},
volume = {9},
pages = {25495--25507},
year = {2021},
url = {https://doi.org/10.1109/ACCESS.2021.3057565},
doi = {10.1109/ACCESS.2021.3057565},
rank:jcr:q = {2}
}
[7]
Miguel Gómez-Zamalloa and Miguel Isabel.
Deadlock-Guided Testing.
IEEE Access, 9:46033-46048, 2021.
[ bibtex |
abstract |
DOI |
http ]
@article{DBLP:journals/access/Gomez-ZamalloaI21,
author = {Miguel G{\'{o}}mez{-}Zamalloa and
Miguel Isabel},
title = {Deadlock-Guided Testing},
journal = {{IEEE} Access},
volume = {9},
pages = {46033--46048},
year = {2021},
rank:jcr:q = {2},
url = {https://doi.org/10.1109/ACCESS.2021.3065421},
doi = {10.1109/ACCESS.2021.3065421},
timestamp = {Wed, 03 Nov 2021 08:25:52 +0100},
biburl = {https://dblp.org/rec/journals/access/Gomez-ZamalloaI21.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
[8]
Elvira Albert, Miguel Gómez-Zamalloa, Miguel Isabel, Albert Rubio,
Matteo Sammartino, and Alexandra Silva.
Actor-based model checking for Software-Defined Networks.
J. Log. Algebraic Methods Program., 118:100617, 2021.
[ bibtex |
abstract |
DOI |
PDF |
http ]
[9]
Elvira Albert, Nikolaos Bezirgiannis, Frank de Boer, and Enrique Martin-Martin.
A Formal, Resource Consumption-Preserving Translation from Actors
with Cooperative Scheduling to Haskell.
Fundamenta Informaticae, 177(3-4):203-234, 2020.
[ bibtex |
abstract |
DOI |
http ]
@article{AlbertBBM19,
author = {Elvira Albert and Nikolaos Bezirgiannis and Frank de Boer and Enrique Martin-Martin},
title = {A Formal, Resource Consumption-Preserving Translation from Actors with Cooperative Scheduling to Haskell},
journal = {Fundamenta Informaticae},
publisher = {IOS Press},
volume = {177},
number = {3-4},
pages = {203--234},
year = {2020},
url = {https://doi.org/10.3233/FI-2020-1988},
doi = {10.3233/FI-2020-1988},
rank:jcr:q = {3}
}
[10]
Elvira Albert, Samir Genaim, Raúl Gutiérrez, and Enrique Martin-Martin.
A Transformational Approach to Resource Analysis with Typed-norms
Inference.
Theory and Practice of Logic Programming, 20(3):310-357, 2020.
[ bibtex |
abstract |
DOI ]
@article{AlbertGGM19,
author = {Albert, Elvira and Genaim, Samir and Guti\'errez, Ra\'ul and Martin-Martin, Enrique},
title = {A Transformational Approach to Resource Analysis with Typed-norms Inference},
doi = {10.1017/S1471068419000401},
journal = {Theory and Practice of Logic Programming},
publisher = {Cambridge University Press},
volume = {20},
number = {3},
pages = {310--357},
year = {2020},
issn = {1471-0684},
rank:jcr:q = {2}
}
[11]
Jesús J. Doménech, John P. Gallagher, and Samir Genaim.
Control-Flow Refinement by Partial Evaluation, and its
Application to Termination and Cost Analysis.
Theory and Practice of Logic Programming, 19(5-6):990-1005,
2019.
[ bibtex |
abstract |
DOI ]
Control-flow refinement refers to program
transformations whose purpose is to make implicit
control-flow explicit, and is used in the context of
program analysis to increase precision. Several
techniques have been suggested for different
programming models, typically tailored to improving
precision for a particular analysis. In this paper
we explore the use of partial evaluation of Horn
clauses as a general-purpose technique for
control-flow refinement for integer transitions
systems. These are control-flow graphs where edges
are annotated with linear constraints describing
transitions between corresponding nodes, and they
are used in many program analysis tools. Using
partial evaluation for control-flow refinement has
the clear advantage over other approaches in that
soundness follows from the general properties of
partial evaluation; in particular, properties such
as termination and complexity are preserved. We use
a partial evaluation algorithm incorporating
property-based abstraction, and show how the right
choice of properties allows us to prove termination
and to infer complexity of challenging programs that
cannot be handled by state-of-the-art tools. We
report on the integration of the technique in a
termination analyzer, and its use as a preprocessing
step for several cost analyzers.
@article{DBLP:journals/tplp/DomenechGG19,
author = {Jes{\'{u}}s J. Dom{\'{e}}nech and John P. Gallagher and Samir Genaim},
title = {{C}ontrol-{F}low {R}efinement by {P}artial {E}valuation, and its {A}pplication to {T}ermination and {C}ost {A}nalysis},
journal = {Theory and Practice of Logic Programming},
volume = {19},
number = {5-6},
pages = {990--1005},
year = {2019},
issn = {1471-0684},
publisher = {Cambridge University Press},
doi = {10.1017/S1471068419000310},
rank:jcr:q = {1}
}
[12]
Elvira Albert, Miquel Bofill, Cristina Borralleras, Enrique Martin-Martin, and
Albert Rubio.
Resource Analysis driven by (Conditional) Termination Proofs.
Theory and Practice of Logic Programming, 19(5-6):722-739,
2019.
[ bibtex |
abstract |
DOI ]
@article{AlbertBBMR19,
author = {Elvira Albert and Miquel Bofill and Cristina Borralleras and Enrique Martin-Martin and Albert Rubio},
title = {Resource Analysis driven by (Conditional) Termination Proofs},
volume = {19},
doi = {10.1017/S1471068419000152},
number = {5-6},
journal = {Theory and Practice of Logic Programming},
publisher = {Cambridge University Press},
year = {2019},
pages = {722-739},
rank:jcr:q = {1}
}
[13]
Cosimo Laneve, Michael Lienhardt, Ka I Pun, and Guillermo Román-Díez.
Time analysis of actor programs.
Journal of Logical and Algebraic Methods in Programming, 105:1-27,
2019.
[ bibtex |
abstract |
DOI ]
This paper proposes a technique for estimating the computational time
of programs in an actor model, which is intended to serve as a
compiler target of a wide variety of actor-based programming
languages. We define a compositional translation function returning
cost equations, which are fed to an automatic off-the-shelf solver for
obtaining the time bounds. Our approach is based on a new notion of
synchronization sets, which captures possible difficult
synchronization patterns between actors and helps make the analysis
efficient and precise. The approach is proven to correctly
over-approximate the worst computational time of an
actor model of concurrent programs.
@article{LaneveLPR19,
author = {Cosimo Laneve and
Michael Lienhardt and
Ka I Pun and
Guillermo Rom\'an-D\'iez},
title = {Time analysis of actor programs},
journal = {Journal of Logical and Algebraic Methods in Programming},
issn = {2352-2208},
publisher = {Elsevier},
doi = {https://doi.org/10.1016/j.jlamp.2019.02.007},
year = {2019},
rank:jcr:q = {2},
volume = {105},
pages = {1--27}
}
[14]
Elvira Albert, Jesús Correas, and Guillermo Román-Díez.
Peak Resource Analysis of Concurrent Distributed Systems.
Journal of Systems and Software, 149:35-62, 2019.
[ bibtex |
abstract |
DOI ]
Traditional cost analysis frameworks have been defined
for cumulative resources which keep on increasing
along the computation. Examples of cumulative
resources are execution time, number of executed
steps, and energy consumption. Non-cumulative
resources are acquired and (possibly) released along
the execution. Examples of non-cumulative cost are
number of connections established that are later
closed, or resources requested to a virtual host
which are released after using them. We present a
static analysis framework to infer the peak cost for
non-cumulative types of resources in concurrent
distributed systems. Our framework is generic
w.r.t. the type of resource and can be instantiated
with any of the above mentioned resources as well as
with any other resource that is measurable by
inspecting the instructions of the program. The
concurrent distributed language that we consider
allows creating distributed locations dynamically
within the program and spawning tasks that execute
concurrently at any of the existing locations. Our
analysis infers, for the different distributed
locations, the maximum (or peak) amount of resources
that they may require along any execution. This
information can be used, among other applications,
to detect bottlenecks in the system and to
efficiently dimension the processing capacity and
storage that the locations of the concurrent
distributed system require.
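The distinction between cumulative and non-cumulative (peak) cost drawn in the abstract can be sketched on a concrete trace; the event format and location names below are invented for illustration, unlike the paper's analysis, which infers peaks statically rather than from traces.

```python
# Peak vs. cumulative resource usage per distributed location, computed
# over an invented trace of acquire (+) / release (-) events.
from collections import defaultdict

def peaks(trace):
    """trace: list of (location, delta) events; returns peak per location."""
    held = defaultdict(int)
    peak = defaultdict(int)
    for loc, delta in trace:
        held[loc] += delta
        peak[loc] = max(peak[loc], held[loc])
    return dict(peak)

trace = [("srv", +2), ("db", +1), ("srv", -1), ("srv", +3), ("db", -1)]
# Cumulative acquisitions at "srv" total 5, but at most 4 are held at once.
print(peaks(trace))
```

A sound static analysis must over-approximate this maximum for every possible execution, which is why releases make the problem harder than classical cumulative cost analysis.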
@article{AlbertCR18,
author = {Elvira Albert and
Jes\'us Correas and
Guillermo Rom\'an-D\'iez},
title = {{P}eak {R}esource {A}nalysis of {C}oncurrent {D}istributed {S}ystems},
journal = {{J}ournal of {S}ystems and {S}oftware},
volume = {149},
pages = {35--62},
issn = {0164-1212},
year = {2019},
doi = {https://doi.org/10.1016/j.jss.2018.11.018},
publisher = {Elsevier},
rank:jcr:q = {2}
}
[15]
Elvira Albert, Jesús Correas, Einar Broch Johnsen, Ka I Pun, and Guillermo Román-Díez.
Parallel Cost Analysis.
ACM Transactions on Computational Logic, 19(4):1-37, 2018.
[ bibtex |
abstract |
DOI ]
This paper presents parallel cost analysis, a static
cost analysis targeting to over-approximate the cost
of parallel execution in distributed systems. In
contrast to the standard notion of serial cost,
parallel cost captures the cost of synchronized
tasks executing in parallel by exploiting the true
concurrency available in the execution model of
distributed processing. True concurrency is
challenging for static cost analysis, because the
parallelism between tasks needs to be soundly
inferred, and the waiting and idle processor times
at the different locations need to be accounted for.
Parallel cost analysis works in three phases: (1) it
performs a block-level analysis to estimate the
serial costs of the blocks between synchronization
points in the program; (2) it then constructs a
distributed flow graph (DFG) to capture the
parallelism, the waiting and idle times at the
locations of the distributed system; and (3) the
parallel cost can finally be obtained as the path of
maximal cost in the DFG. We prove the correctness of
the proposed parallel cost analysis, and provide a
prototype implementation to perform an experimental
evaluation of the accuracy and feasibility of the
proposed analysis.
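Phase (3) above amounts to a maximum-cost path computation over an acyclic graph. A minimal sketch, with an invented graph and block costs (the actual analysis constructs the graph from the program and its synchronization points):

```python
# Max-cost path in a DAG via memoized DFS: nodes are program blocks with
# serial costs, edges follow the flow between locations (invented data).

def max_cost_path(cost, succs, entry):
    """Maximum path cost starting at `entry` in an acyclic graph."""
    memo = {}
    def go(n):
        if n not in memo:
            memo[n] = cost[n] + max((go(m) for m in succs.get(n, [])),
                                    default=0)
        return memo[n]
    return go(entry)

cost = {"main": 1, "taskA": 10, "taskB": 4, "join": 2}
succs = {"main": ["taskA", "taskB"], "taskA": ["join"], "taskB": ["join"]}
# Serial cost adds every block (17); the parallel bound instead takes the
# most expensive of the two parallel branches: 1 + 10 + 2 = 13.
print(max_cost_path(cost, succs, "main"))
```

The interesting part of the analysis is building this graph soundly, i.e., making the edges reflect which tasks can actually overlap and where processors sit idle.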
@article{AlbertCPJR18,
author = {Elvira Albert and
Jes\'us Correas and
Ka I Pun and
Einar Broch Johnsen and
Guillermo Rom\'an-D\'iez},
title = {{P}arallel {C}ost {A}nalysis},
journal = {ACM Transactions on Computational Logic},
issn = {1529-3785},
publisher = {ACM},
rank:jcr:q = {4},
year = {2018},
volume = {19},
pages = {1--37},
number = {4},
doi = {10.1145/3274278}
}
[16]
Damiano Zanardini.
Field-sensitive sharing.
Journal of Logical and Algebraic Methods in Programming,
95:103-127, 2018.
[ bibtex |
abstract |
DOI |
http ]
@article{Zanardini18,
author = {Damiano Zanardini},
title = {Field-sensitive sharing},
journal = {Journal of Logical and Algebraic Methods in Programming},
volume = {95},
pages = {103--127},
year = {2018},
publisher = {Elsevier},
url = {https://doi.org/10.1016/j.jlamp.2017.10.005},
doi = {10.1016/j.jlamp.2017.10.005},
issn = {2352-2208},
rank:jcr:q = {2}
}
[17]
Elvira Albert, Puri Arenas, and Miguel Gómez-Zamalloa.
Systematic testing of actor systems.
Software Testing, Verification and Reliability, 2018.
[ bibtex |
abstract |
DOI |
http ]
Testing concurrent systems requires exploring all
possible nondeterministic interleavings that the
concurrent execution may have, as any of the
interleavings may reveal the erroneous behavior. In
testing of actor systems, we can distinguish two
sources of nondeterminism: (1) actor selection, the
order in which actors are explored, and (2) task
selection, the order in which the tasks within each
actor are explored. This article provides new
strategies and heuristics for pruning redundant
state-exploration when testing actor systems by
reducing the amount of unnecessary nondeterminism of
both types. Furthermore, we extend these techniques
to handle synchronization primitives that allow
awaiting for the completion of an asynchronous
task. We report on an implementation and
experimental evaluation of the proposed techniques
in SYCO, a testing tool for actor-based
concurrency.
@article{AlbertAGZ18,
author = {Elvira Albert and Puri Arenas and Miguel G\'omez-Zamalloa},
title = {Systematic testing of actor systems},
journal = {Software Testing, Verification and Reliability},
year = {2018},
publisher = {Wiley},
issn = {1099-1689},
url = {http://dx.doi.org/10.1002/stvr.1661},
doi = {10.1002/stvr.1661},
rank:jcr:q = {3}
}
[18]
Jesús Correas, Sonia Estévez Martín, and Fernando
Sáenz-Pérez.
Enhancing set constraint solvers with bound consistency.
Expert Systems with Applications, 92:485-494, 2018.
[ bibtex |
abstract |
DOI |
http ]
Constraint solvers have proved to be a powerful tool for
both expressing and solving complex problems in many
contexts, in particular in Automated Planning and
Scheduling. In this work, we propose new
constraints to enhance an implementation of a
constraint solver for sets of integers (ic_sets)
embedded in the ECLiPSe system. This way, we
confer a new vision of modelling that in particular
allows a task to be planned on alternate days. In
addition, these new constraints make the finite
domain and finite set domains cooperate, reducing
the search space and improving the efficiency of the
existing set solvers for some specific cases. We
report on a prototype implementation and apply these
constraints to modelling scheduling problems as
benchmarks for a performance analysis. Experiments
reveal a relevant improvement with respect to the
standard ic_sets solver which only features
cardinality as cooperation mechanism between
domains. Both enhanced expressiveness and
performance are key points for practical
applications of such solvers.
@article{CorreasEMSP18,
author = {Jes{\'{u}}s Correas and
Sonia Est{\'{e}}vez Mart{\'{\i}}n and
Fernando S{\'{a}}enz{-}P{\'{e}}rez},
title = {Enhancing set constraint solvers with bound consistency},
journal = {Expert Systems with Applications},
publisher = {Elsevier},
volume = {92},
pages = {485--494},
year = {2018},
url = {https://doi.org/10.1016/j.eswa.2017.09.056},
doi = {10.1016/j.eswa.2017.09.056},
rank:jcr:q = {1}
}
[19]
Isabela Mastroeni and Damiano Zanardini.
Abstract Program Slicing: an Abstract Interpretation-based
approach to Program Slicing.
ACM Transactions on Computational Logic, 18(1):7:1-7:58, 2017.
[ bibtex |
abstract |
DOI |
http ]
@article{MastroeniZ17,
author = {Isabela Mastroeni and Damiano Zanardini},
title = {Abstract {P}rogram {S}licing: an {A}bstract {I}nterpretation-based approach to {P}rogram {S}licing},
journal = {ACM Transactions on Computational Logic},
issn = {1529-3785},
publisher = {ACM},
rank:jcr:q = {1},
volume = {18},
number = {1},
year = {2017},
pages = {7:1--7:58},
url = {http://doi.acm.org/10.1145/3029052},
doi = {10.1145/3029052}
}
[20]
Elvira Albert, Antonio Flores-Montoya, Samir Genaim, and Enrique Martin-Martin.
Rely-Guarantee Termination and Cost Analyses of Loops
with Concurrent Interleavings.
Journal of Automated Reasoning, 59(1):47-85, Jun 2017.
[ bibtex |
abstract |
DOI |
PDF |
http ]
By following a rely-guarantee style of reasoning, we present
novel termination and cost analyses for concurrent programs that, in order to
prove termination or infer the cost of a considered loop: (1) infer the
termination/cost of each loop as if it were a sequential one, imposing
assertions on how shared-data is modified concurrently; and then (2) prove
that these assertions cannot be violated infinitely many times and, for cost
analysis, infer how many times they are violated. At the core of the analysis,
we use a may-happen-in-parallel analysis to restrict the set of program
points whose execution can interleave. Interestingly, the same kind of
reasoning can be applied to prove termination and infer upper bounds on
the number of iterations of loops with concurrent interleavings. To the best
of our knowledge, this is the first method to automatically bound the cost of
such kind of loops. We have implemented our analysis for an actor-based
language, and showed its accuracy and efficiency by applying it on several
typical applications for concurrent programs and on an industrial case study.
@article{AlbertFGM17,
author = {Elvira Albert and Antonio Flores-Montoya and Samir Genaim and Enrique Martin-Martin},
title = {{R}ely-{G}uarantee {T}ermination and {C}ost {A}nalyses of {L}oops with {C}oncurrent {I}nterleavings},
year = {2017},
month = {Jun},
volume = {59},
number = {1},
pages = {47--85},
doi = {10.1007/s10817-016-9400-6},
url = {http://dx.doi.org/10.1007/s10817-016-9400-6},
journal = {Journal of Automated Reasoning},
issn = {0168-7433},
publisher = {Springer},
rank:jcr:q = {3},
rank:ms:rating = {59}
}
[21]
Elvira Albert, Antonio Flores-Montoya, Samir Genaim, and Enrique Martin-Martin.
May-Happen-in-Parallel Analysis for Actor-based
Concurrency.
ACM Transactions on Computational Logic, 17(2):11:1-11:39,
March 2016.
[ bibtex |
abstract |
DOI |
PDF |
http ]
This article presents a may-happen-in-parallel (MHP) analysis
for languages with actor-based concurrency. In this
concurrency model, actors are the concurrency units such
that, when a method is invoked on an actor a2 from a task
executing on actor a1, statements of the current task in a1
may run in parallel with those of the (asynchronous) call on a2,
and with those of transitively invoked methods. The goal of the MHP
analysis is to identify pairs of statements in the program that may
run in parallel in any execution. Our MHP analysis is formalized as
a method-level (local) analysis whose information can be
modularly composed to obtain application-level (global)
information. The information yielded by the MHP analysis is essential
to infer more complex properties of actor-based concurrent programs,
e.g., data race detection, deadlock freeness, termination and
resource consumption analyses can greatly benefit from the MHP
relations to increase their accuracy. We report on MayPar, a
prototypical implementation of an MHP static analyzer for a
distributed asynchronous language.
@article{AlbertFGM15,
author = {Elvira Albert and Antonio Flores-Montoya and Samir Genaim and Enrique Martin-Martin},
title = {{M}ay-{H}appen-in-{P}arallel {A}nalysis for {A}ctor-based {C}oncurrency},
journal = {ACM Transactions on Computational Logic},
volume = {17},
number = {2},
month = mar,
year = {2016},
issn = {1529-3785},
pages = {11:1--11:39},
articleno = {11},
numpages = {39},
doi = {10.1145/2824255},
url = {http://dx.doi.org/10.1145/2824255},
publisher = {ACM},
rank:jcr:q = {1},
rank:ms:rating = {38}
}
[22]
Damiano Zanardini, Elvira Albert, and Karina Villela.
Resource-Usage-Aware Configuration in Software Product
Lines.
Journal of Logical and Algebraic Methods in Programming, 85(1,
Part 2):173-199, 2016.
Formal Methods for Software Product Line Engineering.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Deriving concrete products from a product-line
infrastructure requires resolving the variability
captured in the product line, based on the company
market strategy or requirements from specific
customers. Selecting the most appropriate set of
features for a product is a complex task, especially
if quality requirements have to be
considered. Resource-usage-aware configuration aims
at providing awareness of resource-usage properties
of artifacts throughout the configuration
process. This article envisages several strategies
for resource-usage-aware configuration which feature
different performance and efficiency trade-offs. The
common idea in all strategies is the use of
resource-usage estimates obtained by an
off-the-shelf static resource-usage analyzer as a
heuristic for choosing among different candidate
configurations. We report on a prototype
implementation of the most practical strategies for
resource-usage-aware configuration and apply it on
an industrial case study.
@article{ZanardiniAV16,
author = {Damiano Zanardini and Elvira Albert and Karina Villela},
title = {Resource-{U}sage-{A}ware {C}onfiguration in {S}oftware {P}roduct {L}ines},
journal = {Journal of Logical and Algebraic Methods in Programming},
year = {2016},
volume = {85},
number = {1, Part 2},
pages = {173--199},
doi = {10.1016/j.jlamp.2015.08.003},
url = {https://doi.org/10.1016/j.jlamp.2015.08.003},
publisher = {Elsevier},
issn = {2352-2208},
rank:jcr:q = {2},
note = {Formal Methods for Software Product Line Engineering},
butype = {article}
}
|
[23]
|
Elvira Albert, Richard Bubel, Samir Genaim, Reiner Hähnle, Germán Puebla,
and Guillermo Román-Díez.
A Formal Verification Framework for Static Analysis.
Software and Systems Modeling, 15(4):987-1012, 2016.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Static analysis tools, such as resource analyzers, give useful
information on software systems, especially in real-time and
safety-critical applications. Therefore, the question of the
reliability of the obtained results is highly important.
State-of-the-art static analyzers typically combine a range of
complex techniques, make use of external tools, and evolve quickly. To
formally verify such systems is not a realistic option.
In this work we propose a different approach whereby, instead of
the tools, we formally verify the results of the
tools. The central idea of such a formal verification framework for
static analysis is the method-wise translation of the information
about a program gathered during its static analysis into
specification contracts that contain enough information for them to
be verified automatically.
We instantiate this framework with COSTA, a state-of-the-art
static analysis system for sequential Java programs, for producing
resource guarantees, and KeY, a state-of-the-art verification tool,
for formally verifying the correctness of such resource guarantees.
Resource guarantees make it possible to be certain that programs will
run within the indicated amount of resources, which may refer to
memory consumption, number of instructions executed, etc.
Our results show that the proposed tool cooperation can be used for
automatically producing verified resource guarantees.
@article{AlbertBGHPR15,
author = {Elvira Albert and
Richard Bubel and
Samir Genaim and
Reiner H\"ahnle and
Germ\'{a}n Puebla and
Guillermo Rom\'an-D\'iez},
title = {{A} {F}ormal {V}erification {F}ramework for {S}tatic {A}nalysis},
journal = {{S}oftware and {S}ystems {M}odeling},
year = {2016},
pages = {987-1012},
volume = {15},
number = {4},
publisher = {Springer},
url = {http://dx.doi.org/10.1007/s10270-015-0476-y},
doi = {10.1007/s10270-015-0476-y},
issn = {1619-1366},
rank:jcr:q = {2}
}
|
[24]
|
Elvira Albert, Puri Arenas, Jesús Correas, Samir Genaim, Miguel Gómez-Zamalloa, Germán Puebla, and Guillermo Román-Díez.
Object-Sensitive Cost Analysis for Concurrent Objects.
Software Testing, Verification and Reliability, 25(3):218-271,
2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
This article presents a novel cost
analysis framework for concurrent objects.
Concurrent objects form a well-established model for distributed
concurrent systems. In this model, objects are the
concurrency units, which communicate with each other via
asynchronous method calls. Cost analysis aims at
automatically approximating the resource consumption of
executing a program in terms of its input parameters. While cost
analysis for sequential programming languages has received
considerable attention, concurrency and distribution have been notably
less studied. The main challenges of cost analysis in a concurrent
setting are:
(1) Inferring precise size abstractions for data in the program in
the presence of shared memory. This information is essential for
bounding the number of iterations of loops.
(2) Distribution suggests that analysis must infer the cost of the
diverse distributed components separately. We handle this by means of a
novel form of object-sensitive recurrence equations which use
cost centers in order to keep the resource usage assigned to
the different components separate.
We have implemented our analysis and evaluated it
on several small applications which are
classical examples of concurrent and distributed programming.
@article{AlbertACGGPR15,
author = {Elvira Albert and
Puri Arenas and
Jes\'us Correas and
Samir Genaim and
Miguel G\'omez-Zamalloa and
Germ\'{a}n Puebla and
Guillermo Rom\'an-D\'iez},
title = {{O}bject-{S}ensitive {C}ost {A}nalysis for {C}oncurrent {O}bjects},
journal = {Software Testing, Verification and Reliability},
year = {2015},
publisher = {Wiley},
pages = {218--271},
url = {http://dx.doi.org/10.1002/stvr.1569},
doi = {10.1002/stvr.1569},
volume = {25},
number = {3},
issn = {0960-0833},
rank:jcr:q = {2}
}
|
[25]
|
Elvira Albert, Puri Arenas, Samir Genaim, and Germán Puebla.
A Practical Comparator of Cost Functions and Its
Applications.
Science of Computer Programming, 111(3):483-504, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Automatic cost analysis has
significantly advanced in the last few years. Nowadays, a number of
cost analyzers exist which automatically produce upper- and/or
lower-bounds on the amount of resources required to execute a
program. Cost analysis has a number of important applications such as
resource-usage verification and program synthesis and optimization.
For such applications to be successful, it is not sufficient to have
automatic cost analysis. It is also required to have automated means
for handling the analysis results, which are in the form of
Cost Functions (CFs for short), i.e., non-recursive
expressions composed of a relatively small number of types of basic
expressions. In particular, we need automated means for comparing CFs in
order to prove that a CF is smaller than or equal to
another one for all input values of interest.
General function comparison is a hard mathematical problem. Rather
than attacking the general problem, in this work we focus on
comparing CFs by exploiting their syntactic properties
and we present, to the best of our knowledge, the first
practical CF comparator which opens the door to fully automated
applications of cost analysis.
We have implemented the
comparator and made its source code
available online, so that any cost analyzer can use it.
@article{AlbertAGP15,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Germ\'{a}n Puebla},
title = {A {P}ractical {C}omparator of {C}ost {F}unctions and {I}ts {A}pplications},
journal = {Science of Computer Programming},
year = {2015},
publisher = {Elsevier},
pages = {483--504},
volume = {111},
number = {3},
rank:jcr:q = {3},
issn = {0167-6423},
doi = {10.1016/j.scico.2014.12.001},
url = {http://dx.doi.org/10.1016/j.scico.2014.12.001}
}
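The syntactic-comparison idea in the abstract can be illustrated with a toy sufficient check (an illustration only, not the comparator presented in the paper): for polynomial cost functions, coefficient-wise dominance of g over f is enough to conclude f(n) ≤ g(n) for all non-negative inputs. The condition is sound but incomplete, e.g. it cannot conclude 1 ≤ n for n ≥ 1.

```python
# Illustrative sketch of syntactic cost-function comparison (not the
# paper's algorithm): a polynomial CF f is proved <= g for all n >= 0
# whenever g - f has only non-negative coefficients.

def poly_leq(f, g):
    """f, g: dicts mapping exponent -> coefficient, e.g. {2: 3, 0: 1} is 3*n^2 + 1.
    Returns True only if f(n) <= g(n) can be concluded for all n >= 0."""
    exps = set(f) | set(g)
    # sufficient condition: every coefficient of g dominates f's
    return all(g.get(e, 0) - f.get(e, 0) >= 0 for e in exps)

f = {1: 2, 0: 3}        # 2*n + 3
g = {2: 1, 1: 2, 0: 5}  # n^2 + 2*n + 5
print(poly_leq(f, g))   # True
print(poly_leq(g, f))   # False: cannot conclude g <= f syntactically
```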
|
[26]
|
Elvira Albert, Jesús Correas, Germán Puebla, and Guillermo Román-Díez.
Quantified Abstract Configurations of Distributed Systems.
Formal Aspects of Computing, 27(4):665-699, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
When reasoning about distributed systems, it is essential to have
information about the different kinds of nodes that compose the
system, how many instances of each kind exist, and how nodes
communicate with
other nodes.
In this paper we present a static-analysis-based approach which is
able to provide information about the questions above.
In order to cope with an unbounded number of nodes and an unbounded
number of calls among them, the analysis
performs an abstraction of the system, producing a graph whose
nodes may represent (infinitely) many concrete nodes and whose arcs
may represent (infinitely) many calls among nodes.
The crux of our approach is that the abstraction is enriched with
upper bounds inferred by resource analysis that limit the
number of concrete instances that the nodes and arcs represent and
their resource consumption.
The information available in our quantified abstract
configurations allows us to define performance indicators which
measure the quality of the system. In particular, we present
several indicators that assess the level of distribution in the
system, the amount of communication among distributed nodes
that it requires, and how balanced the load of the distributed nodes
that compose the system is. Our performance indicators are given as
functions on the input data sizes, and they can be used to automate the
comparison of different distributed settings and guide towards finding the
optimal configuration.
@article{AlbertCPR14,
author = {Elvira Albert and Jes\'us Correas and Germ\'{a}n Puebla and Guillermo Rom\'{a}n-D\'{i}ez},
title = {{Q}uantified {A}bstract {C}onfigurations of {D}istributed {S}ystems},
journal = {Formal Aspects of Computing},
year = {2015},
publisher = {Springer},
issn = {0934-5043},
volume = {27},
number = {4},
pages = {665--699},
rank:jcr:q = {4},
doi = {10.1007/s00165-014-0321-z},
url = {https://doi.org/10.1007/s00165-014-0321-z}
}
|
[27]
|
Peter Y.H. Wong, Richard Bubel, Frank S. de Boer, Miguel Gómez-Zamalloa,
Stijn de Gouw, Reiner Hähnle, Karl Meinke, and Muddassar Azam Sindhu.
Testing Abstract Behavioral Specifications.
International Journal on Software Tools for Technology
Transfer, 17(1):107-119, Feb 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
@article{WongBBGGHMS14,
author = {Peter Y.H. Wong and Richard Bubel and Frank S. de Boer and Miguel G\'{o}mez-Zamalloa and Stijn de Gouw and Reiner H\"ahnle and Karl Meinke and Muddassar Azam Sindhu},
journal = {International Journal on Software Tools for Technology Transfer},
title = {Testing Abstract Behavioral Specifications},
issn = {1433-2779},
url = {http://dx.doi.org/10.1007/s10009-014-0301-x},
doi = {10.1007/s10009-014-0301-x},
month = {Feb},
day = {01},
volume = {17},
number = {1},
pages = {107--119},
year = 2015,
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg}
}
|
[28]
|
Elvira Albert, Jesús Correas, Germán Puebla, and Guillermo Román-Díez.
A Multi-Domain Incremental Analysis Engine and its
Application to Incremental Resource Analysis.
Theoretical Computer Science, 585(0):91-114, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
The aim of incremental analysis is, given a program,
its analysis results, and a series of changes to the program, to
obtain the new analysis results as efficiently as possible and,
ideally, without having to (re-)analyze fragments of code which are
not affected by the changes. Incremental analysis can significantly
reduce both the time and the memory requirements of analysis.
The first contribution of this article is a multi-domain
incremental fixed-point algorithm for a sequential Java-like
language. The algorithm is multi-domain in the sense that it
interleaves the (re-)analysis for multiple domains by taking into
account dependencies among them. Importantly, this allows the
incremental analyzer to invalidate only those analysis results
previously inferred by certain dependent domains. The second
contribution is an incremental resource usage analysis which,
in its first phase, uses the multi-domain incremental fixed-point
algorithm to carry out all global pre-analyses required to infer
cost in an interleaved way. Such resource analysis is parametric on
the cost metrics one wants to measure (e.g., number of executed
instructions, number of objects created, etc.). Besides, we present
a novel form of cost summaries which allows us to
incrementally reconstruct only those components of cost functions
affected by the changes. Experimental results in the COSTA system
show that the proposed incremental analysis
provides significant performance gains, with speedups over
non-incremental analysis ranging from 1.48x to 5.13x.
@article{AlbertCPR13-tcs,
author = {Elvira Albert and Jes\'us Correas and Germ\'{a}n Puebla and Guillermo Rom\'{a}n-D\'{i}ez},
title = {A {M}ulti-{D}omain {I}ncremental {A}nalysis {E}ngine and its {A}pplication to {I}ncremental {R}esource {A}nalysis},
journal = {Theoretical Computer Science},
year = {2015},
publisher = {Elsevier},
volume = {585},
number = {0},
pages = {91--114},
issn = {0304-3975},
doi = {10.1016/j.tcs.2015.03.002},
url = {https://doi.org/10.1016/j.tcs.2015.03.002},
rank:jcr:q = {3}
}
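The core incrementality idea described above — invalidating only the results affected by a change — can be sketched as a dependency-based invalidation step. This is a hypothetical toy, not COSTA's multi-domain engine: it only computes which cached per-method results must be recomputed, given that callers depend on their callees.

```python
# Toy sketch of incremental invalidation (hypothetical, not COSTA's engine):
# a change to one method forces re-analysis only of the methods that
# transitively depend on it; everything else keeps its cached result.

def affected(changed, depends_on):
    """depends_on: method -> set of methods it depends on (its callees).
    Returns the set of methods whose cached results must be recomputed."""
    # invert the dependency edges: who depends on m?
    dependents = {}
    for m, deps in depends_on.items():
        for d in deps:
            dependents.setdefault(d, set()).add(m)
    # transitive closure starting from the changed methods
    work, out = list(changed), set(changed)
    while work:
        m = work.pop()
        for d in dependents.get(m, ()):
            if d not in out:
                out.add(d)
                work.append(d)
    return out

deps = {"main": {"f", "g"}, "f": {"h"}, "g": set(), "h": set()}
print(sorted(affected({"h"}, deps)))  # ['f', 'h', 'main'] -- 'g' is untouched
```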
|
[29]
|
Damiano Zanardini and Samir Genaim.
Inference of Field-Sensitive Reachability and Cyclicity.
ACM Transactions on Computational Logic, 15(4):33:1-33:41,
September 2014.
[ bibtex |
abstract |
DOI |
arXiv |
http ]
In heap-based languages, knowing that a variable
x points to an acyclic data structure is
useful for analyzing termination: this information
guarantees that the depth of the data structure to
which x points is greater than the depth
of the structure pointed to by
x.fld, and allows bounding the
number of iterations of a loop which traverses the
data structure on fld. In general,
proving termination needs acyclicity, unless
program-specific or non-automated reasoning is
performed. However, recent work could prove that
certain loops terminate even without inferring
acyclicity, because they traverse data structures
“acyclically”. Consider a double-linked list: if
it is possible to demonstrate that every cycle
involves both the “next” and the “prev” field,
then a traversal on “next” terminates since no
cycle will be traversed completely. This paper
develops a static analysis inferring field-sensitive
reachability and cyclicity information, which is
more general than existing approaches. Propositional
formulæ are computed, which describe which fields
may or may not be traversed by paths in the
heap. Consider a tree with edges “left” and
“right” to the left and right sub-trees, and
“parent” to the parent node: termination of a loop
traversing leaf-up cannot be guaranteed by
state-of-the-art analyses. Instead, propositional
formulæ computed by this analysis indicate that
cycles must traverse “parent” and at least one
between “left” and “right”: termination is
guaranteed as no cycle is traversed completely.
This paper defines the necessary abstract domains
and builds an abstract semantics on them. A
prototypical implementation provides the expected
result on relevant examples.
@article{ZanardiniG14,
author = {Damiano Zanardini and Samir Genaim},
title = {{I}nference of {F}ield-{S}ensitive {R}eachability and {C}yclicity},
journal = {ACM Transactions on Computational Logic},
year = 2014,
month = sep,
publisher = {ACM},
volume = {15},
number = {4},
pages = {33:1--33:41},
articleno = {33},
numpages = {41},
doi = {10.1145/2629478},
url = {http://dl.acm.org/citation.cfm?doid=2629478},
arxiv = {http://arxiv.org/abs/1306.6526},
issn = {1529-3785},
rank:jcr:q = {2}
}
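The doubly-linked-list observation in the abstract can be made executable in a small illustrative check (this is a simplification on a concrete heap graph, not the propositional abstract domain of the paper): a traversal following only a given set of fields terminates if the heap graph restricted to those fields is acyclic, since any heap cycle must then use some other field and is never traversed completely.

```python
# Illustrative reading of field-sensitive cyclicity on a concrete heap
# graph (a simplification, not the paper's abstract domain).

def traversal_terminates(edges, fields):
    """edges: list of (src, field, dst) heap references.
    True if no cycle uses only fields in `fields`."""
    graph = {}
    for src, fld, dst in edges:
        if fld in fields:
            graph.setdefault(src, []).append(dst)
    # DFS cycle detection on the restricted graph
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return False          # back edge: cycle within `fields`
            if color.get(m, WHITE) == WHITE and not dfs(m):
                return False
        color[n] = BLACK
        return True
    return all(dfs(n) for n in list(graph) if color.get(n, WHITE) == WHITE)

# doubly linked list a <-> b: cyclic, but every cycle uses both fields
dll = [("a", "next", "b"), ("b", "prev", "a")]
print(traversal_terminates(dll, {"next"}))          # True
print(traversal_terminates(dll, {"next", "prev"}))  # False
```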
|
[30]
|
Amir M. Ben-Amram and Samir Genaim.
Ranking Functions for Linear-Constraint Loops.
Journal of the ACM, 61(4):26, 2014.
[ bibtex |
abstract |
arXiv |
DOI |
http ]
In this paper we study the complexity of the problems:
given a loop, described by linear constraints over a
finite set of variables, is there a linear or
lexicographical-linear ranking function for this
loop? While existence of such functions implies
termination, these problems are not equivalent to
termination. When the variables range over the
rationals (or reals), it is known that both problems
are PTIME decidable. However, when they range over
the integers, whether for single-path or multipath
loops, the complexity has not yet been
determined. We show that both problems are
coNP-complete. However, we point out some special
cases of importance of PTIME complexity. We also
present complete algorithms for synthesizing linear
and lexicographical-linear ranking functions, both
for the general case and the special PTIME cases.
Moreover, in the rational setting, our algorithm for
synthesizing lexicographical-linear ranking
functions extends existing ones, because our definition
for such functions is more general, yet it has
polynomial time complexity.
@article{Ben-AmramG14,
author = {Amir M. Ben-Amram and Samir Genaim},
title = {Ranking Functions for Linear-Constraint Loops},
journal = {Journal of the ACM},
volume = {61},
number = {4},
year = {2014},
pages = {26},
doi = {10.1145/2629488},
url = {http://dx.doi.org/10.1145/2629488},
arxiv = {http://arxiv.org/abs/1208.4041},
rank:jcr:q = {2}
}
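As a toy companion to the notion of a linear ranking function, the following sketch is a dynamic sanity check on an observed execution trace — it is neither a proof nor the synthesis algorithm of the paper: it merely verifies that a candidate linear function is bounded below on loop entry and strictly decreases on every observed iteration.

```python
# Dynamic sanity check for a candidate linear ranking function on a trace
# (illustrative only; the paper synthesizes such functions statically).

def check_ranking_on_trace(trace, coeffs, lower=0):
    """trace: list of state tuples; coeffs: linear function coefficients."""
    f = lambda s: sum(c * v for c, v in zip(coeffs, s))
    vals = [f(s) for s in trace]
    # bounded below on every state that enters the loop body
    bounded = all(v >= lower for v in vals[:-1])
    # strictly decreases by at least 1 on every observed iteration
    decreasing = all(a - b >= 1 for a, b in zip(vals, vals[1:]))
    return bounded and decreasing

# loop "while x > 0 and y > 0: x -= 1", trace of (x, y) states
trace = [(5, 3), (4, 3), (3, 3), (2, 3), (1, 3), (0, 3)]
print(check_ranking_on_trace(trace, (1, 0)))  # True: f(x, y) = x ranks the loop
print(check_ranking_on_trace(trace, (0, 1)))  # False: f(x, y) = y never decreases
```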
|
[31]
|
Elvira Albert, Frank S. de Boer, Reiner Hähnle, Einar Broch Johnsen, Rudolf Schlatte, S. Lizeth Tapia Tarifa, and Peter Y. H. Wong.
Formal Modeling and Analysis of Resource Management for Cloud
Architectures.
Journal of Service Oriented Computing and Applications,
8(4):323-339, 2014.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We demonstrate by a case study of an industrial distributed system
how performance, resource consumption, and deployment on the cloud
can be formally modeled and analyzed using the abstract behavioral
specification language Real-Time ABS. These non-functional aspects of the
system are integrated with an existing formal model of the
functional system behavior, achieving a separation of concerns
between the functional and non-functional aspects in the integrated
model. The resource costs associated with execution in the system
depend on the size of local data structures, which evolve over time;
we derive corresponding worst-case cost estimations by static
analysis techniques and integrate them into our resource-sensitive
model. The model is further parameterized with respect to deployment
scenarios which capture different application-level management
policies for virtualized resources. The model is validated against
the existing system's performance characteristics and used to
simulate, analyze, and compare deployment scenarios on the cloud.
@article{AlbertBHJTW-soca13,
author = {Elvira Albert and Frank S. de Boer and Reiner H\"ahnle and Einar Broch Johnsen and Rudolf Schlatte and S. Lizeth Tapia Tarifa and Peter Y. H. Wong},
title = {Formal Modeling and Analysis of Resource Management for Cloud Architectures},
journal = {Journal of Service Oriented Computing and Applications},
year = {2014},
number = {4},
volume = {8},
pages = {323-339},
publisher = {Springer},
issn = {1863-2386},
doi = {10.1007/s11761-013-0148-0},
url = {http://dx.doi.org/10.1007/s11761-013-0148-0}
}
|
[32]
|
Elvira Albert, Puri Arenas, Samir Genaim, Germán Puebla, and Guillermo Román-Díez.
Conditional Termination of Loops over Heap-allocated Data.
Science of Computer Programming, 92:2-24, 2014.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Static analysis which takes into account the values of data stored
in the heap is considered complex and computationally intractable in
practice. Thus, most static analyzers do not keep track of object
fields nor of array contents, i.e., they are
heap-insensitive. In this article, we propose locality
conditions for soundly tracking heap-allocated data in Java
(bytecode) programs, by means of ghost non-heap allocated
variables. This way, heap-insensitive analysis over the transformed
program can infer information on the original heap-allocated data
without sacrificing efficiency. If the locality conditions cannot
be proven unconditionally, we seek to generate aliasing
preconditions which, when they hold in the initial state,
guarantee the termination of the program. Experimental results show
that we greatly improve the accuracy w.r.t. a heap-insensitive
analysis while the overhead introduced is reasonable.
@article{AlbertAGPG13,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Germ\'{a}n Puebla and Guillermo Rom\'{a}n-D\'{i}ez},
title = {{C}onditional {T}ermination of {L}oops over {H}eap-allocated {D}ata},
journal = {Science of Computer Programming},
publisher = {Elsevier},
doi = {10.1016/j.scico.2013.04.006},
url = {http://dx.doi.org/10.1016/j.scico.2013.04.006},
issn = {0167-6423},
volume = {92},
pages = {2--24},
year = {2014},
rank:jcr:q = {3}
}
|
[33]
|
Elvira Albert, María García de la Banda, Miguel Gómez-Zamalloa,
José Miguel Rojas, and Peter J. Stuckey.
A CLP Heap Solver for Test Case Generation.
Theory and Practice of Logic Programming, 13(4-5):721-735,
July 2013.
[ bibtex |
abstract |
DOI |
PDF |
http ]
One of the main challenges to software testing today is
to efficiently handle heap-manipulating
programs. These programs often build complex,
dynamically allocated data structures during
execution and, to ensure reliability, the testing
process needs to consider all possible shapes these
data structures can take. This creates scalability
issues since high (often exponential) numbers of
shapes may be built due to the aliasing of
references. This paper presents a novel CLP
heap solver for the test case generation of
heap-manipulating programs that is more scalable
than previous proposals, thanks to the treatment of
reference aliasing by means of disjunction,
and to the use of advanced back-propagation
of heap related constraints. In addition, the heap
solver supports the use of heap assumptions
to avoid aliasing of data that, though legal, should
not be provided as input.
@article{AlbertGGZRS13,
author = {Elvira Albert and Mar{\'i}a Garc{\'i}a de la Banda and Miguel G\'{o}mez-Zamalloa and Jos\'{e} Miguel Rojas and Peter J. Stuckey},
title = {{A} {CLP} {H}eap {S}olver for {T}est {C}ase {G}eneration},
journal = {Theory and Practice of Logic Programming},
year = {2013},
month = jul,
publisher = {Cambridge U. Press},
volume = {13},
number = {4-5},
pages = {721--735},
issn = {1471-0684},
doi = {10.1017/S1471068413000458},
url = {http://journals.cambridge.org/article_S1471068413000458},
rank:jcr:q = {1}
}
|
[34]
|
Samir Genaim and Damiano Zanardini.
Reachability-based Acyclicity Analysis by Abstract
Interpretation.
Theoretical Computer Science, 474(0):60-79, 2013.
The project acknowledgements were published in:
Corrigendum to
“Reachability-based acyclicity analysis by abstract interpretation” [Theoretical Computer Science 474 (2013) 60-79]. Theoretical Computer
Science, 503(0):115, 2013.
[ bibtex |
abstract |
DOI |
PDF |
http ]
In programming languages with dynamic use of memory, such as Java,
knowing that a reference variable x points to an acyclic data
structure is valuable for the analysis of termination and
resource usage (e.g., execution time or memory consumption).
For instance, this information guarantees that the depth of
the data structure to which x points is greater than the depth of
the data structure pointed to by x.f for any field f of x.
This, in turn, allows bounding the number of iterations of a loop
which traverses the structure by its depth, which is essential in
order to prove the termination or infer the resource usage of the
loop. The present paper provides an Abstract-Interpretation-based
formalization of a static analysis for inferring acyclicity, which
works on the reduced product of two abstract domains:
reachability, which models the property that the location
pointed to by a variable w can be reached by dereferencing another
variable v (in this case, v is said to reach w); and
cyclicity, modeling the property that v can point to a
cyclic data structure. The analysis is proven to be sound
and optimal with respect to the chosen abstraction.
@article{GenaimZ13,
author = {Samir Genaim and Damiano Zanardini},
title = {Reachability-based {A}cyclicity {A}nalysis by {A}bstract {I}nterpretation},
journal = {Theoretical Computer Science},
year = {2013},
publisher = {Elsevier},
volume = {474},
number = {0},
pages = {60--79},
issn = {0304-3975},
doi = {10.1016/j.tcs.2012.12.018},
url = {http://dx.doi.org/10.1016/j.tcs.2012.12.018},
rank:jcr:q = {3},
note = { The project acknowledgements were published in:
\href{http://dx.doi.org/10.1016/j.tcs.2013.07.032}{Corrigendum
to ``Reachability-based acyclicity analysis by
abstract interpretation''\ [Theoretical Computer
Science 474 (2013) 60-79]}. \emph{Theoretical
Computer Science}, 503(0):115, 2013. }
}
|
[35]
|
Elvira Albert, Samir Genaim, and Miguel Gómez-Zamalloa.
Heap Space Analysis for Garbage Collected Languages.
Science of Computer Programming, 78(9):1427-1448, 2013.
[ bibtex |
abstract |
PDF |
http ]
Accurately predicting the dynamic memory
consumption (or heap space) of programs can be
critical during software development. It is
well-known that garbage collection (GC) complicates
such problem. The peak heap consumption of a
program is the maximum size of the data on the heap
during its execution, i.e., the minimum amount of
heap space needed to safely run the program.
Existing heap space analyses either do not take
deallocation into account or adopt specific models
of garbage collectors which do not necessarily
correspond to the actual memory usage. This paper
presents a novel static analysis for
garbage-collected imperative languages that infers
accurate upper bounds on the peak heap usage,
including exponential, logarithmic and polynomial
bounds. A unique characteristic of the analysis is
that it is parametric on the notion of object
lifetime, i.e., on when objects become collectible.
@article{AlbertGGZ12,
author = {Elvira Albert and Samir Genaim and Miguel G\'{o}mez-Zamalloa},
title = {Heap Space Analysis for Garbage Collected Languages},
journal = {Science of Computer Programming},
publisher = {Elsevier},
issn = {0167-6423},
volume = {78},
number = {9},
pages = {1427--1448},
year = {2013},
url = {http://dx.doi.org/10.1016/j.scico.2012.10.008},
rank:jcr:q = {4}
}
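The notion of peak heap consumption in the abstract can be made concrete on an event trace (a simulation of the quantity being bounded, not the static analysis itself): the peak is the maximum total size of simultaneously live objects, and it depends on when objects are assumed to become collectible.

```python
# Peak heap consumption on an allocation/collection event trace
# (illustrative simulation; the paper infers such bounds statically).

def peak_heap(events):
    """events: list of ('alloc', size) or ('free', size) in execution order.
    A 'free' models the point where an object becomes collectible."""
    live = peak = 0
    for kind, size in events:
        live += size if kind == "alloc" else -size
        peak = max(peak, live)
    return peak

# same allocations, two object-lifetime assumptions
eager = [("alloc", 10), ("free", 10), ("alloc", 10), ("free", 10)]
late  = [("alloc", 10), ("alloc", 10), ("free", 10), ("free", 10)]
print(peak_heap(eager))  # 10: objects collected before the next allocation
print(peak_heap(late))   # 20: both objects live at once
```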
|
[36]
|
Elvira Albert, Samir Genaim, and Abu Naser Masud.
On the Inference of Resource Usage Upper and Lower
Bounds.
ACM Transactions on Computational Logic, 14(3):22:1-22:35,
2013.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Cost analysis aims at determining the amount of
resources required to run a program in terms of its
input data sizes. The most challenging step is to
infer the cost of executing the loops in the
program. This requires bounding the number of
iterations of each loop and finding tight bounds for
the cost of each of its iterations. This article
presents a novel approach to infer upper and lower
bounds from cost relations. These relations are an
extended form of standard recurrence equations which
can be non-deterministic, contain inexact size
constraints and have multiple arguments that
increase and/or decrease. We propose novel
techniques to automatically transform cost relations
into worst-case and best-case deterministic
one-argument recurrence relations. The solution of
each recursive relation provides a precise
upper-bound and lower-bound for executing a
corresponding loop in the program. Importantly,
since the approach is developed at the level of the
cost equations, our techniques are programming
language independent.
@article{AlbertGM12,
author = {Elvira Albert and Samir Genaim and Abu Naser Masud},
title = {On the {I}nference of {R}esource {U}sage {U}pper and {L}ower {B}ounds},
journal = {ACM Transactions on Computational Logic},
year = 2013,
pages = {22:1--22:35},
volume = {14},
number = {3},
issn = {1529-3785},
publisher = {ACM Press},
url = {http://doi.acm.org/10.1145/2499937.2499943},
doi = {10.1145/2499937.2499943},
rank:jcr:q = {1}
}
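The upper/lower-bound idea can be illustrated on a one-argument cost relation C(n) = cost(n) + C(n-1), C(0) = 0, where the per-iteration cost is only known to lie in an interval. The sketch below evaluates the worst-case and best-case instantiations numerically; COSTA derives closed-form bounds symbolically rather than by evaluation.

```python
# Toy evaluation of upper and lower bounds for the cost relation
# C(n) = cost(n) + C(n-1), C(0) = 0, with an inexact per-iteration cost
# (illustrative; not the symbolic solving method of the paper).

def bounds(n, lo, hi):
    """lo, hi: functions giving the best/worst-case cost of iteration i.
    Returns (lower bound, upper bound) on the total cost C(n)."""
    lower = sum(lo(i) for i in range(1, n + 1))
    upper = sum(hi(i) for i in range(1, n + 1))
    return lower, upper

# cost of iteration i lies between 1 and i (e.g. an inner loop of inexact size)
print(bounds(4, lambda i: 1, lambda i: i))  # (4, 10): n <= C(n) <= n*(n+1)/2
```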
|
[37]
|
Damiano Zanardini.
Class-level Non-Interference.
New Generation Computing, 30(2-3):241-270, 2012.
[ bibtex |
abstract |
PDF ]
The Information-Flow property of Non-Interference was
recently relaxed into Abstract Non-Interference (ANI), a
weakened version where attackers can only observe properties
of data, rather than their exact value. ANI was originally defined
on integers, where a property models the set of numbers satisfying
it. The present work proposes an Object-Oriented,
Java-based formulation of an instance of ANI where data take
the form of objects, and the observed property comes to be their
type. The execution of a program is taken to be the invocation of
some method by an external user: a class is secure if, for
all its (non-private) methods, the type of their low-security
data after the execution does not depend on the initial type of its
high-security data (i.e., there are no illicit flows).
The relation to ANI theory (in its abstract version) can be
seen in the representation of abstract domains in terms of class
hierarchies: an upper closure operator maps an object into the
smallest class it is an instance of. An analyzer for a
non-trivial subset of Java is illustrated. A sketch of a
soundness proof is provided: a program is never misclassified
as secure, i.e., it is rejected whenever the absence of illicit
flows cannot be guaranteed.
@article{ngc12,
author = {Damiano Zanardini},
title = {Class-level {N}on-{I}nterference},
journal = {New Generation Computing},
volume = 30,
number = {2-3},
year = {2012},
publisher = {Springer-Verlag, Berlin},
pages = {241--270},
rank:jcr:q = {2}
}
|
[38]
|
Amir M. Ben-Amram, Samir Genaim, and Abu Naser Masud.
On the Termination of Integer Loops.
ACM Trans. Program. Lang. Syst., 34(4):16, 2012.
[ bibtex |
abstract |
DOI |
PDF ]
In this paper we study the decidability of termination
of simple integer loops. We show that it is
undecidable when the body of the loop is expressed
by a set of linear inequalities where an arbitrary
irrational is allowed as a coefficient; when the
loop is a sequence of instructions, that compute
either linear expressions or the step function; or
when the loop body is a piecewise linear
deterministic update with two pieces. For integer
constraint loops with rational coefficients only, we
show that a Petri net can be simulated with such a
loop which implies some interesting lower bounds.
@article{Ben-AmramGM12a,
author = {Amir M. Ben-Amram and Samir Genaim and Abu Naser Masud},
title = {On the Termination of Integer Loops},
journal = {ACM Trans. Program. Lang. Syst.},
volume = {34},
number = {4},
year = {2012},
pages = {16},
doi = {10.1145/2400676.2400679}
}
|
[39]
|
Elvira Albert, Puri Arenas, Samir Genaim, Germán Puebla, and Damiano Zanardini.
Cost Analysis of Object-Oriented Bytecode Programs.
Theoretical Computer Science (Special Issue on Quantitative
Aspects of Programming Languages), 413(1):142-159, 2012.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Cost analysis statically approximates the
cost of programs in terms of their input data size.
This paper presents, to the best of our knowledge,
the first approach to the automatic cost analysis of
Object-Oriented bytecode programs. In languages
such as Java and C#, analyzing bytecode has a much
wider application area than analyzing source code
since the latter is often not available. Cost
analysis in this context has to consider, among
others, dynamic dispatch, jumps, the operand stack,
and the heap. Our method takes a bytecode program
and a cost model specifying the resource of
interest, and generates cost relations which
approximate the execution cost of the program with
respect to such resource. We report on COSTA, an
implementation for Java bytecode which can obtain
upper bounds on cost for a large class of programs
and complexity classes. Our basic techniques can be
directly applied to infer cost relations for other
Object-Oriented imperative languages, not
necessarily in bytecode form.
@article{AlbertAGPZ12,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Germ{\'a}n Puebla and Damiano Zanardini},
title = {Cost {A}nalysis of {O}bject-{O}riented {B}ytecode {P}rograms},
journal = {Theoretical Computer Science (Special Issue on Quantitative Aspects of Programming Languages)},
volume = 413,
number = 1,
pages = {142--159},
year = {2012},
publisher = {Elsevier},
issn = {0304-3975},
doi = {10.1016/j.tcs.2011.07.009},
url = {http://www.sciencedirect.com/science/article/pii/S0304397511006190},
rank:jcr:q = {4}
}
|
[40]
|
Peter Y. H. Wong, Elvira Albert, Radu Muschevici, José Proença, Jan
Schäfer, and Rudolf Schlatte.
The ABS Tool Suite: Modelling, Executing and Analysing
Distributed Adaptable Object-Oriented Systems.
International Journal on Software Tools for Technology
Transfer, 14(5):567-588, 2012.
[ bibtex |
abstract |
DOI ]
Modern software systems must support a high degree of
variability to accommodate a wide range of
requirements and operating conditions. This paper
introduces the Abstract Behavioural Specification
(ABS) language and tool suite, a comprehensive
platform for developing and analysing highly
adaptable distributed concurrent software
systems. The ABS language has a hybrid functional
and object-oriented core, and comes with extensions
that support the development of systems that are
adaptable to diversified requirements, yet capable
to maintain a high level of trustworthiness. Using
ABS, system variability is consistently traceable
from the level of requirements engineering down to
object behaviour. This facilitates temporal
evolution, as changes to the required set of
features of a system are automatically reflected by
functional adaptation of the system's behaviour. The
analysis capabilities of ABS stretch from debugging,
observing and simulating to resource analysis of ABS
models and help ensure that a system will remain
dependable throughout its evolutionary lifetime. We
report on the experience of using the ABS language
and the ABS tool suite in an industrial case study.
@article{WongAMPSS12,
author = {Peter Y. H. Wong and Elvira Albert and Radu
Muschevici and Jos{\'e} Proen\c{c}a and Jan Sch\"afer and Rudolf Schlatte},
title = {The {ABS} {T}ool {S}uite: {M}odelling, {E}xecuting and {A}nalysing {D}istributed
{A}daptable {O}bject-{O}riented {S}ystems},
journal = {International Journal on Software Tools for Technology Transfer},
year = {2012},
publisher = {Springer},
issn = {1433-2779},
doi = {10.1007/s10009-012-0250-1},
volume = {14},
number = {5},
pages = {567-588}
}
|
[41]
|
Elvira Albert, Puri Arenas, Samir Genaim, and Germán Puebla.
Closed-Form Upper Bounds in Static Cost Analysis.
Journal of Automated Reasoning, 46(2):161-203, 2011.
[ bibtex |
abstract |
DOI |
PDF ]
The classical approach to automatic cost analysis
consists of two phases. Given a program and some
measure of cost, the analysis first produces
cost relations (CRs), i.e., recursive
equations which capture the cost of the program in
terms of the size of its input data. Second, CRs
are converted into closed-form, i.e., without
recurrences. Whereas the first phase has received
considerable attention, with a number of cost
analyses available for a variety of programming
languages, the second phase has been comparatively
less studied. This article presents, to our
knowledge, the first practical framework for the
generation of closed-form upper bounds for CRs which
(1) is fully automatic, (2) can handle the
distinctive features of CRs originating from cost
analysis of realistic programming languages, (3) is
not restricted to simple complexity classes, and (4)
produces reasonably accurate solutions. A key idea
in our approach is to view CRs as programs, which
allows applying semantic-based static analyses and
transformations to bound them; namely, our method is
based on the inference of ranking functions
and loop invariants and on the use of
partial evaluation.
@article{AlbertAGP11a,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Germ\'an Puebla},
title = {{C}losed-{F}orm {U}pper {B}ounds in {S}tatic {C}ost {A}nalysis},
journal = {Journal of Automated Reasoning},
year = {2011},
publisher = {Springer},
volume = {46},
number = {2},
pages = {161-203},
doi = {10.1007/s10817-010-9174-1},
issn = {0168-7433},
rank:jcr:q = {3}
}
|
[42]
|
Elvira Albert, Puri Arenas, Samir Genaim, and Damiano Zanardini.
Task-Level Analysis for a Language with Async-Finish
Parallelism.
ACM SIGPLAN Notices, 46(5):21-30, 2011.
[ bibtex |
abstract |
DOI |
PDF ]
The task-level of a program is the maximum
number of tasks that can be available (i.e., neither
finished nor suspended) simultaneously during its
execution for any input data. Static knowledge of
the task-level is of utmost importance for
understanding and debugging parallel programs as
well as for guiding task schedulers. We present, to
the best of our knowledge, the first static analysis
which infers safe and precise approximations on the
task-level for a language with async-finish
parallelism. In parallel languages, async and
finish are the basic constructs, respectively, for
spawning tasks and for waiting until they terminate.
They are the core of modern, parallel, distributed
languages like X10. Given a (parallel) program, our
analysis returns a task-level upper bound, i.e., a
function on the program's input arguments that
guarantees that the task-level of the program will
never exceed its value along any execution. Our
analysis provides a series of useful
(over)-approximations, going from the total number
of tasks spawned in the execution up to an accurate
estimation of the task-level.
@article{AlbertAGZ11,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Damiano Zanardini},
title = {Task-{L}evel {A}nalysis for a {L}anguage with {A}sync-{F}inish {P}arallelism},
journal = {ACM SIGPLAN Notices},
volume = {46},
number = {5},
publisher = {ACM Press},
year = {2011},
issn = {0362-1340},
doi = {10.1145/1967677.1967681},
pages = {21--30},
rank:jcr:q = {4}
}
|
[43]
|
Miguel Gómez-Zamalloa, Elvira Albert, and Germán Puebla.
Test Case Generation for Object-Oriented Imperative
Languages in CLP.
Theory and Practice of Logic Programming, 26th Int'l. Conference
on Logic Programming (ICLP'10) Special Issue, 10(4-6):659-674, July 2010.
[ bibtex |
abstract |
DOI |
PDF ]
Testing is a vital part of the software development
process. Test Case Generation (TCG) is the process
of automatically generating a collection of
test-cases which are applied to a system under test.
White-box TCG is usually performed by means of
symbolic execution, i.e., instead of executing the
program on normal values (e.g., numbers), the
program is executed on symbolic values representing
arbitrary values. When dealing with an
object-oriented (OO) imperative language, symbolic
execution becomes challenging as, among other
things, it must be able to backtrack, complex
heap-allocated data structures should be created
during the TCG process and features like
inheritance, virtual invocations and exceptions have
to be taken into account. Due to its inherent
symbolic execution mechanism, we argue in this
paper that Constraint Logic Programming (CLP) has a
promising application field in TCG. We support
our claim by developing a fully CLP-based framework
for TCG of an OO imperative language, and by
assessing a corresponding implementation on a
set of challenging Java programs.
@article{Gomez-ZAP10,
author = {Miguel G\'{o}mez-Zamalloa and Elvira Albert and Germ{\'a}n Puebla},
title = {{T}est {C}ase {G}eneration for {O}bject-{O}riented {I}mperative {L}anguages in {CLP}},
journal = {Theory and Practice of Logic Programming, 26th Int'l. Conference on Logic Programming (ICLP'10) Special Issue},
year = {2010},
month = jul,
editor = {M.~Hermenegildo and T.~Schaub},
pages = {659--674},
volume = {10},
number = {4-6},
issn = {1471-0684},
ee = {http://dx.doi.org/10.1017/S1471068410000347},
publisher = {Cambridge U. Press},
rank:jcr:q = {2}
}
|
[44]
|
Elvira Albert, Samir Genaim, and Miguel Gómez-Zamalloa.
Parametric Inference of Memory Requirements for Garbage
Collected Languages.
ACM SIGPLAN Notices, 45(8):121-130, 2010.
[ bibtex |
abstract |
PDF ]
The accurate prediction of a program's memory requirements
is a critical component in software development.
Existing heap space analyses either do not take
deallocation into account or adopt specific models
of garbage collectors which do not necessarily
correspond to the actual memory usage. We present a
novel approach to inferring upper bounds on memory
requirements of Java-like programs which is
parametric on the notion of object
lifetime, i.e., on when objects become collectable.
If object lifetimes are inferred by a
reachability analysis, then our analysis
infers accurate upper bounds on the memory
consumption for a reachability-based garbage
collector. Interestingly, if object lifetimes are
inferred by a heap liveness analysis, then we
approximate the program's minimal memory requirement,
i.e., the peak memory usage when using an optimal
garbage collector which frees objects as soon as
they become dead. The key idea is to integrate
information on object lifetimes into the process of
generating the recurrence equations which
capture the memory usage at the different program
states. If the heap size limit is set to the memory
requirement inferred by our analysis, it is ensured
that execution will not exceed the memory limit,
under the only assumption that garbage collection is
performed when the limit is about to be reached. Experiments on Java
bytecode programs provide evidence of the
feasibility and accuracy of our analysis.
@article{AlbertGG10b,
author = {Elvira Albert and Samir Genaim and Miguel G\'{o}mez-Zamalloa},
title = {{P}arametric {I}nference of {M}emory {R}equirements for {G}arbage {C}ollected {L}anguages},
journal = {ACM SIGPLAN Notices},
volume = {45},
number = {8},
year = {2010},
pages = {121--130},
issn = {0362-1340},
publisher = {ACM Press},
rank:jcr:q = {4}
}
|
[45]
|
Miguel Gómez-Zamalloa, Elvira Albert, and Germán Puebla.
Decompilation of Java Bytecode to Prolog by Partial
Evaluation.
Information and Software Technology, 51(10):1409-1427, October
2009.
[ bibtex |
abstract |
DOI |
PDF ]
@article{Gomez-ZAP09,
author = {Miguel G\'{o}mez-Zamalloa and Elvira Albert and Germ{\'a}n Puebla},
title = {{D}ecompilation of {J}ava {B}ytecode to {P}rolog by {P}artial {E}valuation},
journal = {Information and Software Technology},
publisher = {Elsevier},
volume = {51},
number = {10},
issn = {0950-5849},
ee = {http://dx.doi.org/10.1016/j.infsof.2009.04.010},
pages = {1409--1427},
npages = {19},
month = oct,
year = {2009},
rank:jcr:q = {1}
}
|
[46]
|
Elvira Albert, John P. Gallagher, Miguel Gómez-Zamalloa, and Germán Puebla.
Type-based Homeomorphic Embedding for Online Termination.
Information Processing Letters, 109(15):879-886, July 2009.
[ bibtex |
abstract |
DOI |
PDF ]
@article{AlbertGGP09,
author = {Elvira Albert and John P. Gallagher and Miguel G\'{o}mez-Zamalloa and Germ{\'a}n Puebla},
title = {{T}ype-based {H}omeomorphic {E}mbedding for {O}nline {T}ermination},
journal = {Information Processing Letters},
publisher = {Elsevier},
volume = {109},
number = {15},
issn = {0020-0190},
ee = {http://dx.doi.org/10.1016/j.ipl.2009.04.016},
pages = {879--886},
month = jul,
year = {2009},
rank:jcr:q = {3}
}
|
[1]
|
Elvira Albert, Jesús Correas, Pablo Gordillo, Guillermo Román-Díez, and
Albert Rubio.
Synthesis of Sound and Precise Storage Cost Bounds via Unsound
Resource Analysis and Max-SMT.
In ACM SIGSOFT International Symposium on Software Testing and
Analysis, ISSTA 2024. Proceedings, ACM, 2024.
To appear.
[ bibtex |
abstract ]
A storage is a persistent memory whose contents are
kept across different program executions. In the
blockchain technology, storage contents are
replicated and incur the largest costs of a
program’s execution (a.k.a. gas fees). Storage costs
are dynamically calculated using a rather complex
model which assigns a much larger cost to the first
access made in an execution to a storage key, and
besides assigns different costs to write accesses
depending on whether they change the values w.r.t. the
initial and previous contents. Safely assuming the
largest cost for all situations, as done in existing
gas analyzers, is an overly pessimistic approach
that may render useless bounds because they are
too loose. The challenge is to soundly, and yet
accurately, synthesize storage bounds which take
into account the dynamicity implicit in the cost
model. Our solution consists in using an
off-the-shelf static resource analysis (without
always assuming a worst-case cost), hence
yielding unsound bounds; and then, in a later
stage, computing corrections to recover soundness
in the bounds by means of a new Max-SMT-based
approach. We have implemented our approach and used
it to improve the precision of two gas analyzers for
Ethereum, gastap and asparagus. Experimental results
on more than 400,000 functions show that we
achieve great accuracy gains, up to 75%, on the
storage bounds, with the most frequent gains
between 10% and 20%.
@inproceedings{AlbertCGRR24,
author = {Elvira Albert and
Jes\'us Correas and Pablo Gordillo and
Guillermo Rom\'an-D\'iez and
Albert Rubio},
title = {Synthesis of Sound and Precise Storage Cost Bounds via
Unsound Resource Analysis and Max-SMT},
booktitle = {ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2024. Proceedings},
year = {2024},
publisher = {ACM},
note = {To appear}
}
|
[2]
|
Elvira Albert, Maria Garcia de la Banda, Alejandro Hernández-Cerezo,
Alexey Ignatiev, Albert Rubio, and Peter J. Stuckey.
SuperStack: Superoptimization of Stack-Bytecode via Greedy,
Constraint-based, and SAT Techniques.
In PLDI '24: 45th ACM SIGPLAN International Conference on
Programming Language Design and Implementation, Copenhagen, Denmark, June
24-28, 2024. ACM, 2024.
[ bibtex |
abstract ]
Given a loop-free sequence of instructions,
superoptimization techniques use a constraint solver
to search for an equivalent sequence that is optimal
for a desired objective. The complexity of the
search grows exponentially with the length of the
solution being constructed, and the problem becomes
intractable for large sequences of
instructions. This paper presents a new approach to
superoptimizing stack-bytecode via three novel
components: (1) a greedy algorithm to refine the
bound on the length of the optimal solution; (2) a
new representation of the optimization problem as a
set of weighted soft clauses in MaxSAT; (3) a series
of domain-specific dominance and redundant
constraints to reduce the search space for optimal
solutions. We have developed a tool, named
SuperStack, which can be used to find optimal code
translations of modern stack-based bytecode, namely
WebAssembly or Ethereum bytecode. Experimental
evaluation on more than 500,000 sequences shows the
proposed greedy, constraint-based and SAT
combination is able to greatly increase the
optimization gains achieved by existing superoptimizers
and to reduce the optimization time to at least a fourth.
@inproceedings{AlbertGHIRS24,
author = {Elvira Albert and
Maria Garcia de la Banda and
Alejandro Hern{\'{a}}ndez{-}Cerezo and
Alexey Ignatiev and
Albert Rubio and
Peter J. Stuckey},
title = {{SuperStack}: {Superoptimization} of {Stack-Bytecode} via
{Greedy}, {Constraint-based}, and {SAT} {Techniques}},
booktitle = {{PLDI} '24: 45th {ACM} {SIGPLAN} International Conference on Programming
Language Design and Implementation, Copenhagen, Denmark, June 24-28,
2024},
year = {2024},
publisher = {{ACM}},
rank:grinscie:class = {1}
}
|
[3]
|
M. Isabel, C. Rodríguez-Núñez, and A. Rubio.
Scalable Verification of Zero-Knowledge Protocols.
In 2024 IEEE Symposium on Security and Privacy (SP), pages
132-132, Los Alamitos, CA, USA, 2024. IEEE Computer Society.
[ bibtex |
abstract |
DOI |
http ]
The application of Zero-Knowledge (ZK) proofs is rapidly
growing in the industry and has become a key element
to enable privacy and enhance scalability in public
distributed ledgers. In most practical ZK systems,
the statement to be proven is expressed by means of
a set of polynomial equations in a prime field that
describe an arithmetic circuit. Describing general
statements using this kind of constraints is a
complex and error-prone task. This can be partly
mitigated by using high-level programming languages,
but at the cost of losing control over the added
constraints and, as a result, obtaining systems
that are too large for complex statements. In this context,
having tools to automatically verify properties of
the constraint systems is of paramount importance to
guarantee the security of the protocol. However,
since non-linear polynomial reasoning over a finite
field is needed for checking challenging properties,
existing automatic tools either do not scale or
cannot detect non-trivial bugs. In this paper, we
present a new scalable modular technique based on
the application of transformation and deduction
rules that have proven to be very effective in
verifying properties over the signals of a circuit
given as a set of polynomial equations in a large
prime field. Our technique has been implemented in a
tool called CIVER and applied to verify safety
properties for circuits implemented in circom, which
is one of the most popular languages for defining ZK
protocols. We have been able to analyze large
industrial circuits and detect subtle
vulnerabilities in circuits designed by expert
programmers.
@inproceedings{IsabelRR23,
author = {M. Isabel and C. Rodr{\'{\i}}guez{-}N{\'{u}}{\~{n}}ez and A. Rubio},
title = {{Scalable Verification of Zero-Knowledge Protocols}},
booktitle = {{2024 IEEE Symposium on Security and Privacy (SP)}},
year = {2024},
issn = {2375-1207},
pages = {132--132},
rank:grinscie:class = {1},
doi = {10.1109/SP54263.2024.00133},
url = {https://doi.ieeecomputersociety.org/10.1109/SP54263.2024.00133},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA}
}
|
[4]
|
Elvira Albert, Samir Genaim, Daniel Kirchner, and Enrique Martin-Martin.
Formally Verified EVM Block-Optimizations.
In Computer Aided Verification - 35th International Conference,
CAV 2023, Paris, France, July 17-22, 2023, Proceedings, Part III, volume
13966 of Lecture Notes in Computer Science, pages 176-189. Springer,
2023.
[ bibtex |
abstract |
DOI ]
@inproceedings{AlbertGKMM23,
author = {Elvira Albert and
Samir Genaim and
Daniel Kirchner and
Enrique Martin-Martin},
title = {Formally Verified EVM Block-Optimizations},
booktitle = {Computer Aided Verification - 35th International Conference, {CAV}
2023, Paris, France, July 17-22, 2023, Proceedings, Part {III}},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
volume = {13966},
doi = {10.1007/978-3-031-37709-9\_9},
year = {2023},
pages = {176--189},
rank:grinscie:class = {1}
}
|
[5]
|
Elvira Albert, Jesús Correas, Pablo Gordillo, Guillermo Román-Díez, and
Albert Rubio.
Inferring Needless Write Memory Accesses on Ethereum Smart Contracts.
In 29th International Conference on Tools and Algorithms for the
Construction and Analysis of Systems, TACAS 2023. Proceedings, volume 13993
of Lecture Notes in Computer Science, pages 448-466. Springer, 2023.
[ bibtex |
abstract |
DOI |
PDF ]
Efficiency is a fundamental property of any type of
program, but it is even more so in the context of
the programs executing on the blockchain (known as
smart contracts). This is because optimizing smart
contracts has direct consequences on reducing the
costs of deploying and executing the contracts, as
there are fees to pay related to their bytes-size
and to their resource consumption (called
gas). Optimizing memory usage is considered a
challenging problem that, among other things,
requires a precise inference of the memory locations
being accessed. This is also the case for the
Ethereum Virtual Machine (EVM) bytecode generated by
the most-widely used compiler, solc, whose rather
unconventional and low-level memory usage challenges
automated reasoning. This paper presents a static
analysis, developed at the level of the EVM bytecode
generated by solc, that infers write memory accesses
that are needless and thus can be safely
removed. The application of our implementation on
more than 19,000 real smart contracts has detected
about 6,200 needless write accesses in less than 4
hours. Interestingly, many of these writes were
involved in memory usage patterns generated by solc
that can be greatly optimized by removing entire
blocks of bytecodes. To the best of our knowledge,
existing optimization tools cannot infer such
needless write accesses, and hence cannot detect
these inefficiencies that affect both the deployment
and the execution costs of Ethereum smart
contracts.
@inproceedings{AlbertCGRR23,
author = {Elvira Albert and
Jes\'us Correas and Pablo Gordillo and
Guillermo Rom\'an-D\'iez and
Albert Rubio},
title = {Inferring Needless Write Memory Accesses on Ethereum Smart Contracts},
booktitle = {29th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2023. Proceedings},
year = {2023},
rank:grinscie:class = {1},
volume = {13993},
pages = {448--466},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-031-30823-9_23}
}
|
[6]
|
Elvira Albert, Marta Bellés-Muñoz, Miguel Isabel, Clara
Rodríguez-Núñez, and Albert Rubio.
Distilling Constraints in Zero-Knowledge Protocols.
In Sharon Shoham and Yakir Vizel, editors, Computer Aided
Verification - 34th International Conference, CAV 2022, Haifa, Israel,
August 7-10, 2022, Proceedings, Part I, volume 13371 of Lecture Notes
in Computer Science, pages 430-443. Springer, 2022.
[ bibtex |
abstract |
DOI |
http ]
The most widely used Zero-Knowledge (ZK) protocols require provers to prove they know a solution to a computational problem expressed as a Rank-1 Constraint System (R1CS). An R1CS is essentially a system of non-linear arithmetic constraints over a set of signals, whose security level depends on its non-linear part only, as the linear (additive) constraints can be easily solved by an attacker. Distilling the essential constraints from an R1CS by removing the part that does not contribute to its security is important, not only to reduce costs (time and space) of producing the ZK proofs, but also to reveal to cryptographic programmers the real hardness of their proofs. In this paper, we formulate the problem of distilling constraints from an R1CS as the (hard) problem of simplifying constraints in the realm of non-linearity. To the best of our knowledge, it is the first time that constraint-based techniques developed in the context of formal methods are applied to the challenging problem of analysing and optimizing ZK protocols.
@inproceedings{DBLP:conf/cav/AlbertBIRR22,
author = {Elvira Albert and
Marta Bell{\'{e}}s{-}Mu{\~{n}}oz and
Miguel Isabel and
Clara Rodr{\'{\i}}guez{-}N{\'{u}}{\~{n}}ez and
Albert Rubio},
editor = {Sharon Shoham and
Yakir Vizel},
title = {Distilling Constraints in Zero-Knowledge Protocols},
booktitle = {Computer Aided Verification - 34th International Conference, {CAV}
2022, Haifa, Israel, August 7-10, 2022, Proceedings, Part {I}},
series = {Lecture Notes in Computer Science},
volume = {13371},
pages = {430--443},
publisher = {Springer},
rank:grinscie:class = {1},
year = {2022},
url = {https://doi.org/10.1007/978-3-031-13185-1\_21},
doi = {10.1007/978-3-031-13185-1\_21},
timestamp = {Sat, 10 Sep 2022 20:58:38 +0200},
biburl = {https://dblp.org/rec/conf/cav/AlbertBIRR22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
|
[7]
|
Elvira Albert, Pablo Gordillo, Alejandro Hernández-Cerezo, Clara
Rodríguez-Núñez, and Albert Rubio.
Using Automated Reasoning Techniques for Enhancing the Efficiency and
Security of (Ethereum) Smart Contracts.
In Jasmin Blanchette, Laura Kovács, and Dirk Pattinson,
editors, Automated Reasoning - 11th International Joint Conference,
IJCAR 2022, Haifa, Israel, August 8-10, 2022, Proceedings, volume 13385 of
Lecture Notes in Computer Science, pages 3-7. Springer, 2022.
[ bibtex |
abstract |
DOI |
PDF ]
@inproceedings{AlbertGHRR22,
author = {Elvira Albert and
Pablo Gordillo and
Alejandro Hern{\'{a}}ndez{-}Cerezo and
Clara Rodr{\'{\i}}guez{-}N{\'{u}}{\~{n}}ez and
Albert Rubio},
title = {Using Automated Reasoning Techniques for Enhancing the Efficiency
and Security of (Ethereum) Smart Contracts},
booktitle = {Automated Reasoning - 11th International Joint Conference, {IJCAR}
2022, Haifa, Israel, August 8-10, 2022, Proceedings},
year = {2022},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-031-10769-6_1},
pages = {3--7},
volume = {13385},
isbn = {978-3-031-10769-6},
rank:grinscie:class = {2},
editor = {Jasmin Blanchette and
Laura Kov{\'{a}}cs and
Dirk Pattinson}
}
|
[8]
|
Elvira Albert, Pablo Gordillo, Alejandro Hernández-Cerezo, and Albert Rubio.
A Max-SMT Superoptimizer for EVM handling Memory and
Storage.
In Dana Fisman and Grigore Rosu, editors, 28th International
Conference on Tools and Algorithms for the Construction and Analysis of
Systems, TACAS 2022. Proceedings, volume 13243 of Lecture Notes in
Computer Science, pages 201-219. Springer, 2022.
[ bibtex |
abstract |
DOI |
PDF ]
Superoptimization is a compilation technique that
searches for the optimal sequence of instructions
semantically equivalent to a given (loop-free)
initial sequence. With the advent of SMT solvers, it
has been successfully applied to LLVM code (to
reduce the number of instructions) and to Ethereum
EVM bytecode (to reduce its gas consumption). Both
applications, when proven practical, have left out
memory operations and thus missed important
optimization opportunities. A main challenge to
superoptimization today is handling memory
operations while remaining scalable. We present
GASOLv2, a gas and bytes-size superoptimization tool
for Ethereum smart contracts, that leverages a
previous Max-SMT approach for only stack
optimization to optimize also w.r.t. memory and
storage. GASOLv2 can be used to optimize the size
in bytes, aligned with the optimization criterion
used by the Solidity compiler solc, and it can also
be used to optimize gas consumption. Our experiments
on 12,378 blocks from 30 randomly selected real
contracts achieve gains of 16.42% in gas w.r.t. the
previous version of the optimizer without memory
handling, and gains of 3.28% in bytes-size over
code already optimized by solc.
@inproceedings{AlbertGHR22,
author = {Elvira Albert and
Pablo Gordillo and
Alejandro Hern\'andez-Cerezo and
Albert Rubio},
title = {{A} {M}ax-{SMT} {S}uperoptimizer for {EVM}
handling {M}emory and {S}torage},
booktitle = {28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022. Proceedings},
year = {2022},
rank:grinscie:class = {1},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-030-99524-9_11},
pages = {201-219},
volume = {13243},
isbn = {978-3-030-99524-9},
editor = {Dana Fisman and
Grigore Rosu}
}
|
[9]
|
Elvira Albert, Samir Genaim, Enrique Martin-Martin, Alicia Merayo, and Albert
Rubio.
Lower-Bound Synthesis Using Loop Specialization and Max-SMT.
In Alexandra Silva and K. Rustan M. Leino, editors, Computer
Aided Verification - 33rd International Conference, CAV 2021, Virtual
Event, July 20-23, 2021, Proceedings, Part II, volume 12760 of
Lecture Notes in Computer Science, pages 863-886. Springer, 2021.
[ bibtex |
abstract |
DOI |
http ]
This paper presents a new framework to synthesize lower
bounds on the worst-case cost for non-deterministic integer loops. As
in previous approaches, the analysis searches for a metering function
that under-approximates the number of loop iterations. The key novelty
of our framework is the specialization of loops, which is achieved by
restricting their enabled transitions to a subset of the inputs combined
with the narrowing of their transition scopes. Specialization allows us
to find metering functions for complex loops that could not be handled
before or be more precise than previous approaches. Technically, it is
performed (1) by using quasi-invariants while searching for the metering
function, (2) by strengthening the loop guards, and (3) by narrowing the
space of non-deterministic choices. We also propose a Max-SMT encoding
that takes advantage of soft constraints to force the solver to look
for more accurate solutions. We show our accuracy gains on benchmarks
extracted from the 2020 Termination and Complexity Competition by
comparing our results to those obtained by the LoAT system.
@inproceedings{AlbertGMMR21,
author = {Elvira Albert and
Samir Genaim and
Enrique Martin-Martin and
Alicia Merayo and
Albert Rubio},
editor = {Alexandra Silva and
K. Rustan M. Leino},
title = {Lower-Bound Synthesis Using Loop Specialization and Max-SMT},
booktitle = {Computer Aided Verification - 33rd International Conference, {CAV} 2021, Virtual Event, July 20-23, 2021, Proceedings, Part {II}},
series = {Lecture Notes in Computer Science},
volume = {12760},
pages = {863--886},
publisher = {Springer},
year = {2021},
url = {https://doi.org/10.1007/978-3-030-81688-9\_40},
rank:grinscie:class = {1},
doi = {10.1007/978-3-030-81688-9\_40}
}
|
[10]
|
Elvira Albert, Reiner Hähnle, Alicia Merayo, and Dominic
Steinhöfel.
Certified Abstract Cost Analysis.
In Esther Guerra and Mariëlle Stoelinga, editors,
Fundamental Approaches to Software Engineering - 24th International
Conference, FASE 2021, Held as Part of the European Joint Conferences on
Theory and Practice of Software, ETAPS 2021, Luxembourg City, Luxembourg,
March 27 - April 1, 2021, Proceedings, volume 12649 of Lecture Notes in
Computer Science, pages 24-45. Springer, 2021.
[ bibtex |
abstract |
DOI |
http ]
A program containing placeholders for unspecified
statements or expressions is called an abstract (or
schematic) program. Placeholder symbols occur
naturally in program transformation rules, as used
in refactoring, compilation, optimization, or
parallelization. We present a generalization of
automated cost analysis that can handle abstract
programs and, hence, can analyze the impact on the
cost of program transformations. This kind of
relational property requires provably precise cost
bounds which are not always produced by cost
analysis. Therefore, we certify by deductive
verification that the inferred abstract cost bounds
are correct and sufficiently precise. It is the
first approach solving this problem. Both, abstract
cost analysis and certification, are based on
quantitative abstract execution (QAE) which in turn
is a variation of abstract execution, a recently
developed symbolic execution technique for abstract
programs. To realize QAE the new concept of a cost
invariant is introduced. QAE is implemented and runs
fully automatically on a benchmark set consisting of
representative optimization rules.
@inproceedings{AlbertHMS21,
author = {Elvira Albert and
Reiner H{\"{a}}hnle and
Alicia Merayo and
Dominic Steinh{\"{o}}fel},
editor = {Esther Guerra and
Mari{\"{e}}lle Stoelinga},
title = {Certified Abstract Cost Analysis},
booktitle = {Fundamental Approaches to Software Engineering - 24th International
Conference, {FASE} 2021, Held as Part of the European Joint Conferences
on Theory and Practice of Software, {ETAPS} 2021, Luxembourg City,
Luxembourg, March 27 - April 1, 2021, Proceedings},
series = {Lecture Notes in Computer Science},
volume = {12649},
rank:grinscie:class = {3},
pages = {24--45},
publisher = {Springer},
year = {2021},
url = {https://doi.org/10.1007/978-3-030-71500-7\_2},
doi = {10.1007/978-3-030-71500-7\_2}
}
|
[11]
|
Elvira Albert, Shelly Grossman, Noam Rinetzky, Clara Rodríguez-Núñez,
Albert Rubio, and Mooly Sagiv.
Taming Callbacks for Smart Contract Modularity.
In ACM SIGPLAN Conference on Object-Oriented Programming
Systems, Languages and Applications, OOPSLA 2020. Proceedings, volume 4,
pages 209:1-209:30, 2020.
[ bibtex |
abstract |
DOI ]
Callbacks are an effective programming discipline for implementing
event-driven programming, especially in environments like Ethereum which
forbid shared global state and concurrency.
Callbacks allow a callee to delegate the execution back to the caller.
Though effective, they can lead to subtle mistakes, principally in open environments where
callbacks can be added in new code.
Indeed, several high profile bugs in smart contracts exploit callbacks.
We present the first static technique ensuring modularity in
the presence of callbacks and apply it to verify prominent smart
contracts. Modularity ensures that external calls to other contracts
cannot affect the behavior of the contract. Importantly, modularity
is guaranteed without restricting programming.
In general, checking modularity is undecidable, even for programs without loops.
This paper describes an effective technique for soundly ensuring modularity by harnessing SMT solvers.
The main idea is to define a constructive version of modularity using commutativity and projection operations on program segments.
We believe that this approach is also accessible to programmers, since counterexamples to modularity can be generated automatically
by the SMT solvers, allowing programmers to understand and fix the error.
We implemented our approach in order to demonstrate the precision of
the modularity analysis and applied it to real smart contracts, including a subset of the 150 most active contracts in
Ethereum.
Our implementation decompiles bytecode programs into an intermediate representation and then implements the modularity checking
using SMT queries.
Overall, we argue that our experimental results indicate that the method can be applied to many realistic contracts,
and that it is able to prove modularity where other methods fail.
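The callback hazard the abstract alludes to can be made concrete with a small sketch. The toy bank below is entirely ours (not the paper's tool or benchmarks): an external callback re-enters `withdraw` between the balance check and the state update, so the external call changes the contract's behavior, which is exactly the kind of modularity violation the analysis rules out.

```python
# Toy illustration (not the paper's tool): a callback re-entering a
# contract between a check and a state update, the classic pattern
# behind several high-profile smart-contract bugs.

class Bank:
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who, amount, notify):
        if self.balances.get(who, 0) >= amount:
            notify(amount)                 # external call: may call back in
            self.balances[who] -= amount   # state updated only afterwards

bank = Bank()
bank.deposit("attacker", 10)

stolen = []
def reenter(amount):
    # The callback re-enters withdraw while the balance check still passes.
    if len(stolen) < 3:
        stolen.append(amount)
        bank.withdraw("attacker", amount, reenter)

bank.withdraw("attacker", 10, reenter)
# Four nested withdrawals each subtract 10 from an initial balance of 10:
print(bank.balances["attacker"])  # -30
```

The final negative balance shows that the external call affected the contract's state in a way a modular (callback-free) reading of `withdraw` cannot predict.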
@inproceedings{AlbertGRRRS20,
author = {Elvira Albert and
Shelly Grossman and
Noam Rinetzky and
Clara Rodr\'iguez-N\'u\~{n}ez and
Albert Rubio and
Mooly Sagiv},
title = {{Taming Callbacks for Smart Contract Modularity}},
booktitle = {ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages and Applications, {OOPSLA}
2020. Proceedings},
year = {2020},
volume = {4},
pages = {209:1--209:30},
number = {OOPSLA},
rank:grinscie:class = {1},
doi = {10.1145/3428277}
}
|
[12]
|
Elvira Albert, Pablo Gordillo, Albert Rubio, and Maria A. Schett.
Synthesis of Super-Optimized Smart Contracts using Max-SMT.
In 32nd International Conference on Computer Aided Verification,
CAV 2020. Proceedings, volume 12224 of Lecture Notes in Computer
Science, pages 177-200, 2020.
[ bibtex |
abstract |
DOI |
PDF ]
With the advent of smart contracts that execute on the
blockchain ecosystem, a new mode of reasoning is
required for developers that must pay meticulous
attention to the gas spent by their smart contracts,
as well as for optimization tools that must be
capable of effectively reducing the gas required by
the smart contracts. Super-optimization is a
technique which attempts to find the best
translation of a block of code by trying all
possible sequences of instructions that produce the
same result. This paper presents a novel approach
for super-optimization of smart contracts based on
Max-SMT which is split into two main phases: (i) the
extraction of a stack functional specification from
the basic blocks of the smart contract, which is
simplified using rules that capture the semantics of
the arithmetic, bit-wise, relational operations,
etc.; (ii) the synthesis of optimized blocks which,
by means of an efficient Max-SMT encoding, finds the
bytecode blocks with minimal gas cost whose stack
functional specification is equal (modulo
commutativity) to the extracted one. Our
experimental results are very promising: we are able
to optimize 52.78% of the blocks, and prove that
34.28% were already optimal, for more than 61,000
blocks from the most used 2,500 Ethereum contracts.
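The two-phase idea above (extract a stack functional specification, then synthesize the cheapest equivalent block) can be sketched with a brute-force search over a toy stack machine. Everything here is illustrative, including the instruction set and gas costs; the actual tool encodes the search as a Max-SMT problem rather than enumerating candidates.

```python
# Illustrative brute-force superoptimizer for a toy stack machine.
from itertools import product

GAS = {"PUSH2": 3, "DUP": 3, "ADD": 3, "MUL": 5}

def run(block, x):
    stack = [x]
    for op in block:
        if op == "PUSH2": stack.append(2)
        elif op == "DUP":  stack.append(stack[-1])
        elif op == "ADD":  stack.append(stack.pop() + stack.pop())
        elif op == "MUL":  stack.append(stack.pop() * stack.pop())
    return stack

original = ("PUSH2", "MUL")            # computes 2*x, costs 8 gas

def equivalent(block):
    # Check the stack effect on a few sample inputs (a stand-in for
    # the exact functional-specification check done via Max-SMT).
    try:
        return all(run(block, x) == run(original, x) for x in (0, 1, 7, -3))
    except IndexError:                 # stack underflow
        return False

best = min((b for n in range(1, 3) for b in product(GAS, repeat=n)
            if equivalent(b)),
           key=lambda b: sum(GAS[op] for op in b))
print(best, sum(GAS[op] for op in best))  # ('DUP', 'ADD') with 6 gas
```

The search discovers that duplicating the operand and adding is gas-cheaper than pushing a constant and multiplying, while producing the same stack effect.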
@inproceedings{AlbertGRS20,
author = {Elvira Albert and
Pablo Gordillo and
Albert Rubio and
Maria A. Schett},
title = {{Synthesis of Super-Optimized Smart Contracts using Max-SMT}},
booktitle = {32nd International Conference on Computer Aided Verification, {CAV}
2020. Proceedings},
year = {2020},
series = {Lecture Notes in Computer Science},
pages = {177-200},
volume = {12224},
rank:grinscie:class = {1},
doi = {10.1007/978-3-030-53288-8_10},
isbn = {978-3-030-53288-8}
}
|
[13]
|
Elvira Albert, Jesús Correas, Pablo Gordillo, Guillermo Román-Díez, and
Albert Rubio.
GASOL: Gas Analysis and Optimization for Ethereum Smart
Contracts.
In Armin Biere and David Parker, editors, 26th International
Conference on Tools and Algorithms for the Construction and Analysis of
Systems, TACAS 2020. Proceedings, volume 12079 of Lecture Notes in
Computer Science, pages 118-125. Springer, 2020.
[ bibtex |
abstract |
DOI |
PDF ]
We present the main concepts, components, and usage of
GASOL, a Gas AnalysiS and Optimization tooL for
Ethereum smart contracts. GASOL offers a wide
variety of cost models that allow inferring
the gas consumption associated with selected types of
EVM instructions and/or inferring the number of
times that such types of bytecode instructions are
executed. Among others, we have cost models to
measure only storage opcodes, to measure a selected
family of gas-consumption opcodes following
Ethereum's classification, to estimate the cost of a
selected program line, etc. After choosing the
desired cost model and the function of interest,
GASOL returns to the user an upper bound on the
cost of this function. As the gas consumption is
often dominated by the instructions that access the
storage, GASOL uses the gas analysis to detect
under-optimized storage patterns, and includes an
(optional) automatic optimization of the selected
function. Our tool can be used within an Eclipse
plugin for Solidity which displays the gas and
instructions bounds and, when applicable, the
gas-optimized Solidity function.
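In the same spirit as the cost models described above, a minimal sketch with made-up opcode costs (real EVM gas costs depend on the opcode, its arguments, and the protocol fork) shows how storage opcodes come to dominate a bound:

```python
# Back-of-the-envelope version of what a gas cost model does: assign a
# cost to each opcode of a straight-line block and sum per category.
# The costs below are illustrative only, not the real fee schedule.
COST = {"PUSH1": 3, "ADD": 3, "MLOAD": 3, "MSTORE": 3,
        "SLOAD": 800, "SSTORE": 5000}
STORAGE = {"SLOAD", "SSTORE"}

# A straight-line block that reads a storage slot, adds a constant,
# and writes the slot back.
block = ["PUSH1", "SLOAD", "PUSH1", "ADD", "SSTORE"]

total   = sum(COST[op] for op in block)
storage = sum(COST[op] for op in block if op in STORAGE)
print(total, storage)  # 5809 5800: storage opcodes dominate the bound
```

Even in this tiny block, nearly all of the cost comes from the two storage opcodes, which is why GASOL's optimization targets under-optimized storage patterns.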
@inproceedings{AlbertCGRR20,
author = {Elvira Albert and
Jes\'us Correas and
Pablo Gordillo and
Guillermo Rom\'an-D\'iez and
Albert Rubio},
title = {{GASOL}: {G}as {A}nalysis and {O}ptimization for {E}thereum {S}mart {C}ontracts},
booktitle = {26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2020. Proceedings},
year = {2020},
rank:grinscie:class = {1},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
editor = {Armin Biere and
David Parker},
doi = {10.1007/978-3-030-45237-7_7},
pages = {118-125},
volume = {12079},
isbn = {978-3-030-45237-7}
}
|
[14]
|
Elvira Albert, Jesús Correas, Pablo Gordillo, Guillermo Román-Díez, and
Albert Rubio.
Smart, and also reliable and gas-efficient, contracts.
In 13th IEEE International Conference on Software Testing,
Validation and Verification, ICST 2020. Proceedings, IEEE, 2020.
[ bibtex |
abstract |
DOI |
PDF ]
@inproceedings{AlbertCGRR20-icst,
author = {Elvira Albert and
Jes\'us Correas and
Pablo Gordillo and
Guillermo Rom\'an-D\'iez and
Albert Rubio},
title = {{Smart, and also reliable and gas-efficient, contracts}},
booktitle = {13th {IEEE} International Conference on Software Testing, Validation
and Verification, {ICST} 2020. Proceedings},
year = {2020},
series = {IEEE},
rank:grinscie:class = {2},
doi = {10.1109/ICST46399.2020.00010}
}
|
[15]
|
Amir M. Ben-Amram, Jesús J. Doménech, and Samir Genaim.
Multiphase-Linear Ranking Functions and Their Relation to
Recurrent Sets.
In Bor-Yuh Evan Chang, editor, Proceedings of the 26th
International Symposium on Static Analysis (SAS'19), volume 11822 of
Lecture Notes in Computer Science, pages 459-480. Springer, 2019.
[ bibtex |
abstract |
DOI ]
Multiphase ranking functions (MΦRF) are
used to prove termination of loops in which the
computation progresses through a number of
phases. They consist of linear functions
f1,...,fd where fi decreases during
the ith phase. This work provides new insights
regarding MΦRFs for loops described by
a conjunction of linear constraints (SLC loops). In
particular, we consider the existence problem (does
a given SLC loop admit a MΦRF). The
decidability and complexity of the problem, in the
case that d is restricted by an input parameter,
have been settled in recent work, while in this
paper we make progress regarding the existence
problem without a given depth bound. Our new
approach, while falling short of a decision
procedure for the general case, reveals some
important insights into the structure of these
functions. Interestingly, it relates the problem of
seeking MΦRFs to that of seeking
recurrent sets (used to prove nontermination). It
also helps in identifying classes of loops for which
MΦRFs are sufficient, and thus have
linear runtime bounds. For the depth-bounded
existence problem, we obtain a new polynomial-time
procedure that can provide witnesses for
negative answers as well. To obtain this procedure
we introduce a new representation for SLC loops, the
difference polyhedron replacing the customary
transition polyhedron. We find that this
representation yields new insights on
MΦRFs and SLC loops in general, and
some results on termination and nontermination of
bounded SLC loops become straightforward.
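A loop that needs a multiphase ranking function can be seen in a small simulation. The loop and the MΦRF below are a standard textbook-style illustration chosen by us, not taken from the paper: `while x >= 0: x += y; y -= 1` admits no single linear ranking function, but ⟨y, x⟩ works, with y decreasing in the first phase and x in the second.

```python
# Simulate the loop  while x >= 0: x := x + y; y := y - 1
# and check the two-phase ranking function <f1, f2> = <y, x> along
# the trace: at every step, either f1 is nonnegative and decreases,
# or f2 decreases.

def trace(x, y, limit=10_000):
    steps = []
    while x >= 0 and len(steps) < limit:
        steps.append((x, y))
        x, y = x + y, y - 1
    return steps

steps = trace(x=3, y=5)

for (x0, y0), (x1, y1) in zip(steps, steps[1:]):
    assert (y0 >= 0 and y1 < y0) or x1 < x0

print(len(steps))  # the loop terminates, well below the safety limit
```

During phase one, x actually grows while y counts down; only once y turns negative does x start to decrease, which is why a single linear function cannot witness termination.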
@inproceedings{Ben-AmramDG19,
author = {Amir M. Ben{-}Amram and Jes{\'{u}}s J. Dom{\'{e}}nech and Samir Genaim},
editor = {Bor{-}Yuh Evan Chang},
title = {{M}ultiphase-{L}inear {R}anking {F}unctions and {T}heir {R}elation to {R}ecurrent {S}ets},
booktitle = {Proceedings of the 26th International Symposium on Static Analysis (SAS'19)},
series = {Lecture Notes in Computer Science},
volume = {11822},
pages = {459--480},
publisher = {Springer},
year = {2019},
isbn = {978-3-030-32303-5},
issn = {0302-9743},
doi = {10.1007/978-3-030-32304-2_22}
}
|
[16]
|
Elvira Albert, Pablo Gordillo, Albert Rubio, and Ilya Sergey.
Running on Fumes: Preventing Out-Of-Gas Vulnerabilities
in Ethereum Smart Contracts using Static Resource Analysis.
In Pierre Ganty and Mohamed Kaâniche, editors, 13th
International Conference on Verification and Evaluation of Computer and
Communication Systems, VECoS 2019. Proceedings, volume 11847 of
Lecture Notes in Computer Science, pages 63-78. Springer, 2019.
[ bibtex |
abstract |
DOI |
PDF ]
Gas is the unit that measures the computational
effort required to execute each operation on the
Ethereum blockchain platform. Each instruction
executed by the Ethereum Virtual Machine (EVM) has
an associated gas consumption specified by
Ethereum. If a transaction exceeds the amount of gas
allotted by the user (known as gas limit), an
out-of-gas exception is raised. There is a wide
family of contract vulnerabilities due to out-of-gas
behaviors. We report on the design and
implementation of Gastap, a Gas-Aware Smart contracT
Analysis Platform, which takes as input a smart
contract (either in EVM, disassembled EVM, or in
Solidity source code) and automatically infers gas
upper bounds for all its public functions. Our
bounds ensure that if the gas limit paid by the user
is higher than our inferred gas bounds, the contract
is free of out-of-gas vulnerabilities.
@inproceedings{AlbertGRS19,
author = {Elvira Albert and
Pablo Gordillo and
Albert Rubio and Ilya Sergey},
title = {{R}unning on {F}umes: {P}reventing {O}ut-{O}f-{G}as {V}ulnerabilities in {E}thereum {S}mart {C}ontracts using {S}tatic {R}esource {A}nalysis},
booktitle = {13th International Conference on Verification and Evaluation of Computer and Communication Systems, {VECoS} 2019. Proceedings},
year = {2019},
publisher = {Springer},
editor = {Pierre Ganty and
Mohamed Ka\^aniche},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-030-35092-5_5},
pages = {63--78},
volume = {11847},
isbn = {978-3-030-35092-5}
}
|
[17]
|
Elvira Albert, Maria Garcia de la Banda, Miguel Gómez-Zamalloa, Miguel
Isabel, and Peter J. Stuckey.
Optimal Context-Sensitive Dynamic Partial Order
Reduction with Observers.
In Proceedings of the ACM SIGSOFT International Symposium on
Software Testing and Analysis 2019 (ISSTA'19), ACM, pages 352-362, 2019.
[ bibtex |
abstract |
DOI |
PDF ]
@inproceedings{AlbertGGIS19,
author = {Elvira Albert and
Maria Garcia de la Banda and
Miguel G\'omez-Zamalloa and
Miguel Isabel and
Peter J. Stuckey},
title = {{O}ptimal {C}ontext-{S}ensitive {D}ynamic
{P}artial {O}rder {R}eduction with {O}bservers},
booktitle = {Proceedings of the ACM SIGSOFT International Symposium on Software Testing and Analysis 2019 (ISSTA'19)},
year = {2019},
pages = {352--362},
series = {ACM},
doi = {10.1145/3293882.3330565},
rank:grinscie:class = {1}
}
|
[18]
|
Miguel Isabel.
Conditional Dynamic Partial Order Reduction and Optimality Results.
In Proceedings of the ACM SIGSOFT International Symposium on
Software Testing and Analysis 2019 (ISSTA'19), ACM, 2019.
[ bibtex |
abstract |
DOI |
PDF ]
@inproceedings{Isabel19,
author = {Miguel Isabel},
title = {Conditional Dynamic Partial Order Reduction and Optimality Results},
booktitle = {Proceedings of the ACM SIGSOFT International Symposium on Software Testing and Analysis 2019 (ISSTA'19)},
year = {2019},
series = {ACM},
doi = {10.1145/3293882.3338987}
}
|
[19]
|
Elvira Albert, Jesús Correas, Pablo Gordillo, Guillermo Román-Díez, and
Albert Rubio.
SAFEVM: A Safety Verifier for Ethereum Smart Contracts.
In Dongmei Zhang and Anders Møller, editors, ACM SIGSOFT
International Symposium on Software Testing and Analysis, ISSTA 2019.
Proceedings, ACM, pages 386-389, 2019.
[ bibtex |
abstract |
DOI |
PDF ]
Ethereum smart contracts are public, immutable and
distributed and, as such, they are prone to
vulnerabilities stemming from programming mistakes
of developers. This paper presents SAFEVM, a
verification tool for Ethereum smart contracts that
makes use of state-of-the-art verification engines
for C programs. SAFEVM takes as input an Ethereum
smart contract (provided either in Solidity source
code, or in compiled EVM bytecode), optionally with
assert and require verification annotations, and
produces as output a report with the
verification results. Besides general safety
annotations, SAFEVM handles the verification of
array accesses: it automatically generates SV-COMP
verification assertions such that C verification
engines can prove safety of array accesses. Our
experimental evaluation has been undertaken on all
contracts pulled from etherscan.io (more than
24,000) by using as back-end verifiers CPAchecker,
SeaHorn and VeryMax.
@inproceedings{AlbertCGRR19,
author = {Elvira Albert and
Jes\'us Correas and
Pablo Gordillo and
Guillermo Rom\'an-D\'iez and
Albert Rubio},
title = {{SAFEVM}: {A} {S}afety {V}erifier for {E}thereum {S}mart {C}ontracts},
booktitle = {ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2019. Proceedings},
year = {2019},
series = {ACM},
pages = {386--389},
editor = {Dongmei Zhang and
Anders M{\o}ller},
doi = {10.1145/3293882.3338999}
}
|
[20]
|
Elvira Albert, Pablo Gordillo, Benjamin Livshits, Albert Rubio, and Ilya
Sergey.
EthIR: A Framework for High-Level Analysis of Ethereum
Bytecode.
In Shuvendu Lahiri and Chao Wang, editors, 16th International
Symposium on Automated Technology for Verification and Analysis, ATVA 2018.
Proceedings, volume 11138 of Lecture Notes in Computer Science, pages
513-520. Springer, 2018.
[ bibtex |
abstract |
DOI |
PDF ]
Analyzing Ethereum bytecode, rather than the
source code from which it was generated, is a
necessity when: (1) the source code is not available
(e.g., the blockchain only stores the bytecode), (2)
the information to be gathered in the analysis is
only visible at the level of bytecode (e.g., gas
consumption is specified at the level of EVM
instructions), (3) the analysis results may be
affected by optimizations performed by the compiler
(thus the analysis should be done ideally after
compilation). This paper presents EthIR, a framework
for analyzing Ethereum bytecode, which relies on (an
extension of) OYENTE, a tool that generates CFGs;
EthIR produces, from the CFGs, a rule-based
representation (RBR) of the bytecode that enables
the application of (existing) high-level analyses to
infer properties of EVM code.
@inproceedings{AlbertGLRS18,
author = {Elvira Albert and Pablo Gordillo and Benjamin Livshits and Albert Rubio and Ilya Sergey},
title = {EthIR: A {F}ramework for {H}igh-{L}evel {A}nalysis of {E}thereum {B}ytecode},
editor = {Shuvendu Lahiri and Chao Wang},
booktitle = {16th International Symposium on Automated Technology for Verification and Analysis, {ATVA} 2018. Proceedings},
publisher = {Springer},
year = {2018},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-030-01090-4_30},
pages = {513--520},
volume = {11138},
isbn = {978-3-030-01089-8}
}
|
[21]
|
Elvira Albert, Miguel Gómez-Zamalloa, Albert Rubio, Matteo Sammartino,
and Alexandra Silva.
SDN-Actors: Modeling and Verification of SDN Programs.
In Formal Methods - 22nd International Symposium, FM 2018,
Held as Part of the Federated Logic Conference, FloC 2018, Oxford, UK, July
15-17, 2018, Proceedings, pages 550-567, 2018.
[ bibtex |
abstract |
DOI |
http ]
|
[22]
|
Elvira Albert, Miguel Gómez-Zamalloa, Miguel Isabel, and Albert Rubio.
Constrained Dynamic Partial Order Reduction.
In Computer Aided Verification - 30th International Conference,
CAV 2018, Held as Part of the Federated Logic Conference, FloC 2018,
Oxford, UK, July 14-17, 2018, Proceedings, Part II, volume 10982 of
Lecture Notes in Computer Science, pages 392-410. Springer, 2018.
[ bibtex |
abstract |
DOI |
PDF |
http ]
@inproceedings{AlbertGIR18,
author = {Elvira Albert and Miguel G\'{o}mez-Zamalloa and Miguel Isabel and Albert Rubio},
title = {{C}onstrained {D}ynamic {P}artial {O}rder {R}eduction},
booktitle = {Computer Aided Verification - 30th International Conference, {CAV}
2018, Held as Part of the Federated Logic Conference, FloC 2018, Oxford,
UK, July 14-17, 2018, Proceedings, Part {II}},
pages = {392--410},
year = {2018},
url = {https://doi.org/10.1007/978-3-319-96142-2\_24},
doi = {10.1007/978-3-319-96142-2\_24},
series = {Lecture Notes in Computer Science},
volume = {10982},
isbn = {978-3-319-96141-5},
publisher = {Springer},
rank:grinscie:class = {1}
}
|
[23]
|
Elvira Albert, Miguel Gómez-Zamalloa, and Miguel Isabel.
Generation of Initial Contexts for Effective Deadlock Detection.
In Logic-Based Program Synthesis and Transformation: 27th
International Symposium, LOPSTR 2017, Namur, Belgium, October 10-12, 2017,
Revised Selected Papers, volume 10855 of Lecture Notes in Computer
Science, pages 3-19. Springer International Publishing, 2018.
[ bibtex |
abstract |
DOI |
PDF ]
@inproceedings{AlbertGI18,
author = {Elvira Albert and Miguel G\'{o}mez-Zamalloa and Miguel Isabel},
title = {Generation of Initial Contexts for Effective Deadlock Detection},
booktitle = {Logic-Based Program Synthesis and Transformation: 27th International Symposium, {LOPSTR} 2017, Namur, Belgium, October 10-12, 2017, Revised Selected Papers},
year = {2018},
pages = {3--19},
publisher = {Springer International Publishing},
series = {Lecture Notes in Computer Science},
volume = {10855},
doi = {10.1007/978-3-319-94460-9_1}
}
|
[24]
|
Elvira Albert, Miguel Gómez-Zamalloa, and Miguel Isabel.
On the Generation of Initial Contexts for Effective Deadlock
Detection.
In LOPSTR 2017, October 2017.
Extended abstract.
[ bibtex |
abstract |
PDF ]
@inproceedings{AlbertGI17,
author = {Elvira Albert and Miguel G\'{o}mez-Zamalloa and Miguel Isabel},
title = {On the Generation of Initial Contexts for Effective Deadlock Detection},
booktitle = {LOPSTR 2017},
month = oct,
year = {2017},
note = {Extended abstract}
}
|
[25]
|
Amir M. Ben-Amram and Samir Genaim.
On Multiphase-Linear Ranking Functions.
In Viktor Kuncak and Rupak Majumdar, editors, Proceedings of the
29th International Conference on Computer Aided Verification (CAV'17),
volume 10427 of Lecture Notes in Computer Science, pages 601-620.
Springer, 2017.
[ bibtex |
abstract |
DOI ]
Multiphase ranking functions (MΦRF) were
proposed as a means to prove the termination of a
loop in which the computation progresses through a
number of "phases", and the progress of each phase
is described by a different linear ranking function.
Our work provides new insights regarding such
functions for loops described by a conjunction of
linear constraints (single-path loops). We provide
a complete polynomial-time solution to the problem
of existence and of synthesis of MΦRF
of bounded depth (number of phases), when variables
range over rational or real numbers; a complete
solution for the (harder) case that variables are
integer, with a matching lower-bound proof, showing
that the problem is coNP-complete; and a new theorem
which bounds the number of iterations for loops with
MΦRFs. Surprisingly, the bound is
linear, even when the variables involved change in
non-linear way. We also consider a type of
lexicographic ranking functions more
expressive than types of lexicographic functions for
which complete solutions have been given so far. We
prove that for the above type of loops,
lexicographic functions can be reduced to
MΦRFs, and thus the questions of
complexity of detection, synthesis, and iteration
bounds are also answered for this class.
@inproceedings{Ben-AmramG17,
author = {Amir M. Ben-Amram and Samir Genaim},
title = {{O}n {M}ultiphase-{L}inear {R}anking {F}unctions},
editor = {Viktor Kuncak and Rupak Majumdar},
booktitle = {Proceedings of the 29th International Conference on Computer Aided Verification (CAV'17)},
series = {Lecture Notes in Computer Science},
volume = {10427},
pages = {601--620},
publisher = {Springer},
isbn = {978-3-319-63389-3},
issn = {0302-9743},
year = {2017},
doi = {10.1007/978-3-319-63390-9_32}
}
|
[26]
|
Elvira Albert, Puri Arenas, María García de la Banda, Miguel Gómez-Zamalloa, and Peter J. Stuckey.
Context Sensitive Dynamic Partial Order Reduction.
In Victor Kuncak and Rupak Majumdar, editors, 29th International
Conference on Computer Aided Verification (CAV 2017), volume 10426 of
Lecture Notes in Computer Science, pages 526-543. Springer, 2017.
[ bibtex |
abstract |
DOI |
http ]
@inproceedings{AlbertAGGS17,
author = {Elvira Albert and Puri Arenas and Mar{\'i}a Garc{\'i}a de la Banda and Miguel G\'{o}mez-Zamalloa and Peter J. Stuckey},
title = {{C}ontext {S}ensitive {D}ynamic {P}artial {O}rder {R}eduction},
editor = {Victor Kuncak and Rupak Majumdar},
booktitle = {29th International Conference on Computer Aided Verification (CAV 2017)},
pages = {526--543},
year = {2017},
series = {Lecture Notes in Computer Science},
volume = {10426},
publisher = {Springer},
url = {https://doi.org/10.1007/978-3-319-63387-9_26},
doi = {10.1007/978-3-319-63387-9_26},
isbn = {978-3-319-63386-2},
rank:grinscie:class = {1}
}
|
[27]
|
Elvira Albert, Samir Genaim, and Pablo Gordillo.
May-Happen-in-Parallel Analysis with Returned Futures.
In Deepak D'Souza and K.Narayan Kumar, editors, 15th
International Symposium on Automated Technology for Verification and
Analysis, ATVA 2017. Proceedings, volume 10482 of Lecture Notes in
Computer Science, pages 42-58. Springer, 2017.
[ bibtex |
abstract |
DOI |
PDF ]
May-Happen-in-Parallel (MHP) is a fundamental analysis
to reason about concurrent programs. It infers the
pairs of program points that may execute in
parallel, or interleave their execution. This
information is essential to prove, among other
things, absence of data races, deadlock freeness,
termination, and resource usage. This paper presents
an MHP analysis for asynchronous programs that use
futures as synchronization mechanism. Future
variables are available in most concurrent languages
(e.g., in the library concurrent of Java,
in the standard thread library of C++, and in Scala
and Python). The novelty of our analysis is that it
is able to infer MHP relations that involve future
variables that are returned by asynchronous
tasks. Futures are returned when a task needs to
await for another task created in an inner
scope, e.g., task t needs to await for the
termination of task p that is spawned by task q
that is spawned during the execution of t (not
necessarily by t). Thus, task p is awaited by
task t which is in an outer scope. The challenge
for the analysis is to (back)propagate the
synchronization of tasks through future variables
from inner to outer scopes.
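The returned-future pattern described above can be reproduced with Python's standard `concurrent.futures` (our minimal sketch, unrelated to the language analyzed in the paper): task q spawns p in an inner scope and returns p's future, so the outer task synchronizes on a task it never created.

```python
# Task q creates task p and hands p's future back to its caller, so
# the outer task t awaits a task created in an inner scope -- the
# inner-to-outer propagation the MHP analysis must track.
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=2) as pool:

    def p():
        return 42

    def q():
        # q spawns p and returns immediately, without awaiting it.
        return pool.submit(p)

    # t's body: obtain p's future through q and synchronize on it.
    fut_q = pool.submit(q)
    fut_p = fut_q.result()   # the future returned by q
    print(fut_p.result())    # 42
```

Statically, the analysis must discover that the value flowing out of q is itself a future, so that t's `result()` call is recognized as a synchronization point with p.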
@inproceedings{AlbertGG17,
author = {Elvira Albert and Samir Genaim and Pablo Gordillo},
title = {May-{H}appen-in-{P}arallel {A}nalysis with {R}eturned {F}utures},
editor = {Deepak D'Souza and K.Narayan Kumar},
booktitle = {15th International Symposium on Automated Technology for Verification and Analysis, {ATVA} 2017. Proceedings},
publisher = {Springer},
year = {2017},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-319-68167-2_3},
pages = {42--58},
volume = {10482},
isbn = {978-3-319-68167-2}
}
|
[28]
|
Jesús J. Doménech, Samir Genaim, Einar Broch Johnsen, and Rudolf Schlatte.
EasyInterface: A toolkit for rapid development of GUIs for
research prototype tools.
In Fundamental Approaches to Software Engineering: 20th
International Conference, FASE 2017, Held as Part of the European Joint
Conferences on Theory and Practice of Software, ETAPS 2017, Uppsala, Sweden,
April 22-29, 2017, Proceedings, Lecture Notes in Computer Science, pages
379-383. Springer, 2017.
[ bibtex |
abstract |
DOI |
http ]
EasyInterface is an open-source toolkit to develop web-based graphical user
interfaces (GUIs) for research prototype tools. This toolkit
enables researchers to make their tool prototypes available to the
community and to integrate them into a common environment, rapidly and
without being familiar with web programming or GUI libraries.
@inproceedings{DomenechGJS17,
author = {Jes{\'{u}}s J. Dom{\'{e}}nech and
Samir Genaim and
Einar Broch Johnsen and
Rudolf Schlatte},
title = {{E}asy{I}nterface: {A} toolkit for rapid development of {GUI}s for research prototype tools},
booktitle = {Fundamental Approaches to Software Engineering: 20th International Conference, FASE 2017, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017, Uppsala, Sweden, April 22-29, 2017, Proceedings},
publisher = {Springer},
pages = {379--383},
isbn = {978-3-662-54494-5},
doi = {10.1007/978-3-662-54494-5_22},
url = {http://dx.doi.org/10.1007/978-3-662-54494-5_22},
series = {Lecture Notes in Computer Science},
year = {2017}
}
|
[29]
|
Elvira Albert, Nikolaos Bezirgiannis, Frank de Boer, and Enrique Martin-Martin.
A Formal, Resource Consumption-Preserving Translation of
Actors to Haskell.
In Manuel V Hermenegildo and Pedro Lopez-Garcia, editors,
Logic-Based Program Synthesis and Transformation: 26th International
Symposium, LOPSTR 2016, Edinburgh, UK, September 6-8, 2016, Revised Selected
Papers, volume 10184 of Lecture Notes in Computer Science, pages
21-37. Springer International Publishing, 2017.
[ bibtex |
abstract |
DOI |
http ]
@inproceedings{AlbertBBM16,
author = {Elvira Albert and Nikolaos Bezirgiannis and Frank de Boer and Enrique Martin-Martin},
editor = {Hermenegildo, Manuel V and Lopez-Garcia, Pedro},
title = {{A} {F}ormal, {R}esource {C}onsumption-{P}reserving {T}ranslation of {A}ctors to {H}askell},
booktitle = {Logic-Based Program Synthesis and Transformation: 26th International Symposium, LOPSTR 2016, Edinburgh, UK, September 6--8, 2016, Revised Selected Papers},
year = {2017},
publisher = {Springer International Publishing},
pages = {21--37},
series = {Lecture Notes in Computer Science},
volume = 10184,
isbn = {978-3-319-63139-4},
doi = {10.1007/978-3-319-63139-4_2},
url = {https://doi.org/10.1007/978-3-319-63139-4_2}
}
|
[30]
|
Elvira Albert, Puri Arenas, and Miguel Gómez-Zamalloa.
Testing of Concurrent and Imperative Software Using CLP.
In Proceedings of the 18th International Symposium on Principles
and Practice of Declarative Programming, Edinburgh, United Kingdom, September
5-7, 2016, pages 1-8. ACM, 2016.
[ bibtex |
abstract |
DOI |
PDF |
http ]
@inproceedings{AlbertAG16,
author = {Elvira Albert and
Puri Arenas and
Miguel G{\'{o}}mez{-}Zamalloa},
title = {Testing of {C}oncurrent and {I}mperative {S}oftware Using {CLP}},
booktitle = {Proceedings of the 18th International Symposium on Principles and
Practice of Declarative Programming, Edinburgh, United Kingdom, September
5-7, 2016},
pages = {1--8},
year = {2016},
publisher = {ACM},
doi = {10.1145/2967973.2968593},
url = {http://dx.doi.org/10.1145/2967973.2968593}
}
|
[31]
|
Elvira Albert, Miguel Gómez-Zamalloa, and Miguel Isabel.
Combining Static Analysis and Testing for Deadlock
Detection.
In Erika Ábrahám and Marieke Huisman, editors,
Integrated Formal Methods - 12th International Conference, IFM 2016,
Reykjavik, Iceland, June 1-5, 2016, Proceedings, volume 9681 of Lecture
Notes in Computer Science, pages 409-424. Springer, 2016.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Static deadlock analyzers might be able to verify the absence of
deadlock. However, they are usually not able to detect its presence.
Also, when they detect a potential deadlock cycle, they provide
little (or even no) information in their output. Due to the complex
flow of concurrent programs, the user might not be able to find the
source of the anomalous behaviour from the abstract information
computed by static analysis. This paper proposes the combined use
of static analysis and testing for effective deadlock detection in
asynchronous programs. When the program features a deadlock, our
combined use of analysis and testing provides an effective technique
to catch deadlock traces. If instead the program is deadlock-free but
the analyzer inaccurately spotted a potential deadlock, we might be
able to prove deadlock freedom.
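One classic static over-approximation of deadlock, given here as our own minimal sketch rather than the paper's analysis, is a lock-order graph: record the order in which each task acquires locks and flag a potential deadlock when the graph has a cycle; testing would then try to realize or refute the flagged cycle.

```python
# Build a lock-order graph from each task's lock acquisition order and
# report a potential deadlock when the graph has a cycle.

def lock_order_edges(tasks):
    edges = set()
    for order in tasks.values():
        for i, a in enumerate(order):
            for b in order[i + 1:]:
                edges.add((a, b))      # a is held while b is requested
    return edges

def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    def reachable(src, dst, seen=frozenset()):
        return any(n == dst or (n not in seen and
                                reachable(n, dst, seen | {n}))
                   for n in graph.get(src, ()))
    return any(reachable(n, n) for n in graph)

# Task t1 takes lock A then B; task t2 takes B then A: potential deadlock.
assert has_cycle(lock_order_edges({"t1": ["A", "B"], "t2": ["B", "A"]}))
# Consistent acquisition order: no cycle, deadlock-free.
assert not has_cycle(lock_order_edges({"t1": ["A", "B"], "t2": ["A", "B"]}))
print("ok")
```

Like the static phase in the paper, this check is sound but imprecise: a cycle only signals a *potential* deadlock, and a testing phase is needed to produce a concrete deadlock trace or rule the cycle out.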
@inproceedings{AlbertGI16b,
author = {Elvira Albert and Miguel G\'omez-Zamalloa and Miguel Isabel},
title = {Combining {S}tatic {A}nalysis and {T}esting for {D}eadlock {D}etection},
booktitle = {Integrated Formal Methods - 12th International Conference, {IFM} 2016,
Reykjavik, Iceland, June 1-5, 2016, Proceedings},
publisher = {Springer},
editor = {Erika {\'{A}}brah{\'{a}}m and
Marieke Huisman},
year = {2016},
pages = {409--424},
series = {Lecture Notes in Computer Science},
volume = {9681},
doi = {10.1007/978-3-319-33693-0_26},
url = {http://dx.doi.org/10.1007/978-3-319-33693-0_26}
}
|
[32]
|
Elvira Albert, Miguel Gómez-Zamalloa, and Miguel Isabel.
SYCO: A Systematic Testing Tool for Concurrent Objects.
In Ayal Zaks and Manuel V. Hermenegildo, editors, Proceedings of
the 25th International Conference on Compiler Construction, CC 2016,
Barcelona, Spain, March 12-18, 2016, pages 269-270. ACM, 2016.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We present the concepts, usage and prototypical implementation of
SYCO: a SYstematic testing tool for Concurrent Objects. The
system receives as input a program, a selection of methods to be
tested, and a set of initial values for its parameters. SYCO offers
a visual web interface to carry out the testing process and
visualize the results of the different executions as well as the
sequences of tasks scheduled as a sequence diagram. Its kernel
includes state-of-the-art partial-order reduction techniques to
avoid redundant computations during testing.
Besides, SYCO incorporates an option to effectively
catch deadlock errors. In particular, it uses advanced
techniques which guide the execution towards potential deadlock
paths and discard paths that are guaranteed to be deadlock free.
@inproceedings{AlbertGI16,
editor = {Ayal Zaks and Manuel V. Hermenegildo},
author = {Elvira Albert and Miguel G\'omez-Zamalloa and Miguel Isabel},
title = {SYCO: A {S}ystematic {T}esting {T}ool for {C}oncurrent {O}bjects},
booktitle = {Proceedings of the 25th International Conference on Compiler Construction,
{CC} 2016, Barcelona, Spain, March 12-18, 2016},
year = {2016},
pages = {269--270},
publisher = {ACM},
doi = {10.1145/2892208.2892236},
url = {http://dx.doi.org/10.1145/2892208.2892236}
}
|
[33]
|
Amir M. Ben-Amram and Samir Genaim.
Complexity of Bradley-Manna-Sipma Lexicographic Ranking
Functions.
In Daniel Kroening and Corina S. Pasareanu, editors, 27th
International Conference on Computer Aided Verification (CAV 2015), volume
9207 of Lecture Notes in Computer Science, pages 304-321. Springer,
July 2015.
[ bibtex |
abstract |
DOI |
arXiv |
PDF |
http ]
In this paper we turn the spotlight on a class of
lexicographic ranking functions introduced by
Bradley, Manna and Sipma in a seminal CAV 2005
paper, and establish for the first time the
complexity of some problems involving the inference
of such functions for linear-constraint loops
(without precondition). We show that finding such a
function, if one exists, can be done in polynomial
time in a way which is sound and complete when the
variables range over the rationals (or reals). We
show that when variables range over the integers,
the problem is harder: deciding the existence of a
ranking function is coNP-complete. Next, we study
the problem of minimizing the number of components
in the ranking function (a.k.a. the dimension). This
number is interesting in contexts like computing
iteration bounds and loop
parallelization. Surprisingly, and unlike the
situation for some other classes of lexicographic
ranking functions, we find that even deciding
whether a two-component ranking function exists is
harder than the unrestricted problem: NP-complete
over the rationals and Σ₂ᵖ-complete over
the integers.
@inproceedings{Ben-AmramG15,
author = {Ben-Amram, Amir M. and Genaim, Samir},
title = {{C}omplexity of {B}radley-{M}anna-{S}ipma {L}exicographic {R}anking {F}unctions},
editor = {Daniel Kroening and Corina S. Pasareanu},
booktitle = {27th International Conference on Computer Aided Verification (CAV 2015)},
year = {2015},
month = jul,
pages = {304--321},
volume = {9207},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
doi = {10.1007/978-3-319-21668-3_18},
url = {http://dx.doi.org/10.1007/978-3-319-21668-3_18},
arxiv = {http://arxiv.org/abs/1504.05018}
}
|
[34]
|
L. Bozzelli and D. Pearce.
On the Complexity of Temporal Equilibrium Logic.
In Logic in Computer Science (LICS), 2015 30th Annual ACM/IEEE
Symposium on, pages 645-656, 2015.
[ bibtex |
abstract |
DOI |
http ]
Temporal Equilibrium Logic (TEL) [1] is a promising framework that
extends the knowledge representation and reasoning capabilities of Answer
Set Programming with temporal operators in the style of LTL. To our
knowledge it is the first nonmonotonic logic that fully accommodates the
syntax of a standard temporal logic (specifically LTL) without requiring
further constructions. This paper provides a systematic complexity
analysis for the (consistency) problem of checking the existence of a
temporal equilibrium model of a TEL formula. It was previously shown
that this problem in the general case lies somewhere between PSPACE and
EXPSPACE. Here we establish a lower bound matching the EXPSPACE upper
bound in [2]. Additionally we analyse the complexity for various natural
subclasses of TEL formulas, identifying both tractable and intractable
fragments. Finally, the paper offers some new insights on the logic LTL
by addressing satisfiability for minimal LTL models. The complexity
results obtained highlight a substantial difference between interpreting
LTL over finite or infinite words.
@inproceedings{Bozzelli15,
author = {L. Bozzelli and D. Pearce},
booktitle = {Logic in Computer Science (LICS), 2015 30th Annual ACM/IEEE Symposium on},
title = {{O}n the {C}omplexity of {T}emporal {E}quilibrium {L}ogic},
year = {2015},
pages = {645--656},
doi = {10.1109/LICS.2015.65},
url = {http://dl.acm.org/citation.cfm?id=2876571},
issn = {1043-6871}
}
|
[35]
|
Pierre Ganty, Samir Genaim, Ratan Lal, and Pavithra Prabhakar.
From Non-Zenoness Verification to Termination.
In Proceedings of the 13th ACM/IEEE International Conference
on Formal Methods and Models for Codesign, MEMOCODE'15, pages 228-237.
IEEE, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We investigate the problem of verifying the absence
of zeno executions in a hybrid system. A zeno
execution is one in which there are infinitely many
discrete transitions in a finite time interval. The
presence of zeno executions poses challenges towards
implementation and analysis of hybrid control
systems. We present a simple transformation of the
hybrid system which reduces the non-zenoness
verification problem to the termination verification
problem, that is, the original system has no zeno
executions if and only if the transformed system has
no non-terminating executions. This provides both
theoretical insights and practical techniques for
non-zenoness verification. Further, it also
provides techniques for isolating parts of the
hybrid system and its initial states which do not
exhibit zeno executions. We illustrate the
feasibility of our approach by applying it on hybrid
system examples.
@inproceedings{GantyGLP15,
author = {Pierre Ganty and Samir Genaim and Ratan Lal and Pavithra Prabhakar},
title = {From Non-Zenoness Verification to Termination},
booktitle = {Proceedings of the 13th {ACM/IEEE} International Conference on Formal Methods and Models for Codesign, {MEMOCODE}'15},
pages = {228--237},
year = {2015},
doi = {10.1109/MEMCOD.2015.7340490},
url = {http://dx.doi.org/10.1109/MEMCOD.2015.7340490},
publisher = {{IEEE}},
isbn = {978-1-5090-0237-5}
}
|
[36]
|
Elvira Albert, Puri Arenas, Jesús Correas, Samir Genaim, Miguel Gómez-Zamalloa, Enrique Martin-Martin, Germán Puebla, and
Guillermo Román-Díez.
Resource Analysis: From Sequential to Concurrent and
Distributed Programs.
In Nikolaj Bjørner and Frank D. de Boer, editors, FM 2015:
Formal Methods - 20th International Symposium, Oslo, Norway, June 24-26,
2015, Proceedings, volume 9109 of Lecture Notes in Computer Science,
pages 3-17. Springer, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
@inproceedings{AlbertACGGMPR15,
author = {Elvira Albert and
Puri Arenas and
Jes{\'{u}}s Correas and
Samir Genaim and
Miguel G{\'{o}}mez{-}Zamalloa and
Enrique Martin{-}Martin and
Germ{\'{a}}n Puebla and
Guillermo Rom{\'{a}}n{-}D{\'{\i}}ez},
title = {{R}esource {A}nalysis: {F}rom {S}equential to {C}oncurrent and {D}istributed {P}rograms},
booktitle = {{FM} 2015: Formal Methods - 20th International Symposium, Oslo, Norway,
June 24-26, 2015, Proceedings},
pages = {3--17},
year = {2015},
doi = {10.1007/978-3-319-19249-9_1},
url = {https://doi.org/10.1007/978-3-319-19249-9_1},
editor = {Nikolaj Bj{\o}rner and Frank D. de Boer},
series = {Lecture Notes in Computer Science},
isbn = {978-3-319-19248-2},
volume = {9109},
publisher = {Springer},
rank:grinscie:class = {2}
}
|
[37]
|
Elvira Albert, Puri Arenas, and Miguel Gómez-Zamalloa.
Test Case Generation of Actor Systems.
In 13th International Symposium on Automated Technology for
Verification and Analysis, ATVA 2015. Proceedings, volume 9364 of
Lecture Notes in Computer Science, pages 259-275. Springer, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Testing is a vital part of the software development process. It is
even more so in the context of concurrent languages, since due to
undesired task interleavings and to unexpected behaviours of the
underlying task scheduler, errors can easily go undetected. Test
case generation (TCG) is the process of automatically generating
test inputs for interesting coverage criteria, which
are then applied to the system under test. This paper presents a
TCG framework for actor systems, which consists of three main
elements, which are the original contributions of this work: (1) a
symbolic execution calculus, which allows symbolically executing the
program (i.e., executing the program for unknown input data), (2)
improved techniques to avoid performing redundant computations
during symbolic execution, (3) new termination and coverage criteria,
which ensure the termination of symbolic execution and guarantee
that the test cases provide the desired degree of code coverage.
Finally, our framework has been
implemented and evaluated within the aPET system.
@inproceedings{AlbertAGM15,
author = {Elvira Albert and Puri Arenas and Miguel G\'omez-Zamalloa},
title = {Test {C}ase {G}eneration of {A}ctor {S}ystems},
booktitle = {13th International Symposium on Automated Technology for Verification and Analysis, {ATVA} 2015. Proceedings},
publisher = {Springer},
year = {2015},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-319-24953-7_21},
url = {http://dx.doi.org/10.1007/978-3-319-24953-7_21},
pages = {259--275},
volume = {9364},
isbn = {978-3-319-24952-0}
}
|
[38]
|
Elvira Albert, Samir Genaim, and Pablo Gordillo.
May-Happen-in-Parallel Analysis for Asynchronous Programs
with Inter-Procedural Synchronization.
In Sandrine Blazy and Thomas P. Jensen, editors, Static Analysis
- 22nd International Symposium, SAS 2015. Proceedings, volume 9291 of
Lecture Notes in Computer Science, pages 72-89. Springer, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
A may-happen-in-parallel (MHP) analysis
computes pairs of program points that may execute in
parallel across different distributed
components. This information has been proven to be
essential to infer both safety properties (e.g.,
deadlock freedom) and liveness properties
(termination and resource boundedness) of
asynchronous programs. Existing MHP analyses take
advantage of the synchronization points to learn
that one task has finished and thus will not happen
in parallel with other tasks that are still
active. Our starting point is an existing MHP
analysis developed for intra-procedural
synchronization, i.e., it only allows
synchronizing with tasks that have been spawned
inside the current task. This paper leverages such
MHP analysis to handle inter-procedural
synchronization, i.e., a task spawned by one task
can be awaited within a different task. This is
challenging because task synchronization goes beyond
the boundaries of methods, and thus the inference of
MHP relations requires novel extensions to capture
inter-procedural dependencies. The analysis has
been implemented and it can be tried online.
@inproceedings{AlbertGG15,
author = {Elvira Albert and Samir Genaim and Pablo Gordillo},
title = {May-{H}appen-in-{P}arallel {A}nalysis for {A}synchronous {P}rograms with {I}nter-{P}rocedural {S}ynchronization},
booktitle = {Static Analysis - 22nd International Symposium, {SAS} 2015. Proceedings},
editor = {Sandrine Blazy and
Thomas P. Jensen},
publisher = {Springer},
year = {2015},
rank:grinscie:class = {2},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-662-48288-9_5},
url = {http://dx.doi.org/10.1007/978-3-662-48288-9_5},
pages = {72--89},
volume = {9291},
isbn = {978-3-662-48287-2}
}
|
[39]
|
Elvira Albert, Jesús Correas, Einar Broch Johnsen, and Guillermo Román-Díez.
Parallel Cost Analysis of Distributed Systems.
In Static Analysis - 22nd International Symposium, SAS 2015.
Proceedings, volume 9291 of Lecture Notes in Computer Science, pages
275-292. Springer, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We present a novel static analysis to infer the parallel cost of
distributed systems. Parallel cost differs from the standard notion
of serial cost by exploiting the truly concurrent execution model of
distributed processing to capture the cost of synchronized tasks
executing in parallel. It is challenging to infer parallel cost
because one needs to soundly infer the parallelism between tasks while
accounting for waiting and idle processor times at the different
locations. Our analysis works in three phases: (1) It first performs
a block-level analysis to estimate the serial costs of the blocks
between synchronization points in the program; (2) Next, it constructs
a distributed flow graph (DFG) to capture the parallelism, the waiting
and idle times at the locations of the distributed system; Finally,
(3) the parallel cost can be obtained as the path of maximal cost in
the DFG. A prototype implementation demonstrates the accuracy and
feasibility of the proposed analysis.
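Phase (3) of the analysis, obtaining the parallel cost as the path of maximal cost in the DFG, amounts to a longest-path computation over a DAG. The following minimal sketch illustrates that step only; node names and numeric block costs are hypothetical, and the paper's analysis works with symbolic cost expressions rather than concrete numbers:

```python
from collections import defaultdict

def parallel_cost(edges, block_cost):
    """Maximal-cost path in a DAG whose nodes carry block costs.

    edges: list of (src, dst) pairs of the distributed flow graph (DFG);
    block_cost: serial cost of each synchronization-free block, as
    phase (1) of the analysis would estimate it.
    """
    succs = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set(block_cost)
    for a, b in edges:
        succs[a].append(b)
        indeg[b] += 1
        nodes.update((a, b))
    # Kahn topological traversal, accumulating the costliest path
    # reaching each node.
    best = {n: block_cost.get(n, 0) for n in nodes}
    queue = [n for n in nodes if indeg[n] == 0]
    while queue:
        n = queue.pop()
        for m in succs[n]:
            best[m] = max(best[m], best[n] + block_cost.get(m, 0))
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return max(best.values())
```

For instance, with blocks a(2), b(3), c(5), d(1) and edges a→b, a→c, b→d, c→d, the costliest path is a-c-d with cost 8.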
@inproceedings{AlbertCJR15,
author = {Elvira Albert and
Jes\'us Correas and
Einar Broch Johnsen and
Guillermo Rom\'an-D\'iez},
title = {{P}arallel {C}ost {A}nalysis of {D}istributed {S}ystems},
booktitle = {Static Analysis - 22nd International Symposium, {SAS} 2015. Proceedings},
publisher = {Springer},
year = {2015},
rank:grinscie:class = {2},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-662-48288-9_16},
url = {https://doi.org/10.1007/978-3-662-48288-9_16},
pages = {275--292},
volume = {9291},
isbn = {978-3-662-48287-2}
}
|
[40]
|
Elvira Albert, Jesús Correas, and Guillermo Román-Díez.
Non-Cumulative Resource Analysis.
In Proceedings of 21st International Conference on Tools and
Algorithms for the Construction and Analysis of Systems (TACAS 2015), volume
9035 of Lecture Notes in Computer Science, pages 85-100. Springer,
2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Existing cost analysis frameworks have been defined for cumulative resources
which keep on increasing along the computation. Traditional cumulative
resources are execution time, number of executed steps, amount of memory
allocated, and energy consumption. Non-cumulative resources are acquired and
(possibly) released along the execution. Examples of non-cumulative cost are
memory usage in the presence of garbage collection, the number of connections
established that are later closed, or resources requested from a virtual host
which are released after use.
We present, to the best of our knowledge, the first generic static analysis
framework to infer an upper bound on the peak cost for
non-cumulative types of resources. Our analysis comprises several components:
(1) a pre-analysis to infer when resources are being used simultaneously, (2)
a program-point resource analysis which infers an upper bound on the
cost at the points of interest (namely the points where resources are
acquired) and (3) the elimination from the upper bounds obtained in (2) of
those resources accumulated that are not used simultaneously. We report on a
prototype implementation of our analysis that can be used on a simple
imperative language.
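The peak of a non-cumulative resource is easy to state operationally: it is the maximum amount held simultaneously between acquires and releases. The sketch below computes that quantity from a concrete event trace; it only illustrates the property being bounded — the analysis in the paper infers an upper bound on it statically, without executing the program:

```python
def peak_cost(events):
    """Peak usage of a non-cumulative resource along one execution.

    events: sequence of (op, amount) pairs, op in {"acquire", "release"}.
    Unlike cumulative cost, the running total both increases and
    decreases; the peak is its maximum over the whole trace.
    """
    held = peak = 0
    for op, amount in events:
        held += amount if op == "acquire" else -amount
        peak = max(peak, held)
    return peak
```

A trace that acquires 3, acquires 2, releases 3, then acquires 4 holds at most 6 units at once, even though 9 units are acquired in total — the gap a cumulative analysis cannot see.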
@inproceedings{AlbertCR15,
author = {Elvira Albert and Jes\'us Correas and Guillermo Rom\'{a}n-D\'{i}ez},
title = {{N}on-{C}umulative {R}esource {A}nalysis},
booktitle = {Proceedings of 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2015)},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
year = {2015},
doi = {10.1007/978-3-662-46681-0_6},
url = {https://doi.org/10.1007/978-3-662-46681-0_6},
pages = {85--100},
volume = {9035},
isbn = {978-3-662-46680-3},
rank:grinscie:class = {1}
}
|
[41]
|
Elvira Albert, Jesús Correas, Enrique Martín-Martín, and Guillermo Román-Díez.
Static Inference of Transmission Data Sizes in
Distributed Systems.
In 6th International Symposium On Leveraging Applications of
Formal Methods, Verification and Validation (ISOLA'14), volume 8803 of
Lecture Notes in Computer Science, pages 104-119. Springer, 2014.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We present a static analysis to infer the amount of data that a
distributed system may transmit. The different locations of a
distributed system communicate and coordinate their actions by
posting tasks among them. A task is posted by building a message
with the task name and the data on which the task has to be
executed. When the task completes, the result can be retrieved by
means of another message from which the result of the computation
can be obtained. Thus, the transmission data size of a distributed
system mainly depends on the amount of messages posted among the
locations of the system, and the sizes of the data transferred in
the messages. Our static analysis has two main parts: (1) we
over-approximate the sizes of the data at the program points where
tasks are spawned and where the results are received, and (2) we
over-approximate the total number of messages. Knowledge of the
transmission data sizes is essential, among other things, to predict
the bandwidth required to achieve a certain response time, or
conversely, to estimate the response time for a given bandwidth. A
prototype implementation in the SACO system demonstrates the
accuracy and feasibility of the proposed analysis.
@inproceedings{AlbertCMR14,
author = {Elvira Albert and
Jes\'us Correas and
Enrique Mart\'in-Mart\'in and
Guillermo Rom\'an-D\'iez},
title = {{S}tatic {I}nference of {T}ransmission {D}ata {S}izes in {D}istributed {S}ystems},
booktitle = {6th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation (ISOLA'14)},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {8803},
pages = {104--119},
year = {2014},
url = {http://dx.doi.org/10.1007/978-3-662-45231-8_8},
doi = {10.1007/978-3-662-45231-8_8}
}
|
[42]
|
Elvira Albert, Jesús Correas, and Guillermo Román-Díez.
Peak Cost Analysis of Distributed Systems.
In Markus Müller-Olm and Helmut Seidl, editors, Static
Analysis - 21st International Symposium, SAS 2014, Munich, Germany,
September 11-13, 2014. Proceedings, volume 8723 of Lecture Notes in
Computer Science, pages 18-33. Springer, 2014.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We present a novel static analysis to infer the peak cost of
distributed systems. The different locations of a distributed
system communicate and coordinate their actions by posting tasks
among them. Thus, the amount of work that each location has to
perform can greatly vary along the execution depending on: (1) the
amount of tasks posted to its queue, (2) their respective costs, and
(3) the fact that they may be posted in parallel and thus be pending
to execute simultaneously. The peak cost of a distributed
location refers to the maximum cost that it needs to carry out
along its execution.
Inferring the peak cost is challenging because it increases and
decreases along the execution, unlike the standard notion of
total cost which is cumulative. Our key contribution is the
novel notion of quantified queue configuration which captures
the worst-case cost of the tasks that may be simultaneously
pending to execute at each location along the execution.
Our analysis framework is developed for a generic notion of cost
that can be instantiated to infer the peak cost in terms of number
of steps, the amount of memory allocated, the number of tasks, etc.
A prototype implementation demonstrates the accuracy and
feasibility of the proposed peak cost analysis.
@inproceedings{AlbertCR14,
author = {Elvira Albert and
Jes\'us Correas and
Guillermo Rom\'an-D\'iez},
title = {{P}eak {C}ost {A}nalysis of {D}istributed {S}ystems},
editor = {Markus M{\"{u}}ller{-}Olm and
Helmut Seidl},
booktitle = {Static Analysis - 21st International Symposium, {SAS} 2014, Munich,
Germany, September 11-13, 2014. Proceedings},
publisher = {Springer},
year = {2014},
rank:grinscie:class = {2},
volume = {8723},
series = {Lecture Notes in Computer Science},
url = {http://dx.doi.org/10.1007/978-3-319-10936-7_2},
doi = {10.1007/978-3-319-10936-7_2},
isbn = {978-3-319-10935-0},
pages = {18--33}
}
|
[43]
|
Elvira Albert, Puri Arenas, and Miguel Gómez-Zamalloa.
Actor- and Task-Selection Strategies for Pruning Redundant
State-Exploration in Testing.
In Erika Ábrahám and Catuscia Palamidessi, editors, 34th
IFIP International Conference on Formal Techniques for Distributed Objects,
Components and Systems (FORTE 2014), volume 8461 of Lecture Notes in
Computer Science, pages 49-65. Springer, 2014.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Testing concurrent systems requires exploring all possible
non-deterministic interleavings that the concurrent execution may
have. This is because any of the interleavings may reveal the
erroneous behaviour. In testing of actor systems, we can distinguish
two sources of non-determinism: (1) actor-selection, the
order in which actors are explored and (2) task-selection,
the order in which the messages (or tasks) within each actor are
explored. This paper provides new strategies and heuristics for
pruning redundant state-exploration when testing actor systems by
reducing the amount of unnecessary non-determinism. First, we
propose a method and heuristics for actor-selection based on
tracking the amount and the type of interactions among actors.
Second, we can avoid further redundant interleavings in task-selection
by taking into account the access to the shared-memory that
the tasks make.
@inproceedings{AlbertAG14,
author = {Elvira Albert and
Puri Arenas and
Miguel G\'omez-Zamalloa},
title = {Actor- and {T}ask-{S}election {S}trategies for {P}runing {R}edundant {S}tate-{E}xploration in {T}esting},
booktitle = {34th IFIP International Conference on Formal Techniques for Distributed Objects, Components and Systems (FORTE 2014)},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
year = {2014},
doi = {10.1007/978-3-662-43613-4_4},
url = {https://doi.org/10.1007/978-3-662-43613-4_4},
pages = {49--65},
volume = {8461},
editor = {Erika {\'A}brah{\'a}m and Catuscia Palamidessi}
}
|
[44]
|
Elvira Albert, Puri Arenas, Antonio Flores-Montoya, Samir Genaim, Miguel Gómez-Zamalloa, Enrique Martin-Martin, Germán Puebla, and Guillermo Román-Díez.
SACO: Static Analyzer for Concurrent Objects.
In Erika Ábrahám and Klaus Havelund, editors, Tools and
Algorithms for the Construction and Analysis of Systems - 20th International
Conference, TACAS 2014, volume 8413 of Lecture Notes in Computer
Science, pages 562-567. Springer, 2014.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We present the main concepts, usage and implementation of SACO,
a static analyzer for concurrent objects. Interestingly, SACO is
able to infer both liveness (namely termination and resource
boundedness) and safety properties (namely deadlock freeness)
of programs based on concurrent objects. The system integrates
auxiliary analyses such as points-to and
may-happen-in-parallel, which are essential for increasing the
accuracy of the aforementioned more complex properties. SACO
provides accurate information about the dependencies which may
introduce deadlocks, loops whose termination is not guaranteed, and
upper bounds on the resource consumption of methods.
@inproceedings{AlbertAFGGMPR14,
author = {Elvira Albert and
Puri Arenas and
Antonio Flores-Montoya and
Samir Genaim and
Miguel G\'omez-Zamalloa and
Enrique Martin-Martin and
Germ\'an Puebla and
Guillermo Rom\'an-D\'iez},
title = {{SACO}: {S}tatic {A}nalyzer for {C}oncurrent {O}bjects},
booktitle = {Tools and Algorithms for the Construction and Analysis of
Systems - 20th International Conference, TACAS 2014},
year = {2014},
editor = {Erika {\'A}brah{\'a}m and
Klaus Havelund},
pages = {562--567},
volume = {8413},
series = {Lecture Notes in Computer Science},
isbn = {978-3-642-54861-1},
publisher = {Springer},
doi = {10.1007/978-3-642-54862-8_46},
url = {https://doi.org/10.1007/978-3-642-54862-8_46},
rank:grinscie:class = {3}
}
|
[45]
|
Elvira Albert, Samir Genaim, and Raúl Gutiérrez.
A Transformational Approach to Resource Analysis with
Typed-Norms.
In Proc. of the 23rd International Symposium on Logic-based
Program Synthesis and Transformation (LOPSTR'13), volume 8901 of
Lecture Notes in Computer Science, pages 38-53. Springer, 2014.
[ bibtex |
abstract |
PDF |
http ]
In order to automatically infer the resource
consumption of programs, analyzers track how
data sizes change along the program's execution.
Typically, analyzers measure the sizes of data by
applying norms which are mappings from data
to natural numbers that represent the sizes of the
corresponding data. When norms are defined by taking
type information into account, they are named
typed-norms. The main contribution of this
extended abstract is a transformational approach to
resource analysis with typed-norms. The analysis is
based on a transformation of the program into an
intermediate abstract program in which each
variable is abstracted with respect to all
considered norms which are valid for its type. We
also sketch a simple analysis that can be used to
automatically infer the required, useful typed-norms from programs.
@inproceedings{AlbertGG13,
author = {Elvira Albert and Samir Genaim and Ra{\'u}l Guti{\'e}rrez},
title = {A {T}ransformational {A}pproach to {R}esource {A}nalysis with {T}yped-{N}orms},
booktitle = {Proc. of the 23rd International Symposium on Logic-based Program Synthesis and Transformation (LOPSTR'13)},
year = {2014},
pages = {38--53},
url = {https://doi.org/10.1007/978-3-319-14125-1_3},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {8901}
}
|
[46]
|
Elvira Albert, Samir Genaim, and Enrique Martin-Martin.
May-Happen-in-Parallel Analysis for Priority-based
Scheduling.
In Ken McMillan, Aart Middeldorp, and Andrei Voronkov, editors,
19th International Conference on Logic for Programming, Artificial
Intelligence, and Reasoning (LPAR-19), volume 8312 of Lecture Notes in
Computer Science, pages 18-34. Springer, December 2013.
[ bibtex |
abstract |
DOI |
PDF |
http ]
A may-happen-in-parallel (MHP) analysis infers the sets of
pairs of program points that may execute in parallel along a
program's execution. This is an essential piece of information to
detect data races, and also to infer more complex properties of
concurrent programs, e.g., deadlock freeness, termination and
resource consumption analyses can greatly benefit from the MHP
relations to increase their accuracy. Previous MHP analyses have
assumed a worst case scenario by adopting a simplistic
(non-deterministic) task scheduler which can select any available
task. While the results of the analysis for a non-deterministic
scheduler are obviously sound, they can lead to an overly
pessimistic result. We present an MHP analysis for an asynchronous
language with prioritized task buffers.
Priority-based scheduling is arguably the most common
scheduling strategy adopted in the implementation of concurrent
languages. The challenge is to be able to take task priorities into
account at static analysis time in order to filter out unfeasible
MHP pairs.
@inproceedings{AlbertGM13,
author = {Elvira Albert and Samir Genaim and Enrique Martin-Martin},
title = {May-{H}appen-in-{P}arallel {A}nalysis for {P}riority-based {S}cheduling},
booktitle = {19th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR-19)},
editor = {McMillan, Ken and Middeldorp, Aart and Voronkov, Andrei},
volume = {8312},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
isbn = {978-3-642-45220-8},
month = dec,
year = {2013},
pages = {18--34},
doi = {10.1007/978-3-642-45221-5_2},
url = {http://dx.doi.org/10.1007/978-3-642-45221-5_2}
}
|
[47]
|
Pierre Ganty and Samir Genaim.
Proving Termination Starting from the End.
In Natasha Sharygina and Helmut Veith, editors, 25th
International Conference on Computer Aided Verification (CAV 2013), volume
8044 of Lecture Notes in Computer Science, pages 397-412. Springer,
July 2013.
[ bibtex |
abstract |
DOI |
http ]
We present a novel technique for proving program
termination which introduces a new dimension of
modularity. Existing techniques use the program to
incrementally construct a termination proof. While
the proof keeps changing, the program remains the
same. Our technique goes a step further. We show
how to use the current partial proof to partition
the transition relation into those behaviors known
to be terminating from the current proof, and those
whose status (terminating or not) is not known
yet. This partition enables a new and unexplored
dimension of incremental reasoning on the program
side. In addition, we show that our approach
naturally applies to conditional termination which
searches for a precondition ensuring termination.
We further report on a prototype implementation that
advances the state-of-the-art on the grounds of
termination and conditional termination.
@inproceedings{GantyG13,
author = {Pierre Ganty and Samir Genaim},
title = {{P}roving {T}ermination {S}tarting from the {E}nd},
editor = {Natasha Sharygina and Helmut Veith},
booktitle = {25th International Conference on Computer Aided Verification (CAV 2013)},
year = {2013},
month = jul,
pages = {397--412},
volume = {8044},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
url = {http://dx.doi.org/10.1007/978-3-642-39799-8_27},
doi = {10.1007/978-3-642-39799-8_27}
}
|
[48]
|
Antonio Flores-Montoya, Elvira Albert, and Samir Genaim.
May-Happen-in-Parallel based Deadlock Analysis for
Concurrent Objects.
In Dirk Beyer and Michele Boreale, editors, Formal Techniques
for Distributed Systems (FMOODS/FORTE 2013), volume 7892 of Lecture
Notes in Computer Science, pages 273-288. Springer, June 2013.
[ bibtex |
abstract |
http ]
We present a novel deadlock analysis for concurrent objects based on
the results inferred by a points-to analysis and a
may-happen-in-parallel (MHP) analysis. Similarly to other analyses,
we build a dependency graph such that the absence of cycles
in the graph ensures deadlock freeness. An MHP
analysis provides an over-approximation of the pairs of program
points that may be running in parallel. The crux of the method is
that the analysis integrates the MHP information within the dependency
graph in order to discard unfeasible cycles that otherwise would
lead to false positives. We argue that our analysis is more precise
and/or efficient than previous proposals for deadlock analysis of
concurrent objects. As regards accuracy, we are able to handle cases
that other analyses have pointed out as challenges. As regards
efficiency, the complexity of our deadlock analysis is polynomial.
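The crux of the method — build a dependency graph, then use MHP information to discard cycles whose program points cannot actually run in parallel — can be sketched on a toy graph. The encoding below (nodes as strings, MHP information as a set of unordered pairs) is hypothetical and far simpler than the points-to-based dependency graphs of the paper:

```python
from itertools import combinations

def feasible_cycles(edges, mhp_pairs):
    """Enumerate simple cycles of a dependency graph, keeping only those
    all of whose points may happen in parallel according to MHP info.

    edges: dict mapping each node to its set of successors;
    mhp_pairs: set of frozensets {p, q} of points that may run in
    parallel. Cycles containing a non-MHP pair are unfeasible and are
    discarded, which is how MHP data removes false deadlock positives.
    """
    cycles = []

    def dfs(start, node, path):
        for nxt in edges.get(node, ()):
            if nxt == start:
                cycles.append(path[:])
            elif nxt not in path and nxt > start:
                # Only extend with nodes "above" the start, so each
                # cycle is reported once, from its smallest node.
                path.append(nxt)
                dfs(start, nxt, path)
                path.pop()

    for s in sorted(edges):
        dfs(s, s, [s])

    def feasible(cycle):
        return all(frozenset(pair) in mhp_pairs
                   for pair in combinations(cycle, 2))

    return [c for c in cycles if feasible(c)]
```

On a graph with cycles a↔b and c↔d where only the pair {a, b} may happen in parallel, the cycle c-d is discarded and only a-b remains a deadlock candidate.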
@inproceedings{FloresAG13,
author = {Antonio Flores-Montoya and Elvira Albert and Samir Genaim},
title = {May-{H}appen-in-{P}arallel based {D}eadlock {A}nalysis for {C}oncurrent {O}bjects},
booktitle = {Formal Techniques for Distributed Systems (FMOODS/FORTE 2013)},
editor = {Dirk Beyer and Michele Boreale},
series = {Lecture Notes in Computer Science},
month = jun,
year = {2013},
pages = {273--288},
volume = {7892},
isbn = {978-3-642-38591-9},
url = {http://dx.doi.org/10.1007/978-3-642-38592-6_19},
publisher = {Springer}
}
|
[49]
|
Elvira Albert, Jesús Correas, Germán Puebla, and Guillermo Román-Díez.
Quantified Abstractions of Distributed Systems.
In 10th International Conference on integrated Formal Methods
(iFM'13), volume 7940 of Lecture Notes in Computer Science, pages
285-300. Springer, June 2013.
[ bibtex |
abstract |
DOI |
PDF |
http ]
When reasoning about distributed systems, it is
essential to have information about the different
kinds of nodes which compose the system, how many
instances of each kind exist, and how nodes
communicate with other nodes. In this paper we
present a static-analysis-based approach which is
able to provide information about the questions
above. In order to cope with an unbounded number of
nodes and an unbounded number of calls among them,
the analysis performs an abstraction of the
system producing a graph whose nodes may represent
(infinitely) many concrete nodes and arcs may represent
(infinitely) many calls among nodes.
The crux of our approach is that the abstraction is
enriched with upper bounds inferred by a
resource analysis which limit the number of
concrete instances which the nodes and arcs
represent. The combined information provided by our
approach has interesting applications such as
debugging, optimizing and dimensioning distributed
systems.
@inproceedings{AlbertCPR13,
author = {Elvira Albert and Jes\'us Correas and Germ\'an Puebla and Guillermo Rom\'an-D\'iez},
title = {{Q}uantified {A}bstractions of {D}istributed {S}ystems},
booktitle = {10th International Conference on integrated Formal Methods (iFM'13)},
year = {2013},
month = jun,
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {7940},
pages = {285--300},
doi = {10.1007/978-3-642-38613-8_20},
url = {http://dx.doi.org/10.1007/978-3-642-38613-8_20},
rank:grinscie:class = {3}
}
|
[50]
|
Elvira Albert, Puri Arenas, Miguel Gómez-Zamalloa, and Peter Y.H. Wong.
aPET: A Test Case Generation Tool for Concurrent
Objects.
In Bertrand Meyer, Luciano Baresi, and Mira Mezini, editors,
Joint Meeting of the European Software Engineering Conference and the ACM
SIGSOFT Symposium on the Foundations of Software Engineering, ESEC/FSE'13,
Saint Petersburg, Russian Federation, August 18-26, 2013, pages 595-598.
ACM, 2013.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We present the concepts, usage and prototypical implementation of
aPET, a test case generation tool for a distributed asynchronous
language based on concurrent objects. The system receives as
input a program, a selection of methods to be tested, and a set of
parameters that include a selection of a coverage criterion. It
yields as output a set of test cases which guarantee that the
selected coverage criterion is achieved.
aPET is completely integrated within the language's IDE via
Eclipse. The generated test cases can be displayed in textual mode
and, besides, it is possible to generate ABSUnit code (i.e., code
runnable in a simple framework similar to JUnit to write repeatable
tests). The information yielded by aPET can be relevant to spot
bugs during program development and also to perform regression
testing.
@inproceedings{AlbertAGZW13,
author = {Elvira Albert and Puri Arenas and Miguel G\'omez-Zamalloa and Peter Y.H. Wong},
title = {a{PET}: {A} {T}est {C}ase {G}eneration {T}ool for {C}oncurrent {O}bjects},
booktitle = {Joint Meeting of the European Software Engineering Conference
and the ACM SIGSOFT Symposium on the Foundations of Software
Engineering, ESEC/FSE'13, Saint Petersburg, Russian Federation,
August 18-26, 2013},
year = 2013,
publisher = {ACM},
pages = {595--598},
editor = {Bertrand Meyer and
Luciano Baresi and
Mira Mezini},
doi = {10.1145/2491411.2494590},
url = {http://dx.doi.org/10.1145/2491411.2494590}
}
|
[51]
|
Elvira Albert, Antonio Flores-Montoya, Samir Genaim, and Enrique Martin-Martin.
Termination and Cost Analysis of Loops with Concurrent
Interleavings.
In Dang Van Hung and Mizuhito Ogawa, editors, Automated
Technology for Verification and Analysis - 11th International Symposium, ATVA
2013, Hanoi, Vietnam, October 15-18, 2013. Proceedings, volume 8172 of
Lecture Notes in Computer Science, pages 349-364. Springer, 2013.
[ bibtex |
abstract |
DOI |
PDF |
http ]
By following a rely-guarantee style of reasoning, we present
a novel termination analysis for concurrent programs that, in order to
prove termination of a considered loop, makes the assumption that the
"shared-data that is involved in the termination proof of the loop is
modified a finite number of times". In a subsequent step, it proves that
this assumption holds in all code whose execution might interleave with such
loop. At the core of the analysis, we use a may-happen-in-parallel analysis
to restrict the set of program points whose execution can interleave with
the considered loop. Interestingly, the same kind of reasoning can
be applied to infer upper bounds on the number of iterations of loops
with concurrent interleavings. To the best of our knowledge, this is the
first method to automatically bound the cost of such kind of loops.
@inproceedings{AlbertFGM13,
author = {Elvira Albert and Antonio Flores-Montoya and Samir Genaim and Enrique Martin-Martin},
title = {Termination and {C}ost {A}nalysis of {L}oops with {C}oncurrent {I}nterleavings},
booktitle = {Automated Technology for Verification and Analysis - 11th
International Symposium, ATVA 2013, Hanoi, Vietnam, October
15-18, 2013. Proceedings},
editor = {Dang Van Hung and Mizuhito Ogawa},
pages = {349-364},
volume = {8172},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
year = {2013},
doi = {10.1007/978-3-319-02444-8_25},
url = {http://dx.doi.org/10.1007/978-3-319-02444-8_25}
}
|
[52]
|
Diego Esteban Alonso Blas, Puri Arenas, and Samir Genaim.
Precise Cost Analysis via Local Reasoning.
In Dang Van Hung and Mizuhito Ogawa, editors, Automated
Technology for Verification and Analysis - 11th International Symposium,
ATVA 2013, Hanoi, Vietnam, October 15-18, 2013. Proceedings, volume 8172
of Lecture Notes in Computer Science, pages 319-333. Springer, 2013.
[ bibtex |
abstract |
DOI |
PDF |
http ]
The classical approach to static cost analysis is based on first
transforming a given program into a set of cost relations, and then solving
them into closed-form upper-bounds.
The quality of the upper-bounds and the scalability of such cost
analysis highly depend on the precision and efficiency of the
solving phase.
Several techniques for solving cost relations exist, some are
efficient but not precise enough, and some are very precise but do
not scale to large cost relations.
In this paper we explore the gap between these techniques, seeking
for ones that are both precise and efficient.
In particular, we propose a novel technique that first splits the
cost relation into several atomic ones, and then uses precise
local reasoning for some and less precise but efficient reasoning for
others.
For the precise local reasoning, we propose several methods that
define the cost as a solution of a universally quantified formula.
Preliminary experiments demonstrate the effectiveness of our
approach.
@inproceedings{AlonsoAG13,
author = {Diego Esteban Alonso Blas and Puri Arenas and Samir Genaim},
title = {Precise {C}ost {A}nalysis via {L}ocal {R}easoning},
booktitle = {Automated Technology for Verification and Analysis - 11th International
Symposium, {ATVA} 2013, Hanoi, Vietnam, October 15-18, 2013. Proceedings},
editor = {Dang Van Hung and Mizuhito Ogawa},
pages = {319-333},
volume = {8172},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
year = {2013},
doi = {10.1007/978-3-319-02444-8_23},
url = {http://dx.doi.org/10.1007/978-3-319-02444-8_23}
}
|
[53]
|
Amir M. Ben-Amram and Samir Genaim.
On the Linear Ranking Problem for Integer Linear-Constraint Loops.
In Roberto Giacobazzi and Radhia Cousot, editors, 40th ACM
SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'13),
pages 51-62. ACM Press, January 2013.
[ bibtex |
abstract |
DOI |
http ]
In this paper we study the complexity of the Linear
Ranking problem: given a loop, described by linear
constraints over a finite set of integer variables,
is there a linear ranking function for this loop?
While existence of such a function implies
termination, this problem is not equivalent to
termination. When the variables range over the
rationals or reals, the Linear Ranking problem is
known to be PTIME decidable. However, when they
range over the integers, whether for single-path or
multipath loops, the decidability of the Linear
Ranking problem has not yet been determined. We show
that it is decidable, but harder than the former
case: it is coNP-complete. However, we point out
some important special cases that have PTIME
complexity. We also present complete algorithms for
synthesizing linear ranking functions, both for the
general case and the special PTIME cases.
@inproceedings{Ben-AmramG13,
author = {Amir M. Ben-Amram and Samir Genaim},
title = {On the Linear Ranking Problem for Integer Linear-Constraint Loops},
booktitle = {40th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'13)},
year = 2013,
month = jan,
editor = {Roberto Giacobazzi and Radhia Cousot},
publisher = {ACM Press},
pages = {51-62},
url = {http://doi.acm.org/10.1145/2429069.2429078},
isbn = {978-1-4503-1832-7},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
|
[54]
|
José Miguel Rojas and Miguel Gómez-Zamalloa.
A Framework for Guided Test Case Generation in Constraint
Logic Programming.
In LOPSTR 2012, volume 7844 of Lecture Notes in Computer
Science, pages 176-193. Springer, 2013.
[ bibtex |
abstract |
DOI |
PDF |
http ]
It is well known that performing test case generation
by symbolic execution on large programs becomes
quickly impracticable due to the path explosion
phenomenon. This issue is considered a major
challenge in the software testing arena. Another
common limitation in the field is that test case
generation by symbolic execution tends to produce an
unnecessarily large number of test cases even for
medium size programs. In this paper we propose a
constraint logic programming approach to devise a
generic framework to guide symbolic execution and
thus test case generation. We show how the
framework can help alleviate these scalability
drawbacks that most symbolic execution-based test
generation approaches endure.
@inproceedings{RojasG-Z12,
author = {Jos\'{e} Miguel Rojas and Miguel G\'omez-Zamalloa},
title = {A {F}ramework for {G}uided {T}est {C}ase {G}eneration in {C}onstraint {L}ogic {P}rogramming},
booktitle = {LOPSTR 2012},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {7844},
pages = {176--193},
year = {2013},
url = {http://dx.doi.org/10.1007/978-3-642-38197-3_12},
doi = {10.1007/978-3-642-38197-3_12}
}
|
[55]
|
Elvira Albert, Antonio Flores-Montoya, and Samir Genaim.
MayPar: a may-happen-in-parallel analyzer for concurrent objects.
In Will Tracz, Martin P. Robillard, and Tevfik Bultan, editors,
20th ACM SIGSOFT Symposium on the Foundations of Software Engineering
(FSE-20), SIGSOFT/FSE'12, Cary, NC, USA - November 11 - 16, 2012, pages
1-4. ACM, November 2012.
[ bibtex |
abstract |
DOI ]
We present the concepts, usage and prototypical
implementation of MayPar, a may-happen-in-parallel
(MHP) static analyzer for a distributed asynchronous
language based on concurrent objects. Our tool
allows analyzing an application and finding out the
pairs of statements that can execute in
parallel. The information can be displayed by means
of a graphical representation of the MHP analysis
graph or, in a textual way, as a set of pairs which
identify the program points that may run in
parallel. The information yielded by MayPar can be
relevant (1) to spot bugs in the program related to
fragments of code which should not run in parallel
and also (2) to improve the precision of other
analyses which infer more complex properties (e.g.,
termination and resource consumption).
@inproceedings{AlbertFG12-FSE,
author = {Elvira Albert and Antonio Flores-Montoya and Samir Genaim},
title = {MayPar: a may-happen-in-parallel analyzer for concurrent objects},
editor = {Will Tracz and Martin P. Robillard and
Tevfik Bultan},
booktitle = {20th ACM SIGSOFT Symposium on the Foundations of Software
Engineering (FSE-20), SIGSOFT/FSE'12, Cary, NC, USA - November
11 - 16, 2012},
year = {2012},
month = nov,
pages = {1-4},
publisher = {ACM},
isbn = {978-1-4503-1614-9, 978-1-4503-0443-6},
ee = {http://doi.acm.org/10.1145/2393596.2393611}
}
|
[56]
|
Diego Esteban Alonso Blas and Samir Genaim.
On the Limits of the Classical Approach to Cost Analysis.
In Antoine Miné and David Schmidt, editors, Static Analysis
- 19th International Symposium, SAS 2012, Deauville, France, September 11-13,
2012. Proceedings, volume 7460 of Lecture Notes in Computer Science,
pages 405-421. Springer, September 2012.
[ bibtex |
abstract |
DOI |
PDF |
slides ]
The classical approach to static cost analysis is
based on transforming a given program into cost relations and
solving them into closed-form upper-bounds.
It is known that for some programs, this approach infers
upper-bounds that are asymptotically less precise than the actual
cost.
Until now, it was assumed that this imprecision is due to the way cost
relations are solved into upper-bounds.
In this paper:
(1) we show that this assumption is partially true, and identify the
reason due to which cost relations cannot precisely model the cost
of such programs; and
(2) to overcome this imprecision, we develop a new approach to cost
analysis, based on SMT and quantifier elimination.
Interestingly, we find a strong relation between our approach and
amortised cost analysis.
@inproceedings{AlonsoG12,
author = {Diego Esteban Alonso Blas and Samir Genaim},
title = {On the Limits of the Classical Approach to Cost Analysis},
booktitle = {Static Analysis - 19th International Symposium, SAS 2012,
Deauville, France, September 11-13, 2012. Proceedings},
editor = {Antoine Min{\'e} and David Schmidt},
year = {2012},
month = sep,
pages = {405-421},
ee = {http://dx.doi.org/10.1007/978-3-642-33125-1_27},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {7460},
isbn = {978-3-642-33124-4},
bibsource = {DBLP, http://dblp.uni-trier.de},
slides = {http://costa.fdi.ucm.es/papers/costa/AlonsoG12-slides.pdf}
}
|
[57]
|
Elvira Albert, Miguel Gómez-Zamalloa, and José Miguel Rojas.
Resource-driven CLP-based Test Case Generation.
In LOPSTR 2011 Revised Selected Papers, volume 7225 of
Lecture Notes in Computer Science, pages 25-41. Springer, July 2012.
[ bibtex |
abstract |
PDF |
http ]
Test Data Generation (TDG) aims at automatically
obtaining test inputs which can then be used
by a software testing tool to validate the
functional behaviour of the program. In this paper,
we propose resource-aware TDG, whose purpose
is to generate test cases (from which the test
inputs are obtained) with associated resource
consumptions. The framework is parametric w.r.t. the notion of resource (it can measure memory,
steps, etc.) and allows using software testing to
detect bugs related to non-functional aspects of the
program. As a further step, we introduce
resource-driven TDG whose purpose is to guide
the TDG process by taking resource consumption into
account. Interestingly, given a resource
policy, TDG is guided to generate test cases that
adhere to the policy and avoid the generation of
test cases which violate it.
@inproceedings{AlbertGR11,
author = {Elvira Albert and Miguel G\'{o}mez-Zamalloa and Jos\'{e} Miguel Rojas},
title = {{R}esource-driven {CLP}-based {T}est {C}ase {G}eneration},
booktitle = {LOPSTR 2011 Revised Selected Papers},
series = {Lecture Notes in Computer Science},
volume = {7225},
pages = {25--41},
npages = {17},
publisher = {Springer},
month = jul,
year = {2012},
isbn = {978-3-642-32210-5},
url = {http://dx.doi.org/10.1007/978-3-642-32211-2_3}
}
|
[58]
|
Elvira Albert, Richard Bubel, Samir Genaim, Reiner Hähnle, and Guillermo Román-Díez.
Verified Resource Guarantees for Heap Manipulating
Programs.
In Proceedings of the 15th International Conference on
Fundamental Approaches to Software Engineering, FASE 2012, Tallinn, Estonia,
March, 2012, volume 7212 of Lecture Notes in Computer Science, pages
130-145. Springer, March 2012.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Program properties that are automatically inferred by static
analysis tools are generally not considered to be completely
trustworthy, unless the tool implementation or the results are
formally verified. Here we focus on the formal verification of
resource guarantees inferred by automatic cost
analysis. Resource guarantees ensure that programs run within the
indicated amount of resources which may refer to memory consumption,
to number of instructions executed, etc. In previous work we
studied formal verification of inferred resource guarantees that
depend only on integer data. In realistic programs, however,
resource consumption is often bounded by the size of
heap-allocated data structures. Bounding their size
requires performing a number of structural heap
analyses. The contributions of this paper are (i) to identify what
exactly needs to be verified to guarantee sound analysis of heap
manipulating programs, (ii) to provide a suitable extension of the
program logic used for verification to handle structural heap
properties in the context of resource guarantees, and (iii) to
improve the underlying theorem prover so that proof obligations
can be automatically discharged.
@inproceedings{AlbertBGHR12,
author = {Elvira Albert and Richard Bubel and Samir Genaim and Reiner
H\"ahnle and Guillermo Rom\'an-D\'iez},
title = {{V}erified {R}esource {G}uarantees for {H}eap {M}anipulating {P}rograms},
booktitle = {Proceedings of the 15th International Conference on Fundamental Approaches to Software Engineering, FASE 2012, Tallinn, Estonia, March, 2012},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
year = {2012},
volume = {7212},
month = mar,
pages = {130-145},
isbn = {978-3-642-28871-5},
doi = {10.1007/978-3-642-28872-2_10},
url = {http://www.springerlink.com/content/a10153194828v8k1/},
rank:grinscie:class = {3}
}
|
[59]
|
Elvira Albert, Bjarte M. Østvold, and José Miguel Rojas.
Automated Extraction of Abstract Behavioural Models from
JMS Applications.
In Formal Methods for Industrial Critical Systems (FMICS'12),
volume 7437 of Lecture Notes in Computer Science, pages 16-31.
Springer, 2012.
[ bibtex |
abstract |
PDF ]
Distributed systems are hard to program, understand and
analyze. Two key sources of complexity are the many
possible behaviors of a system, arising from the
parallel execution of its distributed nodes, and the
handling of asynchronous messages exchanged between
nodes. We show how to systematically construct
executable models of publish/subscribe systems based
on the Java Messaging Service (JMS). These models,
written in the executable Abstract Behavioural
Specification (ABS) language, capture the essential
parts of the messaging behavior of the original Java
systems, and eliminate details not related to
distribution and messages. We report on JMS2ABS, a
tool that automatically extracts ABS models from the
bytecode of JMS systems. Since the extracted models
are formal and executable, they allow us to reason
about the modeled JMS systems by means of tools
built specifically for the modeling language. For
example, we have succeeded in applying simulation,
termination and resource analysis tools developed
for ABS to, respectively, execute, prove termination
and infer the resource consumption of the original
JMS applications.
@inproceedings{AlbertOR12,
author = {Elvira Albert and Bjarte M. {\O}stvold and Jos\'{e} Miguel Rojas},
title = {{A}utomated {E}xtraction of {A}bstract {B}ehavioural {M}odels from {JMS} {A}pplications},
booktitle = {Formal Methods for Industrial Critical Systems (FMICS'12)},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {7437},
pages = {16--31},
isbn = {978-3-642-32468-0},
year = {2012}
}
|
[60]
|
Sonia Estévez-Martín, Jesús Correas-Fernández, and Fernando
Sáenz-Pérez.
Extending the TOY System with the ECLIPSE Solver over
Sets of Integers.
In FLOPS'12, volume 7294 of Lecture Notes in Computer
Science, pages 120-135. Springer, 2012.
[ bibtex |
abstract |
PDF ]
Starting from a computational model for the
cooperation of constraint domains in the CFLP
context (with lazy evaluation and higher-order
functions), we present the theoretical basis for the
coordination domain C tailored to the cooperation of
three pure domains: the domain of finite sets of
integers (FS), the finite domain of integers (FD)
and the Herbrand domain (H). We also present the
adaptation of the goal-solving calculus CCLNC
(Cooperative Constraint Lazy Narrowing Calculus over
C) to this particular case, as well as soundness and
limited completeness results. An implementation of
this cooperation in the CFLP system TOY is
presented. Our implementation is based on
inter-process communication between TOY and the
external ECLiPSe solvers for sets of integers and
finite domains.
@inproceedings{EstevezFLOPS12,
author = {Sonia Est\'evez-Mart\'in and Jes\'us Correas-Fern\'andez and Fernando S\'aenz-P\'erez},
title = {{E}xtending the {TOY} {S}ystem with the {ECLIPSE} {S}olver over {S}ets of {I}ntegers},
booktitle = {FLOPS'12},
year = {2012},
series = {Lecture Notes in Computer Science},
volume = 7294,
pages = {120--135},
publisher = {Springer}
}
|
[61]
|
Elvira Albert, Puri Arenas, and Miguel Gómez-Zamalloa.
Towards Testing Concurrent Objects in CLP.
In Agostino Dovier and Vítor Santos Costa, editors, 28th
International Conference on Logic Programming (ICLP'12), volume 17 of
LIPIcs, pages 98-108. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik,
2012.
[ bibtex |
abstract |
DOI |
PDF ]
Testing is a vital part of the software development process. It is
even more so in the context of concurrent languages, since due to
undesired task interleavings and to unexpected behaviours of the
underlying task scheduler, errors can go easily undetected. This
paper studies the extension of the CLP-based framework for test data
generation of sequential programs to the context of concurrent
objects, a concurrency model which constitutes a promising
solution to concurrency in OO languages.
The challenge is in developing the following required extensions entirely in CLP
(which are our main contributions):
(1) we simulate
task interleavings which could occur in an actual execution and
could lead to additional test cases for the method under test (while
interleavings that would not add new test cases are disregarded),
(2) we integrate sophisticated scheduling policies and, besides,
use different policies for the different fragments of the program
that can run in parallel,
(3) we combine standard termination and coverage
criteria used for testing sequential programs with specific criteria
which control termination and coverage from the concurrency point of
view, e.g., we can limit the number of task interleavings allowed and
the number of loop unrollings performed in each parallel component, etc.
@inproceedings{AlbertAG-Z12,
author = {Elvira Albert and Puri Arenas and Miguel G\'omez-Zamalloa},
title = {Towards Testing Concurrent Objects in CLP},
booktitle = {28th International Conference on Logic Programming (ICLP'12)},
editor = {Agostino Dovier and V\'{\i}tor Santos Costa},
publisher = {Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik},
series = {LIPIcs},
volume = {17},
year = {2012},
doi = {10.4230/LIPIcs.ICLP.2012.98},
pages = {98-108}
}
|
[62]
|
Elvira Albert, Antonio Flores-Montoya, and Samir Genaim.
Analysis of May-Happen-in-Parallel in Concurrent Objects.
In Holger Giese and Grigore Rosu, editors, Formal Techniques for
Distributed Systems - Joint 14th IFIP WG 6.1 International Conference,
FMOODS 2012 and 32nd IFIP WG 6.1 International Conference, FORTE
2012, Stockholm, Sweden, June 13-16, 2012. Proceedings, volume 7273 of
Lecture Notes in Computer Science, pages 35-51. Springer, 2012.
[ bibtex |
abstract |
DOI |
http ]
This paper presents a may-happen-in-parallel (MHP)
analysis for OO languages based on concurrent
objects. In this concurrency model, objects are the
concurrency units such that, when a method is
invoked on an object o2 from a task executing on
object o1, statements of the current task in o1
may run in parallel with those of the (asynchronous)
call on o2, and with those of transitively invoked
methods. The goal of the MHP analysis is to identify
pairs of statements in the program that may run in
parallel in any execution. Our MHP analysis is
formalized as a method-level (local) analysis
whose information can be modularly composed to
obtain application-level (global) information.
@inproceedings{AlbertFG12,
author = {Elvira Albert and Antonio Flores-Montoya and Samir Genaim},
affiliation = {Complutense University of Madrid, Spain},
title = {Analysis of {M}ay-{H}appen-in-{P}arallel in {C}oncurrent {O}bjects},
booktitle = {Formal Techniques for Distributed Systems - Joint 14th {IFIP} {WG}
6.1 International Conference, {FMOODS} 2012 and 32nd {IFIP} {WG} 6.1
International Conference, {FORTE} 2012, Stockholm, Sweden, June 13-16,
2012. Proceedings},
series = {Lecture Notes in Computer Science},
editor = {Holger Giese and Grigore Rosu},
publisher = {Springer},
isbn = {978-3-642-30792-8},
pages = {35-51},
volume = {7273},
url = {http://dx.doi.org/10.1007/978-3-642-30793-5_3},
doi = {10.1007/978-3-642-30793-5_3},
year = {2012}
}
|
[63]
|
Elvira Albert, Jesús Correas, Germán Puebla, and Guillermo Román-Díez.
Incremental Resource Usage Analysis.
In Proceedings of the 2012 ACM SIGPLAN Workshop on Partial
Evaluation and Program Manipulation, PEPM 2012, Philadelphia, Pennsylvania,
USA, January 23-24, 2012, pages 25-34. ACM Press, January 2012.
[ bibtex |
abstract |
DOI |
PDF |
http ]
The aim of incremental global analysis is, given a program, its
analysis results and a series of changes to the program, to obtain
the new analysis results as efficiently as possible and, ideally,
without having to (re-)analyze fragments of code which are not
affected by the changes. Incremental analysis can significantly
reduce both the time and the memory requirements of analysis. This
paper presents an incremental resource usage (a.k.a. cost) analysis
for a sequential Java-like language. Our main contributions are (1)
a multi-domain incremental fixed-point algorithm which can be used
by all global (pre-)analyses required to infer the cost (including
class, sharing, cyclicity, constancy, and size analyses) and which
takes care of propagating dependencies among such domains, and (2) a
novel form of cost summaries which allows us to incrementally
reconstruct only those components of cost functions affected by the
change. Experimental results in the COSTA system show that the
proposed incremental analysis can perform very efficiently in
practice.
@inproceedings{AlbertCPR12,
author = {Elvira Albert and Jes\'us Correas and Germ\'an Puebla and Guillermo Rom\'an-D\'iez},
title = {{I}ncremental {R}esource {U}sage {A}nalysis},
booktitle = {Proceedings of the 2012 ACM SIGPLAN Workshop on Partial
Evaluation and Program Manipulation, PEPM 2012, Philadelphia, Pennsylvania, USA,
January 23-24, 2012},
publisher = {ACM Press},
year = {2012},
month = jan,
pages = {25--34},
isbn = {978-1-4503-1118-2},
doi = {10.1145/2103746.2103754},
url = {http://dl.acm.org/citation.cfm?doid=2103746.2103754},
rank:grinscie:class = {3}
}
|
[64]
|
Elvira Albert, Puri Arenas, Samir Genaim, Miguel Gómez-Zamalloa, and Germán Puebla.
COSTABS: A Cost and Termination Analyzer for ABS.
In Oleg Kiselyov and Simon Thompson, editors, Proceedings of the
2012 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation,
PEPM 2012, Philadelphia, Pennsylvania, USA, January 23-24, 2012, pages
151-154. ACM Press, 2012.
[ bibtex |
abstract |
DOI ]
ABS is an abstract behavioural specification
language to model distributed concurrent systems. Characteristic
features of ABS are that: (1) it allows abstracting from
implementation details while remaining executable: a
functional sub-language over abstract data types is used to
specify internal, sequential computations; and (2) the
imperative sub-language provides flexible concurrency and
synchronization mechanisms by means of asynchronous method calls,
release points in method definitions, and cooperative scheduling of
method activations. This paper presents COSTABS, a COSt and
Termination analyzer for ABS, which is able to prove termination and
obtain resource usage bounds for both the imperative and
functional fragments of programs. The properties that COSTABS can
infer include termination, number of execution steps, memory
consumption, and number of asynchronous calls, among others. The
analysis bounds provide formal guarantees that the execution of the
program will never exceed the inferred amount of resources. The
system can be downloaded as free software from its web site, where a
repository of examples and a web interface are also provided. To the
best of our knowledge, COSTABS is the first system able to perform
resource analysis for a concurrent language.
@inproceedings{AlbertAGGZP12,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Miguel G\'omez-Zamalloa and Germ\'an Puebla},
title = {{COSTABS}: A {C}ost and {T}ermination {A}nalyzer for {ABS}},
booktitle = {Proceedings of the 2012 ACM SIGPLAN Workshop on Partial
Evaluation and Program Manipulation, PEPM 2012, Philadelphia, Pennsylvania, USA,
January 23-24, 2012},
publisher = {ACM Press},
editor = {Oleg Kiselyov and Simon Thompson},
year = {2012},
isbn = {978-1-4503-1118-2},
pages = {151-154},
doi = {10.1145/2103746.2103774}
}
|
[65]
|
Elvira Albert, Puri Arenas, Samir Genaim, Miguel Gómez-Zamalloa, and
Germán Puebla.
Automatic Inference of Resource Consumption Bounds.
In Logic for Programming, Artificial Intelligence, and Reasoning
- 18th International Conference, LPAR-18, Mérida, Venezuela, March 11-15,
2012. Proceedings LPAR, volume 7180 of Lecture Notes in Computer
Science, pages 1-11. Springer, 2012.
[ bibtex |
abstract |
DOI ]
One of the main features of programs is the amount of resources
which are needed in order to run them. Different resources can be
taken into consideration, such as the number of execution steps,
amount of memory allocated, number of calls to certain methods,
etc. Unfortunately, manually determining the resource consumption of
programs is difficult and error-prone. We provide an overview of a
state of the art framework for automatically obtaining both upper
and lower bounds on the resource consumption of programs. The bounds
obtained are functions on the size of the input arguments to the
program and are obtained statically, i.e., without running the
program. Due to the approximations introduced, the framework can
fail to obtain (non-trivial) bounds even if they exist. On the other
hand, modulo implementation bugs, the bounds thus obtained are valid
for any execution of the program. The framework has been
implemented in the COSTA system and can provide useful bounds for
realistic object-oriented and actor-based concurrent programs.
@inproceedings{AlbertAGGZP12a,
author = {Elvira Albert and
Puri Arenas and
Samir Genaim and
Miguel G{\'o}mez-Zamalloa and
Germ{\'a}n Puebla},
title = {Automatic {I}nference of {R}esource {C}onsumption {B}ounds},
booktitle = {Logic for Programming, Artificial Intelligence, and Reasoning
- 18th International Conference, LPAR-18, M{\'e}rida,
Venezuela, March 11-15, 2012. Proceedings LPAR},
year = {2012},
pages = {1-11},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-642-28717-6},
volume = {7180}
}
|
[66]
|
Elvira Albert, Puri Arenas, and Miguel Gómez-Zamalloa.
Symbolic Execution of Concurrent Objects in CLP.
In Practical Aspects of Declarative Languages (PADL'12),
Lecture Notes in Computer Science, pages 123-137. Springer, January 2012.
[ bibtex |
abstract |
DOI |
PDF ]
In the concurrent objects model, objects have conceptually
dedicated processors and live in a distributed environment with
unordered communication by means of asynchronous method calls.
Method callers may decide at runtime when to synchronize with the reply
from a call. This paper presents a CLP-based approach to symbolic execution
of concurrent OO programs. Developing a symbolic execution engine for
concurrent objects is challenging because it needs to combine the
OO features of the language, concurrency and backtracking.
Our approach consists in, first, transforming the OO program into an
equivalent CLP program which contains calls to specific builtins that
handle the concurrency model. The builtins are implemented in CLP and
include primitives to handle asynchronous calls, synchronization
operations and scheduling policies, among others. Interestingly,
symbolic execution of the transformed programs then relies
simply on the standard sequential execution of CLP.
We report on a prototype implementation within the PET system which
shows the feasibility of our approach.
@inproceedings{AlbertAGZ12,
author = {Elvira Albert and Puri Arenas and Miguel G\'omez-Zamalloa},
title = {Symbolic {E}xecution of {C}oncurrent {O}bjects in {CLP}},
booktitle = {Practical Aspects of Declarative Languages (PADL'12)},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
year = {2012},
month = jan,
volume = {7149},
pages = {123-137},
doi = {10.1007/978-3-642-27694-1_10}
}
|
[67]
|
Amir M. Ben-Amram, Samir Genaim, and Abu Naser Masud.
On the Termination of Integer Loops.
In Viktor Kuncak and Andrey Rybalchenko, editors, Verification,
Model Checking, and Abstract Interpretation - 13th International Conference,
VMCAI 2012, Philadelphia, USA, January 25-27, 2012. Proceedings, volume 7148
of Lecture Notes in Computer Science, pages 72-87. Springer, January
2012.
[ bibtex |
abstract |
DOI |
PDF ]
In this paper we study the decidability of termination
of several variants of simple integer loops, without
branching in the loop body and with affine
constraints as the loop guard (and possibly a
precondition). We show that termination of such
loops is undecidable in some cases, in particular,
when the body of the loop is expressed by a set of
linear inequalities where the coefficients are from
ℤ ∪ {r} with r an arbitrary irrational; or
when the loop is a sequence of instructions, that
compute either linear expressions or the step
function. The undecidability result is proven by a
reduction from counter programs, whose termination
is known to be undecidable. For the common case of
integer constraint loops with rational coefficients
only, we have not succeeded in proving either
decidability or undecidability of termination;
however, this attempt led to the result that a Petri net can be
simulated with such a loop, which implies some
interesting lower bounds. For example, termination
for a given input is at least EXPSPACE-hard.
@inproceedings{Ben-AmramGM12,
author = {Amir M. Ben-Amram and Samir Genaim and Abu Naser Masud},
title = {On the Termination of Integer Loops},
booktitle = {Verification, Model Checking, and Abstract Interpretation
- 13th International Conference, VMCAI 2012, Philadelphia,
USA, January 25-27, 2012. Proceedings},
editor = {Viktor Kuncak and Andrey Rybalchenko},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {7148},
pages = {72-87},
year = {2012},
month = jan,
doi = {10.1007/978-3-642-27940-9_6}
}
|
[68]
|
Diana Ramírez-Deantes, Jesús Correas, and Germán Puebla.
Modular Termination Analysis of Java Bytecode and its
Application to phoneME Core Libraries.
In Formal Aspects of Computer Software (FACS 2010), volume 6921
of Lecture Notes in Computer Science, pages 218-236. Springer, 2012.
[ bibtex |
abstract ]
This paper summarizes recent work on modular analysis in COSTA.
@inproceedings{RamirezCP10a,
author = {Diana Ram{\'i}rez-Deantes and Jes{\'u}s Correas and Germ{\'a}n Puebla},
title = {Modular {T}ermination {A}nalysis of {J}ava {B}ytecode and its {A}pplication to {phoneME} {C}ore {L}ibraries},
booktitle = {Formal Aspects of Computer Software (FACS 2010)},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
volume = {6921},
pages = {218--236},
year = 2012
}
|
[69]
|
Elvira Albert, Jesús Correas, Germán Puebla, and Guillermo Román-Díez.
Towards Incremental Resource Usage Analysis.
In The Ninth Asian Symposium on Programming Languages and
Systems (APLAS'11), December 2011.
Poster Presentation.
[ bibtex |
abstract |
PDF ]
The aim of incremental global analysis is, given a program, its
analysis results and a series of changes to the program, to obtain
the new analysis results as efficiently as possible and, ideally,
without having to (re-)analyze fragments of code which are not
affected by the changes. Incremental analysis can significantly
reduce both the time and the memory requirements of analysis. This
paper presents an incremental resource usage (a.k.a. cost) analysis
for a sequential Java-like language. Our main contributions are (1)
a multi-domain incremental fixed-point algorithm which can be used
by all global (pre-)analyses required to infer the cost (including
class, sharing, cyclicity, constancy, and size analyses) and which
takes care of propagating dependencies among such domains, and (2) a
novel form of cost summaries which allows us to incrementally
reconstruct only those components of cost functions affected by the
change. Experimental results in the COSTA system show that the
proposed incremental analysis can perform very efficiently in
practice.
@inproceedings{AlbertCPR11,
author = {Elvira Albert and Jes\'us Correas and Germ\'an Puebla and Guillermo Rom\'an-D\'iez},
title = {{T}owards {I}ncremental {R}esource {U}sage {A}nalysis},
booktitle = {The Ninth Asian Symposium on Programming Languages and Systems (APLAS'11)},
year = {2011},
month = dec,
note = {Poster Presentation}
}
|
[70]
|
Elvira Albert, Israel Cabañas, Antonio Flores-Montoya, Miguel Gómez-Zamalloa, and Sergio Gutiérrez.
jPET: an Automatic Test-Case Generator for Java.
In 18th Working Conference on Reverse Engineering (WCRE 2011),
pages 441-442. IEEE Computer Society, October 2011.
[ bibtex |
abstract |
DOI ]
We present jPET, a white box test-case generator
(TCG) which can be used during software
development of Java applications within the Eclipse
environment. jPET builds on top of PET, a TCG
which automatically obtains test-cases from the byte
code associated to a Java program. jPET performs
reverse engineering of the test-cases obtained at
the byte code level by PET in order to yield this
information to the user at the source code
level. This allows understanding the information
gathered at the lower level and using it to test
source Java programs.
@inproceedings{AlbertCFGG11,
author = {Elvira Albert and Israel Caba{\~n}as and Antonio
Flores-Montoya and Miguel G\'{o}mez-Zamalloa and
Sergio Guti\'{e}rrez},
title = {j{PET}: an {A}utomatic {T}est-{C}ase {G}enerator for {J}ava},
booktitle = {18th Working Conference on Reverse Engineering (WCRE 2011)},
month = oct,
year = {2011},
pages = {441-442},
publisher = {IEEE Computer Society},
npages = {2},
doi = {10.1109/WCRE.2011.67}
}
|
[71]
|
Elvira Albert, Samir Genaim, Miguel Gómez-Zamalloa, Einar Broch Johnsen,
Rudolf Schlatte, and Silvia Lizeth Tapia Tarifa.
Simulating Concurrent Behaviors with Worst-Case Cost
Bounds.
In Michael Butler and Wolfram Schulte, editors, FM 2011: Formal
Methods - 17th International Symposium on Formal Methods, Limerick, Ireland,
June 20-24, 2011. Proceedings, volume 6664 of Lecture Notes in Computer
Science, pages 353-368, 2011.
[ bibtex |
abstract |
DOI ]
Modern software systems are increasingly being
developed for deployment on a range of
architectures. For this purpose, it is interesting
to capture aspects of low-level deployment concerns
in high-level modeling languages. In this paper, an
executable object-oriented modeling language is
extended with resource-restricted deployment
components. To analyze model behavior, a formal
methodology is proposed to assess resource
consumption, which balances the scalability of the
method and the reliability of the obtained
results. The approach applies to a general notion of
resource, including traditional cost measures (e.g.,
time, memory) as well as concurrency-related
measures (e.g., requests to a server, spawned
tasks). The main idea of our approach is to combine
reliable (but expensive) worst-case cost analysis of
statically predictable parts of the model with fast
(but inherently incomplete) simulations of the
concurrent aspects in order to avoid the state-space
explosion. The approach is illustrated by the
analysis of memory consumption.
@inproceedings{AlbertGGJST11,
author = {Elvira Albert and
Samir Genaim and
Miguel G{\'o}mez-Zamalloa and
Einar Broch Johnsen and
Rudolf Schlatte and
Silvia Lizeth Tapia Tarifa},
title = {{S}imulating {C}oncurrent {B}ehaviors with
{W}orst-{C}ase {C}ost {B}ounds},
editor = {Michael Butler and Wolfram Schulte},
booktitle = {FM 2011: Formal Methods - 17th International Symposium on
Formal Methods, Limerick, Ireland, June 20-24, 2011. Proceedings},
year = 2011,
volume = {6664},
pages = {353-368},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-642-21437-0_27}
}
|
[72]
|
Elvira Albert, Puri Arenas, Samir Genaim, Miguel Gómez-Zamalloa, and
Germán Puebla.
Cost Analysis of Concurrent OO Programs.
In Hongseok Yang, editor, Programming Languages and Systems -
9th Asian Symposium, APLAS 2011, Kenting, Taiwan, December 5-7, 2011.
Proceedings, volume 7078 of Lecture Notes in Computer Science, pages
238-254. Springer, 2011.
[ bibtex |
abstract |
DOI ]
Cost analysis aims at automatically approximating the
resource consumption (e.g., memory) of
executing a program in terms of its input
parameters. While cost analysis for sequential
programming languages has received considerable
attention, concurrency and distribution have been
notably less studied. The main challenges (and our
contributions) of cost analysis in a concurrent
setting are: (1) Inferring precise size
relations for data in the program in the presence
of shared memory. This information is essential for
bounding the number of iterations of loops. (2)
Distribution suggests that analysis must keep the
cost of the diverse distributed components
separate. We handle this by means of a novel form of
recurrence equations which are parametric on
the notion of cost center, which represents a
corresponding component. To the best of our
knowledge, our work is the first one to present a
general cost analysis framework and an
implementation for concurrent OO programs.
@inproceedings{AlbertAGGP11,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Miguel G\'{o}mez-Zamalloa and Germ\'an Puebla},
editor = {Hongseok Yang},
title = {Cost {A}nalysis of {C}oncurrent {OO} Programs},
booktitle = {Programming Languages and Systems - 9th Asian Symposium,
APLAS 2011, Kenting, Taiwan, December 5-7, 2011. Proceedings},
series = {Lecture Notes in Computer Science},
year = {2011},
publisher = {Springer},
pages = {238-254},
volume = {7078},
doi = {10.1007/978-3-642-25318-8_19}
}
|
[73]
|
Elvira Albert, Richard Bubel, Samir Genaim, Reiner Hähnle, Germán Puebla,
and Guillermo Román-Díez.
Verified Resource Guarantees using COSTA and KeY.
In Siau-Cheng Khoo and Jeremy G. Siek, editors, Proceedings of
the 2011 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation,
PEPM 2011, Austin, TX, USA, January 24-25, 2011, SIGPLAN, pages 73-76. ACM,
2011.
[ bibtex |
abstract |
DOI |
PDF ]
Resource guarantees make it possible to be certain that
programs will run within the indicated amount of resources,
which may refer to memory consumption, number of
instructions executed, etc. This information can be
very useful, especially in real-time and
safety-critical applications. Nowadays, a number of
automatic tools exist, often based on type systems
or static analysis, which produce such resource
guarantees. In spite of being based on
theoretically sound techniques, the implemented
tools may contain bugs which render the resource
guarantees thus obtained not completely trustworthy.
Performing full-blown verification of such tools is
a daunting task, since they are large and complex.
In this work we investigate an alternative approach
whereby, instead of the tools, we formally verify
the results of the tools. We have implemented this
idea using COSTA, a state-of-the-art static analysis
system, for producing resource guarantees and KeY, a
state-of-the-art verification tool, for formally
verifying the correctness of such resource
guarantees. Our preliminary results show that the
proposed tool cooperation can be used for
automatically producing verified resource
guarantees.
@inproceedings{AlbertBGHPR11,
author = {Elvira Albert and Richard Bubel and Samir Genaim and Reiner H\"ahnle and Germ\'an Puebla and Guillermo Rom\'an-D\'iez},
title = {{V}erified {R}esource {G}uarantees using {COSTA} and {KeY}},
booktitle = {Proceedings of the 2011 ACM SIGPLAN Workshop on Partial
Evaluation and Program Manipulation, PEPM 2011, Austin,
TX, USA, January 24-25, 2011},
editor = {Siau-Cheng Khoo and Jeremy G. Siek},
publisher = {ACM},
series = {SIGPLAN},
year = {2011},
pages = {73-76},
doi = {10.1145/1929501.1929513},
rank:grinscie:class = {3}
}
|
[74]
|
Elvira Albert, Samir Genaim, and Abu Naser Masud.
More Precise yet Widely Applicable Cost Analysis.
In Ranjit Jhala and David A. Schmidt, editors, Verification,
Model Checking, and Abstract Interpretation - 12th International Conference,
VMCAI 2011, Austin, TX, USA, January 23-25, 2011. Proceedings, volume 6538
of Lecture Notes in Computer Science, pages 38-53. Springer, January
2011.
[ bibtex |
abstract |
DOI |
PDF ]
Cost analysis aims at determining the amount of
resources required to run a program in terms of its
input data sizes. Automatically inferring precise
bounds, while at the same time being able to handle
a wide class of programs, is a main challenge in
cost analysis. (1) Existing methods which rely on
computer algebra systems (CAS) to solve the
obtained cost recurrence equations (CR) are very
precise when applicable, but handle a very
restricted class of CR. (2) Specific solvers
developed for CR tend to sacrifice accuracy for
wider applicability. In this paper, we present a
novel approach to inferring precise upper and lower
bounds on CR which, when compared to (1), is
strictly more widely applicable while precision is
kept and, when compared to (2), is in practice more
precise and still keeps wide applicability. The main
novelty is that we are able to accurately bound the
worst-case/best-case cost of each iteration of the
program loops and, then, by summing the resulting
sequences, we obtain precise upper/lower
bounds. Preliminary experimental results on Java
(bytecode) programs confirm the good balance between
the accuracy of our analysis and its
applicability.
@inproceedings{AlbertGM11,
author = {Elvira Albert and Samir Genaim and Abu Naser Masud},
title = {More Precise yet Widely Applicable Cost Analysis},
booktitle = {Verification, Model Checking, and Abstract Interpretation
- 12th International Conference, VMCAI 2011, Austin, TX,
USA, January 23-25, 2011. Proceedings},
editor = {Ranjit Jhala and David A. Schmidt},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {6538},
pages = {38-53},
doi = {10.1007/978-3-642-18275-4_5},
year = {2011},
month = jan
}
|
[75]
|
Elvira Albert, Miguel Gómez-Zamalloa, José Miguel Rojas, and Germán Puebla.
Compositional CLP-based Test Data Generation for Imperative
Languages.
In María Alpuente, editor, LOPSTR 2010 Revised Selected
Papers, volume 6564 of Lecture Notes in Computer Science, pages
99-116. Springer-Verlag, 2011.
[ bibtex |
abstract |
DOI |
PDF ]
Glass-box test data generation (TDG) is the process of
automatically generating test input data for a
program by considering its internal structure. This
is generally accomplished by performing symbolic
execution of the program where the contents of
variables are expressions rather than concrete
values. The main idea in CLP-based TDG is to
translate imperative programs into equivalent CLP
ones and then rely on the standard evaluation
mechanism of CLP to symbolically execute the
imperative program. Performing symbolic execution on
large programs becomes quickly expensive due to the
large number and the size of paths that need to be
explored. In this paper, we propose compositional
reasoning in CLP-based TDG where large programs can
be handled by testing parts (such as components,
modules, libraries, methods, etc.) separately and
then by composing the test cases obtained for these
parts to get the required information on the whole
program. Importantly, compositional reasoning also
gives us a practical solution to handle native code,
which may be unavailable or written in a different
programming language. Namely, we can model the
behavior of a native method by means of test cases
and compositional reasoning is able to use them.
@inproceedings{AlbertGRP10,
author = {Elvira Albert and Miguel G\'{o}mez-Zamalloa and Jos{\'e} Miguel Rojas and Germ{\'a}n Puebla},
title = {Compositional {CLP}-based {T}est {D}ata {G}eneration for {I}mperative {L}anguages},
booktitle = {LOPSTR 2010 Revised Selected Papers},
publisher = {Springer-Verlag},
series = {Lecture Notes in Computer Science},
editor = {Mar\'{i}a Alpuente},
year = {2011},
volume = {6564},
pages = {99--116},
isbn = {978-3-642-20550-7},
doi = {10.1007/978-3-642-20551-4_7}
}
|
[76]
|
Elvira Albert, Miguel Gómez-Zamalloa, and Germán Puebla.
PET: A Partial Evaluation-based Test Case Generation
Tool for Java Bytecode.
In ACM SIGPLAN Workshop on Partial Evaluation and
Semantics-based Program Manipulation (PEPM'10), pages 25-28. ACM Press,
2010.
[ bibtex |
abstract |
PDF ]
PET is a prototype Partial Evaluation-based Test case
generation tool for a subset of Java bytecode
programs. It performs white-box test generation by
means of two consecutive Partial Evaluations
(PE). The first PE decompiles the Java bytecode
program into an equivalent CLP (Constraint Logic
Programming) counterpart. The second PE generates a
test-case generator from the CLP program. This
generator captures interesting test coverage
criteria and it is able to generate further test
cases on demand. For the first PE, PET incorporates
an existing tool which decompiles bytecode to
CLP. The main contribution of this work is the
implementation of the second PE and the proof of
concept of the approach. This has required the
development of a partial evaluator for CLP with
appropriate control strategies to ensure the
required coverage criteria and to generate test-case
generators. PET can be downloaded as free software
from its web site, where a repository of examples
and a web interface are also provided. Though PET
has to be extended to be applicable to larger
programs, we argue that it provides some evidence
that the approach can be of practical interest.
@inproceedings{AlbertGP10,
author = {Elvira Albert and Miguel G\'{o}mez-Zamalloa and Germ{\'a}n Puebla},
title = {{PET}: A {P}artial {E}valuation-based {T}est {C}ase {G}eneration {T}ool for {J}ava {B}ytecode},
booktitle = {ACM SIGPLAN Workshop on Partial Evaluation and Semantics-based Program Manipulation (PEPM'10)},
year = {2010},
pages = {25--28},
isbn = {978-1-60558-727-1},
publisher = {ACM Press}
}
|
[77]
|
Elvira Albert, Puri Arenas, Samir Genaim, Germán Puebla, and Diana
Ramírez-Deantes.
From Object Fields to Local Variables: A Practical
Approach to Field-Sensitive Analysis.
In Radhia Cousot and Matthieu Martel, editors, Static Analysis -
17th International Symposium, SAS 2010, Perpignan, France, September 14-16,
2010. Proceedings, volume 6337 of Lecture Notes in Computer Science,
pages 100-116. Springer, 2010.
[ bibtex |
abstract |
DOI |
PDF ]
Static analysis which takes into account the value of
data stored in the heap is typically considered complex
and computationally intractable in practice. Thus, most
static analyzers do not keep track of object fields (or
fields for short), i.e., they are field-insensitive.
In this paper, we propose locality conditions for
soundly converting fields into local variables.
This way, field-insensitive analysis over the
transformed program can infer information on the original
fields.
Our notion of locality is context-sensitive and can be
applied both to numeric and reference fields.
We then propose a polyvariant transformation which
actually converts object fields meeting the locality
condition into variables and which is able to generate
multiple versions of code when this leads to increasing
the number of fields which satisfy the locality
conditions. We have implemented our analysis within a
termination analyzer for Java bytecode.
Interestingly, we can now prove termination of programs
which use iterators without the need of a sophisticated
heap analysis.
@inproceedings{AlbertAGPR10,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Germ{\'a}n Puebla and Diana Ram\'{\i}rez-Deantes},
title = {From {O}bject {F}ields to {L}ocal {V}ariables: A {P}ractical {A}pproach
to {F}ield-{S}ensitive {A}nalysis},
booktitle = {Static Analysis - 17th International Symposium, SAS 2010,
Perpignan, France, September 14-16, 2010. Proceedings},
editor = {Radhia Cousot and
Matthieu Martel},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {6337},
year = {2010},
isbn = {978-3-642-15768-4},
pages = {100-116},
doi = {10.1007/978-3-642-15769-1_7},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
|
[78]
|
Elvira Albert, Puri Arenas, Samir Genaim, Miguel Gómez-Zamalloa, Germán Puebla, Diana Ramirez, Guillermo Román-Díez, and Damiano Zanardini.
Termination and Cost Analysis with COSTA and its User
Interfaces.
In Spanish Conference on Programming and Computer Languages
(PROLE'09), volume 258 of Electronic Notes in Theoretical Computer
Science, pages 109-121. Elsevier, September 2009.
[ bibtex |
abstract |
DOI |
PDF ]
COSTA is a static analyzer for Java
bytecode which is able to infer cost and termination
information for large classes of programs. The
analyzer takes as input a program and a resource of
interest, in the form of a cost model, and aims at
obtaining an upper bound on the execution cost with
respect to the resource and at proving program
termination. The COSTA system has reached a
considerable degree of maturity in that (1) it
includes state-of-the-art techniques for statically
estimating the resource consumption and the
termination behavior of programs, plus a number of
specialized techniques which are required for
achieving accurate results in the context of
object-oriented programs, such as handling numeric
fields in value analysis; (2) it provides several
non-trivial notions of cost (resource consumption)
including, in addition to the number of execution
steps, the amount of memory allocated in the heap or
the number of calls to some user-specified method;
(3) it provides several user interfaces: a classical
command line, a Web interface which allows
experimenting remotely with the system without the
need of installing it locally, and a recently
developed Eclipse plugin which facilitates usage of
the analyzer, even during the development phase; (4)
it can deal with both the Standard and Micro
editions of Java. In the tool demonstration, we will
show that COSTA is able to produce meaningful
results for non-trivial programs, possibly using
Java libraries. Such results can then be used in
many applications, including program development,
resource usage certification, program optimization,
etc.
@inproceedings{AlbertAGGPRRZ09,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Miguel G{\'o}mez-Zamalloa and Germ{\'a}n Puebla and Diana Ramirez and Guillermo Rom\'an-D\'iez and Damiano Zanardini},
title = {{T}ermination and {C}ost {A}nalysis with {COSTA} and its {U}ser {I}nterfaces},
booktitle = {Spanish Conference on Programming and Computer Languages (PROLE'09)},
series = {Electronic Notes in Theoretical Computer Science},
publisher = {Elsevier},
volume = {258},
year = {2009},
pages = {109-121},
doi = {10.1016/j.entcs.2009.12.008},
month = sep
}
|
[79]
|
Elvira Albert, Miguel Gómez-Zamalloa, and Germán Puebla.
Test Data Generation of Bytecode by CLP Partial
Evaluation.
In 18th International Symposium on Logic-based Program Synthesis
and Transformation (LOPSTR'08), number 5438 in Lecture Notes in Computer
Science, pages 4-23. Springer-Verlag, March 2009.
[ bibtex |
abstract |
PDF ]
@inproceedings{AlbertGP08,
author = {Elvira Albert and Miguel G\'{o}mez-Zamalloa and Germ{\'a}n Puebla},
title = {{T}est {D}ata {G}eneration of {B}ytecode by {CLP} {P}artial {E}valuation},
booktitle = {18th International Symposium on Logic-based Program Synthesis and Transformation (LOPSTR'08)},
publisher = {Springer-Verlag},
series = {Lecture Notes in Computer Science},
number = {5438},
isbn = {978-3-642-00514-5},
pages = {4--23},
month = mar,
year = {2009}
}
|
[80]
|
Elvira Albert, Diego Esteban Alonso Blas, Puri Arenas, Samir Genaim, and
Germán Puebla.
Asymptotic Resource Usage Bounds.
In Zhenjiang Hu, editor, Programming Languages and Systems, 7th
Asian Symposium, APLAS 2009, Seoul, Korea, December 14-16, 2009.
Proceedings, volume 5904 of Lecture Notes in Computer Science, pages
294-310. Springer, 2009.
[ bibtex |
abstract |
DOI |
PDF |
slides ]
When describing the resource usage of a program, it is
usual to talk in asymptotic terms, such as
the well-known “big O” notation, whereby we focus
on the behaviour of the program for large input data
and make a rough approximation by considering as
equivalent programs whose resource usage grows at
the same rate. Motivated by the existence of
non-asymptotic resource usage analyzers, in
this paper, we develop a novel transformation from a
non-asymptotic cost function (which can be produced
by multiple resource analyzers) into its asymptotic
form. Our transformation aims at producing tight
asymptotic forms which do not contain
redundant subexpressions (i.e., expressions
asymptotically subsumed by others). Interestingly,
we integrate our transformation at the heart of a
cost analyzer to generate asymptotic upper
bounds without having to first compute their
non-asymptotic counterparts. Our experimental
results show that, while non-asymptotic cost
functions become very complex, their asymptotic
forms are much more compact and manageable. This is
essential to improve scalability and to enable the
application of cost analysis in resource-aware
verification/certification.
@inproceedings{AlbertAAGP09,
author = {Elvira Albert and Diego Esteban Alonso Blas and Puri Arenas and Samir Genaim and Germ{\'a}n Puebla},
title = {Asymptotic {R}esource {U}sage {B}ounds},
booktitle = {Programming Languages and Systems, 7th Asian Symposium, {APLAS} 2009,
Seoul, Korea, December 14-16, 2009. Proceedings},
year = 2009,
editor = {Zhenjiang Hu},
series = {Lecture Notes in Computer Science},
volume = {5904},
pages = {294-310},
publisher = {Springer},
doi = {10.1007/978-3-642-10672-9_21},
slides = {http://costa.fdi.ucm.es/papers/costa/AlbertAAGP09-slides.pdf}
}
|
[81]
|
Elvira Albert, Puri Arenas, Samir Genaim, and Germán Puebla.
Field-Sensitive Value Analysis by Field-Insensitive
Analysis.
In Ana Cavalcanti and Dennis Dams, editors, 16th International
Symposium on Formal Methods, FM'09, volume 5850 of Lecture Notes in
Computer Science, pages 370-386. Springer, 2009.
[ bibtex |
abstract |
DOI |
PDF ]
Shared mutable data structures pose major problems in
static analysis and most analyzers are unable to
keep track of the values of numeric variables
stored in the heap. In this paper, we first
identify sufficient conditions under which heap
allocated numeric variables in object oriented
programs (i.e., numeric fields) can be handled as
non-heap allocated variables. Then, we present a
static analysis to infer which numeric fields
satisfy these conditions at the level of
(sequential) bytecode. This allows
instrumenting the code with ghost variables
which make such numeric fields observable to any
field-insensitive value analysis. Our experimental
results in termination analysis show that we greatly
enlarge the class of analyzable programs with a
reasonable overhead.
@inproceedings{AlbertAGP09,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Germ{\'a}n Puebla},
title = {{F}ield-{S}ensitive {V}alue {A}nalysis by {F}ield-{I}nsensitive {A}nalysis},
booktitle = {16th International Symposium on Formal Methods, FM'09},
year = 2009,
editor = {Ana Cavalcanti and Dennis Dams},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
volume = {5850},
pages = {370-386},
isbn = {978-3-642-05088-6},
doi = {10.1007/978-3-642-05089-3_24},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
|
[82]
|
Elvira Albert, Samir Genaim, and Miguel Gómez-Zamalloa.
Live heap space analysis for languages with garbage collection.
In Hillel Kolodner and Guy L. Steele Jr., editors, Proceedings
of the 8th International Symposium on Memory Management, ISMM 2009, Dublin,
Ireland, June 19-20, 2009, pages 129-138. ACM, 2009.
[ bibtex |
abstract |
DOI |
PDF ]
The peak heap consumption of a program is the
maximum size of the live data on the heap
during the execution of the program, i.e., the
minimum amount of heap space needed to run the
program without exhausting the memory. It is
well-known that garbage collection (GC) makes the
problem of predicting the memory required to run a
program difficult. This paper presents, to the best of
our knowledge, the first live heap space
analysis for garbage-collected languages which
infers accurate upper bounds on the peak heap usage
of a program's execution that are not restricted to
any complexity class, i.e., we can infer
exponential, logarithmic, polynomial, etc.,
bounds. Our analysis is developed for a
(sequential) object-oriented bytecode language with
a scoped-memory manager that reclaims
unreachable memory when methods return. We also show
how our analysis can accommodate other GC schemes
which are closer to the ideal GC which
collects objects as soon as they become unreachable.
The practicality of our approach is experimentally
evaluated on a prototype implementation. We
demonstrate that it is fully automatic, reasonably
accurate and efficient by inferring live heap space
bounds for a standardized set of benchmarks, the
JOlden suite.
@inproceedings{AlbertGG09,
author = {Elvira Albert and Samir Genaim and Miguel G{\'o}mez-Zamalloa},
title = {Live heap space analysis for languages with garbage collection},
editor = {Hillel Kolodner and Guy L. Steele Jr.},
booktitle = {Proceedings of the 8th International Symposium on Memory
Management, ISMM 2009, Dublin, Ireland, June 19-20, 2009},
year = 2009,
pages = {129-138},
publisher = {ACM},
isbn = {978-1-60558-347-1},
doi = {10.1145/1542431.1542450},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
|
[83]
|
Elvira Albert, Puri Arenas, Samir Genaim, and Germán Puebla.
Cost Relation Systems: a Language-Independent Target
Language for Cost Analysis.
In Spanish Conference on Programming and Computer Languages
(PROLE'08), volume 248 of Electronic Notes in Theoretical Computer
Science, pages 31-46. Elsevier, 2009.
[ bibtex |
abstract |
DOI |
PDF ]
Cost analysis aims at obtaining information about the
execution cost of programs. This paper studies
cost relation systems (CRSs): the sets of
recursive equations used in cost analysis in order
to capture the execution cost of programs in terms
of the size of their input arguments. We investigate
the notion of CRS from a general perspective which
is independent of the particular cost analysis
framework. Our main contributions are: we provide a
formal definition of execution cost and of CRS which
is not tied to a particular programming language; we
present the notion of a sound CRS, i.e., one which
correctly approximates the cost of the corresponding
program; and we identify the differences with recurrence
relation systems, their possible applications and the
new challenges that they bring about.
framework is illustrated by instantiating it to cost
analysis of Java bytecode, Haskell, and Prolog.
@inproceedings{AlbertAGP08b,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Germ{\'a}n Puebla},
title = {{C}ost {R}elation {S}ystems: a
{L}anguage-{I}ndependent {T}arget {L}anguage for
{C}ost {A}nalysis},
booktitle = {Spanish Conference on Programming and Computer Languages (PROLE'08)},
series = {Electronic Notes in Theoretical Computer Science},
volume = {248},
year = {2009},
pages = {31-46},
publisher = {Elsevier},
doi = {10.1016/j.entcs.2009.07.057},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
|
[84]
|
Miguel Gómez-Zamalloa, Elvira Albert, and Germán Puebla.
Modular Decompilation of Low-Level Code by Partial
Evaluation.
In 8th IEEE International Working Conference on Source Code
Analysis and Manipulation (SCAM'08), pages 239-248. IEEE Computer Society,
September 2008.
[ bibtex |
abstract |
PDF ]
@inproceedings{Gomez-ZAP08,
author = {Miguel G\'{o}mez-Zamalloa and Elvira Albert and Germ{\'a}n Puebla},
title = {{M}odular {D}ecompilation of {L}ow-{L}evel {C}ode by {P}artial {E}valuation},
booktitle = {8th IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM'08)},
publisher = {IEEE Computer Society},
year = {2008},
pages = {239--248},
month = sep
}
|
[85]
|
Elvira Albert, John P. Gallagher, Miguel Gómez-Zamalloa, and Germán Puebla.
Type-based Homeomorphic Embedding and its Applications to
Online Partial Evaluation.
In 17th International Symposium on Logic-based Program Synthesis
and Transformation (LOPSTR'07), volume 4915 of Lecture Notes in
Computer Science, pages 23-42. Springer-Verlag, February 2008.
[ bibtex |
abstract |
PDF ]
@inproceedings{AlbertGGP08,
author = {Elvira Albert and John P. Gallagher and Miguel G\'{o}mez-Zamalloa and Germ{\'a}n Puebla},
title = {{T}ype-based {H}omeomorphic {E}mbedding and its
{A}pplications to {O}nline {P}artial {E}valuation},
booktitle = {17th International Symposium on Logic-based Program
Synthesis and Transformation (LOPSTR'07)},
publisher = {Springer-Verlag},
series = {Lecture Notes in Computer Science},
volume = {4915},
pages = {23--42},
isbn = {978-3-540-78768-6},
npages = 20,
year = {2008},
month = feb
}
|
[86]
|
Elvira Albert, Puri Arenas, Samir Genaim, and Germán Puebla.
Automatic Inference of Upper Bounds for Recurrence Relations in Cost
Analysis.
In María Alpuente and Germán Vidal, editors, Static
Analysis, 15th International Symposium, SAS 2008, Valencia, Spain, July
16-18, 2008. Proceedings, volume 5079 of Lecture Notes in Computer
Science, pages 221-237. Springer, 2008.
[ bibtex |
abstract |
DOI |
PDF ]
COSTA is a research prototype which performs automatic
program analysis and which is able to infer cost and
termination information about Java bytecode
programs. The system receives as input a bytecode
program and a cost model chosen from a selection of
resource descriptions, and tries to bound the
resource consumption of the program with respect to
the given cost model. COSTA provides several
non-trivial notions of resource, such as the amount
of memory allocated on the heap, the number of
bytecode instructions executed, or the number of
billable events (such as sending a text message on a
mobile phone) executed by the program. When
performing cost analysis, COSTA produces a cost
equation system, which is an extended form of
recurrence relations. In order to obtain a closed
(i.e., non-recursive) form for such recurrence
relations which represents an upper bound, COSTA
includes a dedicated solver. An interesting feature
of COSTA is that it uses pretty much the same
machinery for inferring upper bounds on cost as for
proving termination (which also implies the
boundedness of any resource consumption).
@inproceedings{AlbertAGP08,
author = {Elvira Albert and
Puri Arenas and
Samir Genaim and
Germ{\'a}n Puebla},
title = {Automatic Inference of Upper Bounds for Recurrence Relations
in Cost Analysis},
booktitle = {Static Analysis, 15th International Symposium, SAS 2008,
Valencia, Spain, July 16-18, 2008. Proceedings},
editor = {Mar\'{\i}a Alpuente and
Germ{\'a}n Vidal},
year = {2008},
pages = {221--237},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {5079},
isbn = {978-3-540-69163-1},
doi = {10.1007/978-3-540-69166-2_15},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
|
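As a purely illustrative sketch (ours, not taken from the paper), a cost equation system of the kind COSTA produces for a method m that traverses a list of length n, together with the closed-form (non-recursive) upper bound its solver would infer, could look like:

```latex
% Hypothetical cost relation for a list-traversing method m;
% C_m(n) is the cost as a function of the input size n.
\begin{align*}
  C_m(n) &= c_0            && \text{if } n = 0\\
  C_m(n) &= c_1 + C_m(n-1) && \text{if } n > 0
\end{align*}
% Closed-form upper bound obtained by the dedicated solver:
\[
  C_m(n) \le c_0 + c_1 \cdot n
\]
```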
[87]
|
Elvira Albert, Puri Arenas, Michael Codish, Samir Genaim, Germán Puebla,
and Damiano Zanardini.
Termination Analysis of Java Bytecode.
In Gilles Barthe and Frank S. de Boer, editors, Formal Methods
for Open Object-Based Distributed Systems, 10th IFIP WG 6.1 International
Conference, FMOODS 2008, Oslo, Norway, June 4-6, 2008, Proceedings, volume
5051 of Lecture Notes in Computer Science, pages 2-18. Springer, 2008.
[ bibtex |
abstract |
DOI |
PDF ]
Termination analysis has received
considerable attention, traditionally in the context
of declarative programming, and recently also for
imperative languages. In existing approaches,
termination analysis is performed on source
programs. However, there are many situations,
including mobile code, where only the compiled code
is available. In this work we present an automatic
termination analysis for sequential Java Bytecode
programs. Such analysis presents all of the
challenges of analyzing a low-level language as well
as those introduced by object-oriented languages.
Interestingly, given a bytecode program, we produce
a constraint logic program, CLP, whose
termination entails termination of the bytecode
program. This allows applying the large body of work
in termination of CLP programs to termination of
Java bytecode. A prototype analyzer is described
and initial experimentation is reported.
@inproceedings{AlbertACGPZ08,
author = {Elvira Albert and Puri Arenas and Michael Codish and Samir Genaim and Germ{\'a}n Puebla and Damiano Zanardini},
title = {Termination Analysis of Java Bytecode},
booktitle = {Formal Methods for Open Object-Based Distributed Systems, 10th IFIP WG 6.1 International Conference, FMOODS 2008, Oslo, Norway, June 4-6, 2008, Proceedings},
editor = {Gilles Barthe and Frank S. de Boer},
year = {2008},
pages = {2--18},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {5051},
isbn = {978-3-540-68862-4},
doi = {10.1007/978-3-540-68863-1_2},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
|
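To illustrate the CLP translation described in the abstract above (this example is ours, not the paper's), a counting loop such as while (i < n) i++ could become a constraint logic program whose termination entails that of the loop:

```latex
% Hypothetical CLP rendering of "while (i < n) i++":
\begin{align*}
  \mathit{loop}(i, n) &\leftarrow i \ge n.\\
  \mathit{loop}(i, n) &\leftarrow i < n,\; i' = i + 1,\; \mathit{loop}(i', n).
\end{align*}
% Termination can then be shown with the ranking function n - i,
% which is non-negative and strictly decreases at each recursive call.
```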
[88]
|
Elvira Albert, Puri Arenas, Samir Genaim, Germán Puebla, and Damiano Zanardini.
Removing Useless Variables in Cost Analysis of Java
Bytecode.
In Roger L. Wainwright and Hisham Haddad, editors, Proceedings
of the 2008 ACM Symposium on Applied Computing (SAC), Fortaleza, Ceara,
Brazil, March 16-20, 2008, pages 368-375. ACM, 2008.
[ bibtex |
abstract |
DOI |
PDF ]
Automatic cost analysis has interesting
applications in the context of verification and
certification of mobile code. For instance, the
code receiver can use cost information in order to
decide whether to reject mobile code which has too
large cost requirements in terms of computing
resources (in time and/or space) or billable events
(SMSs sent, bandwidth required). Existing cost
analyses for a variety of languages describe the
resource consumption of programs by means of
Cost Equation Systems (CESs), which are
similar to, but more general than recurrence
equations. CESs express the cost of a program in
terms of the size of its input data. In a further
step, a closed form (i.e., non-recursive) solution
or upper bound can sometimes be found by using
existing Computer Algebra Systems (CASs), such as
Maple and Mathematica. In this work, we focus on
cost analysis of Java bytecode, a language which is
widely used in the context of mobile code and we
study the problem of identifying variables which are
useless in the sense that they do not affect
the execution cost and therefore can be ignored by
cost analysis. We identify two classes of useless
variables and propose automatic analysis techniques
to detect them. The first class corresponds to
stack variables that can be replaced by
program variables or constant values. The second
class corresponds to variables whose value is
cost-irrelevant, i.e., does not affect the
cost of the program. We propose an algorithm,
inspired by static slicing, which safely
identifies cost-irrelevant variables. The benefits
of eliminating useless variables are two-fold: (1)
cost analysis without useless variables can be more
efficient and (2) resulting CESs are more likely to
be solvable by existing CASs.
@inproceedings{AlbertAGPZ08,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Germ{\'a}n Puebla and Damiano Zanardini},
title = {Removing {U}seless {V}ariables in {C}ost {A}nalysis of {J}ava {B}ytecode},
editor = {Roger L. Wainwright and Hisham Haddad},
booktitle = {Proceedings of the 2008 ACM Symposium on Applied Computing (SAC), Fortaleza, Ceara, Brazil, March 16-20, 2008},
year = {2008},
pages = {368--375},
publisher = {ACM},
isbn = {978-1-59593-753-7},
doi = {10.1145/1363686.1363779},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
|
[89]
|
Elvira Albert, Puri Arenas, Samir Genaim, Germán Puebla, and Damiano Zanardini.
A Generic Framework for the Cost Analysis of Java
Bytecode.
In Ernesto Pimentel, editor, Spanish Conference on Programming
and Computer Languages (PROLE'07), pages 61-70, September 2007.
[ bibtex |
abstract |
PDF ]
Cost analysis of Java bytecode is
complicated by its unstructured control flow, the
use of an operand stack and its object-oriented
programming features (like dynamic dispatching).
This paper addresses these problems and develops a
generic framework for the automatic cost analysis of
sequential Java bytecode. Our method generates
cost relations which define at compile-time
the cost of programs as a function of their input
data size. To the best of our knowledge, this is the
first approach to the automatic cost analysis of
Java bytecode.
@inproceedings{AlbertAGPZ07b,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Germ{\'a}n Puebla and Damiano Zanardini},
title = {A {G}eneric {F}ramework for the {C}ost {A}nalysis of {J}ava {B}ytecode},
booktitle = {Spanish Conference on Programming and Computer Languages (PROLE'07)},
year = {2007},
pages = {61--70},
month = sep,
editor = {Ernesto Pimentel}
}
|
[90]
|
Elvira Albert, Puri Arenas, Samir Genaim, Germán Puebla, and Damiano Zanardini.
Cost Analysis of Java Bytecode.
In Rocco De Nicola, editor, Programming Languages and Systems,
16th European Symposium on Programming, ESOP 2007, Held as Part of the
Joint European Conferences on Theory and Practice of Software, ETAPS 2007,
Braga, Portugal, March 24 - April 1, 2007, Proceedings, volume 4421 of
Lecture Notes in Computer Science, pages 157-172. Springer-Verlag, March
2007.
[ bibtex |
abstract |
DOI |
PDF ]
Cost analysis of Java bytecode is
complicated by its unstructured control flow, the
use of an operand stack and its object-oriented
programming features (like dynamic dispatching).
This paper addresses these problems and develops a
generic framework for the automatic cost analysis of
sequential Java bytecode. Our method generates
cost relations which define at compile-time
the cost of programs as a function of their input
data size. To the best of our knowledge, this is the
first approach to the automatic cost analysis of
Java bytecode.
@inproceedings{AlbertAGPZ07,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Germ{\'a}n Puebla and Damiano Zanardini},
title = {{C}ost {A}nalysis of {J}ava {B}ytecode},
booktitle = {Programming Languages and Systems, 16th European Symposium on Programming,
{ESOP} 2007, Held as Part of the Joint European Conferences on Theory
and Practice of Software, {ETAPS} 2007, Braga, Portugal, March 24
- April 1, 2007, Proceedings},
year = 2007,
editor = {Rocco De Nicola},
series = {Lecture Notes in Computer Science},
month = mar,
publisher = {Springer-Verlag},
volume = {4421},
pages = {157--172},
isbn = {978-3-540-71314-2},
doi = {10.1007/978-3-540-71316-6_12},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
|
[91]
|
Elvira Albert, Miguel Gómez-Zamalloa, Laurent Hubert, and Germán Puebla.
Verification of Java Bytecode using Analysis and
Transformation of Logic Programs.
In Ninth International Symposium on Practical Aspects of
Declarative Languages (PADL 2007), number 4354 in Lecture Notes in Computer
Science, pages 124-139. Springer-Verlag, January 2007.
[ bibtex |
abstract |
PDF ]
@inproceedings{AlbertGHP07,
author = {Elvira Albert and Miguel G\'{o}mez-Zamalloa and Laurent Hubert and Germ{\'a}n Puebla},
title = {{V}erification of {J}ava {B}ytecode using {A}nalysis and
{T}ransformation of {L}ogic {P}rograms},
booktitle = {Ninth International Symposium on Practical Aspects of
Declarative Languages (PADL 2007)},
year = 2007,
month = jan,
pages = {124--139},
isbn = {978-3-540-69608-7},
publisher = {Springer-Verlag},
series = {Lecture Notes in Computer Science},
number = {4354}
}
|
[92]
|
Elvira Albert, Samir Genaim, and Miguel Gómez-Zamalloa.
Heap Space Analysis for Java Bytecode.
In ISMM '07: Proceedings of the 6th International Symposium on
Memory Management, pages 105-116, New York, NY, USA, 2007. ACM Press.
[ bibtex |
abstract |
DOI |
PDF ]
This article presents a heap space analysis for
(sequential) Java bytecode. The analysis generates
heap space cost relations which define at
compile-time the heap consumption of a program as a
function of its data size. These relations can be
used to obtain upper bounds on the heap space
allocated during the execution of the different
methods. In addition, we describe how to refine the
cost relations, by relying on escape
analysis, in order to take into account the heap
space that can be safely deallocated by the garbage
collector upon exit from a corresponding
method. These refined cost relations are then used
to infer upper bounds on the active heap
space upon methods' return. Example applications
for the analysis consider inference of constant heap
usage and heap usage proportional to the data size
(including polynomial and exponential heap
consumption). Our prototype implementation is
reported and demonstrated by means of a series of
examples which illustrate how the analysis naturally
encompasses standard data-structures like lists,
trees and arrays with several dimensions written in
object-oriented programming style.
@inproceedings{AlbertGG07,
author = {Elvira Albert and Samir Genaim and Miguel G{\'o}mez-Zamalloa},
title = {{H}eap {S}pace {A}nalysis for {J}ava {B}ytecode},
booktitle = {ISMM '07: Proceedings of the 6th International Symposium on Memory Management},
year = 2007,
pages = {105--116},
location = {Montreal, Quebec, Canada},
publisher = {ACM Press},
address = {New York, NY, USA},
isbn = {978-1-59593-893-0},
doi = {10.1145/1296907.1296922},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
|
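As a hypothetical illustration of the heap space cost relations described above (not drawn from the paper), a method that allocates one s-byte object per element of a list of length n would give rise to:

```latex
% Illustrative heap-space relation (hypothetical); H_m(n) bounds
% the heap allocated by m as a function of the input size n.
\begin{align*}
  H_m(0) &= 0\\
  H_m(n) &= s + H_m(n-1) && \text{if } n > 0
\end{align*}
% Closed-form upper bound: H_m(n) \le s \cdot n
```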