[1]
|
Elvira Albert, Jesús Correas, Pablo Gordillo, Guillermo Román-Díez, and
Albert Rubio.
Don't Run on Fumes - Parametric Gas Bounds for Smart Contracts.
Journal of Systems and Software, 176:110923, 2021.
[ bibtex |
abstract |
DOI |
PDF ]
Gas is the unit that measures the computational
effort required to execute each replicated operation
on the Ethereum blockchain platform. Each
transaction executed by the Ethereum Virtual Machine
(EVM) has an associated gas consumption specified by
Ethereum. If a transaction exceeds the amount of gas
allotted by the user (known as the gas limit), an
out-of-gas exception is raised, interrupting the
current execution. We
present, to the best of our knowledge, the first
static analysis that is able to infer parametric gas
bounds for smart contracts. The inferred bounds are
typically parametric on the sizes of the input
parameters of the functions, but they are often also
parametric on the contract state and sometimes on
blockchain data. The analysis is implemented in a tool named
GASTAP, Gas-Aware Smart contracT Analysis Platform,
which takes as input a smart contract (either in
EVM, disassembled EVM, or in Solidity source code)
and automatically infers sound gas upper bounds for
its public functions. GASTAP has been applied to
the entire contents of the Ethereum blockchain: in
total, we have analyzed 318,397 functions and
obtained sound gas bounds for 91.02% of them.
@article{AlbertCGRR21,
author = {Elvira Albert and
Jes\'us Correas and
Pablo Gordillo and
Guillermo Rom\'an-D\'iez and
Albert Rubio},
title = {{Don't Run on Fumes --- Parametric Gas Bounds for Smart Contracts}},
journal = {{J}ournal of {S}ystems and {S}oftware},
volume = {176},
pages = {110923},
issn = {0164-1212},
doi = {10.1016/j.jss.2021.110923},
year = {2021},
rank:jcr:q = {2}
}
|
[2]
|
Cosimo Laneve, Michael Lienhardt, Ka I Pun, and Guillermo Román-Díez.
Time analysis of actor programs.
Journal of Logical and Algebraic Methods in Programming, 105:1-27, 2019.
[ bibtex |
abstract |
DOI ]
This paper proposes a technique for estimating the computational time
of programs in an actor model, which is intended to serve as a
compiler target of a wide variety of actor-based programming
languages. We define a compositional translation function returning
cost equations, which are fed to an automatic off-the-shelf solver for
obtaining the time bounds. Our approach is based on a new notion of
synchronization sets, which captures possibly difficult
synchronization patterns between actors and helps make the analysis
efficient and precise. The approach is proven to correctly
over-approximate the worst-case computational time of an actor model
of concurrent programs.
@article{LaneveLPR19,
author = {Cosimo Laneve and
Michael Lienhardt and
Ka I Pun and
Guillermo Rom\'an-D\'iez},
title = {Time analysis of actor programs},
journal = {Journal of Logical and Algebraic Methods in Programming},
issn = {2352-2208},
publisher = {Elsevier},
doi = {10.1016/j.jlamp.2019.02.007},
year = {2019},
rank:jcr:q = {2},
volume = {105},
pages = {1--27}
}
|
[3]
|
Elvira Albert, Jesús Correas, and Guillermo Román-Díez.
Peak Resource Analysis of Concurrent Distributed Systems.
Journal of Systems and Software, 149:35-62, 2019.
[ bibtex |
abstract |
DOI ]
Traditional cost analysis frameworks have been defined
for cumulative resources which keep on increasing
along the computation. Examples of cumulative
resources are execution time, number of executed
steps, and energy consumption. Non-cumulative
resources are acquired and (possibly) released along
the execution. Examples of non-cumulative costs are
the number of connections established and later
closed, or resources requested from a virtual host
and released after use. We present a
static analysis framework to infer the peak cost for
non-cumulative types of resources in concurrent
distributed systems. Our framework is generic
w.r.t. the type of resource and can be instantiated
with any of the above-mentioned resources as well as
with any other resource that is measurable by
inspecting the instructions of the program. The
concurrent distributed language that we consider
allows creating distributed locations dynamically
within the program and spawning tasks that execute
concurrently at any of the existing locations. Our
analysis infers, for the different distributed
locations, the maximum (or peak) amount of resources
that they may require along any execution. This
information can be used, among other applications,
to detect bottlenecks in the system and to
efficiently dimension the processing capacity and
storage that the locations of the concurrent
distributed system require.
@article{AlbertCR18,
author = {Elvira Albert and
Jes\'us Correas and
Guillermo Rom\'an-D\'iez},
title = {{P}eak {R}esource {A}nalysis of {C}oncurrent {D}istributed {S}ystems},
journal = {{J}ournal of {S}ystems and {S}oftware},
volume = {149},
pages = {35--62},
issn = {0164-1212},
year = {2019},
doi = {10.1016/j.jss.2018.11.018},
publisher = {Elsevier},
rank:jcr:q = {2}
}
|
[4]
|
Elvira Albert, Jesús Correas, Einar Broch Johnsen, Ka I Pun, and Guillermo Román-Díez.
Parallel Cost Analysis.
ACM Transactions on Computational Logic, 19(4):1-37, 2018.
[ bibtex |
abstract |
DOI ]
This paper presents parallel cost analysis, a static
cost analysis that over-approximates the cost
of parallel execution in distributed systems. In
contrast to the standard notion of serial cost,
parallel cost captures the cost of synchronized
tasks executing in parallel by exploiting the true
concurrency available in the execution model of
distributed processing. True concurrency is
challenging for static cost analysis, because the
parallelism between tasks needs to be soundly
inferred, and the waiting and idle processor times
at the different locations need to be accounted for.
Parallel cost analysis works in three phases: (1) it
performs a block-level analysis to estimate the
serial costs of the blocks between synchronization
points in the program; (2) it then constructs a
distributed flow graph (DFG) to capture the
parallelism, the waiting and idle times at the
locations of the distributed system; and (3) the
parallel cost can finally be obtained as the path of
maximal cost in the DFG. We prove the correctness of
the proposed parallel cost analysis, and provide a
prototype implementation to perform an experimental
evaluation of the accuracy and feasibility of the
proposed analysis.
@article{AlbertCPJR18,
author = {Elvira Albert and
Jes\'us Correas and
Einar Broch Johnsen and
Ka I Pun and
Guillermo Rom\'an-D\'iez},
title = {{P}arallel {C}ost {A}nalysis},
journal = {ACM Transactions on Computational Logic},
issn = {1529-3785},
publisher = {ACM},
rank:jcr:q = {4},
year = {2018},
volume = {19},
pages = {1--37},
number = {4},
doi = {10.1145/3274278}
}
|
[5]
|
Elvira Albert, Richard Bubel, Samir Genaim, Reiner Hähnle, Germán Puebla,
and Guillermo Román-Díez.
A Formal Verification Framework for Static Analysis.
Software and Systems Modeling, 15(4):987-1012, 2016.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Static analysis tools, such as resource analyzers, give useful
information on software systems, especially in real-time and
safety-critical applications. Therefore, the question of the
reliability of the obtained results is highly important.
State-of-the-art static analyzers typically combine a range of
complex techniques, make use of external tools, and evolve quickly. To
formally verify such systems is not a realistic option.
In this work we propose a different approach whereby, instead of
the tools, we formally verify the results of the
tools. The central idea of such a formal verification framework for
static analysis is the method-wise translation of the information
about a program gathered during its static analysis into
specification contracts that contain enough information for them to
be verified automatically.
We instantiate this framework with COSTA, a state-of-the-art
static analysis system for sequential Java programs, for producing
resource guarantees, and KeY, a state-of-the-art verification tool,
for formally verifying the correctness of such resource guarantees.
Resource guarantees allow us to be certain that programs will
run within the indicated amount of resources, which may refer to
memory consumption, number of instructions executed, etc.
Our results show that the proposed tool cooperation can be used for
automatically producing verified resource guarantees.
@article{AlbertBGHPR15,
author = {Elvira Albert and
Richard Bubel and
Samir Genaim and
Reiner H\"ahnle and
Germ\'{a}n Puebla and
Guillermo Rom\'an-D\'iez},
title = {{A} {F}ormal {V}erification {F}ramework for {S}tatic {A}nalysis},
journal = {{S}oftware and {S}ystems {M}odeling},
year = {2016},
pages = {987-1012},
volume = {15},
number = {4},
publisher = {Springer},
url = {http://dx.doi.org/10.1007/s10270-015-0476-y},
doi = {10.1007/s10270-015-0476-y},
issn = {1619-1366},
rank:jcr:q = {2}
}
|
[6]
|
Elvira Albert, Puri Arenas, Jesús Correas, Samir Genaim, Miguel Gómez-Zamalloa, Germán Puebla, and Guillermo Román-Díez.
Object-Sensitive Cost Analysis for Concurrent Objects.
Software Testing, Verification and Reliability, 25(3):218-271,
2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
This article presents a novel cost
analysis framework for concurrent objects.
Concurrent objects form a well established model for distributed
concurrent systems. In this model, objects are the
concurrency units which communicate among them via
asynchronous method calls. Cost analysis aims at
automatically approximating the resource consumption of
executing a program in terms of its input parameters. While cost
analysis for sequential programming languages has received
considerable attention, concurrency and distribution have been notably
less studied. The main challenges of cost analysis in a concurrent
setting are:
(1) Inferring precise size abstractions for data in the program in
the presence of shared memory. This information is essential for
bounding the number of iterations of loops.
(2) Distribution suggests that analysis must infer the cost of the
diverse distributed components separately. We handle this by means of a
novel form of object-sensitive recurrence equations which use
cost centers in order to keep the resource usage assigned to
the different components separate.
We have implemented our analysis and evaluated it
on several small applications which are
classical examples of concurrent and distributed programming.
@article{AlbertACGGPR15,
author = {Elvira Albert and
Puri Arenas and
Jes\'us Correas and
Samir Genaim and
Miguel G\'omez-Zamalloa and
Germ\'{a}n Puebla and
Guillermo Rom\'an-D\'iez},
title = {{O}bject-{S}ensitive {C}ost {A}nalysis for {C}oncurrent {O}bjects},
journal = {Software Testing, Verification and Reliability},
year = {2015},
publisher = {Wiley},
pages = {218--271},
url = {http://dx.doi.org/10.1002/stvr.1569},
doi = {10.1002/stvr.1569},
volume = {25},
number = {3},
issn = {0960-0833},
rank:jcr:q = {2}
}
|
[7]
|
Elvira Albert, Jesús Correas, Germán Puebla, and Guillermo Román-Díez.
Quantified Abstract Configurations of Distributed Systems.
Formal Aspects of Computing, 27(4):665-699, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
When reasoning about distributed systems, it is essential to have
information about the different kinds of nodes that compose the
system, how many instances of each kind exist, and how nodes
communicate with
other nodes.
In this paper we present a static-analysis-based approach which is
able to provide information about the questions above.
In order to cope with an unbounded number of nodes and an unbounded
number of calls among them, the analysis
performs an abstraction of the system, producing a graph whose
nodes may represent (infinitely) many concrete nodes and whose
arcs may represent (infinitely) many calls among nodes.
The crux of our approach is that the abstraction is enriched with
upper bounds inferred by resource analysis that limit the
number of concrete instances that the nodes and arcs represent and
their resource consumption.
The information available in our quantified abstract
configurations allows us to define performance indicators which
measure the quality of the system. In particular, we present
several indicators that assess the level of distribution in the
system, the amount of communication among distributed nodes
that it requires, and how balanced the load of the distributed nodes
that compose the system is. Our performance indicators are given as
functions on the input data sizes, and they can be used to automate the
comparison of different distributed settings and guide towards finding the
optimal configuration.
@article{AlbertCPR14,
author = {Elvira Albert and Jes\'us Correas and Germ\'{a}n Puebla and Guillermo Rom\'{a}n-D\'{i}ez},
title = {{Q}uantified {A}bstract {C}onfigurations of {D}istributed {S}ystems},
journal = {Formal Aspects of Computing},
year = {2015},
publisher = {Springer},
issn = {0934-5043},
volume = {27},
number = {4},
pages = {665--699},
rank:jcr:q = {4},
doi = {10.1007/s00165-014-0321-z},
url = {https://doi.org/10.1007/s00165-014-0321-z}
}
|
[8]
|
Elvira Albert, Jesús Correas, Germán Puebla, and Guillermo Román-Díez.
A Multi-Domain Incremental Analysis Engine and its
Application to Incremental Resource Analysis.
Theoretical Computer Science, 585:91-114, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
The aim of incremental analysis is, given a program,
its analysis results, and a series of changes to the program, to
obtain the new analysis results as efficiently as possible and,
ideally, without having to (re-)analyze fragments of code which are
not affected by the changes. Incremental analysis can significantly
reduce both the time and the memory requirements of analysis.
The first contribution of this article is a multi-domain
incremental fixed-point algorithm for a sequential Java-like
language. The algorithm is multi-domain in the sense that it
interleaves the (re-)analysis for multiple domains by taking into
account dependencies among them. Importantly, this allows the
incremental analyzer to invalidate only those analysis results
previously inferred by certain dependent domains. The second
contribution is an incremental resource usage analysis which,
in its first phase, uses the multi-domain incremental fixed-point
algorithm to carry out all global pre-analyses required to infer
cost in an interleaved way. Such resource analysis is parametric on
the cost metrics one wants to measure (e.g., number of executed
instructions, number of objects created, etc.). Besides, we present
a novel form of cost summaries which allows us to
incrementally reconstruct only those components of cost functions
affected by the changes. Experimental results in the COSTA system
show that the proposed incremental analysis
provides significant performance gains, with speedups over
non-incremental analysis ranging from 1.48x to 5.13x.
@article{AlbertCPR13-tcs,
author = {Elvira Albert and Jes\'us Correas and Germ\'{a}n Puebla and Guillermo Rom\'{a}n-D\'{i}ez},
title = {A {M}ulti-{D}omain {I}ncremental {A}nalysis {E}ngine and its {A}pplication to {I}ncremental {R}esource {A}nalysis},
journal = {Theoretical Computer Science},
year = {2015},
publisher = {Elsevier},
volume = {585},
pages = {91--114},
issn = {0304-3975},
doi = {10.1016/j.tcs.2015.03.002},
url = {https://doi.org/10.1016/j.tcs.2015.03.002},
rank:jcr:q = {3}
}
|
[9]
|
Elvira Albert, Puri Arenas, Samir Genaim, Germán Puebla, and Guillermo Román-Díez.
Conditional Termination of Loops over Heap-allocated Data.
Science of Computer Programming, 92:2-24, 2014.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Static analysis which takes into account the values of data stored
in the heap is considered complex and computationally intractable in
practice. Thus, most static analyzers keep track of neither object
fields nor array contents, i.e., they are
heap-insensitive. In this article, we propose locality
conditions for soundly tracking heap-allocated data in Java
(bytecode) programs, by means of ghost non-heap allocated
variables. This way, heap-insensitive analysis over the transformed
program can infer information on the original heap-allocated data
without sacrificing efficiency. If the locality conditions cannot
be proven unconditionally, we seek to generate aliasing
preconditions which, when they hold in the initial state,
guarantee the termination of the program. Experimental results show
that we greatly improve the accuracy w.r.t. a heap-insensitive
analysis while the overhead introduced is reasonable.
@article{AlbertAGPG13,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Germ\'{a}n Puebla and Guillermo Rom\'{a}n-D\'{i}ez},
title = {{C}onditional {T}ermination of {L}oops over {H}eap-allocated {D}ata},
journal = {Science of Computer Programming},
publisher = {Elsevier},
doi = {10.1016/j.scico.2013.04.006},
url = {http://dx.doi.org/10.1016/j.scico.2013.04.006},
issn = {0167-6423},
volume = {92},
pages = {2--24},
year = {2014},
rank:jcr:q = {3}
}
|
[1]
|
Elvira Albert, Jesús Correas, Pablo Gordillo, Guillermo Román-Díez, and
Albert Rubio.
Inferring Needless Write Memory Accesses on Ethereum Smart Contracts.
In 29th International Conference on Tools and Algorithms for the
Construction and Analysis of Systems, TACAS 2023. Proceedings, volume 13993
of Lecture Notes in Computer Science, pages 448-466. Springer, 2023.
[ bibtex |
abstract |
DOI |
PDF ]
Efficiency is a fundamental property of any type of
program, but it is even more so in the context of
the programs executing on the blockchain (known as
smart contracts). This is because optimizing smart
contracts has direct consequences on reducing the
costs of deploying and executing the contracts, as
there are fees to pay related to their byte size
and to their resource consumption (called
gas). Optimizing memory usage is considered a
challenging problem that, among other things,
requires a precise inference of the memory locations
being accessed. This is also the case for the
Ethereum Virtual Machine (EVM) bytecode generated by
the most-widely used compiler, solc, whose rather
unconventional and low-level memory usage challenges
automated reasoning. This paper presents a static
analysis, developed at the level of the EVM bytecode
generated by solc, that infers write memory accesses
that are needless and thus can be safely
removed. The application of our implementation on
more than 19,000 real smart contracts has detected
about 6,200 needless write accesses in less than 4
hours. Interestingly, many of these writes were
involved in memory usage patterns generated by solc
that can be greatly optimized by removing entire
blocks of bytecodes. To the best of our knowledge,
existing optimization tools cannot infer such
needless write accesses, and hence cannot detect
these inefficiencies that affect both the deployment
and the execution costs of Ethereum smart
contracts.
@inproceedings{AlbertCGRR23,
author = {Elvira Albert and
Jes\'us Correas and Pablo Gordillo and
Guillermo Rom\'an-D\'iez and
Albert Rubio},
title = {Inferring Needless Write Memory Accesses on Ethereum Smart Contracts},
booktitle = {29th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2023. Proceedings},
year = {2023},
rank:grinscie:class = {1},
volume = {13993},
pages = {448--466},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-031-30823-9_23}
}
|
[2]
|
Elvira Albert, Jesús Correas, Pablo Gordillo, Guillermo Román-Díez, and
Albert Rubio.
GASOL: Gas Analysis and Optimization for Ethereum Smart
Contracts.
In Armin Biere and David Parker, editors, 26th International
Conference on Tools and Algorithms for the Construction and Analysis of
Systems, TACAS 2020. Proceedings, volume 12079 of Lecture Notes in
Computer Science, pages 118-125. Springer, 2020.
[ bibtex |
abstract |
DOI |
PDF ]
We present the main concepts, components, and usage of
GASOL, a Gas AnalysiS and Optimization tooL for
Ethereum smart contracts. GASOL offers a wide
variety of cost models that allow inferring
the gas consumption associated with selected types of
EVM instructions and/or inferring the number of
times that such types of bytecode instructions are
executed. Among others, we have cost models to
measure only storage opcodes, to measure a selected
family of gas-consumption opcodes following
Ethereum's classification, to estimate the cost of a
selected program line, etc. After choosing the
desired cost model and the function of interest,
GASOL returns to the user an upper bound on the
cost of this function. As the gas consumption is
often dominated by the instructions that access the
storage, GASOL uses the gas analysis to detect
under-optimized storage patterns, and includes an
(optional) automatic optimization of the selected
function. Our tool can be used within an Eclipse
plugin for Solidity which displays the gas and
instructions bounds and, when applicable, the
gas-optimized Solidity function.
@inproceedings{AlbertCGRR20,
author = {Elvira Albert and
Jes\'us Correas and
Pablo Gordillo and
Guillermo Rom\'an-D\'iez and
Albert Rubio},
title = {{GASOL}: {G}as {A}nalysis and {O}ptimization for {E}thereum {S}mart {C}ontracts},
booktitle = {26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2020. Proceedings},
year = {2020},
rank:grinscie:class = {1},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
editor = {Armin Biere and
David Parker},
doi = {10.1007/978-3-030-45237-7_7},
pages = {118-125},
volume = {12079},
isbn = {978-3-030-45237-7}
}
|
[3]
|
Elvira Albert, Jesús Correas, Pablo Gordillo, Guillermo Román-Díez, and
Albert Rubio.
Smart, and also reliable and gas-efficient, contracts.
In 13th IEEE International Conference on Software Testing,
Validation and Verification, ICST 2020. Proceedings, IEEE, 2020.
[ bibtex |
abstract |
DOI |
PDF ]
@inproceedings{AlbertCGRR20-icst,
author = {Elvira Albert and
Jes\'us Correas and
Pablo Gordillo and
Guillermo Rom\'an-D\'iez and
Albert Rubio},
title = {{Smart, and also reliable and gas-efficient, contracts}},
booktitle = {13th {IEEE} International Conference on Software Testing, Validation
and Verification, {ICST} 2020. Proceedings},
year = {2020},
publisher = {IEEE},
rank:grinscie:class = {2},
doi = {10.1109/ICST46399.2020.00010}
}
|
[4]
|
Elvira Albert, Jesús Correas, Pablo Gordillo, Guillermo Román-Díez, and
Albert Rubio.
SAFEVM: A Safety Verifier for Ethereum Smart Contracts.
In Dongmei Zhang and Anders Møller, editors, ACM SIGSOFT
International Symposium on Software Testing and Analysis, ISSTA 2019.
Proceedings, ACM, pages 386-389, 2019.
[ bibtex |
abstract |
DOI |
PDF ]
Ethereum smart contracts are public, immutable and
distributed and, as such, they are prone to
vulnerabilities stemming from programming mistakes
of developers. This paper presents SAFEVM, a
verification tool for Ethereum smart contracts that
makes use of state-of-the-art verification engines
for C programs. SAFEVM takes as input an Ethereum
smart contract (provided either in Solidity source
code, or in compiled EVM bytecode), optionally with
assert and require verification annotations, and
produces as output a report with the
verification results. Besides general safety
annotations, SAFEVM handles the verification of
array accesses: it automatically generates SV-COMP
verification assertions such that C verification
engines can prove safety of array accesses. Our
experimental evaluation has been undertaken on all
contracts pulled from etherscan.io (more than
24,000) by using as back-end verifiers CPAchecker,
SeaHorn and VeryMax.
@inproceedings{AlbertCGRR19,
author = {Elvira Albert and
Jes\'us Correas and
Pablo Gordillo and
Guillermo Rom\'an-D\'iez and
Albert Rubio},
title = {{SAFEVM}: {A} {S}afety {V}erifier for {E}thereum {S}mart {C}ontracts},
booktitle = {ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2019. Proceedings},
year = {2019},
publisher = {ACM},
rank:grinscie:class = {1},
pages = {386--389},
editor = {Dongmei Zhang and
Anders M{\o}ller},
doi = {10.1145/3293882.3338999}
}
|
[5]
|
Elvira Albert, Puri Arenas, Jesús Correas, Samir Genaim, Miguel Gómez-Zamalloa, Enrique Martin-Martin, Germán Puebla, and
Guillermo Román-Díez.
Resource Analysis: From Sequential to Concurrent and
Distributed Programs.
In Nikolaj Bjørner and Frank D. de Boer, editors, FM 2015:
Formal Methods - 20th International Symposium, Oslo, Norway, June 24-26,
2015, Proceedings, volume 9109 of Lecture Notes in Computer Science,
pages 3-17. Springer, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
@inproceedings{AlbertACGGMPR15,
author = {Elvira Albert and
Puri Arenas and
Jes{\'{u}}s Correas and
Samir Genaim and
Miguel G{\'{o}}mez{-}Zamalloa and
Enrique Martin{-}Martin and
Germ{\'{a}}n Puebla and
Guillermo Rom{\'{a}}n{-}D{\'{\i}}ez},
title = {{R}esource {A}nalysis: {F}rom {S}equential to {C}oncurrent and {D}istributed {P}rograms},
booktitle = {{FM} 2015: Formal Methods - 20th International Symposium, Oslo, Norway,
June 24-26, 2015, Proceedings},
pages = {3--17},
year = {2015},
doi = {10.1007/978-3-319-19249-9_1},
url = {https://doi.org/10.1007/978-3-319-19249-9_1},
editor = {Nikolaj Bj{\o}rner and Frank D. de Boer},
series = {Lecture Notes in Computer Science},
isbn = {978-3-319-19248-2},
volume = {9109},
publisher = {Springer},
rank:grinscie:class = {2}
}
|
[6]
|
Elvira Albert, Jesús Correas, Einar Broch Johnsen, and Guillermo Román-Díez.
Parallel Cost Analysis of Distributed Systems.
In Static Analysis - 22nd International Symposium, SAS 2015.
Proceedings, volume 9291 of Lecture Notes in Computer Science, pages
275-292. Springer, 2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We present a novel static analysis to infer the parallel cost of
distributed systems. Parallel cost differs from the standard notion
of serial cost by exploiting the truly concurrent execution model of
distributed processing to capture the cost of synchronized tasks
executing in parallel. It is challenging to infer parallel cost
because one needs to soundly infer the parallelism between tasks while
accounting for waiting and idle processor times at the different
locations. Our analysis works in three phases: (1) It first performs
a block-level analysis to estimate the serial costs of the blocks
between synchronization points in the program; (2) Next, it constructs
a distributed flow graph (DFG) to capture the parallelism, the waiting
and idle times at the locations of the distributed system; Finally,
(3) the parallel cost can be obtained as the path of maximal cost in
the DFG. A prototype implementation demonstrates the accuracy and
feasibility of the proposed analysis.
@inproceedings{AlbertCJR15,
author = {Elvira Albert and
Jes\'us Correas and
Einar Broch Johnsen and
Guillermo Rom\'an-D\'iez},
title = {{P}arallel {C}ost {A}nalysis of {D}istributed {S}ystems},
booktitle = {Static Analysis - 22nd International Symposium, {SAS} 2015. Proceedings},
publisher = {Springer},
year = {2015},
rank:grinscie:class = {2},
series = {Lecture Notes in Computer Science},
doi = {10.1007/978-3-662-48288-9_16},
url = {https://doi.org/10.1007/978-3-662-48288-9_16},
pages = {275-292},
volume = {9291},
isbn = {978-3-662-48287-2}
}
|
[7]
|
Elvira Albert, Jesús Correas, and Guillermo Román-Díez.
Non-Cumulative Resource Analysis.
In Proceedings of 21st International Conference on Tools and
Algorithms for the Construction and Analysis of Systems (TACAS 2015), volume
9035 of Lecture Notes in Computer Science, pages 85-100. Springer,
2015.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Existing cost analysis frameworks have been defined for cumulative resources
which keep on increasing along the computation. Traditional cumulative
resources are execution time, number of executed steps, amount of memory
allocated, and energy consumption. Non-cumulative resources are acquired and
(possibly) released along the execution. Examples of non-cumulative costs are
memory usage in the presence of garbage collection, the number of connections
established and later closed, or resources requested from a virtual host
and released after use.
We present, to the best of our knowledge, the first generic static analysis
framework to infer an upper bound on the peak cost for
non-cumulative types of resources. Our analysis comprises several components:
(1) a pre-analysis to infer when resources are being used simultaneously, (2)
a program-point resource analysis which infers an upper bound on the
cost at the points of interest (namely the points where resources are
acquired), and (3) the elimination, from the upper bounds obtained in (2), of
those accumulated resources that are not used simultaneously. We report on a
prototype implementation of our analysis that can be used on a simple
imperative language.
@inproceedings{AlbertCR15,
author = {Elvira Albert and Jes\'us Correas and Guillermo Rom\'{a}n-D\'{i}ez},
title = {{N}on-{C}umulative {R}esource {A}nalysis},
booktitle = {Proceedings of 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2015)},
series = {Lecture Notes in Computer Science},
publisher = {Springer},
year = {2015},
doi = {10.1007/978-3-662-46681-0_6},
url = {https://doi.org/10.1007/978-3-662-46681-0_6},
pages = {85-100},
volume = {9035},
isbn = {978-3-662-46680-3},
rank:grinscie:class = {2}
}
|
[8]
|
Elvira Albert, Jesús Correas, Enrique Martín-Martín, and Guillermo Román-Díez.
Static Inference of Transmission Data Sizes in
Distributed Systems.
In 6th International Symposium On Leveraging Applications of
Formal Methods, Verification and Validation (ISOLA'14), volume 8803 of
Lecture Notes in Computer Science, pages 104-119. Springer, 2014.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We present a static analysis to infer the amount of data that a
distributed system may transmit. The different locations of a
distributed system communicate and coordinate their actions by
posting tasks among them. A task is posted by building a message
with the task name and the data on which the task has to be
executed. When the task completes, the result can be retrieved by
means of another message from which the result of the computation
can be obtained. Thus, the transmission data size of a distributed
system mainly depends on the amount of messages posted among the
locations of the system, and the sizes of the data transferred in
the messages. Our static analysis has two main parts: (1) we
over-approximate the sizes of the data at the program points where
tasks are spawned and where the results are received, and (2) we
over-approximate the total number of messages. Knowledge of the
transmission data sizes is essential, among other things, to predict
the bandwidth required to achieve a certain response time, or
conversely, to estimate the response time for a given bandwidth. A
prototype implementation in the SACO system demonstrates the
accuracy and feasibility of the proposed analysis.
@inproceedings{AlbertCMR14,
author = {Elvira Albert and
Jes\'us Correas and
Enrique Mart\'in-Mart\'in and
Guillermo Rom\'an-D\'iez},
title = {{S}tatic {I}nference of {T}ransmission {D}ata {S}izes in {D}istributed {S}ystems},
booktitle = {6th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation (ISOLA'14)},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {8803},
pages = {104--119},
year = {2014},
url = {http://dx.doi.org/10.1007/978-3-662-45231-8_8},
doi = {10.1007/978-3-662-45231-8_8}
}
|
[9]
|
Elvira Albert, Jesús Correas, and Guillermo Román-Díez.
Peak Cost Analysis of Distributed Systems.
In Markus Müller-Olm and Helmut Seidl, editors, Static
Analysis - 21st International Symposium, SAS 2014, Munich, Germany,
September 11-13, 2014. Proceedings, volume 8723 of Lecture Notes in
Computer Science, pages 18-33. Springer, 2014.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We present a novel static analysis to infer the peak cost of
distributed systems. The different locations of a distributed
system communicate and coordinate their actions by posting tasks
among them. Thus, the amount of work that each location has to
perform can greatly vary along the execution depending on: (1) the
amount of tasks posted to its queue, (2) their respective costs, and
(3) the fact that they may be posted in parallel and thus be pending
to execute simultaneously. The peak cost of a distributed
location refers to the maximum cost that it needs to carry out
along its execution.
Inferring the peak cost is challenging because it increases and
decreases along the execution, unlike the standard notion of
total cost which is cumulative. Our key contribution is the
novel notion of quantified queue configuration which captures
the worst-case cost of the tasks that may be simultaneously
pending to execute at each location along the execution.
Our analysis framework is developed for a generic notion of cost
that can be instantiated to infer the peak cost in terms of number
of steps, the amount of memory allocated, the number of tasks, etc.
A prototype implementation demonstrates the accuracy and
feasibility of the proposed peak cost analysis.
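The distinction between peak cost and cumulative total cost can be illustrated with a toy simulation of one location's task queue; the event encoding and cost values are hypothetical, and the paper infers such bounds statically rather than by observing an execution.

```python
# Toy model of peak cost: tasks are posted to and completed at a location,
# and the peak is the maximum summed cost of tasks simultaneously pending,
# which rises and falls along the execution (unlike the cumulative total).

def peak_cost(events):
    """events: list of ('post', cost) or ('complete', cost) pairs."""
    pending = 0
    peak = 0
    for kind, cost in events:
        if kind == 'post':
            pending += cost
            peak = max(peak, pending)
        else:  # 'complete': the task leaves the queue
            pending -= cost
    return peak

# Tasks of cost 4 and 3 pending together, then a task of cost 6 alone:
# the peak (7) exceeds any single task yet is below the total cost (13).
p = peak_cost([('post', 4), ('post', 3), ('complete', 4),
               ('complete', 3), ('post', 6), ('complete', 6)])
```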
@inproceedings{AlbertCR14,
author = {Elvira Albert and
Jes\'us Correas and
Guillermo Rom\'an-D\'iez},
title = {{P}eak {C}ost {A}nalysis of {D}istributed {S}ystems},
editor = {Markus M{\"{u}}ller{-}Olm and
Helmut Seidl},
booktitle = {Static Analysis - 21st International Symposium, {SAS} 2014, Munich,
Germany, September 11-13, 2014. Proceedings},
publisher = {Springer},
year = {2014},
rank:grinscie:class = {2},
volume = {8723},
series = {Lecture Notes in Computer Science},
url = {http://dx.doi.org/10.1007/978-3-319-10936-7_2},
doi = {10.1007/978-3-319-10936-7_2},
isbn = {978-3-319-10935-0},
pages = {18--33}
}
|
[10]
|
Elvira Albert, Puri Arenas, Antonio Flores-Montoya, Samir Genaim, Miguel Gómez-Zamalloa, Enrique Martin-Martin, Germán Puebla, and Guillermo Román-Díez.
SACO: Static Analyzer for Concurrent Objects.
In Erika Ábrahám and Klaus Havelund, editors, Tools and
Algorithms for the Construction and Analysis of Systems - 20th International
Conference, TACAS 2014, volume 8413 of Lecture Notes in Computer
Science, pages 562-567. Springer, 2014.
[ bibtex |
abstract |
DOI |
PDF |
http ]
We present the main concepts, usage and implementation of SACO,
a static analyzer for concurrent objects. Interestingly, SACO is
able to infer both liveness (namely termination and resource
boundedness) and safety properties (namely deadlock freeness)
of programs based on concurrent objects. The system integrates
auxiliary analyses such as points-to and
may-happen-in-parallel, which are essential for increasing the
accuracy of the aforementioned more complex properties. SACO
provides accurate information about the dependencies which may
introduce deadlocks, loops whose termination is not guaranteed, and
upper bounds on the resource consumption of methods.
@inproceedings{AlbertAFGGMPR14,
author = {Elvira Albert and
Puri Arenas and
Antonio Flores-Montoya and
Samir Genaim and
Miguel G\'omez-Zamalloa and
Enrique Martin-Martin and
Germ\'an Puebla and
Guillermo Rom\'an-D\'iez},
title = {{SACO}: {S}tatic {A}nalyzer for {C}oncurrent {O}bjects},
booktitle = {Tools and Algorithms for the Construction and Analysis of
Systems - 20th International Conference, TACAS 2014},
year = {2014},
editor = {Erika {\'A}brah{\'a}m and
Klaus Havelund},
pages = {562--567},
volume = {8413},
series = {Lecture Notes in Computer Science},
isbn = {978-3-642-54861-1},
publisher = {Springer},
doi = {10.1007/978-3-642-54862-8_46},
url = {https://doi.org/10.1007/978-3-642-54862-8_46},
rank:grinscie:class = {3}
}
|
[11]
|
Elvira Albert, Jesús Correas, Germán Puebla, and Guillermo Román-Díez.
Quantified Abstractions of Distributed Systems.
In 10th International Conference on integrated Formal Methods
(iFM'13), volume 7940 of Lecture Notes in Computer Science, pages
285-300. Springer, June 2013.
[ bibtex |
abstract |
DOI |
PDF |
http ]
When reasoning about distributed systems, it is
essential to have information about the different
kinds of nodes which compose the system, how many
instances of each kind exist, and how nodes
communicate with other nodes. In this paper we
present a static-analysis-based approach which is
able to provide information about the questions
above. In order to cope with an unbounded number of
nodes and an unbounded number of calls among them,
the analysis performs an abstraction of the
system producing a graph whose nodes may represent
(infinitely) many concrete nodes and arcs represent
any number of calls among nodes.
The crux of our approach is that the abstraction is
enriched with upper bounds inferred by a
resource analysis which limit the number of
concrete instances which the nodes and arcs
represent. The combined information provided by our
approach has interesting applications such as
debugging, optimizing and dimensioning distributed
systems.
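The abstraction described above can be sketched on a concrete system: nodes are collapsed by kind, and each abstract node or arc is annotated with a bound on how many concrete instances or calls it stands for. The input encoding below is an illustrative assumption, not the paper's formalism.

```python
# Minimal sketch of a "quantified abstraction": collapse concrete nodes by
# kind and count, per abstract node and arc, how many concrete instances
# and calls they represent (the paper infers these counts as upper bounds
# via resource analysis rather than from a concrete system).
from collections import Counter

def abstract_system(nodes, calls):
    """nodes: list of (node_id, kind); calls: list of (src_id, dst_id)."""
    kind_of = dict(nodes)
    node_bounds = Counter(kind for _, kind in nodes)
    arc_bounds = Counter((kind_of[s], kind_of[d]) for s, d in calls)
    return dict(node_bounds), dict(arc_bounds)

# Three workers and one server; each worker calls the server twice.
nodes = [(1, 'server'), (2, 'worker'), (3, 'worker'), (4, 'worker')]
calls = [(w, 1) for w in (2, 3, 4) for _ in range(2)]
node_bounds, arc_bounds = abstract_system(nodes, calls)
# node_bounds: {'server': 1, 'worker': 3}
# arc_bounds:  {('worker', 'server'): 6}
```

The same abstract graph would describe a system with unboundedly many workers; the inferred bounds are what keep the abstraction quantitative.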
@inproceedings{AlbertCPR13,
author = {Elvira Albert and Jes\'us Correas and Germ\'an Puebla and Guillermo Rom\'an-D\'iez},
title = {{Q}uantified {A}bstractions of {D}istributed {S}ystems},
booktitle = {10th International Conference on integrated Formal Methods (iFM'13)},
year = {2013},
month = jun,
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {7940},
pages = {285--300},
doi = {10.1007/978-3-642-38613-8_20},
url = {http://dx.doi.org/10.1007/978-3-642-38613-8_20},
rank:grinscie:class = {3}
}
|
[12]
|
Elvira Albert, Richard Bubel, Samir Genaim, Reiner Hähnle, and Guillermo Román-Díez.
Verified Resource Guarantees for Heap Manipulating
Programs.
In Proceedings of the 15th International Conference on
Fundamental Approaches to Software Engineering, FASE 2012, Tallinn, Estonia,
March, 2012, volume 7212 of Lecture Notes in Computer Science, pages
130-145. Springer, March 2012.
[ bibtex |
abstract |
DOI |
PDF |
http ]
Program properties that are automatically inferred by static
analysis tools are generally not considered to be completely
trustworthy, unless the tool implementation or the results are
formally verified. Here we focus on the formal verification of
resource guarantees inferred by automatic cost
analysis. Resource guarantees ensure that programs run within the
indicated amount of resources which may refer to memory consumption,
to number of instructions executed, etc. In previous work we
studied formal verification of inferred resource guarantees that
depend only on integer data. In realistic programs, however,
resource consumption is often bounded by the size of
heap-allocated data structures. Bounding their size
requires performing a number of structural heap
analyses. The contributions of this paper are (i) to identify what
exactly needs to be verified to guarantee sound analysis of heap
manipulating programs, (ii) to provide a suitable extension of the
program logic used for verification to handle structural heap
properties in the context of resource guarantees, and (iii) to
improve the underlying theorem prover so that proof obligations
can be automatically discharged.
@inproceedings{AlbertBGHR12,
author = {Elvira Albert and Richard Bubel and Samir Genaim and Reiner
H\"ahnle and Guillermo Rom\'an-D\'iez},
title = {{V}erified {R}esource {G}uarantees for {H}eap {M}anipulating {P}rograms},
booktitle = {Proceedings of the 15th International Conference on Fundamental Approaches to Software Engineering, FASE 2012, Tallinn, Estonia, March, 2012},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
year = {2012},
volume = {7212},
month = mar,
pages = {130--145},
isbn = {978-3-642-28871-5},
doi = {10.1007/978-3-642-28872-2_10},
url = {http://www.springerlink.com/content/a10153194828v8k1/},
rank:grinscie:class = {3}
}
|
[13]
|
Elvira Albert, Jesús Correas, Germán Puebla, and Guillermo Román-Díez.
Incremental Resource Usage Analysis.
In Proceedings of the 2012 ACM SIGPLAN Workshop on Partial
Evaluation and Program Manipulation, PEPM 2012, Philadelphia, Pennsylvania,
USA, January 23-24, 2012, pages 25-34. ACM Press, January 2012.
[ bibtex |
abstract |
DOI |
PDF |
http ]
The aim of incremental global analysis is, given a program, its
analysis results and a series of changes to the program, to obtain
the new analysis results as efficiently as possible and, ideally,
without having to (re-)analyze fragments of code which are not
affected by the changes. Incremental analysis can significantly
reduce both the time and the memory requirements of analysis. This
paper presents an incremental resource usage (a.k.a. cost) analysis
for a sequential Java-like language. Our main contributions are (1)
a multi-domain incremental fixed-point algorithm which can be used
by all global (pre-)analyses required to infer the cost (including
class, sharing, cyclicity, constancy, and size analyses) and which
takes care of propagating dependencies among such domains, and (2) a
novel form of cost summaries which allows us to incrementally
reconstruct only those components of cost functions affected by the
change. Experimental results in the COSTA system show that the
proposed incremental analysis can perform very efficiently in
practice.
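The dependency-propagation step behind the incremental fixpoint can be sketched as a reachability computation: after a change, only methods whose results (transitively) depend on the changed ones need re-analysis. The dependency map and method names below are illustrative placeholders, not COSTA's internal representation.

```python
# Sketch of the incremental idea: given which methods changed and a map
# from each method to the methods whose analysis results depend on it,
# compute the transitive set that must be re-analyzed; everything else
# keeps its cached results.

def affected(changed, dependents):
    """Transitive closure of methods whose analysis results may change."""
    work, seen = list(changed), set(changed)
    while work:
        m = work.pop()
        for d in dependents.get(m, ()):
            if d not in seen:
                seen.add(d)
                work.append(d)
    return seen

# main calls helper, helper calls size; editing 'size' forces re-analysis
# of 'helper' and 'main' but leaves any unrelated method untouched.
deps = {'size': ['helper'], 'helper': ['main']}
to_reanalyze = affected({'size'}, deps)  # {'size', 'helper', 'main'}
```

In the paper this propagation runs across several abstract domains at once (class, sharing, cyclicity, constancy, size), with dependencies flowing between domains as well as between methods.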
@inproceedings{AlbertCPR12,
author = {Elvira Albert and Jes\'us Correas and Germ\'an Puebla and Guillermo Rom\'an-D\'iez},
title = {{I}ncremental {R}esource {U}sage {A}nalysis},
booktitle = {Proceedings of the 2012 ACM SIGPLAN Workshop on Partial
Evaluation and Program Manipulation, PEPM 2012, Philadelphia, Pennsylvania, USA,
January 23-24, 2012},
publisher = {ACM Press},
year = {2012},
month = jan,
pages = {25--34},
isbn = {978-1-4503-1118-2},
doi = {10.1145/2103746.2103754},
url = {http://dl.acm.org/citation.cfm?doid=2103746.2103754},
rank:grinscie:class = {3}
}
|
[14]
|
Elvira Albert, Jesús Correas, Germán Puebla, and Guillermo Román-Díez.
Towards Incremental Resource Usage Analysis.
In The Ninth Asian Symposium on Programming Languages and
Systems (APLAS'11), December 2011.
Poster Presentation.
[ bibtex |
abstract |
PDF ]
The aim of incremental global analysis is, given a program, its
analysis results and a series of changes to the program, to obtain
the new analysis results as efficiently as possible and, ideally,
without having to (re-)analyze fragments of code which are not
affected by the changes. Incremental analysis can significantly
reduce both the time and the memory requirements of analysis. This
paper presents an incremental resource usage (a.k.a. cost) analysis
for a sequential Java-like language. Our main contributions are (1)
a multi-domain incremental fixed-point algorithm which can be used
by all global (pre-)analyses required to infer the cost (including
class, sharing, cyclicity, constancy, and size analyses) and which
takes care of propagating dependencies among such domains, and (2) a
novel form of cost summaries which allows us to incrementally
reconstruct only those components of cost functions affected by the
change. Experimental results in the COSTA system show that the
proposed incremental analysis can perform very efficiently in
practice.
@inproceedings{AlbertCPR11,
author = {Elvira Albert and Jes\'us Correas and Germ\'an Puebla and Guillermo Rom\'an-D\'iez},
title = {{T}owards {I}ncremental {R}esource {U}sage {A}nalysis},
booktitle = {The Ninth Asian Symposium on Programming Languages and Systems (APLAS'11)},
year = {2011},
month = dec,
note = {Poster Presentation}
}
|
[15]
|
Elvira Albert, Richard Bubel, Samir Genaim, Reiner Hähnle, Germán Puebla,
and Guillermo Román-Díez.
Verified Resource Guarantees using COSTA and KeY.
In Siau-Cheng Khoo and Jeremy G. Siek, editors, Proceedings of
the 2011 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation,
PEPM 2011, Austin, TX, USA, January 24-25, 2011, SIGPLAN, pages 73-76. ACM,
2011.
[ bibtex |
abstract |
DOI |
PDF ]
Resource guarantees allow being certain that programs
will run within the indicated amount of resources,
which may refer to memory consumption, number of
instructions executed, etc. This information can be
very useful, especially in real-time and
safety-critical applications. Nowadays, a number of
automatic tools exist, often based on type systems
or static analysis, which produce such resource
guarantees. In spite of being based on
theoretically sound techniques, the implemented
tools may contain bugs which render the resource
guarantees thus obtained not completely trustworthy.
Performing full-blown verification of such tools is
a daunting task, since they are large and complex.
In this work we investigate an alternative approach
whereby, instead of the tools, we formally verify
the results of the tools. We have implemented this
idea using COSTA, a state-of-the-art static analysis
system, for producing resource guarantees and KeY, a
state-of-the-art verification tool, for formally
verifying the correctness of such resource
guarantees. Our preliminary results show that the
proposed tool cooperation can be used for
automatically producing verified resource
guarantees.
@inproceedings{AlbertBGHPR11,
author = {Elvira Albert and Richard Bubel and Samir Genaim and Reiner H\"ahnle and Germ\'an Puebla and Guillermo Rom\'an-D\'iez},
title = {{V}erified {R}esource {G}uarantees using {COSTA} and {KeY}},
booktitle = {Proceedings of the 2011 ACM SIGPLAN Workshop on Partial
Evaluation and Program Manipulation, PEPM 2011, Austin,
TX, USA, January 24-25, 2011},
editor = {Siau-Cheng Khoo and Jeremy G. Siek},
publisher = {ACM},
series = {SIGPLAN},
year = {2011},
pages = {73--76},
doi = {10.1145/1929501.1929513},
ee = {http://doi.acm.org/10.1145/1929501.1929513},
rank:grinscie:class = {3}
}
|
[16]
|
Elvira Albert, Puri Arenas, Samir Genaim, Miguel Gómez-Zamalloa, Germán Puebla, Diana Ramirez, Guillermo Román-Díez, and Damiano Zanardini.
Termination and Cost Analysis with COSTA and its User
Interfaces.
In Spanish Conference on Programming and Computer Languages
(PROLE'09), volume 258 of Electronic Notes in Theoretical Computer
Science, pages 109-121. Elsevier, September 2009.
[ bibtex |
abstract |
DOI |
PDF ]
COSTA is a static analyzer for Java
bytecode which is able to infer cost and termination
information for large classes of programs. The
analyzer takes as input a program and a resource of
interest, in the form of a cost model, and aims at
obtaining an upper bound on the execution cost with
respect to the resource and at proving program
termination. The COSTA system has reached a
considerable degree of maturity in that (1) it
includes state-of-the-art techniques for statically
estimating the resource consumption and the
termination behavior of programs, plus a number of
specialized techniques which are required for
achieving accurate results in the context of
object-oriented programs, such as handling numeric
fields in value analysis; (2) it provides several
non-trivial notions of cost (resource consumption)
including, in addition to the number of execution
steps, the amount of memory allocated in the heap or
the number of calls to some user-specified method;
(3) it provides several user interfaces: a classical
command line, a Web interface which allows
experimenting remotely with the system without the
need to install it locally, and a recently
developed Eclipse plugin which facilitates usage of
the analyzer, even during the development phase; (4)
it can deal with both the Standard and Micro
editions of Java. In the tool demonstration, we will
show that COSTA is able to produce meaningful
results for non-trivial programs, possibly using
Java libraries. Such results can then be used in
many applications, including program development,
resource usage certification, program optimization,
etc.
@inproceedings{AlbertAGGPRRZ09,
author = {Elvira Albert and Puri Arenas and Samir Genaim and Miguel G{\'o}mez-Zamalloa and Germ{\'a}n Puebla and Diana Ramirez and Guillermo Rom\'an-D\'iez and Damiano Zanardini},
title = {{T}ermination and {C}ost {A}nalysis with {COSTA} and its {U}ser {I}nterfaces},
booktitle = {Spanish Conference on Programming and Computer Languages (PROLE'09)},
series = {Electronic Notes in Theoretical Computer Science},
publisher = {Elsevier},
volume = {258},
year = {2009},
pages = {109--121},
doi = {10.1016/j.entcs.2009.12.008},
month = sep
}
|