
The Train Benchmark: cross-technology performance evaluation of continuous model queries

Overview of attention for article published in Software & Systems Modeling, January 2017

About this Attention Score

  • Among the highest-scoring outputs from this source (#38 of 390)
  • Good Attention Score compared to outputs of the same age (69th percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • Twitter: 5 tweeters

Citations

  • Dimensions: 10 citations

Readers on

  • Mendeley: 20 readers
Title
The Train Benchmark: cross-technology performance evaluation of continuous model queries
Published in
Software & Systems Modeling, January 2017
DOI 10.1007/s10270-016-0571-8
Authors

Gábor Szárnyas, Benedek Izsó, István Ráth, Dániel Varró

Abstract

In model-driven development of safety-critical systems (like automotive, avionics or railways), well-formedness of models is repeatedly validated in order to detect design flaws as early as possible. In many industrial tools, validation rules are still often implemented by a large amount of imperative model traversal code which makes those rule implementations complicated and hard to maintain. Additionally, as models are rapidly increasing in size and complexity, efficient execution of validation rules is challenging for the currently available tools. Checking well-formedness constraints can be captured by declarative queries over graph models, while model update operations can be specified as model transformations. This paper presents a benchmark for systematically assessing the scalability of validating and revalidating well-formedness constraints over large graph models. The benchmark defines well-formedness validation scenarios in the railway domain: a metamodel, an instance model generator and a set of well-formedness constraints captured by queries, fault injection and repair operations (imitating the work of systems engineers by model transformations). The benchmark focuses on the performance of query evaluation, i.e. its execution time and memory consumption, with a particular emphasis on reevaluation. We demonstrate that the benchmark can be adopted to various technologies and query engines, including modeling tools; relational, graph and semantic databases. The Train Benchmark is available as an open-source project with continuous builds from https://github.com/FTSRG/trainbenchmark.

Twitter Demographics

The data shown below were collected from the profiles of the 5 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for the 20 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Hungary       1     5%
Unknown      19    95%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph.D. Student              9    45%
Student > Bachelor                   3    15%
Professor                            2    10%
Student > Master                     2    10%
Other                                1     5%
Other                                3    15%
Readers by discipline                 Count   As %
Computer Science                         11    55%
Unspecified                               3    15%
Engineering                               2    10%
Business, Management and Accounting       1     5%
Psychology                                1     5%
Other                                     2    10%

Attention Score in Context

This research output has an Altmetric Attention Score of 4. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 18 September 2018.
All research outputs
#3,397,029
of 12,662,942 outputs
Outputs from Software & Systems Modeling
#38
of 390 outputs
Outputs of similar age
#103,885
of 340,250 outputs
Outputs of similar age from Software & Systems Modeling
#3
of 6 outputs
Altmetric has tracked 12,662,942 research outputs across all sources so far. This one has received more attention than most of these and is in the 72nd percentile.
So far Altmetric has tracked 390 research outputs from this source. They receive a mean Attention Score of 1.7. This one has done particularly well, scoring higher than 90% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 340,250 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 69% of its contemporaries.
We're also able to compare this research output to 6 others from the same source and published within six weeks on either side of this one. This one has scored higher than 3 of them.
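The percentile figures quoted above follow directly from the rank-of-total pairs listed in the tables. A minimal sketch of that arithmetic (the exact rounding Altmetric applies is an assumption; the formula simply takes the share of tracked outputs ranked below this one):

```python
def percentile(rank: int, total: int) -> float:
    """Share of tracked outputs that this one outscored, given a
    1-based rank (1 = highest Attention Score in the group)."""
    return (total - rank) / total * 100

# Outputs of similar age: ranked #103,885 of 340,250
print(round(percentile(103_885, 340_250)))  # 69 -- "69th percentile"

# Outputs from Software & Systems Modeling: ranked #38 of 390
print(round(percentile(38, 390)))           # 90 -- "higher than 90% of its peers"
```

Both values reproduce the percentiles stated on this page; small discrepancies elsewhere (e.g. the all-outputs figure) may reflect ties or a different rounding rule on Altmetric's side.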