Benchmarks

From BiCEP

This page describes the benchmarks that comprise the ''BiCEP'' suite.
 
== ''Pairs'' ==
The scenario for ''Pairs'' is an investment firm where a number of analysts interact with an enterprise trading system responsible for automating and optimizing the execution of orders in stock markets. Users of the system submit trading strategies, which are continuously matched against live stock market data. The task of an event processing system implementing ''Pairs'' is thus to process this tick stream and compute, for each running strategy, a set of indicators, signaling whenever they reveal an opportunity to capitalize on market inefficiencies.
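To illustrate the kind of strategy and indicator involved, the sketch below shows a minimal pairs-trading signal in Python. All names, window sizes, and thresholds here are hypothetical and are not taken from the ''Pairs'' specification; the real strategy definitions are given there.

```python
from collections import deque
from statistics import mean, stdev

class PairsSignal:
    """Hypothetical pairs-trading indicator (illustrative only). It tracks
    the price spread between two correlated symbols over a sliding window
    and signals when the current spread deviates from the window mean by
    more than `threshold` standard deviations."""

    def __init__(self, window=50, threshold=2.0):
        self.window = window
        self.threshold = threshold
        self.last = {}                         # latest price seen per symbol
        self.spreads = deque(maxlen=window)    # sliding window of spreads

    def on_tick(self, symbol, price):
        """Consume one market tick; return 'LONG'/'SHORT' on an opportunity."""
        self.last[symbol] = price
        if len(self.last) < 2:
            return None                        # need a price for both symbols
        a, b = sorted(self.last)
        self.spreads.append(self.last[a] - self.last[b])
        if len(self.spreads) < self.window:
            return None                        # window not yet full
        mu, sigma = mean(self.spreads), stdev(self.spreads)
        if sigma == 0:
            return None
        z = (self.spreads[-1] - mu) / sigma
        if z > self.threshold:
            return 'SHORT'                     # spread unusually wide
        if z < -self.threshold:
            return 'LONG'                      # spread unusually narrow
        return None
```

A CEP engine running ''Pairs'' evaluates many such continuous computations in parallel, one per submitted strategy, over the same live tick stream.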
 
''Pairs'' was designed to assess the ability of CEP systems to process increasingly large numbers of continuous queries and increasingly high event arrival rates while providing quick answers – three quality attributes equally important for an event processing engine. To that end, the benchmark exercises a wide range of features commonly found in most event processing applications, including:
* Filtering, aggregation, and correlation of events;
* Detection of event patterns and trends;
* Changing load conditions.
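The first two features in this list can be sketched in a few lines of code. The class below is a hypothetical illustration, not BiCEP code: it filters incoming events by a price predicate and maintains a count-based sliding-window aggregate (a moving average) over the events that pass.

```python
from collections import deque

class WindowAggregator:
    """Illustrative sketch (hypothetical names, not BiCEP code) of two
    features from the list above: event filtering and sliding-window
    aggregation."""

    def __init__(self, window_size=3, min_price=0.0):
        self.window = deque(maxlen=window_size)   # count-based sliding window
        self.min_price = min_price                # filter predicate parameter

    def on_event(self, symbol, price):
        """Return the moving average over the window, or None if filtered."""
        if price < self.min_price:                # filtering: drop non-matching events
            return None
        self.window.append(price)                 # the window slides automatically
        return sum(self.window) / len(self.window)  # aggregation: moving average
```

A real CEP engine expresses the same operations declaratively (e.g., a windowed continuous query), but the per-event processing model is the same: each arriving event is filtered and then updates the running aggregate.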
The benchmark should be implemented as illustrated in the figure below:
 
[[File:Benchmark Flow.png]]
 
Initially, the user specifies a set of workload parameters or, alternatively, uses the standard benchmark configuration to create a test setup (1). Then, a data generator application produces the data and auxiliary files (2), which are used afterwards by a query generator to produce the strategies that compose the benchmark workload (3). The output of the query generator is then parsed by a vendor-specific translator, which converts the workload, initially represented in a neutral format (e.g., an XML file), into the query language used by the SUT (4). After loading the query/rule set into the SUT (5), the user starts a performance run (6). During the run, the benchmark driver (FINCoS) loads the generated data file and submits the events in it to the SUT (7), which in turn returns the corresponding results to the framework (8). After test completion, a validator verifies the correctness of the answers produced by the SUT (9).
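The nine steps above form a simple pipeline. The Python sketch below mirrors that control flow with stub functions; every name here is a hypothetical stand-in, not the actual API of the BiCEP tools or of FINCoS.

```python
# Hypothetical end-to-end sketch of the nine-step benchmark flow.
# All functions are illustrative stubs, not real BiCEP/FINCoS APIs.

def create_test_setup(params):               # (1) workload parameters (or defaults)
    return {"strategies": params.get("strategies", 2)}

def generate_data(setup):                    # (2) data generator: ticks + auxiliary files
    ticks = [("AAA", 10.0 + i) for i in range(5)]
    return ticks, {"symbols": ["AAA"]}

def generate_queries(setup, aux):            # (3) query generator, neutral format
    return ["<strategy id='%d'/>" % i for i in range(setup["strategies"])]

def translate(workload):                     # (4) vendor-specific translator
    return [q.replace("strategy", "query") for q in workload]

def drive(queries, ticks):                   # (5)-(8) load queries, submit events,
    return [(q, t) for t in ticks for q in queries]  # collect one answer per pair

def validate(results, ticks, queries):       # (9) check answer correctness
    return len(results) == len(ticks) * len(queries)

setup = create_test_setup({"strategies": 2})
ticks, aux = generate_data(setup)
queries = translate(generate_queries(setup, aux))
ok = validate(drive(queries, ticks), ticks, queries)
```

In the actual benchmark, steps (2)-(4) produce files on disk, and steps (5)-(8) involve the SUT and the FINCoS driver as separate processes; the sketch only shows how the stages feed into one another.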

All the aforementioned tools are written in Java and are available for download from the [http://bicep.dei.uc.pt/index.php/Tools Tools] section. Further details about the ''Pairs'' benchmark can be found in its [http://bicep.dei.uc.pt/images/8/8f/Pairs_Benchmark_%28rev._1.0%29.pdf specification] (currently at revision 1.0).

Current revision as of 17:30, 24 April 2014
