Main Page

== Description ==

BiCEP, a project from the Systems and Software Engineering Group at the University of Coimbra, aims to study and improve the performance of [http://www.complexevents.com Complex Event Processing] (CEP) systems and to produce benchmarks for them. The project, partially funded by an [http://cordis.europa.eu/mariecurie-actions/irg/home.html FP6 Marie Curie International Reintegration Grant], started in September 2007. Since then, we have developed benchmarking tools, carried out a number of performance evaluations on real CEP engines, and studied several use cases of the technology. Currently, we are working towards the definition of novel application benchmarks for assessing the overall performance and scalability of event processing platforms.

The ''BiCEP benchmark suite'' will comprise a number of smaller, domain-specific benchmarks, each with its own workload, dataset, and metrics. Our goal is for each benchmark to evaluate one or more aspects of event processing systems (e.g. latency, scalability with respect to the number of queries and rules, storage efficiency, etc.).
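
As a rough illustration (not part of FINCoS or of any BiCEP benchmark), the Java sketch below shows how two of the basic measurements behind such benchmarks, per-event response time and sustained throughput, could be derived from timestamps taken when an event is sent to an engine and when the corresponding notification is observed. All class and method names here are illustrative assumptions, not the BiCEP or FINCoS API.

<pre>
// Minimal sketch (not FINCoS code): deriving response time and sustained
// throughput from send/notify timestamps collected around an event
// processing engine. All names are illustrative assumptions.
import java.util.Arrays;
import java.util.List;

public class MetricsSketch {

    /** One measured event: send and notification timestamps, in nanoseconds. */
    static class Sample {
        final long sendNanos;
        final long notifyNanos;
        Sample(long sendNanos, long notifyNanos) {
            this.sendNanos = sendNanos;
            this.notifyNanos = notifyNanos;
        }
    }

    /** Average response time in ms: time from feeding an event until the engine's notification. */
    static double avgResponseTimeMillis(List<Sample> samples) {
        long total = 0;
        for (Sample s : samples) {
            total += s.notifyNanos - s.sendNanos;
        }
        return samples.isEmpty() ? 0.0 : (total / (double) samples.size()) / 1_000_000.0;
    }

    /** Throughput in events per second over the measured (warmed-up) interval. */
    static double throughputEventsPerSec(List<Sample> samples) {
        if (samples.size() < 2) return 0.0;
        double seconds = (samples.get(samples.size() - 1).sendNanos
                - samples.get(0).sendNanos) / 1_000_000_000.0;
        return samples.size() / seconds;
    }

    public static void main(String[] args) {
        // Three hypothetical events sent 1 ms apart, each answered roughly 2 ms later.
        List<Sample> samples = Arrays.asList(
                new Sample(0L,         2_000_000L),
                new Sample(1_000_000L, 3_100_000L),
                new Sample(2_000_000L, 3_900_000L));
        System.out.printf("avg response time: %.2f ms%n", avgResponseTimeMillis(samples));
        System.out.printf("throughput: %.0f events/s%n", throughputEventsPerSec(samples));
    }
}
</pre>
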
<!--
In the papers below we present benchmarking experiments that evaluate CEP engines along several metrics. Some of these are obvious, others less so: throughput, maximum latency, latency degradation ratio, post-peak latency variation ratio, memory consumption, instruction count, clocks per instruction, cache miss rates, and more. We studied many basic and not-so-basic operations such as selections, projections, aggregations, joins, windows, and pattern detection, as well as the impact of window types and sizes, query sharing, load peaks, garbage collection options, and internal data structures and tuple representations.
To perform all this, we built FINCoS, a testing framework (available below) that lets users generate and/or load test data, start and play benchmarking experiments, visualize and capture metrics, and run multiple engines at the same time.
 
* '''Sustainable throughput''': the steady-state number of events per unit of time that a (warmed-up) CEP engine can process while performing query processing. Even within the same system, sustainable throughput can vary widely depending on the amount of work to be done during query processing.
* '''Response time''': the time from when the last event of some event pattern is fed into the system until the system notifies the event pattern detection.
-->
== What's New ==
* '''15-Oct-2013''': “''Pairs''” benchmark updated. A new version of the benchmark specification has been released ([http://bicep.dei.uc.pt/images/8/8f/Pairs_Benchmark_%28rev._1.0%29.pdf view]).
* '''17-Apr-2013''': FINCoS certified by the Standard Performance Evaluation Corporation (SPEC). The FINCoS framework has undergone a thorough review process and has been accepted into the SPEC Research Group's repository of quantitative evaluation and analysis tools ([http://research.spec.org/tools.html more info]).
* '''20-Mar-2013''': A new version of the FINCoS Framework (2.4.2) has been released ([http://fincos.googlecode.com/files/FINCoS%202.4.2.zip Download]).
* '''06-Feb-2013''': New publication: the paper “''Towards a Standard Event Processing Benchmark''” has been accepted for presentation in the Vision/Work in Progress track of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE 2013), to be held in Prague, from April 21 to 24 ([http://bicep.dei.uc.pt/index.php/Publications view the full list of publications]).
* '''21-Jan-2013''': Benchmark “''Pairs''” released. The first of the BiCEP domain-specific benchmarks is now available. The benchmark description and its specification can be found [http://bicep.dei.uc.pt/index.php/Benchmarks here].
* '''09-Jan-2013''': New publication: the paper “''Overcoming Memory Limitations in High-Throughput Event-Based Applications''” has been accepted for presentation in the Industry track of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE 2013), to be held in Prague, from April 21 to 24 ([http://bicep.dei.uc.pt/index.php/Publications view the full list of publications]).
* '''10-Dec-2012''': A new version of the FINCoS Framework (2.4.1) is now available. This release fixes problems with RMI communication when running under Java 7 and brings several other minor improvements ([http://fincos.googlecode.com/files/FINCoS%202.4.1.zip Download]).
* '''07-Oct-2012''': A new version of the FINCoS Framework (2.4) has been released. This version adds support for improved load generation through user-provided data files and brings a number of other minor enhancements and fixes.
