== News ==

*'''NEW paper:''' Marcelo R. N. Mendes, Pedro Bizarro, and Paulo Marques. A Performance Study of Event Processing Systems ([[Media:Mendes_tpctc2009.pdf|pdf]]). In the First TPC Technology Conference on Performance Evaluation & Benchmarking (TPC TC). Collocated with VLDB09, August 24, 2009 - Lyon, France.
*'''NEW paper:''' Rafael Marmelo, Pedro Bizarro, and Paulo Marques. 9ticks – The Web as a Stream ([[Media:9ticks_dait2009.pdf|pdf]]). In the First International Workshop on Database Architectures for the Internet of Things (DAIT 2009). Extended version ([[Media:9ticks_iete2009.pdf|pdf]]) also published in the September/October 2009 issue of the IETE Technical Review journal.

== Description ==

BiCEP, a project of the Systems and Software Engineering Group at the University of Coimbra, aims to study and improve the performance of [http://www.complexevents.com Complex Event Processing] (CEP) systems and to produce benchmarks for them. CEP benchmarks should include the following metrics (some obvious, others less so):

* '''Sustainable throughput''': the steady-state number of events per unit of time that a (warmed-up) CEP engine can process while performing query processing. Even within the same system, sustainable throughput can vary widely depending on the amount of work to be done during query processing.
* '''Response time''': the time from the moment the last event of an event pattern is fed into the system until the system notifies that the pattern was detected (see the sketch after this list).
* '''Scalability''': Unlike other benchmarks, which consider scalability only as a variation of the benchmark with more data and more users, in BiCEP we would like scalability to be a first-class metric. That is, while it is useful to compare systems at different scale levels, it is also very interesting to assess how well a given system scales. For example, CEP engines can use new techniques (e.g., [http://aws.amazon.com/ec2 Amazon Elastic Compute Cloud]) that allow a system to grab hardware resources on demand. We are planning scalability experiments along three directions: i) scale-up: increase the system and increase the load; ii) speed-up: increase the system and maintain the load; and iii) load-up: maintain the system but increase the load.
* '''Adaptivity''': Typically, systems are benchmarked after they are "warmed up" and in a steady state. However, while it seems that there will be periods where CEP systems are in steady states, it also appears likely that, due to the very unpredictable nature of the real-world events being processed by CEP engines, there will be frequent disruptive moments when the system should adapt its query processing to remain efficient.
* '''Computation Sharing''': Many CEP applications process tens, hundreds, or even millions of similar queries concurrently. For example, a CEP engine in a financial trading company may be processing thousands of rules for each stock ticker: many customers may be monitoring the same stock, but each customer may have slightly different buy or sell values. If the CEP engine can devise query processing techniques such that different queries share computation, then the scalability potential of the system is greatly improved.
* '''Similarity search, precision, and recall''': As far as we know, no CEP engine uses any kind of similarity search: the patterns being searched are always precisely specified by a query language. Thus, we expect no false positives and no false negatives. However, if CEP users demand more and more complex patterns, we expect CEP engines to start using similarity search. If similarity search is used, then CEP engines may occasionally produce incorrect results in the form of false positives and false negatives. We also expect false positives and false negatives if CEP engines use past events to forecast real-world future events.
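To make the response-time metric concrete, the following is a minimal sketch of how it could be measured: timestamp the injection of the last event of a pattern, timestamp the detection notification, and take the difference. The <code>ToyEngine</code> class, its threshold rule, and its <code>subscribe</code>/<code>send</code> methods are hypothetical stand-ins for a real CEP engine; they are not part of BiCEP, FINCoS, or any product mentioned on this page.

<pre>
import java.util.function.LongConsumer;

/**
 * Minimal sketch of measuring the "response time" metric: the time from the
 * moment the last event of a pattern is injected until the engine reports the
 * detection. The tiny in-memory "engine" below is only a stand-in.
 */
public class ResponseTimeProbe {

    /** Toy stand-in engine: reports a detection when a price above a threshold arrives. */
    static class ToyEngine {
        private final double threshold;
        private LongConsumer onDetection; // receives the detection timestamp (ns)

        ToyEngine(double threshold) { this.threshold = threshold; }

        void subscribe(LongConsumer callback) { this.onDetection = callback; }

        void send(double price) {
            if (price > threshold && onDetection != null) {
                onDetection.accept(System.nanoTime()); // notify "pattern detected"
            }
        }
    }

    public static void main(String[] args) {
        ToyEngine engine = new ToyEngine(100.0);
        final long[] detectedAt = new long[1];
        engine.subscribe(ts -> detectedAt[0] = ts);

        engine.send(98.7);                    // events that do not complete the pattern
        engine.send(99.4);
        long lastEventAt = System.nanoTime(); // the clock starts at the last event of the pattern
        engine.send(105.0);                   // this event completes the pattern

        long responseTimeNanos = detectedAt[0] - lastEventAt;
        System.out.printf("Response time: %.1f µs%n", responseTimeNanos / 1_000.0);
    }
}
</pre>

In a real measurement the notification is typically asynchronous, so the probe must correlate detections with the injected events (for example via event identifiers) and collect a distribution of response times rather than a single value.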
The project started in September 2007 and is funded by an [http://cordis.europa.eu/mariecurie-actions/irg/home.html FP6 Marie Curie International Reintegration Grant].

== Publications ==

=== Conferences and Journals ===

*Marcelo R. N. Mendes, Pedro Bizarro, and Paulo Marques. A Performance Study of Event Processing Systems ([[Media:Mendes_tpctc2009.pdf|pdf]]). In the First TPC Technology Conference on Performance Evaluation & Benchmarking (TPC TC). Collocated with VLDB09, August 24, 2009 - Lyon, France.
*Rafael Marmelo, Pedro Bizarro, and Paulo Marques. 9ticks – The Web as a Stream (Short paper - [[Media:9ticks_dait2009.pdf|pdf]]). In the First International Workshop on Database Architectures for the Internet of Things (DAIT 2009). Extended version ([[Media:9ticks_iete2009.pdf|pdf]]) also published in the September/October 2009 issue of the IETE Technical Review journal.
*Pedro Bizarro. BiCEP - Benchmarking Complex Event Processing Systems ([http://drops.dagstuhl.de/opus/volltexte/2007/1143/pdf/07191.BizarroPedro.ExtAbstract.1143.pdf pdf]). In the [http://www.dagstuhl.de Dagstuhl] Event Processing Seminar, 2007. (Position paper)

=== Demos ===

*Diogo Guerra, Ute Gawlick, and Pedro Bizarro. An Integrated Data Management Approach to Manage Health Care Sensor Data ([[Media:ICU_debs2009.pdf|pdf]]). In the Proc. of the Intl. Conf. DEBS 2009.
*Marcelo R. N. Mendes, Pedro Bizarro, and Paulo Marques. A framework for performance evaluation of complex event processing systems ([[Media:FINCoS_DEBS2008.pdf|pdf]]). In the Proc. of the Intl. Conf. DEBS 2008: 313-316.

== Software ==

*FINCoS framework, version 2.2, July 2009 ([[Media:FINCoS-2.2.zip|zip]])

The FINCoS framework is a Java-based set of benchmarking tools for load generation and performance measurement of event processing systems. It provides a flexible and neutral approach for experimenting with diverse CEP systems, where load generators, datasets, queries, and adapters can be easily attached, swapped, reconfigured, and scaled.
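As a rough illustration of the kind of component such a framework provides, below is a minimal sketch of a constant-rate load generator that pushes synthetic events through a pluggable adapter. The <code>Adapter</code> interface, the event format, and the pacing logic are simplified, hypothetical examples; they are not the FINCoS API.

<pre>
import java.util.Random;
import java.util.concurrent.TimeUnit;

/**
 * Simplified illustration of a load-generation component for CEP benchmarking:
 * produce synthetic events at a configurable rate and push them through a
 * pluggable adapter. The Adapter interface and event format are hypothetical.
 */
public class SimpleLoadGenerator {

    /** Hypothetical adapter: converts and forwards an event to a CEP engine. */
    interface Adapter {
        void send(String eventType, double payload);
    }

    private final Adapter adapter;
    private final int eventsPerSecond;
    private final Random random = new Random(42);

    SimpleLoadGenerator(Adapter adapter, int eventsPerSecond) {
        this.adapter = adapter;
        this.eventsPerSecond = eventsPerSecond;
    }

    /** Sends events at an (approximately) constant rate for the given duration. */
    void run(long durationSeconds) throws InterruptedException {
        long interEventGapNanos = TimeUnit.SECONDS.toNanos(1) / eventsPerSecond;
        long end = System.nanoTime() + TimeUnit.SECONDS.toNanos(durationSeconds);
        while (System.nanoTime() < end) {
            adapter.send("StockTick", 100 + random.nextGaussian());
            TimeUnit.NANOSECONDS.sleep(interEventGapNanos); // crude pacing
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Stand-in adapter that only counts events; a real one would feed a CEP engine.
        final long[] sent = new long[1];
        SimpleLoadGenerator generator =
                new SimpleLoadGenerator((type, payload) -> sent[0]++, 1_000);
        generator.run(2);
        System.out.println("Events sent: " + sent[0]);
    }
}
</pre>

A real harness would additionally record per-event timestamps on the output side so that sustainable throughput and response time can be derived from the same run.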