Source Code, Datasets
MESSI: In-Memory Data Series Indexing
Botao Peng, Panagiota Fatourou, Themis Palpanas
Data series similarity search is a core operation for
several data series analysis applications across many different domains.
However, state-of-the-art techniques fail to deliver the
time performance required for interactive exploration or analysis
of large data series collections. In this work, we propose MESSI,
the first data series index designed for in-memory operation on
modern hardware. Our index takes advantage of the modern
hardware parallelization opportunities (i.e., SIMD instructions,
multi-core and multi-socket architectures), in order to accelerate
both index construction and similarity search processing times.
Moreover, it benefits from a careful design in the setup and
coordination of the parallel workers and data structures, so that
it maximizes its performance for in-memory operations. Our
experiments with synthetic and real datasets demonstrate that
overall MESSI is up to 4x faster at index construction, and up
to 11x faster at query answering than the state-of-the-art parallel
approach. MESSI is the first to answer exact similarity search
queries on 100GB datasets in ~50msec (30-75msec across diverse
datasets), which enables real-time, interactive data exploration on
very large data series collections.
Source Code
You may freely use this code for research purposes, provided that you properly acknowledge the authors using the following references:
Botao Peng, Panagiota Fatourou, Themis Palpanas. MESSI: In-Memory Data Series Indexing. IEEE International Conference on Data Engineering (ICDE), 2020.
Botao Peng, Panagiota Fatourou, Themis Palpanas. Fast Data Series Indexing for In-Memory Data. International Journal on Very Large Data Bases (VLDBJ) 2021.
- A zip file with the source code for all the algorithms used in the paper will be made available upon acceptance of the paper (email the authors for the password).
Synthetic Datasets
We produced a set of synthetic datasets with sizes ranging from 50 million to 200 million data series, composed of random walks of length 256. Each point in a data series is generated as x_{i+1} = N(x_i, 1), where N(μ, σ) denotes a normal distribution with mean μ and standard deviation σ.
The synthetic data generator code is included in the source code we are making available.
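The random-walk generation described above can be sketched as follows (a minimal illustration in Python with NumPy; the released generator is separate, and the function name here is hypothetical):

```python
import numpy as np

def generate_random_walk(n_series, length, seed=None):
    """Generate n_series random-walk data series of the given length.

    Each point x_{i+1} is drawn from N(x_i, 1), i.e., the previous
    point plus a step from the standard normal distribution.
    """
    rng = np.random.default_rng(seed)
    steps = rng.standard_normal((n_series, length))
    # A cumulative sum turns i.i.d. N(0, 1) steps into a random walk.
    return np.cumsum(steps, axis=1)

# Example: 10 series of length 256, as in the synthetic datasets.
series = generate_random_walk(10, 256, seed=42)
print(series.shape)  # (10, 256)
```

Fixing the seed makes the generated datasets reproducible across runs.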
Real Datasets
Our method was tested on two real datasets.
- For our first real dataset, Seismic, we used the IRIS Seismic Data Access repository to gather data series representing seismic waves from various locations.
We obtained 100 million data series of size 256.
The complete dataset size was 100 GB.
- The second real dataset, SALD, includes neuroscience MRI data.
The dataset consisted of 200 million data series of size 128.
The complete dataset size was 100 GB.