Open Information Extraction (OIE) is an important intermediate step for many text mining tasks, such as summarization, relation extraction or knowledge base construction.
Users often need to select an OIE system suited to their specific application domain. Unfortunately, there is surprisingly little work on evaluating and comparing the results of different OIE systems.
Therefore, we demonstrate RelVis, a web-based, open-source OIE benchmarking suite.

RelVis enables the user to perform a comparative analysis of OIE systems such as ClausIE, OpenIE 4.2, Stanford OpenIE, or PredPatt.
It features an intuitive dashboard that enables a user to explore annotations created by OIE systems and evaluate the impact of five common error classes.
Our comprehensive benchmark comprises four data sets with a total of 4,522 labeled sentences and 11,243 binary or n-ary OIE relations.
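To illustrate what such labeled OIE relations look like and how a system's output can be scored against gold annotations, the following is a minimal sketch. The sentence, extractions, and exact-match scoring scheme are illustrative assumptions, not RelVis's actual data format or matching strategy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Extraction:
    """One OIE extraction: a relation phrase plus its arguments.
    Binary extractions have two arguments; n-ary ones have more."""
    relation: str
    args: tuple

# Hypothetical gold and system extractions for a single sentence
gold = {
    Extraction("founded", ("Marie Curie", "the Radium Institute")),
    Extraction("won", ("Marie Curie", "the Nobel Prize", "in 1903")),
}
system = {
    Extraction("founded", ("Marie Curie", "the Radium Institute")),
    Extraction("won", ("Marie Curie", "the Nobel Prize")),  # misses the temporal argument
}

def precision_recall(system, gold):
    # Exact-match scoring: an extraction counts only if it matches a gold tuple verbatim
    tp = len(system & gold)
    return tp / len(system), tp / len(gold)

p, r = precision_recall(system, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # → precision=0.50 recall=0.50
```

Real benchmarks typically use softer matching (e.g., token overlap) than the exact-match rule sketched here, since OIE systems rarely reproduce gold argument boundaries verbatim.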

To the best of our knowledge, RelVis is the first attempt to integrate four different OIE systems and four different data sets into a single comprehensive benchmark for OIE.
It provides dashboards for in-depth qualitative evaluation, classifies errors into five common, extensible classes, and supports user-defined annotations and data sets.
RelVis enables the community to explore existing OIE systems and to add home-grown ones; it is available as open source at github.com/SchmaR/RelVis.