BACHELOR Assignment
A probabilistic approach towards handling data quality problems and imperfect data integration tasks
Type: Bachelor CS
Period: TBD
Student: (Unassigned)
If you are interested, please contact:
Description:
DuBio is an extension for PostgreSQL that we are currently developing for managing and manipulating uncertain data, or, to use a more technical term, probabilistic data. Being able to manage data together with the uncertainty about that data is an effective way to get a grip on and deal with data quality issues. One prominent application is data integration. Probabilistic data integration (PDI) is a specific kind of data integration where integration problems such as inconsistency and uncertainty are handled by means of a probabilistic data representation. The approach is based on the view that data quality problems (as they occur in an integration process) can be modeled as uncertainty, and that this uncertainty is an important result of the integration process itself. The PDI process contains two phases: (i) a quick partial integration, where certain data quality problems are not solved immediately but explicitly represented as uncertainty in the resulting integrated data, stored in a probabilistic database such as DuBio; (ii) continuous improvement by using the data (a probabilistic database can be queried directly, resulting in possible or approximate answers) and by gathering evidence (e.g., user feedback) for improving the data quality.
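To make the two-phase idea concrete, here is a minimal sketch in plain PostgreSQL. It does not use DuBio's actual types or syntax; the dict and person tables, all column names, and the data are invented for illustration. An unresolved integration conflict is stored as two alternatives guarded by a random variable, and querying yields possible answers together with their probabilities.

-- Hypothetical plain-SQL encoding of probabilistic data (not DuBio's API).
CREATE TABLE dict (          -- all random variables with their alternatives
    rv    text,              -- variable name, e.g. 'x1'
    alt   int,               -- alternative number
    prob  double precision,  -- probability of this alternative
    PRIMARY KEY (rv, alt)
);

CREATE TABLE person (
    name  text,
    city  text,
    rv    text,              -- random variable guarding this row
    alt   int                -- alternative of that variable
);

-- Phase (i): an integration conflict is kept as two alternatives of one
-- random variable instead of being resolved upfront.
INSERT INTO dict VALUES ('x1', 1, 0.7), ('x1', 2, 0.3);
INSERT INTO person VALUES
    ('J. Smith', 'Enschede', 'x1', 1),
    ('J. Smith', 'Hengelo',  'x1', 2);

-- Phase (ii): querying directly yields possible answers with probabilities.
SELECT p.city, d.prob
FROM person p
JOIN dict d ON (p.rv, p.alt) = (d.rv, d.alt)
WHERE p.name = 'J. Smith';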
We invite you to participate in this research and in making DuBio a success! You can do so by choosing one of the following subprojects:
- Dictionary as a table
One component in DuBio is a dictionary that contains all random variables with their alternatives and probabilities. It is currently implemented as a user-defined type, but it could also be implemented with a (possibly hidden) table; one possible design is sketched after this list. We would like this alternative designed and realised, and compared with the existing solution in terms of performance and scalability.
- Probabilistic SQL
The current implementation is not meant as the final interface for the user. Rather, we envisage an SQL-based language for querying and manipulating uncertain data as well as for certain data integration tasks; a sketch of what such a language could look like follows after this list. This subproject is about designing this language together with a mapping onto the current solution (for execution). It is suitable for students interested in Data Science as well as in Software Technology.
- Large-scale real-world data integration use case
WDC is a Product Data Corpus and Gold Standard for Large-Scale Product Matching (version 2.0). This corpus can be used to validate the scalability of DuBio for probabilistic data integration. The use case has to be adapted to a probabilistic setting.
- Impact of caching probabilities in BDDs
The current implementation relies on (a) a dictionary as mentioned above, and (b) a binary decision diagram (BDD) for each record. The BDD is implemented as a user-defined type. It is unclear whether it would be beneficial to cache the probabilities in the BDD type. This subproject is about experimentally investigating under which circumstances caching pays off; the trade-off is illustrated after this list.
- Evaluation of the conditioning task
One aspect of the probabilistic data integration approach is improving data quality based on gathered evidence, which is called conditioning; a minimal illustration follows after this list. This subproject is about experimentally investigating the performance and scalability of conditioning.
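For the "Dictionary as a table" subproject, the following is one conceivable table-based design, sketched in plain SQL over the same invented encoding as above. The real dictionary is a user-defined type, and its actual interface may differ; the table and function names here are hypothetical.

-- Hypothetical "dictionary as a (possibly hidden) table" design.
CREATE TABLE _dict (
    rv    text,
    alt   int,
    prob  double precision,
    PRIMARY KEY (rv, alt)    -- backed by a B-tree index for fast lookup
);

INSERT INTO _dict VALUES ('x1', 1, 0.7), ('x1', 2, 0.3);

-- Probability lookup: the operation whose performance and scalability
-- would be compared against the existing user-defined-type solution.
CREATE FUNCTION prob_of(_rv text, _alt int) RETURNS double precision AS $$
    SELECT prob FROM _dict WHERE rv = _rv AND alt = _alt;
$$ LANGUAGE sql STABLE;

SELECT prob_of('x1', 1);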
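For the "Probabilistic SQL" subproject, the sketch below shows one imaginable surface syntax and a hand-written mapping onto the plain-table encoding used above. The WITH PROBABILITY clause is invented purely for illustration; designing the real language and its mapping is exactly what the subproject is about.

-- Imagined user-level query (invented syntax, not implemented):
--   SELECT city FROM person WHERE name = 'J. Smith' WITH PROBABILITY >= 0.5;

-- One possible translation for execution over the dict/person tables above:
SELECT p.city, d.prob
FROM person p
JOIN dict d ON (p.rv, p.alt) = (d.rv, d.alt)
WHERE p.name = 'J. Smith'
  AND d.prob >= 0.5;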
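The caching question from the "Impact of caching probabilities in BDDs" subproject can be mimicked at the SQL level, even though the BDD type itself lives inside the extension. The sketch below (again over the invented dict/person tables) contrasts reading a stored probability with recomputing it per row; note that a cached value must also be invalidated whenever the dictionary changes, which is part of what the experiments would have to measure.

-- Cached variant: materialise each record's probability next to the record.
ALTER TABLE person ADD COLUMN cached_prob double precision;
UPDATE person p
SET cached_prob = d.prob
FROM dict d
WHERE (p.rv, p.alt) = (d.rv, d.alt);

-- Reading the cache is a plain column scan ...
EXPLAIN ANALYZE SELECT name, cached_prob FROM person;

-- ... while the uncached variant recomputes (here: a join; in DuBio,
-- a BDD evaluation against the dictionary) on every query.
EXPLAIN ANALYZE
SELECT p.name, d.prob
FROM person p
JOIN dict d ON (p.rv, p.alt) = (d.rv, d.alt);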
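Finally, for the "Evaluation of the conditioning task" subproject, the sketch below illustrates conditioning on the invented encoding: user feedback confirms one alternative, contradicting data is removed, and the remaining probability mass is renormalised. In DuBio the same step operates on the dictionary and the BDDs, which is where the interesting performance and scalability questions arise.

-- Evidence: alternative 1 of variable 'x1' is confirmed by user feedback.
BEGIN;
-- Remove data that contradicts the evidence.
DELETE FROM person WHERE rv = 'x1' AND alt <> 1;
DELETE FROM dict   WHERE rv = 'x1' AND alt <> 1;
-- Renormalise the surviving alternatives of 'x1' (here to probability 1).
UPDATE dict d
SET prob = prob / s.total
FROM (SELECT sum(prob) AS total FROM dict WHERE rv = 'x1') s
WHERE d.rv = 'x1';
COMMIT;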
A good place to start learning about what a probabilistic database is and what it can be useful for is the book chapter Probabilistic Data Integration, published in the Springer "Encyclopedia of Big Data Technologies". Besides being an introduction, it also contains references to scientific publications on DuBio and on probabilistic databases in general. The documentation of DuBio is also a good place to learn about the system: https://github.com/utwente-db/DuBio/wiki.