
Chair for Technologies and Management of Digital Transformation

Univ.-Prof. Dr.-Ing. Tobias Meisen

Interpretable Learning Models

Our research group Interpretable Learning Models (ILM) focuses on data-driven approaches and corresponding analysis methods for time series and image data in production environments. Our goal is to facilitate the application of state-of-the-art learning models in complex and challenging scenarios such as condition monitoring, predictive quality, or predictive maintenance. Our main research topics cover the transparency and interpretability of trained learning models, with a strong focus on artificial neural networks and deep learning models, in order to soften their inherent black-box character and enable their robust and reliable application in these environments. To this end, we rely on a broad variety of classical analytical methods from the field of signal processing as well as modern learning mechanisms from the current state of the art in deep learning research. Additionally, we take inspiration from the research field of neuroscience, aiming to promote a new perspective on artificial learning models as objects of interest in large-scale empirical studies.

We strongly believe that interactive and visual exploration of learning models and their corresponding data is the key to better transparency and interpretability. We envision a future in which artificial learning models are just as tangible, accessible, and easy to investigate as common everyday objects in the palm of our hands.
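
To make the notion of interpretability more concrete, the following minimal sketch (Python/PyTorch) computes a gradient-based saliency map for a small 1D-CNN time series classifier, i.e. an estimate of how strongly each time step of a sensor signal influences a class score. The model, its shapes, and the dummy signal are assumptions chosen for this illustration and do not represent a specific model or method used by the group.

# Illustrative sketch: gradient-based saliency for a toy 1D-CNN time series
# classifier. All names, shapes, and data below are assumptions for the example.
import torch
import torch.nn as nn

class TinySignalCNN(nn.Module):
    """Toy 1D-CNN classifying a single-channel sensor signal."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):  # x: (batch, 1, time)
        return self.head(self.features(x).squeeze(-1))

def saliency(model: nn.Module, signal: torch.Tensor, target: int) -> torch.Tensor:
    """Return |d score_target / d input| per time step as a crude importance map."""
    model.eval()
    x = signal.clone().requires_grad_(True)   # (1, 1, time)
    score = model(x)[0, target]
    score.backward()
    return x.grad.abs().squeeze()              # (time,)

if __name__ == "__main__":
    model = TinySignalCNN()
    dummy = torch.randn(1, 1, 512)              # stand-in for a real sensor signal
    importance = saliency(model, dummy, target=1)
    print(importance.shape, importance.argmax().item())

Smoothed or aggregated variants of such importance maps are commonly plotted on top of the raw signal, in the spirit of the interactive and visual exploration described above.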

Main Topics

  • Sensor time series data analysis
  • Sensor signal labeling
  • Sensor signal classification
  • Sensor signal anomaly detection
  • Sensor signal forecasting
  • Sensor signal reconstruction (see the sketch after this list)
  • Sensor signal similarity estimation
  • Sensor signal importance estimation
  • Sensor signal segmentation
  • Sensor signal motif extraction
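
As a small illustration of two of these topics, the sketch below combines sensor signal reconstruction and anomaly detection: an autoencoder is assumed to be trained on windows of normal operation, and windows with a large reconstruction error are flagged as anomalous. Window length, architecture, and threshold rule are assumptions made for this example, not a method prescribed by the group.

# Illustrative sketch: reconstruction-based anomaly detection on signal windows.
import torch
import torch.nn as nn

WINDOW = 128  # assumed length of a sliding window over the sensor signal

class WindowAutoencoder(nn.Module):
    """Compresses a signal window and reconstructs it; a large reconstruction
    error on a new window is treated as an anomaly indicator."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(WINDOW, 32), nn.ReLU(), nn.Linear(32, 8))
        self.decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, WINDOW))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model: nn.Module, windows: torch.Tensor) -> torch.Tensor:
    """Per-window mean squared reconstruction error."""
    model.eval()
    with torch.no_grad():
        return ((model(windows) - windows) ** 2).mean(dim=1)

if __name__ == "__main__":
    model = WindowAutoencoder()
    # Normally the autoencoder would be trained on windows of healthy operation;
    # here we only show the scoring step on random stand-in data.
    windows = torch.randn(100, WINDOW)
    scores = anomaly_scores(model, windows)
    threshold = scores.mean() + 3 * scores.std()  # simple assumed threshold rule
    print("flagged windows:", int((scores > threshold).sum()))

In practice the threshold would be calibrated on held-out data from normal operation rather than on the batch being scored.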

Application Areas

  • Manufacturing and production scenarios
  • Condition monitoring
  • Predictive quality
  • Predictive maintenance
  • Soft sensors

Contact

Richard Meyes, M.Sc.

Selected Relevant Publications

Marc Haßler; Christian Kohlschein; Tobias Meisen
Similarity Analysis of Time Interval Data Sets – A Graph Theory Approach
ITISE 2017: Time Series Analysis and Forecasting, pp. 159–171
2017

Philipp Meisen; Diane Keng; Tobias Meisen; Marco Recchioni; Sabina Jeschke
Similarity Search of Bounded TIDASETs within Large Time Interval Databases
2015 International Conference on Computational Science and Computational Intelligence (CSCI), pp. 24–29
2016

Philipp Meisen; Diane Keng; Tobias Meisen; Marco Recchioni; Sabina Jeschke
Querying Time Interval Data
ICEIS 2015
2015

Philipp Meisen; Diane Keng; Tobias Meisen; Marco Recchioni; Sabina Jeschke
TIDAQL - A Query Language Enabling on-Line Analytical Processing of Time Interval Data
Proceedings of the 17th International Conference on Enterprise Information Systems, pp. 54–66
2015

Philipp Meisen; Diane Keng; Tobias Meisen; Marco Recchioni; Sabina Jeschke
Bitmap-Based On-line Analytical Processing of Time Interval Data
2015 12th International Conference on Information Technology - New Generations, pp. 20–26
2015

Philipp Meisen; Marco Recchioni; Tobias Meisen; Daniel Schilberg; Sabina Jeschke
Modeling and Processing of Time Interval Data for Data-driven Decision Support
2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 2946–2953
2014
