Dennis Ritter

Hi! I'm a Data Scientist, Software Developer and Scrum Master who is passionate about working on meaningful projects.

Check out my projects

Contact me ✉

SynthNet

2021-2024 // Berliner Hochschule für Technik (BHT) - Intelligent and Interactive Systems

Image data synthesis from CAD data for efficient training of deep neural networks. Utilizing Transfer Learning and Domain Adaptation techniques for 3D Object Retrieval and image classification without labeled target domain data.

Stack: Python, PyTorch, Lightning, pandas, NumPy, wandb, Meta, Docker, Kubernetes

The goal of the SynthNet project was to develop generative neural approaches for creating the synthetic image data needed to build and optimize a visual search index. We generated all images from real-world, industry-grade 3D CAD data (SolidWorks) and used them for object identification. For each component, the SynthNet Render Pipeline automatically generates a set of views with depth data (RGB+D) under different lighting and material conditions, taking the object's metadata into account. We also used neural image augmentation to add backgrounds or distractors and performed style transfer to make our synthetic data mimic real photos. As a result, the Topex-Printer Dataset↗ was released for public use and marks the first CAD-to-real multi-domain industrial image dataset, comprising 102 machine parts.
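The per-part view generation can be illustrated with a tiny viewpoint sampler. This is my own sketch, not the actual SynthNet Render Pipeline code; the function name, radius, and sampling strategy are illustrative assumptions.

```python
import numpy as np

def sphere_viewpoints(n: int, radius: float = 2.0, seed: int = 0) -> np.ndarray:
    """Sample n camera positions on a sphere around the object origin,
    roughly as a render pipeline might do to get multiple views per part."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))                    # isotropic Gaussian directions
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # project onto the unit sphere
    return radius * v

cams = sphere_viewpoints(12)                       # 12 views at distance 2.0
```

Each sampled position would then be combined with a choice of lighting and material before rendering.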

Topex-Printer rendered examples
Topex-Printer real photo examples

We used the Topex-Printer Dataset to develop a practical approach to Transfer Learning and Unsupervised Domain Adaptation (UDA) in industrial object classification. Concretely, this means training a classifier for real photos using only labeled synthetic images and no labeled photos. For this challenge, I developed the SynthNet Transfer Learning Framework, which allows rapid, reproducible training configurations covering models, hyperparameters, dataloaders, domain transfer techniques, feature fusion techniques, gradual unfreezing, and more. As a result, we developed a two-stage Transfer Learning UDA method that achieves state-of-the-art results on the VisDA-2017 benchmark dataset (93.47% accuracy). Our findings were presented at ECML PKDD 2023 in Turin and published by Springer Nature. Check out the paper website↗.

Finally, our trained models served as feature extractors in the SynthNet Retrieval Pipeline: we built a visual search index from our synthetic data using Facebook AI Similarity Search (FAISS) and queried it with real photos to measure performance metrics (accuracy, mAP, NDCG, MRR).
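At its core, the retrieval step is nearest-neighbour search in feature space. The numpy sketch below replaces FAISS with brute-force L2 search (what `faiss.IndexFlatL2` computes, at toy scale); all data, sizes, and labels are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: an index of 100 "synthetic render" embeddings (10 classes,
# 10 renders each) and one "real photo" query close to index entry 42.
index_feats = rng.normal(size=(100, 64)).astype("float32")
index_labels = np.arange(100) // 10
query = index_feats[42] + 0.01 * rng.normal(size=64).astype("float32")

# Brute-force L2 nearest neighbours over the whole index
dists = np.linalg.norm(index_feats - query, axis=1)
top5 = np.argsort(dists)[:5]

# Top-1 accuracy for this single query
top1_correct = index_labels[top5[0]] == index_labels[42]
```

Metrics such as mAP, NDCG, and MRR are then aggregated over the ranked lists of many queries.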

Our Transfer Learning method:
1. Init pre-trained model, 2. Train classification head on source domain data, 3. Train all layers on source domain data
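These two training stages can be sketched in PyTorch by toggling `requires_grad`; the toy backbone and head below are illustrative stand-ins, not the actual framework code.

```python
import torch.nn as nn

# Stand-in for an ImageNet-pretrained backbone plus a freshly initialized head
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))
head = nn.Linear(32, 10)
model = nn.Sequential(backbone, head)

# Stage 1: freeze the backbone, train only the classification head
for p in backbone.parameters():
    p.requires_grad = False
stage1_trainable = [p for p in model.parameters() if p.requires_grad]

# Stage 2: unfreeze everything and fine-tune all layers on source domain data
for p in model.parameters():
    p.requires_grad = True
stage2_trainable = [p for p in model.parameters() if p.requires_grad]
```

In practice stage 2 typically uses a lower learning rate than stage 1 to avoid destroying the pre-trained features.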

SynthNet Transfer Learning

SynthNet Transfer Learning is a framework for training neural networks with a focus on Transfer Learning and Domain Adaptation. It allows fast configuration of models, data, and hyperparameters to rapidly create and run reproducible experiments, supporting flexible image augmentations, gradual unfreezing, feature fusion, and domain transfer techniques.

Stack: Python, PyTorch, Lightning, pandas, NumPy, wandb, Docker, Kubernetes

SynthNet Render Pipeline

The SynthNet Render Pipeline renders RGB and depth images from OBJ files or structured BLEND files generated from exported CAD data. The pipeline lets you configure various camera, light, environment map, texture/material assignment, and rendering options.

Stack: Python, pandas, NumPy, Docker, Kubernetes

SynthNet Retrieval Pipeline

The SynthNet Retrieval Pipeline uses trained neural networks as feature extractors to perform 3D object retrieval. The search index is built with Facebook AI Similarity Search (FAISS), and evaluation metrics include accuracy, mAP, NDCG, and MRR. The project also includes data visualizations for confusion matrices, retrieval image grids, and 2D t-SNE plots.

Stack: Python, PyTorch, pandas, NumPy, Meta, Docker, Kubernetes

BewARe

2019-2021 // Berliner Hochschule für Technik (BHT) - Intelligent and Interactive Systems

Exercise identification, counting, and rating from motion tracking streams in an augmented reality and virtual reality setting for hypertension patients.

Stack: Python, PyTorch, pandas, NumPy, SciPy, Plotly, pytest

The goal of the BewARe project was to develop technically supported movement and mobility training for seniors with hypertension, based on an intelligent augmented reality system. The system provided various exercises covering endurance, strength, mobility, and coordination, and took individual load limits into account.

I started working on the project as a working student in the context of my master thesis↗, 'Human Motion Analysis Using 3-D Pose Estimation Input', in which I developed algorithms to cut workout recordings into individual exercise motions and to rate the execution quality of each motion sequence. After finishing my thesis, I continued on the project as a research assistant, working mainly on the artificial intelligence and sensor data components while also contributing to other parts of the system. Together with a colleague, I developed an assistance system that processes sequential heart rate and depth camera data and analyzes it within the backend. Our main task was counting, identifying, and rating the execution quality of motion sequences in near real time. To prepare the data for further analysis, we developed the performance-oriented motion analysis library MANA, which provides tested functions to transform, manipulate, and visualize motion sequences from different sources. The prepared motion data then fed several intelligent analysis approaches.
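Counting repetitions in a periodic motion signal can be done with simple peak detection. A minimal sketch on a synthetic signal (my illustration of the general idea, not the BewARe implementation):

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical 1-D signal: e.g. vertical wrist position during 8 squats in 10 s
t = np.linspace(0, 10, 500)
signal = np.sin(2 * np.pi * 0.8 * t)   # 0.8 Hz -> 8 repetitions

# Each full repetition produces one prominent peak above the threshold
peaks, _ = find_peaks(signal, height=0.5)
rep_count = len(peaks)
```

Real pose estimation signals are noisier, so smoothing and prominence thresholds would be needed before counting.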

Components of the BewARe project
Schematic architecture visualisation of the implemented system

One technique was to create so-called motion images from pose estimation data by mapping time and joint positions to colors in the Motion Feature Extraction (MOFEX) system. This allowed us to use large pre-trained image classification CNNs for transfer learning and to extract meaningful image features that represent motions. To account for people who can only perform movements to a limited extent or who are restricted by physical characteristics, we decided to pursue a distance-based approach for exercise rating. The idea is that patients perform and record each exercise once under professional supervision to create their individual gold standard. Later, when patients perform the exercises at home without supervision, they get near-real-time feedback based on their individual body properties.
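A distance-based rating needs a metric that tolerates tempo differences between the gold standard and a home recording. Dynamic time warping is one common choice for that; the sketch below is my illustration, not necessarily the project's actual metric.

```python
import numpy as np

def dtw_distance(a, b):
    """Plain O(n*m) dynamic time warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

gold = np.sin(np.linspace(0, 2 * np.pi, 50))      # supervised reference repetition
attempt = np.sin(np.linspace(0, 2 * np.pi, 60))   # same motion, slower tempo
other = np.cos(np.linspace(0, 2 * np.pi, 50))     # a different motion

d_same = dtw_distance(gold, attempt)              # small despite tempo mismatch
d_other = dtw_distance(gold, other)               # large: different motion shape
```

The rating then maps the distance to the patient's own gold standard onto a feedback score.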

BewARe MANA

BewARe MANA is a software library for processing and analyzing motion capture data. It provides performance-oriented functions for processing and visualizing high-dimensional motion sequence data, as well as conversions between different skeleton formats.

Stack: Python, NumPy, Plotly, pytest

BewARe Human Motion Analysis

BewARe Human Motion Analysis delivers functions to analyze human sport exercise motion sequences. The main functionalities are the identification of single-iteration subsequences, the identification of the exercise performed in motion sequences, and finally the rating of the trainee's exercise execution performance. This project was part of my master thesis 'Human Motion Analysis Using 3D Pose Estimation Input'.

Stack: Python, pandas, NumPy, Plotly, SciPy

BewARe Motion Feature Extractor (MOFEX)

MOFEX is a feature extractor for motion capture data. The motivation was to measure differences between motions by using so-called 'motion images' to extract a feature vector for each motion sequence. Motion images are image representations of motion sequences in which body part positions over time/frames are mapped to pixel colors, so that CNNs pre-trained on large image datasets (ImageNet) can be used for fine-tuning and feature extraction.
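The motion-image idea can be sketched in a few lines of numpy: frames along one axis, joints along the other, and xyz positions normalized into RGB channels. Shapes and normalization here are illustrative, not MOFEX's exact encoding.

```python
import numpy as np

frames, joints = 64, 17   # e.g. a 64-frame sequence of 17 tracked joints
positions = np.random.default_rng(1).uniform(-1, 1, size=(frames, joints, 3))

# Normalize each coordinate channel (x, y, z) to [0, 255] per sequence
mins = positions.min(axis=(0, 1))
maxs = positions.max(axis=(0, 1))
motion_image = ((positions - mins) / (maxs - mins) * 255).astype(np.uint8)

# motion_image has shape (frames, joints, 3): a small RGB image that can be
# resized and fed to an ImageNet-pretrained CNN for feature extraction
```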

Stack: Python, PyTorch, NumPy, Plotly

APERTOS

2017-2019 // Fraunhofer Institute for Open Communication Systems (FOKUS)

A framework to create individual, reactive, responsive, and accessible single page open data portal frontends in minutes.

Stack: JavaScript, Vue.js, D3, Bulma, Sass, webpack, Mocha, Selenium

I started working on the APERTOS project as part of my bachelor thesis (German)↗, 'Design and Development of a Generic Single-Page Application for the Realization of Open Data Platforms', at Fraunhofer FOKUS↗, and continued working on the project until 2019. APERTOS is a core application for realizing the interfaces of open data platforms in the form of a single-page application. The framework made it possible to create individual, working open data portal prototypes within minutes. This enabled us to share styled, working prototypes with potential clients even during a first meeting, which contributed significantly to the approval of project funds. Clients include the European Data Portal, WindNODE, International Data Spaces, and the Leistungszentrum Digitale Vernetzung.

To ensure flexible application possibilities, the interfaces were designed to be customizable without modifying any source code of the core application itself. This covers the design of the user interface, adjustments to the content of selected components, and individually definable interfaces for obtaining the data to be displayed.