Master's Theses in Computer Science
Formal Methods for AI - in collaboration with Collins Aerospace
Focus Area: AI Trustworthiness and Certification in Aviation Applications
Description: The use of AI components in safety-critical aviation systems (e.g., avionics) brings various performance benefits and enables new functionalities that cannot be implemented in traditional software. Such systems often require certification from global aviation authorities and, therefore, must exhibit a high level of trust and provide guarantees on the absence of unintended behavior. This is particularly challenging for Machine Learning (ML) based systems (including Deep Learning and Reinforcement Learning), since ML models, such as neural networks, are complex and opaque (black-box). Trustworthiness of AI/ML is achieved by providing design assurance, i.e., evidence that certain guidelines and verification processes have been followed during design and deployment. R&D activities in this area shall focus on the exploration of novel methods for the development and verification of AI/ML models in aviation applications. This includes data collection and data assurance techniques (e.g., completeness analysis), optimization techniques for ML models (e.g., simplification, distillation) to improve their performance, explainability, and verifiability, formal and statistical analysis approaches for neural networks, and new processes for providing guarantees and guardrails during deployment and updates of AI/ML systems [EASA24, FAA24]. The outcomes shall contribute to existing guidelines for AI/ML development in aviation by offering new means of compliance with certification objectives. The following thesis proposals provide a (non-exhaustive) list of subjects that are of relevance for Master's theses and internships in this context.
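To give a concrete flavor of the statistical analysis approaches mentioned above, the following minimal sketch estimates the local robustness of a model by sampling perturbations around a reference input and measuring how often the prediction stays stable. The `model` here is a hypothetical hand-written stand-in for illustration only; a real study would apply the same idea to a trained neural network and add sound confidence bounds on the estimate.

```python
import random

# Hypothetical stand-in for an ML model: a linear "classifier" on 2-D inputs.
# A real avionics use case would load a trained neural network instead.
def model(x, y):
    return 1 if 0.8 * x + 0.3 * y > 0.5 else 0

def statistical_robustness(model, x, y, epsilon, n_samples, seed=0):
    """Estimate the fraction of random perturbations within an
    L-infinity ball of radius epsilon that leave the prediction unchanged."""
    rng = random.Random(seed)
    reference = model(x, y)
    stable = 0
    for _ in range(n_samples):
        dx = rng.uniform(-epsilon, epsilon)
        dy = rng.uniform(-epsilon, epsilon)
        if model(x + dx, y + dy) == reference:
            stable += 1
    return stable / n_samples

rate = statistical_robustness(model, 0.7, 0.2, epsilon=0.05, n_samples=10_000)
print(f"empirical robustness: {rate:.3f}")
```

Such sampling-based estimates complement formal verification: they scale to large models where exhaustive analysis is infeasible, at the cost of giving only probabilistic rather than absolute guarantees.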
References:
Research theses available:
- Optimization Methods for More Explainable and Verifiable AI/ML
- Ensuring Safety of Adaptive Learning Systems
- Trustworthy AI Solutions for Prognostics Applications