Website: Intelligent Vehicles Lab @HM
In the field of Autonomous Driving, explainability plays a crucial role not only in model
development but also in gaining public trust and acceptance of autonomous systems.
Natural language explanations, such as scene descriptions or question-answering about
the model’s decisions, help bridge the gap between complex algorithms and human
understanding. However, ensuring that these explanations are faithful, i.e., that they
accurately reflect the model's actual reasoning, remains a significant challenge, especially in
safety-critical domains like Autonomous Driving. Furthermore, explanations must be
plausible, i.e., convincing to humans, and useful. We want to design a score
function to evaluate and benchmark Autonomous Driving explanations.
Link to Thesis Description:
https://iv.ee.hm.edu/wp-content/uploads/2024/10/Explainability_Thesis_Topic.pdf
Your Project
- Review SOTA techniques to evaluate LLMs in Autonomous Driving
- Based on your research, create a score to evaluate the quality of explanations, including relevant categories and metrics, applicable in the Autonomous Driving domain (a rough sketch of such a composite score follows this list)
- Evaluate SOTA methods using your score
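To make the task concrete, here is a minimal sketch of what a composite explanation-quality score could look like. The three categories follow the motivation above (faithfulness, plausibility, usefulness), but the field names, weights, and the aggregation scheme are hypothetical placeholders, not part of the thesis description; designing the actual categories and metrics is the core of the project.

```python
from dataclasses import dataclass

# Hypothetical sketch: a weighted composite of per-category scores.
# Category names follow the motivation above; weights are illustrative only.

@dataclass
class ExplanationScores:
    faithfulness: float  # reflects the model's actual reasoning? (0..1)
    plausibility: float  # sounds convincing to humans? (0..1)
    usefulness: float    # helps the user understand or act? (0..1)

def composite_score(s: ExplanationScores,
                    weights: dict[str, float] | None = None) -> float:
    """Weighted aggregate of per-category scores, normalized to [0, 1]."""
    weights = weights or {"faithfulness": 0.5, "plausibility": 0.3, "usefulness": 0.2}
    total = sum(weights.values())
    return (weights["faithfulness"] * s.faithfulness
            + weights["plausibility"] * s.plausibility
            + weights["usefulness"] * s.usefulness) / total

# Example: an explanation that is plausible but poorly grounded in the
# model's decision process should score lower than a faithful one.
print(composite_score(ExplanationScores(faithfulness=0.2, plausibility=0.9, usefulness=0.7)))
print(composite_score(ExplanationScores(faithfulness=0.9, plausibility=0.8, usefulness=0.7)))
```

In practice each category score would itself come from a measurable metric (e.g., human ratings or automated checks against the driving model's internal signals) rather than being set by hand; choosing and validating those metrics is part of the research question.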
Your Profile
- You are preferably studying computer science, electrical engineering, or a related field
- You are able to work independently and conscientiously, and to develop your own ideas based on research
- You have programming experience in Python
What we offer
- You gain insight into the field of Autonomous Driving and Large Language Models
- You get access to high-performance computers and GPU clusters
- You are supervised directly by a PhD student at the Intelligent Vehicles Lab
Does this appeal to you? Then reach out to us via email at <intelligent-vehicles@hm.edu> with a short introduction and motivation, your current grade report, and a CV with a photo.