"Machine Learning–Based Quantification of Security Mechanism Outputs into Subjective Logic Opinions in V2X Environments," project, Bachelor's, or Master's thesis, A. Hermann (supervisor), F. Kargl (examiner), Inst. of Distr. Sys., Ulm Univ., 2026 –
Available.
Security mechanisms in Vehicle-to-Everything (V2X) environments, such as misbehavior detection systems, generate outputs that indicate potentially malicious behavior but do not directly provide a unified, interpretable trust representation. This thesis investigates methods for quantifying such outputs into subjective logic opinions that can be consumed by trust assessment frameworks. The focus is on a machine learning–based approach that learns the mapping from security mechanism outputs to belief, disbelief, and uncertainty values. The proposed method will be compared against existing quantification techniques to evaluate improvements in accuracy, robustness, and interpretability. The evaluation will be conducted using realistic V2X datasets and scenarios.
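For intuition, a common non-learned baseline for such quantification is subjective logic's evidence-to-opinion mapping, which converts counts of positive and negative observations into belief, disbelief, and uncertainty. The sketch below is illustrative only; the function name and the interpretation of detector outputs as evidence counts are assumptions, not the approach developed in this thesis:

```python
def opinion_from_evidence(r: float, s: float, W: float = 2.0):
    """Map evidence to a binomial subjective logic opinion.

    r: amount of positive (benign) evidence, e.g. messages passing checks
    s: amount of negative (malicious) evidence, e.g. detector alerts
    W: non-informative prior weight (conventionally 2 for binomial opinions)

    Returns (belief, disbelief, uncertainty), which always sum to 1.
    """
    total = r + s + W
    return r / total, s / total, W / total


# Example: 8 benign observations, 2 misbehavior alerts.
b, d, u = opinion_from_evidence(r=8, s=2)
# With little evidence, uncertainty dominates; as evidence accumulates,
# u shrinks and the opinion approaches a plain probability estimate.
```

A learned quantifier would replace this fixed formula with a model trained to output (b, d, u) on the probability simplex, which is what the thesis compares against baselines like the one above.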