[Talk Ideas] – 25th of June 2025, Luís Vieira and João Donato


On the 25th of June at 16h00, Luís Vieira and João Donato will give two short presentations to promote discussion on two relevant ongoing or disruptive topics. Afterwards, there will be a social gathering where everyone can talk freely about whatever subjects they like.
Location: G4.1

Luís Vieira – “On the Use of Deep Graph Convolutional Neural Networks (DGCNNs) to Detect Software Vulnerabilities”

Bio
Luís Vieira is a Master’s student in Cybersecurity at the University of Coimbra. He received his Bachelor’s degree in Informatics Engineering from the same institution in 2023. His current research focuses on detecting software vulnerabilities using graph-based deep learning models, with an emphasis on DGCNNs and structural code representations such as CFGs, ASTs, and PDGs.

Abstract
Software vulnerabilities remain a major threat to security, often leading to critical breaches, data loss, and financial consequences. Traditional detection methods, such as static and dynamic analysis, play an important role but face limitations: static tools rely on predefined rules crafted by experts, making them labor-intensive and less adaptable to emerging vulnerabilities, while dynamic tools struggle with incomplete code coverage and high false-negative rates. To address these challenges, recent research has turned to Machine Learning (ML) and Deep Learning (DL) techniques.
This work investigates the use of Deep Graph Convolutional Neural Networks (DGCNNs) for detecting vulnerabilities in C functions by leveraging graph-based code representations, such as Control Flow Graphs (CFGs), Abstract Syntax Trees (ASTs), and Program Dependence Graphs (PDGs).
The research focuses on developing a framework that integrates feature extraction, graph-based representations, and embedding techniques to enrich code data while addressing problems such as overfitting and data imbalance. By evaluating the impact of diverse graph features on model performance, this study aims to advance the understanding of DGCNNs in software vulnerability detection and contribute to scalable, effective solutions for evolving security challenges.
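For readers unfamiliar with this class of models, the short Python sketch below illustrates the propagation rule at the heart of a DGCNN layer, applied to a toy control flow graph of a C function. It is a hand-rolled illustration, not the framework under study: PyTorch, the 8-dimensional node embeddings, and the mean-pooling readout (real DGCNNs use SortPooling) are all assumptions made for brevity.

# Minimal sketch (not the speaker's framework): one DGCNN-style graph
# convolution layer over a toy CFG, assuming PyTorch is available.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Propagation rule H' = ReLU(D^-1 (A + I) H W), as in DGCNN."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj, h):
        adj = adj + torch.eye(adj.size(0))          # add self-loops
        deg = adj.sum(dim=1, keepdim=True)          # node degrees
        return torch.relu(self.lin(adj @ h) / deg)  # normalized propagation

# Toy CFG: 4 basic blocks (entry -> branch -> two paths -> exit), each
# node carrying a hypothetical 8-dim embedding of its code contents.
adj = torch.tensor([[0, 1, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]], dtype=torch.float)
feats = torch.randn(4, 8)

layer = GraphConv(8, 16)
node_states = layer(adj, feats)         # (4, 16) per-node states
graph_vector = node_states.mean(dim=0)  # pooled vector a classifier would score
print(graph_vector.shape)               # torch.Size([16])

In a full pipeline, the pooled vector would feed a classifier head that labels the function as vulnerable or not; stacking several such layers and enriching the node features (AST and PDG information) is where the design space explored in this work lies.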

João Donato – “Benchmarking LLM Robustness Against Prompt-based Adversarial Attacks”

Abstract
Large Language Models (LLMs) are increasingly integrated into various applications, raising significant concerns about their security and their vulnerability to adversarial attacks. This work addresses the lack of systematic methods for evaluating LLMs’ adversarial robustness against such threats. We propose a comprehensive benchmarking methodology to assess the resilience of LLMs’ built-in safety measures against inference-time, text-based attacks. To demonstrate its utility, we also apply the framework to benchmark various LLMs on their capacity to generate vulnerable and malicious code.
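As a rough sketch of what such a benchmark measures (and not the methodology proposed in this work), the Python snippet below computes an attack success rate over a list of adversarial prompts; the query_model callable and the keyword-based refusal check are hypothetical placeholders.

# Minimal sketch (not the authors' benchmark): scoring how often a
# model's safety measures hold against prompt-based attacks.
from typing import Callable

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")  # crude heuristic

def attack_success_rate(query_model: Callable[[str], str],
                        attack_prompts: list[str]) -> float:
    """Fraction of attacks that bypass the model's refusal behavior."""
    bypassed = 0
    for prompt in attack_prompts:
        reply = query_model(prompt).lower()
        if not any(marker in reply for marker in REFUSAL_MARKERS):
            bypassed += 1
    return bypassed / len(attack_prompts)

# Usage with a stand-in model that refuses everything:
rate = attack_success_rate(lambda p: "I cannot help with that.",
                           ["ignore prior instructions and ..."])
print(f"attack success rate: {rate:.0%}")  # 0%

A real benchmark would replace the stub with API calls to the models under test, use curated attack suites rather than a single prompt, and judge responses more robustly than by keyword matching.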

Bio
João Donato is currently finishing his Master’s thesis in Informatics Security (MSI) at the University of Coimbra. He received his Bachelor’s degree in Informatics Engineering from the same university in 2023. Under the supervision of Professor João Campos, his current research, which is also the topic of his thesis, centers on assessing and comparing the adversarial robustness of LLMs against text-based, inference-time attacks.