Methanol to acetic acid production plant
Authorship
M.A.G.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 09:30
Summary
This project covers the production of 70,000 t/year of acetic acid from methanol and carbon monoxide by means of the methanol carbonylation process. This organic compound has a wide variety of uses, from cosmetic and pharmaceutical products to the food, textile and chemical industries, which makes it a product of great industrial interest. Within the project, Martín Álvarez carries out the rigorous design of the reactor, in which the carbonylation of methanol to acetic acid takes place, and Elena Ojea that of the first column, which separates acetic acid and water from the other components of the reaction medium.
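For reference, the overall stoichiometry of the carbonylation step, which underlies the industrial Monsanto and Cativa routes to acetic acid, is:

```latex
\mathrm{CH_3OH + CO \longrightarrow CH_3COOH}
```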
Direction
FREIRE LEIRA, MARIA SONIA (Tutorships)
González Álvarez, Julia (Co-tutorships)
Court
VIDAL TATO, MARIA ISABEL (Chairman)
MAURICIO IGLESIAS, MIGUEL (Secretary)
RODRIGUEZ MARTINEZ, HECTOR (Member)
Backend Archetype with Hexagonal Architecture
Authorship
S.A.P.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 11:00
Summary
In software development, the initial phase of a new project often involves repetitive and error-prone tasks, especially when the use of complex architectural patterns is required, as is the case of the hexagonal architecture, which promotes a strict separation between the business domain and its external dependencies. In this context, this Final Degree Project addresses the need to facilitate and standardise this phase in a real business environment, specifically in collaboration with Altia Consultores. To this end, a project generator has been developed based on the Yeoman template engine, which automates the construction of a base structure in Java with Spring Boot. The generator creates, from a data model, all the necessary layers of a microservice in hexagonal architecture, including business entities, use cases, ports and adapters, serving as a starting point for the development of new applications.
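The generator itself emits Java with Spring Boot, but the ports-and-adapters separation it scaffolds can be sketched compactly in any language. A minimal illustration in Python follows; all class names are hypothetical and chosen only to show the layering, not taken from the generated archetype.

```python
# Minimal sketch of the hexagonal (ports-and-adapters) pattern the generator
# scaffolds. The real generator emits Java/Spring Boot; names here are
# hypothetical and only illustrate the layering.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Customer:                  # domain entity: no framework dependencies
    id: int
    name: str

class CustomerRepository(ABC):   # output port: the domain's view of persistence
    @abstractmethod
    def find_by_id(self, customer_id: int) -> Customer: ...

class GetCustomerUseCase:        # use case: depends only on the port
    def __init__(self, repo: CustomerRepository):
        self.repo = repo
    def execute(self, customer_id: int) -> Customer:
        return self.repo.find_by_id(customer_id)

class InMemoryCustomerRepository(CustomerRepository):  # adapter: implements the port
    def __init__(self):
        self._rows = {1: Customer(1, "Ada")}
    def find_by_id(self, customer_id: int) -> Customer:
        return self._rows[customer_id]

# wiring happens at the edge, keeping the business domain framework-free
use_case = GetCustomerUseCase(InMemoryCustomerRepository())
print(use_case.execute(1))
```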
Direction
Fernández Pena, Anselmo Tomás (Tutorships)
Rial Dosil, Mónica (Co-tutorships)
Antón Bueso, José Luis (Co-tutorships)
Court
Blanco Heras, Dora (Chairman)
GAGO COUSO, FELIPE (Secretary)
Sánchez Vila, Eduardo Manuel (Member)
TrollHunter: Review Management System Based on Large Language Models and AI
Authorship
J.A.R.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 11:30
Summary
In today's digital era, online reputation has become one of the most strategic assets for any company. Platforms such as Google Maps, Trustpilot, and social media act as public forums where customers share their experiences, generating large volumes of data. Properly managing this information, distinguishing between legitimate criticism and malicious attacks, and extracting useful insights is now a major challenge for organizations. This Final Degree Project presents the design and implementation of TrollHunter, an AI-based system for analyzing and managing corporate online reputation. Its main objective is twofold: on the one hand, to optimize customer satisfaction measurement through detailed statistical reports from reviews; and on the other, to enhance corporate image by providing tools to detect and manage anomalous or troll-generated comments, including AI-assisted reply generation. To achieve these goals, a technological solution has been developed with a robust Python backend and a web interface built with React and TypeScript. The system allows automated review extraction from Google Places, combining the official API with web scraping techniques. The AI component runs locally through Ollama and is based on a Retrieval-Augmented Generation (RAG) architecture, supported by a persistent vector database. This database centralizes both the conversational system and analysis processes, ensuring consistent information handling. Finally, a web portal has been implemented, enabling users to interact with all functionalities: business search, chatbot interaction, company-specific reporting, and automated suggestions for replying to anomalous comments. This system offers an effective way to better understand public perception of a business in digital environments.
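A minimal sketch of the retrieval step of such a RAG pipeline, using a local Ollama server for embeddings as the summary describes, is shown below. The embedding model name and the in-memory index are illustrative assumptions; the real system uses a persistent vector database.

```python
# Sketch of the retrieval step of a RAG pipeline over reviews, using local
# Ollama embeddings. The model 'nomic-embed-text' and the toy in-memory
# index are assumptions; the thesis system persists vectors in a database.
import numpy as np
import ollama

reviews = [
    "Great service, friendly staff.",
    "Terrible place!!! Everyone should avoid it!!!",
    "Decent food, slow delivery.",
]

def embed(text: str) -> np.ndarray:
    resp = ollama.embeddings(model="nomic-embed-text", prompt=text)
    return np.asarray(resp["embedding"], dtype=np.float32)

index = np.stack([embed(r) for r in reviews])          # toy vector index

def top_k(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [reviews[i] for i in np.argsort(-sims)[:k]]

# the retrieved reviews would then be passed as context to the chat model
print(top_k("complaints about waiting times"))
```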
Direction
TABOADA IGLESIAS, MARÍA JESÚS (Tutorships)
Vidal De la Rosa, José Luis (Co-tutorships)
Court
Blanco Heras, Dora (Chairman)
GAGO COUSO, FELIPE (Secretary)
Sánchez Vila, Eduardo Manuel (Member)
State-based deep learning architectures for predictive monitoring
Authorship
J.A.V.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 09:30
Summary
Process mining is an area that allows analysing event logs in order to discover, monitor and improve business processes. Within this field, predicting the next activity is a central task in predictive monitoring, which has been successfully addressed with deep learning architectures, especially recurrent neural networks such as LSTM, and Transformers. In this work we study the performance of Mamba, a recent architecture based on state space models that has shown promising results in areas such as natural language processing, and adapt it specifically to the next-activity prediction task for the first time. To this end, a comparative analysis between Mamba, LSTM and Transformers is carried out by implementing five incremental versions of each architecture. These versions progressively incorporate temporal attributes, resource attributes and attention mechanisms, and are evaluated on three public datasets following a cross-validation protocol. The results show that Mamba can compete with traditional architectures in terms of accuracy. However, its performance is more variable and its training times are considerably higher, indicating that it still requires optimisation. This work highlights the potential of Mamba in the field of process mining and lays the groundwork for future research.
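As a point of reference, a minimal next-activity baseline of the kind these architectures extend can be sketched in PyTorch. Dimensions and vocabulary below are illustrative and are not the thesis configuration.

```python
# Minimal next-activity prediction baseline in PyTorch, of the kind the
# compared architectures (LSTM/Transformer/Mamba) extend. Sizes are toy
# values; this is not the thesis implementation.
import torch
import torch.nn as nn

class NextActivityLSTM(nn.Module):
    def __init__(self, n_activities: int, emb: int = 32, hidden: int = 64):
        super().__init__()
        self.embedding = nn.Embedding(n_activities, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_activities)

    def forward(self, prefixes: torch.Tensor) -> torch.Tensor:
        x = self.embedding(prefixes)          # (batch, seq, emb)
        out, _ = self.lstm(x)                 # (batch, seq, hidden)
        return self.head(out[:, -1, :])       # logits for the next activity

model = NextActivityLSTM(n_activities=10)
batch = torch.randint(0, 10, (4, 7))          # 4 case prefixes of length 7
logits = model(batch)                         # (4, 10)
print(logits.argmax(dim=-1))                  # predicted next activities
```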
Direction
VIDAL AGUIAR, JUAN CARLOS (Tutorships)
LAMA PENIN, MANUEL (Co-tutorships)
GAMALLO FERNANDEZ, PEDRO (Co-tutorships)
Court
CABALEIRO DOMINGUEZ, JOSE CARLOS (Chairman)
FLORES GONZALEZ, JULIAN CARLOS (Secretary)
Jeremías López, Ana (Member)
Benchmark for signal compression and image processing in 16- and 32-bit arithmetic
Authorship
E.B.P.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 10:00
Summary
Reduced-precision floating-point data types such as binary16 and bfloat16 are dominated by GPUs, where most of the tools, development and research are concentrated. By comparison, support for these data types on CPUs is more limited in all these areas. In this work, a benchmark is developed to evaluate the accuracy and performance of these data types on x86 and ARM CPU architectures. The work presents the current state of architectural support for these data types, describes the programs that make up the benchmark and the CPUs on which it is executed, and analyzes the results obtained. Metrics are then developed to evaluate and contextualize the performance of these data types in order to draw conclusions about the current status of binary16 and bfloat16 on CPUs.
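The accuracy side of the comparison comes down to how the 16 bits are split between exponent and mantissa. A small sketch of this trade-off, assuming the ml_dtypes package for a CPU bfloat16 type (the thesis benchmark itself may rely on native compiler/ISA support instead), follows:

```python
# Illustration of the accuracy trade-off between binary16 and bfloat16:
# same bit width, different exponent/mantissa split. Uses ml_dtypes for
# bfloat16 on CPU; the actual benchmark may use native hardware support.
import numpy as np
from ml_dtypes import bfloat16

x = np.linspace(0.1, 1000.0, 10_000, dtype=np.float64)

for name, dt in [("binary16", np.float16), ("bfloat16", bfloat16)]:
    y = x.astype(dt).astype(np.float64)       # round-trip through the type
    rel_err = np.abs(y - x) / x
    print(f"{name}: max relative error = {rel_err.max():.2e}")

# Expected: bfloat16 shows larger rounding error (8 significand bits vs 11)
# but keeps float32's exponent range, so it overflows far later than binary16.
```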
Direction
QUESADA BARRIUSO, PABLO (Tutorships)
LOPEZ FANDIÑO, JAVIER (Co-tutorships)
Court
CABALEIRO DOMINGUEZ, JOSE CARLOS (Chairman)
FLORES GONZALEZ, JULIAN CARLOS (Secretary)
Jeremías López, Ana (Member)
Solketal production plant by ketalisation of bio-derived glycerol
Authorship
R.B.C.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 10:10
Summary
The purpose of this project is the design of a plant for the production of solketal by means of a ketalisation reaction between acetone and bio-derived glycerol. The production capacity of the plant will be 20,021 tonnes per year of solketal product with a purity of 99.9% by weight. The plant will operate continuously 330 days a year, 24 hours a day. This project involves the rigorous design of the T-202 unit, which is a distillation column. Through the development of this project, the student Rodrigo Barrio Corbillón opts for the degree in Chemical Engineering, awarded by the Escola Técnica Superior de Enxeñaría of the Universidade de Santiago de Compostela.
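For reference, the overall ketalisation stoichiometry, with acetone condensing onto two of glycerol's hydroxyl groups and releasing water, is:

```latex
\mathrm{C_3H_8O_3\;(glycerol) + C_3H_6O\;(acetone) \rightleftharpoons C_6H_{12}O_3\;(solketal) + H_2O}
```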
Direction
SANCHEZ FERNANDEZ, ADRIAN (Tutorships)
Court
VIDAL TATO, MARIA ISABEL (Chairman)
MAURICIO IGLESIAS, MIGUEL (Secretary)
RODRIGUEZ MARTINEZ, HECTOR (Member)
Named Entity Linking in the Medical Domain
Authorship
E.B.P.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 09:30
Summary
Semantic interoperability is one of the main challenges in current clinical systems, especially regarding the automatic linking of freely defined medical concepts to standardized entities in ontologies such as SNOMED CT or UMLS. This process, known as biomedical entity linking, is key to ensuring the quality of clinical data, integrating information from heterogeneous sources, and enabling automated, semantically-based analyses. However, it is a particularly complex task due to terminological ambiguity, linguistic variability, and the dependence on clinical context. This work proposed the development of an automatic system capable of verifying and/or establishing mappings between expert-defined concepts and entities in the UMLS knowledge base. To this end, pretrained biomedical language models such as BioBERT were used, capable of generating vector representations (embeddings) that capture the semantics of medical descriptions and allow for the comparison of entities based on contextual similarity. The system was validated through a set of experiments covering three main scenarios: reviewing existing mappings between concepts and UMLS entities; automatically linking entities without prior identification; and re-evaluating already annotated mappings using semantic retrieval techniques. These experiments allowed the observation of the system’s behavior in both controlled and real-world settings, showing that higher-capacity models like BioBERT-large achieve better results. Furthermore, the importance of the quality and richness of textual descriptions was highlighted: poorly defined entities or those lacking context lead to errors, even when the correct entity is present in the knowledge base. One of the most significant technical challenges was building the knowledge base, which required obtaining more than 318,000 SNOMED CT concepts via the BioPortal API. This process, which was both constrained by access restrictions and time-consuming, highlighted the need for more direct and efficient sources for biomedical knowledge integration. As a final result, this work presents a reproducible and functional architecture that demonstrates the potential of language models in the task of biomedical entity linking, while also identifying their current limitations. The conclusions point toward the need to integrate complementary strategies such as semantic reranking, the use of ontology-based type filters, and the inclusion of extended clinical context. This system thus represents a first step toward the reliable automation of terminological normalization in real clinical records.
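The core embedding-and-rank step can be sketched with the Hugging Face transformers library. The checkpoint id and mean pooling below are illustrative assumptions, not the exact thesis configuration:

```python
# Sketch of embedding-based entity linking: encode a mention and candidate
# descriptions with a biomedical BERT and rank by cosine similarity.
# Model id and pooling are assumptions, not the thesis configuration.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "dmis-lab/biobert-base-cased-v1.1"   # assumed checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
enc = AutoModel.from_pretrained(model_id)

def embed(texts: list[str]) -> torch.Tensor:
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = enc(**batch).last_hidden_state       # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)
    vecs = (out * mask).sum(1) / mask.sum(1)       # mean pooling over tokens
    return torch.nn.functional.normalize(vecs, dim=-1)

mention = ["myocardial infarction"]
candidates = ["Heart attack", "Cerebral infarction", "Angina pectoris"]
scores = embed(mention) @ embed(candidates).T      # cosine similarities
print(candidates[scores.argmax().item()])          # best-ranked candidate
```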
Direction
CHAVES FRAGA, DAVID (Tutorships)
LAMA PENIN, MANUEL (Co-tutorships)
PEÑA GIL, CARLOS (Co-tutorships)
Court
MOSQUERA GONZALEZ, ANTONIO (Chairman)
CORES COSTA, DANIEL (Secretary)
QUESADA BARRIUSO, PABLO (Member)
Application for exploration and analysis of Internet connected devices
Authorship
D.C.D.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 10:00
Summary
This Bachelor’s Thesis (TFG) proposes a computing system inspired by device search engines such as Shodan or Censys, designed to provide researchers, security auditors, and network administrators with a clear and up-to-date view of exposed assets on the global network. The developed system integrates a distributed crawler that explores IPv4 address ranges using parallel scanning and load balancing techniques, ensuring a comprehensive scan without compromising infrastructure stability. The process includes port and service scanning, with version detection, IP address geolocation, vulnerability analysis, and device type classification. To manage the large amount of information generated, a storage and indexing system based on a NoSQL database has been implemented, allowing devices to be filtered by location, service type, or software version, and performing high-performance aggregations. In addition, an intuitive web interface is available to end users, allowing them to explore data using dynamic filters, configure custom alerts, and export detailed reports. Finally, a real-world case study is presented, encompassing a massive scan of a large set of IP addresses and the subsequent analysis of the obtained data. The experiments conducted demonstrate that the proposed solution successfully met the objectives of scan coverage, performance, and scalability, as well as accurate detection of services and vulnerabilities. User testing also confirmed the reliability of the web interface and the robustness of the system across various usage scenarios.
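The basic building block of such a crawler is a concurrent TCP port probe, sketched below with the standard library only. The real system layers IPv4 range iteration, load balancing, service/version detection and geolocation on top of this primitive.

```python
# Minimal concurrent TCP port probe, the basic building block of a device
# crawler like the one described. Illustrative only; scan only hosts you
# are authorized to test.
import socket
from concurrent.futures import ThreadPoolExecutor

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host: str, ports: list[int]) -> list[int]:
    with ThreadPoolExecutor(max_workers=64) as pool:
        flags = pool.map(lambda p: is_open(host, p), ports)
    return [p for p, open_ in zip(ports, flags) if open_]

print(scan("127.0.0.1", [22, 80, 443, 8080]))   # open ports on localhost
```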
Direction
CARIÑENA AMIGO, MARIA PURIFICACION (Tutorships)
Viqueira Hierro, Bernardo (Co-tutorships)
Court
MOSQUERA GONZALEZ, ANTONIO (Chairman)
CORES COSTA, DANIEL (Secretary)
QUESADA BARRIUSO, PABLO (Member)
Hybrid transformer-based earth observation image classification technique
Authorship
H.C.R.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 10:30
Summary
In recent years, machine learning and deep learning neural networks have been a focal point of interest for many researchers, companies and public administrations. Specifically, in the area of remote sensing, attention is focused on multispectral images and on the creation of digital twins for the analysis of spillages, tree cover or the maintenance of large expanses of agricultural meadows. For this reason, working on such a topical subject was considered an interesting way to learn about the state of the art, as well as to make a modest contribution to the field of image classification. To this end, several transformer models were adapted to a set of multispectral images obtained from the basins of several Galician rivers. One of the main challenges consisted of choosing accurate and efficient methods that could be transferred to real-time situations, which are highly applicable, as mentioned, to the digital twins also being promoted in Galicia. The experimentation carried out produced very interesting accuracy and cost metrics. It mainly showed that hybrid transformer models, such as FastViT or CoAtNet, are potentially useful for situations that require the highest accuracy in the shortest time. It was also concluded that there is a strong global component in the remotely sensed images of river basins, so such features must be captured in the early stages of the network to perform class inference correctly.
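The "hybrid" idea behind models like FastViT or CoAtNet is the combination of a convolutional stem for local features with transformer layers for global context. A toy sketch of that idea, adapted to multispectral input, follows; the band count and sizes are illustrative, and this is not either published architecture:

```python
# Toy illustration of a hybrid conv+transformer classifier: a convolutional
# stem extracts local features, a transformer encoder adds global context.
# Band count (e.g. 10 multispectral bands) and sizes are illustrative.
import torch
import torch.nn as nn

class TinyHybrid(nn.Module):
    def __init__(self, in_bands: int = 10, dim: int = 64, n_classes: int = 5):
        super().__init__()
        self.stem = nn.Sequential(                   # local features
            nn.Conv2d(in_bands, dim, 3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1),
        )
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # global context
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.stem(x)                             # (b, dim, h, w)
        tokens = f.flatten(2).transpose(1, 2)        # (b, h*w, dim)
        return self.head(self.encoder(tokens).mean(dim=1))

logits = TinyHybrid()(torch.randn(2, 10, 64, 64))    # 2 patches, 10 bands
print(logits.shape)                                  # torch.Size([2, 5])
```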
Direction
Argüello Pedreira, Francisco Santiago (Tutorships)
Blanco Heras, Dora (Co-tutorships)
Court
MOSQUERA GONZALEZ, ANTONIO (Chairman)
CORES COSTA, DANIEL (Secretary)
QUESADA BARRIUSO, PABLO (Member)
Comparison of search technologies in evaluation benchmarks for the study of misinformation in the context of health-related queries
Authorship
X.C.A.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.18.2025 11:00
Summary
The presence of misinformation in web search results related to health is a concerning issue both socially and scientifically, as it can negatively influence users' decision-making and lead to serious health consequences. This phenomenon, which gained visibility because of the COVID-19 pandemic, is one of the current research areas in Information Retrieval (IR), around which this bachelor thesis is structured. The main goal of the project is to explore how to distinguish relevant and accurate documents from harmful ones, given a specific search intent. With this goal in mind, a study structured along three research lines is proposed. First, a systematic analysis is conducted on the performance of state-of-the-art search systems with respect to this task. Second, a new technique is designed, implemented, and evaluated, based on Large Language Models (LLMs), for generating alternative versions of user queries in such a way that the variants promote the retrieval of relevant and accurate documents, while discouraging the retrieval of harmful ones. Lastly, a study is presented on predicting the presence of health misinformation in search results, for which techniques from related fields are tested, and a new LLM-based predictor, specifically tailored for this task, is designed, implemented, and evaluated. The findings of this work support the potential of LLMs in the field of IR, as they manage to improve the effectiveness of state-of-the-art search systems. Moreover, the project addresses the literature gap with regard to misinformation prediction in queries, while also showing the superior capability of LLMs for this task compared to more general techniques. Parts of the contributions from this project have been accepted for publication at the ACM SIGIR 2025 conference.
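The query-variant generation step can be sketched with a local LLM. The prompt wording and model name below are assumptions; the thesis' actual prompts, models and evaluation are certainly more elaborate:

```python
# Sketch of LLM-based query variant generation for health queries, in the
# spirit of the technique described. Prompt and model are assumptions.
import ollama

def query_variants(query: str, n: int = 3) -> list[str]:
    prompt = (
        f"Rewrite the health search query '{query}' in {n} alternative ways "
        "that favor retrieving medically accurate documents. "
        "Return one variant per line, with no extra text."
    )
    resp = ollama.chat(
        model="llama3",                      # assumed locally available model
        messages=[{"role": "user", "content": prompt}],
    )
    lines = resp["message"]["content"].strip().splitlines()
    return [l.strip("-• ").strip() for l in lines if l.strip()][:n]

print(query_variants("does vitamin C cure the common cold"))
```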
Direction
Losada Carril, David Enrique (Tutorships)
FERNANDEZ PICHEL, MARCOS (Co-tutorships)
Court
MOSQUERA GONZALEZ, ANTONIO (Chairman)
CORES COSTA, DANIEL (Secretary)
QUESADA BARRIUSO, PABLO (Member)
PIDASS - Intelligent Platform for the Design of Healthy and Sustainable Foods
Authorship
A.C.R.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 13:30
Summary
This project develops a web application to optimize the creation of food products for small and medium companies, following a structured methodology based on the approach of researcher Jairo Torres. The tool aims to reduce development time and improve competitiveness against larger companies in the sector. The application is an ERP that allows users to manage food development projects, integrating key functionalities. It is developed in Flask with SQL databases and follows a workflow organized into four main phases, including 27 functionalities defined in a Miro board. To move between phases, the user must manually review and approve progress. The platform is implemented as a web solution accessible from any device without installation, hosted on a remote server. Its expected impact is to meet the needs of researcher Jairo Torres, reducing food development time, with the potential to adapt to other SMEs in the sector with minimal modifications. This document details the entire process: project management, requirements, design, testing, conclusions, and future lines of work. Additionally, it includes a technical manual (for developers) and a user manual (for end clients) to facilitate understanding and use of the system.
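The phase-approval workflow (advance only after manual review) maps naturally onto a couple of Flask routes. A minimal sketch follows; the route paths, phase names and fields are hypothetical, and the real application also persists state to a SQL database:

```python
# Minimal Flask sketch of a phase-approval workflow: projects move through
# four phases only after manual approval. Routes and fields are hypothetical.
from flask import Flask, abort, jsonify

app = Flask(__name__)
PHASES = ["ideation", "formulation", "processing", "validation"]  # assumed names
projects = {1: {"name": "demo", "phase": 0, "approved": False}}

@app.post("/projects/<int:pid>/approve")
def approve(pid: int):
    p = projects.get(pid) or abort(404)
    p["approved"] = True                      # manual review sign-off
    return jsonify(p)

@app.post("/projects/<int:pid>/advance")
def advance(pid: int):
    p = projects.get(pid) or abort(404)
    if not p["approved"]:
        abort(409, "current phase not approved yet")
    if p["phase"] < len(PHASES) - 1:
        p["phase"] += 1
        p["approved"] = False                 # next phase needs its own review
    return jsonify({**p, "phase_name": PHASES[p["phase"]]})

if __name__ == "__main__":
    app.run(debug=True)
```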
Direction
PICHEL CAMPOS, JUAN CARLOS (Tutorships)
BARBOSA PEREIRA, LETRICIA (Co-tutorships)
Court
Pardo López, Xosé Manuel (Chairman)
SUAREZ GAREA, JORGE ALBERTO (Secretary)
CARIÑENA AMIGO, MARIA PURIFICACION (Member)
Acrylonitrile production plant
Authorship
D.I.C.S.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 10:40
Summary
The objective of this Final Degree Project is the design of a production plant for 50,000 tons per year of acrylonitrile. This compound has widespread industrial use. It is commonly employed in the production of synthetic fibers, plastics, resins, and in the synthesis of other chemicals. Acrylonitrile is obtained through the ammoxidation reaction of propylene, ammonia, and oxygen, known as the Sohio process. The main reaction products are acrylonitrile, HCN, acetonitrile, water, carbon dioxide, and carbon monoxide. The reaction takes place in a fluidized-bed reactor using a bismuth molybdate oxide catalyst supported on alumina. Subsequently, the reaction products pass through an absorber (using water as the solvent) to recover the highest possible amount of acrylonitrile. Finally, the products are sent through a distillation column train, whose main objective is the separation and purification of acrylonitrile to 99.7% purity.
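For reference, the overall stoichiometry of the Sohio ammoxidation described above is:

```latex
\mathrm{2\,C_3H_6 + 2\,NH_3 + 3\,O_2 \longrightarrow 2\,C_3H_3N + 6\,H_2O}
```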
Direction
SOTO CAMPOS, ANA MARIA (Tutorships)
Court
VIDAL TATO, MARIA ISABEL (Chairman)
MAURICIO IGLESIAS, MIGUEL (Secretary)
RODRIGUEZ MARTINEZ, HECTOR (Member)
Developing and implementing an independent videogame using Unreal Engine 5: Code Breaker
Authorship
A.O.C.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 09:30
Summary
Motivated by the relevance and growth of the video game industry, as well as by the complexity involved in developing a video game, this project aims to explore the various aspects of the video game development process independently. Beginning with an analysis of the state of the art, and after selecting a game engine as a support tool for creating the software system, the development of a multiplayer video game is undertaken, following similar patterns to those used in the industry. The central focus lies in implementing a real-time system that enables communication between multiple interconnected devices. The project addresses the stages of design, implementation, and testing, providing explanations for the decisions made and the tests carried out, and concludes with a summary that brings together all the project's efforts. Additionally, complementary technical and user manuals are provided to facilitate future development and software use.
Direction
FLORES GONZALEZ, JULIAN CARLOS (Tutorships)
González Santos, Alejandro (Co-tutorships)
Court
LAMA PENIN, MANUEL (Chairman)
BORRAJO GARCIA, MARIA ISABEL (Secretary)
VARELA PET, JOSE (Member)
ENERGETIC: Foundation few-shot models for medical image segmentation
Authorship
C.C.R.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 10:00
Summary
Automatic segmentation of medical images can facilitate the diagnosis and prognosis of pathologies in the clinical setting. However, it faces two major challenges: the scarcity of labelled data and the enormous heterogeneity of images, due to the diversity of imaging modalities and variations in contrast, noise and anatomies. Foundation models, trained on large volumes of data, promise to offer great generalisability, while few-shot techniques allow adaptation to environments with very few annotated examples, reducing costs. The aim of this work is to perform a comparative study of several few-shot segmentation models trained on multimodal medical imaging datasets, assessing their generalisability as a foundation model. First, several 2D and 3D medical image datasets were collected to create a heterogeneous database consisting of different image modalities and segmentation tasks. Four few-shot segmentation architectures (PANet, PFENet, CyCTR and UniverSeg) were then trained on this, varying both the use of data augmentation and the size of the support set (1-shot vs. 5-shot). Quantitative results based on the DSC metric and Hausdorff distance showed that increasing the amount of data worsened performance, while increasing the number of support images improved segmentation accuracy. CyCTR and UniverSeg were the most accurate in segmentation tasks used in the training phase, but both suffered a noticeable drop in modalities not included in that phase. Only UniverSeg with 5 support images without data augmentation maintained relatively stable results in new scenarios, suggesting that it is the model that best extracts relevant features from the support images. Overall, this study demonstrates that current few-shot approaches have significant limitations to perform as foundation segmentation models. This highlights the need for further research and development of methods with improved capabilities in this area.
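The DSC metric used in the evaluation is the standard Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and ground-truth mask B. A reference implementation on toy arrays:

```python
# Dice similarity coefficient (DSC) for binary segmentation masks.
# Standard formula; the arrays here are toy examples.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float(2.0 * inter / (pred.sum() + truth.sum() + eps))

pred = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 0], [0, 1, 1]])
print(f"DSC = {dice(pred, truth):.3f}")   # 2*2 / (3+3) ≈ 0.667
```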
Direction
VILA BLANCO, NICOLAS (Tutorships)
Carreira Nouche, María José (Co-tutorships)
Court
LAMA PENIN, MANUEL (Chairman)
BORRAJO GARCIA, MARIA ISABEL (Secretary)
VARELA PET, JOSE (Member)
Design of a carbon dioxide capture facility using ionic liquids
Authorship
T.C.D.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 11:35
Summary
The objective of this project is the design of an industrial facility to capture carbon dioxide from the gaseous stream generated by an industrial plant (the facilities of MGR Magnesitas de Rubián S.A., whose manufacturing activity generates the CO2). The purpose of the facility is therefore to capture the CO2 and avoid its emission to the atmosphere, where it contributes to global warming. To reach this goal, the use of chemical absorption technology with an ionic liquid of the AHA (aprotic heterocyclic anion) family is proposed, specifically [P2228][2-CNPyr]. The facility is organised into three sections: in the first, both the gases and the ionic liquid are conditioned; in the second, the chemical absorption takes place, followed by the regeneration of the ionic liquid and the isolation of the CO2; in the third, the carbon dioxide is liquefied, making its sale possible. The unit designed in detail is the absorption tower.
Direction
RODRIGUEZ MARTINEZ, HECTOR (Tutorships)
Court
MOREIRA MARTINEZ, RAMON FELIPE (Chairman)
Pedrouso Fuentes, Alba (Secretary)
SINEIRO TORRES, JORGE (Member)
FasterThanLight: Aggregate metrics in Agile environments and integration with heterogeneous systems
Authorship
A.D.S.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 12:00
Summary
This thesis, developed in collaboration with HP, aims to design and implement an automated system for extracting, processing, storing, and visualizing data from the Jira and Azure DevOps platforms. The main motivation stems from HP's real need to consolidate and analyze the information dispersed across both tools, thereby optimizing project management and data-driven decision-making. The developed system automates the retrieval of relevant data through the use of REST APIs, standardizes the information to ensure consistency, and stores it in a database managed with Django. Finally, the data is presented in interactive dashboards created with Power BI, facilitating visual analysis and drawing practical conclusions.
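Extraction of this kind typically pages through Jira's REST search endpoint (/rest/api/2/search). A sketch under that assumption follows; the instance URL, credentials and JQL filter are placeholders, and the real system normalizes and stores the results via Django:

```python
# Sketch of paged issue extraction via Jira's REST search endpoint.
# URL, credentials and the JQL filter are placeholders/assumptions.
import requests

BASE = "https://your-jira.example.com"        # placeholder instance
AUTH = ("user", "api-token")                  # placeholder credentials

def fetch_issues(jql: str, page_size: int = 50) -> list[dict]:
    issues, start = [], 0
    while True:
        r = requests.get(
            f"{BASE}/rest/api/2/search",
            params={"jql": jql, "startAt": start, "maxResults": page_size},
            auth=AUTH, timeout=30,
        )
        r.raise_for_status()
        batch = r.json()
        issues.extend(batch["issues"])
        start += page_size
        if start >= batch["total"]:           # all pages consumed
            return issues

# e.g. all issues updated in the last week, ready for normalization/storage
# print(len(fetch_issues("updated >= -7d")))
```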
Direction
RIOS VIQUEIRA, JOSE RAMON (Tutorships)
Gago Pérez, Alicia (Co-tutorships)
Morán Bautista, Rafael (Co-tutorships)
Court
Pardo López, Xosé Manuel (Chairman)
SUAREZ GAREA, JORGE ALBERTO (Secretary)
CARIÑENA AMIGO, MARIA PURIFICACION (Member)
Study and optimization of Octree models for neighbourhood searches in 3D point clouds
Authorship
P.D.V.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.18.2025 12:30
Summary
This work presents an approach for efficient neighbour search in 3D point clouds, particularly those acquired via LiDAR technology. The proposed method involves a spatial reordering of the point cloud based on Space-Filling Curves (SFCs), coupled with efficient implementations of Octree-based search algorithms. The study explores how Morton and Hilbert SFCs can optimize the organization of three-dimensional data, improving the performance of neighbourhood queries and the construction of spatial structures over the cloud. Several Octree variants are proposed and evaluated, analyzing their impact on the efficiency and scalability of the proposed method. Experimental results demonstrate that SFC reordering, along with specialized Octree-based search algorithms, significantly enhances spatial data access, providing a robust solution for applications requiring fast processing of large 3D point cloud datasets. Part of this work was presented at the XXXV Jornadas de Paralelismo organized by SARTECO.
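The Morton (Z-order) reordering mentioned above works by interleaving the bits of the quantized x, y, z coordinates, so that spatially close points tend to be close in the resulting one-dimensional order. A straightforward reference implementation (production code would use bit tricks or lookup tables rather than a Python loop):

```python
# 3D Morton (Z-order) encoding by bit interleaving. Sorting quantized
# point coordinates by this key yields the space-filling-curve reordering.
def morton3d(x: int, y: int, z: int, bits: int = 21) -> int:
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)       # x in bit positions 0,3,6,...
        code |= ((y >> i) & 1) << (3 * i + 1)   # y in positions 1,4,7,...
        code |= ((z >> i) & 1) << (3 * i + 2)   # z in positions 2,5,8,...
    return code

pts = [(3, 1, 2), (0, 0, 0), (1, 1, 1), (3, 3, 3)]
print(sorted(pts, key=lambda p: morton3d(*p)))  # points in Z-order
```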
Direction
Fernández Rivera, Francisco (Tutorships)
YERMO GARCIA, MIGUEL (Co-tutorships)
Court
Pardo López, Xosé Manuel (Chairman)
SUAREZ GAREA, JORGE ALBERTO (Secretary)
CARIÑENA AMIGO, MARIA PURIFICACION (Member)
Modification and Customization of Arkime for Network Attack Detection
Authorship
L.D.C.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 10:30
Summary
This Final Degree Project presents the design and implementation of a network traffic capture and analysis system aimed at threat detection in a virtualized network environment. The main objective has been to evaluate the Arkime tool as a central component for passive detection and forensic analysis, integrating it with complementary technologies to enhance its effectiveness in identifying malicious behavior. The test environment was deployed on an infrastructure based on Proxmox VE, with multiple virtual machines simulating typical roles in a corporate network: vulnerable nodes, attacker, firewall, and analysis machine. Arkime was installed on a Debian machine and connected to an external OpenSearch service for indexing and advanced visualization of the captured data. Throughout the project, various custom detection rules were developed in YAML and YARA formats, aimed at identifying common attack vectors such as SQL injection, XSS, remote command execution, or the use of insecure protocols. In addition, Python scripts were designed to interact with Arkime's API in order to detect behavioral patterns that cannot be identified through static rules, such as denial-of-service attacks, brute-force attempts, or access from external IPs. The tests were conducted in realistic scenarios: web attacks using DVWA, exploitation of vulnerable services, attacks on Active Directory (GOAD v2), and intrusion simulations. The results have shown that Arkime, when complemented with custom developments, can serve as a solid foundation for a flexible, extensible, and effective passive detection system, even in environments with limited visibility or encrypted traffic. The project concludes with a reflection on the system's current limitations and proposes possible areas for improvement, such as integration with correlation engines, encrypted traffic analysis, and automated incident response mechanisms.
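A sketch of the kind of behavioral detection script described, polling Arkime's sessions API to flag source IPs with unusually many connections (a crude brute-force/DoS heuristic), is shown below. The endpoint parameters, response field names and threshold are assumptions to be adapted to the deployed Arkime version:

```python
# Sketch of a behavioral detection script over Arkime's sessions API.
# Endpoint parameters, field names and threshold are assumptions.
from collections import Counter
import requests

ARKIME = "http://arkime.local:8005"           # placeholder viewer URL
AUTH = ("admin", "password")                  # placeholder credentials

def flag_noisy_sources(threshold: int = 100) -> list[tuple[str, int]]:
    r = requests.get(
        f"{ARKIME}/api/sessions",
        params={"date": 1, "length": 5000},   # last hour, up to 5000 sessions
        auth=AUTH, timeout=30,
    )
    r.raise_for_status()
    sessions = r.json().get("data", [])
    counts = Counter(s.get("source", {}).get("ip") for s in sessions)
    return [(ip, n) for ip, n in counts.most_common() if ip and n >= threshold]

for ip, n in flag_noisy_sources():
    print(f"suspicious source {ip}: {n} sessions in the last hour")
```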
Direction
CARIÑENA AMIGO, MARIA PURIFICACION (Tutorships)
Cernadas Pérez, Guzmán (Co-tutorships)
García Villada, Diego (Co-tutorships)
Court
LAMA PENIN, MANUEL (Chairman)
BORRAJO GARCIA, MARIA ISABEL (Secretary)
VARELA PET, JOSE (Member)
Generation of MPI bindings for the Julia programming language
Authorship
J.E.V.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 13:00
Summary
In the field of high-performance computing (HPC), the use of the Message Passing Interface (MPI) standard is essential for the development of parallel applications on distributed systems. As high-level languages like Julia become increasingly present in scientific and technical environments, the need for robust mechanisms that enable interoperability with libraries written in C, such as MPI, becomes more critical. However, existing solutions show significant limitations in terms of portability, maintenance, and adaptation to new versions of the standard. This work addresses the problem within the framework of the MPI4All project, an infrastructure designed to automatically generate MPI bindings for various programming languages. Specifically, the goal is to develop and integrate a new generator for Julia, enabling MPI4All to produce complete and functional bindings for this language. To achieve this, the system's modular architecture, which separates the analysis of the MPI API from code generation, was leveraged to construct a robust, maintainable, and extensible solution. To validate the solution, several representative HPC benchmarks were implemented. These tests, executed on the Finisterrae III supercomputer at CESGA, demonstrated that the generated bindings allow efficient and reliable use of MPI from Julia, achieving competitive performance compared to implementations in C and other existing solutions in the Julia ecosystem. The result is a flexible and portable system that facilitates the integration of Julia into high-performance parallel workflows, significantly reducing the technical complexity and effort required to maintain compatibility with different versions of the MPI standard.
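To make the generator idea concrete, a minimal sketch (in Python, with an assumed specification format and type map, not MPI4All's actual implementation) of how a parsed MPI function description could be turned into a Julia ccall wrapper:

```python
# Minimal sketch of the generator idea (not MPI4All's actual code): turn a
# parsed MPI function specification into a Julia ccall wrapper. The spec
# format and the C-to-Julia type map are illustrative assumptions.
C_TO_JULIA = {"int": "Cint", "MPI_Comm": "MPI_Comm", "int*": "Ptr{Cint}"}

def emit_julia_binding(spec: dict) -> str:
    args = ", ".join(f"{name}::{C_TO_JULIA[ctype]}" for name, ctype in spec["args"])
    argtypes = ", ".join(C_TO_JULIA[ctype] for _, ctype in spec["args"])
    argnames = ", ".join(name for name, _ in spec["args"])
    return (
        f"function {spec['name']}({args})\n"
        f"    return ccall((:{spec['name']}, libmpi), Cint, ({argtypes},), {argnames})\n"
        f"end\n"
    )

# Example: MPI_Comm_rank(MPI_Comm comm, int *rank)
print(emit_julia_binding({
    "name": "MPI_Comm_rank",
    "args": [("comm", "MPI_Comm"), ("rank", "int*")],
}))
```

Separating the parsed API description from the template that emits code is what lets the same front-end serve several target languages.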
Direction
PICHEL CAMPOS, JUAN CARLOS (Tutorships)
PIÑEIRO POMAR, CESAR ALFREDO (Co-tutorships)
Court
Pardo López, Xosé Manuel (Chairman)
SUAREZ GAREA, JORGE ALBERTO (Secretary)
CARIÑENA AMIGO, MARIA PURIFICACION (Member)
Redesign of a conventional Fischer-Tropsch plant for sustainable fuels
Authorship
J.F.C.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 12:05
Summary
This project proposes two modifications to the conventional Fischer-Tropsch production process to convert it to the production of sustainable fuels and make it more flexible. The decision was made to produce sustainable aviation fuels (SAF) due to their growing demand and the impact of conventional fuels on the carbon footprint. The project includes an analysis of available alternatives, a justification for SAF production, and the detailed design of two of the complex's units, one in the conditioning line and one in the recirculation line. Lucía Freire is designing the packed absorption column in the feed line, and Julia Feijoo Cacabelos is designing the catalytic fixed-bed autothermal reactor in the recirculation section.
Direction
VIDAL TATO, MARIA ISABEL (Tutorships)
Court
MOREIRA MARTINEZ, RAMON FELIPE (Chairman)
Pedrouso Fuentes, Alba (Secretary)
SINEIRO TORRES, JORGE (Member)
Golden Hydrogen Production Plant from wastewater treatment plant sludge
Authorship
C.F.N.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 12:45
Summary
The aim of this TFG is the design of a hydrogen production plant with carbon dioxide capture from biomethane, known as golden hydrogen. The biomethane will be generated on-site through the anaerobic digestion of sludge from urban wastewater treatment plants, WWTPs. The plant's production capacity is 3,150 tonnes of H2 per year, at 99.99% purity. In accordance with the relevant standards and technical manuals, a rigorous design is carried out for the anaerobic digester, the catalytic reformer reactor, and the carbon dioxide absorption tower. The value proposition of this project lies in offering an alternative to close the carbon cycle and to use organic waste as a raw material for producing a valuable product, hydrogen, in this case.
Direction
SANCHEZ FERNANDEZ, ADRIAN (Tutorships)
Court
MOREIRA MARTINEZ, RAMON FELIPE (Chairman)
Pedrouso Fuentes, Alba (Secretary)
SINEIRO TORRES, JORGE (Member)
Analysis and Comparison of Concept Drift Detection and Adaptation Techniques
Authorship
F.F.M.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 10:00
Summary
This work compares different techniques for the detection of and adaptation to Concept Drift, a phenomenon that arises in dynamic and non-stationary environments where the statistical relationships between the model's variables change over time, affecting its performance in some cases. The main objective is to evaluate various drift detectors present in the literature, along with the analysis of several adaptation techniques following detection. The study is carried out in an experimental setting using artificial datasets that simulate different types of drift, applied to a classification model. Classical algorithms from the RiverML library are analyzed, in scenarios with and without prior knowledge of the environment. In the first case, hyperparameter optimization is applied using a novel method based on Random Search. Regarding KSWIN, one of the evaluated algorithms, a modification developed as a complementary part of the Bachelor's Thesis in Mathematics is incorporated. This modification introduces statistical techniques such as multiple hypothesis testing and the Benjamini-Hochberg correction to improve the detection process, as well as a system for identifying the type of drift through non-parametric inference, which is considered innovative in the literature. The results highlight the strengths and limitations of both the detectors and the adaptation strategies analyzed. While some algorithms, such as HDDMW, show good overall performance, the choice of the most appropriate detector largely depends on the use case and the type of drift present. Likewise, adaptation based on mini-batches offers solid performance compared to periodic retraining. In addition, the proposed modification of KSWIN outperforms other detectors in balancing false positives and false negatives during the detection process, and establishes a solid foundation for the drift type identification method.
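To illustrate the statistical idea behind the KSWIN modification (a sketch under assumptions, not the thesis code: window sizes, number of tests, and significance level are illustrative), a KS-window check whose p-values are combined with the Benjamini-Hochberg procedure:

```python
# Illustrative KSWIN-style check: run several Kolmogorov-Smirnov tests of the
# recent data against random sub-samples of the older window, then apply the
# Benjamini-Hochberg correction to the resulting p-values.
import numpy as np
from scipy.stats import ks_2samp

def bh_reject(pvalues, alpha=0.05):
    """Return True if Benjamini-Hochberg rejects at least one null hypothesis."""
    p = np.sort(np.asarray(pvalues))
    m = len(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    return bool(np.any(p <= thresholds))

def drift_detected(window, recent, n_tests=5, alpha=0.05, seed=0):
    """Compare the recent slice against random sub-samples of the older data."""
    rng = np.random.default_rng(seed)
    old = window[: len(window) - len(recent)]
    pvals = [
        ks_2samp(rng.choice(old, size=len(recent), replace=False), recent).pvalue
        for _ in range(n_tests)
    ]
    return bh_reject(pvals, alpha)

# Synthetic stream with an abrupt mean shift halfway through.
rng = np.random.default_rng(42)
stream = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
window, recent = stream[300:600], stream[570:600]  # window spans the drift point
print(drift_detected(window, recent))  # expected: True (mean shifted by 3 sigma)
```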
Direction
MERA PEREZ, DAVID (Tutorships)
Court
LADRA GONZALEZ, MANUEL EULOGIO (Chairman)
LOPEZ FANDIÑO, JAVIER (Secretary)
VIDAL AGUIAR, JUAN CARLOS (Member)
Redesign of a conventional Fischer-Tropsch plant for sustainable fuels
Authorship
L.F.M.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 12:05
Summary
This project proposes two modifications to the conventional Fischer-Tropsch production process to convert it to the production of sustainable fuels and make it more flexible. The decision was made to produce sustainable aviation fuels (SAF) due to their growing demand and the impact of conventional fuels on the carbon footprint. The project includes an analysis of available alternatives, a justification for SAF production, and the detailed design of two of the complex's units: one in the conditioning line and one in the recirculation line. Lucía Freire is designing the packed absorption column in the feed line, and Julia Feijoo Cacabelos is designing the catalytic fixed-bed autothermal reactor in the recirculation section.
Direction
VIDAL TATO, MARIA ISABEL (Tutorships)
Court
MOREIRA MARTINEZ, RAMON FELIPE (Chairman)
Pedrouso Fuentes, Alba (Secretary)
SINEIRO TORRES, JORGE (Member)
Creation of a web application for a basketball club
Authorship
A.G.G.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 12:00
Summary
This work aims to design and develop a web application oriented towards the internal management of a basketball club as well as its public projection. The platform allows both the management of players, matches, call-ups, categories, seasons, and registration periods, as well as the consultation of relevant club information through the website. The application is structured in two parts: a backend developed in Java with Spring Boot, which is responsible for managing the business logic, and a frontend built in Vue.js that provides users with a visual and responsive interface to interact with the system. During the development of the project, roles such as project manager, developer, analyst, or DevOps, among others, were assumed. The final result is a custom-made, functional, and deployable tool that adapts to the needs of a real client.
Direction
VARELA PET, JOSE (Tutorships)
Court
VARELA PET, JOSE (Student’s tutor)
LABS type detergent production plant
Authorship
A.G.V.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 09:30
Summary
Design of a LABS-type detergent production plant from benzene and dodecene with a production capacity of 120,000 tonnes per year. The process involves three reactions: alkylation, sulfonation, and neutralization. In addition, preconditioning before the reactors and the corresponding separation operations are carried out. The alkylation reactor is designed by Lucía Pardilla Fraguas, the second distillation column by Noelia López García, and the sulfonation reactor by Antía Gándara Vieito.
Direction
SOTO CAMPOS, ANA MARIA (Tutorships)
Court
MOSQUERA CORRAL, ANUSKA (Chairman)
SOUTO GONZALEZ, JOSE ANTONIO (Secretary)
RODIL RODRIGUEZ, EVA (Member)
Security Analysis and Vulnerability Assessment in IoT Devices
Authorship
A.G.F.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 10:30
Summary
The rapid growth of the IoT ecosystem has led to an exponential increase in vulnerabilities, many of which are linked to firmware, a critical component that is often overlooked in terms of protection. The lack of structured, low-cost methodologies for tackling this analysis limits the ability to detect and mitigate flaws in both educational and professional contexts. To address this gap, the thesis develops an original methodology aimed at domestic environments, which can be applied without specialized equipment or advanced knowledge in electronics. The proposed approach relies on open-source tools and affordable hardware, enabling its adoption in training, self-learning, or lightweight auditing scenarios. The methodology is structured into five sequential phases: information gathering, exploration of physical interfaces, firmware extraction, static analysis, and validation of findings. Each phase includes clear objectives, procedures, and verification criteria. The design of the methodology focuses on balancing technical effectiveness with operational realism. It is based on plausible assumptions, such as the absence of official documentation or the presence of moderate physical protections. Furthermore, it prioritizes non-invasive techniques, promoting a responsible and safe approach to analyzing real-world devices. The proposed methodology is validated through its application to a practical case, demonstrating the feasibility of the approach, identifying real vulnerabilities, and documenting the analysis process in a reproducible way. The results show that, even with limited resources, it is possible to carry out technically sound investigations and reach meaningful conclusions regarding embedded system security.
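As an example of the sort of low-cost tooling the static-analysis phase can rely on (an illustrative helper, not the thesis toolchain), a per-block Shannon entropy scan that highlights likely compressed or encrypted regions of a firmware image before running dedicated tools such as binwalk:

```python
# Illustrative firmware triage: scan a binary's Shannon entropy per block to
# spot compressed or encrypted regions (near 8 bits/byte) worth closer study.
import math
import sys
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    counts = Counter(block)
    total = len(block)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def scan(path: str, block_size: int = 4096) -> None:
    with open(path, "rb") as f:
        offset = 0
        while block := f.read(block_size):
            h = shannon_entropy(block)
            tag = "high (compressed/encrypted?)" if h > 7.5 else ""
            print(f"0x{offset:08x}  entropy={h:.2f}  {tag}")
            offset += len(block)

if __name__ == "__main__":
    scan(sys.argv[1])  # e.g. python entropy_scan.py firmware.bin
```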
Direction
CARIÑENA AMIGO, MARIA PURIFICACION (Tutorships)
Court
LADRA GONZALEZ, MANUEL EULOGIO (Chairman)
LOPEZ FANDIÑO, JAVIER (Secretary)
VIDAL AGUIAR, JUAN CARLOS (Member)
Embedded AI for multispectral image classification: an optimized approach with Raspberry Pi 5 and Hailo 8L
Authorship
P.G.F.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 09:30
Summary
The aim of this work is to relate deep learning classification of multispectral images to edge computing systems. The main objective is the adaptation of multispectral image classification models so that they can be executed on an embedded system formed by a Raspberry Pi 5 with a Hailo 8L artificial intelligence accelerator chip. To achieve this objective, a pipeline for adapting and compiling deep learning models is proposed, starting from their base representation in PyTorch and ending with a model executable on the embedded system. The adaptation comprises model optimization techniques that quantize the weights and modify their values so as to minimize precision losses, while obtaining reasonable inference times on devices with low computing power and reduced power consumption. Two models, based on a convolutional neural network and a generative adversarial network, were selected to perform a series of experiments analyzing their behavior under different optimization techniques. These tests allowed us to draw conclusions about the adaptation process of the models, the applied techniques, and the selected architectures. Overall, the results showed that the initial approach is feasible for the selected system, although it has some limitations and requires further analysis in deeper research. Nevertheless, it represents a starting point for a new range of applications that can benefit from this paradigm.
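For flavor, a generic PyTorch sketch of the weight-quantization and export step (the real pipeline targets Hailo's own compiler toolchain; the stand-in model, shapes, and opset below are placeholders, not the thesis models):

```python
# Generic illustration of post-training quantization and model export in
# PyTorch; the actual Hailo flow compiles from an interchange format with
# vendor tooling, so this only sketches the optimization idea.
import torch
import torch.nn as nn

model = nn.Sequential(  # stand-in for a multispectral classifier
    nn.Flatten(),
    nn.Linear(5 * 32 * 32, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).eval()

# Post-training dynamic quantization of the linear layers to int8 weights.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Export the float model to ONNX, a common interchange step before
# vendor-specific compilation for an accelerator.
dummy = torch.randn(1, 5, 32, 32)
torch.onnx.export(model, dummy, "classifier.onnx", opset_version=13)
print(quantized)
```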
Direction
SUAREZ GAREA, JORGE ALBERTO (Tutorships)
LOPEZ FANDIÑO, JAVIER (Co-tutorships)
ORDOÑEZ IGLESIAS, ALVARO (Co-tutorships)
Court
PICHEL CAMPOS, JUAN CARLOS (Chairman)
VILA BLANCO, NICOLAS (Secretary)
COSTOYA RAMOS, MARIA CRISTINA (Member)
Ethyl butyrate production plant based on the esterification of butyric acid with ethanol
Authorship
P.G.M.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 11:25
Summary
The objective of this project is to develop the design of an industrial plant dedicated to the production of ethyl butyrate through the esterification reaction between butyric acid and ethanol. The facility will have a production capacity of 5,000 tons per year of ethyl butyrate with a purity of 99.62% by weight and will operate continuously for 330 days a year, 24 hours a day. The main equipment includes the multitubular reactor R-201, designed by Paula Garrido Moreno, and the distillation column T-302, designed by Estela Martínez Regueira. This document constitutes the Final Degree Project with which Paula Garrido Moreno and Estela Martínez Regueira are applying for the Degree in Chemical Engineering, awarded by the Escola Técnica Superior de Enxeñaría of the Universidade de Santiago de Compostela.
Direction
Rodríguez Figueiras, Óscar (Tutorships)
Court
ROCA BORDELLO, ENRIQUE (Chairman)
BELLO BUGALLO, PASTORA MARIA (Secretary)
SOTO CAMPOS, ANA MARIA (Member)
Robust vision-language models for small objects
Authorship
N.G.S.D.V.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 11:00
Summary
This work focuses on the optimization of vision-language models (VLMs) applied to visual question answering (VQA) tasks on videos containing small objects, a scenario that presents significant computational challenges due to the large number of irrelevant visual tokens generated. To address this issue, a method is proposed based on the integration of a detector that identifies relevant visual regions in the video frames, allowing the filtering of background-associated visual tokens generated by the vision module (ViT) before being processed by the language model (LLM). Experimental results show that this filtering effectively eliminates a large proportion of visual tokens, leading to a notable reduction in the computational complexity of the language model and, consequently, a decrease in the overall system complexity, without compromising performance. Furthermore, an improvement in the LLM’s execution time is observed, contributing to greater efficiency in textual processing. However, the overall inference time is still influenced by the ViT, which remains the main bottleneck due to high-resolution image processing, as well as by the additional computational cost introduced by the detector. This work validates the use of filtering techniques as an effective strategy to improve the efficiency of VLMs and opens new lines of research aimed at optimizing visual processing and exploring lighter-weight detectors.
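The core filtering step can be sketched as follows (shapes, grid size, and names are illustrative assumptions, not the thesis implementation): build a keep-mask over ViT patch tokens from the detector boxes and drop background tokens before they reach the LLM:

```python
# Illustrative token filtering: keep only the ViT patch tokens whose patch
# overlaps a detector box, so background tokens never reach the language model.
import torch

def keep_mask(boxes, grid=24, image_size=336):
    """Boolean mask over grid*grid patch tokens covered by any box (xyxy pixels)."""
    patch = image_size / grid
    mask = torch.zeros(grid, grid, dtype=torch.bool)
    for x1, y1, x2, y2 in boxes:
        c1, r1 = int(x1 // patch), int(y1 // patch)
        c2, r2 = int(x2 // patch), int(y2 // patch)
        mask[r1 : r2 + 1, c1 : c2 + 1] = True
    return mask.flatten()

tokens = torch.randn(1, 24 * 24, 1024)             # [batch, patches, dim] from the ViT
boxes = [(50, 60, 90, 110), (200, 210, 230, 260)]  # detector output in pixels
mask = keep_mask(boxes)
filtered = tokens[:, mask, :]                      # only object-related tokens remain
print(tokens.shape, "->", filtered.shape)
```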
Direction
MUCIENTES MOLINA, MANUEL FELIPE (Tutorships)
CORES COSTA, DANIEL (Co-tutorships)
Court
LADRA GONZALEZ, MANUEL EULOGIO (Chairman)
LOPEZ FANDIÑO, JAVIER (Secretary)
VIDAL AGUIAR, JUAN CARLOS (Member)
Procedural Generation Techniques and Dynamic Environments Applied to Video Game Development in Godot
Authorship
P.G.B.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 12:30
Summary
This Bachelor's Thesis presents the development of a functional prototype of a 2D video game based on procedural generation. It also includes typical video game mechanics such as exploration, combat, and character progression in fast-paced gameplay sessions. The main objective was to design and build a system that enables the modular implementation of game mechanics, allowing for future expansions of the video game after the project’s completion. The implementation was carried out using the open-source game engine, Godot Engine. Object-oriented programming principles were applied, along with an agile methodology adapted to an individual project. Special attention was given to developing a custom procedural generation system, as well as other modules such as player movement, a combat system with basic enemy AI, a progression system, and the integration of a dynamic user interface. The result is a playable prototype that integrates all the mentioned mechanics. Additionally, version control tools, code validation, and technical documentation were incorporated. These features facilitate the maintenance of the product and reflect a project management approach aligned with software engineering principles.
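The project's generator is written for Godot (GDScript); purely as an illustration of one classic technique such a system might build on, a "drunkard's walk" sketch in Python that carves floor tiles out of a solid 2D grid:

```python
# Classic drunkard's-walk map carving: start from a solid grid of walls and
# let a random walker carve floor tiles, staying inside the border.
import random

def drunkard_walk(width=40, height=20, steps=400, seed=7):
    random.seed(seed)
    grid = [["#"] * width for _ in range(height)]
    x, y = width // 2, height // 2
    for _ in range(steps):
        grid[y][x] = "."  # carve floor
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 1), width - 2)   # keep the outer walls intact
        y = min(max(y + dy, 1), height - 2)
    return grid

for row in drunkard_walk():
    print("".join(row))
```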
Direction
VARELA PET, JOSE (Tutorships)
Court
VARELA PET, JOSE (Student’s tutor)
Design of a propanoic acid production plant from propanal
Authorship
B.I.V.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 12:05
Summary
The objective of this thesis is to design a production plant for propanoic acid from propanal. Propanoic acid is a short-chain carboxylic acid with extensive uses; its main applications include use as a food preservative and as a precursor compound in the pharmaceutical industry. This compound is produced through the aerobic oxidation of propanal in the presence of a catalyst, specifically manganese propionate, under mild temperature and pressure conditions. The oxidation will reach a conversion of 60% and will take place in a bubbling reactor. The unreacted propanal and the propanoic acid will be obtained as effluents from the reactor. The separation process will consist of a distillation column that separates the unreacted propanal from the propanoic acid. This equipment is selected for rigorous design, determining the number of trays using approximate and rigorous methods, and performing the hydraulic and mechanical design. Finally, the unreacted propanal will be recirculated to the reactor and the commercial-grade propanoic acid will be obtained from the bottoms stream.
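A back-of-the-envelope recycle balance makes the 60% single-pass conversion concrete (illustrative numbers under simplifying assumptions: steady state, complete separation of propanal in the column, and 1:1 stoichiometry; not the project's design basis):

```python
# Steady-state recycle balance around the reactor/column loop, assuming the
# column returns all unreacted propanal and the stated 60% single-pass conversion.
target_acid = 100.0        # kmol/h of propanoic acid leaving the column bottoms
conversion = 0.60          # single-pass conversion of propanal in the reactor

reactor_feed = target_acid / conversion      # propanal entering the reactor
recycle = reactor_feed * (1 - conversion)    # unreacted propanal sent back
fresh_feed = reactor_feed - recycle          # make-up propanal (= production rate)

print(f"reactor feed : {reactor_feed:6.1f} kmol/h propanal")   # 166.7
print(f"recycle      : {recycle:6.1f} kmol/h")                 #  66.7
print(f"fresh feed   : {fresh_feed:6.1f} kmol/h")              # 100.0
```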
Direction
RODIL RODRIGUEZ, EVA (Tutorships)
Court
ROCA BORDELLO, ENRIQUE (Chairman)
BELLO BUGALLO, PASTORA MARIA (Secretary)
SOTO CAMPOS, ANA MARIA (Member)
Introduction to time series models
Authorship
I.L.C.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 09:10
Summary
The study of time series is a fundamental tool in the analysis of data that evolve over time, allowing us to model and predict phenomena in fields such as economics, meteorology, engineering or social sciences. This Final Degree Thesis deals with the analysis of time series from a classical statistical perspective, considering that the observed series are realisations of stochastic processes. Autoregressive (AR), moving average (MA), mixed (ARMA), integrated (ARIMA) and seasonal (SARIMA) models are presented in detail, explaining their mathematical formulation, parameter estimation methods and associated forecasting techniques. A methodology for time series analysis is also proposed, which will be used in the analysis of two real time series. In addition, a simulation study is included to illustrate and evaluate the estimation and forecasting processes of the models.
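As a small illustration of the estimation-and-forecasting workflow described (simulated data; the statsmodels usage is an assumption about tooling, not the thesis code):

```python
# Simulate an AR(1) process X_t = 0.7 X_{t-1} + eps_t, then fit an
# ARIMA(1,0,0) model and forecast ten steps ahead.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

n, phi = 300, 0.7
x = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

fit = ARIMA(x, order=(1, 0, 0)).fit()
print(fit.params)             # estimated constant, phi (close to 0.7), noise variance
print(fit.forecast(steps=10)) # point forecasts decaying toward the mean
```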
Direction
CRUJEIRAS CASAIS, ROSA MARÍA (Tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Exploitation of Large Language Models for Automatic Annotation of Credibility
Authorship
P.L.P.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 11:30
Summary
This thesis focuses on the application of state-of-the-art language models, such as GPT-4 and Llama 3, to the labeling of health-related documents in the context of the TREC project. The main objective is to evaluate the possibility of replacing annotations made by human experts with labels generated by LLMs. The field of Information Retrieval requires large labeled datasets, which are created by expert human annotators, a process that is costly in terms of both time and money. If it can be proven that these human annotations can be replaced by automatically generated ones, this would represent a major advance in the generation of high-quality datasets. In this work, we label the same web documents that were labeled by humans; this allows us to analyze discrepancies between human labels and those generated by the models. We also studied the effect that the instructions given to the language model have on the accuracy of the labeling. We based our methodology on a publication by Microsoft researchers in which the relevance of each document is labeled. The results obtained by Thomas et al. (2024) were very satisfactory and were implemented in Bing due to their improvement in time, cost, and quality compared to crowdsourced labelers. Our results represent an advance over this previous publication, as we carry out labeling of more complex features such as medical correctness and credibility. The results obtained in our work were in some cases very similar to those of Paul Thomas et al., so we consider them positive enough to replace human labels.
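The zero-shot labeling setup can be sketched roughly as follows (the prompt wording, rating scale, and model name are assumptions; the thesis follows the protocol of the Microsoft study):

```python
# Hedged sketch of zero-shot credibility labeling with an LLM via the OpenAI
# client; the prompt and 0-2 scale here are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are a careful medical fact-checker. Rate the credibility of the "
    "following web document for the given health question on a 0-2 scale "
    "(0 = not credible, 1 = partially credible, 2 = credible). "
    "Answer with the number only.\n\nQuestion: {query}\n\nDocument:\n{doc}"
)

def label_credibility(query: str, doc: str) -> int:
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[{"role": "user", "content": PROMPT.format(query=query, doc=doc)}],
    )
    return int(response.choices[0].message.content.strip())

print(label_credibility("does vitamin C cure colds?", "Example document text..."))
```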
Direction
Losada Carril, David Enrique (Tutorships)
FERNANDEZ PICHEL, MARCOS (Co-tutorships)
Court
LADRA GONZALEZ, MANUEL EULOGIO (Chairman)
LOPEZ FANDIÑO, JAVIER (Secretary)
VIDAL AGUIAR, JUAN CARLOS (Member)
HVQ-Transformer for anomaly detection in multidimensional remote sensing images
Authorship
M.L.F.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 10:00
Summary
Remote sensing is the technique of obtaining information about the Earth's surface without direct contact. For these studies, multispectral and hyperspectral images are used, which provide a detailed representation of the terrain by capturing information across multiple bands of the electromagnetic spectrum. This spectral richness allows for highly precise analysis of the surface, making these images a key tool for anomaly detection. In recent years, the use of deep learning models has gained relevance in anomaly detection in this type of imagery due to their ability to extract complex representations and detect subtle patterns in high-dimensional data. Unlike traditional methods, deep neural networks can learn discriminative features directly from the data, making them more robust to noise and variations. This is particularly useful in real-world settings where anomalies do not always follow fixed or predictable patterns, as is common in multispectral and hyperspectral images. This work focuses on the implementation of a new version of an anomaly detection model that originally worked with 3-band images and performed image-level classification, adapting it to work with multispectral images at the pixel level. A state-of-the-art model called Hierarchical Vector Quantized Transformer for Multi-class Unsupervised Anomaly Detection (HVQ-TR) has been incorporated. This model is based on transformer architectures designed to work with RGB datasets, such as CIFAR-10, composed of three bands. Its main function is to identify anomalous images in an unsupervised manner, classifying those that deviate from the normal distribution of the dataset. The main contribution of this project has been adapting the transformer model workflow to operate on multispectral images. This required adjustments in data processing, input structure, class management, and output image generation. This approach enables the exploration of applying models originally designed for RGB images in multispectral contexts using 5-band images from the Oitavén River, thereby extending the reach of deep learning techniques in anomaly detection in remote sensing imagery. The results showed that the new model is capable of successfully detecting anomalies after a reduced number of training epochs, improving on the results of other state-of-the-art models such as RX.
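One standard way to realize the 3-band-to-5-band input adaptation (a sketch of the general trick with illustrative names; not necessarily how the HVQ-TR code is organized) is to rebuild the pretrained input convolution with extra channels initialized from the RGB filters:

```python
# Expand a pretrained 3-channel input conv to 5 bands, keeping the RGB filters
# and initializing the new bands with their mean.
import torch
import torch.nn as nn

def expand_input_conv(conv3: nn.Conv2d, in_channels: int = 5) -> nn.Conv2d:
    conv5 = nn.Conv2d(
        in_channels, conv3.out_channels,
        kernel_size=conv3.kernel_size, stride=conv3.stride,
        padding=conv3.padding, bias=conv3.bias is not None,
    )
    with torch.no_grad():
        mean_filter = conv3.weight.mean(dim=1, keepdim=True)   # [out, 1, k, k]
        conv5.weight[:, :3] = conv3.weight                     # keep RGB filters
        conv5.weight[:, 3:] = mean_filter.expand(-1, in_channels - 3, -1, -1)
        if conv3.bias is not None:
            conv5.bias.copy_(conv3.bias)
    return conv5

pretrained = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
adapted = expand_input_conv(pretrained)
print(adapted(torch.randn(1, 5, 224, 224)).shape)  # torch.Size([1, 64, 112, 112])
```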
Direction
LOPEZ FANDIÑO, JAVIER (Tutorships)
Argüello Pedreira, Francisco Santiago (Co-tutorships)
Court
PICHEL CAMPOS, JUAN CARLOS (Chairman)
VILA BLANCO, NICOLAS (Secretary)
COSTOYA RAMOS, MARIA CRISTINA (Member)
ArtAlchemy: Modernización de Arte con IA
Authorship
A.L.M.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 10:30
Summary
ArtAlchemy: Modernización de Arte con IA is a web application developed as a Bachelor's Thesis that democratizes access to advanced artificial intelligence technologies for artistic transformation. The platform offers three main functionalities: automatic background removal using RMBG-2.0 and BiRefNet models, facial animation of portraits utilizing the fal.ai API, and conversion of 2D images to 3D stereoscopy with the MiDaS model. The system implements a decoupled architecture with a frontend developed in Vue.js/Nuxt.js and a backend in Python with FastAPI, providing an intuitive interface that allows users without technical knowledge to apply complex machine learning techniques to artworks. The project has been validated through functional and usability testing, achieving a SUS score of 82.5 points, demonstrating the feasibility of integrating AI models into accessible web applications.
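A minimal sketch of the decoupled backend idea (the endpoint path and the placeholder helper are assumptions; the real service wires in the RMBG-2.0 and BiRefNet models behind a similar interface):

```python
# Illustrative FastAPI endpoint: accept an uploaded image and stream back the
# processed result; remove_background is a placeholder for the model call.
import io

from fastapi import FastAPI, UploadFile
from fastapi.responses import StreamingResponse
from PIL import Image

app = FastAPI()

def remove_background(image: Image.Image) -> Image.Image:
    """Placeholder for the actual background-removal model."""
    return image.convert("RGBA")

@app.post("/api/background-removal")
async def background_removal(file: UploadFile):
    image = Image.open(io.BytesIO(await file.read()))
    result = remove_background(image)
    buf = io.BytesIO()
    result.save(buf, format="PNG")
    buf.seek(0)
    return StreamingResponse(buf, media_type="image/png")
```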
Direction
VIDAL AGUIAR, JUAN CARLOS (Tutorships)
Gago Pérez, Alicia (Co-tutorships)
García Ullán, Pablo (Co-tutorships)
Court
PICHEL CAMPOS, JUAN CARLOS (Chairman)
VILA BLANCO, NICOLAS (Secretary)
COSTOYA RAMOS, MARIA CRISTINA (Member)
Cell detection and morphology categorization in pharmaceutical technology
Authorship
B.L.M.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 09:15
Summary
When cultivating cells, observation is an important component in deciding when to continue with the protocol. Cells change morphology constantly, and the operator has to evaluate their progress, which is done based on experience. The analysis of cell morphology also offers the opportunity to evaluate the progress of cell differentiation and to correlate cell shape with functionality. It is usually performed using optical microscopy, where the user differentiates the behavior of the cells based on variations in their elongation and shape. However, this process involves a high degree of subjectivity, and there is no automatic system capable of identifying, from an image, the differentiation state of the cells. This project addresses this situation by designing an architecture based on deep learning that allows for the automatic segmentation and classification of cells.
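As an illustrative baseline only (the thesis designs its own architecture), an off-the-shelf instance-segmentation model applied to a microscopy frame, from which shape metrics such as area or elongation could then be derived:

```python
# Baseline sketch: run a pretrained Mask R-CNN on a (stand-in) microscopy
# frame and keep confident detections; per-cell masks enable shape metrics.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 512, 512)          # stand-in for a microscopy image
with torch.no_grad():
    out = model([frame])[0]              # dict with boxes, labels, scores, masks

keep = out["scores"] > 0.5
print(f"{keep.sum().item()} candidate cells")
for box, mask in zip(out["boxes"][keep], out["masks"][keep]):
    area = (mask[0] > 0.5).sum().item()  # mask is a [1, H, W] soft mask
    print(box.tolist(), "area:", area)
```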
Direction
Pardo López, Xosé Manuel (Tutorships)
FERNANDEZ VIDAL, XOSE RAMON (Co-tutorships)
DIAZ RODRIGUEZ, PATRICIA (Co-tutorships)
Court
TABOADA IGLESIAS, MARÍA JESÚS (Chairman)
SEOANE IGLESIAS, NATALIA (Secretary)
RODRIGUEZ CASAL, ALBERTO (Member)
Tool for Automating the Build and Test Process of Native Python Libraries Based on Cross-Compilation and Emulation Technologies
Authorship
A.L.O.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 09:45
Summary
This Bachelor's Thesis focuses on the study of cross-compilation of Python libraries that include C and C++ code, as well as the development of environments that enable their compilation for various architectures and operating systems. The main goal is to provide a tool that allows users to compile and test such libraries through a transparent and simplified process. During the development of the project, cross-compilation and emulation techniques were applied to ensure the correct build and validation of the libraries in different environments. The results obtained demonstrate the feasibility of automating the compilation and testing of Python libraries with native C and C++ components, significantly improving their portability and compatibility across multiple platforms. This represents a major step forward in the efficiency of developing and deploying these libraries, offering users an integrated, accessible, and transparent solution.
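One way to sketch the automation idea (the image name, path, and wheel command are illustrative; the thesis builds a more complete tool) is to drive an emulated build through Docker's --platform support, which falls back on QEMU/binfmt for non-native targets:

```python
# Illustrative build driver: run the wheel build inside a container for an
# emulated target architecture via `docker run --platform`.
import subprocess

def build_wheel(project_dir: str, platform: str = "linux/arm64") -> None:
    subprocess.run(
        [
            "docker", "run", "--rm",
            "--platform", platform,     # emulated via qemu/binfmt if non-native
            "-v", f"{project_dir}:/src",
            "-w", "/src",
            "python:3.12",              # assumed base image with build tools
            "pip", "wheel", ".", "-w", "dist/",
        ],
        check=True,
    )

build_wheel("/path/to/native-library")  # the test suite can be run the same way
```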
Direction
PICHEL CAMPOS, JUAN CARLOS (Tutorships)
PIÑEIRO POMAR, CESAR ALFREDO (Co-tutorships)
Court
TABOADA IGLESIAS, MARÍA JESÚS (Chairman)
SEOANE IGLESIAS, NATALIA (Secretary)
RODRIGUEZ CASAL, ALBERTO (Member)
VAKaloura: development of a VAK learning style predictor based on text analysis
Authorship
M.L.P.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 11:00
Summary
This Bachelor's Degree Final Project explores the possibility of automatically predicting VAK learning styles (Visual, Auditory, and Kinesthetic) through the analysis of free-form text. The main motivation stems from the need to move towards more personalized and efficient educational models, where understanding each person's cognitive style allows for better adaptation of content and teaching methodologies. To address this challenge, a web application was designed and implemented to collect texts written by participants about their hobbies, along with a questionnaire to identify their learning style, validated by the expert psychologist Jaume Romero. The resulting corpus includes 116 entries, with strong representation from the children's group thanks to the collaboration of the public school CPI Curros Enríquez. Four prediction approaches were tested: traditional text classification models (Naive Bayes, SVM, and Logistic Regression), a fine-tuned BERT neural model, zero-shot prompting with GPT-4o, and the evaluation of an expert educational psychologist as a benchmark. The experiments were conducted using multilabel classification, considering that a single person may exhibit more than one learning style. The results show that the traditional models clearly outperformed the baseline classifiers (random and majority) and, in many cases, even the human expert. Furthermore, even under constrained conditions (short texts, data in Galician/Spanish, and class imbalance), they were able to identify useful patterns. The GPT-4o LLM was also able to detect the visual style with some success without specific training, reinforcing the idea that free-form text contains relevant linguistic signals for this task. This work represents a solid first step towards the automatic prediction of learning styles from free-form text. Despite its limitations, the results open the door to the development of accessible and adaptable systems that contribute to more personalized and efficient education, especially in the context of minoritized languages and limited resources.
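To make the multilabel setup concrete, here is a minimal sketch (the texts and labels below are invented placeholders, not the thesis corpus) of a traditional pipeline in which each VAK style gets its own binary classifier:

    # Minimal multilabel sketch with scikit-learn; data below is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    texts = ["I love drawing and maps", "I learn songs by ear while I run"]
    styles = [["visual"], ["auditory", "kinesthetic"]]

    mlb = MultiLabelBinarizer(classes=["visual", "auditory", "kinesthetic"])
    Y = mlb.fit_transform(styles)            # one indicator column per style

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )
    clf.fit(texts, Y)
    print(mlb.inverse_transform(clf.predict(["I remember diagrams easily"])))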
Direction
Losada Carril, David Enrique (Tutorships)
ROMERO MIRO, JAUME (Co-tutorships)
Court
PICHEL CAMPOS, JUAN CARLOS (Chairman)
VILA BLANCO, NICOLAS (Secretary)
COSTOYA RAMOS, MARIA CRISTINA (Member)
Real-time processing of terrestrial LiDAR point clouds
Authorship
P.L.C.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 10:15
Summary
This work focuses on the development of a real-time processing pipeline for terrestrial LiDAR data, aimed at automotive applications. Specifically, a system has been designed and implemented that is capable of identifying, classifying, and tracking objects in real time using tools from the ROS2 environment. To perform the classification task, state-of-the-art Deep Learning models have been used. In particular, the models PointNet, PointNet++, PointPillars, and PointRCNN have been considered, all of them applied to the processing of 3D point clouds. Given the real-time focus of the system, a detailed performance study was carried out, identifying the inference process of the models as the main bottleneck for meeting this requirement. To address this limitation, an evaluation tool was developed to compare Deep Learning models based on a configurable set of metrics, including accuracy, inference time, and resource consumption. This evaluation made it possible to draw relevant conclusions about the trade-off between computational efficiency and result quality across different models. The experimental results, obtained using the KITTI dataset, show that PointPillars offers the best balance between computational performance and accuracy. PointRCNN stands out as the most accurate model, while PointNet shows the shortest inference times. In terms of resource usage, PointRCNN and PointPillars, despite their greater structural complexity, consume less memory per object than PointNet and PointNet++ during processing. Finally, it is concluded that all evaluated models require GPU acceleration in order to meet the time constraints defined by the real-time execution environment in the developed pipeline.
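Since model inference is identified as the bottleneck, a sketch of the kind of measurement such an evaluation tool performs may help (the model and input are stand-ins; this is not the project's actual harness):

    # Hypothetical latency benchmark for a point cloud model (PyTorch).
    import time
    import torch

    def mean_inference_ms(model: torch.nn.Module, points: torch.Tensor,
                          warmup: int = 5, runs: int = 50) -> float:
        """Average forward-pass time in ms over `runs` executions, after warm-up."""
        model.eval()
        with torch.no_grad():
            for _ in range(warmup):
                model(points)
            if points.is_cuda:
                torch.cuda.synchronize()     # flush queued GPU work before timing
            start = time.perf_counter()
            for _ in range(runs):
                model(points)
            if points.is_cuda:
                torch.cuda.synchronize()
        return (time.perf_counter() - start) / runs * 1000.0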
Direction
GARCIA LORENZO, OSCAR (Tutorships)
RODRIGUEZ ALCARAZ, SILVIA (Co-tutorships)
Court
TABOADA IGLESIAS, MARÍA JESÚS (Chairman)
SEOANE IGLESIAS, NATALIA (Secretary)
RODRIGUEZ CASAL, ALBERTO (Member)
Validation of reference data for high-resolution Earth observation images
Authorship
P.L.R.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 10:45
Summary
As the field of remote sensing advances, it is crucial to establish robust validation mechanisms for the reference information used in change detection. This work focuses on developing a methodology that allows for the construction of reliable reference information applied to high-resolution terrestrial observation images obtained in Galician river ecosystems, corresponding to two different time periods. The fundamental objective is to generate reference information for recent images from those captured previously, through a comprehensive analysis of the spectral and spatial changes detected. To this end, advanced classification and change detection techniques are selected and applied, combining spectral and spatial methodologies that allow for the identification of relevant alterations in the river landscape. In addition, validation criteria based on the reference information are defined to assess the reliability of the results obtained. The work also explores the use of classification schemes based on deep neural networks to improve the discrimination capacity of the proposed system. Finally, an experimental evaluation is conducted to ensure the robustness of the model, verifying its performance in different scenarios and operating conditions. This proposal contributes to the field of remote sensing applied to environmental management and monitoring, and lays the groundwork for future developments aimed at automating validation processes using images taken at different points in time.
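As a rough illustration of a spectral change-detection step (the thesis combines far richer spectral and spatial methods; the threshold rule below is an assumption), change can be flagged where the per-pixel spectral difference between the two dates is unusually large:

    # Simplistic change-vector sketch over two co-registered (H, W, bands) images.
    import numpy as np

    def change_mask(img_t0: np.ndarray, img_t1: np.ndarray, k: float = 2.0) -> np.ndarray:
        """Flag pixels whose spectral change magnitude exceeds mean + k*std."""
        diff = img_t1.astype(np.float64) - img_t0.astype(np.float64)
        magnitude = np.linalg.norm(diff, axis=-1)   # norm of the change vector
        return magnitude > magnitude.mean() + k * magnitude.std()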
Direction
SUAREZ GAREA, JORGE ALBERTO (Tutorships)
Blanco Heras, Dora (Co-tutorships)
Court
TABOADA IGLESIAS, MARÍA JESÚS (Chairman)
SEOANE IGLESIAS, NATALIA (Secretary)
RODRIGUEZ CASAL, ALBERTO (Member)
LABS type detergent production plant
Authorship
N.L.G.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 09:30
Summary
Design of a LABS-type detergent production plant from benzene and dodecene with a production capacity of 120,000 tonnes per year. The process involves three reactions: alkylation, sulfonation and neutralization. In addition, preconditioning upstream of the reactors and the corresponding separation operations are carried out. The alkylation reactor is designed by Lucía Pardilla Fraguas, the second distillation column by Noelia López García, and the sulfonation reactor by Antía Gándara Vieito.
Direction
SOTO CAMPOS, ANA MARIA (Tutorships)
Court
MOSQUERA CORRAL, ANUSKA (Chairman)
SOUTO GONZALEZ, JOSE ANTONIO (Secretary)
RODIL RODRIGUEZ, EVA (Member)
Application of machine learning techniques in the field of biomedicine
Authorship
P.L.O.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 09:30
Summary
This Final Degree Project aims, within the specific context of binary classification, to analyze, adapt, and improve the MMGIN model, originally proposed for multitask molecular toxicity prediction. To optimize the model, several architectural modifications were introduced: the replacement of the traditional neural network with a Variational Autoencoder (VAE); the incorporation of bond features through the GINE architecture; the application of global attention mechanisms with Transformer blocks and GCN layers; and the introduction of an adaptive fusion system via Gate Fusion. Finally, a stratified K-Fold cross-validation strategy was employed to robustly evaluate the performance of each variant. The results show improved performance when combining these techniques, outperforming the original model. The architecture based on Transformer+GCN stands out for its ability to capture non-local relationships in the molecular structure. The work concludes with a discussion of the implications of these findings and suggests future lines of research in computational toxicology.
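The stratified K-Fold protocol mentioned above can be sketched as follows (the features, labels and model constructor are placeholders, not the adapted MMGIN code):

    # Generic stratified K-Fold evaluation sketch for a binary classifier.
    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import StratifiedKFold

    def cross_validate(make_model, X: np.ndarray, y: np.ndarray, k: int = 5) -> float:
        """Mean ROC-AUC over stratified folds (class balance kept per fold)."""
        skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
        scores = []
        for train_idx, test_idx in skf.split(X, y):
            model = make_model()             # fresh model per fold
            model.fit(X[train_idx], y[train_idx])
            proba = model.predict_proba(X[test_idx])[:, 1]
            scores.append(roc_auc_score(y[test_idx], proba))
        return float(np.mean(scores))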
Direction
LAMA PENIN, MANUEL (Tutorships)
TABOADA ANTELO, PABLO (Co-tutorships)
Suárez Barro, Noel (Co-tutorships)
Court
BUGARIN DIZ, ALBERTO JOSE (Chairman)
Cotos Yáñez, José Manuel (Secretary)
ALONSO TARRIO, LEOVIGILDO (Member)
Automatic Extraction of Constraints for Knowledge Graphs using Large Language Models
Authorship
A.M.B.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 09:15
Summary
Knowledge graphs and the Semantic Web enable the representation of structured information in an interoperable manner, linking entities through meaningful relationships and facilitating automatic reasoning. These systems rely on technologies such as RDF (Resource Description Framework) and on ontologies that define the semantics of the data. To ensure the consistency and quality of the represented data, it is essential to apply validation mechanisms, especially in regulatory or technical contexts. One of the most widely used languages for this task is SHACL (Shapes Constraint Language), which allows defining constraints on RDF data. However, in many real-world scenarios, such as the regulatory domain, constraints are not explicitly formulated but rather written in natural language. This presents a major challenge: converting these descriptions into SHACL constraints requires a manual process carried out by experts, which is slow, error-prone, and difficult to scale. To address this problem, this Bachelor's Thesis proposes a solution based on multi-agent architectures combined with LLMs guided by advanced prompting techniques. The proposal also leverages structured information, such as ontologies and retrieval-augmented generation (RAG) systems, to enrich the semantic context of the model. Throughout the work, various prompting strategies and model configurations have been explored. The evaluation was carried out systematically on a benchmark set composed of constraints manually written by experts. The results show that the proposed approach achieves very high coverage, even surpassing previous state-of-the-art solutions. Furthermore, the tool is capable of detecting both simple and complex constraints. However, this high coverage comes with low precision, which underscores the need for a collaborative approach in which the human expert validates or corrects the generated constraints. This line of work opens up new possibilities to streamline the transformation of regulatory knowledge into formal validation schemas, making the work of knowledge engineers more efficient and advancing toward the partial automation of key tasks in the Semantic Web domain.
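To illustrate the target of the translation task (the prompt wording and the example shape below are assumptions, not the thesis's multi-agent pipeline), a natural-language rule can be paired with its SHACL form inside a one-shot prompt:

    # Hypothetical prompt-construction sketch for NL-to-SHACL translation.
    EXAMPLE = '''\
    Rule: "Every building permit must reference exactly one parcel."
    SHACL:
    ex:PermitShape a sh:NodeShape ;
        sh:targetClass ex:BuildingPermit ;
        sh:property [ sh:path ex:parcel ; sh:minCount 1 ; sh:maxCount 1 ] .
    '''

    def build_prompt(rule_text: str, ontology_excerpt: str) -> str:
        """One-shot prompt: ontology context, a worked example, then the new rule."""
        return (
            "You translate regulatory rules into SHACL shapes.\n"
            f"Relevant ontology terms:\n{ontology_excerpt}\n\n"
            f"Example:\n{EXAMPLE}\n"
            f'Rule: "{rule_text}"\nSHACL:'
        )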
Direction
CHAVES FRAGA, DAVID (Tutorships)
BUGARIN DIZ, ALBERTO JOSE (Co-tutorships)
LAMA PENIN, MANUEL (Co-tutorships)
Court
CATALA BOLOS, ALEJANDRO (Chairman)
Triñanes Fernández, Joaquín Ángel (Secretary)
GONZALEZ DIAZ, JULIO (Member)
Butadiene production plant from the catalytic dehydrogenation of n-butane
Authorship
C.M.F.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 11:10
Summary
The present project aims to design, at the basic engineering level, a production plant with a capacity of 50,000 tons per year of 1,3-butadiene with a purity of 99.99%, based on the catalytic dehydrogenation of n-butane, operating continuously 24 hours a day for 330 days a year, reserving the remaining days for plant maintenance. A detailed design of the oxidative dehydrogenation reactor for 1-butene (R-202), carried out by the student Alejandra Novo Íñigo, of the tray distillation column (T-304) by Andrea Yáñez Rey, and of the absorption tower (T-301) by Celia Martínez Fernández is included. This project is presented as the Bachelor's Thesis of the students Alejandra Novo Íñigo, Celia Martínez Fernández, and Andrea Yáñez Rey, with the aim of obtaining the degree in Chemical Engineering awarded by the University of Santiago de Compostela.
Direction
RODIL RODRIGUEZ, EVA (Tutorships)
Court
Omil Prieto, Francisco (Chairman)
Rodríguez Figueiras, Óscar (Secretary)
VAL DEL RIO, MARIA ANGELES (Member)
Ethyl butyrate production plant based on the esterification of butyric acid with ethanol
Authorship
E.M.R.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 11:25
Summary
The objective of this project is to develop the design of an industrial plant dedicated to the production of ethyl butyrate through the esterification reaction between butyric acid and ethanol. The facility will have a production capacity of 5,000 tons per year of ethyl butyrate with a purity of 99.62% by weight and will operate continuously for 330 days a year, 24 hours a day. The main equipment includes the multitubular reactor R-201, designed by Paula Garrido Moreno, and the distillation column T-302, designed by Estela Martínez Regueira. This document constitutes the Final Degree Project with which Paula Garrido Moreno and Estela Martínez Regueira are applying for the Degree in Chemical Engineering, awarded by the Escola Técnica Superior de Enxeñaría of the Universidade de Santiago de Compostela.
Direction
Rodríguez Figueiras, Óscar (Tutorships)
Court
ROCA BORDELLO, ENRIQUE (Chairman)
BELLO BUGALLO, PASTORA MARIA (Secretary)
SOTO CAMPOS, ANA MARIA (Member)
epsilon-caprolactam production plant
Authorship
I.M.H.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 12:35
Summary
The epsilon-caprolactam (CPL) production plant is the project carried out by Jose Otero López and Iago Meijide Hidalgo in order to obtain the Bachelor's Degree in Chemical Engineering. The plant obtains caprolactam in two stages: first, cyclohexanone oxime is formed via the ammoximation of cyclohexanone with hydrogen peroxide and ammonia in a packed bed shell-and-tube reactor; then, in a Beckmann rearrangement reactor, also of packed bed shell-and-tube design, the cyclohexanone oxime molecule is rearranged to produce caprolactam. Finally, for industrial-scale production, the product is purified in a distillation column operating under reduced pressure, reaching a mass purity of 99.99%. It is worth noting that the rigorous design carried out by Iago Meijide corresponds to the cyclohexanone ammoximation reactor, while the unit designed by Jose Otero is the vacuum distillation column for caprolactam purification. Finally, this project establishes that the plant will operate under continuous regime for 330 days per year, 24 hours a day, leaving 35 days for maintenance and downtime. In this way, the required production capacity of 60,000 tons per year of caprolactam can be achieved.
Direction
RODIL RODRIGUEZ, EVA (Tutorships)
Court
ROCA BORDELLO, ENRIQUE (Chairman)
BELLO BUGALLO, PASTORA MARIA (Secretary)
SOTO CAMPOS, ANA MARIA (Member)
Solar grade silicon production plant
Authorship
N.M.R.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 10:20
Summary
The transition to renewable energy sources has increased the demand for solar energy. Silicon is a fundamental material in the manufacture of photovoltaic cells, whose efficiency depends on high purity. This project aims to design a solar-grade silicon production plant starting from metallurgical-grade silicon. The plant comprises three sections: the synthesis section, where trichlorosilane is produced; the separation and purification section, where silane is obtained; and the deposition section, from which silicon of the desired purity is obtained. The unit to be rigorously designed is the deposition reactor, where the silane decomposes in the presence of hydrogen and the solar-grade silicon is deposited.
Direction
González Álvarez, Julia (Tutorships)
FREIRE LEIRA, MARIA SONIA (Co-tutorships)
Court
MOSQUERA CORRAL, ANUSKA (Chairman)
SOUTO GONZALEZ, JOSE ANTONIO (Secretary)
RODIL RODRIGUEZ, EVA (Member)
Performance Analysis of Neighbor Search in 3D Point Clouds Structured in Octrees
Authorship
N.N.P.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 10:00
Summary
In the field of LiDAR point cloud processing, efficient neighbor search is one of the most computationally expensive tasks. To minimize its impact on overall performance, specialized data structures such as octrees are used, which take advantage of the spatial locality present in the points. To reinforce and guarantee this locality, advanced techniques such as Morton and Hilbert encoding are employed. Both transform three-dimensional spatial coordinates into one-dimensional indexes that preserve spatial locality as far as possible. However, this approach introduces a potential efficiency bottleneck: the sorting of the generated spatial indexes. This step can be crucial for efficient data access and for reducing search times. Given the importance of neighbor searches, the present work carries out an exhaustive analysis of the sorting process of these indexes, especially in parallel execution environments using OpenMP, taking advantage of the processing capabilities offered by modern architectures. The approach consisted of selecting and evaluating different sorting algorithms: versions of Selection sort, Quicksort, Parallel Quicksort, Radix sort and Bucket sort were implemented and compared, each with its respective characteristics and optimization strategies. The results enable a solid comparison among the different approaches and determine, depending on the circumstances, which method offers the best performance in terms of execution time and scalability, with improvements observed in several cases over the original methods. In this way, the index sorting process is optimized, representing a valuable alternative to consider in neighbor searches within 3D point cloud processing.
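Morton encoding itself is a short bit-interleaving routine; a minimal sketch (quantisation to 21 bits per axis is an assumption) shows why sorting the resulting keys groups spatial neighbours together:

    # 3D Morton (Z-order) key: interleave the bits of x, y, z.
    def part1by2(n: int) -> int:
        """Spread the low 21 bits of n so consecutive bits end up 3 apart."""
        n &= 0x1FFFFF
        n = (n | (n << 32)) & 0x1F00000000FFFF
        n = (n | (n << 16)) & 0x1F0000FF0000FF
        n = (n | (n << 8))  & 0x100F00F00F00F00F
        n = (n | (n << 4))  & 0x10C30C30C30C30C3
        n = (n | (n << 2))  & 0x1249249249249249
        return n

    def morton3d(x: int, y: int, z: int) -> int:
        """Points that are close in 3D tend to get close 1D keys."""
        return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

    # Sorting quantised points by their Morton key is the step whose cost
    # the work analyses:  points.sort(key=lambda p: morton3d(*p))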
Direction
Fernández Rivera, Francisco (Tutorships)
YERMO GARCIA, MIGUEL (Co-tutorships)
Court
BUGARIN DIZ, ALBERTO JOSE (Chairman)
Cotos Yáñez, José Manuel (Secretary)
ALONSO TARRIO, LEOVIGILDO (Member)
Butadiene production plant from the catalytic dehydrogenation of n-butane
Authorship
A.N.I.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 11:10
Summary
The objective of this project is to design, at a basic engineering level, a production plant with a capacity of 50,000 tonnes per year of 1,3-butadiene with a purity of 99.99%, obtained from the catalytic dehydrogenation of n-butane. The plant is designed to operate continuously, 24 hours a day, for 330 days a year, with the remaining days reserved for plant maintenance. The project includes a detailed design of the oxidative dehydrogenation reactor for 1-butene (R-202), carried out by the student Alejandra Novo Iñigo; the distillation column (T-304) designed by Andrea Yáñez Rey; and the absorption tower (T-301) designed by Celia Martínez Fernández. This project is submitted as the final degree project of the students Alejandra Novo Iñigo, Celia Martínez Fernández, and Andrea Yáñez Rey, with the aim of obtaining the Degree in Chemical Engineering from the University of Santiago de Compostela.
Direction
RODIL RODRIGUEZ, EVA (Tutorships)
Court
Omil Prieto, Francisco (Chairman)
Rodríguez Figueiras, Óscar (Secretary)
VAL DEL RIO, MARIA ANGELES (Member)
Benchmark for ontology learning from relational databases
Authorship
I.N.V.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 09:45
Summary
Ontologies are especially relevant in the fields of data integration, semantic interoperability, and information retrieval, among others. The construction of an ontology requires extensive manual work, which is why the automatic creation of ontologies (Ontology Learning) is used. This approach leverages machine learning techniques, natural language processing, and, more recently, the use of large language models (LLMs) to generate ontologies from texts, documents, or, in this case, relational databases. It is essential to have standardized methodologies that allow for an objective and reproducible evaluation of the quality of the ontologies produced by different systems. This work proposes a specific evaluation framework for Ontology Learning from relational database schemas, consisting of a set of reference resources (gold standard) and a set of formal metrics for the structural and semantic comparison of ontologies. This framework enables systematic comparisons between different ontology learning tools and approaches, providing a solid foundation for experimentation and progress in this field. In addition, an experiment was carried out on large language models (LLMs) to validate the usefulness and applicability of the proposed evaluation framework. The code is publicly available at RDB2OWL-Bench.
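One family of metrics such a benchmark can use is set overlap between the generated ontology and the gold standard; a minimal sketch follows (treating each ontology as a set of assertions; the benchmark's actual metrics are richer):

    # Structural precision/recall/F1 between predicted and gold ontology assertions.
    def precision_recall_f1(predicted: set, gold: set) -> tuple[float, float, float]:
        """Overlap of (subject, predicate, object) assertions."""
        tp = len(predicted & gold)
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1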
Direction
CHAVES FRAGA, DAVID (Tutorships)
BUGARIN DIZ, ALBERTO JOSE (Co-tutorships)
LAMA PENIN, MANUEL (Co-tutorships)
Court
CATALA BOLOS, ALEJANDRO (Chairman)
Triñanes Fernández, Joaquín Ángel (Secretary)
GONZALEZ DIAZ, JULIO (Member)
Methanol to acetic acid production plant
Authorship
E.O.M.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 09:30
Summary
Obtaining 70,000 t/year of acetic acid from methanol and carbon monoxide by means of the methanol carbonylation process. This organic compound has a wide variety of uses, from the production of cosmetic and pharmaceutical products to the food, textile and chemical industries, which makes it a product of great industrial interest. In this project, Martín Álvarez will carry out the rigorous design of the reactor, in charge of the methanol carbonylation process to obtain acetic acid, and Elena Ojea of the first column, in charge of separating acetic acid and water from the other components of the reaction medium.
Direction
FREIRE LEIRA, MARIA SONIA (Tutorships)
González Álvarez, Julia (Co-tutorships)
Court
VIDAL TATO, MARIA ISABEL (Chairman)
MAURICIO IGLESIAS, MIGUEL (Secretary)
RODRIGUEZ MARTINEZ, HECTOR (Member)
Biodiesel plant from vegetable oils by transesterification
Authorship
P.O.F.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 10:50
Summary
This undergraduate dissertation presents the design of an industrial-scale biodiesel production plant using refined soybean oil as feedstock through a homogeneous base-catalyzed transesterification process. With an annual capacity of 100,000 metric tons, the plant operates continuously and prioritizes sustainability, technical feasibility, and alignment with European decarbonization goals. The project includes a detailed design of the main reactor (R-201), selection of sodium methoxide as the optimal catalyst, and comprehensive analyses covering mass and energy balances, market dynamics, economic viability, site selection, safety protocols, and environmental compliance.
Direction
RODRIGUEZ MARTINEZ, HECTOR (Tutorships)
Court
MOSQUERA CORRAL, ANUSKA (Chairman)
SOUTO GONZALEZ, JOSE ANTONIO (Secretary)
RODIL RODRIGUEZ, EVA (Member)
epsilon-caprolactam production plant
Authorship
J.O.L.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 12:35
Summary
The epsilon-caprolactam (CPL) production plant is the project carried out by Jose Otero López and Iago Meijide Hidalgo in order to obtain the Bachelor's Degree in Chemical Engineering. This production plant obtains caprolactam through, in a first stage, the formation of cyclohexanone oxime via the ammoximation of cyclohexanone with hydrogen peroxide and ammonia, in a packed bed shell-and-tube reactor. Subsequently, in a second stage, using a Beckmann rearrangement reactor, also a packed bed with shell-and-tube design, the cyclohexanone oxime molecule is rearranged to produce caprolactam. Finally, for industrial-scale production, it is necessary to purify the product using a distillation column operating under reduced pressure, thus obtaining a mass purity of 99.99%. It is worth noting that the rigorous design carried out by Iago Meijide corresponds to the cyclohexanone ammoximation reactor, while the unit designed by Jose Otero is the vacuum distillation column for caprolactam purification. Finally, this project establishes that the plant will operate under continuous regime for 330 days per year, 24 hours a day, leaving 35 days for maintenance and downtime. In this way, the required production capacity of 60,000 tons per year of caprolactam can be achieved.
Direction
RODIL RODRIGUEZ, EVA (Tutorships)
Court
ROCA BORDELLO, ENRIQUE (Chairman)
BELLO BUGALLO, PASTORA MARIA (Secretary)
SOTO CAMPOS, ANA MARIA (Member)
LABS type detergent production plant
Authorship
L.P.F.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 09:30
Summary
Design of a LABS-type detergent production plant from benzene and dodecene with a production capacity of 120,000 tonnes per year. The process involves three reactions: alkylation, sulfonation and neutralization. In addition, preconditioning upstream of the reactors and the corresponding separation operations are carried out. The alkylation reactor is designed by Lucía Pardilla Fraguas, the second distillation column by Noelia López García, and the sulfonation reactor by Antía Gándara Vieito.
Direction
SOTO CAMPOS, ANA MARIA (Tutorships)
Court
MOSQUERA CORRAL, ANUSKA (Chairman)
SOUTO GONZALEZ, JOSE ANTONIO (Secretary)
RODIL RODRIGUEZ, EVA (Member)
Nolik
Authorship
K.P.G.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 10:30
Summary
Knowledge management in technical support services is frequently hindered by the inefficiency of traditional search systems, which, by relying on keywords, fail to understand the context and semantic meaning of the problems. This Bachelor's Thesis addresses this challenge through the design and implementation of a complete software system that applies Artificial Intelligence techniques to offer an advanced semantic search solution. The solution is structured as a web application with a decoupled architecture, consisting of an interactive frontend built with React and a backend API (Application Programming Interface) developed with Flask. One of the main contributions of this work is the development of an automated and dynamic data pipeline that extracts, processes, and structures issues from the public microsoft/PowerToys repository. This pipeline implements a multi-stage solution extraction heuristic that interprets community conventions, such as duplicate detection and bot suggestions, to identify the most reliable responses. At the core of the system is a search engine that uses language models from the sentence-transformers library to convert problem descriptions into feature vector representations (known as embeddings). These vectors are indexed in a FAISS (Facebook AI Similarity Search) database to perform high-speed cosine similarity searches. Additionally, a rigorous comparative study has been carried out to evaluate different embedding models and select the most suitable one for this domain. The final result is a functional and robust prototype that, given a natural language query, returns a ranked list of resolved and relevant issues, with links to the original source. The system validates the superiority of the semantic approach and presents a comprehensive and scalable solution to the problem of efficient information retrieval in technical support environments.
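The indexing/search core described above can be sketched in a few lines (the model name and sample issues are placeholders; the thesis compares several embedding models before choosing one):

    # Minimal semantic-search sketch: sentence-transformers + FAISS.
    import faiss
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed model
    issues = ["App crashes when remapping keys", "Color picker ignores HDR"]

    # L2-normalised embeddings + inner-product index = cosine similarity.
    emb = model.encode(issues, normalize_embeddings=True)
    index = faiss.IndexFlatIP(emb.shape[1])
    index.add(emb)

    query = model.encode(["keyboard shortcuts stop working"],
                         normalize_embeddings=True)
    scores, ids = index.search(query, 2)
    print([(issues[i], float(s)) for i, s in zip(ids[0], scores[0])])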
Direction
TABOADA IGLESIAS, MARÍA JESÚS (Tutorships)
Delás Ruíz, José Carlos (Co-tutorships)
Court
BUGARIN DIZ, ALBERTO JOSE (Chairman)
Cotos Yáñez, José Manuel (Secretary)
ALONSO TARRIO, LEOVIGILDO (Member)
TimeWise: an application for time management and personal task organization
Authorship
S.P.S.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 11:00
Summary
The purpose of this project is to develop an Android mobile application that enables its users to organize their time, manage their personal tasks, and define their long-term goals. To achieve these aims, several key features were implemented. First, an interactive calendar was created to display users' daily events and to set reminders for those events. Next, a Kanban-style board was developed for visualizing and managing tasks. Finally, functionality was added to support the achievement of long-term milestones, with the ability to break them down into more specific objectives. Throughout the entire development process, the user experience was kept in mind, designing the app to be as intuitive and engaging as possible. Once the implementation phase was complete, the application's functionality was validated and verified through a testing process that identified and corrected errors. In conclusion, all of the project's initial objectives have been met, resulting in an efficient, useful, and attractive application.
Direction
TOBAR QUINTANAR, ALEJANDRO JOSE (Tutorships)
Court
BUGARIN DIZ, ALBERTO JOSE (Chairman)
Cotos Yáñez, José Manuel (Secretary)
ALONSO TARRIO, LEOVIGILDO (Member)
Classification trees and optimisation
Authorship
I.Q.R.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 10:40
Summary
This paper studies classification trees, a technique widely used in machine learning due to its simplicity, predictive capacity and ease of interpretation. In particular, different strategies for their construction are analysed, with special attention to methods based on mathematical optimisation. Three representative approaches are considered: Random Forests (RF), as a heuristic model based on tree ensembles; Optimal Classification Trees (OCT), which formulates the problem as a mixed-integer linear optimisation; and Optimal Randomized Classification Trees (ORCT), which uses a continuous formulation that improves scalability while maintaining interpretability. The paper begins with a review of the fundamentals of statistical classification and decision tree-based methods. This is followed by a detailed description of optimisation models that allow the construction of optimal trees. Finally, a comparative empirical study is performed using five datasets of varying complexity, evaluating each model for accuracy, training time, interpretability and practical feasibility. The results show that RF offers high performance at low computational cost, while ORCT strikes a balance between accuracy and scalability. In contrast, OCT, while theoretically attractive, has computational limitations that restrict its use to smaller-scale problems.
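For a concrete starting point, the heuristic baseline in this comparison can be reproduced in a few lines with scikit-learn; the dataset and settings below are illustrative assumptions, not the five datasets used in the study:

```python
# Illustrative baseline in the spirit of the empirical study: a Random Forest
# vs. a single depth-limited tree, compared by cross-validated accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

for name, clf in [("Tree (depth 3)", DecisionTreeClassifier(max_depth=3, random_state=0)),
                  ("Random Forest", RandomForestClassifier(n_estimators=100, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```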
Direction
GONZALEZ DIAZ, JULIO (Tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Automatic Text Analysis Technologies for Personality Trait Estimation
Authorship
I.Q.R.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 10:15
Summary
This work is framed within the field of Text-based Personality Computing (TPC), which seeks to estimate personality traits from texts written by users using natural language processing (NLP) techniques. Traditionally, personality traits are measured with questionnaires, but these methods have limitations such as the subjectivity of the answers or the difficulty of applying them on a large scale. Thanks to advances in NLP, it is now possible to analyse texts and predict certain traits without the need for surveys. In this paper, we have used a Reddit dataset with information about the personality traits of its users and applied modern techniques to compare their texts with the items of the NEO-FFI questionnaire. Through this process, Big-5 scores were estimated, the results were evaluated and MBTI traits were derived from these results. The proposed approach offers a simple, scalable and interpretable alternative for automatic personality analysis.
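A minimal sketch of the item-matching idea, assuming an embedding model and invented placeholder items (the actual NEO-FFI items are not reproduced here):

```python
# Score a user's text against questionnaire items via embedding similarity.
# The items below are invented placeholders, not actual NEO-FFI content.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed model

items_by_trait = {
    "neuroticism": ["I often feel tense or worried."],
    "extraversion": ["I enjoy being around lots of people."],
}
user_text = "Lately I've been anxious about almost everything."

text_emb = model.encode(user_text, convert_to_tensor=True)
for trait, items in items_by_trait.items():
    item_embs = model.encode(items, convert_to_tensor=True)
    score = util.cos_sim(text_emb, item_embs).mean().item()   # mean similarity to the trait's items
    print(trait, round(score, 3))
```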
Direction
Losada Carril, David Enrique (Tutorships)
FERNANDEZ PICHEL, MARCOS (Co-tutorships)
Court
CATALA BOLOS, ALEJANDRO (Chairman)
Triñanes Fernández, Joaquín Ángel (Secretary)
GONZALEZ DIAZ, JULIO (Member)
Development of an eCommerce Application (Online Store) for a Fishing Business
Authorship
I.Q.G.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 13:00
Summary
Nowadays, many small and medium-sized businesses need to improve their visibility to remain competitive in the market. In the fishing retail sector, the sheer quantity of products to display to customers, and the difficulty of managing those thousands of products, make it very complicated to maintain a clear and accessible catalog using traditional methods such as paper listings, physical catalogs, or even static websites. This Final Degree Project consists of the design and implementation of an eCommerce-type web application for the fishing shop Automar Baiona. The main objective of this application is to allow customers to easily browse the store's product catalog, using mechanisms such as filters and categories. Furthermore, the development of an administrator panel is proposed, which will provide the store owner with easy management of their online catalog, allowing them to add, modify, or delete products without requiring advanced technical knowledge. It is worth noting that, although the term eCommerce is used, the online purchase functionality is not implemented in this application. This is because the majority of Automar Baiona's customers make their purchases in person, which is why the client did not consider it appropriate to implement this part.
Direction
VARELA PET, JOSE (Tutorships)
Court
VARELA PET, JOSE (Student’s tutor)
Acetic acid production plant from apple waste (pomace)
Authorship
L.R.T.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 10:00
Summary
The main objective of this project is the management of organic waste produced in the cider and juice industry in northern Spain, with the capacity to treat all waste produced in Galicia. To this end, a facility was designed in the Chantada industrial estate (Lugo). In this process, which uses apple pomace as a raw material, 1,500 tons of acetic acid are produced sustainably each year, utilizing 43,440 tons of the residue. The process consists of two main fermentations: the first produces ethanol, which is subsequently transformed into acetic acid by the bacterium Acetobacter pasteurianus SKYAA25. Apple fiber is also obtained as a byproduct, which will be marketed for use in other industries. Furthermore, the plant design was completed, establishing the mass and energy balances for each piece of equipment, an estimate of their dimensions and other characteristics, and a detailed design of the fermenters used to produce acetic acid, including a construction plan. The following plans are also included: a location and site plan, a layout plan, a flow diagram, instrumentation and piping diagrams, and a Gantt chart for the equipment. Finally, a health and safety study, an environmental impact study, and an economic analysis of the facility were conducted. The latter is divided into two main parts: an economic feasibility analysis and a budget allocation, resulting in a budget of 30,123,206.25 euros.
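The two fermentations described correspond to the classical pathway, sketched here for reference (sugars from the pomace to ethanol, then aerobic oxidation to acetic acid):

```latex
% Alcoholic fermentation followed by acetous (Acetobacter) oxidation
\mathrm{C_6H_{12}O_6} \longrightarrow 2\,\mathrm{C_2H_5OH} + 2\,\mathrm{CO_2}
\qquad
\mathrm{C_2H_5OH} + \mathrm{O_2} \longrightarrow \mathrm{CH_3COOH} + \mathrm{H_2O}
```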
Direction
GONZALEZ GARCIA, SARA (Tutorships)
Court
Omil Prieto, Francisco (Chairman)
Rodríguez Figueiras, Óscar (Secretary)
VAL DEL RIO, MARIA ANGELES (Member)
Cumene production plant from benzene and propylene
Authorship
A.R.B.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 10:30
Summary
Cumene, the common name for isopropylbenzene, is a chemical compound widely used in industry, as it serves as an intermediate in the production of high value-added products such as phenol and acetone. It is also used in the production of propylene oxide. For these reasons, it was decided to build an industrial plant for the production of cumene through the alkylation of benzene with the addition of propylene. The process consists of an initial conditioning section, where benzene and propylene are introduced and brought to the appropriate conditions for the reaction to occur. The second section is the reactor, where the alkylation reaction takes place in a catalytic reactor to obtain the desired product. Next, in the third section, the resulting products are separated and purified using a series of distillation columns. The final stage is the storage of the separated products. Alexandre Rey Baliña will design the distillation column. Antón Varela Veiga will design the reactor.
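The alkylation at the heart of the process corresponds to the overall reaction:

```latex
% Alkylation of benzene with propylene to cumene
\mathrm{C_6H_6} + \mathrm{C_3H_6} \longrightarrow \mathrm{C_6H_5CH(CH_3)_2}
```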
Direction
FRANCO URIA, MARIA AMAYA (Tutorships)
SINEIRO TORRES, JORGE (Co-tutorships)
Court
Omil Prieto, Francisco (Chairman)
Rodríguez Figueiras, Óscar (Secretary)
VAL DEL RIO, MARIA ANGELES (Member)
Characterization of ECG Changes Immediately Before the Onset of Paroxysmal Atrial Fibrillation Using Machine Learning
Authorship
S.R.V.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 10:45
Summary
The main objective of this work is the development of a predictive model for paroxysmal atrial fibrillation (AF) based on electrocardiograms (ECGs), using machine learning techniques. The proposed approach aims to identify patterns in ECGs that may serve as early indicators of an imminent AF episode, specifically up to 10 minutes in advance. To this end, the database from the PhysioNet/Computing in Cardiology Challenge 2001 (Predicting Paroxysmal Atrial Fibrillation/Flutter) is used, which contains ECG recordings from patients, half of whom experience AF episodes. The methodology is based on a combination of machine learning techniques, including bidirectional LSTM neural networks, the XGBoost algorithm, and various ensemble models. The proposed architectures process the ECG records both in an aggregated manner and at the beat level, in order to extract as much information as possible from them. In addition to detecting AF events, this work also aims to contribute to the clinical interpretation of the results by characterizing the most relevant variables. This allows the identification of ECG features that may be associated with a higher risk of an AF episode, thus facilitating their analysis and understanding by medical personnel.
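A minimal sketch of a bidirectional LSTM classifier over beat-level feature sequences, in the spirit of the architecture described; dimensions and layer sizes are illustrative assumptions, not the thesis configuration:

```python
# Bidirectional LSTM over (batch, beats, features) sequences, producing one
# logit per recording: "AF episode imminent" yes/no.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # forward + backward hidden states

    def forward(self, x):                      # x: (batch, beats, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # logit from the last time step

model = BiLSTMClassifier()
logits = model(torch.randn(4, 600, 8))         # e.g., 600 beats per recording
probs = torch.sigmoid(logits)
```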
Direction
FELIX LAMAS, PAULO MANUEL (Tutorships)
RODRIGUEZ PRESEDO, JESUS MARIA (Co-tutorships)
Court
CATALA BOLOS, ALEJANDRO (Chairman)
Triñanes Fernández, Joaquín Ángel (Secretary)
GONZALEZ DIAZ, JULIO (Member)
Vinyl acetate (monomer) production plant
Authorship
D.S.S.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 09:30
Summary
This project consists of the development of a vinyl acetate (monomer) production plant. This chemical is in high demand industrially owing to its wide range of uses; it is mainly employed in the production of polymers such as PVA or PVP. Ethylene, acetic acid and oxygen were used as raw materials, and the process is characterized by being carried out in the gas phase. The project was carried out by students Darío Senín Sotelo and Alberto Pisco Domínguez, who designed, respectively, the R-201 multitubular heterogeneous catalytic reactor and the T-302 physical absorption column.
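The gas-phase synthesis described corresponds to the overall reaction:

```latex
% Gas-phase synthesis of vinyl acetate monomer
\mathrm{C_2H_4} + \mathrm{CH_3COOH} + \tfrac{1}{2}\,\mathrm{O_2}
  \longrightarrow \mathrm{CH_2{=}CHOCOCH_3} + \mathrm{H_2O}
```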
Direction
Rodríguez Figueiras, Óscar (Tutorships)
Court
CARBALLA ARCOS, MARTA (Chairman)
FRANCO RUIZ, DANIEL JOSE (Secretary)
EIBES GONZALEZ, GEMMA MARIA (Member)
State of the Art of WiFi Pentesting Tools Using the OWISAM Methodology
Authorship
L.S.L.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 11:00
Summary
This Final Degree Project (TFG) has focused on updating and improving OWISAM (Open Wireless Security Assessment Methodology), a methodology for security audits of wireless networks. The project has been developed in collaboration with Tarlogic Security, the company that has authored the methodology since 2013. The main objective consisted of updating the contents to include the most recent threats and technologies, while simultaneously developing a new web platform based on Jekyll that significantly improves accessibility, maintainability, and user experience compared to the original implementation. Among the main contributions of the project are the update of the 64 technical controls organized into 10 categories and the implementation of a modern, responsive interface. The new platform has successfully met the performance, accessibility, and usability objectives established. This Final Degree Project has not only modernized a key tool for the cybersecurity community but also lays the foundations for its maintenance, facilitating open collaboration and the continuous evolution of its contents.
Direction
CARIÑENA AMIGO, MARIA PURIFICACION (Tutorships)
TARASCO ACUÑA, MIGUEL (Co-tutorships)
Court
Argüello Pedreira, Francisco Santiago (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
Carreira Nouche, María José (Member)
Analysis and visualization of emotions in forums for the detection of gambling addiction symptoms
Authorship
D.S.V.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 11:30
Summary
A web tool is presented, aimed at the emotional and semantic analysis of texts, allowing users to define custom queries and visualize results clearly and intuitively. The application enables configurable text processing, prioritizing user experience, system modularity, and the ability to extend to new analysis scenarios. The system is built on a client-server architecture based on the hexagonal paradigm, which promotes separation of concerns and adaptability, using Python and the FastAPI framework for the backend and standard web technologies for the frontend. Its functionalities include uploading .csv files, filtering texts through specific queries with tolerance thresholds, performing emotional analysis, and visualizing personalized results through interactive filters. Technically, it incorporates emotional analysis and semantic retrieval models structured through design patterns such as Strategy, Factory, Decorator, and Facade, ensuring code extensibility and maintainability. It also integrates a user authentication and management system via JWT, secure access control, and data persistence using a PostgreSQL relational database, meeting security and traceability requirements. The tool is designed following principles of scalability and separation of concerns (presentation, business logic, and data), and its functionality has been validated through unit, integration, and usability tests based on real use cases, laying the foundation for its future evolution. This project falls under Type B of the Final Degree Project, covering the specification, design, and implementation of a complete, modular, and extensible system, whose versatility makes it applicable both in academic contexts and in practical natural language processing scenarios.
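A minimal sketch of the Strategy pattern as applied to pluggable text analyzers, with hypothetical class names rather than the project's actual API:

```python
# Strategy pattern: the analyzer delegates to an interchangeable strategy,
# so new emotion models can be plugged in without touching client code.
from abc import ABC, abstractmethod

class EmotionStrategy(ABC):
    @abstractmethod
    def analyze(self, text: str) -> dict: ...

class LexiconStrategy(EmotionStrategy):
    SAD = {"lost", "debt", "regret"}            # toy lexicon for illustration
    def analyze(self, text: str) -> dict:
        words = text.lower().split()
        hits = sum(w in self.SAD for w in words)
        return {"sadness": hits / max(len(words), 1)}

class Analyzer:
    def __init__(self, strategy: EmotionStrategy):
        self.strategy = strategy                # swappable at runtime
    def run(self, text: str) -> dict:
        return self.strategy.analyze(text)

print(Analyzer(LexiconStrategy()).run("I regret the debt I ran up"))
```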
Direction
CONDORI FERNANDEZ, OLINDA NELLY (Tutorships)
ARAGON SAENZPARDO, MARIO EZRA (Co-tutorships)
Court
Argüello Pedreira, Francisco Santiago (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
Carreira Nouche, María José (Member)
Gateway: Web platform for training planning
Authorship
S.S.P.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 12:00
Summary
The main contribution of this Final Degree Project is the development of a web application that helps users plan their training according to their professional objectives. The platform lets users receive personalized course recommendations, organize a training plan, and track their individual progress. In addition, the system includes key functions such as user registration, personal profile management, viewing training itineraries, and the ability to save resources or mark them as completed. All of this is aimed at offering a useful tool for those who want to learn autonomously, especially in professional development processes.
Direction
GARCIA TAHOCES, PABLO (Tutorships)
García Cortés, Mayra (Co-tutorships)
Court
Argüello Pedreira, Francisco Santiago (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
Carreira Nouche, María José (Member)
Allyl alcohol production plant by hydrolysis of allyl acetate
Authorship
A.S.B.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 12:30
Summary
The purpose of this project is the design of a plant producing 90,000 tons per year of allyl alcohol at 100% purity. Production will be carried out continuously, 24 hours a day, 330 days a year. Allyl alcohol will be obtained through the catalytic hydrolysis of allyl acetate. The process is based on reacting the raw materials, allyl acetate and water, in a fixed-bed reactor containing an acidic ion-exchange resin catalyst. As a result of the reaction, a stream consisting of allyl alcohol (main product), acetic acid (by-product) and unreacted raw materials will leave the reactor. All chemicals in this stream will be separated using separation equipment such as simple distillation columns and an extractive distillation column. One of the simple distillation columns of the process, specifically the one located after the extractive distillation column, will be designed in detail, and the rest will be dimensioned. In addition, by completing this Final Degree Project, the student Ana Suárez Barreiro seeks to obtain the Chemical Engineering degree.
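The catalytic hydrolysis described corresponds to the overall reaction, which also accounts for the acetic acid by-product:

```latex
% Hydrolysis of allyl acetate to allyl alcohol and acetic acid
\mathrm{CH_2{=}CHCH_2OCOCH_3} + \mathrm{H_2O}
  \longrightarrow \mathrm{CH_2{=}CHCH_2OH} + \mathrm{CH_3COOH}
```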
Direction
RODIL RODRIGUEZ, EVA (Tutorships)
Court
HOSPIDO QUINTANA, ALMUDENA (Chairman)
GONZALEZ GARCIA, SARA (Secretary)
González Álvarez, Julia (Member)
Acrylonitrile production plant
Authorship
H.S.B.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 10:40
Summary
The objective of this Final Degree Project is the design of a production plant for 50,000 tons per year of acrylonitrile. This compound has widespread industrial use. It is commonly employed in the production of synthetic fibers, plastics, resins, and in the synthesis of other chemicals. Acrylonitrile is obtained through the ammoxidation reaction of propylene, ammonia, and oxygen, known as the Sohio process. The main reaction products are acrylonitrile, HCN, acetonitrile, water, carbon dioxide, and carbon monoxide. The reaction takes place in a fluidized-bed reactor using a bismuth molybdate oxide catalyst supported on alumina. Subsequently, the reaction products pass through an absorber (using water as the solvent) to recover the highest possible amount of acrylonitrile. Finally, the products are sent through a distillation column train, whose main objective is the separation and purification of acrylonitrile to 99.7% purity.
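The Sohio ammoxidation named above corresponds to the main overall reaction:

```latex
% Ammoxidation of propylene to acrylonitrile (Sohio process)
\mathrm{C_3H_6} + \mathrm{NH_3} + \tfrac{3}{2}\,\mathrm{O_2}
  \longrightarrow \mathrm{C_3H_3N} + 3\,\mathrm{H_2O}
```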
Direction
SOTO CAMPOS, ANA MARIA (Tutorships)
Court
VIDAL TATO, MARIA ISABEL (Chairman)
MAURICIO IGLESIAS, MIGUEL (Secretary)
RODRIGUEZ MARTINEZ, HECTOR (Member)
Mobile application for musical ear training learning
Authorship
R.T.L.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 12:30
Summary
This work describes the process of creating a mobile application for musical ear training. It begins with a preliminary study exploring the potential of the idea, identifying who might be interested in the application and why. A business model is also proposed, as mobile applications are relatively easy to introduce into the market. The requirements that define the application's functionality are then specified. A chapter is devoted to software engineering, explaining the lifecycle model and describing the planning process; use cases for the previously defined functional requirements are also included. The design section covers different aspects such as the graphical interface, system architecture, and software design. Finally, the testing plan is presented and the success or failure of the project is evaluated.
Direction
TOBAR QUINTANAR, ALEJANDRO JOSE (Tutorships)
Court
Argüello Pedreira, Francisco Santiago (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
Carreira Nouche, María José (Member)
Introduction to reinforcement learning
Authorship
R.T.L.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 11:25
Summary
This work presents an introduction to reinforcement learning from a mathematical perspective, with special emphasis on its connection to dynamic programming. Although reinforcement learning is an independent field within artificial intelligence, many of its core ideas, such as sequential decision-making, utility functions, or optimal policies, originate from dynamic programming. For this reason, the study begins by addressing dynamic programming as the conceptual and formal foundation upon which reinforcement learning is built. Once the section on dynamic programming is concluded, the most important aspects of reinforcement learning are introduced; understanding Markov decision processes will be essential for grasping the theory. The work also discusses topics derived from this paradigm, such as the exploration-exploitation dilemma, and examines various solution algorithms, including Monte Carlo methods and temporal-difference learning. Finally, the study explores how reinforcement learning can be a powerful tool in the field of mathematical optimization. To this end, several classical optimization problems are analyzed, showing how they can be reformulated within the reinforcement learning framework. This allows the application of RL algorithms to complex problems that are not easily solved by traditional methods.
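A tiny tabular Q-learning sketch illustrating the temporal-difference ideas discussed, on an invented five-state chain environment:

```python
# Tabular Q-learning on a 5-state chain: reward 1 only at the right end.
import random

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.9, 0.1   # step size, discount, exploration rate

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(2000):
    s = 0
    while s != n_states - 1:
        a = (random.randrange(n_actions) if random.random() < eps
             else max(range(n_actions), key=lambda a: Q[s][a]))
        s2, r = step(s, a)
        # TD update: move Q(s,a) toward the bootstrapped target r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 3) for q in Q])   # learned state values along the chain
```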
Direction
GONZALEZ DIAZ, JULIO (Tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Object annotation using zero-shot techniques
Authorship
L.T.H.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 11:00
Summary
One of the most prominent challenges in the field of artificial intelligence, and specifically in computer vision, is the need to build large, high-quality datasets to train predictive models. Obtaining these datasets is a complex and costly task, especially when there are no publicly available datasets that meet the specific needs of the model. In computer vision problems, this often involves the manual collection and annotation of hundreds of thousands of images. For this reason, the emergence of zero-shot learning techniques, in which no labeled examples are required to train the AI model, is of particular interest in this field. The main objective of this work is to explore object annotation using zero-shot techniques, using a zero-shot object detector to generate pseudo-labels without the need for manually annotated data. These pseudo-labels are then used to train a conventional detector in a supervised manner. Through the experiments conducted, it is assessed whether the generated pseudo-labels are of sufficient quality compared to results obtained using real labels. The results show that pseudo-labels automatically generated using zero-shot models do indeed make it possible to train supervised detectors with performance close to that of models trained with real labels.
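A minimal sketch of pseudo-label generation with an off-the-shelf zero-shot detector; the OWL-ViT checkpoint, image path, labels, and confidence threshold are assumptions for illustration, as the abstract does not name the detector used:

```python
# Generate pseudo-labels with a zero-shot object detector, keeping only
# confident detections for later supervised training.
from transformers import pipeline
from PIL import Image

detector = pipeline("zero-shot-object-detection",
                    model="google/owlvit-base-patch32")   # assumed checkpoint
image = Image.open("frame_0001.jpg")                      # placeholder image

detections = detector(image, candidate_labels=["car", "pedestrian", "bicycle"])
pseudo_labels = [d for d in detections if d["score"] >= 0.4]   # assumed threshold
for d in pseudo_labels:
    print(d["label"], d["score"], d["box"])   # box: dict with xmin/ymin/xmax/ymax
```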
Direction
MUCIENTES MOLINA, MANUEL FELIPE (Tutorships)
CORES COSTA, DANIEL (Co-tutorships)
Court
Fernández Pena, Anselmo Tomás (Chairman)
Rodeiro Pazos, David (Secretary)
GARCIA TAHOCES, PABLO (Member)
Acrylonitrile production plant
Authorship
L.V.B.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 10:40
Summary
The objective of this Final Degree Project is the design of a production plant for 50,000 tons per year of acrylonitrile. This compound has widespread industrial use. It is commonly employed in the production of synthetic fibers, plastics, resins, and in the synthesis of other chemicals. Acrylonitrile is obtained through the ammoxidation reaction of propylene, ammonia, and oxygen, known as the Sohio process. The main reaction products are acrylonitrile, HCN, acetonitrile, water, carbon dioxide, and carbon monoxide. The reaction takes place in a fluidized-bed reactor using a bismuth molybdate oxide catalyst supported on alumina. Subsequently, the reaction products pass through an absorber (using water as the solvent) to recover the highest possible amount of acrylonitrile. Finally, the products are sent through a distillation column train, whose main objective is the separation and purification of acrylonitrile to 99.7% purity.
Direction
SOTO CAMPOS, ANA MARIA (Tutorships)
Court
VIDAL TATO, MARIA ISABEL (Chairman)
MAURICIO IGLESIAS, MIGUEL (Secretary)
RODRIGUEZ MARTINEZ, HECTOR (Member)
Application in Python in a production environment for migration and support between SQL and NoSQL databases
Authorship
J.R.V.F.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 11:30
Summary
Development of a Python application as a tool for support and data migration between a legacy SQL database system and a new NoSQL system in MongoDB. This tool will enable the automation of daily repetitive tasks on the databases, such as queries, backups, data retrieval from both systems for the same elements, and comparison of results to ensure proper operation and successful migration.
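A minimal sketch of the cross-system consistency check described, assuming placeholder connection details and table/collection names:

```python
# Fetch the same element from the legacy SQL database and from MongoDB
# and compare fields to verify the migration.
import sqlite3
from pymongo import MongoClient

sql = sqlite3.connect("legacy.db")                         # placeholder SQL source
mongo = MongoClient("mongodb://localhost:27017")["newdb"]["items"]

def compare(item_id: int) -> bool:
    row = sql.execute("SELECT id, name, price FROM items WHERE id = ?",
                      (item_id,)).fetchone()
    doc = mongo.find_one({"_id": item_id})
    if row is None or doc is None:
        return False
    return row[1] == doc.get("name") and row[2] == doc.get("price")

print("migrated OK" if compare(42) else "mismatch or missing")
```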
Direction
Argüello Pedreira, Francisco Santiago (Tutorships)
Court
Fernández Pena, Anselmo Tomás (Chairman)
Rodeiro Pazos, David (Secretary)
GARCIA TAHOCES, PABLO (Member)
Cumene production plant from benzene and propylene
Authorship
A.V.V.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 10:30
Summary
Cumene, the common name for isopropylbenzene, is a chemical compound widely used in industry, as it serves as an intermediate in the production of high value-added products such as phenol and acetone. It is also used in the production of propylene oxide. For these reasons, it was decided to build an industrial plant for the production of cumene through the alkylation of benzene with the addition of propylene. The process consists of an initial conditioning section, where benzene and propylene are introduced and brought to the appropriate conditions for the reaction to occur. The second section is the reactor, where the alkylation reaction takes place in a catalytic reactor to obtain the desired product. Next, in the third section, the resulting products are separated and purified using a series of distillation columns. The final stage is the storage of the separated products. Alexandre Rey Baliña will design the distillation column. Antón Varela Veiga will design the reactor.
Direction
FRANCO URIA, MARIA AMAYA (Tutorships)
SINEIRO TORRES, JORGE (Co-tutorships)
Court
Omil Prieto, Francisco (Chairman)
Rodríguez Figueiras, Óscar (Secretary)
VAL DEL RIO, MARIA ANGELES (Member)
Golden hydrogen production plant from wastewater treatment plant sludge
Authorship
A.V.S.
Bachelor's Degree in Chemical Engineering
Defense date
07.16.2025 12:45
Summary
The aim of this TFG is the design of a plant producing hydrogen from biogas/biomethane with carbon dioxide capture, known as golden hydrogen. The biomethane will be generated on-site through the anaerobic digestion of sludge from urban wastewater treatment plants (WWTPs). The plant's production capacity is 3,150 tonnes of H2 per year, at 99.99% purity. In accordance with the relevant standards and technical manuals, a rigorous design is carried out for the anaerobic digester, the catalytic reformer reactor, and the carbon dioxide absorption tower. The value proposition of this project lies in offering an alternative that closes the carbon cycle and uses organic waste as a raw material for producing a valuable product, in this case hydrogen.
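Assuming the catalytic reformer performs steam reforming of the biomethane followed by a water-gas shift (the abstract does not specify the route), the overall chemistry would be:

```latex
% Steam reforming of (bio)methane and water-gas shift
\mathrm{CH_4} + \mathrm{H_2O} \longrightarrow \mathrm{CO} + 3\,\mathrm{H_2}
\qquad
\mathrm{CO} + \mathrm{H_2O} \longrightarrow \mathrm{CO_2} + \mathrm{H_2}
```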
Direction
SANCHEZ FERNANDEZ, ADRIAN (Tutorships)
Court
MOREIRA MARTINEZ, RAMON FELIPE (Chairman)
Pedrouso Fuentes, Alba (Secretary)
SINEIRO TORRES, JORGE (Member)
NIDO: Machine Learning for the Identification of Small Neonates
Authorship
P.V.T.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 12:00
Summary
This work is part of the COLLAGE research project, developed in collaboration with the Universitat Pompeu Fabra and BCNatal (Barcelona), and focuses on the prediction of small for gestational age (SGA) newborns using clinical, ultrasound, and sociodemographic data from the baby and the mother. The addressed problem is of high public health interest, given that SGA fetuses present a higher risk of perinatal complications and long-term diseases, and their early detection is key to applying interventions that improve their health and reduce inequalities in perinatal care. We developed a data preprocessing pipeline that included a detailed investigation of the clinical meaning and relevance of each variable, the management of missing values, and feature engineering, obtaining a dataset of 791 cases and 28 variables used as predictors, with clinical validity and no missing values. We built and validated predictive models for SGA using XGBoost and Multiple Kernel Learning, optimizing their parameters through k-fold cross-validation and achieving an F1 score of 0.836 with XGBoost, demonstrating its ability to identify SGA cases in a real clinical setting. Additionally, with the aim of exploring the internal structure of the dataset and identifying potential subgroups, we applied dimensionality reduction techniques using Principal Component Analysis and clustering with Gaussian Mixture Models, identifying clustering patterns associated with oil consumption during pregnancy as a differentiating factor within the cohort. Finally, we evaluated the fairness of the model across different subpopulations (ethnicity, body mass index (BMI), and newborn sex) using the Mann-Whitney U test, identifying significant performance differences according to ethnicity and BMI, and confirming the importance of analyzing potential population biases before applying the model in clinical practice. In summary, this report presents a detailed and replicable methodology for the preparation of heterogeneous clinical data, the construction of SGA prediction models, and their evaluation across different population subgroups to ensure fairness.
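A minimal sketch of the XGBoost plus k-fold cross-validation setup described; the data below are synthetic stand-ins shaped like the reported dataset, not the COLLAGE cohort, and the hyperparameters are placeholders rather than the tuned values:

```python
# XGBoost classifier evaluated with 5-fold cross-validated F1, on synthetic
# data shaped like the reported 791 x 28 predictor matrix.
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(791, 28))
y = rng.integers(0, 2, size=791)     # SGA yes/no (synthetic labels here)

clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
print(f"mean F1 across folds: {f1:.3f}")
```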
Direction
Carreira Nouche, María José (Tutorships)
NUÑEZ GARCIA, MARTA (Co-tutorships)
Court
Fernández Pena, Anselmo Tomás (Chairman)
Rodeiro Pazos, David (Secretary)
GARCIA TAHOCES, PABLO (Member)
FlashMaster: Collaborative AI-Powered Study Platform
Authorship
A.V.G.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 10:30
Summary
FlashMaster is a cutting-edge web platform that blends spaced repetition with artificial intelligence to deliver a highly effective, collaborative study experience. Powered by the SM-2 algorithm, it schedules personalized review intervals based on each user’s performance to ensure long-term retention. Its AI assistant automatically generates flashcards from PDF documents, creates smart summaries, and answers questions in real time using a Retrieval-Augmented Generation (RAG) system with semantic search. Users can also set up customizable workspaces with different permission levels, share study collections, chat live, and collaboratively edit notes.
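For reference, the SM-2 scheduling rule mentioned in the summary can be sketched in Python in its standard textbook form; FlashMaster's exact implementation details (grading scale, persistence, rounding) are assumptions here.

```python
# Textbook SM-2 update rule; FlashMaster's actual variant may differ.
from dataclasses import dataclass

@dataclass
class CardState:
    repetitions: int = 0     # consecutive successful reviews
    interval: int = 0        # days until the next review
    easiness: float = 2.5    # SM-2 easiness factor (EF)

def sm2_review(state: CardState, quality: int) -> CardState:
    """Update a card after a review graded 0-5 (>= 3 counts as recalled)."""
    if quality >= 3:
        if state.repetitions == 0:
            state.interval = 1
        elif state.repetitions == 1:
            state.interval = 6
        else:
            state.interval = round(state.interval * state.easiness)
        state.repetitions += 1
    else:
        # Failed recall: restart the repetition sequence.
        state.repetitions = 0
        state.interval = 1
    # Adjust easiness; SM-2 clamps EF at a floor of 1.3.
    state.easiness = max(
        1.3,
        state.easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02),
    )
    return state
```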
Direction
Sánchez Vila, Eduardo Manuel (Tutorships)
Court
CABALEIRO DOMINGUEZ, JOSE CARLOS (Chairman)
FLORES GONZALEZ, JULIAN CARLOS (Secretary)
Jeremías López, Ana (Member)
Adaptation of a Full-Resolution Network for Retinal Vessel Segmentation
Authorship
A.V.S.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 12:00
Summary
This work addresses the development and optimization of deep learning architectures for the automatic segmentation of blood vessels in high-resolution fundus images, with direct application in the diagnosis of ocular pathologies such as diabetic retinopathy, glaucoma, and age-related macular degeneration. The research is structured into three main phases. First, the FRNet (Full Resolution Network) architecture is evaluated as a starting point, demonstrating that although full-resolution processing improves fine-vessel segmentation, it presents critical computational scalability limitations that make its use with 2048×2048-pixel images unfeasible. Second, VesselView is presented and validated: a new U-Net-style architecture that overcomes the limitations of FRNet by adapting it to a more efficient encoder-decoder paradigm. Through systematic studies, its key components are empirically validated: the optimal bottleneck configuration, the indispensability of skip connections, and the effect of different loss functions. Third, SantosNet is proposed as an evolution of VesselView, incorporating two original contributions: DirectionalSanLoss, a loss function that optimizes topological quality through directional vessel information, and LearnableFusionBlock, an intelligent weighting module that improves vascular segmentation results. Experiments on the FIVES dataset show that SantosNet achieves a remarkable improvement, significantly outperforming both FRNet and VesselView in the main metrics.
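As context for the loss-function ablations mentioned above, the following is a standard soft Dice loss for binary vessel segmentation in PyTorch. It is a common baseline in this setting, not the thesis's DirectionalSanLoss, whose definition is not reproduced here.

```python
# Standard soft Dice loss; a baseline, not the thesis's DirectionalSanLoss.
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """1 - Dice coefficient between predicted probabilities and a binary mask."""
    probs = torch.sigmoid(logits)
    # Flatten each sample to a vector: (N, C*H*W).
    probs = probs.flatten(1)
    target = target.flatten(1)
    intersection = (probs * target).sum(dim=1)
    union = probs.sum(dim=1) + target.sum(dim=1)
    dice = (2 * intersection + eps) / (union + eps)
    return 1 - dice.mean()
```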
Direction
Pardo López, Xosé Manuel (Tutorships)
SANTOS MATEOS, ROI (Co-tutorships)
Court
Blanco Heras, Dora (Chairman)
GAGO COUSO, FELIPE (Secretary)
Sánchez Vila, Eduardo Manuel (Member)
Intelligent Mobile Application for Sleep Monitoring and Analysis
Authorship
E.V.T.
Bachelor’s Degree in Informatics Engineering
Defense date
07.17.2025 11:00
Summary
This Final Degree Project presents a mobile application that allows users to monitor their sleep, analyze its quality, and receive personalized recommendations. It combines objective and subjective data through questionnaires and records, integrating artificial intelligence to generate reports, tips, and a conversational diary. It offers visual tools and recommendations aimed at improving sleep quality in an individualized way.
Direction
Sánchez Vila, Eduardo Manuel (Tutorships)
Court
CABALEIRO DOMINGUEZ, JOSE CARLOS (Chairman)
FLORES GONZALEZ, JULIAN CARLOS (Secretary)
Jeremías López, Ana (Member)
RetroRevive: Web Platform for the Restoration of Old Images
Authorship
T.W.R.
Bachelor’s Degree in Informatics Engineering
Defense date
07.18.2025 12:30
Summary
In this Final Degree Project, a web application has been developed to facilitate the processing of old images affected by the passage of time and poor preservation. To repair the deteriorated areas of an image, a method has been implemented that detects them automatically, complemented by a canvas where damaged areas can be defined manually. Several artificial intelligence techniques have been integrated for color restoration and for increasing both image size and resolution, together with noise-reduction filters; all operations can be applied either individually or in combination. The system has been designed so that users can view the results of all restorations performed through the platform in a gallery, including a comparison with the original image, which required adding a database for persistent information storage. Additionally, efforts have been made to keep the interface as simple and intuitive as possible, minimizing the operations and clicks required to obtain results and to navigate through the different sections of the application.
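To illustrate the kind of mask-based repair described in the summary, the sketch below applies classical OpenCV inpainting to regions marked in a user-supplied mask. The file paths and the choice of the Telea algorithm are illustrative assumptions; RetroRevive's own detection and AI-based restoration pipeline is not reproduced here.

```python
# Illustrative mask-based repair via classical inpainting;
# not RetroRevive's actual detection or AI restoration steps.
import cv2

def repair_damaged_areas(image_path: str, mask_path: str, out_path: str) -> None:
    """Inpaint the pixels marked non-zero in the mask (e.g. a user-drawn canvas)."""
    image = cv2.imread(image_path)                      # BGR photo
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)  # white = damaged area
    # Radius 3 and the Telea method are common defaults for small defects.
    restored = cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
    cv2.imwrite(out_path, restored)
```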
Direction
VIDAL AGUIAR, JUAN CARLOS (Tutorships)
Andrés Estrada, Sergio (Co-tutorships)
García Ullán, Pablo (Co-tutorships)
Court
Blanco Heras, Dora (Chairman)
GAGO COUSO, FELIPE (Secretary)
Sánchez Vila, Eduardo Manuel (Member)