Review and comparison of dimensionality reduction techniques.
Authorship
A.A.M.
Bachelor of Mathematics
Defense date
07.16.2025 11:45
Summary
Dimensionality reduction techniques are fundamental procedures in statistics for simplifying datasets while losing as little information as possible. The objective of this work is to thoroughly review some of these methods, with a particular focus on one of the most widely used techniques: Principal Component Analysis. Additionally, we will discuss more recent nonlinear techniques, which have been gaining popularity. Finally, to emphasize the practical importance of these techniques, we will present application examples with real datasets to explore the challenges of interpretation and processing they entail.
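For orientation, a minimal sketch of the core PCA computation via the singular value decomposition, written in Python with NumPy (our illustration, not code from the thesis; the data matrix X is a placeholder):

    import numpy as np

    def pca(X, k):
        # Project the n x d data matrix X onto its first k principal components.
        Xc = X - X.mean(axis=0)                  # center each variable
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        components = Vt[:k]                      # principal directions (rows)
        scores = Xc @ components.T               # coordinates in the reduced space
        explained = s[:k] ** 2 / np.sum(s ** 2)  # fraction of variance retained
        return scores, components, explained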
Direction
PATEIRO LOPEZ, BEATRIZ (Tutorships)
Court
CRUJEIRAS CASAIS, ROSA MARÍA (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
DOMINGUEZ VAZQUEZ, MIGUEL (Member)
Goodness-of-fit tests.
Authorship
E.A.A.
Bachelor of Mathematics
Defense date
07.15.2025 10:00
Summary
The development of this project focuses on the theoretical and practical study of the chi-squared test, which is used to test certain hypotheses on a data sample. Firstly, the historical context of the test is presented, including its origin and evolution. In the theoretical study, the test is formalized by defining its corresponding test statistic and presenting the main results related to it, including those concerning the asymptotic convergence of the statistic. The case in which the null hypothesis is simple is distinguished from the case in which it belongs to a parametric family. In the practical part, simulations of the test are carried out in its different cases using the R software, with the aim of analyzing and characterizing its actual behavior. The calibration of the test (cases in which the null hypothesis holds) and its power (cases in which the null hypothesis does not hold) are studied. Certain practical recommendations mentioned during the degree are also tested.
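The simulations in the thesis are written in R; purely as a hedged illustration of what a calibration experiment looks like, a minimal Python analogue that estimates the empirical level of the chi-squared test under a simple uniform null:

    import numpy as np
    from scipy.stats import chisquare

    rng = np.random.default_rng(0)
    n, k, alpha, reps = 200, 5, 0.05, 2000
    rejections = 0
    for _ in range(reps):
        sample = rng.integers(0, k, size=n)          # data generated under H0
        observed = np.bincount(sample, minlength=k)
        _, pvalue = chisquare(observed)              # expected counts: uniform by default
        rejections += pvalue < alpha
    print("empirical level:", rejections / reps)     # should be close to alpha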
Direction
RODRIGUEZ CASAL, ALBERTO (Tutorships)
Bolón Rodríguez, Diego (Co-tutorships)
Court
Bolón Rodríguez, Diego (Student’s tutor)
RODRIGUEZ CASAL, ALBERTO (Student’s tutor)
Theoretical-computational study of criticality in Ising-type models
Authorship
P.A.G.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2025 09:30
Summary
This document addresses the Ising model, widely used in the study of phase transitions, from both theoretical and computational perspectives. It briefly introduces concepts related to critical phenomena, followed by a description of the model and some well-known theoretical developments and results in one- and two-dimensional lattices with nearest-neighbor interactions. A computational study is then carried out, introducing the Metropolis and Wolff algorithms, and demonstrating their ergodicity and compliance with the detailed balance equation. These two algorithms are compared, showing that the Wolff algorithm achieves faster convergence near the critical point. Using a custom-developed code, phase transitions are characterized in systems ranging from one to four dimensions with nearest-neighbor interactions, and the model is implemented on complex small-world networks, both one- and two-dimensional.
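As a hedged sketch of one of the two samplers discussed (ours, not the custom code developed in the thesis), a single Metropolis sweep for the 2D nearest-neighbour Ising model with J = 1 and periodic boundaries:

    import numpy as np

    def metropolis_sweep(spins, beta, rng):
        # One sweep of single-spin-flip Metropolis updates on an L x L lattice.
        L = spins.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nn               # energy cost of flipping (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1                   # accept the flip
        return spins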
Direction
MENDEZ MORALES, TRINIDAD (Tutorships)
Montes Campos, Hadrián (Co-tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
The Riemann mapping theorem
Authorship
P.A.G.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 16:40
Summary
This document addresses the Riemann mapping theorem, an essential result in complex analysis that establishes the existence of conformal mappings between simply connected proper subdomains of the complex plane and the unit disk. To this end, the necessary theoretical framework is introduced, beginning with a brief introduction to the theorem, along with some fundamental definitions and results on holomorphic functions. Later, the focus shifts to Möbius transformations, a key tool in this work that, together with Schwarz’s lemma, plays a fundamental role in characterizing the biholomorphic automorphisms of the unit disk. This theoretical development concludes with the study of certain properties of the space of holomorphic functions, thereby providing a foundation for giving a rigorous proof of the theorem. Finally, the importance of the theorem is highlighted by presenting some relevant applications in other scientific fields, such as fluid mechanics in physics, and an algorithm is provided to approximately compute the mapping described by the theorem.
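For reference, in standard notation (ours): the theorem states that for every simply connected domain $\Omega \subsetneq \mathbb{C}$ and every $z_0 \in \Omega$ there exists a unique biholomorphism $f\colon \Omega \to \mathbb{D}$ onto the open unit disk with $f(z_0) = 0$ and $f'(z_0) > 0$, and the automorphisms of $\mathbb{D}$ obtained via Schwarz’s lemma are exactly the Möbius maps $\varphi(z) = e^{i\theta}(z - a)/(1 - \bar{a}z)$ with $a \in \mathbb{D}$ and $\theta \in \mathbb{R}$.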
Direction
CAO LABORA, DANIEL (Tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Analysis of the B+ → μμτν decay from a theoretical and experimental point of view
Authorship
D.A.D.A.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2025 09:30
Summary
This work aims to be a preliminary study, both from a theoretical and an experimental point of view, of the decay B+ → μμτν in the context of the LHCb experiment. First, a theoretical development based on bibliographic sources is carried out, performing an analogous calculation with the goal of understanding the underlying mechanisms and adapting them to the case of interest; the ultimate objective of this analysis is to obtain an initial reference numerical value for the branching ratio (BR) of this decay. Next, we design an analysis aiming to detect a signal peak for this decay or to set an upper limit on the BR; to this end, we address the problem of missing information due to the undetectable neutrino by introducing the corrected mass. We present the data selection methods, the elimination of combinatorial background using machine learning tools, and a study of the ability to distinguish a signal peak through simulations and fits, along with statistical tools such as Wilks’ theorem. Finally, we present the progress achieved in this initial analysis and conclude with the work to be developed for a future extension of this study.
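For reference, the corrected mass mentioned above is usually defined (we quote the standard LHCb form) as $m_{\mathrm{corr}} = \sqrt{m_{\mathrm{vis}}^2 + p_{\perp}^2} + p_{\perp}$, where $m_{\mathrm{vis}}$ is the invariant mass of the reconstructed decay products and $p_{\perp}$ is their momentum component transverse to the B flight direction; it partially compensates for the momentum carried away by the undetected neutrino.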
Direction
CID VIDAL, XABIER (Tutorships)
FERNANDEZ GOMEZ, MIGUEL (Co-tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
Clifford algebras and spin groups
Authorship
D.A.D.A.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 10:00
Summary
The aim of this work is to be a starting point for the study of Spin groups through the formalism of Clifford algebras. In order to do so, we first define and construct these algebras through tensor algebras and study their main properties as superalgebras, their definition in terms of generators and relations, and their connection with the exterior algebra. Afterwards, we perform a first classification of the low-dimensional real Clifford algebras by using the Z_2-graded tensor product, before proceeding to the complete classification of these algebras (in the real and complex cases), supported by several isomorphisms proved in this work together with the Bott periodicity theorem. We then define the Spin group as a subgroup of the units of the Clifford algebra and, after studying the latter’s actions on the whole algebra, we check using this same action that the Spin groups are double covers of the special orthogonal groups SO(n). We also study some additional properties of these groups and classify some of the low-dimensional cases.
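In standard notation (ours): the construction referred to above realizes the Clifford algebra $\mathrm{Cl}(V, Q)$ as the quotient of the tensor algebra $T(V)$ by the two-sided ideal generated by the elements $v \otimes v - Q(v)1$, so that $v^2 = Q(v)1$ for every $v \in V$; the double-cover statement corresponds to the short exact sequence $1 \to \mathbb{Z}_2 \to \mathrm{Spin}(n) \to \mathrm{SO}(n) \to 1$.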
Direction
DIAZ RAMOS, JOSE CARLOS (Tutorships)
Lorenzo Naveiro, Juan Manel (Co-tutorships)
Court
GARCIA RODICIO, ANTONIO (Chairman)
CAO LABORA, DANIEL (Secretary)
Gómez Tato, Antonio M. (Member)
Analysis and application of Dijkstra's algorithm in route optimisation: in search of the shortest path
Authorship
A.B.V.
Double bachelor degree in Mathematics and Physics
Defense date
07.15.2025 09:10
Summary
In this project, we start from basic concepts regarding flow problems in networks and then look at more specific ones that help us achieve our main goal, which is to explain different algorithms for solving the shortest path problem. In the first chapter we will introduce concepts from graph theory and explain important notions for the following chapters, such as the definitions of node, arc and cost. In addition, we will introduce minimum cost flow problems together with their mathematical formulations. Finally, we will analyze a very important property when solving this type of problem, unimodularity, along with some properties that make solving them easier. In the second chapter we will focus on the shortest path problem, describing the different types of algorithms for solving it, with particular emphasis on Dijkstra's algorithm. We will also provide some implementations of this and other algorithms that can help solve the problem more efficiently. Additionally, the explanations will be accompanied by examples to make the algorithms easier to understand. Finally, the last chapter presents a practical application of shortest path computation within the Spanish railway network. It includes a detailed description of the Python implementation, the data used, and a graphical representation of the computed routes.
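As a hedged sketch (the implementations developed in the thesis are not reproduced here), a compact Python version of Dijkstra's algorithm on an adjacency-list graph with a binary heap:

    import heapq

    def dijkstra(graph, source):
        # Shortest-path costs from source; graph[u] is a list of (v, cost) arcs.
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                          # stale heap entry, skip
            for v, cost in graph.get(u, []):
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd                  # found a shorter path to v
                    heapq.heappush(heap, (nd, v))
        return dist

For example, dijkstra({'A': [('B', 2), ('C', 5)], 'B': [('C', 1)]}, 'A') returns {'A': 0, 'B': 2, 'C': 3}.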
Direction
GONZALEZ RUEDA, ANGEL MANUEL (Tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Baroclinic instability and its sensitivity to vertical temperature and wind profiles in atmospheric layers
Authorship
A.B.V.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2025 09:30
Summary
This project aims to study baroclinic instability in the atmosphere through theoretical analysis and numerical simulations. In the theoretical section, the key factors influencing this phenomenon will be explained, such as vertical wind shear, thermal gradients, and atmospheric stratification, as well as the role of planetary rotation and the Coriolis force. Next, numerical simulations will be carried out to analyze the impact of variations in temperature and wind profiles on the growth of baroclinic disturbances. The results will allow for an assessment of how these atmospheric conditions affect the evolution of baroclinic waves, comparing the simulations with theoretical predictions. Finally, the agreement between theory and simulations will be discussed, highlighting the conditions that favor the development of baroclinic instability and its potential applications in weather forecasting.
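One classical reference point for these sensitivities (ours, not a result of the thesis): in the Eady model the fastest-growing baroclinic mode has growth rate $\sigma_{\max} \approx 0.31\,(f/N)\,\partial U/\partial z$, so stronger vertical wind shear $\partial U/\partial z$ and weaker stratification (smaller buoyancy frequency $N$) both favor instability, with $f$ the Coriolis parameter.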
Direction
MIGUEZ MACHO, GONZALO (Tutorships)
CRESPO OTERO, ALFREDO (Co-tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
A review of scheduling problems and applications
Authorship
S.B.G.
Bachelor of Mathematics
Defense date
07.15.2025 09:55
Summary
Scheduling problems involve the efficient allocation of limited resources to a set of tasks, considering specific time constraints and optimality criteria. In the first chapter, the notation used throughout the paper will be detailed. The second and third chapters review different variants of the problem, presenting in each case the corresponding algorithm to find the optimal solution, along with a discussion of certain associated theoretical properties. The fourth chapter includes the implementation of these algorithms in R code. Finally, in the fifth chapter, conclusions will be drawn, highlighting the relevance of these problems both in everyday life and in professional contexts.
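The implementations in the thesis are in R; purely as a hedged illustration of the kind of rule covered, a Python sketch of the classical SPT (shortest processing time) rule, which minimizes total completion time on a single machine:

    def spt_order(processing_times):
        # Return the job order and total completion time under the SPT rule.
        order = sorted(range(len(processing_times)), key=lambda j: processing_times[j])
        total, elapsed = 0, 0
        for j in order:
            elapsed += processing_times[j]   # completion time of job j
            total += elapsed
        return order, total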
Direction
GONZALEZ RUEDA, ANGEL MANUEL (Tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Comparison of search technologies in evaluation benchmarks for the study of misinformation in the context of health-related queries
Authorship
X.C.A.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.18.2025 11:00
Summary
The presence of misinformation in web search results related to health is a concerning issue both socially and scientifically, as it can negatively influence users' decision-making and lead to serious health consequences. This phenomenon, which gained visibility because of the COVID-19 pandemic, is one of the current research areas in Information Retrieval (IR), around which this bachelor thesis is structured. The main goal of the project is to explore how to distinguish relevant and accurate documents from harmful ones, given a specific search intent. With this goal in mind, a study structured along three research lines is proposed. First, a systematic analysis is conducted on the performance of state-of-the-art search systems with respect to this task. Second, a new technique is designed, implemented, and evaluated, based on Large Language Models (LLMs), for generating alternative versions of user queries in such a way that the variants promote the retrieval of relevant and accurate documents, while discouraging the retrieval of harmful ones. Lastly, a study is presented on predicting the presence of health misinformation in search results, for which techniques from related fields are tested, and a new LLM-based predictor, specifically tailored for this task, is designed, implemented, and evaluated. The findings of this work support the potential of LLMs in the field of IR, as they manage to improve the effectiveness of state-of-the-art search systems. Moreover, the project addresses the literature gap with regard to misinformation prediction in queries, while also showing the superior capability of LLMs for this task compared to more general techniques. Parts of the contributions from this project have been accepted for publication at the ACM SIGIR 2025 conference.
Direction
Losada Carril, David Enrique (Tutorships)
FERNANDEZ PICHEL, MARCOS (Co-tutorships)
Court
MOSQUERA GONZALEZ, ANTONIO (Chairman)
CORES COSTA, DANIEL (Secretary)
QUESADA BARRIUSO, PABLO (Member)
Quantum Computing. Principles and Applications
Authorship
A.C.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 17:20
Summary
This report studies the foundations of Quantum Computing in a fully mathematical manner, abstracting from the underlying real physical systems. Firstly, we study the fundamentals of Quantum Physics, exploring concepts and properties of Hilbert spaces over the field of complex numbers. Next, we define the concepts of qubit and p-qubit, along with the quantum logic gates that operate on them. Finally, we develop some important algorithms with specific applications that show the relevance of this kind of logic. The aim of this document is to serve as an introduction, based on concepts covered by the Mathematics undergraduate program, to the world of Quantum Computing, without requiring any previous knowledge of Physics; in doing so, it seeks to offer easier access to understanding and developing quantum algorithms.
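As a minimal sketch of the formalism described (our example, with NumPy): a qubit is a unit vector in $\mathbb{C}^2$ and quantum logic gates are unitary matrices acting on it.

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)                       # |0>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

    state = H @ ket0                  # (|0> + |1>)/sqrt(2), an equal superposition
    probs = np.abs(state) ** 2        # measurement probabilities
    print(probs)                      # [0.5, 0.5]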
Direction
FERNANDEZ FERNANDEZ, FRANCISCO JAVIER (Tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Evaluation of the Effective Temperature through Quantum Simulation
Authorship
A.C.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 17:00
Summary
An important line of research in the field of quantum computing is focused on noise detection and mitigation. In this report, we will focus on methods for measuring the effective temperature, a quantity used for the estimation of the residual population of the excited state of a qubit, due to thermal fluctuations in the device. First, we study the physical systems of transmon superconducting qubits and their dispersive readout process, which are the ones available in QMIO. Based on them, we develop a stochastic simulation on which the measurement methods can be validated. Finally, we study a method for measuring the effective temperature based on e-f Rabi oscillations, validating its hypotheses and obtaining results through simulation.
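The quantity at stake can be stated compactly (for reference, in our notation): if $P_e/P_g$ is the measured ratio of excited- to ground-state populations of a qubit with transition frequency $\omega$, the Boltzmann relation $P_e/P_g = \exp(-\hbar\omega/k_B T_{\mathrm{eff}})$ gives $T_{\mathrm{eff}} = \hbar\omega/(k_B \ln(P_g/P_e))$.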
Direction
MAS SOLE, JAVIER (Tutorships)
Gómez Tato, Andrés (Co-tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
High-precision quadrature
Authorship
J.C.S.
Bachelor of Mathematics
Defense date
07.16.2025 17:30
Summary
In this work, we address the study of numerical methods for the approximate calculation of definite integrals through quadrature formulas, fundamental techniques when the exact value of the integral is unknown. Specifically, we focus on polynomial interpolatory formulas, which approximate the true value by integrating an interpolation polynomial. We examine in depth three main quadrature methods. First of all, the Gauss formulas, known for their high accuracy using a few nodes. Then, the Romberg method, which combines the composite trapezoidal rule with Richardson extrapolation. Finally, we study the use of endpoint corrections for the composite trapezoidal and Simpson’s rules. For each method, we explore its theoretical formulation, error behavior, and the conditions required to achieve accuracy. Additionally, we provide examples and tables illustrating the variation of the approximation error for each method.
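As a hedged Python sketch of the Romberg construction described above (ours; the thesis presents the method, not this code):

    import numpy as np

    def romberg(f, a, b, levels=6):
        # Romberg table: composite trapezoid refined by Richardson extrapolation.
        R = np.zeros((levels, levels))
        R[0, 0] = 0.5 * (b - a) * (f(a) + f(b))
        for i in range(1, levels):
            n = 2 ** i
            h = (b - a) / n
            # refine the trapezoid rule, reusing the previous level's sum
            new = sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
            R[i, 0] = 0.5 * R[i - 1, 0] + h * new
            for j in range(1, i + 1):
                R[i, j] = R[i, j - 1] + (R[i, j - 1] - R[i - 1, j - 1]) / (4 ** j - 1)
        return R[levels - 1, levels - 1]

For instance, romberg(np.sin, 0, np.pi) returns a value very close to 2.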
Direction
López Pouso, Óscar (Tutorships)
BARRAL RODIÑO, PATRICIA (Co-tutorships)
Court
CRUJEIRAS CASAIS, ROSA MARÍA (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
DOMINGUEZ VAZQUEZ, MIGUEL (Member)
Sudoku: An Application of Operations Research
Authorship
L.D.G.
Bachelor of Mathematics
Defense date
07.15.2025 10:40
Summary
This work focuses on the mathematical and computational study of Sudoku solving, starting from its connection with Latin squares. Its existence and enumeration are analyzed, laying the groundwork for a solid theoretical framework to understand the underlying structure of Sudoku. The game is then described in a historical and formal context, examining its rules and properties, the total number of possible boards, as well as the problem of the minimum number of clues required to guarantee a unique solution. The most commonly used manual solving techniques, both basic and advanced, are also described. The main part of this work explores several solution methodologies using mathematical programming, including linear programming, backtracking algorithms, evolutionary methods (such as genetic algorithms), and simulated annealing, along with graph-based models. Furthermore, the analysis is extended to some Sudoku variants. Finally, a comparative evaluation of the computational efficiency of all proposed methodologies is presented, based on implementations in the R programming language.
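Of the methodologies listed, backtracking is the most compact to illustrate; a hedged Python sketch (ours; the thesis's implementations are in R):

    def solve(board):
        # Backtracking Sudoku solver; board is a 9x9 list of lists, 0 = empty.
        for r in range(9):
            for c in range(9):
                if board[r][c] == 0:
                    for v in range(1, 10):
                        if valid(board, r, c, v):
                            board[r][c] = v
                            if solve(board):
                                return True
                            board[r][c] = 0      # undo and try the next value
                    return False                 # dead end: backtrack
        return True                              # no empty cell left: solved

    def valid(board, r, c, v):
        if v in board[r] or any(board[i][c] == v for i in range(9)):
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)      # top-left corner of the 3x3 box
        return all(board[br + i][bc + j] != v for i in range(3) for j in range(3))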
Direction
SAAVEDRA NIEVES, ALEJANDRO (Tutorships)
DAVILA PENA, LAURA (Co-tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Study and optimization of Octree models for neighbourhood searches in 3D point clouds
Authorship
P.D.V.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.18.2025 12:30
Summary
This work presents an approach for efficient neighbour search in 3D point clouds, particularly those acquired via LiDAR technology. The proposed method involves a spatial reordering of the point cloud based on Space-Filling Curves (SFCs), coupled with efficient implementations of Octree-based search algorithms. The study explores how Morton and Hilbert SFCs can optimize the organization of three-dimensional data, improving the performance of neighbourhood queries and the construction of spatial structures over the cloud. Several Octree variants are proposed and evaluated, analyzing their impact on the efficiency and scalability of the proposed method. Experimental results demonstrate that SFC reordering, along with specialized Octree-based search algorithms, significantly enhances spatial data access, providing a robust solution for applications requiring fast processing of large 3D point cloud datasets. Part of this work was presented at the XXXV Jornadas de Paralelismo organized by SARTECO.
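As a hedged illustration of the Morton (Z-order) reordering mentioned above (ours, using the standard bit-interleaving constants for 10-bit coordinates):

    def part1by2(x):
        # Spread the 10 low bits of x so two zero bits separate consecutive bits.
        x &= 0x3FF
        x = (x | (x << 16)) & 0xFF0000FF
        x = (x | (x << 8)) & 0x0300F00F
        x = (x | (x << 4)) & 0x030C30C3
        x = (x | (x << 2)) & 0x09249249
        return x

    def morton3d(ix, iy, iz):
        # Interleave three 10-bit grid coordinates into a 30-bit Morton code.
        return part1by2(ix) | (part1by2(iy) << 1) | (part1by2(iz) << 2)

Sorting points by the Morton code of their quantized coordinates places spatially close points close in memory, which is what speeds up the Octree queries.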
Direction
Fernández Rivera, Francisco (Tutorships)
YERMO GARCIA, MIGUEL (Co-tutorships)
Court
Pardo López, Xosé Manuel (Chairman)
SUAREZ GAREA, JORGE ALBERTO (Secretary)
CARIÑENA AMIGO, MARIA PURIFICACION (Member)
Crossed modules of groups
Authorship
M.E.L.
Bachelor of Mathematics
Defense date
07.16.2025 18:00
Summary
The mathematical concept of a crossed module was introduced by J. H. C. Whitehead in 1949 with the aim of modeling spaces of homotopy 2-type. Crossed modules exist for a wide variety of algebraic structures; in this work, we will consider exclusively the group structure. In this context, a crossed module of groups is a group homomorphism \mu: M \to N together with an action of the group N on the group M by automorphisms, such that two fundamental conditions are satisfied: the equivariance of \mu and the Peiffer identity. Crossed modules have various algebraic properties and are categorically equivalent to structures such as Cat^1-groups, strict 2-groups, and strict categorical groups. Describing these properties and equivalences will be the main objective of this work.
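Explicitly, in standard notation (ours): writing ${}^{n}m$ for the action of $n \in N$ on $m \in M$, equivariance reads $\mu({}^{n}m) = n\,\mu(m)\,n^{-1}$, and the Peiffer identity reads ${}^{\mu(m)}m' = m\,m'\,m^{-1}$ for all $m, m' \in M$ and $n \in N$.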
Direction
LADRA GONZALEZ, MANUEL EULOGIO (Tutorships)
RAMOS PEREZ, BRAIS (Co-tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Automatic Differentiation through computational graphs
Authorship
L.E.G.
Bachelor of Mathematics
Defense date
07.17.2025 10:00
Summary
Automatic Differentiation is a differentiation technique that combines the ideas of symbolic and numerical differentiation with the goal of accurately evaluating derivatives of functions at given points. Two main approaches are typically used: forward mode and reverse mode. Although both will be discussed, the primary focus of this work is on the reverse mode, due to its advantages in scenarios involving functions with many input variables. The study explores the benefits of reverse mode for gradient computation, its extension to multiple independent variables and higher-order derivatives, and the construction of the computational graph required for its implementation in MATLAB. Finally, practical applications are examined, particularly in the context of gradient-based optimization processes, such as those used in training neural networks.
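The thesis builds its graphs in MATLAB; as a hedged Python analogue of reverse mode over a recorded computational graph (a toy tape of our own, not the thesis's design):

    class Var:
        # Graph node: a value plus (parent, local derivative) edges.
        def __init__(self, value, parents=()):
            self.value, self.parents, self.grad = value, list(parents), 0.0
        def __add__(self, other):
            return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
        def __mul__(self, other):
            return Var(self.value * other.value,
                       [(self, other.value), (other, self.value)])

    def backward(out):
        # Reverse pass: visit nodes in reverse topological order.
        order, seen = [], set()
        def topo(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    topo(parent)
                order.append(node)
        topo(out)
        out.grad = 1.0
        for node in reversed(order):
            for parent, local in node.parents:
                parent.grad += local * node.grad

    x, y = Var(3.0), Var(4.0)
    z = x * y + x          # z = xy + x
    backward(z)
    print(x.grad, y.grad)  # 5.0 (= y + 1) and 3.0 (= x)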
Direction
PENA BRAGE, FRANCISCO JOSE (Tutorships)
RODRIGUEZ GARCIA, JERONIMO (Co-tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Introduction to Markov Chains and their Applications to Games of Chance.
Authorship
A.F.B.
Bachelor of Mathematics
Defense date
07.16.2025 12:30
Summary
Markov chains are processes in which the probability of future events does not depend on the past, but only on the current situation. These chains are very useful in several fields such as biology, chemistry, physics and computer science. In this project we will analyze Markov chains and apply the studied concepts to games based on luck, like Blackjack or Snakes and Ladders. These games, besides depending on chance, are often influenced by the players’ strategies; therefore, understanding the theory behind them can help the players make better decisions.
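As a hedged mini-example of the machinery involved (ours, not from the thesis): the distribution after n steps is the initial distribution multiplied by the n-th power of the transition matrix.

    import numpy as np

    # Toy 3-state chain in which state 2 is absorbing (think: the final square).
    P = np.array([[0.0, 0.5, 0.5],
                  [0.0, 0.5, 0.5],
                  [0.0, 0.0, 1.0]])

    start = np.array([1.0, 0.0, 0.0])               # begin in state 0
    after = start @ np.linalg.matrix_power(P, 10)   # distribution after 10 moves
    print(after)                                    # mass concentrates on state 2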
Direction
AMEIJEIRAS ALONSO, JOSE (Tutorships)
Bolón Rodríguez, Diego (Co-tutorships)
Court
CRUJEIRAS CASAIS, ROSA MARÍA (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
DOMINGUEZ VAZQUEZ, MIGUEL (Member)
Computational aspects of goodness-of-fit tests for linear regression models
Authorship
G.F.F.
Bachelor of Mathematics
Defense date
07.16.2025 13:15
Summary
This work will present an introduction to the multiple linear regression model and to parametric regression models, as well as to the classical assumptions typically imposed on them. One of these assumptions is the linearity (respectively, the parametric form) of the model, which will lead us to focus on the goodness-of-fit test introduced by Stute (1997), highly useful for checking whether a certain model follows some specified parametric form. Due to the complexity of estimating the distribution of the proposed test statistic, we will present a bootstrap approximation, more specifically the wild bootstrap applied to the residuals of the considered model, in order to carry out the calibration of the test in practice. Later, we will implement this test in R and carry out a simulation study with the aim of empirically assessing the performance of the test, verifying that it respects the nominal level under the null hypothesis and shows good power, that is, that it rejects the null hypothesis when we consider models under the alternative. Finally, we will present a real data application which will allow us to illustrate the utility of the presented procedure in practice.
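A hedged sketch of the residual perturbation step of the wild bootstrap (Python analogue; the thesis works in R, and may use a multiplier distribution other than the Rademacher signs shown here):

    import numpy as np

    def wild_bootstrap_responses(fitted, residuals, rng):
        # One wild-bootstrap resample: perturb each residual by a random sign.
        v = rng.choice([-1.0, 1.0], size=residuals.shape)   # Rademacher weights
        return fitted + residuals * v

Refitting the parametric model on each resample and recomputing the test statistic yields the bootstrap approximation of its null distribution.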
Direction
CONDE AMBOAGE, MERCEDES (Tutorships)
Court
CRUJEIRAS CASAIS, ROSA MARÍA (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
DOMINGUEZ VAZQUEZ, MIGUEL (Member)
Introduction to the Black-Scholes model
Authorship
M.F.P.
Bachelor of Mathematics
Defense date
07.16.2025 16:00
Summary
Published in 1973, the Black-Scholes model represented a significant breakthrough in the theory of financial option pricing, as it provided an explicit solution for the theoretical price of European options. The objective of this paper is to present an introduction to this model. The document begins by explaining basic financial concepts and then addresses the necessary mathematical foundations on which the model is based, particularly highlighting the geometric Brownian motion and Itô’s Lemma. Subsequently, with the support of these tools, the Black-Scholes differential equation is formally derived, followed by the presentation of the explicit formula for European options. A practical case is also included to illustrate the usefulness of this model in real-world situations. Finally, the paper presents a series of factors that limit the model’s applicability in today’s financial markets, and discusses possible extensions and modifications designed to adapt the model to these realities.
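The explicit formula referred to above is, for a European call with spot $S_0$, strike $K$, risk-free rate $r$, volatility $\sigma$ and maturity $T$: $C = S_0\,\Phi(d_1) - K e^{-rT}\,\Phi(d_2)$, with $d_1 = \left(\ln(S_0/K) + (r + \sigma^2/2)T\right)/(\sigma\sqrt{T})$, $d_2 = d_1 - \sigma\sqrt{T}$, and $\Phi$ the standard normal distribution function.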
Direction
AMEIJEIRAS ALONSO, JOSE (Tutorships)
GINZO VILLAMAYOR, MARIA JOSE (Co-tutorships)
Court
CRUJEIRAS CASAIS, ROSA MARÍA (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
DOMINGUEZ VAZQUEZ, MIGUEL (Member)
Statistical analysis of frequency tables
Authorship
X.F.S.
Bachelor of Mathematics
Defense date
07.16.2025 16:45
Summary
In this paper, a review of statistical methods for the analysis of frequency tables is presented. First, the probability distributions for discrete variables most relevant to frequency tables are reviewed: Binomial, Multinomial, Poisson and Hypergeometric. Next, a review of fundamental concepts related to Statistical Inference and its applications to contingency tables is carried out. Moreover, two-dimensional contingency tables are studied, starting with $2\times2$ tables and continuing with general $I\times J$ tables. The chi-squared test, the likelihood ratio test and Fisher's exact test are reviewed. Not only are illustrative examples presented, but also very visual graphical methods such as barplots, fourfold plots, sieve diagrams and mosaic plots. The paper ends with the study of multidimensional contingency tables, which involves the study of partial and marginal tables, culminating in the Mantel-Haenszel test and the Breslow-Day-Tarone test. Odds ratios are revealed to be a very useful tool for the analysis of association patterns in a contingency table. The paper is accompanied by examples, as well as R code for the implementation of the considered statistical methods.
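As a hedged illustration of the basic computation (ours; the paper's own code is in R), a Python sketch of the chi-squared test and the odds ratio on a hypothetical $2\times2$ table:

    import numpy as np
    from scipy.stats import chi2_contingency

    table = np.array([[30, 10],      # hypothetical 2x2 frequency table
                      [20, 40]])
    chi2, p, dof, expected = chi2_contingency(table)
    odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
    print(chi2, p, dof, odds_ratio)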
Direction
SANCHEZ SELLERO, CESAR ANDRES (Tutorships)
Court
CRUJEIRAS CASAIS, ROSA MARÍA (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
DOMINGUEZ VAZQUEZ, MIGUEL (Member)
Analysis and Comparison of Concept Drift Detection and Adaptation Techniques
Authorship
F.F.M.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 10:00
Summary
This work compares different techniques for the detection of and adaptation to Concept Drift, a phenomenon that arises in dynamic and non-stationary environments where the statistical relationships between the model's variables change over time, affecting its performance in some cases. The main objective is to evaluate various drift detectors present in the literature, along with the analysis of several adaptation techniques following detection. The study is carried out in an experimental setting using artificial datasets that simulate different types of drift, applied to a classification model. Classical algorithms from the RiverML library are analyzed, in scenarios with and without prior knowledge of the environment. In the first case, a hyperparameter optimization is applied using a novel method based on Random Search. Regarding KSWIN, one of the evaluated algorithms, a modification developed as a complementary part of the Bachelor's Thesis in Mathematics is incorporated. This modification introduces statistical techniques such as multiple hypothesis testing and the Benjamini-Hochberg correction to improve the detection process, as well as a system for identifying the type of drift through non-parametric inference, which is considered innovative in the literature. The results highlight the strengths and limitations of both the detectors and the adaptation strategies analyzed. While some algorithms, such as HDDMW, show good overall performance, the choice of the most appropriate detector largely depends on the use case and the type of drift present. Likewise, adaptation based on mini-batches offers solid performance compared to periodic retraining. In addition, the proposed modification of KSWIN outperforms other detectors in terms of balancing false positives and false negatives during the detection process, and establishes a solid foundation for the drift type identification method.
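A hedged sketch of the sliding-window idea underlying KSWIN (heavily simplified; this is not the modified detector developed in the thesis): compare a recent slice of the stream against older data with a two-sample Kolmogorov-Smirnov test.

    from collections import deque
    from scipy.stats import ks_2samp

    class KSWindowDetector:
        # Flag drift when recent data differ from older data (two-sample KS test).
        def __init__(self, window=200, recent=50, alpha=0.005):
            self.buf = deque(maxlen=window)
            self.recent, self.alpha = recent, alpha

        def update(self, x):
            self.buf.append(x)
            if len(self.buf) < self.buf.maxlen:
                return False                     # not enough history yet
            data = list(self.buf)
            old, new = data[:-self.recent], data[-self.recent:]
            return ks_2samp(old, new).pvalue < self.alpha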
Direction
MERA PEREZ, DAVID (Tutorships)
Court
LADRA GONZALEZ, MANUEL EULOGIO (Chairman)
LOPEZ FANDIÑO, JAVIER (Secretary)
VIDAL AGUIAR, JUAN CARLOS (Member)
Undergraduate dissertation
Authorship
C.F.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2025 10:00
Summary
The van der Waals material 1T-CrSe2 consists of strongly covalently bonded in-plane layers held together out of plane by weak van der Waals forces. These systems exhibit an intermediate behavior between localized electrons, with well-defined magnetic moments at each atomic site, and itinerant electrons, delocalized across conduction bands without a net moment assignable to each atom. To investigate its magnetic ordering and electronic instabilities, we performed ab initio calculations using the WIEN2k code, employing the FP-LAPW plus local orbitals method and the PBE generalized gradient approximation. First, the rigid 1×1 monolayer was characterized: its ground state is ferromagnetic and its density of states (DOS) is dominated at the Fermi level by Cr t2g orbitals, indicating electronic instabilities. To capture the charge density waves (CDWs) associated with Peierls modes, we constructed 2×2 and √3×√3 supercells. Relaxation of the 2×2 cell reveals Cr tetramers and a partial band gap characteristic of a unidirectional Peierls mechanism. The √3×√3 reconstruction simultaneously trimerizes all three t2g orbitals, opening pseudogaps above and below the Fermi level and leaving only a single residual flat band. Because conventional DFT tends to underestimate the local Coulomb interaction, we applied the LDA+U correction to the Cr d orbitals. For moderate values of U, the pseudogap opening is enhanced and the ferromagnetic CDW phase is further stabilized, whereas an excessive U induces subband reordering that reintroduces DOS peaks at the Fermi level.
Direction
PARDO CASTRO, VICTOR (Tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Automatic Segmentation of Preclinical Magnetic Resonance Imaging
Authorship
A.G.A.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 09:00
Summary
The main objective of this work is to develop an automated system for the detection and segmentation of glioblastomas, an aggressive type of brain tumor, in animal models (mice and rats) using preclinical magnetic resonance imaging (MRI). For this purpose, supervised learning techniques are employed to automatically segment the glioblastoma and accurately calculate its volume. Chapter 3 presents the U-Net model, a convolutional neural network specialized in medical segmentation tasks. This model is trained using MRI images of mice with manual segmentations (ground truth), with the aim of automatically locating the glioblastoma in new images. The impact of different hyperparameter configurations is explored, and the model's performance is evaluated using specific segmentation metrics. Chapter 4 focuses on the development of predictive models to estimate tumor volume in mice, based on the same images used in the previous chapter. The process includes the extraction of radiomic features, their comparison with those previously obtained by researcher Sara Ortega, the selection of the most relevant variables, and the training of regression models. Again, various combinations of hyperparameters are analyzed to study their influence on prediction quality. In Chapter 5, the previous procedure is replicated, this time using rat images. In addition, an analysis is conducted on the impact of increasing the sample size on the performance of the predictive models, training the algorithms with different amounts of data. Finally, the possibility of building a model to predict animal survival from the images was considered. However, a preliminary analysis revealed that the available data were insufficient to obtain reliable predictions, so this possibility was proposed as future research.
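The abstract does not name its segmentation metrics; the Dice coefficient is the usual choice for this kind of task, so a minimal NumPy version is sketched here purely for orientation.

    import numpy as np

    def dice_coefficient(pred, truth, eps=1e-7):
        """Dice overlap between a binary predicted mask and the ground
        truth (both NumPy arrays of the same shape)."""
        pred = np.asarray(pred).astype(bool)
        truth = np.asarray(truth).astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)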
Direction
IGLESIAS REY, RAMON (Tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
Shallow water equations: analytic and numeric solutions
Authorship
M.G.C.
Bachelor of Mathematics
Defense date
07.17.2025 10:40
Summary
Shallow water equations are of notable relevance in hydraulics and environmental sciences. Having access to a simple mathematical model capable of describing reality with precision brings substantial benefits. Following this motivation, this work introduces the shallow water equations together with their analytical and numerical solutions. We begin by studying the mathematical properties of the corresponding system of hyperbolic conservation laws, then obtain some particular solutions, and finally approach the design and validation of numerical methods for solving the equations. The MATLAB code developed is based on first-order finite volume schemes and computes approximate solutions of the classic Riemann problem.
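The thesis's code is MATLAB; purely as an illustration of a first-order finite volume update for the 1D shallow water system, here is a Python sketch using the Rusanov flux (the choice of numerical flux is an assumption, not taken from the abstract).

    import numpy as np

    g = 9.81

    def rusanov_step(U, dx, dt):
        """One first-order finite-volume step for the 1D shallow water
        equations. U has shape (2, N): rows are h and hu; h must stay > 0."""
        h, hu = U
        F = np.array([hu, hu**2 / h + 0.5 * g * h**2])   # physical flux
        c = np.abs(hu / h) + np.sqrt(g * h)              # wave-speed bound per cell
        s = np.maximum(c[:-1], c[1:])                    # per-interface speed
        Fhat = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * s * (U[:, 1:] - U[:, :-1])
        Unew = U.copy()
        Unew[:, 1:-1] -= dt / dx * (Fhat[:, 1:] - Fhat[:, :-1])
        return Unew

A dam-break Riemann problem corresponds to piecewise-constant initial data (for instance h = 2 on the left, h = 1 on the right, hu = 0), with dt chosen so the CFL condition holds.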
Direction
VAZQUEZ CENDON, MARIA ELENA (Tutorships)
BUSTO ULLOA, SARAY (Co-tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Introduction to Minimal Surfaces
Authorship
J.G.G.
Bachelor of Mathematics
Defense date
07.16.2025 10:45
Summary
The investigation of the theory of minimal surfaces, which remains fully active today, began in the 18th century with valuable contributions from illustrious mathematicians such as L. Euler and J. Lagrange. We begin this work by collecting the first approaches to and definitions of minimal surfaces, sometimes from different perspectives (physical, geometric, or analytic). One of the most significant advances in this subject occurred between 1861 and 1864 with the incorporation of Complex Analysis into the study of minimal surfaces, culminating in what we know today as the Weierstrass-Enneper representation, which we also discuss in this text. Finally, we move on to study one of the main tools in the context of surface theory: the Maximum Principle. This result allows us to compare and distinguish surfaces through the analysis of their respective mean curvatures.
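For reference, one standard form of the representation discussed here (stated as in most textbooks; the thesis's notation may differ): given a holomorphic function f and a meromorphic function g on a simply connected domain, with f g^2 holomorphic, the map

    \[
      \mathbf{x}(\zeta) \;=\; \operatorname{Re} \int_{\zeta_0}^{\zeta}
      \Bigl( \tfrac{1}{2}\, f\,(1-g^{2}),\;
             \tfrac{i}{2}\, f\,(1+g^{2}),\;
             f\,g \Bigr)\, d\zeta
    \]

defines a conformal minimal immersion, and locally every minimal surface arises in this way.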
Direction
SANMARTIN LOPEZ, VICTOR (Tutorships)
Court
GARCIA RODICIO, ANTONIO (Chairman)
CAO LABORA, DANIEL (Secretary)
Gómez Tato, Antonio M. (Member)
What to do with a sample and a computer? Applying the bootstrap methodology to the calculation of confidence intervals
Authorship
C.G.G.
Bachelor of Mathematics
Defense date
07.16.2025 09:15
Summary
In Statistics, one of the main objectives is to estimate the value of a parameter that characterizes a population. Likewise, it is of interest to study the properties of such estimates, such as the uncertainty associated with the estimator. This is the aim of the jackknife method, from which the bootstrap method emerged as an improved version. Uniform bootstrap, which is the simplest version, is useful in situations where the population distribution is completely unknown and only the sample is available. However, when certain properties of the underlying distribution are known, other variants can be used that yield better results than the uniform one. This work presents the different bootstrap procedures, as well as the associated algorithms, in combination with the Monte Carlo method. Currently, the applications of the bootstrap are numerous. One of the most relevant is the construction of confidence intervals for various parameters. In this work, we compare, based on their coverage error, the method based on the asymptotic Normal distribution and three bootstrap variants: the basic percentile method, the percentile-t, and the symmetrized percentile-t. These three procedures, whose construction is based on the pivotal method, differ in the definition of the pivot used. Notably, the absolute studentized pivot used in the symmetrized percentile-t provides more accurate confidence intervals.
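A minimal sketch of two of the compared constructions, for the mean of a sample (NumPy only; the number of resamples B and the level alpha are illustrative).

    import numpy as np

    rng = np.random.default_rng(0)

    def percentile_ci(x, B=2000, alpha=0.05):
        """Basic percentile bootstrap CI for the mean."""
        means = np.array([rng.choice(x, size=len(x)).mean() for _ in range(B)])
        return np.quantile(means, [alpha / 2, 1 - alpha / 2])

    def percentile_t_ci(x, B=2000, alpha=0.05):
        """Percentile-t CI: studentize each bootstrap replicate."""
        n, xbar, s = len(x), x.mean(), x.std(ddof=1)
        t = np.empty(B)
        for b in range(B):
            xs = rng.choice(x, size=n)
            t[b] = (xs.mean() - xbar) / (xs.std(ddof=1) / np.sqrt(n))
        q_lo, q_hi = np.quantile(t, [alpha / 2, 1 - alpha / 2])
        return xbar - q_hi * s / np.sqrt(n), xbar - q_lo * s / np.sqrt(n)

The symmetrized variant replaces the two quantiles of t by a single quantile of |t|, which is what tends to improve coverage.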
Direction
BORRAJO GARCIA, MARIA ISABEL (Tutorships)
Court
CRUJEIRAS CASAIS, ROSA MARÍA (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
DOMINGUEZ VAZQUEZ, MIGUEL (Member)
The Functional Linear Regression Model
Authorship
L.G.R.
Bachelor of Mathematics
Defense date
07.16.2025 10:00
Summary
Functional data analysis is the branch of statistics concerned with the study of functions as probabilistic objects, rather than scalars or vectors, as is the case in the traditional statistical framework. The aim of this work is to provide an introduction to the statistics of functional data, establishing its theoretical foundations with regression as a central topic. Two functional linear regression models are formulated, depending on whether the response variable is scalar or functional, and a range of estimation methods as well as significance testing procedures applicable to each model are presented.
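A minimal sketch of the scalar-response model Y = a + ∫ X(t) b(t) dt + e, estimated by expanding b(t) in a small basis. The monomial basis below is an illustrative choice only; in practice splines or functional principal components are typical.

    import numpy as np

    def fit_scalar_on_function(X, y, t, K=5):
        """Least-squares fit of Y = a + int X(t) b(t) dt, with
        b(t) = sum_k c_k t**k. X has shape (n, len(t))."""
        basis = np.vstack([t**k for k in range(K)])            # K x len(t)
        # design: Z[i, k] = int X_i(t) t**k dt (trapezoidal rule)
        Z = np.trapz(X[:, None, :] * basis[None, :, :], t, axis=2)
        D = np.column_stack([np.ones(len(y)), Z])
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        a, c = coef[0], coef[1:]
        return a, c @ basis                                    # intercept, b(t) on the grid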
Direction
GONZALEZ MANTEIGA, WENCESLAO (Tutorships)
Court
CRUJEIRAS CASAIS, ROSA MARÍA (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
DOMINGUEZ VAZQUEZ, MIGUEL (Member)
Bankruptcy problems
Authorship
H.G.S.
Bachelor of Mathematics
Defense date
07.15.2025 11:25
Summary
The motivation stems from a concrete and highly significant problem that had to be resolved years ago and left the world paralyzed: the COVID crisis. The initial distribution of masks was crucial in limiting the virus’s spread, forcing different states to face an allocation problem of a scarce resource across their territories. This work examines the bankruptcy problem, a classic model in game theory that can also be approached axiomatically, representing situations in which a group of agents claims more than what is available from a limited resource. This type of problem arises in numerous social and economic contexts where it is necessary to distribute a scarce good among different interested parties, such as geopolitics, business economics, or public asset management. The main objective of this work is to conduct a bibliographic review of the different proposed solutions for these situations, studying their mathematical properties, conceptual justifications and potential applications, whether using existing game theory models or developing specific ones. Various rules are analyzed, such as proportional division, the Talmud solution, the Shapley value, priority rules, and geometric rules, among others, comparing their behavior in terms of fairness, consistency, and efficiency, all applied to the practical case.
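Two of the reviewed rules are easy to state in code. A sketch, assuming the estate E does not exceed the total claim (the standing assumption of the bankruptcy model):

    import numpy as np

    def proportional_rule(E, claims):
        """Each claimant receives a share proportional to their claim."""
        c = np.asarray(claims, dtype=float)
        return E * c / c.sum()

    def cea_rule(E, claims, tol=1e-10):
        """Constrained Equal Awards: everyone receives min(c_i, lam),
        with lam found by bisection so the awards exhaust E."""
        c = np.asarray(claims, dtype=float)
        lo, hi = 0.0, c.max()
        while hi - lo > tol:
            lam = (lo + hi) / 2
            if np.minimum(c, lam).sum() < E:
                lo = lam
            else:
                hi = lam
        return np.minimum(c, lam)

For instance, proportional_rule(60, [30, 70]) returns awards of 18 and 42, while cea_rule(60, [30, 70]) awards 30 to each claimant.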
Direction
SAAVEDRA NIEVES, ALEJANDRO (Tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Shortcuts to adiabaticity in simple quantum systems
Authorship
J.G.C.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 09:00
Summary
Given a quantum system in which we want to modify some control parameter without changing its energy level, the adiabatic theorem allows us to do so, provided the change is made sufficiently slowly. This slowness makes the system more vulnerable to effects such as noise and decoherence, which result in the loss of its quantum properties. The methods known as shortcuts to adiabaticity are designed to overcome this difficulty: they accelerate the preparation of the system without relying on the adiabatic theorem. In particular, this work focuses on the method known as Counterdiabatic Driving (CD), based on introducing an additional term in the Hamiltonian of the system that suppresses transitions between instantaneous eigenstates. First, we motivate and give the precise statement of the adiabatic theorem. Then, we introduce the CD formalism and apply it to several simple systems. Finally, a brief introduction to other shortcuts to adiabaticity and their limits is given.
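As a pointer to the formalism, the counterdiabatic term has the standard closed form (in the Demirplak-Rice and Berry formulation; the thesis's conventions may differ):

    \[
      H_{\mathrm{CD}}(t) \;=\; i\hbar \sum_{n}
      \bigl( \lvert \partial_t n \rangle \langle n \rvert
      \;-\; \langle n \vert \partial_t n \rangle\,
      \lvert n \rangle \langle n \rvert \bigr),
    \]

where the |n(t)⟩ are the instantaneous eigenstates of H(t). For a two-level system H = (ħ/2)(Δσ_z + Ωσ_x) with mixing angle θ = arctan(Ω/Δ), this reduces to the compact expression H_CD = (ħ θ̇ / 2) σ_y.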
Direction
Vázquez Ramallo, Alfonso (Tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
Robust vision-language models for small objects
Authorship
N.G.S.D.V.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 11:00
Summary
This work focuses on the optimization of vision-language models (VLMs) applied to visual question answering (VQA) tasks on videos containing small objects, a scenario that presents significant computational challenges due to the large number of irrelevant visual tokens generated. To address this issue, a method is proposed based on the integration of a detector that identifies relevant visual regions in the video frames, allowing the filtering of background-associated visual tokens generated by the vision module (ViT) before being processed by the language model (LLM). Experimental results show that this filtering effectively eliminates a large proportion of visual tokens, leading to a notable reduction in the computational complexity of the language model and, consequently, a decrease in the overall system complexity, without compromising performance. Furthermore, an improvement in the LLM’s execution time is observed, contributing to greater efficiency in textual processing. However, the overall inference time is still influenced by the ViT, which remains the main bottleneck due to high-resolution image processing, as well as by the additional computational cost introduced by the detector. This work validates the use of filtering techniques as an effective strategy to improve the efficiency of VLMs and opens new lines of research aimed at optimizing visual processing and exploring lighter-weight detectors.
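A sketch of the filtering idea with hypothetical names (the thesis's actual interfaces are not given in the abstract): keep only the ViT patch tokens whose patches intersect a detected region, and drop the rest before the LLM sees them.

    import numpy as np

    def keep_token_mask(boxes, grid_h, grid_w, patch=14):
        """Boolean mask over ViT patch tokens: True when the patch
        intersects any detected box (boxes in pixels: x0, y0, x1, y1).
        All names and the patch size are illustrative assumptions."""
        keep = np.zeros((grid_h, grid_w), dtype=bool)
        for x0, y0, x1, y1 in boxes:
            i0, i1 = int(y0 // patch), int(np.ceil(y1 / patch))
            j0, j1 = int(x0 // patch), int(np.ceil(x1 / patch))
            keep[i0:i1, j0:j1] = True
        return keep.ravel()      # aligns with the flattened token sequence

Applying tokens[keep_token_mask(boxes, grid_h, grid_w)] then yields the reduced token sequence passed to the language model.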
Direction
MUCIENTES MOLINA, MANUEL FELIPE (Tutorships)
CORES COSTA, DANIEL (Co-tutorships)
Court
LADRA GONZALEZ, MANUEL EULOGIO (Chairman)
LOPEZ FANDIÑO, JAVIER (Secretary)
VIDAL AGUIAR, JUAN CARLOS (Member)
Combinatorial Optimization and Heuristic Algorithms
Authorship
D.G.G.
Bachelor of Mathematics
Defense date
07.15.2025 12:10
Summary
The work will primarily consist of an in-depth analysis of combinatorial optimization problems, particularly three of them: the network flow problem, the knapsack problem, and the traveling salesman problem. The latter will include a practical simulation applied to real-world scenarios, allowing for an analysis of the efficiency of its main heuristics.
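As an illustration of the kind of heuristic evaluated for the traveling salesman problem, the nearest-neighbour construction heuristic fits in a few lines (a sketch, not the thesis's implementation):

    def nearest_neighbour_tour(dist):
        """Greedy nearest-neighbour heuristic for the TSP.
        dist is a symmetric n x n distance matrix; returns a city order."""
        n = len(dist)
        unvisited = set(range(1, n))
        tour = [0]
        while unvisited:
            last = tour[-1]
            nxt = min(unvisited, key=lambda j: dist[last][j])
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour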
Direction
CASAS MENDEZ, BALBINA VIRGINIA (Tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
A review of finite difference algorithms in parabolic models
Authorship
Y.G.G.
Bachelor of Mathematics
Defense date
07.15.2025 10:30
Summary
In this work, we will present different algorithms using the finite difference method to solve parabolic partial differential equations (PDEs). These numerical methods will be oriented towards simulating an application in bio-heat transfer. We will begin with the formalization of the parabolic model, followed by a focus on a specific case of significant relevance in bioengineering: the Pennes equation. The finite difference method will be introduced as a discretization tool, along with formulations in alternative coordinate systems. Next, four classical algorithms will be developed and analyzed: the explicit method, the implicit method, the Crank-Nicolson method, and the method of lines. For each of these, we will provide a detailed formulation, numerical analysis, MATLAB implementation, and validation. Finally, the model will be applied to a real case of breast tissue with a cyst, validating the obtained results and discussing their practical usefulness.
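The thesis implements these methods in MATLAB; as a minimal orientation, the explicit scheme for the model problem u_t = u_xx looks as follows in Python (the Pennes equation adds perfusion and source terms to this template):

    import numpy as np

    def explicit_heat_step(u, r):
        """One explicit finite-difference step for u_t = u_xx on a 1D grid,
        with r = dt / dx**2; stable only for r <= 1/2.
        Dirichlet boundary values are kept fixed."""
        un = u.copy()
        un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
        return un

The implicit and Crank-Nicolson variants replace this update by a tridiagonal linear solve, trading cost per step for unconditional stability.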
Direction
QUINTELA ESTEVEZ, PEREGRINA (Tutorships)
Court
QUINTELA ESTEVEZ, PEREGRINA (Student’s tutor)
Nilpotent centers in polynomial systems.
Authorship
A.I.V.
Bachelor of Mathematics
Defense date
07.17.2025 11:20
Summary
In this bachelor's final thesis, conducted within the field of Mathematical Analysis, we tackle the typification of global centers of two-dimensional polynomial dynamical systems, which is an open problem. After gathering a collection of fundamental results from the theory of dynamical systems, local and global centers are introduced through a classification based on their linearization. Owing to the problem's complexity, we confine the study to finding nilpotent centers in systems with quintic homogeneous polynomials and under certain symmetries; Hamiltonian systems are considered too. Studying global centers requires an analysis of orbit behaviour near infinity in the Euclidean plane. To face this, we draw on the Poincaré compactification, a technique that projects the vector field defined on R2 onto a sphere, where the points at infinity are identified with its equator. Finally, in order to understand the local structure of some singularities, we use blow-up transformations, which expand a singular point into a whole straight line, making it easier to analyze.
Direction
OTERO ESPINAR, MARIA VICTORIA (Tutorships)
Diz Pita, Érika (Co-tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Introduction to time series models
Authorship
I.L.C.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 09:10
Summary
The study of time series is a fundamental tool in the analysis of data that evolve over time, allowing us to model and predict phenomena in fields such as economics, meteorology, engineering or social sciences. This Final Degree Thesis deals with the analysis of time series from a classical statistical perspective, considering that the observed series are realisations of stochastic processes. Autoregressive (AR), moving average (MA), mixed (ARMA), integrated (ARIMA) and seasonal (SARIMA) models are presented in detail, explaining their mathematical formulation, parameter estimation methods and associated forecasting techniques. A methodology for time series analysis is also proposed, which will be used in the analysis of two real time series. In addition, a simulation study is included to illustrate and evaluate the estimation and forecasting processes of the models.
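A minimal sketch of the estimate-and-forecast loop described here, using statsmodels on a simulated AR(1) series (the coefficient 0.7 and the sample size are illustrative):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    # simulate an AR(1) series: x_t = 0.7 x_{t-1} + e_t
    e = rng.normal(size=500)
    x = np.zeros(500)
    for t in range(1, 500):
        x[t] = 0.7 * x[t - 1] + e[t]

    fit = ARIMA(x, order=(1, 0, 0)).fit()   # estimate the AR(1) model
    print(fit.params)                        # constant, AR coefficient, variance
    print(fit.forecast(steps=10))            # 10-step-ahead forecasts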
Direction
CRUJEIRAS CASAIS, ROSA MARÍA (Tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Theory of lower and upper solutions applied to ordinary differential equations.
Authorship
A.L.R.
Bachelor of Mathematics
Defense date
07.15.2025 16:00
Summary
The method of lower and upper solutions has been studied since the last decade of the 19th century due to its effectiveness in the analysis of nonlinear differential equations. This approach allows the establishment of the existence of solutions without the need to solve the problem explicitly. In this work, we first present the application of the lower and upper solution method to second-order ordinary differential equations with periodic conditions. Various existence results are proved, which also ensure that the solution or solutions are bounded by the lower and the upper solution. Next, we describe the monotone method, a constructive technique based on Green’s functions and the appropriate choice of a pair of lower and upper solutions. Unlike the previous case, this method is presented for ordinary differential equations of arbitrary order and under general boundary conditions, not necessarily periodic. The procedure generates monotone sequences that converge to the extremal solutions of the problem.
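For concreteness, one common formulation for the periodic second-order problem (sign conventions vary between references, so this is only one standard choice):

    \[
      u'' = f(t,u), \qquad u(0)=u(T),\ \ u'(0)=u'(T);
    \]
    \[
      \alpha'' \ge f(t,\alpha),\ \ \alpha(0)=\alpha(T),\ \ \alpha'(0)\ge\alpha'(T)
      \quad\text{(lower solution)},
    \]
    \[
      \beta'' \le f(t,\beta),\ \ \beta(0)=\beta(T),\ \ \beta'(0)\le\beta'(T)
      \quad\text{(upper solution)}.
    \]

Under this convention, if f is continuous and α ≤ β on [0, T], the problem admits at least one solution u with α ≤ u ≤ β.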
Direction
CABADA FERNANDEZ, ALBERTO (Tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Exploitation of Large Language Models for Automatic Annotation of Credibility
Authorship
P.L.P.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 11:30
Summary
This thesis focuses on the application of state-of-the-art language models, such as GPT-4 and Llama 3, to the labeling of health-related documents in the context of the TREC project. The main objective is to evaluate the possibility of replacing annotations made by human experts with labels generated by LLMs. The field of Information Retrieval requires large labeled datasets, which are created by expert human annotators, a process that is costly in terms of both time and money. If it can be proven that these human annotations can be replaced by automatically generated ones, this would represent a major advance in the generation of high-quality datasets. In this work, we will label the same web documents that were labeled by humans; this will allow us to analyze discrepancies between human labels and those generated by the models. We also studied the effect that the instructions given to the language model have on the accuracy of the labeling. We based our methodology on a publication by Microsoft researchers, in which the relevance of each document is labeled. The results obtained by Thomas et al. (2024) were very satisfactory and were implemented in Bing due to their improvement in time, cost, and quality compared to crowdsourced labelers. Our results represent an advance over this previous publication, as we carry out labeling of more complex features such as medical correctness and credibility. The results obtained in our work were in some cases very similar to those of Thomas et al., so we consider them positive enough to replace human labels.
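A minimal sketch of the labeling setup using the OpenAI Python SDK; the prompt wording, the 0-2 scale, and the model name below are illustrative placeholders, not the thesis's actual protocol.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PROMPT = (
        "You are a medical annotator. Rate the credibility of the following "
        "web document on a scale from 0 to 2 and answer with the number only.\n\n{doc}"
    )

    def label_credibility(document: str, model: str = "gpt-4o") -> str:
        """Ask the model for a credibility label for one document."""
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT.format(doc=document)}],
            temperature=0,   # deterministic labeling
        )
        return response.choices[0].message.content.strip()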
Direction
Losada Carril, David Enrique (Tutorships)
FERNANDEZ PICHEL, MARCOS (Co-tutorships)
Court
LADRA GONZALEZ, MANUEL EULOGIO (Chairman)
LOPEZ FANDIÑO, JAVIER (Secretary)
VIDAL AGUIAR, JUAN CARLOS (Member)
The problem of comparing populations: what to do when the t-test cannot be applied
Authorship
A.L.P.
Bachelor of Mathematics
Defense date
07.16.2025 10:45
Summary
Comparing populations is a topic within Statistical Inference which allows detecting differences among several groups. This document starts with an introduction to this issue, followed by a chapter dedicated to presenting the basic statistical concepts needed to correctly understand the full document, among which hypothesis testing stands out. After that, the problem of comparing populations is carefully explained, addressing the comparison of means, variances, and distribution functions. We work both with two populations and with more than two, and parametric methods are distinguished from non-parametric methods. The former assume normality, with the t-test belonging to this group, whereas the latter make no assumptions about the target populations. Two tests are added in order to check the normality hypothesis: the Lilliefors test and the Shapiro-Wilk test. Other tests to check for variance equality are included as well, such as Levene's test. Finally, the theoretical tools explained previously are used to analyze a dataset on forest fires in Galicia in 2006 using the R software.
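The thesis works in R; the same decision flow can be sketched in Python with scipy (here Shapiro-Wilk stands in for the normality checks, and the Mann-Whitney test for the nonparametric alternative):

    from scipy import stats

    def compare_two_samples(x, y, alpha=0.05):
        """Check normality first; then choose the t-test or a
        nonparametric alternative accordingly."""
        normal = (stats.shapiro(x).pvalue > alpha and
                  stats.shapiro(y).pvalue > alpha)
        if normal:
            equal_var = stats.levene(x, y).pvalue > alpha
            return stats.ttest_ind(x, y, equal_var=equal_var)
        return stats.mannwhitneyu(x, y)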
Direction
BORRAJO GARCIA, MARIA ISABEL (Tutorships)
Court
CRUJEIRAS CASAIS, ROSA MARÍA (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
DOMINGUEZ VAZQUEZ, MIGUEL (Member)
Sylow Theory and low order groups
Authorship
I.M.H.
Bachelor of Mathematics
Defense date
07.15.2025 12:00
Summary
The objective of this work is to study finite groups of order less than or equal to 30, grouping them into different categories according to the structure of their order in prime factors, fundamentally using techniques such as Sylow Theory. We will also introduce new tools to complete our study, such as the semidirect product. Normally, the analysis of the different groups will be done for the general case and then for the specific orders; however, when the general case proves too complicated, the study will focus only on the specific case, which will be examined in depth.
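For reference, the three classical statements used throughout such a classification: for a finite group G with |G| = p^a m and p not dividing m, G has a subgroup of order p^a (a Sylow p-subgroup), all Sylow p-subgroups are conjugate, and their number n_p satisfies

    \[
      n_p \equiv 1 \ (\mathrm{mod}\ p), \qquad n_p \mid m.
    \]

These divisibility conditions on n_p are what rule out most candidate structures for groups of small order.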
Direction
FERNANDEZ RODRIGUEZ, ROSA Mª (Tutorships)
COSTOYA RAMOS, MARIA CRISTINA (Co-tutorships)
Court
COSTOYA RAMOS, MARIA CRISTINA (Student’s tutor)
FERNANDEZ RODRIGUEZ, ROSA Mª (Student’s tutor)
Undergraduate dissertation
Authorship
D.M.M.
Bachelor of Mathematics
Defense date
07.15.2025 16:40
Summary
A research project focused on algebraic curves is the first step toward entering the realm of varieties. The geometric perspective from which they are studied serves as an important motivation for certain foundations and theorems in commutative algebra. Using the knowledge acquired in introductory courses on ring theory and number theory, one can draft the foundations of algebraic geometry. After introducing the notions of affine and projective varieties, it is common to present the concept of a singular point. The main idea in studying these points is to find which rings provide local information about the curves and, furthermore, whether it is possible to build examples of simple curves that preserve the properties of more complex ones.
Direction
ALONSO TARRIO, LEOVIGILDO (Tutorships)
ALVITE PAZO, RAUL (Co-tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Functional Analysis by G.D. Lugovaia and S.N. Sherstniov. Solved exercises.
Authorship
M.O.P.
Bachelor of Mathematics
Defense date
07.15.2025 17:20
Summary
In this Final Degree Project, we present the basic concepts of the theory of C*-algebras as taught by Galina Dmitrievna Lugovaia and Anatoli Nikolaevich Sherstniov, and solve the exercises they propose. We begin by introducing Banach algebras, studying their algebraic and topological properties, and culminating with the Gelfand representation, which relates them to spaces of continuous functions. Next, we study the particular case of C*-algebras, of great interest in the field of functional analysis. We analyze their structure and most important concepts: positive elements, approximate identities, morphisms, states...
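For reference, the representation mentioned here: for a commutative Banach algebra A with character space Δ(A), the Gelfand transform of an element a is

    \[
      \widehat{a}(\varphi) = \varphi(a), \qquad a \in A,\ \varphi \in \Delta(A),
    \]

a homomorphism from A into the continuous functions on Δ(A); when A is a commutative C*-algebra, the Gelfand-Naimark theorem upgrades it to an isometric *-isomorphism.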
Direction
LOSADA RODRIGUEZ, JORGE (Tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Model Specification in Survival Analysis
Authorship
A.A.P.B.
Bachelor of Mathematics
Defense date
07.17.2025 09:55
Summary
This project goes over some specification tests based on the distribution function and the density function, for both complete data samples and samples with random right censoring. Specification tests are explained, along with their elements. A nonparametric estimator of the distribution function is proposed for the complete-sample case, its properties are studied, and tests are laid out using different distances. Nonparametric estimation of the density function for complete samples is discussed, as well as some tests based on it. Survival analysis is presented, focusing on some of the most important functions for studying lifetime data and on frequently used parametric families; censoring, and especially random right censoring, is also introduced. A nonparametric estimator of the distribution function under random right censoring is proposed and parametric estimation is discussed; the tests for complete samples are adapted to this scenario. A nonparametric estimator of the density function under random right censoring is given, together with a test based on it. A nonparametric continuous estimator of the distribution function is studied along with a test using it.
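The standard nonparametric estimator under random right censoring is the Kaplan-Meier estimator, presumably the one meant here; a compact sketch (ties resolved with the usual events-before-censorings convention):

    import numpy as np

    def kaplan_meier(times, events):
        """Kaplan-Meier estimate of the survival function S(t).
        events[i] is 1 for an observed failure, 0 for a censored time."""
        times = np.asarray(times, dtype=float)
        events = np.asarray(events, dtype=bool)
        order = np.lexsort((~events, times))   # at ties, events come first
        S, curve = 1.0, []
        at_risk = len(times)
        for t, e in zip(times[order], events[order]):
            if e:
                S *= 1.0 - 1.0 / at_risk
                curve.append((t, S))           # step of the survival curve
            at_risk -= 1
        return curve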
Direction
GONZALEZ MANTEIGA, WENCESLAO (Tutorships)
VIDAL GARCIA, MARIA (Co-tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Classification trees and optimisation
Authorship
I.Q.R.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 10:40
Summary
This paper studies classification trees, a technique widely used in machine learning due to its simplicity, predictive capacity and ease of interpretation. In particular, different strategies for their construction are analysed, with special attention to methods based on mathematical optimisation. Three representative approaches are considered: Random Forests (RF), as a heuristic model based on tree ensembles; Optimal Classification Trees (OCT), which formulate the problem as a mixed-integer linear optimisation; and Optimal Randomized Classification Trees (ORCT), which use a continuous formulation that improves scalability while maintaining interpretability. The paper begins with a review of the fundamentals of statistical classification and decision tree-based methods. This is followed by a detailed description of the optimisation models that allow the construction of optimal trees. Finally, a comparative empirical study is performed using five datasets of varying complexity, evaluating each model for accuracy, training time, interpretability and practical feasibility. The results show that RF offers high performance at low computational cost, while ORCT strikes a balance between accuracy and scalability. In contrast, OCT, while theoretically attractive, has computational limitations that restrict its use to smaller scale problems.
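Only the heuristic baseline is easy to show compactly (OCT and ORCT require a mixed-integer or nonlinear solver); a scikit-learn sketch of the kind of RF evaluation described, on a public dataset rather than the thesis's five:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(rf, X, y, cv=5)   # 5-fold CV accuracy
    print(scores.mean())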
Direction
GONZALEZ DIAZ, JULIO (Tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Automatic Text Analysis Technologies for Personality Trait Estimation
Authorship
I.Q.R.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 10:15
Summary
This work is framed within the field of Text-based Personality Computing (TPC), which seeks to estimate personality traits from texts written by users using natural language processing (NLP) techniques. Traditionally, personality traits are measured with questionnaires, but these methods have limitations such as the subjectivity of the answers or the difficulty of applying them on a large scale. Thanks to advances in NLP, it is now possible to analyse texts and predict certain traits without the need for surveys. In this paper, we have used a Reddit dataset with information about the personality traits of its users and applied modern techniques to compare their texts with the items of the NEO-FFI questionnaire. Through this process, Big-5 scores were estimated, the results were evaluated and MBTI traits were derived from them. The proposed approach offers a simple, scalable and interpretable alternative for automatic personality analysis.
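One plausible sketch of the comparison step, assuming the sentence-transformers library: the model name is an illustrative choice, and the items shown are made-up stand-ins for the licensed NEO-FFI items, not the thesis's actual configuration.

    import numpy as np
    from sentence_transformers import SentenceTransformer

    # Hypothetical questionnaire-style items standing in for NEO-FFI items.
    items = ["I am not a worrier.", "I really enjoy talking to people."]
    user_text = "Honestly, I love chatting with strangers on my commute."

    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
    item_emb = model.encode(items)
    user_emb = model.encode([user_text])[0]

    # Cosine similarity between the user's text and each questionnaire item;
    # aggregating such scores per trait yields Big-5-style estimates.
    sims = item_emb @ user_emb / (
        np.linalg.norm(item_emb, axis=1) * np.linalg.norm(user_emb))
    print(dict(zip(items, sims.round(3))))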
Direction
Losada Carril, David Enrique (Tutorships)
FERNANDEZ PICHEL, MARCOS (Co-tutorships)
Court
CATALA BOLOS, ALEJANDRO (Chairman)
Triñanes Fernández, Joaquín Ángel (Secretary)
GONZALEZ DIAZ, JULIO (Member)
Quantum Computing Algorithm for Optimization
Authorship
P.R.P.
Bachelor of Mathematics
Defense date
07.15.2025 12:00
Summary
In the field of quantum computing, there is considerable interest in leveraging quantum properties to solve optimization problems, due to their potential advantage over classical methods. This thesis introduces the fundamentals of quantum computing, covering both basic elements (the qubit and the p-qubit) as well as quantum gates and the measurement process. It then explores how optimization problems can be modeled using Hamiltonians, with particular focus on the QUBO and Ising models. The work delves into adiabatic quantum computing (adiabatic theorem) and presents the Quantum Approximate Optimization Algorithm (QAOA). It also details the hybrid classical-quantum architecture of the Variational Quantum Eigensolver (VQE), its variational optimization techniques, and practical applications. Finally, the performance of classical versus quantum computing is compared through complexity classes, the notion of quantum advantage is discussed, and future lines of mathematical research in this area are explored.
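The QUBO-to-Ising reformulation mentioned above is a change of variables, $x_i = (1 + s_i)/2$ with spins $s_i \in \{-1, +1\}$; here is a minimal numpy sketch (with a made-up QUBO matrix) that exploits this equivalence to find a ground state by brute force.

    import numpy as np

    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 4))          # made-up QUBO matrix
    Q = (Q + Q.T) / 2                    # symmetrise

    def qubo_energy(x):                  # x in {0, 1}^n
        return x @ Q @ x

    def ising_energy(s):                 # s in {-1, +1}^n, via x = (1 + s) / 2
        return qubo_energy((1 + s) / 2)

    # Brute-force search over all 2^4 spin states; QAOA and VQE aim to
    # approximate this minimisation on quantum hardware instead.
    spins = np.array(np.meshgrid(*[[-1, 1]] * 4)).T.reshape(-1, 4)
    best = min((ising_energy(s), tuple(s)) for s in spins)
    print("ground state (spins):", best[1], "energy:", round(best[0], 3))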
Direction
PENA BRAGE, FRANCISCO JOSE (Tutorships)
Court
PENA BRAGE, FRANCISCO JOSE (Student’s tutor)
Optimal control of discrete systems and ordinary differential equations
Authorship
D.R.C.
Bachelor of Mathematics
Defense date
07.15.2025 18:00
Summary
In this work we will study unconstrained optimal control problems, both in the discrete and the continuous case. We will first provide methods for computing the gradient of the cost functional, $\tilde J$, in the discrete case. These methods will then be particularized to evolutionary problems, exploiting that structure to reduce the computational cost. In the continuous case, problems where the state is governed by an ODE will be considered. These problems will be approached in two ways: by discretizing them so that they fall within the framework of the previous sections, or by computing directional derivatives of the cost functional, using methods provided in this work, which allows the gradient to be obtained after approximating the control space by a finite-dimensional one. The methods for the discrete case will be used to solve parameter estimation problems, while those for the continuous case will solve a problem related to the motion of a cart-pole system.
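A minimal sketch of the gradient computation in the discrete evolutionary case (the dynamics, cost and dimensions are made up for illustration): for states $y_{k+1} = A y_k + B u_k$ and cost $J(u) = \frac{1}{2}\|y_N - y_d\|^2$, one forward simulation plus one backward adjoint sweep yields the full gradient, which is the kind of cost saving the summary alludes to.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, N = 3, 2, 10
    A = 0.3 * rng.normal(size=(n, n))
    B = rng.normal(size=(n, m))
    y0, y_target = rng.normal(size=n), rng.normal(size=n)

    def gradient(u):                      # u has shape (N, m)
        # Forward sweep: simulate the state trajectory.
        y = [y0]
        for k in range(N):
            y.append(A @ y[k] + B @ u[k])
        # Backward sweep: adjoint state p, giving dJ/du_k = B^T p.
        p = y[N] - y_target               # terminal condition p_N = dJ/dy_N
        grad = np.empty_like(u)
        for k in reversed(range(N)):
            grad[k] = B.T @ p
            p = A.T @ p
        return grad

    u = np.zeros((N, m))
    print(gradient(u).shape)              # (10, 2): one entry per control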
Direction
RODRIGUEZ GARCIA, JERONIMO (Tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Topological data analysis and dynamical systems.
Authorship
R.R.L.D.L.V.
Bachelor of Mathematics
Defense date
07.16.2025 11:30
Summary
In this work, we will explore the main concepts of topological data analysis (TDA): we will study various homology theories and examine the idea of persistence. Additionally, we will introduce dynamical systems and study entropy in depth. Finally, we will analyze Shub's conjecture, which relates the entropy of the system to the spectral radius of the induced map on homology, and we will seek a version of the result using persistent homology.
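As a small taste of the persistence computations involved, here is a minimal sketch using the ripser package; the point cloud (a noisy circle) is a standard toy example, not data from the thesis.

    import numpy as np
    from ripser import ripser

    # Sample a noisy circle; its degree-1 diagram should show one
    # long-lived homology class, the circle's loop.
    rng = np.random.default_rng(2)
    theta = rng.uniform(0, 2 * np.pi, 200)
    X = np.c_[np.cos(theta), np.sin(theta)]
    X += rng.normal(scale=0.05, size=(200, 2))

    diagrams = ripser(X)["dgms"]          # persistence diagrams for H0, H1
    births, deaths = diagrams[1].T        # H1: loops
    print("most persistent loop lives for", (deaths - births).max())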
Direction
Álvarez López, Jesús Antonio (Tutorships)
Meniño Cotón, Carlos (Co-tutorships)
Court
GARCIA RODICIO, ANTONIO (Chairman)
CAO LABORA, DANIEL (Secretary)
Gómez Tato, Antonio M. (Member)
Schauder bases in Banach spaces
Authorship
I.R.R.
Bachelor of Mathematics
Defense date
07.16.2025 16:00
Summary
We will study the basic concepts of Schauder bases, mainly within the setting of functional analysis. We will examine some of their properties, types and uses in characterizing attributes of spaces, focusing on reflexivity and the internal structure of spaces, with special attention to sequence spaces such as $c_0$ and $\ell_p$, and to $\ell_1$ in particular among the latter. Furthermore, we will give some remarks on the existence and uniqueness of Schauder bases, and show how these bases are a natural extension of Hamel bases of finite-dimensional spaces, as well as a generalization of Hilbert bases to more general Banach spaces.
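For reference, the central definition can be stated in one line: a sequence $(e_n)_{n \ge 1}$ in a Banach space $X$ is a Schauder basis if every $x \in X$ admits a unique expansion

\[
x = \sum_{n=1}^{\infty} a_n e_n,
\]

with scalar coefficients $a_n$ and convergence in the norm of $X$. In $c_0$ and $\ell_p$ ($1 \le p < \infty$) the unit vectors $e_n = (0, \dots, 0, 1, 0, \dots)$ form such a basis, which is the prototypical example behind the discussion above.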
Direction
LOSADA RODRIGUEZ, JORGE (Tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Propagation of Ultrahigh-energy cosmic rays in the Universe
Authorship
A.R.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2025 09:30
Summary
Ultrahigh-energy cosmic rays (UHECRs) are the most energetic particles detected to date in the Universe, reaching energies around the exaelectronvolt (EeV). Their study is fundamental to explore physics beyond the Standard Model, analyze the structure of the Galactic magnetic field, and address relevant questions in the field of cosmology. Currently, there are multiple lines of research focused on understanding these events, and observatories such as the Pierre Auger Observatory and the Telescope Array are committed to detecting this type of extragalactic particles. This work studies the propagation of these cosmic rays within the Galaxy, aiming to analyze how the Galactic magnetic field influences such propagation and how the sources from which they originate could be identified. To this end, the astrophysical simulation framework CRPropa is used, which allows simulations to be performed through its Python interface. In this way, simulations of the backtracking of UHECRs detected by the Pierre Auger Observatory are carried out, obtaining both their arrival direction at the Galactic boundary and the deflection experienced during their trajectory. The study focuses on performing simulations by varying the components of the magnetic field used within the JF12 model [1, 2], considering both the presence of small-scale random turbulence and its absence, as well as the type of particle being propagated.
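To convey the backtracking idea without reproducing the CRPropa configuration detailed in the thesis, here is an illustrative numpy sketch: the arrival direction and charge are reversed and the Lorentz force is integrated, with a uniform microgauss field standing in for the JF12 model.

    import numpy as np

    def backtrack(direction, energy_eV, charge_e, B_field, steps=5000, dt=1e8):
        # Trace an ultrarelativistic particle backwards through B(x): flip
        # charge and direction, then integrate dn/dt = (q c^2 / E) n x B.
        c, e = 2.998e8, 1.602e-19
        x = np.zeros(3)                              # start at the observer
        n = -direction / np.linalg.norm(direction)   # reversed arrival direction
        k = (-charge_e * e) * c**2 / (energy_eV * e) # reversed charge, E in eV
        for _ in range(steps):
            n += dt * k * np.cross(n, B_field(x))
            n /= np.linalg.norm(n)                   # keep |n| = 1
            x += dt * c * n
        return x, n                                  # exit point and direction

    # Uniform 0.1 nT (1 microgauss) field as a stand-in for JF12.
    B = lambda x: np.array([0.0, 0.0, 1e-10])
    exit_pos, exit_dir = backtrack(np.array([1.0, 0.0, 0.0]), 1e18, 1, B)
    print("deflection (deg):",
          np.degrees(np.arccos(-exit_dir @ np.array([1.0, 0.0, 0.0]))))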
Direction
ALVAREZ MUÑIZ, JAIME (Tutorships)
Court
ACOSTA PLAZA, EVA MARIA (Chairman)
VIEITES DIAZ, MARIA (Secretary)
Wu, Bin (Member)
Manifolds with a warped product structure
Authorship
A.R.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 12:15
Summary
Pseudo-Riemannian manifolds with a local product structure have played a fundamental role in addressing a wide range of geometric and physical problems. In particular, such manifolds provide one of the most commonly used geometric frameworks for constructing Einstein Riemannian metrics and constitute the underlying structure of most relativistic spacetimes. The aim of this work is to conduct a systematic study of manifolds with a local warped product structure, focusing on their curvature properties.
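For reference, the basic object can be written down explicitly: given pseudo-Riemannian manifolds $(B, g_B)$ and $(F, g_F)$ and a positive smooth function $f$ on $B$, the warped product $B \times_f F$ is the product manifold endowed with the metric

\[
g = \pi_B^{*} g_B + (f \circ \pi_B)^2 \, \pi_F^{*} g_F,
\]

which reduces to the direct product metric when $f \equiv 1$. Robertson-Walker spacetimes and the Schwarzschild solution are standard examples carrying, at least locally, this structure.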
Direction
GARCIA RIO, EDUARDO (Tutorships)
CAEIRO OLIVEIRA, SANDRO (Co-tutorships)
Court
GARCIA RODICIO, ANTONIO (Chairman)
CAO LABORA, DANIEL (Secretary)
Gómez Tato, Antonio M. (Member)
Complex dynamics and the Mandelbrot set
Authorship
D.S.C.
Bachelor of Mathematics
Defense date
07.17.2025 12:00
Summary
In the 1960s, thanks to the rise of computers, which made it possible to produce graphical representations, and the work of the Polish mathematician Benoît Mandelbrot, a special kind of set became popular: fractals. The aim of this work is to rigorously define the Mandelbrot set, one of the many fractals studied by him. To achieve this, we will explore in detail the dynamics of the quadratic family.
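The set itself is easy to approximate numerically: the short Python sketch below iterates the quadratic family $z \mapsto z^2 + c$ from the critical point $z = 0$ and keeps the parameters $c$ whose orbit stays bounded; the grid resolution and iteration cap are arbitrary illustrative choices.

    import numpy as np

    # Grid of candidate parameters c in the complex plane.
    re, im = np.meshgrid(np.linspace(-2.0, 0.6, 800),
                         np.linspace(-1.2, 1.2, 800))
    c = re + 1j * im
    z = np.zeros_like(c)
    bounded = np.ones(c.shape, dtype=bool)

    for _ in range(100):                 # iterate z -> z^2 + c from z = 0
        z[bounded] = z[bounded] ** 2 + c[bounded]
        bounded &= np.abs(z) <= 2        # escape criterion: |z| > 2 diverges

    print(f"{bounded.mean():.1%} of the grid appears to lie in the set")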
Direction
BUEDO FERNANDEZ, SEBASTIAN (Tutorships)
CAO LABORA, DANIEL (Co-tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Solving systems of linear equations with non-Hermitian complex matrices
Authorship
A.S.M.
Bachelor of Mathematics
Defense date
07.16.2025 12:00
Summary
The aim of this dissertation is to present an iterative technique, the Conjugate Orthogonal Conjugate Gradient (COCG) method, designed to solve linear systems $Ax = b$ whose coefficient matrix $A$ is complex, symmetric in the real sense ($A^\top = A$), and non-Hermitian. Because COCG belongs to the family of Krylov subspace methods, we first introduce these methods for real matrices and then show how several of them can be extended to the complex case. The exposition begins with the Conjugate Gradient (CG) and Biconjugate Gradient (BCG) algorithms, and culminates with a detailed presentation of the COCG scheme. Algorithms for implementing the different methods will be described, and some practical examples using matrices obtained from Matrix Market will be presented.
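A minimal numpy sketch of the COCG iteration under the assumptions above; note the unconjugated products $r^\top r$ and $p^\top A p$, which is precisely where COCG departs from CG. The test matrix is a random complex symmetric one, not a Matrix Market matrix.

    import numpy as np

    def cocg(A, b, tol=1e-10, maxiter=500):
        # COCG for complex symmetric A (A == A.T, generally non-Hermitian).
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rho = r @ r                       # unconjugated inner product
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rho / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            rho, rho_old = r @ r, rho
            p = r + (rho / rho_old) * p
        return x

    rng = np.random.default_rng(3)
    M = rng.normal(size=(50, 50)) + 1j * rng.normal(size=(50, 50))
    A = M + M.T + 10 * np.eye(50)         # complex symmetric, non-Hermitian
    b = rng.normal(size=50) + 0j
    x = cocg(A, b)
    print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))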
Direction
SALGADO RODRIGUEZ, MARIA DEL PILAR (Tutorships)
Court
SALGADO RODRIGUEZ, MARIA DEL PILAR (Student’s tutor)
Introduction to reinforcement learning
Authorship
R.T.L.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 11:25
Summary
This work presents an introduction to reinforcement learning from a mathematical perspective, with special emphasis on its connection to dynamic programming. Although reinforcement learning is an independent field within artificial intelligence, many of its core ideas, such as sequential decision-making, utility functions, or optimal policies, originate from dynamic programming. For this reason, the study begins by addressing dynamic programming as the conceptual and formal foundation upon which reinforcement learning is built. Once the section on dynamic programming is concluded, the most important aspects of reinforcement learning are introduced; understanding Markov decision processes will be essential for grasping the theory. The work also discusses topics derived from this paradigm, such as the exploration-exploitation dilemma, and examines various solution algorithms, including Monte Carlo methods and temporal-difference learning. Finally, the study explores how reinforcement learning can be a powerful tool in the field of mathematical optimization. To this end, several classical optimization problems are analyzed, showing how they can be reformulated within the reinforcement learning framework. This allows the application of RL algorithms to complex problems that are not easily solved by traditional methods.
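A minimal sketch of the dynamic-programming core described above, value iteration on a tiny made-up Markov decision process (the transition and reward arrays are illustrative):

    import numpy as np

    # MDP with 3 states and 2 actions: P[a, s, s'] are transition
    # probabilities and R[a, s] are expected rewards.
    P = np.array([[[0.8, 0.2, 0.0], [0.0, 0.9, 0.1], [0.1, 0.0, 0.9]],
                  [[0.1, 0.9, 0.0], [0.0, 0.2, 0.8], [0.5, 0.0, 0.5]]])
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.5, 2.0]])
    gamma = 0.95                          # discount factor

    V = np.zeros(3)
    for _ in range(1000):                 # Bellman optimality iteration
        Q = R + gamma * P @ V             # Q[a, s] = R[a, s] + gamma * E[V]
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < 1e-8:
            break
        V = V_new

    print("optimal values:", V.round(3), "greedy policy:", Q.argmax(axis=0))

Temporal-difference methods such as Q-learning estimate the same fixed point from sampled transitions when $P$ and $R$ are unknown.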
Direction
GONZALEZ DIAZ, JULIO (Tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Mobile application for musical ear training
Authorship
R.T.L.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
07.17.2025 12:30
Summary
This work will describe the process of creating a mobile application for musical ear training. It will begin with a preliminary study where the potential of the idea will be explored: we will identify who might be interested in the application and why. A business model will also be proposed, as mobile applications are relatively easy to introduce into the market. The requirements that will define the application's functionality will be specified. A chapter will be devoted to software engineering, explaining the lifecycle model and describing the planning process; use cases for the previously defined functional requirements will also be included. The design section will cover different aspects such as the graphical interface, system architecture, and software design. Finally, the testing plan will be presented and the success or failure of the project will be evaluated.
Direction
TOBAR QUINTANAR, ALEJANDRO JOSE (Tutorships)
Court
Argüello Pedreira, Francisco Santiago (Chairman)
PENA BRAGE, FRANCISCO JOSE (Secretary)
Carreira Nouche, María José (Member)
From Galois theory to class field theory
Authorship
L.V.M.
Bachelor of Mathematics
Defense date
07.17.2025 12:30
Summary
The aim of this thesis is to justify the introduction of Class Field Theory starting from Galois Theory and Algebraic Number Theory, using the initial motivation provided by Artin and other mathematicians from the early 20th century. The work will begin with a review of some results from Galois theory and algebraic number theory that predate the development of class field theory, such as the Kronecker-Weber theorem. Then, based on the results of Weber, Hilbert, and Artin, the foundational results of Class Field Theory will be formulated. The rest of the work will be devoted to exploring their implications, with a particular focus on the connections with Galois Theory.
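For orientation, the theorem mentioned above admits a one-line statement:

\[
\textbf{(Kronecker-Weber)} \quad K/\mathbb{Q} \text{ finite abelian} \;\Longrightarrow\; K \subseteq \mathbb{Q}(\zeta_n) \text{ for some } n \ge 1,
\]

so every abelian extension of $\mathbb{Q}$ sits inside a cyclotomic field. Class field theory can be read as the generalization of this description of abelian extensions from $\mathbb{Q}$ to arbitrary number fields.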
Direction
RIVERO SALGADO, OSCAR (Tutorships)
Court
RIVERO SALGADO, OSCAR (Student’s tutor)
Conformal maps: an introduction
Authorship
Y.X.X.C.
Bachelor of Mathematics
Defense date
07.17.2025 17:00
Summary
In this paper we make an introductory study of conformal maps in the context of complex analysis, with special emphasis on Möbius transformations. Since these maps arise from the theory of holomorphic functions, we devote the first part of the paper to reviewing the basic tools of complex analysis. Then, we introduce the general concept of a conformal map and show that every injective holomorphic function is a conformal map, illustrating this with examples. On this basis, the paper focuses on Möbius transformations, the only conformal automorphisms of the extended complex plane. We present their matrix representation, study their invariants (fixed points, circles, the cross-ratio and symmetry) and classify them according to the number of fixed points (elliptic, hyperbolic, parabolic, loxodromic). Finally, we develop in Maple a set of procedures to facilitate an intuitive understanding of these concepts.
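For reference, a Möbius transformation and its matrix representation take the form

\[
T(z) = \frac{az + b}{cz + d}, \qquad ad - bc \neq 0, \qquad T \longleftrightarrow \begin{pmatrix} a & b \\ c & d \end{pmatrix},
\]

with composition of transformations corresponding to matrix multiplication (the matrix being determined up to a nonzero scalar). The elliptic, parabolic, hyperbolic and loxodromic types can equivalently be read off the quantity $(a + d)^2/(ad - bc)$, which is invariant under that scaling.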
Direction
TRINCHET SORIA, ROSA Mª (Tutorships)
Court
TRINCHET SORIA, ROSA Mª (Student’s tutor)