Computational study of short-range order and transport properties in ionic liquid mixtures with imidazole and water
Authorship
D.C.A.G.
Double bachelor degree in Physics and Chemistry
Defense date
07.17.2025 09:30
Summary
This work focuses on the computational study of short-range order and transport properties in mixtures of ionic liquids with imidazole and water. Specifically, it analyzes the feasibility of proton mobility based on the Grotthuss mechanism, a process that enables technological applications such as proton-conducting batteries and plays a role in various biological reactions. In order to evaluate the interactions between the different species, the structural and dynamic properties of the mixtures are calculated. Initially, the properties of two different types of ionic liquids (protic and aprotic) dissolved in imidazole are analyzed. Subsequently, the effect that the presence of water has on the properties of the different mixtures is evaluated, and an upper tolerance limit for water is determined.
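Self-diffusion coefficients of the kind studied in such simulations are commonly extracted from the mean squared displacement (MSD) through the Einstein relation, D = lim MSD(t)/(2dt). A minimal illustrative sketch (not the code used in this work), applied to a synthetic Brownian trajectory with a known diffusion coefficient:

```python
import numpy as np

def self_diffusion(positions, dt, dim=3):
    """Estimate D from the Einstein relation MSD(t) ~ 2 * dim * D * t.

    positions: (n_frames, n_particles, dim) trajectory array.
    dt: time between frames.
    """
    disp = positions - positions[0]                 # displacement from frame 0
    msd = (disp ** 2).sum(axis=2).mean(axis=1)      # average over particles
    t = np.arange(len(msd)) * dt
    # Linear fit of MSD vs t over the late, diffusive regime
    slope = np.polyfit(t[len(t) // 2:], msd[len(t) // 2:], 1)[0]
    return slope / (2 * dim)

# Synthetic Brownian trajectory with known D = 0.5 (per-step variance 2*D*dt)
rng = np.random.default_rng(0)
steps = rng.normal(0.0, np.sqrt(2 * 0.5 * 0.01), size=(5000, 100, 3))
traj = np.cumsum(steps, axis=0)
D_est = self_diffusion(traj, dt=0.01)
```

The recovered `D_est` should be close to the input value of 0.5, with statistical scatter set by the number of particles and frames.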
Direction
MENDEZ MORALES, TRINIDAD (Tutorships)
OTERO LEMA, MARTIN (Co-tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
Beyond lithium batteries
Authorship
L.A.F.
Bachelor of Physics
Defense date
07.17.2025 09:30
Summary
Efficient energy storage is a key pillar in the transition towards sustainable energy systems. Lithium-ion (Li-ion) batteries currently dominate the market due to their technological maturity, solid performance, and decreasing costs. However, their energy capacity is approaching the theoretical limit, prompting the exploration of emerging technologies with higher energy densities. This work analyzes and compares three lithium-based battery technologies: lithium-ion, lithium-sulfur (Li-S), and lithium-oxygen (Li-O2). Their operating principles, the most promising materials for each component (anode, cathode, and electrolyte), as well as their main advantages, limitations, and improvement strategies are discussed. Li-S batteries offer higher theoretical capacity and lower environmental impact but face challenges related to stability and cycling life. Li-O2 batteries stand out for their extremely high energy density, yet their complex chemistry currently limits practical implementation. Overall, while post-lithium technologies still require significant advancements, their development is crucial to meet the growing demand for energy storage and to facilitate global decarbonization.
Direction
TABOADA ANTELO, PABLO (Tutorships)
BARBOSA FERNANDEZ, SILVIA (Co-tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
Resolution determination in scintillator bars with SiPM readout and data acquisition programming for a high-sampling-rate digital oscilloscope
Authorship
X.A.P.
Bachelor of Physics
Defense date
07.17.2025 09:30
Summary
This Bachelor’s Thesis characterizes the temporal response of the Tektronix MSO64B oscilloscope when measuring pulses that simulate cosmic-ray radiation in the Qmio quantum supercomputer at CESGA. To this end, an experimental setup was designed using high-frequency function generators (T3AWG3352 and Keysight 33500B) connected to the oscilloscope via 5 m and 10 m coaxial cables. These conditions replicate the rapid transit of muons and other particles, which demands high-precision trigger techniques. Two methods for determining the signal arrival time were compared: the constant fraction discriminator (CFD) and constant-threshold voltage triggering. The study covered three scenarios: identical pulses, amplitude variations, and asymmetric fall times. Metrics such as the standard deviation, systematic biases, and inter-channel correlation (due to common trigger jitter) were then analyzed. The CFD method achieved a resolution of approximately 100 ps, with minimal differences across the three conditions. Optimal CFD parameters were found in the ranges k in (0.2, 0.3) and tau in (0.7, 0.8)·td, which minimize the timing error. In contrast, constant-threshold triggering exhibited deviations of 100 to 500 ps and showed strong sensitivity to amplitude variations. Overall, the CFD proved to be the most effective technique for mitigating cosmic-ray-induced errors in the qubits of the Qmio supercomputer.
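A constant fraction discriminator can be sketched digitally: the signal is delayed, an attenuated copy is subtracted, and the zero crossing of the resulting bipolar signal marks the arrival time independently of the pulse amplitude. A minimal illustration (the pulse shape, k, and delay values below are hypothetical, not those of the thesis):

```python
import numpy as np

def cfd_time(t, v, k=0.25, delay=5e-10):
    """Arrival time from a digital constant fraction discriminator.

    Builds the bipolar signal b(t) = v(t - delay) - k * v(t); its zero
    crossing on the leading edge does not depend on the pulse amplitude.
    """
    dt = t[1] - t[0]
    shift = int(round(delay / dt))
    b = np.roll(v, shift) - k * v
    b[:shift] = 0.0
    peak = np.argmax(v)
    # last negative-to-positive crossing before the pulse peak
    idx = np.where((b[:peak] < 0) & (np.roll(b, -1)[:peak] >= 0))[0]
    j = idx[-1]
    return t[j] - b[j] * dt / (b[j + 1] - b[j])   # linear interpolation

t = np.arange(0.0, 20e-9, 10e-12)                 # 10 ps sampling
pulse = lambda t0: np.exp(-((t - t0) / 1e-9) ** 2)
t1 = cfd_time(t, 1.0 * pulse(8e-9))
t2 = cfd_time(t, 0.4 * pulse(8e-9))               # same shape, 40% amplitude
```

Scaling the amplitude leaves the extracted time essentially unchanged, which is the property that makes the CFD robust against amplitude variations.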
Direction
ALVAREZ POL, HECTOR (Tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
Solving Physics-based models using neural networks
Authorship
R.A.G.
Bachelor of Physics
Defense date
07.17.2025 09:30
Summary
In this work, we will explore a method for solving differential equations using neural networks: Physics-Informed Neural Networks (PINNs). As this is a very recent and still developing approach, there is no consensus yet on how to overcome the current limitations of PINNs. In our case, we will implement the Multi-Stage algorithm to reduce the error in the solution predicted by the PINN. This algorithm is inspired by perturbation theory: we will construct the solution to our problem as the sum of n neural networks. Additionally, we will improve the ability of our neural network to fit high-frequency functions by modifying its activation function. Finally, we will use PINNs to solve different physical models in which the number of variables and the order of the derivatives vary from one to another. Each model will be solved in two different ways: the first time, we will implement the algorithms developed throughout this work; the second time, we will solve the differential equations without applying any of these improvements. In this way, we will be able to evaluate the effectiveness of the proposed method.
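The multi-stage idea can be illustrated outside the neural-network setting: each stage fits the residual left by the sum of the previous stages, so the error of the summed solution shrinks stage by stage. A toy sketch using least-squares polynomial fits in place of networks (purely illustrative, with a made-up target function):

```python
import numpy as np

# Multi-stage fitting: stage 2 models the residual of stage 1, and the
# final approximation is the sum of the stages.
x = np.linspace(-1, 1, 200)
target = np.sin(4 * x) + 0.3 * np.sin(8 * x)   # low + higher frequency content

approx = np.zeros_like(x)
errors = []
for degree in (5, 12):                          # second stage resolves finer detail
    residual = target - approx
    coeffs = np.polyfit(x, residual, degree)
    approx = approx + np.polyval(coeffs, x)
    errors.append(np.sqrt(np.mean((target - approx) ** 2)))
```

Because each stage minimizes the squared error of the current residual, the error after stage 2 cannot exceed that of stage 1, mirroring how the multi-stage PINN refines its prediction.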
Direction
FARIÑA BIASI, ANXO (Tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
Theoretical-computational study of criticality in Ising-type models
Authorship
P.A.G.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2025 09:30
Summary
This document addresses the Ising model, widely used in the study of phase transitions, from both theoretical and computational perspectives. It briefly introduces concepts related to critical phenomena, followed by a description of the model and some well-known theoretical developments and results in one- and two-dimensional lattices with nearest-neighbor interactions. A computational study is then carried out, introducing the Metropolis and Wolff algorithms, and demonstrating their ergodicity and compliance with the detailed balance equation. These two algorithms are compared, showing that the Wolff algorithm achieves faster convergence near the critical point. Using a custom-developed code, phase transitions are characterized in systems ranging from one to four dimensions with nearest-neighbor interactions, and the model is implemented on complex small-world networks, both one- and two-dimensional.
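The Metropolis algorithm mentioned above accepts a single-spin flip with probability min(1, e^(-βΔE)), which satisfies the detailed balance equation. A minimal 2D sketch (lattice size, temperatures, and sweep counts are illustrative, not those of the thesis):

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model (J = 1, periodic bounds)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn              # energy change if the spin flips
        # Accept with probability min(1, exp(-beta*dE)): detailed balance
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

rng = np.random.default_rng(1)
L = 16

spins_cold = np.ones((L, L), dtype=int)        # beta = 0.6 > beta_c ~ 0.4407
for _ in range(200):
    metropolis_sweep(spins_cold, beta=0.6, rng=rng)
m_cold = abs(spins_cold.mean())                # stays magnetized (ordered phase)

spins_hot = np.ones((L, L), dtype=int)         # beta = 0.2: disordered phase
for _ in range(200):
    metropolis_sweep(spins_hot, beta=0.2, rng=rng)
m_hot = abs(spins_hot.mean())                  # magnetization decays toward 0
```

Below the critical temperature the magnetization stays near 1, while above it the same dynamics drives it toward 0, the basic signature of the phase transition.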
Direction
MENDEZ MORALES, TRINIDAD (Tutorships)
Montes Campos, Hadrián (Co-tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
The Riemann mapping theorem
Authorship
P.A.G.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 16:40
Summary
This document addresses the Riemann mapping theorem, an essential result in complex analysis that establishes the existence of conformal mappings between simply connected sets and the unit disk. To this end, the necessary theoretical framework is introduced, beginning with a brief introduction to the theorem, along with some fundamental definitions and results on holomorphic functions. Later, the focus shifts to Möbius transformations, a key tool in this work that, together with Schwarz’s lemma, plays a fundamental role in characterizing the biholomorphic automorphisms of the unit disk. This theoretical development concludes with the study of certain properties of the space of holomorphic functions, thereby providing a foundation for giving a rigorous proof of the theorem. Finally, the importance of the theorem is highlighted by presenting some relevant applications in other scientific fields, such as fluid mechanics in physics, and an algorithm is provided to approximately compute the mapping described by the theorem.
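The biholomorphic automorphisms of the unit disk mentioned above are exactly the Möbius transformations z ↦ e^(iθ)(z − a)/(1 − ā·z) with |a| < 1, a characterization that rests on Schwarz’s lemma. A short numerical check of the disk-preserving properties:

```python
import cmath

def disk_automorphism(a, theta=0.0):
    """Automorphism of the unit disk: z -> e^{i theta} (z - a) / (1 - conj(a) z),
    valid for |a| < 1. It maps the disk onto itself and sends a to 0."""
    def phi(z):
        return cmath.exp(1j * theta) * (z - a) / (1 - a.conjugate() * z)
    return phi

phi = disk_automorphism(0.5 + 0.2j)
# Points inside the disk stay inside; a is sent to the origin
inside = all(abs(phi(0.9 * cmath.exp(1j * k))) < 1 for k in range(10))
```

The boundary circle is likewise mapped onto itself, as the assertions below verify numerically.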
Direction
CAO LABORA, DANIEL (Tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Clifford algebras and spin groups
Authorship
D.A.D.A.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 10:00
Summary
The aim of this work is to serve as a starting point for the study of Spin groups through the formalism of Clifford algebras. To do so, we first define and construct these algebras from tensor algebras and study their main properties as superalgebras, their definition in terms of generators and relations, and their connection with the exterior algebra. Next, we perform a first classification of the low-dimensional real Clifford algebras using the Z_2-graded tensor product, before proceeding to the complete classification of these algebras, in both the real and complex cases, supported by several isomorphisms proved in this work together with the Bott periodicity theorem. Afterwards, we define the Spin group as a subgroup of the units of the Clifford algebra and, after studying the latter's actions on the whole algebra, use this same action to check that the Spin groups are double covers of the special orthogonal groups SO(n). We also study some additional properties of these groups and classify some of the low-dimensional cases.
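The double-cover statement can be checked concretely in the lowest nontrivial case: Spin(3) is the group of unit quaternions, and the covering map onto SO(3) identifies q with −q. A small numerical illustration (not part of the thesis, just the standard quaternion-to-rotation formula):

```python
import numpy as np

def quat_to_rotation(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z).

    This is the Spin(3) -> SO(3) covering map; q and -q yield the same matrix,
    which is exactly the 2-to-1 nature of the double cover."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

angle = 1.2
q = np.array([np.cos(angle / 2), np.sin(angle / 2), 0.0, 0.0])  # rotation about x
R1, R2 = quat_to_rotation(q), quat_to_rotation(-q)
```

Both antipodal quaternions produce the same proper rotation, here by 1.2 rad about the x-axis.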
Direction
DIAZ RAMOS, JOSE CARLOS (Tutorships)
Lorenzo Naveiro, Juan Manel (Co-tutorships)
Court
GARCIA RODICIO, ANTONIO (Chairman)
CAO LABORA, DANIEL (Secretary)
Gómez Tato, Antonio M. (Member)
Analysis of the B+ to mumutaunu decay from a theoretical and experimental point of view
Authorship
D.A.D.A.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2025 09:30
Summary
This work aims to be a preliminary study, both from a theoretical and an experimental point of view, of the decay B+ to mumutaunu in the context of the LHCb experiment. First, a theoretical development based on bibliographic sources is carried out, performing a similar calculation with the goal of understanding the underlying mechanisms and adapting them to the case of interest; the ultimate objective of this analysis is to obtain an initial reference numerical value for the branching ratio (BR) of this decay. Next, we design an analysis aiming to detect a signal peak for this decay or to set an upper limit on the BR; to this end, we address the problem of missing information due to the undetectable neutrino by introducing the corrected mass. We present the data selection methods, the elimination of combinatorial background using machine learning tools, and a study of the ability to distinguish a signal peak through simulations and fits, along with statistical tools such as Wilks’ theorem. Finally, we present the progress achieved in this initial analysis and conclude with the work to be developed for a future extension of this study.
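The corrected mass that compensates for the undetected neutrino is m_corr = sqrt(m_vis² + p_⊥²) + p_⊥, where p_⊥ is the visible momentum transverse to the B flight direction (from the primary to the decay vertex). A minimal sketch with toy numbers, not LHCb data:

```python
import numpy as np

def corrected_mass(p_vis, m_vis, flight_dir):
    """Corrected mass for decays with a missing neutrino:
    m_corr = sqrt(m_vis^2 + p_perp^2) + p_perp,
    with p_perp the visible momentum transverse to the flight direction.
    Natural units (GeV)."""
    n = np.asarray(flight_dir, float)
    n = n / np.linalg.norm(n)
    p = np.asarray(p_vis, float)
    p_perp = np.linalg.norm(p - p.dot(n) * n)
    return np.sqrt(m_vis ** 2 + p_perp ** 2) + p_perp

# If the visible momentum is aligned with the flight direction, p_perp = 0
# and the corrected mass reduces to the visible mass
m_aligned = corrected_mass([0.0, 0.0, 25.0], 4.0, [0.0, 0.0, 1.0])
```

With a transverse component the corrected mass moves above the visible mass, which is what lets a signal peak be reconstructed despite the missing neutrino.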
Direction
CID VIDAL, XABIER (Tutorships)
FERNANDEZ GOMEZ, MIGUEL (Co-tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
Baroclinic instability and its sensitivity to vertical temperature and wind profiles in atmospheric layers
Authorship
A.B.V.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2025 09:30
Summary
This project aims to study baroclinic instability in the atmosphere through theoretical analysis and numerical simulations. In the theoretical section, the key factors influencing this phenomenon will be explained, such as vertical wind shear, thermal gradients, and atmospheric stratification, as well as the role of planetary rotation and the Coriolis force. Next, numerical simulations will be carried out to analyze the impact of variations in temperature and wind profiles on the growth of baroclinic disturbances. The results will allow for an assessment of how these atmospheric conditions affect the evolution of baroclinic waves, comparing the simulations with theoretical predictions. Finally, the agreement between theory and simulations will be discussed, highlighting the conditions that favor the development of baroclinic instability and its potential applications in weather forecasting.
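A standard back-of-the-envelope measure of this phenomenon is the maximum Eady growth rate, σ ≈ 0.31 f |∂U/∂z| / N, which ties the instability directly to vertical wind shear, stratification, and rotation. An illustrative estimate with typical midlatitude values (the numbers below are assumptions, not results of this work):

```python
def eady_growth_rate(f, N, shear):
    """Maximum Eady growth rate sigma = 0.31 * f * |dU/dz| / N (in s^-1),
    a standard scaling for baroclinic instability."""
    return 0.31 * f * abs(shear) / N

f = 1.0e-4        # Coriolis parameter at midlatitudes (s^-1)
N = 1.0e-2        # Brunt-Vaisala frequency (s^-1)
shear = 3.0e-3    # vertical wind shear dU/dz (s^-1), ~30 m/s over 10 km
sigma = eady_growth_rate(f, N, shear)
efold_days = 1.0 / sigma / 86400.0
```

These values give an e-folding time of roughly a day, consistent with the typical growth timescale of midlatitude weather systems.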
Direction
MIGUEZ MACHO, GONZALO (Tutorships)
CRESPO OTERO, ALFREDO (Co-tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
Analysis and application of Dijkstra's algorithm in route optimisation: in search of the shortest path
Authorship
A.B.V.
Double bachelor degree in Mathematics and Physics
Defense date
07.15.2025 09:10
Summary
In this project, we start from basic concepts regarding network flow problems and then move on to more specific ones that help us achieve our main goal: explaining different algorithms for solving the shortest path problem. In the first chapter we introduce concepts from graph theory and explain notions that are important for the following chapters, such as the definitions of node, arc, and cost. In addition, we introduce minimum cost flow problems together with their mathematical formulations. Finally, we analyze a very important property when solving this type of problem, unimodularity, along with some properties that make these problems easier to solve. In the second chapter we focus on the shortest path problem, describing the different types of algorithms for solving it, with particular emphasis on Dijkstra's algorithm. We also provide implementations of this and other algorithms that can help solve the problem more efficiently, and the explanations are accompanied by examples to make the algorithms easier to understand. Finally, the last chapter presents a practical application of shortest path computation within the Spanish railway network, including a detailed description of the Python implementation, the data used, and a graphical representation of the computed routes.
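Dijkstra's algorithm itself fits in a few lines with a binary heap. The sketch below uses a toy network, not the Spanish railway data of the project:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source on a graph with non-negative
    arc costs, given as {node: [(neighbor, cost), ...]}."""
    dist = {source: 0}
    pq = [(0, source)]                    # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                      # stale entry: u was already settled
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy network: nodes as stations, arc costs as travel times
graph = {
    'A': [('B', 4), ('C', 1)],
    'C': [('B', 2), ('D', 5)],
    'B': [('D', 1)],
}
dist = dijkstra(graph, 'A')
```

Here the cheapest route from A to D goes A → C → B → D with total cost 4, beating the direct-looking A → B → D (cost 5).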
Direction
GONZALEZ RUEDA, ANGEL MANUEL (Tutorships)
Court
Majadas Soto, José Javier (Chairman)
SALGADO RODRIGUEZ, MARIA DEL PILAR (Secretary)
CASARES DE CAL, MARIA ANGELES (Member)
Parameter Estimation of Hyperbolic Black Hole Encounters with the Einstein Telescope Gravitational Wave Detector
Authorship
X.B.G.
Bachelor of Physics
Defense date
07.17.2025 09:30
Summary
This project focuses on the estimation of physical parameters in hyperbolic encounters of black holes using the future gravitational wave detector Einstein Telescope. These waves, predicted by Einstein in 1916 and first detected in 2015, have so far enabled the detection and study of around 90 black hole mergers. In hyperbolic encounters, which have not yet been detected, two black holes approach each other and generate a gravitational wave burst without merging. These events provide valuable information about the dynamics and population of black holes in dense environments, such as stellar clusters. This work aims to assess how accurately the physical parameters of such encounters can be estimated from their gravitational wave signals. To this end, a surrogate model is used to efficiently generate representative waveforms of these events, and the Fisher matrix formalism is applied, a statistical tool that allows parameter uncertainties to be estimated without requiring a full Bayesian analysis. Throughout the study, the sensitivity of the signal to variations in the different parameters (mass, spin, inclination, luminosity distance...) is analyzed, and the expected uncertainties are quantified as functions of the luminosity distance and the impact parameter. The results show that, for events at distances on the order of 500-1000 Mpc (consistent with current detections), key parameters such as the spin or the mass can be determined with relative precisions better than 10%, provided the signal has a high SNR and the trajectory is sufficiently close (i.e., a smaller impact parameter). This study reinforces the potential of the Einstein Telescope to detect and characterize hyperbolic encounters and offers a quantitative evaluation of its capability to measure the fundamental parameters of these dynamical events.
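In the Fisher formalism, parameter uncertainties are approximated by the square roots of the diagonal of F⁻¹, where F_ij is built from the derivatives of the waveform with respect to the parameters. A toy sketch with a damped sinusoid standing in for a surrogate waveform (white noise, unit time weights; everything here is illustrative, not the thesis code):

```python
import numpy as np

def fisher_matrix(model, params, t, sigma, eps=1e-6):
    """Fisher matrix F_ij = sum_k (dh/dp_i)(dh/dp_j) / sigma^2 for a signal
    h(t; params) in white Gaussian noise, using central finite differences.
    Uncertainties follow as sqrt(diag(F^-1)) (Cramer-Rao bound)."""
    params = np.asarray(params, float)
    derivs = []
    for i in range(len(params)):
        dp = np.zeros_like(params)
        dp[i] = eps * max(abs(params[i]), 1.0)
        derivs.append((model(params + dp, t) - model(params - dp, t)) / (2 * dp[i]))
    D = np.array(derivs)
    return D @ D.T / sigma ** 2

# Toy 'waveform': amplitude and frequency as the two parameters
model = lambda p, t: p[0] * np.sin(p[1] * t) * np.exp(-t / 4.0)
t = np.linspace(0.0, 10.0, 2000)
F = fisher_matrix(model, [1.0, 3.0], t, sigma=0.1)
uncert = np.sqrt(np.diag(np.linalg.inv(F)))
```

The same machinery, applied to the surrogate waveforms, yields the distance- and impact-parameter-dependent uncertainties discussed in the summary.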
Direction
CALDERON BUSTILLO, JUAN (Tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
Using AI to develop new materials
Authorship
R.B.F.
Bachelor of Physics
Defense date
07.18.2025 09:30
Summary
The development of new materials is being influenced by artificial intelligence (AI), which enables the design and optimization of materials with greater speed and precision. In the field of soft matter physics, systems such as lipid-based nanoparticles offer unique properties for biomedical and technological applications. To harness the potential of AI in this area, a database of lipid compounds was created by combining various existing sources, with the aim of training models that improve the understanding and design of lipid nanoparticles.
Direction
TABOADA ANTELO, PABLO (Tutorships)
CAMBON FREIRE, ADRIANA (Co-tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
Space-time study of astrophysical neutrinos at the IceCube Observatory
Authorship
I.B.N.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
The detection of very high-energy neutrinos is key in particle astrophysics, as these neutral particles can travel from dense and distant regions of the universe without being deflected, making them ideal messengers for Multimessenger Astronomy. The IceCube observatory, located in Antarctica, has detected a flux of neutrinos of astrophysical origin, among which the so-called 'gold events' stand out due to their high energy and good angular resolution. However, their sources have not yet been identified. This work aims to study the IceCube experiment, understand its analysis techniques, and apply them to the spatial and temporal study of these events in order to investigate their possible origin.
Direction
ALVAREZ MUÑIZ, JAIME (Tutorships)
Court
HERNANDO MORATA, JOSE ANGEL (Chairman)
SERANTES ABALO, DAVID (Secretary)
MENDEZ MORALES, TRINIDAD (Member)
Laser technologies for the conservation and restoration of cultural heritage
Authorship
C.L.C.C.
Bachelor of Physics
Defense date
07.16.2025 17:00
Summary
This study conducts a comprehensive review of laser-based technologies in cultural heritage conservation, with emphasis on recent developments. A specialized analysis of laser ablation cleaning for metallic objects is provided, evaluating both its efficacy and challenges. The cleaning process will be characterized using optical and confocal microscopy.
Direction
BAO VARELA, Mª CARMEN (Tutorships)
Gómez Varela, Ana Isabel (Co-tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Molecular dynamics simulation of the PSA protein and chaperones in different pH conditions
Authorship
S.C.R.
Bachelor of Physics
Defense date
07.16.2025 17:00
Summary
PSA is a key protein in the early detection of prostate cancer, one of the cancers with the highest annual mortality rate. However, its usefulness as a biomarker is limited by its low specificity, since its blood levels can also increase in the presence of benign conditions, which can generate false positives. For this reason, there has been growing interest in identifying structural differences in PSA that make it possible to distinguish its origin, with the aim of improving diagnostic accuracy. In this study, molecular dynamics simulations were performed at different pH values in order to emulate tumor conditions, characterized by a more acidic environment, and benign conditions, characterized by a more basic one. The purpose was to analyze possible differences in the secondary and tertiary structures of PSA under these conditions. The results obtained reveal variations in the conformation of the protein's secondary structure, attributable to pH-induced changes in the protonation of certain polar groups. This modification reduces internal repulsions, favors hydrophobic redistribution, and leads to an increase in the formation of hydrogen bonds, both within the protein and with the solvent, which translates into greater structural stability.
Direction
Prieto Estévez, Gerardo (Tutorships)
DOMINGUEZ ARCA, VICENTE (Co-tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Tuning of a point diffraction interferometer for the characterization of artificial tear properties
Authorship
E.C.G.
Bachelor of Physics
Defense date
07.16.2025 17:00
Summary
This work presents the setup of a point diffraction interferometer designed to measure properties of artificial tears, with a special focus on the tear breakup time on contact lenses. First, a general overview of the basic concepts of interference is provided, followed by a theoretical description of the point diffraction interferometer. Next, the complete experimental setup process is described in detail, from the materials used to the methodology followed. All the issues that had to be solved to achieve a functional setup are also included in an appendix. Subsequently, the obtained results regarding the tear breakup time are presented, along with a discussion analyzing and interpreting these results. Tear breakup times were measured for the same contact lens using different liquids. A qualitative study of another property of contact lenses, the rehydration time, is also included. Finally, the conclusions of the project are presented, and possible improvements and future research directions based on this experiment are discussed.
Direction
ACOSTA PLAZA, EVA MARIA (Tutorships)
GARCIA PORTA, NERY (Co-tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Quantum Computing. Principles and Applications
Authorship
A.C.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 17:20
Summary
This report studies the foundations of Quantum Computing in a fully mathematical manner, abstracting away from the underlying physical systems. First, we study the fundamentals of Quantum Physics, exploring concepts and properties of Hilbert spaces over the field of complex numbers. Next, we define the concepts of qubit and p-qubit, along with the quantum logic gates that operate on them. Finally, we develop some important algorithms with specific applications that show the relevance of this kind of logic. The aim of this document is to serve as an introduction to the world of Quantum Computing based on concepts covered by the Mathematics undergraduate program, without requiring any previous knowledge of Physics; in doing so, it seeks to offer easier access to understanding and developing quantum algorithms.
Direction
FERNANDEZ FERNANDEZ, FRANCISCO JAVIER (Tutorships)
Court
LOPEZ POUSO, RODRIGO (Chairman)
PEON NIETO, ANA (Secretary)
SEOANE MARTINEZ, MARIA LUISA (Member)
Evaluation of the Effective Temperature through Quantum Simulation
Authorship
A.C.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 17:00
Summary
An important line of research in the field of quantum computing focuses on noise detection and mitigation. In this report, we focus on methods for measuring the effective temperature, a quantity used to estimate the residual population of the excited state of a qubit due to thermal fluctuations in the device. First, we study the physical systems of transmon superconducting qubits and their dispersive readout process, which are the ones available in QMIO. Based on them, we develop a stochastic simulation on which the measurement methods can be validated. Finally, we study a method for measuring the effective temperature based on e-f Rabi oscillations, validating its hypotheses and obtaining results through simulation.
Direction
MAS SOLE, JAVIER (Tutorships)
Gómez Tato, Andrés (Co-tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Relativity and Time: Einstein vs Bergson
Authorship
B.D.Z.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
This work analyses the twin paradox within the framework of Special and General Relativity, addressing not only the formal geometric structure of space-time but also the epistemological and ontological implications associated with the concept of time. After reconstructing the mathematical foundations of space-time, the paradox is examined in terms of proper time and geodesic deviation. Experimental validations such as muon decay and the Shapiro delay are included, confirming time dilation as a measurable consequence. Subsequently, this approach is contrasted with Henri Bergson’s critique, who argues that duration is not reducible to physical time. Relativity, by discarding absolute simultaneity and privileged frames, attributes asymmetry between observers to the geometry of space-time. Bergson reformulates the paradox, emphasizing the reciprocal equivalence of magnitudes. Far from a simple confrontation between physics and philosophy, the debate reveals deeper questions: is time dilation an absolute geometric property or a relational construction? Can there be duration without a proper trajectory or without an observer to measure it? What does it mean to compare times between frames that do not share simultaneity? Relativity allows multiple times to be represented, but forces us to rethink what it means to compare durations, leaving open the ultimate meaning of time.
Direction
EDELSTEIN GLAUBACH, JOSE DANIEL (Tutorships)
Giribet , Gastón Enrique (Co-tutorships)
Court
HERNANDO MORATA, JOSE ANGEL (Chairman)
SERANTES ABALO, DAVID (Secretary)
MENDEZ MORALES, TRINIDAD (Member)
Use of a flashDiamond Detector for Ultra-High Dose Rate Dosimetry.
Authorship
M.A.D.S.
Bachelor of Physics
Defense date
07.17.2025 10:00
Summary
FLASH radiotherapy is an emerging cancer treatment technique that leverages the so-called FLASH effect under ultra-high dose rate conditions. To overcome the limitations this poses to conventional dosimetry, the flashDiamond (fD) detector was specifically developed. This work presents a detailed evaluation of its dosimetric performance. Measurements of percentage depth dose (PDD) curves, beam profiles, and absolute dose were carried out using two accelerators: a LIAC HWL for intraoperative radiotherapy at the Hospital Clínico Universitario de Santiago, and the ElectronFlash from Sordina IORT Technologies at the Curie Institute in Paris. Experimental data were complemented with Monte Carlo simulations. The results show good agreement with reference detectors under conventional conditions and confirm the fD’s suitability for ultra-high dose-per-pulse and dose rate applications, supporting its potential as a reliable dosimetric tool for clinical implementation of FLASH-RT.
Direction
GOMEZ RODRIGUEZ, FAUSTINO (Tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Magnetometric study of critical phenomena and vortex dynamics in iron-based superconductor CaKFe4As4
Authorship
N.E.F.
Bachelor of Physics
Defense date
07.18.2025 09:30
Summary
CaKFe4As4 is a recently discovered superconductor from the iron pnictide family (FeSC). It is of great interest because it is one of the few stoichiometric FeSCs and because it has a high critical temperature (Tc = 35 K) among Fe-based superconductors. This work will involve the characterization of a high-quality CaKFe4As4 single crystal and the measurement of its magnetic response. The high resolution of the magnetometer used (a SQUID, based on superconducting quantum interference) will allow the study of precursor effects of superconductivity induced by thermal fluctuations at temperatures T close to the superconducting transition, Tc, and the determination of superconducting parameters such as the coherence length and the critical magnetic field. In turn, hysteresis loop measurements for T below Tc will reveal parameters of technological interest such as the critical current density and the irreversibility magnetic field.
Direction
MOSQUEIRA REY, JESUS MANUEL (Tutorships)
Court
ZAS ARREGUI, ENRIQUE (Chairman)
GARCIA FEAL, XABIER (Secretary)
FONDADO FONDADO, ALFONSO (Member)
Study of the thermoelectric properties of the ionic liquid EMimTFSI and its mixtures with lithium salts and nanoparticles
Authorship
Y.E.F.
Bachelor of Physics
Defense date
07.17.2025 10:00
Summary
In the search for a sustainable future, renewable energies play a crucial role. However, they strongly depend on energy storage systems. This work focuses on the study of the ionic liquid (IL) 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (EMimTFSI) for its potential use as an electrolyte. Measurements were performed on both the pure IL and on mixtures containing nanoparticles (HK3) and a lithium salt (LiTFSI), with the aim of improving the IL's properties. The samples were characterized using different techniques: Electrochemical Impedance Spectroscopy (EIS), Nuclear Magnetic Resonance (NMR), and Differential Scanning Calorimetry (DSC). EIS results show conductivity values above 0.3 S/m for all samples, indicating their suitability for battery applications. NMR studies of the IL and its mixtures with varying HK3 content showed an increase in the diffusion coefficient compared to the pure IL. However, this improvement does not scale with increasing HK3 content. DSC results reveal a slight improvement in thermal properties in mixtures containing HK3 compared to those without, most notably the disappearance of peaks associated with structural rearrangements of the IL at high heating rates.
Direction
SALGADO CARBALLO, JOSEFA (Tutorships)
Santiago Alonso, Antía (Co-tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Fabrication, structural characterization, and study of the superconducting properties of nanogranular aluminum thin films.
Authorship
M.F.V.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
Nanogranular aluminum (consisting of nanometric aluminum grains separated by an insulating layer of aluminum oxide) is a benchmark superconductor for the implementation of quantum logic devices. It is also very interesting from a fundamental point of view because it exhibits superconducting critical temperatures (Tc) up to three times higher than that of bulk aluminum (Tc = 1.1 K). This is why very recent articles have been dedicated to better understanding the phenomenology of this material. This work describes the fabrication process of nanogranular aluminum thin films by thermal evaporation, their characterization by scanning electron microscopy and atomic force microscopy, and the measurement of their electrical transport properties (resistivity and V-I curves). The fabricated films have Tc = 2.4 K, but down to 1.8 K they present a non-ohmic paracoherent state (superconducting grains decoupled from each other), never before observed in Al. Below 1.8 K the grains finally couple and the material reaches bulk superconductivity. Similar behavior was previously observed in artificial superconducting nanostructures of Nb.
Direction
MOSQUEIRA REY, JESUS MANUEL (Tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
From macro to micro: microfluidics and its potential in the validation of theranostic nanosystems
Authorship
A.F.M.
Bachelor of Physics
Defense date
07.17.2025 10:00
Summary
Microfluidic devices, or chips, represent a promising alternative for the preclinical validation of theranostic nanosystems in diseases such as cancer or atherosclerosis. In this context, so-called organs- or arteries-on-a-chip allow the simulation of tissue hemodynamic conditions, mimicking the interaction between nanosystems and physiological walls, thus overcoming the limitations of static cell cultures and reducing animal experimentation. The proper manipulation of fluids within the chips requires an understanding of the physical phenomena at the microscale, which differ significantly from those observed at the macroscale. Therefore, the first part of this work provides an overview of fundamental parameters in fluid physics, such as the Reynolds and Péclet numbers, which determine the dominant forces at the microscale, along with a description of the associated phenomena, including shear stress, diffusion, and capillarity. The second part of the study consists of the experimental development of an artery-on-a-chip for atherosclerosis treatment, using a three-channel chip connected to a fluidic unit through a perfusion set with two reservoirs. Shear stress is the most significant force acting on arterial walls; however, it cannot be controlled directly, so the chip must be calibrated to establish the relationship between the pressure applied by a pressure pump connected to the fluidic unit and the shear stress generated at the microchannel walls, which depends on the perfusion set and reservoirs used. This part of the study therefore analyzes the effect of the perfusion set diameter and length and of the reservoir diameter on the resulting shear stress, finding the optimal conditions to achieve a value typical of healthy arteries (12 dyn/cm2) and thereby recreate the arterial inner wall through the dynamic culture of endothelial cells in one of the chip's channels.
Direction
BARBOSA FERNANDEZ, SILVIA (Tutorships)
OGANDO CORTES, ALEJANDRO (Co-tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Characterisation of the calibration signals of NEXT-100
Authorship
N.F.L.
Bachelor of Physics
N.F.L.
Bachelor of Physics
Defense date
07.17.2025 10:00
Summary
The NEXT experiment is an international collaboration searching for neutrinoless double beta decay (beta beta 0 nu) in the isotope 136Xe. The NEXT-100 detector, currently operational, is expected to provide a competitive search for the decay half-life. For this, it is first necessary to validate the simulations. In this work we analyse electron-generated signal events, and hence the response of the detector to simulated Kr signals, establishing a method for energy correction and an estimate of the energy resolution.
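An energy-resolution estimate of the kind mentioned above is commonly obtained by fitting a Gaussian to the Kr peak and quoting the FWHM relative to the peak energy. A minimal sketch with synthetic data (not NEXT-100 simulation output; the 0.7 keV smearing is an assumed toy value, while 41.5 keV is the 83mKr line energy):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Toy "Kr peak": 83mKr deposits 41.5 keV; smear with an assumed resolution.
e0, sigma_true = 41.5, 0.7
energies = rng.normal(e0, sigma_true, 20000)

def gauss(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

counts, edges = np.histogram(energies, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(gauss, centers, counts, p0=[counts.max(), 41.0, 1.0])

fwhm = 2.355 * abs(popt[2])           # FWHM = 2*sqrt(2*ln2)*sigma
resolution = 100.0 * fwhm / popt[1]   # percent FWHM at the peak energy
print(f"sigma = {abs(popt[2]):.3f} keV, resolution = {resolution:.2f} % FWHM")
```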
Direction
HERNANDO MORATA, JOSE ANGEL (Tutorships)
HERVES CARRETE, CARLOS (Co-tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Undergraduate dissertation
Authorship
C.F.S.
Double bachelor degree in Mathematics and Physics
C.F.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2025 10:00
Summary
The van der Waals material 1T-CrSe2 consists of strongly covalently bonded in-plane layers held together out of plane by weak van der Waals forces. These systems exhibit behavior intermediate between localized electrons, with well-defined magnetic moments at each atomic site, and itinerant electrons delocalized across conduction bands without a net moment assignable to each atom. To investigate its magnetic ordering and electronic instabilities, we performed ab initio calculations with the WIEN2k code, employing the FP-LAPW plus local orbitals method and the PBE generalized gradient approximation. First, the rigid 1 × 1 monolayer was characterized: its ground state is ferromagnetic, and its density of states (DOS) is dominated at the Fermi level by Cr t2g orbitals, indicating electronic instabilities. To capture the charge density waves (CDWs) associated with Peierls modes, we constructed 2 × 2 and √3 × √3 supercells. Relaxation of the 2 × 2 cell reveals Cr tetramers and a partial band gap characteristic of a unidirectional Peierls mechanism. The √3 × √3 reconstruction simultaneously trimerizes all three t2g orbitals, opening pseudogaps above and below the Fermi level and leaving only a single residual flat band. Because conventional DFT tends to underestimate the local Coulomb interaction, we applied the LDA+U correction to the Cr d orbitals. For moderate values of U, the pseudogap opening is enhanced and the ferromagnetic CDW phase is further stabilized, whereas an excessive U induces subband reordering that reintroduces DOS peaks at EF.
Direction
PARDO CASTRO, VICTOR (Tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Laser-controlled nuclear batteries
Authorship
M.G.R.
Bachelor of Physics
M.G.R.
Bachelor of Physics
Defense date
07.17.2025 10:00
Summary
Nuclear batteries convert the energy released by the decay of radioactive isotopes into electricity, although, unlike a nuclear reactor, they do not do so through a chain reaction. However, their output power cannot be directly regulated, as it decreases over time as the decay progresses. Therefore, we will explore how this output can be controlled in order to use the batteries whenever desired. To address this, it is preferable to use nuclei with long-lived isomers that can transition, in a simple way, to a short-lived excited intermediate state, which then releases energy as it de-excites. The isomeric state to be studied will be Mo-93m, with a half-life t(1/2) = 6.85 h, which can be obtained through the reaction 93Nb(p,n)93mMo. This reaction will be simulated numerically by pumping protons onto a niobium target using ultra-intense lasers. In addition, we will investigate whether a process known as NEEC (Nuclear Excitation by Electron Capture) can excite Mo-93m to a short-lived intermediate state (with t(1/2) = 3.52 ns), which would release 2.430 MeV per nucleus upon de-excitation. In the NEEC process, a free electron is captured into an atomic vacancy, transferring its energy and momentum to the nucleus, thereby exciting it. However, it has not been experimentally confirmed that this process can occur: some physicists claim to have demonstrated it, though the scientific community has not reached a consensus on the validity of their experiment. Ultimately, through numerical simulations in Python, we will study the feasibility of producing Mo-93m both at the Laboratorio Láser de Aceleración e Aplicacións (L2A2) at the Universidade de Santiago de Compostela and at the Centro de Láseres Pulsados (CLPU) in Salamanca. The results will provide insight into whether it would be worthwhile to conduct this experiment, which is highly relevant to research on improving future energy storage and technological energy management.
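The triggering idea above can be quantified with the simple exponential decay law and the quoted energy release per nucleus. A back-of-the-envelope sketch (the number of produced Mo-93m nuclei is a hypothetical figure, not a result of this work):

```python
T_HALF_S = 6.85 * 3600          # Mo-93m half-life in seconds
E_PER_NUCLEUS_MEV = 2.430       # energy released per triggered de-excitation
MEV_TO_J = 1.602176634e-13

def remaining(n0, t_s):
    """Nuclei still in the isomeric state after time t (simple decay law)."""
    return n0 * 2.0 ** (-t_s / T_HALF_S)

def stored_energy_j(n_nuclei):
    """Energy releasable if all n nuclei were triggered (e.g., via NEEC)."""
    return n_nuclei * E_PER_NUCLEUS_MEV * MEV_TO_J

n0 = 1e15                              # hypothetical number of produced nuclei
print(remaining(n0, T_HALF_S) / n0)    # -> 0.5 after one half-life
print(stored_energy_j(n0), "J stored")
```

The point of a laser-controlled battery is precisely that `stored_energy_j` could be drawn on demand instead of leaking away on the 6.85 h timescale.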
Direction
ALEJO ALONSO, AARON JOSE (Tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Heat modeling and management in a SiC photovoltaic laser power converter
Authorship
I.G.G.
Bachelor of Physics
I.G.G.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
The principal problem limiting deep space missions, as well as the exploration of other dark, remote places, is the absence of a reliable energy source. High-power laser transmission has been pointed out as the primary way to solve this, but a key requirement is a high-efficiency laser power converter (LPC), and even then heat losses would be the main problem to solve. In the current state-of-the-art architectures, high-bandgap semiconductors are being investigated, with the three polytypes of SiC (3C, 4H and 6H) among the main candidates for this new technology. In this work, a thermal simulation of a 4H-SiC based horizontal laser power converter (hLPC) is carried out to observe the magnitude of heat losses in the device and to test the current approaches to heat management. No major temperature problems were found, but scaling to bigger cells and more complex architectures might require a more in-depth analysis.
Direction
GARCIA LOUREIRO, ANTONIO JESUS (Tutorships)
FERNANDEZ LOZANO, JAVIER (Co-tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
Characterization and Simulation of the Americium/Beryllium (AmBe) Source for the Calibration of the Water Cherenkov Test Experiment (WCTE) Detector for Hyper-Kamiokande (HK)
Authorship
A.G.A.
Bachelor of Physics
A.G.A.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
Hyper-Kamiokande (HK) is the third in the Japanese family of Water Cherenkov Detectors (WCDs), which focus their efforts on the study of neutrinos and of possible proton decay, as predicted by Grand Unified Theories (GUTs). For this purpose, several test experiments are being constructed, one of which is the Water Cherenkov Test Experiment (WCTE), currently operating at CERN. WCTE implements new technologies into the WCDs, such as a new type of MultiPMT, as well as new calibration techniques that will allow Hyper-Kamiokande to improve its particle separation and identification capabilities. Part of the calibration will be carried out using two radioactive sources, one of them being Americium/Beryllium (AmBe), whose simulation and characterization are presented in this work.
Direction
HERNANDO MORATA, JOSE ANGEL (Tutorships)
Costas Rodríguez, Diego (Co-tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
Automatic Segmentation of Preclinical Magnetic Resonance Imaging
Authorship
A.G.A.
Double bachelor degree in Mathematics and Physics
A.G.A.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 09:00
Summary
The main objective of this work is to develop an automated system for the detection and segmentation of glioblastomas, an aggressive type of brain tumor, in animal models (mice and rats) using preclinical magnetic resonance imaging (MRI). For this purpose, supervised learning techniques are employed to automatically segment the glioblastoma and accurately calculate its volume. Chapter 3 presents the U-Net model, a convolutional neural network specialized in medical segmentation tasks. This model is trained using MRI images of mice with manual segmentations (ground truth), with the aim of automatically locating the glioblastoma in new images. The impact of different hyperparameter configurations is explored, and the model's performance is evaluated using specific segmentation metrics. Chapter 4 focuses on the development of predictive models to estimate tumor volume in mice, based on the same images used in the previous chapter. The process includes the extraction of radiomic features, their comparison with those previously obtained by researcher Sara Ortega, the selection of the most relevant variables, and the training of regression models. Again, various combinations of hyperparameters are analyzed to study their influence on prediction quality. In Chapter 5, the previous procedure is replicated, this time using rat images. In addition, an analysis is conducted on the impact of increasing the sample size on the performance of the predictive models, training the algorithms with different amounts of data. Finally, the possibility of building a model to predict animal survival from the images was considered; however, a preliminary analysis revealed that the available data were insufficient to obtain reliable predictions, so this line is proposed as future research.
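Segmentation performance in this kind of work is commonly scored with overlap metrics such as the Dice coefficient. A minimal sketch of such a metric (illustrative masks, not the thesis's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice overlap of two binary masks: 2*|A and B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4-pixel "tumor" mask
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6-pixel prediction
print(dice_coefficient(b, a))   # 2*4 / (6 + 4) = 0.8
```

A Dice score of 1 means perfect overlap with the manual ground truth, 0 means none; the `eps` term only guards against division by zero for empty masks.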
Direction
IGLESIAS REY, RAMON (Tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
Shortcuts to adiabaticity in simple quantum systems
Authorship
J.G.C.
Double bachelor degree in Mathematics and Physics
J.G.C.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 09:00
Summary
Given a quantum system in which we want to modify some control parameter without changing its energy level, the adiabatic theorem allows us to do so, in exchange for making the change sufficiently slowly. This leads to a greater vulnerability of the system to effects such as noise or decoherence, which result in the loss of its quantum properties. To overcome this difficulty, the methods known as shortcuts to adiabaticity are designed: they allow us to accelerate the system preparation without having to rely on the adiabatic theorem. In particular, in this work we focus on the method known as Counterdiabatic Driving (CD), based on the introduction of an additional term in the Hamiltonian of the system that prevents transitions between instantaneous eigenstates. First, we motivate and state precisely the adiabatic theorem. Then, we introduce the CD formalism and apply it to several simple systems. Finally, a brief introduction to other shortcuts to adiabaticity and their limitations is given.
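For the archetypal two-level Landau-Zener sweep H(t) = (eps(t)*sz + Delta*sx)/2 (with hbar = 1), the counterdiabatic term has the closed form H_CD = -Delta*deps/dt / (2*(eps^2 + Delta^2)) * sy. A minimal numerical sketch (parameter values assumed for illustration, not taken from the thesis) verifies that adding this term keeps the state in the instantaneous ground state even for a fast sweep:

```python
import numpy as np
from scipy.integrate import solve_ivp

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

DELTA, EPS0, T = 1.0, 10.0, 1.0          # gap, sweep range, total time
eps = lambda t: EPS0 * (2 * t / T - 1)   # linear sweep from -EPS0 to +EPS0
deps = 2 * EPS0 / T                      # constant sweep rate

def ground_state(t):
    theta = np.arctan2(DELTA, eps(t))    # mixing angle of H(t)
    return np.array([-np.sin(theta / 2), np.cos(theta / 2)], dtype=complex)

def rhs(t, psi, with_cd):
    h = 0.5 * (eps(t) * sz + DELTA * sx)
    if with_cd:  # counterdiabatic term suppressing diabatic transitions
        h = h - DELTA * deps / (2 * (eps(t) ** 2 + DELTA ** 2)) * sy
    return -1j * (h @ psi)

def final_fidelity(with_cd):
    sol = solve_ivp(rhs, (0, T), ground_state(0), args=(with_cd,),
                    rtol=1e-10, atol=1e-12)
    return abs(np.vdot(ground_state(T), sol.y[:, -1])) ** 2

print(final_fidelity(False))  # fast bare sweep: large diabatic excitation
print(final_fidelity(True))   # with the CD term: fidelity ~ 1
```

The sweep here is deliberately too fast for the adiabatic theorem, which is exactly the regime where the CD correction earns its keep.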
Direction
Vázquez Ramallo, Alfonso (Tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
Design and optimization of a dressing for ulcers using CFD techniques.
Authorship
S.G.B.
Bachelor of Physics
S.G.B.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
The effective treatment of chronic ulcers represents a significant clinical challenge that requires advanced solutions to reduce both the associated morbidity and the socioeconomic burden on healthcare systems. These are exudative wounds that are difficult to manage, as excessive secretion can cause maceration of the surrounding skin and delay healing, leading to infections. In this context, the present work proposes the design and optimization of a dressing capable of simultaneously delivering medication and draining exudate, intended for the care of such wounds in particularly severe cases. The aim is to design an intelligently structured material that facilitates the removal of this secretion. To carry out the study, computational fluid dynamics (CFD) simulation was used as the main tool, through the Simcenter STAR-CCM+ software, which allowed us to model the behavior of the fluids involved within the structure of the dressing. With the aim of exploring a wide range of configurations for the device's geometry, a comparative analysis was conducted between four different designs, built parametrically, followed by an optimization process in which the most efficient configuration of their variables was sought. Once the optimal structure for each design was determined, a sensitivity analysis was carried out to evaluate performance under different clinical conditions: what happens when the drug inlet velocity changes, or the wound depth?
Direction
Pérez Muñuzuri, Alberto (Tutorships)
PARAMES ESTEVEZ, SANTIAGO (Co-tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
Analysis of the electrical conductivity of lubricants for electric vehicles
Authorship
F.G.P.
Bachelor of Physics
F.G.P.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
This Bachelor’s Thesis explores how the presence of polar additives and nanoparticles affects the electrical behaviour of synthetic lubricants used in applications where controlling charge accumulation is required, as in components of electric vehicles. The study combines laboratory tests on a polyalphaolefin (PAO-4) base oil and commercial lubricants, working with mixtures at different concentrations of 1-butanol, 1-hexanol and 1-nonanol, as well as conductive or semiconductive nanoparticles (TiC, TiO2, Al2O3). The relationship between additive fraction, temperature and the evolution of conductivity is addressed experimentally, paying attention to the formation of possible polar domains within traditionally insulating systems. For each configuration, the consistency of the results with available ionisation theories in non-polar media is evaluated, and the role of additive stability and dispersion is discussed as critical factors to ensure reproducible behaviour. This approach aims to provide useful data to better understand the physical mechanisms governing the variation of electrical conductivity in synthetic base lubricants, highlighting the importance of optimising additive selection and dosage to balance tribological and electrical properties in current industrial applications.
Direction
AMIGO POMBO, ALFREDO JOSE (Tutorships)
GINER RAJALA, OSCAR VICENT (Co-tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
Exploring the dynamics of the Universe through Friedmann models.
Authorship
P.H.S.
Bachelor of Physics
P.H.S.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
The proposal of this work is to expand on the concepts studied in the Astrophysics and Cosmology course of the Physics degree. Specifically, the objective is to study cosmological models of a more complex character than those usually treated in the fundamental cosmology literature. For this, we will work with the formalism centered on the Friedmann equations, and we will develop Python algorithms to numerically solve the equations that arise in multicomponent cosmologies.
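The kind of computation described can be sketched by integrating the Friedmann equation in units of the Hubble time, da/dtau = a*sqrt(Om_m*a^-3 + Om_r*a^-4 + Om_k*a^-2 + Om_L). A minimal example (density parameters assumed, not the thesis's code), which recovers the familiar age of about 0.964/H0 for a flat Om_m = 0.3, Om_L = 0.7 universe:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Benchmark-style density parameters (assumed: flat matter + Lambda model).
OM_M, OM_R, OM_L = 0.3, 0.0, 0.7
OM_K = 1.0 - OM_M - OM_R - OM_L   # curvature term, ~0 here

def dadt(tau, y):
    """Friedmann equation in units of the Hubble time (tau = H0 * t)."""
    a = y[0]
    return [a * np.sqrt(OM_M * a**-3 + OM_R * a**-4 + OM_K * a**-2 + OM_L)]

# Integrate forward from a small initial scale factor and locate a(tau) = 1.
sol = solve_ivp(dadt, (0, 2), [1e-4], dense_output=True,
                rtol=1e-10, atol=1e-12)
taus = np.linspace(0, 2, 2001)
a_of_tau = sol.sol(taus)[0]
age = taus[np.argmin(np.abs(a_of_tau - 1.0))]   # age in units of 1/H0
print(f"a = 1 reached at tau = {age:.3f}")
```

Swapping in other (Om_m, Om_r, Om_L) combinations turns the same few lines into the multicomponent survey the abstract describes.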
Direction
ALVAREZ MUÑIZ, JAIME (Tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
Ab initio study of defects in semiconductors and their influence on the band structure.
Authorship
B.I.F.
Bachelor of Physics
B.I.F.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
This work presents an ab initio study of various semiconductors with the aim of analyzing their electronic structure and evaluating the impact of defects and of different exchange-correlation functionals on the calculated band gap. Density Functional Theory (DFT), combined with the Linearized Augmented Plane Waves (LAPW) method as implemented in WIEN2k, is used throughout the study. First, the behavior of silicon is investigated under p-type doping (boron), n-type doping (phosphorus), and in the presence of vacancies. The simulations show that doping introduces shallow donor or acceptor levels, shifting the Fermi level accordingly, while vacancies generate localized states within the band gap. Next, the electronic structure of strontium titanate (SrTiO3), an ionic perovskite-type semiconductor, is examined. Orbital analysis confirms that the valence band is dominated by oxygen p orbitals, while the conduction band originates from titanium d orbitals. The crystal-field splitting of the d orbitals in the octahedral environment is also studied. Finally, the effectiveness of the modified Becke-Johnson (mBJ) potential is evaluated by comparing its results with those of the Generalized Gradient Approximation (GGA) functional and with experimental gap values. For Si and Ge (sp-type semiconductors), a significant improvement in the calculated gap is obtained; in SrTiO3, the correction is smaller but still relevant. This work highlights the efficiency of mBJ as a semilocal alternative for improving gap predictions in semiconductors, and the importance of defect-level analysis in understanding the electronic behavior of materials.
Direction
PARDO CASTRO, VICTOR (Tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
Influence of the Allyl Group on the Molar Refractivity of Ionic Liquids
Authorship
C.J.F.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
This work presents the optical characterization of two imidazolium-based ionic liquids using spectrally resolved white light interferometry, for which both the system and the built-in micrometer are calibrated. Interferometric measurements provide the refractive index as a function of wavelength, which allows the calculation of molar refractivity. This analysis is used to evaluate the contribution of both the allyl group and its double bond to the molar refractivity in the selected ionic liquids.
Direction
AROSA LOBATO, YAGO (Tutorships)
Court
Pérez Muñuzuri, Vicente (Chairman)
GALLAS TORREIRA, ABRAHAM ANTONIO (Secretary)
RODRIGUEZ GONZALEZ, JUAN ANTONIO (Member)
Study of mass limits in neutron stars
Authorship
C.L.V.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
This work focuses on the study of neutron star physics, examining their structure through equations of state (EOS), which describe the behavior of matter at the extreme densities characteristic of the interior of these bodies, and analyzing the so-called mass limits. Existing astronomical observations will be described, including isolated pulsars, magnetars and binary systems, which allow for direct evaluation of neutron star masses. Finally, building upon the detection of gravitational waves, the current status and reevaluation of these mass limits will be contextualized, which has a profound impact on the equations of state of nuclear matter.
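Although the abstract does not quote them, structure calculations of this kind are typically based on the Tolman-Oppenheimer-Volkoff (TOV) equations, stated here for context in their standard form, with P the pressure, m the enclosed mass, and epsilon the energy density supplied by the chosen EOS:

\frac{dP}{dr} = -\,\frac{G\,\bigl[\varepsilon(r)+P(r)\bigr]\,\bigl[m(r)+4\pi r^{3}P(r)/c^{2}\bigr]}{c^{2}\,r^{2}\,\bigl[1-2Gm(r)/(rc^{2})\bigr]},
\qquad
\frac{dm}{dr} = \frac{4\pi r^{2}\,\varepsilon(r)}{c^{2}}

Integrating these from a chosen central density outward until P vanishes yields one mass-radius point per EOS, which is how theoretical mass limits are mapped out.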
Direction
ALVAREZ POL, HECTOR (Tutorships)
Court
HERNANDO MORATA, JOSE ANGEL (Chairman)
SERANTES ABALO, DAVID (Secretary)
MENDEZ MORALES, TRINIDAD (Member)
Analysis of the Space-Charge phenomenon in the Pressurized TPC of the DUNE experiment (ND-GAr)
Authorship
A.L.B.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
Space charge consists of a deformation of the electric field in an ionization detector, caused by the ions' drift velocity being much lower than that of the electrons. This deformation of the field in turn induces a deformation in the measurement of the transverse position of the detected particles. In this work, an analytical treatment is proposed for the space-charge fields in the cases of continuous and pulsed charge injection. This analysis is used to estimate the effect that space charge will have on the ND-GAr TPC of the DUNE experiment, whose main objective will be the study of the neutrino oscillation process. Realistic values are used for this detector, and Ar/CH4 and Ar/CF4 mixtures are studied, both for the space-charge effect produced by the neutrino beam itself and by cosmic muons. The result is a deformation in the transverse position of O(mm) in the worst case.
Direction
GONZALEZ DIAZ, DIEGO (Tutorships)
Court
HERNANDO MORATA, JOSE ANGEL (Chairman)
SERANTES ABALO, DAVID (Secretary)
MENDEZ MORALES, TRINIDAD (Member)
Scintillation detectors and electronic systems for radiation dose assessment in radioactive waste containers
Authorship
S.T.L.P.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
Radioactive waste containers store spent fuel waste from nuclear power plants for temporary or permanent storage in geologically stable deep repositories. Measuring the dose generated by the waste in the containers is a problem that can be addressed with different simple detectors that allow sufficient dosimetric characterization to determine the status of the container. In this project, a proposal and design of a detector based on avalanche photodiodes (APDs) is presented for the determination of the radioactive dose, using a simple electronic design that optimizes the size and operation of the detector.
Direction
ALVAREZ POL, HECTOR (Tutorships)
CASAREJOS RUIZ, ENRIQUE (Co-tutorships)
Court
HERNANDO MORATA, JOSE ANGEL (Chairman)
SERANTES ABALO, DAVID (Secretary)
MENDEZ MORALES, TRINIDAD (Member)
Modelling the tracking performance in PbPb events at the LHCb experiment.
Authorship
A.L.M.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
The goal of this project is to accurately estimate the fraction of real reconstructed tracks out of the total in lead-lead collisions within the LHCb experiment. This analysis is based on a Monte Carlo simulation that is corrected using a reweighting algorithm known as GBReweighter. The result is then refined by fitting the ghost probability distribution from the simulation to the experimental data. This analysis is crucial to avoid misinterpreting the data, which could otherwise lead to incorrect conclusions about the underlying physics of the system.
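GBReweighter comes from the hep_ml package; as a minimal, hypothetical sketch of the density-ratio idea behind such reweighting (not the actual GBReweighter implementation, and using synthetic stand-in samples rather than LHCb data), one can train a gradient-boosted classifier to separate simulation from data and weight each simulated event by p/(1-p):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical 1-D feature: simulated events are slightly shifted w.r.t. data
mc = rng.normal(0.0, 1.0, size=(5000, 1))
data = rng.normal(0.3, 1.0, size=(5000, 1))

X = np.vstack([mc, data])
y = np.concatenate([np.zeros(len(mc)), np.ones(len(data))])  # 0 = MC, 1 = data

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3,
                                 random_state=0).fit(X, y)

# Density-ratio weights for the MC sample: w(x) = p(data|x) / p(MC|x)
p = clf.predict_proba(mc)[:, 1]
weights = p / (1.0 - p)
weights *= len(mc) / weights.sum()  # keep the total MC yield unchanged

# The reweighted MC mean should move toward the data mean
print(np.average(mc[:, 0], weights=weights))
```

After reweighting, the simulated feature distribution approximates the data distribution, which is the premise for then comparing quantities such as the ghost probability between the two.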
Direction
SANTAMARINA RIOS, CIBRAN (Tutorships)
BELIN, SAMUEL JULES (Co-tutorships)
Court
HERNANDO MORATA, JOSE ANGEL (Chairman)
SERANTES ABALO, DAVID (Secretary)
MENDEZ MORALES, TRINIDAD (Member)
Chaos and Fractals
Authorship
J.M.L.P.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
The aim of this essay is to establish the relation between chaotic systems and fractals. The Lorenz equations constitute an example of such chaotic systems, and we will need to discuss the region of parameters for which these equations exhibit such behavior. After that, we will introduce the concept of strange attractors and justify their fractal structure. Finally, we will introduce a few different definitions of the fractal dimension, and we will calculate the dimension of the Cantor set and the correlation dimension of the Lorenz strange attractor.
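The Cantor-set dimension mentioned above follows from self-similarity as ln 2 / ln 3 ≈ 0.6309. A minimal box-counting sketch (construction depth and box sizes are illustrative choices, not taken from the work) recovers the same value numerically:

```python
import numpy as np

def cantor_points(level):
    """Midpoints of the intervals of the level-th Cantor-set construction."""
    intervals = [(0.0, 1.0)]
    for _ in range(level):
        # each interval keeps its left and right thirds
        intervals = [iv for a, b in intervals
                     for iv in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return np.array([(a + b) / 2 for a, b in intervals])

def box_counting_dimension(points, sizes):
    """Slope of log N(eps) versus log(1/eps) over the given box sizes."""
    counts = [len(np.unique(np.floor(points / eps))) for eps in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

pts = cantor_points(12)                    # 4096 midpoints
sizes = [3.0 ** -k for k in range(2, 9)]   # boxes aligned with the construction
print(box_counting_dimension(pts, sizes))  # close to ln 2 / ln 3 ~ 0.6309
```

Using box sizes that are powers of 1/3 aligns the grid with the construction, so the count at scale 3^-k is exactly 2^k and the fitted slope reproduces the self-similarity dimension.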
Direction
MIGUEZ MACHO, GONZALO (Tutorships)
Court
HERNANDO MORATA, JOSE ANGEL (Chairman)
SERANTES ABALO, DAVID (Secretary)
MENDEZ MORALES, TRINIDAD (Member)
Critical fluctuations in few-layer superconductors: Extended theoretical predictions for their electrical conductivity
Authorship
F.L.S.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
A theoretical study is presented in which several novel expressions for the electrical conductivity in the precursor state of superconductors composed of a small finite number of parallel layers (few-layer superconductors) are calculated. In particular, by employing a matrix-based approach to the superconducting Ginzburg-Landau functional adapted to the geometry under consideration, explicit expressions are derived for both the direct and indirect contributions to paraconductivity. Previous calculations, which relied on determining the energy eigenvalues of the functional, were limited in that they could not address cases with an arbitrary number of layers or account for indirect contributions. Furthermore, these expressions are compared, yielding good agreement, with existing data for few-layer Bi3O2S3 systems, whose precursor state, to our knowledge, had not been previously analyzed.
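For orientation (standard fluctuation theory rather than material from this work), the classic Aslamazov-Larkin direct contribution to the paraconductivity of a single two-dimensional superconducting layer of thickness d, which multilayer expressions of this kind generalize, reads:

\Delta\sigma_{AL}^{2D} = \frac{e^{2}}{16\,\hbar\,d}\,\varepsilon^{-1},
\qquad
\varepsilon = \ln\frac{T}{T_{c}} \simeq \frac{T-T_{c}}{T_{c}}

where epsilon is the reduced temperature measuring the distance to the critical temperature.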
Direction
VAZQUEZ RAMALLO, MANUEL (Tutorships)
Court
HERNANDO MORATA, JOSE ANGEL (Chairman)
SERANTES ABALO, DAVID (Secretary)
MENDEZ MORALES, TRINIDAD (Member)
Control therapies in tumor growth models with radiotherapy
Authorship
A.L.R.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
Following a brief introduction to the general dynamics and fundamental aspects of cancer, this work begins with a concise literature review of several tumor growth models (exponential, Gompertz, and Verhulst) and radiotherapy response models (Linear-Quadratic model), which reveal competitive dynamics between two populations (healthy and cancerous cells). A mathematical analysis of these models is carried out to determine which best reproduces results found in the literature. Furthermore, these models are coupled with the diffusion equation to produce numerical simulations in both one and two spatial dimensions over time. The remainder of the study focuses on improving and optimizing the best of the preceding models by aiming to minimize the radiotherapy dose administered to the patient during the treatment window. To this end, optimal control strategies are applied to a system of two coupled differential equations governed by a slightly modified Verhulst model. This framework enables the comparison of three possible scenarios and their corresponding stability diagrams: uncontrolled, constant control and optimal control. In essence, this work progressively develops a simple mathematical model of cancer that is iteratively refined and enhanced through bibliographic and mathematical study, as well as the application of numerical solution methods.
Direction
Pérez Muñuzuri, Alberto (Tutorships)
López Pedrares, Javier (Co-tutorships)
Court
HERNANDO MORATA, JOSE ANGEL (Chairman)
SERANTES ABALO, DAVID (Secretary)
MENDEZ MORALES, TRINIDAD (Member)
Machine Learning for B Meson Selection in the LHCb Experiment
Authorship
M.L.R.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
Current tensions in the semileptonic ratio R(D) versus Standard Model predictions highlight the need for ultra-pure signal samples. After briefly reviewing the theoretical context and the LHCb experiment, we focus on the study of the hadronic channel B0 to D(minus) pi(plus) pi(minus) pi(plus), D(minus) to pi(minus) K(plus) pi(minus), using real LHCb Run 2 data (2016 to 2018) and Monte Carlo simulations. We selected the variables with the highest signal-background contrast and tested various multilayer perceptron (MLP) architectures in scikit-learn to find the optimal cut that maximizes purity without sacrificing efficiency. The classifier achieved an AUC of 0.97 and increased the signal-to-background ratio from 82 percent to 288 percent, while retaining over 96 percent of the signal. With the purified sample, we performed a mass fit of the B0 meson using ROOT RooFit, obtaining 5280.00 MeV per c2 with an uncertainty of 0.13 MeV per c2, in excellent agreement with reference values. These findings demonstrate the power of neural networks to produce high purity samples essential for future semileptonic measurements and new physics searches.
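As a minimal sketch of the MLP-plus-cut-scan workflow described above (synthetic features via scikit-learn's make_classification stand in for the real discriminating variables; the architecture and the 90 percent efficiency floor are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for signal (1) vs background (0) candidates
X, y = make_classification(n_samples=4000, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                                  random_state=0))
clf.fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, scores)

# Scan cuts: maximize purity while keeping signal efficiency above 90 percent
cuts = np.linspace(0.0, 1.0, 101)
eff = [(scores[y_te == 1] > c).mean() for c in cuts]
pur = [y_te[scores > c].mean() if (scores > c).any() else 0.0 for c in cuts]
best_purity, best_cut = max((p, c) for p, e, c in zip(pur, eff, cuts)
                            if e >= 0.90)
print(auc, best_cut, best_purity)
```

The same pattern (score, then trade purity against efficiency along the cut axis) is what sits behind choosing an "optimal cut" on any classifier output.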
Direction
ROMERO VIDAL, ANTONIO (Tutorships)
NÓVOA FERNÁNDEZ, JULIO (Co-tutorships)
Court
HERNANDO MORATA, JOSE ANGEL (Chairman)
SERANTES ABALO, DAVID (Secretary)
MENDEZ MORALES, TRINIDAD (Member)
Collection efficiency in ionization chambers
Authorship
L.L.S.
Bachelor of Physics
Defense date
07.17.2025 10:00
Summary
FLASH radiotherapy, an emerging modality in oncology, appears as a promising tool, although it comes with significant dosimetry challenges. This work addresses the operation of ionization chambers and the associated physical processes, as well as the study of their collection efficiency under the extreme conditions of this regime, where the high doses per pulse challenge the physics of recombination and reveal the limitations of traditional models. For this purpose, data obtained at the MELAF electron accelerator of the PTB in Germany are analysed and compared with the predictions of an existing numerical model that incorporates relevant effects not considered by classical formulations. This combined approach allows for the evaluation of the consistency of current models and deepens the understanding of the actual performance of ionization chambers under FLASH conditions, moving towards a more accurate dosimetry that can support the therapeutic potential of this technique.
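For context (this is the standard pulsed-beam recombination theory whose limitations the work examines, not a result of the work itself), Boag's classical collection efficiency for a plane-parallel chamber in pulsed beams is:

f = \frac{Q_{\text{coll}}}{Q_{\text{rel}}} = \frac{1}{u}\,\ln(1+u),
\qquad
u = \mu\,\frac{q\,d^{2}}{V}

where q is the charge density released per pulse, d the electrode gap, V the polarizing voltage, and mu a constant built from the ion mobilities and the recombination coefficient. At the very high doses per pulse of the FLASH regime, u becomes large and effects neglected here (such as free-electron collection and field screening) become important, motivating the numerical model used in the work.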
Direction
GOMEZ RODRIGUEZ, FAUSTINO (Tutorships)
Paz Martín, José (Co-tutorships)
Court
MIRA PEREZ, JORGE (Chairman)
CID VIDAL, XABIER (Secretary)
MOSQUEIRA REY, JESUS MANUEL (Member)
Application of the piston mechanism to proton knockout reactions
Authorship
M.L.F.
Bachelor of Physics
Defense date
07.17.2025 10:30
Summary
This work studies the applicability of the piston mechanism in nuclear decays involving the emission of two neutrons, triggered by knockout reactions. Using a model based on nuclear configuration mixing and its time evolution, the conditions under which this mechanism can develop and give rise to coherent oscillations between quantum states are analyzed. The study is set within the context of current experiments at GSI using high-resolution neutron detectors, which allow the observation of correlations between particles in the decay products. Key parameters determining the physical relevance of the mechanism are identified, such as the lifetime of the excited state, the oscillation period, and the degree of configuration mixing.
Direction
FERNANDEZ DOMINGUEZ, BEATRIZ (Tutorships)
FEIJOO FONTAN, MARTINA (Co-tutorships)
Court
MERINO GAYOSO, CARLOS MIGUEL (Chairman)
PRIETO BLANCO, XESUS (Secretary)
Martinez Hernandez, Diego (Member)
Testing the joint observation of the black-hole merger GW190521 through electromagnetic and gravitational waves leveraging spin and gravitational recoil measurements.
Authorship
U.M.R.
Bachelor of Physics
Defense date
07.17.2025 10:30
Summary
In this work we aim to evaluate the common source hypothesis for the gravitational wave signal GW190521 and its candidate electromagnetic counterpart ZTF19abanrhr. We will introduce the statistical formalism that will be used to carry out the data analysis and present the restrictions to be imposed on the studied parameters. We will first carry out our analysis focusing solely on the sky location and luminosity distance of the sources to then introduce further restrictions concerning the flare emission mechanism.
Direction
CALDERON BUSTILLO, JUAN (Tutorships)
Court
MERINO GAYOSO, CARLOS MIGUEL (Chairman)
PRIETO BLANCO, XESUS (Secretary)
Martinez Hernandez, Diego (Member)
Modelling of the thermal behavior of soils affected by forest fires.
Authorship
J.J.M.B.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
This Final Degree Project proposes an advance in the study of the thermal behaviour of soils affected by forest fires through the modelling of thermograms previously obtained by Differential Scanning Calorimetry (DSC). The main objective is to separate the thermal processes occurring in the soil organic matter and analyse its changes as a result of laboratory heating, using different mathematical functions. To this end, a software tool has been developed in Python that enables the fitting of thermal curves based on experimental data, incorporating features such as smoothing, noise correction, minimum detection, and parameter fitting using the Levenberg-Marquardt algorithm. Samples of soil from the Antela area (Xinzo de Limia, Ourense) have been analysed, both untreated and subjected to thermal treatment at 300 degrees Celsius. The results show that the sum of three Gaussian functions is the most effective and statistically robust model (Coefficient of determination greater than 0.999) for describing DSC curves, identifying three main exothermic events: combustion of labile organic matter (200-300 degrees Celsius), combustion of recalcitrant organic matter (300-400 degrees Celsius), and degradation of compounds generated during the combustion process (greater than 400 degrees Celsius). It is concluded that the proposed model enables an accurate characterisation of the thermal state of the soil, facilitating the identification of the damage sustained and its potential recovery. This methodology represents a replicable and transferable tool for post-fire soil assessment, with potential applications in ecological restoration and forest management.
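A minimal sketch of the three-Gaussian fitting step described above (synthetic thermogram; amplitudes and widths are hypothetical, with peak centers chosen near the temperature ranges reported in the abstract; scipy's curve_fit uses Levenberg-Marquardt by default when no bounds are given):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(T, A, mu, sigma):
    return A * np.exp(-(T - mu) ** 2 / (2 * sigma ** 2))

def three_gaussians(T, *p):
    # p = (A1, mu1, s1, A2, mu2, s2, A3, mu3, s3)
    return sum(gauss(T, *p[3 * i:3 * i + 3]) for i in range(3))

# Synthetic DSC-like signal with three exothermic events plus noise
T = np.linspace(150, 550, 400)
true = (1.0, 260, 25, 0.8, 350, 30, 0.4, 450, 40)
rng = np.random.default_rng(1)
y = three_gaussians(T, *true) + rng.normal(0, 0.01, T.size)

p0 = (1, 250, 20, 1, 340, 20, 0.5, 440, 30)        # initial guess
popt, _ = curve_fit(three_gaussians, T, y, p0=p0)  # Levenberg-Marquardt

resid = y - three_gaussians(T, *popt)
r2 = 1 - resid.var() / y.var()
print(np.round(popt, 1), r2)
```

With a reasonable initial guess the fit recovers the three peak positions, and the coefficient of determination approaches 1, mirroring the quality-of-fit criterion used in the work.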
Direction
SALGADO CARBALLO, JOSEFA (Tutorships)
PARAJO VIEITO, JUAN JOSE (Co-tutorships)
Court
MIRAMONTES ANTAS, JOSE LUIS (Chairman)
SANTAMARINA RIOS, CIBRAN (Secretary)
Prieto Estévez, Gerardo (Member)
Study of weak turbulence in Einstein's equations
Authorship
A.M.F.
Bachelor of Physics
Defense date
07.17.2025 10:30
Summary
This work explores the nonlinear dynamics of a stochastic field of gravitational waves, a physical scenario relevant to the early universe. Starting from Einstein's vacuum equations under the Hadad-Zakharov metric, the multiple time-scale method is applied to overcome the limitations of standard perturbation theories, which fail due to the appearance of secular terms. The main objective is the derivation of the kinetic equation that governs the statistical evolution of the wave action spectrum for a system dominated by four-wave interactions. Once the kinetic equation is obtained, its stationary solutions are analyzed. Two families of solutions are identified: the thermodynamic equilibrium spectra of Rayleigh-Jeans and the non-equilibrium spectra of Kolmogorov-Zakharov. The most significant result is the prediction of a dual cascade: a direct cascade of energy towards small scales and an inverse cascade of wave action towards large scales.
Direction
FARIÑA BIASI, ANXO (Tutorships)
Court
MERINO GAYOSO, CARLOS MIGUEL (Chairman)
PRIETO BLANCO, XESUS (Secretary)
Martinez Hernandez, Diego (Member)
Theoretical-computational study of ionic liquid, carbonates and lithium salt mixtures.
Authorship
P.M.N.
Bachelor of Physics
Defense date
07.17.2025 10:30
Summary
Ionic liquids (ILs) are compounds of great interest in the field of energy storage and candidates to replace commonly used electrolytes. In this study, ternary mixtures of the IL 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (EMIM-TFSI), a lithium salt (Li-TFSI) and ethylene carbonate (EC) as cosolvent will be analyzed both experimentally and computationally. In particular, we will obtain values for the electrical conductivity of the mixtures and compare the two sets of results.
Direction
MENDEZ MORALES, TRINIDAD (Tutorships)
PARAJO VIEITO, JUAN JOSE (Co-tutorships)
Court
MERINO GAYOSO, CARLOS MIGUEL (Chairman)
PRIETO BLANCO, XESUS (Secretary)
Martinez Hernandez, Diego (Member)
Study of decays to the final state Ds+pi+pi- with Run 3 data collected by the LHCb experiment at CERN.
Authorship
H.M.F.
Bachelor of Physics
Defense date
07.17.2025 10:30
Summary
The hadrons Ds1(2460)+ and Ds1(2536)+ are excited states with the same flavour but higher mass than the Ds+. The Ds1(2460)+ is a tetraquark candidate due to characteristics such as its small width and its proximity to the D*K threshold. Therefore, the aim of this work is to study the nature of the Ds1(2460)+ through its production, using the Ds1(2536)+ as a reference, since it is a well-established conventional meson. For this purpose, we will use the decays DsJ to Ds+pi+pi-, which provide a clean channel containing only hadrons in the final state. Thanks to the increased luminosity and improvements in hadron reconstruction achieved with the recent LHCb Upgrade I, a higher detection efficiency for these decays is expected, which compensates for their relatively low decay rate. We will therefore use the new Run 3 LHCb data, verify them, and perform a detector performance test for this type of analysis. The study will begin by applying kinematic and particle identification cuts while reconstructing the Ds+ meson through the decay Ds+ to K+K-pi+. Subsequently, we will reconstruct the Ds1(2460)+ and Ds1(2536)+ mesons and perform an invariant mass fit to extract the signal events. This procedure will be repeated in bins of pT, allowing us to obtain the ratio of hadron signals as a function of pT, which will be taken as the final result of the analysis.
Direction
GALLAS TORREIRA, ABRAHAM ANTONIO (Tutorships)
Cambón Bouzas, José Iván (Co-tutorships)
Court
MERINO GAYOSO, CARLOS MIGUEL (Chairman)
PRIETO BLANCO, XESUS (Secretary)
Martinez Hernandez, Diego (Member)
Characterization of a constant potential X-ray tube.
Authorship
D.M.C.
Bachelor of Physics
Defense date
07.17.2025 10:30
Summary
This work presents a comprehensive experimental characterization of a constant potential X-ray tube, focused on determining its half-value layer (HVL) and the spatial profile of the emitted beam. The HVL, which quantifies the beam's penetration ability and serves as an indicator of its quality, is obtained through direct measurements using an ionization chamber, and is also indirectly estimated from energy spectra recorded with a CdTe semiconductor detector. To correct distortions in the measured spectra (due to escape effects, random coincidences, and Compton scattering), a detector-specific response matrix was developed and applied, based on Monte Carlo simulations. The study is complemented by comparisons between experimental results and spectral models generated with the SpekPy software, allowing the estimation of parameters such as the tube’s inherent filtration and the evaluation of the anode geometry's influence (heel effect) on beam distribution. Additionally, the beam profile is analyzed in orthogonal directions using a motorized linear stage, revealing the impact of structural components and anode tilt on field homogeneity. This work provides a detailed insight into the actual behavior of an X-ray tube in contrast to idealized models, highlighting the importance of combining experimental measurements, simulations, and analytical models for accurate characterization. The results are relevant both for dosimetric purposes and for clinical or experimental applications requiring rigorous control over beam quality.
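For an idealized mono-energetic beam, the HVL determination described above reduces to fitting an exponential attenuation law and taking ln 2 over the attenuation coefficient; a minimal sketch with assumed readings (a real tube spectrum hardens as filtration is added, so the single-coefficient model is only a first approximation):

```python
import numpy as np

# Illustrative attenuation data: ionization-chamber readings I(x) as aluminium
# filters of thickness x are added (values assumed, not measured data).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # Al thickness, mm
I = 100.0 * np.exp(-0.231 * x)               # ideal exponential attenuation

# Log-linear fit ln I = ln I0 - mu * x, then HVL = ln 2 / mu
mu = -np.polyfit(x, np.log(I), 1)[0]         # effective attenuation coefficient, 1/mm
hvl = np.log(2.0) / mu                       # thickness halving the beam intensity
print(round(hvl, 2))                         # 3.0 mm Al for mu = 0.231/mm
```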
Direction
GOMEZ RODRIGUEZ, FAUSTINO (Tutorships)
Court
MERINO GAYOSO, CARLOS MIGUEL (Chairman)
PRIETO BLANCO, XESUS (Secretary)
Martinez Hernandez, Diego (Member)
Development and characterization of an ignition source detector in the ultraviolet range
Authorship
C.N.L.
Bachelor of Physics
Defense date
07.17.2025 10:30
Summary
This work presents the development and characterization of a radiation detection system in the ultraviolet (UV) range, whose main component is the UVTron R13192 from Hamamatsu. This device enables detection within the 185 to 260 nm range and features high response speed, sensitivity, and the ability to operate without interference from sunlight. The study includes the construction of an experimental setup using a C10807 circuit, an oscilloscope, and an Arduino for data acquisition. A calibration is carried out using an alpha source of americium, taking advantage of the radioluminescence induced in nitrogen in the air, and an estimated value for the number of photons per pulse detectable by the sensor is obtained. The spatial and angular sensitivity of the system is analyzed experimentally using a UV source (lamp L9657-03), and this analysis is complemented by the development of a simulation in Geant4. This simulation allows for a versatile and straightforward reproduction of the detector’s behavior under different geometric configurations. Finally, several improvements are proposed, including the use of a PTFE-coated collimator, and more advanced technologies such as S-RETGEM detectors are explored. These new systems offer greater sensitivity and adaptability for more demanding applications, such as forest fire detection. This research concludes and verifies the effectiveness of the UVTron for ignition source detection, while also presenting alternative paths for future improvements.
Direction
AYYAD LIMONGE, FRANCESC YASSID (Tutorships)
CABO LANDEIRA, CRISTINA (Co-tutorships)
Court
MERINO GAYOSO, CARLOS MIGUEL (Chairman)
PRIETO BLANCO, XESUS (Secretary)
Martinez Hernandez, Diego (Member)
Tribological Properties of Electric Vehicle Transmission Nanofluids and their Compatibility with Elastomers
Authorship
P.N.P.
Bachelor of Physics
Defense date
07.17.2025 16:00
Summary
In this Final Degree Project, a tribological study of eight nanolubricants prepared from a base oil (PAO6) additivated with titanium carbide (TiC) or zirconium carbide (ZrC) nanoparticles (NPs) has been carried out. The anti-friction and anti-wear behaviour of both NPs has been analysed by measuring the coefficient of friction on a tribometer and the wear scar on an optical profilometer. For the nanolubricant with TiC NPs, only an 8% improvement in the coefficient of friction has been achieved, while the one containing ZrC NPs has reduced both the coefficient of friction, by 10%, and the wear area, by 17%. The optimal concentration, for which the greatest reduction of these parameters with respect to PAO6 was achieved, was 0.01 wt% of ZrC NPs. With the two dispersions of optimal concentration, a compatibility study has been carried out with three elastomers (silicone, FKM and EPDM rubbers) through tensile and volume variation tests.
Direction
FERNANDEZ PEREZ, JOSEFA (Tutorships)
GARCIA GUIMAREY, MARIA JESUS (Co-tutorships)
Court
MERINO GAYOSO, CARLOS MIGUEL (Chairman)
PRIETO BLANCO, XESUS (Secretary)
Martinez Hernandez, Diego (Member)
Development and characterization of an energy transmission system using optical fiber
Authorship
A.O.D.A.
Bachelor of Physics
Defense date
07.17.2025 16:00
Summary
In recent years, there have been great advances in the transmission of energy by light. The so-called Power-by-Light (PBL) systems are an answer to energy transmission needs, as they allow greater control and safety, making them suitable for many uses and environments. In this work we present the detailed development and characterization of a laser power-transmission system over optical fiber based on a vertical epitaxial heterostructure (VEHSA) cell with a total of 6 p/n junctions.
Direction
GARCIA LOUREIRO, ANTONIO JESUS (Tutorships)
FERNANDEZ LOZANO, JAVIER (Co-tutorships)
Court
MERINO GAYOSO, CARLOS MIGUEL (Chairman)
PRIETO BLANCO, XESUS (Secretary)
Martinez Hernandez, Diego (Member)
Waves in Coastal Systems
Authorship
A.P.G.
Bachelor of Physics
Defense date
07.17.2025 16:00
Summary
The linear properties of the waves on our shores can be studied as a surface gravity wave problem in the framework of continuous media. Our aim is to study wave propagation, its dispersion relation and the dynamic and kinematic properties of the particle flow. Furthermore, we will explain everyday phenomena such as shoaling, refraction and grouping into sets based upon general wave properties, which we will illustrate through simulations in SWAN, one of the spectral models used by MeteoGalicia.
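The linear dispersion relation underlying such studies, omega^2 = g k tanh(k h), has no closed-form inverse, so the wavenumber k is usually obtained iteratively; a generic Newton-iteration sketch (not MeteoGalicia's SWAN code) that also illustrates shoaling, the shortening of a swell as the depth decreases:

```python
import math

def wavenumber(T, h, g=9.81, tol=1e-10):
    # Solve the linear dispersion relation omega^2 = g k tanh(k h)
    # for the wavenumber k, given wave period T (s) and depth h (m).
    omega = 2.0 * math.pi / T
    k = omega ** 2 / g                       # deep-water first guess
    for _ in range(100):
        f = g * k * math.tanh(k * h) - omega ** 2
        df = g * math.tanh(k * h) + g * k * h / math.cosh(k * h) ** 2
        k_new = k - f / df                   # Newton step
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

# Shoaling example: a 10 s swell shortens as the water gets shallower
L_deep = 2.0 * math.pi / wavenumber(10.0, 1000.0)   # ~156 m offshore
L_shallow = 2.0 * math.pi / wavenumber(10.0, 5.0)   # much shorter near shore
print(round(L_deep), round(L_shallow))
```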
Direction
MIGUEZ MACHO, GONZALO (Tutorships)
Court
MERINO GAYOSO, CARLOS MIGUEL (Chairman)
PRIETO BLANCO, XESUS (Secretary)
Martinez Hernandez, Diego (Member)
Formation of Singularities in Euler's Equations: The Implosion of a Fluid
Authorship
A.P.O.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
In order to illustrate a set of techniques and methods used in the study of singularities, the Euler equations for an isentropic and compressible gas are analyzed using self-similar solutions that represent the implosion of the fluid. This approach simplifies the coupled system and allows it to be studied through an autonomous system in which a series of singularities arise. The possible trajectories are obtained through numerical integration, which makes it possible to determine the density and velocity profiles at any time during compression.
Direction
FARIÑA BIASI, ANXO (Tutorships)
Court
MIRAMONTES ANTAS, JOSE LUIS (Chairman)
SANTAMARINA RIOS, CIBRAN (Secretary)
Prieto Estévez, Gerardo (Member)
Study of decoherence and determinism with active targets
Authorship
P.P.B.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
This work presents a study of quantum decoherence, focusing on its manifestation in matter-wave interferometry and its role in the transition between quantum and classical regimes. In particular, it analyzes collision-induced decoherence, based on the theoretical framework developed by Hornberger et al., which involves interferometry with fullerene molecules in the presence of a background gas. From the analysis of the visibility of the interference pattern in a Talbot-Lau interferometer, the calculations from the mentioned article are reproduced to determine the characteristic decoherence pressure for different gases. The main objective is to extrapolate these results to electron beams and to assess the conditions under which the approximation remains valid. Furthermore, the feasibility of implementing an interferometric setup for electron beams within a time projection chamber with active targets is investigated, with the aim of enabling experimental studies of decoherence under controlled conditions. Finally, some relevant technological applications of decoherence studies, both with electrons and with molecules, are briefly discussed, along with several open research directions in this field.
Direction
FERNANDEZ DOMINGUEZ, BEATRIZ (Tutorships)
Court
MIRAMONTES ANTAS, JOSE LUIS (Chairman)
SANTAMARINA RIOS, CIBRAN (Secretary)
Prieto Estévez, Gerardo (Member)
Application of the Principle of Collapsed Distributions to the Synthesis and Diagnosis of Planar Antenna Arrays
Authorship
D.M.P.O.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
The following work presents two applications of the principle of collapsed distributions. The principle states that each azimuthal cut of a planar array pattern, with phi = phi_o, is equivalent to the pattern generated by a linear array. The linear array is obtained by projecting all of the excitations of the planar antenna onto the line phi = phi_o. The first application uses this idea to synthesize a 32-element planar array with a square grid, circular boundary and octantal symmetry. Solving the synthesis problem requires solving an over-determined system of six equations in four variables. We attempted to solve this system approximately by means of a hybrid optimization algorithm combining Simulated Annealing and Downhill Simplex. The second application presents an algorithm for detecting faulty elements in damaged planar arrays, where only on-off faults are considered. The algorithm uses far-field complex samples of the damaged pattern taken along the phi = 0, 45 and 90 degree cuts. It uses this information to perform an exhaustive search to find the three collapsed linear arrays, in the specified directions, that best match the samples of the damaged pattern. Thus, the coordinates of the faulty elements along the three mentioned axes can be easily obtained. The algorithm was applied here to the diagnosis of a 76-element square-grid antenna with circular boundary and quadrantal symmetry.
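The principle of collapsed distributions can be verified numerically in a few lines: in the phi = 0 cut the y-coordinates contribute no phase, so summing each row of excitations (projecting them onto the x axis) reproduces the planar pattern exactly. The grid size, spacing and excitations below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 8
a = rng.random((M, N))             # arbitrary excitations on a square grid
kd = np.pi                         # element spacing d = lambda/2, so k*d = pi
m = np.arange(M)
u = np.linspace(-1.0, 1.0, 201)    # u = sin(theta) along the phi = 0 cut

# Full planar array factor in the phi = 0 plane (y adds no phase there)
phase = np.exp(1j * kd * np.outer(m, u))               # shape (M, len(u))
F_planar = np.einsum('mn,mu->u', a.astype(complex), phase)

# Collapsed linear array: project the excitations onto the x axis
b = a.sum(axis=1)
F_collapsed = b.astype(complex) @ phase

print(np.allclose(F_planar, F_collapsed))              # the patterns coincide
```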
Direction
ARES PENA, FRANCISCO JOSE (Tutorships)
Court
MIRAMONTES ANTAS, JOSE LUIS (Chairman)
SANTAMARINA RIOS, CIBRAN (Secretary)
Prieto Estévez, Gerardo (Member)
Drug-albumin interactions: a thermodynamic study
Authorship
C.P.G.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
Human serum albumin (HSA) is the main protein in the blood and functions as a transport vehicle for a wide variety of molecules, including drugs. Clomipramine is a tricyclic antidepressant that has proven highly effective in treating patients with obsessive-compulsive disorder. It has complex pharmacokinetics and high protein binding, which condition its clinical efficacy and generate wide inter-patient variability. Noble metal nanoclusters are aggregates of tens of atoms that have unique optical properties and exhibit significant fluorescence. Stable conjugates have been successfully synthesized (with an average of 45 gold atoms per protein) that show a characteristic red fluorescence (620 nm). The conjugate has been physicochemically characterized to verify how the presence of the gold nanoclusters affects the behavior of clomipramine. It has been demonstrated that the conjugate retains the ability to bind clomipramine with an affinity similar to that of native HSA and with a 1:1 stoichiometry. Thermodynamically, ITC measurements show that the binding is spontaneous and exothermic in both cases. Although the conjugate shows a less favourable enthalpic contribution, this is compensated by a smaller entropic penalty, which implies that the protein becomes more rigid and requires less structural rearrangement. These results validate the HSA-AuNCs conjugate as a functional analogue of HSA that can be used as an optical probe for future bioimaging applications.
Direction
TOPETE CAMACHO, ANTONIO (Tutorships)
Court
MIRAMONTES ANTAS, JOSE LUIS (Chairman)
SANTAMARINA RIOS, CIBRAN (Secretary)
Prieto Estévez, Gerardo (Member)
The Hanbury Brown and Twiss Effect: Simulation with Speckle pattern
Authorship
G.A.P.C.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
This thesis addresses Hanbury Brown and Twiss (HBT)-style intensity interferometry, initially developed for measuring the angular size of stars. We first review the classical framework: the correlation between the intensities recorded at two points of the field radiated by a thermal source decreases with the separation between the points (the baseline), which allows the stellar diameter to be recovered without the need for phase information. We also show how quantum theory predicts anticorrelation for single-photon states; this experiment illustrates the contrast between classical and quantum light. We simulate the chaotic source in the laboratory with a laser and a rotating diffuser, which allows us to control temporal coherence by generating laser speckle. A small aperture simulates the extension of the star, and each pair of camera pixels simulates a configuration of the HBT astronomical observatory. The results are consistent with the classical predictions of Hanbury Brown and Twiss.
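The classical prediction being tested, the excess intensity correlation of thermal (speckle) light, can be checked with a toy Monte Carlo; modelling the field as circular complex Gaussian noise is the standard assumption for fully developed speckle, not the specific setup of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Fully developed speckle: the field at a point is circular complex Gaussian
E = rng.normal(size=n) + 1j * rng.normal(size=n)
I = np.abs(E) ** 2                     # instantaneous intensity samples

# Zero-baseline intensity correlation g2(0) = <I^2> / <I>^2;
# classical thermal light gives g2(0) = 2 (photon bunching)
g2 = np.mean(I ** 2) / np.mean(I) ** 2
print(round(g2, 3))
```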
Direction
PRIETO BLANCO, XESUS (Tutorships)
BARRAL RAÑA, DAVID (Co-tutorships)
Court
MIRAMONTES ANTAS, JOSE LUIS (Chairman)
SANTAMARINA RIOS, CIBRAN (Secretary)
Prieto Estévez, Gerardo (Member)
Calculation of the production of exotic nuclei with the new 186W beam at FAIR
Authorship
V.P.R.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
At the new FAIR facility, still under development, a new primary beam is planned, originating from a 186W source, which will enable measurements of neutron-rich nuclei near the drip line for elements in the Z=69 region. In this case, we will use fragmentation reactions of 186W projectiles at 1 GeV/u on a Be target. To carry out these experiments, it is necessary to perform preliminary calculations to determine the expected production cross sections. Based on these cross sections, the production rates of a wide range of nuclides between Z=55 and Z=72 will be estimated. These calculations of exotic nuclei production will be performed using the INCL+ABLA simulation code, allowing us to conclude that this new source presents a viable and highly interesting alternative for future experiments using the SUPER-FRS. Once produced, the exotic nuclei can be sent through a mass spectrometer to different experimental areas at FAIR to study their properties: mass, decays, structure, etc.
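The step from a cross section to a production rate is a simple scaling, R = sigma × n_t × I_beam. The numbers below (cross section, target thickness, beam intensity) are illustrative assumptions, not results of the INCL+ABLA calculations:

```python
# Hedged order-of-magnitude sketch: production rate from a cross section,
# R = sigma * n_t * I, with n_t the areal density of target atoms and I
# the beam intensity. All input values are assumed, not thesis results.
N_A = 6.022e23          # Avogadro's number [1/mol]
A_Be = 9.012            # molar mass of beryllium [g/mol]
thickness = 2.5         # Be target thickness [g/cm^2], assumed
sigma = 1e-27           # production cross section [cm^2] (1 mb), assumed
beam = 1e9              # primary 186W ions per second, assumed

n_t = N_A * thickness / A_Be        # target atoms per cm^2
rate = sigma * n_t * beam           # produced nuclei per second
print(f"{rate:.2e} nuclei/s")
```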
Direction
ALVAREZ POL, HECTOR (Tutorships)
FEIJOO FONTAN, MARTINA (Co-tutorships)
Court
MIRAMONTES ANTAS, JOSE LUIS (Chairman)
SANTAMARINA RIOS, CIBRAN (Secretary)
Prieto Estévez, Gerardo (Member)
Exploring the physical limit of the time resolution in gaseous detectors
Authorship
D.P.R.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
RPCs (Resistive Plate Chambers) are a type of detector currently used in colliders such as CERN’s LHC, among others, for the detection of charged particles such as muons or pions. In this work, we will explain how the concept of RPCs was developed, starting from Geiger-Müller counters, moving through PSC spark detectors, and reaching the current RPC models. In addition, we will study the different processes that can occur inside the detector, focusing mainly on the appearance of carrier avalanches. We will describe mathematically how these avalanches are produced and how they evolve over time, taking into account the detector’s conditions. In doing so, we will also obtain expressions that can be used to perform Monte Carlo simulations of the system. We will also describe important effects in RPCs, such as the appearance of streamers or sparks, and how the detectors are built with these effects in mind. Then, starting from the expression of the current induced by the movement of carriers within the detector, we will derive analytical expressions for the efficiency and the time resolution of the detector. For both quantities, we will obtain results for realistic cases and compare them with those reported in publications to verify the validity of these expressions. Finally, we will apply the knowledge acquired to a particular case: the PICOSEC detectors, which have been under development in recent years. These detectors use RPC technology in a specific scenario where the electrons transported inside the detector are photoelectrons generated by Cherenkov radiation. This latter aspect causes certain changes in the study of the detector’s time resolution, which we will verify by obtaining the resolution for this type of detector. As with RPCs, we will validate the expression by applying it to real cases and comparing with results from publications.
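The exponential avalanche growth mentioned above sets the intrinsic timing scale. A minimal Monte Carlo sketch, assuming Furry (exponential) statistics for the early avalanche size — a standard simplification, not the full treatment of this work — reproduces the well-known estimate sigma_t ≈ 1.28/S for growth rate S:

```python
import numpy as np

# Under exponential growth n(t) = n0 * exp(S t), with S = (alpha - eta) * v_drift,
# the threshold-crossing time fluctuates only through the early avalanche size n0.
# For Furry (exponential) statistics of n0, sigma_t = (pi/sqrt(6))/S ~ 1.28/S.
# All numbers are illustrative, not a specific detector from the thesis.
rng = np.random.default_rng(1)

S = 10.0               # effective growth rate [1/ns], assumed
n_threshold = 1e7      # electrons at threshold, assumed
n_events = 100_000

n0 = rng.exponential(scale=1.0, size=n_events)   # early avalanche fluctuation
t_cross = np.log(n_threshold / n0) / S           # threshold-crossing times [ns]

sigma_t = np.std(t_cross)
expected = (np.pi / np.sqrt(6)) / S              # analytic ~1.28 / S
print(sigma_t, expected)
```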
Direction
GONZALEZ DIAZ, DIEGO (Tutorships)
Court
MIRAMONTES ANTAS, JOSE LUIS (Chairman)
SANTAMARINA RIOS, CIBRAN (Secretary)
Prieto Estévez, Gerardo (Member)
Mermin Inequalities on Quantum Computers
Authorship
A.R.B.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
This dissertation explores the use of Mermin inequalities, a generalization of Bell inequalities, with a twofold purpose: on the one hand, to experimentally test the nonlocality of quantum mechanics in systems with multiple qubits; and on the other, to assess the fidelity of current quantum computers based on their ability to generate multipartite quantum correlations. With this aim, two possible implementations are proposed in the form of quantum circuits, inspired by the bipartite experiments of Clauser, Horne, Shimony and Holt (CHSH), and Aspect. In addition, error mitigation methods are applied, along with an adaptive routine for generating the GHZ state, a highly entangled quantum state that maximizes the inequality violation. Lastly, the circuits are executed on both quantum circuit emulators and real quantum processing units (QPUs).
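For reference, the three-qubit Mermin operator and its GHZ expectation value can be evaluated directly with linear algebra (a classical check of the bound, not the quantum-circuit implementation of the thesis): local hidden-variable theories give |⟨M3⟩| ≤ 2, while the GHZ state reaches the quantum maximum of 4.

```python
import numpy as np

# Three-qubit Mermin operator M3 = XXX - XYY - YXY - YYX evaluated on the
# GHZ state (|000> + |111>)/sqrt(2); the quantum value is 4, vs. the
# classical (local hidden-variable) bound of 2.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

M3 = kron3(X, X, X) - kron3(X, Y, Y) - kron3(Y, X, Y) - kron3(Y, Y, X)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

mermin_value = np.real(ghz.conj() @ M3 @ ghz)
print(mermin_value)
```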
Direction
MAS SOLE, JAVIER (Tutorships)
Díaz Camacho, Guillermo (Co-tutorships)
Court
MIRAMONTES ANTAS, JOSE LUIS (Chairman)
SANTAMARINA RIOS, CIBRAN (Secretary)
Prieto Estévez, Gerardo (Member)
X-ray spectrometer based on passive detectors
Authorship
J.A.R.L.
Bachelor of Physics
Defense date
07.17.2025 09:30
Summary
The production of X-rays by ultra-short, high-intensity laser pulses is an emerging technique with promising applications in medicine and research. However, the spectral characterisation of this type of sources presents some challenges: the high intensity and short duration of the emitted radiation pulses limit the use of conventional active detectors due to phenomena such as pile-up. In this work we propose a spectroscopy system based on passive detectors, Image Plates (IPs), which are not affected by these limitations. For this purpose we have performed an absolute calibration of these detectors by means of the X-ray fluorescence of various materials. Finally, we have developed a reconstruction function that allows us to recover the original spectrum from measurements made with filters of different thicknesses interposed between the source and the IPs.
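The unfolding idea can be sketched as a small linear system: each filter thickness transmits a known fraction exp(-mu·d) of each spectral bin, so the readings behind the filter stack are m = T·s and the spectrum follows by inverting the transmission matrix. All numbers below are assumed toy values; the thesis' actual reconstruction function is not reproduced here.

```python
import numpy as np

# Toy filter-stack unfolding: rows = filter thicknesses, columns = spectral bins.
mu = np.array([8.0, 2.0, 0.5])          # attenuation coefficients [1/cm], assumed
d = np.array([[0.0], [0.05], [0.2]])    # filter thicknesses [cm], assumed
T = np.exp(-d * mu)                     # 3x3 transmission matrix

s_true = np.array([5.0, 3.0, 1.0])      # toy "true" spectrum
m = T @ s_true                          # simulated readings behind the filters

s_rec = np.linalg.solve(T, m)           # reconstructed spectrum
print(s_rec)
```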
Direction
ALEJO ALONSO, AARON JOSE (Tutorships)
Court
ACOSTA PLAZA, EVA MARIA (Chairman)
VIEITES DIAZ, MARIA (Secretary)
Wu , Bin (Member)
NEXT trace reconstruction using neural networks
Authorship
M.R.M.
Bachelor of Physics
Defense date
07.17.2025 09:30
Summary
The NEXT experiment aims to detect neutrinoless double beta decay, the observation of which could have important implications in particle physics. The NEXT-100 detector, a Time Projection Chamber (TPC) filled with high-pressure xenon gas and located at the Canfranc Underground Laboratory (LSC), is used for this purpose. This detector allows a three-dimensional reconstruction of the traces left by the particles. However, these reconstructed traces have a diffuse appearance due to instrumental effects. To correct for this, we propose the use of a convolutional neural network based on semantic segmentation, with the aim of reconstructing a trace as similar as possible to the original one, eliminating the distortions introduced by the detector. Several models were trained with different configurations, evaluating their performance throughout the process using the loss function and the Intersection over Union (IoU) metric. After training and validation, the model was applied to test data in order to generate predictions. Finally, a decision threshold was established to evaluate the results, using the IoU and accuracy metrics to determine the performance of the model and its ability to recover the original trace.
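The Intersection over Union metric mentioned above has a one-line definition on binary masks. A toy sketch (the arrays and threshold are illustrative, not NEXT data):

```python
import numpy as np

# IoU = |prediction AND target| / |prediction OR target|, computed on binary
# voxel masks after thresholding the network output.
pred_prob = np.array([0.9, 0.8, 0.3, 0.1, 0.7])   # network output, toy values
target = np.array([1, 1, 0, 0, 0], dtype=bool)    # "true" trace voxels, toy values

threshold = 0.5
pred = pred_prob > threshold

intersection = np.logical_and(pred, target).sum()
union = np.logical_or(pred, target).sum()
iou = intersection / union
print(iou)
```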
Direction
HERNANDO MORATA, JOSE ANGEL (Tutorships)
PEREZ MANEIRO, MARTIN (Co-tutorships)
Court
ACOSTA PLAZA, EVA MARIA (Chairman)
VIEITES DIAZ, MARIA (Secretary)
Wu , Bin (Member)
Atmospheric depth of muon production in neutrino induced showers at the Pierre Auger Observatory
Authorship
P.R.R.
Bachelor of Physics
Defense date
07.17.2025 09:30
Summary
Extreme astrophysical processes can produce ultra-high-energy particles that reach the Earth, including neutrinos. Since neutrinos are neutral particles, they are not deflected by cosmic magnetic fields during their journey and thus arrive pointing directly to their source. Due to their low interaction cross section, detecting these particles requires detectors with extremely large volumes. The Pierre Auger Observatory is capable of detecting the atmospheric showers produced by primary particles entering the atmosphere. Neutrinos, being able to traverse large portions of the atmosphere without interacting, can initiate showers at much greater depths than charged primary particles such as protons. Identifying an atmospheric shower that starts deep in the atmosphere would allow us to recognize it as a potential neutrino event. As the incidence angle increases, so does the amount of atmosphere a particle can travel through; therefore, our interest lies in studying these highly inclined events. In this work, we analyze whether the algorithm for reconstructing the Muon Production Depth (MPD) distribution works for events with a high inclination angle (greater than 60 degrees). We aim to determine whether the muon production depth can serve as a discriminant between neutrino-induced events and those caused by charged primary particles, and to apply MPD reconstruction to real events detected by the Pierre Auger Observatory that are candidates for being initiated by neutrinos.
Direction
CAZON BOADO, LORENZO (Tutorships)
Court
ACOSTA PLAZA, EVA MARIA (Chairman)
VIEITES DIAZ, MARIA (Secretary)
Wu , Bin (Member)
Artificial Intelligence and Molecular Simulation: Computational Tools for the Development of New Antibiotics
Authorship
L.R.D.L.F.
Double bachelor degree in Physics and Chemistry
Defense date
07.15.2025 09:00
Summary
Antibiotic resistance has become a global health crisis, with alarming projections estimating up to 10 million annual deaths by 2050 if effective therapeutic alternatives are not developed. Among these alternatives, antimicrobial peptides (AMPs) stand out due to their ability to selectively interact with bacterial membranes, thanks to their positive charge and amphipathic nature. However, their structural study presents unique challenges, as many AMPs only adopt their functional conformation (typically Alpha-helical) upon contact with lipid environments, remaining unstructured in aqueous solution. In this study, we employed atomistic molecular dynamics to characterize the structural stability of the peptide Lasioglossin III (LL-III) in a model lipid bilayer (POPC:POPG 80:20), which simulates the anionic composition of bacterial membranes. Using 200 ns simulations and independent replicas, we compared surface-adsorbed and pre-inserted (partially membrane-embedded) configurations. The results show that membrane insertion promotes the preservation of the helical structure, while surface adsorption induces conformational disorder. Additionally, a strong dependence on the initial peptide orientation was observed in its structural evolution. These findings demonstrate that molecular dynamics can capture membrane-induced folding, overcoming limitations of methods like AlphaFold (trained on globular proteins in aqueous solution). Future work could integrate these data into machine learning models to design new antimicrobials.
Direction
GARCIA FANDIÑO, REBECA (Tutorships)
PIÑEIRO GUILLEN, ANGEL (Co-tutorships)
Court
SAA RODRIGUEZ, CARLOS EUGENIO (Chairman)
SOUSA PEDRARES, ANTONIO (Secretary)
GARCIA DEIBE, ANA MARIA (Member)
Peptide folding at interfaces between media of different polarity using quantum computing
Authorship
L.R.D.L.F.
Double bachelor degree in Physics and Chemistry
Defense date
07.17.2025 09:30
Summary
In this work, we analyze the potential of quantum computing as an innovative tool for studying the three-dimensional folding of antimicrobial peptides (AMPs) at interfaces between media of different polarities, simulating the surface of a cellular membrane. For this purpose, we use the Variational Quantum Eigensolver (VQE) algorithm alongside the hybrid quantum-classical computing system QMIO at CESGA. The project is framed within the urgent health crisis caused by bacterial resistance, emphasizing the importance of understanding the functional structure of AMPs at an atomic level to develop more effective therapeutic molecules. Methodologically, we employ a coarse-grained model based on the modified Miyazawa-Jernigan potential to capture polar gradient effects and phase transitions, combined with parameterized quantum circuits and hybrid optimization techniques. Three prototype sequences (hydrophobic P1, charged P2, and amphipathic P3) are evaluated to compare how peptide orientation and distance from the interface, the polarity gradient between the two media, and the peptide residue distribution affect folding. Results obtained using ideal quantum simulators show conformations consistent with the specific physicochemical characteristics of each sequence: P1 adopts extended configurations in nonpolar media, P2 orients toward polar phases, while P3 forms internal hydrophobic cores with selective exposure of polar residues, suggesting preferential stabilization at the interface. On the other hand, using QMIO reveals the need to mitigate quantum noise to obtain realistic results. Our study demonstrates the potential of integrating quantum and classical methods to overcome current limitations in molecular dynamics, accelerating the search for stable conformations and facilitating the rational design of AMPs. Several key challenges remain, such as improving error correction and mitigation strategies, developing more efficient ansätze for VQE circuits, and optimizing the scalability of quantum hardware.
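The variational principle behind VQE can be illustrated classically: a parameterized trial state is optimized to minimize the expectation value of a Hamiltonian. The sketch below uses an assumed single-qubit Hamiltonian H = Z + 0.5 X and a one-parameter ansatz, not the peptide-folding Hamiltonian of this work:

```python
import numpy as np

# Toy variational eigensolver (classical simulation, not the QMIO runs):
# trial state |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1> gives
# <Z> = cos(theta) and <X> = sin(theta), so for H = Z + 0.5 X the energy is
# E(theta) = cos(theta) + 0.5 sin(theta), minimized over a parameter grid.
thetas = np.linspace(0, 2 * np.pi, 10_001)
energies = np.cos(thetas) + 0.5 * np.sin(thetas)

e_min = energies.min()
exact = -np.sqrt(1 + 0.25)        # exact ground energy of this H
print(e_min, exact)
```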
Direction
PIÑEIRO GUILLEN, ANGEL (Tutorships)
CONDE TORRES, DANIEL (Co-tutorships)
Court
ACOSTA PLAZA, EVA MARIA (Chairman)
VIEITES DIAZ, MARIA (Secretary)
Wu , Bin (Member)
Searching for Long Lived Particles in Dark Sectors using Machine Learning algorithms
Authorship
A.R.G.
Bachelor of Physics
Defense date
07.17.2025 09:30
Summary
The Standard Model (SM) is an extremely precise theory for describing the fundamental interactions between particles. However, we know it is incomplete, as it cannot accommodate gravity, explain the origin of dark matter and dark energy, or account for the matter-antimatter asymmetry of the Universe, among other issues that motivate the search for new physics (NP) models, or physics beyond the SM (BSM). In this context, Long-Lived Particles (LLPs) emerge as a natural solution, as they can go undetected in experiments due to the geometric limitations of detectors. Certain NP models incorporate ''dark sectors'' and predict the existence of LLPs weakly coupled to the SM. This work will study the decay of a dark hadron, pi_v to Ks0 (to pi+ pi-) K+ pi-, with a mass of 2.5 GeV/c2, as proposed by one of these models, using real data from LHCb and an XGBoost algorithm to discriminate the background from the signal of interest. LHCb is a suitable detector for this search thanks to its excellent resolution in reconstructing displaced vertices, its particle identification capabilities, and its lower luminosity compared to ATLAS and CMS, which facilitates background treatment at low mass.
Direction
CID VIDAL, XABIER (Tutorships)
Rodríguez Fernández, Emilio Xosé (Co-tutorships)
Court
ACOSTA PLAZA, EVA MARIA (Chairman)
VIEITES DIAZ, MARIA (Secretary)
Wu , Bin (Member)
Analysis of the semileptonic decay B to tau nu at the LHCb experiment
Authorship
L.R.L.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
The decay B to tau nu is conceptually straightforward to understand from a theoretical standpoint, since it has a contribution from a tree-level diagram. The branching ratio of this decay is well known in the Standard Model, and it is highly sensitive to effects from potential new particles that could mediate the interaction. However, from an experimental standpoint, the decay is challenging since the most favourable final state, in which the tau lepton has decayed into three pions and a neutrino, contains two undetected neutrinos. The reconstruction is especially complex in experiments at hadron colliders, like the LHCb, where neutrinos are not directly detected. The main goal of this thesis is to develop a set of kinematic strategies to reconstruct the decay chain B to tau nu and tau to pi pi pi nu. The effect of the reconstruction will also be studied in a similar channel Ds to tau nu and tau to pi pi pi nu with the aim of separating both channels in a future data analysis. We seek to improve either the experimental precision of the branching ratio, or the precision of the corresponding CKM matrix element. A deviation of the branching fraction from the Standard Model prediction could serve as an indicator of new physics.
Direction
VIEITES DIAZ, MARIA (Tutorships)
Brea Rodríguez, Alexandre (Co-tutorships)
Court
RUSO VEIRAS, JUAN MANUEL (Chairman)
CAZON BOADO, LORENZO (Secretary)
AYYAD LIMONGE, FRANCESC YASSID (Member)
Optimization of a two-photon polymerization system for surface modification of biocompatible materials
Authorship
M.R.L.
Bachelor of Physics
Defense date
07.16.2025 09:00
Summary
In this work, a two-photon polymerization (2PP) system was optimized for the fabrication of biocompatible microstructures, with the final goal of applying them to fibroblast cell culture (connective tissue cells). This technique is based on a nonlinear optical process involving the simultaneous absorption of two photons and enables the fabrication of structures in photosensitive materials with a resolution far superior to that of conventional techniques. This, combined with its high versatility, makes it a promising tool in fields such as biomedicine and optics. The SZ2080 resin was used due to its high biocompatibility, stability and photosensitivity. Networks were fabricated by combining different scanning speeds and laser powers in order to study the effect of these parameters on the thickness and height of the polymerized structures. Characterization was performed using optical, confocal and scanning electron microscopy (FESEM), allowing for a precise analysis of the structures’ dimensions and integrity. The viability of the system was then assessed by culturing human fibroblasts on the fabricated networks. In conclusion, key technical knowledge about the 2PP technique was acquired and several areas for improvement were identified, both in the fabrication process (including substrate treatment, processing and development) and in the cell culture protocols.
Direction
Gómez Varela, Ana Isabel (Tutorships)
BAO VARELA, Mª CARMEN (Co-tutorships)
Court
MIRAMONTES ANTAS, JOSE LUIS (Chairman)
SANTAMARINA RIOS, CIBRAN (Secretary)
Prieto Estévez, Gerardo (Member)
Manifolds with a warped product structure
Authorship
A.R.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.16.2025 12:15
Summary
Pseudo-Riemannian manifolds with a local product structure have played a fundamental role in addressing a wide range of geometric and physical problems. In particular, such manifolds provide one of the most commonly used geometric frameworks for constructing Einstein Riemannian metrics and constitute the underlying structure of most relativistic spacetimes. The aim of this work is to conduct a systematic study of manifolds with a local warped product structure, focusing on their curvature properties.
Direction
GARCIA RIO, EDUARDO (Tutorships)
CAEIRO OLIVEIRA, SANDRO (Co-tutorships)
Court
GARCIA RODICIO, ANTONIO (Chairman)
CAO LABORA, DANIEL (Secretary)
Gómez Tato, Antonio M. (Member)
Propagation of Ultrahigh-energy cosmic rays in the Universe
Authorship
A.R.S.
Double bachelor degree in Mathematics and Physics
Defense date
07.17.2025 09:30
Summary
Ultrahigh-energy cosmic rays (UHECRs) are the most energetic particles detected to date in the Universe, reaching energies around the exaelectronvolt (EeV). Their study is fundamental to explore physics beyond the Standard Model, analyze the structure of the Galactic magnetic field, and address relevant questions in the field of cosmology. Currently, there are multiple lines of research focused on understanding these events, and observatories such as the Pierre Auger Observatory and the Telescope Array are committed to detecting this type of extragalactic particles. This work studies the propagation of these cosmic rays within the Galaxy, aiming to analyze how the Galactic magnetic field influences such propagation and how the sources from which they originate could be identified. To this end, the astrophysical simulation framework CRPropa is used, which allows simulations to be performed through its Python interface. In this way, simulations of the backtracking of UHECRs detected by the Pierre Auger Observatory are carried out, obtaining both their arrival direction at the Galactic boundary and the deflection experienced during their trajectory. The study focuses on performing simulations by varying the components of the magnetic field used within the JF12 model [1, 2], considering both the presence of small-scale random turbulence and its absence, as well as the type of particle being propagated.
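The backtracking idea can be illustrated with a minimal toy integrator (plain NumPy, not CRPropa itself, and a uniform field rather than the JF12 model): a detected particle is propagated with reversed velocity and flipped charge, and its deflection over a path length L in a field of gyroradius r_g is L/r_g radians.

```python
import numpy as np

def propagate(direction, B_hat, L, r_g, charge=+1, n_steps=20000):
    """Toy propagation of an ultrarelativistic particle with gyroradius r_g
    in a uniform magnetic field: d(dir)/ds = (charge / r_g) dir x B_hat.
    Backtracking a detected cosmic ray amounts to calling this with the
    reversed arrival direction and the opposite charge."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    b = np.asarray(B_hat, float)
    b /= np.linalg.norm(b)
    ds = L / n_steps
    pos = np.zeros(3)
    for _ in range(n_steps):
        d = d + (charge * ds / r_g) * np.cross(d, b)
        d /= np.linalg.norm(d)      # keep the direction a unit vector
        pos = pos + d * ds
    return pos, d

# deflection over a path L perpendicular to the field is L / r_g radians
_, d_out = propagate([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], L=0.5, r_g=1.0)
angle = float(np.arccos(np.clip(np.dot(d_out, [1.0, 0.0, 0.0]), -1.0, 1.0)))
print(angle)    # close to 0.5 rad
```

In the actual study the uniform field is replaced by the regular and turbulent JF12 components, and the gyroradius follows from the particle's rigidity.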
Direction
ALVAREZ MUÑIZ, JAIME (Tutorships)
Court
ACOSTA PLAZA, EVA MARIA (Chairman)
VIEITES DIAZ, MARIA (Secretary)
Wu , Bin (Member)
Optical properties of gaseous electron multipliers
Authorship
D.R.T.
Bachelor of Physics
Defense date
07.18.2025 16:00
Summary
The objective of this work is to characterize the diffractive properties and optical response of GEM (Gas Electron Multiplier) type structures in a gas avalanche environment, with the ultimate goal of understanding the optimal design of these structures for use in a time-projection chamber for neutrino physics, such as in the DUNE experiment.
Direction
GONZALEZ DIAZ, DIEGO (Tutorships)
Court
ACOSTA PLAZA, EVA MARIA (Chairman)
VIEITES DIAZ, MARIA (Secretary)
Wu , Bin (Member)
Undergraduate dissertation
Authorship
H.R.F.
Bachelor of Physics
Defense date
07.18.2025 16:00
Summary
This work focuses on high-resolution spectroscopy of exoplanetary atmospheres. In this case we studied the warm super-Neptune/sub-Saturn WASP-107b, discovered in 2017 by the WASP project, which is one of the lowest-density exoplanets known to date. The main objective of this project is to study the presence (or absence) of four molecules in this planet’s atmosphere: H2O, CO, CO2 and CH4. For this study we used three simulations of the atmosphere of this exoplanet with different noise levels, based on real observations made with the CRIRES+ instrument at the Very Large Telescope (VLT) in Chile. This instrument is essential due to its high resolving power and its ability to operate in the infrared range, which is crucial for detecting the mentioned molecules, as they exhibit strong transitions in this region. Two methods were used for data processing. First, the data were processed through the CRIRES+ reduction pipeline. After this, two different data reduction algorithms were used: SYSREM and PCA. Both methods remove the spectral lines of the host star (WASP-107) and of the Earth’s atmosphere. Finally, the cross-correlation technique was used to compare the planetary spectrum with models of the four molecules we are looking to detect. We first examine the results of SYSREM with cross-correlation, which reveal the presence of only water in our simulations of WASP-107b’s atmosphere. For this reason, the subsequent study with PCA and cross-correlation was performed only for this molecule, and its results confirmed those obtained with the previous method.
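The detrending-plus-cross-correlation strategy can be sketched on synthetic data. The toy below (an illustration under stated assumptions, not the thesis pipeline) removes the leading SVD component, playing the role of SYSREM/PCA, from a simulated spectral time series and recovers a weak, Doppler-drifting line by cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_w = 24, 400                                    # exposures x channels

base = 1.0 + 0.3 * np.sin(np.linspace(0, 8 * np.pi, n_w))   # "star + tellurics"
amps = 1.0 + 0.1 * rng.standard_normal(n_t)                  # airmass-like variation
x = np.arange(n_w)
template = np.exp(-0.5 * ((x - n_w // 2) / 3.0) ** 2)        # planetary line profile

# the planet line drifts in wavelength from exposure to exposure
inject = np.linspace(190, 210, n_t).round().astype(int)
data = np.outer(amps, base)
for i, c in enumerate(inject):
    data[i] += 0.02 * np.roll(template, c - n_w // 2)
data += 0.002 * rng.standard_normal((n_t, n_w))

# PCA/SYSREM-style detrending: remove the leading singular component,
# which carries the time-constant stellar and telluric spectrum
U, s, Vt = np.linalg.svd(data, full_matrices=False)
residual = data - (U[:, :1] * s[:1]) @ Vt[:1]

# per-exposure cross-correlation of the residual against the template
recovered = []
for row in residual:
    ccf = np.correlate(row, template, mode="full")
    recovered.append(n_w // 2 + int(np.argmax(ccf)) - (n_w - 1))
err = float(np.median(np.abs(np.array(recovered) - inject)))
print(err)   # small: the drifting line survives the detrending
```

The drift is essential: a line that is static in time would be absorbed by the detrending along with the stellar spectrum, which is why real analyses exploit the planet's changing radial velocity.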
Direction
ALVAREZ POL, HECTOR (Tutorships)
Morello , Giuseppe (Co-tutorships)
Court
ACOSTA PLAZA, EVA MARIA (Chairman)
VIEITES DIAZ, MARIA (Secretary)
Wu , Bin (Member)
Search for the B+ -> gamma pi+ pi- pi+ decay with the LHCb detector
Authorship
J.P.S.T.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
This report is focused on the study of two decay channels of B+ mesons into final states containing a high-energy photon and three hadrons (two charged pions and a charged kaon, or three charged pions). The main purpose is to compare both channels and determine the presence of the latter in LHCb data. To achieve this goal, both data samples obtained during LHCb’s Run II and Monte Carlo simulations are used. The selection efficiency is evaluated by fitting the invariant mass distribution of the selected events, comparing the characteristics of the selected data in both decay channels and performing a likelihood-ratio test on the second one. From a technical perspective, Python-ROOT notions, neural network analysis techniques, and data visualization and fitting modules were employed.
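A minimal version of the likelihood-ratio test mentioned above can be run on a toy invariant-mass sample (flat background plus a Gaussian peak; all numbers here are illustrative, not taken from the analysis):

```python
import numpy as np

rng = np.random.default_rng(2)

# toy "invariant mass" sample on [0, 1]: flat background + Gaussian signal peak
mass = np.concatenate([rng.uniform(0.0, 1.0, 1800),
                       rng.normal(0.5, 0.02, 200)])

# per-event signal density (shape assumed known); background density is 1
sig = np.exp(-0.5 * ((mass - 0.5) / 0.02) ** 2) / (0.02 * np.sqrt(2 * np.pi))

def loglike(f):
    """mixture log-likelihood as a function of the signal fraction f"""
    return float(np.sum(np.log(f * sig + (1.0 - f))))

grid = np.linspace(0.0, 0.5, 501)
lls = np.array([loglike(f) for f in grid])
f_hat = float(grid[np.argmax(lls)])
q = 2.0 * (lls.max() - loglike(0.0))        # likelihood-ratio statistic
print(round(f_hat, 3), round(np.sqrt(q), 1))  # f_hat near 0.1, large significance
```

By Wilks' theorem the square root of q approximates the Gaussian significance of the signal-plus-background hypothesis over background only.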
Direction
SANTAMARINA RIOS, CIBRAN (Tutorships)
VIEITES DIAZ, MARIA (Co-tutorships)
Court
RUSO VEIRAS, JUAN MANUEL (Chairman)
CAZON BOADO, LORENZO (Secretary)
AYYAD LIMONGE, FRANCESC YASSID (Member)
Study of a rare 40K decay: implications in geochronology
Authorship
L.S.M.
Bachelor of Physics
Defense date
07.18.2025 16:00
Summary
This thesis analyzes a rare nuclear decay of 40K: the direct electron capture to the ground state of 40Ar, a third-forbidden nuclear transition. Although theoretically predicted, this decay was not experimentally observed until 2023, thanks to the KDK (Potassium Decay) experiment. Its detection represents a milestone in nuclear physics, as it is the only observed third-forbidden electron capture to date. The study focuses on the physical principles of the process, the experimental methodology used by KDK, and the implications of this discovery across several fields. First, its impact on dark matter detection experiments, such as DAMA/LIBRA, is examined, since the Auger electrons and X-rays generated by this decay may mimic dark matter signals. Second, the connection with neutrinoless double beta decay is explored, providing valuable data to refine nuclear models that predict its half-life. Finally, the consequences for geochronology are analyzed, as the newly measured decay branch alters the age calculation in the 40K/40Ar dating method. Including this new decay channel leads to a slight reduction in estimated geological ages, particularly for very old samples where high precision is required. A quantitative simulation is presented, showing the effect of this correction as a function of the 40K/40Ar isotopic ratio. Overall, this thesis demonstrates how an experimental finding in nuclear physics can have significant and far-reaching consequences, from the search for new physics to improved accuracy in geological time scales.
Direction
FERNANDEZ DOMINGUEZ, BEATRIZ (Tutorships)
Court
ACOSTA PLAZA, EVA MARIA (Chairman)
VIEITES DIAZ, MARIA (Secretary)
Wu , Bin (Member)
Coupling constant and effective charges in Quantum Chromodynamics
Authorship
J.S.F.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
Quantum Chromodynamics (QCD) is the theory that describes the strong interaction in the Standard Model. This work reviews the fundamental concepts of QCD and, in particular, the strong coupling constant, alpha_s, whose dependence on the transferred momentum is essential to understanding the dynamics of quarks and gluons. At high transferred momentum, this constant can be calculated using perturbation theory, thanks to the property of asymptotic freedom. However, at low transferred momentum, perturbation theory breaks down, and non-perturbative methods become necessary. These approaches allow us to describe phenomena such as the generation of hadron mass and quark confinement, both of which are deeply connected to the infrared behavior of alpha_s. In addition, the concept of effective charge, an alternative way of defining the coupling constant from physical observables, is introduced and various relevant definitions in QCD are discussed. These allow the analysis of alpha_s to be extended beyond the perturbative regime, avoiding ambiguities related to the renormalization scheme. Finally, future perspectives in QCD research which may contribute to a deeper understanding of the strong interaction are discussed.
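At one loop, the running of alpha_s described above takes a closed form. A short sketch, taking alpha_s(M_Z) = 0.118 and n_f = 5 as assumed reference inputs:

```python
import math

def alpha_s(Q, alpha_ref=0.118, mu=91.19, n_f=5):
    """One-loop running coupling:
    alpha_s(Q) = alpha_ref / (1 + alpha_ref * b0 * ln(Q^2 / mu^2)),
    with b0 = (33 - 2 n_f) / (12 pi); Q and mu in GeV.
    The reference value alpha_s(M_Z) ~= 0.118 is an assumed input."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return alpha_ref / (1 + alpha_ref * b0 * math.log(Q**2 / mu**2))

for Q in (10.0, 91.19, 1000.0):
    print(Q, round(alpha_s(Q), 4))   # the coupling decreases with Q
```

The decrease of alpha_s with Q is asymptotic freedom; conversely, the one-loop formula blows up at low Q, which is precisely where the non-perturbative and effective-charge approaches discussed in the work take over.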
Direction
MERINO GAYOSO, CARLOS MIGUEL (Tutorships)
Court
RUSO VEIRAS, JUAN MANUEL (Chairman)
CAZON BOADO, LORENZO (Secretary)
AYYAD LIMONGE, FRANCESC YASSID (Member)
Ab initio study of semiconductor-semiconductor and semiconductor-metal heterostructures
Authorship
P.S.R.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
Heterostructures of different materials are fundamental to modern electronics, enabling the tuning of various electronic properties of devices used throughout cutting-edge technology. In this work we study, at the level of electronic structure, semiconductor-semiconductor and semiconductor-metal heterostructures based on semiconductors such as Si and Ge and metals such as Ca and Ag. To carry out the study we employ density functional theory (DFT) calculations, widely used in computational condensed matter physics, using the software WIEN2k, one of the most popular and important tools in scientific research in this field. We thus investigate the behavior of charge carriers and electronic states when different heterojunctions are formed, using a high-performance computational environment and accompanying the analysis with a qualitative study of the results.
Direction
PARDO CASTRO, VICTOR (Tutorships)
Court
RUSO VEIRAS, JUAN MANUEL (Chairman)
CAZON BOADO, LORENZO (Secretary)
AYYAD LIMONGE, FRANCESC YASSID (Member)
Characterization of the Quark-Gluon Plasma and its signatures
Authorship
G.S.R.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
The study of matter governed by the strong interaction at high temperatures and densities is a highly relevant topic, due to its production in heavy ion collisions at accelerators such as the Large Hadron Collider (LHC) at CERN, and because of its implications in cosmology and in the behavior of massive astrophysical objects such as neutron stars. This work will focus on the description of the Quark-Gluon Plasma (QGP), analyzing its phase transition between this state and hadronic matter, and determining the critical temperature at which this change occurs. In particular, the MIT bag model will be useful, as it allows us to obtain the values of the temperature at zero chemical potential and the baryon density at zero temperature at which deconfinement occurs.
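The bag-model estimate of the deconfinement temperature at zero chemical potential reduces to a one-line pressure balance. The sketch below assumes two massless quark flavours and a bag constant B^(1/4) = 200 MeV as illustrative inputs:

```python
import math

# Deconfinement temperature in the MIT bag model at zero chemical potential.
# For 2 massless flavours the QGP degeneracy is g = 16 + (7/8)*24 = 37, so the
# QGP pressure is p = (37 pi^2 / 90) T^4 - B. Setting p = 0 (neglecting the
# hadron-gas pressure, the simplest estimate) gives
#   T_c = (90 B / (37 pi^2)) ** 0.25
B_quarter = 0.200                 # bag constant B^(1/4) in GeV (assumed)
B = B_quarter ** 4
T_c = (90 * B / (37 * math.pi ** 2)) ** 0.25
print(round(T_c * 1000))          # ~141 MeV
```

Including a pion-gas pressure on the hadronic side shifts T_c only slightly, which is why this zero-pressure estimate is a common first pass at the critical temperature.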
Direction
GONZALEZ FERREIRO, ELENA (Tutorships)
Court
RUSO VEIRAS, JUAN MANUEL (Chairman)
CAZON BOADO, LORENZO (Secretary)
AYYAD LIMONGE, FRANCESC YASSID (Member)
Skyrmion approximation by instanton holonomy in a Yang-Mills theory
Authorship
C.S.H.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
The Skyrme model was proposed by the English mathematical physicist Tony Skyrme as an effective model for baryons, described by solitons of a field taking values in the Lie group SU(2) (Skyrmions). Later, Atiyah and Manton published an article in which they obtained these solitons from other soliton-type configurations of a Yang-Mills field in four Euclidean dimensions (instantons). In this work the relation between Skyrmions and instantons is reviewed, reproducing the results of Atiyah and Manton.
Direction
ADAM , CHRISTOPH (Tutorships)
García Martín-Caro, Alberto (Co-tutorships)
Court
RUSO VEIRAS, JUAN MANUEL (Chairman)
CAZON BOADO, LORENZO (Secretary)
AYYAD LIMONGE, FRANCESC YASSID (Member)
AI-powered neutron tomography for advanced materials inspection
Authorship
R.S.D.
Bachelor of Physics
Defense date
07.18.2025 16:00
Summary
This work explores the feasibility of using convolutional neural networks (CNNs) to solve the inverse problem in material characterization via neutron transmission. The objective is to determine the isotopic composition of a sample from its energy-dependent transmittance spectrum. To this end, a dual approach was developed. First, a Geant4 simulation was implemented to generate high-fidelity transmittance data. Second, an artificial intelligence model was built in Python using Keras and TensorFlow, trained on a large dataset of theoretical curves generated through the Beer-Lambert law and cross-section data from the EXFOR database. The results show that the AI model can predict the composition of isotopic mixtures with a very low mean absolute error under ideal simulation conditions (high collimation). The model's performance degradation when reducing collimation has been quantified, validating that the build-up effect is a significant source of systematic error. Furthermore, the model's robustness against energy uncertainty was analyzed, concluding that training with noisy data enhances generalization. The scalability study shows that high accuracy is maintained for multi-component mixtures, identifying that the method's main limitation is not the number of isotopes, but the overlap between their spectral signatures. This project lays a solid foundation, demonstrating the potential of combining Monte Carlo simulation and machine learning for the development of new non-destructive testing techniques through neutron tomography.
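The forward model underlying the training set, Beer-Lambert transmission through an isotopic mixture, can be sketched as follows (with synthetic cross sections standing in for the EXFOR data used in the thesis):

```python
import numpy as np

def transmittance(fractions, cross_sections, n_areal):
    """Beer-Lambert transmission through a mixed sample:
    T(E) = exp(-n * sum_i f_i * sigma_i(E)),
    with n the areal density in atoms/barn and sigma_i in barns."""
    sigma_mix = np.tensordot(fractions, cross_sections, axes=1)
    return np.exp(-n_areal * sigma_mix)

E = np.linspace(1.0, 100.0, 200)    # toy energy grid (eV)
# two synthetic isotopes: a smooth 1/v absorber and one with a resonance at 30 eV
sigma = np.vstack([5.0 / np.sqrt(E),
                   1.0 + 80.0 / (1.0 + ((E - 30.0) / 2.0) ** 2)])
T = transmittance(np.array([0.7, 0.3]), sigma, n_areal=0.05)
print(float(E[np.argmin(T)]))       # deepest dip sits near the 30 eV resonance
```

The inverse problem tackled by the CNN is recovering the fractions from T(E); the resonance dips are the spectral signatures whose overlap, as the summary notes, ultimately limits how many isotopes can be disentangled.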
Direction
AYYAD LIMONGE, FRANCESC YASSID (Tutorships)
Court
ACOSTA PLAZA, EVA MARIA (Chairman)
VIEITES DIAZ, MARIA (Secretary)
Wu , Bin (Member)
Using GNNs to enhance the reach of LHCb in the search for long-lived particles
Authorship
R.S.R.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
This Bachelor’s Thesis studies the reconstruction of charged particle trajectories in the LHCb experiment using graph neural networks (GNNs). The reconstruction efficiencies obtained for trajectories with hits in the vertex locator (VELO) are high, comparable to those achieved by the LHCb software project known as Allen. The second stage of this Bachelor’s Thesis consists of applying the knowledge acquired to the reconstruction of T-tracks, which only have hits in the SciFi detector, located several meters away from the VELO. This presents additional challenges, as it involves a problem with higher multiplicity, that is, a higher density of trajectories to be reconstructed. Consequently, the efficiencies achieved are lower. Nevertheless, it is an interesting exercise for two main reasons: its novel nature, with results never obtained before, and the potential importance of T-tracks, which are usually excluded from analyses, in extending the capability of the LHCb experiment to search for long-lived particles, both within and beyond the Standard Model of Particle Physics.
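The first step in GNN-based tracking is turning detector hits into a graph: nodes are hits and candidate edges connect hits on consecutive planes that are geometrically compatible; the network then classifies edges as true or false track segments. A toy sketch of that graph-building step, with an invented four-plane geometry and cuts that are purely illustrative (not the LHCb VELO or SciFi layout):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy detector: straight-line tracks leaving smeared hits on 4 planes along z.
n_tracks = 5
slopes = rng.uniform(-0.1, 0.1, size=(n_tracks, 2))   # track slopes (tx, ty)
z_layers = np.array([100.0, 200.0, 300.0, 400.0])

hits = []                                             # rows: x, y, layer index
for t in range(n_tracks):
    for il, z in enumerate(z_layers):
        x, y = slopes[t] * z + rng.normal(0, 0.1, 2)  # smeared straight line
        hits.append((x, y, il))
hits = np.array(hits)

def build_edges(hits, max_slope=0.15):
    """Connect hits on consecutive layers whose implied slope is small.

    Returns an array of (i, j) node-index pairs: the graph fed to the GNN.
    """
    edges = []
    for il in range(len(z_layers) - 1):
        src = np.where(hits[:, 2] == il)[0]
        dst = np.where(hits[:, 2] == il + 1)[0]
        dz = z_layers[il + 1] - z_layers[il]
        for i in src:
            for j in dst:
                dx, dy = hits[j, :2] - hits[i, :2]
                if np.hypot(dx, dy) / dz < max_slope:
                    edges.append((i, j))
    return np.array(edges)

edges = build_edges(hits)
```

The edge list, together with per-node features (the hit coordinates), is what a message-passing network consumes; thresholded edge scores are finally linked into track candidates.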
Direction
CID VIDAL, XABIER (Tutorships)
FERNANDEZ GOMEZ, MIGUEL (Co-tutorships)
Court
RUSO VEIRAS, JUAN MANUEL (Chairman)
CAZON BOADO, LORENZO (Secretary)
AYYAD LIMONGE, FRANCESC YASSID (Member)
Jaynes-Cummings model generalizations.
Authorship
R.S.L.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
The Jaynes-Cummings model is a simple tool for describing the interaction between a two-level (Rydberg) atom and the quantised electromagnetic field. In its most elementary version the model is exactly solvable. It has been used both to describe cavity quantum electrodynamics and to test fundamental aspects of quantum mechanics. This thesis revolves around generalisations of this model, specifically the one in which the coupling between the atom and the radiation is a time-dependent function, allowing the study of dynamical effects and the exploration of new ways of controlling the light-matter interaction. The main motivation for studying this generalisation is its application in fields such as superconducting circuits and optical cavities, where the coupling modulation can be implemented experimentally. Additionally, the time dependence introduces interesting phenomena, such as controlled entanglement generation and quantum state engineering.
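For reference, the model in question can be written down explicitly. In the rotating-wave approximation the Jaynes-Cummings Hamiltonian for a cavity mode of frequency ω_c and an atomic transition of frequency ω_a reads

```latex
H \;=\; \hbar\omega_c\, a^\dagger a
  \;+\; \frac{\hbar\omega_a}{2}\,\sigma_z
  \;+\; \hbar g\left(a^\dagger \sigma_- + a\,\sigma_+\right),
```

where a, a† are the field operators and σ± , σ_z the atomic (Pauli) operators. The generalisation discussed above promotes the constant coupling to a function of time, g → g(t), which makes the dynamics no longer exactly solvable in general and opens the control possibilities described in the summary.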
Direction
Vázquez Ramallo, Alfonso (Tutorships)
Court
RUSO VEIRAS, JUAN MANUEL (Chairman)
CAZON BOADO, LORENZO (Secretary)
AYYAD LIMONGE, FRANCESC YASSID (Member)
Mathematical modeling of the vestibular system: correlations between real and simulated membranes using differential geometry
Authorship
U.T.B.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
Benign Paroxysmal Positional Vertigo (BPPV) is caused by the displacement of otoconia in the vestibular system and, although the Epley manoeuvre is the standard treatment, it fails in 12.5% of cases due to individual anatomical differences. To address this, a personalized approach was proposed, based on the mathematical reconstruction of the membranous labyrinth, which is difficult to visualize directly because of its size and fragility. The process relies on the bony labyrinth as the anatomical reference and on centerlines to describe the geometry of the semicircular canals, adding a modified moving trihedron that improves numerical stability compared to the traditional Frenet-Serret frame. This allows the complex geometry of the inner ear to be represented with greater accuracy, avoiding possible errors derived from global coordinates. The analysis of the five ears shows better consistency in the angular correlations between vectors of the moving trihedron than between their magnitudes. Moreover, the discrepancies between the simulated and real structures are concentrated in localized regions, while overall coherence in the orientation of the membranes is preserved. The results validate the use of differential geometry techniques to adapt BPPV treatment to each patient, allowing medical staff to plan the therapeutic maneuver more precisely. This personalized approach could significantly reduce the failures of the conventional treatment and increase its efficacy, representing a promising improvement in the management of BPPV.
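The thesis's "modified moving trihedron" is not specified in the summary; one common numerically stable alternative to the Frenet-Serret frame is the rotation-minimizing (parallel-transport) frame, which never degenerates where curvature vanishes. A sketch on a helical centerline standing in for a semicircular-canal centerline:

```python
import numpy as np

# Helical centerline as a crude stand-in for a canal centerline.
t = np.linspace(0, 2 * np.pi, 400)
curve = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)

def parallel_transport_frames(points):
    """Rotation-minimizing frames along a polyline.

    Unlike the Frenet trihedron, the normal is never defined from the
    (possibly vanishing) curvature: each normal is the previous one
    rotated by the rotation that maps tangent_i onto tangent_{i+1}.
    """
    tangents = np.gradient(points, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    normals = np.empty_like(tangents)
    # Seed normal: any unit vector orthogonal to the first tangent.
    seed = np.array([0.0, 0.0, 1.0])
    n0 = seed - np.dot(seed, tangents[0]) * tangents[0]
    normals[0] = n0 / np.linalg.norm(n0)

    for i in range(len(points) - 1):
        axis = np.cross(tangents[i], tangents[i + 1])
        s = np.linalg.norm(axis)
        if s < 1e-12:                       # tangents parallel: just copy
            normals[i + 1] = normals[i]
            continue
        axis /= s
        c = np.clip(np.dot(tangents[i], tangents[i + 1]), -1.0, 1.0)
        ang = np.arctan2(s, c)
        n = normals[i]
        # Rodrigues rotation of the normal about `axis` by `ang`.
        normals[i + 1] = (n * np.cos(ang) + np.cross(axis, n) * np.sin(ang)
                          + axis * np.dot(axis, n) * (1 - np.cos(ang)))
    binormals = np.cross(tangents, normals)
    return tangents, normals, binormals

T_, N_, B_ = parallel_transport_frames(curve)
```

Comparing the angular evolution of such frame vectors between a real and a simulated centerline is the kind of correlation the summary describes.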
Direction
Pérez Muñuzuri, Alberto (Tutorships)
SAAVEDRA LOPEZ, MARTIN (Co-tutorships)
Court
RUSO VEIRAS, JUAN MANUEL (Chairman)
CAZON BOADO, LORENZO (Secretary)
AYYAD LIMONGE, FRANCESC YASSID (Member)
Study of the unbound lithium-10 nucleus with ACTAR TPC
Authorship
D.V.L.
Bachelor of Physics
Defense date
07.17.2025 09:00
Summary
This work focuses on the study of the ground-state wave function of 11Li and the structure of the 10Li nucleus through the transfer reaction 11Li(d,t)10Li. This reaction makes it possible to determine the different proportions of the 2s1/2 and 1p1/2 orbitals present in the ground state of 11Li and to measure the spectroscopy of 10Li. The ACTAR TPC detector was used, which allows for highly precise tracking of particles thanks to its operation in inverse kinematics and the use of an active gas that serves as both target and detection medium. In addition, walls of silicon detectors are available to measure the residual energy of the outgoing particles, which is key to reconstructing the excitation energy of 10Li. The simulation developed, based on Monte Carlo methods, took into account effects such as straggling, the energy resolution of the detectors, and the angular resolution. The results obtained show that the angular resolution is the main source of uncertainty, limiting the ability to clearly separate the 2s1/2 and 1p1/2 states. Improving this precision would enable a more detailed characterization of the resonant states of 10Li and, by extension, of the halo of 11Li. Prospects for improving statistics are also seen through track reconstruction with the ACTAR TPC detector. This study contributes to the improvement of phenomenological models that describe exotic nuclei near the dripline, which are essential for understanding the nucleosynthesis processes of heavy elements in the universe.
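The conclusion that angular resolution dominates the uncertainty can be illustrated with a toy Monte Carlo: two nearby excitation-energy peaks are smeared by a Gaussian angular resolution propagated through an assumed angle-to-energy sensitivity. All numbers here (state energies, widths, sensitivity) are illustrative, not the thesis's values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy resonances standing in for the 2s1/2 and 1p1/2 states of 10Li.
n_events = 20000
e_s, e_p = 0.2, 0.6                      # MeV, made-up state energies
true_E = np.where(rng.random(n_events) < 0.5,
                  rng.normal(e_s, 0.05, n_events),
                  rng.normal(e_p, 0.05, n_events))

def smear(E, sigma_theta_deg, dE_dtheta=0.05):
    """Propagate a Gaussian angular resolution into excitation energy.

    dE_dtheta (MeV/deg) plays the role of the kinematic sensitivity of the
    reconstructed excitation energy to the scattering angle (assumed value).
    """
    return E + rng.normal(0.0, sigma_theta_deg * dE_dtheta, size=E.shape)

good = smear(true_E, sigma_theta_deg=1.0)   # adds 0.05 MeV smearing
poor = smear(true_E, sigma_theta_deg=6.0)   # adds 0.30 MeV smearing
```

With the finer angular resolution the spectrum stays clearly bimodal (0.4 MeV separation versus ~0.07 MeV total width); with the coarse one the two peaks merge into a single broad bump, mirroring why the states cannot be cleanly separated.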
Direction
FERNANDEZ DOMINGUEZ, BEATRIZ (Tutorships)
LOZANO GONZALEZ, MIGUEL (Co-tutorships)
Court
RUSO VEIRAS, JUAN MANUEL (Chairman)
CAZON BOADO, LORENZO (Secretary)
AYYAD LIMONGE, FRANCESC YASSID (Member)