Foundational AI research
We work on foundational Generative AI models, focusing on GAN architectures and diffusion models.
- In the case of GANs, we analyze the interaction between the generator and the discriminator in order to adapt these models to specific use cases. This understanding lets us optimize their performance and explore new forms of interaction that could go beyond what current GANs achieve. The resulting architectures are applied, for instance, to the virtual reconstruction of buildings from images of ruins, improving on the results of models such as pix2pix (a minimal sketch of the adversarial training loop follows this list).
- We have developed a diffusion model that generates images from text prompts with more detailed descriptions than models such as Stable Diffusion support. It relies on the diffusion technique: a simple distribution, such as a Gaussian, is gradually transformed into the more complex distribution that represents the images, extracting patterns from them along the way. This makes it possible to generate new images for a specific domain, with precise control over the quality and diversity of the outputs, while requiring less computational power than general-purpose models (see the sketch of the core computations after this list).
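The following sketch illustrates the generator/discriminator interaction referred to above, using the standard non-saturating GAN objective in PyTorch. The network definitions, tensor shapes, and hyperparameters are illustrative placeholders, not the architectures used in our reconstruction work.

```python
# Minimal sketch of adversarial training: the discriminator learns to tell
# real images from generated ones, and the generator learns to fool it.
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 28 * 28  # illustrative sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    z = torch.randn(batch, latent_dim)

    # Discriminator step: push real images toward label 1, fakes toward 0.
    fake = generator(z).detach()
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: make the discriminator label generated images as real.
    g_loss = bce(discriminator(generator(z)), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```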
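The sketch below shows the core DDPM-style computations behind the diffusion description: the forward process corrupts an image toward a Gaussian, and the model is trained to predict the injected noise so the reverse process can turn Gaussian samples back into images. The `denoiser` stands in for a text-conditioned network; its signature, the schedule, and all sizes are assumptions for illustration, not our model.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # cumulative alpha_bar_t

def forward_noising(x0: torch.Tensor, t: torch.Tensor):
    """q(x_t | x_0): blend the clean image with Gaussian noise at step t."""
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return x_t, noise

def training_loss(denoiser, x0: torch.Tensor, text_emb: torch.Tensor):
    """Train the denoiser to recover the noise, conditioned on the prompt."""
    t = torch.randint(0, T, (x0.size(0),))
    x_t, noise = forward_noising(x0, t)
    predicted = denoiser(x_t, t, text_emb)  # hypothetical conditioning API
    return F.mse_loss(predicted, noise)
```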
Another line of research develops new NLP models that are competitive with current architectures. The work focuses on designing new architectures, generally based on BERT, that combine layers, loss functions, and optimization techniques into a model aimed at text analysis for image generation (Text2Image); a minimal sketch of such a pipeline is shown below. A parallel line of work explores Vision Transformers.
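As a rough illustration of a BERT-based Text2Image pipeline, the sketch below encodes a prompt with a pretrained BERT model and projects the pooled text representation into an image tensor. The projection head, its dimensions, and the example prompt are hypothetical placeholders for the architectures under research.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
text_encoder = BertModel.from_pretrained("bert-base-uncased")

class Text2ImageHead(nn.Module):
    """Maps a BERT text representation to a small flattened image."""
    def __init__(self, text_dim: int = 768, img_dim: int = 64 * 64 * 3):
        super().__init__()
        self.project = nn.Sequential(
            nn.Linear(text_dim, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Tanh(),
        )

    def forward(self, text_features: torch.Tensor) -> torch.Tensor:
        return self.project(text_features).view(-1, 3, 64, 64)

prompt = ["a reconstructed facade of a ruined building"]
tokens = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    # Mean-pool BERT's token embeddings into one vector per prompt.
    text_features = text_encoder(**tokens).last_hidden_state.mean(dim=1)

image = Text2ImageHead()(text_features)  # shape: (1, 3, 64, 64)
```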
We also explore the application of NLP models in fields beyond language processing and analysis.
- In the case of biomedical signals, particularly those studied in neuroscience, adapting these techniques to signal analysis provides new tools for the diagnosis and prognosis of neurological disorders such as Parkinson's disease and epilepsy, supporting the development of diagnostic biomarkers and disease-staging tools (a sketch of one such adaptation follows this list).
- In cybersecurity, threat classification and detection also benefit from adapting transformer models to the domain. These adaptations help uncover patterns hidden in the traces left by cyberattacks and viruses, improving threat detection and analysis (see the trace-classification sketch below).
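One way to adapt a transformer to biomedical signals, consistent with the description above, is to split a multichannel recording into fixed-length windows, embed each window as a "token", and attach a classification head that predicts a disease stage. The channel count, window size, number of stages, and class name below are illustrative assumptions, not our clinical models.

```python
import torch
import torch.nn as nn

class SignalTransformer(nn.Module):
    def __init__(self, channels=32, window=128, d_model=256, num_stages=4):
        super().__init__()
        self.embed = nn.Linear(channels * window, d_model)  # window -> token
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.classify = nn.Linear(d_model, num_stages)

    def forward(self, signal: torch.Tensor) -> torch.Tensor:
        # signal: (batch, windows, channels, window_samples)
        tokens = self.embed(signal.flatten(start_dim=2))
        encoded = self.encoder(tokens)
        return self.classify(encoded.mean(dim=1))  # pooled staging logits

logits = SignalTransformer()(torch.randn(2, 10, 32, 128))  # shape: (2, 4)
```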
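For the cybersecurity adaptation, a comparable sketch treats the events in an attack trace as a discrete vocabulary: each event is embedded like a word, and a transformer encoder scores the whole trace as benign or malicious. The vocabulary size, dimensions, and class layout are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TraceClassifier(nn.Module):
    def __init__(self, vocab_size=4096, d_model=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # event id -> vector
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classify = nn.Linear(d_model, num_classes)

    def forward(self, event_ids: torch.Tensor) -> torch.Tensor:
        # event_ids: (batch, trace_length) integer-coded system/network events
        encoded = self.encoder(self.embed(event_ids))
        return self.classify(encoded.mean(dim=1))  # benign vs. threat logits

scores = TraceClassifier()(torch.randint(0, 4096, (8, 256)))  # shape: (8, 2)
```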