Transformadores: Fundamentos teóricos y Aplicaciones (Transformers: Theoretical Foundations and Applications)

02/18/2023
by Jordi de la Torre, et al.

Transformers are a neural network architecture originally designed for natural language processing that has since become a mainstream tool for a wide variety of problems, including language, audio, images, reinforcement learning, and other tasks with heterogeneous input data. Their distinctive feature is the self-attention mechanism, in which each element of a sequence attends to the other elements of the same sequence; it derives from the previously introduced attention mechanism. This article provides the reader with the context needed to understand the most recent research papers and presents the mathematical and algorithmic foundations of the elements that make up this type of network. It also examines the components of the architecture, their common variations, and some applications of transformer models. The article is written in Spanish to bring this scientific knowledge to the Spanish-speaking community.
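The self-attention mechanism mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is not taken from the article; it is a standard scaled dot-product self-attention computed with NumPy, where the projection matrices `Wq`, `Wk`, `Wv` and the toy input are illustrative assumptions.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (n, d_model)."""
    Q = X @ Wq  # queries, shape (n, d_k)
    K = X @ Wk  # keys,    shape (n, d_k)
    V = X @ Wv  # values,  shape (n, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise attention logits, shape (n, n)
    # row-wise softmax: each token's weights over the whole sequence sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted mix of value vectors, shape (n, d_v)

# toy example: a sequence of 3 tokens with model width 4
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)
```

Each output row is a mixture of the value vectors of all tokens, with weights determined by query-key similarity; this is the "attention to one's own sequence" that distinguishes transformers from earlier encoder-decoder attention.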


Related research

- Deep Reinforcement Learning with Swin Transformer (06/30/2022): Transformers are neural network models that utilize multiple layers of s...
- Attention, please! A Critical Review of Neural Attention Models in Natural Language Processing (02/04/2019): Attention is an increasingly popular mechanism used in a wide range of n...
- Natural Language to Code Using Transformers (02/01/2022): We tackle the problem of generating code snippets from natural language ...
- Attention-based neural re-ranking approach for next city in trip recommendations (03/23/2021): This paper describes an approach to solving the next destination city re...
- Learning to Match Mathematical Statements with Proofs (02/03/2021): We introduce a novel task consisting in assigning a proof to a given mat...
- Probing Classifiers: Promises, Shortcomings, and Alternatives (02/24/2021): Probing classifiers have emerged as one of the prominent methodologies f...
- Evaluating self-attention interpretability through human-grounded experimental protocol (03/27/2023): Attention mechanisms have played a crucial role in the development of co...
