Quant Mashup - Gatambook

Deep Latent Variable Models [Gatambook]
In our previous blog post, we introduced latent variable models, where the latent variable can be thought of as a feature vector that has been “encoded” efficiently. This encoding turns the feature vector X into a context vector z. Latent variable models sound very GenAI-zy, but they descend(...)

Feature Selection in the Age of Generative AI [Gatambook]
Features are inputs to machine learning algorithms. Sometimes also called independent variables, covariates, or just X, they can be used for supervised or unsupervised learning, or for optimization. For example, at QTS, we use more than 100 of them as inputs to dynamically calibrate the allocation(...)

Deep Reinforcement Learning for Portfolio Optimization [Gatambook]
We wrote a lot about transformers in the last three blog posts. Their sole purpose was feature transformation / importance weighting. These transformed and attention-weighted features will be used as input to downstream applications. In this blog post, we will discuss one such application:(...)

Cross-Attention for Cross-Asset Applications [Gatambook]
In the previous blog post, we saw how we can apply self-attention transformers to a matrix of time series features of a single stock. The output of that transformer is a transformed feature vector r of dimension 768 × 1. 768 is the result of 12 × 64: all the lagged features are concatenated /(...)

Applying Transformers to Financial Time Series [Gatambook]
In the previous blog post, we gave a very simple example of how traders can use self-attention transformers as a feature selection method: in this case, to select which previous returns of a stock to use for predictions or optimizations. To be precise, the transformer assigns weights to the(...)
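To make the shape arithmetic in the transformer excerpts concrete, here is a minimal sketch (not the Gatambook implementation) of self-attention applied to a matrix of lagged features for a single stock, followed by flattening the 12 × 64 attended lag embeddings into the 768 × 1 transformed feature vector r mentioned above. The 12/64/768 dimensions come from the excerpt; the choice of PyTorch's MultiheadAttention layer and the number of heads are illustrative assumptions.

import torch
import torch.nn as nn

n_lags, d_feat = 12, 64                 # 12 lagged feature vectors, 64 features each (per the excerpt)
x = torch.randn(1, n_lags, d_feat)      # (batch, lags treated as the sequence, features)

# Self-attention over the lags: each lagged feature vector is reweighted
# against all the others (hypothetical layer choice, 4 heads assumed).
attn = nn.MultiheadAttention(embed_dim=d_feat, num_heads=4, batch_first=True)
attended, weights = attn(x, x, x)

# Concatenate the 12 attended 64-dimensional vectors into a single r.
r = attended.reshape(1, n_lags * d_feat)
print(r.shape)        # torch.Size([1, 768])  -> the 768 × 1 vector r
print(weights.shape)  # torch.Size([1, 12, 12]) attention weights across lags

The same attention weights are what the last excerpt refers to: they indicate which previous returns or lagged features the model emphasizes before the transformed vector is passed to downstream applications such as portfolio optimization.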